
Mozilla Nederland
The Dutch Mozilla community

Yes, Mozilla Prototypes With Chrome Tech, But Has No Plans To Dump Firefox - Lifehacker Australia

News collected via Google - Sat, 16/04/2016 - 06:19

Yes, Mozilla Prototypes With Chrome Tech, But Has No Plans To Dump Firefox
Lifehacker Australia
We've already seen some consolidation in the browser space with Opera dropping its technology base and moving to Blink, Google's fork of WebKit and the meat behind Chrome. Would Mozilla ever consider such a move for Firefox? Not right now, but the ...

Categories: Mozilla-nl planet

Air Mozilla: Foundation Demos April 15 2016

Mozilla planet - Sat, 16/04/2016 - 00:11

Foundation Demos April 15 2016

Categories: Mozilla-nl planet

Allen Wirfs-Brock: Slide Bite: Transitional Technologies

Mozilla planet - Fri, 15/04/2016 - 21:11


A transitional technology is a technology that emerges as a computing era settles into maturity and which is a precursor to the successor era. Transitional technologies are firmly rooted in the “old” era but also contain important elements of the “new” era. It’s easy to think that what we experience using transitional technologies is what the emerging era is going to be like. Not likely! Transitional technologies carry too much baggage from the waning era. For a new computing era to fully emerge we need to move “quickly through” the transition period and get on with the business of inventing the key technologies of the new era.

Categories: Mozilla-nl planet

Air Mozilla: Webdev Beer and Tell: April 2016

Mozilla planet - Fri, 15/04/2016 - 20:00

Once a month web developers across the Mozilla community get together (in person and virtually) to share what cool stuff we've been working on in...

Categories: Mozilla-nl planet

Hub Figuière: Modernizing AbiWord code

Mozilla planet - Fri, 15/04/2016 - 18:31

When you work on an 18-year-old code base like AbiWord, you encounter stuff from another age. This is the way it is in the lifecycle of software, where the requirements and the tooling evolve.

Nonetheless, when AbiWord started in 1998, it was meant as a cross-platform code base written in C++ that had to compile on both Windows and Linux. C++ compilers were not as standards-compliant as they are today, so a lot of things were excluded: no templates, and therefore no standard C++ library (it was called the STL at the time). Over the years things evolved: Mac support was added, gcc 4 got released (with much better C++ support), and in 2003 we started using templates for the containers (not necessarily in that order, BTW). Still no standard library; that came later. I just flipped the switch to make C++11 mandatory; more on that later.

As I was looking into some bugs, I found that with all that hodgepodge of coding standards there effectively wasn't one, and this caused some serious ownership problems where we'd be using freed memory. Worse, this led to file corruption: we would write garbage memory into files that are supposed to be valid XML. This is bad.

The core of the problem is the way we pass attributes / properties around. They are passed as a NULL-terminated array of pointers to strings. Even indices are keys, odd indices are string values. While keys are always considered static, values are not always. Sometimes they are taken out of a std::string or one of the custom string containers from the code base (more on those later), sometimes they are just strdup()'ed and free()'d later (uh oh, memory leaks).
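To make the shape of the problem concrete, here is a minimal sketch of that legacy convention (walkProps and the sample keys are illustrative, not actual AbiWord code):

    #include <cstdio>

    // Legacy convention: a NULL-terminated array where even slots are keys
    // and odd slots are values. Nothing in the type says who owns the values.
    static void walkProps(const char **props)
    {
        for (int i = 0; props[i]; i += 2) {
            std::printf("%s = %s\n", props[i], props[i + 1]);
        }
    }

    int main()
    {
        // Keys are string literals (static); values may be literals,
        // strdup()'ed buffers, or pointers into a temporary std::string --
        // the array itself can't tell, which is where the bugs come from.
        const char *props[] = { "font-family", "DejaVu Sans",
                                "font-size", "12pt",
                                NULL };
        walkProps(props);
        return 0;
    }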

Maybe this is a good time to do a cleanup, modernize the code base, and make sure we have safer code, rather than trying to figure out all the corner cases one by one. And shall I add that there are virtually no tests in AbiWord? So it is gonna be epic.

As I'm writing this I have 8 patches, a couple of them very big, amounting to the following stats (from git):

134 files changed, 5673 insertions(+), 7730 deletions(-)

These numbers just show how broad the changes are, and it seems to work: the bugs I was seeing with valgrind are gone, no more access to freed memory. That's a good start.

Some of the 2000+ lines deleted are redundant code that could have been refactored (there are still a few places I marked for that), but a lot has to do with what I'm fixing. Also, some changes are purely whitespace / indentation, usually around an actual change where it was relevant.

Now, instead of passing around const char ** pointers, we pass around a const PP_PropertyVector &, which is, currently, a typedef to std::vector<std::string>. To make things nice, the main storage for these properties is now also a std::map<std::string, std::string> (possibly I will switch it to an unordered map) so that assignments are handled transparently by the std::string implementation. Before that it was one of the custom containers.
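Here is a minimal sketch of that shape; PP_PropertyVector matches the typedef described above, while PropertyStore and its setProperties method are illustrative stand-ins for the real AbiWord classes:

    #include <map>
    #include <string>
    #include <vector>

    // Flat key/value vector: even index = key, odd index = value,
    // but now every element is an owning std::string.
    typedef std::vector<std::string> PP_PropertyVector;

    // Illustrative storage: a std::map as the main store means assigning
    // a property is a plain std::string copy -- no strdup()/free() pairs.
    class PropertyStore
    {
    public:
        void setProperties(const PP_PropertyVector &props)
        {
            for (size_t i = 0; i + 1 < props.size(); i += 2) {
                m_props[props[i]] = props[i + 1];
            }
        }

    private:
        std::map<std::string, std::string> m_props;
    };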

Patterns like this:

    const char *props[] = { NULL, NULL, NULL, NULL };
    int i = 0;
    std::string value = object->getValue();
    props[i++] = "key";
    char *s = strdup(value.c_str());
    props[i++] = s;
    thing->setProperties(props);
    free(s);

Turn into:

    PP_PropertyVector props = { "key", object->getValue() };
    thing->setProperties(props);

Shorter, more readable, less error-prone. This uses C++11 initializer lists, which explains some of the line removal.

Use C++11!

Something I can't recommend enough, if you have a C++ code base, is to switch to C++11. Amongst the new features, let me list the few that I find important (a short example combining them follows the list):

  • auto for automatic type deduction. It makes life easier when typing and also when changing code. I almost always use it when declaring an iterator from a container.
  • unique_ptr<> and shared_ptr<>: smart pointers inherited from boost, but without the need for boost. unique_ptr<> replaces the dreaded auto_ptr<>, which is now deprecated.
  • unordered_map<> and unordered_set<>: hash-based map and set in the standard library.
  • lambda functions. No need to explain; they were one of the big missing features of C++ in the age of JavaScript's popularity.
  • move semantics: transferring the ownership of an object. Not easy to use in C++, but clearly beneficial where you previously always ended up copying. This is a key part of what makes unique_ptr<> usable in a container, where auto_ptr<> wasn't. Move semantics are the default behaviour in Rust, while C++ copies.
  • initializer lists allow constructing an object by passing a list of initial values. I use this one a lot in this patch set for property vectors.
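
As a quick illustration, here is a small program exercising most of these features together (all names in it are illustrative, not AbiWord code):

    #include <algorithm>
    #include <iostream>
    #include <memory>
    #include <string>
    #include <unordered_map>
    #include <vector>

    struct Shape
    {
        std::string name;
    };

    int main()
    {
        // initializer list + unordered_map: a hash-based container
        // from the standard library, filled at construction.
        std::unordered_map<std::string, std::string> props = {
            { "font-family", "DejaVu Sans" },
            { "font-size", "12pt" }
        };

        // auto: no need to spell out the iterator type.
        for (auto it = props.begin(); it != props.end(); ++it) {
            std::cout << it->first << " = " << it->second << "\n";
        }

        // unique_ptr in a container: legal because elements are *moved*
        // in, which is exactly what auto_ptr could not do.
        std::vector<std::unique_ptr<Shape>> shapes;
        shapes.push_back(std::unique_ptr<Shape>(new Shape{ "circle" }));

        // lambda: an inline predicate, no separate helper function.
        auto named = std::count_if(shapes.begin(), shapes.end(),
            [](const std::unique_ptr<Shape> &s) { return !s->name.empty(); });
        std::cout << named << " named shape(s)\n";
        return 0;
    }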

Don't implement your own containers.

Don't implement your own vector, map, set, associative container, string, or list. Use the standard C++ library instead. It is portable, it works, and it likely does a better job than your own. I have another set of patches to properly remove UT_Vector, UT_String, etc. from the AbiWord codebase. Some have been removed progressively, but it is still ongoing.
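For illustration, a before/after sketch; the UT_Vector calls in the comment are written from memory and may not match the real API exactly:

    #include <string>
    #include <vector>

    // Before (sketched): the homegrown container stores raw pointers,
    // so every item's lifetime is managed by hand:
    //     UT_Vector vec;
    //     vec.addItem(strdup("hello"));   // who calls free(), and when?
    //
    // After: the standard container owns its contents outright.
    int main()
    {
        std::vector<std::string> vec;
        vec.push_back("hello");       // value semantics; freed automatically
        std::string s = vec[0];       // a copy, safe to use after vec changes
        return 0;
    }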

Also write tests.

This is something that is missing from AbiWord and that I have tried to tackle a few times.

One more thing.

I could have mechanised these code changes to some extent, but then I wouldn't have reviewed all that code, and it is in reviewing it that I found the issues I addressed. Eyeball Mark II is still good for that.

The patch (in progress)

Categories: Mozilla-nl planet

Christopher Arnold

Mozilla planet - Fri, 15/04/2016 - 17:48

Back in 2005-2006 my friend Liesl told me about the coming age of chat bots.  I had a hard time imagining how people would embrace products that simulated human voice communication but were less “intelligent”.  She ended up building a company that allowed people to have polite automated service agents that you could program with a certain specific area of intelligence.  Upon launch she found that people spent a lot more time conversing with the bots than they did with the average human service agent.  I wondered if this was because it was harder to get questions answered, or if people just enjoyed the experience of conversing with the bots more than they enjoyed talking to people.  Perhaps when we know the customer service agent is paid hourly, we don't gab in excess.  But when it's a chat bot we're talking to, we don't feel the need to be hasty.

Fast forward more than a decade: IBM has acquired her company into the Watson group.  During a dinner party we talked about Amazon’s Echo sitting on her porch.  She and her husband would occasionally make DJ requests to “Alexa” (the name of Echo’s internal chat bot) as if it were a person attending the party.  It definitely seemed that the age of more intelligent bots was upon us.  Most folks who have experimented with the speech-input products of the last decade have become accustomed to talking to bots in a robotic monotone devoid of accent, because of the somewhat random speech-capture mistakes that early technology was burdened with.  If the bots don't adapt to us, we go to them, it seems, mimicking how robotic voices were depicted to us in the science fiction films of the '50s and '60s.

This month both Microsoft and Facebook have announced open bot APIs for their respective platforms.  Microsoft’s platform for integration is an open-source "Bot Framework" that allows any web developer to repurpose the code to inject new actions or content tools into the active discussion flow of its conversational chat bot, Cortana, which is built into the search box of every Windows 10 operating system they license.  They also demonstrated how the new bot framework allows their Skype messenger to respond to queries intelligently if the right libraries are loaded.  Amazon refers to the app-sockets for the Echo platform as "skills", whereby you load a specific field of intelligence into the speech engine to allow Alexa to query the external sources you wish.  I noticed that both the Alexa team and the Cortana team seem to be focusing on pizza ordering in their product demos.  But one day we'll be able to query beyond the basic necessities.  In an early demonstration back in 2005 of the technology Liesl and Dr. Zakos (her cofounder) built, they had their chat bot ingest all my blog writings about folk percussion, then answer questions about certain topics from my personal blog.  If a bot narrows a question to a specific subject matter, its answers can be uncannily accurate for the field!

Facebook’s plan is to inject bot intelligence into the main Facebook Messenger app.  Their announcements actually seem to follow quite closely Microsoft's concept of developers being able to port new capabilities into the chat engines of each platform vendor.  It may be that both Microsoft and Facebook are planning for the social capabilities of their collaboration on the launch of Oculus, Facebook's headset-based immersive virtual world platform, which runs on Windows 10 machines.

The outliers in this era of chat-bot openness are Apple's Siri and OK Google, speech tools that are like a centrally managed brain.  (Siri may query the web using specific sources like Wolfram Alpha, but most of the answers you get from either will be consistent with the answers others receive for similar questions.)  The thing I find very elegant about the approaches Amazon, Microsoft and Facebook are taking is that they make the knowledge engine of the core platform extensible in ways that a single company could not manage alone.  The approach also allows customers to personalize their experience of the platform by adding specific new ported services to the tools.  My interest here is that the speech platforms will become much more like the Internet of today, where we are used to having very diverse “content” experiences based on our personal preferences and proclivities.

It is very exciting to see that speech is becoming a very real and useful interface for interacting with computers.  While the content of the web is already one of the knowledge ports of these speech tools, the open APIs of Cortana, Alexa and Facebook Messenger will usher in a very exciting new means to create compelling internet experiences.  My hope is that there is a bit of standardization, so that a merchant like Domino's doesn't have to keep rebuilding its chat-bot tools for each platform.

I remember my first experience having a typed conversation with the Dr. Know computer at the Oregon Museum of Science and Industry when I was a teenager.  It was a simulated Turing-test program designed to give a reasonably acceptable experience of interacting with a computer in a human way.  While Dr. Know could only artfully dodge or re-frame questions when it detected input that wasn’t in its knowledge database, I can see that the next generations of teenagers will be able to have the same kind of experience I had in the 1980s.  But their discussions will go in the direction of exploring knowledge and the logic structures of a mysterious mind, instead of ending up in the rhetorical cul-de-sacs of the Dr. Know program.

While we may not chat with machines with quite the same intimacy as Spike Jonze’s character in “Her”, the days when we talk in robotic tones to operate the last decade’s speech-input systems are soon to end.  Each of these innovative companies is dealing with the hard questions of how to get us out of our stereotypes of robot behavior and get us back to acting like people again, returning to the main interface that humans have used for eons to interact with each other.  Ideally the technology will fade into the background and we'll start acting normally again, instead of staring at screens and tapping fingers.




P.S. Of course Mozilla has several speech initiatives in progress.  We'll talk about those very soon.  But this post is just about how the other innovators in the industry are doing an admirable job of making our machines more human-friendly.
Categories: Mozilla-nl planet
