Mozilla Nederland
The Dutch Mozilla community

Subscribe to feed: Mozilla planet
Planet Mozilla - http://planet.mozilla.org/
Updated: 6 hours 48 min ago

Kim Moir: Mozilla pushes - June 2014

Fri, 18/07/2014 - 23:46
Here's June 2014's analysis of the pushes to our Mozilla development trees. You can load the data as an HTML page or as a JSON file.
Trends

This was another record-breaking month, with a total of 12534 pushes. As a note of interest, this is over double the number of pushes we had in June 2013. So big kudos to everyone who helped us scale our infrastructure and tooling. (Actually, we had 6,433 pushes in April 2013, which would make this less than double, because June 2013 was a bit of a dip. But still impressive :-)
Highlights
  • 12534 pushes
    • new record
  • 418 pushes/day (average)
    • new record
  • Highest number of pushes/day: 662 pushes on June 5, 2014
    • new record
  • Highest average pushes/hour: 23.17
    • new record

General Remarks

The introduction of Gaia-try in April has been very popular, and it comprised around 30% of pushes in June compared to 29% last month. The Try branch itself consisted of around 38% of pushes. The three integration repositories (fx-team, mozilla-inbound and b2g-inbound) account for around 21% of all the pushes, compared to 22% in the previous month.
Records

June 2014 was the month with the most pushes (12534 pushes)
June 2014 has the highest pushes/day average with 418 pushes/day
June 2014 has the highest pushes/hour average with 23.17 pushes/hour
June 4th, 2014 had the highest number of pushes in one day with 662 pushes





Mark Surman: Quick thoughts from Kenya

Fri, 18/07/2014 - 22:36

Going anywhere in Africa always energizes me. It surprises me. Challenges my assumptions. Gives me new ideas. And makes me smile. The week I just spent in Nairobi did all these things.

Airtel top-up agent in Nairobi

The main goal of my trip was to talk to people about the local content and simple appmaking work Mozilla is doing. I spent an evening talking with Mozilla community members, a day and a bit with people at Equity Bank and a bunch of time with people from iHub. Here are three of the many thoughts I had while reflecting on the flight home:

Microbusiness is our biggest opportunity for AppMaker

I talked to a lot of people about the idea of non-techie smartphone users being able to make their own apps.

My main question was: who would want to make their own app rather than just use Facebook? Most of the good answers had to do with someone running a very small business. A person selling juice to office workers who wastes a lot of travel time taking orders. An up-and-coming musician who wants a way to pre-sell tickets to loyal fans using mobile money. A chicken farmer outside Nairobi who is always on the phone with the hotels she sells to (pic below, met her and her husband while on a trip with Equity Bank folks). The common thread: simple to make and remix apps could be very useful to very small real-world businesses that would benefit from better communications, record keeping and transaction processing via mobile phone.


Our main priority with AppMaker (or whatever we call it) right now is to get a first cut at on-device authoring out there. In the background, we also really need to be pushing on use cases like these — and the kind of app templates that would enable them. Some people at the iHub in Nairobi have offered to help with prototyping template apps specific to Kenya over the next few months, which will help with figuring this out.

Even online is offline in much of Africa

As I was reminded at MozFest East Africa, even online is offline in much of Africa (and many other parts of the world). In the city, the cost of data for high bandwidth applications like media streaming — or running a Webmaker workshop — is expensive. And, outside the city, huge areas have connections that are spotty or non-existent.

BRCK in use

It was great to meet the BRCK people who are building products to address issues like this. Specifically: BRCK is a ruggedized wifi router with a SIM card, useful I/O ports and local storage. Brainstorming with Juliana and Erik from iHub, it quickly became clear that it could be useful for things like Webmaker workshops in places where connectivity is expensive, slow or even non-existent. If you popped a Raspberry Pi on the side, you might even be able to create a working version of Webmaker tools like Thimble and Appmaker that people could use locally – with published web pages and apps trickling back or syncing once the BRCK had a connection. The Kenyan Mozillians I talked to were very excited about this idea. Worth exploring.

People buy brands

During a dinner with local Mozillians, a question was raised: ‘what will it take for Firefox OS to succeed in Kenya?’ A debate ensued. “Price,” said one person, “you can’t get a $30 smartphone like the one Mozilla is going to sell.” “Yes you can!”, said another. “But those are China phones,” said someone else. “People want real phones backed by a real brand. If people believe Firefox phones are authentic, they will buy them.”


Essentially, they were talking about the tension between brand / authenticity / price in commodity markets like smartphones. The contention was: young Kenyans are aspiring to move up in the world. An affordable phone backed by a global brand like Mozilla stands for this. Of course, we know this. But it's a good reminder from the people who care most about Mozilla (our community, pic below of Mozillians from Kenya) that the Firefox brand really needs to shine through on our devices and in the product experience as we roll out phones in more parts of the world.

Mozillians from Nairobi

I’ve got a lot more than this rumbling around in my head, of course. My week in Uganda and Kenya really has my mind spinning. In a good way. It’s all a good reminder that the diverse perspectives of our community and our partners are one of our greatest strengths. As an organization, we need to tap into that even more than we already do. I truly believe that the big brain that is the Mozilla Community will be a key factor in winning the next round in our efforts to stand up for the web.


Filed under: mozilla, webmakers

Doug Belshaw: HOWTO: apply for Webmaker badges

Fri, 18/07/2014 - 13:51

super-mentor-02.jpg

We’re in full swing with Webmaker contribution and web literacy badges at the moment, so I wanted to take a moment to give some advice to people thinking of applying. We already have a couple of pages on the Webmaker blog for the Mentor and Super Mentor badges.

However, I wanted to give some general advice and fill in the gaps.

First of all, it’s worth sharing the guidance page for the people reviewing your application. In the case of a Webmaker Super Mentor badge, this will be a Mozilla paid contributor (i.e. staff member), but for all other badges it may be a community member who has unlocked the necessary privileges.

To be clear:

The best applications we’ve seen for the Webmaker badges so far take the time to explain how the applicant meets each one of the criteria on the badge detail page.

For example, this was Stefan Bohacek’s successful application for the Sharing ‘maker’ badge:

1) Sharing a resource using an appropriate tool and format for the audience: I wrote tutorials for people learning to make websites and web apps and shared them on my blog: http://blog.fourtonfish.com/tagged/tutorial. These also exist as a teaching kit on Webmaker – see my blogpost with a link here: http://blog.fourtonfish.com/post/89157427285/mozilla-webmaker-featured-teaching-kit. Furthermore I created resources for web developers such as http://simplesharingbuttons.com (also see: http://badges.p2pu.org/en/project/477) and some other (mini-)projects here: https://github.com/fourtonfish

2) Tracking changes made to co-created Web resources: I use GitHub for some of my personal projects (although I only received a handful of opened issues) and GitLab with clients I worked with/for.

3) Using synchronous and asynchronous tools to communicate with web communities, networks and groups https://twitter.com/fourtonfish – I follow some of the members of Webmaker (and seldomly frustrate Doug Belshaw with questions) https://plus.google.com/+StefanBohacek/posts – I am a member of the Webmaker community http://webdevrefinery.com/forums/user/18887-ftfish/ – I (infrequently) post here, share ideas, comment on ideas of others etc. stefan@fourtonfish.com – I wouldn’t be able to finish my teaching kit without the help of other webmakers and my email account to communicate with them

Note that Stefan earned his badge for numbers 1) and 3) in the above example. This was enough to meet the requirements as the badge is awarded for meeting any two of the criteria listed on the badge detail page. He did not provide any evidence for using GitHub, as mentioned in 2), so this was not used as evidence by the person reviewing his application.

Applying for a badge is just like applying for anything in life:

  • Make the reviewer’s job easy – they’re looking at lots of applications!
  • Tell the reviewer which of the criteria you think you have met.
  • Include a link for each of the criteria – more than one if you can.
  • If you are stuck, ask for help. A good place to start is the Webmaker discussion forum, or if you know someone who’s already got that badge, ask them to help you!

Questions? Comments? I’m @dajbelshaw or you can email me at doug@mozillafoundation.org. Note that specific badge questions should go in the discussion forum.

Image CC Mozilla in Europe


Botond Ballo: Trip Report: C++ Standards Committee Meeting in Rapperswil, June 2014

Fri, 18/07/2014 - 03:12
Summary / TL;DR

  • C++14: On track to be published late 2014
  • C++17: A few minor features so far, including for (elem : range)
  • Networking TS: Ambitious proposal to standardize sockets based on Boost.ASIO
  • Filesystems TS: On track to be published late 2014
  • Library Fundamentals TS: Contains optional, any, string_view and more. Progressing well, expected early 2015
  • Library Fundamentals TS II: Follow-up to Library Fundamentals TS; will contain array_view and more. In early stage
  • Array Extensions TS: Completely stalled. No proposal related to runtime-sized arrays/objects currently has consensus
  • Parallelism TS: Progressing well; expected 2015
  • Concurrency TS: Executors and resumable functions need more work
  • Transactional Memory TS: Progressing well; expected 2015-ish
  • Concepts (“Lite”) TS: Progressing well; expected 2015
  • Reflection: A relatively full-featured compile-time introspection proposal was favourably reviewed. Might target a TS or C++17
  • Graphics: Moving forward with a cairo-based API, to be published in the form of a TS
  • Modules: Clang has a complete implementation for C++, plan to push it for C++17

Introduction

Last week I attended another meeting of the ISO C++ Standards Committee in Rapperswil, Switzerland (near Zurich). This is the third Committee meeting I have attended; you can find my reports about the previous two here (September 2013, Chicago) and here (February 2014, Issaquah). These reports, particularly the Issaquah one, provide useful context for this post.

With C++14's final ballot still in progress, the focus of this meeting was on the various language and library Technical Specifications (TS) that are planned as follow-ups to C++14, and on C++17.

C++14

C++14 is currently out for its “DIS” (Draft International Standard) ballot (see my Issaquah report for a description of the procedure for publishing a new language standard). This ballot was sent out at the end of the Issaquah meeting, and will close mid-August. If no national standards body poses an objection by then – an outcome considered very likely – then the standard will be published before the end of the year.

Since a ballot was in progress, no changes to the C++14 draft were made during the Rapperswil meeting.

C++17, and what’s up with all these TS’s?

ISO procedure allows the Committee to publish two types of documents:

  • International Standards (IS). These are official standards with stringent backwards-compatibility requirements.
  • Technical Specifications (TS) (formerly called Technical Reports (TR)). These are for things that are not quite ready to be enshrined into an official standard yet, and have no backwards-compatibility requirements. Specifications contained in a TS may or may not be added, possibly with modifications, into a future IS.

C++98 and C++11 are IS’s, as will be C++14 and C++17. The TS/TR form factor has, up until recently, only been used once by the Committee: for TR1, the 2005 library spec that gave us std::tr1::shared_ptr and other library enhancements that were then added into C++11.

Since C++11, in an attempt to make the standardization process more agile, the Committee has been aiming to place significant new language and library features into TS’s, published on a schedule independent of the IS’s. The idea is that being in a TS allows the feature to gain user and implementation experience, which the committee can then use to re-evaluate and possibly revise the feature before putting it into an IS.

As such, much of the standardization work taking place concurrently with and immediately after C++14 is in the form of TS’s, the first wave of which will be published over the next year or so, and the contents of which may then go into C++17 or a subsequent IS, as schedules permit.

Therefore, at this stage, only some fairly minor features have been added directly to C++17.

The most notable among them is the ability to write a range-based for loop of the form for (elem : range), i.e. with the type of the element omitted altogether. As explained in detail in my Issaquah report, this is a shorthand for for (auto&& elem : range) which is almost always what you want. The Evolution Working Group (EWG) approved this proposal in Issaquah; in Rapperswil it was also approved by the Core Working Group (CWG) and voted into C++17.
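To make the shorthand concrete, here is a minimal sketch. Only the explicit auto&& form compiles with today's shipping compilers; the terse form is the proposed C++17 spelling, shown in a comment:

    #include <iostream>
    #include <vector>

    int main() {
        std::vector<int> range{1, 2, 3};

        // Today's spelling: auto&& binds to each element without copying.
        for (auto&& elem : range)
            elem *= 2;

        // Proposed C++17 shorthand, equivalent to the loop above:
        // for (elem : range)
        //     elem *= 2;

        for (auto&& elem : range)
            std::cout << elem << '\n'; // prints 2, 4, 6
    }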

Other minor things voted into C++17 include:

  • static_assert(condition), i.e. with the message omitted. An implementation-defined message is displayed.
  • auto var{expr}; is now valid and equivalent to T var{expr}; (where T is the deduced type)
  • A template template parameter can now be written as template <...> typename Name in addition to template <...> class Name, to mirror the way a type template parameter can be written as typename Name in addition to class Name
  • Trigraphs (an obscure feature that allowed certain characters, such as #, which are not present on some ancient keyboards, to be written as a three-character sequence, such as ??=) were removed from the language
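A minimal sketch combining the first three of these, as they would look in a C++17 translation unit (note that auto var{expr} deduces the type of expr, not std::initializer_list):

    #include <vector>

    // static_assert with the message omitted; the compiler supplies one.
    static_assert(sizeof(int) >= 2);

    // Template template parameter spelled with `typename` instead of `class`.
    template <template <typename...> typename Container>
    Container<int> make_one(int value) {
        return Container<int>{value};
    }

    int main() {
        auto var{42}; // now valid and equivalent to int var{42};
        static_assert(sizeof(var) == sizeof(int));
        return make_one<std::vector>(7).front() - 7; // returns 0
    }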
Evolution Working Group (EWG)

As with previous meetings, I spent most of my time in the Evolution Working Group, which spends its time looking at proposals for new language features that either do not fall into the scope of any Study Group, or have already been approved by a Study Group. There was certainly no lack of proposals at this meeting; to EWG’s credit, it got through all of them, at least the ones which had papers in the official pre-Rapperswil mailing.

Incoming proposals were categorized into three rough categories:

  • Approved. The proposal is approved without design changes. It is sent on to CWG, which revises it at the wording level, and then puts it in front of the committee at large to be voted into whatever IS or TS it is targeting.
  • Further Work. The proposal’s direction is promising, but it is either not fleshed out well enough, or there are specific concerns with one or more design points. The author is encouraged to come back with a modified proposal that is more fleshed out and/or addresses the stated concerns.
  • Rejected. The proposal is unlikely to be accepted even with design changes.

Accepted proposals:

  • Opening two nested namespaces with namespace A::B { in addition to namespace A { namespace B { (see the sketch after this list)
  • “Making return explicit”. This means that if a class A has an explicit constructor which takes a parameter of type B, then, in a function whose return type is A, it is valid to write return b; where b has type B. (Currently, one has to write return A(b);.) The idea is to avoid repetition; a very common use case is A being std::unique_ptr<T> for some T, and B being T*. This proposal was relatively controversial; it passed with a weak consensus in EWG, and was also discussed in the Library Evolution Working Group (LEWG), where there was no consensus for it. I was surprised that EWG passed this to CWG, given the state of the consensus; in fact, CWG indicated that they would like additional discussion of it in a forum that includes both EWG and LEWG members, before looking at it in CWG.
  • A preprocessor feature for testing for the presence of a (C++11-style) attribute: __has_cpp_attribute(attribute_name)
  • A couple that I already mentioned because they were also passed in CWG and voted into C++17 at the same meeting.
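A minimal sketch of the first and third accepted items, as proposed (nested-namespace definitions later shipped in C++17, and __has_cpp_attribute in C++20, though compilers supported it earlier; the MOZ_DEPRECATED macro name is just for illustration):

    // Nested namespace definition: one line instead of two nested blocks.
    namespace mozilla::dom {
        int answer() { return 42; }
    }

    // Preprocessor test for a C++11-style attribute.
    #ifdef __has_cpp_attribute
    #  if __has_cpp_attribute(deprecated)
    #    define MOZ_DEPRECATED [[deprecated]]
    #  endif
    #endif
    #ifndef MOZ_DEPRECATED
    #  define MOZ_DEPRECATED
    #endif

    MOZ_DEPRECATED int old_answer() { return 41; }

    int main() { return mozilla::dom::answer() - 42; }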

Proposals for which further work is encouraged:

  • A proposal to make C++ more friendly for embedded systems development by reducing the overhead of exception handling, and further expanding the expressiveness of constexpr. EWG encouraged the author to gather people interested in this topic and form a Study Group to explore it in more detail.
  • A proposal to convey information to the compiler about aliasing, via attributes. This is intended to be an improvement to C99's restrict.
  • A way to get the compiler to, on an opt-in basis, generate equality and comparison operators for a structure/class. Everyone wanted this feature, but there were disagreements about how terse the syntax should be, whether complementary operators should be generated automatically (e.g. != based on ==), how exactly the compiler should implement the operators (particularly for comparison – questions of total order vs. weaker orders came up), and how mutable members should be handled.
  • A proposal for a per-platform portable C++ ABI. I will talk about this in more detail below.
  • A way to force the compiler to omit padding between two structure fields
  • A way to specify that a class should be converted to another type in auto initialization. That is, for a class C, to specify that in auto var = c; (with c having type C), the type of var should actually be some other type D. The motivating use here is expression templates; in Matrix X, Y; auto Z = X * Y; we want the type of Z to be Matrix even if the type of X * Y is some expression template type. EWG liked the motivation, but the proposal tried to modify the semantics of template parameter deduction for by-value parameters so as to remain consistent with auto, and EWG was concerned that this was starting to encroach on too many areas of the language. The author was encouraged to come back with a more limited-scope proposal that concerned auto initialization only.
  • Fixed-size template parameter packs (typename...[K]), and packs where all parameters must be of the same type (T...[N]). EWG liked the idea, but had some concerns about syntactic ambiguities. The proposal also inspired an offshoot idea of subscripting parameter packs (e.g. Pack[0] gives you the first parameter), to avoid having to use recursion to iterate over the parameters in many cases.
  • Expanding parameter packs as expressions. Currently, if T is a parameter pack bound to parameters A, B, and C, then T... expands to A, B, C; this expansion is allowed in various contexts where a comma-separated list of things (types or expressions, as the parameters may be) is allowed. The proposal here is to allow things like T +... which would expand to A + B + C, which would be allowed in an expression context.
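This direction is essentially what later landed in C++17 as fold expressions; a minimal sketch using the syntax that was ultimately standardized:

    #include <iostream>

    // (args + ...) is a unary right fold: it expands to A + (B + C)
    // for a pack bound to A, B, C.
    template <typename... Ts>
    auto sum(Ts... args) {
        return (args + ...);
    }

    int main() {
        std::cout << sum(1, 2, 3) << '\n'; // prints 6
    }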

Rejected proposals:

  • Objects of runtime size. This would have allowed a pure library implementation of something like std::dynarray (and allowed users to write similar classes of their own), but it unfortunately failed to gain consensus. More about this in the Array Extensions TS section.
  • Additional flow control mechanisms like break label;, continue label; and goto case value;. EWG thought these encouraged hard-to-follow control flow.
  • Allowing specifiers such as virtual, static, override, and some others, to apply to a group of members the way access specifiers (private: etc.) currently do. The basis for rejection here was that separating these specifiers from the members they apply to can make class definitions less readable.
  • Specializing an entity in a different namespace without closing the namespaces you are currently in. Rejected because it’s not clear what would be in scope inside the specialization (names from the entity’s namespace, the current namespace, or both).
  • <<< and >>> operators for circular bit-shifts. EWG felt these would be more appropriate as library functions (see the sketch after this list).
  • A rather complicated proposal for annotating template parameter packs that claimed to be a generalization of the proposal for fixed-size template parameter packs. Rejected because it would have made the language much more complicated, while the benefit would mostly have been for template metaprogramming; also, several of the use cases can be satisfied with Concepts instead.
  • Throwing an exception on stack exhaustion. The implementers in the room felt this was not implementable.
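On the bit-shift point, the library route is in fact where this eventually ended up: C++20 added std::rotl and std::rotr to <bit>. A minimal sketch:

    #include <bit>
    #include <cstdint>

    // Circular shifts: bits that fall off one end wrap around to the other.
    static_assert(std::rotl(std::uint8_t{0b1000'0001}, 1) == std::uint8_t{0b0000'0011});
    static_assert(std::rotr(std::uint8_t{0b0000'0011}, 1) == std::uint8_t{0b1000'0001});

    int main() {}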

I should also mention the proposal for named arguments that Ehsan and I have been working on. We did not prepare this proposal in time to get it into the pre-Rapperswil mailing, and as such, EWG did not look at it in Rapperswil. However, I did have some informal discussions with people about it. The main concerns were:

  • consideration for constructor calls with {...} syntax and, by extension, aggregate initialization
  • the relationship to C99 designated initializers (if we are covering aggregate initialization, then these can be viewed as competing syntaxes)
  • most significantly: parameter names becoming part of a library’s interface that library authors then have to be careful not to break

Assuming we are able to address these concerns, we will likely write an updated proposal, get it into the pre-Urbana mailing (Urbana-Champaign, Illinois is the location of the next Committee meeting in November), and present it at the Urbana meeting.

Portable C++ ABI

One of the most exciting proposals at this meeting, in my opinion, was Herb Sutter’s proposal for a per-platform portable C++ ABI.

A per-platform portable ABI means that, on a given platform (where “platform” is generally understood to mean the combination of an operating system, processor family, and bitness), binary components can be linked together even if they were compiled with different compilers, or different versions of the same compiler, or different compiler options. The current lack of this in C++ is, I think, one of C++’s greatest weaknesses compared to other languages like C or Java.

More specifically, there are two aspects to ABI portability: language and library. On the language side, portability means that binary components can be linked together as long as, for any interface between two components (for example, for a function that one component defines and the other calls, the interface would consist of the function’s declaration, and the definitions of any types used as parameters or return type), the two components are compiled from identical standard-conforming source code for that interface. On the library side, portability additionally means that interfaces between components can make use of standard library types (this does not follow solely from the language part, because different compilers may not have identical source code for their standard library types).

It has long been established that it is out of scope for the C++ Standard to prescribe an ABI that vendors should use (among other reasons, because parts of an ABI are inherently platform-specific, and the standard cannot enumerate every platform and prescribe something for each one). Instead, Herb’s proposal is that the standard codify the notions of a platform and a platform owner (an organization/entity who controls the platform); require that platform owners document an ABI (in the case of the standard library, this means making available the source code of a standard library implementation) which is then considered the platform ABI; and require compiler and standard library vendors to support the platform ABI to be conformant on any given platform.

In order to ease transitioning from the current world where, on a given platform, the ABI can be highly dependent on the compiler, the compiler version, or even compiler options, Herb also proposes some mechanisms for delineating a portion of one’s code which should be ABI-portable, and therefore compiled using the platform ABI. These mechanisms are a new linkage (extern "abi") on the language side, and a new namespace (std::abi, containing the same members as std) on the library side. The idea is that one can restrict the use of these mechanisms to code that constitutes component interfaces, thereby achieving ABI portability without affecting other code.

This proposal was generally well-received, and certainly people agreed that a portable ABI is something C++ needs badly, but some people had concerns about the specific approach. In particular, implementers were uncomfortable with the idea of potentially having to support two different ABI’s side-by-side in the same program (the platform ABI for extern "abi" entities, and the existing ABI for other entities), and, especially, with having two copies of every library entity (one in std and one in std::abi). Other concerns about std::abi were raised as well, such as the performance penalty arising from having to convert between std and std::abi types in some places, and the duplication being difficult to teach. It seemed that a modified proposal that concerned the language only and dropped std::abi would have greater consensus.

Array Extensions TS

The Array Extensions TS was initially formed at the Chicago meeting (September 2013) when the committee decided to pull arrays of runtime bound (ARBs, the C++ version of C’s VLAs) and dynarray, the standard library class for encapsulating them, out of C++14 and into a TS. This was done mostly because people were concerned that dynarray required too much compiler magic to implement. People expressed a desire for a language feature that would allow them to implement a class like dynarray themselves, without any compiler magic.

In Issaquah a couple of proposals for such a language feature were presented, but they were relatively early-stage proposals, and had various issues such as having quirky syntax and not being sufficiently general. Nonetheless there was consensus that a library component is necessary, and we’d rather not have ARBs at all than get them without a library component to wrap them into a C++ interface.

At this meeting, a relatively fully fleshed-out proposal was presented that gave programmers a fairly general/flexible way to define classes of runtime size. Unfortunately, it was considered a complex change that touches many parts of the language, and there was no consensus for going forward with it.

As a result, the Array Extensions TS is completely stalled: ARBs themselves are ready, but we don’t want them without a library wrapper, and no proposal for a library wrapper (or mechanism that would enable one to be written) has consensus. This means that the status quo of not being able to use VLAs in C++ (unless a vendor enables C-style VLAs in C++ as an extension) will remain for now.

Library / Library Evolution Working Groups (LWG and LEWG)

Library work at this meeting included the Library Fundamentals TS (and its planned follow-up, Library Fundamentals II), the Filesystems TS and Networking TS (about which I’ll talk in the SG 3 and SG 4 sections below), and reviewing library components of other projects like the Concurrency TS.

The Library Fundamentals TS was in the wording review stage at this meeting, with no new proposals being added to it. It contains general library utilities such as optional, any, string_view, and more; see my Issaquah report for a full list. The current draft of the TS can be found here. At the end of the meeting, the Committee voted to send out the TS for its first national body ballot, the PDTS (Preliminary Draft TS) ballot. This ballot concludes before the Urbana meeting in November; if the comments can be addressed during that meeting and the TS sent out for its second and final ballot, the DTS (Draft TS) ballot, it could be published in early 2015.

The Committee is also planning a follow-up to the Library Fundamentals TS, called the Library Fundamentals TS II, which will contain general utilities that did not make it into the first one. Currently, it contains one proposal, a generalized callable negator; another proposal, containing library facilities for contract programming, was rejected for several reasons, one of them being that it is expected to be obsoleted in large part by reflection. Several other proposals are under consideration for addition.

Study Groups

SG 1 (Concurrency)

SG 1 focuses on two areas, concurrency and parallelism, and there is one TS in the works for each.

I don’t know much about the Parallelism TS other than that it’s in good shape; it was sent out for its PDTS ballot at the end of the meeting, which could lead to publication in 2015.

The status of the Concurrency TS is less certain. Coming into the Rapperswil meeting, the Concurrency TS contained two things: improvements to std::future (notably a then() method for chaining a second operation to it), and executors and schedulers, with resumable functions being slated for addition.

However, the pre-Rapperswil mailing contained several papers arguing against the existing designs for executors and resumable functions, and proposing alternative designs instead. These papers led to executors and schedulers being removed from the Concurrency TS, and resumable functions not being added, until people come to a consensus regarding the alternative designs. I’m not sure whether publication of the Concurrency TS (which now contains only the std::future improvements) will proceed, leaving executors and resumable functions for a follow-up TS, or be stalled until consensus on the latter topics is reached.

For resumable functions, I was in the room during the technical discussion, and found it quite interesting. The alternative proposal is a coroutines library based on Boost.Coroutine. The two proposals differ both in syntax (new keywords async and await vs. the magic being hidden entirely behind a library interface), and implementation technique for storing the local variables of a resumable function (heap-allocated “activation frames” vs. side stacks). The feedback from SG 1 was to disentangle these two aspects, possibly yielding a proposal where either syntax could be matched with either implementation technique.

There are also other concurrency-related proposals before SG 1, such as ostream buffers, latches and barriers, shared_mutex, atomic operations on non-atomic data, and a synchronized wrapper. I assume these will go into either the current Concurrency TS, or a follow-up TS, depending on how they progress through the committee.
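Of these, shared_mutex is the easiest to illustrate; it eventually shipped in C++17 (C++14 already had the timed variant, shared_timed_mutex). A minimal reader/writer sketch:

    #include <shared_mutex>
    #include <thread>

    std::shared_mutex mutex;
    int shared_value = 0;

    void writer() {
        std::unique_lock<std::shared_mutex> lock(mutex); // exclusive access
        ++shared_value;
    }

    int reader() {
        std::shared_lock<std::shared_mutex> lock(mutex); // concurrent readers allowed
        return shared_value;
    }

    int main() {
        std::thread w(writer);
        std::thread r([] { (void)reader(); });
        w.join();
        r.join();
    }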

SG 2 (Modules)

Modules is, in my opinion, one of the most sorely needed features in C++. They have the potential of speeding up compilation by an order of magnitude or more, thus bringing compile-time performance more in line with more modern languages, and of solving the combinatorial explosion problem, caused by macros, that hampers the development of powerful tooling such as automated refactoring.

The standardization of modules has been a slow process for two reasons. First, it’s an objectively difficult problem to solve. Second, the solution shares some of the implementation difficulties of the export keyword, a poorly thought-out feature in C++98 that sought to allow separate compilation of templates; export was only ever implemented by one compiler (EDG), and the implementation process revealed flaws that led not only to other compilers not even bothering to implement it, but also to the feature being removed from the language in C++11. This bad experience with export led people to be uncertain about whether modules are even implementable. As a result, while some papers have been written proposing designs for modules (notably, one by Daveed Vandevoorde a couple of years back, and one by Gabriel Dos Reis very recently), what everyone has really been holding their breath for was an implementation (of any variation/design), to see that one was possible.

Google and others have been working on such an implementation in Clang, and I was very excited to hear Chandler Carruth (head of Google’s team working on Clang) report that they have now completed it! As this work was completed very recently prior to the meeting, they did not get a chance to write a paper to present at this meeting, but Chandler said one will be forthcoming for the next meeting.

EWG held a session on Modules, where Gaby presented his paper, and the Clang folks discussed their implementation. There were definitely some differences between the two. Gaby’s proposal came across as more idealistic: this is what a module system should look like if you’re writing new code from scratch with modules. Clang’s implementation is more practical: this is how we can start using modules right away in existing codebases. For example, in Gaby’s proposal, a macro defined in a module is not visible to an importing module; in Clang’s implementation, it is, reflecting the reality that today’s codebases still use macros heavily. As another example, in Gaby’s proposal, a module writer must say which declarations in the module file are visible to importing modules by surrounding them in an export block (not to be confused with the failed language feature I talked about above); in Clang’s implementation, a header file can be used as a module without any changes, using an external “module map” file to tell the compiler it is a module. Another interesting design question that came up was whether private members of a class exported from a module are “visible” to an importing module (in the sense that importing modules need to be recompiled if such a private member is added or modified); in Clang’s implementation, this is the case, but there would certainly be value in avoiding this (among other things, it would obsolete the laborious “Pimpl” design pattern).

The takeaway was that, while everyone wants this feature, and everyone is excited about there finally being an implementation, several design points still need to be decided. EWG deemed that it was too early to take any polls on this topic, but instead encouraged the two parties (the Clang folks, and Gaby, who works for Microsoft and hinted at a possible Microsoft implementation effort as well) to collaborate on future work. Specifically, EWG encourages that the following papers be written for Urbana: one about what is common to the various proposals, and one or more about the remaining deep technical issues. I eagerly await such future work and papers.

SG 3 (Filesystems)

At this meeting, the Library Working Group finished addressing the ballot comments for the Filesystem TS’s PDTS ballot, and sent out the TS for the final “DTS” ballot. If this ballot is successful, the Filesystems TS will be published by the end of 2014.

Beman (the SG 3 chair) stated that SG 3 will entertain new filesystem-related proposals that build upon the Filesystems TS, targeting a follow-up Filesystems TS II. To my knowledge no such proposals have been submitted so far.

SG 4 (Networking)

SG 4 had been working on standardizing basic building blocks related to networking, such as IP addresses and URIs. However, these efforts are stalled.

As a result, the LEWG decided at this meeting to take it upon itself to advance networking-related proposals, and they set their sights on something much more ambitious than IP addresses and URIs: a full-blown sockets library, based on Boost.ASIO. The plan is basically to pick up Chris Kohlhoff’s (the author of ASIO) 2007 paper proposing ASIO for standardization, incorporating changes to update the library for C++11, as well as C++14 (forthcoming). This idea received very widespread support in LEWG; the group decided to give people another meeting to digest the new direction, and then propose adopting these papers as the working paper for the Networking TS in Urbana.

This change in pace and direction might seem radical, but it’s in line with the committee’s philosophy for moving more rapidly with TS’s. Adopting the ASIO spec as the initial Networking TS working paper does not mean that the committee agrees with every design decision in ASIO; on the contrary, people are likely to propose numerous changes to it before it gets standardized. However, having a working paper will give people something to propose changes against, and thus facilitate progress.
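To give a flavor of what adopting ASIO would mean, here is a minimal blocking TCP client in the Boost.ASIO style of that era (these are Boost names, not anything the committee has standardized; error handling omitted):

    #include <boost/asio.hpp>
    #include <iostream>
    #include <string>

    using boost::asio::ip::tcp;

    int main() {
        boost::asio::io_service io; // the event loop at the heart of ASIO

        // Resolve a host name and connect to the first endpoint that works.
        tcp::resolver resolver(io);
        tcp::resolver::query query("example.com", "http");
        tcp::socket socket(io);
        boost::asio::connect(socket, resolver.resolve(query));

        // Synchronous write and read.
        boost::asio::write(socket,
            boost::asio::buffer(std::string("GET / HTTP/1.0\r\n\r\n")));
        boost::asio::streambuf response;
        boost::asio::read_until(socket, response, "\r\n");
        std::cout << &response; // status line of the reply
    }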

SG 5 (Transactional Memory)

The Transactional Memory TS is progressing well through the committee. CWG began reviewing its wording at this meeting, and referred one design issue to EWG. (The issue concerned functions that were declared to be neither transaction-safe nor transaction-unsafe, and defined out of line (so the compiler cannot compute the transaction safety from the definition). The state of the proposal coming into the discussion was that for such functions, the compiler must assume that they can be either transaction-safe or transaction-unsafe; this resulted in the compiler sometimes needing to generate two versions of some functions, with the linker stripping out the unused version if you’re lucky. EWG preferred avoiding this, and instead assuming that such functions are transaction-unsafe.) CWG will continue reviewing the wording in Urbana, and hopefully send out the TS for its PDTS ballot then.

SG 6 (Numerics)

Did not meet in Rapperswil, but plans to meet in Urbana.

SG 7 (Reflection)

SG 7 met for an evening session and looked at three papers:

  • The latest version of the source code information capture proposal, which aims to replace the __LINE__, __FILE__, and __FUNCTION__ macros with a first-class language feature. There was a lot of enthusiasm for this idea in Issaquah, and now that it’s been written up as a paper, SG 7 is moving on it quickly, deciding to send it right on to LEWG with only minor changes. The publication vehicle – with possible choices being Library Fundamentals TS II, a hypothetical Reflection TS, or C++17 – will be decided by LEWG.
  • The type member property queries proposal by Andrew Tomazos. This is an evolution of an earlier proposal which concerned enumerations only, and which was favourably reviewed in Issaquah; the updated proposal extends the approach taken for enumerations, to all types. The result is already a quite featureful compile-time introspection facility, on top of which facilities such as serialization can be built. It does have one significant limitation: it relies on forming pointers to members, and thus cannot be used to introspect members to which pointers cannot be formed – namely, references and bitfields. The author acknowledged this, and pointed out that supporting such members with the current approach would require language changes. SG 7 did not deem this a deal-breaker problem, possibly out of optimism that such language changes would be forthcoming if this facility created a demand for them. Overall, the general direction of the proposal had basically unanimous support, and the author was encouraged to come back with a revised proposal that splits out the included compile-time string facility (used to represent names of members for introspection) into a separate, non-reflection-specific proposal, possibly targeted at Library Fundamentals II. The question of extending this facility to introspect things other than types (notably, namespaces, although there was some opposition to being able to introspect namespaces) also came up; the consensus here was that such extensions can be proposed separately when desired.
  • A more comprehensive static reflection proposal was looked at very, very briefly (the author was not present to speak about it in detail). This was a higher-level and more featureful proposal than Tomazos’ one; the prevailing opinion was that it is best to standardize something lower-level like Tomazos’ proposal first, and then consider standardizing higher-level libraries that build on it if appropriate.
SG 8 (Concepts)

The Concepts TS (formerly called “Concepts Lite”, but then people thought “Lite” was too informal to be in the title of a published standard) is still in the CWG review stage. Even though CWG looked at it in Issaquah, and the author and project editor, Andrew Sutton, revised the draft TS significantly for Rapperswil, the feature touches many areas of the language, and as such more review of the wording was required; in fact, CWG spent almost three full days looking at it this time.

The purpose of a CWG review of a new language feature is twofold: first, to make sure the feature meshes well with all areas of the language, including interactions that the author and EWG may have glossed over; and second, to make sure that the wording reflects the author’s intention accurately. In fulfilling the first objective, CWG often ends up making minor changes to a feature, while staying away from making fundamental changes to the design (sometimes, recommendations for more significant changes do come up during a CWG review – these are run by EWG before being acted on).

In the case of the Concepts TS, CWG made numerous minor changes over the course of the review. It was initially hoped that there would be time to revise the wording to reflect these changes, and put the revised wording out for a PDTS ballot by the end of the meeting, but the changes were too numerous to make this feasible. Therefore, the PDTS ballot proposal was postponed until Urbana, and Andrew has until then to implement the wording changes.

SG 9 (Ranges)

SG 9 did not meet in Rapperswil, but does plan to meet in Urbana, and I anticipate some exciting developments in Urbana.

First, I learned that Eric Niebler, who in Issaquah talked about an idea for a Ranges proposal that I thought was very exciting (I describe it in my Issaquah report), plans to write up his idea as a proposal and present it in Urbana.

Second, one of the attendees at Rapperswil, Fabio Fracassi, told me that he is also working on a (different) Ranges proposal that he plans to present in Urbana as well. I’m not familiar with his proposal, but I look forward to it. Competition is always healthy when it comes to early-stage standards proposals / choosing an approach to solving a problem.

SG 10 (Feature Test)

I didn’t follow the work of SG 10 very closely. I assume that, in addition to the __has_cpp_attribute() preprocessor feature that I mentioned above in the EWG section, they are kept busy by the flow of new features being added into working papers, for each of which they have to decide whether the feature deserves a feature-test macro, and if so standardize a name for one.

Clark (the SG 10 chair) did mention that the existence of TS’s complicates matters for SG 10, but I suppose that’s a small price to pay for the benefits of TS’s.

SG 12 (Undefined Behaviour)

Did not meet in Rapperswil, but plans to meet in Urbana.

SG 13 (Human Interaction, formerly “Graphics”)

SG 13 met for a quarter-day session, during which Herb presented an updated version of the proposal for a cairo-based 2D drawing API. A few interesting points came up in the discussion:

  • The interface being standardized is a C++ wrapper interface around cairo that was generated using a set of mechanical transformation rules applied to cairo’s interface. The transformation rules are not being standardized, only their result (so the standard interface can potentially diverge from cairo in the future, though presumably this wouldn’t be done without a very good reason).
  • I noted that Mozilla is moving away from cairo, in part due to inefficiencies caused by cairo being a stateful API (as explained here). It was pointed out that this inefficiency is an issue of implementation (due to cairo using a stateless layer internally), not one of interface. This is a good point, although I’m not sure how much it matters in practice, as standard library vendors are much more likely to ship cairo’s implementation than write their own. (Jonathan Wakely said so about libstdc++, but I think it’s very likely the case for other vendors as well.)
  • Regarding the tradeoff between a nice interface and high performance, Herb said the goal was to provide a nice interface while providing as good a performance as we can get, without necessarily squeezing every last ounce of performance.
  • The library has the necessary extension points in place to allow for uses such as hooking into drawing onto a drawing surface of an existing library, such as a Qt canvas (with the cooperation of the existing library, cairo, and the platform, of course).

The proposal is moving forward: the authors are encouraged to come back with wording.

TS Content Guidelines

One mildly controversial issue that came to a vote in the plenary meeting at the end of the week, is the treatment of modifications/extensions to standard library types in a Technical Specification. One group held that the simplest thing to do for users is to have the TS specify modifications to the types in std:: themselves. Another group was of the view that, in order to make life easier for a third-party library vendor to implement a TS, as well as to ensure that it remains practical for a subsequent IS to break the TS if it needs to, the types that are modified should be cloned into an std::experimental:: namespace, and the modifications applied there. This second view prevailed.

Next Meeting

The next Committee meeting (“Urbana”) will be at the University of Illinois at Urbana-Champaign, the week of November 3rd.

Conclusion

The highlights of the meeting for me, personally, were:

  • The revelation that Clang has completed their modules implementation, that they will be pushing it for C++17, and that they are fairly confident that they will be able to get it in. The adoption of a proper modules system has the potential to revolutionize compilation speeds and the tooling landscape – revolutions that C++ needs badly.
  • Herb’s proposal for a portable C++ ABI. It is very encouraging to see the committee, which has long held this issue to be out of its scope, looking at a concrete proposal for solving a problem which, in my opinion, plays a significant role in hampering the use of C++ interfaces in libraries.
  • LEWG looking at bringing the entire Boost.ASIO proposal into the Networking TS. This dramatically brings forward the expected timeframe of having a standard sockets library, compared to the previous approach of standardizing first URIs and IP addresses, and then who knows what before finally getting to sockets.

I eagerly await further developments on these fronts and others, and continue to be very excited about the progress of C++ standardization.



Henrik Skupin: Firefox Automation report – week 25/26 2014

Thu, 17/07/2014 - 15:57

In this post you can find an overview of the work that happened in the Firefox Automation team during weeks 25 and 26.

Highlights

June 11th was actually the last Automation Training day for our team in Q3. You can read about the results here. We will implement some changes for the next quarter, when we most likely want to host two of them.

Henrik finally got the time to upgrade our Mozmill-CI systems to the latest LTS version of Jenkins. A few changes were necessary, but in general everything went fine this time, and we can see some great improvements. Especially the long delays when sending out job results seem to be gone.

Furthermore, Henrik investigated the slow behavior of the mozmill-ci production master when it is under load, e.g. when QA runs on-demand update tests for releases of Firefox. The main problem lies with Java, which takes up about 100% of the CPU. Because of this, the integrated web server cannot serve pages in a timely manner. Adding a 2nd CPU to this node gave us much better response times.

Given that the new version of Ubuntu came out in April, we want to have our Mozmill tests also run on that platform version. So we got a new VM spun up by IT, which we now have to puppetize and bring online. This may still take a bit, given the remaining blockers for using PuppetAgain.

While talking about Puppet: we got the next big change reviewed and landed. With bug 1021230 we now have our own user account, which can be customized to our needs. And that’s what we totally need, given that our infrastructure is so different from the Releng one.

Also for TPS we made progress, and the new TPS-CI production machine came online. It cannot yet replace the current CI due to a fair number of open blockers, but hopefully by the end of July we should be able to flip the switch.

Individual Updates

For more granular updates of each individual team member please visit our weekly team etherpad for week 25 and week 26.

Meeting Details

If you are interested in further details and discussions you might also want to have a look at the meeting agenda, the video recording, and notes from the Firefox Automation meetings of week 25 and week 26.


David Rajchenbach Teller: The Battle of Session Restore – Season 1 Episode 3 – All With Measure

Thu, 17/07/2014 - 14:34

Plot

For the second time, our heroes prepared for battle. The startup of Firefox was too slow, and Session Restore was one of the battlefields.

When Firefox starts, Session Restore is in charge of restoring the browser to its previous state, in case of a crash, a restart, or for the users who have configured Firefox to resume from its previous state. This entails numerous activities during startup:

  1. read sessionstore.js from disk, decode it and parse it (recall that the file is potentially several Mb large), handling errors;
  2. backup sessionstore.js in case of startup crash;
  3. create windows, tabs, frames;
  4. populate history, scroll position, forms, session cookies, session storage, etc.

It is common wisdom that Session Restore must have a large impact on Firefox startup. But before we could minimize this impact, we needed to measure it.

Benchmarking is not easy

When we first set foot on Session Restore territory, the contribution of that module to startup duration was uncharted. This was unsurprising, as this aspect of the Firefox performance effort was still quite young. To this day, we have not finished charting startup, or even Session Restore’s startup.

So how do we measure the impact of Session Restore on startup?

A first tool we use is Timeline Events, which let us determine how long it takes to reach a specific point of startup. Session Restore has had events `sessionRestoreInitialized` and `sessionRestored` for years. Unfortunately, these events did not tell us much about Session Restore itself.

The first serious attempt at measuring the impact of Session Restore on startup performance was actually not due to the Performance team but rather to the metrics team. Indeed, data obtained from Firefox Health Report participants indicated that something had gone wrong.

Oops, something is going wrong

Indicator `d2` in the graph measures the duration between `firstPaint` (which is the instant at which we start displaying content in our windows) and `sessionRestored` (which is the instant at which we are satisfied that Session Restore has opened its first tab). While this measure is imperfect, the dip was worrying – indeed, it represented startups that lasted several seconds longer than usual.

Upon further investigation, we concluded that the performance regression was indeed due to Session Restore. While we had not planned to start optimizing the startup component of Session Restore, this battle was forced upon us. We had to recover from that regression and we had to start monitoring startup much better.

A second tool is Telemetry Histograms for measuring duration of individual operations, such as reading sessionstore.js or parsing it. We progressively added measures for most of the operations of Session Restore. While these measures are quite helpful, they are also unfortunately very unstable in real-world conditions, as they are affected both by scheduling (the operations are asynchronous), by the work load of the machine, by the actual contents of sessionstore.js, etc.
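The underlying pattern is simply a scoped timer that feeds a histogram on destruction. A generic C++ sketch of that pattern follows; this is illustrative only, not Mozilla's actual Telemetry API, and the probe name is made up for the example:

    #include <chrono>
    #include <map>
    #include <string>

    // Toy histogram store: sample counts bucketed by power-of-two milliseconds.
    std::map<std::string, std::map<long long, int>> histograms;

    class DurationProbe {
    public:
        explicit DurationProbe(std::string name)
            : name_(std::move(name)), start_(std::chrono::steady_clock::now()) {}

        ~DurationProbe() {
            auto ms = std::chrono::duration_cast<std::chrono::milliseconds>(
                          std::chrono::steady_clock::now() - start_).count();
            long long bucket = 1;
            while (bucket < ms) bucket *= 2;
            ++histograms[name_][bucket]; // one more sample in this bucket
        }

    private:
        std::string name_;
        std::chrono::steady_clock::time_point start_;
    };

    void readSessionFile() {
        DurationProbe probe("SESSION_RESTORE_READ_FILE_MS"); // hypothetical probe name
        // ... read and decode sessionstore.js ...
    }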

The following graph displays the average duration of reading and decoding sessionstore.js among Telemetry participants:

Difference in colors represent successive versions of Firefox. As we can see, this graph is quite noisy, certainly due to the factors mentioned above (the spikes don’t correspond to any meaningful change in Firefox or Session Restore). Also, we can see a considerable increase in the duration of the read operation. This was quite surprising for us, given that this increase corresponds to the introduction of a much faster, off the main thread, reading and decoding primitive. At the time, we were stymied by this change, which did not correspond to our experience. We have now concluded that by changing the asynchronous operation used to read the file, we have simply changed the scheduling, which makes the operation appear longer, while in practice it simply does not block the rest of the startup from taking place on another thread.

One major tool was missing from our arsenal: a stable benchmark, always executed on the same machine, with the same contents of sessionstore.js, and that would let us determine more exactly (almost daily, actually) the impact of our patches upon Session Restore:

This test, based on our Talos benchmark suite, has proved both to be very stable, and to react quickly to patches that affected its performance. It measures the duration between the instant at which we start initializing Session Restore (a new event `sessionRestoreInit`) and the instant at which we start displaying the results (event `sessionRestored`).

With these measures at hand, we are now in a much better position to detect performance regressions (or improvements) to Session Restore startup, and to start actually working on optimizing it – we are now preparing to use this suite to experiment with “what if” situations to determine which levers would be most useful for such an optimization work.

Evolution of startup duration

Our first benchmark measures the time elapsed between start and stop of Session Restore if the user has requested all windows to be reopened automatically.

As we can see, the performance on Linux 32 bits, Windows XP and Mac OS 10.6 is decreasing, while the performance on Linux 64 bits, Windows 7 and 8 and MacOS 10.8 is improving. Since the algorithm used by Session Restore upon startup is exactly the same for all platforms, and since “modern” platforms are speeding up while “old” platforms are slowing down, this suggests that the performance changes are not due to changes inside Session Restore. The origin of these changes is unclear. I suspect the influence of newer versions of the compilers or some of the external libraries we use, or perhaps new and improved (for some platforms) gfx.

Still, seeing the modern platforms speed up is good news. As of Firefox 31, any change we make that causes a slowdown of Session Restore will cause an immediate alert so that we can react immediately.

Our second benchmark measures the time elapsed if the user does not wish windows to be reopened automatically. We still need to read and parse sessionstore.js to find whether it is valid, so as to decide whether we can show the “Restore” button on about:home.

We see peaks in Firefox 27 and Firefox 28, as well as a slight decrease of performance on Windows XP and Linux. Again, in the future, we will be able to react better to such regressions.

The influence of factors upon startup

With the help of our benchmarks, we were able to run “what if” scenarios to find out which of the data manipulated by Session Restore contributed to startup duration. We did this in a setting in which we restore windows:

and in a setting in which we do not:

size-norestore

Interestingly, increasing the size of sessionstore.js apparently has no influence on startup duration. Therefore, we do not need to optimize reading and parsing sessionstore.js. Similarly, optimizing history, cookies or form data would not gain us anything.

The single most expensive piece of data is the set of open windows – interestingly, this is the case even when we do not restore windows. More precisely, any optimization should target, in order of priority:

  1. the cost of opening/restoring windows;
  2. the cost of opening/restoring tabs;
  3. the cost of dealing with windows data, even when we do not restore them.
What’s next?

Now that we have information on which parts of Session Restore startup need to be optimized, the next step is to actually optimize them. Stay tuned!


Categorieën: Mozilla-nl planet

Marco Zehe: Quick tip: Add someone to circles on Google Plus using a screen reader

do, 17/07/2014 - 08:39

In my “WAI-ARIA for screen reader users” post in early May, I was asked by Donna to talk a bit about Google Plus. In particular, she asked how to add someone to circles. Google Plus has learned a thing or two about screen reader accessibility recently, but the fact that there is still no official documentation on the Google Accessibility entry page suggests that people inside Google are either not yet satisfied with the quality of Google Plus accessibility, or not placing a high enough priority on it. That quality, however, has improved, so adding someone to one or more circles with a screen reader is no longer that difficult.

Note that I tested the steps below with Firefox 31 Beta (out July 22) and NVDA 2014.2. Other screen reader/browser combinations may vary in what they announce or in how they switch between their virtual cursor and focus/forms modes.

Here are the steps:

  1. Log into Google Plus. If you already have a profile, just go ahead and find someone. If not, create a profile and add people.
  2. The easiest way to find people is to go to the People tab. Note that these tabs currently have no “selected” state, but they do have the word “active” as part of the link text.
  3. Once you have found someone in the list of suggestions, find the “Add to circles” menu button, and press the Space Bar. Note that it is very important that you press Space here, not Enter!
  4. NVDA now automatically switches to focus mode. What happened is that a popup menu opened containing a list of your current circles, plus an item at the bottom that allows you to create a new circle on the fly. The circles themselves are checkable menu items. Use the up and down arrows to select a circle, for example Friends or Acquaintances, and press the Space Bar to add the person. The number of people in that circle will dynamically increase by one, and its state will change to “checked”. Likewise, if you want to remove a person from a particular circle, press the Space Bar just the same. These all act like regular check boxes, and the menu stays active so you can shuffle that person around your circles as you please.
  5. At the bottom, there is a non-checkable menu item called “Add new circle”. Here, you have to press Enter. If you do this, a panel opens inside the menu, and focus lands on a text field where you can enter the name of a new circle, for example Web Developers. Press Tab to reach the Create Circle button and press Space Bar. The new circle will be added, the person you’re adding to circles will automatically be added to that circle, and you’re back in the menu of circle checkboxes.
  6. Once you’re done, press Escape twice. The first will end NVDA’s focus mode, the second will close the Add to Circles menu. Focus will land back on the button for that person, but the label will change to the name of a single circle, if you added the person to only one circle, or the label “x Circles”, where x is the number of circles you just put that person into.

The above steps also work on the menu button found on the profile page of an individual person, not just in the list of suggested people or any other list of people. The interaction is exactly the same.

Hope this helps you get around in Google Plus a bit more efficiently!

Categorieën: Mozilla-nl planet

Kent James: Following Wikipedia, Thunderbird Could Raise $1,600,000 in annual donations

do, 17/07/2014 - 08:31

What will it take to keep Thunderbird stable and vibrant? Although there is a dedicated, hard-working team of volunteers trying to keep Thunderbird alive, there has been very little progress on improvements since Mozilla drastically reduced its funding. I have advocated for some time that Thunderbird needs income to fulfill its potential, and that the best way to generate that income would be to appeal directly to its users for donations.

One internet organization that has done this successfully is Wikipedia. How much income could Thunderbird generate if it received the same income per user as Wikipedia? Surely our users, who rely on Thunderbird for critical daily communications, are at least as willing to donate as Wikipedia users.

Estimates of the income from Wikipedia’s annual fundraising drive are around $20,000,000 per year. Wikipedia recently reported 11,824 million pageviews per month and 5 pageviews per user per day, which works out to roughly 78 million daily users. Thunderbird, by contrast, has about 6 million daily users (estimated from daily update-check hits), or about 8% of Wikipedia’s daily users.
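For concreteness, here is the arithmetic behind those figures, as a worked estimate (my own rounding of the numbers quoted above):

\[
\frac{11{,}824\text{M pageviews/month}}{30\ \text{days} \times 5\ \text{pageviews/user}} \approx 78\text{M daily users},
\qquad
\frac{6\text{M}}{78\text{M}} \times \$20\text{M} \approx \$1.5\text{M per year}
\]

Rounding the user ratio up to 8% of Wikipedia’s audience gives the $1,600,000 headline figure.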

If Thunderbird were willing to directly engage users and ask for donations at the same rate per user as Wikipedia, there is the potential to raise $1,600,000 per year. That would certainly be enough income to maintain a serious team to move the product forward.

Wikipedia’s donation requests were fairly intrusive, with large banners at the top of all Wikipedia pages. When Firefox made a direct appeal to users early this year, the appeal was very subtle (did you even notice it?). I tried to scale the Firefox results to Thunderbird, and estimated that a similarly subtle appeal might raise $50,000 – $100,000 per year for Thunderbird. That is not sufficient to make a significant impact. We would have to be willing to be a little intrusive, like Wikipedia, if we are going to be successful. This will generate pushback, as Wikipedia’s campaign has, so we would have to be willing to live with that.

But is it really in the best interest of our users to spare them an annual, slightly intrusive appeal for donations, while letting the product they depend on each day slowly wither away? I believe that if we truly care about our users, we will take the necessary steps to ensure that we give them the best product possible, including undertaking fundraising to keep the product stable and vibrant.

Categorieën: Mozilla-nl planet

Nick Cameron: Rust for C++ programmers - part 9: destructuring pt2 - match and borrowing

do, 17/07/2014 - 03:19
(Continuing from part 8, destructuring).

When destructuring, there are some surprises in store where borrowing is concerned. Hopefully nothing will be surprising once you understand borrowed references really well, but it is worth discussing (it took me a while to figure out, that's for sure).

Imagine you have some `&Enum` variable `x` (where `Enum` is some enum type). You have two choices: you can match `*x` and list all the variants (`Variant1 => ...`, etc.), or you can match `x` and list reference-to-variant patterns (`&Variant1 => ...`, etc.). (As a matter of style, prefer the first form where possible, since there is less syntactic noise.) `x` is a borrowed reference, and there are strict rules for how a borrowed reference can be dereferenced; these interact with match expressions in surprising ways (at least surprising to me), especially when you are modifying an existing enum in a seemingly innocuous way and the compiler suddenly explodes on a match somewhere.

Before we get into the details of the match expression, let's recap Rust's rules for value passing. In C++, when assigning a value into a variable or passing it to a function there are two choices - pass-by-value and pass-by-reference. The former is the default case and means a value is copied, either using a copy constructor or a bitwise copy. If you annotate the destination of the parameter pass or assignment with `&`, then the value is passed by reference - only a pointer to the value is copied, and when you operate on the new variable, you are also operating on the old value.

Rust has the pass-by-reference option, although in Rust the source as well as the destination must be annotated with `&`. For pass-by-value in Rust, there are two further choices - copy or move. A copy is the same as C++'s semantics (except that there are no copy constructors in Rust). A move copies the value but destroys the old value - Rust's type system ensures you can no longer access the old value. As examples, `int` has copy semantics and `Box<int>` has move semantics:

fn foo() {
    let x = 7i;
    let y = x;                // x is copied
    println!("x is {}", x);   // Ok

    let x = box 7i;
    let y = x;                // x is moved
    //println!("x is {}", x); // error: use of moved value: `x`
}
Rust determines if an object has move or copy semantics by looking for destructors. Destructors probably need a post of their own, but for now, an object in Rust has a destructor if it implements the `Drop` trait. Just like C++, the destructor is executed just before an object is destroyed. If an object has a destructor then it has move semantics. If it does not, then all of its fields are examined and if any of those do then the whole object has move semantics. And so on down the object structure. If no destructors are found anywhere in an object, then it has copy semantics.
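As a minimal sketch of this rule (written in the same pre-1.0 Rust syntax as the rest of this post; `Noisy` is a made-up type for illustration), implementing `Drop` is enough to give a type move semantics:

struct Noisy {
    id: int
}

// Implementing Drop gives Noisy a destructor, and therefore move semantics.
impl Drop for Noisy {
    fn drop(&mut self) {
        // Runs just before a Noisy value is destroyed.
        println!("dropping Noisy {}", self.id);
    }
}

fn foo() {
    let x = Noisy { id: 1 };
    let y = x;                  // x is moved, not copied
    println!("y is {}", y.id);  // Ok
    //println!("{}", x.id);     // error: use of moved value: `x`
}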

Now, it is important that a borrowed object is not moved, otherwise you would have a reference to the old object which is no longer valid. This is equivalent to holding a reference to an object which has been destroyed after going out of scope - it is a kind of dangling pointer. If you have a pointer to an object, there could be other references to it. So if an object has move semantics and you have a pointer to it, it is unsafe to dereference that pointer. (If the object has copy semantics, dereferencing creates a copy and the old object will still exist, so other references will be fine).

OK, back to match expressions. As I said earlier, if you want to match some `x` with type `&T` you can dereference once in the match clause or match the reference in every arm of the match expression. Example:

enum Enum1 {
    Var1,
    Var2,
    Var3
}

fn foo(x: &Enum1) {
    match *x {  // Option 1: deref here.
        Var1 => {}
        Var2 => {}
        Var3 => {}
    }

    match x {
        // Option 2: 'deref' in every arm.
        &Var1 => {}
        &Var2 => {}
        &Var3 => {}
    }
}
In this case you can take either approach because `Enum1` has copy semantics. Let's take a closer look at each approach: in the first approach we first dereference `x` to a temporary variable with type `Enum1` (which copies the value in `x`) and then do a match against the three variants of `Enum1`. This is a 'one level' match because we don't go deep into the value's type. In the second approach there is no dereferencing. We match a value with type `&Enum1` against a reference to each variant. This match goes two levels deep - it matches the type (always a reference) and looks inside the type to match the referred type (which is `Enum1`).

If we are matching a reference to a value with move semantics, then the first approach is not an option. That is because `match *x` would move the enum value out of `*x` (rather than copy it), and any other references to the enum value would then be invalid. Option 2 is allowed, but that is not the end of the story. We have to be careful that any data nested inside the enum is not moved either (well, the compiler has to be careful). That is to prevent an object being partially moved while someone else has a reference to it - this other referrer assumes the object is wholly immutable. For example,

enum Enum2 {
    // Box has a destructor so Enum2 has move semantics.
    Var1(Box<int>),
    Var2,
    Var3
}

fn foo(x: &Enum2) {
    // *x is no longer allowed.
    match x {
        // We're ignoring nested data, so this is OK
        &Var1(..) => {}
        // No change to the other arms.
        &Var2 => {}
        &Var3 => {}
    }
}
But what about if we want to use the data in `Var1`? We can't write:

    match x {
        &Var1(y) => {}
        _ => {}
    }

because that would mean moving part of `x` into `y`. We can use the `ref` keyword to get a reference to the data in `Var1`: `&Var1(ref y) => {}`. That is OK, because now we are not dereferencing anywhere and thus not moving any part of `x`. Instead we are creating a pointer which points into the interior of `x`.

Alternatively, we could destructure the Box (this match goes three levels deep): `&Var1(box y) => {}`. This is OK because `int` has copy semantics and `y` is a copy of the `int` inside the `Box` inside `Var1` (which is 'inside' a borrowed reference). Since `int` has copy semantics, we don't need to move any part of `x`. We could also create a reference to the int rather than copy it: `&Var1(box ref y) => {}`. Again, this is OK, because we don't do any dereferencing and thus don't need to move any part of `x`. If the contents of the Box had move semantics, then we could not write `&Var1(box y) => {}`; we would be forced to use the reference version.
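Putting the last two paragraphs together, here is a sketch (same pre-1.0 syntax, reusing `Enum2` from above) of both patterns:

fn bar(x: &Enum2) {
    match x {
        // Borrow into the interior of x: y has type &Box<int>,
        // so no part of x is moved.
        &Var1(ref y) => println!("{}", **y),
        &Var2 => {}
        &Var3 => {}
    }

    match x {
        // Destructure the Box as well: y is a copy of the int
        // (copy semantics), so again no part of x is moved.
        &Var1(box y) => println!("{}", y),
        &Var2 => {}
        &Var3 => {}
    }
}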

If you do end up only being able to get a reference to some data and you need the value itself, you have no option except to copy that data. Usually that means using `clone()`. If the data doesn't implement `Clone`, you're going to have to destructure further to make a manual copy, or implement `Clone` yourself.
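For example (still a sketch in this post's syntax), `clone()` turns the borrowed data from the previous example into an owned value:

fn baz(x: &Enum2) {
    match x {
        &Var1(ref y) => {
            // y is only a borrowed &Box<int>; clone() gives us our own Box.
            let owned: Box<int> = y.clone();
            println!("{}", *owned);
        }
        &Var2 | &Var3 => {}
    }
}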
Categorieën: Mozilla-nl planet

Kevin Ngo: More Happiness for Your Buck

do, 17/07/2014 - 02:00
Disney is the happiest place on Earth, but also one of the most expensive. It might be well worth the wallet hit, though.

With increasing assets, I have been thinking lately about what to purchase next: home buying, vacation planning, investments. You know, personal finances. And I wonder how we spend in order to make ourselves happier. How can we use our money most efficiently to make ourselves happiest?

We have fine choices between 65" 3D plasma TVs, media-integrated BMWs and Audis, and Tudor-style houses on tree-lined avenues. Although we're all aware of the American Dream, and although we might even consciously scoff at it, is it ingrained in our heads deeply enough to affect our purchases? Despite being aware of materialism, we still spend on items such as Apple product upgrades or matching furniture sets. But compared to what we could be allocating our money towards, are they really worth it? Buck for buck, there are happier things to spend money on and happier ways to spend it.

Experiences Trumps Stuff

The happiness attained from a new toy is fleeting. When I buy a gadget, I get really excited about it for a couple weeks, and then it's just another item on the shelf. Once in freshman year, I dropped $70 on an HD camcorder. "Think about all the cool life experiences I could record!", I thought. After playing around with it for a bit, it got stowed away, just as Woody had when Buzz came to town. It wasn't the actual camcorder that I really wanted, it was thinking about the future experiences I could have.

Thinking back, the best things I have ever spent my money on were experiences. Trips around the world, to places like the cultural streets of Beijing, the serenity of Oahu, or the cold isolation of Alaska. They bring back warm (or perhaps cold) memories and instill a rush of nostalgia. They brought about happiness in a way that those $100 beat-up sneakers or that now-stolen iPod Touch never did.

It's changed my thoughts on getting a nice house or car. Why spend to be stuck at a mundane home or spend to be stuck in traffic (just in cushier seats)? I'd rather use the money saved from not splurging $400K on a house to see the world. Spend money to be with people, go to places, attend shows, try new things. You won't forget it.

Instant Gratification is a Drag

It's not only what we spend on that makes us happy, it's how we spend. When we spend in a way that brings instant gratification, such as buying that new fridge in-store on credit or getting that candy bar now, it destroys the whole fun of the waiting game. Have you ever eagerly awaited a package for weeks? Thinking about all the future possibilities, all the things you can do, all the fun you will have once that package comes? We are happier when we await something eagerly in anticipation. It's about the journey, not the destination.

Just yesterday, I booked my flight and hotel to Florida to visit my girlfriend working at Disney. It's almost two months out. But every day, I'll be thinking about how much fun we'll have watching the Fantasmic fireworks, how relaxing it will be staying at a 1940s Atlantic City-themed Disney inn, and all the delicious food at the Flying Fish. With the date marked on my calendar, just eagerly anticipating it makes me happier every day.

When you spend on something now and defer the actual consumption or experience until later, you will be much more gratified. Try pre-ordering something you enjoy, planning trips months ahead, or purchasing online. By practicing patience, you'll probably even save a bit of cash.

Make It Scarce

Experiencing something too frequently makes it less of an experience. If you drink a frothy mocha cappuccino every day, you become more and more desensitized to its creamy joys. By making something scarce - not buying or experiencing it too often - it becomes more of a treat. So if you're eating out for lunch every day at nice restaurants, you might want to think about only eating out once a week. Or only get expensive coffees on Fridays. It'll make the times you do go out that much more satisfying, and your wallet will thank you.

Time Trumps Money

Don't waste too much of your time pinching pennies. So Starbucks is giving out free 12oz coffees today? Free sounds enticing, but is it really worth the gas, the time in dreadful traffic, and waiting in line? View time as happiness. If you have more time, you can do more of the things you want to do. And if you simply feel like you have a lot of time, you feel much more free.

With that in mind, you should consider how purchases will affect your future time. Ask "will this really make me happier next week?". If you are contemplating a new TV, you might think it'll make you happier: so many friends over to play FIFA on the so-much-HD. But television doesn't make you happier or any less stressed; it's a numbing time-sink. Or think about when you are debating between two similar products, such as a Nexus 5 or an HTC One. Sure, when placed side-by-side, those extra megapixels and megahertz might seem like a huge advantage. But consider the product in isolation and ask whether it will really benefit your future time.

Give it Away

Warren Buffett pledged to give away 99% of his wealth, whether in his lifetime or posthumously. Giving away, paying it forward, being charitable makes people happy - happier than if they had splurged on themselves.

Helping others in need makes it feel like you have plenty of extra time to give away. And feeling like you have a lot of free time takes a boulder off your back. So invest in others and invest in relationships. We're inherently social creatures, and although we are sometimes selfish, that selfishness works against us. Donate to a charity where you know exactly where your money is going, or buy something nice for a family member or friend without pressure. It's money happily spent.

Categorieën: Mozilla-nl planet

Mark Surman: How do we get depth *and* scale?

wo, 16/07/2014 - 22:20

We want millions of people learning about the web every day with Mozilla. The ‘why’ is simple: web literacy is quickly becoming just as important as reading, writing and math. By 2024, there will be more than 5 billion people on the web. And, by then, the web will shape our everyday lives even more than it does today. Understanding how it works, how to build it and how to make it your own will be essential for nearly everyone.

Maker Party Uganda

The tougher question is ‘how’ — how do we teach the web with both the depth *and* scale that’s needed? Most people who tackle a big learning challenge pick one path or the other. For example, the educators in our Hive Learning Networks are focused on depth of learning. Everything they do is high-touch, hands-on and focused on innovating so learning happens in a deep way. On the flip side, MOOCs have quickly shown what scale looks like, but they almost universally have high drop-out rates and limited learning impact for all but the most motivated learners. We rarely see depth and scale go together. Yet, as the web grows, we need both. Urgently.

I’m actually quite hopeful. I’m hopeful because the Mozilla community is deeply focused on tackling this challenge head on, with people rolling up their sleeves to help people learn by making and organizing themselves in new ways that could massively grow the number of people teaching the web. We’re seeing the seeds of both depth and scale emerge.

This snapped into focus for me at MozFest East Africa in Kampala a few days ago. Borrowing from the MozFest London model, the event showcased a variety of open tech efforts by Mozilla and others: FirefoxOS app development; open data tools from a local org called Mountabatten; Mozilla localization; Firefox Desktop engineering; the work of the Ugandan National Information Technology Agency. It also included a huge Maker Party, with 200 young Ugandans showing up to learn and hack with Webmaker tools.

Maker Party Uganda

The Maker Party itself was impressive — pulled off well despite rain and limited connectivity. But what was more impressive was seeing how the Mozilla community is stepping up to plant the seeds of teaching the web at depth and scale, which I’d call out as:

Mentors: IMHO, a key to depth is humans connecting face to face to learn. We’ve set up a Webmaker Mentors program in the last year to encourage this kind of learning. The question has been: will people step up to do this kind of teaching and mentoring, and do it well? MozFest EA was promising start: 30 motivated mentors showed up prepared, enthusiastic and ready to help the 200 young people at the event learn the web.

Curriculum: one of the hard parts of scaling a volunteer-based mentor program is getting people to focus their teaching on the most important web literacy skills. Over the past couple of months we released a new collection of open source web literacy curriculum designed to solve this problem. We weren’t sure how things would work out, but I’d say MozFest EA is early evidence that curriculum can do a good job of helping people quickly understand what and how to teach. Here, each of the mentors was confidently and articulately teaching a piece of the web literacy framework using Webmaker tools.

Making as learning: another challenge is getting people to teach and learn deeply based on written curriculum. Mozilla focuses on ‘learning by making’ as a way past this — putting hands-on, project-based learning at the heart of most of our Webmaker teaching kits. For example, the basic remix teaching kit gets learners quickly hacking and personalizing their favourite big-brand web site, which almost always gets people excited and curious. More importantly: this ‘making as learning’ approach lets mentors adapt the experience to a learner’s interests and local context in real time. It was exciting to see the Ugandan mentors having students work on web pages focused on local school tasks and local music stars, which worked well in making the standard teaching kits come to life.

Clubs: mentors + curriculum + making can likely get us to our 2014 goal of 10,000 people around the world teaching web literacy with Mozilla. But the bigger question is: how do we keep the depth while scaling to a much bigger level? One answer is to create more ’nodes’ in the Webmaker network and get them teaching all year round. At MozFest EA, there was a session on Webmaker Clubs — after-school web literacy clubs run by students and teachers. This is an idea that floated up from the Mozilla community in Uganda and Canada. In Uganda, the clubs are starting to form. For me, this is exciting. Right now we have 30 contributors working on Webmaker in Uganda. If we opened up clubs in schools, we could imagine hundreds or even thousands. I think clubs like these are a key next step towards scale.

Community leadership: the thing that most impressed me at MozFest EA was the leadership from the community. San Emmanuel James and Lawrence Kisuuki have grown the Mozilla community in Uganda in a major way over the last couple of years. More importantly, they have invested in building more community leaders. As one example, they organized a Webmaker train-the-trainer event a few weeks before MozFest EA. The result was what I described above: confident mentors showing up ready to teach, including people other than San and Lawrence taking leadership within the Maker Party side of the event. I was impressed. This is key to both depth and scale: building more and better Mozilla community leaders around the world.

Of course, MozFest EA was just one event for one weekend. But, as I said, it gave me hope: it made me feel that the Mozilla community is taking the core building blocks of Webmaker and shaping them into something that could have a big impact.

IMG_20140716_185205

With Maker Party kicking off this week, I suspect we’ll see more of this in the coming months. We’ll see more people rolling up their sleeves to help people learn by making, and more people organizing themselves in new ways that could massively grow the number of people teaching the web. If we can make this happen this summer, much bigger things lie on the path ahead.


Filed under: education, mozilla, webmakers
Categorieën: Mozilla-nl planet

Gregory Szorc: Updates to firefoxtree Mercurial extension

wo, 16/07/2014 - 21:55

My Please Stop Using MQ post has been generating a lot of interest in bookmark-based workflows at Mozilla. To make adoption easier, I quickly authored an extension to add remote refs of Firefox repositories to Mercurial.

There was still a bit of confusion, and enough gripes about workflows, that I thought it best to update the extension to make things more pleasant.

Automatic tree names

People wanted the ability to easily pull and aggregate the various Firefox trees without adding extra configuration to an hgrc file.

With firefoxtree, you can now hg pull central or hg pull inbound or hg pull aurora and it just works.

Pushing with aliases doesn't yet work. It is slightly harder to do in the Mercurial API. I have a solution, but I'm validating some code paths to ensure it is safe. This feature will likely appear soon.

The fxheads command

Once people adopted unified repositories with heads from multiple repositories, they asked how they could quickly identify the heads of the pulled Firefox repositories.

firefoxtree now provides a hg fxheads command that prints a concise output of the commits constituting the heads of the Firefox repos. e.g.

$ hg fxheads
224969:0ec0b9ac39f0 aurora (sort of) bug 898554 - raise expected hazard count for b2g to 4 until they are fixed, a=bustage+hazbuild-only
224290:6befadcaa685 beta Tagging /src/mdauto/build/mozilla-beta 1772e55568e4 with FIREFOX_RELEASE_31_BASE a=release CLOSED TREE
224848:8e8f3ba64655 central Merge inbound to m-c a=merge
225035:ec7f2245280c fx-team fx-team/default Merge m-c to fx-team
224877:63c52b7ddc28 inbound Bug 1039197 - Always build js engine with zlib. r=luke
225044:1560f67f4f93 release release/default tip Automated checkin: version bump for firefox 31.0 release. DONTBUILD CLOSED TREE a=release

Please note that the output is based upon local-only knowledge.

Reject pushing multiple heads

People were complaining that bookmark-based workflows resulted in Mercurial trying to push multiple heads to a remote. This complaint stems from the fact that Mercurial's default push behavior is to find all commits missing from the remote and push them. This behavior is extremely frustrating for Firefox development, because the Firefox repos only have a single head, and pushing multiple heads will only result in a server hook rejecting the push (after a lot of time has been wasted transferring that commit data).

firefoxtree now will refuse to push multiple heads to a known Firefox repo before any commit data is sent. In other words, we fail fast so your time is saved.

firefoxtree also changes the default behavior of hg push when pushing to a Firefox repo. If no -r argument is specified, hg push to a Firefox repo will automatically remap to hg push -r . In other words, we attempt to push the working copy's commit by default. This change establishes a sensible default and likely working behavior when typing just hg push.
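As an illustrative sketch of the resulting workflow (this assumes a unified clone of the Firefox repos with firefoxtree enabled; it is not output from the extension itself):

$ hg pull central   # tree name resolves without any [paths] configuration
# ... commit work on a bookmark ...
$ hg push           # to a Firefox repo, this now implies -r . and refuses
                    # early to push multiple heads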

Installing firefoxtree

Within the next 48 hours, mach mercurial-setup should prompt to install firefoxtree. Until then, clone https://hg.mozilla.org/hgcustom/version-control-tools and ensure your ~/hgrc file has the following:

[extensions]
firefoxtree = /path/to/version-control-tools/hgext/firefoxtree

You likely already have a copy of version-control-tools in ~/.mozbuild/version-control-tools.

It is completely safe to install firefoxtree globally: the extension will only modify behavior of repositories that are clones of Firefox repositories.

Categorieën: Mozilla-nl planet

Pete Moore: Weekly review 2014-07-16

wo, 16/07/2014 - 16:37

Highlights

I was on build duty last week, so there is much less to report this week. I think we’ll have plenty to talk about though (wink wink).

The l10n vcs sync review was done by aki; I posted my responses, and am writing up a patch which I hope to land in the next 24 hours. That will wrap up l10n.

I’ve also been busy triaging queues, inviting people to meetings that I don’t attend myself, and cleaning up a lot of bugs (not just through the triaging, but in general).

Today’s major incident was fallout from the panda train 3 move - finally resolved now (yay). Basically, devices.json was out-of-date on the foopies. I did think to check devices.json, but disappointingly did not consider that it would be out-of-date on the foopies, as I knew we’d been having lots of reconfigs every day. For other reasons, though, the foopy updates had not been performed (hanging ssh sessions when updating them) - so it took a while until this was discovered (by dustin!). In the meantime I had to disable and re-enable more than 250 pandas.

Other than that, working ferociously on finishing off vcs sync.

I think I probably updated 200 bugs this week! It was quite a clean-up.

Categorieën: Mozilla-nl planet

Frédéric Harper: Community Evangelist: Firefox OS developer outreach at MozCamp India

wo, 16/07/2014 - 16:10

Copyright Ratnadeep Debnath
http://j.mp/1jIYxWb (click to enlarge)

At the end of June, I was in India to deliver a train-the-trainer session at MozCamp India. The purpose of the session Janet Swisher and I gave (the first time we worked together, and I think we made a winning combo) was to help Mozillians become Community Evangelists. Our goal was to make them part of our Technical Evangelist team: helping us inspire and enable developers in India to be successful with Firefox OS (we are starting with this technology because of the upcoming launch).

We could have filled a full day or more on developer outreach, but we only had three hours, in which we showed the attendees how they can contribute, ran a fun speaker idol, and worked on their project plans. Contributions can happen at many levels: public speaking, helping developers build Firefox OS applications, answering questions on StackOverflow, and more.

Since we had parallel tracks during our session, we gave it twice, to give attendees the chance to take in more than one track. For those who were there for the Saturday session, the following slides are the ones we used:

Developer Outreach for Firefox OS – Mozcamp India – 2014-06-21 from Frédéric Harper

I also recorded the session for those of you who would like to refresh your memory:

For the session on Sunday, we fixed some slides and adapted the session to give us more time for the speaker idol and the project plan. Here are the slides:

Developer Outreach for Firefox OS – Mozcamp India – 2014-06-22 from Frédéric Harper

If you were not there, I suggest you follow the slides and the video of the second day, as it is an improved version of the first one (not that the first one was bad, but it was the first time we gave this session).

From the feedback we got, it was a pretty good session, and we were happy to see the excitement of the Indian community about this community evangelist role. I can’t wait to see what the Mozilla community in India will do! If you too, Mozillian or not, have an interest in evangelizing the open web, you should join the Mozilla Evangelism mailing list.

 


--
Community Evangelist: Firefox OS developer outreach at MozCamp India is a post on Out of Comfort Zone from Frédéric Harper

Related posts:

  1. Firefox OS love in Toronto Yesterday, I was in Toronto to share some Firefox OS...
  2. Working your magic with Firefox OS – Playing mp4 Everything you are looking for, about Firefox OS development, is...
  3. One month as a Firefox OS Technical Evangelist Time flies; I thought I started at Mozilla last week,...
Categorieën: Mozilla-nl planet
