Mozilla Nederland: the Dutch Mozilla community

Planet Mozilla - http://planet.mozilla.org/
Updated: 18 hours 53 minutes ago

Chris Pearce: Reducing Windows' background CPU load while building Firefox

Fri, 11/07/2014 - 01:32
If you're building Firefox on Windows 8 like I am, you might want to tweak the following settings to reduce the OS's background CPU load while you're building Firefox (some of these settings may also be applicable to Windows 7, but I haven't tested this):
  1. Add your src/object directory to Windows Defender's list of locations excluded from real-time scans. I realized that the "Antimalware Service Executable" was using up to 40% CPU during builds before I did this. You can add your src/objdir to the exclude list using the UI at: Windows Defender > Settings > Excluded files and locations.
  2. Remove your src/objdir from Windows' list of locations to be indexed. I actually did the inverse, and removed my home directory (which my src/objdir was inside) from the list of indexable locations and re-added the specific subdirectories in my home dir that I wanted indexed (Documents, etc) without re-adding my src/objdir.
Update, 11 July 2014: Recently the "Antimalware Service Executable" started hogging CPU again while building, so I added MSVC's cl.exe and link.exe to the list of "Excluded Processes" in Windows Defender > Settings, and that reduced the "Antimalware Service Executable"'s CPU usage while building.

    Christian Heilmann: [video+slides] FirefoxOS – HTML5 for a truly world-wide-web (Sapo Codebits 2014)

    Thu, 10/07/2014 - 18:07

    Chris Heilmann at SAPO codebits
    Today the good people at Sapo released the video of my Codebits 2014 keynote.

    In this keynote, I talk about FirefoxOS and what it means in terms of bringing mobile web connectivity to the world. I explain how mobile technology is unfairly distributed and how closed environments prevent budding creators from releasing their first app. The slides are available on Slideshare, as the video doesn’t show them.

    There’s also a screencast on YouTube.

    Since this event, Google announced their Android One project, and I am very much looking forward to seeing how this world-wide initiative will play out and get more people connected.

    Photo by José P. Airosa @joseairosa


    Henrik Skupin: Firefox Automation report – week 21/22 2014

    Thu, 10/07/2014 - 14:54

    In this post you can find an overview of the work that happened in the Firefox Automation team during weeks 21 and 22.

    Highlights

    To help everyone in our community learn more about test automation at Mozilla, we planned four full-day automation training days from mid-May to mid-June. The first training day took place on May 21st and went well. Lots of people were present and actively learning more about automation (https://quality.mozilla.org/2014/05/automation-training-day-may-21st-results/), especially about testing with Mozmill.

    To make it a bit easier for community members to get started with Mozmill testing, we also created a set of one-and-done tasks. These start with easy tasks, like running Mozmill via the Mozmill Crowd extension, and end with creating a first simple Mozmill test.

    Something that hit us by surprise: with the release of Firefox 30.0b3 we no longer ran any automatically triggered Mozmill jobs in our CI. It took a bit of investigation, but Henrik finally found out that the problem had been introduced by RelEng when they renamed the product from "firefox" to "Firefox". A quick workaround fixed it temporarily, but for a long-term stable solution we might need a frozen API for build notifications via Mozilla Pulse.

    One of our goals in Q2 2014 is also to get our machines under the control of PuppetAgain. So Henrik started to investigate the first steps and set up the base manifests needed for our nodes, along with the appropriate administrative accounts.

    The second automation training day was also planned by Andreea and took place on May 28th. Again, a couple of people were present, and based on their feedback we fine-tuned the one-and-done tasks.

    Last but not least, Henrik set up the new Firefox-Automation-Contributor team, which finally allows us to assign people to specific issues. That was necessary because GitHub only lets you assign issues to known team members, not to arbitrary users.

    Individual Updates

    For more granular updates from each individual team member, please visit our weekly team etherpad for week 21 and week 22.

    Meeting Details

    If you are interested in further details and discussions you might also want to have a look at the meeting agenda, the video recording, and notes from the Firefox Automation meetings of week 21 and week 22.


    Henrik Gemal: Bringing SIMD to JavaScript

    Thu, 10/07/2014 - 09:16
    In an exciting collaboration with Mozilla and Google, Intel is bringing SIMD to JavaScript. This makes it possible to develop new classes of compute-intensive applications such as games and media processing—all in JavaScript—without the need to rely on any native plugins or non-portable native code. SIMD.js can run anywhere JavaScript runs. It will, however, run a lot faster and more power-efficiently on the platforms that support SIMD. This includes both client platforms (browsers and hybrid mobile HTML5 apps) and servers that run JavaScript, for example through Node.js and its V8 engine.
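
    To make the idea concrete: the 2014-era SIMD.js proposal exposed 128-bit vector types such as float32x4, with operations that act on four lanes at once. The sketch below is TypeScript with the SIMD global declared loosely, because that global only exists in SIMD-enabled engines or with Intel's ecmascript_simd polyfill; the names shown reflect the early proposal and were revised in later drafts (e.g. Float32x4).

        // Rough shape of the early SIMD.js API; assumes a SIMD-enabled engine or
        // the ecmascript_simd polyfill is loaded. Names changed in later drafts.
        declare const SIMD: any;

        // Add two vectors of four floats with a single vector operation.
        const a = SIMD.float32x4(1.0, 2.0, 3.0, 4.0);
        const b = SIMD.float32x4(5.0, 6.0, 7.0, 8.0);
        const sum = SIMD.float32x4.add(a, b);          // (6, 8, 10, 12)
        console.log(sum.x, sum.y, sum.z, sum.w);

        // The equivalent scalar code performs four separate additions:
        const scalarSum = [1.0 + 5.0, 2.0 + 6.0, 3.0 + 7.0, 4.0 + 8.0];
        console.log(scalarSum);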

    https://01.org/blogs/tlcounts/2014/bringing-simd-javascript


    Nicholas Nethercote: Dipping my toes in the Servo waters

    Thu, 10/07/2014 - 05:59

    I’m very interested in Rust and Servo, and have been following their development closely. I wanted to actually do some coding in Rust, so I decided to start making small contributions to Servo.

    At this point I have landed two changes in the tree — one to add very basic memory measurements for Linux, and the other for Mac — and I thought it might be interesting for people to hear about the basics of contributing. So here’s a collection of impressions and thoughts in no particular order.

    Getting the code and building Servo was amazingly easy. The instructions actually worked first time on both Ubuntu and Mac! Basically it’s just apt-get install (on Ubuntu) or port install (on Mac), git clone, configure, and make. The configure step takes a while because it downloads an appropriate version of Rust, but that’s fine; I was expecting to have to install the appropriate version of Rust first so I was pleasantly surprised.

    Once you build it, Servo is very bare-bones. Here’s a screenshot.

    Servo

    There is no address bar, or menu bar, or chrome of any kind. You simply choose which page you want to display from the command line when you start Servo. The memory profiling I implemented is enabled by using the -m option, which causes memory measurements to be periodically printed to the console.

    Programming in Rust is interesting. I’m not the first person to observe that, compared to C++, it takes longer to get your code past the compiler, but it’s more likely to work once you do. It reminds me a bit of my years programming in Mercury (imagine Haskell, but strict, and with a Prolog ancestry). Discriminated unions, pattern matching, etc. In particular, you have to provide code to handle all the error cases in place. Good stuff, in my opinion.

    One thing I didn’t expect but makes sense in hindsight: Servo has seg faulted for me a few times. Rust is memory-safe, and so shouldn’t crash like this. But Servo relies on numerous libraries written in C and/or C++, and that’s where the crashes originated.

    The Rust docs are a mixed bag. Libraries are pretty well-documented, but I haven’t seen a general language guide that really leaves me feeling like I understand a decent fraction of the language. (Most recently I read Rust By Example.) This is meant to be an observation rather than a complaint; I know that it’s a pre-1.0 language, and I’m aware that Steve Klabnik is now being paid by Mozilla to actively improve the docs, and I look forward to those improvements.

    The spotty documentation isn’t such a problem, though, because the people in the #rust and #servo IRC channels are fantastic. When I learned Python last year I found that simply typing “how to do X in Python” into Google almost always leads to a Stack Overflow page with a good answer. That’s not the case for Rust, because it’s too new, but the IRC channels are almost as good.

    The code is hosted on GitHub, and the project uses a typical pull request model. I’m not a particularly big fan of git — to me it feels like a Swiss Army light-sabre with a thousand buttons on the handle, and I’m confident using about ten of those buttons. And I’m also not a fan of major Mozilla projects being hosted on GitHub… but that’s a discussion for another time. Nonetheless, I’m sufficiently used to the GitHub workflow from working on pdf.js that this side of things has been quite straightforward.

    Overall, it’s been quite a pleasant experience, and I look forward to gradually helping build up the memory profiling infrastructure over time.


    John Ford: Node.js modules to talk to Treestatus and Build API

    Thu, 10/07/2014 - 05:38
    Have you ever wanted to talk to treestatus.mozilla.org or the buildapi in your node project?  Well, now you can!  I needed to be able to talk to these two systems, so I wrote a couple of wrappers for them today.  If you look at the code, the two modules are remarkably similar; that's because they're basically identical.  The API methods are defined in a basic JavaScript object, which is used to generate the functions of the API wrapper.  Because this makes it a little harder to figure out how to use the modules, I also wrote a function that automatically writes out a README.md file with all the functions and their parameters.
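
    The shape of that pattern is roughly the following. This is only a hedged sketch of the idea described above, not the actual modules: the method names, routes and hosts here are hypothetical, and the real wrappers define their own API tables.

        // Illustrative sketch only; not the real treestatus/buildapi modules.
        // A plain object describes the API; wrapper functions are generated from it,
        // and the same object can drive README generation.
        import * as https from "https";

        const apiDefinition = {
          // method name -> HTTP verb and route (":tree" is a substituted parameter)
          getTrees: { method: "GET", route: "/trees" },
          getTree:  { method: "GET", route: "/trees/:tree" },
        };

        type Client = Record<string, (params?: Record<string, string>) => Promise<string>>;

        function buildClient(host: string, definition: typeof apiDefinition): Client {
          const client: Client = {};
          for (const [name, spec] of Object.entries(definition)) {
            client[name] = (params = {}) =>
              new Promise<string>((resolve, reject) => {
                // Substitute ":placeholder" segments with the supplied parameters.
                const path = spec.route.replace(/:(\w+)/g, (_, key) => encodeURIComponent(params[key] ?? ""));
                const req = https.request({ host, path, method: spec.method }, res => {
                  let body = "";
                  res.on("data", chunk => (body += chunk));
                  res.on("end", () => resolve(body));
                });
                req.on("error", reject);
                req.end();
              });
          }
          return client;
        }

        // Generate simple Markdown docs from the same definition.
        function buildDocs(definition: typeof apiDefinition): string {
          return Object.entries(definition)
            .map(([name, spec]) => `### ${name}\n\n    ${spec.method} ${spec.route}`)
            .join("\n\n");
        }

        // e.g. const treestatus = buildClient("treestatus.mozilla.org", apiDefinition);
        //      treestatus.getTree({ tree: "mozilla-central" }).then(console.log);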

    Proof that it works:


    They aren't perfect, but they work well.  What I really should do is split the API data into a JSON file, create a single module that knows how to consume those JSON files and build the API objects and docs, and then have both of these wrappers use that module to do their work.  Wanna do that?  I swear, I'll look at pull requests!

    Edit: Forgot to actually link to the code.  It's here and here.

    Patrick Cloke: Mentoring and Time

    Wed, 09/07/2014 - 23:32

    No, this is not about being late to places; it’s about respecting people’s time. I won’t go deep into why this is important, as Michael Haggerty wrote an awesome article on this. His thoughts boiled down to a single line of advice:

    DON’T WASTE OTHER PEOPLE’S TIME

    I think this applies to any type of mentoring, and not only open source work, but any formal or informal mentoring! This advice isn’t meant just for GSoC students, for interns or new employees, but also things I’d like to remind myself to do when someone is helping me.

    To make this sound positive, I’d reword the above advice as:

    Respect other people’s time!

    Someone is willing to help you, so assume some good faith, but help them help you! Some actions to focus on:

    • Ask focused questions! If you do not understand an answer, do not re-ask the same question, but ask a follow-up question. Show that you’ve researched the original answer and attempted to understand it. Write sample code, play with it, etc. If you think the answer given doesn’t apply to your question, reword your question: your mentor probably did not understand it.
    • Be cognizant of timezones: if you’d like a question answered (in realtime), ask it when the person is awake! (And this includes realizing if they have just woken up or are going to bed.)
    • Your mentor may not have the context you do: they might be helping many people at once, or even working on something totally different than you! Try not to get frustrated if you have to explain your context to them multiple times or have to clarify your question. You are living and breathing the code you’re working in; they are not.
    • Don’t be afraid to share code. It’s much easier to ask a question when there’s a specific example in front of you. Be specific and don’t talk theoretically.
    • Don’t be upset if you’re asked to change code (e.g. receive an r-)! Part of helping you to grow is telling you what you’re doing wrong.
    • Working remotely is hard. It requires effort to build a level of trust between people. Don’t just assume it will come in time, but work at it and try to get to know and understand your mentor.
    • Quickly respond to both feedback and questions. Your mentor is taking their precious time to help you. If they ask you a question or ask something of you, do it ASAP. If you can’t answer their question immediately, at least let them know you received it and will soon look at it.
    • If there are multiple people helping you, assume that they communicate (without your knowledge). Don’t…
      • …try to get each of them to do separate parts of a project for you.
      • …ask the same question to multiple people hoping for different answers.

    The above is a lot to consider. I know that I have a tendency to do some of the above. Using your mentor’s time efficiently will not only make your mentor happy, but it will probably cause them to want to give you more of their time.

    Mentoring is also hard and a skill to practice. Although I’ve talked a lot about what a mentee needs to do, it is also important that a mentor makes h(im|er)self available and open. A few thoughts on interacting as a mentor:

    • Be cognizant of culture and language (as in, the level at which a mentor and mentee share a common language). In particular, colloquialisms should be avoided whenever possible. At least until a level of trust is reached.
    • Be tactful when giving feedback. Thank people for submitting a patch, give good, actionable feedback quickly. Concentrate more on overall code design and algorithms than nits. (Maybe even point out nits, but fix them yourself for an initial patch.)

    Byron Jones: using “bugmail filtering” to exclude notifications you don’t want

    Wed, 09/07/2014 - 18:39

    a common problem with bugzilla emails (aka bugmail) is there’s too much of it.  if you are involved in a bug or watching a component you receive notifications for all changes made to those bugs, even ones you have no interest in.

    earlier this week we pushed a new feature to bugzilla.mozilla.org: bugmail filtering.

    bugmail_filtering

    this feature is available on the “bugmail filtering” tab on the “user preference” page.

     

    there are many reasons why bugzilla may send you notification of a change to a bug:

    • you reported the bug
    • the bug is assigned to you
    • you are the qa-contact for the bug
    • you are on the cc list
    • you are watching the bug’s product/component
    • you are watching another user who received notification of the bug
    • you are a “global watcher”

    dealing with all that bugmail can be time-consuming.  one way to address this issue is to use the x-headers present in every bugmail to categorise and separate bugmail into different folders in your inbox.  unfortunately this option isn’t available to everyone (eg. gmail users still cannot filter on arbitrary email headers).
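
    if your mail setup can act on arbitrary headers (for example a local filter script rather than gmail), the sketch below shows the kind of routing those x-headers make possible.  headers such as X-Bugzilla-Product, X-Bugzilla-Component and X-Bugzilla-Reason do appear in bugmail; the folder names and the rules themselves are just an illustration.

        // illustration only: route a bugmail into a folder using its X-Bugzilla-* headers.
        // the header names are real bugmail headers; folders and rules are made up.
        function chooseFolder(headers: Record<string, string>): string {
          const product = headers["x-bugzilla-product"] ?? "";
          const component = headers["x-bugzilla-component"] ?? "";
          const reason = headers["x-bugzilla-reason"] ?? "";   // e.g. "AssignedTo", "CC"

          if (reason.includes("AssignedTo")) return "INBOX";   // keep bugs assigned to me up front
          if (product === "Core" && component === "Graphics") return "bugmail/gfx";
          return "bugmail/other";
        }

        // example: chooseFolder({ "x-bugzilla-product": "Core", "x-bugzilla-component": "Graphics",
        //                         "x-bugzilla-reason": "CC" })  ->  "bugmail/gfx"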

    bugmail filtering allows you to tell bugzilla to notify you only if it’s a change that you’re interested in.

    for example, you can say:

    don’t send me an email when the qa-whiteboard field is changed unless the bug is assigned to me

      filter-qa

    if multiple filters are applicable to the same bug change, include filters override exclude filters.  this interplay allows you to write filters to express “don’t send me an email unless …”

    don’t send me an email for developer tools bugs that i’m CC’ed on unless the bug’s status is changed

    • first, exclude all developer tools emails:

    filter-devtools-exclude

    • then override the exclusion with an inclusion for just the status changes:

    filter-devtools-include
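
    to make that interplay concrete, here is a sketch of the include-overrides-exclude rule (purely illustrative, not bugzilla's actual code; the product, component and field names are stand-ins):

        // illustrative model of bugmail filter evaluation: if any applicable filter
        // is an "include", mail is sent even when an "exclude" also matches.
        interface Filter {
          action: "include" | "exclude";
          product?: string;     // undefined means "any product"
          component?: string;   // undefined means "any component"
          field?: string;       // undefined means "any changed field"
        }

        interface Change { product: string; component: string; field: string; }

        function matches(f: Filter, c: Change): boolean {
          return (f.product === undefined || f.product === c.product)
              && (f.component === undefined || f.component === c.component)
              && (f.field === undefined || f.field === c.field);
        }

        function wantsBugmail(filters: Filter[], change: Change): boolean {
          const applicable = filters.filter(f => matches(f, change));
          if (applicable.some(f => f.action === "include")) return true;   // include overrides exclude
          if (applicable.some(f => f.action === "exclude")) return false;
          return true;  // nothing matched: normal notification rules apply
        }

        // the developer tools example from above:
        const filters: Filter[] = [
          { action: "exclude", product: "Firefox", component: "Developer Tools" },
          { action: "include", product: "Firefox", component: "Developer Tools", field: "status" },
        ];
        const ccChange = { product: "Firefox", component: "Developer Tools", field: "priority" };
        const statusChange = { product: "Firefox", component: "Developer Tools", field: "status" };
        console.log(wantsBugmail(filters, ccChange));      // false (excluded)
        console.log(wantsBugmail(filters, statusChange));  // true  (include wins)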


    Filed under: bmo

    Pete Moore: Weekly review 2014-07-09

    Wed, 09/07/2014 - 16:31

    This week I am on build duty.

    At the tail end of last week, I managed to finish off the l10n patches and sent them over to Aki to get them reviewed. He has now reviewed them, and the next step is to process his review comments.

    Other than this, I raised a bug about refactoring the beagle config and created a patch, and am currently in discussions with Aki about it.

    I’m still processing the loaners for Joel Maher (thanks Coop for taking care of the Windows loaners) - I hit some problems along the way setting up VNC on Mountain Lion, and I’m working through this currently (I’ve also involved Armen to get his expertise).

    After the loaners are done, I also have my own queue of optimisations that I’d like to look at that are related to build duty (open bugs).


    Niko Matsakis: An experimental new type inference scheme for Rust

    Wed, 09/07/2014 - 16:08

    While on vacation, I’ve been working on an alternate type inference scheme for rustc. (Actually, I got it 99% working on the plane, and have been slowly poking at it ever since.) This scheme simplifies the code of the type inferencer dramatically and (I think) helps to meet our intuitions (as I will explain). It is however somewhat less flexible than the existing inference scheme, though all of rustc and all the libraries compile without any changes. The scheme will (I believe) make it much simpler to implement proper one-way matching for traits (explained later).

    Note: Changing the type inference scheme doesn’t really mean much to end users. Roughly the same set of Rust code still compiles. So this post is really mostly of interest to rustc implementors.

    The new scheme in a nutshell

    The new scheme is fairly simple. It is based on the observation that most subtyping in Rust arises from lifetimes (though the scheme is extensible to other possible kinds of subtyping, e.g. virtual structs). It abandons unification and the H-M infrastructure and takes a different approach: when a type variable V is first related to some type T, we don’t set the value of V to T directly. Instead, we say that V is equal to some type U where U is derived by replacing all lifetimes in T with lifetime variables. We then relate T and U appropriately.

    Let me give an example. Here are two variables whose type must be inferred:

    'a: {          // 'a --> name of block's lifetime
        let x = 3;
        let y = &x;
        ...
    }

    Let’s say that the type of x is $X and the type of y is $Y, where $X and $Y are both inference variables. In that case, the first assignment generates the constraint that int <: $X and the second generates the constraint that &'a $X <: $Y. To resolve the first constraint, we would set $X directly to int. This is because there are no lifetimes in the type int. To resolve the second constraint, we would set $Y to &'0 int – here '0 represents a fresh lifetime variable. We would then say that &'a int <: &'0 int, which in turn implies that '0 <= 'a. After lifetime inference is complete, the types of x and y would be int and &'a int as expected.

    Without unification, you might wonder what happens when two type variables are related that have not yet been associated with any concrete type. This is actually somewhat challenging to engineer, but it certainly does happen. For example, there might be some code like:

    let mut x;         // type: $X
    let mut y = None;  // type: Option<$0>
    loop {
        if y.is_some() {
            x = y.unwrap();
            ...
        }
        ...
    }

    Here, at the point where we process x = y.unwrap(), we do not yet know the values of either $X or $0. We can say that the type of y.unwrap() will be $0 but we must now process the constraint that $0 <: $X. We do this by simply keeping a list of outstanding constraints. So neither $0 nor $X would (yet) be assigned a specific type, but we’d remember that they were related. Then, later, when either $0 or $X is set to some specific type T, we can go ahead and instantiate the other with U, where U is again derived from T by replacing all lifetimes with lifetime variables. Then we can relate T and U appropriately.
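
    For the code-minded, here is a toy model of the scheme just described. It is emphatically not rustc code (and is written in TypeScript purely so it stays short): a type variable is either unset or set to exactly one type; the first time it is related to a concrete type it is assigned a copy of that type whose lifetimes have been replaced by fresh lifetime variables, plus the implied lifetime constraints; constraints between two unset variables are simply queued (re-checking that queue when a variable later becomes set is omitted here).

        // Toy model of the inference scheme described above (not rustc code).
        // A type is int, a reference &'lt T, or an inference variable $n.
        type Lifetime = { kind: "named"; name: string } | { kind: "lt-var"; id: number };
        type Ty =
          | { kind: "int" }
          | { kind: "ref"; lt: Lifetime; referent: Ty }
          | { kind: "ty-var"; id: number };

        let nextLtVar = 0;
        const assignments = new Map<number, Ty>();                 // set type variables
        const deferred: Array<{ sub: Ty; sup: Ty }> = [];          // var <: var, neither side known yet
        const ltConstraints: Array<{ shorter: Lifetime; longer: Lifetime }> = [];  // e.g. '0 <= 'a

        // U = T with every lifetime replaced by a fresh lifetime variable.
        function freshen(t: Ty): Ty {
          if (t.kind !== "ref") return t;
          return { kind: "ref", lt: { kind: "lt-var", id: nextLtVar++ }, referent: freshen(t.referent) };
        }

        function resolve(t: Ty): Ty {
          while (t.kind === "ty-var" && assignments.has(t.id)) t = assignments.get(t.id)!;
          return t;
        }

        // Process the constraint `sub <: sup`.
        function relate(sub: Ty, sup: Ty): void {
          sub = resolve(sub);
          sup = resolve(sup);
          if (sub.kind === "ty-var" && sup.kind === "ty-var") {
            deferred.push({ sub, sup });            // revisit when either side is set
          } else if (sup.kind === "ty-var") {
            assignments.set(sup.id, freshen(sub));  // $V := U, then require sub <: U
            relate(sub, resolve(sup));
          } else if (sub.kind === "ty-var") {
            assignments.set(sub.id, freshen(sup));  // $V := U, then require U <: sup
            relate(resolve(sub), sup);
          } else if (sub.kind === "ref" && sup.kind === "ref") {
            ltConstraints.push({ shorter: sup.lt, longer: sub.lt });  // &'long X <: &'short X
            relate(sub.referent, sup.referent);     // (ignoring variance subtleties)
          }
          // int <: int needs no work; any other combination would be a type error.
        }

        // The example from the post: int <: $X, then &'a $X <: $Y.
        const X: Ty = { kind: "ty-var", id: 0 };
        const Y: Ty = { kind: "ty-var", id: 1 };
        relate({ kind: "int" }, X);                                                   // $X := int
        relate({ kind: "ref", lt: { kind: "named", name: "'a" }, referent: X }, Y);   // $Y := &'0 int, '0 <= 'a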

    If we wanted to extend the scheme to handle more kinds of inference beyond lifetimes, it can be done by adding new kinds of inference variables. For example, if we wanted to support subtyping between structs, we might add struct variables.

    What advantages does this scheme have to offer?

    The primary advantage of this scheme is that it is easier to think about for us compiler engineers. Every type variable is either set – in which case its type is known precisely – or unset – in which case its type is not known at all. In the current scheme, we track a lower- and upper-bound over time. This makes it hard to know just how much is really known about a type. Certainly I know that when I think about inference I still think of the state of a variable as a binary thing, even though I know that really it’s something which evolves.

    What prompted me to consider this redesign was the need to support one-way matching as part of trait resolution. One-way matching is basically a way of saying: is there any substitution S such that T <: S(U) (whereas normal matching searches for a substitution applied to both sides, like S(T) <: S(U)).

    One-way matching is very complicated to support in the current inference scheme: after all, if there are type variables that appear in T or U which are partially constrained, we only know bounds on their eventual type. In practice, these bounds actually tell us a lot: for example, if a type variable has a lower bound of int, it actually tells us that the type variable is int, since in Rust’s type system there are no super- or sub-types of int. However, encoding this sort of knowledge is rather complex – and ultimately amounts to precisely the same thing as this new inference scheme.

    Another advantage is that there are various places in Rust’s type checker where we query the current state of a type variable and make decisions as a result. For example, when processing *x, if the type of x is a type variable T, we would want to know the current state of T – is T known to be something inherently derefable (like &U or &mut U) or a struct that must implement the Deref trait? The current APIs for doing this bother me because they expose the bounds of U – but those bounds can change over time. This seems “risky” to me, since it’s only sound for us to examine those bounds if we either (a) freeze the type of T or (b) are certain that we examine properties of the bound that will not change. This problem does not exist in the new inference scheme: anything that might change over time is abstracted into a new inference variable of its own.

    What are the disadvantages?

    One form of subtyping that exists in Rust is not amenable to this inference. It has to do with universal quantification and function types. Function types that are “more polymorphic” can be subtypes of functions that are “less polymorphic”. For example, if I have a function type like <'a> fn(&'a T) -> &'a uint, this indicates a function that takes a reference to T with any lifetime 'a and returns a reference to a uint with that same lifetime. This is a subtype of the function type fn(&'b T) -> &'b uint. While these two function types look similar, they are quite different: the former accepts a reference with any lifetime but the latter accepts only a reference with the specific lifetime 'b.

    What this means is that today if you have a variable that is assigned many times from functions with varying amounts of polymorphism, we will generally infer its type correctly:

    fn example<'b>(..) {
        let foo: <'a> |&'a T| -> &'a int = ...;
        let bar: |&'b T| -> &'b int = ...;
        let mut v;
        v = foo;
        v = bar;   // type of v is inferred to be |&'b T| -> &'b int
    }

    However, this will not work in the newer scheme. Type ascription of some form would be required. As you can imagine, this is not a very common problem, and it did not arise in any existing code.

    (I believe that there are situations in which the newer scheme infers correct types and the older scheme will fail to compile; however, I was unable to come up with a good example.)

    How does it perform?

    I haven’t done extensive measurements. The newer scheme creates a lot of region variables. It seems to perform roughly the same as the older scheme, perhaps a bit slower – optimizing region inference may be able to help.


    Doug Belshaw: Why Mozilla cares about Web Literacy [whitepaper]

    Wed, 09/07/2014 - 15:41

    One of my responsibilities as Web Literacy Lead at Mozilla is to provide some kind of theoretical/conceptual underpinning for why we do what we do. Since the start of the year, along with Karen Smith and some members of the community, I’ve been working on a Whitepaper entitled Why Mozilla cares about Web Literacy.

    Webmaker whitepaper

    The thing that took time wasn’t really the writing of it – Karen (a post-doc researcher) and I are used to knocking out words quickly – but the re-scoping and design of it. The latter is extremely important as this will serve as a template for future whitepapers. We were heavily influenced by P2PU’s reports around assessment, but used our own Makerstrap styling. I’d like to thank FuzzyFox for all his work around this!

    Thanks also to all those colleagues and community members who gave feedback on earlier drafts of the whitepaper. It’s available under a Creative Commons Attribution 4.0 International license and you can fork/improve the template via the GitHub repository. We’re planning for the next whitepaper to be around learning pathways. Once that’s published, we’ll ensure there’s a friendlier way to access them - perhaps via a subdomain of webmaker.org.

    Questions? Comments? I’m @dajbelshaw and you can email me at doug@mozillafoundation.org.


    Gervase Markham: The Latest Airport Security Theatre

    Wed, 09/07/2014 - 15:23

    All passengers flying into or out of the UK are being advised to ensure electronic and electrical devices in hand luggage are sufficiently charged to be switched on.

    All electronic devices? Including phones, right? So you must be concerned that something dangerous could be concealed inside a package the size of a phone. And including laptops, right? Which are more than big enough to contain said dangerous phone-sized electronics package in the CD drive bay, or the PCMCIA slot, and still work perfectly. Or, the evilness could even be occupying 90% of the body of the laptop, while the other 10% is taken up by an actual phone wired to the display and the power button which shows a pretty picture when the laptop is “switched on”.

    Or are the security people going to make us all run 3 applications of their choice and take a selfie using the onboard camera to demonstrate that the device is actually fully working, and not just showing a static image?

    I can’t see this as being difficult to engineer around. And meanwhile, it will cause even more problems trying to find charging points in airports. Particularly for people who are transferring from one long flight to another.

