Mozilla Nederland
The Dutch Mozilla community

Subscribe to the Mozilla planet feed
Planet Mozilla - http://planet.mozilla.org/
Updated: 20 hours 4 min ago

Laura Hilliger: Open Fluency

Tue, 07/04/2015 - 16:39

[Image: webliteracy-lens-open-fluency]

I’ve been thinking about lenses on the Web Literacy Map again. Specifically, the “Leadership” component of what we do at Mozilla. In his post, Mark called this piece fuzzy, but I think it will become clearer as we define what “leadership” in the context of Mozilla means, and how we can offer professional development that brings people closer to that definition. What does it mean to be “trained” by Mozilla? Or to be part of Mozilla’s educational network? What do the leaders and passionate people in our community have in common? What makes them sustainable? What do we need to cognitively understand? What behaviors do we need to model? How do we unite with one another locally and globally?

I have some theories on specific competencies a leader needs to be considered “fluent” in open source and participatory learning. I’ve indicated possibilities in the above graphic [edit note: the smaller text items are just notes of topics that might be contained under the competencies]. The Web Literacy Map that Doug Belshaw and the Mozilla community created is extremely relevant in this work, which is why this post uses the word “fluency”: to indicate the relationship between the map and this lens on it. It feels like leadership in our context requires fluency in specific competencies, the highlighted ones on the web literacy map above.

There is a lot of content for professional development around teaching Web Literacy. I’m working on collecting resources for an upcoming conceptual and complete remix of what was Webmaker Training (and, before that, the original Teach the Web MOOC).

Last week in a team call, we talked about my first attempt to use blunt force to get the Web Literacy Map to cover skills and competencies I think are part of the “Teach Like Mozilla” offering at Mozilla. I made the below graphic, trying to work out the stuff in my brain (it helps me think when I can SEE things), and I immediately knew I was forcing a square peg into a round hole. I’m including it so you can see the evolution of the thinking behind the above graphic:

[Image: webliteracy-lens-TLM]

I’d love to hear thoughts on this approach to placing a lens on the Web Literacy Map. Please ask questions, push back, give feedback to this thinking-in-progress.
Categories: Mozilla-nl planet

Byron Jones: happy bmo push day!

Tue, 07/04/2015 - 09:17

the following changes have been pushed to bugzilla.mozilla.org:

  • [1118365] Write extension to use GitHub for Authentication
  • [1150965] “Due Date” field for the Mozilla Metrics queue
  • [1151592] Typo on custom recruiting form
  • [1149879] bug-modal’s editmode is broken for users without editbugs (Form field dup_id was not defined)
  • [1146960] replace the version number on bmo with a build number
  • [1149796] “Reset Assignee to default” and “Reset QA Contact to default” options missing when changing a bug’s component
  • [1149438] keyboard shortcut (hotkey) for “Edit” button
  • [1146760] cannot add other people to the cc list
  • [1150074] when a person blocks needinfo? requests, it prevents comments on a bug when there is an existing ni? request

discuss these changes on mozilla.tools.bmo.


Filed under: bmo, mozilla
Categories: Mozilla-nl planet

Andy McKay: Bill C51

Tue, 07/04/2015 - 09:00

Today Prime Minister Stephen Harper appeared at a school about 5 doors away from my house. We found this out on Easter Sunday when the school sent out a phone call to parents, who then told us that he'd appear on Tuesday morning at the school.

We all immediately jumped at the opportunity to protest. Most people were there about the Enbridge and Kinder Morgan pipeline expansions; I was there about Bill C51.

As Tuesday morning came around, I had a few meetings to polish off, which meant I missed the start. Apparently, upon seeing protestors, Stephen Harper went around the back and came in through a back path to avoid them. I came along later to find I wasn't the only one there. The protestors had split up between the main entrance and the back entrance.

Most of the protestors were there about oil; I was there about Bill C51. I felt pretty good turning up in my Mozilla t-shirt, knowing that Mozilla cares about this sort of thing.

I was encouraged to find a bunch of people protesting Bill C51, which made me feel good about not bringing a sign, since I could stand near them. I found it a little uncomfortable because one chap with a loudspeaker was being a bit vocal about opinions I didn't entirely agree with. I'm very bad at following crowds in chants.

This was a pretty big deal for Deep Cove, with plain-clothes RCMP officers lurking in bushes and lots of other police around in flak jackets.

It seemed Harper spoke in front of a bunch of bored students and people in hard hats. The people in hard hats were bussed in and out; they had nothing to do with the school. They just seemed excited to be on television with Harper.

For those unaware, Bill C51 is a terrible bill that expands the agencies' abilities to monitor Canadians with no oversight. The Mozilla blog sums this up well:

Meanwhile in Canada, the Canadian Parliament is considering an even more concerning bill, C-51, the Anti-Terrorism Act of 2015. C-51 is sweeping in scope, including granting Canadian intelligence agencies CSIS and CSE new authority for offensive online attacks, as well as allowing these agencies to obtain significant amounts of information held by the Canadian government. The open-ended internal information-sharing exceptions contained in the bill erode the relationship between individuals and their government by removing the compartmentalization that allows Canadians to provide the government some of their most private information (for census, tax compliance, health services, and a range of other purposes) and trust that that information will be used for only its original purposes. This compartmentalization, currently a requirement of the Privacy Act, will not exist after Bill C-51 comes into force.

The Bill further empowers CSIS to take unspecified and open-ended “measures,” which may include the overt takedown of websites, attacks on Internet infrastructure, introduction of malware, and more all without any judicial oversight. These kinds of attacks on the integrity and availability of the Web make us all less secure.

Mozilla

After a while it became clear he'd left in one of the unmarked, blacked-out cars, and that was that. People wandered off, and I got back to work after spending just 30 minutes at the protest.

In the end, the news mentioned that the protestors turned up, so that felt cool. A lot of the talk, though, was about how Stephen Harper was at the other end of the country from the Mike Duffy trial. Very convenient.

Amusingly, I realised that to sneak into the school Stephen Harper had to walk down a path that I walk regularly with my dog. I always clean up my dog's poo, but I know other people on that path don't. Perhaps he stood in some.

Categories: Mozilla-nl planet

Raniere Silva: Hacking Gecko or 1011020

Tue, 07/04/2015 - 05:00

For some time now I've wanted to contribute to the native MathML support in Gecko, but

  1. I only know C (not C++),
  2. I only program command-line interfaces (not graphical user interfaces),
  3. I've never programmed with a “graphical framework”, ...

Frédéric recommended that I start with Bug 1011020 because it shouldn't be very hard to fix. So I decided to give it a try.

Read more...

Categories: Mozilla-nl planet

Mike Hommey: Announcing git-cinnabar 0.2.0

Tue, 07/04/2015 - 04:18

Git-cinnabar is a git remote helper to interact with mercurial repositories. It allows you to clone, pull and push from/to mercurial remote repositories, using git.

Get it on github.
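
As a rough sketch of day-to-day use, based on the hg:: remote URL syntax described in the README (the repository URL here is just an example):

    $ git clone hg::https://hg.mozilla.org/mozilla-central
    $ cd mozilla-central
    $ git pull    # pull new mercurial changesets as git commits
    $ git push    # push git commits back as mercurial changesets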

What’s new since 0.1.1?
  • git cinnabar git2hg and git cinnabar hg2git commands that translate (possibly abbreviated) git sha1s to mercurial sha1s and vice-versa (see the sketch after this list).
  • A “native” helper that makes some operations faster. It is not required for git-cinnabar to work, but it can improve performance significantly. Check the Setup instructions in the README file.
  • Do not store mercurial metadata when pushing to non-publishing repositories. For Mozilla developers, this means not storing that metadata when pushing to try, which is a good thing when you know each of those pushes makes pulling slower. This behavior can be changed if necessary. Future releases will allow removing metadata that was created by previous releases but that wouldn’t be created with 0.2.0.
  • Made the discovery phase of pushes (the phase that finds what is common between the local and remote repositories) require fewer round trips, hopefully making pushing faster.
  • Improved logging, which now doesn’t require fiddling with the code to get extra logging.
  • Made fsck validate more things, and act on more errors.
  • Fixed a few edge cases.
  • Better handling of files with weird names that git quotes in its output.
  • Extensively tested on the following repositories: mozilla-central, mozilla-beta, mercurial, hg-git, cpython.
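
The translation commands above take a (possibly abbreviated) sha1 and print its counterpart. A quick sketch, with made-up sha1s:

    $ git cinnabar git2hg 5c71ee8    # git sha1 -> mercurial sha1
    $ git cinnabar hg2git f3a5b29    # mercurial sha1 -> git sha1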
What to expect next?
  • Allow pushing merge commits.
  • Improve memory footprint for pushes (currently, it’s fairly catastrophic on big repositories; don’t try to push multiple hundreds of commits of a Mozilla-sized repository if you don’t have multiple gigabytes of memory available).
  • As mentioned above, allow to remove some metadata.
  • And more…

If you want to follow the improvements more closely, I encourage you to switch to the `next` branch. I won’t push anything there that hasn’t been extensively tested on the above-mentioned repositories.

And as always, please report any issue you run into.

Categories: Mozilla-nl planet

Justin Wood: Find our footing on python best practices, of yesteryear.

Tue, 07/04/2015 - 03:52

In the beginning there was fire buildbot. This was Wed, 13 Feb 2008, the date of the first commit in the buildbot-configs repository.

For context, at this time:

[Image: Chief electronics engineer Kevin Sheridan receiving data in the makeshift Dapto radiospectrograph room (CC BY-SA 3.0, via Wikipedia)]

In picking buildbot as our tool we were improving vastly on the decade-old technology we had at the time (tinderbox), which was also written in oft-confusing and not-as-shiny perl (we love to hate it now, but it was a good language). [see relevant: image of then-new cutting-edge technology strung together in clunky ways]

As such, we at Mozilla Release Engineering, while just starting to realize the benefits of CI for tests in our main products (like Firefox), were not accustomed to it.

We were writing our buildbot-related code in 3 main repositories at the time (buildbot-configs, buildbotcustom, and tools) all of which we still use today.

Fast forward 5 years and you would have seen some common antipatterns in large codebases (over 203k lines of code!). It was hard to even read most code, let alone hack on it. Each patch required lots of headspace, and we would consistently break things with patches that were not well tested (even when we tried).

It was at a workweek here in 2013 that catlee got our group's agreement to try to improve that situation by continually running autopep8 over the codebase until there were no (or few) changes with each pass.

Thus began our first attempt at bringing our processes to what we call our modern practices.

In buildbotcustom and tools alone, this reduced our pep8 error rate from ~7,139 to ~1,999. (For contrast, our current rate for those two repos is ~1,485.)

(NOTE: This is a good contributor task: drive pep8 errors/warnings down to 0 for any of our repos, such as these. We can then make our current tests fail if pep8 fails. Though newer repos started with pep8 compliance, older ones did not. See the List of Repositories to pick some if you want to try. It's not glorious work, but it makes everyone more productive once it's done.)

The one place we decided pep8 wasn't for us was line length: we have had many cases where a single line (or even a url) barely fits in 80 characters for legitimate reasons, and we felt that arbitrarily limiting variable names or nesting depth just to satisfy that restriction would reduce readability. Therefore we generally use --max-line-length of ~159 when validating against pep8. (The above numbers do not account for --max-line-length.)
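
For anyone wanting to try the contributor task above, a minimal sketch of the invocations (the repository path is an example; the flags are the tools' standard options):

    $ pep8 --max-line-length=159 buildbotcustom/    # report remaining errors
    $ autopep8 --in-place --recursive --max-line-length=159 buildbotcustom/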

Around this time we had also set up an internal-only Jenkins instance as a test for validating at least pep8 and its trends; we have since found Jenkins not to be suitable for what we wanted.

Stay tuned to this blog for more history and how we arrived at some best practices that many now take for granted.

Categories: Mozilla-nl planet

James Long: Stop Trying to Catch Me

Tue, 07/04/2015 - 02:00

I'm probably going to regret this, but this post is about promises. There are a few details that I'd like to spell out so I can point people to this post instead of repeating myself. This is not a hyperbolic post like Radical Statements about the Mobile Web, which I sort of regret posting. In that post I was just yelling from a mountaintop, which isn't very helpful.

I humbly submit these points as reasons why I don't like promises, with full realization that the ship has sailed and nothing is going to change.

First, let's talk about try/catch. The problem with exception handling in JavaScript is that it's too aggressive. Consider the following code:

    try {
      var result = foo(getValue());
    } catch(e) {
      // handle error from `foo`
    }

We've accidentally captured the getValue() expression within this handler, so any error within getValue is captured. This is how exceptions work, of course, but it's made worse in JavaScript because a simple typo becomes an exception.

Exceptions are meant to be just that, exceptional. In most other languages, many typo-style errors are caught at compile-time, even in dynamic languages like Clojure. But in JavaScript, with the above code, if I was happily hacking away within getValue and I typed fucn() instead of func(), it gets caught and treated as an exception here.

I don't like how easy it is to get tripped up by try/catch. We could turn the above code into this:

    try {
      var result = foo(getValue());
    } catch(e) {
      if(e instanceof FooError) {
        // handle error from `foo`
        return;
      }
      throw e;
    }

Not only is this a ton of boilerplate, but it breaks an important feature of JavaScript debuggers: break on exception. If you have break on exception enabled, and you make an error inside getValue, it now pauses on the throw in the above code instead of inside getValue where you actually made the mistake.

So it's crazy to me that promises want to apply this behavior to async code and wrap everything in a try/catch. Break on exception is permanently broken now, and we have to go through all sorts of contortions and backflips to get back to a reasonable debugging environment. All because it wraps code in try/catch by default.

I don't care about the awkward .then() syntax. I don't mind automatic error propagation. I don't care about having to call .done() on a promise chain. I don't care about losing the stack (which is inherent in any async work).

I care that promises grab all errors, just like try/catch. The cost of a simple typo is greatly magnified. When you do async work in most other systems, you deal with errors pertaining to your async call. If I make an HTTP request, I want the network error to automatically bubble up the promise chain. I don't want anything unrelated to the async work to bubble up. I don't care about it.

I should be able to reject a promise with an error, and it bubbles up. But I want to make stupid typo errors and have them appear as normal errors, not caught by promises. Don't run everything in try/catch.

Oh, and about async/await

ES7 proposes async functions for doing async work. A lot of people are extremely excited about them, and honestly I don't get the excitement. Async functions are only pretty generators with promises:

    var asyncFunction = Task(function*() {
      var result = yield fetch(url);
      return process(result);
    });

fetch returns a promise. With async functions, it would look like this:

    async function asyncFunction() {
      var result = await fetch(url);
      return process(result);
    }

Ok, so that is nicer, and asyncFunction is hoisted (I think) like a normal function would be. It's cool, I just don't understand why everyone is so excited about a simple syntactic improvement.

Especially when we still have all the problems with promises. For example, some top-level promise code now looks like:

    async function run() {
      console.log(await asyncFunction());
    }
    run();

A newbie to JavaScript will write that code and be totally bewildered when nothing happens. They have no idea that they made a typo in asyncFunction, and it takes them a while to learn that run actually returns a promise.
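
For illustration, the usual (unsatisfying) fix is to terminate the chain by hand, something like this sketch:

    run().catch(function(err) {
      // Without this, the rejection (typo included) vanishes into the
      // promise that `run` implicitly returns.
      console.error(err);
    });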

Here are a few ideas I have:

  1. Allow run to somehow mark itself as a top-level function that automatically throws errors
  2. Now that we have #1, when an error happens inside a promise, the JS engine checks the promise chain to see if the error should immediately throw or not. It should immediately throw (as a normal error) if there is a top-level async function at the beginning of the promise chain.

Ok, so that's really just one idea. Native async/await syntax could potentially help here, if we are willing to think outside of promises.

You're Writing An Angry Tweet Right Now, Aren't You?

We are discussing error handling within the js-csp project, which implements go-style channels. Most likely we are going to propagate errors, but only ones that come down channels. I've been trying this out for a while and I love it.
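
To make the channel idea concrete, here is a rough sketch using js-csp's public API (go, chan, putAsync, take); the fakeFetch helper and the Error-instance convention are mine, not the project's settled design:

    var csp = require('js-csp');

    // A hypothetical async operation that delivers either a result
    // or an Error object on a channel.
    function fakeFetch(url) {
      var ch = csp.chan();
      setTimeout(function() {
        csp.putAsync(ch, new Error('network down: ' + url));
      }, 100);
      return ch;
    }

    csp.go(function*() {
      var result = yield csp.take(fakeFetch('http://example.com'));
      if (result instanceof Error) {
        // Only errors that actually came down the channel land here;
        // a typo elsewhere still throws like a normal error.
        console.log('request failed: ' + result.message);
        return;
      }
      console.log('got: ' + result);
    });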

I'm not going to spend time here convincing you to use js-csp, I just wanted to offer a solution instead of just complaining.

Hopefully I explained this well. I don't expect anything to change. I think my idea about async/await is pretty cool, so I hope someone looks into it.

Categories: Mozilla-nl planet

Anthony Hughes: Ninety Days with DOM

Tue, 07/04/2015 - 00:35

Last quarter marked a fairly significant change in my career at Mozilla. I spent most of the quarter adjusting to multiple re-orgs which left me as the sole QA engineer on the DOM team. Fortunately, as the quarter wraps up I feel like I’ve begun to adjust to my new role and started to make an impact.

Engineering Impact

My main objective this quarter was to improve the flow of DOM bugs in Bugzilla by developing and documenting some QA processes. A big part of that work was determining how I was going to measure impact, so I decided the simplest way to do that was to take the queries I would be working with and plot the data in Google Docs.

The solution was fairly primitive and lacked the ability to feed into a dashboard in any meaningful way, but as a proof of concept it was good enough. I established a baseline using the week-by-week numbers going back a couple of years. What follows is a crude representation of these figures and how the first quarter of 2015 compares to the now three years of history I’ve recorded.

Volume of unresolved Regressions & Crashes
[Chart: dom.regressions-vs-crashes.unresolved.alltime.2015q1] Regressions +55%, Crashes +188% since 2012

Year-over-Year trend in Regressions and Crashes
[Chart: dom.regressions-vs-crashes.unresolved.annual.2015q1] Regressions +9%, Crashes +68% compared to same time last year

Regressions and Crashes in First Quarters
[Chart: dom.regressions-vs-crashes.unresolved.quarterly.2015q1] Regressions -0.6%, Crashes +19% compared to previous 1st Quarters

Resolution Rate of Regressions and Crashes
[Chart: dom.regressions-vs-crashes.fixrate.2015q1] 90% of Regressions resolved (+2.5%), 80% of Crashes resolved (-7.0%)

Change in Resolution Rate compared to total Volume
[Chart: dom.regressions-vs-crashes.volume.2015q1] Regression resolution +2.5%, Crash resolution -6.9%, Total volume +68%

I know that’s a lot of data to digest but I believe they show embedding QA with the DOM team is having some initial success.

It’s important to acknowledge the DOM team for maintaining a very high resolution rate (90% for regressions, 80% for crashes) in the face of aggressive gains in total bug volume (68% in three years). They have done this largely on their own with minimal assistance from QA over the years, giving us a solid foundation from which we could build.

For DOM regressions I focused on making existing bug reports actionable, with less focus on filing new regression bugs. This has been a two-part effort: the first part focused on finding regression windows for known regression bugs, the second on converting unconfirmed bugs into actionable regression reports. I believe this is why we see a marginal increase in the regression resolution rate (+0.4% last quarter).

For DOM crashes I focused on filing previously unreported crashes (basically anything above a 1% report threshold). Naturally this has led to an increase in reports but has also led to some crashes being fixed that wouldn’t have been otherwise. Overall the crash resolution rate declined by 2.6% last quarter but I believe this should ultimately lead to a more stable product in the future.

The Older Gets Older

The final chart below plots the median age of unresolved DOM bugs week over week, which currently sits at 542 days: an increase of 4.8% this past quarter and 241% since January 1, 2012. I include it here not as a visualization of impact but as a general curiosity.

Median Age of Unresolved DOM Bugs
[Chart: dom.regressions-vs-crashes.fixrate.2015q1] Median age is 542 days, +4.8% last quarter, +241% since 2012

I have not yet figured out what this means in terms of overall quality or whether it's something we need to address. I suspect recently reported bugs tend to get fixed sooner since they are more immediately visible than older bugs, a fact that is likely common to most, if not all, components in Bugzilla. It might be interesting to see how this breaks down in terms of the age of the bugs being fixed.

What’s Next

My plan for the second quarter is to identify a subset of these to take outside of Google Docs and convert into a proof of concept dashboard. I’m hoping my peers on the DOM team can help me identify at least a couple that would be both interesting and useful. If it works out, I’d like to aim for expanding this to more Bugzilla components later in the year so more people can benefit.

If you share my interest and have any insights please leave a comment below.

As always, thank you for reading.

[UPDATE: I decided to quickly mock up a chart showing the age breakdown of bugs fixed this past quarter. As you can see below, younger bugs account for a much greater proportion of the bugs being fixed, perhaps expectedly.]

[Chart: age breakdown of bugs fixed in Q1 2015 (screenshot taken 2015-04-06)]

Categories: Mozilla-nl planet

Niko Matsakis: Modeling graphs in Rust using vector indices

Mon, 06/04/2015 - 20:58

After reading nrc’s blog post about graphs, I felt inspired to write up an alternative way to code graphs in Rust, based on vectors and indices. This encoding has certain advantages over using Rc and RefCell; in particular, I think it’s a closer fit to Rust’s ownership model. (Of course, it has disadvantages too.)

I’m going to describe a simplified version of the strategy that rustc uses internally. The actual code in rustc is written in a somewhat dated “Rust dialect”. I’ve also put the sources to this blog post in their own GitHub repository. At some point, presumably when I come up with a snazzy name, I’ll probably put an extended version of this library up on crates.io. Anyway, the code I cover in this blog post is pared down to the bare essentials, and so it doesn’t support (e.g.) enumerating incoming edges to a node, attaching arbitrary data to nodes/edges, etc. It would be easy to extend it to support that sort of thing, however.

The high-level idea

The high-level idea is that we will represent a “pointer” to a node or edge using an index. A graph consists of a vector of nodes and a vector of edges (much like the mathematical description G=(V,E) that you often see):

    pub struct Graph {
        nodes: Vec<NodeData>,
        edges: Vec<EdgeData>,
    }

Each node is identified by an index. In this version, indices are just plain usize values. In the real code, I prefer a struct wrapper just to give a bit more type safety.

    pub type NodeIndex = usize;

    pub struct NodeData {
        first_outgoing_edge: Option<EdgeIndex>,
    }

Each node just contains an optional edge index, which is the start of a linked list of outgoing edges. Each edge is described by the following structure:

    pub type EdgeIndex = usize;

    pub struct EdgeData {
        target: NodeIndex,
        next_outgoing_edge: Option<EdgeIndex>
    }

As you can see, an edge contains a target node index and an optional index for the next outgoing edge. All edges in a particular linked list share the same source, which is implicit. Thus there is a linked list of outgoing edges for each node that begins in the node data for the source and is threaded through each of the edge datas.

The entire structure is shown in this diagram, which depicts a simple example graph and the various data structures. Node indices are indicated by a number like N3 and edge indices by a number like E2. The fields of each NodeData and EdgeData are shown.

    Graph:

        N0 ---E0---> N1 ---E1---> N2
        |            ^
        E2           |
        |            |
        v            |
        N3 ----E3----+

    Nodes (NodeData):

        N0 { Some(E0) }
        N1 { Some(E1) }
        N2 { None }
        N3 { Some(E3) }

    Edges:

        E0 { N1, Some(E2) }
        E1 { N2, None }
        E2 { N3, None }
        E3 { N1, None }

Growing the graph

Writing methods to grow the graph is pretty straightforward. For example, here is the routine to add a new node:

    impl Graph {
        pub fn add_node(&mut self) -> NodeIndex {
            let index = self.nodes.len();
            self.nodes.push(NodeData { first_outgoing_edge: None });
            index
        }
    }

This routine will add an edge between two nodes (for simplicity, we don’t bother to check for duplicates):

    impl Graph {
        pub fn add_edge(&mut self, source: NodeIndex, target: NodeIndex) {
            let edge_index = self.edges.len();
            let node_data = &mut self.nodes[source];
            self.edges.push(EdgeData {
                target: target,
                next_outgoing_edge: node_data.first_outgoing_edge
            });
            node_data.first_outgoing_edge = Some(edge_index);
        }
    }

Finally, we can write an iterator to enumerate the successors of a given node, which just walks down the linked list:

    impl Graph {
        pub fn successors(&self, source: NodeIndex) -> Successors {
            let first_outgoing_edge = self.nodes[source].first_outgoing_edge;
            Successors { graph: self, current_edge_index: first_outgoing_edge }
        }
    }

    pub struct Successors<'graph> {
        graph: &'graph Graph,
        current_edge_index: Option<EdgeIndex>,
    }

    impl<'graph> Iterator for Successors<'graph> {
        type Item = NodeIndex;

        fn next(&mut self) -> Option<NodeIndex> {
            match self.current_edge_index {
                None => None,
                Some(edge_num) => {
                    let edge = &self.graph.edges[edge_num];
                    self.current_edge_index = edge.next_outgoing_edge;
                    Some(edge.target)
                }
            }
        }
    }
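
As a quick usage sketch (not from the original post, and assuming everything lives in one module so the private fields are reachable), here is the example graph from the diagram being built and queried:

    fn main() {
        let mut graph = Graph { nodes: vec![], edges: vec![] };

        let n0 = graph.add_node();
        let n1 = graph.add_node();
        let n2 = graph.add_node();
        let n3 = graph.add_node();

        graph.add_edge(n0, n1); // E0
        graph.add_edge(n1, n2); // E1
        graph.add_edge(n0, n3); // E2
        graph.add_edge(n3, n1); // E3

        // Edges are pushed onto the front of each node's list,
        // so this prints N3 and then N1.
        for target in graph.successors(n0) {
            println!("N0 -> N{}", target);
        }
    }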

Advantages

This approach plays very well to Rust’s strengths. This is because, unlike an Rc pointer, an index alone is not enough to mutate the graph: you must use one of the &mut self methods in the graph. This means that Rust can track the mutability of the graph as a whole in the same way that it tracks the mutability of any other data structure.

As a consequence, graphs implemented this way can easily be sent between threads and used in data-parallel code (any graph shared across multiple threads will be temporarily frozen while the threads are active). Similarly, you are statically prevented from modifying the graph while iterating over it, which is often desirable. If we were to use Rc nodes with RefCell, this would not be possible – we’d need locks, which feels like overkill.

Another advantage of this approach over the Rc approach is efficiency: the overall data structure is very compact. There is no need for a separate allocation for every node, for example (since they are just pushes onto a vector, additions to the graph are O(1), amortized). In fact, many C libraries that manipulate graphs also use indices, for this very reason.

Disadvantages

The primary disadvantage comes about if you try to remove things from the graph. The problem then is that you must make a choice: either you reuse the node/edge indices, perhaps by keeping a free list, or else you leave a placeholder. The former approach leaves you vulnerable to “dangling indices”, and the latter is a kind of leak. This is basically exactly analogous to malloc/free. Another similar problem arises if you use the index from one graph with another graph (you can mitigate that with fancy type tricks, but in my experience it’s not really worth the trouble).

However, there are some important qualifiers here:

  • It frequently happens that you don’t have to remove nodes or edges from the graph. Often you just want to build up a graph and use it for some analysis and then throw it away. In this case the danger is much, much less.
  • The danger of a “dangling index” is much less than a traditional dangling pointer. For example, it can’t cause memory unsafety.

Basically I find that this is a theoretical problem but for many use cases, it’s not a practical one.

The big exception would be if a long-lived graph is the heart of your application. In that case, I’d probably go with an Rc (or maybe Arc) based approach, or perhaps even a hybrid – that is, use indices as I’ve shown here, but reference count the indices too. This would preserve the data-parallel advantages.

Conclusion

The key insights in this approach are:

  • indices are often a compact and convenient way to represent complex data structures;
  • they play well with multithreaded code and with ownership;
  • but they also carry some risks, particularly for long-lived data structures, where there is an increased chance of indices being misused between data structures or leaked.
Categories: Mozilla-nl planet

Air Mozilla: Mozilla Weekly Project Meeting

Mon, 06/04/2015 - 20:00

The Monday Project Meeting.

Categories: Mozilla-nl planet

Mozilla Science Lab: Mozilla Science Lab Week in Review, March 30 – April 5

Mon, 06/04/2015 - 17:00

The Week in Review is our weekly roundup of what’s new in open science from the past week. If you have news or announcements you’d like passed on to the community, be sure to share on Twitter with @mozillascience and @billdoesphysics, or join our mailing list and get in touch there.

Conferences & Meetings
  • Submissions for talks and posters at SciPy 2015 are closing on April 10. The conference itself will be in Austin, Texas, July 6-12; check out the details here.
  • Amye Kenall summarized the recent meeting of leaders & thinkers in the open science community, hosted late last month at the Center for Open Science. In her blog post, Kenall touches on projects the group agreed to pursue, including articulating standards for open data APIs, providing guidance through the constellation of data and code training programs available, and better outlining the incentives for open practices.
  • The Advancing Research Communication & Scholarship conference is coming up on April 26-28 in Philadelphia; from their website, “ARCS is a new conference focused on the evolving and increasingly complex scholarly communication network.” Conference organizers are offering students and early career researchers scholarships for attendance and lodging.
  • The next Mozilla Science Lab Community Call is this Thursday, April 9. We’ll be hearing from several speakers on the topic of publishing negative results.
On the Blogs

Lessons & Teaching
  • Nancy Soontiens wrote a lesson on mapping in Python using Basemap. Soontiens recently delivered this lesson at UBC’s Earth & Ocean Science Study Group, and posted her notes at the growing Mozilla Science Lab Study Group Lesson repo.
  • Megan O’Donnell & Emma Molls from the Iowa State University Library published a slide deck entitled ‘What is Open? A Primer for Early Career Researchers’. In it, O’Donnell and Molls give a first introduction to the concepts of open access publishing and how open access affects research & self-promotion in the sciences.
Categories: Mozilla-nl planet

Mozilla Reps Community: Rep of the month – March 2015

Mon, 06/04/2015 - 12:39

Hello Fellow Reps,

Join me in welcoming our two new Reps of the Month for March, Ibrahima SARR and Faisal Aziz for their inspiring contributions to the Reps program.

Ibrahima SARR

A long-term Mozillian, Fulah localization team lead, and SUMO and KB l10n contributor, Ibrahima has created most of the Fulah Internet and IT terminology from scratch.

“Our biggest achievement yet is the localization of Firefox in Fulah which was released in summer (2012).”

Supporting the Web Literacy Map, he believes in the idea of redefining the curriculum to include practical learning and making. Follow his posts.

Ibrahima was part of the incredible team of Reps present at Mobile World Congress (MWC) 2015. Ibrahima Sarr, Alex Mayorga and Francisco Picollini did a great job presenting hundreds of Firefox OS demos to attendees at MWC 2015. Congratulations to the whole team, and especially to Ibrahima, who did an exceptional job of reporting the event on social media! More about MWC.

Faisal Aziz

Faisal is an inspiring mentor to many Reps and fellow Mozillians in India and has proven his mettle in many projects across the Mozilla universe. A proud open source activist and preacher, he has been helping communities like Bhopal, Indore, Warangal and Mumbai in India to flourish. He actively contributes to a number of Mozilla projects and is the locale lead for the “Ur” language.

He recently organized the annual event MozConnect (Part 1 and Part 2) in central India to bring different sub-communities under one roof.

Faisal is an integral part of Leadership at Mozilla India and an inspiration to many in his local community. Follow his posts.

Thank you Ibrahima and Faisal for your amazing work.
The best of the Reps program is reflected in your accomplishments and leadership.

Cheers!

Don’t forget to congratulate them on Discourse!

Categories: Mozilla-nl planet

This Week In Rust: This Week in Rust 76

Mon, 06/04/2015 - 06:00

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Send me an email! Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors or omissions in this week's issue, please submit a PR.

This week's issue covers the previous two weeks.

What's cooking on master?

266 pull requests were merged in the last two weeks, and 10 RFC PRs.

Now you can follow breaking changes as they happen!

Breaking Changes

Other Changes

New Contributors
  • Adenilson Cavalcanti
  • Alex Quach
  • Andrew Hobden
  • Augusto Hack
  • bcoopers
  • Camille Roussel
  • Carlos Galarza
  • Ches Martin
  • Dan Callahan
  • Dan W.
  • Darin Morrison
  • Drew Crawford
  • Emeliov Dmitrii
  • Germano Gabbianelli
  • github-monoculture
  • Huachao Huang
  • Jordan Woehr
  • Julian Viereck
  • kgv
  • Liam Monahan
  • Mark Mossberg
  • Nicholas Mazzuca
  • Or Neeman
  • ray glover
Approved RFCs

New RFCs

Notable Links

Project Updates

Upcoming Events

If you are running a Rust event please add it to the calendar to get it mentioned here. Email Erick Tryzelaar or Brian Anderson for access.

Quote of the Week

"Diagnostics are the UX of a compiler, and so they're deserving of the exact same care that is put into mobile app or Web design."

Declared by pcwalton on Hacker News.

Thanks to sanxiyn for the tip. Submit your quotes for next week!

Categories: Mozilla-nl planet

Mark Surman: Looking for smart MBA-ish person

Sat, 04/04/2015 - 19:58

Over the next six months, I need to write up an initial design for Mozilla Academy (or whatever we call it). The idea: create a global classroom and lab for the citizens of the web, using Mozilla’s existing community and learning programs as a foundation. Ultimately, this is about empowerment — but we also want to build something as impactful as Firefox. So, whatever we build needs to really make sense as a large-scale philanthropy play, a viable business, or both.

I’m looking for an early career MBA-ish (or MPA-ish) type person to work closely with me and others across Mozilla as part of this design process. It’s someone who wants to:

  • Write
  • Project manage
  • Help clarify our offerings
  • Benchmark our offerings
  • Size markets
  • Understand where the opportunities are
  • Figure out revenue and cost models
  • Work with our community
  • Work with partners
  • Coordinate people who have ideas
  • Call bullshit on me and others
  • Simplify complex ideas with diagrams
  • Make slides that are beautiful

The role is basically that of a right-hand person helping shape the design of Mozilla Academy over the next six months. It’s likely someone early in their career who is looking to pitch in and make a mark. If we are successful, we will have put together the blueprints for a global classroom and lab for the citizens of the Web that can scale to tens of millions of people, with a robust business model. This is a project for the ambitious!

If you think you are this person, please send an email to Phia <at> mozillafoundation.org (my assistant). Tell her why you want this role and why you’re the right person to fill it. And if you know someone who is right for this role, please pass this post on to them. I’m hoping to have someone in place by the end of April, so time is of the essence if you’re interested.


Filed under: mozilla
Categories: Mozilla-nl planet

Mike Conley: Things I’ve Learned This Week (March 30 – April 3, 2015)

Sat, 04/04/2015 - 18:00

This is my second post in a weekly series, where I attempt to distill my week down into some lessons or facts that I’ve picked up. Let’s get to it!

ES6 – what’s safe to use in browser development?

As of March 27, 2015, ES6 classes are still not safe for use in production browser code. There’s code to support them in Firefox, but they’re Nightly-only behind a build-time pref.

Array.prototype.includes and ArrayBuffer.transfer are also Nightly only at this time.

However, the rest of the ES6 Harmony work currently implemented in Nightly is fair game for use, according to jorendorff. The JS team is also working on a wiki page telling Firefox developers which ES6 features are safe to use and which are not.

Getting a profile from a hung process

According to mstange, it is possible to get profiles from hung Firefox processes using lldb. [1]

  1. After the process has hung, attach lldb.
  2. Type in [2]: p (void)mozilla_sampler_save_profile_to_file("somepath/profile.txt")
  3. Clone mstange’s handy profile analysis repository.
  4. Run: python symbolicate_profile.py somepath/profile.txt

    to graft symbols into the profile. mstange’s scripts do some fairly clever things to get those symbols: if your Firefox was built by Mozilla, then it will retrieve the symbols from the Mozilla symbol server. If you built Firefox yourself, it will attempt to use some cleverness [3] to grab the symbols from your binary.

    Your profile will now, hopefully, be updated with symbols.

    Then, load up Cleopatra, and upload the profile.

    I haven’t yet had the opportunity to try this, but I hope to next week. I’d be eager to hear people’s experiences giving this a go – it might be a great tool for determining what’s going on in Firefox when it’s hung [4]!

Parameter vs. Argument

I noticed that when I talked about “things that I passed to functions [5]”, I would use “arguments” and “parameters” interchangeably. I recently learned that there is more to those terms than I had originally thought.

According to this MSDN article, an argument is what is passed in to a function by a caller. To the function, it has received parameters. It’s like two sides of a coin. Or, as the article puts it, like cars and parking spaces:

You can think of the parameter as a parking space and the argument as an automobile. Just as different automobiles can park in a parking space at different times, the calling code can pass a different argument to the same parameter every time that it calls the procedure. [6]
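
To restate the analogy in a trivial, made-up snippet:

    function parkCar(car) {      // `car` is the parameter -- the parking space
      console.log("parked: " + car);
    }

    parkCar("a red hatchback");  // "a red hatchback" is the argument -- the automobile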

Not that it really makes much difference, but I like knowing the details.

  1. Unfortunately, this technique will not work for Windows. :(  

  2. Assuming you’re running a build after this revision landed. 

  3. A binary called dump_syms_mac in mstange’s toolkit, and nm on Linux 

  4. I’m particularly interested in knowing if we can get Javascript stacks via this technique – I can see that being particularly useful with hung content processes. 

  5. Or methods. 

  6. Source 

Categories: Mozilla-nl planet
