Mozilla Nederland
The Dutch Mozilla community

Adam Okoye: OPW Internship

Mozilla planet - Tue, 09/12/2014 - 08:25

Tomorrow, December 9th (which is only about 45 minutes away at this point) is the start of my OPW internship with Mozilla. I’ll be working on the SUMO/Input Web Designer/Developer project and, from my understanding, primarily on the “thank you” page that people see after they leave feedback. The goal is for people who leave feedback, especially negative feedback, not to feel brushed off, but rather to feel that their feedback was well received. We also want to be able to (a) point them to knowledge base articles that might mitigate the issue(s) they are having with Firefox, based on their feedback (what they wrote in the text field), and (b) point them towards additional ways they can become involved with Mozilla.
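
As a purely illustrative sketch (none of these names or article titles come from Input; they are made up for this post), the keyword-to-article matching could look something like this in Python:

```python
# Hypothetical sketch of the kind of matching the thank-you page could do:
# scan the feedback text for known keywords and suggest knowledge base
# articles. The keywords and article titles are illustrative, not Input's.

KB_ARTICLES = {
    "crash": "Firefox crashes - troubleshoot and prevent",
    "slow": "Firefox is slow - how to make it faster",
    "flash": "Flash plugin - keep it up to date and troubleshoot",
}

def suggest_articles(feedback_text):
    """Return KB article titles whose keyword appears in the feedback."""
    text = feedback_text.lower()
    return [title for keyword, title in KB_ARTICLES.items() if keyword in text]
```

A real implementation would of course need smarter text analysis than substring matching, but the shape of the feature is roughly this.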

Like I said above, the internship starts on December 9th and ends on March 9th. The internship, like all OPW projects, is remote, but because there is a Portland Mozilla office I will be able to work in one of their conference rooms. Most of the programming I will be doing will be in Python, and I will also be doing a lot of work with Django. That said, I will likely also do some work in HTML, CSS, and JavaScript. In addition to the thank-you page, I’m also going to be working on other assorted Input bugs.

As part of the agreement for my internship I will be posting at least one internship-related post every two weeks. In practice I hope to post at least one a week, as it will get me back into the practice of blogging and also give me a backup plan if there is a week when I can’t post for whatever reason.

Here’s to a productive three months!

Categories: Mozilla-nl planet

Nick Alexander: The Firefox for Android build system in 2015

Mozilla planet - Tue, 09/12/2014 - 01:20

My colleagues @lucasratmundo, @mleibovic, @michaelcomella, and vivekb and I attended the Community Building discussion at #mozlandia (notes and slides are available). @mhoye presented his thinking about community building and engagement at Mozilla and beyond. I interpreted Mike’s presentation through a bifurcated lens: I came away feeling that there are social aspects to community engagement, such as providing positive rewards and recognition, and technical aspects, such as maintaining documentation and simplifying tooling requirements [1].

People like @lsblakk are able to bring new people into our community with phenomenal outreach programs like the Ascend Project, but that’s not my skill-set. I deeply admire the social work Lukas (and others!) are doing, but I personally am most able to empower the Mozilla community to own Firefox for Android by addressing the technical aspects Mike discussed.

Making it easier to contribute to Firefox for Android

In this context, the following initiatives will drive the Firefox for Android tooling:

  1. making it easier to build Firefox for Android the first time;
  2. reducing the edit-compile-test cycle time;
  3. making the Firefox for Android development process look and feel like the standard Android development process.
Making it easier to build Firefox for Android the first time

One strong claim made by mhoye — and supported by many others in the room — is that mach bootstrap has significantly reduced the technical accessibility barrier to building Firefox for Desktop. We need to implement mach bootstrap for Firefox for Android.

For those who don’t know, mach bootstrap is a script that prepares the Firefox build environment, including fetching, updating, and installing the prerequisites needed for building Firefox. It automates the (often difficult!) task of fetching dependencies; ensures that known-good versions of dependencies are installed; and sets development environment defaults. mach bootstrap is the first thing that should be run in a fresh Firefox source tree [2].

Firefox for Android has more complicated dependencies than Firefox for Desktop, including some that cannot be easily distributed or installed: the Java development kit and runtime environment; the Android SDK and NDK; Google’s Play Services libraries; and so on. We can save new contributors a long dependency chase before they see a positive result. In addition, seasoned developers spend an unknown-but-large amount of time discovering that the required dependencies have advanced. Pinning the build to known-good versions, failing the build when those versions are not present, and providing mach bootstrap to update to known-good versions will reduce this frustration.
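
The pinning idea can be sketched as follows. The dependency names, version strings, and the way installed versions would be detected are all assumptions for illustration, not the actual mach bootstrap implementation:

```python
# Illustrative sketch of pinning build dependencies to known-good versions
# and failing fast when they drift. The dependency names and version
# strings below are made up for the example.

KNOWN_GOOD = {
    "android-sdk": "21",
    "android-ndk": "r10e",
}

def check_pins(installed):
    """Return a list of error strings for any dependency not at its pin."""
    errors = []
    for dep, pinned in KNOWN_GOOD.items():
        found = installed.get(dep)
        if found != pinned:
            errors.append(
                "%s: found %s, expected %s (run |mach bootstrap| to update)"
                % (dep, found, pinned)
            )
    return errors
```

The value is in the error message: instead of a mysterious build failure an hour in, the developer is told up front which dependency moved and how to fix it.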

A contributor started writing a shell script that does the work of mach bootstrap. Bug 1108771 tracks building upon this effort. I’ve also filed Bug 1108782 to track pinning the Firefox for Android build requirements to known-good versions.

Reducing the Firefox for Android edit-compile-test cycle time

Firefox for Android is an unusual Android application: a large C++ library backing a medium-sized Java front-end, all plumbed together with a JavaScript-based message passing system. Right now, building the C++ library takes roughly 12 minutes on my system. Building the Java front-end takes roughly 2 minutes, and the JavaScript parts are essentially free. In 2015, glandium has taken a first quarter goal to make it possible to build Firefox (for Desktop and for Android) without building that large C++ library at all [3]. In the future, purely front-end developers (XUL/JavaScript developers on Desktop; Java/JavaScript developers on Android) will download and cache the C++ build artifacts and build the application on top of the cached artifacts. Firefox for Android is really well-suited to this mode of operation because our dependencies are so well-defined. I’ve filed Bug 1093242 to track part of this work.

The previous work will make it faster to build Firefox for Android the first time, because we won’t build C++ libraries. We’re also going to invest in making each incremental build faster, and there’s some low-hanging fruit here. Right now, the most costly parts of our build are compiling individual JAR libraries and DEXing all of the resulting JAR libraries. Every time we split our JAR libraries, we can parallelize a small part of our build and reduce the wall-clock time of our Java compilation. Right now we could split our single third-party JAR library and save ourselves compile time. And we’re very close to being able to split the Background Services (Sync, Firefox Accounts, Firefox Health Report, etc) library out of Fennec proper, which will save even more compile time.

Improving our DEXing time is more difficult. Android’s DEX processor is a byte-code transformation step that turns Java byte-code into Dalvik VM byte-code. For historical reasons, we DEX the entirety of Fennec’s byte-code in one DEX invocation, and it is both a single-process bottleneck and terribly expensive. For some time, it has been possible to DEX each individual library in parallel and to merge the resulting DEX files. All modern Android build systems (such as Buck or Gradle) support this. We could support this in the Firefox for Android build system as well, but I think we should instead move to a more featured build system under the hood. Android build systems are very complicated; we don’t want to write our own, and we definitely don’t want to write our own in Make syntax. In 2015, we’ll push to use a full-featured build tool that brings this DEX-time improvement. More on this in a future post.
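
The fan-out/merge shape of per-library DEXing can be sketched like this. The dex and dex_merge functions here are stand-ins for the real dx and DEX-merge tools, not actual build-system code:

```python
# Toy model of per-library DEXing: fan out the expensive per-JAR
# transformation across workers, then merge the results serially.
from concurrent.futures import ThreadPoolExecutor

def dex(jar):
    # Stand-in for invoking Android's dx on a single JAR; in reality this
    # is the expensive byte-code transformation step.
    return "%s.dex" % jar

def dex_merge(dex_files):
    # Stand-in for the DEX merge step that combines per-library output.
    return sorted(dex_files)

def build_classes_dex(jars):
    # Fan out: DEX each library in parallel rather than all at once.
    with ThreadPoolExecutor() as pool:
        dex_files = list(pool.map(dex, jars))
    # Fan in: merge the per-library DEX files into the final classes.dex.
    return dex_merge(dex_files)
```

The point is structural: once each library is DEXed independently, the expensive step parallelizes, and only the cheaper merge remains serial.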

Making the Firefox for Android development process "standards compliant"

This point is more philosophical than the others. Firefox for Android wins when we engage our community. The community of Android developers is large and constantly pushing the boundaries of what’s possible on a device. We want to tap into that wellspring of talent and innovation, and everything we do that’s non-standard to an Android developer makes it harder for us to do this. Contributor @fedepaol wrote a blog post about how difficult this used to be.

The good news is, we’re getting better: we have rudimentary Gradle support and you can use IntelliJ now. But we still have a long, long way to go. We’ve got lots of easy wins just waiting for us: tickets like Bug 1107811 will go a long way towards making the Fennec "Android standards compliant" IntelliJ experience better. I have a technical plan to land in-source-tree IntelliJ configurations, so developers can open mobile/android directly in IntelliJ and get to a working Firefox for Android APK in the IDE in one step.

At a lower level, tickets like Bug 1074258 will let us use the IntelliJ design view more easily, and landing Android JUnit 3 Instrumentation test runners in automation (Bug 1064004) will make local testing significantly easier than the convoluted Robocop process we have right now. The list goes on and on.

Conclusion

The Firefox for Android team moved strongly towards easier builds and ubiquitous tooling in 2014. 2015 is going to be even better. We’re going to improve our technical experience in (at least!) three ways: making the first build easier; making the next builds faster; and unlocking the power of the standard Android developer toolchain.

Join us! Discussion is best conducted on the mobile-firefox-dev mailing list and I’m nalexander on irc.mozilla.org and @ncalexander on Twitter.

Notes

[1] I believe there is an over-arching third aspect, that of the system in which we do our work and interact with the community, but right-here-right-now I don’t feel empowered to change this. Systemic change requires making community engagement part of every team’s top-level goals, and achieving such goals requires resources that are allocated well above my pay-grade.

[2] In fact, the bootstrapper does not even require a source check-out — you can download just the script and it will fetch enough to bootstrap itself. So it’s more accurate to say just bootstrap rather than mach bootstrap, but mach bootstrap has won the vocabulary battle in this arena.

[3] glandium has written a very informative blog post about the future of the Firefox build system at http://glandium.org/blog/?p=3318. The section relevant to this discussion is Specialized incremental builds.
Categories: Mozilla-nl planet

Paul Rouget: Firefox.html

Mozilla planet - Tue, 09/12/2014 - 01:00

I just posted on the firefox-dev mailing list about Firefox.html, an experimental re-implementation of the Firefox UI in HTML. If you have comments, please post on the mailing list.

Code, builds, screenshots: https://github.com/paulrouget/firefox.html.

Categories: Mozilla-nl planet

Lukas Blakk: Ascend New Orleans: We need a space!

Mozilla planet - Mon, 08/12/2014 - 23:48

I’m trying to bring the second pilot of the Ascend Project http://ascendproject.org to New Orleans in February and am looking for a space to hold the program. We have a small budget to rent space but would prefer to find a partnership and/or sponsor if possible to help keep costs low.

The program takes 20 adults who are typically marginalized in technology/open source and offers them a 6 week accelerated learning environment where they build technical skills by contributing to open source – specifically, Mozilla. Ascend provides the laptops, breakfast, lunch, transit & childcare reimbursement, and a daily stipend in order to lift many of the barriers to participation.

Our first pilot wrapped up 6 weeks ago in Portland, OR, and it was a great success, with 18 participants completing the 6-week course and fixing many bugs across a wide range of Mozilla projects. They have now continued on to internships both inside and outside of Mozilla, as well as seeking job opportunities in the tech industry.

To do this again, in New Orleans, Ascend needs a space to hold the classes!

Space requirements are simple:

* Room for 25 people to comfortably work on laptops
* Strong & reliable internet connectivity
* Ability to bring in our own food & beverages

Bonus if the space helps network participants with other tech workers, has projectors/whiteboards (though we can bring our own), or has video capability.

Please let me know if you have a connection who can help with booking a space for this project. If you have any other leads I can look into, I’d love to hear about them.

Categories: Mozilla-nl planet

Armen Zambrano: Test mozharness changes on Try

Mozilla planet - Mon, 08/12/2014 - 19:59
You can now push to your own mozharness repository (even a specific branch) and have it be tested on Try.

A few weeks ago we developed mozharness pinning (aka mozharness.json), and recently we enabled it for Try. Read the blog post to learn how to make use of it.

NOTE: This currently only works for desktop, mobile and b2g test jobs. More to come.
NOTE: We only support named branches, tags or specific revisions. Do not use bookmarks, as they don't work.

Creative Commons License
This work by Zambrano Gasparnian, Armen is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License.
Categories: Mozilla-nl planet

Sean Martell: Thank You, Mozlandia

Mozilla planet - Mon, 08/12/2014 - 19:33

Well, that was a week.

Sitting here on the Monday after, coffee in hand and reading all of the fresh new posts detailing our recent Coincidental Work Week, I’ve decided to share a few quick thoughts while they’re still fresh in my mind.

For me, last week was a particularly emotionally overwhelming one. There was high energy around once again gathering as a whole, sadness around friends/family moving on, fear in what’s next, excitement in what’s next, and a fine juggling act of trying to manage all those feels as they kicked in all at once.

The work week itself (the actual work part) was just amazing and I’m pretty sure it was the most productive travel week I’ve ever had in any job setting. Things were laid out, solutions discussed, alliances forged. Good stuff.

Then Friday hit.

So did all the emotions. All the feels. All of them.

The night started with me traveling through the swarms of Mozillians getting folks to sign a farewell card for Johnny Slates, my partner in crime for the majority of my Mozilla experience. A tough start. Tears were shed, but really they were thank you tears, in thanks for an awesome time shared at Mozilla.

Later, as I stood in a sea of Mozillians, dancing and cheering with smiles all around, I was once again standing in tears. I was watching Mozillians letting loose. I was watching Mozillians get pumped for the future of the Internetz and our role in it. Even though I was listening to lyrics on topics that have brought Mozilla together and torn us apart all at the same time, we were dancing together and having fun.

I felt like I was watching my work family heal.

It was a very, very happy cry.

Thank you to my past, current and future Mozilla family members. To me, there is no old or new guard, just an ever evolving extended family.

<3

Categories: Mozilla-nl planet

Mozilla Joins Hour of Code

Mozilla Blog - Mon, 08/12/2014 - 18:26
For the second year in a row, Mozilla is a partner in the Hour of Code, and we hope you’ll join us. This campaign launched in 2013 to align with Computer Science Education Week, and to demystify code and show … Continue reading
Categories: Mozilla-nl planet

Dave Herman: Why Mozlandia Was So Effective

Mozilla planet - Mon, 08/12/2014 - 16:35

When Chris Beard first announced that over a thousand Mozilla staff and contributors would be descending on Portland this month for an all-hands work week, I worried about two things. I knew a couple of the groups in my department would be approaching deadlines. And I was afraid that so many groups of people in one place would be chaotic and hard to coordinate. I wasn’t even wrong – but it didn’t matter.

The level of focus and effectiveness last week was remarkable. For Mozilla Research’s part, we coordinated with multiple groups, planned 2015 projects, worked through controversial technical decisions, removed obstacles, brought new contributors on board, and even managed to get a bunch of project work done all at the same time.

There were a few things that made last week a success:

Articulating the vision: Leaders have to continually retell their people’s story. This isn’t just about morale, although that’s important. It’s about onboarding new folks, reminding old-timers of the big picture, getting people to re-evaluate their projects against the vision, and providing a group with the vocabulary to help them articulate it themselves.

While Portland was primarily a work week, it’s always a good thing for leadership to grab the opportunity to articulate the vision. This is something that Mitchell Baker has always been especially good at doing, particularly in connecting our work back to Mozilla’s mission; but Chris and others also did a good job of framing our work around building amazing products.

Loosely structured proximity: The majority of the work days were spent without excessive organization, leaving broad groups of people in close proximity but with the freedom to seek out the specific contact they needed. Managers were able to set aside quieter space for the groups of people that needed to get more heads down work done, but large groups (for example, most of Platform) were close enough together that you could find people for impromptu conversations, whether on purpose or – just as important! – by accident.

Cross-team coordination: Remote teams are the life blood of Mozilla. We have a lot of techniques for making remote teams effective. But it can be harder to coordinate across teams, because they don’t have the same pre-existing relationships, or as many opportunities for face-to-face interaction. Last week, Mozilla Research got a bunch of opportunities to build new relationships with other teams and have higher-bandwidth conversations about tricky coordination topics.

I hope we do this kind of event again. There’s nontrivial overhead, and a proper cadence to these things, but every once in a while, getting everyone together pays off.

Categories: Mozilla-nl planet

Priyanka Nag: Portland coincidental work-week

Mozilla planet - Mon, 08/12/2014 - 16:19
I will leave my travel adventures out of this blog post because they are sufficiently interesting to deserve a separate, dedicated post. So I will jump directly to my experience of this coincidental work week in Portland.

On the first day, when I walked into the Portland Art Museum in the morning, I was overwhelmed to see so many known faces and to be able to match a few new faces to the IRC nicks (or Twitter handles) of people whom I was meeting for the first time outside of the virtual world.


What's your slingshot?
During this one week, I heard a lot of amazing people speak, from David Slater to Chris Beard, from Mark Surman to Mitchell Baker... too much awesomeness on the stage! The guest speaker on the first day was Brian Muirhead from NASA, who made us realize that even though we are not NASA engineers, and our work is limited to Earth's atmosphere, sometimes the criticality of projects, or the way of handling them, doesn't need to differ much. The second day's guest speaker, Michael Lopp (@rands), was a person I had been following on Twitter without ever knowing his real name or how he looked until the morning of the 3rd of December. His talk about the old guard vs. the new guard was not only something I could relate to, but also had a few very interesting points we could all learn from.

After the opening addresses on both days, I found a comfortable spot with the MDN folks. I knew that under all possible circumstances, these would be the people I would mostly hang around with for the rest of the week. MDN is undoubtedly my favorite among all the projects and contribution pathways I have tried (and still try) contributing to.

We do know how to mark our territory!
Just like most Mozilla work weeks, this week had a lot of sticky notes all around, so many etherpads created (and a master etherpad to link them all), and a lot of wiki pages! When you know that you are going to be haunted by sticky notes for at least the next week, you can be sure that you had a great work week and a lot of planning. Planning around the different contribution metrics for MDN, contribution recognition, MDN events for 2015, growing the community, as well as a few technical changes and a few responsibilities which I have committed to and will try to complete before the calendar changes its reading to 2015... it was a crazy, crazy fun week. One important initiative that I am not only interested in seeing executed, but also willing to jump into in any possible manner, is the linking of Webmaker and MDN. To me, it's like my two superheroes planning to work together to save the world!

I didn't spend much time with the community building team this week, other than the last day, when I could finally join the group. First and foremost, Allen Gunner is undoubtedly one of the best facilitators I have seen in my life; for half of the session, my focus was on his skills and how I could learn a few of them. I am happy to have been able to join the community building team on their concluding day, as I got a summary of the week's discussions, could help with the concluding plans, and could make a few new commitments to some interesting things being planned for 2015.

Well, I am not sure I have done a good job of thanking Dietrich personally for inviting me to, and hosting me at, his place for the fun-filled get-together, but I sincerely confess that I had way more fun at his party than I had expected to. I met so many new people there, mostly amazing engineers who are building the new mobile operating system which I not only extensively use but also brag about to my friends, family and colleagues.

A few wow moments -

[1] Seeing @rage outside the Twitter world, live in front of me!

[2] Mitchell's talk on how Mozilla acknowledges the tensions created by the last few decisions that went out, and her explanation of why and how they were made and why they were important.

[3] Macklemore & Ryan Lewis' live performance at the mega party.

[4] My first ever experience of trying to 'dance' with other Mozillians. Yes, I had successfully avoided them during the Summit, MozFest and all other previous events in the last 2 years.

[5] The proudest moment for me was probably the meeting of the MDN and Webmaker teams. While neither team knew every member of the other, I was probably the one person who knew everyone in that circle. Having worked very closely with both teams, it was my cloud-nine (++) moment of the work week to be sitting with all my rock stars together!

A lot of people met, a lot of planning done, a lot of things learnt and, most importantly, a lot of commitments made which I am looking forward to executing in 2015.
Categories: Mozilla-nl planet

Christian Heilmann: The next UX challenge on the web: gaining offline trust

Mozilla planet - Mon, 08/12/2014 - 15:47

you are offline - and that's bad.

A few weeks ago, I released http://removephotodata.com as a tool. It is a simple web app (well, a page) that allows you to remove the EXIF data of an image before sharing it online. I created it as a companion to my “Put social back in social media” talk at TEDx Linz. During this talk I pointed out the excellent exiftool, a command-line tool to remove extra information embedded in images that people might not want to share. But exiftool is too hard to use for most users, so I thought this page would be a good solution.

It had some success and people – including the press in Spain – talked about it. Without fail though, every thread of comments or Twitter conversation will have one person pointing out the “seemingly obvious”:

So you create a tool to remove personal data from images and to do that I need to send the photo to your server! FAIL! LOLZ0RZ (and similar)

Which is not true at all. The only server interaction needed is the first load of the page. All the JavaScript analysis and removal of EXIF data happens on your computer. I even added an appcache to ensure that the tool itself works offline. In essence, everything happens on your computer or smartphone. This makes a lot of sense – it would be nonsense to use a service on some machine to remove personal data for you.
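
To make the "everything happens on your device" point concrete, here is a rough Python sketch of the same segment-walking idea the page implements in client-side JavaScript. This is a simplification for illustration, not the tool's actual code; real JPEGs need more marker handling and error checking:

```python
# Sketch of client-side EXIF removal: walk the JPEG segment structure
# and drop APP1 (EXIF) segments before the image is shared. No server
# is involved - the operation is pure byte manipulation.

SOI, APP1, SOS = 0xD8, 0xE1, 0xDA  # start of image, EXIF segment, start of scan

def strip_exif(jpeg):
    assert jpeg[0] == 0xFF and jpeg[1] == SOI, "not a JPEG"
    out = bytearray(jpeg[:2])
    i = 2
    while i < len(jpeg):
        assert jpeg[i] == 0xFF
        marker = jpeg[i + 1]
        if marker == SOS:
            out += jpeg[i:]  # entropy-coded image data: copy the rest verbatim
            break
        length = (jpeg[i + 2] << 8) | jpeg[i + 3]  # big-endian, includes itself
        if marker != APP1:  # keep every segment except EXIF (APP1)
            out += jpeg[i:i + 2 + length]
        i += 2 + length
    return bytes(out)
```

The browser version does the same walk over an ArrayBuffer, which is why no upload is ever needed.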

I did explain this in the page:

Your photo does not get uploaded anywhere, all of this happens on your device, in your browser. It even works offline.

Nobody seems to read that, though, and it is quicker to complain about a seemingly nonsensical security tool.

The web needs a connection, apps do not?

This is not the user’s fault; it is conditioning. We have so far done a bad job of advocating the need for offline functionality. The web is an online medium. It’s understandable that people don’t expect a browser to work without an internet connection.

Apps, on the other hand, are expected to work offline. This, of course, is nonsense. The sad state of affairs is that most apps do not work offline. Look around on a train when people are not connected. You see almost everyone on their phone either listening to local music, reading books or playing games. Games are the only things that work offline. All other apps are just sitting there until you connect. You can’t even write your posts as drafts in most of them – something any email client was able to do a long time ago.

The web is unsafe, apps are secure?

People also seem to trust native apps more as they are on your device. You have to go through an install and uninstall process to get them. You see them downloading and installing. Web apps arrive by magic, which is less reassuring.

This is security by obscurity and thus to me more dangerous. Of course it is good to know when something gets to your computer. But an install process gives the app more rights to do things, it doesn’t necessarily mean that software is more secure.

Native apps don’t give us more security or insight into what is going on – on the contrary. A packaged format with no indicator when the app is sending or receiving data from the web allows me to hide a lot more nasties than a web site could. It is pretty simple with developer tools in a browser to see what is going on:

Network Tab in Firefox

On my mobile, I have to hope that the Android game doesn’t call home in the background. And I should read the terms and conditions and understand the access the game has to my device. But, no, I didn’t read that and just skimmed through the access rights and ticked “yes” as I wanted to play that game.

There is no doubt that JavaScript in browsers has massive security issues. But it isn’t worse or better than any of the other newer languages. When Richard Stallman demonised JavaScript as a trap because you run code that might not be open on your computer, he was right. He was also naive in thinking that people cared about that. We live in a world where we give away privacy and security for convenience. That’s the issue we need to address – not whether you could read all the code that is on your device. Only a small number of people in this world can make sense of that anyway.

Geek mode on: offline web work in the making

There is great work in the making towards an offline web. Google’s and Mozilla’s ServiceWorker implementations are going places. The latest changes in Chrome give the browser on the device much more power to store things offline. IndexedDB, WebSQL and other local storage solutions are available across browsers. Web Cryptography is coming. Tim Taubert gave an interesting talk about this at JSConf called “Keeping secrets with JavaScript: An Introduction to the WebCrypto API“.

The problem is that we also need to create a craving in our users to have that kind of functionality. And that’s where we don’t do well.

Offline first needs UX love

There is no indicator in the browser that something works offline. We need to tell the user in our copy or with non-standardised icons. That’s not good. We assume a lot from our users when we do that.

When we started offering offline functionality with appcache, we did an even worse job. We warned users that the site was trying to store information on their device. In essence, we conditioned our users not to trust things that come from the web – even if they had requested that data.
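
For reference, the appcache mechanism mentioned above is driven by a plain-text manifest along these lines (the file names here are illustrative):

```
CACHE MANIFEST
# v1 - change this comment to force clients to re-download

CACHE:
index.html
style.css
app.js

NETWORK:
*
```

The browser fetches everything in the CACHE section once and then serves it locally – which is exactly what triggered the scary storage warnings in some browsers.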

Offline functionality is a must. The wonderful world of constant, free and fast connectivity only exists in movies and advertisements for mobiles and smart devices. This is not going to happen any time soon as physics is not likely to change and replacing a lot of copper cable in the ground is quite a job.

We also need to advocate better that users have a right to use their devices offline. Mobile phones are multi-processor machines with a lot of RAM and storage. Why not use that? Why should I have to store my information in the cloud for everything I do? Can I trust the cloud? What is the cloud? To me, it is “someone else’s computer”, and they have the right to analyse my data, read it and even cut me off from it once their first few rounds of funding money run out. I own my phone – why can’t I do more with it when I am offline? Why can’t I sync data with a USB cable?

Of course, all of this is about convenience. It is easier to have my data synced across devices with a cloud service. That way I never lose anything – if the cloud provider is OK with me getting to my data.

Our devices are powerful machines and we should be able to create, encrypt and store information without someone online snooping on us while we do it. For this to happen, we need users who are aware of these options and see them as a value-add. It is not an easy job – the marketing around the simplicity of closed systems with their own cloud services is excellent. But so are we, aren’t we?

Categories: Mozilla-nl planet

Doug Belshaw: Feedback on the Web Literacy Map from the LRA conference

Mozilla planet - Mon, 08/12/2014 - 14:31

Last week, leaving midway through the Mozilla coincidental workweek, I headed to Florida for the Literacy Research Association conference. Mozilla was invited by contributor Ian O'Byrne to lead a session on Web Literacy Map v1.1 and our plans for v2.0.


You can find what I talked about in this post: Toward The Development of a Web Literacy Map: Exploring, Building, and Connecting Online.

We received some great feedback from the following discussants:

It was difficult to capture it all, so I’m just going to list my takeaways. Special thanks to Amy who sent me her notes!

What’s the theory of learning driving the Web Literacy Map?

We talk about Mozilla’s approach to learning in the Webmaker whitepaper, but this isn’t tied closely enough to the Web Literacy Map as it currently stands.

We can do a better job around recontextualisation

According to Wikipedia:

Recontextualisation is a process that extracts text, signs or meaning from its original context (decontextualisation) in order to introduce it into another context. Since the meaning of texts and signs depend on their context, recontextualisation implies a change of meaning, and often of the communicative purpose too.

That’s a pretty academic way to say that we can do a better job of explaining how memes and other (what Henry Jenkins would call) spreadable media work.

We’re doing well at practising what we’re preaching

Discussants liked the openness and transparency of the Web Literacy Map work, leading to multiple diverse perspectives and voices. They appreciated the way it was fed back for anyone to be able to read and then jump in on. They also liked the apprenticeship model, with ‘mentoring’ explicitly called out through things like Webmaker Mentors.

'Reading, Writing and Participating’ is a problematic approach

This approach, along with the ‘grid’ approach of the Web Literacy Map’s competency layer, is outdated from a new literacies point of view. Talking of ‘web literacy’ in a singular way is also reflective of a traditional understanding of the field and invokes a ‘deficit’ model. As Mozilla has influence, we should think carefully about what we’re amplifying and what we’re foregrounding. This has implications for how people are measured, ranked and sorted.

How and where does criticality emerge in the Web Literacy Map?

There should be a recognition in the map that, as we make the web, the web makes us. The Web Literacy Map implies a neutral process of acquiring skills that lead unproblematically to particular outcomes. Instead, we should move to a practice-oriented approach. Using this approach, practices are situated in activities in relation to other people and things.

Where does the notion of ‘critical internet literacy’ fit into the Web Literacy Map?

It’s not good enough just to say that learning pathways are not linear

There’s a particular logic in the Web Literacy Map as it currently stands – in the way that it’s represented and is framed – that there is a ‘correct’ way to become an expert. This constrains the way people in fact learn and make sense of the web (despite what we say by way of contextualisation).

It may be called a ‘map’ but it looks like a standard

The three columns imply a linearity and separation for progressing in particular skills. Where are the relationships and connections between skills and competencies? There are plans to focus on cross-cutting themes, but how can we go further? Reading, writing and participating are not separate activities in practice, so it makes little sense for them to be separate on the Web Literacy Map.

Having each individually articulated is helpful for teaching and learning, but as a whole it militates against fluency. A ‘competency grid’ fits an older, outdated model of learning that doesn’t recognise multiple pathways to learning and participating.

Why can’t we have multiple views of the Web Literacy Map?

Having just one representation of the skills and competencies of the Web Literacy Map limits creativity and leads to a ‘recipes’-based approach. A ‘stories’-based approach might be better, perhaps using a Universal Design for Learning approach. This would lead to greater learner agency and freedom. We should design for difference.

Using the web is an aesthetic expression

I didn’t make good enough notes here, but a couple of discussants talked about how using the web is an aesthetic expression. As a result we should do a better job of expressing this in the Web Literacy Map.

Producing/consuming as an alternative frame

Web 1.0 was monologic, whereas Web 2.0 onwards is dialogic:

The dialogic work carries on a continual dialogue with other works of literature and other authors. It does not merely answer, correct, silence, or extend a previous work, but informs and is continually informed by the previous work. Dialogic literature is in communication with multiple works. This is not merely a matter of influence, for the dialogue extends in both directions, and the previous work of literature is as altered by the dialogue as the present one is. (Wikipedia)

We should connect to other groups doing similar work

There are people like Media Smarts (“Canada’s Centre for Digital and Media Literacy”) doing similar work here. To what extent are they aware and involved with our work?

Where does user/consumer protection fit into the Web Literacy Map?

We include production and navigation in the map, but to what extent are we educating people about organisations that want to use their data for their own purposes?

Conclusion

I really enjoyed the session and, later, it made me think about some research I’d re-read recently about the importance of building up in the learner a ‘three-dimensional’ model of the focus area:

“Cognitive flexibility theory (Jacobson & Spiro, 1995; Spiro & Jehng, 1990), for instance, suggests that learning about a complex, ill-structured domain requires numerous carefully designed traversals (i.e., paths) across the terrain that defines that domain, and that different traversals yield different insights and understandings. Flexibility is thought to arise from the appreciation learners acquire for variability within the domain and their capacity to use this understanding to reconceptualize knowledge.” (McEneaney, J. E. (2000). Learning on the Web: A Content Literacy Perspective.)

We’ve got lots to think about on the upcoming community calls. As well as thanking the discussants, I’d like to thank Ian O'Byrne and Greg McVerry for making me feel so welcome. They introduced me to lots of fascinating and inspiring people with whom I look forward to following-up. :-)

Questions? Comments? Email me: doug@mozillafoundation.org or add your thoughts to this thread on the #TeachTheWeb discussion forum.

Categorieën: Mozilla-nl planet

Patrick Finch: Thanking all bus drivers on behalf of Mozilla

Mozilla planet - mo, 08/12/2014 - 14:11

I’m just back from Mozlandia, our informal all hands coincidental work week in Portland, Oregon.  In terms of what I got out of the event, I think this may be the best of its kind that I have attended.

On Friday evening, a curious thing happened.  I was sitting with Pierros and Dietrich in the salubrious habitat of the Hilton hotel lobby.   We were accosted by a man who seemed in something of a rush, and who, by his appearance (specifically, the uniform he was wearing), was a bus driver.  He asked if we were from Mozilla.  We confirmed that we were.  He then thanked us for the work we were doing for net neutrality.

I often read people describing themselves as “proud and humbled” in our industry.  I have to confess, I have every bit as hard a time getting my head around how pride can be humble as Yngwie Malmsteen does with the idea that “less is more”.  I can, however, relate to the idea of taking pride in being humble.  And this was such a moment for me.

There are many people who know much, much more about net neutrality and its implications for the future of the Internet than I do.  But still, I expect myself to know more about the topic than our friendly, Oregonian bus driver.  And so I find myself asking the question, “What is he expecting from net neutrality?”.  I believe that his expectations will amount to the Internet progressing much as it has to date.  He probably expects there to be no overall controller, no balkanisation of access and content, and maybe he is even optimistic enough to hope that the internet will not give rise to the acceptance of widespread surveillance.

But I am guessing.  Guessing because I didn’t have time to ask him, (he pretty much flew through that lobby), and also because I am not entirely sure myself of what we can expect from net neutrality.  What troubles me is that “net neutrality” might be a placeholder for some, meaning not just net neutrality, but also a lot of the other aspects of a more – how do we put it – equitable internet.  At the moment, net neutrality is both disrupting the old order, but also giving rise to new empires, vaster and more powerful than those they are replacing.  Now, all empires fall: the ancient Romans, the British, Bell Telecommunications, even the seemingly invincible Liverpool FC of the 1970s and 1980s.  All empires fall.  The question is, “what will be their legacy?”.   It isn’t an easy question to answer, and the parallels between the carve-up of the unindustrialised world and the formation and destruction of nation states in the preceding centuries are a gloomy place to look for metaphors, some of which (balkanisation) have already entered our everyday language.

What I do know is that the bus driver expects Mozilla to do the right thing.  We have his trust.

I believe all of the paid staff at Mozilla are aware of our good fortune to be able to work with ideas and inventions that can shape the future of the internet in ways that we identify with, in ways that we want to believe in.   And as Ogden Nash put it, “People who work sitting down get paid more than people who work standing up.”  Working full-time in the tech industry has its hardships, but I can think of tougher places.  We have much to be grateful for.

And so, when a bus driver thanks us for our service, I feel compelled to offer my gratitude in return.  We trust bus drivers to know where we want to go, and to get us there safely.  What do they trust us to do?


Categorieën: Mozilla-nl planet

Wil Clouser: Goodwill Updates - A Firefox OS Feature Idea

Mozilla planet - mo, 08/12/2014 - 09:00

A common aspect amongst the regions Firefox OS targets is a lack of dependable bandwidth. Mobile data (if available) can be slow and expensive, wi-fi connections are rare, and in-home internet completely absent. With the lack of regular or affordable connectivity, it’s easy for people to ignore device and app updates and instead opt to focus on downloading their content.

In the current model, Firefox OS pings daily for system and app updates and downloads them when available. Once an update has been installed, the download is deleted from the device storage.

What if there were an alternative way to handle these updates? Rather than being deleted, the downloads would be saved on the device. Instead of each Firefox OS device being required to download updates itself, the updates could be shared with other Firefox OS devices. This Goodwill Update would make it easier for people to get new features and important security fixes without having to rely on internet connectivity.

a concept drawing

Goodwill Update could either run in the background (assuming there is disk space and battery life) or be more user-facing, presenting people with notifications about available updates or even showing how much money they have saved by avoiding bandwidth charges. Perhaps it could even offer to buy Bob a beer!

Would this be worth doing to help emerging markets stay up to date?

PS. Hat tip to Katie and Tiffanie for the image and idea help.

Categorieën: Mozilla-nl planet

Mozilla Fundraising: Bitcoin Donations to Mozilla: 17 Days In

Mozilla planet - mo, 08/12/2014 - 08:03
Just over two weeks ago Mozilla began accepting bitcoin donations. In the first three days our bitcoin donation form was live, we raised $1,600 USD in bitcoin, and to date we’ve raised about $5,000 USD. Here is the trendline: We … Continue reading
Categorieën: Mozilla-nl planet

Ben Kelly: Implementing the Service Worker Cache API in Gecko

Mozilla planet - mo, 08/12/2014 - 06:43

For the last few months I’ve been heads down, implementing the Service Worker Cache API in gecko. All the work to this point has been done on a project branch, but the code is finally reaching a point where it can land in mozilla-central. Before this can happen, of course, it needs to be peer reviewed. Unfortunately this patch is going to be large and complex. To ease the pain for the reviewer I thought it would be helpful to provide a high-level description of how things are put together.

If you are unfamiliar with Service Workers and its Cache API, I highly recommend reading the following excellent sources:

Building Blocks

The Cache API is implemented in C++ based on the following Gecko primitives:

  • WebIDL DOM Binding

    All new DOM objects in gecko now use our new WebIDL bindings.

  • PBackground IPC

    PBackground is an IPC facility that connects a child actor to a parent actor. The parent actor is always in the parent process. PBackground, however, allows the child actor to exist in either a remote child content process or within the same parent process. This allows us to build services that support both electrolysis (e10s) and our more traditional single process model.

    Another advantage of PBackground is that the IPC calls are handled by a worker thread rather than the parent process main thread. This helps avoid stalls due to other main thread work.

  • Quota Manager

    Quota Manager is responsible for managing the disk space used by web content. It determines when quota limits have been reached and will automatically delete old data when necessary.

  • SQLite

    mozStorage is an API that provides access to an SQLite database.

  • File System

    Finally, the Cache uses raw files in the file system.

Alternatives

We did consider a couple alternatives to implementing a new storage engine for Cache. Mainly, we thought about using the existing HTTP cache or building on top of IndexedDB. For various reasons, however, we chose to build something new using these primitives instead. Ultimately it came down to the Cache spec not quite lining up with these solutions.

For example, the HTTP cache has an optimization where it only stores a single response for a given URL. In contrast, the Cache API spec requires that multiple Responses can be stored per-URL based on VARY headers, multiple Cache objects, etc. In addition, the HTTP cache doesn’t use the quota management system and Cache must use the quota system.
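To make the VARY requirement concrete, here is a minimal sketch (not the Gecko code; all names are illustrative) of why a single URL can map to multiple stored responses: an entry only matches a new request if every header named in the stored response's VARY header has the same value the original request had.

```javascript
// A stored entry matches only if all VARY'd headers agree with the
// values from the request that was originally stored.
function varyHeadersMatch(storedVary, storedRequestHeaders, newRequestHeaders) {
  if (!storedVary) {
    return true; // no VARY header: matching the URL is enough
  }
  return storedVary.split(',')
    .map(function(name) { return name.trim().toLowerCase(); })
    .every(function(name) {
      return storedRequestHeaders[name] === newRequestHeaders[name];
    });
}

// Two entries stored under the same URL, varying on Accept-Language:
var entries = [
  { vary: 'Accept-Language',
    requestHeaders: { 'accept-language': 'en' }, body: 'Hello' },
  { vary: 'Accept-Language',
    requestHeaders: { 'accept-language': 'nl' }, body: 'Hallo' }
];

function matchForUrl(newRequestHeaders) {
  var hit = entries.filter(function(e) {
    return varyHeadersMatch(e.vary, e.requestHeaders, newRequestHeaders);
  })[0];
  return hit && hit.body;
}
```

An HTTP cache keeping one response per URL would have to evict 'Hello' when 'Hallo' is stored; the Cache API spec requires both to survive.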

IndexedDB, on the other hand, is based on structured cloning which doesn’t currently support streaming data. Given that Responses could be quite large and come in from the network slowly, we thought streaming was a priority to reduce the amount of required memory.

Also, while not a technical issue, IndexedDB was undergoing a significant rewrite at the time the Cache work began. We felt that this would delay the Cache implementation.

10,000-Foot View

With those primitives in mind, the overall structure of the Cache implementation looks like this:

Here we see from left-to-right:

  • JS Script

    Web content running in a JavaScript context on the far left. This could be in a Service Worker, a normal Web Worker, or on the main thread.

  • DOM Object

    The script calls into the C++ DOM object using the WebIDL bindings. This layer does some argument validation and conversion, but is mostly just a pass through to the other layers. Since most of the Cache API is asynchronous the DOM object also returns a Promise. A unique RequestId is passed through to the Cache backend and is later used to find the Promise on completion.

  • Child and Parent IPC Actors

    The connection between the processes is represented by a child and a parent actor. These have a one-to-one correlation. In the Cache API request messages are sent from the child-to-parent and response messages are sent back from the parent-to-child. All of these messages are asynchronous and non-blocking.

  • Manager

    This is where things start to get a bit more interesting. The Cache spec requires each origin to get its own, unique CacheStorage instance. This is accomplished by creating a separate per-origin Manager object. These Manager objects can come and go as DOM objects are used and then garbage collected, but there is only ever one Manager for each origin.

  • Context

    When a Manager has a disk operation to perform it first needs to take a number of stateful steps to configure the QuotaManager properly. All of this logic is wrapped up in what is called the Context. I’ll go into more detail on this later, but suffice it to say that the Context handles setting up the QuotaManager and then scheduling Actions to occur at the right time.

  • Action

    An Action is essentially a command object that performs a set of IO operations within a Context and then asynchronously calls back to the Manager when they are complete. There are many different Action objects, but in general you can think of each Cache method, like match() or put(), having its own Action.

  • File System

    Finally, the Action objects access the file system through the SQLite database, file streams, or the nsIFile interface.

Closer Look

Let’s take a closer look at some of the more interesting parts of the system. Most of the action takes place in the Manager and Context, so let’s start there.

Manager

As I mentioned above, the Cache spec indicates each origin should have its own isolated caches object. This maps to a single Manager instance for all CacheStorage and Cache objects for scripts running in the same origin:

It’s important that all operations for a single origin are routed through the same Manager because operations in different script contexts can interact with one another.
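The per-origin mapping amounts to a lazily populated table, sketched below with hypothetical names (the real Manager is C++ and tracks much more state):

```javascript
// origin string -> shared Manager instance
var managers = Object.create(null);

function Manager(origin) {
  this.origin = origin;
}

// The first caller for an origin creates its Manager; every later
// caller for the same origin gets the same instance back.
function getOrCreateManager(origin) {
  if (!managers[origin]) {
    managers[origin] = new Manager(origin);
  }
  return managers[origin];
}
```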

For example, let’s consider the following CacheStorage method calls being executed by scripts running in two separate child processes.

  1. Process 1 calls caches.open('foo').
  2. Process 1’s promise resolves with a Cache object.
  3. Process 2 calls caches.delete('foo').

At this point process 1 has a Cache object that has been removed from the caches CacheStorage index. Any additional calls to caches.open('foo') will create a new Cache object.

But how should the Cache returned to Process 1 behave? It’s a bit poorly defined in the spec, but the current interpretation is that it should behave normally. The script in process 1 should continue to be able to access data in the Cache using match(). In addition, it should be able to store a value using put(), although this is somewhat pointless if the Cache is not in caches anymore. In the future, a caches.put() call may be added to let a Cache object be re-inserted into the CacheStorage.

In any case, the key here is that the caches.delete() call in process 2 must understand that a Cache object is in use. It cannot simply delete all the data for the Cache. Instead we must reference count all uses of the Cache and only remove the data when they are all released.

The Manager is the central place where all of this reference tracking is implemented and these races are resolved.
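The reference tracking can be sketched roughly like this (a simplified model, not the Gecko implementation): a deleted Cache disappears from the index immediately, but its data is only reaped once every outstanding handle has been released.

```javascript
function Manager() {
  this.caches = {}; // name -> { refs, deleted, data }
}

Manager.prototype.open = function(name) {
  var c = this.caches[name];
  if (!c || c.deleted) {
    c = this.caches[name] = { refs: 0, deleted: false, data: {} };
  }
  c.refs++; // each DOM handle holds a reference
  return c;
};

Manager.prototype.delete = function(name) {
  var c = this.caches[name];
  if (!c || c.deleted) return false;
  c.deleted = true;        // removed from the index...
  this.maybeReap(name, c); // ...but data survives while referenced
  return true;
};

Manager.prototype.release = function(name, c) {
  c.refs--;
  this.maybeReap(name, c);
};

Manager.prototype.maybeReap = function(name, c) {
  if (c.deleted && c.refs === 0) {
    delete this.caches[name]; // last reference gone: reap the data
  }
};
```

In the race described above, process 2's delete() marks the Cache deleted, but process 1's handle keeps match() and put() working until it is garbage collected and released.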

A similar issue can happen with cache.match(req) and cache.delete(req). If the matched Response is still referenced, then the body data file needs to remain available for reading. Again, the Manager handles this by tracking outstanding references to open body files. This is actually implemented by using an additional actor called a StreamControl which will be shown in the cache.match() trace below.

Context

There are a number of stateful rules that must be followed in order to use the QuotaManager. The Context is designed to implement these rules in a way that hides the complexity from the rest of the Cache as much as possible.

Roughly the rules are:

  1. First, we must extract various information from the nsIPrincipal by calling QuotaManager::GetInfoFromPrincipal() on the main thread.
  2. Next, the Cache must call QuotaManager::WaitForOpenAllowed() on the main thread. A callback is provided so that we can be notified when the open is permitted. This callback occurs on the main thread.
  3. Once we receive the callback we must next call QuotaManager::EnsureOriginIsInitialized() on the QuotaManager IO thread. This returns a pointer to the origin-specific directory in which we should store all our files.
  4. The Cache code is now free to interact with the file system in the directory retrieved in the last step. These file IO operations can take place on any thread. There are some small caveats about using QuotaManager specific APIs for SQLite and file streams, but for the most part these simply require providing information from the GetInfoFromPrincipal() call.
  5. Once all file operations are complete we must call QuotaManager::AllowNextSynchronizedOp() on the main thread. All file streams and SQLite database connections must be closed before making this call.

The Context object functions like a reference counted RAII-style object. It automatically executes steps 1 to 3 when constructed. When the Context object’s reference count drops to zero, its destructor runs and it schedules the AllowNextSynchronizedOp() to run on the main thread.

Note, while it appears the GetInfoFromPrincipal() call in step 1 could be performed once and cached, we actually can’t do that. Part of extracting the information is querying the current permissions for the principal. It’s possible these can change over time.

In theory, we could perform the EnsureOriginIsInitialized() call in step 3 only once if we also implemented the nsIOfflineStorage interface. This interface would allow the QuotaManager to tell us to shutdown when the origin directory needs to be deleted.

Currently the Cache does not do this, however, because the nsIOfflineStorage interface is expected to change significantly in the near future. Instead, Cache simply calls the EnsureOriginIsInitialized() method each time to re-create the directory if necessary. Once the API stabilizes the Cache will be updated to receive all such notifications from QuotaManager.

An additional consequence of not getting the nsIOfflineStorage callbacks is that the Cache must proactively call QuotaManager::AllowNextSynchronizedOp() so that the next QuotaManager client for the origin can do work.

Given the RAII-style life cycle, this is easily achieved by simply having the Action objects hold a reference to the Context until they complete. The Manager has a raw pointer to the Context that is cleared when it destructs. If there is no more work to be done, the Context is released and step 5 is performed.

Once the new nsIOfflineStorage API callbacks are implemented the Cache will be able to keep the Context open longer. Again, this is relatively easy and simply needs the Manager to hold a strong reference to the Context.
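The RAII-style life cycle described above can be sketched as a reference count with an automatic cleanup step (a toy model; the real Context is C++ and the cleanup is AllowNextSynchronizedOp() on the main thread):

```javascript
function Context(cleanup) {
  this.refs = 1; // the creator holds the initial reference
  this.cleanup = cleanup;
}
Context.prototype.addRef = function() {
  this.refs++;
  return this;
};
Context.prototype.release = function() {
  if (--this.refs === 0) {
    this.cleanup(); // step 5: allow the next synchronized op
  }
};

var cleanupRan = false;
var ctx = new Context(function() { cleanupRan = true; });
var action1 = ctx.addRef(); // two pending Actions keep the Context alive
var action2 = ctx.addRef();
ctx.release();     // the creator is done, but Actions remain
action1.release(); // cleanupRan is still false here
action2.release(); // last reference gone: cleanup runs
```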

Streams and IPC

Since mobile platforms are a key target for Service Workers, the Cache API needs to be memory efficient. RAM is often the most constraining resource on these devices. To that end, our implementation should use streaming whenever possible to avoid holding large buffers in memory.

In gecko this is essentially implemented by a collection of classes that implement the nsIInputStream interface. These streams are pretty straightforward to use in normal code, but what happens when we need to serialize a stream across IPC?

The answer depends on the type of stream being serialized. We have a couple existing solutions:

  • Streams created for a flat memory buffer are simply copied across.
  • Streams backed by a file have their file descriptor dup()’d and passed across. This allows the other process to read the file directly without any immediate memory impact.

Unfortunately, we do not have a way to serialize an nsIPipe across IPC without completely buffering it first. This is important for Cache, because this is the type of stream we receive from a fetch() Response object.

To solve this, Kyle Huey is implementing a new CrossProcessPipe that will send the data across the IPC boundary in chunks.
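The chunking idea behind such a pipe can be sketched as follows (the real CrossProcessPipe is a C++ IPC actor; this toy version just shows the memory win — at most one chunk is ever in flight instead of the whole body):

```javascript
// Sender side: hand the body over one fixed-size chunk at a time,
// finishing with a null end-of-stream marker.
function sendInChunks(body, chunkSize, send) {
  for (var offset = 0; offset < body.length; offset += chunkSize) {
    send(body.slice(offset, offset + chunkSize));
  }
  send(null);
}

// Receiver side: reassemble the stream chunk by chunk.
function makeReceiver(onDone) {
  var parts = [];
  return function(chunk) {
    if (chunk === null) {
      onDone(parts.join(''));
    } else {
      parts.push(chunk);
    }
  };
}
```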

In this particular case we will be sending all the fetched Response data from the parent-to-child when the fetch() is performed. If the Response is passed to Cache.put(), then the data is copied back to the parent.

You may be asking, “why do you need to send the fetch() data from the child to the parent process when doing a cache.put()? Surely the parent process already has this data somewhere.”

Unfortunately, this is necessary to avoid buffering potentially large Response bodies in the parent. It’s imperative that the parent process never runs out of memory. One day we may be able to open the file descriptor in the parent, dup() it to the child, and then write the data directly from the child process, but this is not yet possible with the current Quota Manager.

Disk Schema

Finally, that brings us to a discussion of how the data is actually stored on disk. It basically breaks down like this:

  • Body data for both Requests and Responses are stored directly in individual snappy compressed files.
  • All other Request and Response data are stored in SQLite.

I know some people discourage using SQLite, but I chose it for a few reasons:

  1. SQLite provides transactional behavior.
  2. SQLite is a well-tested system with known caveats and performance characteristics.
  3. SQL provides a flexible query engine to implement and fine tune the Cache matching algorithm.

In this case I don’t think serializing all of the Cache metadata into a flat file, as suggested by that wiki page, would be a good solution. In general, only a small subset of the data will be read or written on each operation. In addition, we don’t want to require reading the entire dataset into memory. Also, for expected Cache usage, the data should typically be read-mostly with fewer writes over time. Data will not be continuously appended to the database. For these reasons I’ve chosen to go with SQLite while understanding the risks and pitfalls.

I plan to mitigate fragmentation by performing regular maintenance. Whenever a row is deleted from or inserted into a table a counter will be updated in a flat file. When the Context opens it will examine this counter and perform a VACUUM if it’s larger than a configured constant. The constant will of course have to be fine-tuned based on real world measurements.
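A sketch of that heuristic (the threshold constant and names are illustrative, not from Gecko):

```javascript
var VACUUM_THRESHOLD = 1000; // to be tuned from real-world measurements

function MaintenanceTracker(vacuum) {
  this.churn = 0;      // the counter kept in the flat file
  this.vacuum = vacuum;
}
MaintenanceTracker.prototype.recordWrite = function(rows) {
  this.churn += rows;  // bump the counter on every insert/delete
};
MaintenanceTracker.prototype.onContextOpen = function() {
  if (this.churn > VACUUM_THRESHOLD) {
    this.vacuum();     // compact the database
    this.churn = 0;
  }
};
```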

Simple marker files will also be used to note when a Context is open. If the browser is killed with a Context open, then a scrubbing process will be triggered the next time that origin accesses caches. This will look for orphaned Cache and body data files.

Finally, the bulk of the SQLite-specific code is isolated in two classes: DBAction.cpp and DBSchema.cpp. If we find SQLite is not performant enough, it should be straightforward to replace these files with another solution.

Detailed Trace

Now that we have the lay of the land, let’s trace what happens in the Cache when you do something like this:

// photo by leg0fenris: https://www.flickr.com/photos/legofenris/
var troopers = 'blob:https://mdn.github.io/6d4a4e7e-0b37-c342-81b6-c031a4b9082c';
var legoBox;

Promise.all([
  fetch(troopers),
  caches.open('legos')
]).then(function(results) {
  var response = results[0];
  legoBox = results[1];
  return legoBox.put(troopers, response);
}).then(function() {
  return legoBox.match(troopers);
}).then(function(response) {
  // invade rebel base
});

While it might seem the first Cache operation is caches.open(), we actually need to trace what happens when caches is touched. When the caches attribute is first accessed on the global we create the CacheStorage DOM object and IPC actors.

I’ve numbered each step in order to show the sequence of events. These steps are roughly:

  1. The global WebIDL binding for caches creates a new CacheStorage object and returns it immediately to the script.
  2. Asynchronously, the CacheStorage object creates a new child IPC actor. Since this may not complete immediately, any requests coming in will be queued until actor is ready. Of course, since all the operations use Promises, this queuing is transparent to the content script.
  3. The child actor in turn sends a message to the parent process to create a corresponding parent actor. This message includes the nsIPrincipal describing the content script’s origin and other identifying information.
  4. Before permitting any actual work to take place, the principal provided to the actor must be verified. For various reasons this can only be done on the main thread. So an asynchronous operation is triggered to examine the principal and any CacheStorage operations coming in are queued.
  5. Once the principal is verified we return to the PBackground worker thread.
  6. Assuming verification succeeded, then the origin’s Manager can now be accessed or created. (This is actually deferred until the first operation, though.) Any pending CacheStorage operations are immediately executed.
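The transparent queueing in steps 2 and 4 can be sketched like this (a toy model; the real code resolves DOM Promises, while this sketch uses plain callbacks to stay self-contained):

```javascript
function ActorQueue() {
  this.ready = false;
  this.pending = [];
}
// Run an operation now if the actor is ready, otherwise queue it.
// The caller never sees the difference.
ActorQueue.prototype.run = function(op, callback) {
  if (this.ready) {
    callback(op());
  } else {
    this.pending.push(function() { callback(op()); });
  }
};
// Called once actor creation (and principal verification) completes:
// flush everything that queued up in the meantime.
ActorQueue.prototype.setReady = function() {
  this.ready = true;
  this.pending.forEach(function(f) { f(); });
  this.pending = [];
};
```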

Now that we have the caches object we can get on with the open(). This sequence of steps is more complex:

There are a lot more steps here. To avoid making this blog post any more boring than necessary, I’ll focus on just the interesting ones.

As with the creation trace above, steps 1 to 4 are basically just passing the open() arguments across to the Manager. Your basic digital plumbing at work.

Steps 5 and 6 make sure the Context exists and schedules an Action to run on the IO thread.

Next, in step 7, the Action will perform the actual work involved. It must find the Cache if it already exists or create a new Cache. This basically involves reading and writing an entry in the SQLite database. The result is a unique CacheId.

Steps 8 to 11 essentially just return the CacheId back to the actor layer.

If this was the last Action, then the Context is released in step 10.

At this point we need to create a new parent actor for the CacheId. This Cache actor will be passed back to the child process where it gets a child actor. Finally a Cache DOM object is constructed and used to resolve the Promise returned to the JS script in first step. All of this occurs in steps 12 to 17.

On the off chance you’re still reading this section, the script next performs a put() on the cache:

This trace looks similar to the last one, with the main difference occurring in the Action on the right. That said, it’s important to note that the IPC serialization in this case includes a data stream for the Response body, so we might be creating a CrossProcessPipe actor to copy data across in chunks.

With that in mind the Action needs to do the following:

  • Stream body data to files on disk. This happens asynchronously on the IO thread. The Action and the Context are kept alive this entire time.
  • Update the SQLite database to reflect the new Request/Response pair with a file name pointer to the body.

All of the steps back to the child process are essentially just there to indicate completion. The put() operation resolves with undefined in the success case.

Finally the script can use match() to read the data back out of the Cache:

In this trace the Action must first query the SQLite tables to determine if the Request exists in the Cache. If it does, then it opens a stream to the body file.

It’s important to note, again, that this is just opening a stream. The Action is only accessing the file system directory structure and opening a file descriptor to the body. It’s not actually reading any of the data for the body yet.

Once the matched Response data and body file stream are passed back to the parent actor, we must create an extra actor for the stream. This actor is then passed back to the child process and used to create a ReadStream.

A ReadStream is a wrapper around the body file stream. This wrapper will send a message back to the parent whenever the stream is closed. In addition, it allows the Manager to signal the stream that a shutdown is occurring and the stream should be immediately closed.

This extra call back to the parent process on close is necessary to allow the Manager to reference track open streams and hold the Context open until all the streams are closed.
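The bookkeeping can be sketched as a counter of open streams (hypothetical names; the real notification travels over IPC via the StreamControl actor):

```javascript
function StreamTracker() {
  this.openStreams = 0;
}
// Wrap a raw read function so that closing the stream notifies the
// tracker exactly once, even if close() is called twice.
StreamTracker.prototype.wrap = function(readFn) {
  var self = this;
  self.openStreams++;
  var closed = false;
  return {
    read: readFn,
    close: function() {
      if (!closed) {
        closed = true;
        self.openStreams--;
      }
    }
  };
};
// The Context can only be released once no body streams remain open.
StreamTracker.prototype.canShutdown = function() {
  return this.openStreams === 0;
};
```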

The body file stream itself is serialized back to the child process by dup()’ing the file descriptor opened by the Action.

Ultimately the body file data is read from the stream when the content script calls Response.text() or one of the other body consumption methods.

TODO

Of course, there is still a lot to do. While we are going to try to land the current implementation on mozilla-central, a number of issues will need to be resolved in the near future.

  1. SQLite maintenance must be implemented. As I mentioned above, I have a plan for how this will work, but it has not been written yet.
  2. Stress testing must be performed to fine tune the SQLite schema and configuration.
  3. Files should be de-duplicated within a single origin’s CacheStorage. This will be important for efficiently supporting some expected uses of the Cache API. (De-duplication beyond the same origin will require expanded support from the QuotaManager and is unlikely to occur in the near future.)
  4. Request and Response clone() must be improved. Currently a clone() call results in the body data being copied. In general we should be able to avoid almost all copying here, but it will require some work. See bug 1100398 for more details.
  5. Telemetry should be added so that we can understand how the Cache is being used. This will be important for improving the performance of the Cache over time.
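Item 3 could, for instance, key bodies by a content hash so that identical payloads within one origin are stored only once. The `BodyStore` class and its reference counting below are hypothetical, purely to illustrate the idea:

```python
import hashlib

class BodyStore:
    """Stores response bodies once per origin, keyed by content hash."""
    def __init__(self):
        # origin -> {sha256 hex digest: body bytes}
        self.by_origin = {}
        # reference counts, so a body is deleted only when no entry uses it
        self.refs = {}

    def put(self, origin, body):
        digest = hashlib.sha256(body).hexdigest()
        store = self.by_origin.setdefault(origin, {})
        if digest not in store:
            store[digest] = body  # first copy: actually stored on disk
        key = (origin, digest)
        self.refs[key] = self.refs.get(key, 0) + 1
        return digest

    def get(self, origin, digest):
        return self.by_origin[origin][digest]

store = BodyStore()
a = store.put("https://example.com", b"same payload")
b = store.put("https://example.com", b"same payload")  # de-duplicated
```

Note the de-duplication is scoped per origin, matching the constraint above that cross-origin sharing would need expanded QuotaManager support.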

Conclusion

While the Cache implementation is sure to change, this is where we are today. We want to get Cache and the other Service Worker bits off of our project branch and into mozilla-central as soon as possible so other people can start testing with them. Reviewing the Cache implementation is an important step in that process.

If you would like to follow along please see bug 940273. As always, feedback is welcome by email or on twitter.

Categorieën: Mozilla-nl planet

Robert O'Callahan: We Aren't Really Going To Have "Firefox On iOS"

Mozilla planet - mo, 08/12/2014 - 02:22

Whatever we decide to do, we won't be porting Firefox as we know it to iOS, unless Apple makes major changes to their App Store policies. The principal issue is that on iOS, the only software Apple allows to download and execute content from the Internet is their built-in WebKit component. Under that policy, every browser --- including iOS Chrome, for example --- must be some kind of front-end to Apple's WebKit. Thus, from the point of view of Web authors --- and of users encountering Web compatibility issues --- all iOS browsers behave like Safari, even when they are named after other browsers. There is some ability to extend the WebKit component, but in most areas engine performance and features are restricted to whatever Safari has.

I certainly support having a product on iOS and I don't necessarily object to calling it Firefox, as long as we're very clear in our messaging. To some extent users and Web developers have already acclimatised to a similar confusing situation with iOS Chrome. It's not exactly the same situation: the difference between iOS Chrome and real Chrome is smaller than the difference between iOS Firefox and real Firefox would be, because Blink shares heritage, and still much code, with WebKit. But both differences are rapidly growing, since there's a ton of new Web features that Chrome and Firefox have and WebKit doesn't (e.g. WebRTC, Web Components, ES6 features, Web Animations).

In the meantime I think we need to avoid making pithy statements like "we're bringing Firefox to iOS".

Categorieën: Mozilla-nl planet

Robert O'Callahan: Portland

Mozilla planet - mo, 08/12/2014 - 02:19

Portland was one of the best Mozilla events I've ever attended --- possibly the very best. I say this despite the fact I had a cough the whole week (starting before I arrived), I had inadequate amounts of poor sleep, my social skills for large-group settings are meagre, and I fled the party when the music started.

I feel great about Portland because I spent almost all of each workday talking to people and almost every discussion felt productive. In most work weeks I run out of interesting things to talk about and fall back to the laptop, and/or we have lengthy frustrating discussions where we can't solve a problem or can't reach an agreement, but that didn't really happen this time. Some of my conversations had disagreements, but either we had a constructive and efficient exchange of views or we actually reached consensus.

A good example of the latter is a discussion I led about the future of painting in Gecko, in which I outlined a nebulous plan to fix the issues we currently have in painting and layer construction on the layout side. Bas brought up ideas about GPU-based painting which at first didn't seem to fit well with my plans, but later we were able to sketch a combined design that satisfies everything. I learned a lot in the process.

Another discussion where I learned a lot was with Jason about using rr for record-and-replay JS debugging. Before last week I wasn't sure if it was feasible, but after brainstorming with Jason I think we've figured out how to do it in a straightforward (but clever) way.

Portland also reemphasized to me just how excellent are the people in the Platform team, and other teams too. Just wandering around randomly, I'd almost immediately run into someone I think is amazing. We are outnumbered, but I find it hard to believe that anyone outguns us per capita.

There were lots of great events and people that I missed and wish I hadn't (sorry Doug!), but I feel I made good use of the time so I have few regrets. For the same reason I wasn't bothered by the scheduling chaos. I hear some people felt sad that they missed out on activities, but as often in life, it's a mistake to focus on what you didn't do.

During the week I reflected on my role in the project, finding the right ways to use the power I have, and getting older. I plan to blog about those a bit.

I played board games every night, mostly Bang! and Catan. It was great fun but I probably should cut back a bit next time. Then again, for me it was a more effective way to meet interesting strangers than the organized mixer party event we had.

Categorieën: Mozilla-nl planet

Richard Newman: On soft martial arts and software engineers

Mozilla planet - mo, 08/12/2014 - 01:29

I recently began studying tàijíquán (“tai chi”), the Chinese martial art.

Richard, holding a sword.

It always helps to have someone correct your form.

Many years ago I spent a year or two pursuing shōtōkan karate. Shōtōkan, by most standards, is a “hard” martial art: it opposes force with force, using low, stable stances to deliver direct strikes.

Tàijíquán is an internal art, mixing hard with soft. To most observers (and most practitioners!) it’s entirely a soft, slow-moving exercise form. To quote Wikipedia:

The ability to use t’ai chi ch’uan as a form of self-defense in combat is the test of a student’s understanding of the art. T’ai chi ch’uan is the study of appropriate change in response to outside forces, the study of yielding and “sticking” to an incoming attack rather than attempting to meet it with opposing force. The use of t’ai chi ch’uan as a martial art is quite challenging and requires a great deal of training.

(Other martial arts are soft, but more immediately applicable: jujutsu, judo, and wing chun, for example.)

I see some parallels between the hard/soft characterization of martial arts and the ‘lifecycle’, if you will, of software engineers.

You might find it hard to believe (HTML needs a sarcasm tag, no?), but I was once a young, arrogant developer. I’d been hired at a startup in the US on the strength of a phone call, I was good at what I did, and there was an endless list of problems to solve. I like solving problems, and I liked that I could impress by doing so. And so I did.

I routinely worked 14-hour days. I’d get up at 7, shower, and head to the office. After work I’d go out for dinner with coworkers, then work until bed. I had no real hobbies apart from drinking with my coworkers, so my time was spent writing code. It’s so easy to solve problems when you can solve them yourself.

Eventually, after one too many solo victories over seemingly impossible deadlines, I was burned out.

Hard martial arts are very tempting, particularly to the young and able-bodied: they yield direct results. The better you get, the harder and faster you hit.

The problem with hard martial arts is that the world keeps making newer, tougher opponents, while time and each engagement are conspiring to strip away your own vigor. It takes a toll on your knees, your shoulders. Bruises take longer and longer to go away.

The software industry is like this, too. It will happily take as much time as you give it. Beating that last hard problem by burning a weekend will only win you a pat on the back and a new, bigger task to accomplish. Meanwhile your shoulders hunch, RSI kicks in, your vision worsens. You take your first week off work because the painkillers aren’t enough to let you type any more. You find out what an EKG is, what a sit-stand desk is, what physical therapy is like.

And while it looks like you’re winning — after all, you’re producing software that works — you’re accruing costs, too. You’re spending your future. Not only are you personally losing your motivation, your vitality, and a large part of your self, but you’re also building more software. Either you have to own it, or nobody really does. Maybe someone else should. Maybe it shouldn’t have been built at all. You think you’re winning, but you won’t know until later. And all along, your aggressive approach to building a solution alienates those around you.

A soft martial art tries to use your opponent’s strength and momentum against them. It yields and redirects. Ultimately, it asks whether you need to engage at all.

Hard martial arts eventually force you to confront your own fragility: “I can’t keep doing this”. So does software development, if you’re paying attention. You need to learn to ask the right questions, to draw on the rest of your team, to invest your time in learning and tools, in communication, and above all to invest in other people.

As the quote above suggests, this takes practice. But it works out best in the long run.

Categorieën: Mozilla-nl planet

Soledad Penades: Meanwhile, in Mozlandia…

Mozilla planet - snein, 07/12/2014 - 22:25

Almost every employee and a good amount of volunteers flew into Portland past week for a sort of “coincidental work week” which also included a few common events, the “All hands”. Since it was held in Portland, home to “Portlandia“, someone started calling this week “Mozlandia” and the name stuck.

I knew it was going to be chaotic and busy and so I not only didn’t make any effort to meet with non-Mozilla-related Portlanders, but actively avoided that. When the day has been all about socialising from breakfast to afternoon, the last thing you want is to speak to more people. Also, I am not sure how to put this, but the fact that I visit some acquaintance’s town doesn’t mean that I am under any obligation to meet them. Sometimes people get angry that I didn’t tell them I was visiting and that’s not cool :-(

Speaking about not-coolness: my trip started with two “incidents”. First, I got mansplained at Heathrow Airport by an Air Canada employee who decided to take over my self-check-in machine, trying to press buttons on the screen and answering security questions for me instead of just, maybe, allowing me to operate it as I had been doing until he came and interrupted me, out of the blue. There was no one else in the area and I have no idea why he did that, but he got me angry.

Then the rest of the trip went pretty much as usual, with no incident. It was fun to spend layover time at the Vancouver Airport with Guillaume and Zac from the London office, and then share the experience of the Desolate Pod of Gates that is home to the mighty Propeller Planes.

I was really tired by the time I made it to my hotel–it was well past 6 AM in London time and I had been up for almost 24 hours with no sleep except for the short nap on the Vancouver-Portland flight, so the only thing I wanted was to make it to my room and sleeeeep. I got into one of the hotel lifts, and just as the doors were almost closed, someone waved their arm in and the doors opened again. Three massively tall and bulky men entered the lift and pressed some buttons for their floor, while I kept looking down and wondering what the room would look like and whether the pillows would be soft. And then I noticed something… something being repeated several times. I started paying attention, and it turned out that one of the men was talking to me. He was asking me:

How are you? How are you?

But I hadn’t replied because I was on my own world. So he repeated it again:

How are you?

So here’s the thing. When you’re that tired you have zero room for any sort of bullshit, and I was really, really tired. But those men were also really, really huge, compared to me. So I looked at him and I was really willing to give him a piece of my mind, but the only thing I said was

Maybe that is none of your business.

And luckily the doors for my floor opened and I didn’t have to stand their looks of “disappointment because I hadn’t been nice to them” any longer.

But…

I suddenly felt very unsafe because I hadn’t been nice to them.

Were they following me? Should I request my room to be changed to a different floor? Was there anything I was wearing that was distinctive, and would they be able to identify me the following days?

It took me a while to get asleep because I kept thinking about this, but eventually I got some sound slumber, hoping for an incident-free Sunday.

And it was a great, sunny and very COLD Sunday in Portland. Temperatures were about 0 degrees, which compared to London's 12 degrees felt even colder. I kept going into warm closed places (cafes! shops! malls!) and then back out to the glacial streets, so by Monday morning my body had decided it hated me and was going to show me just how much. First came the throat pain, then tummy ache, sneezing, the full list of winter horrors.

This made me not really enjoy the whole “Mozlandia” week. I was in a state of confusion most of the time, either by virtue of my sinuses pressuring my brain, or just because of the medicine I took. It was hard both to follow conversations and to articulate thoughts. I hope I didn't disappoint anyone who wanted to meet me this week for SERIOUS BSNSS, but I was generally a shambles. Sorry about that!

And yet despite of that, I still had some interesting discussions with various people at Mozilla, both intentionally and accidentally, so that was cool. Some topics included:

  • how can we work better with the Platform team (the ones implementing browser APIs, for those not in the Moz-know) so we know for certain which features are planned/implemented and with which degree of completion, and so we can give better advice to interested devs, and how can we improve the way we provide the feedback we get from developers at events, blog posts, etc. By the way: there’s a huge amount of cool new APIs coming up! this is neat :-)
  • future plans for the Web Audio API and the Web Audio Editor in Firefox DevTools, and also a general discussion on the API architecture and how it often takes developers by surprise, and whether we can do anything about that from a tooling point of view or not. Also, games, performance, and mixing other APIs together such as MediaRecorder.
  • the Web Animations API and support for visualising it in the devtools, with keyframes and timelines and all that good, exciting stuff! It got me thinking about whether it would be possible to make another build of tween.js or some sort of util/wrapper that uses the Animations API internally. Food for thought!
  • future Air Mozilla plans, including making it easier to upload content both from a moz-space and from an offline recording, and support for subtitles in various languages. I liked that they stress the fact that content does not need to be in English–after all, the Mozilla community speaks many languages!
  • Rachel Nabors told us about her animation/authoring process to create interactive experiences/comics using just HTML+JS+CSS. This was really enlightening and while I don’t have all the answers to the issues yet, it got me thinking about how we can make this easier and more enjoyable for non-super-tech-savvy audiences. There were cries for a Firefox Designer Edition too–we joked that it would come with some extra colorpickers because why not? :-P

I had to skip a couple of evenings because my immune system was just too excited to be on call, and so I stayed at my room. I didn’t want to go to sleep too early or the jetlag would be horrible, so I stayed awake by building a little silly thing: spoems, or spam poems (sources). I want to use it as a playground to try CSS stuff since it’s mostly text, but so far it’s super basic and that’s OK.

It was funny that this… morning? yesterday afternoon…? other mozillians that were flying back to London in the same plane than me were telling about the best of the closing party and internally I was like “well, I just drank some coffee and listened to Boards of Canada and then had ramen and watched random things on the Internet, and that was exactly what I needed”.

And that was my “Mozlandia”. What about yours? :-P


Categorieën: Mozilla-nl planet

Benjamin Kerensa: Mozilla All Hands: They can’t hold us!

Mozilla planet - snein, 07/12/2014 - 21:06
Macklemore & Ryan Lewis perform for Mozilla

What a wonderful all hands we had this past week. The entire week was full of meetings and planning and I must say I was exhausted by Thursday having been up each day working by 6:00am and going to bed by midnight.

I’m very happy to report that I made a lot of progress on meeting with more people to discuss the future of Firefox Extended Support Release and how to make it a much better offering to organizations.

I also spent some time talking to folks about Firefox in Ubuntu and rebranding Iceweasel to Firefox in Debian (fingers crossed something will happen here in 2015). Also it was great to participate in discussions around making all of the Firefox channels offer more stability and quality to our users.

It was great to hear that we will be doing some work to bring Firefox to iOS which I think will fill a gap that has existed for our users of OSX who have an iPhone.  Anyways, what I can say about this all hands is that there were lots of opportunities for discussions on quality and the future is looking very bright.

Also a big thanks to Lukas Blakk who put together an early morning excursion to Sherwood Ice Arena where Mozillians played some matches of hockey which I took photos of here.

In closing, I have to say it was a great treat for Macklemore & Ryan Lewis to come and perform for us in a private show and help us celebrate Mozilla.

 

Categorieën: Mozilla-nl planet
