
Doug Belshaw: Considerations when creating a Privacy badge pathway

Mozilla planet - Tue, 27/01/2015 - 15:41

Between June and October 2014 I chaired the Badge Alliance working group for Digital and Web Literacies. This was an obvious fit for me, having previously been on the Open Badges team at Mozilla, and currently being Web Literacy Lead.


We used a Google Group to organise our meetings. Our Badge Alliance liaison was my former colleague Carla Casilli. The group contained 208 people, although only around 10% of that number were active at any given time.

The deliverable we decided upon was a document detailing considerations individuals/organisations should take into account when creating a Privacy badge pathway.

Access the document here

We used Mozilla’s Web Literacy Map as a starting point for this work, mainly because many of us had been part of the conversations that led to its creation. Our discussions moved from monthly, to fortnightly, to weekly. They were wide-ranging and covered many options. However, the guidance we ended up providing is as simple and straightforward as possible.

For example, we advocated the creation of five badges:

  1. Identifying rights retained and removed through user agreements
  2. Taking steps to secure non-encrypted connections
  3. Explaining ways in which computer criminals are able to gain access to user information
  4. Managing the digital footprint of an online persona
  5. Identifying and taking steps to keep important elements of identity private

We presented options for how learners would level-up using these badges:

  • Trivial Pursuit approach
  • Majority approach
  • Cluster approach

More details on the badges and approaches can be found in the document. We also included more speculative material around federation. This involved exploring the difference between pathways, systems and ecosystems.

The deliverable from this working group is currently still on Google Docs, but if there’s enough interest we’ll port it to GitHub Pages so it looks a bit like the existing Webmaker whitepaper. This work is helping inform an upcoming whitepaper around Learning Pathways, which should be ready by the end of Q1 2015.

Karen Smith, co-author of the new whitepaper and part of the Badge Alliance working group, is also heading up a project (that I’m involved with in a small way) for the Office of the Privacy Commissioner of Canada. This is also informed in many ways by this work.

Comments? Questions? Comment directly on the document, tweet me (@dajbelshaw) or email me.

Categories: Mozilla-nl planet

Mozilla Firefox 35.0.1 fixes browser crash - Online PC

News collected via Google - Tue, 27/01/2015 - 15:26

Mozilla Firefox 35.0.1 fixes browser crash
Online PC
Mozilla is patching Firefox 35: the update fixes a total of eight bugs which, among other things, had caused the open-source browser to crash. Browser update: Mozilla has released an update for the web browser Firefox ...

Stormy Peters: Your app is not a lottery ticket

Mozilla planet - Tue, 27/01/2015 - 15:10

Many app developers are secretly hoping to win the lottery. You know all those horrible free apps full of ads? I bet most of them were hoping to be the next Flappy Bird app. (The Flappy Bird author was making $50K/day from ads for a while.)

The problem is that when you are that focused on making millions, you are not focused on making a good app that people actually want. When you add ads before you add value, you’ll end up with no users no matter how strategically placed your ads are.

So, the secret to making millions with your app?

  • Find a need or problem that people have that you can solve.
  • Solve the problem.
  • Make your users awesome. Luke first sent me a pointer to Kathy Sierra’s idea of making your users awesome.  Instagram let people create awesome pictures. Then their friends asked them how they did it …
  • Then monetize. (You can think about this earlier but don’t focus on it until you are doing well.)

If you are a good app or web developer, you’ll probably find it easier to do well financially by helping small businesses around you create the apps and web pages they need than by trying to randomly guess what game people might like. (If you have a good idea for a game that you are sure you, your friends, and then perhaps others would like to play, go for it!)

Related posts:

  1. Your “home” on the web
  2. Your competition helps explain who you are
  3. Learning to write JavaScript


Mozilla already releases an update for Firefox 35 - Automatisering Gids

News collected via Google - Tue, 27/01/2015 - 12:59

Mozilla already releases an update for Firefox 35
Automatisering Gids
The just-released browser Firefox 35 is causing so many problems that Mozilla already feels compelled to release an unplanned update. Firefox 35 crashed frequently: when starting Firefox, when using the Enhanced ...


Alistair Laing: Right tool for the Job

Mozilla planet - Tue, 27/01/2015 - 12:50

I’m still as keen as ever and enjoying the experience of developing a browser extension. Last week was the first time I hung out in Google Hangouts with Jan and Florent. On first impressions Google Hangouts is pretty sweet. It was smooth and clear (I’m not sure how much of that was down to broadband speeds and connection quality). I learnt so much in that first one-hour session and enjoyed chatting to them face-to-face (in digital terms).

TOO COOL to Minify & TOO SASS’y for tools

One of the things I learnt was how to approach JS/CSS. My front-end developer head tells me to always minify and concatenate files to reduce HTTP requests and, on the maintenance side, to use a CSS pre-processor for variables etc. When it comes to developing browser extensions, though, you do not have the same issues, for the following reasons:

  1. No HTTP requests are made, because the files are packaged with the extension and therefore already installed on the client machine. There’s also no network latency for the same reason.
  2. File sizes aren’t that important for browser extensions (in Firefox at least). Extensions are packaged so effectively (basically zipping all the contents together) that file sizes are reduced anyway.
  3. Whilst attempting to fix an issue I came across Mozilla’s implementation of CSS variables, which largely solves the issue around CSS variables and helps modularise the code.
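As a rough illustration of that third point (the selector and variable names below are my own examples, not taken from the extension), CSS variables let you define a value once and reference it wherever it is needed, much like a pre-processor variable but resolved natively by the browser:

```css
/* Illustrative example only: define shared values once on a root element... */
:root {
  --panel-background: #2d2d2d;
  --accent-color: #4fc3f7;
}

/* ...then reference them wherever they are needed with var(). */
.panel-header {
  background: var(--panel-background);
  border-bottom: 1px solid var(--accent-color);
}
```

Unlike pre-processor variables, these can also be overridden at runtime on any element, which is handy when theming an extension’s UI.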

Later today I’m scheduled to hang out with Jan again, and I’m thinking about writing another post about XUL.


Mozilla Release Management Team: Firefox 36 beta3 to beta4

Mozilla planet - Tue, 27/01/2015 - 11:43

In this beta release, for both Desktop & Mobile, we fixed some JavaScript issues, landed various stability fixes, and more. We also increased the memory size of some components to decrease the number of crashes (examples: bugs 869208 & 1124892).

  • 40 changesets
  • 121 files changed
  • 1528 insertions
  • 1107 deletions

Changes by file extension (occurrences): cpp 28, h 21, c 16, 11, java 9, html 7, py 4, js 4, mn 3, ini 3, mk 2, cc 2, xml 1, xhtml 1, svg 1, sh 1, in 1, idl 1, dep 1, css 1, build 1

Changes by module (occurrences): security 46, js 15, dom 15, mobile 10, browser 10, editor 6, gfx 5, testing 4, toolkit 3, ipc 2, xpcom 1, services 1, layout 1, image 1

List of changesets:

  • Cameron McCormack: Bug 1092363 - Disable Bug 931668 optimizations for the time being. r=dbaron a=abillings - 126d92ac00e9
  • Tim Taubert: Bug 1085369 - Move key wrapping/unwrapping tests to their own test file. r=rbarnes, a=test-only - afab84ec4e34
  • Tim Taubert: Bug 1085369 - Move other long-running tests to separate test files. r=keeler, a=test-only - d0660bbc79a1
  • Tim Taubert: Bug 1093655 - Fix intermittent browser_crashedTabs.js failures. a=test-only - 957b4a673416
  • Benjamin Smedberg: Bug 869208 - Increase the buffer size we're using to deliver network streams to OOPP plugins. r=aklotz, a=sledru - cb0fd5d9a263
  • Nicholas Nethercote: Bug 1122322 (follow-up) - Fix busted paths in worker memory reporter. r=bent, a=sledru - a99eabe5e8ea
  • Bobby Holley: Bug 1123983 - Don't reset request status in MediaDecoderStateMachine::FlushDecoding. r=cpearce, a=sledru - e17127e00300
  • Jean-Yves Avenard: Bug 1124172 - Abort read if there's nothing to read. r=bholley, a=sledru - cb103a939041
  • Jean-Yves Avenard: Bug 1123198 - Run reset parser state algorithm when aborting. r=cajbir, a=sledru - 17830430e6be
  • Martyn Haigh: Bug 1122074 - Normal Tabs tray has an empty state. r=mcomella, a=sledru - c1e9f11144a5
  • Michael Comella: Bug 1096958 - Move TilesRecorder instance into TopSitesPanel. r=bnicholson, a=sledru - d6baa06d52b4
  • Michael Comella: Bug 1110555 - Use real device dimensions when calculating LWT bitmap sizes. r=mhaigh, a=sledru - 2745f66dac6f
  • Michael Comella: Bug 1107386 - Set internal container height as height of MenuPopup. r=mhaigh, a=sledru - e4e2855e992c
  • Ehsan Akhgari: Bug 1120233 - Ensure that the delete command will stay enabled for password fields. r=roc, ba=sledru - 34330baf2af6
  • Philipp Kewisch: Bug 1084066 - plugins and extensions moved to wrong directory by mozharness. r=ted, a=sledru - 64fb35ee1af6
  • Bob Owen: Bug 1123245 Part 1: Enable an open sandbox on Windows NPAPI processes. r=josh, r=tabraldes, a=sledru - 2ab5add95717
  • Bob Owen: Bug 1123245 Part 2: Use the USER_NON_ADMIN access token level for Windows NPAPI processes. r=tabraldes, a=sledru - f7b5148c84a1
  • Bob Owen: Bug 1123245 Part 3: Add prefs for the Windows NPAPI process sandbox. r=bsmedberg, a=sledru - 9bfc57be3f2c
  • Makoto Kato: Bug 1121829 - Support redirection of kernel32.dll for hooking function. r=dmajor, a=sylvestre - d340f3d3439d
  • Ting-Yu Chou: Bug 989048 - Clean up emulator temporary files and do not overwrite userdata image. r=ahal, a=test-only - 89ea80802586
  • Richard Newman: Bug 951480 - Disable test_tokenserverclient on Android. a=test-only - 775b46e5b648
  • Jean-Yves Avenard: Bug 1116007 - Disable inconsistent test. a=test-only - 5d7d74f94d6a
  • Kai Engert: Bug 1107731 - Upgrade Mozilla 36 to use NSS 3.17.4. a=sledru - f4e1d64f9ab9
  • Gijs Kruitbosch: Bug 1098371 - Create localized version of sslv3 error page. r=mconley, a=sledru - e6cefc687439
  • Masatoshi Kimura: Bug 1113780 - Use SSL_ERROR_UNSUPPORTED_VERSION for SSLv3 error page. r=gijs, a=sylvestre (see Bug 1098371) - ea3b10634381
  • Jon Coppeard: Bug 1108007 - Don't allow GC to observe uninitialized elements in cloned array. r=nbp, a=sledru - a160dd7b5dda
  • Byron Campen [:bwc]: Bug 1123882 - Fix case where offset != 0. r=derf, a=abillings - 228ee06444b5
  • Mats Palmgren: Bug 1099110 - Add a runtime check before the downcast in BreakSink::SetCapitalization. r=jfkthame, a=sledru - 12972395700a
  • Mats Palmgren: Bug 1110557. r=mak, r=gavin, a=abillings - 3f71dcaa9396
  • Glenn Randers-Pehrson: Bug 1117406 - Fix handling of out-of-range PNG tRNS values. r=jmuizelaar, a=abillings - a532a2852b2f
  • Tom Schuster: Bug 1111248. r=Waldo, a=sledru - 7f44816c0449
  • Tom Schuster: Bug 1111243 - Implement ES6 proxy behavior for IsArray. r=efaust, a=sledru - bf8644a5c52a
  • Ben Turner: Bug 1122750 - Remove unnecessary destroy calls. r=khuey, a=sledru - 508190797a80
  • Mark Capella: Bug 851861 - Intermittent testFlingCorrectness, etc al. dragSync() consumers. r=mfinkle, a=sledru - 3aca4622bfd5
  • Jan de Mooij: Bug 1115776 - Fix LApplyArgsGeneric to always emit the has-script check. r=shu, a=sledru - 9ac8ce8d36ef
  • Nicolas B. Pierron: Bug 1105187 - Uplift the harness changes to fix jit-test failures. a=test-only - b17339648b55
  • Nicolas Silva: Bug 1119019 - Avoid destroying a SharedSurface before its TextureClient/Host pair. r=sotaro, a=abillings - 6601b8da1750
  • Markus Stange: Bug 1117304 - Also do the checks at the start of CopyRect in release builds. r=Bas, a=sledru - 4417d345698a
  • Markus Stange: Bug 1117304 - Make sure the tile filter doesn't call CopyRect on surfaces with different formats. r=Bas, a=sledru - bc7489448a98
  • David Major: Bug 1124892 - Adjust Breakpad reservation for xul.dll inflation. r=bsmedberg, a=sledru - 59aa16cfd49f


Browser update: Mozilla Firefox 35.0.1 fixes browser crash -

News collected via Google - Tue, 27/01/2015 - 09:45

Browser update: Mozilla Firefox 35.0.1 fixes browser crash
Browser update: Mozilla has released an update for the web browser Firefox for download. With the update, the developers fix eight bugs that had caused problems for users of version 35. The patched bugs ...

Mozilla, the virtual Web in alpha - Punto Informatico

News collected via Google - Tue, 27/01/2015 - 08:55

Mozilla, the virtual Web in alpha
Punto Informatico
Mozilla's virtual Web is called WebVR, and it essentially consists of a series of APIs that until now were accessible only with an experimental version of Firefox. Those APIs are now mature enough to be an integral part of the base ...


Ian Bicking: A Product Journal: To MVP Or Not To MVP

Mozilla planet - Tue, 27/01/2015 - 07:00

I’m going to try to journal the process of a new product that I’m developing in Mozilla Cloud Services. My previous post was The Tech Demo, and the first in the series is Conception.

The Minimal Viable Product

The Minimal Viable Product is a popular product development approach at Mozilla, and judging from Hacker News it is popular everywhere (but that is a wildly inaccurate way to judge common practice).

The idea is that you build the smallest thing that could be useful, and you ship it. The idea isn’t to make a great product, but to make something so you can learn in the field. A couple definitions:

The Minimum Viable Product (MVP) is a key lean startup concept popularized by Eric Ries. The basic idea is to maximize validated learning for the least amount of effort. After all, why waste effort building out a product without first testing if it’s worth it.

– from How I built my Minimum Viable Product (emphasis in original)

I like this phrase “validated learning.” Another definition:

A core component of Lean Startup methodology is the build-measure-learn feedback loop. The first step is figuring out the problem that needs to be solved and then developing a minimum viable product (MVP) to begin the process of learning as quickly as possible. Once the MVP is established, a startup can work on tuning the engine. This will involve measurement and learning and must include actionable metrics that can demonstrate cause and effect question.

– Lean Startup Methodology (emphasis added)

I don’t like this model at all: “once the MVP is established, a startup can work on tuning the engine.” You tune something that works the way you want it to, but isn’t powerful or efficient or fast enough. When you’ve created an MVP you’ve established almost nothing: no aspect of the product is validated, and it would be premature to tune. But I see this antipattern happen frequently: get an MVP out quickly, often shutting down critically engaged deliberation in order to Just Get It Shipped, then use that product as the model for further incremental improvements. Just Get It Shipped is okay, and incrementally improving products is okay, but together they are boring and uncreative.

There’s another broad discussion to be had another time about how to enable positive and constructive critical engagement around a project. It’s not easy, but that’s where learning happens, and the purpose of the MVP is to learn, not to produce. In contrast I find myself impressed by the sheer willfulness of the Half-Life development process, which apparently involved months of six-hour design meetings, four days a week, producing large and detailed design documents. Maybe I’m impressed because it sounds so exhausting, a feat of endurance. And perhaps it implies that waterfall can work if you invest in it properly.

Plan plan plan

I have a certain respect for this development pattern that Dijkstra describes:

Q: In practice it often appears that pressures of production reward clever programming over good programming: how are we progressing in making the case that good programming is also cost effective?

A: Well, it has been said over and over again that the tremendous cost of programming is caused by the fact that it is done by cheap labor, which makes it very expensive, and secondly that people rush into coding. One of the things people learn in colleges nowadays is to think first; that makes the development more cost effective. I know of at least one software house in France, and there may be more because this story is already a number of years old, where it is a firm rule of the house, that for whatever software they are committed to deliver, coding is not allowed to start before seventy percent of the scheduled time has elapsed. So if after nine months a project team reports to their boss that they want to start coding, he will ask: “Are you sure there is nothing else to do?” If they say yes, they will be told that the product will ship in three months. That company is highly successful.

– from Interview Prof. Dr. Edsger W. Dijkstra, Austin, 04–03–1985

Or, a warning from a page full of these kind of quotes: “Weeks of programming can save you hours of planning.” The planning process Dijkstra describes is intriguing, it says something like: if you spend two weeks making a plan for how you’ll complete a project in two weeks then it is an appropriate investment to spend another week of planning to save half a week of programming. Or, if you spend a month planning for a month of programming, then you haven’t invested enough in planning to justify that programming work – to ensure the quality, to plan the order of approach, to understand the pieces that fit together, to ensure the foundation is correct, ensure the staffing is appropriate, and so on.

I believe “Waterfall Design” gets much of its negative connotation from a lack of good design. A Waterfall process requires the design to be very very good. With Waterfall the design is too important to leave to the experts, to let the architect arrange technical components, the program manager arrange schedules, the database architect design the storage, and so on. It’s anti-collaborative, disengaged. It relies on intuition and common sense, and those are not powerful enough. I’ll quote Dijkstra again:

The usual way in which we plan today for tomorrow is in yesterday’s vocabulary. We do so, because we try to get away with the concepts we are familiar with and that have acquired their meanings in our past experience. Of course, the words and the concepts don’t quite fit because our future differs from our past, but then we stretch them a little bit. Linguists are quite familiar with the phenomenon that the meanings of words evolve over time, but also know that this is a slow and gradual process.

It is the most common way of trying to cope with novelty: by means of metaphors and analogies we try to link the new to the old, the novel to the familiar. Under sufficiently slow and gradual change, it works reasonably well; in the case of a sharp discontinuity, however, the method breaks down: though we may glorify it with the name “common sense”, our past experience is no longer relevant, the analogies become too shallow, and the metaphors become more misleading than illuminating. This is the situation that is characteristic for the “radical” novelty.

Coping with radical novelty requires an orthogonal method. One must consider one’s own past, the experiences collected, and the habits formed in it as an unfortunate accident of history, and one has to approach the radical novelty with a blank mind, consciously refusing to try to link it with what is already familiar, because the familiar is hopelessly inadequate. One has, with initially a kind of split personality, to come to grips with a radical novelty as a dissociated topic in its own right. Coming to grips with a radical novelty amounts to creating and learning a new foreign language that can not be translated into one’s mother tongue. (Any one who has learned quantum mechanics knows what I am talking about.) Needless to say, adjusting to radical novelties is not a very popular activity, for it requires hard work. For the same reason, the radical novelties themselves are unwelcome.

– from EWD 1036, On the cruelty of really teaching computing science


All this praise of planning implies you know what you are trying to make. Unlikely!

Coding can be a form of planning. You can’t research how interactions feel without having an actual interaction to look at. You can’t figure out how feasible some techniques are without trying them. Planning without collaborative creativity is dull, planning without research is just documenting someone’s intuition.

The danger is that when you are planning with code, it feels like execution. You can plan to throw one away to put yourself in the right state of mind, but I think it is better to simply be clear and transparent about why you are writing the code you are writing. Transparent because the danger isn’t just that you confuse your coding with execution, but that anyone else is likely to confuse the two as well.

So code up a storm to learn, code up something usable so people will use it and then you can learn from that too.

My own conclusion…

I’m not making an MVP. I’m not going to make a maximum viable product either – rather, the next step in the project is not to make a viable product. The next stage is research and learning. Code is going to be part of that. Dogfooding will be part of it too, because I believe that’s important for learning. I fear thinking in terms of “MVP” would let us lose sight of the why behind this iteration – it is a dangerous abstraction during a period of product definition.

Also, if you’ve gotten this far, you’ll see I’m not creating minimal viable blog posts. Sorry about that.


Tomorrow Daily 118: Bill Nye's spacecraft, Mozilla's in-browser VR and more - CNET UK

News collected via Google - Tue, 27/01/2015 - 03:15

Tomorrow Daily 118: Bill Nye's spacecraft, Mozilla's in-browser VR and more
It's Monday, and we're back with a brand new show filled with all the good stuff: Bill Nye's non-profit organization is working on a private spacecraft the size of a loaf of bread; Mozilla adds core VR support to Firefox Nightly builds; and a Chinese ...



Stormy Peters: 7 reasons asynchronous communication is better than synchronous communication in open source

Mozilla planet - Tue, 27/01/2015 - 00:45

Traditionally, open source software has relied primarily on asynchronous communication. While there are probably quite a few synchronous conversations on IRC, most project discussions and decisions happen on asynchronous channels like mailing lists, bug-tracking tools and blogs.

I think there’s another reason for this. Synchronous communication is difficult for an open source project, or for any project where people are distributed. Synchronous conversations are:

  • Inconvenient. It’s hard to schedule synchronous meetings across time zones. Just try to pick a good time for Australia, Europe and California.
  • Logistically difficult. It’s hard to schedule a meeting for people that are working on a project at odd hours that might vary every day depending on when they can fit in their hobby or volunteer job.
  • Slower. If you have more than 2-3 people who need to get together every time you make a decision, things will move slower. I have a project right now that we are kicking off, and the team wants to do everything in meetings. We had several meetings last week and one this week. Asynchronously we could have had several rounds of discussion by now.
  • Expensive for many people. When I first started at GNOME, it was hard to get some of our board members on a phone call. They couldn’t call international numbers, or couldn’t afford an international call and they didn’t have enough bandwidth for an internet voice call. We ended up using a conference call line from one of our sponsor companies. Now it’s video.
  • Technically difficult. Mozilla does most of our meetings as video meetings. Video is still really hard for many people. Even with my pretty expensive, supposedly high-end internet in a developed country, I often have bandwidth problems when participating in video calls. Now imagine I’m a volunteer from Nigeria: my electricity might not work all the time, much less my high-speed internet.
  • Language. Open source software projects work primarily in English and most of the world does not speak English as their first language. Asynchronous communication gives them a chance to compose their messages, look up words and communicate more effectively.
  • Confusing. Discussions and decisions are often made by a subset of the project and unless the team members are very diligent the decisions and rationale are often not communicated out broadly or effectively. You lose the history behind decisions that way too.

There are some major benefits to synchronous conversation:

  • Relationships. You build relationships faster. It’s much easier to get to know the person.
  • Understanding. Questions and answers happen much faster, especially if the question is hard to formulate or understand. You can quickly go back and forth and get clarity on both sides. They are also really good for difficult topics that might be easily misinterpreted or misunderstood over email where you don’t have tone and body language to help convey the message.
  • Quicker. If you only have 2-3 people, it’s faster to talk to them than to type it all out. Once you have more than 2-3, you lose that advantage.

I think as new technologies, both synchronous and asynchronous, become mainstream, open source software projects will have to figure out how to incorporate them. For example, at Mozilla we’ve been working on how video can be a part of our projects. Unfortunately, video usually just adds more synchronous conversations that are hard to share widely, but we work on taking notes, sending them to mailing lists and recording meetings to try to get the relationship and communication benefits of video meetings while maintaining good open source software project practices. I personally would like to see us use more asynchronous tools, as I think video and synchronous tools benefit full-time employees at the expense of volunteer involvement.

How does your open source software project use asynchronous and synchronous communication tools? How’s the balance working for you?

Related posts:

  1. Humanitarian projects bring more students to open source software
  2. Open source enables companies to collaborate
  3. 10 free apps I wish were open source


Darrin Henein: Rapid Prototyping with Gulp, Framer.js and Sketch: Part One

Mozilla planet - Mon, 26/01/2015 - 23:09



The process of design is often thought of as being entirely generative: people who design things study a particular problem, pull out their sketchbooks, markers and laptops, and produce artifacts which slowly but surely progress towards some end result, which then becomes “The Design” of “The Thing”. It is seen as an additive process, whereby each step builds upon the previous, sometimes with changes or modifications which solve issues brought to light by the earlier work.

Early in my career, I would sit at my desk and look with disdain at all the crumpled paper that filled my trash bin and cherish that one special solution that made the cut. The bin was filled with all of my “bad ideas”. It was overflowing with “failed” attempts before I finally “got it right”. It took me some time, but I’ve slowly learned that the core of my design work is defined not by that shiny mockup or design spec I deliver, but more truly by the myriad of sketches and ideas that got me there. If your waste bin isn’t full by the end of a project, you may want to ask yourself if you’ve spent enough time exploring the solution space.

I really love how Facebook’s Product Design Director Julie Zhuo put it in her essay “Junior Designers vs. Senior Designers”, where she illustrates (in a very non-scientific, but effective way) the difference in process that experience begets. The key delta to me is the singularity of the Junior Designer’s process, compared to the exploratory, branching, subtractive process of the more seasoned designer. Note all the dead ends and occasions where the senior designer just abandons an idea or concept. They clearly have a full trash bin by the end of this journey. Through the process of evaluation and subtraction, a final result is reached. The breadth of ideas explored and abandoned is what defines the process, rather than the evolution of a single idea. It is important to achieve this breadth of ideation to ensure that the solution you commit to was not just a lucky one, but a solution that was vetted against a variety of alternatives.

The unfortunate part of this realization is that often it is just that – an idealized process which faces little conceptual opposition but (in my experience) is often sacrificed in the name of speed or deadlines. Generating multiple sketches is not a huge cost, and is one of the primary reasons so much exploration should take place at that fidelity. Interactions, behavioural design and animations, however, are much more costly to generate, and so the temptation there is to iterate on an idea until it feels right. While this is not inherently a bad thing, wouldn’t it be nice if we could iterate and explore things like animations with the same efficiency we experience with sketching?

As a designer with the ability to write some code, my first goal with any project is to eliminate any inefficiencies – let me focus on the design and not waste time elsewhere. I’m going to walk through a framework I’ve developed during a recent project, but the principle is universal – eliminate or automate the things you can, and maximize the time you spend actually problem-solving and designing.

Designing an Animation Using Framer.js and Sketch Get the Boilerplate Project on Github

User experience design has become a much more complex field as hardware and software have evolved to allow increasingly fluid, animated and dynamic interfaces. When designing native applications (especially on mobile platforms such as Android or iOS), there is both an expectation of and great value in leveraging animation in our UI. Whether to bring attention to an element, educate the user about the hierarchy of the screens in an app, or just to add a moment of delight, animation can be a powerful tool when used correctly. As designers, we must now look beyond Photoshop and static PNG files to define our products, and leverage tools like Keynote or HTML to articulate how these interfaces should behave.

While I prefer to build tools and workflows with open-source software, it seems that the best design tools available are paid applications. Thankfully, Sketch is a fantastic application and easily worth its price.

My current tool of choice is a library called framer.js, which is an open-source framework for prototyping UI. For visual design I use Sketch. I’m going to show you how I combine these two tools to provide me with a fast, automated, and iterative process for designing animations.

I am also aware that Framer Studio exists, as well as Framer Generator. These are both amazing tools. However, I am looking for something as automated and low-friction as possible; both of these tools require some steps between modifying the design and seeing the results. Let's look at how I achieved a fully automated solution to this problem.

Automating Everything With Gulp

Here is the goal: let me work in my Sketch and/or CoffeeScript file, and just by saving, update my animated prototype with the new code and images without me having to do anything. Lofty, I know, but let’s see how it’s done.

Gulp is a Javascript-based build tool, the latest in a series of incredible node-powered command line build tools.

Some familiarity with build tools such as Gulp or Grunt will help here, but is not mandatory. Also, this will explain the mechanics of the tool, but you can still use this framework without understanding every line!

The gulpfile is just a list of tasks, or commands, that we can run in different orders or timings. Let's break down my gulpfile.js:

var gulp = require('gulp');
var coffee = require('gulp-coffee');
var gutil = require('gulp-util');
var watch = require('gulp-watch');
var sketch = require('gulp-sketch');
var browserSync = require('browser-sync');

This section at the top just requires (imports) the external libraries I'm going to use. These include Gulp itself, CoffeeScript support (which for me is faster than writing Javascript), a watch utility to run code whenever a file changes, a plugin which lets me parse and export from Sketch files, and BrowserSync to push changes to the browser as they happen.
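These libraries all live on npm. Assuming a standard Node.js project (a hypothetical setup step, not shown in the original post), the one-time install of the packages required above might look like:

```shell
# One-time setup: install Gulp and the plugins required above
# as development dependencies of the prototype project.
npm install --save-dev gulp gulp-coffee gulp-util gulp-watch gulp-sketch browser-sync
```

Note that gulp-sketch relies on Sketch's command-line tooling being present on the machine, so this setup is macOS-specific.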

gulp.task('build', ['copy', 'coffee', 'sketch']);
gulp.task('default', ['build', 'watch']);

Next, I setup the tasks I’d like to be able to run. Notice that the build and default tasks are just sets of other tasks. This lets me maintain a separation of concern and have tasks that do only one thing.
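To make that separation of concerns concrete, here is a tiny plain-JavaScript sketch of how composite tasks like build and default fan out to their sub-tasks. This is an illustration of the pattern, not Gulp's actual internals (Gulp runs dependencies asynchronously; this sketch runs them in order purely to show the composition):

```javascript
// Minimal task registry: each task has dependencies and a function.
const tasks = {};

function task(name, deps, fn) {
  tasks[name] = { deps: deps || [], fn: fn || (() => {}) };
}

// Run a task's dependencies first, then the task itself,
// recording the order of completion.
function run(name, log = []) {
  const t = tasks[name];
  t.deps.forEach(dep => run(dep, log));
  t.fn();
  log.push(name);
  return log;
}

// Mirror the gulpfile above: leaf tasks do the work,
// composite tasks just list other tasks.
task('copy');
task('coffee');
task('sketch');
task('watch');
task('build', ['copy', 'coffee', 'sketch']);
task('default', ['build', 'watch']);

console.log(run('default').join(' -> '));
// → copy -> coffee -> sketch -> build -> watch -> default
```

Because composite tasks carry no logic of their own, each leaf task stays small and replaceable, which is exactly the separation of concern the gulpfile relies on.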

gulp.task('watch', function(){
  gulp.watch('./src/*.coffee', ['coffee']);
  gulp.watch('./src/*.sketch', ['sketch']);
  browserSync({
    server: {
      baseDir: 'build'
    },
    browser: 'google chrome',
    injectChanges: false,
    files: ['build/**/*.*'],
    notify: false
  });
});

This is the watch task. I tell Gulp to watch my src folder for CoffeeScript files and Sketch files; these are the only source files that define my prototype and will be the ones I change often. When a CoffeeScript or Sketch file changes, the coffee or sketch tasks are run, respectively.

Next, I set up browserSync to push any changed files within the build directory to my browser, which in this case is Chrome. This keeps my prototype in the browser up-to-date without having to hit refresh. Notice I’m also specifying a server: key, which essentially spins up a web server with the files in my build directory.

gulp.task('coffee', function(){
  gulp.src('src/*.coffee')
    .pipe(coffee({bare: true}).on('error', gutil.log))
    .pipe(gulp.dest('build/'))
});

The second major task is coffee. This, as you may have guessed, simply transpiles any *.coffee files in my src folder to Javascript, and places the resulting JS file in my build folder. Because the prototype is contained in one file, there is no need for concatenation or minification.

gulp.task('sketch', function(){
  gulp.src('src/*.sketch')
    .pipe(sketch({
      export: 'slices',
      format: 'png',
      saveForWeb: true,
      scales: 1.0,
      trimmed: false
    }))
    .pipe(gulp.dest('build/images'))
});

The sketch task is also aptly named, as it is responsible for exporting the slices I have defined in my Sketch file to PNGs, which can then be used in the prototype. In Sketch, you can mark a layer or group as “exportable”, and this task only looks for those assets.

gulp.task('copy', function(){
  gulp.src('src/index.html')
    .pipe(gulp.dest('build'))
  gulp.src('src/lib/**/*.*')
    .pipe(gulp.dest('build/lib'))
  gulp.src('src/images/**/*.{png, jpg, svg}')
    .pipe(gulp.dest('build/images'));
});

The last task is simply housekeeping. It is only run once, when you first start the Gulp process on the command line. It copies any HTML files, JS libraries, or other images I want available to my prototype. This lets me keep everything in my src folder, which is a best practice. As a general rule of thumb for build systems, avoid placing anything by hand in your output directory (in this case, build), as you jeopardize your ability to have repeatable builds.
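As a rough illustration of what the 'src/images/**/*.{png, jpg, svg}' glob selects, here is a hypothetical extension filter. The real matching is handled for Gulp by a glob library; this stand-in only demonstrates the effect of the brace expansion:

```javascript
// Hypothetical stand-in for the image glob in the copy task:
// keep only files whose names end in .png, .jpg, or .svg.
function isImageAsset(path) {
  return /\.(png|jpg|svg)$/.test(path);
}

const files = [
  'src/images/logo.png',
  'src/images/icon.svg',
  'src/images/notes.txt'
];

const copied = files.filter(isImageAsset);
console.log(copied); // → ['src/images/logo.png', 'src/images/icon.svg']
```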

Recall my default task was defined above, as:

gulp.task('default', ['build', 'watch']);

This means that by running $ gulp in this directory from the command line, my default task is kicked off. It won’t exit without ctrl-C, as watch will run indefinitely. This lets me run this command only once, and get to work.

$ gulp

So where are we now? If everything worked, you should see your prototype available at http://localhost:3000. Saving either the CoffeeScript file or app.sketch should trigger the watch task we set up, and compile the appropriate assets to our build directory. This change of files in the build directory should trigger BrowserSync, which will then update our prototype in the browser. Voila! We can now work in either of two files (the CoffeeScript source or app.sketch), and just by saving them have our shareable, web-based prototype updated in place. And the best part is, I only had to set this up once! I can now use this framework with my next project and immediately begin designing, with a hyper-fast iteration loop to facilitate that work.

The next step is to actually design the animation using Sketch and framer.js, which deserves its own post altogether and will be covered in Part Two of this series.

Follow me on Twitter @darrinhenein to be notified when part two is available.

Categorieën: Mozilla-nl planet

Adam Okoye: Tests, Feedback Results, and a New Thank You

Mozilla planet - ma, 26/01/2015 - 20:14

As I’ve said in previous posts, my internship primarily revolves around creating a new “thank you” page that will be shown to people who leave negative (or “sad”) feedback on Input.

The current thank you page which the user gets directed to after giving any type of feedback, good or bad, looks like this:

current thank you page

As you can see it’s pretty basic. It does include a link to Support Mozilla (SUMO), which I think is very useful. It also has links to a page that shows you how to download different builds of Firefox (beta, nightly, etc), a page with a lot of useful information on how to get involved with contributing to Mozilla, and links to Mozilla’s social networking profiles. While the links are interesting in their own right, they don’t do a lot in terms of quickly guiding someone to a solution if they’re having a problem with Firefox. We want to change that in order to hopefully make the page more useful to people who are having trouble using Firefox. The new thank you page will end up being a different Django template that people will be redirected to.

Part of making the new page more useful will be including links to SUMO articles that are related to the feedback that people have given. Last week I wrote the code that redirects a specific segment of people to the new thank you page, as well as a test for that code. The new thank you page will be rolled out via a Waffle flag which I made some weeks ago, which made writing the test a tad more complex. Right now there are a few finishing touches that need to be added to the test in order to close out the bug, but I’m hoping to finish those by the end of Tuesday, the 27th.

We’ll be using one of the three SUMO API endpoints to take the text from the feedback, search the knowledge base and questions, and return results. To figure out which endpoint to use, I used a script that Will Kahn-Greene wrote to look at feedback taken from Input and results returned via SUMO’s endpoints, and then rank which endpoint’s results were the best. I did that for 120 pieces of feedback.

Tomorrow I’m going to start sketching and mocking up the new thank you page, which I’m really looking forward to. I’ll be using a whiteboard for the sketching, which will be a first for me; I’m hoping that it’ll be easier for me than pencils/pen and paper. I’ll also be able to quickly and easily upload all of the pictures I take of the whiteboard to my computer, which I think will be useful.

Categorieën: Mozilla-nl planet

Mark Surman: Mozilla Participation Plan (draft)

Mozilla planet - ma, 26/01/2015 - 20:02

Mozilla needs a more creative and radical approach to participation in order to succeed. That is clear. And, I think, pretty widely agreed upon across Mozilla at this stage. What’s less clear: what practical steps do we take to supercharge participation at Mozilla? And what does this more creative and radical approach to participation look like in the everyday work and lives of people involved in Mozilla?

Mozilla and participation

This post outlines what we’ve done to begin answering these questions and, importantly, it’s a call to action for your involvement. So read on.

Over the past two months, we’ve written a first draft Mozilla Participation Plan. This plan is focused on increasing the impact of participation efforts already underway across Mozilla and on building new methods for involving people in Mozilla’s mission. It also calls for the creation of new infrastructure and ways of working that will help Mozilla scale its participation efforts. Importantly, this plan is meant to amplify, accelerate and complement the many great community-driven initiatives that already exist at Mozilla (e.g. SuMo, MDN, Webmaker, community marketing, etc.) — it’s not a replacement for any of these efforts.

At the core of the plan is the assumption that we need to build a virtuous circle between 1) participation that helps our products and programs succeed and 2) people getting value from participating in Mozilla. Something like this:

Virtuous circle of participation

This is a key point for me: we have to simultaneously pay attention to the value participation brings to our core work and to the value that participating provides to our community. Over the last couple of years, many of our efforts have looked at just one side or the other of this circle. We can only succeed if we’re constantly looking in both directions.

With this in mind, the first steps we will take in 2015 include: 1) investing in the ReMo platform and the success of our regional communities and 2) better connecting our volunteer communities to the goals and needs of product teams. At the same time, we will: 3) start a Task Force, with broad involvement from the community, to identify and test new approaches to participation for Mozilla.

Participation Plan

The belief is that these activities will inject the energy needed to strengthen the virtuous circle reasonably quickly. We’ll know we’re succeeding if a) participation activities are helping teams across Mozilla measurably advance product and program goals and b) volunteers are getting more value out of their participation in Mozilla. These are key metrics we’re looking at for 2015.

Over the longer run, there are bigger ambitions: an approach to participation that is at once massive and diverse, local and global. There will be many more people working effectively and creatively on Mozilla activities than we can imagine today, without the need for centralized control. This will result in a different and better, more diverse and resilient Mozilla — an organization that can consistently have massive positive impact on the web and on people’s lives over the long haul.

Making this happen means involvement and creativity from people across Mozilla and our community. However, a core team is needed to drive this work. In order to get things rolling, we are creating a small set of dedicated Participation Teams:

  1. A newly formed Community Development Team that will focus on strengthening ReMo and tying regional communities into the work of product and program groups.
  2. A participation ‘task force’ that will drive a broad conversation and set of experiments on what new approaches could look like.
  3. And, eventually, a Participation Systems Team will build out new infrastructure and business processes that support these new approaches across the organization.

For the time being, these teams will report to Mitchell and me. We will likely create an executive level position later in the year to lead these teams.

As you’ll see in the plan itself, we’re taking very practical and action oriented steps, while also focusing on and experimenting with longer-term questions. The Community Development Team is working on initiatives that are concrete and can have impact soon. But overall we’re just at the beginning of figuring out ‘radical participation’.

This means there is still a great deal of scope for you to get involved — the plans are still evolving and your insights will improve our process and the plan. We’ll come out with information soon on more structured ways to engage with what we’re calling the ‘task force’. In the meantime, we strongly encourage your ideas right away on ways the participation teams could be working with products and programs. Just comment here on this post or reach out to Mitchell or me.

PS. I promised a follow up on my What is radical participation? post, drawing on comments people made. This is not that. Follow up post on that topic still coming.

Filed under: mozilla, opensource
Categorieën: Mozilla-nl planet

Air Mozilla: Mozilla Weekly Project Meeting

Mozilla planet - ma, 26/01/2015 - 20:00

Mozilla Weekly Project Meeting The Monday Project Meeting

Categorieën: Mozilla-nl planet

Mozilla Reps Community: Rep of the month: January 2015

Mozilla planet - ma, 26/01/2015 - 19:22

Irvin Chen was an inspiring contributor last month and we want to recognize his great work as a Rep.

Irvin has been organizing the weekly MozTW Lab and also other events to spread Mozilla in the local community space in Taiwan, such as a Spark meetup, a d3.js meetup and a Wikimedia mozcafe.

He also helped to run an l10n sprint for video subtitles, Mozilla links, SUMO and Webmaker on Transifex.

Congratulations Irvin for your awesome work!

Don’t forget to congratulate him on Discourse!

Categorieën: Mozilla-nl planet

Ben Kero: Attempting to source large E-Ink screens for a laptop-like device

Mozilla planet - ma, 26/01/2015 - 19:11

One idea that’s been bouncing around in my head for the last few years has been a laptop with an E-Ink display. I would have thought this would be a niche that had been carved out already, but it doesn’t seem that any companies are interested in exploring it.

I use my laptop in some non-traditional environments, such as outdoors in direct sunlight. Almost all laptops are abysmal in a scenario like this. E-Ink screens are a natural response to this requirement. Unlike traditional TFT-LCD screens, E-Ink panels are meant to be viewed with an abundance of natural light. As a human, I too enjoy natural light.

Besides my fantasies of hacking on the beach, these would be very useful to combat the raster burn that seems to be so common among regular computer users. Since TFT-LCDs act as artificial sunlight, they can have very negative side-effects on the eyes, and indirectly on the brain. Since E-Ink screens work without a backlight, they are not susceptible to these problems. This has the potential to help me reclaim some of the time that I spend without a device before bedtime for health reasons.

The limitations of E-Ink panels are well known to anybody who has used one. The refresh rate is not nearly as good, the color saturation varies between abysmal and non-existent, and the available sizes are much smaller than those of LCD panels. Despite all these reasons, the panels do have advantages. They do not give the user raster burn like other backlit panels. They are cheap, standardized, and easy to replace. They are also usable in direct sunlight. Until recently they offered competitive DPI compared to laptop panels as well.

As a computer professional many of these downsides of LCD panels concern me. I spend a large amount of my work day staring at the displays. I fear this will have a lasting effect on me and many others who do the same.

The E-Ink manufacturer offerings are surprisingly sparse, with no devices that I can find targeted towards consumers or hobbyists. Traditional LCDs are available over a USB interface, able to be used as external displays on any embedded or workstation system. Interfaces for E-Ink displays are decidedly less advanced. The panels that Amazon sources use an undocumented DTO protocol/connector. The panels that everybody else seems to use also have a specific protocol/connector, but some controllers are available.

The one panel I’ve been able to source to try to integrate into a laptop-like object is PervasiveDisplays’ 9.7″ panel with SPI controller. This would allow a computer to speak SPI to the controller board, which would then translate the calls into operations to manage drawing to the panel. Although this is useful, availability is limited to a few component wholesale sites and Digikey. Likewise it’s not exactly cheap. Although the SPI controller board is only $28, the set of controller and 9.7″ panel is $310. Similar replacement Kindle DX panels cost around $85 elsewhere on the internet.

It would be cheaper to buy an entire Kindle DX, scrap the computer and salvage the panel than to buy the PervasiveDisplays evaluation kit on Digikey. To be fair this is comparing a used consumer device to a niche evaluation kit, so of course the former device is going to be cheaper.

To their credit, they’re also trying to be active in the Open Hardware community. They’ve launched a site advocating freeing ePaper technology from the hands of the few companies and into the hands of open hardware enthusiasts and low-run product manufacturers.

From their site:

We recognize ePaper is a new technology and we’re asking your help in making it better known. Up till now, all industry players have kept the core technologies closed. We want to change this. If the history of the Internet has proven anything, it is that open technologies lead to unbounded innovation and unprecedented value added to the entire economy.

There are some panels listed up on SparkFun and Adafruit, although those are limited to 1.44 inch to 2.0 inch displays, which are useless for my use case. Likewise, these are geared towards Arduino compatibility, while I need something that is performant through a (relatively) fast and high bandwidth interface like exists on my laptop mainboard.

Bunnie/Xobs of the Kosagi Novena open laptop project clued me in to the fact that the iMX6 SoC present in the aforementioned device contains an EPD (Electronic Paper Display) controller. Although the pins on the chip likely aren’t broken out to the board, it gives me hope. My hope is that in the future devices such as the Raspberry Pi, CubieBoard, or other single-board computers will break out the controller to a header on the main board.

I think that by making this literal stockpile of panels available to open hardware enthusiasts, we can empower them to create anything from innovations in the eBook reader market to an entirely new class of device.

Categorieën: Mozilla-nl planet

Mozilla seeks to bring virtual reality to Firefox -

Nieuws verzameld via Google - ma, 26/01/2015 - 18:34

Mozilla seeks to bring virtual reality to Firefox
While Facebook has already acquired Oculus VR and Microsoft prepares HoloLens, Mozilla (maker of Firefox) is joining the virtual reality world, incorporating VR support into its continuous-development (Nightly) and Developer ...

and more »
Categorieën: Mozilla-nl planet

Adam Lofting: The week ahead: 26 Jan 2015

Mozilla planet - ma, 26/01/2015 - 17:09


I should have started the week by writing this, but I’ll do it quickly now anyway.

My current todo list.
List status: Pretty good. Mostly organized near the top. Less so further down. Fine for now.

Objectives to call out for this week.

  • Bugzilla and Github clean-out / triage
  • Move my home office out to the shed (depending on a few things)

+ some things that carry over from last week

  • Write a daily working process
  • Work out a plan for aligning metrics work with dev team heartbeats
  • Don’t let the immediate todo list get in the way of planning long term processes
  • Invest time in working open
  • Wrestle with multiple todo list systems until they (or I) work together nicely
Categorieën: Mozilla-nl planet