Planet Mozilla - http://planet.mozilla.org/
Updated: 2 months 1 day ago

Mozilla Security Blog: Analysis of the Alexa Top 1M sites

Wed, 28/06/2017 - 18:47

Prior to the release of the Mozilla Observatory a year ago, I ran a scan of the Alexa Top 1M websites. Although modern defensive security technologies had been available for years, their usage rates were frustratingly low. A lack of tooling, combined with poor and scattered documentation, had led to little awareness of countermeasures such as Content Security Policy (CSP), HTTP Strict Transport Security (HSTS), and Subresource Integrity (SRI).

A few months after the Observatory’s release — and 1.5M Observatory scans later — I reassessed the Top 1M websites. The situation appeared to be improving, with the use of HSTS and CSP up by approximately 50%. But were those improvements simply low-hanging fruit, or did the situation continue to improve over the following months?

Technology | April 2016 | October 2016 | June 2017 | % Change
---------- | ---------- | ------------ | --------- | --------
Content Security Policy (CSP) | .005% [1] / .012% [2] | .008% [1] / .021% [2] | .018% [1] / .043% [2] | +125%
Cookies (Secure/HttpOnly) [3] | 3.76% | 4.88% | 6.50% | +33%
Cross-origin Resource Sharing (CORS) [4] | 93.78% | 96.21% | 96.55% | +.4%
HTTPS | 29.64% | 33.57% | 45.80% | +36%
HTTP → HTTPS Redirection | 5.06% [5] / 8.91% [6] | 7.94% [5] / 13.29% [6] | 14.38% [5] / 22.88% [6] | +57%
Public Key Pinning (HPKP) | 0.43% | 0.50% | 0.71% | +42%
— HPKP Preloaded [7] | 0.41% | 0.47% | 0.43% | -9%
Strict Transport Security (HSTS) [8] | 1.75% | 2.59% | 4.37% | +69%
— HSTS Preloaded [7] | .158% | .231% | .337% | +46%
Subresource Integrity (SRI) | 0.015% [9] | 0.052% [10] | 0.113% [10] | +117%
X-Content-Type-Options (XCTO) | 6.19% | 7.22% | 9.41% | +30%
X-Frame-Options (XFO) [11] | 6.83% | 8.78% | 10.98% | +25%
X-XSS-Protection (XXSSP) [12] | 5.03% | 6.33% | 8.12% | +28%

The pace of improvement across the web appears to be continuing at an astounding rate. Although a 36% increase in the number of sites that support HTTPS might seem small, the absolute numbers are quite large — it represents over 119,000 websites.

Not only that, but 93,000 of those websites have chosen to be HTTPS by default, with 18,000 of them forbidding any HTTP access at all through the use of HTTP Strict Transport Security.

The sharp jump in the rate of Content Security Policy (CSP) usage is similarly surprising. CSP can be difficult to implement for a new website, and retrofitting it to an existing site (which most of the Alexa Top 1M sites are) often requires extensive rearchitecting. Between steadily improving documentation, advances in CSP3 such as ‘strict-dynamic’, and CSP policy generators such as the Mozilla Laboratory, it appears that we might be turning a corner on CSP usage around the web.
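For readers who have not deployed these headers before, here is a minimal sketch of what the countermeasures surveyed above look like in practice. The policy values are illustrative only, not recommendations taken from the scan itself:

import secrets

def security_headers():
    # Illustrative example values for the headers measured in the table above.
    nonce = secrets.token_urlsafe(16)  # a CSP nonce must be fresh per response
    return {
        # CSP3 'strict-dynamic' delegates trust via the nonce instead of
        # 'unsafe-inline' (compare footnotes 1 and 2 below).
        "Content-Security-Policy":
            "script-src 'strict-dynamic' 'nonce-%s'; object-src 'none'" % nonce,
        # HSTS with max-age of at least six months (footnote 8).
        "Strict-Transport-Security": "max-age=15768000; includeSubDomains",
        "X-Content-Type-Options": "nosniff",
        "X-Frame-Options": "DENY",
        "X-XSS-Protection": "1; mode=block",
    }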

Observatory Grading

Despite this progress, the vast majority of large websites around the web continue to not use Content Security Policy and Subresource Integrity. As these technologies — when properly used — can nearly eliminate huge classes of attacks against sites and their users, they are given a significant amount of weight in Observatory scans.

As a result of their low usage rates amongst established websites, they typically receive failing grades from the Observatory. Nevertheless, I continue to see improvements across the board:

Grade | April 2016 | October 2016 | June 2017 | % Change
----- | ---------- | ------------ | --------- | --------
A+ | .003% | .008% | .013% | +62%
A | .006% | .012% | .029% | +142%
B | .202% | .347% | .622% | +79%
C | .321% | .727% | 1.38% | +90%
D | 1.87% | 2.82% | 4.51% | +60%
F | 97.60% | 96.09% | 93.45% | -2.8%

As 969,924 scans were successfully completed in the last survey, a decrease in failing grades by 2.8% implies that over 27,000 of the largest sites in the world have improved from a failing grade in the last eight months alone.

In fact, my research indicates that over 50,000 websites around the web have directly used the Mozilla Observatory to improve their grades, indicated by scanning their website, making an improvement, and then scanning their website again. Of these 50,000 websites, over 2,500 have improved all the way from a failing grade to an A or A+ grade.

When I first built the Observatory a year ago at Mozilla, I had never imagined that it would see such widespread use. 3.8M scans across 1.55M unique domains later, it seems to have made a significant difference across the internet. I feel incredibly lucky to work at a company like Mozilla that has provided me with a unique opportunity to work on a tool designed solely to make the internet a better place.

Please share the Mozilla Observatory and the Web Security Guidelines so that the web can continue to see improvements over the years to come!

 

Footnotes:

  1. Allows 'unsafe-inline' in neither script-src nor style-src
  2. Allows 'unsafe-inline' in style-src only
  3. Amongst sites that set cookies
  4. Disallows foreign origins from reading the domain’s contents within user’s context
  5. Redirects from HTTP to HTTPS on the same domain, which allows HSTS to be set
  6. Redirects from HTTP to HTTPS, regardless of the final domain
  7. As listed in the Chromium preload list
  8. max-age set to at least six months
  9. Percentage is of sites that load scripts from a foreign origin
  10. Percentage is of sites that load scripts
  11. CSP frame-ancestors directive is allowed in lieu of an XFO header
  12. Strong CSP policy forbidding 'unsafe-inline' is allowed in lieu of an XXSSP header

The post Analysis of the Alexa Top 1M sites appeared first on Mozilla Security Blog.

Categories: Mozilla-nl planet

Support.Mozilla.Org: Important Platform Update

Wed, 28/06/2017 - 15:47

Hello, SUMO Mozillians!

We have an important update regarding our site to share with you, so grab something cold/hot to drink (depending on your climate), sit down, and give us your attention for the next few minutes.

As you know, we have been hard at work for quite some time now migrating the site over to a new platform. You were a part of the process from day one (since we knew we needed to find a replacement for Kitsune) and we would like to once more thank you for your participation throughout that challenging and demanding period. Many of you have given us feedback or lent a hand with testing, checking, cleaning up, and generally supporting our small team before, during, and after the migration.

Over time and due to technical difficulties beyond our team’s direct control, we decided to ‘roll back’ to Kitsune to better support the upcoming releases of Firefox and related Mozilla products.

The date of ‘rolling forward’ to Lithium was to be decided based on the outcome of leadership negotiations over contract terms and on the resolution of technical issues (such as redirects, content display, and localization flows) by teams from both sides working together.

In the meantime, we have been using Kitsune to serve content to users and provide forum support.

We would like to inform you that a decision has been made on Mozilla’s side to keep using Kitsune for the foreseeable future. Our team will investigate alternative options to improve and update Mozilla’s support for our users and ways to empower your contributions in that area.

What are the reasons behind this decision?

  1. Technical challenges in shaping Lithium’s platform to meet all of Mozilla’s user support needs.
  2. The contributor community’s feedback and requirements for contributing comfortably.
  3. The upcoming major releases for Firefox (and related products) requiring a smooth and uninterrupted user experience while accessing support resources.

What are the immediate implications of this decision?

  1. Mozilla will not be proceeding with a full ‘roll forward’ of SUMO to Lithium at this time. All open Lithium-related Bugzilla requests will be re-evaluated and may be closed as part of our next sprint (after the San Francisco All Hands).
  2. SUMO is going to remain on Kitsune for both support forum and knowledge base needs for now. Social support will continue on Respond.
  3. The SUMO team is going to kick off a reevaluation process for Kitsune’s technical status and requirements with the help of Mozilla’s IT team. This will include evaluating options of using Kitsune in combination with other tools/platforms to provide support for our users and contribution opportunities for Mozillians.

If you have questions about this update or want to discuss it, please use our community forums.

We are, as always, relying on your time and effort in successfully supporting millions of Mozilla’s software users and fans around the world. Thank you for your ongoing participation in making the open web better!

Sincerely yours,

The SUMO team

P.S. Watch the video from the first day of the SFO All Hands if you want to see us discuss the above (and more).

 

Categories: Mozilla-nl planet

Chris Lord: Goodbye Mozilla

Wed, 28/06/2017 - 13:16

Today is effectively my last day at Mozilla, before I start at Impossible on Monday. I’ve been here for 6 years and a bit and it’s been quite an experience. I think it’s worth reflecting on, so here we go. Fair warning: if you have no interest in me or Mozilla, this is going to make pretty boring reading.

I started on June 6th 2011, several months before the (then new, since moved) London office opened. Although my skills lay (lie?) in user interface implementation, I was hired mainly for my graphics and systems knowledge. Mozilla had in the region of 500 employees then, I think, and it was an interesting time. I’d been working on the code-base for several years prior at Intel, on a headless backend that we used to build a Clutter-based browser for Moblin netbooks. I wasn’t completely unfamiliar with the code-base, but it still took a long time to get to grips with. We’re talking several million lines of code with several years of legacy, in a language I still consider myself to be pretty novice at (C++).

I started on the mobile platform team, and I would consider this to be my most enjoyable time at the company. The mobile platform team was a multi-discipline team that did general low-level platform work for the mobile (Android and Meego) browser. When we started, the browser was based on XUL and was multi-process. Mobile was often the breeding ground for new technologies that would later go on to desktop. It wasn’t long before we started developing a new browser based on a native Android UI, removing XUL and relegating Gecko to page rendering. At the time this felt like a disappointing move. The reason the XUL-based browser wasn’t quite satisfactory was mainly due to performance issues, and as a platform guy, I wanted to see those issues fixed, rather than worked around. In retrospect, this was absolutely the right decision and led to what I’d still consider to be one of Android’s best browsers.

Despite performance issues being one of the major driving forces for making this move, we did a lot of platform work at the time too. As well as being multi-process, the XUL browser had a compositor system for rendering the page, but this wasn’t easily portable. We ended up rewriting this, first almost entirely in Java (which was interesting), then with the rendering part of the compositor in native code. The input handling remained in Java for several years (pretty much until FirefoxOS, where we rewrote that part in native code, then later, switched Android over).

Most of my work during this period was based around improving performance (both perceived and real) and fluidity of the browser. Benoit Girard had written an excellent tiled rendering framework that I polished and got working with mobile. On top of that, I worked on progressive rendering and low precision rendering, which combined are probably the largest body of original work I’ve contributed to the Mozilla code-base. Neither of them are really active in the code-base at the moment, which shows how good a job I didn’t do maintaining them, I suppose.

Although most of my work was graphics-focused on the platform team, I also got to do some layout work. I worked on some over-invalidation issues before Matt Woodrow’s DLBI work landed (which nullified that, but I think that work existed in at least one release). I also worked a lot on fixed position elements staying fixed to the correct positions during scrolling and zooming, another piece of work I was quite proud of (and probably my second-biggest contribution). There was also the opportunity for some UI work, when it intersected with platform. I implemented Firefox for Android’s dynamic toolbar, and made sure it interacted well with fixed position elements (some of this work has unfortunately been undone with the move from the partially Java-based input manager to the native one). During this period, I was also regularly attending and presenting at FOSDEM.

I would consider my time on the mobile platform team a pretty happy and productive time. Unfortunately for me, those of us with graphics specialities on the mobile platform team were taken off that team and put on the graphics team. I think this was the start of a steady decline in my engagement with the company. At the time this move was made, Mozilla was apparently trying to consolidate teams around products, and this was the exact opposite happening. The move was never really explained to me and I know I wasn’t the only one that wasn’t happy about it. The graphics team was very different to the mobile platform team and I didn’t feel I fit in as well. It felt more boisterous and less democratic than the mobile platform team, and as someone that generally shies away from arguments and just wants to get work done, it was hard not to feel sidelined slightly. I was also quite disappointed that people didn’t seem particularly familiar with the graphics work I had already been doing and that I was tasked, at least initially, with working on some very different (and very boring) desktop Linux work, rather than my speciality of mobile.

I think my time on the graphics team was pretty unproductive, with the exception of the work I did on b2g, improving tiled rendering and getting graphics memory-mapped tiles working. This was particularly hard as the interface was basically undocumented, and its implementation details could vary wildly depending on the graphics driver. Though I made a huge contribution to this work, you won’t see me credited in the tree unfortunately. I’m still a little bit sore about that. It wasn’t long after this that I requested to move to the FirefoxOS systems front-end team. I’d been doing some work there already and I’d long wanted to go back to doing UI. It felt like I either needed a dramatic change or I needed to leave. I’m glad I didn’t leave at this point.

Working on FirefoxOS was a blast. We had lots of new, very talented people, a clear and worthwhile mission, and a new code-base to work with. I worked mainly on the home-screen, first with performance improvements, then with added features (app-grouping being the major one), then with a hugely controversial and probably mismanaged (on my part, not my manager’s – who was excellent) rewrite. The rewrite was good and fixed many of the performance problems of what it was replacing, but unfortunately also removed features, at least initially. Turns out people really liked the app-grouping feature.

I really enjoyed my time working on FirefoxOS, and getting a nice clean break from platform work, but it was always bitter-sweet. Everyone working on the project was very enthusiastic to see it through and do a good job, but it never felt like upper management’s focus was in the correct place. We spent far too much time kowtowing to the desires of phone carriers and trying to copy Android and not nearly enough time on basic features and polish. Up until around v2.0 and maybe even 2.2, the experience of using FirefoxOS was very rough. Unfortunately, as soon as it started to show some promise and as soon as we had freedom from carriers to actually do what we set out to do in the first place, the project was cancelled, in favour of the whole Connected Devices IoT debacle.

If there was anything that killed morale for me more than my unfortunate time on the graphics team, and more than having FirefoxOS prematurely cancelled, it would have to be the Connected Devices experience. I appreciate it as an opportunity to work on random semi-interesting things for a year or so, and to get some entrepreneurship training, but the mismanagement of that whole situation was pretty epic. To take a group of hundreds of UI-focused engineers and tell them that, with very little help, they should organise themselves into small teams and create IoT products still strikes me as an idea so crazy that it definitely won’t work. Certainly not the way we did it anyway. The idea, I think, was that we’d be running several internal start-ups and we’d hopefully get some marketable products out of it. What business a not-for-profit company, based primarily on doing open-source, web-based engineering, has making physical, commercial products is questionable, but it failed long before that could be considered.

The process involved coming up with an idea, presenting it and getting approval to run with it. You would then repeat this approval process at various stages during development. It was, however, very hard to get approval for enough resources (both time and people) to finesse an idea long enough to make it obviously a good or bad idea. That aside, I found it very demoralising to not have the opportunity to write code that people could use. I did manage it a few times, in spite of what was happening, but none of this work I would consider myself particularly proud of. Lots of very talented people left during this period, and then at the end of it, everyone else was laid off. Not a good time.

Luckily for me and the team I was on, we were moved under the umbrella of Emerging Technologies before the lay-offs happened, and this also allowed us to refocus away from trying to make an under-featured and pointless shopping-list assistant and back onto the underlying speech-recognition technology. This brings us almost to present day now.

The DeepSpeech speech recognition project is an extremely worthwhile project, with a clear mission, great promise and interesting underlying technology. So why would I leave? Well, I’ve practically ended up on this team by a series of accidents and random happenstance. It’s been very interesting so far, I’ve learnt a lot and I think I’ve made a reasonable contribution to the code-base. I also rewrote python_speech_features in C for a pretty large performance boost, which I’m pretty pleased with. But at the end of the day, it doesn’t feel like this team will miss me. I too often spend my time finding work to do, and to be honest, I’m just not interested enough in the subject matter to make that work long-term. Most of my time on this project has been spent pushing to open it up and make it more transparent to people outside of the company. I’ve added model exporting, better default behaviour, a client library, a native client, Python bindings (+ example client) and most recently, Node.js bindings (+ example client). We’re starting to get noticed and starting to get external contributions, but I worry that we still aren’t transparent enough and still aren’t truly treating this as the open-source project it is and should be. I hope the team can push further towards this direction without me. I think it’ll be one to watch.

Next week, I start working at a new job doing a new thing. It’s odd to say goodbye to Mozilla after 6 years. It’s not easy, but many of my peers and colleagues have already made the jump, so it feels like the right time. One of the big reasons I’m moving, and moving to Impossible specifically, is that I want to get back to doing impressive work again. This is the largest regret I have about my time at Mozilla. I used to blog regularly when I worked at OpenedHand and Intel, because I was excited about the work we were doing and I thought it was impressive. This wasn’t just youthful exuberance (he says, realising how ridiculous that sounds at 32), I still consider much of the work we did to be impressive, even now. I want to be doing things like that again, and it feels like Impossible is a great opportunity to make that happen. Wish me luck!

Categories: Mozilla-nl planet

Daniel Pocock: How did the world ever work without Facebook?

Tue, 27/06/2017 - 21:29

Almost every day, somebody tells me there is no way they can survive without some social media like Facebook or Twitter. Otherwise mature adults are fearful that, without these dubious services, they would have no human contact ever again, that they would die of hunger, and that the sky would come crashing down too.

It is particularly disturbing for me to hear this attitude from community activists and campaigners. These are people who aspire to change the world, but can you really change the system using the tools the system gives you?

Revolutionaries like Gandhi and the Bolsheviks don't have a lot in common, but both of them changed the world and both of them did so by going against the system. Gandhi, of course, relied on non-violence while the Bolsheviks continued to rely on violence long after taking power. Neither of them needed social media but both are likely to be remembered far longer than any viral video clip you have seen recently.

With US border guards asking visitors for their Facebook profiles and Mark Zuckerberg being a regular participant at secretive Bilderberg meetings, it should be clear that Facebook and conventional social media are not on your side; they are on theirs.

Kettling has never been easier

When street protests erupt in major cities such as London, the police build fences around the protesters, cutting them off from the rest of the world. They become an island in the middle of the city, like a construction site or broken down bus that everybody else goes around. The police then set about arresting one person at a time, taking their name and photograph and then slowly letting them leave in different directions. This strategy is called kettling.

Facebook helps kettle activists in their armchairs. The police state can gather far more data about them, while their impact is even more muted than if they ventured out of their homes.

You are more likely to win the lottery than make a viral campaign

Every week there is news about some social media campaign that has gone viral. Every day, marketing professionals, professional campaigners and motivated activists sit at their computer spending hours trying to replicate this phenomenon.

Do the math: how many of these campaigns can really be viral success stories? Society can only absorb a small number of these campaigns at any one time. For most of the people trying to ignite such campaigns, their time and energy is wasted, much like money spent buying lottery tickets and with odds that are just as bad.

It is far better to focus on the quality of your work in other ways than to waste any time on social media. If you do something that is truly extraordinary, then other people will pick it up and share it for you, and that is how a viral campaign really begins. The time and effort you put into trying to force something to become viral wastes the energy and concentration you need to make something that is worthy of really being viral.

An earthquake and an escaped lion never needed to announce themselves on social media to become an instant hit. If your news isn't extraordinary enough for random people to spontaneously post, share and tweet it in the first place, how can it ever go far?

The news media deliberately over-rates social media

News media outlets, including TV, radio and print, gain a significant benefit crowd-sourcing live information, free of charge, from the public on social media. It is only logical that they will cheer on social media sites and give them regular attention. Have you noticed that whenever Facebook's publicity department makes an announcement, the media are quick to publish it ahead of more significant stories about social or economic issues that impact our lives? Why do you think the media puts Facebook up on a podium like this, ahead of all other industries, if the media aren't getting something out of it too?

The tail doesn't wag the dog

One particular example is the news media's fascination with Donald Trump's Twitter account. Some people have gone as far as suggesting that this billionaire could have simply parked his jet and spent the whole of 2016 at one of his golf courses sending tweets and he would have won the presidency anyway. Suggesting that Trump's campaign revolved entirely around Twitter is like suggesting the tail wags the dog.

The reality is different: Trump has been a prominent public figure for decades, both in the business and entertainment world. During his presidential campaign, he had at least 220 major campaign rallies attended by over 1.2 million people in the real world. Without this real-world organization and history, the Twitter account would have been largely ignored like the majority of Twitter accounts.

On the left of politics, the media have been just as quick to suggest that Bernie Sanders and Jeremy Corbyn have been supported by the "Facebook generation". This label is superficial and deceiving. The reality, again, is a grassroots movement that has attracted young people to attend local campaign meetings in pubs up and down the country. Getting people to get out and be active is key. Social media is incidental to their campaign, not indispensable.

Real-world meetings, big or small, are immensely more powerful than a social media presence. Consider the Trump example again: if 100,000 people receive one of his tweets, how many even notice it in the non-stop stream of information we are bombarded with today? On the other hand, if 100,000 people bellow out a racist slogan at one of his rallies, is there any doubt whether each and every one of those people is engaged with the campaign at that moment? If you could choose between 100 extra Twitter followers or 10 extra activists attending a meeting every month, which would you prefer?

Do we need this new definition of a Friend?

Facebook is redefining what it means to be a friend.

Is somebody who takes pictures of you and insists on sharing them with hundreds of people, tagging your face for the benefit of biometric profiling systems, really a friend?

If you want to find out what a real friend is and who your real friends really are, there is no better way to do so than blowing away your Facebook and Twitter accounts and waiting to see who contacts you personally about meeting up in the real world.

If you look at a profile on Facebook or Twitter, one of the most prominent features is the number of friends or followers they have. Research suggests that humans can realistically cope with no more than about 150 stable relationships. Facebook, however, has turned Friending people into something like a computer game.

This research is also given far more attention than it deserves, though: the number of really meaningful friendships that one person can maintain is far smaller. Think about how many birthdays and spouses' names you can remember and those may be the number of real friendships you can manage well. In his book Busy, Tony Crabbe suggests between 10 and 20 friendships are in this category and you should spend all your time with these people rather than letting your time be spread thinly across superficial Facebook "friends".

This same logic can be extrapolated to activism and marketing in its many forms: is it better for a campaigner or publicist to have fifty journalists following him on Twitter (where tweets are often lost in the blink of an eye) or three journalists who he meets for drinks from time to time?

Facebook alternatives: the ultimate trap?

Numerous free, open source projects have tried to offer an equivalent to Facebook and Twitter. GNU social, Diaspora and identi.ca are some of the more well known examples.

Trying to persuade people to move from Facebook to one of these platforms rarely works. In most cases, Metcalfe's law suggests the size of Facebook will suck them back in like the gravity of a black hole.

To help people really beat these monstrosities, the most effective strategy is to help them live without social media, whether it is proprietary or not. The best way to convince them may be to give it up yourself and let them see how much you enjoy life without it.

Share your thoughts

The FSFE community has recently been debating the use of proprietary software and services. Please feel free to join the list and click here to reply on the thread.

Categories: Mozilla-nl planet

Tarek Ziadé: Advanced Molotov example

Fri, 23/06/2017 - 00:00

Last week, I blogged about how to drive Firefox from a Molotov script using Arsenic.

It is pretty straightforward if you are doing some isolated interactions with Firefox and if each worker in Molotov lives its own life.

However, if you need to have several "users" (==workers in Molotov) running in a coordinated way on the same web page, it gets a little bit tricky.

Each worker is its own coroutine, and it triggers the execution of one scenario by calling the coroutine that was decorated with @scenario.

Let's consider this simple use case: we want to run five workers in parallel that all visit the same etherpad lite page with their own Firefox instance through Arsenic.

One of them is adding some content in the pad and all the others are waiting on the page to check that it is updated with that content.

So we want four workers to wait on a condition (= pad written) before they check that they can see it.

Moreover, since Molotov can call a scenario many times in a row, we need to make sure that everything was done in the previous round before changing the pad content again. That is, four workers did check the content of the pad.

To do all that synchronization, Python's asyncio offers primitives that are similar to the ones you would use with threads. asyncio.Event can be used, for instance, to have readers waiting for the writer and vice-versa.

In the example below, a class wraps two Events and exposes simple methods to do the syncing by making sure readers and writer are waiting for each other:

import asyncio


class Notifier(object):
    def __init__(self, readers=5):
        self._current = 1
        self._until = readers
        self._readers = asyncio.Event()
        self._writer = asyncio.Event()

    def _is_set(self):
        return self._current == self._until

    async def wait_for_writer(self):
        # readers block here until the writer calls written()
        await self._writer.wait()

    async def one_read(self):
        if self._is_set():
            return
        self._current += 1
        if self._current == self._until:
            self._readers.set()

    def written(self):
        self._writer.set()

    async def wait_for_readers(self):
        # the writer blocks here until every reader has called one_read()
        await self._readers.wait()

Using this class, the writer can call written() once it has filled the pad and the readers can wait for that event by calling wait_for_writer() which blocks until the write event is set.

one_read() is then called for each read. This second event is used by the next writer to make sure it can change the pad content after every reader has read it.
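Before wiring it into Molotov, it can help to see the protocol in isolation. Here is a toy sketch (not part of the load test) that reuses the Notifier class above with two readers and one writer:

import asyncio

async def demo():
    # readers=3 here because the counter starts at 1 and counts up to the
    # total number of workers, so the class waits for two one_read() calls.
    notifier = Notifier(readers=3)

    async def reader(n):
        await notifier.wait_for_writer()   # block until the pad is written
        print("reader %d sees the new content" % n)
        await notifier.one_read()          # tell the writer we are done

    async def writer():
        print("writer fills the pad")
        notifier.written()                 # wake up the readers
        await notifier.wait_for_readers()  # block until both readers read
        print("every reader is done, a new round could start")

    await asyncio.gather(reader(1), reader(2), writer())

asyncio.run(demo())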

So how do we use this class in a Molotov test? There are several options and the simplest one is to create one Notifier instance per run and set it in a variable:

import molotov


@molotov.scenario(1)
async def example(session):
    get_var = molotov.get_var
    notifier = get_var('notifier' + str(session.step), factory=Notifier)
    wid = session.worker_id

    if wid != 4:
        # I am NOT worker 4! I read the pad.
        # Wait for worker #4 to edit the pad.
        await notifier.wait_for_writer()
        # <.. pad reading here...>
        # Notify that we've read it.
        await notifier.one_read()
    else:
        # I am worker 4! I write in the pad.
        if session.step > 1:
            # Wait for the previous round's readers to have finished
            # before we start a new round.
            previous_notifier = get_var('notifier' + str(session.step - 1))
            await previous_notifier.wait_for_readers()
        # <... writes in the pad...>
        # Inform the readers that the write task was done.
        notifier.written()

A lot is going on in this scenario. Let's look at each part in detail. First of all, the notifier is created lazily as a var via get_var(), using the Notifier class as a factory. Its name contains the session step.

The step value is incremented by Molotov every time a worker is running a scenario, and we can use that value to create one distinct Notifier instance per run. It starts at 1.

Next, the session.worker_id value gives each distinct worker a unique id. If you run molotov with 5 workers, you will get values from 0 to 4.

We are making the last worker (worker_id == 4) the one that will be in charge of writing in the pad.

The other workers (the readers) just use wait_for_writer() to sit and wait for worker 4 to write the pad. Worker 4 notifies them with a call to written().

The last part of the script allows Molotov to run the scenario several times in a row using the same workers. When the writer starts its work, if the step value is greater than one, it means that we have already run the test at least once.

The writer, in that case, gets back the Notifier from the previous run and verifies that all the readers did their job before changing the pad.

All of this syncing work sounds complicated, but once you understand the pattern, it lets you run advanced scenarios in Molotov where several concurrent "users" need to collaborate.

You can find the full script at https://github.com/tarekziade/molosonic/blob/master/loadtest.py

Categories: Mozilla-nl planet

Firefox UX: Let‘s tackle the same challenge again, and again.

Thu, 22/06/2017 - 21:12
Actually, let’s not!

The products we build get more design attention as our Firefox UX team has grown from about 15 to 45 people. Designers can now continue to focus on their product after the initial design is finished, instead of having to move to the next project. This is great, as it helps us improve our products step by step. But it also takes increasing effort to keep this growing team in sync and able to answer all questions posed to us in a timely manner.

Scaling communication from small to big teams leads to massive effort for a few.

Especially for engineers and new designers, it is often difficult to get timely answers to simple questions. Those answers are often in the original spec, which too often is hard to locate. Or worse, they may be in the mind of the designer, who may have left or who receives too many questions to respond in time.

In a survey we ran in early 2017, developers reported feeling that they

  • spend too much time identifying the right specs to build from,
  • spend too much time waiting for feedback from designers, and
  • spend too much time mapping new designs to existing UI elements.

In the same survey, designers reported feeling that they

  • spend too much time identifying current UI to re-use in their designs, and
  • spend too much time re-building current UI to use in their designs.

All those repetitive tasks people feel they spend too much time on ultimately keep us from tackling newer and bigger challenges. ‒ So, actually, let‘s not spend our time on those.

Let’s help people spend time on what they love to do.

Shifting some communication to a central tool can reduce load on people and lower the barrier for entry.

Let’s build tools that help developers know what a given UI should look like, without them needing to wait for feedback from designers. And let’s use that system for designers to identify UI we already built, and to learn how they can re-use it.

We call this the Photon Design System,
and its first beta version is ready to be used:
design.firefox.com/photon

We are happy to receive feedback and contributions on the current content of the system, as well as on what content to add next.

Photon Design System

Based on what we learned from people, we are building our design system to help people:

  • find what they are looking for easily,
  • understand the context of that quickly, and
  • more deeply understand Firefox Design.

Currently the Photon Design System covers fundamental design elements like icons, colors, typography and copy-writing, as well as our design principles and guidelines on how to design for scale. Defining those has already helped designers better align across products and features, and developers have a definitive source to fall back to when a design does not specify a color, icon or other detail.

Growth

With all the design fundamentals in place we are starting to combine them into defined components that can easily be reused to create consistent Firefox UI across all platforms, from mobile to desktop, and from web-based to native. This will add value for people working on Firefox products, as well as help people working on extensions for Firefox.

If you are working on Firefox UI

We would love to learn from you what principles, patterns & components your team’s work touches, and what you feel is worth documenting for others to learn from, and use in their UI.

Share your principle/pattern/component with us!

And if you haven’t yet, ask yourself where you could use what’s already documented in the Photon Design System and help us find more and more synergies across our products to utilize.

If you are working on a Firefox extension

We would love to learn about where you would have wanted design support when building your extension, and when you had to spend more time on design than you intended to.

Share with us!

Let‘s tackle the same challenge again, and again. was originally published in Firefox User Experience on Medium, where people are continuing the conversation by highlighting and responding to this story.

Categories: Mozilla-nl planet

Mozilla Open Design Blog: MDN’s new design is in Beta

Thu, 22/06/2017 - 19:46

Change is coming to MDN. In a recent post, we talked about updates to the MDN brand, and this time we want to focus on the upcoming design changes for MDN. MDN started as a repository for all Mozilla documentation, but today MDN’s mission is to provide developers with the information they need to build things on the open Web. We want to more clearly represent that mission in the naming and branding of MDN.

New MDN logo

MDN’s switch to new branding reflects an update of Mozilla’s overall brand identity, and we are taking this opportunity to update MDN’s visual design to match Mozilla’s design language and clean new look. For MDN that means bold typography that highlights the structure of the page, more contrast, and a reduction to the essentials. Color in particular is more sparingly used, so that the code highlighting stands out.

Here’s what you can expect from the first phase:

screenshot of new MDN design

New MDN design

The core idea behind MDN’s brand identity change is that MDN is a resource for web developers. We realize that MDN is a critical resource for many web developers and we want to make sure that this update is an upgrade for all users. Instead of one big update, we will make incremental changes to the design in several phases. For the initial launch, we will focus on applying the design language to the header, footer and typography. The second phase will see changes to landing pages such as the web platform, learning area, and MDN start page. The last part of the redesign will cover the article pages themselves, and prepare us for any functional changes we’ve got coming in the future.

Today, we are launching the first phase of the redesign to our beta users. Over the next few weeks we’ll collect feedback, and fix potential issues before releasing it to all MDN users in July. Become a beta tester on MDN and be among the first to see these updates, track the progress, and provide us with feedback to make the whole thing even better for the official launch.

The post MDN’s new design is in Beta appeared first on Mozilla Open Design.

Categories: Mozilla-nl planet

Air Mozilla: Mozilla Gigabit Eugene Open House

Thu, 22/06/2017 - 18:00

Mozilla Gigabit Eugene Open House Hello Eugene, Oregon! Come meet with local innovators, educators, entrepreneurs, students, and community advocates and learn about what it means to be a “Mozilla Gigabit...

Categories: Mozilla-nl planet

Air Mozilla: Gigabit Community Fund June 2017 RFP Webinar

Thu, 22/06/2017 - 17:13

Gigabit Community Fund June 2017 RFP Webinar This summer, we're launching a new round of the Mozilla Gigabit Community Fund. We're funding projects that explore how high-speed networks can be leveraged for...

Categories: Mozilla-nl planet

Hacks.Mozilla.Org: Powerful New Additions to the CSS Grid Inspector in Firefox Nightly

Thu, 22/06/2017 - 17:00

CSS Grid is revolutionizing web design. It’s a flexible, simple design standard that can be used across all browsers and devices. Designers and developers are rapidly falling in love with it and so are we. That’s why we’ve been working hard on the Firefox Developer Tools Layout panel, adding powerful upgrades to the CSS Grid Inspector and Box Model. The latest improvements are now available in Firefox Nightly.

Layout Panel Improvements

The new Layout Panel lists all the available CSS Grid containers on the page and includes an overlay to help you visualize the grid itself. Now you can customize the information displayed on the overlay, including grid line numbers and dimensions.

This is especially useful if you’re still getting to know CSS Grid and how it all works.

There’s also a new interactive grid outline in the sidebar. Mouse over the outline to highlight parts of the grid on the page and display size, area, and position information.

The new “Display grid areas” setting shows the bounding areas and the associated area name in every cell. This feature was inspired by CSS Grid Template Builder, which was created by Anthony Dugois.

Finally, the Grid Inspector is capable of visualizing transformations applied to the grid container. This lets developers accurately see where their grid lines are on the page for any grids that are translated, skewed, rotated or scaled.

Improved Box Model Panel

We also added a Box Model Properties component that lists properties that affect the position, size and geometry of the selected element. In addition, you’ll be able to see and edit the top/left/bottom/right position and height/width properties—making live layout tweaks quick and easy.

Finally, you’ll also be able to see the offset parent for any positioned element, which is useful for quickly finding nested elements.

As always, we want to hear what you like or don’t like and how we can improve Firefox Dev Tools. Find us on Discourse or @firefoxdevtools on Twitter.

Thanks to the Community

Many people were influential in shipping the CSS Layout panel in Nightly, especially the Firefox Developer Tools and Developer Relations teams. We thank them for all their contributions to making Firefox awesome.

We also got a ton of help from the amazing people in the community, and participants in programs like Undergraduate Capstone Open Source Projects (UCOSP) and Google Summer of Code (GSoC). Many thanks to all the contributors who helped land features in this release including:

Micah Tigley – Computer science student at the University of Lethbridge, Winter 2017 UCOSP student, Summer 2017 GSoC student. Micah implemented the interactive grid outline and grid area display.

Alex Lockhart – Dalhousie University student, Winter 2017 UCOSP student. Alex contributed to the Box Model panel with the box model properties and position information.

Sheldon Roddick –  Student at Thompson Rivers University, Winter 2017 UCOSP student. Sheldon did a quick contribution to add the ability to edit the width and height in the box model.

If you’d like to become a contributor to Firefox Dev Tools, hit us up on GitHub or Slack or #devtools on irc.mozilla.org. There you will find all the resources you need to get started.

Categories: Mozilla-nl planet

Air Mozilla: Reps Weekly Meeting Jun. 22, 2017

Thu, 22/06/2017 - 16:00

Reps Weekly Meeting Jun. 22, 2017 This is a weekly call with some of the Reps to discuss all matters about/affecting Reps and invite Reps to share their work with everyone.

Categories: Mozilla-nl planet

Dustin J. Mitchell: Taskcluster Manual Revamp

Thu, 22/06/2017 - 13:22

As the Great Taskcluster Migration draws near the finish line, we are seeing people new to Taskcluster and keen to take advantage of its new features every day. It’s exciting to build something with such expressive power: easy-to-use loaners, automatic toolchain builds, and a simple process for adding new tests, to name just a few.

We have long had a thorough reference section, with technical details of the various microservices and workers that comprise Taskcluster, but that information is a bit too deep for a newcomer. A few years ago, we introduced a tutorial to guide the beginning user to the knowledge they need for their use-case, but the tutorial only goes so far.

Daniele Procida gave a great talk at PyCon 2017 about structuring documentation, which came down to this diagram:

Tutorials   | How-To Guides
------------|--------------
Discussions | Reference

This shows four types of documentation. The top is practical, while the bottom is more theoretical. The left side is useful for learning, while the right side is useful when trying to solve a problem. So the missing components are “discussion” and “how-to guides”. Daniele’s “discussions” means prose-style expositions of a system, organized to increase the reader’s understanding of the system as a whole.

Taskcluster has had a manual for quite a while, but it did not really live up to this promise. Instead, it was a collection of documents that didn’t fit anywhere else.

Over the last few months, we have refashioned the manual to fit this form. It now starts out with a gentle but thorough description of tasks (the core concept of Taskcluster), then explains how tasks are executed before delving into the design of the system. At the end, it includes a bunch of use-cases with advice on how to solve them, filling the “how-to guides” requirement.

If you’ve been looking to learn more about Taskcluster, check it out!

Categories: Mozilla-nl planet

Mozilla Reps Community: RepsNext – Status Update June 2017

Thu, 22/06/2017 - 12:19

In the past few months we have kept working on the implementation of our RepsNext initiative. The RepsNext initiative started more than a year ago with the goal of bringing the Mozilla Reps program to the next level. Back in January we wrote a status update. Almost half a year later, we want to provide a further update. We have also published our OKRs for the current quarter with goals to further the implementation of RepsNext.

RepsNext overview explaining what is done and what not, explained further down in text in this article.

Resources

The Resources training is finalized. It’s still a little bit text-heavy, but we want to move forward with the training and iterate based on feedback. For this, we have reached out to a few selected Reps based on the past 6 months to ask them to test the training and give initial feedback about the process and content. Once we have this feedback, we will adjust the training if needed and then open up the Resources track for applications. Applications will most probably be done in a Google Form and will include general info about the Rep as well as a free-text input field where the Rep can explain why they are fitted for the track as well as provide some links to previous, good budget requests they filed. You can learn more about the Resources Track on the Resources Wiki page.

Onboarding Process

We have simplified and streamlined the on-boarding process for new Reps. Until April we had a lot of applications that were open for more than 6 months. We are happy to report that we have started to on-board 20 new Reps between April and now. A further 10 Reps are in the administrative process of signing the agreement and creating profiles on the Portal. All of this is thanks to a new Webinar. The Webinar allows us to give new Reps the much-needed first information about the program and what to expect as a Rep.

Participation Alignment

The Council is working with the Participation team in order to co-create the quarterly and yearly goals and OKRs for 2017. This happened twice already this year and we will continue to give our valuable input and feedback for the quarters to come. The program’s goals are also being created based on the team’s goals and priorities. We are also attending the monthly Open Innovation Team calls. Of course this is ongoing work that will continue. The Reps Council is also involved in strategic and operational discussions as representatives for the broader community, giving feedback on the currently ongoing strategic projects. All of this work will continue at the All Hands in San Francisco later this month.

Leadership

At the beginning of our work on RepsNext, we wanted to do a specific Leadership Track Reps can apply for as a specialization. Throughout the past months it became clear that we want all Reps to improve their leadership skills to help out other Reps as well as their communities. Therefore we created an initial list of good leadership resources for everyone to access and learn. At first this is a basic list of resources which will be improved on in the future. We want all Reps to be able to improve their leadership skills as soon as possible and later build on top of this knowledge with further resources. Please provide your feedback in the Discourse topic!

Coaching

Previously known as Regional Coaches, Community Coaches will continue to support local communities. In addition to that, we are currently creating a Coaches Training to train new Reps on coaching skills as well as to help existing mentors improve their skills. These coaches will be able to coach Reps with regard to personal development. The idea is to have the Coaches Training on a self-serve basis, so everyone can take the training and complete a narrative which will be evaluated at the end to graduate from the training. This will help us to increase the quality of coaching/mentoring in the Reps program as well as in local communities. Additionally it will decrease the current bottleneck we have onboarding new Reps and we will be able to assign a coach to every Rep on a one year commitment basis with the option to switch the coach after this period. We are currently reviewing the implementation proposal so we can add the training to Teachable and publish it for all Reps.

Functional areas

We recently asked all Reps to choose their path for the future. This gives us a valuable basis for discussions around functional doers in the Reps program. We will further build out the exact details about functional doers and their interests. The ongoing strategy projects will additionally give us valuable guidance in coming up with the perfect opportunities for functional doers. If you are interested in statistics about this survey, join our discussion on Discourse.

Upcoming work

We are in the last steps to finish our work on the Resources track and the Coaching training. This allows us to start talks on further improvements in the third quarter of this year. We are also going to the All Hands to discuss Reps, Strategy, Mobilizers and more with the Open Innovation team. We will update you about the outcomes of that after the All Hands.

You can follow all the Reps program’s goals and progress in the Reps Issue Tracker.

Which thoughts cross your mind upon reading this? Where would you like to help out? Let’s keep the conversation going! Join the discussion on Discourse.

Categories: Mozilla-nl planet

Alex Vincent: Validating directory inputs?

Thu, 22/06/2017 - 09:38

A quick thought here.  I spent several hours today trying to figure out why a simple Firefox toolkit application wouldn’t work.  (I don’t know what to call “-app application.ini” applications anymore, as “XULRunner” has definitely fallen from favor…)  It took me far too long to realize that the “default” subdirectory should’ve been named “defaults” – something that I already know about these apps, but I only build them from scratch every two years or so…

Catching this sort of rookie mistake is, fundamentally, an argument validation exercise:  the main difference is instead of the argument being an object of some kind, it’s a directory on the filesystem.  If Mozilla has a module or component for validating a directory’s structure in general, I haven’t heard of it…

Which is the point of my post here.  I’m wondering what general-purpose libraries exist for validating a directory tree’s structure and contents at a basic level.  Somebody out there must have run into this problem before and created libraries for this.  I’d love to see libraries written in C++, D, Python, NodeJS and/or privileged JavaScript.  Please reply to my post if you can point me to them.  (For once, a quick search on the world’s most popular search engine fails me…)  Bonus points for libraries that allow passing in callbacks for file-specific validation. (“Is there a syntactically correct .ini file at (root)/application.ini?”)
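For illustration, something along these lines is the sort of API I am imagining; this is a rough Python sketch, and the names, spec format, and expected paths are entirely made up:

import configparser
from pathlib import Path

def validate_tree(root, spec):
    """Check a directory against a spec mapping relative paths to
    None (must simply exist) or a callback(Path) -> bool."""
    root = Path(root)
    errors = []
    for rel, check in spec.items():
        path = root / rel
        if not path.exists():
            errors.append("missing: " + rel)
        elif check is not None and not check(path):
            errors.append("invalid: " + rel)
    return errors

def is_valid_ini(path):
    # "Is there a syntactically correct .ini file at (root)/application.ini?"
    parser = configparser.ConfigParser()
    try:
        parser.read(path)
        return True
    except configparser.Error:
        return False

# A check like this would have caught my "default" vs "defaults" mistake:
print(validate_tree("myapp", {
    "application.ini": is_valid_ini,
    "defaults/preferences": None,
    "chrome": None,
}))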

Categories: Mozilla-nl planet

Air Mozilla: Community Participation Guidelines Revision Brownbag (APAC)

Wed, 21/06/2017 - 22:00

Community Participation Guidelines Revision Brownbag (APAC) A revised version of Mozilla's Community Participation Guidelines was released in May 2017. Please join Larissa Shapiro (Head of D&I) and Lizz Noonan (D&I Coordinator)...

Categories: Mozilla-nl planet

Air Mozilla: The Joy of Coding - Episode 103

Wed, 21/06/2017 - 19:00

The Joy of Coding - Episode 103 mconley livehacks on real Firefox bugs while thinking aloud.

Categories: Mozilla-nl planet
