Mozilla Nederland
The Dutch Mozilla community

Subscribe to the Mozilla planet feed
Planet Mozilla - http://planet.mozilla.org/
Updated: 2 weeks 2 days ago

The Mozilla Blog: Being Open and Connected on Your Own Terms with our New Facebook Container Add-On

Tue, 27/03/2018 - 15:01

There’s an important conversation going on right now about the power that companies like Facebook wield over our lives. These businesses are built on technology platforms that are so complex, it’s unreasonable to expect users to fully understand the implications of interacting with them. As a user of the internet, you deserve a voice and should be able to use the internet on your own terms. In light of recent news on how the aggregation of user data can be used in surprising ways, we’ve created an add-on for Firefox called Facebook Container, based on technology we’ve been working on for the last couple of years and accelerated in response to growing demand for tools that help people manage their privacy and security.

The pages you visit on the web can say a lot about you. They can infer where you live, the hobbies you have, and your political persuasion. There’s enormous value in tying this data to your social profile, and Facebook has a network of trackers on various websites. This code tracks you invisibly and it is often impossible to determine when this data is being shared.

Facebook Container isolates your Facebook identity from the rest of your web activity. When you install it, you will continue to be able to use Facebook normally. Facebook can continue to deliver their service to you and send you advertising. The difference is that it will be much harder for Facebook to use your activity collected off Facebook to send you ads and other targeted messages.

This Add-On offers a solution that doesn’t tell users to simply stop using a service they get value from. Instead, it gives them tools to help protect themselves from the unexpected side effects of their usage. Facebook Container would not have prevented the type of data collection involved in the recent Cambridge Analytica incident. But troves of data are being collected on your behavior across the internet, so giving users a choice to limit what they share, in a way that is under their control, is important.

Facebook isn’t unique in their practice of collecting data from your activity outside of the core service, and our goal is not to single out a particular organization, but to start with a well-defined problem that we can solve quickly. As good privacy hygiene, it’s also worth reviewing your privacy settings for each app that you use regularly. With respect to Facebook, this link from EFF has useful advice on how to keep your data where you want it to be, under more of your control.

To learn more about how our Facebook Container Add-On works, check out our Firefox Frontier Blog.

To add the Facebook Container Add-On, visit here.

The post Being Open and Connected on Your Own Terms with our New Facebook Container Add-On appeared first on The Mozilla Blog.

Categories: Mozilla-nl planet

Wladimir Palant: The Firefox Accounts authentication zoo

Tue, 27/03/2018 - 12:35

After my article on the browser sync mechanisms I spent some time figuring out how Firefox Accounts work. The setup turned out remarkably complex, with many different server types communicating with each other even for the most basic tasks. While this kind of overspecialization probably should be expected given the scale at which this service operates, the number of different authentication methods is surprising and the official documentation only tells a part of the story while already being fairly complex. I’ll try to show the entire picture here, in case somebody else needs to piece it all together.

Authentication server login: password hash

Your entry point is normally accounts.firefox.com. This is what Mozilla calls the Firefox Accounts content server – a client-side-only web application, backed by a very basic server that essentially produces static content. When you enter your credentials, this web application will hash your password, currently using PBKDF2 with 1000 iterations, in the future hopefully with something more resilient to brute-forcing. It will send that hash to the Firefox Accounts authentication server at api.accounts.firefox.com and get a session token back on success.
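
Roughly, this kind of client-side password stretching looks like the following Python sketch (the salt construction, output length and labels here are illustrative placeholders, not the exact values the FxA content server uses):

import hashlib

def stretch_password(email, password, iterations=1000):
    # Derive a hash from the password so the plaintext never leaves the client.
    # FxA derives its salt from the account email via specific HKDF info strings;
    # a simple email-based salt stands in for that here.
    salt = ("fxa-demo-salt:" + email).encode("utf-8")
    return hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, iterations)

# The resulting hash (hex-encoded) is what gets sent to the authentication
# server instead of the raw password.
print(stretch_password("user@example.com", "correct horse battery staple").hex())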

Using the session token: Hawk with derived ID

Further communication with the authentication server uses the Hawk authentication scheme, which carefully avoids sending the session token over the wire again and signs all the request parameters as well as the payload. A clever trick makes sure that the client doesn’t have to remember an additional Hawk ID here: the ID is a hash of the session token. Not that the content server communicates much with the authentication server after login; the most important call here is signing a public key that the content server generates on the client side. The corresponding private key can then be used to generate BrowserID assertions.
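
The idea can be sketched like this (heavily simplified: real FxA clients derive both the Hawk ID and the MAC key from the session token via HKDF with specific info strings, and Hawk’s normalized request string has more fields than shown here):

import hashlib, hmac, os, time

def hawk_sign(session_token, method, uri, host, port):
    # The session token itself is never sent again; the Hawk ID is derived
    # from it, so the server can recompute the same credentials.
    hawk_id = hashlib.sha256(session_token).hexdigest()
    mac_key = hashlib.sha256(session_token + b"-mac-key").digest()  # placeholder derivation
    ts, nonce = str(int(time.time())), os.urandom(4).hex()
    # Hawk signs a normalized string covering the request parameters.
    normalized = "\n".join(["hawk.1.header", ts, nonce, method, uri, host, str(port), "", ""]) + "\n"
    mac = hmac.new(mac_key, normalized.encode("utf-8"), hashlib.sha256).hexdigest()
    return {"id": hawk_id, "ts": ts, "nonce": nonce, "mac": mac}

print(hawk_sign(b"session-token-bytes", "GET", "/v1/account/devices", "api.accounts.firefox.com", 443))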

Do you remember BrowserID? BrowserID a.k.a. Persona was a distributed single sign-on service that Mozilla introduced in 2011 and shut down in 2016. Part of it apparently still lives on in Firefox Accounts. How are these assertions being used?

Getting OAuth token: BrowserID assertion

Well, Firefox Accounts use the BrowserID assertion to generate yet another authentication token. They send it to oauth.accounts.firefox.com and want an OAuth token back. But the OAuth server has to validate the BrowserID assertion first. It delegates that task to verifier.accounts.firefox.com, which forwards the requests to the browserid-local-verify package running on some compute cluster. The verification process involves looking up the issuer’s public key info and verifying the assertion’s RSA signature. If everything is right, the verifier server will send the information contained in the assertion back and leave it up to the OAuth server to verify that the correct issuer was used. Quite unsurprisingly, only “api.accounts.firefox.com” as issuer will get you an OAuth token.

Funny fact: while the verifier is based on Node.js, it doesn’t use built-in crypto to verify RSA signatures. Instead, this ancient JS-based implementation is currently being used. It doesn’t implement signing however, so the RSA-Sign library by Kenji Urushima is used on top. That library is no longer available online, and its quality is rather questionable.

Accessing user’s profile and subscription settings: OAuth

OAuth is the authentication method of choice when communicating with the profile.accounts.firefox.com server. Interestingly, the user’s profile stored here consists only of the user’s name and their avatar. While the email address is also returned, the profile server actually queries the authentication server behind the scenes to retrieve it, using the same OAuth token.

The content server will also use OAuth to get the user’s newsletter subscription settings from the Basket proxy living under accounts.firefox.com/basket/. This proxy will verify the OAuth token and then forward your request to the basket.mozilla.org server using an API key to authenticate the request. See, the Basket server cannot deal with OAuth itself. It can only do API keys that grant full access or its own tokens to manage individual accounts. It isn’t exactly strict in enforcing the use of these tokens however.

Accessing sync data: Hawk with tokens

An additional twist comes in when you sync your data which requires talking to token.services.mozilla.com first. The stated goal of this service isn’t merely assigning users to one of the various storage servers but also dealing with decentralized logins. I guess that these goals were formulated before BrowserID was shut down. Either way, it will take your BrowserID assertion and turn it into yet another authentication token, conveniently named just that: token. The token is a piece of data containing your user ID among other things. This data is signed by the token server, and the storage servers can validate it.

Mozilla goes a step further however and gives the client a secret key. So when the storage server is actually accessed, the Hawk authentication scheme mentioned before is used for authentication: the token is used as Hawk ID while the secret key is never sent over the wire again and is merely used to sign the request parameters.
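
The general pattern looks something like the toy sketch below (this is not Mozilla’s actual tokenlib format, just the shape of the idea): the token server mints a signed token embedding the user data plus a per-token secret, so storage nodes can validate requests without calling back to the token server.

import base64, hashlib, hmac, json

SHARED_KEY = b"secret shared between token server and storage nodes"  # hypothetical

def mint_token(uid, node):
    payload = base64.urlsafe_b64encode(json.dumps({"uid": uid, "node": node}).encode())
    sig = base64.urlsafe_b64encode(hmac.new(SHARED_KEY, payload, hashlib.sha256).digest())
    token = (payload + b"." + sig).decode()
    # The client uses the token as its Hawk ID and this derived secret as the
    # Hawk MAC key when talking to the storage server.
    secret = hmac.new(SHARED_KEY, token.encode(), hashlib.sha256).digest()
    return token, secret

def validate_token(token):
    payload, sig = token.rsplit(".", 1)
    expected = hmac.new(SHARED_KEY, payload.encode(), hashlib.sha256).digest()
    if not hmac.compare_digest(base64.urlsafe_b64decode(sig), expected):
        raise ValueError("bad token signature")
    return json.loads(base64.urlsafe_b64decode(payload))

token, secret = mint_token(12345, "https://storage-node-7.example.com")
print(validate_token(token))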

Conclusions

Clearly, some parts of this setup made sense at some point but no longer do. This especially applies to the use of BrowserID: the complicated generation and verification process makes no sense if only one issuer is allowed. The protocol is built on top of JSON Web Tokens (JWT), yet using JWT without any modifications would make a lot more sense here.

Also, why is Mozilla using their own token library that looks like a proprietary version of JWT? It seems that this library was introduced before JWT came along; today it is simply historical ballast.

Evaluating the use of Hawk is more complicated. While Hawk looks like a good idea, one has to ask: what are the benefits of signing request parameters if all traffic is already encrypted via TLS? In fact, Hawk is positioning itself as a solution for websites where implementing full TLS protection isn’t feasible for some reason. Mozilla uses TLS everywhere however. Clearly, nothing can help if one of the endpoints of the TLS connection is compromised. But what if an attacker is able to break up TLS-protected connections, e.g. a state-level actor? Bad luck, Hawk won’t really help then. While Hawk mostly avoids sending the secret over the wire, this secret still needs to be sent to the client once. An attacker who can snoop into TLS-encrypted connections will intercept it then.

In the end, the authentication zoo here means that Mozilla has to maintain more than a dozen different authentication libraries. All of these are critical to the security of Firefox Accounts and create an unnecessarily large attack surface. Mozilla would do well to reduce the authentication methods to a minimum. OAuth, for example, is an extremely simple approach, and I can see only one reason why it shouldn’t be used: validating a token requires querying the OAuth server. If offline validation is desirable, JWT can be used instead. While the complexity is higher then, JWT is a well-established standard with stable libraries to support it.
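
For comparison, issuing and validating such a token with an off-the-shelf JWT library is only a few lines (a sketch using the PyJWT package; the key and claims are made up for the example):

import time
import jwt  # PyJWT

SIGNING_KEY = "server-side secret"  # hypothetical

# Issue a token embedding the user ID and an expiry time.
token = jwt.encode(
    {"uid": "4c352927cd4f4a4aa03d7d1893d950b8", "exp": int(time.time()) + 3600},
    SIGNING_KEY,
    algorithm="HS256",
)

# Any service that knows the key can validate the token offline,
# without a round trip to the issuing server.
claims = jwt.decode(token, SIGNING_KEY, algorithms=["HS256"])
print(claims["uid"])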

Categories: Mozilla-nl planet

Daniel Stenberg: Play TLS 1.3 with curl

Tue, 27/03/2018 - 07:44

The IESG recently approved the TLS 1.3 draft-28 for proposed standard and we can expect the real RFC for this protocol version to appear soon (within a few months probably).

TLS 1.3 has been in development for quite some time now, and a lot of TLS libraries already support it to some extent, at varying draft levels.

curl and libcurl have supported an explicit option to select TLS 1.3 since curl 7.52.0 (December 2016), and assuming you build curl to use a TLS library with support, you've been able to use TLS 1.3 with curl since at least then. The support has gradually been expanded to cover more and more libraries.

Today, curl and libcurl support speaking TLS 1.3 if you build it to use one of these fine TLS libraries of a recent enough version:

  • OpenSSL
  • BoringSSL
  • libressl
  • NSS
  • WolfSSL
  • Secure Transport (on iOS 11 or later, and macOS 10.13 or later)

GnuTLS seems to be well on its way too. TLS 1.3 support exists in the GnuTLS master branch on GitLab.

curl's TLS 1.3-support makes it possible to select TLS 1.3 as preferred minimum version.
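
For example, with a curl new enough to have the option (7.52.0 or later, built against one of the libraries above), you can ask for TLS 1.3 as the minimum version; a small Python wrapper around the command-line flag might look like this:

import subprocess

# --tlsv1.3 asks curl to negotiate TLS 1.3 or better; the request fails if the
# server or the TLS backend curl was built with cannot do TLS 1.3.
result = subprocess.run(
    ["curl", "--tlsv1.3", "-sS", "-o", "/dev/null", "-w", "%{http_code}\n",
     "https://example.com/"],
    capture_output=True, text=True,
)
print("HTTP status:", result.stdout.strip())
if result.returncode != 0:
    print("curl failed:", result.stderr.strip())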

Categories: Mozilla-nl planet

Firefox Test Pilot: Voice Fill Graduation Report

Tue, 27/03/2018 - 06:01

Voice Fill is now available at the Firefox Add-ons website for all Firefox users. Test Pilot users will be automatically migrated to this version.

Last year, Mozilla launched several parallel efforts to build capability around voice technologies. While work such as the Common Voice and DeepSpeech projects took aim at creating a foundation for future open source voice recognition projects, the Voice Fill experiment in Test Pilot took a more direct approach by building voice-based search into Firefox to learn if such a feature would be valuable to Firefox users. We also wanted to push voice research at Mozilla by contributing general tooling and training data to add value to future voice projects.

How it went down

The Firefox Emerging Technologies team approached Test Pilot with an idea for an experiment that would let users fill out any form element on the web with voice input.

[Figure: An early prototype]

As a technical feat, the early prototypes were quite impressive, but we identified two major usability issues that had to be overcome. First, adding voice control to every site with a text input would mean debugging our implementation across an impossibly large number of websites which could break the experiment in random and hard-to-repair ways. Second, because users must opt into voice controls in Firefox on a per-site basis, this early prototype would require that users fiddle with browser permissions wherever they wanted to engage with voice input.

In order to overcome these challenges, the Test Pilot and Emerging Technologies teams worked together to identify a minimum scope for our experiment. Voice Fill would focus on voice-based search as its core use case and would only be available to users on the Google, DuckDuckGo, and Yahoo search engines. Users visiting these sites would see a microphone button indicating that Voice Fill was available, and could click the button to trigger a voice search.

[Figure: Animation showing the Voice Fill interface]

From an engineering standpoint, the Voice Fill WebExtension add-on worked by letting users activate microphone input on specific search engine pages. Once triggered, an overlay appeared on the page prompting the user to record their voice via the standard getUserMedia browser API. We used a WebExtension content script to inject the Voice Fill interface into search pages, and the Lottie library — which parses After Effects animations exported as JSON — to power the awesome mic animations provided by our super talented visual designer.

Voice Fill relied on an Emscripten module based on WebRTC C code to handle voice activity detection and register events for things like loudness and silence during voice recording. After recording, samples were analyzed by an open source speech recognition engine called Kaldi. Kaldi is highly configurable, but essentially works by taking snippets of speech, then using a speech model (we used a legacy version of the Api.ai model in our experiment) to convert each snippet into best guesses at text, along with a confidence rating for each guess. For example, I might say “Pizza” and Kaldi might guess “Pizza” with 97% confidence, “Piazza” with 85% confidence, and “Pit saw” with 60% confidence.

[Figure: Search results in Voice Fill]

Depending on the confidence generated for any given speech sample, Voice Fill did one of the following for each analyzed voice sample (a simplified sketch of this logic follows the list).

  • If the topmost confidence rating was high enough, or the difference between the first and second confidence scores for a result was large enough, Voice Fill triggered a search automatically.
  • If the topmost confidence rating was below a certain threshold, or if the top two confidence ratings were tightly clustered, we showed a list of possible search terms for the user to choose from.
  • If Kaldi returned no suggestions, we displayed a very pretty error screen and asked the user to try again.
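
A simplified sketch of that decision logic (the thresholds here are invented for illustration; the post doesn’t document the experiment’s actual tuning):

HIGH_CONFIDENCE = 0.90  # hypothetical threshold
MIN_GAP = 0.15          # hypothetical gap between the top two guesses

def handle_guesses(guesses):
    """guesses: list of (text, confidence) pairs, best guess first."""
    if not guesses:
        return ("error", None)  # show the pretty error screen and ask the user to retry
    top_text, top_conf = guesses[0]
    second_conf = guesses[1][1] if len(guesses) > 1 else 0.0
    if top_conf >= HIGH_CONFIDENCE or (top_conf - second_conf) >= MIN_GAP:
        return ("search", top_text)  # trigger the search automatically
    return ("choose", [text for text, _ in guesses])  # let the user pick a term

print(handle_guesses([("pizza", 0.97), ("piazza", 0.85), ("pit saw", 0.60)]))  # ('search', 'pizza')
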
What did we learn?

One of the big goals of the Test Pilot program is to assess market fit for experimental concepts, and it was pretty clear from the start that Voice Fill was not the most attractive experiment for the Test Pilot audience.

[Figure: Voice Fill has fewer daily users than our other active experiments]

The graph above shows the average number of Firefox profiles with each of our four add-on based experiments installed over the last two months. While the other three sit in the 15 to 20k user range, Voice Fill, in orange, has significantly fewer users.

This lack of market fit bears out when we look at how users engaged with Voice Fill on the Test Pilot website in the first two weeks of January, when Mozilla’s marketing department ran a promotion for Test Pilot. The chart below shows how many Test Pilot users clicked on each experiment’s installation button (or, in the case of Send, clicked the button that links to the Send website). Again, Voice Fill garnered significantly less direct user attention than other experiments.

[Figure: The pitch for Voice Fill was less attractive than for other experiments]

So Voice Fill didn’t set the world on fire, but by shipping it in Test Pilot we were able to determine that a pure speech-to-text search function may not be the most highly sought-after Firefox feature, without first undertaking the complex task of building a massive service for every Firefox user.

As mentioned above, Voice Fill is one part of an effort to improve open source voice recognition tools at Mozilla. While it had a modest overall user base, Voice Fill gave us a large corpus of data on which to conduct comparative analysis.

Over its lifespan, Voice Fill users produced nearly one hundred thousand requests, resulting in more than one hundred ten hours of audio. A comparative analysis of the Voice Fill corpus using different speech models gave us insight into how to benchmark the performance of future voice-based efforts.

We conducted our analysis by running the Voice Fill corpus through the Voice Fill’s Api.ai speech model, the open source DeepSpeech model built by Mozilla, the Kaldi Aspire model, and Google’s Speech API.

The chart below shows the average amount of time each of these models needed to decode samples in our corpus. In terms of raw speed, the Api.ai model used in Voice Fill performed quite well relative to DeepSpeech and Aspire. The Google comparison here is not quite apples-to-apples since its average time includes a call to Google’s Cloud API, whereas the other three analyses were conducted on a local cluster.

[Figure: Average time to process each sample by speech model]

Next we wanted to know how many of the words Google’s Speech API identified were also identified by the other models. The chart below shows the total words in the corpus where each model matched the results generated by Google’s Speech API. Here, Api.ai matched forty-six thousand words with Google, Aspire matched forty-two thousand, and DeepSpeech matched just thirty thousand. DeepSpeech lags behind, but it’s worth noting that it’s by far the newest of these training models. While it has a long way to go to catch up to Google’s proprietary model, it’s quite impressive for such a young open source effort.
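
To give a concrete sense of the kind of comparison involved (an illustrative sketch, not the actual analysis pipeline), counting how many words from Google’s transcript another model also produced might look like this:

from collections import Counter

def matched_words(reference, hypothesis):
    """Count reference words (with multiplicity) that also appear in the hypothesis."""
    ref, hyp = Counter(reference.lower().split()), Counter(hypothesis.lower().split())
    return sum(min(count, hyp[word]) for word, count in ref.items())

google_transcript = "weather in san francisco tomorrow"
other_model = "whether in san francisco to morrow"
print(matched_words(google_transcript, other_model))  # 3 of 5 words match in this made-up example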

While we can’t be sure exactly why Google’s model outperforms the others in this instance, the qualitative feedback from Test Pilot suggests that our users’ accents might be one factor.

We limited promotion of Voice Fill to English-speaking Test Pilot users, but did not restrict the experiment by geography. As a result, many users told us that their accents seemed to prevent Voice Fill from accurately interpreting voice samples. Here is another limitation that would prevent us from shipping Voice Fill in Firefox in its current form: our users came from all over the world, and the model we used simply does not account for the literal diversity of voices among Firefox users.

What Happens Next?

Voice Fill is leaving Test Pilot, but it will remain available to all users of Firefox at the Firefox Add-ons website. We know from user feedback that Voice Fill provides accessibility benefits to some of its users and we are delighted to continue to support this use case.

All of the samples collected in Voice Fill will be used to help train and improve the DeepSpeech open source speech recognition model.

Additionally, the proxy service we built to let Voice Fill speak to its speech recognition back end means that future voice-based experiments and services at Mozilla could share a common infrastructure. This service is already being used by the Mozilla IoT Gateway, an open source connector for smart devices.

We’re also exploring improvements to the way Firefox add-ons handle user media. The approaches available to us in Voice Fill were limited, and may have contributed to the diminished usability of the experiment.

Thank you to everyone that participated in the Voice Fill experiment in Test Pilot, and thanks in particular to Faramarz Rashed and Andre Natal on the Mozilla Emerging Technologies team for spearheading Voice Fill!

Voice Fill Graduation Report was originally published in Firefox Test Pilot on Medium, where people are continuing the conversation by highlighting and responding to this story.

Categories: Mozilla-nl planet

Firefox Test Pilot: Snooze Tabs Graduation Report

Tue, 27/03/2018 - 06:01

Snooze Tabs is now available at the Firefox Add-ons website for all Firefox users. Test Pilot users will be automatically migrated to this version.

We’re getting ready to graduate the Snooze Tabs experiment, and we wanted to share some of the things we learned.

The Problem Space

Snooze Tabs launched as an experiment in Test Pilot in February 2017 with the goal of making it easier for people to continue tasks in Firefox at a time of their choosing. From previous research conducted by the Firefox User Research team on task continuity and workflows, we started to develop an understanding of the ways people’s workflows can span multiple contexts and the types of behaviors and tools that people use to support context switching and task continuity. We knew, for example, that leaving browser tabs open is one way that people actively hold tasks to which they intend to return later.

With the Snooze Tabs experiment we wanted to learn more about how a tab management feature — specifically one that might reduce the cognitive load of leaving many tabs open — could support task continuity.

How It Worked

When installed, Snooze Tabs added a button to the browser toolbar. This button triggered a panel displaying different time increments at which the current tab could be snoozed, including an option to pick a custom date and time.
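
As a rough illustration of what the feature computes behind that panel (the actual increments Snooze Tabs used aren’t listed here, so these values are placeholders), mapping a snooze choice to a wake-up time might look like:

from datetime import datetime, timedelta

def wake_time(option, now):
    """Return when a snoozed tab should reappear; None means 'next time the browser opens'."""
    if option == "later_today":
        return now + timedelta(hours=3)  # placeholder increment
    if option == "tomorrow":
        return (now + timedelta(days=1)).replace(hour=9, minute=0, second=0, microsecond=0)
    if option == "next_week":
        return (now + timedelta(days=7)).replace(hour=9, minute=0, second=0, microsecond=0)
    if option == "next_open":
        return None
    raise ValueError("custom dates come from the date/time picker")

print(wake_time("tomorrow", datetime(2018, 3, 27, 16, 30)))  # 2018-03-28 09:00:00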

[Figure: The Snooze Tabs panel showing the different snooze options]

People could then select a time and confirm their selection. At the selected time, the snoozed tab would reappear and people could then switch, or give focus to, their “woken” tab or re-snooze it for a time of their choosing.

[Figure: At the selected time, a snoozed tab would wake, or reopen.]

The Snooze Tabs UI also allowed people to view all of their pending snoozes and delete any snoozes they no longer wanted.

What We Learned

Over the last year, we saw just over 58,000 people using Snooze Tabs over some 400,000 sessions. The number of both new and returning users stayed relatively constant over the life of the experiment.

[Figure: Total new and returning Snooze Tabs users]

People using Snooze Tabs used all of the available options to create snoozes. “Tomorrow,” “Pick a Date/Time,” and “Later Today” were the most selected time options, and “Next Open” was the least selected option. The popularity of the “Tomorrow” option suggests that people tend to anticipate continuing with tasks in the browser in the short term — in the next 24 hours — rather than anticipating what they will do in the next few weeks. The relatively small number of people who selected “Next Open” may suggest that people using the experiment did not understand that option, did not anticipate closing the browser, could not anticipate when they would re-open it, or felt that “next open” was too soon to continue their task.

[Figure: Cumulative snooze times]

The data on resnoozes showed the same near-future options being most popular and greater time increments being less popular.

[Figure: Cumulative re-snoozes]

Additionally, more than half of tabs that were snoozed were resnoozed when woken. This data suggests that people may have a hard time accurately predicting when they will be able to return to a task. While we don’t know people’s complete workflows or whether tasks were completed, we saw that most woken tabs were given focus, which suggests that the feature may have at least helped people remember tasks they intended to continue at a later time. Finally, very few people edited or cancelled their snoozes, which might suggest a threshold of active management that people are willing to do in the name of task continuity.

Next Steps

Snooze Tabs will now be available to all users of Firefox from the Firefox Add-ons website. Test Pilot users will be automatically migrated to this new version. This version has the same functionality as in Test Pilot, but we’ve closed a number of bugs, improved the user interface, and made the experience more accessible.

If you’re interested in contributing to the future of Snooze Tabs, check out the Github repository and/or the Discourse forum and let us know. Thank you to everyone who used Snooze Tabs, and we hope you’ll continue trying out Test Pilot experiments.

Snooze Tabs Graduation Report was originally published in Firefox Test Pilot on Medium, where people are continuing the conversation by highlighting and responding to this story.

Categories: Mozilla-nl planet

This Week In Rust: This Week in Rust 227

Tue, 27/03/2018 - 06:00

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

News & Blog Posts

Crate of the Week

This week's crate is fui, a crate to add both a command-line interface and text forms to your program. Thanks to musicmatze for the suggestion.

Submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

178 pull requests were merged in the last week

New Contributors
  • Daniel Kolsoi
  • lukaslueg
  • Lymia Aluysia
  • Maxwell Borden
  • Maxwell Powlison
  • memoryleak47
  • Mrowqa
  • Sean Silva
  • Tyler Mandry
Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now. This week's FCPs are:

New RFCs

Upcoming Events

The community team is trying to improve outreach to meetup organisers. Please fill out their call for contact info if you are running or used to run a meetup.

If you are running a Rust event please add it to the calendar to get it mentioned here. Email the Rust Community Team for access.

Rust Jobs

Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

If Rust is a martial arts teacher, Perl is a pub brawler. If you survive either, you’re likely to be good at defending yourself, though both can be painful at times.

Michal 'vorner' Vaner.

Thanks to llogiq for the suggestion!

Submit your quotes for next week!

This Week in Rust is edited by: nasa42 and llogiq.

Categories: Mozilla-nl planet

Marco Castelluccio: Zero coverage reports

Tue, 27/03/2018 - 02:00

One of the nice things we can do with code coverage data is to look at which files are not covered by any test at all.

These files might be interesting for two reasons. Either they are:

  • dead code;
  • code that is not tested at all.

Dead code can obviously be removed, bringing a lot of advantages for developers and users alike:

  • Improve maintainability (no need to update the code in case of refactorings);
  • Reduce build time for developers and CI;
  • Reduce the attack surface;
  • Decrease the size of the resulting binary which can have effects on performance, installation duration, etc.

Untested code, instead, can be really problematic. Changes to this code can take more time to be verified, require more QA resources, and so on. In summary, we can’t trust them as we trust code that is properly tested.

A study from the Google Test Automation Conference 2016 showed that an uncovered line (or method) is twice as likely to have a bug fix as a covered line (or method). On top of that, testing a feature prevents unexpected behavior changes.

Using these reports, we have managed to remove a good amount of code from mozilla-central, so far around 60 files with thousands of lines of code. We are confident that there’s even more code that we could remove or conditionally compile only if needed.

Like any modern software, Firefox relies a lot on third-party libraries. Currently, most (all?) of the content of these libraries is built by default. For example, ~400 files are untested in the gfx/skia/ directory.

Reports (updated weekly) can be seen at https://marco-c.github.io/code-coverage-reports/. It allows filtering by language (C/C++, JavaScript), filtering out third-party code or header files, showing completely uncovered files only or all files which have uncovered functions (sorted by number of uncovered functions).
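
The core of such a report can be sketched in a few lines; this example assumes a hypothetical JSON export mapping file paths to per-function hit counts, not the actual data format behind these reports:

import json

THIRD_PARTY_PREFIXES = ("gfx/skia/", "third_party/", "other-licenses/")  # illustrative list

def zero_coverage_files(coverage, ignore_third_party=True):
    """coverage: dict mapping a file path to a list of per-function hit counts."""
    uncovered = []
    for path, hits in coverage.items():
        if ignore_third_party and path.startswith(THIRD_PARTY_PREFIXES):
            continue
        if sum(hits) == 0:  # no function in this file was ever executed
            uncovered.append(path)
    return sorted(uncovered)

with open("coverage.json") as f:  # hypothetical export of the coverage data
    print("\n".join(zero_coverage_files(json.load(f))))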

[Image: uncovered code]

Currently there are 2730 uncovered files (2627 C++, 103 JavaScript), or 557 if third-party files are ignored. As with our regular code coverage reports on codecov.io, these reports are restricted to the Windows and Linux platforms.

Categories: Mozilla-nl planet

Air Mozilla: Mozilla Weekly Project Meeting, 26 Mar 2018

Mon, 26/03/2018 - 20:00

Mozilla Weekly Project Meeting The Monday Project Meeting

Categories: Mozilla-nl planet

Mozilla Open Policy & Advocacy Blog: Report of High Level Expert Group on “Fake News”: A good first step, more work is needed

Mon, 26/03/2018 - 12:25

In mid March, the European Commission published the final report of the High Level Expert Group (HLEG) on Fake News, “A Multi-Dimensional Approach to Disinformation”. The group was established in early January of this year, and comprised a range of experts and stakeholders from the technology industry, broadcasters, the fact checking community, academics, consumer groups, and journalists. The group was expertly chaired by Dr Madeleine De Cock Buning of Utrecht University, specialised in Intellectual Property, Copyright and Media and Communication Law.

I represented Mozilla in the HLEG, in close cooperation with Katharina Borchert, our Chief Innovation Officer, who spearheads the Mozilla Information and Trust Initiative. Mozilla’s engagement in this High Level Expert Group complements our efforts to develop products, research, and communities to battle information pollution and so-called “fake news” online.

The HLEG was assigned an ambitious task of advising the Commission on “scoping the phenomenon of fake news, defining the roles and responsibilities of relevant stakeholders, grasping the international dimension, taking stock of the positions at stake, and formulating recommendations.” The added challenge was that this was to be done in under two months with only four in-person meetings.

This final report is the result of intense discussion with the HLEG members, and we managed to produce a document that can constructively contribute to the dialogue and further understanding of disinformation in the EU. It’s not perfect, but thanks to our Chair’s diligent work to find agreement amongst different stakeholders in the group, I’m satisfied with the outcome.

What became obvious after the very first convening of the HLEG is that we would not be able to “solve” disinformation. With that necessary dose of humility, we managed to set out a good starting point for further cooperation of all stakeholders to counter the spread of disinformation. Here are some of the key highlights:

Call it “disinformation” not “fake news”
The report stresses the need to abandon the harmful term “fake news”. In addition to being overly broad, it has become weaponised to undermine trust in the media. The report focuses on disinformation, which is defined as “false, inaccurate, or misleading information designed, presented and promoted to intentionally cause public harm or for profit” (Pg. 10).

Investing in digital literacy and trust building is crucial
Life-long education, empowerment, training, and trust building are key competencies that can lead to greater social resilience against disinformation. This isn’t just a matter for individuals using technology and consuming news – it matters just as much, and in different ways, to journalists and content creators.

Moreover, key to this is the recognition that media literacy extends beyond understanding the technical workings of the internet; equally crucial are methods to encourage critical thinking (see Pg. 25-27).

More (EU) research is needed
A lot of the data, and many promising initiatives in this space (such as the Credibility Coalition or the Trust Project), are primarily US based. The report encourages public authorities, both at the EU and national level, to support the development of a network of independent European Centres for (academic) research on disinformation. The purpose of this network would be to facilitate a more thorough understanding of the impact, scale, and amplification methods of disinformation, to evaluate the measures taken by different actors, and to constantly adjust the necessary responses (more on Pg. 5).

No one wants a “Ministry of Truth” (in either government or Silicon Valley)
The solutions explored in the report are of a non-legislative nature. This is in large part because the HLEG wanted to avoid knee-jerk reactions from governments who might risk adopting new laws and regulations with very little understanding of the essence, scope, and severity of the problem.

The report also acknowledges that pressuring private companies to determine what type of legal content is to be considered truthful, acceptable, or “quality” news, is equally troubling. Ultimately, the report outlines that interventions must be targeted, tailored, and based on a precise understanding of the problem(s) we are trying to solve (more on Pg. 32 & 35).

Commitment to continue this important work through a Coalition
As properly addressing the issue of disinformation cannot be meaningfully done in such a short time, the group proposed that this work should continue through a multistakeholder Coalition. The Coalition will consist of practitioners and experts where the roles and responsibilities of the various stakeholders – with a particular focus on platforms – will be fleshed out with a view to establishing a Code of Practice. The report presents 10 principles for the platforms which will serve as a basis for further elaboration (find them on Pg. 32 of the report). The principles include the need to adapt advertising and sponsored content policies, to enable privacy-compliant access to fact checking and research communities, and to make advanced settings and controls more readily available to users to empower them to customise their online experience.

You can find the full report here, and for full disclosure purposes, the minutes of the four in-person meetings (1, 2, 3, and 4). We thank everyone involved and look forward to continuing our work to tackle disinformation in Europe and across the globe.

The post Report of High Level Expert Group on “Fake News”: A good first step, more work is needed appeared first on Open Policy & Advocacy.

Categories: Mozilla-nl planet

The Servo Blog: This Week In Servo 109

Mon, 26/03/2018 - 02:30

In the last week, we merged 94 PRs in the Servo organization’s repositories.

We also got Servo running under the hood of Firefox Focus on Android as a proof of concept. More details on that soon!

Planning and Status

Our roadmap is available online, including the overall plans for 2018.

This week’s status updates are here.

Notable Additions
  • nical fixed an issue with disappearing 2d transforms in WebRender.
  • christianpoveda implemented the typed array-based send API for WebSockets.
  • nox implemented the WebGL getAttachedShaders API.
  • kwonoj added support for retrieving typed arrays from Fetch bodies.
  • nox added support for obtaining data URLs from WebGL canvases.
  • Xanewok removed a source of unsafety from the JS handle APIs.
  • Xanewok replaced hand-written typed array support in WebGL APIs with automatically generated code.
  • jdm worked around a frequent OOM crash on Android.
  • glennw made automatic mipmap generation for WebRender images opt-in.
  • glennw simplified various parts of the WebRender pipeline for line decorations.
  • christianpoveda added support for typed arrays as blob sources.
  • alexrs made the command parsing portion of homu testable.
  • lsalzman reduced the amount of memory that is consumed by glyph caches in WebRender.
  • glennw made text shadows draw in screen space in WebRender.
  • jdm increased the configurability of homu’s list of repositories.
  • Moggers exposed the WebRender debugger through a cargo feature for downstream projects.
  • gootorov implemented the getFrameBufferAttachmentParameter WebGL API.
  • paulrouget redesigned the way that Servo’s embedding APIs are structured.
  • nakul02 added time spent waiting on synchronous recv() operations to Servo’s profiler.
New Contributors

Interested in helping build a web browser? Take a look at our curated list of issues that are good for new contributors!

Categories: Mozilla-nl planet

Shing Lyu: Merge Pull Requests without Merge Commits

Sun, 25/03/2018 - 23:46

By default, GitHub’s pull request (or GitLab’s merge request) will merge with a merge commit. That means your feature branch will be merged into the master by creating a new commit, and both the feature and master branch will be kept.

Let’s illustrate with an example:

Let’s assume we branch out a feature branch called “new-feature” from the master branch, and pushed a commit called “Finished my new feature”. At the same time someone pushed another commit called “Other’s feature” onto the master branch.

[Image: branch]

If we now create a pull request for our branch, and get merged, we’ll see a new merge commit called “Merge branch ‘new-feature’”

[Image: merge_commit]

If you look at GitHub’s commit history, you’ll notice that the UI shows a linear history, with the commits ordered by the time they were pushed. So if multiple people merge multiple branches, all of them will be mixed up, and the commits on your branch might interlace with other people’s commits. More importantly, some development teams don’t use pull requests or merge requests at all. Everyone is supposed to push directly to master and maintain a linear history. How can you develop in branches but merge them back to master without a merge commit?

Under the hood, GitHub and GitLab’s “merge” button uses the --no-ff option, which forces the creation of a merge commit. What you are looking for is the opposite: --ff-only (ff stands for fast-forward). This option will cleanly append your commits to master, without creating a merge commit. But it only works if there are no commits in master that are not in your feature branch; otherwise it will fail with a warning. So if someone pushes to master and you do a git pull on your local master, you need to rebase your feature branch before using a --ff-only merge. Let’s see how to do this with an example:

git checkout new-feature   # Go to the feature branch named "new-feature"
git rebase master          # Now your feature branch has all the commits from master
git checkout master        # Go back to master
git merge --ff-only new-feature

After these commands, your master branch should contain the commits from the feature branch, as if they were cherry-picked from the feature branch. You can then push directly to the remote.

git push

If unfortunately someone pushed more code to the remote master while you are doing this, your push might fail. You can pull, rebase and push again like so:

git pull --rebase && git push

GitHub’s documentation has some nice illustrations about the two different kind of merges.

Here is a script that does the above for you. To run it, you have to check out the feature branch you want to merge back to master, then execute it. It will also pull and rebase both your feature and master branch onto the most up-to-date remote master during the operation.

#!/usr/bin/env bash
CURRBRANCH=$(git rev-parse --abbrev-ref HEAD)
if [ $CURRBRANCH = "master" ]
then
  echo "Already on master, aborting..."
  exit 1
fi

echo "Merging the change from $CURRBRANCH to master..."
echo "Rebasing branch $CURRBRANCH to latest master"
git fetch origin master && \
git rebase origin/master && \
echo "Checking out to master and pull" && \
git checkout master && \
git rebase origin/master && \
echo "Merging the change from $CURRBRANCH to master..." && \
git merge --ff-only $CURRBRANCH && \
git log | less

echo "DONE. You may want to do one last test before pushing"

It’s worth mentioning that both GitHub and GitLab allow you to do a fast-forward (and squash) merge in their UI. But it’s configured on a per-repository basis, so if you don’t control the repository, you might have to ask your development team’s administrator to turn on the feature. You can read more about this feature in GitHub’s documentation and GitLab’s documentation. If you are interested in squashing the commits manually, but don’t know how, check out my previous post about squashing.

Categories: Mozilla-nl planet

Cameron Kaiser: TenFourFox FPR7b1 and TenFourFoxBox 1.1 available

Sat, 24/03/2018 - 23:12
TenFourFox Feature Parity Release 7 beta 1 is now available for testing (downloads, hashes, release notes). I chose to push this out a little faster than usual since there are a few important upgrades and I won't have as much time to work on the browser over the next couple weeks, so you get to play with it early.

In this version, the hidden basic adblock feature introduced in FPR6 is now exposed in the TenFourFox preference pane:

It does not default to on, and won't ever do so, but it will reflect the state of what you set it to if you played around with it in FPR6. Logging, however, is not exposed in the UI. If you want that off (though it now defaults to off), you will still need to go into about:config and change tenfourfox.adblock.logging.enabled to false. The blocklist includes several more cryptominers, adblockerblockers and tracking scripts, and there are a couple more I am currently investigating which will either make FPR7 final or FPR8.

The other big change is some retuning of garbage and cycle collection intervals, which should reduce the browser's choppiness and make GC pauses less frequent, more productive and more deterministic. I did a number of stress tests to make sure this would not bloat the browser or make it run out of memory, and I am fairly confident the parameters I settled on strike a good balance between performance and parsimoniousness. Along with these updates are some additional DOM and CSS features under the hood, additional HTTPS cipher support (fixing Amtrak in particular, among others) and some sundry performance boosts and microoptimizations. The user agent strings are also updated for Firefox 60 and current versions of iOS and Android.

To go along with this is an update to TenFourFoxBox which allows basic adblock to be enabled for foxboxes and updates the cloaked user agent string to Firefox 60. There is a new demo foxbox for 2048, just for fun, and updated Gmail and user guide foxboxes. TenFourFoxBox 1.1 will go live simultaneously with FPR7 final on or about May 9.

Meanwhile, the POWER9-based Talos II showed up in public; here's a nice picture of it at the OpenPOWER Summit running Unreal Engine with engineer Tim Pearson. I'm not in love with the case, but that's easily corrected. :) Word on the street is April for general availability. You'll hear about it here first.

Categories: Mozilla-nl planet

Eric Shepherd: Results of the MDN “Competitive Content Analysis” SEO experiment

Fri, 23/03/2018 - 21:46

The next SEO experiment I’d like to discuss results for is the MDN “Competitive Content Analysis” experiment. This experiment, performed through December into early January, involved selecting two of the top search terms for which MDN appears in search results—one where MDN is highly placed but not at #1, and one where MDN is listed far down in the results despite having good content available.

The result is a comparison of the quality of our content and our SEO against other sites that document these technology areas. With that information in hand, we can look at the competition’s content and make decisions as to what changes to make to MDN to help bring us up in the search rankings.

The two keywords we selected:

  • “tr”: For this term, we were the #2 result on Google, behind w3schools.
  • “html colors”: For this keyword, we were in 27th place. That’s terrible!

These are terms we should be highly placed for. We have a number of pages that should serve as good destinations for these keywords. The job of this experiment: to try to make that be the case.

The content updates

For each of the two keywords, the goal of the experiment was to improve our page rank for the keywords in question; at least one MDN page should be near or at the top of the results list. This means that for each keyword, we need to choose a preferred “optimum” destination as well as any other pages that might make sense for that keyword (especially if it’s combined with other keywords).

To accomplish that involves updating the content of each of those pages to make sure they’re as good as possible, but also to improve the content of pages that link to the pages that should show up on search results. The goal is to improve the relevant pages’ visibility to search as well as their content quality, in order to improve page position in the search results.

Things to look for

So, for each page that should be linked to the target pages, as well as the target pages themselves, these things need to be evaluated and improved as needed:

  • Add appropriate links back and forth between each page and the target pages.
  • Is the content clear and thorough?
  • Make sure there’s interactive content, such as new interactive examples.
  • Ensure the page’s layout and content hierarchy is up-to-date with our current content structure guidelines.
  • Examine web analytics data to determine what improvements the data suggest should be done beyond these items.
Pages reviewed and/or updated for “tr”

The primary page, obviously, is this one in the HTML element reference:

These pages are closely related and were also reviewed and in most cases updated (sometimes extensively) as part of the experiment:

A secondary group of pages which I felt to be a lower priority to change but still wound up reviewing and in many cases updating:

Pages reviewed and/or updated for “html colors”

This one is interesting in that “html colors” doesn’t directly correlate to a specific page as easily. However, we selected the following pages to update and test:

The problem with this keyword, “html colors”, is that generally what people really want is CSS colors, so you have to try to encourage Google to route people to stuff in the CSS documentation instead of elsewhere. This involves ensuring that you refer to HTML specifically in each page in appropriate ways.

I’ve opted in general to consider the CSS <color> value page to be the destination for this, for reference purposes, with the article “Applying color” being a new one I created to use as a landing page for all things color related to route people to useful guide pages.

The results

As was the case with previous experiments, we only allowed about 60 days for Google to pick up and fully internalize the changes, and for user reactions to affect the outcome, despite the fact that 90 days is usually the minimum time you run these tests, with six months being preferred. However, we have had to compress our schedule for the experiments. We will, as before, continue to watch the results over time.

Results for the “tr” keyword

The pages updated to improve their placement when the “tr” keyword is used in Google search, as well as the amount of change over time seen for each, is shown in the table below. These were the pages which were updated and which appeared in search results analysis for the selected period of time.

Change (%) by page:

Address                              Impressions   Clicks     Position   CTR
HTML/Element/tr                      -43.22%       124.57%    2.58%      285.71%
HTML/Element/table                   26.68%        27.02%     -2.90%     0.00%
HTML/Element/template                27.02%        9.21%      -15.45%    -14.05%
API/HTMLTableRowElement              —             —          —          —
API/HTMLTableRowElement/insertCell   -2.78%        -23.91%    -2.16%     -21.77%
API/HTMLTableRowElement/rowIndex     —             —          —          —
HTML/Element/thead                   38.82%        19.70%     0.00%      -13.67%
HTML/Element/tbody                   42.72%        100.52%    14.19%     40.68%
HTML/Element/tfoot                   8.90%         11.29%     2.64%      2.18%
HTML/Element/th                      -50.32%       3.43%      0.39%      106.25%
HTML/Element/td                      20.05%        40.27%     -8.04%     17.01%
API/HTMLTableElement/rows            —             —          —          —

The data is interesting. Impression counts are generally up, as are clicks and search engine results page (SERP) position. Interestingly, the main <tr> page, the primary page for this keyword, has lost impressions yet gained clicks, with the CTR skyrocketing by a sizeable 285%. This means that people are seeing better search results when searching just for “tr”, and getting right to that page more often than before we began.
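
The relationship between those columns is purely arithmetic: since CTR is clicks divided by impressions, the relative change in CTR follows from the other two. A quick check against the <tr> row (the result differs slightly from the +285.71% in the table, presumably because the reported figure was computed from rounded CTR values):

def ctr_change(impressions_change, clicks_change):
    """Relative CTR change implied by relative changes in clicks and impressions."""
    return (1 + clicks_change) / (1 + impressions_change) - 1

# Impressions down 43.22%, clicks up 124.57%:
print(f"{ctr_change(-0.4322, 1.2457):+.0%}")  # prints +296%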

Results for the “html colors” keyword

The table below shows the pages updated for the “html colors” keyword and the amount of change seen in the Google activity for each page.

Page: change in Impressions (%), Clicks (%), Position (absolute and %), and CTR (%)

https://developer.mozilla.org/en-US/docs/Learn/Accessibility/CSS_and_JavaScript
  Impressions +28.61%, Clicks +19.88%, Position -0.99 (-4.54%), CTR -6.78%
https://developer.mozilla.org/en-US/docs/Learn/CSS/Styling_boxes/Backgrounds
  Impressions -43.88%, Clicks -38.17%, Position -2.71 (-21.20%), CTR +10.17%
https://developer.mozilla.org/en-US/docs/Learn/CSS/Styling_boxes/Borders
  Impressions +51.59%, Clicks +33.33%, Position +3.28 (+48.62%), CTR -12.04%
https://developer.mozilla.org/en-US/docs/Learn/CSS/Styling_text/Fundamentals
  Impressions +34.87%, Clicks +29.55%, Position -1.35 (-11.34%), CTR -3.94%
https://developer.mozilla.org/en-US/docs/Web/CSS/background-color
  Impressions +9.03%, Clicks +19.26%, Position -0.17 (-2.46%), CTR +9.38%
https://developer.mozilla.org/en-US/docs/Web/CSS/border-color
  Impressions +36.02%, Clicks +36.98%, Position -0.09 (-1.38%), CTR +0.71%
https://developer.mozilla.org/en-US/docs/Web/CSS/color
  Impressions +23.04%, Clicks +23.42%, Position +0.03 (+0.34%), CTR +0.31%
https://developer.mozilla.org/en-US/docs/Web/CSS/color_value
  Impressions +14.95%, Clicks +34.09%, Position -1.21 (-10.26%), CTR +16.65%
https://developer.mozilla.org/en-US/docs/Web/CSS/CSS_Colors/Color_picker_tool
  Impressions -10.78%, Clicks +6.68%, Position +1.76 (+24.78%), CTR +19.56%
https://developer.mozilla.org/en-US/docs/Web/CSS/outline-color
  Impressions +830.70%, Clicks +773.91%, Position -0.97 (-12.42%), CTR -6.10%
https://developer.mozilla.org/en-US/docs/Web/CSS/text-decoration-color
  Impressions +3254.57%, Clicks +3429.41%, Position -1.45 (-21.98%), CTR +5.21%
https://developer.mozilla.org/en-US/docs/Web/HTML/Applying_color
  Impressions +50.32%, Clicks +45.21%, Position -0.56 (-4.83%), CTR -3.40%
https://developer.mozilla.org/en-US/docs/Web/HTML/Element/input/color
  Impressions +31.15%, Clicks +25.57%, Position -0.44 (-4.44%), CTR -4.25%

These results are also quite promising, especially since time did not permit me to make as many changes to this content as I’d have liked. The changes for the color value type page are good; a nearly 15% increase in impressions and a very good 34% rise in clicks means a healthy boost to CTR. Ironically, though, our position in search results dropped by nearly 1.25 points, or 10%.

The approximately 23% increase in both impressions and clicks on the CSS color property page is quite good, and I’m pleased by the 10% gain in CTR for the learning area article on styling box backgrounds.

Almost every page sees significant gains in both impressions and clicks (take a look at text-decoration-color, in particular, with over 3000% growth!).

The sea of red is worrisome at first glance, but I think what’s happening here is that because of the improvements in impression counts (that is, how often users see these pages on Google), they are prone to reaching the page they really want more quickly. Note which pages are the ones with the positive click-through rate (CTR), which is the ratio of clicks divided by impressions. This is in order of highest change in CTR to lowest:

  1. https://developer.mozilla.org/en-US/docs/Web/CSS/CSS_Colors/Color_picker_tool
  2. https://developer.mozilla.org/en-US/docs/Web/CSS/color_value
  3. https://developer.mozilla.org/en-US/docs/Learn/CSS/Styling_boxes/Backgrounds
  4. https://developer.mozilla.org/en-US/docs/Web/CSS/background-color
  5. https://developer.mozilla.org/en-US/docs/Web/CSS/text-decoration-color
  6. https://developer.mozilla.org/en-US/docs/Web/CSS/border-color
  7. https://developer.mozilla.org/en-US/docs/Web/CSS/color

What I believe we’re seeing is this: due to the improvements to SEO (and potentially other factors), all of the color-related pages are getting more traffic. However, the ones in the list above are the ones seeing the most benefit; they’re less prone to showing up at inappropriate times and more likely to be clicked when they are presented to the user. This is a good sign.

Over time, I would hope to improve the SEO further to help bring the search results positions up for these pages, but that takes a lot more time than we’ve given these pages so far.

Uncertainties

For this experiment, the known uncertainties (an oxymoron, but we’ll go with that term anyway) include:

  • As before, the elapsed time was far too short to get viable data for this experiment. We will examine the data again in a few months to see how things are progressing.
  • This project had additional time constraints that led me not to make as many changes as I might have preferred, especially for the “html colors” keyword. The results may have been significantly different had more time been available, but that’s going to be common in real-world work anyway.
  • Overall site growth during the time we ran this experiment also likely inflated the results somewhat.
Decisions

After sharing these results with Kadir and Chris, we came to the following initial conclusions:

  • This is promising, and should be pursued for pages which already have low-to-moderate traffic.
  • Regardless of when we begin general work to perform and make changes as a result of competitive content analysis, we should immediately update MDN’s contributor guides to incorporate recommended changes.
  • The results suggest that content analysis should be a high-priority part of our SEO toolbox. Increasing our internal link coverage and making documents relate to each other creates a better environment for search engine crawlers to accumulate good data.
  • We’ll re-evaluate the results in a few months after more data has accumulated.

If you have questions or comments about this experiment or its results, please feel free to post to this topic on Mozilla’s Discourse forum.

Categories: Mozilla-nl planet

The Firefox Frontier: No More Notifications (If You Want)

Fri, 23/03/2018 - 20:15

Online, your attention is priceless. That’s why every site in the universe wants permission to send you notifications about new stuff. It can be distracting at best and annoying at … Read more

The post No More Notifications (If You Want) appeared first on The Firefox Frontier.

Categories: Mozilla-nl planet
