Mozilla Nederland: the Dutch Mozilla community

David Humphrey: Experiments with "Good First Experience"

Mozilla planet - ti, 27/03/2018 - 17:02

Since writing my post about good-first-bug vs. good-first-experience I've been experimenting with different approaches to creating useful walkthroughs. I think I've settled on a method that works well, and wanted to write about it, so as to encourage others to do the same.

First, a refresher on what I mean by Good First Experience (GFE). Unlike a Good First Bug (GFB), which can only be fixed by one person (i.e., it's destroyed in being solved), a GFE is reproducible by anyone and everyone willing to put in the time. As such, a GFE is not tied to the current state of the project (i.e., rapidly changing), but rather uses an old commit to freeze the state of the project so it can be recreated. Following the steps doesn't alter the project; it alters you.

If we think of an OSS project like a team of climbers ascending a mountain, a GFE is a camp part-way up the route that backpackers can visit in order to get a feel for the real thing. A GFE is also like a good detective novel: you know the mystery is going to get solved by the end, but nevertheless, it's thrilling to experience the journey, and see how it happens. Could I solve this before the book does?

With my open source students, I've tried a mix of written and in-class presentation style. My approach is usually to fix a bug in a project I don't know and document what I do. I think it's useful to put yourself on an even footing with a new developer by working in unfamiliar code. Doing so forces me to be more deliberate with how I use tools and debugging/code-reading techniques. It also means I (mostly) can't rely on years of accumulated knowledge about how code is put together, and instead have to face the challenge fresh. Obviously I can't leave my experience out of the mix, because I've been programming for over 30 years. But I can remove familiarity, which is what a lot of us rely on without realizing it. Try fixing a bug in a project and language you've never worked on before, and you'll be surprised at what you learn about yourself. When I need to humble myself, a few hours with a CSS bug is usually all I need :)

My first attempts at this involved writing a blog post. I've done a few of these:

I enjoy this style. It's easy for me to write in my blog. However, I've moved away from it for a number of reasons. First, I don't like how it recedes into the past by being tied to the history of my blog. What matters is what I wrote, not when I wrote it. Instead of a journal entry, I want this to feel more like documentation. Another thing I don't like about using my blog is that it ends up being disconnected from the project and code in question. There's an unnecessary separation between the experience and the code, one that I think encourages you to read but not do anything.

I've since started using another style: hijacking the project's README.md file and writing everything in a branch on my fork. First, some examples:

I got the idea for this approach when I wrote my guide on Machine Learning and Caffe. To do this, I needed a combination of documentation, source files, and images. Obviously my blog wouldn't suffice, so I did it as its own repo. I'd seen lots of people "blog" using Gist before (e.g., this Makefile tutorial), and I was curious to know what would happen if I repurposed an entire repo as a writing medium.

In the case of my ML guide, it's meant a ton of exposure (4.5K stars and weeks as a top trending repo), and nearly 500 forks. It's also formed its own community, with people filing bugs and still other people helping solve them. It also resulted in a complete Chinese translation, and thereby yet more exposure.

Knowing that I could use GitHub in this way, I was interested to try an even more symbiotic approach for my GFE guides:

  • Fork the repo in question
  • Create a new branch, and freeze the project state so it is reproducible by others
  • Add a screenshots/ directory for all the images I need to include. Now I can just git add and git commit these into the repo
  • Erase the README.md file contents, and start writing my guide in there.
  • Link to files within the project, pinned to the commit that I'm on in this branch
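
Pulling those steps together, here is a rough sketch of what the git side of this workflow might look like. The repository name, branch name, and commit hash are placeholders rather than values from any particular project:

    # Clone your fork and create a branch for the walkthrough
    git clone https://github.com/your-username/some-project.git
    cd some-project
    git checkout -b good-first-experience

    # Freeze the project state at a known commit so readers can reproduce it
    git reset --hard abc1234

    # Keep screenshots inside the repo so they can be added with git add/git commit
    mkdir screenshots

    # Replace the README contents with the guide, linking to files pinned to abc1234,
    # then commit and push the branch to the fork
    git add README.md screenshots
    git commit -m "Add Good First Experience walkthrough"
    git push origin good-first-experience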

I've liked a number of things that this approach provides:

  • As an outsider, you can contribute something to a project you like that will help others get involved. I don't have to convince the project to do this. I just do it.
  • Similarly, I can do something in the project without having to get permission. I'm not asking any of the projects to make this official documentation. Whether they do or don't, it exists on GitHub all the same.
  • People (my students, or others) who want to try what I'm doing can simply clone my fork, and checkout my branch. All the files are there in exactly the right state. Everything should work as it did for me. There are obviously going to be issues with environments and dependency versions I can't control as easily.
  • People can interact with what I've done to suggest corrections, file issues (just enable Issues on your fork), star the repo, etc.

I taught my Brave walkthrough in class yesterday, and I think it was ideally suited to the time I had (2 hours), and the level of the students. Many times in the past I would have fixed a bug live in class, but I didn't produce a guide that could be used after the class ended. By doing both, I've found that students can watch me do it live, and we can discuss lots of things that are happening; and then after class, they can read through it again, and ideally try it themselves, to further develop the skills.

This approach is something new that I'm enjoying, and I wanted to share it as a possible style for others to try. If you have projects you think I should do this on, let me know. I'm definitely going to try to do more of this.


The Firefox Frontier: Facebook Container Extension: Take control of how you’re being tracked

Mozilla planet - ti, 27/03/2018 - 15:01

Our Multi-Account Containers extension has been a game changer for many users, letting them manage various parts of their online life without intermingling their accounts. To help Firefox users have … Read more

The post Facebook Container Extension: Take control of how you’re being tracked appeared first on The Firefox Frontier.


The Mozilla Blog: Being Open and Connected on Your Own Terms with our New Facebook Container Add-On

Mozilla planet - ti, 27/03/2018 - 15:01

There’s an important conversation going on right now about the power that companies like Facebook wield over our lives. These businesses are built on technology platforms that are so complex, it’s unreasonable to expect users to fully understand the implications of interacting with them. As a user of the internet, you deserve a voice and should be able to use the internet on your own terms. In light of recent news on how the aggregation of user data can be used in surprising ways, we’ve created an add-on for Firefox called Facebook Container, based on technology we’ve been working on for the last couple of years and accelerated in response to growing demand for tools that help manage privacy and security.

The pages you visit on the web can say a lot about you. They can infer where you live, the hobbies you have, and your political persuasion. There’s enormous value in tying this data to your social profile, and Facebook has a network of trackers on various websites. This code tracks you invisibly and it is often impossible to determine when this data is being shared.

Facebook Container isolates your Facebook identity from the rest of your web activity. When you install it, you will continue to be able to use Facebook normally. Facebook can continue to deliver their service to you and send you advertising. The difference is that it will be much harder for Facebook to use your activity collected off Facebook to send you ads and other targeted messages.

This Add-On offers a solution that doesn’t tell users to simply stop using a service that they get value from. Instead, it gives users tools that help them protect themselves from the unexpected side effects of their usage. The type of data collection highlighted in the recent Cambridge Analytica incident would not have been prevented by Facebook Container. But troves of data are being collected on your behavior on the internet, and giving users a choice to limit what they share in a way that is under their control is important.

Facebook isn’t unique in their practice of collecting data from your activity outside of the core service, and our goal is not to single out a particular organization, but to start with a well-defined problem that we can solve quickly. As good privacy hygiene, it’s also worth reviewing your privacy settings for each app that you use regularly. With respect to Facebook, this link from EFF has useful advice on how to keep your data where you want it to be, under more of your control.

To learn more about how our Facebook Container Add-On works, check out our Firefox Frontier Blog.

To add the Facebook Container Add-On, visit here.

The post Being Open and Connected on Your Own Terms with our New Facebook Container Add-On appeared first on The Mozilla Blog.


Wladimir Palant: The Firefox Accounts authentication zoo

Mozilla planet - ti, 27/03/2018 - 12:35

After my article on the browser sync mechanisms I spent some time figuring out how Firefox Accounts work. The setup turned out remarkably complex, with many different server types communicating with each other even for the most basic tasks. While this kind of overspecialization probably should be expected given the scale at which this service operates, the number of different authentication methods is surprising and the official documentation only tells a part of the story while already being fairly complex. I’ll try to show the entire picture here, in case somebody else needs to piece it all together.

Authentication server login: password hash

Your entry point is normally accounts.firefox.com. This is what Mozilla calls the Firefox Accounts content server – a client-side only web application, backed by a very basic server essentially producing static content. When you enter your credentials this web application will hash your password, currently using PBKDF2 with 1000 iterations, in future hopefully something more resilient to brute-forcing. It will send that hash to the Firefox Account authentication server under api.accounts.firefox.com and get a session token back on success.

Using the session token: Hawk with derived ID

Further communication with the authentication server uses the Hawk authentication scheme that carefully avoids sending the session token over the wire again and signs all the request parameters as well as the payload. A clever trick makes sure that the client doesn’t have to remember an additional Hawk ID here: the ID is a hash of the session token. Not that the content server communicates much with the authentication server after login; the most important call here is signing a public key that the content server generates on the client side. The corresponding private key can then be used to generate BrowserID assertions.

Do you remember BrowserID? BrowserID a.k.a. Persona was a distributed single sign-on service that Mozilla introduced in 2011 and shut down in 2016. Part of it apparently still lives on in Firefox Accounts. How are these assertions being used?

Getting OAuth token: BrowserID assertion

Well, Firefox Accounts use the BrowserID assertion to generate yet another authentication token. They send it to oauth.accounts.firefox.com and want an OAuth token back. But the OAuth server has to validate the BrowserID assertion first. It delegates that task to verifier.accounts.firefox.com, which forwards the requests to the browserid-local-verify package running on some compute cluster. The verification process involves looking up the issuer’s public key info and verifying the assertion’s RSA signature. If everything is right, the verifier server will send the information contained in the assertion back and leave it up to the OAuth server to verify that the correct issuer was used. Quite unsurprisingly, only “api.accounts.firefox.com” as issuer will give you an OAuth token.

Funny fact: while the verifier is based on Node.js, it doesn’t use built-in crypto to verify RSA signatures. Instead, this ancient JS-based implementation is currently being used. It doesn’t implement signing however, so the RSA-Sign library by Kenji Urushima is used on top. That library is no longer available online, and its quality is rather questionable.

Accessing user’s profile and subscription settings: OAuth

OAuth is the authentication method of choice when communicating with the profile.accounts.firefox.com server. Interestingly, the user’s profile stored here consists only of the user’s name and their avatar. While the email address is also returned, the profile server actually queries the authentication server behind the scenes to retrieve it, using the same OAuth token.

The content server will also use OAuth to get the user’s newsletter subscription settings from the Basket proxy living under accounts.firefox.com/basket/. This proxy will verify the OAuth token and then forward your request to the basket.mozilla.org server using an API key to authenticate the request. See, the Basket server cannot deal with OAuth itself. It can only do API keys that grant full access or its own tokens to manage individual accounts. It isn’t exactly strict in enforcing the use of these tokens however.

Accessing sync data: Hawk with tokens

An additional twist comes in when you sync your data which requires talking to token.services.mozilla.com first. The stated goal of this service isn’t merely assigning users to one of the various storage servers but also dealing with decentralized logins. I guess that these goals were formulated before BrowserID was shut down. Either way, it will take your BrowserID assertion and turn it into yet another authentication token, conveniently named just that: token. The token is a piece of data containing your user ID among other things. This data is signed by the token server, and the storage servers can validate it.

Mozilla goes a step further however and gives the client a secret key. So when the storage server is actually accessed, the Hawk authentication scheme mentioned before is used for authentication: the token is used as Hawk ID while the secret key is never sent over the wire again and is merely used to sign the request parameters.

Conclusions

Clearly, some parts of this setup made sense at some point but no longer do. This especially applies to the use of BrowserID: the complicated generation and verification process makes no sense if only one issuer is allowed. The protocol is built on top of JSON Web Tokens (JWT), yet using JWT without any modifications would make a lot more sense here.

Also, why is Mozilla using their own token library that looks like a proprietary version of JWT? It seems that this library was introduced before JWT came along; today it is simply historical ballast.

Evaluating the use of Hawk is more complicated. While Hawk looks like a good idea, one has to ask: what are the benefits of signing request parameters if all traffic is already encrypted via TLS? In fact, Hawk is positioning itself as a solution for websites where implementing full TLS protection isn’t feasible for some reason. Mozilla uses TLS everywhere however. Clearly, nothing can help if one of the endpoints of the TLS connection is compromised. But what if an attacker is able to break up TLS-protected connections, e.g. a state-level actor? Bad luck, Hawk won’t really help then. While Hawk mostly avoids sending the secret over the wire, this secret still needs to be sent to the client once. An attacker who can snoop into TLS-encrypted connections will intercept it then.

In the end, the authentication zoo here means that Mozilla has to maintain more than a dozen different authentication libraries. All of these are critical to the security of Firefox Accounts and create an unnecessarily large attack surface. Mozilla would do good by reducing the authentication methods to a minimum. OAuth for example is an extremely simple approach, and I can see only one reason why it shouldn’t be used: validating a token requires querying the OAuth server. If offline validation is desirable, JWT can be used instead. While the complexity is higher then, JWT is a well-established standard with stable libraries to support it.


Daniel Stenberg: Play TLS 1.3 with curl

Mozilla planet - ti, 27/03/2018 - 07:44

The IESG recently approved the TLS 1.3 draft-28 for proposed standard and we can expect the real RFC for this protocol version to appear soon (within a few months probably).

TLS 1.3 has been in development for quite some time by now, and a lot of TLS libraries already support it to some extent. At varying draft levels.

curl and libcurl have supported an explicit option to select TLS 1.3 since curl 7.52.0 (December 2016), and assuming you build curl to use a TLS library with support, you've been able to use TLS 1.3 with curl since at least then. The support has gradually been expanded to cover more and more libraries since then.

Today, curl and libcurl support speaking TLS 1.3 if you build it to use one of these fine TLS libraries of a recent enough version:

  • OpenSSL
  • BoringSSL
  • libressl
  • NSS
  • WolfSSL
  • Secure Transport (on iOS 11 or later, and macOS 10.13 or later)

GnuTLS seems to be well on their way too. TLS 1.3 support exists in the GnuTLS master branch on gitlab.

curl's TLS 1.3 support makes it possible to select TLS 1.3 as the preferred minimum version.
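
For example, assuming a curl build that uses one of the TLS libraries above (the host name here is just a placeholder), you can require TLS 1.3 from the command line:

    # Ask curl to use TLS 1.3 or later for this transfer
    curl --tlsv1.3 https://example.com/

    # Add -v to see which TLS version was actually negotiated during the handshake
    curl -v --tlsv1.3 https://example.com/ -o /dev/null

Since --tlsv1.3 sets a minimum version, the transfer fails rather than silently falling back if the TLS backend or the server can't speak TLS 1.3.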


Firefox Test Pilot: Voice Fill Graduation Report

Mozilla planet - ti, 27/03/2018 - 06:01

Voice Fill is now available at the Firefox Add-ons website for all Firefox users. Test Pilot users will be automatically migrated to this version.

Last year, Mozilla launched several parallel efforts to build capability around voice technologies. While work such as the Common Voice and DeepSpeech projects took aim at creating a foundation for future open source voice recognition projects, the Voice Fill experiment in Test Pilot took a more direct approach by building voice-based search into Firefox to learn if such a feature would be valuable to Firefox users. We also wanted to push voice research at Mozilla by contributing general tooling and training data to add value to future voice projects.

How it went down

The Firefox Emerging Technologies team approached Test Pilot with an idea for a voice input experiment that let users fill out any form element on the web with voice input.

[Figure: An early prototype]

As a technical feat, the early prototypes were quite impressive, but we identified two major usability issues that had to be overcome. First, adding voice control to every site with a text input would mean debugging our implementation across an impossibly large number of websites which could break the experiment in random and hard-to-repair ways. Second, because users must opt into voice controls in Firefox on a per-site basis, this early prototype would require that users fiddle with browser permissions wherever they wanted to engage with voice input.

In order to overcome these challenges, the Test Pilot and Emerging Technologies teams worked together to identify a minimum scope for our experiment. Voice Fill would focus on voice-based search as its core use case and would only be available to users through Google, DuckDuckGo, and Yahoo search engines. Users visiting these sites would see a microphone button indicating that Voice Fill was available, and could click the button to trigger a voice search.

[Figure: Animation showing the Voice Fill interface]

From an engineering standpoint, the Voice Fill WebExtension add-on worked by letting users activate microphone input on specific search engine pages. Once triggered, an overlay appeared on the page prompting the user to record their voice via the standard getUserMedia browser API. We used a WebExtension content script to inject the Voice Fill interface into search pages, and the Lottie library — which parses After Effects animations in JSON — to power the awesome mic animations provided by our super talented visual designer.

Voice Fill relied on an Emscripten module based on WebRTC C code to handle voice activity detection and register events for things like loudness and silence during voice recording. After recording, samples were analyzed by an open source speech recognition engine called Kaldi. Kaldi is highly configurable, but essentially works by taking snippets of speech, then using a speech model (we used a legacy version of the Api.ai model in our experiment) to convert each snippet into best guesses at text along with a confidence rating for each guess. For example, I might say “Pizza” and Kaldi might guess “Pizza” with 97% confidence, “Piazza” with 85% confidence, and “Pit saw” with 60% confidence.

[Figure: Search results in Voice Fill]

Depending on the confidence generated for any given speech sample, Voice Fill did one of the following for each analyzed voice sample.

  • If the topmost confidence rating was high enough, or the difference between the first and second confidence scores for a result was large enough, Voice Fill triggered a search automatically.
  • If the topmost confidence rating was below a certain threshold, or if the top two confidence ratings were tightly clustered, we showed a list of possible search terms for the user to choose from.
  • If Kaldi returned no suggestions, we displayed a very pretty error screen and asked the user to try again.

What did we learn?

One of the big goals of the Test Pilot program is to assess market fit for experimental concepts, and it was pretty clear from the start that Voice Fill was not the most attractive experiment for the Test Pilot audience.

[Figure: Voice Fill has fewer daily users than our other active experiments]

The graph above shows the average number of Firefox profiles with each of our four add-on based experiments installed over the last two months. While the other three sit in the 15 to 20k user range, Voice Fill, in orange, has significantly fewer users.

This lack of market fit bears out when we look at how users engaged with Voice Fill on the Test Pilot website in the first two weeks of January, when Mozilla’s marketing department ran a promotion for Test Pilot. The chart below shows how many Test Pilot users clicked on each experiment installation button (or in the case of Send, clicked the button that links to the Send website). Again, Voice Fill garnered significantly less direct user attention than other experiments.

[Figure: The pitch for Voice Fill was less attractive than for other experiments]

So Voice Fill didn’t set the world on fire, but by shipping in Test Pilot, we were able to determine that a pure speech-to-text search function may not be the most highly sought-after Firefox feature, without undertaking the complex task of building a massive service for every Firefox user.

As mentioned above, Voice Fill is one part of an effort to improve open source voice recognition tools at Mozilla. While it had a modest overall user base, Voice Fill gave us a large corpus of data on which to conduct comparative analysis.

Over its lifespan, Voice Fill users produced nearly one hundred thousand requests resulting in more than one hundred and ten hours of audio. A comparative analysis of the Voice Fill corpus using different speech models gave us insight into how to benchmark the performance of future voice-based efforts.

We conducted our analysis by running the Voice Fill corpus through the Voice Fill’s Api.ai speech model, the open source DeepSpeech model built by Mozilla, the Kaldi Aspire model, and Google’s Speech API.

The chart below shows the average amount of time each of these models needed to decode samples in our corpus. In terms of raw speed, the Api.ai model used in Voice Fill performed quite well relative to DeepSpeech and Aspire. The Google comparison here is not quite apples-to-apples since its average time includes a call to Google’s Cloud API whereas the other three analyses were conducted on a local cluster.

[Figure: Average time to process each sample by speech model]

Next we wanted to know how many of the words Google’s Speech API identified were also identified by the other models. The chart below shows the total words in the corpus where each model matched the results generated by Google’s Speech API. Here, Api.ai matched forty-six thousand words with Google, Aspire matched forty-two thousand, and DeepSpeech matched just thirty thousand. DeepSpeech lags behind, but it’s worth noting that it’s by far the newest of these training models. While it has a long way to go to catch up to Google’s proprietary model, it’s quite impressive for such a young open source effort.

While we can’t be sure exactly why Google’s model outperforms the others in this instance, the qualitative feedback from Test Pilot suggests that our users’ accents might be one factor.

We limited promotion of Voice Fill in Test Pilot to English-speaking users, but did not restrict the experiment by geography. As a result, many users told us that their accents seemed to prevent Voice Fill from accurately interpreting voice samples. Here is another limitation that would prevent us from shipping Voice Fill in Firefox in its current form: our users came from all over the world, and the model we used simply does not account for the literal diversity of voices among Firefox users.

What Happens Next?

Voice Fill is leaving Test Pilot, but it will remain available to all users of Firefox at the Firefox Add-ons website. We know from user feedback that Voice Fill provides accessibility benefits to some of its users and we are delighted to continue to support this use case.

All of the samples collected in Voice Fill will be used to help train and improve the DeepSpeech open source speech recognition model.

Additionally, the proxy service we built to let Voice Fill speak to its speech recognition back end means that future voice-based experiments and services at Mozilla could share a common infrastructure. This service is already being used by the Mozilla IoT Gateway, an open source connector for smart devices.

We’re also exploring improvements to the way Firefox add-ons handle user media. The approaches available to us in Voice Fill were limited, and may have contributed to the diminished usability of the experiments.

Thank you to everyone that participated in the Voice Fill experiment in Test Pilot, and thanks in particular to Faramarz Rashed and Andre Natal on the Mozilla Emerging Technologies team for spearheading Voice Fill!

Voice Fill Graduation Report was originally published in Firefox Test Pilot on Medium, where people are continuing the conversation by highlighting and responding to this story.


Firefox Test Pilot: Snooze Tabs Graduation Report

Mozilla planet - ti, 27/03/2018 - 06:01

Snooze Tabs is now available at the Firefox Add-ons website for all Firefox users. Test Pilot users will be automatically migrated to this version.

We’re getting ready to graduate the Snooze Tabs experiment, and we wanted to share some of the things we learned.

The Problem Space

Snooze Tabs launched as an experiment in Test Pilot in February 2017 with the goal of making it easier for people to continue tasks in Firefox at a time of their choosing. From previous research conducted by the Firefox User Research team on task continuity and workflows, we started to develop an understanding of the ways people’s workflows can span multiple contexts and the types of behaviors and tools that people use to support context switching and task continuity. We knew, for example, that leaving browser tabs open is one way that people actively hold tasks to which they intend to return later.

With the Snooze Tabs experiment we wanted to learn more about how a tab management feature — specifically one that might reduce the cognitive load of leaving many tabs open — could support task continuity.

How It Worked

When installed, Snooze Tabs added a button to the browser toolbar. This button triggered a panel displaying different time increments at which the current tab could be snoozed, including an option to pick a custom date and time.

[Figure: The Snooze Tabs panel showing the different snooze options]

People could then select a time and confirm their selection. At the selected time, the snoozed tab would reappear and people could then switch, or give focus to, their “woken” tab or re-snooze it for a time of their choosing.

[Figure: At the selected time, a snoozed tab would wake, or reopen.]

The Snooze Tabs UI also allowed people to view all of their pending snoozes and delete any snoozes they no longer wanted.

What We Learned

Over the last year, we saw just over 58,000 people using Snooze Tabs over some 400,000 sessions. The number of both new and returning users stayed relatively constant over the life of the experiment.

[Figure: Total new and returning Snooze Tabs users]

People using Snooze Tabs used all of the available options to create snoozes. “Tomorrow,” “Pick a Date/Time,” and “Later Today” were the most selected time options, and “Next Open” was the least selected option. The popularity of the “Tomorrow” option suggests that people tend to anticipate continuing with tasks in the browser in the short term — in the next 24 hours — rather than anticipating what they will do in the next few weeks. The relatively small number of people who selected “Next Open” may suggest that people using the experiment did not understand that option, that they did not anticipate closing the browser, that they could not anticipate when they would re-open the browser, or that “next open” is too soon to continue their task.

[Figure: Cumulative snooze times]

The data on resnoozes showed the same near-future options being most popular and greater time increments being less popular.

[Figure: Cumulative re-snoozes]

Additionally, more than half of tabs that were snoozed were resnoozed when woken. This data suggests that people may have a hard time accurately predicting when they will be able to return to a task. While we don’t know people’s complete workflows or whether tasks were completed, we saw that most woken tabs were given focus, which suggests that the feature may have at least helped people remember tasks they intended to continue at a later time. Finally, very few people edited or cancelled their snoozes, which might suggest a threshold of active management that people are willing to do in the name of task continuity.

Next Steps

Snooze Tabs will now be available for all users of Firefox from the Firefox Add-ons Website. Test Pilot users will be automatically migrated to this new version. This version has the same functionality as in Test Pilot, but we’ve closed a number of bugs, improved the user interface, and made the experience more accessible.

If you’re interested in contributing to the future of Snooze Tabs, check out the Github repository and/or the Discourse forum and let us know. Thank you to everyone who used Snooze Tabs, and we hope you’ll continue trying out Test Pilot experiments.

Snooze Tabs Graduation Report was originally published in Firefox Test Pilot on Medium, where people are continuing the conversation by highlighting and responding to this story.


This Week In Rust: This Week in Rust 227

Mozilla planet - ti, 27/03/2018 - 06:00

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

News & Blog Posts

Crate of the Week

This week's crate is fui, a crate to add both a command-line interface and text forms to your program. Thanks to musicmatze for the suggestion.

Submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

178 pull requests were merged in the last week

New Contributors
  • Daniel Kolsoi
  • lukaslueg
  • Lymia Aluysia
  • Maxwell Borden
  • Maxwell Powlison
  • memoryleak47
  • Mrowqa
  • Sean Silva
  • Tyler Mandry

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now. This week's FCPs are:

New RFCs

Upcoming Events

The community team is trying to improve outreach to meetup organisers. Please fill out their call for contact info if you are running or used to run a meetup.

If you are running a Rust event please add it to the calendar to get it mentioned here. Email the Rust Community Team for access.

Rust Jobs

Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

If Rust is martial arts teacher, Perl is a pub brawler. If you survive either, you’re likely to be good at defending yourself, though both can be painful at times.

Michal 'vorner' Vaner.

Thanks to llogiq for the suggestion!

Submit your quotes for next week!

This Week in Rust is edited by: nasa42 and llogiq.


Marco Castelluccio: Zero coverage reports

Mozilla planet - ti, 27/03/2018 - 02:00

One of the nice things we can do with code coverage data is looking at which files are completely not covered by any test.

These files might be interesting for two reasons. Either they are:

  • dead code;
  • code that is not tested at all.

Dead code can obviously be removed, bringing a lot of advantages for developers and users alike:

  • Improve maintainability (dead code no longer needs to be updated when refactoring);
  • Reduce build time for developers and CI;
  • Reduce the attack surface;
  • Decrease the size of the resulting binary which can have effects on performance, installation duration, etc.

Untested code, instead, can be really problematic. Changes to this code can take more time to be verified, require more QA resources, and so on. In summary, we can’t trust it as we trust code that is properly tested.

A study from Google Test Automation Conference 2016 showed that an uncovered line (or method) is twice as likely to have a bug fix than a covered line (or method). On top of that, testing a feature prevents unexpected behavior changes.

Using these reports, we have managed to remove a good amount of code from mozilla-central, so far around 60 files with thousands of lines of code. We are confident that there’s even more code that we could remove or conditionally compile only if needed.

Like any modern software, Firefox relies a lot on third-party libraries. Currently, most (all?) of the content of these libraries is built by default. For example, ~400 files are untested in the gfx/skia/ directory.

Reports (updated weekly) can be seen at https://marco-c.github.io/code-coverage-reports/. It allows filtering by language (C/C++, JavaScript), filtering out third-party code or header files, showing completely uncovered files only or all files which have uncovered functions (sorted by number of uncovered functions).

[Figure: uncovered code report]

Currently there are 2730 uncovered files (2627 C++, 103 JavaScript), 557 if ignoring third party files. As our regular code coverage reports on codecov.io, these reports are restricted to Windows and Linux platforms.
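
As an illustration of the underlying idea (this is not the actual tooling that generates the reports above), a completely uncovered file can be spotted directly in lcov-style coverage output, where each record names a source file (SF:) followed by per-line hit counts (DA:line,hits):

    # List source files whose every instrumented line has a hit count of 0,
    # reading an lcov-format file such as coverage.info
    awk -F'[:,]' '
      /^SF:/            { file = substr($0, 4); covered = 0 }
      /^DA:/            { if ($3 > 0) covered = 1 }
      /^end_of_record$/ { if (!covered) print file }
    ' coverage.info

Files that never report a non-zero hit count are the candidates for the dead-code or untested-code review described above.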


Air Mozilla: Mozilla Weekly Project Meeting, 26 Mar 2018

Mozilla planet - mo, 26/03/2018 - 20:00

Mozilla Weekly Project Meeting The Monday Project Meeting


Mozilla Open Policy & Advocacy Blog: Report of High Level Expert Group on “Fake News”: A good first step, more work is needed

Mozilla planet - mo, 26/03/2018 - 12:25

In mid March, the European Commission published the final report of the High Level Expert Group (HLEG) on Fake News, “A Multi-Dimensional Approach to Disinformation”. The group was established in early January of this year, and comprised a range of experts and stakeholders from the technology industry, broadcasters, the fact checking community, academics, consumer groups, and journalists. The group was expertly chaired by Dr Madeleine De Cock Buning of Utrecht University, specialised in Intellectual Property, Copyright and Media and Communication Law.

I represented Mozilla in the HLEG, in close cooperation with Katharina Borchert, our Chief Innovation Officer, who spearheads the Mozilla Information and Trust Initiative. Mozilla’s engagement in this High Level Expert Group complements our efforts to develop products, research, and communities to battle information pollution and so-called “fake news” online.

The HLEG was assigned an ambitious task of advising the Commission on “scoping the phenomenon of fake news, defining the roles and responsibilities of relevant stakeholders, grasping the international dimension, taking stock of the positions at stake, and formulating recommendations.” The added challenge was that this was to be done in under two months with only four in-person meetings.

This final report is the result of intense discussion with the HLEG members, and we managed to produce a document that can constructively contribute to the dialogue and further understanding of disinformation in the EU. It’s not perfect, but thanks to our Chair’s diligent work to find agreement amongst different stakeholders in the group, I’m satisfied with the outcome.

What became obvious after the very first convening of the HLEG is that we would not be able to “solve” disinformation. With that necessary dose of humility, we managed to set out a good starting point for further cooperation of all stakeholders to counter the spread of disinformation. Here are some of the key highlights:

Call it “disinformation” not “fake news”
The report stresses the need to abandon the harmful term “fake news”. In addition to being overly broad, it has become weaponised to undermine trust in the media. The report focuses on disinformation, which is defined as “false, inaccurate, or misleading information designed, presented and promoted to intentionally cause public harm or for profit” (Pg. 10).

Investing in digital literacy and trust building is crucial
Life-long education, empowerment, training, and trust building are key competencies that can lead to greater social resilience against disinformation. This isn’t just a matter for individuals using technology and consuming news – it matters just as much, and in different ways, to journalists and content creators.

Moreover, key to this is the recognition that media literacy extends beyond understanding the technical workings of the internet; equally crucial are methods to encourage critical thinking (see Pg. 25-27).

More (EU) research is needed
A lot of the data, and many promising initiatives in this space (such as the Credibility Coalition or the Trust Project), are primarily US based. The report encourages public authorities, both on the EU and national level, to support the development of a network of independent European Centres for (academic) research on disinformation. The purpose of this network would be to facilitate a more thorough understanding of the impact, scale, and amplification methods of disinformation, to evaluate the measures taken by different actors, and to constantly adjust the necessary responses (more on Pg. 5).

No one wants a “Ministry of Truth” (in either government or Silicon Valley)
The solutions explored in the report are of a non-legislative nature. This is in large part because the HLEG wanted to avoid knee-jerk reactions from governments who might risk adopting new laws and regulations with very little understanding of the essence, scope, and severity of the problem.

The report also acknowledges that pressuring private companies to determine what type of legal content is to be considered truthful, acceptable, or “quality” news, is equally troubling. Ultimately, the report outlines that interventions must be targeted, tailored, and based on a precise understanding of the problem(s) we are trying to solve (more on Pg. 32 & 35).

Commitment to continue this important work through a Coalition
As properly addressing the issue of disinformation cannot be meaningfully done in such a short time, the group proposed that this work should continue through a multistakeholder Coalition. The Coalition will consist of practitioners and experts where the roles and responsibilities of the various stakeholders – with a particular focus on platforms – will be fleshed out with a view to establishing a Code of Practice. The report presents 10 principles for the platforms which will serve as a basis for further elaboration (find them on Pg. 32 of the report). The principles include the need to adapt advertising and sponsored content policies, to enable privacy-compliant access to fact checking and research communities, and to make advanced settings and controls more readily available to users to empower them to customise their online experience.

You can find the full report here, and for full disclosure purposes, the minutes of the four in-person meetings (1, 2, 3, and 4). We thank everyone involved and look forward to continuing our work to tackle disinformation in Europe and across the globe.

The post Report of High Level Expert Group on “Fake News”: A good first step, more work is needed appeared first on Open Policy & Advocacy.


The Servo Blog: This Week In Servo 109

Mozilla planet - mo, 26/03/2018 - 02:30

In the last week, we merged 94 PRs in the Servo organization’s repositories.

We also got Servo running under the hood of Firefox Focus on Android as a proof of concept. More details on that soon!

Planning and Status

Our roadmap is available online, including the overall plans for 2018.

This week’s status updates are here.

Notable Additions
  • nical fixed an issue with disappearing 2d transforms in WebRender.
  • christianpoveda implemented the typed array-based send API for WebSockets.
  • nox implemented the WebGL getAttachedShaders API.
  • kwonoj added support for retrieving typed arrays from Fetch bodies.
  • nox added support for obtaining data URLs from WebGL canvases.
  • Xanewok removed a source of unsafety from the JS handle APIs.
  • Xanewok replaced hand-written typed array support in WebGL APIs with automatically generated code.
  • jdm worked around a frequent OOM crash on Android.
  • glennw made automatic mipmap generation for WebRender images opt-in.
  • glennw simplified various parts of the WebRender pipeline for line decorations.
  • christianpoveda added support for typed arrays as blob sources.
  • alexrs made the command parsing portion of homu testable.
  • lsalzman reduced the amount of memory that is consumed by glyph caches in WebRender.
  • glennw made text shadows draw in screen space in WebRender.
  • jdm increased the configurability of homu’s list of repositories.
  • Moggers exposed the WebRender debugger through a cargo feature for downstream projects.
  • gootorov implemented the getFrameBufferAttachmentParameter WebGL API.
  • paulrouget redesigned the way that Servo’s embedding APIs are structured.
  • nakul02 added time spent waiting on synchronous recv() operations to Servo’s profiler.

New Contributors

Interested in helping build a web browser? Take a look at our curated list of issues that are good for new contributors!


Shing Lyu: Merge Pull Requests without Merge Commits

Mozilla planet - snein, 25/03/2018 - 23:46

By default, GitHub’s pull request (or GitLab’s merge request) will merge with a merge commit. That means your feature branch will be merged into the master by creating a new commit, and both the feature and master branch will be kept.

Let’s illustrate with an example:

Let’s assume we branch out a feature branch called “new-feature” from the master branch, and pushed a commit called “Finished my new feature”. At the same time someone pushed another commit called “Other’s feature” onto the master branch.

[Figure: branch diagram]

If we now create a pull request for our branch, and get merged, we’ll see a new merge commit called “Merge branch ‘new-feature’”

[Figure: merge commit]

If you look at GitHub’s commit history, you’ll notice that the UI shows a linear history, and the commits are ordered by the time they were pushed. So if multiple people merged multiple branches, all of them will be mixed up. The commits on your branch might interlace with other people’s commits. More importantly, some development teams don’t use pull requests or merge requests at all. Everyone is supposed to push directly to master and maintain a linear history. How can you develop in branches but merge them back to master without a merge commit?

Under the hood, GitHub and GitLab’s “merge” button uses the --no-ff option, which will force create a merge commit. What you are looking for is the opposite: --ff-only (ff stands for fast-forward). This option will cleanly append your commits to master, without creating a merge commit. But it only works if there are no new commits in master that are missing from your feature branch; otherwise it will fail with a warning. So if someone pushes to master and you did a git pull on your local master, you need to rebase your feature branch before using a --ff-only merge. Let’s see how to do this with an example:

    git checkout new-feature   # Go to the feature branch named "new-feature"
    git rebase master          # Now your feature branch has all the commits from master
    git checkout master        # Go back to master
    git merge --ff-only new-feature

After these commands, your master branch should contain the commits from the feature branch, as if they were cherry-picked from the feature branch. You can then push directly to the remote.

git push

If unfortunately someone pushed more code to the remote master while you are doing this, your push might fail. You can pull, rebase and push again like so:

git pull --rebase && git push

GitHub’s documentation has some nice illustrations about the two different kind of merges.

Here is a script that does the above for you. To run it you have to checkout to the feature branch you want to merge back to master, then execute it. It will also pull and rebase both your feature and master branch to the most up-to-date remote master during the operation.

    #!/usr/bin/env bash
    CURRBRANCH=$(git rev-parse --abbrev-ref HEAD)
    if [ $CURRBRANCH = "master" ]
    then
        echo "Already on master, aborting..."
        exit 1
    fi

    echo "Merging the change from $CURRBRANCH to master..."
    echo "Rebasing branch $CURRBRANCH to latest master"
    git fetch origin master && \
    git rebase origin/master && \
    echo "Checking out to master and pull" && \
    git checkout master && \
    git rebase origin/master && \
    echo "Merging the change from $CURRBRANCH to master..." && \
    git merge --ff-only $CURRBRANCH && \
    git log | less

    echo "DONE. You may want to do one last test before pushing"

It’s worth mentioning that both GitHub and GitLab allow you to do the fast-forward (and squash) merge in their UI. But it’s configured on a per-repository basis, so if you don’t control the repository, you might have to ask your development team’s administrator to turn on the feature. You can read more about this feature in GitHub’s documentation and GitLab’s documentation. If you are interested in squashing the commits manually, but don’t know how, check out my previous post about squashing.


Cameron Kaiser: TenFourFox FPR7b1 and TenFourFoxBox 1.1 available

Mozilla planet - sn, 24/03/2018 - 23:12
TenFourFox Feature Parity Release 7 beta 1 is now available for testing (downloads, hashes, release notes). I chose to push this out a little faster than usual since there are a few important upgrades and I won't have as much time to work on the browser over the next couple weeks, so you get to play with it early.

In this version, the hidden basic adblock feature introduced in FPR6 is now exposed in the TenFourFox preference pane:

It does not default to on, and won't ever do so, but it will reflect the state of what you set it to if you played around with it in FPR6. Logging, however, is not exposed in the UI. If you want that off (though it now defaults to off), you will still need to go into about:config and change tenfourfox.adblock.logging.enabled to false. The blocklist includes several more cryptominers, adblockerblockers and tracking scripts, and there are a couple more I am currently investigating which will either make FPR7 final or FPR8.

The other big change is some retuning to garbage and cycle collection intervals which should reduce the browser's choppiness and make GC pauses less frequent, more productive and more deterministic. I did a number of stress tests to make sure this would not bloat the browser or make it run out of memory, and I am fairly confident the parameters I settled on strike a good balance between performance and parsimoniousness. Along with these updates are some additional DOM and CSS features under the hood, additional HTTPS cipher support (fixing Amtrak in particular, among others) and some sundry performance boosts and microoptimizations. The user agent strings are also updated for Firefox 60 and current versions of iOS and Android.

To go along with this is an update to TenFourFoxBox which allows basic adblock to be enabled for foxboxes and updates the cloaked user agent string to Firefox 60. There is a new demo foxbox for 2048, just for fun, and updated Gmail and user guide foxboxes. TenFourFoxBox 1.1 will go live simultaneously with FPR7 final on or about May 9.

Meanwhile, the POWER9-based Talos II showed up in public; here's a nice picture of it at the OpenPOWER Summit running Unreal Engine with engineer Tim Pearson. I'm not in love with the case, but that's easily corrected. :) Word on the street is April for general availability. You'll hear about it here first.


Eric Shepherd: Results of the MDN “Competitive Content Analysis” SEO experiment

Mozilla planet - fr, 23/03/2018 - 21:46

The next SEO experiment I’d like to discuss results for is the MDN “Competitive Content Analysis” experiment. This experiment, performed through December into early January, involved selecting two of the top search terms that resulted in MDN being included in search results—one where MDN is highly placed but not at #1, and one where MDN is listed far down in the search results despite having good content available.

The result is a comparison of the quality of our content and our SEO against other sites that document these technology areas. With that information in hand, we can look at the competition’s content and make decisions as to what changes to make to MDN to help bring us up in the search rankings.

The two keywords we selected:

  • “tr”: For this term, we were the #2 result on Google, behind w3schools.
  • “html colors”: For this keyword, we were in 27th place. That’s terrible!

These are terms we should be highly placed for. We have a number of pages that should serve as good destinations for these keywords. The job of this experiment: to try to make that be the case.

The content updates

For each of the two keywords, the goal of the experiment was to improve our page rank for the keywords in question; at least one MDN page should be near or at the top of the results list. This means that for each keyword, we need to choose a preferred “optimum” destination as well as any other pages that might make sense for that keyword (especially if it’s combined with other keywords).

To accomplish that involves updating the content of each of those pages to make sure they’re as good as possible, but also to improve the content of pages that link to the pages that should show up on search results. The goal is to improve the relevant pages’ visibility to search as well as their content quality, in order to improve page position in the search results.

Things to look for

So, for each page that should be linked to the target pages, as well as the target pages themselves, these things need to be evaluated and improved as needed:

  • Add appropriate links back and forth between each page and the target pages.
  • Is the content clear and thorough?
  • Make sure there’s interactive content, such as new interactive examples.
  • Ensure the page’s layout and content hierarchy is up-to-date with our current content structure guidelines.
  • Examine web analytics data to determine what improvements the data suggest should be done beyond these items.

Pages reviewed and/or updated for “tr”

The primary page, obviously, is this one in the HTML element reference:

These pages are closely related and were also reviewed and in most cases updated (sometimes extensively) as part of the experiment:

A secondary group of pages which I felt to be a lower priority to change but still wound up reviewing and in many cases updating:

Pages reviewed and/or updated for “html colors”

This one is interesting in that “html colors” doesn’t directly correlate to a specific page as easily. However, we selected the following pages to update and test:

The problem with this keyword, “html colors”, is that generally what people really want is CSS colors, so you have to try to encourage Google to route people to stuff in the CSS documentation instead of elsewhere. This involves ensuring that you refer to HTML specifically in each page in appropriate ways.

I’ve opted in general to consider the CSS <color> value page to be the reference destination for this, with the article “Applying color” being a new one I created to serve as a landing page for all things color-related, routing people to useful guide pages.

The results

As was the case with previous experiments, we only allowed about 60 days for Google to pick up and fully internalize the changes, as well as for user reactions to affect the outcome, despite the fact that 90 days is usually the minimum time you run these tests for, with six months being preferred. However, we have had to compress our schedule for the experiments. We will, as before, continue to watch the results over time.

Results for the “tr” keyword

The pages updated to improve their placement when the “tr” keyword is used in Google search, as well as the amount of change over time seen for each, is shown in the table below. These were the pages which were updated and which appeared in search results analysis for the selected period of time.

(All values are the percentage change in each metric.)

    Address                              Impressions   Clicks     Position   CTR
    HTML/Element/tr                      -43.22%       124.57%    2.58%      285.71%
    HTML/Element/table                   26.68%        27.02%     -2.90%     0.00%
    HTML/Element/template                27.02%        9.21%      -15.45%    -14.05%
    API/HTMLTableRowElement              —             —          —          —
    API/HTMLTableRowElement/insertCell   -2.78%        -23.91%    -2.16%     -21.77%
    API/HTMLTableRowElement/rowIndex     —             —          —          —
    HTML/Element/thead                   38.82%        19.70%     0.00%      -13.67%
    HTML/Element/tbody                   42.72%        100.52%    14.19%     40.68%
    HTML/Element/tfoot                   8.90%         11.29%     2.64%      2.18%
    HTML/Element/th                      -50.32%       3.43%      0.39%      106.25%
    HTML/Element/td                      20.05%        40.27%     -8.04%     17.01%
    API/HTMLTableElement/rows            —             —          —          —

The data is interesting. Impression counts are generally up, as are clicks and search engine results page (SERP) position. Interestingly, the main <tr> page, the primary page for this keyword, has lost impressions yet gained clicks, with the CTR skyrocketing by a sizeable 285%. This means that people are seeing better search results when searching just for “tr”, and getting right to that page more often than before we began.

Results for the “html colors” keyword

The table below shows the pages updated for the “html colors” keyword and the amount of change seen in the Google activity for each page.

| Page | Change Impressions (%) | Change Clicks (%) | Change Position (Absolute) | Change Position (%) | Change CTR (%) |
| --- | --- | --- | --- | --- | --- |
| https://developer.mozilla.org/en-US/docs/Learn/Accessibility/CSS_and_JavaScript | +28.61% | +19.88% | -0.99 | -4.54% | -6.78% |
| https://developer.mozilla.org/en-US/docs/Learn/CSS/Styling_boxes/Backgrounds | -43.88% | -38.17% | -2.71 | -21.20% | +10.17% |
| https://developer.mozilla.org/en-US/docs/Learn/CSS/Styling_boxes/Borders | +51.59% | +33.33% | +3.28 | +48.62% | -12.04% |
| https://developer.mozilla.org/en-US/docs/Learn/CSS/Styling_text/Fundamentals | +34.87% | +29.55% | -1.35 | -11.34% | -3.94% |
| https://developer.mozilla.org/en-US/docs/Web/CSS/background-color | +9.03% | +19.26% | -0.17 | -2.46% | +9.38% |
| https://developer.mozilla.org/en-US/docs/Web/CSS/border-color | +36.02% | +36.98% | -0.09 | -1.38% | +0.71% |
| https://developer.mozilla.org/en-US/docs/Web/CSS/color | +23.04% | +23.42% | +0.03 | +0.34% | +0.31% |
| https://developer.mozilla.org/en-US/docs/Web/CSS/color_value | +14.95% | +34.09% | -1.21 | -10.26% | +16.65% |
| https://developer.mozilla.org/en-US/docs/Web/CSS/CSS_Colors/Color_picker_tool | -10.78% | +6.68% | +1.76 | +24.78% | +19.56% |
| https://developer.mozilla.org/en-US/docs/Web/CSS/outline-color | +830.70% | +773.91% | -0.97 | -12.42% | -6.10% |
| https://developer.mozilla.org/en-US/docs/Web/CSS/text-decoration-color | +3254.57% | +3429.41% | -1.45 | -21.98% | +5.21% |
| https://developer.mozilla.org/en-US/docs/Web/HTML/Applying_color | +50.32% | +45.21% | -0.56 | -4.83% | -3.40% |
| https://developer.mozilla.org/en-US/docs/Web/HTML/Element/input/color | +31.15% | +25.57% | -0.44 | -4.44% | -4.25% |

These results are also quite promising, especially since time did not permit me to make as many changes to this content as I’d have liked. The changes for the color value type page are good; a nearly 15% increase in impressions and a very good 34% rise in clicks means a healthy boost to CTR. Ironically, though, our position in search results dropped by about 1.2 points, or 10%.

The approximately 23% increase in both impressions and clicks on the CSS color property is quite good, and I’m pleased by the 10% gain in CTR for the learning area article on styling box backgrounds.

Almost every page sees significant gains in both impressions and clicks (take a look at text-decoration-color, in particular, with over 3000% growth!).

The sea of red (the negative position changes) is worrisome at first glance, but I think what’s happening here is that, because of the improvements in impression counts (that is, how often users see these pages in Google results), users are reaching the page they really want more quickly. Note which pages have a positive change in click-through rate (CTR), the ratio of clicks to impressions. Here they are, in order of highest change in CTR to lowest (a small worked example of the CTR arithmetic follows the list):

  1. https://developer.mozilla.org/en-US/docs/Web/CSS/CSS_Colors/Color_picker_tool
  2. https://developer.mozilla.org/en-US/docs/Web/CSS/color_value
  3. https://developer.mozilla.org/en-US/docs/Learn/CSS/Styling_boxes/Backgrounds
  4. https://developer.mozilla.org/en-US/docs/Web/CSS/background-color
  5. https://developer.mozilla.org/en-US/docs/Web/CSS/text-decoration-color
  6. https://developer.mozilla.org/en-US/docs/Web/CSS/border-color
  7. https://developer.mozilla.org/en-US/docs/Web/CSS/color
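
Since the ordering above is driven by CTR, here’s a quick worked illustration of the arithmetic, with invented numbers: CTR is simply clicks divided by impressions, and the reported change is the percentage difference between the two periods.

```python
# Worked example of the CTR arithmetic; the sample numbers are made up.
def ctr(clicks, impressions):
    return clicks / impressions

def ctr_change_pct(before, after):
    """Percentage change in CTR between two (clicks, impressions) samples."""
    old, new = ctr(*before), ctr(*after)
    return (new - old) / old * 100

# A page seen 1,000 times with 20 clicks, then 900 times with 25 clicks:
# fewer impressions, but a noticeably higher CTR.
print(round(ctr_change_pct((20, 1000), (25, 900)), 2))  # ~38.89
```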

What I believe we’re seeing is this: due to the improvements to SEO (and potentially other factors), all of the color-related pages are getting more traffic. However, the ones in the list above are the ones seeing the most benefit; they’re less prone to showing up at inappropriate times and more likely to be clicked when they are presented to the user. This is a good sign.

Over time, I would hope to improve the SEO further to help bring the search results positions up for these pages, but that takes a lot more time than we’ve given these pages so far.

Uncertainties

For this experiment, the known uncertainties (an oxymoron, but we’ll go with that term anyway) include:

  • As before, the elapsed time was far too short to get viable data for this experiment. We will examine the data again in a few months to see how things are progressing.
  • This project had additional time constraints that led me not to make as many changes as I might have preferred, especially for the “html colors” keyword. The results may have been significantly different had more time been available, but that’s going to be common in real-world work anyway.
  • Overall site growth during the time we ran this experiment also likely inflated the results somewhat.
Decisions

After sharing these results with Kadir and Chris, we came to the following initial conclusions:

  • This is promising, and should be pursued for pages which already have low-to-moderate traffic.
  • Regardless of when we begin general work to perform competitive content analysis and make changes based on it, we should immediately update MDN’s contributor guides to incorporate the recommended changes.
  • The results suggest that content analysis should be a high-priority part of our SEO toolbox. Increasing our internal link coverage and making documents relate to each other creates a better environment for search engine crawlers to accumulate good data.
  • We’ll re-evaluate the results in a few months after more data has accumulated.

If you have questions or comments about this experiment or its results, please feel free to post to this topic on Mozilla’s Discourse forum.

Categorieën: Mozilla-nl planet

The Firefox Frontier: No More Notifications (If You Want)

Mozilla planet - fr, 23/03/2018 - 20:15

Online, your attention is priceless. That’s why every site in the universe wants permission to send you notifications about new stuff. It can be distracting at best and annoying at … Read more

The post No More Notifications (If You Want) appeared first on The Firefox Frontier.

Categorieën: Mozilla-nl planet

Mike Conley: Things I’ve Learned This Week (May 25 – May 29, 2015)

Thunderbird - mo, 01/06/2015 - 07:49
MozReview will now create individual attachments for child commits

Up until recently, anytime you pushed a patch series to MozReview, a single attachment would be created on the bug associated with the push.

That single attachment would link to the “parent” or “root” review request, which contains the folded diff of all commits.

We noticed a lot of MozReview users were (rightfully) confused about this mapping from Bugzilla to MozReview. It was not at all obvious that Ship It on the parent review request would cause the attachment on Bugzilla to be r+’d. Consequently, reviewers used a number of workarounds, including, but not limited to:

  1. Manually setting the r+ or r- flags in Bugzilla for the MozReview attachments
  2. Marking Ship It on the child review requests, and letting the reviewee take care of setting the reviewer flags in the commit message
  3. Just writing “r+” in a MozReview comment

Anyhow, this model wasn’t great, and caused a lot of confusion.

So it’s changed! Now, when you push to MozReview, there’s one attachment created for every commit in the push. That means that when different reviewers are set for different commits, that’s reflected in the Bugzilla attachments, and when those reviewers mark “Ship It” on a child commit, that’s also reflected in an r+ on the associated Bugzilla attachment!

I think this makes quite a bit more sense. Hopefully you do too!

See gps’s blog post for the nitty gritty details, and some other cool MozReview announcements!

Categorieën: Mozilla-nl planet

Rumbling Edge - Thunderbird: 2015-05-26 Calendar builds

Thunderbird - wo, 27/05/2015 - 10:26

Common (excluding Website bugs)-specific: (23)

  • Fixed: 735253 – JavaScript Error: “TypeError: calendar is null” {file: “chrome://calendar/content/calendar-task-editing.js” line: 102}
  • Fixed: 768207 – Make the cache checkbox default-on in the new calendar dialog
  • Fixed: 1049591 – Fix lots of strict warnings
  • Fixed: 1086573 – Lightning and Thunderbird disagree about timezone support in ics files
  • Fixed: 1099592 – Make JS callers of ios.newChannel call ios.newChannel2 in calendar/
  • Fixed: 1149423 – Add Windows timezone names to list of aliases
  • Fixed: 1151011 – Calendar events show up on wrong day when printing
  • Fixed: 1151440 – Choose a color not responsive when creating a New calendar in Lightning 4.0b1
  • Fixed: 1153327 – Run compare-locales with merging for Lightning
  • Fixed: 1156015 – Email scheduling fails for recipients with URN id
  • Fixed: 1158036 – Support sendMailTo for URN type attendees
  • Fixed: 1159447 – TEST-UNEXPECTED-FAIL | xpcshell-icaljs.ini:calendar/test/unit/test_extract.js
  • Fixed: 1159638 – Getter fails in calender-migration-dialog on first run after installation
  • Fixed: 1159682 – Provide a more appropriate “learn more” page on integrated Lightning firstrun
  • Fixed: 1159698 – Opt-out dialog has a button for “disable”, but actually the addon is removed
  • Fixed: 1160728 – Unbreak Lightning 4.0b4 beta builds
  • Fixed: 1162300 – TEST-UNEXPECTED-FAIL | xpcshell-libical.ini:calendar/test/unit/test_alarm.js | xpcshell return code: 0
  • Fixed: 1163306 – Re-enable libical tests and disable ical.js in nightly builds when binary compatibility is back
  • Fixed: 1165002 – Lightning broken, tries to load libical backend although “calendar.icaljs” defaults to “true”
  • Fixed: 1165315 – TEST-UNEXPECTED-FAIL | xpcshell-icaljs.ini:calendar/test/unit/test_bug759324.js | xpcshell return code: 1 | ###!!! ASSERTION: Deprecated, use NewChannelFromURI2 providing loadInfo arguments!
  • Fixed: 1165497 – TEST-UNEXPECTED-FAIL | xpcshell-icaljs.ini:calendar/test/unit/test_alarmservice.js | xpcshell return code: -11
  • Fixed: 1165726 – TEST-UNEXPECTED-FAIL | /builds/slave/test/build/tests/mozmill/testBasicFunctionality.js | testBasicFunctionality.js::testSmokeTest
  • Fixed: 1165728 – TEST-UNEXPECTED-FAIL | xpcshell-icaljs.ini:calendar/test/unit/test_bug494140.js | xpcshell return code: -11

Sunbird will no longer be actively developed by the Calendar team.

Windows builds Official Windows

Linux builds Official Linux (i686), Official Linux (x86_64)

Mac builds Official Mac

Categorieën: Mozilla-nl planet

Rumbling Edge - Thunderbird: 2015-05-26 Thunderbird comm-central builds

Thunderbird - wo, 27/05/2015 - 10:25

Thunderbird-specific: (54)

  • Fixed: 401779 – Integrate Lightning Into Thunderbird by Default and Ship Thunderbird with Lightning Enabled
  • Fixed: 717292 – Spell check language setting for subject and body not synchronized, but temporarily appears so when changing language and depending on focus (confusing ux)
  • Fixed: 914225 – Support hotfix add-on in Thunderbird
  • Fixed: 1025547 – newmailaccount/jquery.tmpl.js, line 123: reference to undefined property def[1]
  • Fixed: 1088975 – Answering mail with sendername containing encoded special chars and comma creates two “To”-entries
  • Fixed: 1101237 – Remove distribution directory during install
  • Fixed: 1109178 – Thunderbird OAuth implementation does not work with Evernote
  • Fixed: 1110166 – Port |Bug 1102219 – Rename String.prototype.contains to String.prototype.includes| to comm-central
  • Fixed: 1113097 – Fix misuse of fixIterator
  • Fixed: 1130854 – Package Lightning with Thunderbird
  • Fixed: 1131997 – Adapt for Debugger Server code for changes in bug 1059308
  • Fixed: 1135291 – Update chat log entries added to Gloda since bug 955292 to use relative paths
  • Fixed: 1135588 – New conversations get indexed twice by gloda, leading to duplicate search results
  • Fixed: 1138154 – Plugins default to “always activate” in Thunderbird
  • Fixed: 1142879 – [meta] track Mozilla-central (Core) issues that we want to have fixed in TB38
  • Fixed: 1146698 – Chat Messages added to logs just before shutdown may not be indexed by gloda
  • Fixed: 1148330 – Font indicator doesn’t update when cursor is placed in text where core returns sans-serif (Windows). Serif and monospace don’t work (Linux).
  • Fixed: 1148512 – TEST-UNEXPECTED-FAIL | mailnews/imap/test/unit/test_dod.js | xpcshell return code: 0||1 | streamMessages – [streamMessages : 94] false == true | application crashed [@ mozalloc_abort(char const * const)]
  • Fixed: 1149059 – splitter in compose window can be resized down to completely obscure composition area
  • Fixed: 1151206 – Using a theme hides minimize, maximize and close button in composer window [Mac]
  • Fixed: 1151475 – Remove use of expression closures in mail/
  • Fixed: 1152299 – [autoconfig] Cosmetic changes for WEB.DE config
  • Fixed: 1152706 – Upgrade to Correspondents column (combined To/From column) too agressive
  • Fixed: 1152796 – chrome://messenger/content/folderDisplay.js, line 697: TypeError: this._savedColumnStates.correspondentCol is undefined
  • Fixed: 1152926 – New mail sound preview doesn’t work for default system sound on Mac OS X
  • Fixed: 1154737 – Permafail: TEST-UNEXPECTED-FAIL | toolkit/components/telemetry/tests/unit/test_TelemetryPing.js | xpcshell return code: 0
  • Fixed: 1154747 – TEST-UNEXPECTED-FAIL | /builds/slave/test/build/tests/mozmill/session-store/test-session-store.js | test-session-store.js::test_message_pane_height_persistence
  • Fixed: 1156669 – Trash folder duplication while using IMAP with localized TB
  • Fixed: 1157236 – In-content dialogs: Port bug 1043612, bug 1148923 and bug 1141031 to TB
  • Fixed: 1157649 – TEST-UNEXPECTED-FAIL | dom/push/test/xpcshell/test_clearAll_successful.js (and most other push tests)
  • Fixed: 1158824 – Port bug 138009 to fix packaging errors | Missing file(s): bin/defaults/autoconfig/platform.js
  • Fixed: 1159448 – Thunderbird ignores proxy settings on POP3S protocol
  • Fixed: 1159627 – resource:///modules/dbViewWrapper.js, line 560: SyntaxError: unreachable code after return statement
  • Fixed: 1159630 – components/glautocomp.js, line 155: SyntaxError: unreachable code after return statement
  • Fixed: 1159676 – mailnews/mime/jsmime/test/test_custom_headers.js | run_next_test 0 – TypeError: _gRunningTest is undefined at /builds/slave/test/build/tests/xpcshell/head.js:1435 (and other jsmime tests)
  • Fixed: 1159688 – After switching/changing the window layout, dragging the splitter between threadpane and messagepane can create gray/grey area/space (misplaced notificationbox)
  • Fixed: 1159815 – Take bug 1154791 “Inline spell checker loses red underlines after a backspace is used – take two” in Thunderbird 38
  • Fixed: 1159817 – Take “Bug 1100966 – Inline spell checker loses red underlines after a backspace is used” in Thunderbird 38
  • Fixed: 1159834 – Consider taking “Bug 756984 – Changing location in editor doesn’t preserve the font when returning to end of text/line” in Thunderbird 38
  • Fixed: 1159923 – Take bug 1140105 “Can’t query for a specific font face when the selection is collapsed” in TB 38
  • Fixed: 1160105 – Fix strict mode warnings in protovis-r2.6-modded.js
  • Fixed: 1160106 – “Searching…” spinner at the bottom of gloda search results never goes away
  • Fixed: 1160114 – Strict mode warnings on faceted search
  • Fixed: 1160805 – Missing Windows and Linux nightly builds, build step set props: previous_buildid fails
  • Fixed: 1161162 – “Join Chat” doesn’t focus the newly joined MUC
  • Fixed: 1162396 – Take bug 1140617 “Pasting an image loses the composition style” in TB38
  • Fixed: 1163086 – Take bug 967494 “changing spellcheck language in one composition window affects all open and new compositions” in TB38
  • Fixed: 1163299 – “TypeError: getBrowser(…) is null” in contentAreaClick with Lightning installed and started in calendar view
  • Fixed: 1163343 – Incorrectly formatted error message “sending failed”
  • Fixed: 1164415 – Error in comment for imapEnterServerPasswordPrompt
  • Fixed: 1164658 – TypeError: Cc[‘@mozilla.org/weave/service;1’] is undefined at resource://gre/modules/FxAccountsWebChannel.jsm:227
  • Fixed: 1164707 – missing toolkit_perfmonitoring.xpt in aurora builds
  • Fixed: 1165152 – Take bug 1154894 in TB 38 branch: Disable test_plugin_default_state.js so Thunderbird can ship with plugins disabled by default
  • Fixed: 1165320 – TEST-UNEXPECTED-FAIL | /builds/slave/test/build/tests/mozmill/notification/test-notification.js

MailNews Core-specific: (30)

  • Fixed: 610533 – crash [@ nsMsgDatabase::GetSearchResultsTable(char const*, int, nsIMdbTable**)] with virtual folder
  • Fixed: 745664 – Rename Address book aaa to aaa_test, delete another address book bbb, and renamed address book aaa_test will lose its name and appear deleted after restart (dataloss! involving localized names)
  • Fixed: 777770 – get rid of nsVoidArray from /mailnews
  • Fixed: 786141 – Use nsIFile.exists() instead of stat to check the existence of the file
  • Fixed: 1069790 – Email addresses with parenthesis are not pretty-printed anymore
  • Fixed: 1072611 – Ctrl+P not working from Composition’s Print Preview window
  • Fixed: 1099587 – Make JS callers of ios.newChannel call ios.newChannel2 in mail/ and mailnews/
  • Fixed: 1130248 – |To: “foo@example.com” <foo@example.com>| becomes |”foo@example.comfoo”@example.com| when I compose mail to it
  • Fixed: 1138220 – some headers are not not properly capitalized
  • Fixed: 1141446 – Behaviour of malformed rfc2047 encoded From message header inconsistent
  • Fixed: 1143569 – User-agent error when posting to NNTP due to RFC5536 violation of Tb (user-agent header is folded just after user-agent:, “user-agent:[CRLF][SP]Mozilla…”)
  • Fixed: 1144693 – Disable libnotify usage on Linux by default for new-mail notifications (doesn’t always work after bug 858919)
  • Fixed: 1149320 – fix compile warnings in mailnews/extensions/
  • Fixed: 1150891 – Port package-manifest.in changes from Bug 1115495 – Part 2: PAC generator for browsing and system wide proxy
  • Fixed: 1151782 – Inputting 29th Feb as a birthday in the addressbook contact replaces it with 1st Mar.
  • Fixed: 1152364 – crash in Address Book via nsAbBSDirectory::GetChildNodes nsCOMArrayEnumerator::operator new(unsigned int, nsCOMArray_base const&)
  • Fixed: 1152989 – Account Manager Extensions broken in Thunderbird 37/38
  • Fixed: 1154521 – jsmime fails on long references header and e-mail gets sent and stored in Sent without headers
  • Fixed: 1155491 – Support autoconfig and manual config of gmail IMAP OAuth2 authentication
  • Fixed: 1155952 – Nesting level does not match indentation
  • Fixed: 1156691 – GUI “Edit filters”: Conditions/actions (for specfic accounts) not visible
  • Fixed: 1156777 – nsParseMailbox.cpp:505:55: error: ‘do_QueryObject’ was not declared in this scope
  • Fixed: 1158501 – Port bug 1039866 (metro code removal) and bug 1085557 (addition of socorro symbol upload API)
  • Fixed: 1158751 – Port NO_JS_MANIFEST changes | mozbuild.frontend.reader.SandboxValidationError: calendar/base/backend/icaljs/moz.build
  • Fixed: 1159255 – Build error: MSVC_ENABLE_PGO = True is not permitted to be used in mailnews/intl/moz.build
  • Fixed: 1159626 – chrome://messenger/content/accountUtils.js, line 455: SyntaxError: unreachable code after return statement
  • Fixed: 1160647 – Port |Bug 1159972 – Remove the fallible version of PL_DHashTableInit()| to comm-central
  • Fixed: 1163347 – Don’t require scope in ispdb config for OAuth2
  • Fixed: 1165737 – Fix usage of NS_LITERAL_CSTRING in mailnews, port Bug 1155963 to comm-central
  • Fixed: 1166842 – Re-enable binary extensions for comm-central

Windows builds Official Windows, Official Windows installer

Linux builds Official Linux (i686), Official Linux (x86_64)

Mac builds Official Mac

Categorieën: Mozilla-nl planet
