Mozilla Nederland
The Dutch Mozilla community

Planet Mozilla - http://planet.mozilla.org/

Hacks.Mozilla.Org: I built something with A-Frame in 2 days (and you can too)

Wed, 06/09/2017 - 17:03

A few months ago, I had the opportunity to try out several WebVR experiences for the first time, and I was blown away by the possibilities. Using just a headset and my Firefox browser, I was able to play games, explore worlds, paint, create music and so much more. All through the open web. I was hooked.

A short while later, I was introduced to A-Frame, a web framework for building virtual reality experiences. The “Hello World” demo is a mere 15 lines of code. This blew my mind. Building an experience in Virtual Reality seemed like a task reserved for super developers, or that guy from Mr. Robot. After glancing through the A-Frame documentation, I realized that anyone with a little front-end experience can create something for Virtual Reality…even me – a marketing guy who likes to build websites in his spare time.
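For reference, that “Hello World” scene from the A-Frame site looks roughly like this (quoted approximately from the docs; the 0.6.0 version number matches the era, so check aframe.io for the current release):

    <html>
      <head>
        <script src="https://aframe.io/releases/0.6.0/aframe.min.js"></script>
      </head>
      <body>
        <a-scene>
          <a-box position="-1 0.5 -3" rotation="0 45 0" color="#4CC3D9"></a-box>
          <a-sphere position="0 1.25 -5" radius="1.25" color="#EF2D5E"></a-sphere>
          <a-cylinder position="1 0.75 -3" radius="0.5" height="1.5" color="#FFC65D"></a-cylinder>
          <a-plane position="0 0 -4" rotation="-90 0 0" width="4" height="4" color="#7BC8A4"></a-plane>
          <a-sky color="#ECECEC"></a-sky>
        </a-scene>
      </body>
    </html>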

My team had an upcoming presentation to give. Normally we would create yet another slide deck. This time, however, I decided to give A-Frame a shot, and use Virtual Reality to tell our story and demo our work.

Within two days I was able to teach myself how to build this (slightly modified for sharing purposes). You can view the GitHub repo here.

The result was a presentation that was fun and unique. People were far more engaged in Virtual Reality than they would have been watching us flip through slides on a screen.

This isn’t a “how-to get started with A-Frame” post (there are plenty of great resources for that). I did, however, find solutions for a few “gotchas” that I’ll share below.

Walking through walls

One of the first snags I ran into was that the camera would pass through objects and walls. After some research, I came across a-frame-extras. It includes an add-on called “kinematic-body” that helped solve this issue for me.
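A minimal sketch of the fix, assuming the aframe-extras and physics scripts are loaded (the component names here reflect my reading of the aframe-extras docs, so double-check them there):

    <!-- The player rig: kinematic-body stops the camera from passing
         through anything marked as a static-body. -->
    <a-entity camera kinematic-body position="0 1.6 0"></a-entity>

    <!-- A wall the player can no longer walk through. -->
    <a-box static-body position="0 1.5 -3" width="4" height="3" depth="0.25"></a-box>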

Controls

A-frame extras also has helpers for controls. It gave me an easy way to implement controls for keyboard, mouse, touchscreen, etc.
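If I remember the component name correctly, wiring up keyboard, mouse, gamepad and touch input was a one-attribute affair:

    <!-- universal-controls (from aframe-extras) bundles the input schemes. -->
    <a-entity camera universal-controls kinematic-body position="0 1.6 0"></a-entity>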

Generating rooms

It didn’t take me long to figure out how to create and position walls to create a room. I didn’t just want a room though. I wanted multiple rooms and hallways. Manually creating them would take forever. During my research I came across this post, where the author created a maze using an array of numbers. This inspired me to generate my own map using a similar method:

    const map = {
      "data": [ 0, 0, 0, 0, 0, 0, 0, 0, 2, 2, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 2, 0, 0, 0, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 2, 0, 0, 0, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 2, 0, 0, 0, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 0, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 4, 4, 4, 0, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 4, 0, 0, 0, 4, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 4, 0, 0, 0, 4, 4, 4, 1, 0, 8, 0, 0, 0, 0, 0, 1, 0, 0, 0, 4, 0, 0, 0, 0, 0, 0, 3, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 4, 4, 4, 4, 4, 4, 1, 1, 1, 1, 0, 1, 1, 1, 0, 0, 0, 0 ],
      "height": 19,
      "width": 19
    }

0 = no walls
1 – 4 = walls with various textures
8 = user start position
9 = log position to console

This would allow me to try different layouts, start at different spots around the map, and quickly get coordinates for positioning items and rooms (you’ll see why this is useful below). You can view the rest of the code here.
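A sketch of how a map like this can drive scene generation (my own illustration, not the code from the repo; the texture ids and the #player entity are assumptions):

    // Walk the flat array, converting each index into (x, z) grid coordinates.
    const scene = document.querySelector('a-scene');
    map.data.forEach((cell, i) => {
      const x = i % map.width;
      const z = Math.floor(i / map.width);
      if (cell >= 1 && cell <= 4) {
        // 1-4: place a wall box with the matching texture.
        const wall = document.createElement('a-box');
        wall.setAttribute('position', `${x} 1.5 ${z}`);
        wall.setAttribute('width', '1');
        wall.setAttribute('height', '3');
        wall.setAttribute('depth', '1');
        wall.setAttribute('material', `src: #wall-texture-${cell}`);
        scene.appendChild(wall);
      } else if (cell === 8) {
        // 8: move the player rig to the start position.
        document.querySelector('#player').setAttribute('position', `${x} 1.6 ${z}`);
      } else if (cell === 9) {
        // 9: log the coordinates -- handy for positioning items and rooms.
        console.log(`cell ${i} -> x: ${x}, z: ${z}`);
      }
    });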

Duplicating rooms

Once I created a room, I wanted to recreate a variation of this room at different locations around the map. This is where I learned to embrace the <a-entity> object. When you use <a-entity> as a container, it allows entities inside the container to be positioned relative to that parent entity object. I found this post about relative positioning to be helpful in understanding the concept. This allowed me to duplicate the code for a room, and simply provide new position coordinates for the parent entity.
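In practice that looks something like this (illustrative markup, not the repo’s actual rooms):

    <!-- The same room markup twice: the children keep their coordinates,
         and only the parent <a-entity>'s position changes. -->
    <a-entity id="room-1" position="0 0 0">
      <a-box position="0 1.5 -2" width="4" height="3" depth="0.25"></a-box>
      <a-box position="0 1.5 2" width="4" height="3" depth="0.25"></a-box>
    </a-entity>
    <a-entity id="room-2" position="12 0 6">
      <a-box position="0 1.5 -2" width="4" height="3" depth="0.25"></a-box>
      <a-box position="0 1.5 2" width="4" height="3" depth="0.25"></a-box>
    </a-entity>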

Conclusion

I have no doubt that there are better and more efficient ways to create something like this, but the fact that a novice like myself was able to build something in just a couple of days speaks volumes to the power of A-Frame and WebVR. The A-Frame community also deserves a lot of credit. I found libraries, code examples, and blog posts for almost every issue and question I had.

Now is the perfect time to get started with WebVR and A-Frame, especially as it’s supported for anyone using the latest version of Firefox on Windows. Check out the website, join the community, and start building.


Daniel Stenberg: curl author activity illustrated

Wed, 06/09/2017 - 15:31

At the time of each commit, check how many unique authors had a change committed within the previous 120, 90, 60, 30 and 7 days. Run the script on the curl git repository and then plot a graph of the data, ranging from 2010 until today. This covers just under 10,000 commits.
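As a rough sketch of the idea (my illustration, not the actual script), counting the unique authors active in the 60 days before HEAD of a git checkout looks like this:

    # Unique commit authors (by email) active in the previous 60 days, at HEAD.
    # git-authors-active.pl does this at every commit, for each time window.
    git log --since="60 days ago" --format='%ae' | sort -u | wc -l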

[Graph: unique active curl authors over time, 2010 to today]

git-authors-active.pl is the little stand-alone script I wrote and used for this – it should work fine for any git repository. I then made the graph from the output using LibreOffice.


The Mozilla Blog: Mozilla and the Washington Post Are Reinventing Online Comments

Wed, 06/09/2017 - 15:00
To engage readers, build community, and strengthen journalism, Mozilla’s open-source commenting platform will be integrated across washingtonpost.com this summer


Digital journalism has revolutionized how we engage with the news, from the lightning speed at which it’s delivered to the different formats on offer.

But the comments section beneath that journalism? It’s… broken. Trolls, harassment, enmity and abuse undermine meaningful discussion and push many people away. Many major newsrooms are removing their comments. Many news sites are launching without them.

Instead, newsrooms are directing interaction and engagement to social media. As a result, tools are limited, giant corporations control the data, and news organizations cannot build a direct relationship with their audience. 

At Mozilla, we’re not giving up on online comments. We believe that engaging readers and building community around the news strengthens not just journalism, but also open society. We believe comments are a fundamental part of the decentralized web.

Mozilla has been researching, testing, and building software in this area since 2015. Today, our work is taking a huge step forward as the Washington Post integrates Talk — Mozilla’s open-source commenting platform — across washingtonpost.com.

Talk is currently deployed across the Washington Post’s Politics, Business, and The Switch (technology) sections, and will roll out to more sections in the coming weeks.

[Image: Talk, the open-source commenting software developed by Mozilla]

What is Talk?

Talk is developed by The Coral Project, a Mozilla creation that builds open-source tools to make digital journalism more inclusive and more engaging, both for audience members and journalists. Starting this summer, Talk will also be integrated across Fairfax Media’s websites in Australia, including the Sydney Morning Herald and The Age. One of The Coral Project’s other tools, Ask, is currently being used by 13 newsrooms, including the Miami Herald, Univision, and PBS Frontline.

“Trust in journalism relies on building effective relationships with your audience,” says Andrew Losowsky, project lead of The Coral Project. “Talk rethinks how moderation, comment display and conversation can function on news websites. It encourages more meaningful interactions between journalists and the people they serve.”

“Talk is informed by a huge amount of research into online communities,” Losowsky adds. “We’ve commissioned academic studies and held workshops around the world to find out what works, and also published guides to help newsrooms change their strategies. We’ve interviewed more than 300 people from 150 newsrooms in 30 countries, talking to frequent commenters, people who never comment, and even trolls. We’ve learned how to turn comments — which have so much potential — into a productive space for everyone.”

“Commenters and comment viewers are among the most loyal readers The Washington Post has,” said Greg Barber, The Post’s director of newsroom product. “Through our work with Mozilla, The New York Times, and the Knight Foundation in The Coral Project, we’ve invested in a set of tools that will help us better serve them, powering fruitful discussion and debate for years to come.”

The Coral Project was created thanks to a generous grant from the Knight Foundation and is currently funded by the Democracy Fund, the Rita Allen Foundation, and Mozilla. It also offers hosting and consulting services for newsrooms who need support in running their software.

Here’s what makes Talk different

It’s filled with features that improve interactions, including functions that show the best comments first, ignore specific users, find great commenters, give badges to staff members, filter out unreliable flaggers, and offer a range of audience reactions.

You own your data. Unlike the most popular systems, every organization using Talk runs its own version of the software, and keeps its own data. Talk doesn’t contain any tracking, or digital surveillance. This is great for journalistic integrity, good for privacy, and important for the internet.

It’s fast. Talk is small — about 300kb — and lightweight. Only a small number of comments initially load, to keep the page load low. New comments and reactions update instantaneously.

It’s flexible. Talk uses a plugin architecture, so each newsroom can make their comments act in a different way. Plugins can be written by third parties — the Washington Post has already written and open sourced several — and applied within the embed code, in order to change the functionality for particularly difficult topics.

It’s easy to moderate. Based on feedback from moderators at 12 different companies, we’ve created a simple moderation system with keyboard shortcuts and a feature-rich configuration.

It’s great for technologists. Talk is fully extensible with a RESTful and Graph API, and a plugin architecture that includes webhooks. The CSS is also fully customizable.

It’s 100% free. The code is public and available for you to download and run. And if you want us to help you host or integrate Talk into your site, we offer paid services that support the project.

Learn more about The Coral Project.

The post Mozilla and the Washington Post Are Reinventing Online Comments appeared first on The Mozilla Blog.


Ehsan Akhgari: Identifying regressions when working on bugs

Wed, 06/09/2017 - 06:14

Many of the patches that we write are fixes to things that have broken as a result of a change, often known as regressions.  An important aspect of a high quality release is for us to be able to identify and fix as many of these regressions as we can for each release, and this requires collaboration between people who file bugs, those who triage them, people who fix them, and of course the release managers.

As engineers, one of the things we can do when fixing a bug is to double check whether the bug was introduced as a result of a recent code change, and based on that decide whether the fix needs to be backported to older branches.  In this post, I’m going to talk about how I usually do this.

Identifying the source of a bug

Sometimes it is clear from the description of a bug that the bug didn’t exist previously, as is often the case for severely broken scenarios.  It’s a good practice to always ask yourself “did this used to work?”  If the answer is yes and you have a way to reproduce the bug, then we need to figure out what code change broke things in the first place.  We have a really helpful tool called mozregression which allows you to bisect the history of the codebase to find the offending code change.  There is documentation available for using the tool, but to summarize: it walks you back through the history using a binary search algorithm to allow you to relatively quickly find the specific code change that introduced a regression.  This tool handles downloading the old Firefox versions and running them for you — all you need to do is to try to reproduce the bug in each Firefox instance it opens up and then tell the tool whether that version was good or bad.  Mozregression also handles the creation of new profiles for each build so that you don’t have to worry about downgrades when you go from one build to another, or the impact of leaving the profile in a dirty state while testing during bisection.  All you need to have at hand to start using it is knowing which version of Firefox shows a bug, and which version doesn’t.  (When I don’t know when a bug first appeared, I usually use the --launch command to run a super old version to test whether the problem exists there, to find a good base version; it doesn’t matter much how wide a regression range you start with.)
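To make that concrete, here is what a hypothetical mozregression session might look like (the --good/--bad/--launch flags are mozregression’s; the version numbers are invented for illustration):

    # Bisect between a Firefox release known to be good and one known to be bad.
    $ mozregression --good 53 --bad 55
    # mozregression downloads each candidate build, opens it with a fresh
    # profile, and asks whether the bug reproduces ('good' or 'bad') until
    # it has narrowed things down to a small changeset range.

    # When you don't know a good starting range, launch a single old build:
    $ mozregression --launch 2016-01-01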

But sometimes there are no specific steps to reproduce available, or it may not be obvious that a bug is a regression.  In those cases, when you have a fix for the bug at hand, it is worth taking a few extra minutes to look at the code that you have modified in the fix to see where the code changes came from.  This can be done using a blame/annotate source code tool.  These tools show which changeset each line of code comes from.  Depending on your favorite revision control system (hg or git) and your favorite source code editor/IDE (Vim, Emacs, Eclipse, VS Code, Sublime, etc.) you can find any number of plugins and many online tutorials on how to view annotated source code.  An easy way, and the one that I use most of the time these days, is looking at annotations through Searchfox.  But you may also want to familiarize yourself with a plugin that works with your workflow to assist you in navigating the history of the code.

For example, when you hover the left-hand sidebar (I’ve sometimes heard this referred to as “the gutter”) when viewing a file in Searchfox, you will see an info box like this:

[Screenshot: Searchfox history tooltip box]

There are a few helpful links here for following the history of the code.  “Show earliest version with this line” is useful to look at the revision of this file which introduced the line you hovered.  “Show latest version without this line” is helpful to look at the parent revision of this file with respect to the aforementioned changeset.  Using these two links you can navigate backwards in history to get to the point in time when a part of the code was first introduced.  Often times you need to walk several steps back in the history before you get to the correct revision since code can be moved around, re-indented, modified in slight ways that don’t matter to you at the time, etc.

Once you get to the final version of the file that introduced the code in question, you will see something like this at the top of the page:

[Screenshot: Searchfox changeset header]

This shows you the changeset SHA1 for the problematic code in question (for demonstration purposes I picked a completely arbitrary changeset which, as far as I’m aware, has not caused any regressions!).  By clicking on the changeset identifier (bc62a859 in this example), and following the “hg” link from there, you can get to a table showing you which release milestone it landed in:

[Screenshot: hg.mozilla.org changeset info section]

This information is also available on the bug (which you can get to by clicking the link with the bug number next to “bugs” – 1358447 in this example):

[Screenshot: information about what release a bug got fixed in]

Marking the bug as a regression

Once you identify the version of Firefox that first had the broken code checked into it, it is helpful to communicate this information to the release management team.  Doing this involves a simple three-step process:

  • Add the regression keyword to the bug if it doesn’t have it already.
  • Based on the version of Firefox that was first impacted by the bug found in the previous step, under Firefox Tracking Flags, set the Tracking flag for the respective versions to “?”.
  • Similarly, set the Status flag for the affected versions to “affected”, and for the older version(s) to “unaffected”.  Bugzilla will help you fill out a comment describing to the release management team why this tracking is needed.

[Screenshot: marking a bug as a regression]


Cameron Kaiser: TenFourFox FPR3b1 available

Wed, 06/09/2017 - 06:10
TenFourFox Feature Parity Release 3 beta 1 is now available (hashes, downloads, release notes). This release has two major updates: first, a whole lot more of the browser has AltiVec in it. All of the relevant call sites that use the slower OS X memory search function memchr() were either converted to the VMX-accelerated version that we introduced for JavaScript in FPR2, or, if the code simply checks to see if a character is there but doesn't need to know where it is, our VMX-accelerated haschr(). This occurs in lots of places, including event handling, font validation and even the network stack; not all of them are hot, but all of them benefit.

The second major change is additional JavaScript ES6 and ES7 compatibility. It's not sufficient for us simply to copy over later versions of JavaScript from browser versions between 45 and 52; besides the fact they may require platform support we don't implement, they don't have our JIT, our PowerPC-specific optimizations and our later updates which would need to be backported and merged, they don't have our accumulated security patches, and they don't have some of the legacy features we need to continue to support 45-era add-ons (particularly legacy generator and array comprehensions). This hybrid engine maintains backwards compatibility but has expanded syntax that fixes some issues with Dropbox (though we still have a couple other glitches to smoke out), Amazon Music and Beta for Pnut, and probably a number of other sites. There is a limit to how much I can cram into the engine and there is a very large frontend refactor around Fx51 which will probably not be easily backported, but there should be more improvements I can squeeze in.

There was also supposed to be a new feature for delaying video decoding until the rest of the page had loaded to improve throughput on YouTube and some other sites, but YouTube introduced its new site design while I was testing the feature, and unfortunately the "lazy loading" technique they appear to be using now means the browser cannot deterministically compute when video will start competing with layout for resources. I'm thinking of a way to retool this but it will not be an enabled part of FPR3. One idea is to forge dropped frames into MSE's statistics early so it shifts to a lower quality stream for a period of time as a "fast start;" another might be to decouple the media state machine from the decoder more completely. I haven't decided how I will attack this problem yet.

In miscellaneous changes, even after desperate begging Imgur would not fix their site sniffer to stop giving us a mobile version using the default TenFourFox user agent (this never used to happen until recently, btw), even if just an image by itself were requested. I got sick of being strung along by their tech support tickets, so this version just doesn't send any user agent to any Imgur site, unless you explicitly select something other than the default. Take that, Imgur. The reason I decided to do this for Imgur specifically is because their mobile site actually causes bugs in TenFourFox due to a hard CoreGraphics limit I can't seem to get around, so serving us the mobile site inappropriately is actually detrimental as opposed to merely annoying. Other miscellaneous changes include some widget tune-ups, more removal of 10.7+ specific code, and responsiveness tweaks to the context menu and awesome bar.

Last but not least, this release has a speculative fix for long-running issue 72, where 10.5 systems could end up with a frozen top menu bar after cycling repeatedly through pop-up menus. You'll notice this does not appear in the source commits yet because I intend to back it out immediately if it doesn't fix the problem (it has a small performance impact even on 10.4, where this issue does not occur). If you are affected by this issue and the optimized build doesn't fix your problem, please report logging from the debug version to the GitHub issue when the issue triggers. If it does fix it, however, I will commit the patch to the public repo and it will become a part of the widget library.

Other than that, look for the final release on or about September 26. Post questions, concerns and feedback in the comments.


Emma Humphries: Triage Summary 2017-09-05

Wed, 06/09/2017 - 03:24

It's the weekly report on the state of triage in Firefox-related components.

Marking Bugs for the Firefox 57 Release

The bug that will overshadow all the hard work we put into the Firefox 57 release has probably already been filed. We need to find it and fix it. If you think a bug might affect users in the 57 release, please set the correct affected Firefox versions in the bug, and then set the tracking-request flag to request tracking of the bug by the release management team.

Poll Result

It’s not a large sample, but by an 8-1-1 margin, you said you’d like a logged-in BMO home page like: https://fitzgen.github.io/bugzilla-todos/.

Now, the catch. We don’t have staff to work on this, but I’ve filed a bug, https://bugzilla.mozilla.org/show_bug.cgi?id=1397063 to do this work. If you’d like undying gratitude, and some WONTFIX swag, grab this bug.

Hotspots

The components with the most untriaged bugs remain the JavaScript Engine and Build Config.

| Rank | Component | Last Week | This Week |
|------|-----------|-----------|-----------|
| 1 | Core: JavaScript Engine | 477 | 472 |
| 2 | Core: Build Config | 459 | 455 |
| 3 | Firefox for Android: General | 408 | 415 |
| 4 | Firefox: General | 254 | 258 |
| 5 | Core: General | 241 | 235 |
| 6 | Core: JavaScript: GC | 180 | 175 |
| 7 | Core: XPCOM | 171 | 172 |
| 8 | Core: Networking | 159 | 167 |
|   | All Components | 8,822 | 8,962 |

Please make sure you’ve made it clear what, if anything, will happen with these bugs.

Not sure how to triage? Read https://wiki.mozilla.org/Bugmasters/Process/Triage.

Next Release

| Version | 56 | 56 | 57 | 57 | 57 | 57 |
|---------|----|----|----|----|----|----|
| Date | 7/31 | 8/7 | 8/14 | 8/21 | 8/21 | 9/5 |
| Untriaged this Cycle | 4,479 | 479 | 835 | 1,196 | 1,481 | 1,785 |
| Unassigned Untriaged this Cycle | 3,674 | 356 | 634 | 968 | 1,266 | 1,477 |
| Affected this Release | 139 | 125 | 123 | 119 | 42 | 83 |
| Enhancements | 103 | 3 | 5 | 11 | 17 | 15 |
| Orphaned P1s | 192 | 196 | 191 | 183 | 18 | 23 |
| Stalled P1s | 179 | 157 | 152 | 155 | 13 | 23 |

What should we do with these bugs? Bulk close them? Make them into P3s? Bugs without decisions add noise to our system, cause despair in those trying to triage bugs, and leave the community wondering if we listen to them.

Methods and Definitions

In this report I talk about bugs in Core, Firefox, Firefox for Android, Firefox for iOS, and Toolkit which are unresolved, not filed from treeherder using the intermittent-bug-filer account*, and have no pending needinfos.

By triaged, I mean a bug has been marked as P1 (work on now), P2 (work on next), P3 (backlog), or P5 (will not work on but will accept a patch).

https://wiki.mozilla.org/Bugmasters#Triage_Process

A triage decision is not the same as a release decision (status and tracking flags.)

https://mozilla.github.io/triage-report/#report

Untriaged Bugs in Current Cycle

Bugs filed since the start of the Firefox 57 release cycle (August 2nd, 2017) which do not have a triage decision.

https://mzl.la/2wzJxLP

Recommendation: review bugs you are responsible for (https://bugzilla.mozilla.org/page.cgi?id=triage_owners.html) and make a triage decision, or RESOLVE.

Untriaged Bugs in Current Cycle Affecting Next Release

Bugs marked status_firefox56 = affected and untriaged.

https://mzl.la/2wzjHaH

Enhancements in Release Cycle

Bugs filed in the release cycle which are enhancement requests, severity = enhancement, and untriaged.

https://mzl.la/2wzCBy8

​Recommendation: ​product managers should review and mark as P3, P5, or RESOLVE as WONTFIX.

High Priority Bugs without Owners

Bugs with a priority of P1, which do not have an assignee, have not been modified in the past two weeks, and do not have pending needinfos.

https://mzl.la/2sJxPbK

Recommendation: review priorities and assign bugs, re-prioritize to P2, P3, P5, or RESOLVE.

Inactive High Priority Bugs

There are 159 bugs with a priority of P1, which have an assignee, but have not been modified in the past two weeks.

https://mzl.la/2u2poMJ

Recommendation: review assignments, determine if the priority should be changed to P2, P3, P5 or RESOLVE.

Stale Bugs

Bugs in need of review by triage owners. Updated weekly.

https://mzl.la/2wNyONP

* New intermittents are filed as P5s, and we are still cleaning up bugs after this change. See https://bugzilla.mozilla.org/show_bug.cgi?id=1381587, https://bugzilla.mozilla.org/show_bug.cgi?id=1381960, and https://bugzilla.mozilla.org/show_bug.cgi?id=1383923.

If you have questions or enhancements you want to see in this report, please reply to me here, on IRC, or Slack and thank you for reading.




Mozilla Marketing Engineering & Ops Blog: Kuma Report, August 2017

Wed, 06/09/2017 - 02:00

Here’s what happened in August in Kuma, the engine of MDN Web Docs:

  • Launched beta of interactive examples
  • Continued work on the AWS migration
  • Prepared for KumaScript translations
  • Refined the Browser Compat Data schema
  • Shipped tweaks and fixes

Here’s the plan for September:

  • Establish maintenance mode in AWS
Done in August

Launched beta of interactive examples

On August 29, we launched the interactive examples. We’re starting by showing them to 50% of anonymous users, to measure the difference in site speed. You can also visit the new pages directly. See the interactive editors in beta post on Discourse for more details. We’re collecting feedback with a short survey. See the “take our survey” link below the new interactive example.

We’ve already gotten several rounds of feedback, by showing early iterations to Mozilla staff and to the Brigade, who helped with the MDN redesign. Schalk, Stephanie, Kadir, and Will Bamberg added user interviews to our process. They recruited web developers to try out the new feature, watched how they used it, and adjusted the design based on the feedback.

One of the challenging issues was avoiding a scrollbar, when the <iframe> for the interactive example was smaller than the content. The scrollbar broke the layout, and made interaction clumsy. We tried several rounds of manual iframe sizing before implementing dynamic sizing using postMessage to send the desired size from the client <iframe> to the MDN page. (PR 4361).
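The pattern is roughly as follows (a sketch, not the actual PR 4361 code; the message shape, selector, and origin are assumptions):

    // Inside the example <iframe>: tell the embedding page how tall we are.
    window.parent.postMessage(
      { exampleHeight: document.documentElement.scrollHeight },
      'https://developer.mozilla.org'
    );

    // On the MDN page: resize the matching <iframe> when the message arrives.
    window.addEventListener('message', (event) => {
      if (event.data && typeof event.data.exampleHeight === 'number') {
        document.querySelector('iframe.interactive').style.height =
          event.data.exampleHeight + 'px';
      }
    });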

Another change from user testing is that we began with deployment to S3 behind a CDN, rather than waiting until after beta testing. Thanks to Dave Parfitt for quickly implementing this (PR 149).

It will take a while to complete the beta testing and launch these six pages to all users. Until then, we still have the live samples. Stephanie Hobson recently improved these by opening them in a new tab, rather than replacing the MDN reference page. (PR 4391).

Continued work on the AWS migration

We’re continuing the work to rehost MDN in AWS, using Kubernetes. We track the AWS migration work in a GitHub project, and we’re getting close to production data tests.

In our current datacenter, we use Apache to serve the website (using mod_wsgi). We’re not using Apache in AWS, and in August we updated Kuma to take on more of Apache’s duties, such as serving files from MDN’s distant past (PR 4365 from Ryan Johnson) and handling old redirects (PR 4231 from Dave Parfitt).

We are currently using MySQL with a custom collation utf8_distinct_ci. The collation determines how text is sorted, and if two strings are considered to be equal. MySQL includes several collations, but they didn’t allow the behavior we wanted for tags. We wanted to allow both “Reference” and the French “Référence”, but not allow the lower-case variants “reference” and “référence”. The custom collation allowed us to do this while still using our tagging library django-taggit. However, we can’t use a custom collation in AWS’s RDS database service. The compromise was to programmatically rename tags (they are now “Reference” and “Référence (2)”), and switch to the standard utf8_general_ci collation, which still prevents the lowercase variants (PR 4376 by John Whitlock). After the AWS migration, we will revisit tags, and see how to best support the desired features.
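To illustrate the collation behavior (a sketch assuming a utf8 connection charset; not Kuma’s actual queries):

    -- Both comparisons return 1 (true) under utf8_general_ci: the collation
    -- ignores case *and* accents, so 'Référence' collides with the existing
    -- 'Reference' tag -- hence the rename to 'Référence (2)'.
    SELECT 'Reference' = 'reference' COLLATE utf8_general_ci AS case_equal,
           'Reference' = 'Référence' COLLATE utf8_general_ci AS accent_equal;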

Prepared for KumaScript translations

There was some preparatory work toward translating KumaScript strings in Pontoon, but nothing shipped yet. The locale files have been moved from the Kuma repository to a new repository, mozilla-l10n/mdn-l10n. The Docker image for KumaScript now includes the locale files. Finally, KumaScript now lives at mdn/kumascript, in the mdn Github organization.

There are additional tasks planned, to use 3rd-party libraries to load translation files, apply translations, and to extract localizable strings. However, AWS will be the priority for the rest of September, so we are not planning on taking the next steps until October.

Refined the Browser Compat Data schema

Florian Scholz and wbamberg have finished a long project to update the Browser Compatibility Data schema. This included a script to migrate the data (BCD PR 304), and a unified {{compat}} macro suitable for compatibility tables across the site (KumaScript PR 272). The new schema is used in release 0.0.4 of mdn-browser-compat-data.
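For a sense of the data shape, a BCD entry looks something like this (illustrative and abbreviated; see the mdn-browser-compat-data repo for the authoritative schema):

    {
      "css": {
        "properties": {
          "grid": {
            "__compat": {
              "support": {
                "chrome": { "version_added": "57" },
                "firefox": { "version_added": "52" }
              },
              "status": {
                "experimental": false,
                "standard_track": true,
                "deprecated": false
              }
            }
          }
        }
      }
    }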

The goal is to convert all the compatibility data on MDN to the BCD format. Florian is on track to convert the JavaScript data in September. Jean-Yves Perrier has made good progress on migrating HTML compatibility data with 7 merged PRs, starting with PR 279.

Shipped Tweaks and Fixes

There were many PRs merged in August:

Many of these were from external contributors, including several first-time contributions. Here are some of the highlights:

Planned for September

Work will continue to migrate to Browser Compat Data, and to fix issues with the redesign and the new interactive examples.

Establish Maintenance Mode in AWS

In September, we plan to prepare a maintenance mode deployment in AWS, and send it some production traffic. This will allow us to model the resources needed when the production environment is hosted in AWS. It will also keep MDN data available while the production database is transferred, when we finalize the transition.


Mark Banner: Bookmarks changes in Firefox Nightly – Testing Wanted

Tue, 05/09/2017 - 23:08

There’s been a project going on for several years to move Firefox’s bookmarks processing off the main thread so that it happens in the background – to help reduce possible jerkiness when dealing with bookmarks.

One part of this (Places Transactions) has been enabled in Nightly for about 4-5 weeks. We think we’ve fixed all the regressions from this, and we’d now like more people to test it.

So, when you’re checking out the improved performance of Firefox Nightly, please also keep an eye on Bookmarks – if you’re moving/copying/editing them, and you undo/redo actions, check that everything behaves as it should.

If it doesn’t, please file a bug with steps to reproduce and we’ll take a look. If you can also test with the preference browser.places.useAsyncTransactions set to false (after a restart) to find out whether it happens with the old-style transactions – that will help us find the issue.

We know there are still some performance issues with the new async transactions, which we’re currently working on; we hope to land fixes for those soon.


Christian Heilmann: Reasons to attend and/or speak at Reasons.to

Tue, 05/09/2017 - 23:08

I am currently on the train back to London after attending the first two days of Reasons.to in Brighton, England. I need to go pick up the mail that accumulated in my London flat before going back to Berlin and Seattle in a day; otherwise nothing could have kept me from seeing this conference through to the end.

[Photo: Reasons.to stage sign]

I don’t want to go. Reasons.to is an amazing experience. Let me start by listing the reasons why you should be part of it as an attendee or as a presenter. I will write up a more detailed report on why this year was amazing for me personally later.

Why reasons.to is a great experience for attendees:

Reasons.to is a conference for creative makers who use technology as a tool. It is not a conference about hard-core technical topics or limited to creating the next app or web site. It is a celebration of creativity and being human about it. If you enjoy Beyond Tellerrand, this is also very much for you. That’s not by accident – the organisers of both shows are long-term friends and help each other find talent and get the right people together.

As such, it demands more of both the presenters and the audience. There are no recordings of the talks, and there is no way to look up later what happened. It is all about the here and now and about everyone at the event making it a memorable experience.

Over and over the organisers remind the audience to use the time to mingle and network and not worry about asking the presenters for more details. There is no Q&A and there is ample time in breaks to ask in person instead. Don’t worry – presenters are coached that this is something to expect at this event and they all agreed.

There is no food catering – you’re asked to find people to join and go out for breaks, lunches and dinners instead. This is a great opportunity to organize yourselves and even for shy people to leave with a group and have a good excuse to get a bit out of their shell.

This is a getting to know and learning about each other event. And as such, there is no need to advertise itself as an inclusive safe space for people. It just is. You meet people from all kind of backgrounds, families arrive with children and all the people involved in putting on the show know each other.

There are no blatant sponsored talks or holy wars about “framework vs. library” or “technology x vs. technology y”. There is no grandstanding about “here is what I did and it will revolutionise our little world”. There is no “I know this doesn’t work yet, but it will be what you need to use else you’d be outdated and you do it wrong”. And most importantly there is no “this is my showreel, am I not amazing” presentations that are sadly enough often what “creative” events end up having.

The organisers are doing a thorough job finding presenters that are more than safe bets to sell tickets or cover the newest hotness. Instead they work hard to find people who have done amazing things and aren’t necessarily that known but deserve to be.

If anything, there is a very refreshing feeling of meeting people whose work you may know from advertising, on trains, TV or big billboards. And realizing that these are humans and humble about their outrageous achievements. And ready to share their experiences and techniques creating them – warts and all.

The organisers have a keen eye on spotting talent that is amazing but not quite ready to tell the world about it and then making them feel welcome and excited about sharing their story. All the presenters are incredibly successful in what they do, yet none of them are slick and perfect in telling their story. On the contrary, it is very human to see the excitement and how afraid some of these amazing people are in showing you how they work.

Reasons.to is not an event where you will leave with a lot of new and immediately applicable technical knowledge. You will leave, however, with a feeling that even the most talented people are having the same worries as you. And that there is more to you if you just stop stalling and allow yourself to be more creative. And damn the consequences.

Why reasons.to is a great idea for presenters

As a presenter, I found this conference to be incredibly relaxed. It is an entity, it is a happening that is closed in itself without being elitist.

Not having video recordings and having a very low-traffic social media backchannel might be bad for your outside visibility and makes it harder to show the impact you had to your manager. But it makes for a much less stressful environment to present in. Your job is to inspire and deal with the audience at the event, not to deliver a great, reusable video recording or deal with people on social media judging you without having seen you performing or being aware of the context in which you said something.

You have a chance to be yourself. A chance to not only deliver knowledge but share how you came by it and what you did wrong without having to worry about disappointing an audience eager for hard facts. You can be much more vulnerable and human here than at other – more competitive – events.

You need to be ready to be available though. And to spend some extra time in getting to know the other presenters, share tips and details with the audience and to not be a performer that drops in, does the show and moves on. This event is a great opportunity not only to show what you did and want people to try, but it is also a great event to stay at and take in every other talk. Not to compare, but to just learn about people like you but with vastly different backgrounds and approaches.

There is no place for ego at this event. That’s a great thing as it also means that you don’t need to be the perfect presenter. Instead you’re expected to share your excitement and be ready to show mistakes you made. As you would with a group of friends. This is refreshing and a great opportunity for people who have something to show and share but aren’t quite sure if the stage is theirs to command.


Support.Mozilla.Org: Army of Awesome’s Retirement and Mozilla Social Support’s Participation Outreach

Tue, 05/09/2017 - 19:27

Twitter is a social network used by millions of people around the world for many reasons – and one of them is helping Firefox users when they need it. If you have a Twitter account, like helping people, like sharing your knowledge, and want to become a Social Support member for Firefox – join us!

We aim to have the engine that has been powering Army of Awesome (AoA) officially disabled before the end of 2017. To continue the incredible work that has been accomplished over the past several years, we are planning a new approach to supporting users on Twitter using TweetDeck.

TweetDeck is a web tool made available by Twitter allowing you to post to your timeline and manage your user profile within the social network, additionally boasting several features and filters to improve the general experience.

Through the application of filters in TweetDeck you can view comments, questions, and problems from Firefox users. With tools that are simple yet quite powerful, we can offer quality support to users right where they are.

If you are interested, please take a look at the guidelines of the project, which take careful note of the successes and failures of past programs. Once you are filled with the amazing-ness of the guidelines, fill out this form with your email address and we will send you more information about everything you need to know about the program’s mutual purpose. After completing the form you can start configuring TweetDeck to display the issues to be answered and the users to be helped.

We are sure this will be an incredible experience for all of you who are passionate about Mozilla and Twitter – and we can hardly wait to see the great results of your actions!


Air Mozilla: Webdev Beer and Tell: September 2017, 05 Sep 2017

Tue, 05/09/2017 - 19:00

Once a month, web developers across the Mozilla community get together (in person and virtually) to share what cool stuff we've been working on in...


Nick Desaulniers: GCC vs LLVM Q3 2017: Active Developer Counts

Tue, 05/09/2017 - 09:20

A blog post from a few years ago that really stuck with me was Martin Olsson’s Browser Engines 2015: Commit Rates and Active Developer Counts, where he shows information about the number of authors and commits to popular web browsers. The graphs and analysis had interesting takeaways, like showing the obvious split in blink and webkit, and the relative number of contributors to the projects. Martin had data comparing gcc to llvm from Q4 2015, but I wanted to see what the data looked like now, in Q3 2017, and share my findings by simply rerunning the numbers. Luckily Martin open sourced the scripts he used for measurements, so they could be rerun.

Commit count and active authors in the previous 60 days is a rough estimate for project health; the scripts don’t/can’t account for unique authors (same author using different git commit info) and commit frequency is meaningless for comparing developers that commit early and commit often, but let’s take a look. Active contributors over 60 days cuts out folks who do commit to either code bases, just not as often. Lies, damn lies, and statistics, right? Or torture the data enough, and it will confess anything…

Note that LLVM is split into a few repositories (llvm the common base, clang the C/C++ frontend, libc++ the C++ runtime, compiler-rt the sanitizers/built-ins/profiler lib, lld the linker, clang-tools-extra the utility belt, lldb the debugger (there are more, these are the most active LLVM projects)). Later, I refer to LLVM as the grouping of these repos.

There’s a lot of caveats with this data. I suspect that the separate LLVM repo’s have a lot of overlap and have fewer active contributors when looked at in aggregate. That is to say you can’t simply add them else you’d be double counting a bunch. Also, the comparison is not quite fair since the overlap in front-end-language and back-end-target support in these two massive projects does not overlap in a lot of places.

LLVM’s 60 day active contributor count is ~3x-5x GCC’s and growing, while GCC’s count (hovering around 100) hasn’t changed much since ‘04. It’s safe to say GCC is not dying; it’s going steady and chugging away as it has been, but it seems LLVM has been very strong in attracting active contributors. Either way, I’m thankful to have not one, but two high quality open source C/C++ compilers.


This Week In Rust: This Week in Rust 198

Tue, 05/09/2017 - 06:00

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

News & Blog Posts

Crate of the Week

This week's crate is brain, a programming language transpiler to brainfuck of all things! Thank you, icefoxen for the weird suggestion. It's appreciated!

Submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

120 pull requests were merged in the last week

New Contributors
  • Andrew Gauger
  • Andy Gauge
  • Jeremy Sorensen
  • Lukas H
  • Phlosioneer
Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now. This week's FCPs are:

New RFCs

Style RFCs

Style RFCs are part of the process for deciding on style guidelines for the Rust community and defaults for Rustfmt. The process is similar to the RFC process, but we try to reach rough consensus on issues (including a final comment period) before progressing to PRs. Just like the RFC process, all users are welcome to comment and submit RFCs. If you want to help decide what Rust code should look like, come get involved!

The RFC style is now the default style in Rustfmt - try it out and let us know what you think!

We're currently writing up the discussions, we'd love some help. Check out the tracking issue for details.

PRs:

Upcoming Events

If you are running a Rust event please add it to the calendar to get it mentioned here. Email the Rust Community Team for access.

Rust Jobs

Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

you can ask a Future “are we there yet”, to which it can answer “yes”, “no”, or “don’t make me come back there”

an Iterator is something you can keep asking “more?” until it gets fed up and stops listening

Display is just a way to say “show me your moves”, with the other formatting traits being other dance moves

if something isn’t Send, then it’s a cursed item you can’t give away, it’s yours to deal with

if something isn’t Sync, then it won’t even appear for other people, it’s possibly an apparition inside your head

things that are Clone can reproduce asexually, but only on command.

things that are Copy won’t bother waiting for you

@QuietMisdreavus on Twitter.

Thanks to Havvy for the suggestion.

Submit your quotes for next week!

This Week in Rust is edited by: nasa42, llogiq, and brson.


Mozilla Marketing Engineering & Ops Blog: MozMEAO SRE Status Report - September 5, 2017

Tue, 05/09/2017 - 02:00

Here’s what happened on the MozMEAO SRE team from August 29th - September 5th.

Current work

Deis Workflow: Final Release

The final release of Deis Workflow is scheduled for September 9th, 2017. We use Deis Workflow to help run Basket, Bedrock, Snippets, and Careers, so each project will need to be modified to use Kubernetes directly (instead of interfacing with Kubernetes via Deis).

More info here.

MDN Migration to AWS

Analytics eval

We’re evaluating Snowplow to see if it will meet our analytics needs.

Upcoming

Portland Deis 1 cluster decommissioning

Decommissioning of the Deis 1 cluster in Portland has been pushed out until next week due to support issues related to other applications.

Links

The Rust Programming Language Blog: Rust 2017 Survey Results

Tue, 05/09/2017 - 02:00

It’s that time of the year, where we take a good look at how things are going by asking the community at large – both Rust users and non-users. And wow, did you respond!

This year we had 5,368 responses. That’s over 2,000 more responses than we had last year!

The scale of the feedback was both inspiring and humbling, and we’ve worked hard to read through all of your comments and suggestions. There were so many helpful ideas and experiences that people shared, and we truly do appreciate it. Without further ado, let’s take a look.

[Chart: 66.9% Rust users, 9.9% stopped using, 23.2% never used]

Just as we saw last year, 2/3rds of responses are from Rust users and the remainder from non-users. This year, we separated out the “don’t use Rust” to capture those who used Rust and stopped from those who never used Rust. It’s inspiring to see so many developers wanting to help us make Rust better (even if they don’t use it!) so that it can reach even more people.

We’ll talk more about Rust non-users later in this report, but first let’s look at the responses from Rust users.

Using Rust

[Chart: 0.5% less than a day, 4.2% less than a week, 13.1% less than a month, 39.7% less than a year, 42.5% over a year]

This year, we’re seeing a growing number of experienced users sticking with Rust, with the “more than a year” users growing to over 42% (up from 30% last year). The beginners are also an impressively large set, with the “less than a month” crowd at just about 18%, meaning that every month we attract new users equal to nearly 1/5th of our existing user base, even as that base grows larger.

[Chart: 36.5% less than 1000 lines, 46.3% 1000 to 10000 lines, 14.2% 10000 to 100000 lines, 2.1% over 100000, 0.9% don't know]

People are working with ever-larger amounts of Rust, with medium- and large-scale lines of code totals both nearly doubling since last year as a percentage of the whole, now making up 16% of respondents (up from last year’s 8.9%). This shows a growing interest in using Rust in ever-larger projects, and a growing need for tools to support this growth.

[Chart: 17.5% daily, 43.3% weekly, 24.4% monthly, 14.9% rarely]

Despite the rising amount of code developers are working with, we’re seeing a small downtick in both daily and weekly users. Daily users have fallen from 19% to 17.5%, and weekly users have fallen from 48.8% to 43.3%. This could be a natural transition in this stage of our growth, as a broader range of developers begin using Rust.

Path to Stability

[Chart: 92.5% no, 7.5% yes]

In the last year, we made big strides in breakages caused by releases of the compiler. Last year, 16.2% of respondents said that upgrading to a new stable Rust compiler broke their code. This year, that number has fallen to 7.5% of respondents. This is a huge improvement, and one we’re proud of, though we’ll continue working to push this down even further.

[Chart: strong support for nightly and current stable releases]

Developers have largely opted to move to nightly or a recent stable (with some on beta), showing that developers are eager to upgrade and do so quickly. This simplifies the support structure a bit from last year, where developers were on a wider range of versions.

Stable users now make up 77.9% of Rust users. Unfortunately, despite our efforts with procedural macros and helping move crates like Serde to stable, we still have work to do to promote people moving away from the nightly Rust compiler. This year shows an increase in nightly users, now at 1,852 votes it represents 51.6% of respondents using nightly, up from 48.8% of last year.

How we use Rust

[Chart: 90.2% rustup, 18.9% linux distros, 5% homebrew, 4.7% official .msi, 3.1% official tarball, 1.4% official mac pkg]

One of the big success stories with Rust tooling was rustup, the Rust toolchain installer. Last year, we saw a wide diversity in ways people were installing Rust. This year, many of these have moved to using rustup as their main way of installing Rust, totalling now 3,205 of the responses, which moves it from last year’s 52.8% to 90.2%.

[Chart: 80.9% Linux, 35.5% macOS, 31.5% Windows, 3.2% BSD-variant]

Linux still features prominently as one of the main platforms Rust developers choose. Of note, we also saw a rise in the use of Windows as a developer platform at 1,130 of the 3,588 total respondents, putting it at 31.5% of respondents, up from 27.6% of last year.

[Chart: 91.5% Linux, 46.7% Windows, 38.2% macOS, 16.8% embedded, 13.2% WebAssembly and asm.js, 9.9% Android, 8.9% BSD-variant, 5.3% Apple iOS]

Next, we asked what platforms people were targeting with their Rust projects. While we see a similar representation of desktop OSes here, we also see a growing range of other targets. Android and iOS are at healthy 9.9% and 5.3% respectively, both almost 10x larger than last year’s percentages. Embedded also has had substantial growth since last year’s single-digit percentage. As a whole, cross-compilation has grown considerably since this time last year.

[Chart: 45.8% vim, 33.8% vscode, 16.1% intellij, 15.7% atom, 15.4% emacs, 12.2% sublime, 1.5% eclipse, 1.5% visual studio]

Among editors, vim remains king, though we see healthy growth in VSCode adoption at 34.1% (up from last year’s 3.8%). This growth no doubt has been helped by VSCode being one of the first platforms to get support for the Rust Language Server.

[Chart: 4.4% full-time, 16.6% part-time, 2.9% no but company uses Rust, 57.6% no, 2.4% not sure, 16.1% not applicable]

Rust in the workplace has also continued to grow. This year’s 4.4% full-time and 16.6% part-time Rust workers show a tick up from last year’s 3.7% full-time and 16.1% part-time.

[Chart: 18.9% less than 1000 lines, 56% 1000 to 10000 lines, 23.1% 10000 to 100000 lines, 2% more than 100000 lines]

Users who use Rust part-time in their companies showed a growth in larger projects since last year, with the medium- and large-scale projects taking up more percentage of total projects this time around.

[Chart: 1.9% less than 1000 lines, 27.9% 1000 to 10000 lines, 52.6% 10000 to 100000 lines, 17.5% more than 100000 lines]

Likewise, full-time Rust commercial users saw medium- and large-scale projects grow to taking a larger part of the pie, with projects over 100,000 lines of code making up almost 18% of the all full-time commercial respondents, and a large shift in the 10,000-100,000 lines range from 39.7% up to 52.6%.

Feeling Welcome

[Chart: 75.1% feel welcome, 1.3% don't feel welcome, 23.6% don't know]

An important piece of the Rust community is to be one that is welcoming to new users, whether they are current users or potential users in the future. We’re pleased to see that over 3/4th of all respondents said they feel welcome in the Rust community, with 23.6% not sure.

[Chart: 81.4% not underrepresented, and a variety of underrepresented groups, with no category above 5%]

The demographics of respondents stayed about the same year over year. Diversity and inclusiveness continue to be vital goals for the Rust project at all levels. The Rust Bridge initiative aims for diversity at the entry level. The Rust Reach project, launched this year, brings in a wide range of expertise from people underrepresented in the Rust world, and pairs them with Rust team members to make Rust more accessible to a wider audience.

Stopped using Rust

New this year, we separated out the people who had stopped using Rust from those who had never used Rust to better understand why they stopped. Let’s take a look first at when they stopped.

[Chart: 3.2% less than a day, 18.5% less than a week, 43.1% less than a month, 30.2% less than a year, 4.9% more than a year]

The first surprise we had here was how long people gave Rust a try before they stopped. Our initial hunch was that people would give up using Rust in the first day, or possibly the first week, if it didn’t suit them or their project. Instead, what we see is that people tried Rust for a much longer time on average than that.

Themes from people who stopped using Rust:

  • 23% responded that Rust is too difficult to use.
  • 20% responded that they didn’t have enough time to learn and use Rust effectively.
  • 10% responded that tools aren’t mature enough.
  • 5% responded they needed better IDE support.
  • The rest of the users mentioned needing support for Rust in their jobs, having finished the project they needed Rust for, being turned away by Rust’s syntax, not being able to think of a project to build, or having had a bad interaction with the Rust community.
Not using Rust

[Chart: 666 company doesn't use Rust, 425 Rust is too intimidating/hard to learn/too complicated, 295 Rust doesn't solve a problem for me, 255 Rust doesn't have good IDE support, 209 Rust doesn't have libraries I need, 161 Rust seems too risky for production, 89 Rust doesn't support platforms I need, 73 Rust doesn't have tools I need]

While the learning curve and language complexity still played a role in preventing people from picking up Rust, one aspect that resonated with many people is that there simply aren’t enough active commercial projects in Rust for people to be a part of. For some, they could surmount the learning curve if there was a strong incentive to do so.

Areas for Improvement

Finally, at the end of the survey we provided a free-form area to talk about where Rust could improve. Before we get to the themes we saw, we wanted to give a big “thank you!” to everyone who posted thoughtful comments. There are many, many good ideas, which we will be making available to the respective sub-teams for future planning. With that, let’s look at the themes that were important this year:

  • 17% of responses underscored the need for better ergonomics in the language. People had many suggestions about how to improve Rust for day-to-day use, to allow for easier prototyping, to work with async programming more easily, and to be more flexible with more data structure types. Just as before, the need for a much easier and smoother experience with the borrow checker and how to work with lifetimes was a popular request.
  • 16% of responses talked about the importance of creating better documentation. These covered topics ranging from helping users transition from other languages to creating more examples and sample projects, helping people get started with various tasks or crates, and creating video resources to facilitate learning.
  • 15% of responses pointed out that library support needs to improve. People mentioned the need for a strong set of core libraries, the difficulty of finding high-quality crates, the general maturity of crates and the crate ecosystem, and the need for libraries covering a wide range of areas (e.g. web, GUIs, networking, databases, etc.). Additionally, people mentioned that libraries can be hard to get started with, depending on their API design and amount of documentation.
  • 9% of the responses encouraged us to continue building our IDE support. Again, this year underscored that there is a sizeable group of developers who need support for Rust in their IDEs and tools. The Rust Language Server, the ongoing effort to support IDEs broadly, was mentioned as one of the top items people are looking forward to this year, and comments pointed to these efforts needing to reach stable and to grow support into other IDEs, as well as continuing to grow the number of available features.
  • 8% of responses mentioned the learning curve specifically. As more developers try to pick up Rust or teach it to coworkers and friends, they’re finding that there aren’t sufficient resources to do so effectively and that Rust itself resists a smooth learning experience.
  • Other strong themes included the need for: faster compile times, more corporate support of Rust (including jobs), better language interop, improved tooling, better error messages, more marketing, less marketing, and improved support for web assembly.
Conclusion

We’re blown away by the response this year. Not only is this a much larger number of responses than we had last year, but we’re also seeing a growing diversity in what people are using Rust for. Thank you so much for your thoughtful replies. We look forward to using your feedback, your suggestions, and your experience to help us plan for next year.


Daniel Pocock: Spyware Dolls and Intel's vPro

ma, 04/09/2017 - 08:09

Back in February, it was reported that a "smart" doll with wireless capabilities could be used to remotely spy on children and was banned for breaching German laws on surveillance devices disguised as another object.

Would you trust this doll?

For a number of years now there has been growing concern that the management technologies in recent Intel CPUs (ME, AMT and vPro) also conceal capabilities for spying, either due to design flaws (no software is perfect) or backdoors deliberately installed for US spy agencies, as revealed by Edward Snowden. In a 2014 interview, Intel's CEO offered to answer any question, except this one.

The LibreBoot project provides a more comprehensive and technical analysis of the issue, summarized in the statement "the libreboot project recommends avoiding all modern Intel hardware. If you have an Intel based system affected by the problems described below, then you should get rid of it as soon as possible" - eerily similar to the official advice German authorities are giving to victims of Cayla the doll.

All those amateur psychiatrists suggesting LibreBoot developers suffer from symptoms of schizophrenia have had to shut their mouths since May, when Intel confirmed that a design flaw (or NSA backdoor) in every modern CPU had become known to hackers.

Bill Gates famously started out with the mission to put a computer on every desk and in every home. With more than 80% of new laptops based on an Intel CPU with these hidden capabilities, can you imagine the NSA would not have wanted to come along for the ride?

Four questions everybody should be asking
  • If existing laws can already be applied to Cayla the doll, why haven't they been used to alert owners of devices containing Intel's vPro?
  • Are exploits of these backdoors (either Cayla or vPro) only feasible on a targeted basis, or do the intelligence agencies harvest data from these backdoors on a wholesale level, keeping a mirror image of every laptop owner's hard disk in one of their data centers, just as they already do with phone and Internet records?
  • How long will it be before every fast food or coffee chain with a "free" wifi service starts dipping into the data exposed by these vulnerabilities as part of their customer profiling initiatives?
  • Since Intel's admissions in May, has anybody seen any evidence that anything is changing, either in what vendors are offering or in how companies and governments outside the US buy technology?
Share your thoughts

This issue was recently raised on the LibrePlanet mailing list. Please feel free to join the list and click here to reply on the thread.


Mozilla Addons Blog: September’s Featured Extensions

za, 02/09/2017 - 01:00


Pick of the Month: Search Image

by Didier Lafleur
Highlight any text and perform a Google image search with a couple clicks.

“I’ve been looking for something like this for years, to the point I wrote my own script. This WORKS for me.”

Featured: Cookie AutoDelete

by Kenny Do
Automatically delete stagnant cookies from your closed tabs. Offers whitelist capability, as well.

“Very good replacement for Self-Destructing Cookies.”

Featured: Tomato Clock

by Samuel Jun
A super simple but effective time management tool. Use Tomato Clock to break your work into meaningful 25-minute “tomato” intervals.

“A nice way to track my productivity for the day.”

Featured: Country Flags & IP Whois

by Andy Portmen
This extension will display the country flag of a website’s server location. Simple, informative.

“It does what it should.”

Nominate your favorite add-ons

Featured add-ons are selected by a community board made up of add-on developers, users, and fans. Board members change every six months. Here’s further information on AMO’s featured content policies.

If you’d like to nominate an add-on for featuring, please send it to amo-featured [at] mozilla [dot] org for the board’s consideration. We welcome you to submit your own add-on!

The post September’s Featured Extensions appeared first on Mozilla Add-ons Blog.


Sean McArthur: Bye Mozilla, Hello Buoyant

vr, 01/09/2017 - 21:45
Bye, Mozilla

Today is my last day as a Mozilla employee.

It hurts to say that. I love Mozilla.1

I loved waking up to work knowing that I was working for you. For everyone’s internet. Truly, even if you feel Firefox is inferior to your preferred browser, you must admit that an internet ruled by profit-driven businesses is no one’s dream. What Mozilla does, by working to provide an alternative browser choice, is allow a non-profit organization to have a voice. Without Firefox, the group of people that make up Mozilla would just be yelling at the closed doors of “tech giants”.

I got to work on some amazing technology, and with superb humans. The concept of Persona is exactly the kind of thing that Mozilla’s voice can push for: a way to fix passwords, while curbing identity providers from tracking your every action. Unfortunately, we couldn’t get enough adoption before we realized that Firefox needed more help. Firefox was hurting, and without Firefox, well, our voice doesn’t mean much. I dream that we can attack that problem again someday.

Taking our Identity team off Persona, we boosted Firefox Sync from a nerd toy into something that all Firefox users could benefit from. This is actually quite important, something that can oftentimes be forgotten even inside the organization. With Sync benefiting from Firefox Accounts, users gain a whole lot more value from installing Firefox on multiple devices. Firefox’s Awesomebar is still far better at finding things than Chrome’s, and in my own experience, it’s only gotten better since it can remember links I open on my phone or tablet.

The superb humans really are superb. Yes, they’re intelligent. But that’s the boring part. Many people are. They stand out, instead, because of their empathy, their optimism, their voice, their loyalty. My team members are loyal to each other, because each of us is loyal to the mission: a free, open internet where the user is in command. We all wanted each other to succeed, because that always meant wins for you, the user.

And yet, it’s time for me to start the next step in my journey. I still wish Mozilla the very best. You should definitely be using Firefox. And I hope I’ll bump into my friends plenty of times more in the future.

Hello, Buoyant

Starting Monday, I’ll be working for Buoyant.

Over the past few years, I’ve been learning and writing Rust. I’ve really dug into the community2, and been absolutely loving working on tools for HTTP and servers and clients and whatnot.

Buoyant is working on tools that help big websites scale. One such tool is linkerd, described as a service mesh. The tool helps websites that receive godzillions of requests, so it needs to be fast and use little memory. Also, it’s 2017, and releasing a new tool that has CVEs about memory unsafety every couple of months isn’t really acceptable when we have alternatives. So, Rust!

It turns out, we’re a great fit! I’ll be continuing to work on HTTP pieces in Rust. In fact, this means I’ll now be working in Rust full-time, so hopefully pieces should be built faster. I’ll be working in open source still, so hey, perhaps you will still benefit!

This is a really sad day for me, but I’m also super excited for next week!3

  1. I’ve been at Mozilla for over 6 years! It’s like I’m leaving part of my family, part of how I identify myself in the world. Not many places really grip you personally like Mozilla does. 

  2. I really tried to get into the nodejs community a few years ago, but eventually ran into enough cases of elitism that I gave up. Thankfully, the Rust community is fantastic any way I can measure. 

  3. Worst. Roller coaster. Ever. 


Ehsan Akhgari: Quantum Flow Engineering Newsletter #22

vr, 01/09/2017 - 07:59

With around three weeks left in the development cycle for Firefox 57, everyone seems to be busy getting the last fixes in to shape up this long-awaited release.  On the Quantum Flow project, we have kept up with the triage of the incoming bug reports that are tagged as [qf], and as we’re getting closer to the beta uplift date, the realistic opportunity for fixing bugs is getting narrower, and as such the bar for prioritizing incoming bug reports as [qf:p1] keeps getting higher.  This matches with the overall shift in focus in the past few weeks towards getting all the ongoing work that is targeting Firefox 57 under control to make sure we manage to do as much of what we have planned to do for this release as possible.

This past week we made more progress on optimizing the performance of Firefox for the Speedometer V2 benchmark. Besides many of the usual optimizations, which you will read about in the acknowledgement section of the newsletter, one noteworthy item was David Major’s investigation into adding this benchmark to the set of pages we load to train the PGO profile used on Windows builds. This allowed the MSVC code generator to produce better-optimized code using the profile information and bought us a few benchmark score points. Of course, earlier similar attempts hadn’t really gained us better performance, and it’s unclear whether this change will stick or get backed out due to PGO-specific crashes or whatnot, but in the meantime we’re not stopping landing other improvements to Firefox for this benchmark either! At the time of this writing, the Firefox Health Dashboard puts our Nightly benchmark score within 4.07% of Chrome’s.

Another news worthy of mention related to Speedometer is that recently Speedometer tests with Stylo were enabled on AWFY.  As can be seen on the reference hardware score page, Stylo builds are now a bit faster than normal Gecko when running Speedometer.  This has been achieved by the hard work of many people on the Stylo team and I’d like to take a moment to thank them, and especially call out Bobby Holley who helped make sure that we have a great performance story here.

In other performance related news, this past week the first implementation of our cooperative preemptive scheduling of web page JavaScript, more commonly known as Quantum DOM, landed. The design document describes some of the background information, which may be helpful if you need to understand the details of what the new world looks like. For now, this feature is disabled by default while the ongoing work to iron out the remaining issues continues.

The Quantum DOM project has been a massive overhaul of our codebase. A huge part of it has been the “labeling” project that Bevis Tseng has been tirelessly leading for many months now. The basic idea behind this part of the project is to give each runnable a name and to indicate which tab or document the runnable is associated with (I’m simplifying a bit; please see the wiki page for more details). For the performance story section of this newsletter, Bill McCloskey suggested highlighting some of the performance lessons we have learned through this project: the labeling work ended up uncovering some unexpected performance issues in Firefox!
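
To make the labeling idea concrete, here is a toy sketch (not actual Gecko code; the LabeledRunnable type and docGroupId field are invented for illustration) of an event queue where every task carries a name and the document group it serves:

    // Toy model of runnable labeling: each task carries a human-readable
    // name plus the tab/document group it belongs to, so a scheduler (or a
    // telemetry probe) can count and prioritize tasks per tab.
    #include <cstdio>
    #include <functional>
    #include <queue>
    #include <string>

    struct LabeledRunnable {
      std::string name;           // e.g. "PresShell::ReflowEvent"
      int docGroupId;             // which tab/document this task serves
      std::function<void()> run;
    };

    int main() {
      std::queue<LabeledRunnable> eventQueue;
      eventQueue.push({"Timer::Fire", 1, [] { std::puts("tab 1 timer"); }});
      eventQueue.push({"ImageDecode::Done", 2, [] { std::puts("tab 2 decode"); }});

      while (!eventQueue.empty()) {
        LabeledRunnable r = std::move(eventQueue.front());
        eventQueue.pop();
        // Counting dispatches per name is what makes frequency analyses
        // like the "full runnable list" mentioned below possible.
        std::printf("running %s (doc group %d)\n", r.name.c_str(), r.docGroupId);
        r.run();
      }
    }

Gecko’s real mechanism is richer (runnables are dispatched through per-document and per-tab groups rather than one flat queue), but the accounting benefit is the same: once every runnable has a name, you can measure it.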

Bevis has a telemetry analysis which measures the number of runnables of each type (to view the interesting part, please scroll down to the “full runnable list” section). This analysis has been used to prioritize which runnables need to be worked on next for labeling purposes. But because the list shows the relative frequencies of runnables, we ended up finding several surprises in where some runnables appear on it, which uncovered performance issues that would otherwise be very difficult to detect and diagnose. Here are a few examples (thanks to Bill for enumerating them!):

  • We used to send the DidComposite notification to every tab, regardless of whether it was in the foreground or background. We tried to fix this once, but that fix actually only addressed a related issue involving multiple windows. The real issue finally got fixed later.
  • We used to have a “startup refresh driver” which was only supposed to run for a few milliseconds during startup. However, it was showing up as #33 on the list of runnables. We found out that it was never being disabled after it was started, so if we ever ran the startup refresh driver, it would run indefinitely in that browsing session and climb toward the top of the list. Unfortunately, while this runnable disappeared for a while after that bug was fixed, it is now back and we’re not sure why.
  • We found out that MediaStreamGraphStableStateRunnable is #20 on this list, which was surprising as this runnable is only supposed to be used for WebRTC and WebAudio, neither being extremely popular features on the Web.  Randell Jesup found out that there is a bug causing the runnable to be continually dispatched after a WebRTC or WebAudio session is over.
  • We run a runnable for the intersection observer feature a lot.  We tried to cut the frequency of this runnable once, but it doesn’t seem to have helped much.  This runnable still shows up quite high on the list, as #6.

I encourage people to look at the telemetry analysis to see if they can spot a runnable with a familiar name which appears too high on the list.  It’s very likely that there are other performance bugs lurking in our codebase which this tool can help uncover.

Now, please allow me to take a moment to acknowledge the hard work of everyone who helped make Firefox faster this past week.  I hope I’m not forgetting any names!


Robert O'Callahan: rr Trace Portability

vr, 01/09/2017 - 05:39

We want to be able to record an rr trace on one machine but copy it to another machine for replay. For example, you might record a failing test on one machine and copy the trace to a developer's machine for debugging. Or, you might record a failure locally and upload the trace to some cloud service for analysis. In short: on rr master, this works!

It turned out there were only two big issues to solve. We needed a way to make traces fully self-contained, because for efficiency we don't always copy all needed files into the trace during recording. rr pack addressed that. rr pack also compacts the trace by eliminating duplicate copies of the same file. Switching to brotli also reduced trace size, as did using Cap'n Proto for trace data.

The other big issue was handling CPUID instructions. We needed a way to ensure that during replay CPUID instructions returned the same results as they did during recording — they generally won't if you switch machines. Modern Intel hardware supports "CPUID faulting", i.e. you can configure the CPU to trap every time a CPUID instruction occurs. Linux didn't expose this capability to user-space, so last year Kyle Huey did the hard work of adding a Linux system-call API to expose it: the ARCH_GET/SET_CPUID subfeature of arch_prctl. It works very much like the existing PR_GET/SET_TSC, which give control over the faulting of RDTSC/RDTSCP instructions. Getting the feature into the upstream kernel was a bit of an ordeal, but that's a story for another day. It finally shipped in the 4.12 kernel.
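
To make the API concrete, here is a minimal sketch (not rr’s code; it assumes an x86-64 Linux 4.12+ kernel and takes the ARCH_GET/SET_CPUID constants from asm/prctl.h) that probes for and then enables CPUID faulting in the current process:

    // Probe and enable CPUID faulting via arch_prctl (x86-64, Linux 4.12+).
    #include <cstdio>
    #include <sys/syscall.h>
    #include <unistd.h>

    #ifndef ARCH_GET_CPUID
    #define ARCH_GET_CPUID 0x1011  // values from asm/prctl.h
    #define ARCH_SET_CPUID 0x1012
    #endif

    int main() {
      // Returns 1 if CPUID currently executes normally, 0 if it faults, and
      // -1 on kernels or CPUs that lack the feature.
      long state = syscall(SYS_arch_prctl, ARCH_GET_CPUID, 0);
      if (state < 0) {
        std::perror("ARCH_GET_CPUID (pre-4.12 kernel or unsupported CPU?)");
        return 1;
      }
      std::printf("CPUID faulting is currently %s\n", state ? "off" : "on");

      // Make every subsequent CPUID in this process raise SIGSEGV.
      if (syscall(SYS_arch_prctl, ARCH_SET_CPUID, 0) < 0) {
        std::perror("ARCH_SET_CPUID");
        return 1;
      }
      return 0;
    }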

When CPUID faulting is available, rr recording stores the results of all CPUID instructions in the trace, and rr replay intercepts all CPUID instructions and takes their results from the trace. With this in place, we're able to move traces from one machine/distro/kernel to another and replay them successfully.
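
The replay half of that mechanism can be sketched in user space. What follows is a hypothetical illustration only, not rr’s implementation (rr does its interception from the tracer process): with CPUID faulting enabled, each CPUID raises SIGSEGV, and a signal handler can substitute a recorded result and skip the instruction. The hard-coded leaf-0 values stand in for data a real replayer would read from the trace.

    // Intercept faulting CPUID instructions (opcode 0F A2) and emulate them.
    #include <csignal>
    #include <cstdint>
    #include <cstdio>
    #include <sys/syscall.h>
    #include <ucontext.h>
    #include <unistd.h>

    #ifndef ARCH_SET_CPUID
    #define ARCH_SET_CPUID 0x1012  // from asm/prctl.h
    #endif

    static void OnSigsegv(int, siginfo_t*, void* ctx) {
      ucontext_t* uc = static_cast<ucontext_t*>(ctx);
      auto* rip = reinterpret_cast<uint8_t*>(uc->uc_mcontext.gregs[REG_RIP]);
      if (rip[0] == 0x0f && rip[1] == 0xa2) {           // a faulting CPUID?
        uc->uc_mcontext.gregs[REG_RAX] = 0x16;          // "recorded" leaf 0
        uc->uc_mcontext.gregs[REG_RBX] = 0x756e6547;    // "Genu"
        uc->uc_mcontext.gregs[REG_RDX] = 0x49656e69;    // "ineI"
        uc->uc_mcontext.gregs[REG_RCX] = 0x6c65746e;    // "ntel"
        uc->uc_mcontext.gregs[REG_RIP] += 2;            // skip the instruction
        return;
      }
      _exit(1);  // an unrelated SIGSEGV: give up
    }

    int main() {
      struct sigaction sa = {};
      sa.sa_sigaction = OnSigsegv;
      sa.sa_flags = SA_SIGINFO;
      sigaction(SIGSEGV, &sa, nullptr);
      syscall(SYS_arch_prctl, ARCH_SET_CPUID, 0);       // make CPUID fault

      unsigned eax = 0, ebx, ecx, edx;
      asm volatile("cpuid" : "+a"(eax), "=b"(ebx), "=c"(ecx), "=d"(edx));
      std::printf("emulated vendor: %.4s%.4s%.4s\n",
                  reinterpret_cast<char*>(&ebx),
                  reinterpret_cast<char*>(&edx),
                  reinterpret_cast<char*>(&ecx));
    }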

We also support situations where CPUID faulting is not available on the recording machine but is on the replay machine. At the start of recording we save all available CPUID data (there are only a relatively small number of possible CPUID "leaves"), and then rr replay traps CPUID instructions and emulates them using the stored data.
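
Capturing that data is a small loop over the leaves. Here is a hypothetical sketch using the __get_cpuid/__cpuid_count helpers from GCC and Clang’s cpuid.h; real tools must also walk the subleaves of leaves such as 4, 7, 0xB and 0xD, which this loop skips:

    // Dump EAX/EBX/ECX/EDX for every basic CPUID leaf (subleaf 0 only).
    #include <cpuid.h>
    #include <cstdio>

    int main() {
      unsigned eax, ebx, ecx, edx;
      // Leaf 0 reports the highest supported basic leaf in EAX.
      if (!__get_cpuid(0, &eax, &ebx, &ecx, &edx)) return 1;
      unsigned max_leaf = eax;
      for (unsigned leaf = 0; leaf <= max_leaf; ++leaf) {
        __cpuid_count(leaf, 0, eax, ebx, ecx, edx);
        std::printf("leaf %#x: %08x %08x %08x %08x\n", leaf, eax, ebx, ecx, edx);
      }
      return 0;
    }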

Caveat: the user is responsible for ensuring the destination machine supports all instructions and other CPU features used by the recorded program. At some point we could add an rr feature to mask the CPUID values reported during recording so you can limit the CPU features a recorded program uses. (We actually already do this internally so that applications running under rr believe that RTM transactional memory and RDRAND, which rr can't handle, are not available.)

CPUID faulting is supported on most modern Intel CPUs, at least on Ivy Bridge and its successor Core architectures. Kyle also added support to upstream Xen and KVM to virtualize it, and even emulate it regardless of whether the underlying hardware supports it. However, VM guests running on older Xen or KVM hypervisors, or on other hypervisors, probably can't use it. And as mentioned, you will need a Linux 4.12 kernel or later.

