Mozilla Nederland - The Dutch Mozilla community

Marcia Knous: Building a creative foundation

Mozilla planet - Mon, 23/09/2019 - 20:38
Last week I spent two days at Harvard University participating in my third Professional Development class there. This time the subject was “Creative Thinking: Innovative Solutions to Complex Challenges.” The workshop was led by two experienced facilitators, Anne Manning and Susan Robertson. We started with introductions, and it soon became clear we had a very diverse group of participants - I was the lone person from the tech sector, but there was a nice blend of sectors represented, as well as some international participants. This made for some very interesting discussions outside the classroom and during the various breaks. I was also pleased that some people sought me out, especially once they found out I was an “Ideator.” Prior to the class, we had taken an assessment, and were then presented with the results. In one of the exercises, it turned out we were teamed up with other participants who fell into the same quadrant as us. I thought it was a good way to weave that assessment into the class content (and of course, initially without us being aware of it).

I had some great takeaways from the two-day class. The thing I appreciated the most was that the facilitators went to great lengths to give us a toolkit to take with us and apply the next time we are working on a project or interacting within a team. I also left the class with the distinct feeling that, much like the diagram our team came up with above, you really have to build creativity into your system in a continuous manner. Some highlights:

  • Anne and Susan were great, and they were excellent at pivoting when it was needed. I continually appreciated the fact that they both would “add on” to discussions with interesting insight. If you are going to have a dynamic duo presenting, you need to make sure they play off each other well. In this case, it worked well.
  • Having engaged participants, especially in a workshop such as this, is really important. Our group was definitely engaged, and I think that helped make it a better overall workshop for everyone.
  • As a remote employee I really relish events such as this. There are a lot of great hallway conversations. I also enjoyed sharing what I do at Mozilla and having people learn about our products. I even got one of the many iPhone users in the workshop to download Firefox! One participant also said he feels as if the Android devices he had were more “crashy” than the iPhone. I wonder if others feel that way.
  • I liked the fact that two of the three pre-workshop assignments were videos. One of our first assignments was to talk to as many people as we could about the pre-workshop material. Every single person I spoke with mentioned liking one of the two videos, but no one mentioned the article that was part of the course prep. The two videos clearly resonated more with the group I conversed with.
  • One of the final team assignments happened partially during lunch, which didn’t work well for our team. For one, we lost a team member and had to make decisions without him. Instead of doing the entire “clustering” of ideas once as a group, we actually did it twice. In the future I would prefer to leave lunch for eating and socializing, and adjust the schedule to account for any team work that has to be done.
  • Speaking of teams, there was one exercise where we had a team member at one point who then abruptly left our team and joined another after the coffee break. I still wonder why…
  • I continue to struggle with exercises where we have to build something. This time it was the marshmallow challenge, which centered around building a free-standing structure out of spaghetti with a marshmallow on top. I was part of a three-person team, but in the end I ended up observing more than actually helping with the implementation. This could be a function of how my brain operates - I may not immediately gravitate toward being hands-on with the material, but like to function in the capacity of someone who is overseeing what is being done. I have been in workshops before where the same thing has happened.
  • I learned a lot about how hard it is to stay in System 2 thinking.
  • We spent a bit of time talking about divergent and convergent thinking. Overall this is useful to keep in mind whenever you are working on a project.
  • There can be lessons learned from the absurd, even a fur sink. This example was part of our introduction to the “GPS” tool, which can be used for feedback, to improve ideas, and as a climate-setting tool. Much of the value of that tool is in framing the question with “How might we…?” or “How to….”
  • I mentioned to the instructor that it was challenging at times to have tight time constraints for some of the exercises we were doing (on-demand creativity). I noted that sometimes I like to mull things over, and the nature of this two-day workshop didn’t allow for extended time. Sometimes I wondered if my outcome would have been different if I had more time. On a personal level, the amount of prolonged cognitive activity we did over the two-day span was pretty draining for me.
  • The Assumption Smashing exercise was interesting, and I liked the fact that I could ideate without any constraints.
  • We talked quite a bit about building a climate for creative thinking, and how important it is to do well in the “Clarify” mode before moving through the other three modes: Ideate, Develop, and Implement. The importance of this was demonstrated when we started to dig deep into one of the team projects we were working on.
  • Some of the team exercises, especially the final one, generated some very creative presentations. I have attached a picture of what our team came up with for the challenge, which was “How can we build innovative thinking into existing systems and processes?”
  • We had a guest speaker on the second day who specialized in innovation. This worked well since he was able to give us real-world examples of some of the things we had already discussed. I would have liked more Q&A with him, but again there were time constraints.
  • We covered quite a bit of material in two days, and the facilitators definitely made the most of the time we had. Anne mentioned that a for-credit version of this class is offered in the summer. If I had the time, I would love to explore this topic over a longer time span.
  • Harvard does a great job at running these programs. Everything is very well organized - I arrived and my name placard and all materials were already waiting at my assigned seat. The food was pretty good overall, though it was a bit better at the program I attended last year.

The Mozilla Blog: Introducing ‘Stealing Ur Feelings,’ an Interactive Documentary About Big Tech, AI, and You

Mozilla planet - Mon, 23/09/2019 - 15:01
‘Stealing Ur Feelings’ uses dark humor to expose how Snapchat, Instagram, and Facebook can use AI to profit off users’ faces and emotions

 

An augmented reality film revealing how the most popular apps can use facial emotion recognition technology to make decisions about your life, promote inequalities, and even destabilize democracy makes its worldwide debut on the web today. Using the same AI technology described in corporate patents, “Stealing Ur Feelings,” by Noah Levenson, learns the viewers’ deepest secrets just by analyzing their faces as they watch the film in real-time.

Watch https://stealingurfeelin.gs/

Viewer scorecard from ‘Stealing Ur Feelings’

The six-minute documentary explains the science of facial emotion recognition technology and demystifies how the software picks out features like your eyes and mouth to understand if you’re happy, sad, angry, or disgusted. While it is not confirmed whether big tech companies have started using this AI, “Stealing Ur Feelings” explores its potential applications, including a Snapchat patent titled “Determining a mood for a group.” The diagrams from the patent show Snapchat using smartphone cameras to analyze and rate users’ expressions and emotions at concerts, debates, and even a parade.

The documentary was made possible through a $50,000 Creative Media Award from Mozilla. The Creative Media Awards reflect Mozilla’s commitment to partner with artists to engage the public in exploring and understanding complex technical issues, such as the potential pitfalls of AI in dating apps (Monster Match) and the hiring process (Survival of the Best Fit).

“Stealing Ur Feelings” is debuting online alongside a petition from Mozilla to Snapchat. Viewers are asked to smile at the camera at the end of the film if they would like to sign a petition demanding that Snapchat publicly disclose whether it is already using facial emotion recognition technology in its app. Once the camera detects a smile, the viewer is taken to a Mozilla petition, which they can read and sign.

The documentary also generates a downloadable scorecard featuring a photo of the viewer with Snapchat-like filters and lenses. The unique image reveals some tongue-in-cheek assumptions that the AI makes about the viewer while watching the film. These include the viewer’s IQ, annual income, and how much they like pizza and Kanye West.

“Stealing Ur Feelings” has screened at several distinguished film festivals and exhibits in recent months, including the Tribeca Film Festival, Open City Documentary Festival, Camden International Film Festival, and the Tate Modern. Later this year, the film will screen at Tactical Tech’s Glass Room installation in San Francisco. The film has already been inducted into MIT’s prestigious docubase and praised by the Museum of the Moving Image.

“Facial recognition is the perfect tool to extract even more data from us, all the time, everywhere — even when we’re not scrolling, typing, or clicking,” said Noah Levenson, the New York-based artist and engineer who created “Stealing Ur Feelings.” “Set against the backdrop of Cambridge Analytica and the digital privacy scandals rocking today’s news, I wanted to create a fast, darkly funny, dizzying unveiling of the ‘fun secret feature’ lurking behind our selfies.” Levenson was recently named a Rockefeller Foundation Bellagio Resident Fellow on artificial intelligence.

“Artificial intelligence is increasingly interwoven into our everyday lives,” said Mark Surman, Mozilla’s Executive Director. “Mozilla’s Creative Media Awards seek to raise awareness about the potential of AI, and ensure the technology is used in a way that makes our lives better rather than worse.”

The post Introducing ‘Stealing Ur Feelings,’ an Interactive Documentary About Big Tech, AI, and You appeared first on The Mozilla Blog.


Cameron Kaiser: A quick note for 64-bit PowerPC Firefox builders

Mozilla planet - Mon, 23/09/2019 - 06:56
If you build Firefox on 64-bit Linux, *BSD, etc. for your G5, you may want to check out this Talospace article about an upcoming low-level fix, especially as we need to ensure big-endian systems work fine with it. The problem never affected OS X Firefox for Power Macs because those builds were only ever 32-bit, and even TenFourFox is 32-bit through and through, even on the G5, largely for reasons of Carbon compatibility, which we need for some pieces of the widget code. Since this is syndicated on Planet Mozilla, let me give a big thanks to Ted Campbell for figuring out the root cause, which turned out to be a long-standing problem I don't think anyone ever noticed before.

I have not decided what to land on TenFourFox FPR17, mostly because this fix took up a fair bit of time; it's possible FPR17 may be a security-only stopgap release. In a related vein, the recent shift to a 4-week cadence for future Firefox releases starting in January will unfortunately increase my workload and may change how I choose to roll out additional features generally. Build day on the G5 is, in fact, literally a day, or sometimes close to two: with the G5 in Reduced performance mode to cut down on fan noise and power consumption, it takes about 20 hours to generate all four CPU-optimized releases, plus another 6 hours to regenerate the debug build for development testing; if there are JavaScript changes, I usually run the debug, G4/7450 and G5 builds each through the 20,000+ item test suite, which adds another ten hours. Although the build and test process is about two-thirds automated, it still needs intervention if it goes awry; plus, uploading to SourceForge is currently a manual process, and of course the documentation doesn't write itself. I don't have any easy means of cross-building TenFourFox on the Talos II (which, by the way, with dual 4-core CPUs for 32 threads builds Firefox in about half an hour), so I need to figure out how to balance this additional time requirement with the time I personally have available. While I do intend to continue supporting TenFourFox for those occasions when I need to use a Power Mac, this Talos II is undeniably my daily driver, and fixing bugs in the mainline Firefox build I use every day is unavoidably a higher priority.


François Marier: Restricting third-party iframe widgets using the sandbox attribute, referrer policy and feature policy

Mozilla planet - Sat, 21/09/2019 - 05:19

Adding third-party embedded widgets on a website is a common but potentially dangerous practice. Thankfully, the web platform offers a few controls that can help mitigate the risks. While this post uses the example of an embedded SurveyMonkey survey, the principles can be used for all kinds of other widgets.

Note that this is by no means an endorsement of SurveyMonkey's proprietary service. If you are looking for a survey product, you should consider a free and open source alternative like LimeSurvey.

SurveyMonkey's snippet

In order to embed a survey on your website, the SurveyMonkey interface will tell you to install the following website collector script:

<script>(function(t,e,s,n){var o,a,c;t.SMCX=t.SMCX||[],e.getElementById(n)||(o=e.getElementsByTagName(s),a=o[o.length-1],c=e.createElement(s),c.type="text/javascript",c.async=!0,c.id=n,c.src=["https:"===location.protocol?"https://":"http://","widget.surveymonkey.com/collect/website/js/tRaiETqnLgj758hTBazgd9NxKf_2BhnTfDFrN34n_2BjT1Kk0sqrObugJL8ZXdb_2BaREa.js"].join(""),a.parentNode.insertBefore(c,a))})(window,document,"script","smcx-sdk");</script><a style="font: 12px Helvetica, sans-serif; color: #999; text-decoration: none;" href=https://www.surveymonkey.com> Create your own user feedback survey </a>

which can be rewritten in a more understandable form as:

(function (s) {
  var scripts, last_script, new_script;
  window.SMCX = window.SMCX || [],
  document.getElementById("smcx-sdk") ||
    (
      scripts = document.getElementsByTagName("script"),
      last_script = scripts[scripts.length - 1],
      new_script = document.createElement("script"),
      new_script.type = "text/javascript",
      new_script.async = true,
      new_script.id = "smcx-sdk",
      new_script.src = [
        "https:" === location.protocol ? "https://" : "http://",
        "widget.surveymonkey.com/collect/website/js/tRaiETqnLgj758hTBazgd9NxKf_2BhnTfDFrN34n_2BjT1Kk0sqrObugJL8ZXdb_2BaREa.js"
      ].join(""),
      last_script.parentNode.insertBefore(new_script, last_script)
    )
})();

The fact that this adds a third-party script dependency to your website is problematic: a security vulnerability in their infrastructure could lead to a complete compromise of your site, since third-party scripts have full control over the pages that include them. Security issues aside, this also enables that third party to violate your users' privacy expectations and to extract any information displayed on your site for marketing purposes.

However, if you embed the snippet on a test page and inspect it with the developer tools, you will find that it actually creates an iframe:

<iframe width="500" height="500" frameborder="0" allowtransparency="true" src="https://www.surveymonkey.com/r/D3KDY6R?embedded=1" ></iframe>

and you can use that directly on your site without having to load their script.

Mixed content anti-pattern

As an aside, the script snippet they propose makes use of a common front-end anti-pattern:

"https:"===location.protocol?"https://":"http://"

This is presumably meant to avoid inserting an HTTP script element into an HTTPS page, since that would be considered mixed content and get blocked by browsers. However, it is entirely unnecessary: one should only ever use the HTTPS version of such scripts anyway, since an HTTP page never prohibits embedding HTTPS content.

In other words, the above code snippet can be simplified to:

"https://" Restricting iframes

Thanks to defenses which have been added to the web platform recently, there are a few things that can be done to constrain iframes.

Firstly, you can choose to hide your full page URL from SurveyMonkey using the referrer policy:

referrerpolicy="strict-origin"

This may seem harmless, but page URLs sometimes include sensitive information in the URL path or query string, for example, search terms that a user might have typed. The strict-origin policy will limit the referrer to your site's hostname, port and protocol. For example, an embed on https://example.com/search?q=medical+history would send only the origin https://example.com/ as the referrer.

Secondly, you can prevent the iframe from being able to access anything about its embedding page or to trigger popups and unwanted downloads using the sandbox attribute:

sandbox="allow-scripts allow-forms"

Ideally, the contents of this attribute would be empty so that all restrictions would be active, but SurveyMonkey is a JavaScript application and it of course needs to submit a form since that's the purpose of the widget.

Finally, a new experimental capability is making its way into browsers: feature policy. In the context of untrusted iframes, it enables developers to explicitly disable certain powerful features:

allow="accelerometer 'none'; ambient-light-sensor 'none'; camera 'none'; display-capture 'none'; document-domain 'none'; fullscreen 'none'; geolocation 'none'; gyroscope 'none'; magnetometer 'none'; microphone 'none'; midi 'none'; payment 'none'; usb 'none'; vibrate 'none'; vr 'none'; webauthn 'none'"

Putting it all together, we end up with the following HTML snippet:

<iframe width="500"
        height="500"
        frameborder="0"
        allowtransparency="true"
        allow="accelerometer 'none'; ambient-light-sensor 'none'; camera 'none'; display-capture 'none'; document-domain 'none'; fullscreen 'none'; geolocation 'none'; gyroscope 'none'; magnetometer 'none'; microphone 'none'; midi 'none'; payment 'none'; usb 'none'; vibrate 'none'; vr 'none'; webauthn 'none'"
        sandbox="allow-scripts allow-forms"
        referrerpolicy="strict-origin"
        src="https://www.surveymonkey.com/r/D3KDY6R?embedded=1"
></iframe>

Content Security Policy

Another advantage of using the iframe directly is that instead of loosening your site's Content Security Policy by adding all of the following:

  • script-src https://www.surveymonkey.com
  • img-src https://www.surveymonkey.com
  • frame-src https://www.surveymonkey.com

you can limit the extra directives to just the frame controls:

  • frame-src https://www.surveymonkey.com
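
As an illustration of how small that change is on the server side, here is a hedged sketch of sending such a policy as a response header. Flask is used purely as an example framework (it is not part of the original post), and the policy string is illustrative:

from flask import Flask

app = Flask(__name__)

# Whatever your existing policy is, only the frame-src directive needs to
# reference SurveyMonkey when the widget is embedded directly as an iframe.
CSP = "default-src 'self'; frame-src https://www.surveymonkey.com"

@app.after_request
def add_csp(response):
    response.headers["Content-Security-Policy"] = CSP
    return response

if __name__ == "__main__":
    app.run()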

CSP Embedded Enforcement would be another nice mechanism to make use of, but looking at SurveyMonkey's CSP policy:

Content-Security-Policy: default-src https: data: blob: 'unsafe-eval' 'unsafe-inline' wss://*.hotjar.com 'self'; img-src https: http: data: blob: 'self'; script-src https: 'unsafe-eval' 'unsafe-inline' http://www.google-analytics.com http://ajax.googleapis.com http://bat.bing.com http://static.hotjar.com http://www.googleadservices.com 'self'; style-src https: 'unsafe-inline' http://secure.surveymonkey.com 'self'; report-uri https://csp.surveymonkey.com/report?e=true&c=prod&a=responseweb

it allows the injection of arbitrary Flash files, inline scripts, evals and any other scripts hosted on an HTTPS URL, which means that it doesn't really provide any meaningful security benefits.

Embedded enforcement is therefore not a usable security control in this particular example until SurveyMonkey adopts a stricter CSP policy.


Mozilla Localization (L10N): L10n Report: September Edition

Mozilla planet - Thu, 19/09/2019 - 17:34

Please note some of the information provided in this report may be subject to change as we are sometimes sharing information about projects that are still in early stages and are not final yet. 

Welcome!

New localizers

Are you a locale leader and want us to include new members in our upcoming reports? Contact us!

New community/locales added

New content and projects

What’s new or coming up in Firefox desktop

As anticipated in the previous edition of the L10N Report, Firefox 70 is going to be a large release, introducing new features and several improvements around Tracking Protection, privacy and security. The deadline to ship any updates in Firefox 70 is October 8. Make sure to test your localization before the deadline, focusing on:

  • about:protections
  • about:logins
  • Privacy preferences and protection panel (the panel displayed when you click on the shield icon in the address bar)

Also be mindful of a few last-minute changes that were introduced in Beta to allow for better localization.

If your localization is missing several strings and you don’t know where to start, don’t forget that you can use Tags in Pontoon to focus on high-priority content first (example).

Upcoming changes to the release cycle

The current version of the rapid release cycle allows for cycles of different lengths, ranging from 6 to 8 weeks. Over two years ago we moved to localizing Nightly by default. Assuming an average 6-week cycle for the sake of simplicity:

  • New strings are available for localization a few days after landing in mozilla-central and showing up in Nightly (they spend some time in a quarantine repository, to avoid exposing localizers to unclear content).
  • Depending on when a string lands within the cycle, you’d have up to 6 weeks to localize before it moves to Beta. In the worst case scenario, a string could land at the very end of the cycle, and will need to be translated after that version of Firefox moves to Beta.
  • Once it moves to Beta, you still have almost the full cycle (4.5 weeks) to localize. Ideally, this time should be spent to fine tune and test the localization, more than catching up with missing strings.

A few days ago it was announced that Firefox is progressively moving to a 4-week release cycle. If you’re focusing your localization on Nightly, this should have a relatively small impact on your work:

  • In Nightly, you’d have up to 4 weeks to localize content before it moves to Beta.
  • In Beta, you’d have up to 2.5 weeks to localize.

The cycles will shorten progressively, stabilizing at 4 weeks around April 2020. Firefox 75 will be the first release with a 4-week cycle in both Nightly and Beta.

While this shortens the time available for localization, it also means that the schedule becomes predictable and, more importantly, localization updates can ship faster: if you fix something in Beta today, it could take up to 8 weeks to ship in release. With the new cycle, it will always take a maximum of 4 weeks.

What’s new or coming up in web projects Firefox Accounts

A lot more strings have landed since the last report.  Please allocate time accordingly after finishing other higher priority projects. An updated deadline will be added to Pontoon in the coming days. This will ensure localized content is on production as part of the October launch.

Mozilla.org

A few pages have been recently added and more will be added in the coming weeks to support the major release in October. Most of the pages will be enabled in de, en-CA, en-GB, and fr locales only, and some can be opted-in. Please note, Mozilla staff editors will be localizing the pages in German and French.

Legal documentation

We have quite a few updates in legal documentation. If your community is interested in reviewing any of the following, please adhere to this process: All change requests will be done through pull requests on GitHub. With a few exceptions, all the suggested changes should go through a peer review for approval before the changes go to production.

MDN & SuMo

Due to the recent merge into a single Bengali locale on the product side, the articles were consolidated as well. For overlapping articles, the versions kept were selected based on criteria such as article completeness and completion date.

What’s new or coming up in SuMo

Newly published articles for Fire TV:

Newly published articles for Preview:

Newly published articles for Firefox for iOS:

Improving TM matching of Fluent strings

Translation Memory (TM) matching has been improved by changing the way we store Fluent strings in our TM. Instead of storing full messages (together with their IDs and other syntax elements), we now store text only. Obviously, that increases the number of results shown in the Machinery tab, and also makes our TMX exports more usable. Thanks to Jordi Serratosa for driving this effort forward! As part of the fix, we also discovered and fixed bug 1578155, which further improves TM matching for all file formats.

Faster saving of translations.

As part of fixing bug 1578057, Michal Stanke discovered a potential speed up for saving translations. Specifically, improving the way we update the latest activity column in dashboards resulted in a noticeable speedup of 10-20% for saving a translation. That’s a huge win for an operation that happens around 2,000 times every day. Well done, Michal!

Useful Links

Questions? Want to get involved?

Did you enjoy reading this report? Let us know how we can improve by reaching out to any one of the l10n-drivers listed above.


Will Kahn-Greene: Markus v2.0.0 released! Better metrics API for Python projects.

Mozilla planet - Thu, 19/09/2019 - 15:00
What is it?

Markus is a Python library for generating metrics.

Markus makes it easier to generate metrics in your program by:

  • providing multiple backends (Datadog statsd, statsd, logging, logging roll-up, and so on) for sending metrics data to different places
  • sending metrics to multiple backends at the same time
  • providing a testing framework for easy metrics generation testing
  • providing a decoupled architecture that makes it easier to write metrics-generating code without having to worry about whether a metrics client has been created and configured--similar to the Python logging module in this way

We use it at Mozilla on many projects.
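
If you have not used Markus before, here is a minimal usage sketch. The backend class path and method names follow the Markus documentation as I recall it, so treat the details as an approximation and check the docs linked below:

import time

import markus

# Configure once at application startup; here metrics are emitted through the
# Python logging module, but several backends can be listed at the same time.
markus.configure(
    backends=[{"class": "markus.backends.logging.LoggingMetrics"}]
)

# One metrics interface per module, similar to logging.getLogger(__name__).
metrics = markus.get_metrics(__name__)

def handle_request():
    metrics.incr("requests")  # count an event
    start = time.perf_counter()
    # ... do the actual work here ...
    metrics.timing("request_time", value=(time.perf_counter() - start) * 1000.0)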

v2.0.0 released!

I released v2.0.0 just now. Changes:

Features

  • Use time.perf_counter() if available. Thank you, Mike! (#34)
  • Support Python 3.7 officially.
  • Add filters for adjusting and dropping metrics getting emitted. See documentation for more details. (#40)

Backwards incompatible changes

  • tags now defaults to [] instead of None, which may affect some expected test output.

  • Adjust internals to run .emit() on backends. If you wrote your own backend, you may need to adjust it.

  • Drop support for Python 3.4. (#39)

  • Drop support for Python 2.7.

    If you're still using Python 2.7, you'll need to pin to <2.0.0. (#42)

Bug fixes

  • Document feature support in backends. (#47)
  • Fix MetricsMock.has_record() example. Thank you, John!
Where to go for more

Changes for this release: https://markus.readthedocs.io/en/latest/history.html#september-19th-2019

Documentation and quickstart here: https://markus.readthedocs.io/en/latest/index.html

Source code and issue tracker here: https://github.com/willkg/markus

Let me know whether this helps you!


Mozilla VR Blog: Virtual identities in Hubs

Mozilla planet - Thu, 19/09/2019 - 15:00
Virtual identities in Hubs

Identity is a complicated concept—who are we really? Most of us have government IDs that define part of our identity, but that’s just a starting point. We present ourselves differently depending on context—who we are with our loved ones might not be the same as who we are at work, but both are legitimate representations of ourselves.

Virtual spaces make this even harder. We might maintain many virtual identities with different degrees of overlap. Having control over our representation and identity online is a critical component of safety and privacy, and platforms should prioritize user agency.

More importantly, autonomy and privacy are intrinsically intertwined. If everyone saw my Google searches, I would probably change what I search for. If I knew my employer could monitor my interactions when I’m not at work, I would behave differently. Privacy isn’t just about protecting information about myself; it’s about allowing me to express myself.

Avatars

Avatars are a digital representation of individuals. They enable virtual embodiment, making communication in a virtual environment more natural and analogous to communication in real life. They also help us ground ourselves spatially in the 3D environment and allow others to have a specific point to reference, which enables directional sentiment and simulated eye contact.

Your decisions about avatar representation can both reveal personally identifiable information about you, such as your face, and affect your self-perception and influence your behavior. This phenomenon is known as the Proteus Effect. This effect can induce societal biases, like feeling more confident when embodying taller avatars.

When we think about applying concepts related to identity to social VR platforms like Hubs, platforms need to design and implement features that focus on enabling users to easily manage their identities. On the avatar side, that means making it really easy to choose, change, and customize avatars so that you decide how much or little you want it to represent you.

In Hubs, we chose robots as the default avatar instead of picking a single human representation. However, any glb file can be used as the base avatar form - it was important for us that the platform could support any number of avatar styles, driven by the communities and users themselves. That means if you prefer cats or pinecones, you have the flexibility to choose that representation for yourself instead.

When it comes to determining visual identity of one’s self in a 3D space, that control should belong to the user, not the platform.

Accounts

When you interact with others online, there is a risk of exposing different parts of your identity to them and it isn’t always clear what is exposed when you make an account on a website. Your profile on a social network, for example, may have an image of you shared with other people, display your legal name, or show an account pseudonym that you use with other online services.

Hubs allows you to use the platform regardless of whether or not you have an account, but one of the benefits of account-based services is the ability to have a known identity that can be responsible for certain actions and behaviors. Certain actions, like being promoted to a room moderator, require an account. A challenge with pseudonymous and anonymous spaces is that a lack of a valued account can also result in a lack of accountability.

Hubs accounts are purposely lightweight, requiring only an email address. Being able to tie your virtual identity to a second account, such as a Discord account, can provide further benefits, such as increased room security, and the ability to communicate across different platforms. Particularly when room links are more widely distributed, the room dynamics can benefit from linking users to a known identity, but platforms should respect how much information they request from users.

There’s a balance with how much knowledge the platforms need to validate identities and how much data users need to provide. When possible, personal information collection should be minimized.

Mozilla Hubs is an open source social VR platform—come try it out at hubs.mozilla.com or contribute here. Read more about privacy in Hubs here.


The Rust Programming Language Blog: Upcoming docs.rs changes

Mozilla planet - Wed, 18/09/2019 - 02:00

On September 30th breaking changes will be deployed to the docs.rs build environment. docs.rs is a free service building and hosting documentation for all the crates published on crates.io. It's open source, maintained by the Rustdoc team and operated by the Infrastructure team.

What will change

Builds will be executed inside the rustops/crates-build-env Docker image. That image contains a lot of system dependencies installed to ensure we can build as many crates as possible. It's already used by Crater, and we added all the dependencies previously installed in the legacy build environment.

To ensure we can continue operating the service in the future and to increase its reliability we also improved the sandbox the builds are executed in, adding new limits:

  • Each platform will now have 15 minutes to build its dependencies and documentation.
  • 3 GB of RAM will be available for the build.
  • Network access will be disabled (crates.io dependencies will still be fetched).
  • Only the target/ directory will be writable, and it will be purged after each build.

Finally, docs.rs will now use the latest nightly available when building crates, instead of using a manually updated pinned version of nightly.

How to prepare for the changes

To test if your crate builds inside the new environment you can download the Docker image locally and execute a shell inside it:

docker pull rustops/crates-build-env
docker run --rm --memory 3221225472 -it rustops/crates-build-env bash

Once you're in a shell you can install rustup (it's not installed by default in the image), install Rust nightly, clone your crate's repository and then build the documentation:

cargo fetch
time cargo doc --no-deps

To aid your testing, these commands will limit the available RAM to 3 GB and show the total execution time of cargo doc, but network access will not be blocked, as you'll need to fetch dependencies.

If your project needs a system dependency missing in the build environment, please open an issue on the Docker image's repository and we'll consider adding it.

If your crate fails to build because it took more than 15 minutes to generate its docs or it uses more than 3 GB of RAM please open an issue and we will consider reasonable limit increases for your crate. We will not enable network access for your crate though: you'll need to change your crate not to require any external resource at build time.

We recommend using Cargo features to remove the parts of the code causing build failures, enabling those features with docs.rs metadata.

Acknowledgements

The new build environment is based on Rustwide, the library powering Crater. It was extracted from the Crater codebase, and created both by the Crater contributors and the Rustwide contributors.

The implementation work on the docs.rs side was done by Pietro Albini and Onur Aslan, with QuietMisdreavus and Mark Rousskov reviewing the changes.


Mozilla Future Releases Blog: Moving Firefox to a faster 4-week release cycle

Mozilla planet - Tue, 17/09/2019 - 18:46

This article is cross-posted from Mozilla Hacks

Overview

We typically ship a major Firefox browser (Desktop and Android) release every 6 to 8 weeks. Building and releasing a browser is complicated and involves many players. To optimize the process, and make it more reliable for all users, over the years we’ve developed a phased release strategy that includes ‘pre-release’ channels: Firefox Nightly, Beta, and Developer Edition. With this approach, we can test and stabilize new features before delivering them to the majority of Firefox users via general release.

Today’s announcement

And today we’re excited to announce that we’re moving to a four-week release cycle! We’re adjusting our cadence to increase our agility, and bring you new features more quickly. In recent quarters, we’ve had many requests to take features to market sooner. Feature teams are increasingly working in sprints that align better with shorter release cycles. Considering these factors, it is time we changed our release cadence.

Starting Q1 2020, we plan to ship a major Firefox release every 4 weeks. Firefox ESR release cadence (Extended Support Release for the enterprise) will remain the same. In the years to come, we anticipate a major ESR release every 12 months with 3 months support overlap between new ESR and end-of-life of previous ESR. The next two major ESR releases will be ~June 2020 and ~June 2021.

Shorter release cycles provide greater flexibility to support product planning and priority changes due to business or market requirements. With four-week cycles, we can be more agile and ship features faster, while applying the same rigor and due diligence needed for a high-quality and stable release. Also, we put new features and implementation of new Web APIs into the hands of developers more quickly. (This is what we’ve been doing recently with CSS spec implementations and updates, for instance.)

In order to maintain quality and minimize risk in a shortened cycle, we must:

  • Ensure Firefox engineering productivity is not negatively impacted.
  • Speed up the regression feedback loop from rollout to detection to resolution.
  • Be able to control feature rollout based on release readiness.
  • Ensure adequate testing of larger features that span multiple release cycles.
  • Have clear, consistent mitigation and decision processes.
Firefox rollouts and feature experiments

Given a shorter Beta cycle, support for our pre-release channel users is essential, including developers using Firefox Beta or Developer Edition. We intend to roll out fixes to them as quickly as possible. Today, we produce two Beta builds per week. Going forward, we will move to more frequent Beta builds, similar to what we have today in Firefox Nightly.

Staged rollouts of features will be a continued best practice. This approach helps minimize unexpected (quality, stability or performance) disruptions to our release end-users. For instance, if a feature is deemed high-risk, we will plan for slow rollout to end-users and turn the feature off dynamically if needed.

We will continue to foster a culture of feature experimentation and A/B testing before rollout to release. Currently, the duration of experiments is not tied to a release cycle length and therefore not impacted by this change. In fact, experiment length is predominantly a factor of time needed for user enrollment, time to trigger the study or experiment and collect the necessary data, followed by data analysis needed to make a go/no-go decision.

Despite the shorter release cycles, we will do our best to localize all new strings in all locales supported by Firefox. We value our end-users from all across the globe. And we will continue to delight you with localized versions of Firefox.

Firefox release schedule 2019 – 2020

Firefox engineering will deploy this change gradually, starting with Firefox 71. We aim to achieve 4-week release cadence by Q1 2020. The table below lists Firefox versions and planned launch dates. Note: These are subject to change due to business reasons.

 a table showing the release dates for Firefox GA and pre-release channels, 2019-2020. Follow the link for data.

Process and product quality metrics

As we slowly reduce our release cycle length, from 7 weeks down to 6, 5, 4 weeks, we will monitor closely. We’ll watch aspects like release scope change; developer productivity impact (tree closure, build failures); beta churn (uplifts, new regressions); and overall release stabilization and quality (stability, performance, carryover regressions). Our main goal is to identify bottlenecks that prevent us from being more agile in our release cadence. Should our metrics highlight an unexpected trend, we will put in place appropriate mitigations.

Finally, projects that consume Firefox mainline or ESR releases, such as SeaMonkey and Tor, will have to do more frequent releases if they wish to stay current with Firefox releases. These Firefox releases will have fewer changes each, so they should be correspondingly easier to integrate. The 4-week releases of Firefox will be the most stable, fastest, and best quality builds.

In closing, we hope you’ll enjoy the new faster cadence of Firefox releases. You can always refer to https://wiki.mozilla.org/Release_Management/Calendar for the latest release dates and other information. Got questions? Please send email to release-mgmt@mozilla.com.

The post Moving Firefox to a faster 4-week release cycle appeared first on Future Releases.


The Firefox Frontier: What to do after a data breach

Mozilla planet - Tue, 17/09/2019 - 18:00

You saw the news alert. You got an email, either from Firefox Monitor or a company where you have an account. There’s been a security incident — a data breach. … Read more

The post What to do after a data breach appeared first on The Firefox Frontier.


Hacks.Mozilla.Org: Moving Firefox to a faster 4-week release cycle

Mozilla planet - Tue, 17/09/2019 - 17:10

Editor’s Note: Wednesday, 10:40am PT. We’ve updated this post with the following correction: The SeaMonkey Project consumes Firefox releases, not SpiderMonkey, which is Firefox’s JavaScript engine. Thanks to an astute reader for noticing.

Overview

We typically ship a major Firefox browser (Desktop and Android) release every 6 to 8 weeks. Building and releasing a browser is complicated and involves many players. To optimize the process, and make it more reliable for all users, over the years we’ve developed a phased release strategy that includes ‘pre-release’ channels: Firefox Nightly, Beta, and Developer Edition. With this approach, we can test and stabilize new features before delivering them to the majority of Firefox users via general release.

Today’s announcement

And today we’re excited to announce that we’re moving to a four-week release cycle! We’re adjusting our cadence to increase our agility, and bring you new features more quickly. In recent quarters, we’ve had many requests to take features to market sooner. Feature teams are increasingly working in sprints that align better with shorter release cycles. Considering these factors, it is time we changed our release cadence.

Starting Q1 2020, we plan to ship a major Firefox release every 4 weeks. Firefox ESR release cadence (Extended Support Release for the enterprise) will remain the same. In the years to come, we anticipate a major ESR release every 12 months with 3 months support overlap between new ESR and end-of-life of previous ESR. The next two major ESR releases will be ~June 2020 and ~June 2021.

Shorter release cycles provide greater flexibility to support product planning and priority changes due to business or market requirements. With four-week cycles, we can be more agile and ship features faster, while applying the same rigor and due diligence needed for a high-quality and stable release. Also, we put new features and implementation of new Web APIs into the hands of developers more quickly. (This is what we’ve been doing recently with CSS spec implementations and updates, for instance.)

In order to maintain quality and minimize risk in a shortened cycle, we must:

  • Ensure Firefox engineering productivity is not negatively impacted.
  • Speed up the regression feedback loop from rollout to detection to resolution.
  • Be able to control feature rollout based on release readiness.
  • Ensure adequate testing of larger features that span multiple release cycles.
  • Have clear, consistent mitigation and decision processes.
Firefox rollouts and feature experiments

Given a shorter Beta cycle, support for our pre-release channel users is essential, including developers using Firefox Beta or Developer Edition. We intend to roll out fixes to them as quickly as possible. Today, we produce two Beta builds per week. Going forward, we will move to more frequent Beta builds, similar to what we have today in Firefox Nightly.

Staged rollouts of features will be a continued best practice. This approach helps minimize unexpected (quality, stability or performance) disruptions to our release end-users. For instance, if a feature is deemed high-risk, we will plan for slow rollout to end-users and turn the feature off dynamically if needed.

We will continue to foster a culture of feature experimentation and A/B testing before rollout to release. Currently, the duration of experiments is not tied to a release cycle length and therefore not impacted by this change. In fact, experiment length is predominantly a factor of time needed for user enrollment, time to trigger the study or experiment and collect the necessary data, followed by data analysis needed to make a go/no-go decision.

Despite the shorter release cycles, we will do our best to localize all new strings in all locales supported by Firefox. We value our end-users from all across the globe. And we will continue to delight you with localized versions of Firefox.

Firefox release schedule 2019 – 2020

Firefox engineering will deploy this change gradually, starting with Firefox 71. We aim to achieve 4-week release cadence by Q1 2020. The table below lists Firefox versions and planned launch dates. Note: These are subject to change due to business reasons.

a table showing the release dates for Firefox GA and pre-release channels, 2019-2020

Process and product quality metrics

As we slowly reduce our release cycle length, from 7 weeks down to 6, 5, 4 weeks, we will monitor closely. We’ll watch aspects like release scope change; developer productivity impact (tree closure, build failures); beta churn (uplifts, new regressions); and overall release stabilization and quality (stability, performance, carryover regressions). Our main goal is to identify bottlenecks that prevent us from being more agile in our release cadence. Should our metrics highlight an unexpected trend, we will put in place appropriate mitigations.

Finally, projects that consume Firefox mainline or ESR releases, such as SeaMonkey and Tor, will have to do more frequent releases if they wish to stay current with Firefox releases. These Firefox releases will have fewer changes each, so they should be correspondingly easier to integrate. The 4-week releases of Firefox will be the most stable, fastest, and best quality builds.

In closing, we hope you’ll enjoy the new faster cadence of Firefox releases. You can always refer to https://wiki.mozilla.org/Release_Management/Calendar for the latest release dates and other information. Got questions? Please send email to release-mgmt@mozilla.com.

The post Moving Firefox to a faster 4-week release cycle appeared first on Mozilla Hacks - the Web developer blog.


Alexandre Poirot: Trabant Calculator - A data visualization of TreeHerder Jobs durations

Mozilla planet - Tue, 17/09/2019 - 17:04

Link to this tool (its sources)

What is this tool about?

Its goal is to give a better sense of how much computation is going on in Mozilla automation. The current TreeHerder UI surfaces job durations, but only per job. To get a sense of how much we stress our automation, we have to click on each individual job and do the sum manually. This tool does that sum for you, and it also tries to rank the jobs by their durations. I would like to open minds about the possible environmental impact we may have here. For that, I am translating these durations into something fun that doesn’t necessarily make any sense.

What is that car’s GIF?

The car is a Trabant. This car is often seen as symbolic of the former East Germany and the collapse of the Eastern Bloc in general. This part of the tool is just a joke; you may prefer to consider only the durations, which are meant to be trustworthy data. Translating a worker duration into CO2 emissions is almost impossible to get right, and that’s what I do here: translate a worker duration into a potential energy consumption, which I translate into a potential CO2 emission, before finally translating that CO2 emission into the equivalent emission of a Trabant over a given distance in kilometers.

Power consumption of an AWS worker per hour

Here is a very rough computation of Amazon AWS CO2 emissions for a t4.large worker. The power usage of the machines these workers run on could be 0.6 kW. Such a worker uses 25% of one of these machines. Then let’s say that Amazon’s Power Usage Effectiveness is 1.1. That means one hour of a worker consumes 0.165 kWh (0.6 * 0.25 * 1.1).

CO2 emission of electricity per kWh

Based on the US Environmental Protection Agency (source), the average CO2 emission per MWh is 998.4 lb/MWh. So 998.4 * 453.59237 (g/lb) = 452866 g/MWh, and 452866 / 1000 = 452 g of CO2/kWh. Unfortunately, the data is already old: it comes from a 2018 report, which seems to be about 2017 data.

CO2 emission of a Trabant per km

A Trabant emits 170 g of CO2 per km (source). (Another source reports 140 g, but let’s assume the higher figure.)

Final computation

Trabant kilometers = "Hours of computation" * "Power consumption of a worker per hour" * "CO2 emission of electricity per kWh" / "CO2 emission of a Trabant per km"
Trabant kilometers = "Hours of computation" * 0.165 * 452 / 170
=> Trabant kilometers = "Hours of computation" * 0.4387058823529412
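
To make the unit conversions explicit, here is a small Python sketch of the same arithmetic. It only restates the numbers derived above; it is not the tool’s actual implementation, and the helper name is made up for this example.

# Constants derived in the sections above.
WORKER_KWH_PER_HOUR = 0.6 * 0.25 * 1.1      # machine kW * worker share * PUE = 0.165
CO2_G_PER_KWH = 998.4 * 453.59237 / 1000.0  # EPA lb/MWh converted to g/kWh, ~452
TRABANT_CO2_G_PER_KM = 170.0                # using the higher emission figure

def trabant_km(hours_of_computation):
    """Convert hours of automation compute into equivalent Trabant kilometers."""
    co2_grams = hours_of_computation * WORKER_KWH_PER_HOUR * CO2_G_PER_KWH
    return co2_grams / TRABANT_CO2_G_PER_KM

print(trabant_km(100))  # roughly 44 km for 100 hours of computation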

All of this must be wrong

Except the durations! Everything else is highly subject to debate.
Sources are here, and contributions or feedback are welcomed.

