Planet Mozilla - https://planet.mozilla.org/

The Rust Programming Language Blog: Completing the transition to the new borrow checker

Fri, 01/11/2019 - 01:00

For most of 2018, we've been issuing warnings about various bugs in the borrow checker that we plan to fix -- about two months ago, in the current Rust nightly, those warnings became hard errors. In about two weeks, when the nightly branches to become beta, those hard errors will be in the beta build, and they will eventually hit stable on December 19th, as part of Rust 1.40.0. If you're testing with Nightly, you should be all set -- but otherwise, you may want to go and check to make sure your code still builds. If not, we have advice for fixing common problems below.

Background: the non-lexical lifetime transition

When we released Rust 2018 in Rust 1.31, it included a new version of the borrow checker, one that implemented "non-lexical lifetimes". This new borrow checker did a much more precise analysis than the original, allowing us to eliminate a lot of unnecessary errors and make Rust easier to use. I think most everyone who was using Rust 2015 can attest that this shift was a big improvement.
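To make that concrete, here is a small example of mine (not from the original post) that the old, scope-based borrow checker rejected but that compiles under non-lexical lifetimes:

fn main() {
    let mut data = vec![1, 2, 3];
    let last = data.last(); // shared borrow of `data` begins here
    if let Some(&n) = last {
        println!("last = {}", n);
    } // ...and, under NLL, ends here at its last use
    // The old checker kept the borrow alive to the end of the block,
    // so it rejected this push; the new checker accepts it.
    data.push(4);
}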

The new borrow checker also fixed a lot of bugs

What is perhaps less well understood is that the new borrow checker implementation also fixed a lot of bugs. In other words, the new borrow checker did not just accept more programs -- it also rejected some programs that were only accepted in the first place due to memory unsafety bugs in the old borrow checker!

Until recently, those fixed bugs produced warnings, not errors

Obviously, we don't want to accept programs that could undermine Rust's safety guarantees. At the same time, as part of our commitment to stability, we try to avoid making sudden bug fixes that will affect a lot of code. Whenever possible, we prefer to "phase in" those changes gradually. We usually begin with "Future Compatibility Warnings", for example, before moving those warnings to hard errors (sometimes a small bit at a time). Since the bug fixes to the borrow checker affected a lot of crates, we knew we needed a warning period before we could make them into hard errors.

To implement this warning period, we kept two copies of the borrow checker around (this is a trick we use quite frequently, actually). The new checker ran first. If it found errors, we didn't report them directly: instead, we ran the old checker in order to see if the crate used to compile before. If so, we reported the errors as Future Compatibility Warnings, since we were changing something that used to compile into errors.
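In pseudocode, the scheme works roughly like the following sketch (hypothetical names, not actual rustc internals):

// Hypothetical sketch of the warning-period strategy described above.
enum Diagnostic {
    Error(String),
    Warning(String),
}

impl Diagnostic {
    fn into_warning(self) -> Diagnostic {
        match self {
            Diagnostic::Error(msg) | Diagnostic::Warning(msg) => Diagnostic::Warning(msg),
        }
    }
}

struct Crate;

// Stand-ins for the new (NLL) and old (AST-based) borrow checker passes.
fn nll_borrow_check(_krate: &Crate) -> Vec<Diagnostic> { Vec::new() }
fn ast_borrow_check(_krate: &Crate) -> Vec<Diagnostic> { Vec::new() }

fn check_borrows(krate: &Crate) -> Vec<Diagnostic> {
    // The new checker always runs first.
    let new_errors = nll_borrow_check(krate);
    if new_errors.is_empty() {
        return new_errors;
    }
    // The new checker rejected the crate; consult the old checker to see
    // whether this code used to compile.
    if ast_borrow_check(krate).is_empty() {
        // It used to compile: downgrade to future-compatibility warnings.
        new_errors.into_iter().map(Diagnostic::into_warning).collect()
    } else {
        // It never compiled: report hard errors as usual.
        new_errors
    }
}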

All good things must come to an end; and bad ones, too

Over time we have been slowly transitioning those future compatibility warnings into errors, a bit at a time. About two months ago, we decided that the time had come to finish the job. So, over the course of two PRs, we converted all remaining warnings to errors and then removed the old borrow checker implementation.

What this means for you

If you are testing your package with nightly, then you should be fine. In fact, even if you build on stable, we always recommend that you test your builds in CI with the nightly build, so that you can identify upcoming issues early and report them to us.

Otherwise, you may want to check your dependencies. When we decided to remove the old borrow checker, we also analyzed which crates would stop compiling. For anything that seemed to be widely used, we made sure that there were newer versions of that crate available that do compile (for the most part, this had all already happened during the warning period). But if you have those older versions in your Cargo.lock file, and you are only using stable builds, then you may find that your code no longer builds once 1.40.0 is released -- you will have to upgrade the dependency.

The most common crates that were affected are the following:

  • url version 1.7.0 -- you can upgrade to 1.7.2, though you'd be better off upgrading to 2.1.0
  • nalgebra version 0.16.13 -- you can upgrade to 0.16.14, though you'd be better off upgrading to 0.19.0
  • rusttype version 0.2.0 to 0.2.3 -- you can upgrade to 0.2.4, though you'd be better off upgrading to 0.8.1

You can find out which crates you rely upon using the cargo-tree command. If you find that you do rely on, say, url 1.7.0, you can upgrade to 1.7.2 by executing:

cargo update --package url --precise 1.7.2

Want to learn more?

If you'd like to learn more about the kinds of bugs that were fixed -- or if you are seeing errors in your code that you need to fix -- take a look at this excellent blog post by Felix Klock, which goes into great detail.


Mozilla Addons Blog: Upcoming changes to extension sideloading

Thu, 31/10/2019 - 22:27

Sideloading is a method of installing an extension in Firefox by adding an extension file to a special location using an executable application installer. This installs the extension in all Firefox instances on a computer.

Sideloaded extensions frequently cause issues for users since they did not explicitly choose to install them and are unable to remove them from the Add-ons Manager. This mechanism has also been employed in the past to install malware into Firefox. To give users more control over their extensions, support for sideloaded extensions will be discontinued. 

November 1 update: we’ve heard some feedback expressing confusion about how this change will give more control to Firefox users. Ever since we implemented abuse reporting in Firefox 68, the top kind of report we receive by far has been extension installs that weren’t expected and couldn’t be removed, and the extensions being reported are known to be distributed through sideloading. With this change, we are enforcing more transparency in the installation process, by letting users choose whether they want to install an application companion extension or not, and letting them remove it when they want to. Developers will still be free to self-distribute extensions on the web, and users will still be able to install self-distributed extensions. Enterprise administrators will continue to be able to deploy extensions to their users via policies. Other forms of automatic extension deployment like the ones used for some Linux distributions and applications like Selenium may be impacted by these changes. We’re still investigating some technical details around these cases and will try to strike the right balance between user choice and minimal disruption.

During the release cycle for Firefox version 73, which goes into pre-release channels on December 3, 2019, and into release on February 11, 2020, Firefox will continue to read sideloaded files, but they will be copied over to the user’s individual profile and installed as regular add-ons. Sideloading will stop being supported in Firefox version 74, which will be released on March 10, 2020. The transitional stage in Firefox 73 will ensure that no installed add-ons will be lost, and end users will gain the ability to remove them if they choose to.

If you self-distribute your extension via sideloading, please update your install flows and direct your users to download your extension through a web property that you own, or through addons.mozilla.org (AMO). Please note that all extensions must meet the requirements outlined in our Add-on Policies and Developer Agreement.  If you choose to continue self-distributing your extension, make sure that new versions use an update URL to keep users up-to-date. Instructions for distributing an extension can be found in our Extension Workshop document repository.

If you have any questions, please head to our community forum.

The post Upcoming changes to extension sideloading appeared first on Mozilla Add-ons Blog.


The Mozilla Blog: Facebook Is Still Failing at Ad Transparency (No Matter What They Claim)

Thu, 31/10/2019 - 20:58

Yesterday, Jack Dorsey made a bold statement: Twitter will cease all political advertising on the platform. “Internet political ads present entirely new challenges to civic discourse: machine learning-based optimization of messaging and micro-targeting, unchecked misleading information, and deep fakes. All at increasing velocity, sophistication, and overwhelming scale,” he tweeted.

Later that day, Sheryl Sandberg responded: Facebook doesn’t have to cease political advertising… because the platform is “focused and leading on transparency.” Sandberg cited Facebook’s ad archive efforts, which ostensibly allow researchers to study the provenance and impact of political ads.

To be clear: Facebook is still falling short on its transparency commitments. Further, even perfect transparency wouldn’t change the fact that Facebook is accepting payment to promote dangerous and untrue ads. 

Some brief history: Because of the importance of transparency in the political ad arena, Mozilla has been closely analyzing Facebook’s ad archive for over a year, and assessing its ability to provide researchers and others with meaningful information.

In February, Mozilla and 37 civil society organizations urged Facebook to provide better transparency into political advertising on their platform. Then, in March, Mozilla and leading disinformation researchers laid out exactly what an effective ad transparency archive should look like.

But when Facebook finally released its ad transparency API in March, it was woefully ineffective. It met just two of experts’ five minimum guidelines. Further, a Mozilla researcher uncovered a long list of bugs and shortcomings that rendered the API nearly useless.

The New York Times agreed: “Ad Tool Facebook Built to Fight Disinformation Doesn’t Work as Advertised,” reads a July headline. The article continues: “The social network’s new ad library is so flawed, researchers say, that it is effectively useless as a way to track political messaging.”

Since that time, Mozilla has confirmed that Facebook has made small changes in the API’s functionality — but we still judge the tool to be fundamentally flawed for its intended purpose of providing transparency and a data source for rigorous research.

Rather than deceptively promoting their failed API, Facebook must heed researchers’ advice and commit to truly transparent political advertising. If they can’t get that right, maybe they shouldn’t be running political ads at all for the time being.

The post Facebook Is Still Failing at Ad Transparency (No Matter What They Claim) appeared first on The Mozilla Blog.


Paul McLanahan: The Lounge on Dokku

Thu, 31/10/2019 - 18:44

Mozilla has hosted an enterprise instance of IRCCloud for several years now, and it’s been a great client to use with our IRC network. IRCCloud has deprecated their enterprise product, and so Mozilla recently decommissioned our instance. I then saw several colleagues praising The Lounge as a good self-hosted alternative. I became even more interested when I saw that the project maintains a Docker image distribution of their releases. I now have an instance running, I’m using irc.mozilla.org via this client, and I agree with my colleagues: it’s a decent replacement.


Anne van Kesteren: Shadow tree encapsulation theory

Thu, 31/10/2019 - 12:02

A long time ago Maciej wrote down five types of encapsulation for shadow trees (i.e., node trees that are hidden in the shadows from the document tree):

  1. Encapsulation against accidental exposure — DOM nodes from the shadow tree are not leaked via pre-existing generic APIs — for example, events flowing out of a shadow tree don't expose shadow nodes as the event target.
  2. Encapsulation against deliberate access — no API is provided which lets code outside the component poke at the shadow DOM. Only internals that the component chooses to expose are exposed.
  3. Inverse encapsulation — no API is provided which lets code inside the component see content from the page embedding it (this would have the effect of something like sandboxed iframes or Caja).
  4. Isolation for security purposes — it is strongly guaranteed that there is no way for code outside the component to violate its confidentiality or integrity.
  5. Inverse isolation for security purposes — it is strongly guaranteed that there is no way for code inside the component to violate the confidentiality or integrity of the embedding page.

Types 3 through 5 do not have any kind of support and type 4 and 5 encapsulation would be hard to pull off due to Spectre. User agents typically use a weaker variant of type 4 for their internal controls, such as the video and input elements, that does not protect confidentiality. The DOM and HTML standards provide type 1 and 2 encapsulation to web developers, and type 2 mostly due to Apple and Mozilla pushing rather hard for it. It might be worth providing an updated definition for the first two as we’ve come to understand them:

  1. Open shadow trees — no standardized web platform API provides access to nodes in an open shadow tree, except APIs that have been explicitly named and designed to do so (e.g., composedPath()). Nothing should be able to observe these shadow trees other than through those designated APIs (or “monkey patching”, i.e., modifying objects). Limited form of information hiding, but no integrity or confidentiality.
  2. Closed shadow trees — very similar to open shadow trees, except that designated APIs also do not get access to nodes in a closed shadow tree.
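As a concrete sketch of the difference (my example, using the standard DOM API):

// An open shadow root is reachable from outside via element.shadowRoot.
const openHost = document.createElement("div");
openHost.attachShadow({ mode: "open" });
console.log(openHost.shadowRoot); // a ShadowRoot object

// A closed shadow root is not: outside code gets no handle to it.
const closedHost = document.createElement("div");
const internal = closedHost.attachShadow({ mode: "closed" });
console.log(closedHost.shadowRoot); // null
// Only the component itself, which kept `internal`, can reach the tree.
internal.textContent = "hidden from the page";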

Type 2 encapsulation gives component developers control over what remains encapsulated and what is exposed. You need to take all your users into account and expose the best possible public API for them. At the same time, it protects you from folks taking a dependency on the guts of the component: aspects you might want to refactor or add functionality to over time. This is much harder with type 1 encapsulation, as there will be APIs that can reach into the details of your component, and if users do so you cannot refactor it without updating all the callers.

Now, both type 1 and 2 encapsulation can be circumvented, e.g., by a script changing the attachShadow() method or mutating another builtin that the component has taken a dependency on (a sketch of this appears after the quote below). That is, there is no integrity, and as they run in the same origin there is no security boundary either. The limited form of information hiding is primarily a maintenance boon and a way to manage complexity. Maciej addresses this as well:

If the shadow DOM is exposed, then you have the following risks:

  1. A page using the component starts poking at the shadow DOM because it can — perhaps in a rarely used code path.
  2. The component is updated, unaware that the page is poking at its guts.
  3. Page adopts new version of component.
  4. Page breaks.
  5. Page author blames component author or rolls back to old version.

This is not good. Information hiding and hiding of implementation details are key aspects of encapsulation, and are good software engineering practices.
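Coming back to the circumvention point above: for example, a same-origin script that runs before the component loads can replace attachShadow() and capture every shadow root, even closed ones (a minimal sketch):

const capturedRoots = [];
const realAttachShadow = Element.prototype.attachShadow;
Element.prototype.attachShadow = function (init) {
  const root = realAttachShadow.call(this, init);
  capturedRoots.push(root); // the page now holds a reference, whatever the mode
  return root;
};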


Dave Townsend: Creating HTML content with a fixed aspect ratio without the padding trick

Wed, 30/10/2019 - 18:01

It seems to be a common problem: you want to display some content on the web with a certain aspect ratio, but you don’t know the size you will be displaying at. How do you do this? CSS doesn’t really have the tools to do the job well currently (there are proposals). In my case I want to display a video and associated controls as large as possible inside a space that I don’t know the size of. The size of the video also varies depending on the one being displayed.

Padding height

The answer to this, according to almost all the searching I’ve done, is the padding-top/bottom trick. For reasons that I don’t understand, when using relative lengths (percentages) with the CSS padding-top and padding-bottom properties, the values are calculated based on the width of the element. So padding-top: 100% gives you padding equal to the width of the element. Weird. So you can fairly easily create a box with a height calculated from its width and from there display content at whatever aspect ratio you choose. But there’s an inherent problem here: you need to know the width of the box in the first place, or at least be able to constrain it based on something. In my case the aspect ratio of the video and the container are both unknown. In some cases I need to constrain the width and calculate the height, but in others I need to constrain the height and calculate the width, which is where this trick fails.
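For reference, the trick usually looks something like this (a sketch of mine, not from the post), here for a 4:3 box:

/* padding-top is resolved against the element's width, so this box's
   height is always three quarters of its width. */
.aspect-box {
  position: relative;
  width: 100%;
  padding-top: 75%; /* 3 / 4 = 0.75 */
}

/* The real content is absolutely positioned to fill the padded box. */
.aspect-box > .content {
  position: absolute;
  top: 0;
  left: 0;
  width: 100%;
  height: 100%;
}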

object-fit

There is one straightforward solution. The CSS object-fit property allows you to scale up content to the largest size possible for the space allocated. This is perfect for my needs, except that it only works for replaced content like videos and images. In my case I also need to overlay some controls on top and I won’t know where to position them unless they are inside a box the size of the video.
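For replaced content on its own, that is as simple as something like this (my sketch, not from the post):

/* Scale the video to the largest size that fits its container while
   preserving its intrinsic aspect ratio. */
.container video {
  width: 100%;
  height: 100%;
  object-fit: contain;
}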

The solution?

So what I need is something where I can create a box with set sizes and then scale both width and height to the largest that fits entirely in the container. What do we have on the web that can do that … oh yes, SVG. In SVG you can define the viewport for your content and any shapes you like inside with SVG coordinates, and then scale the entire SVG viewport using CSS properties. I want HTML content to scale here, and luckily SVG provides the foreignObject element, which lets you define a rectangle in SVG coordinates that contains non-SVG content, such as HTML! So here is what I came up with:

<!DOCTYPE html>
<html>
<head>
<style type="text/css">
html, body, svg, div {
  height: 100%;
  width: 100%;
  margin: 0;
  padding: 0;
}

div {
  background: red;
}
</style>
</head>
<body>
<svg viewBox="0 0 4 3">
  <foreignObject x="0" y="0" width="100%" height="100%">
    <div></div>
  </foreignObject>
</svg>
</body>
</html>

This is pretty straightforward. It creates an SVG document with a viewport with a 4:3 aspect ratio, a foreignObject container that fills the viewport, and then a div that fills that. What you end up with is a div with a 4:3 aspect ratio. While this shows it working against the full page, it seems to work anywhere with constraints on either height, width, or both, such as in a flex or grid layout. Obviously changing the viewBox allows you to get any aspect ratio you like; just setting it to the size of the video gives me exactly what I want.

You can see it working over on codepen.


William Lachance: Using BigQuery JavaScript UDFs to analyze Firefox telemetry for fun & profit

Wed, 30/10/2019 - 16:11

For the last year, we’ve been gradually migrating our backend Telemetry systems from AWS to GCP. I’ve been helping out here and there with this effort, most recently porting a job we used to detect slow tab spinners in Firefox nightly, which produced a small dataset that feeds a small ad hoc dashboard that Mike Conley maintains. This was a relatively small task as things go, but it highlighted some features and improvements which I think might be broadly interesting, so I decided to write up a small blog post about it.

Essentially all this dashboard tells you is what percentage of the Firefox nightly population saw a tab spinner over the past 6 months and, of those that did, how severe it was. In short, we’re just trying to make sure that there are no major regressions of user experience (and also that efforts to improve things bore fruit).

Pretty simple stuff, but getting the data necessary to produce this kind of dashboard used to be anything but trivial: while some common business/product questions could be answered by a quick query to clients_daily, getting engineering-specific metrics like this usually involved trawling through gigabytes of raw heka encoded blobs using an Apache Spark cluster and then extracting the relevant information out of the telemetry probe histograms (in this case, FX_TAB_SWITCH_SPINNER_VISIBLE_MS and FX_TAB_SWITCH_SPINNER_VISIBLE_LONG_MS) contained therein.

The code itself was rather complicated (take a look, if you dare) but even worse, running it could get very expensive. We had a 14-node cluster churning through this script daily, and it took on average about an hour and a half to run! I don’t have the exact cost figures on hand (and am not sure if I’d be authorized to share them if I did), but based on a back-of-the-envelope sketch, this one single script was probably costing us somewhere on the order of $10-$40 a day (that works out to between $3650-$14600 a year).

With our move to BigQuery, things get a lot simpler! Thanks to the combined effort of my team and data operations[1], we now produce “stable” ping tables on a daily basis with all the relevant histogram data (stored as JSON blobs), queryable using relatively vanilla SQL. In this case, the data we care about is in telemetry.main (named after the main ping, appropriately enough). With the help of a small JavaScript UDF function, all of this data can easily be extracted into a table inside a single SQL query scheduled by sql.telemetry.mozilla.org.
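For reference, the histogram payloads the UDF parses are JSON blobs shaped roughly like this (an illustrative sketch, not a verbatim ping; the UDF below only looks at the values map):

{
  "bucket_count": 50,
  "histogram_type": 2,
  "sum": 4500,
  "values": { "0": 10, "2297": 1, "64000": 0 }
}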

CREATE TEMP FUNCTION udf_js_json_extract_highest_long_spinner (input STRING)
RETURNS INT64
LANGUAGE js AS """
  if (input == null) {
    return 0;
  }
  var result = JSON.parse(input);
  var valuesMap = result.values;
  var highest = 0;
  for (var key in valuesMap) {
    var range = parseInt(key);
    if (valuesMap[key]) {
      highest = range > 0 ? range : 1;
    }
  }
  return highest;
""";

SELECT build_id,
  sum(case when highest >= 64000 then 1 else 0 end) as v_64000ms_or_higher,
  sum(case when highest >= 27856 and highest < 64000 then 1 else 0 end) as v_27856ms_to_63999ms,
  sum(case when highest >= 12124 and highest < 27856 then 1 else 0 end) as v_12124ms_to_27855ms,
  sum(case when highest >= 5277 and highest < 12124 then 1 else 0 end) as v_5277ms_to_12123ms,
  sum(case when highest >= 2297 and highest < 5277 then 1 else 0 end) as v_2297ms_to_5276ms,
  sum(case when highest >= 1000 and highest < 2297 then 1 else 0 end) as v_1000ms_to_2296ms,
  sum(case when highest > 0 and highest < 50 then 1 else 0 end) as v_0ms_to_49ms,
  sum(case when highest >= 50 and highest < 100 then 1 else 0 end) as v_50ms_to_99ms,
  sum(case when highest >= 100 and highest < 200 then 1 else 0 end) as v_100ms_to_199ms,
  sum(case when highest >= 200 and highest < 400 then 1 else 0 end) as v_200ms_to_399ms,
  sum(case when highest >= 400 and highest < 800 then 1 else 0 end) as v_400ms_to_799ms,
  count(*) as count
FROM (
  SELECT build_id, client_id, max(greatest(highest_long, highest_short)) as highest
  FROM (
    SELECT SUBSTR(application.build_id, 0, 8) as build_id,
      client_id,
      udf_js_json_extract_highest_long_spinner(payload.histograms.FX_TAB_SWITCH_SPINNER_VISIBLE_LONG_MS) AS highest_long,
      udf_js_json_extract_highest_long_spinner(payload.histograms.FX_TAB_SWITCH_SPINNER_VISIBLE_MS) as highest_short
    FROM telemetry.main
    WHERE application.channel='nightly'
      AND normalized_os='Windows'
      AND application.build_id > FORMAT_DATE("%Y%m%d", DATE_SUB(CURRENT_DATE(), INTERVAL 2 QUARTER))
      AND DATE(submission_timestamp) >= DATE_SUB(CURRENT_DATE(), INTERVAL 2 QUARTER))
  GROUP BY build_id, client_id)
GROUP BY build_id;

In addition to being much simpler, this new job is also way cheaper. The last run of it scanned just over 1 TB of data, meaning it cost us just over $5. Not as cheap as I might like, but considerably less expensive than before. I’ve also scheduled it to run only once every other day, since Mike tells me he doesn’t need this data any more often than that.

[1] I understand that Jeff Klukas, Frank Bertsch, Daniel Thorn, Anthony Miyaguchi, and Wesley Dawson are the principals involved - apologies if I’m forgetting someone.


Mozilla VR Blog: A Year with Spoke: Announcing the Architecture Kit

Tue, 29/10/2019 - 17:52

Spoke, our 3D editor for creating environments for Hubs, is celebrating its first birthday with a major update. Last October, we released the first version of Spoke, a compositing tool for mixing 2D and 3D content to create immersive spaces. Over the past year, we’ve made a lot of improvements and added new features to make building scenes for VR easier than ever. Today, we’re excited to share the latest feature that adds to the power of Spoke: the Architecture Kit!

We first talked about the components of the Architecture Kit back in March. With the Architecture Kit, creators now have an additional way to build custom content for their 3D scenes without using an external tool. Specifically, we wanted to take existing components that have already been optimized for VR and make it easy to configure those pieces to create original models and scenes. The Architecture Kit contains over 400 different pieces that are designed to be used together to create buildings - the kit includes wall, floor, ceiling, and roof pieces, as well as windows, trim, stairs, and doors.

Because Hubs runs across mobile, desktop, and VR devices and is delivered through the browser, performance is a key consideration. The different components of the Architecture Kit were created in Blender so that each piece would align together to create seamless connections. By avoiding mesh overlap, which is a common challenge when building with pieces that were made separately, z-fighting between faces becomes less of a problem. Many of the Architecture Kit pieces were made single-sided, which reduces the number of faces that need to be rendered. This is incredibly useful when creating interior or exterior pieces where one side of a wall or ceiling piece will never be visible to a user.

We wanted the Architecture Kit to be configurable beyond just the meshes. Buildings exist in a variety of contexts, so different pieces of the Kit can have one or more material slots with unique textures and materials that can be applied. This allows you to customize a wall with a window trim to have, for example, a brick wall and a wood trim. You can choose from the built-in textures of the Architecture Kit pieces directly in Spoke, or download the entire kit from GitHub.


This release, which focuses on classic architectural styles and interior building tools, is just the beginning. As part of building the architecture kit, we’ve built a Blender add-on that will allow 3D artists to create their own kits. Creators will be able to specify individual collections of pieces and set an array of different materials that can be applied to the models to provide variation for different meshes. If you’re an artist interested in contributing to Spoke and Hubs by making a kit, drop us a line at hubs@mozilla.com.


In addition to the Architecture Kit, Spoke has had some other major improvements over the course of its first year. Recent changes to object placement have made it easier when laying out scene objects, and an update last week introduced the ability to edit multiple objects at one time. We added particle systems over the summer, enabling more dynamic elements to be placed in your rooms. It’s also easier to visualize different components of your scene with a render mode toggle that allows you to swap between wireframe, normals, and shadows. The Asset Panel got a makeover, multi-select and edit was added, and we fixed a huge list of bugs when we made Spoke available as a fully hosted web app.

Looking ahead to next year, we’re planning on adding features that will give creators more control over interactivity within their scenes. While the complete design and feature set for this isn’t yet fully scoped, we’ve gotten great feedback from Spoke creators that they want to add custom behaviors, so we’re beginning to think about what it will look like to experiment with scripting or programmed behavior on elements in a scene. If you’re interested in providing feedback and following along with the planning, we encourage you to join the Hubs community Discord server, and keep an eye on the Spoke channel. You can also follow development of Spoke on GitHub.

There has never been a better time to start creating 3D content for the web. With new tools and features being added to Spoke all the time, we want your feedback, and we’d love to see what you’re building! Show off your creations and tag us on Twitter @ByHubs, or join us in the Hubs community Discord and meet other users, chat with the team, and stay up to date with the latest in Mozilla Social VR.


Mozilla Open Policy & Advocacy Blog: A Year in Review: Fighting Online Disinformation

Tue, 29/10/2019 - 16:45

A year ago, Mozilla signed the first ever Code of Practice on Disinformation, brokered in Europe as part of our commitment to an internet that elevates critical thinking, reasoned argument, shared knowledge, and verifiable facts. The Code set a wide range of commitments for all the signatories, from transparency in political advertising to the closure of fake accounts, to address the spread of disinformation online. And we were hopeful that the Code would help to drive change in the platform and advertising sectors.

Since then, we’ve taken proactive steps to help tackle this issue, and today our self-assessment of this work was published by the European Commission. Our assessment covers the work we’ve been doing at Mozilla to build tools within the Firefox browser to fight misinformation, empower users with educational resources, support research on disinformation, and lead advocacy efforts to push the ecosystem to live up to their own commitments within the Code of Practice.

Most recently, we’ve rolled out enhanced security features in the default settings of Firefox that safeguard our users from the pervasive tracking and collection of personal data by ad networks and tech companies by blocking third-party tracking cookies by default. As purveyors of disinformation feed off of information that can be revealed about an individual’s browsing behavior, we expect this protection to reduce the exposure of users to the risks of being targeted by disinformation campaigns.

We are proud of the steps we’ve taken during the last year, but it’s clear that the platforms and online advertising sectors need to do more (e.g. protection against online tracking, meaningful transparency of political ads, and support of the research community) to fully tackle online disinformation and truly empower individuals. In fact, recent studies have highlighted the fact that disinformation is not going away – rather, it is becoming a “pervasive and ubiquitous part of everyday life”.

The Code of Practice represents a positive shift in the fight to counter misinformation, and today’s self-assessments are proof of progress. However, this work has only just begun, and we must make sure that the objectives of the Code are fully realized. At Mozilla, we remain committed to furthering this agenda to counter disinformation online.

The post A Year in Review: Fighting Online Disinformation appeared first on Open Policy & Advocacy.


Cameron Kaiser: SourceForge download issues (and Github issues issues)

Tue, 29/10/2019 - 16:10
There are two high-priority problems currently affecting TenFourFox's download and development infrastructure. Please don't open any more Tenderapp tickets on these: I am painfully aware of them and am currently trying to devise workarounds, and the more tickets get opened the more time I spend redirecting people instead of actually working on fixes.

The first one is that the hack we use to relax JavaScript syntax to get Github working (somewhat) is now causing the browser to go into an infinite error loop on Github issue reports and certain other Github pages, presumably due to changes in code on their end. Unfortunately we use Github heavily for the wiki and issues tracker, so this is a major problem. The temporary workaround is, of course, a hack to relax JavaScript syntax even further. This hack is disgusting and breaks a lot of tests but is simple and does seem to work, so if I can't come up with something better it will be in FPR17. Most regular users won't be affected by this.

However, the one that is now starting to bite people is that SourceForge has upgraded its cipher suite to require a new cipher variant TenFourFox doesn't support. Unfortunately, all that happens is that links to download TenFourFox don't work; there is no error message, no warning and no explanation. That is a bug in and of itself, but it leaves people dead in the water: they are pinged to install an update they can't access directly.

Naturally you can download the browser on another computer, but this doesn't help users who only have a Power Mac. Fortunately, the good old TenFourFox Downloader works fine. As a temporary measure I have started offering the Downloader to all users from the TenFourFox main page, or you can get it from this Tenderapp issue. I will be restricting it to only Power Mac users regardless of browser very soon, because it's not small and serves no purpose on Intel Macs (we don't support them) or other current browsers (they don't need it), but I want to test that the download gate works properly first and I'm not at my G5 right this second, so this will at least unblock people for the time being.

For regression and stability reasons I don't want to do a full NSPR/NSS update but I think I can hack this cipher in as we did with another one earlier. I will be speeding up my timetable for FPR17 so that people can test the changes (including some other site compatibility updates), but beta users are forewarned that you will need to use another computer to download the beta. Sorry about that.


The Firefox Frontier: Password dos and don’ts

Tue, 29/10/2019 - 16:00

So many accounts, so many passwords. That’s online life. The average person with a typical online presence is estimated to have close to 100 online accounts, and that figure is …

The post Password dos and don’ts appeared first on The Firefox Frontier.

