
The Dutch Mozilla community

Planet Mozilla - http://planet.mozilla.org/
Updated: 1 day 7 hours ago

Pascal Chevrel: MozFR Transvision Reloaded: 1 year later

Fri, 13/05/2016 - 11:27

Just one year ago, the French Mozilla community was going through major changes: several key historical contributors were leaving the project, our various community portals were broken or no longer updated, and our tools were no longer maintained. At the same time, a few new contributors were popping into our IRC channel asking for ways to get involved in the French Mozilla community.

As a result, Kaze decided to organize the first ever community meetup for the French-speaking community in the Paris office (and we will repeat this meetup in June in the brand new Paris office!).


This resulted in a major and successful community reboot. Departing contributors passed the torch to other members of the community, and newer contributors met in real life for the first time. This is how Clarista officially became our events organizer, how Théo replaced Cédric as the main Firefox localizer, and how I became the new developer for Transvision! :)

What is Transvision? Transvision is a web application created by Philippe Dessantes to help the French team find localized/localizable strings in Mozilla repositories.

Summarized like that, it doesn't sound that great, but believe me, it is! Mozilla applications have gigantic repositories: there are tens of thousands of strings in our mercurial repositories, some of which we translated a decade ago. When you decide to change a verb for a better one, for example, it is important to be able to find all the occurrences of that verb you have used in the past, to see if they need an update too. When somebody spots a typo or a clumsy wording, it's good to be able to check whether you made the same translation mistake in other parts of the Mozilla applications several years ago, and of course it's good to be able to check that in just a few seconds. Basically, Philippe had built the QA/assistive technology that best fitted our team's localization process, and we just couldn't let it die.

During the MozFR meetup, Philippe showed me how the application worked and we created a GitHub repository where we put the currently running version of the code. I tagged that code as version 1.0.

Over the summer, I familiarized myself with the code, which was mostly procedural PHP, several Bash scripts to maintain copies of our mercurial repos, and a Python script used to extract the strings. Quickly, I decided that I would follow the old open source strategy of "release early, release often". Since I was doing that on the sidelines of my job at Mozilla, I needed the changes to be small but frequent incremental steps, as I didn't know how much time I could devote to this project. Having frequent releases also means that I always have the codebase in mind, so I can implement an idea quickly without having to dive into the code to remember it.

One year and 15 releases later, we are now at version 2.5, so here are the features and achievements I am most proud of:

  1. Transvision is alive and kicking :)
  2. We are now a team! Jesús Perez has been contributing code since last December, a couple more people have shown interest in contributing, and Philippe is interested in helping again too. We also have a dynamic community of localizers giving feedback, reporting bugs and asking for improvements
  3. The project is now organized, and if some day I need to step down and pass the torch to another maintainer, they should not have difficulty setting the project up and maintaining it. We have a GitHub repo, release notes, bugs, tagged releases, a beta server, unit testing, basic stats to understand what is used in the app, and a mostly cleaned-up codebase using much more modern PHP and tools (Atoum, Composer). It's not perfect, but I think that for amateur developers it's not bad at all, and the most important thing is that the code keeps on improving!
  4. There are now more than 3000 searches per week done by localizers on Transvision. That was more like 30 per week a year ago. There are searches in more than 70 languages, although 30 locales are doing the bulk of searches and French is still the biggest consumer with 40% of requests.
  5. Some people are using Transvision in ways I hadn't anticipated: for example, our documentation localizers use it to find the translation of UI strings mentioned in the help articles they translate for support.mozilla.org, and people in QA use it to point to localized strings in Bugzilla

A quick recap of what we have done, feature-wise, in the last 12 months:

  • Completely redesigned the application to look and feel good
  • Locale-to-locale searches: English is not necessarily the locale you want to use as the source (very useful to check differences between languages of the same family, for example Occitan/French/Catalan/Spanish...).
  • Hints and warnings for strings that look too long or too short compared to English, potentially bad typography, access keys that don't match your translation...
  • Possibility for anybody to file a bug in Bugzilla with a pointer to the badly translated string (yes we will use it for QA test days within the French community!)
  • Firefox OS strings are now there
  • Search results are a lot more complete and accurate
  • We now have a stable JSON/JSONP API (see the sketch below). I know that Pontoon uses it to provide translation suggestions, and I heard that the Moses project uses it too. (If you use the Transvision API, ping me, I'd like to know!)
  • We can point any string to the right revision-controlled file in the source and target repos
  • We have a companion add-on called MozTran for heavy users of the tool provided by Goofy, from our Babelzilla friends.

The above list is of course just a highlight of the main features; you can get more details in the changelog.
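Since the API bullet above is rather abstract, here is a hedged sketch of what calling a JSON/JSONP endpoint of that kind could look like from the command line. The endpoint path and parameter names below are made up for illustration; check the Transvision documentation for the real ones.

  # Hypothetical endpoint and parameters, for illustration only.
  # Plain JSON response:
  curl 'https://transvision.mozfr.org/api/v1/search/?term=bookmark&locale=fr'
  # JSONP variant: the same data wrapped in a callback function, so the
  # response can be consumed from a <script> tag on another site:
  curl 'https://transvision.mozfr.org/api/v1/search/?term=bookmark&locale=fr&callback=handleResults'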

If you use Transvision, I hope you enjoy it and that it is useful to you. If you don't use Transvision (yet), give it a try, it may help you in your translation process, especially if your localization process is similar to the French one (targets Firefox Nightly builds first, works directly on the mercurial repo, focuses on QA).

This was the first year of the rebirth of Transvision, and I hope that the year to come will be just as good as this one. I learnt a lot with this project and I am happy to see it grow both in terms of usage and community. I am also happy that a tool created by a specific localization team is now used by so many other teams in the world :)


Air Mozilla: Bay Area Rust Meetup May 2016

Fri, 13/05/2016 - 04:00

Bay Area Rust Meetup for May 2016.


The Rust Programming Language Blog: Taking Rust everywhere with rustup

Fri, 13/05/2016 - 02:00

Cross-compilation is an imposing term for a common kind of desire:

  • You want to build an app for Android, or iOS, or your router using your laptop.

  • You want to write, test and build code on your Mac, but deploy it to your Linux server.

  • You want your Linux-based build servers to produce binaries for all the platforms you ship on.

  • You want to build an ultraportable binary you can ship to any Linux platform.

  • You want to target the browser with Emscripten or WebAssembly.

In other words, you want to develop/build on one “host” platform, but get a final binary that runs on a different “target” platform.

Thanks to the LLVM backend, it’s always been possible in principle to cross-compile Rust code: just tell the backend to use a different target! And indeed, intrepid hackers have put Rust on embedded systems like the Raspberry Pi 3, bare metal ARM, MIPS routers running OpenWRT, and many others.

But in practice, there are a lot of ducks you have to get in a row to make it work: the appropriate Rust standard library, a cross-compiling C toolchain including linker, headers and binaries for C libraries, and so on. This typically involves poring over various blog posts and package installers to get everything "just so". And the exact set of tools can be different for every pair of host and target platforms.

The Rust community has been hard at work toward the goal of "push-button cross-compilation". We want to provide a complete setup for a given host/target pair with a single command. Today we're happy to announce that a major portion of this work is reaching beta status: we're building binaries of the Rust standard library for a wide range of targets, and shipping them to you via a new tool called rustup.

Introducing rustup

At its heart, rustup is a toolchain manager for Rust. It can download and switch between copies of the Rust compiler and standard library for all supported platforms, and track Rust’s nightly, beta, and release channels, as well as specific versions. In this way rustup is similar to the rvm, rbenv and pyenv tools for Ruby and Python. I’ll walk through all of this functionality, and the situations where it’s useful, in the rest of the post.

Today rustup is a command line application, and I’m going to show you some examples of what it can do, but it’s also a Rust library, and eventually these features are expected to be presented through a graphical interface where appropriate — particularly on Windows. Getting cross-compilation set up should eventually be a matter of checking a box in the Rust installer.

Our ambitions go beyond managing just the Rust toolchain: to have a true push-button experience for cross-compilation, it needs to set up the C toolchain as well. That functionality is not shipping today, but it’s something we hope to incorporate over the next few months.

Basic toolchain management

Let’s start with something simple: installing multiple Rust toolchains. In this example I create a new library, ‘hello’, then test it using rustc 1.8, then use rustup to install and test that same crate on the 1.9 beta.
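For readers following along in text form, a minimal sketch of that workflow looks like the following (the rustup subcommand names are the current ones; the exact spelling in the 2016 beta may differ slightly, and "1.9 beta" simply means whatever the beta channel points at):

  $ cargo new hello && cd hello        # create the new 'hello' library crate
  $ cargo test                         # build and test on the default toolchain (1.8 here)
  $ rustup toolchain install beta      # fetch the beta toolchain (1.9 at the time)
  $ rustup run beta cargo test         # run the same tests against the beta compiler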

That’s an easy way to verify your code works on the next Rust release. That’s good Rust citizenship!

We can use rustup show to show us the installed toolchains, and rustup update to keep them up to date with Rust's releases.

Finally, rustup can also change the default toolchain with rustup default:

$ rustc --version
rustc 1.8.0 (db2939409 2016-04-11)
$ rustup default 1.7.0
info: syncing channel updates for '1.7.0-x86_64-unknown-linux-gnu'
info: downloading component 'rust'
info: installing component 'rust'
info: default toolchain set to '1.7.0-x86_64-unknown-linux-gnu'
  1.7.0-x86_64-unknown-linux-gnu installed - rustc 1.7.0 (a5d1e7a59 2016-02-29)
$ rustc --version
rustc 1.7.0 (a5d1e7a59 2016-02-29)

On Windows, where Rust supports both the GNU and MSVC ABI, you might want to switch from the default stable toolchain on Windows, which targets the 32-bit x86 architecture and the GNU ABI, to a stable toolchain that targets the 64-bit, MSVC ABI.

$ rustup default stable-x86_64-pc-windows-msvc
info: syncing channel updates for 'stable-x86_64-pc-windows-msvc'
info: downloading component 'rustc'
info: downloading component 'rust-std'
...
  stable-x86_64-pc-windows-msvc installed - rustc 1.8.0-stable (db2939409 2016-04-11)

Here the “stable” toolchain name is appended with an extra identifier indicating the compiler’s architecture, in this case x86_64-pc-windows-msvc. This identifier is called a “target triple”: “target” because it specifies a platform for which the compiler generates (targets) machine code; and “triple” for historical reasons (in many cases “triples” are actually quads these days). Target triples are the basic way we refer to particular common platforms; rustc by default knows about 56 of them, and rustup today can obtain compilers for 14, and standard libraries for 30.
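If you are curious which triples your own installation knows about, both tools can list them (flags as in current releases):

  $ rustc --print target-list     # every target triple this compiler can generate code for
  $ rustup target list            # the triples rustup can install a standard library for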

Example: Building static binaries on Linux

Now that we've got the basic pieces in place, let's apply them to a simple cross-compilation task: building an ultraportable static binary for Linux.

One of the unique features of Linux that has become increasingly appreciated is its stable syscall interface. Because the Linux kernel puts exceptional effort into maintaining a backward-compatible kernel interface, it's possible to distribute ELF binaries with no dynamic library dependencies that will run on any version of Linux. Besides being one of the features that make Docker possible, it also allows developers to build self-contained applications and deploy them to any machine running Linux, regardless of whether it's Ubuntu or Fedora or any other distribution, and regardless of the exact mix of software libraries they have installed.

Today’s Rust depends on libc, and on most Linuxes that means glibc. For technical reasons, glibc cannot be fully statically linked, making it unusable for producing a truly standalone binary. Fortunately, an alternative exists: musl, a small, modern implementation of libc that can be statically linked. Rust has been compatible with musl since version 1.1, but until recently developers have needed to build their own compiler to benefit from it.

With that background, let’s walk through compiling a statically-linked Linux executable. For this example you’ll want to be running Linux — that is, your host platform will be Linux, and your target platform will also be Linux, just a different flavor: musl. (Yes, this is technically cross-compilation even though both targets are Linux).

I’m going to be running on Ubuntu 16.04 (using this Docker image). We’ll be building the basic hello world:

rust:~$ cargo new --bin hello && cd hello
rust:~/hello$ cargo run
   Compiling hello v0.1.0 (file:///home/rust/hello)
     Running `target/debug/hello`
Hello, world!

That’s with the default x86_64-unknown-linux-gnu target. And you can see it has many dynamic dependencies:

rust:~/hello$ ldd target/debug/hello
        linux-vdso.so.1 => (0x00007ffe5e979000)
        libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007fca26d03000)
        libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007fca26ae6000)
        libgcc_s.so.1 => /lib/x86_64-linux-gnu/libgcc_s.so.1 (0x00007fca268cf000)
        libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007fca26506000)
        /lib64/ld-linux-x86-64.so.2 (0x000056104c935000)
        libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007fca261fd000)

To compile for musl instead call cargo with the argument --target=x86_64-unknown-linux-musl. If we just go ahead and try that we’ll get an error:

rust:~/hello$ cargo run --target=x86_64-unknown-linux-musl
   Compiling hello v0.1.0 (file:///home/rust/hello)
error: can't find crate for `std` [E0463]
error: aborting due to previous error
Could not compile `hello`.
...

The error tells us that the compiler can’t find std. That is of course because we haven’t installed it.

To start cross-compiling, you need to acquire a standard library for the target platform. Previously, this was an error-prone, manual process — cue those blog posts I mentioned earlier. But with rustup, it’s just part of the usual workflow:

rust:~/hello$ rustup target add x86_64-unknown-linux-musl
info: downloading component 'rust-std' for 'x86_64-unknown-linux-musl'
info: installing component 'rust-std' for 'x86_64-unknown-linux-musl'
rust:~/hello$ rustup show
installed targets for active toolchain
--------------------------------------
x86_64-unknown-linux-gnu
x86_64-unknown-linux-musl

active toolchain
----------------
stable-x86_64-unknown-linux-gnu (default)
rustc 1.8.0 (db2939409 2016-04-11)

So I’m running the 1.8 toolchain for Linux on 64-bit x86, as indicated by the x86_64-unknown-linux-gnu target triple, and now I can also target x86_64-unknown-linux-musl. Neat. Surely we are ready to build a slick statically-linked binary we can release into the cloud. Let’s try:

rust:~/hello$ cargo run --target=x86_64-unknown-linux-musl
   Compiling hello v0.1.0 (file:///hello)
     Running `target/x86_64-unknown-linux-musl/debug/hello`
Hello, world!

And that… just worked! Run ldd on it for proof that it’s the real deal:

rust:~/hello$ ldd target/x86_64-unknown-linux-musl/debug/hello
        not a dynamic executable

Now take that hello binary and copy it to any x86_64 machine running Linux and it’ll run just fine.

For more advanced use of musl consider rust-musl-builder, a Docker image set up for musl development, which helpfully includes common C libraries compiled for musl.
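If you go down that road, the typical pattern (as I understand the rust-musl-builder README; the image name and mount point are that project's conventions, so double-check them there) is to run your usual cargo commands inside the container:

  # Build a release binary with the musl toolchain provided by the image.
  # 'ekidd/rust-musl-builder' and /home/rust/src come from that project's docs.
  docker run --rm -it -v "$(pwd)":/home/rust/src ekidd/rust-musl-builder \
      cargo build --release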

Example: Running Rust on Android

One more example. This time building for Android, from Linux, i.e., arm-linux-androideabi from x86_64-unknown-linux-gnu. This can also be done from OS X or Windows, though on Windows the setup is slightly different.

To build for Android we need to add the Android target, so let's set up another 'hello, world' project and install it.

rust:~$ cargo new --bin hello && cd hello
rust:~/hello$ rustup target add arm-linux-androideabi
info: downloading component 'rust-std' for 'arm-linux-androideabi'
info: installing component 'rust-std' for 'arm-linux-androideabi'
rust:~/hello$ rustup show
installed targets for active toolchain
--------------------------------------
arm-linux-androideabi
x86_64-unknown-linux-gnu

active toolchain
----------------
stable-x86_64-unknown-linux-gnu (default)
rustc 1.8.0 (db2939409 2016-04-11)

So let’s see what happens if we try to just build our 'hello’ project without installing anything further:

rust:~/hello$ cargo build --target=arm-linux-androideabi
   Compiling hello v0.1.0 (file:///home/rust/hello)
error: linking with `cc` failed: exit code: 1
... (lots of noise elided)
error: aborting due to previous error
Could not compile `hello`.

The problem is that we don’t have a linker that supports Android yet, so let’s take a moment’s digression to talk about building for Android. To develop for Android we need the Android NDK. It contains the linker rustc needs to create Android binaries. To just build Rust code that targets Android the only thing we need is the NDK, but for practical development we’ll want the Android SDK too.

On Linux, download and unpack them with the following commands (the output of which is not included here):

rust:~/home$ cd
rust:~$ curl -O https://dl.google.com/android/android-sdk_r24.4.1-linux.tgz
rust:~$ tar xzf android-sdk_r24.4.1-linux.tgz
rust:~$ curl -O http://dl.google.com/android/repository/android-ndk-r10e-linux-x86_64.zip
rust:~$ unzip android-ndk-r10e-linux-x86_64.zip

We further need to create what the NDK calls a "standalone toolchain". We're going to put ours in a directory called android-18-toolchain:

rust:~$ android-ndk-r10e/build/tools/make-standalone-toolchain.sh \
    --platform=android-18 --toolchain=arm-linux-androideabi-clang3.6 \
    --install-dir=android-18-toolchain --ndk-dir=android-ndk-r10e/ --arch=arm
Auto-config: --toolchain=arm-linux-androideabi-4.8, --llvm-version=3.6
Copying prebuilt binaries...
Copying sysroot headers and libraries...
Copying c++ runtime headers and libraries...
Copying files to: android-18-toolchain
Cleaning up...
Done.

Let’s notice a few things about these commands. First, the NDK we downloaded, android-ndk-r10e-linux-x86_64.zip is not the most recent release (which at the time of this writing is 'r11c’). Rust’s std is built against r10e and links to symbols that are no longer included in the NDK. So for now we have to use the older NDK. Second, in building the standalone toolchain we passed --platform=android-18 to make-standalone-toolchain.sh. The “18” here is the Android API level. Today, Rust’s arm-linux-androideabi target is built against Android API level 18, and should theoretically be forwards-compatible with subsequent Android API levels. So we’re picking level 18 to get the greatest Android compatibility that Rust presently allows.
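If you want to sanity-check the toolchain that make-standalone-toolchain.sh just produced, the standalone layout puts the cross-compilers under its bin/ directory (path assumed from the standard NDK standalone-toolchain layout):

  # Should report a GCC 4.8 cross-compiler, matching the Auto-config line above.
  $ android-18-toolchain/bin/arm-linux-androideabi-gcc --version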

The final thing for us to do is tell Cargo where to find the android linker, which is in the standalone NDK toolchain we just created. To do that we configure the arm-linux-androideabi target in .cargo/config with the 'linker’ value. And while we’re doing that we’ll go ahead and set the default target for this project to Android so we don’t have to keep calling cargo with the --target option.

[build]
target = "arm-linux-androideabi"

[target.arm-linux-androideabi]
linker = "/home/rust/android-18-toolchain/bin/arm-linux-androideabi-gcc"

Now let’s change back to the 'hello’ project directory and try to build again:

rust:~$ cd hello
rust:~/hello$ cargo build
   Compiling hello v0.1.0 (file:///home/rust/hello)

Success! Of course just getting something to build is not the end of the story. You’ve also got to package your code up as an Android APK. For that you can use cargo-apk.
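A rough sketch of what that looks like with cargo-apk (subcommand names as in the current tool, so double-check its README; the 2016 version may have been invoked differently):

  $ cargo install cargo-apk      # install the Cargo subcommand
  $ cargo apk build              # build an Android APK instead of a bare binary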

Rust everywhere else

Rust is a software platform with the potential to run on anything with a CPU. In this post I showed you a little bit of what Rust can already do, with the rustup tool. Today Rust runs on most of the platforms you use daily. Tomorrow it will run everywhere.

So what should you expect next?

In the coming months we're going to continue removing barriers to Rust cross-compilation. Today rustup provides access to the standard library, but as we've seen in this post, there's more to cross-compilation than rustc + std. It's acquiring and configuring the linker and C toolchain that is the most vexing — each combination of host and target platform requires something slightly different. We want to make this easier, and will be adding "NDK support" to rustup. What this means will again depend on the exact scenario, but we're going to start working from the most demanded use cases, like Android, and try to automate as much of the detection, installation and configuration of the non-Rust toolchain components as we can. On Android for instance, the hope is to automate everything for a basic initial setup except for accepting the licenses.

In addition to that there are multiple efforts to improve Rust cross-compilation tooling, including xargo, which can be used to build the standard library for targets unsupported by rustup, and cargo-apk, which builds Android packages from Cargo packages.
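For completeness, a hedged sketch of the xargo workflow for a target rustup does not ship std for (component and subcommand names as in the current tools; xargo rebuilds std from source, which needs a nightly toolchain and the rust-src component, and the target triple below is just a placeholder):

  $ rustup toolchain install nightly
  $ rustup component add rust-src --toolchain nightly   # source that xargo builds std from
  $ cargo install xargo
  $ rustup run nightly xargo build --target some-unsupported-triple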

Finally, the most exciting platform on the horizon for Rust is not a traditional target for systems languages: the web. With Emscripten today it’s quite easy to run C++ code on the web by converting LLVM IR to JavaScript (or the asm.js subset of JavaScript). And the upcoming WebAssembly (wasm) standard will cement the web platform as a first-class target for programming languages.

Rust is uniquely positioned to be the most powerful and usable wasm-targeting language for the immediate future. The same properties that make Rust so portable to real hardware make it nearly trivial to port Rust to wasm. The same can't be said for languages with complex runtimes that include garbage collectors.

Rust has already been ported to Emscripten (at least twice), but the code has not yet fully landed. This summer it’s happening though: Rust + Emscripten. Rust on the Web. Rust everywhere.

Epilogue

While many people are reporting success with rustup, it remains in beta, with some key outstanding bugs, and is not yet the officially recommended installation method for Rust (though you should try it). We’re going to keep soliciting feedback, applying polish, and fixing bugs. Then we’re going to improve the rustup installation experience on Windows by embedding it into a GUI that behaves like a proper Windows installer.

At that point we’ll likely update the download instructions on www.rust-lang.org to recommend rustup. I expect all the existing installation methods to remain available, including the non-rustup Windows installers, but at that point our focus will be on improving the installation experience through rustup. It’s also plausible that rustup itself will be packaged for package managers like Homebrew and apt.

If you want to try rustup for yourself, visit www.rustup.rs and follow the instructions. Then leave feedback on the dedicated forum thread, or file bugs on the issue tracker. More information about rustup is available in the README.
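For reference, the site's instructions boil down to a one-liner plus a quick check that the tools are on your PATH; it looks roughly like this (check www.rustup.rs for the exact, current command):

  $ curl https://sh.rustup.rs -sSf | sh   # download and run the installer
  $ source ~/.cargo/env                   # or open a new shell to pick up PATH changes
  $ rustup --version && rustc --version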

Thanks

Rust would not be the powerful system it is without the help of many individuals. Thanks to Diggory Blake for creating rustup, to Jorge Aparicio for fixing lots of cross-compilation bugs and documenting the process, Tomaka for pioneering Rust on Android, and Alex Crichton for creating the release infrastructure for Rust’s many platforms.

And thanks to all the rustup contributors: Alex Crichton, Brian Anderson, Corey Farwell, David Salter, Diggory Blake, Jacob Shaffer, Jeremiah Peschka, Joe Wilm, Jorge Aparicio, Kai Noda, Kamal Marhubi, Kevin K, llogiq, Mika Attila, NODA, Kai, Paul Padier, Severen Redwood, Taylor Cramer, Tim Neumann, trolleyman, Vadim Petrochenkov, V Jackson, Vladimir, Wayne Warren, Yasushi Abe, Y. T. Chung


Mozilla Addons Blog: AMO technical architecture

Thu, 12/05/2016 - 22:52

addons.mozilla.org (AMO) has been around for more than 12 years, making it one of the oldest websites at Mozilla. It celebrated its 10th anniversary a couple of years ago, as Wil blogged about.

AMO started as a PHP site that grew and grew as new pieces of functionality were bolted on. In October 2009 the rewrite from PHP to Python began. New features were added, the site grew ever larger, and now a few cracks are starting to appear. These are merely the result of a site that has lots of features and functionality and has been around for a long time.

The site architecture is currently something like the diagram below, but please note this simplifies the site and ignores the complexities of AWS, the CDN and other parts of the site.

[diagram: current AMO architecture]

Basically, all the code is in one repository and the main application (a Django app) is responsible for generating everything—from HTML, to emails, to APIs, and it all gets deployed at the same time. There are a few problems with this:

  • The amount of functionality in the site has caused such a growth in interactions between the features that it is harder and harder to test.
  • Large JavaScript parts of the site have no automated testing.
  • The JavaScript and CSS spill over between different parts of the site, so changes in one regularly break other parts of the site.
  • Not all parts of the site have the same expectation of uptime but are all deployed at the same time.
  • Not all parts of the site have the same requirements for code contributions.

We are moving towards a new model similar to the one used for Firefox Marketplace. Whereas Marketplace built its own front-end framework, we are going to be using React on the front end.

The end result will start to look something like this:

[diagram: new AMO architecture]

A separate version of the site is rendered for the different use cases, for example developers or users. In this case a request comes in and hits the appropriate front-end stack. That will render the site using universal React in node.js on the server. It will access the data store by calling the appropriate Python REST APIs.

In this scenario, the legacy Python code will migrate to being a REST API that manages storage, transactions, workflow, permissions and the like. All the front-facing user interface work will be done in React and be independent from each other as much as possible.
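In practice that split means the React front ends talk to the Django side over plain HTTP and JSON. As a hedged illustration (the endpoint path and parameters here are made up for illustration; the real routes are documented in the AMO API docs), a server-side render might fetch its data with something like:

  # Hypothetical endpoint shape, for illustration only:
  curl -s 'https://addons.mozilla.org/api/v3/addons/search/?q=video' \
      -H 'Accept: application/json'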

It's not quite microservices, but it is the breaking of a larger site into smaller independent pieces. The first part of this is happening with the "discovery pane" (accessible at about:addons). This is our first project using this infrastructure, and it features a new streamlined way to install add-ons with a new technical architecture to serve it to users.

As we roll out this new architecture we'll be doing more blog posts, so if you'd like to get involved then join our mailing list or check out our repositories on GitHub.


Support.Mozilla.Org: What’s Up with SUMO – 12th May

Thu, 12/05/2016 - 22:22

Hello, SUMO Nation!

Yes, we know, Friday the 13th is upon us… Fear not, in good company even the most unlucky days can turn into something special ;-) Pet a black cat, find a four leaf clover, smile and enjoy what the weekend brings!

As for SUMO, we have a few updates coming your way. Here they are!

Welcome, new contributors!

If you just joined us, don't hesitate – come over and say "hi" in the forums!

Contributors of the week

We salute you!

Don't forget that if you are new to SUMO and someone helped you get started in a nice way you can nominate them for the Buddy of the Month!

Most recent SUMO Community meeting

The next SUMO Community meeting
  • …is happening on WEDNESDAY the 18th of May – join us!
  • Reminder: if you want to add a discussion topic to the upcoming meeting agenda:
    • Start a thread in the Community Forums, so that everyone in the community can see what will be discussed and voice their opinion here before Wednesday (this will make it easier to have an efficient meeting).
    • Please do so as soon as you can before the meeting, so that people have time to read, think, and reply (and also add it to the agenda).
    • If you can, please attend the meeting in person (or via IRC), so we can follow up on your discussion topic during the meeting with your feedback.
Community

Social

Support Forum

Knowledge Base & L10n
Firefox
  • for iOS
    • Firefox for iOS 4.0 IS HERE! The highlights are:
      • Firefox is now present on the Today screen.
      • You can access your bookmarks in the search bar.
      • You can override the certificate warning on sites that present them (but be careful!).
      • You can print webpages.
      • Users on iOS 8 or lower will not be able to add the Firefox widget (a common response is available).
    • Start your countdown clocks ;-) Firefox for iOS 5.0 should be with us in approximately 6 weeks!

Thanks for your attention and see you around SUMO, soon!


Air Mozilla: Web QA Team Meeting, 12 May 2016

Thu, 12/05/2016 - 18:00

Weekly Web QA team meeting - please feel free and encouraged to join us for status updates, interesting testing challenges, cool technologies, and perhaps a...


Air Mozilla: Reps weekly, 12 May 2016

Thu, 12/05/2016 - 18:00

This is a weekly call with some of the Reps to discuss all matters about/affecting Reps and invite Reps to share their work with everyone.


Daniel Glazman: BlueGriffon 2.0 approaching...

Thu, 12/05/2016 - 15:48

BlueGriffon 2.0 is approaching, a major revamp of my cross-platform Wysiwyg Gecko-based editor. You can find previews here for OSX, Windows and Ubuntu 16.04 (64 bits).

BlueGriffon 2.0

Warnings:

  • it's HIGHLY recommended NOT to overwrite your existing 1.7 or 1.8 version; install it, for instance, in /tmp instead of /Applications
  • it's VERY HIGHLY recommended to start it with a freshly created dedicated profile:
    • open BlueGriffon.app --args -profilemanager (on OSX)
    • bluegriffon.exe -profilemanager (on Windows)
  • add-ons will NOT work with it, don't even try to install them in your test profile
  • it's a work in progress, expect bugs, issues and more

Changes:

  • major revamp, you won't even recognize the app :-)
  • based on a very recent version of Gecko, that was a HUGE work.
  • no more floating panels, too hacky and expensive to maintain
  • rendering engine support added for Blink, Servo, Vivliostyle and Weasyprint!
  • tons of debugging in *all* areas of the app
  • BlueGriffon now uses the native colorpicker on OSX. Yay!!! The native colorpicker of Windows is so weak and ugly we just can't use it (it can't even deal with opacity...) and decided to stick to our own implementation. On Linux, the situation is more complicated: the colorpicker is not as ugly as the Windows one, but it's unfortunately too weak compared to what our own offers.
  • more CSS properties handled
  • helper link from each CSS property in the UI to MDN
  • better templates handling
  • auto-reload of html documents if modified outside of BlueGriffon
  • better Markdown support
  • zoom in Source View
  • tech changes for future improvements: support for :active and other dynamic pseudo-classes, support for ::before and ::after pseudo-elements in CSS Properties; rely on Gecko's CSS lexer instead of our own.

We're also working on cool new features on the CSS side like CSS Variables and even much cooler than that :-)

Christian Heilmann: ChakraCore and Node musings at NodeConf London

Thu, 12/05/2016 - 13:50

Yesterday morning I dragged myself to the Barbican to present at NodeConf London. Dragged not because I didn't want to, but because I had three hours of sleep after coming back from Beyond Tellerrand the day before.

Presenting at NodeConf London (photo by Adrian Alexa)

I didn't quite have time to prepare my talk, and I ended up finishing my slides 5 minutes before it started. That's why I was, to use a simple term, shit scared of my talk. I'm not that involved in the goings-on in Node, and the impostor in me assumed the whole audience to be experts and that I'd make an utter berk of myself. However, this being a good starting point, I just went with it and used the opportunity to speak to an audience very much in the know about something I want Node to be.

I see the Node environment and ecosystem as an excellent opportunity to test out new JavaScript features and ideas without the issue of browser interoperability and incompatibility.

The thing I never was at ease about, though, is that everything is based on one JS engine. This is not how you define and test out a standard. You need to have several runtimes to execute your code. Much like a browser monoculture was a terrible thing and gave us thousands of now unmaintainable and hard-to-use web sites, not opening ourselves to various engines can lead to terrible scripts and apps based on Node.

The talk video is already live and you can also see all the other talks in this playlist:

The slides are on Slideshare:

NodeConfLondon – Making ES6 happen with ChakraCore and Node from Christian Heilmann

A screencast recording of the talk is on YouTube.

Resources I mentioned:

I was very happy to get amazing feedback from everyone I met, and to hear that people thoroughly enjoyed my presentation. Goes to show that the voice in your head telling you that you're not good enough is often just being a dick.


Karl Dubost: Schools Of Thoughts In Web Standards

Thu, 12/05/2016 - 07:29

Last night, I had the pleasure of reading Daniel Stenberg's blog post about URL standards. It led me to the discussion happening on the WHATWG URL spec about "It's not immediately clear that "URL syntax" and "URL parser" conflict". As you can expect, the debate is inflammatory on both sides, borderline hypocritical on some occasions, and full of the arguments I have seen during the 20 years I have followed discussions around Web development.

This post has no intent to be the right way to talk about it. It's more a collection of impressions I had when reading the thread, with my baggage of ex-W3C staff, Web agency work, and ex-Opera and now-Mozilla Web Compatibility work.

"Le chat a bon dos". French expression to basically say we are in the blaming game in that thread. Maybe not that useful.

What is happening?

  • Deployed Web content: Yes, there is a lot of broken content out there, and some of it will never be fixed whatever effort you put into it. That's normal, and it is not broken per se. Think about abandoned editions of old dictionaries with mistakes in them: history and the fabric of time. What should happen? When a mistake is frequent enough, it is interesting to have part of the parsing algorithm recover from it. The decision then becomes what "frequent enough" means, and that opens a new debate in itself, because it depends on countries, market shares, specific communities: everything society can provide in terms of economy, social behavior, history, etc.
  • Browsers: We can also often read in that thread that it's not the browsers' fault, it's because of the Web content. Well, that's not entirely true either. When a browser recovers from a previously-considered-broken pattern found on the Web, it just entrenches the pattern. Basically, it's not an act of saying "we need to be compatible with the deployed content" (aka "not our fault"); that would be a false pretense. It's an implementation decision which further drags the once-broken pattern into the normal patterns of the Web, a standardization process (a kind of jurisprudence). So basically it's about recognizing that this term or pattern is now part of the bigger picture. There's no such thing as saying "It is good for people who decide to be compatible with browsers" (read: "Join us or go to hell, I don't want to discuss with you."). There's a form of understandable escapism here, to hide a responsibility and the burden of creating a community. It would be more exact to say "Yes, we make the decision that the Web should be this and not anything else." It doesn't make the discussion easier, but it's more to the point about the power play in place.
  • $BROWSER lord: In the discussion, the $BROWSER is Google's Chrome. A couple of years ago, it was IE. Saying Chrome has no specific responsibility is, again, escapism. The same way that Safari has a lot of influence on the mobile Web, Chrome, through its market share, currently creates a tide which strongly influences the Web content and its patterns out there. I can guarantee that it's easier now for Chrome to be stricter with regard to syntax than it is for Edge or Firefox. Opera had to give up its rendering engine (Presto) because of this and switched to Blink.

There are different schools of thought for Web specifications:

  1. Standards defining a syntax considered ideal, leaving implementations free to recover with their own strategy when content is broken.
  2. Standards defining how to recover from all the possible ways content is mixed up. The intent is often to recover from a previous, stricter syntax, but in the end it just defines and expands the possibilities.
  3. Standards defining a different policy for parsing and for producing, with certain nuances in between (a kind of Postel's law).

I'm swaying between these three schools all the time. I don't like number 2 at all, but for survival it is sometimes necessary. My preferred way is 3: having a clear, strict syntax for producing content, and a recovery technique for parsing. And when possible I would prefer a sanitizer version of Postel's law.

What did he say btw?

RFC 760

The implementation of a protocol must be robust. Each implementation must expect to interoperate with others created by different individuals. While the goal of this specification is to be explicit about the protocol there is the possibility of differing interpretations. In general, an implementation should be conservative in its sending behavior, and liberal in its receiving behavior. That is, it should be careful to send well-formed datagrams, but should accept any datagram that it can interpret (e.g., not object to technical errors where the meaning is still clear).

Then in RFC 1122, section 1.2.2, the Robustness Principle:

At every layer of the protocols, there is a general rule whose application can lead to enormous benefits in robustness and interoperability [IP:1]:

"Be liberal in what you accept, and conservative in what you send"

Software should be written to deal with every conceivable error, no matter how unlikely; sooner or later a packet will come in with that particular combination of errors and attributes, and unless the software is prepared, chaos can ensue. In general, it is best to assume that the network is filled with malevolent entities that will send in packets designed to have the worst possible effect. This assumption will lead to suitable protective design, although the most serious problems in the Internet have been caused by unenvisaged mechanisms triggered by low-probability events; mere human malice would never have taken so devious a course!

Adaptability to change must be designed into all levels of Internet host software. As a simple example, consider a protocol specification that contains an enumeration of values for a particular header field -- e.g., a type field, a port number, or an error code; this enumeration must be assumed to be incomplete. Thus, if a protocol specification defines four possible error codes, the software must not break when a fifth code shows up. An undefined code might be logged (see below), but it must not cause a failure.

The second part of the principle is almost as important: software on other hosts may contain deficiencies that make it unwise to exploit legal but obscure protocol features. It is unwise to stray far from the obvious and simple, lest untoward effects result elsewhere. A corollary of this is "watch out for misbehaving hosts"; host software should be prepared, not just to survive other misbehaving hosts, but also to cooperate to limit the amount of disruption such hosts can cause to the shared communication facility.

The important point in the discussion of Postel's law is that he is talking about software behavior, not specifications. The new school of thought for Web standards is to create specifications which are "software-driven", not "syntax-driven". And that's why you can read entrenched debates about the technology.

My sanitizer version of Postel's law would be something along these lines:

  1. Be liberal in what you accept
  2. Be conservative in what you send
  3. Make conservative what you accepted (aka fixing it)

Basically, when you receive something broken and there is a clear path for fixing it, do it. Normalize it. In the debated case of accepting http://////, it would be:

  • parse it as http://////
  • communicate it to the next step as http:// and possibly with an optional notification that it has been recovered.
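As a toy illustration of that last step, here is a hedged shell sketch of such a "sanitizer": accept the sloppy form, normalize the run of slashes after the scheme, and only pass the conservative form downstream. (This is my own illustration of the idea, not anything taken from the URL Standard.)

  # Collapse any run of slashes after the scheme down to exactly two.
  normalize_url() {
    printf '%s\n' "$1" | sed -E 's|^(https?:)/+|\1//|'
  }

  normalize_url 'http://////example.com/page'
  # -> http://example.com/page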

Otsukare!


Mike Taylor: MITM compatibility issues

Thu, 12/05/2016 - 07:00

(Alternate title: Kaspersky is one typo away from being called Kaspesky)

Bug 1271875 is an interesting case of a compat issue not caused by a website, or a browser, but by a 3rd party. In this case, Kaspersky AntiVirus.

Apparently they Malcolm in the Middle you to keep you safe:

screenshot of facebook.com cert

I guess that's normal for Anti-Virus programs?

(Personally I stay safe via a combination of essential oils and hyper-link homeopathy.)

Anyways, the issue is that Facebook just turned on Brotli compression for some of their HTML resources. Which is great! Firefox has supported that since v44, and it makes Facebook faster for its users.
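If you want to see the negotiation for yourself, here's a quick sketch with curl (the header names are standard HTTP; what actually comes back obviously depends on the server and whatever is sitting in the middle):

  # Ask for Brotli and check which encoding actually comes back; an unmangled
  # connection to a Brotli-serving host should report "content-encoding: br".
  curl -s -o /dev/null -D - -H 'Accept-Encoding: br' https://www.facebook.com/ \
      | grep -i '^content-encoding'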

Kaspersky's MITM happily sends Accept-Encoding: br in the request but strips Content-Encoding: br from the response. Suddenly Facebook looks like ISIS is trying to hack you:

screenshot of facebook.com all jacked up

So, in this instance, if Facebook.com looks like binary garbage in your Firefox (and you have Kaspersky AV installed), consider a new anti-virus strategy (ideally they'll also have an update very soon if you're somehow stuck with it).

