Planet Mozilla: the Dutch Mozilla community
http://planet.mozilla.org/

Mozilla Reps Community: Council at Whistler Work Week

Wed, 22/07/2015 - 17:15

From the 23rd to the 26th of June, the Reps Council attended the Mozilla Work Week in Whistler, British Columbia, Canada, to discuss future plans with the rest of the Participation team. Unfortunately, Bob Reyes couldn’t attend due to delays in the visa process.

Human centered design workshop

At the beginning of the week, Emma introduced us to “Human centered design”, where the overall goal is to find solutions to a problem while keeping the individual in focus. We split into groups of two for the exercises and tried to come up with individual solutions for the problem statement “How might we improve the gift giving experience?”.

First we interviewed our partner to get to the root of the problem they currently have with gift giving. It might be that they don’t have ideas about what to give a person, or that the person doesn’t want to get a gift, or something else entirely. All in all, we came up with a lot of different root causes to analyze.

After talking intensively to the partner, we came up with individual solution proposals which we then discussed with the partner and improved based on their feedback.

We think this was a valuable workshop for the following sessions with the functional area teams.

Sessions with other functional areas

With this knowledge, Council members, as part of the Participation team, attended more than 27 meetings with other functional teams from all across Mozilla. The goal was to invite these teams to a session where we analyzed the issues they have with participation.

During these meetings we provided feedback on each team’s plans for bringing more participation and enabling more community members in their projects. We also gained a lot of insight into what functional teams think about the community and how valuable the work of volunteers is to them. In every session the goal was to come up with a solid problem statement and then find possible solutions for it. Since the Council members are volunteers themselves, each of us could give valuable input and ideas.

Some problem statements we tackled during the week (of course this is just a selection):

  • How might we increase the Code contribution retention so people come back to us after doing a code contribution?
  • How might community approach organizations who already touch the “socially involved” audience to infiltrate their systems so Shape of the Web can touch people so that it can have a lasting impact?
  • How might we improve the SUMO retention ratio from 10% to 30% in the next 6 months?
  • How might we deliver operational capability to run suggested titles in German speaking countries by the end of the year?

Our plan (along with the Participation team) is to continue working with most of these teams and the community in order to accomplish our common goal of bringing more participation into these functional areas.

Council meetings & Leadership workshop

During the week we also held Council meetings where we prioritised tasks and worked on the most important ones: the Mentor selection criteria 2.0, new budget SOPs, Reps recognition, and Reps selection criteria for important events. With these important tasks completed, we would like to focus on Reps as a leadership platform and set our goals for at least the next year.

In that direction, Rosana Ardila ran a highly interesting workshop on Friday around volunteer leadership and the current organisation model within Mozilla. In three hours we tried to think outside the box and come up with solutions to make volunteer leadership more effective. We haven’t started any plans to incorporate this with the community yet, but in the coming weeks we will look at it and figure out which parts might work for the community as well.

Radical Participation session

On Wednesday evening, a lot of volunteers and staff came together for the “Radical Participation” session. Mozilla invited several external experts on participation to give lightning talks to inspire us.

After these lightning talks we evaluated what resonated most with us, for our own work and for Mozilla in general. It was good to get an outside view and tips, so we can move forward with our plans as well prepared as we can be.

At the end, Mark and Mitchell gave us an update on what they think about Radical Participation. We think we’re on a good path planning for impact, but there is still a lot to do!

Participation Planning

This was one of the most interesting sessions we had, since we used a new and innovative post-it note method (more post-it notes!) for framing the current status of Participation in Mozilla. Identifying in detail the current status and structure of Participation was the most important step towards having more impact within the project.

By re-ordering and evaluating the notes we managed to make statements about what we could change and come up with a plan for the following months. We looked at an 18-month timeline. George Roter is currently in charge of finishing the document that describes the 18-month plan in more detail. Stay tuned for this!

Unofficial meetings with other teams

During the social parts of the Work Week (the arrival apéro and dinners), we all had a chance to talk to other teams informally and discuss pain points for projects we’re working on outside of Council. We had a lot of conversations with different teams outside our Participation sessions, so we could move forward with several other, non-Reps-related projects as well. To give an example: Michael met Patrick Finch on the Monday of the work week to discuss the Community Tile for the German-speaking community. Thanks to several conversations during the week, the Community Tile went online around a week after the work week.

Conclusion

All in all, it was crucial to sit together and work on future plans. We got a good understanding of what the Participation team has been working on, shared what we have been working on as a Council, and set the next steps for future work. After a hard-working Work Week we are all ready to tackle the next steps for Participation. Let’s help Mozilla move forward with Participation! You can find other blog posts from the Participation Team below.

More blog posts about the Work Week from the Participation Team

Participation at Whistler
Storify – Recap Participation at Whistler


Armen Zambrano: Few mozci releases + reduced memory usage

Wed, 22/07/2015 - 16:21
While I was away, adusca published a few releases of mozci.

From the latest release I want to highlight that we’re replacing the standard json library with ijson, since it solves some memory leak issues we were facing in pulse_actions (bug 1186232).

This was important to fix since our Heroku instance for pulse_actions has an upper limit of 1GB of RAM.
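To make the difference concrete, here is a minimal sketch of the two parsing styles; the file name, JSON shape, and handler are illustrative, not taken from mozci:

    import json
    import ijson  # third-party streaming parser: pip install ijson

    def handle(build):
        print(build)  # stand-in for real per-build processing

    # json.load() decodes the whole document at once, so peak memory
    # grows with the size of the file:
    with open("builds.json") as f:
        for build in json.load(f)["builds"]:
            handle(build)

    # ijson yields items one at a time, keeping memory roughly constant.
    # The "builds.item" prefix matches a toplevel {"builds": [...]} array.
    with open("builds.json") as f:
        for build in ijson.items(f, "builds.item"):
            handle(build)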

Here are the release notes and their highlights:

  • 0.9.0 - Re-factored and cleaned-up part of the modules to help external consumers
  • 0.10.0:
    • --existing-only flag prevents triggering builds that are needed to trigger test jobs
    • Support for pulse_actions functionality
  • 0.10.1 - Fixed KeyError when querying for the request_id
  • 0.11.0 - Added support for using ijson to load information, which decreases our memory usage


Creative Commons License
This work by Zambrano Gasparnian, Armen is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License.

Air Mozilla: Bugzilla Development Meeting

Wed, 22/07/2015 - 16:00

Help define, plan, design, and implement Bugzilla's future!


Botond Ballo: It’s official: the Concepts TS has been voted for publication!

Wed, 22/07/2015 - 16:00

I mentioned in an earlier post that the C++ Concepts Technical Specification would come up for its final publication vote during a committee-wide teleconference on July 20.

That teleconference has taken place, and the outcome was a unanimous vote to publish the TS!

With this vote having passed, the final draft of the TS will be sent to the ISO head offices, which will complete the publication process within a couple of months.

With the TS published, the committee will be on the lookout for feedback from implementers and users, to see how the proposed design weathers real-world codebases and compiler architectures. This will allow the committee to determine whether any design changes need to be made before merging the contents of the TS into the C++ International Standard, thus making the feature all official (and hard to change the design of).

GCC has a substantially complete implementation of the Concepts TS in a branch; if you’re interested in concepts, I encourage you to try it out!
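If you want a quick taste before diving into the TS itself, here is a minimal sketch in the Concepts TS syntax implemented on that branch (note the “concept bool” form; the syntax later merged into the standard differs):

    #include <type_traits>

    // A concept is a named compile-time predicate over template arguments.
    template <typename T>
    concept bool Integral = std::is_integral<T>::value;

    // Using the concept to constrain a function template: calls with
    // non-integral types are rejected with a readable diagnostic.
    template <Integral T>
    T twice(T x) { return x + x; }

    int main() {
        twice(21);     // fine: int satisfies Integral
        // twice(1.5); // error: constraint 'Integral<double>' not satisfied
        return 0;
    }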




The Servo Blog: Servo developer tools overview

Wed, 22/07/2015 - 14:00

Servo is a new web browser engine. It is one of the largest Rust-based projects, but the total Rust code is still dwarfed by the size of the code provided in native C and C++ libraries. This post is an overview of how we have structured our development environment in order to integrate the Cargo build system, with its “many small and distributed dependencies” model, with our need to provide many additional features not often found in smaller Rust-only projects.

Mach

Mach is a python driver program that provides a frontend to Servo’s development environment that both reduces the number of steps required and integrates our various tools into a single frontend harness. Similar to its purpose in the Firefox build, we use it to centralize and simplify the number of commands that a developer has to perform.

mach bootstrap

The steps that mach will handle before issuing a normal cargo build command are:

  • Downloading the correct versions of the cargo and rustc tools. Servo uses many unstable features in Rust, most problematically those that change pretty frequently. We also test the edges of feature compatibility and so are the first ones to notice many changes that did not at first seem as if they would break anyone. Further, we build a custom version of the tools that additionally supports cross-compilation targeting Android (and ARM in the near future). A random local install of the Rust toolchain is pretty unlikely to work with Servo.

  • Updating git submodules. Some of Servo’s dependencies cannot be downloaded as Cargo dependencies because they need to be directly referenced in the build process, and Cargo adds a hash that makes it difficult to locate those files. For such code, we add them as submodules.

mach build & run

The build itself also verifies that the user has explicitly requested either a dev or release build — the Servo dev build is debuggable but quite slow, and it’s not clear which build should be the default.

Additionally, there’s the question of which cargo build to run. Servo has three different “toplevel” Cargo.toml files.

  • components/servo/Cargo.toml builds an executable binary named servo and is used on Linux and OSX. There are also horrible linker hacks in place that will cause an Android-targeted build to instead produce a file named servo that is actually an APK file that can be loaded onto Android devices.

  • ports/gonk/Cargo.toml produces a binary that can run on the Firefox OS Boot2Gecko mobile platform.

  • ports/cef/Cargo.toml produces a shared library that can be loaded within the Chromium Embedding Framework to provide a hostable web rendering engine.

The presence of these three different toplevel binaries and the curious directory structure means that mach also provides a run command that will execute the correct binary with any provided arguments.
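Putting it together, a typical development loop is just a couple of mach invocations (the flags are the explicit dev/release choice described above; the URL is only an example page to load):

    ./mach build --dev               # debuggable build, slow at runtime
    ./mach build --release           # optimized build
    ./mach run http://example.com    # runs the right binary with your arguments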

mach test

Servo has several testing tools that can be executed via mach.

  • mach tidy will verify that there are no trivial syntactic errors in source files. It checks for valid license headers in each file, no tab characters, no trailing whitespace, etc.

  • mach test-ref will run the Servo-specific reference tests. These tests render to images a pair of web pages that implement the same final layout using different CSS features. If the images are not pixel-identical, the test fails.

  • mach test-wpt runs the cross-browser W3C Web Platform Tests, which primarily test DOM features.

  • mach test-css runs the cross-browser CSS WG reference tests, which are a version of the reference tests that are intended to work across many browsers.

  • mach test-unit runs the Rust unit tests embedded in Servo crates. We do not have many of these, except for basic tests of per-crate functionality, as we rely on the WPT and CSS tests for most of our coverage. Philosophically, we prefer to write and upstream a cross-browser test where one does not exist instead of writing a Servo-specific test.

cargo

While the code that we have written for Servo is primarily in Rust, we estimate that at least 2/3 of the code that will run inside of Servo will be written in C/C++, even when we ship. From the SpiderMonkey JavaScript engine to the Skia and Azure/Moz2D graphics pipeline to WebRTC, media extensions, and proprietary video codecs, there is a huge portion of the browser that is integrated and wrapped into Servo, rather than rewritten. For each of these projects, we have a crate that has a build.rs file that performs the custom build steps to produce a static library and then produce a Rust rlib file to link into Servo.
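As a rough sketch of that pattern (the directory, library name, and build command here are placeholders, not Servo's actual build scripts), such a wrapper crate's build.rs drives the native build and then tells Cargo how to link the result:

    // build.rs (a minimal sketch of the wrapper-crate pattern)
    use std::process::Command;

    fn main() {
        // Run the library's own build system to produce a static library.
        let status = Command::new("make")
            .current_dir("vendor/somelib")   // placeholder path
            .status()
            .expect("failed to spawn make");
        assert!(status.success(), "native build failed");

        // Tell Cargo where the static library lives and to link it; the
        // crate's rlib then carries it into the final Servo binary.
        println!("cargo:rustc-link-search=native=vendor/somelib/out");
        println!("cargo:rustc-link-lib=static=somelib");
    }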

The rest of Servo is a significant amount of code (~150k lines of Rust; ~250k if you include autogenerated DOM bindings), but follows the standard conventions of Cargo and Rust as far as producing crates. For the many crates within the Servo repo, we simply have a Cargo.toml file next to a lib.rs that defines the module structure. When we break them out into a separate GitHub repository, though, we follow the convention of a toplevel Cargo.toml file with a src directory that holds all of the Rust code.

Servo's dependency graph

Updating dependencies

Since there are three toplevel Cargo.toml files, there are correspondingly three Cargo.lock files. This configuration makes the already challenging task of updating dependencies even harder. We have added a command, mach update-cargo -p {package} --precise {version}, to handle updates across all three lockfiles. While running this command without any arguments does attempt to upgrade all dependencies to the highest SemVer-compatible versions, in practice that operation is unlikely to work, due to a mixture of:

  • git-only dependencies, which do not have a version number

  • Dependencies with different version constraints on a common dependency, resulting in two copies of a library and conflicting types

  • Hidden Rust compiler version dependencies

Things we’d like to fix in the future

It would be great if there were a single Cargo.toml file at the toplevel of the Servo repo. The current layout is confusing to people familiar with Rust projects, who go looking for a Cargo.toml file and can’t find one.

Cross-compilation to Android with linker hacks feels a bit awkward. We’d like to clean that up, remove the submodule that performs that linker hackery, and have a more clean/consistent feel to our cross-targeted builds.

Managing the dependencies — particularly if there is a cross-repo update like a Rust upgrade — is both a real pain and requires network access in order to clone the dependency that you would like to edit. The proposed cargo clone command would be a huge help here.


Nick Fitzgerald: Proposal For Encoding Source-Level Environment Information Within Source Maps

Wed, 22/07/2015 - 09:00

A month ago, I wrote about how source maps are an insufficient debugging format for the web. I tried to avoid too much gloom and doom, and to focus on the positive aspects. We can extend the source map format with the environment information needed to provide a rich, source-level debugging experience for languages targeting JavaScript. We can do this while maintaining backwards compatibility.

Today, I'm happy to share that I have a draft proposal and a reference implementation for encoding source-level environment information within source maps. It's backwards compatible, compact, and future extensible. It enables JavaScript debuggers to rematerialize source-level scopes and bindings, and locate any given binding's value, even if that binding does not exist in the compiled JavaScript.

I look forward to the future of debugging languages that target JavaScript.

Interested in getting involved? Join the discussion.


Jordan Lund: Mozharness now lives in Gecko

Wed, 22/07/2015 - 01:28
What's changed?

Continuous-integration and release jobs that use Mozharness will now get Mozharness from the Gecko repo that the job is running against.

How?

Whether the job is a build (requiring a full gecko checkout) or a test (requiring only a Firefox/Fennec/Thunderbird/B2G binary), automation will first grab a copy of Mozharness from the gecko tree, even before checking out the rest of the tree, effectively minimizing changes to our current infrastructure.

This is thanks to a new relengapi endpoint, Archiver, and hg.mozilla.org’s subdirectory archiving abilities. Essentially, Archiver gets a tarball of Mozharness from a target gecko repo, rev, and sub-repo directory, and uploads it to Amazon’s S3.

What’s nice about Archiver is that it is not restricted to just grabbing Mozharness. You could, for example, put https://hg.mozilla.org/build-tools in the Gecko tree or, improving on our tests.zip model, simply grab subdirectories from within the testing/* part of the tree and request them on a suite-by-suite basis.

What does this mean for you?

it depends. if you are...

1) developing on Mozharness

You will need to check out gecko, and patches will now land like any other gecko patch: 1) land on a development tree-branch (e.g. mozilla-inbound), 2) ride the trains. This now means:

  • we can have tree specific configs and even scripts
  • the Mozharness pinning abstraction layer is no longer needed
  • we have more deterministic builds and tests as the jobs are tied to a specific Gecko repo + rev
  • there is more transparency on how automation runs continuous integration jobs
  • there should be considerably less strain on hg.mozilla.org as we no longer rm + clone mozharness for every job
  • development tree-branches (inbound, fx-team, etc.) will behave like the Mozharness default branch, while mozilla-central will act as the production branch.

This also means:

  • Mozharness patches that require tree wide changes will need to be uplifted across trees
  • development on Mozharness will require a Gecko tree checkout
  • Github + Travis tests will not be run on each Mozharness change (mh tests will have to be run locally for now)
  • Mozharness changes will not be documented or visible (outside of diffing gecko revs)

2) just needing to deploy Mozharness or get a copy of it without gecko

As the Archiver usage docs linked above describe, you could hit the API directly, but I recommend using the client that buildbot uses. The client will wait until the API call is complete, download the archive from the response’s location, and unpack it to a specified destination.

Let's take a look at that in action: say you want to download and unpack a copy of mozharness based on mozilla-beta at 93c0c5e4ec30 to some destination.

python archiver_client.py mozharness --repo releases/mozilla-beta --rev 93c0c5e4ec30 --destination /home/jlund/downloads/mozharness

Note: if that is the first time Archiver has been polled for that repo + rev, it might take a few seconds, as it has to download Mozharness from hgmo and then upload it to S3. Subsequent calls will happen near instantly.

Note 2: if your --destination path already exists with a copy of Mozharness or something else, the client won’t rm that path; it will merge into it (just like unpacking a tarball behaves).

3) a Release Engineering service that is still using hg.mozilla.org/build/mozharness

Not all Mozharness scripts are used for continuous integration / release jobs. There are a number of Releng services that are based on Mozharness: e.g. Bumper, vcs-sync, and merge_day. As these services transition to using Archiver, they will continue to use hgmo/build/mozharness as the Repository of Record (RoR).

If certain services cannot use gecko-based Mozharness, we can fork Mozharness and set up a separate repo. That will of course mean such services won’t receive upstream changes from the gecko copy, so we should avoid this if possible.

If you are an owner or major contributor to any of these releng services, we should meet and talk about such a transition. Archiver and its client should make deployments pretty painless in most cases.

Have something that may benefit from Archiver?

If you want to move something into a larger repository or be able to pull something out of such a repository for lightweight deployments, feel free to chat to me about Archiver and Relengapi.

As always, please leave your questions, comments, and concerns below.


QMO: Help us Triage Firefox Bugs – Introducing the Bug Triage tool

Wed, 22/07/2015 - 01:09

Interested in helping with some Firefox Bug Triage?  We have a new experimental tool that makes it really easy to help on a daily basis.

Please visit our new triage tool and sign up to triage some bugs. If you can give us a few minutes of your day, you can help Mozilla move faster!


Air Mozilla: July Privacy Lab - Crypto Wars with guest speakers from CDT and EFF

Wed, 22/07/2015 - 01:00

July’s Privacy Lab will include guest speakers from CDT and EFF to talk about backdoors and crypto wars.



Ben Hearsum: Mozilla Software Release GPG Key Transition

Tue, 21/07/2015 - 20:45

Late last week we discovered the expiration of the GPG key that we use to sign Firefox, Fennec, and Thunderbird nightly builds and releases. We had been aware that this was coming up, but we unfortunately missed our deadline to renew it. This caused failures in many of our automated nightly builds, so it was quickly noticed and acted upon.

Our new GPG key is as follows, and available on keyservers such as gpg.mozilla.org and pgp.mit.edu:

pub   4096R/0x61B7B526D98F0353 2015-07-17
      Key fingerprint = 14F2 6682 D091 6CDD 81E3 7B6D 61B7 B526 D98F 0353
uid   Mozilla Software Releases
sub   4096R/0x1C69C4E55E9905DB 2015-07-17 [expires: 2017-07-16]

The new primary key is signed by many Mozillians, the old master key, as well as our OpSec team's GPG key. Nightlies and releases will now be signed with the subkey (0x1C69C4E55E9905DB), and a new one will be generated from the same primary key before this one expires. This means that you can validate Firefox releases with the primary public key in perpetuity.
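As a reminder, verifying a download against the new key takes two steps with stock gpg; the key ID below comes from the block above, while the file names are only placeholders:

    # fetch the new public key from a keyserver
    gpg --keyserver pgp.mit.edu --recv-keys 0x61B7B526D98F0353

    # check a build against its detached signature
    gpg --verify firefox-40.0.tar.bz2.asc firefox-40.0.tar.bz2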

We are investigating a few options to make sure key renewal happens without delay in the future.


Mozilla IT & Operations: Troubleshooting the Internet

Tue, 21/07/2015 - 17:36
Introduction

If you’re working from home, a coffee shop, or even an office and are experiencing “issues with the Internet”, this blog post might be useful to you.
We’re going to go through some basic troubleshooting and tips to help solve your problem, or at worst to gather data so your support team can investigate.

Something to keep in mind while reading is that everything is packets. Whether you are downloading a webpage, an image, or an email, or video-conferencing, you have to imagine your computer splitting everything into tiny numbered packets and sending them to their destination through pipes, where they will be reassembled in order.
Network issues are those little packets not getting properly from A to Z, because of full pipes, bugs, overloaded computers, external perturbations, and a LOT of other possible reasons.

Easy tests

We can start by running some easy online tests. Even if they are not 100% accurate, you can run them in a few clicks from your browser (or even phone) and they can point you in the right direction.

Speedtest is a good example. Hit the big button and wait; don’t run any downloads or bandwidth-heavy applications in the meantime. It’s best to know what a “normal” value is for your connection, so you can compare the two.

As some sneaky providers can prioritize connections to the famous Speedtest, you can also try a lesser-known website like ping.online.net: start the download of a large file and check how fast it goes.

Note that the connection is shared between all the connected users, so one user can easily monopolize the bandwidth and clog the pipe for everyone else. This is less likely to happen in our offices, where we try to have pipes large enough for everyone, but it can happen in public places. In that case there is not much you can do.

Next is a GREAT one: Netalyzr. You will need Java, but it’s worth it (first time I have ever said that). There is also an Android app, but it’s better to run the test from the device experiencing the issue (though it’s also interesting to see how good your mobile/data “Internet” really is). Netalyzr will run TONS of tests (DNS, blocked ports, latency, etc.) and give you a report that is quite easy to understand or share with a helpdesk. See Mozilla’s Paris office.

Screenshot of Netalyzr result

Some ISPs are starting to roll out a new version of the Internet protocol, called IPv6 (finally!). As it’s “new” for them, there can be some issues. To check for that, run the test at Test-ipv6.

Basic connectivity

If the previous tests show that something is wrong, or you still have a doubt, these tools are made to test basic connectivity to the Internet, as well as to check the path and basic health of each network segment from you to a distant server.

“ping” sends a probe each second to a target and reports how much time it takes (round trip) to reach it. For an indication, a typical time between Europe and the US west coast is about ~160ms, US east coast to US west coast ~90ms, Taipei to Europe ~300ms. If yours is much higher than that, you might indeed be seeing some “laggish/slow Internet”.

Screenshot of pings to Wikipedia

“traceroute” is a bit different: it does the same, but also shows you the intermediate hops between source and destination.

The best in this domain is called “mtr”, a great combination of the previous two: for each hop, it runs a continuous ping. Add --report so you can easily copy and paste the result if you need to share it. Some higher latency or packet loss for a hop or two is usually not an issue, because network devices are not made to reply to those tests; they only do so when not too busy with their main function, forwarding packets in one direction or another. A good indicator of a real issue is many nodes (and especially your target) showing packet loss or higher latency.
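Concretely, the three commands look like this in a terminal (the target host is just an example):

    ping -c 5 www.mozilla.org      # send 5 probes, then print statistics
    traceroute www.mozilla.org     # list each hop between you and the target
    mtr --report www.mozilla.org   # per-hop continuous ping, paste-friendly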

Screenshot of an mtr to www.mozilla.org

If the numbers are high on the first hop, something is wrong between your computer and your home/office/bar’s Internet gateway.

If the issues appear “around the middle”, it’s a problem with your ISP or your ISP’s ISP, and the only thing you can do is call their support to complain, or change ISP when possible. That is what happened with Netflix in the US.

If the numbers are high near the very end, it’s probably an issue with the service you’re testing; try to contact them.

Wireless

An easy one, when possible: if you are on wireless, try to switch to a wired network and re-run the tests mentioned above. It might be surprising, but wireless isn’t magic :)

Most home and small-shop wifi routers (labeled “BGN”, or 2.4GHz) can only use 1 out of 3 usable frequencies/channels to talk to their clients, and clients have to wait for their turn to talk.
So imagine your neighbor’s wifi is on the same frequency. Even if your wifi distinguishes your packets from your neighbor’s, they still have to share the channel. If you have more than 2 neighbors (wireless networks visible in the wifi list), you will have a non-optimal wireless experience, and it degrades quickly as more networks appear.

To solve or mitigate this, you can use an app like “Wifi Analyzer” on Android. On the X axis are the frequencies (only channels 1, 6, and 11 are really usable, as they don’t overlap); on the Y axis is how noisy they are. Locate yours, and if another channel is less busy, go into your wireless router’s settings and move to that one.

Screenshot of wifi analyser

If they are all busy, the last option is to buy a router that supports a more recent standard, labeled “ABGN” (or now “AC”), on 5GHz. Most high-end phones and laptops support it as well. That range has around 20 usable channels instead of 3.

Another common issue in public places is when some people can’t connect at all while others have no issue. This is usually due to a mechanism called DHCP, which allocates an address (think of it as a slot) to your device. Default settings are made for small networks and remember slots for a long time, even when they are no longer used. It’s usually possible to reduce the “remember” (lease) time and/or enlarge the pool of slots in the router’s settings.
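As an example, on routers whose DHCP server is dnsmasq (an assumption; where this setting lives varies by firmware), both knobs sit on one line:

    # /etc/dnsmasq.conf: widen the address pool and shorten the lease
    # ("remember") time to two hours
    dhcp-range=192.168.1.50,192.168.1.250,2h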

Wired

Wired networks are less complicated to troubleshoot. You can start by unplugging everything else connected to your router and see if things get better; if not, the issue might come from that box. Also be careful not to create physical loops in your network (like plugging 2 cables between your router and a switch).


Air Mozilla: Martes mozilleros

Tue, 21/07/2015 - 17:00

A biweekly meeting to talk about the state of Mozilla, the community, and its projects.



About:Community: Hacking Tech Evangelism in Bangalore: Q & A With Kaustav Das Modak

Tue, 21/07/2015 - 16:38

Back in May, we completed the pilot run of a new program. Mozilla Tech Speakers is designed to empower and support technical evangelists around the world who are serving their communities as speakers and trainers, presenting Mozilla and open web technologies at conferences, workshops, and events. We’ve already posted about the first phase of the program and shared examples of talks and activities from our first cohort of participants.

Not long after that post went up, I learned that Mozilla Rep and Mozillian Tech Speaker Kaustav Das Modak was organizing a Tech Evangelism Workshop with a group of volunteers from Mozilla India’s Bangalore community. Their goal: Work together over a weekend to build confidence and communication skills for technical evangelism. Have each participant finish the weekend with a new presentation and accompanying blog post or article ready to go. The reported results were impressive.

Photo by Kaustav Das Modak

I invited Kaustav to share his activity and its outcome with more Mozillians, who might want to replicate a version of this event in their own communities. The basics apply for all presenters, so you don’t have to be a technologist to find value. Here’s what I learned from Kaustav (in Q & A format):

1) Kaustav, tell us a little about who you are and the work you do as a Mozilla contributor and technical evangelist.

I’m currently working on my start-up, Applait, where we are building a unified layer for real-time communications over the internet.

I’ve been publicly involved with Mozilla for a little over 2 years now. Meeting people all over the world and working on open technologies has been my motivation to volunteer with Mozilla.

Since my childhood I’ve always enjoyed sharing what I know with everyone else. My inspiration to pursue technical evangelism as a profession, and then as a passion, came from attending a workshop on technical evangelism conducted by Christian Heilmann, Robert Nyman and Ali Spivak in Bangalore in 2013.

Photo by Kaustav Das Modak.

I was involved with the Mobilizers team during the Firefox OS launch, and I try to coordinate community evangelism for Mozilla in India, whenever I can.

2) What inspired you to create this event?

I’ve been planning for over a year to conduct workshops to help fellow Mozillians get more confident in presenting themselves. I have helped folks individually all along, but the Tech Speakers pilot programme finally made me get over the lethargy and actually start the event. I plan to make this into a series, generating a ton of useful content in the process.

3) Can you share your thinking about the agenda and how you designed it?

The core goal of the Tech Evangelism Workshop is to help participants get better at what they are already good at. Participants are asked to choose a topic in which they think they have sufficient knowledge. Then, through the rest of the workshop, they practice building content around that topic – they give 2 presentations, write a talk abstract and an article.

By the end of the workshop, they realize that they already had the capability within them. The true success of this workshop is making participants realize that all they needed was quality research, better practice, and letting go of the shyness within.

4) What advice would you offer for other Mozillians who would like to organize a training/workshop like this to prepare presentations and practice public speaking? Do you have specific advice for technical presenters?

One thing that has always helped is to do your homework. _Nothing_ beats healthy research. Research your audience and respect cultural differences.

5) What are you planning next? What advice do you have for other Mozillians who want to organize a workshop focused on technical evangelism skills?

I’m already planning for a second run of this workshop. I’m also eager to help any Mozillian who needs help individually. It’s okay to ping me anytime on IRC, my nick is kaustavdm.


Byron Jones: happy bmo push day!

Tue, 21/07/2015 - 10:13

the following changes have been pushed to bugzilla.mozilla.org:

  • [1184454] unable to create new products
  • [1184456] cannot create a new product with ‘detect’ as the default platform
  • [1183899] Restricting access of bugs submitted from the FSA Budget Request form
  • [1184755] Update docker image runtests.sh to clone bmo git repo with full history instead of --depth=1
  • [1180571] remove the ability to search attachment data
  • [1183524] api bustage caused by bug 1173442
  • [1183892] Bugzilla disables browser context menu after showing its username left-click menu
  • [1184984] Current Selenium tests are failing due to changes made by bug 1173442
  • [1184982] The cpanfile generated by checksetup results in an unsuccessful mod_perl install
  • [1185440] activity bound to comments which are default-hidden is not hidden by default
  • [1185455] Remove use of non-standard flag argument of String.prototype.replace in inline-history.js.
  • [1177497] Backport upstreams 5.0 rST docs to BMO and make publicly available at https://bmo.readthedocs.org
  • [1184001] deliver error report to sentry via cron instead of immediately
  • [1180572] create attachment_storage parameter
  • [1185852] sentry.pl should exit early if there aren’t any reports to send

to improve security bugzilla.mozilla.org now serves attachments from a different domain – bmoattachments.org – instead of from a subdomain of bugzilla.mozilla.org.  all existing links should continue to work, and will redirect to a bmoattachments.org url.

discuss these changes on mozilla.tools.bmo.


Filed under: bmo, mozilla

Tantek Çelik: #IndieWebCamp 2014 Year in Review — This Is A Movement

Tue, 21/07/2015 - 08:59
Foreword

We’re more than halfway through 2015 and I’m only now posting a summary of IndieWebCamp’s 2014. We lost someone quite special in the community last year. Despite incredible growth & progress in the IndieWeb community, the emotions around that made it hard to write & finish this post, harder still to figure out how to recognize our loss and not let it overshadow all the amazing work that everyone has done in the community. Though six months later than intended, I have found plenty to recognize, celebrate, and be proud of the IndieWebCamp community for our accomplishments in 2014. I hope you do too. — Tantek

2014 was a breakthrough year for IndieWebCamp and the IndieWeb movement. Beyond our technical achievements in creating, building, deploying, and using simple formats & protocols on our personal sites, we organized record numbers of IndieWebCamps and Homebrew Website Club meetups. We gave talks to audiences of thousands, and the press started covering us in earnest. We saw the launch of Known and its hosted service Withknown, a user-friendly mobile-web ready solution for anyone to get on the indieweb.

With our increasing visibility and popularity, we encountered perhaps the inevitable re-use of our community terms or similar terms to mean other things, and subsequent online confusion. As expected we also saw the shutdowns of many more silos. We lost a very special member of the community. We kept moving forward and finished the year with the first of its kind virtual online IndieWebCamp, and verbal commitments to each other to launch personal site features for the new year.

Table of Contents

A lot happened in 2014. Enough for a table of contents.

  1. The IndieWeb Movement
  2. Record Numbers
    1. IndieWebCamps
    2. Homebrew Website Clubs
    3. Press
    4. Talks
  3. Losses and Challenges
    1. Losing One Of Our Own
    2. The Web We Lost 2014
    3. “Indie” Term Re-use
  4. Technologies
  5. Services
  6. Community Resources
  7. Summary And Looking Forward
  8. New Year Commitments

Let’s get started.

The IndieWeb Movement

Anyone can call something a movement, but that doesn’t make it so.
A tweet is not a movement.
A blog post is not a movement.
A single-page-site is not a movement.
A manifesto is not a movement.

This is a movement. People are a movement.

2014 IndieWeb movement grid of faces

This is everyone who participated in one or more IndieWebCamps during 2014. Real people (with the exception of one cat), passionately using their own personal websites to express themselves on the web, creating, sharing, and collaborating with each other to grow the independent web.

Click / tap the image to go to a fully interactive version on the IndieWebCamp wiki, with every person (but 3!) linked to their personal site.

Record Numbers

The 100+ participants above participated in six IndieWebCamps in San Francisco, New York City, Portland Oregon, Berlin, Brighton, Cambridge MA, and Online. Twice as many as the previous year:

IndieWebCamps by year

Handcrafted ASCII graph (took Tufte class twice, not his fault):

                      WWW
                      MIT
                      NYC
               LA     SF
        UK     UK     UK
PDX     PDX    PDX    PDX/NYC/Berlin
————    ————   ————   ———————————————
2011    2012   2013   2014

You can see summaries and links to all of them here: IndieWebCamps

Beyond double the number, 2014 saw innovation in the very format of IndieWebCamps with a simultaneous three location annual main event, as well as the first IndieWebCamp Online. Thanks to David Shanske for organizing and leading the charge with IndieWebCamp Online using IRC and Google Hangouts.

Homebrew Website Clubs

In addition, 2014 was the first full year of Homebrew Website Club meetups. 27 days in total across several cities: San Francisco, Portland, Chicago, Minneapolis, New York, London, Paris.

Press

2014 had breakthrough press coverage of IndieWebCamp and the IndieWeb as a whole.

See more articles about the IndieWeb in 2014.

Talks

2014 had a record number of IndieWeb-related talks given at conferences by community members, ranging from introductory to technical.

For these and many more, check out the videos about the IndieWeb page.

Losses and Challenges

The IndieWeb community went through some minor growing pains in 2014, and tragically lost a key community member. There was also the continued series of site shutdowns, some of which members were able to export from, but all of which broke the web.

Losing One Of Our Own

Mid last year we lost IndieWeb community member Chloe Weil, and we miss her very much.

Chloe participated in the very first IndieWebCamp 2011, as a shy apprentice, but learned quickly & eagerly, and put her many creative skills to work building & growing her own personal web presence. Here she is at that event, third from the left edge:

Photo of IndieWebCamp 2011 participants

She built her own personal-site-based replacement for tweeting. She participated in both the first IndieWebCamp NYC and the subsequent main IndieWebCamp 2014 East at the NYC location. Here she is again, front row and confident:

IndieWebCamp 2014 East club photo

She captioned this photo:

“Your high school’s yearbook club just graduated and knows HTML”

Several people in the community wrote posts in memory of Chloe.

If you’ve written your own blog post in memory of Chloe, please let me know so I may link to it as well.

The Web We Lost 2014

We saw many silos go offline, taking millions of permalinks with them. Here are a few of the notable clusters of sites the web lost:

Acquishutdowns

The most common shutdowns were acquisitions or acquihires:

  • Yahoo shutdowns: Ptch.com, Donna, Vizify
  • Skype shutdown: Qik
  • eBay shutdown: Svpply
  • Ancestry.com shutdown: MyFamily.com
  • Vox Media acquired the staff & technology of Editorially, whose founders subsequently shut it down

Short Notice Shutdowns

The second most frequent shutdowns came suddenly, or nearly suddenly, unexpectedly, and sometimes with a complete loss of content (without any opportunity to export it).

  • Spreadly - site went offline without any notice
  • Fotopedia - 10 days notice and "all photos and data will be permanently deleted"
  • Justin.tv - two weeks notice and all videos deleted
  • Codespaces - most content deleted by vandals, site shutdown rather than attempt recovery.

The Cloud Is A Lie

So-called "cloud" services have been heralded as the new most reliable, scalable, available thing for storage etc., and yet last year:

  • Ubuntu One cloud sync service shut down with only two months notice.

Breaking The Web

All these shutdowns break the web in some way or other. However there are particularly egregious examples of breaking the web, such as when third-party link-shorteners and identity providers are shutdown. In 2014 we lost another one of each:

  • s.tt link shortener, shut down by its parent company and site Repost, which itself shut down as well
  • myOpenID.com, a popular OpenID provider, also shut down.

Losing A Classic

Lastly we lost a classic site in 2014:

  • 43things.com - after 10 years of service, the site’s owners decided to shut it down.

See: IndieWebCamp: site-deaths 2014 for more.

“Indie” Term Re-use

Last and least of our challenges, but worth noting for the consternation it’s caused (at least on Twitter, and perhaps that’s telling), the overloading of the term and prefix “indie” has led to some confusion.

When I first used the phrase “indie web” to refer specifically to independents using their personal websites for their online identity and content (instead of large corporate silos like Facebook & Twitter, or even group sites running open source like Diaspora), I knew that the prefix “indie” was already in heavy use across industries, with different meanings.

When Aaron Parecki and I deliberately chose the term “IndieWeb” or phrase “Indie Web” to mark a difference in focus from the “Federated Social Web”, and then co-founded IndieWebCamp with Amber Case & Crystal Beasley, we viewed our usage of “Indie” as deliberately continuing in the same spirit and theme as earlier "Independent Web" efforts (such as the early 2000s "Independents Day" campaign), and as complementary to “Indie” efforts in other fields.

2014 saw the launch or promotion of other things labeled “indie” on the web (and at least somewhat related to it), which had little or nothing to do with the “IndieWeb” and was a source of repeated confusion (and continues to be).

Ind.ie Confusion

The privately held startup “ind.ie”, bootstrapped & crowdfunded, develops various "independent technology" or "indietech" efforts which could easily be assumed to overlap with "indieweb", yet does not relate in substance to the IndieWeb at all.

There were numerous instances of people confusing "ind.ie" and the "IndieWeb" in their posts, and criticism of one would inevitably lead to errant conflation with and criticism of the other. It got so bad that "ind.ie" themselves posted a blog post:

Are you the same as IndieWeb?

No. IndieWeb is a separate movement and yet we have some overlap of goals.

The IndieWeb community similarly documented as much on the wiki: ind.ie is not IndieWeb nor IndieWebCamp. Others have also noted that naming something even more similarly e.g. "indienet" will only create more misunderstandings (nevermind that IndieWeb itself is peer-to-peer/distributed).

Despite this effort at proactive documentation, confusion has continued, though now it's typically quickly followed up by a clarification that the two are not the same, and link to one or both of the above.

indie.vc not IndieWeb-specific

A new VC firm launched in 2014 called "indie.vc". Due to their name and web presence, they too were inevitably confused with “IndieWeb”, or people assumed that they were some sort of IndieWeb investment fund. Neither is true.

In the future it is possible that indie.vc will fund an IndieWeb startup, but until that day comes, they are disjoint.

IndieWeb Technologies

Despite such challenges, the IndieWeb community proposed, discussed, specified, built, and interoperably deployed the following indieweb technologies in 2014. These IndieWeb innovations in the past year were nothing short of web technology breakthroughs.

And the best part: all of the following are 100% free as in freedom, creative commons zero (CC0) licensed, openly documented, and real: interoperably shipping, often with multiple open source implementations.

This is technology by independents declaring independence. You could even call them “indietech” if you thought they needed another buzzword, which they don’t.

In alphabetical order:

  • fragmention — a way to use a URL to link to and cite individual words or phrases of a document (see the example after this list).
  • h-feed — while previously proposed on microformats.org, in 2014 the indieweb community adopted h-feed as the primary DRY way to markup a feed or stream on an HTML page, published multiple indieweb sites with it, as well as multiple indieweb readers consuming it, consequently upgrading it to an official microformats.org draft.
  • indie-config for webactions — indie-config is a set of client & server libraries to enable seamless webactions across sites (invented, and implemented interoperably at IndieWebCamp Brighton 2014)
  • marginalia — within a few months of the invention of fragmentions, community members realized they could post indie replies to specific paragraphs or any phrase of a post, and the receiving post could display them as comments in the margins, thus inventing distributed marginalia, a feature previously only available in proprietary text editors like Word, Google Docs, or the Medium silo, and not actually distributed, aside from emailing around Word documents.
  • Micropub — a standard API for publishing and updating posts on indieweb sites (conceived in 2013, first interoperably implemented in 2014; see the example request after this list)
  • person-tag — a special kind of tag on a post or in post content that refers to a specific person by URL (and name) rather than just a word or phrase. Only publishing examples in 2014 (subsequent interop in 2015).
  • Vouch — a webmention protocol extension to prevent spam (interop at 2014/Cambridge)
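To make two of these concrete: a fragmention just appends a double hash and the quoted words to an ordinary permalink, and a Micropub client creates a post with one authenticated form-encoded POST. The domain and token below are placeholders:

    https://example.com/2014/some-post##a+few+words

A consuming browser or script scrolls to, and can highlight, the first occurrence of “a few words” in that document. A minimal Micropub note creation looks like:

    POST /micropub HTTP/1.1
    Host: example.com
    Authorization: Bearer xxxxxxxx
    Content-Type: application/x-www-form-urlencoded

    h=entry&content=Hello+indieweb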

In addition to all those groundbreaking technologies, IndieWeb community members continued to evolve what types of content they posted on their own sites, documenting the paths as they paved them with permalinks on their own websites — all of the following have documented real world public web publishing examples (at least one, typically many more) on their very pages in contrast to the more aspirational approaches taken by other current attempts in this space (e.g. ActivityStreams, and the since defunct OpenSocial)

  • collection — a type of post that explicitly lists/embeds multiple other posts chosen by the author
  • edit — a special type of reply that indicates a set of suggested changes to a post
  • exercise — a broad post type that represents some form of physical activity, i.e. quantified self, or in particular:
  • food — a new post type that represents eating or drinking
  • invitation — a new post type for sending one person an invitation to someone else’s posted indie event. Also supported by Bridgy as a way of backfeeding invitations made on Facebook POSSE copies of event posts.
  • quotation — a type of post that is primarily a subset of the contents of another post usually with a citation.
  • sleep — similar to exercise this post type is for tracking when, how deeply, and how long you sleep.
  • travel — a post type about plans to change locations in the future.

IndieWeb Services

Beyond technologies, several indieweb services were built, deployed, and significantly improved by the community.

IndieWeb Community Resources

There’s a lot to technology development beyond the technology itself. Over 1000 new pages were created in 2014, documenting everything from concepts to brainstorms, designs, and everything else indieweb-related that the community came up with.

The IndieWebCamp wiki is now the pre-eminent reference for all things Independent Web.

If you have a question about something "independent" and "web", you're very likely to find the answer at https://indiewebcamp.com/

Here are some of the top such resources created in 2014:

  • archive — the UI Pattern of providing archives of your posts that users can navigate
  • communication — how to create a communication / contact page on your own indieweb site, with clear one-click buttons for people to contact you in the ways you desire and are capable of being contacted
  • disclosure — how to proactively disclose some aspect about a site that the site owner wants the user to explicitly be aware of
  • facepile — the UI pattern of providing a set of small face icons as a summary of people, e.g. that like a post, or have RSVPd to an event
  • file-storage — why, how, and examples of the common IndieWeb practice of storing your data in flat files (instead of the customary webdev habit of using a database)
  • follow & unfollow — documentation and implementation of the concept of (un)following people and posts
  • FreeMyOAuth — a one stop page to quickly access the "what have I authorized on what services" lists so you can de-authorize any apps you no longer use or don't recognize
  • generations — perhaps one of the most important pages created in 2014. generations showed for the first time an overview of how the IndieWeb approach of engaging development leaders first (e.g. by focusing on selfdogfooding), then journalists & bloggers, etc. provides a rational and steady growth path for the indieweb to eventually reach anyone who desires an independent presence on the web that they own and control.
  • HTTPS — best step-by-step documentation for how to setup HTTPS on an independent site, with choices, levels to achieve, and real world examples
  • mobile — a great summary of mobile first and other mobile specific design considerations, how tos, etc. for any indieweb site
  • mute — the ability to skip seeing someone's posts, while still following them in general
  • notification — research and analysis of both push notifications and notification pages across various applications and silos
  • onboarding — the user experience of a first time user of a site, service, or product, who is looking to sign-up or otherwise get started using it.
  • payment — how to create a payment page on your own indieweb site, and how to create the links to various payment services for your readers to click and pay you directly
  • scope — summary of what are OAuth scopes, examples of them used by IndieWeb apps, sites, and silos.
  • this-week-in-the-indieweb — a weekly digest of activities of the IndieWebCamp community, including a summary of wiki edits for the week
  • URL design — a collection of analysis and best practices for designing human-friendly and robust URLs, e.g. for permalinks
  • wikifying — simple steps for new community members to start engaging on the wiki

Summary And Looking Forward

2014 was a year of incredible gains, and yet, a very sad loss for the community. In many ways I think a lot of us are still coping, reflecting. But we continue, day to day to grow and improve the indieweb, as I think Chloe would have wanted us to, as she herself did.

By the end of 2014, community members had organized IndieWebCamps in more cities than ever before and, similarly, had started more local chapters of the Homebrew Website Club as well.

I’m grateful for each and every person I’ve met and worked with in the community. Everybody brings their own perspective, their own wants and desires for their own website. As a community, we can best help people by channeling their desires of what should be done into what they should do on their own website for themselves, building upon the work of the community, and then by connecting amongst our sites, and in person, to motivate each other to do even more.

That's exactly what we did at the end of the year.

New Year Commitments

At the last Homebrew Website Club meetup of the year, on 2014-12-17, we decided to make verbal commitments to each other about what we wanted to create, launch, and start using on our own sites by the start of the new year.

As you might guess, we did pretty well with those commitments, but that's a subject for another post.

If you’ve gotten this far, congratulations; this was a long post, and long overdue. You’re clearly interested, so you should come by for more:

Independence on the web is within your grasp, and there’s a whole community just waiting to help you take the next steps. The first step is up to you.

Thanks to reviews and feedback from fellow IndieWeb Community members Kevin Marks, Kartik Prabhu, Ryan Barrett, and Shane Hudson.

Epilogue

I wrote most of this post incrementally on the IndieWebCamp wiki with a bunch of contributions from the IndieWeb community (citations, images etc.). Thus the text content of this blog post is CC0 licensed for you to re-use as you wish and preferably quote, cite, and link. Please credit the “IndieWeb Community”. Thank you for your consideration. — Tantek


Kent James: Is Mozilla an Open Source Project?

Tue, 21/07/2015 - 07:13

At the 2015 Community Leadership Summit, keynote speaker Henrik Ingo asked what he intended to be a trick question:

Everybody knows that Redhat is the largest open source company by revenue, with 1.5 billion dollars per year in revenue. What is the second largest open source company?

Community Leadership Summit 2015

It took a while before someone came up with the correct answer: Mozilla! Why is this a trick question? Because people don’t view Mozilla as an open source software company. Even in an open-source-friendly crowd, people need to be reminded that Mozilla is open source, and not another Google or Apple. The “open source” brand is getting ever more powerful, with hot new technologies like OpenStack, Docker, and node.js adopting the foundation-owned open source model, while Mozilla seems to be drifting away from that image.

The main point of Henrik’s talk was that projects that are “open-source” while dominated by a single company show limited growth potential when compared to projects where there is an independent foundation without any single dominating company. Mozilla is an odd model, with a company that is dominated by a foundation (at least in theory). It seems though that these days, what has emerged is a foundation that is dominated by a company, exactly the model that Henrik claims limits growth. As that company gets more and more “professional” (acting like a company), it gets harder to perceive Mozilla to be anything other than another big tech company.

Something has changed at Mozilla that I don’t really understand. Not that I have any inside knowledge (Thunderbird folks like me don’t get invited to large Mozilla gatherings any more), but is this really the brand image that Mozilla wants? I doubt it. Hopefully people smarter than me can figure out how to fix it, as there is still something about Mozilla that many of us love.

The Servo Blog: Environment

di, 21/07/2015 - 02:00
Servo developer tools overview

Servo is a new web browser engine. It is one of the largest Rust-based projects, but the total Rust code is still dwarfed by the size of the code provided in native C and C++ libraries. This post is an overview of how we have structured our development environment in order to integrate the Cargo build system, with its “many small and distributed dependencies” model, with our need to provide many additional features not often found in smaller Rust-only projects.

Mach

Mach is a Python driver program that provides a frontend to Servo’s development environment, both reducing the number of steps required and integrating our various tools into a single harness. Similar to its purpose in the Firefox build, we use it to centralize and simplify the commands that a developer has to perform.

mach bootstrap

The steps that mach will handle before issuing a normal cargo build command are:

  • Downloading the correct versions of the cargo and rustc tools. Servo uses many unstable features in Rust, most problematically those that change pretty frequently. We also test the edges of feature compatibility and so are the first ones to notice many changes that did not at first seem as if they would break anyone. Further, we build a custom version of the tools that additionally supports cross-compilation targeting Android (and ARM in the near future). A random local install of the Rust toolchain is pretty unlikely to work with Servo.

  • Updating git submodules. Some of Servo’s dependencies cannot be downloaded as Cargo dependencies because they need to be directly referenced in the build process, and Cargo adds a hash that makes it difficult to locate those files. For such code, we add them as submodules.

mach build & run

The build itself also verifies that the user has explicitly requested either a dev or release build — the Servo dev build is debuggable but quite slow, and it’s not clear which build should be the default.

Additionally, there’s the question of which cargo build to run. Servo has three different “toplevel” Cargo.toml files:

  • components/servo/Cargo.toml is used to build an executable binary named servo and is used on Linux and OS X. There are also horrible linker hacks in place that will cause an Android-targeted build to instead produce a file named servo that is actually an APK file that can be loaded onto Android devices.

  • ports/gonk/Cargo.toml produces a binary that can run on the Firefox OS Boot2Gecko mobile platform.

  • ports/cef/Cargo.toml produces a shared library that can be loaded within the Chromium Embedded Framework to provide a hostable web rendering engine.

The presence of these three different toplevel targets and the curious directory structure means that mach also provides a run command that will execute the correct binary with any provided arguments.
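As a rough illustration of the choice that run makes, here is a hypothetical sketch in Rust; the function, target names, and artifact paths are illustrative only, not Servo’s actual logic:

```rust
// Hypothetical sketch of the selection `mach run` performs; target
// names and artifact paths are illustrative, not Servo's real layout.
fn artifact_for_target(target: &str, release: bool) -> String {
    let profile = if release { "release" } else { "debug" };
    match target {
        // Firefox OS build, from ports/gonk/Cargo.toml
        "gonk" => format!("ports/gonk/target/{}/servo", profile),
        // Embeddable library build, from ports/cef/Cargo.toml
        "cef" => format!("ports/cef/target/{}/libembedding.so", profile),
        // Default desktop (or APK-producing Android) build
        _ => format!("components/servo/target/{}/servo", profile),
    }
}

fn main() {
    // Example: a dev build for the default desktop target.
    println!("{}", artifact_for_target("desktop", false));
}
```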

mach test

Servo has several testing tools that can be executed via mach.

  • mach tidy will verify that there are no trivial syntactic errors in source files. It checks for valid license headers in each file, no tab characters, no trailing whitespace, etc.

  • mach test-ref will run the Servo-specific reference tests. These tests render to images a pair of web pages that implement the same final layout using different CSS features. If the images are not pixel-identical, the test fails.

  • mach test-wpt runs the cross-browser W3C Web Platform Tests, which primarily test DOM features.

  • mach test-css runs the cross-browser CSS WG reference tests, which are a version of the reference tests that are intended to work across many browsers.

  • mach test-unit runs the Rust unit tests embedded in Servo crates. We do not have many of these, except for basic tests of per-crate functionality, as we rely on the WPT and CSS tests for most of our coverage. Philosophically, we prefer to write and upstream a cross-browser test where one does not exist instead of writing a Servo-specific test.
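For readers unfamiliar with the convention, the per-crate tests that mach test-unit picks up are standard Rust #[test] functions. A minimal, hypothetical example follows; it is not taken from an actual Servo crate:

```rust
// A minimal, hypothetical example of the kind of per-crate unit
// test that `mach test-unit` runs; not an actual Servo test.

/// Parses a "width=<number>" fragment, returning the number if well-formed.
pub fn parse_width(input: &str) -> Option<u32> {
    if input.starts_with("width=") {
        input["width=".len()..].parse().ok()
    } else {
        None
    }
}

#[cfg(test)]
mod tests {
    use super::parse_width;

    #[test]
    fn parses_valid_width() {
        assert_eq!(parse_width("width=42"), Some(42));
    }

    #[test]
    fn rejects_other_keys() {
        assert_eq!(parse_width("height=42"), None);
    }
}
```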

cargo

While the code that we have written for Servo is primarily in Rust, we estimate that at least 2/3 of the code that will run inside of Servo will be written in C/C++, even when we ship. From the SpiderMonkey JavaScript engine to the Skia and Azure/Moz2D graphics pipeline to WebRTC, media extensions, and proprietary video codecs, a huge portion of the browser is integrated and wrapped into Servo, rather than rewritten. For each of these projects, we have a crate with a build.rs file that performs the custom build steps to produce a static library, which is then wrapped in a Rust rlib and linked into Servo.
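A simplified sketch of that build.rs pattern is below. The library name, the make invocation, and the paths are illustrative assumptions, not the actual build steps of any particular Servo dependency; only the cargo:rustc-link-* directives are the real Cargo mechanism:

```rust
// build.rs: simplified sketch of wrapping a native C/C++ library.
// The library name and `make` invocation are illustrative only.
use std::env;
use std::process::Command;

fn main() {
    // Cargo provides a scratch directory for build script output.
    let out_dir = env::var("OUT_DIR").unwrap();

    // Drive the native project's own build system so that it drops
    // a static library (libmynative.a) into OUT_DIR.
    let status = Command::new("make")
        .arg("-C")
        .arg("native")
        .arg(format!("OUTDIR={}", out_dir))
        .status()
        .expect("failed to run make");
    assert!(status.success(), "native build failed");

    // Tell Cargo where to find the result and what to link against;
    // the enclosing crate then compiles to an rlib as usual.
    println!("cargo:rustc-link-search=native={}", out_dir);
    println!("cargo:rustc-link-lib=static=mynative");
}
```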

The rest of Servo is a significant amount of code (~150k lines of Rust; ~250k if you include autogenerated DOM bindings), but follows the standard conventions of Cargo and Rust as far as producing crates. For the many crates within the Servo repo, we simply have a Cargo.toml file next to a lib.rs that defines the module structure. When we break them out into a separate GitHub repository, though, we follow the convention of a toplevel Cargo.toml file with a src directory that holds all of the Rust code.
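Concretely, a broken-out crate repository follows the standard Cargo layout:

```
Cargo.toml   <- crate metadata and dependencies, at the repo toplevel
src/
  lib.rs     <- crate root that defines the module structure
```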

Servo's dependency graph

Updating dependencies

Since there are three toplevel Cargo.toml files, there are correspondingly three Cargo.lock files. This configuration makes the already challenging job of updating dependencies even harder. We have added a command, mach update-cargo -p {package} --precise {version}, to handle updates across all three of the lockfiles. While running this command without any arguments does attempt to upgrade all dependencies to the highest SemVer-compatible versions, in practice that operation is unlikely to work, due to a mixture of:

  • git-only dependencies, which do not have a version number

  • Dependencies with different version constraints on a common dependency, resulting in two copies of a library and conflicting types

  • Hidden Rust compiler version dependencies

Things we’d like to fix in the future

It would be great if there were a single Cargo.toml file at the toplevel of the Servo repo. The current layout is confusing to people familiar with Rust projects, who go looking for a Cargo.toml file and can’t find one.

Cross-compilation to Android with linker hacks feels a bit awkward. We’d like to clean that up, remove the submodule that performs that linker hackery, and give our cross-targeted builds a cleaner, more consistent feel.

Managing the dependencies, particularly when there is a cross-repo update like a Rust upgrade, is a real pain, and it requires network access in order to clone the dependency that you would like to edit. The proposed cargo clone command would be a huge help here.
