Mozilla Nederland
The Dutch Mozilla community

Armen Zambrano: What Mozilla CI tools is and what it can do for you (aka mozci)

Mozilla planet - fr, 24/04/2015 - 22:23
Mozci (Mozilla CI tools) is a Python library and set of scripts that lets you trigger jobs on treeherder.mozilla.org.
Not all jobs can be triggered, only those that run on Release Engineering's Buildbot setup. Most (if not all) Firefox desktop and Firefox for Android jobs can be triggered, and I believe some B2G jobs still can.

NOTE: Most B2G jobs are not supported yet since they run on TaskCluster. Support for TaskCluster will be added this quarter.

Using it

Once you check out the code:

git clone https://github.com/armenzg/mozilla_ci_tools.git
python setup.py develop

you can run scripts like this one (click here for other scripts):

python scripts/trigger.py \
  --buildername "Rev5 MacOSX Yosemite 10.10 fx-team talos dromaeojs" \
  --rev e16054134e12 --times 10

which would trigger a specific job 10 times.

NOTE: This works regardless of whether a build job already exists to trigger the test job; mozci will trigger everything required to get you what you need.
One of the many other options: if you want to trigger the same job for the last X revisions, use --back-revisions X.
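For instance, here is a hedged sketch that assembles such a command line; the helper function is hypothetical and exists only for illustration, and the authoritative flag names are the ones trigger.py itself documents:

```python
import shlex

def trigger_command(buildername, rev, back_revisions):
    # Hypothetical helper: build the argument list for re-triggering the
    # same builder on the last N revisions, per the --back-revisions option.
    return [
        "python", "scripts/trigger.py",
        "--buildername", buildername,
        "--rev", rev,
        "--back-revisions", str(back_revisions),
    ]

cmd = trigger_command(
    "Rev5 MacOSX Yosemite 10.10 fx-team talos dromaeojs",
    "e16054134e12",
    5,
)
print(shlex.join(cmd))
```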
There are many use cases and options listed in here.
A use case for developers

One use case which could be useful to developers (thanks @mike_conley!): you pushed to try with this try syntax: "try: -b o -p win32 -u mochitests -t none". Unfortunately, you later determine that you really need this one: "try: -b o -p linux64,macosx64,win32 -u reftest,mochitests -t none".
In normal circumstances you would push to the try server again; however, with mozci (once someone implements this), we could simply pass the new syntax to a script (or to ./mach) and trigger everything that you need, rather than having to push again and waste resources and your time!
If you have other use cases, please file an issue in here.
If you want to read about the definition of the project, vision, use cases or FAQ please visit the documentation.

Creative Commons License
This work by Zambrano Gasparnian, Armen is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License.
Categorieën: Mozilla-nl planet

Jeff Walden: Government speech and compelled speech

Mozilla planet - fr, 24/04/2015 - 20:17

Yesterday I discussed specialty plate programs in lower courts and the parties’ arguments in Walker v. Texas Division, Sons of Confederate Veterans. Today I begin to analyze the questions in the case.

But first, a disclaimer.

Disclaimer

The following is my understanding of First Amendment law, gleaned from years of reading numerous free speech opinions, summaries, and analyses. I’m generally confident in this explanation, but I may well have made mistakes, or simply missed nuance present in the cases but not in the summaries I’ve read. Please point out mistakes in the comments.

Of course, I really have no business trying to explain First Amendment jurisprudence, if I want it explained correctly. First Amendment law is incredibly complex. My haphazard reading will miss things.

But I’m barging ahead anyway, for a few reasons. First, I want to talk about this. Second, it’s fun to talk about it! Third, you don’t learn unless you’re willing to look like a fool from time to time. Fourth, the law is not this recondite, bizarre arcana that only lawyers and judges can understand. It may require some work to correctly understand laws, terms of art, rules of statutory construction, and relevant past decisions in the common law. But any intelligent person can do it if they make the effort.

And fifth, nobody with any sense will unconditionally rely on this as authoritative, not when there are far better places to look for the finest in free Internet legal advice.

Government speech

The “recently minted” government speech doctrine occupies an uneasy place in the realm of speech. For when government speech occurs, non-governmental speech open to First Amendment challenge is reduced. There must be some government speech: otherwise we’d absurdly conclude that the government’s World War II war-bond propaganda must be accompanied by anti-bond propaganda. Government programs often have viewpoints suppressible only in the voting booth. But this mechanism is sluggish and imperfectly responsive, and government speech’s discretion can be abused. So it’s best to be careful anointing government speech.


This is your government. This is your government on beef. Any questions?

Certainly some license plates — the state’s default designs and designs ordered by the legislature — are government speech, even if they’re also individual speech under Wooley v. Maynard. In each case the government wholly chooses what it wishes to say, and that message is government speech. The individual’s choice to assist in conveying it, under Wooley, isn’t government speech.

Circularity

But Texas’s government-speech argument, applied beyond plates it designs itself, is laughable. The linchpin of Texas’s argument is that because they control the program, that makes it government speech they can control. This argument is completely circular! By starting from their control over the program’s speech, they’ve assumed their conclusion.

This doesn’t mean Texas is wrong. But their circular central government-speech argument can prove nothing. This logical flaw is blindingly obvious. Texas’s lawyers can’t have missed this. If they made this their lead argument, they’re scrambling.

Compelling Texas to speak?

Texas’s better argument is that vehicle licenses and plates are its program, implicating its right to speak or not speak under Wooley. But the First Amendment restrains government power, not individual power. And many courts (although so far not the Supreme Court) have held that government can be compelled to “speak” in accepting advertising in government-controlled places (public transit systems, for a common example). The problem is Texas voluntarily created a specialty plate program open to all for speech. No “compulsion” derives from a voluntary act.

Texas didn’t fully control the specialty plate program, but rather opened it to anyone with money. (As Chief Justice Roberts noted in oral argument: “They’re only doing this to get the money.”) It’s possible there’s government speech in Texas SCV‘s plate, perhaps the occasionally-proposed “hybrid” speech. But once Texas opens the program to all, it loses full control over what’s said.

How then do we consider specialty plate programs? What controls may Texas exercise? Now we must decide how to classify the specialty-plate program with respect to First Amendment-protected speech. What sort of forum for speech is Texas’s specialty-plate program?

Tomorrow, First Amendment forum doctrine.


TomTom Partners with Mozilla, Telefonica for Online Maps - GPS World magazine

Nieuws verzameld via Google - fr, 24/04/2015 - 19:39


TomTom Partners with Mozilla, Telefonica for Online Maps
GPS World magazine
TomTom is partnering with Mozilla and Telefónica to bring its Maps Online and Nav Online apps to HTML5-powered Firefox OS smartphone devices. “We're thrilled to offer Firefox OS users TomTom's Maps Online and Nav Online apps in the Firefox ...
TomTom cooperates with Mozilla on navigation on Firefox OS (myce.com)


Daniel Stenberg: curl on the NASDAQ tower

Mozilla planet - fr, 24/04/2015 - 18:54

Apigee posted this lovely picture over at twitter. A curl command line on the NASDAQ tower.



Mozilla rejoint la fronde contre le projet de loi sur le renseignement - Le Monde

Nieuws verzameld via Google - fr, 24/04/2015 - 16:43


Mozilla rejoint la fronde contre le projet de loi sur le renseignement
Le Monde
The measures presented by the bill constitute, according to Mozilla, "a threat to the Internet's infrastructure, users' privacy, and the security of data". This statement comes on top of the numerous ...
Loi renseignement : « une menace » aux multiples visages juge Mozilla (ZDNet France)
Firefox : la fondation Mozilla conteste la loi française sur le renseignement (metronews)
Loi renseignement : « une menace réelle » selon la fondation Mozilla (Begeek.fr)

Armen Zambrano: Firefox UI update testing

Mozilla planet - fr, 24/04/2015 - 16:42
We currently trigger UI update tests manually for Firefox releases. There are automated headless update verification tests, but they don't test the UI of Firefox.

The goal is to integrate this UI update testing into the Firefox release process. This will require changes to firefox-ui-tests, Buildbot scheduling, Marionette and other Mozbase packages. The ultimate goal is to speed up our turnaround on releases.
The update testing code was recently ported from Mozmill to use Marionette to drive the testing.
I've already written some documentation on how to run the update verification using Release Engineering configuration files. You can use my tools repository until the code lands (update_testing is the branch to be used).
My deliverable is to ensure that the update testing works reliably on Release Engineering infrastructure and that scheduling code exists for it.
You can read more about this project in bug 1148546.
Creative Commons License
This work by Zambrano Gasparnian, Armen is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License.

Loi renseignement : « une menace réelle » selon la fondation Mozilla - Begeek.fr

Nieuws verzameld via Google - fr, 24/04/2015 - 15:00


Loi renseignement : « une menace réelle » selon la fondation Mozilla
Begeek.fr
Moreover, the contours of the law seem too vague to Mozilla, which considers that "the exact terms of this bill seem to change frequently" and that "discussions conducted secretly, behind closed doors, rarely lead to legislation ...
Loi renseignement : « une menace » aux multiples visages juge Mozilla (ZDNet France)
Mozilla rejoint la fronde contre le projet de loi sur le renseignement (Le Monde)
Firefox : la fondation Mozilla conteste la loi française sur le renseignement (metronews)

Pierros Papadeas: KPI Dashboard on reps.mozilla.org

Mozilla planet - fr, 24/04/2015 - 12:41

Mozilla Reps as a program is full of activities. Reps around the world do extraordinary things every day, promoting Mozilla's mission and getting new contributors on board.

Moving forward, to identify how those activities align with top-tier initiatives, the Mozilla Reps program wanted a way to visualize some Key Performance Indicators (KPIs) for the program.

We (the Participation Infrastructure team) sat down with the programmatic owners of Reps (Nuke & Rosana) and identified what numbers and metrics we would like to expose in a much more digestible way, so we can assess the progress of the program on many levels.

We identified 3 different KPIs:

  • Number of Reps (and growth rates)
  • Number of Events (and growth rates)
  • Number of Reports (and growth rates)

… and also 3 different filters you can apply to those numbers:

  • Country
  • Functional Area (of Mozilla)
  • Initiative (associated with Rep, Event or Report)

You can find the spec for this initial iteration here.

We decided to have the filters as drop-downs, applied on the whole page (in combination or one-by-one). Then for each KPI group we would have a time graph for the past 6 weeks (fixed for now) with a HUD of basic numbers and growth rates.
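As an illustration of the "growth rate" numbers such a HUD shows, here is a hypothetical helper (not the portal's actual endpoint code, just a sketch of the arithmetic):

```python
def growth_rate(previous, current):
    # Week-over-week growth as a percentage.
    # Growth is undefined from a zero baseline, so return None.
    if previous == 0:
        return None
    return (current - previous) / previous * 100.0

# e.g. 44 reports last week, 55 this week
print(growth_rate(44, 55))  # 25.0
```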

Technology-wise, we tied the coding of this new dashboard to the delivery of a proper API for the Reps Portal (more info on that soon). The new API enabled us to easily create custom endpoints to calculate the numbers needed for our Reps KPI graphs (based on the existing Conversion Points). Nemo and Tasos did fantastic work delivering the new API and the custom endpoints, while making sure this is not heavy on our DB.

Nikos then worked on the front-end using D3.js as the visualization library to create the graphs dynamically (each time you access the page or you filter using Country, Area or Initiative).

The overall result is smooth and helps you easily assess the progress of various Areas and Initiatives in specific Countries, for Reps, Events and Reports.

You can check out the dashboard here.

The next step would be to introduce a time slider for customizing the displayed time range.


Loi renseignement : "une menace" aux multiples visages juge Mozilla - ZDNet France

Nieuws verzameld via Google - fr, 24/04/2015 - 11:30


Loi renseignement : "une menace" aux multiples visages juge Mozilla
ZDNet France
Legislation: The government's statements are still not convincing, certainly not to Mozilla, which sees in the intelligence bill "a threat to the Internet's infrastructure, users' privacy, as well as to the ...
Mozilla s'inquiète de la Loi sur le renseignement (01net)
Mozilla s'exprime sur la Loi Renseignement (L'Humanité)
Loi renseignement : lettre ouverte de la fondation Mozilla au gouvernement ... (metronews)


Chris Lord: Web Navigation Transitions

Mozilla planet - fr, 24/04/2015 - 11:26

Wow, so it’s been over a year since I last blogged. Lots has happened in that time, but I suppose that’s a subject for another post. I’d like to write a bit about something I’ve been working on for the last week or so. You may have seen Google’s proposal for navigation transitions, and if not, I suggest reading the spec and watching the demonstration. This is something that I’ve thought about for a while previously, but never put into words. After reading Google’s proposal, I fear that it’s quite complex both to implement and to author, so this pushed me both to document my idea, and to implement a proof-of-concept.

I think Google’s proposal is based on Android’s Activity Transitions, and due to Android UI’s very different display model, I don’t think this maps well to the web. Just my opinion though, and I’d be interested in hearing peoples’ thoughts. What follows is my alternative proposal. If you like, you can just jump straight to a demo, or view the source. Note that the demo currently only works in Gecko-based browsers – this is mostly because I suck, but also because other browsers have slightly inscrutable behaviour when it comes to adding stylesheets to a document. This is likely fixable, patches are most welcome.

Navigation Transitions specification proposal

Abstract

An API is suggested that allows transitions to be performed between page navigations, requiring only CSS. The API is intended to be flexible enough to allow animations on different pages to be performed in synchronisation, and to allow particular transition states to be targeted without the need to interject with JavaScript.

Proposed API

Navigation transitions will be specified within a specialised stylesheet. These stylesheets will be included in the document as new link rel types. Transitions can be specified for entering and exiting the document. When the document is ready to transition, these stylesheets will be applied for the specified duration, after which they will stop applying.

Example syntax:

<link rel="transition-enter" duration="0.25s" href="URI" />
<link rel="transition-exit" duration="0.25s" href="URI" />

When navigating to a new page, the current page’s ‘transition-exit‘ stylesheet will be referenced, and the new page’s ‘transition-enter‘ stylesheet will be referenced.

When navigation is operating in a backwards direction (by the user pressing the back button in browser chrome, or when initiated from JavaScript via manipulation of the location or history objects), animations will be run in reverse. That is, the current page’s ‘transition-enter‘ stylesheet will be referenced and its animations run in reverse, and the old page’s ‘transition-exit‘ stylesheet will be referenced and its animations also run in reverse.

[Update]

Anne van Kesteren suggests that forcing this to be a separate stylesheet and putting the duration information in the tag is not desirable, and that it would be nicer to expose this as a media query, with the duration information available in an @-rule. Something like this:

@viewport {
  navigate-away-duration: 500ms;
}

@media (navigate-away) {
  ...
}

I think this would indeed be nicer, though I think the exact naming might need some work.

Transitioning

When a navigation is initiated, the old page will stay at its current position and the new page will be overlaid over the old page, but hidden. Once the new page has finished loading it will be unhidden, the old page’s ‘transition-exit‘ stylesheet will be applied and the new page’s ‘transition-enter’ stylesheet will be applied, for the specified durations of each stylesheet.

When navigating backwards, the CSS animations timeline will be reversed. This will have the effect of modifying the meaning of animation-direction like so:

Forwards          | Backwards
------------------+------------------
normal            | reverse
reverse           | normal
alternate         | alternate-reverse
alternate-reverse | alternate

and this will also alter the start time of the animation, depending on the declared total duration of the transition. For example, if a navigation stylesheet is declared to last 0.5s and an animation has a duration of 0.25s, when navigating backwards, that animation will effectively have an animation-delay of 0.25s and run in reverse. Similarly, if it already had an animation-delay of 0.1s, the animation-delay going backwards would become 0.15s, to reflect the time when the animation would have ended.
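The arithmetic above can be sketched as follows (a plain illustration of the rule, with delays and durations in seconds; the function name is mine, not part of the proposal):

```python
def reversed_delay(total, delay, duration):
    # When the timeline runs backwards, an animation's effective delay
    # becomes the time remaining after the point where it would have
    # ended when running forwards.
    return total - (delay + duration)

# Examples from the text: a 0.5s transition with a 0.25s animation.
print(reversed_delay(0.5, 0.0, 0.25))           # 0.25
print(round(reversed_delay(0.5, 0.1, 0.25), 4))  # 0.15
```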

Layer ordering will also be reversed when navigating backwards, that is, the page being navigated from will appear on top of the page being navigated backwards to.

Signals

When a transition starts, a ‘navigation-transition-start‘ NavigationTransitionEvent will be fired on the destination page. When this event is fired, the document will have had the applicable stylesheet applied and it will be visible, but will not yet have been painted on the screen since the stylesheet was applied. When the navigation transition duration is met, a ‘navigation-transition-end‘ event will be fired on the destination page. These signals can be used, amongst other things, to tidy up state and to initialise state. They can also be used to modify the DOM before the transition begins, allowing the transition to be customised based on request data.

JavaScript execution could potentially cause a navigation transition to run indefinitely; it is left to the user agent’s general-purpose JavaScript hang detection to mitigate this circumstance.

Considerations and limitations

Navigation transitions will not be applied if the new page does not finish loading within 1.5 seconds of its first paint. This can be mitigated by pre-loading documents, or by the use of service workers.

Stylesheet application duration will be timed from the first render after the stylesheets are applied. This should either synchronise exactly with CSS animation/transition timing, or it should be longer, but it should never be shorter.

Authors should be aware that using transitions will temporarily increase the memory footprint of their application during transitions. This can be mitigated by clear separation of UI and data, and/or by using JavaScript to manipulate the document and state when navigating to avoid keeping unused resources alive.

Navigation transitions will only be applied if both the navigating document has an exit transition and the target document has an enter transition. Similarly, when navigating backwards, the navigating document must have an enter transition and the target document must have an exit transition. Both documents must be on the same origin, or transitions will not apply. The exception to these rules is the first document load of the navigator. In this case, the enter transition will apply if all prior considerations are met.
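These rules can be summarised in a small predicate. This is only a sketch: modelling documents as plain dicts with "origin", "enter" and "exit" keys is an assumption of this example, not part of the proposal.

```python
def transition_applies(from_doc, to_doc, backwards=False):
    # First document load of the navigator: only an enter transition
    # on the target document is required.
    if from_doc is None:
        return bool(to_doc["enter"])
    # Both documents must be on the same origin.
    if from_doc["origin"] != to_doc["origin"]:
        return False
    # Navigating backwards flips the roles: the navigating document
    # needs an enter transition and the target needs an exit transition.
    if backwards:
        return bool(from_doc["enter"] and to_doc["exit"])
    return bool(from_doc["exit"] and to_doc["enter"])

a = {"origin": "http://example.com", "enter": True, "exit": True}
b = {"origin": "http://example.com", "enter": True, "exit": False}
print(transition_applies(a, b))                  # True  (a exits, b enters)
print(transition_applies(a, b, backwards=True))  # False (b has no exit)
```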

Default transitions

It is possible for the user agent to specify default transitions, so that navigation within a particular origin will always include navigation transitions unless they are explicitly disabled by that origin. This can be done by specifying navigation transition stylesheets with no href attribute, or that have an empty href attribute.

Note that specifying default transitions in all situations may not be desirable due to the differing loading characteristics of pages on the web at large.

It is suggested that default transition stylesheets may be specified by extending the iframe element with custom ‘default-transition-enter‘ and ‘default-transition-exit‘ attributes.

Examples

Simple slide between two pages:

[page-1.html]

<head>
  <link rel="transition-exit" duration="0.25s" href="page-1-exit.css" />
  <style>
    body { border: 0; height: 100%; }
    #bg { width: 100%; height: 100%; background-color: red; }
  </style>
</head>
<body>
  <div id="bg" onclick="window.location='page-2.html'"></div>
</body>

[page-1-exit.css]

#bg {
  animation-name: slide-left;
  animation-duration: 0.25s;
}

@keyframes slide-left {
  from {}
  to { transform: translateX(-100%); }
}

[page-2.html]

<head>
  <link rel="transition-enter" duration="0.25s" href="page-2-enter.css" />
  <style>
    body { border: 0; height: 100%; }
    #bg { width: 100%; height: 100%; background-color: green; }
  </style>
</head>
<body>
  <div id="bg" onclick="history.back()"></div>
</body>

[page-2-enter.css]

#bg {
  animation-name: slide-from-left;
  animation-duration: 0.25s;
}

@keyframes slide-from-left {
  from { transform: translateX(100%); }
  to {}
}

I believe that this proposal is easier to understand and use for simpler transitions than Google’s, however it becomes harder to express animations where one element is transitioning to a new position/size in a new page, and it’s also impossible to interleave contents between the two pages (as the pages will always draw separately, in the predefined order). I don’t believe this last limitation is a big issue, however, and I don’t think the cognitive load required to craft such a transition is considerably higher. In fact, you can see it demonstrated by visiting this link in a Gecko-based browser (recommended viewing in responsive design mode Ctrl+Shift+m).

I would love to hear peoples’ thoughts on this. Am I actually just totally wrong, and Google’s proposal is superior? Are there huge limitations in this proposal that I’ve not considered? Are there security implications I’ve not considered? It’s highly likely that parts of all of these are true and I’d love to hear why. You can view the source for the examples in your browser’s developer tools, but if you’d like a way to check it out more easily and suggest changes, you can also view the git source repository.


Mozilla Firefox 37.0.1 Latest Download and Install – Bug Fixes, Safe Browsing ... - Press and Update

Nieuws verzameld via Google - fr, 24/04/2015 - 10:10


Mozilla Firefox 37.0.1 Latest Download and Install – Bug Fixes, Safe Browsing ...
Press and Update
When the name Firefox is mentioned, what first comes to the minds of the millions of people who use this browsing application is security. When Mozilla Firefox was started, it was based on two major principles and these are organization and security ...


Mozilla joins opponents of French intelligence bill - Telecompaper (subscription)

Nieuws verzameld via Google - fr, 24/04/2015 - 09:27

Mozilla joins opponents of French intelligence bill
Telecompaper (subscription)
The Mozilla Foundation has joined a number of French institutions, businesses, and civil society organisations in expressing deep concern about proposals being put forward by the French government, such as allowing the bulk collection of metadata, automated ...


Cameron Kaiser: IonPower progress report

Mozilla planet - fr, 24/04/2015 - 05:30
Remember: comparing the G5 optimized PPCBC Baseline-only compiler against the unoptimized test version of IonPower on V8!

% /Applications/TenFourFoxG5.app/Contents/MacOS/js --no-ion -f run.js
Richards: 203
DeltaBlue: 582
Crypto: 358
RayTrace: 584
EarleyBoyer: 595
RegExp: 616
Splay: 969
NavierStokes: 432
----
Score (version 7): 498

% ../../../../mozilla-36t/obj-ff-dbg/dist/bin/js -f run.js
Richards: 337
DeltaBlue: 948
Crypto: 1083
RayTrace: 913
EarleyBoyer: 350
RegExp: 259
Splay: 584
NavierStokes: 3262
----
Score (version 7): 695
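For context, the V8 suite's reported score is the geometric mean of the subtest scores (that scoring rule is stated here from memory of the benchmark, not from this post), so the two totals can be sanity-checked:

```python
from math import prod

def geomean(scores):
    # Geometric mean: the nth root of the product of n scores.
    return prod(scores) ** (1.0 / len(scores))

ppcbc    = [203, 582, 358, 584, 595, 616, 969, 432]
ionpower = [337, 948, 1083, 913, 350, 259, 584, 3262]
print(round(geomean(ppcbc)))     # 498
print(round(geomean(ionpower)))  # 695
```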

I've got one failing test case left to go (the other is not expected to pass because it assumes a little-endian memory alignment)! We're almost to the TenFourFox 38 port!


The Rust Programming Language Blog: Rust Once, Run Everywhere

Mozilla planet - fr, 24/04/2015 - 02:00

Rust's quest for world domination was never destined to happen overnight, so Rust needs to be able to interoperate with the existing world just as easily as it talks to itself. For this reason, Rust makes it easy to communicate with C APIs without overhead, and to leverage its ownership system to provide much stronger safety guarantees for those APIs at the same time.

To communicate with other languages, Rust provides a foreign function interface (FFI). Following Rust's design principles, the FFI provides a zero-cost abstraction where function calls between Rust and C have identical performance to C function calls. FFI bindings can also leverage language features such as ownership and borrowing to provide a safe interface that enforces protocols around pointers and other resources. These protocols usually appear only in the documentation for C APIs -- at best -- but Rust makes them explicit.

In this post we'll explore how to encapsulate unsafe FFI calls to C in safe, zero-cost abstractions. Working with C is, however, just an example; we'll also see how Rust can easily talk to languages like Python and Ruby just as seamlessly as with C.

Rust talking to C

Let's start with a simple example of calling C code from Rust and then demonstrate that Rust imposes no additional overhead. Here's a C program which will simply double all the input it's given:

int double_input(int input) {
    return input * 2;
}

To call this from Rust, you might write a program like this:

extern crate libc;

extern {
    fn double_input(input: libc::c_int) -> libc::c_int;
}

fn main() {
    let input = 4;
    let output = unsafe { double_input(input) };
    println!("{} * 2 = {}", input, output);
}

And that's it! You can try this out for yourself by checking out the code on GitHub and running cargo run from that directory. At the source level we can see that there's no burden in calling an external function beyond stating its signature, and we'll see soon that the generated code indeed has no overhead, either. There are, however, a few subtle aspects of this Rust program, so let's cover each piece in detail.

First up we see extern crate libc. The libc crate provides many useful type definitions for FFI bindings when talking with C, and it makes it easy to ensure that both C and Rust agree on the types crossing the language boundary.

This leads us nicely into the next part of the program:

extern {
    fn double_input(input: libc::c_int) -> libc::c_int;
}

In Rust this is a declaration of an externally available function. You can think of this along the lines of a C header file. Here's where the compiler learns about the inputs and outputs of the function, and you can see above that this matches our definition in C. Next up we have the main body of the program:

fn main() {
    let input = 4;
    let output = unsafe { double_input(input) };
    println!("{} * 2 = {}", input, output);
}

We see one of the crucial aspects of FFI in Rust here, the unsafe block. The compiler knows nothing about the implementation of double_input, so it must assume that memory unsafety could happen whenever you call a foreign function. The unsafe block is how the programmer takes responsibility for ensuring safety -- you are promising that the actual call you make will not, in fact, violate memory safety, and thus that Rust's basic guarantees are upheld. This may seem limiting, but Rust has just the right set of tools to allow consumers to not worry about unsafe (more on this in a moment).

Now that we've seen how to call a C function from Rust, let's see if we can verify this claim of zero overhead. Almost all programming languages can call into C one way or another, but it often comes at a cost with runtime type conversions or perhaps some language-runtime juggling. To get a handle on what Rust is doing, let's go straight to the assembly code of the above main function's call to double_input:

mov    $0x4,%edi
callq  3bc30 <double_input>

And as before, that's it! Here we can see that calling a C function from Rust involves precisely one call instruction after moving the arguments into place, exactly the same cost as it would be in C.

Safe Abstractions

Most features in Rust tie into its core concept of ownership, and the FFI is no exception. When binding a C library in Rust you not only have the benefit of zero overhead, but you are also able to make it safer than C can! Bindings can leverage the ownership and borrowing principles in Rust to codify comments typically found in a C header about how its API should be used.

For example, consider a C library for parsing a tarball. This library will expose functions to read the contents of each file in the tarball, probably something along the lines of:

// Gets the data for a file in the tarball at the given index, returning NULL if
// it does not exist. The `size` pointer is filled in with the size of the file
// if successful.
const char *tarball_file_data(tarball_t *tarball, unsigned index, size_t *size);

This function is implicitly making assumptions about how it can be used, however, by assuming that the char* pointer returned cannot outlive the input tarball. When bound in Rust, this API might look like this instead:

pub struct Tarball { raw: *mut tarball_t }

impl Tarball {
    pub fn file(&self, index: u32) -> Option<&[u8]> {
        unsafe {
            let mut size = 0;
            let data = tarball_file_data(self.raw, index as libc::c_uint,
                                         &mut size);
            if data.is_null() {
                None
            } else {
                Some(slice::from_raw_parts(data as *const u8, size as usize))
            }
        }
    }
}

Here the *mut tarball_t pointer is owned by a Tarball, which is responsible for any destruction and cleanup, so we already have rich knowledge about the lifetime of the tarball's memory. Additionally, the file method returns a borrowed slice whose lifetime is implicitly connected to the lifetime of the source tarball itself (the &self argument). This is Rust's way of indicating that the returned slice can only be used within the lifetime of the tarball, statically preventing dangling pointer bugs that are easy to make when working directly with C. (If you're not familiar with this kind of borrowing in Rust, have a look at Yehuda Katz's blog post on ownership.)

A key aspect of the Rust binding here is that it is a safe function, meaning that callers do not have to use unsafe blocks to invoke it! Although it has an unsafe implementation (due to calling an FFI function), the interface uses borrowing to guarantee that no memory unsafety can occur in any Rust code that uses it. That is, due to Rust's static checking, it's simply not possible to cause a segfault using the API on the Rust side. And don't forget, all of this is coming at zero cost: the raw types in C are representable in Rust with no extra allocations or overhead.

Rust's amazing community has already built some substantial safe bindings around existing C libraries, including OpenSSL, libgit2, libdispatch, libcurl, sdl2, Unix APIs, and libsodium. This list is also growing quite rapidly on crates.io, so your favorite C library may already be bound or will be bound soon!

C talking to Rust

Despite guaranteeing memory safety, Rust does not have a garbage collector or runtime, and one of the benefits of this is that Rust code can be called from C with no setup at all. This means that the zero overhead FFI not only applies when Rust calls into C, but also when C calls into Rust!

Let's take the example above, but reverse the roles of each language. As before, all the code below is available on GitHub. First we'll start off with our Rust code:

#[no_mangle]
pub extern fn double_input(input: i32) -> i32 {
    input * 2
}

As with the Rust code before, there's not a whole lot here but there are some subtle aspects in play. First off, we've labeled our function definition with a #[no_mangle] attribute. This instructs the compiler to not mangle the symbol name for the function double_input. Rust employs name mangling similar to C++ to ensure that libraries do not clash with one another, and this attribute means that you don't have to guess a symbol name like double_input::h485dee7f568bebafeaa from C.

Next we've got our function definition, and the most interesting part about this is the keyword extern. This is a specialized form of specifying the ABI for a function which enables the function to be compatible with a C function call.

Finally, if you take a look at the Cargo.toml you'll see that this library is not compiled as a normal Rust library (rlib) but instead as a static archive which Rust calls a 'staticlib'. This enables all the relevant Rust code to be linked statically into the C program we're about to produce.
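For reference, the staticlib configuration amounts to only a couple of lines in the manifest. This is a sketch rather than a copy of the repository's file, so the package name and version here are assumed:

```toml
[package]
name = "double_input"   # assumed name, for illustration
version = "0.1.0"

[lib]
# Produce a static archive (.a) that the C linker can consume,
# instead of Rust's default rlib format.
crate-type = ["staticlib"]
```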

Now that we've got our Rust library squared away, let's write our C program which will call Rust.

#include <stdint.h>
#include <stdio.h>

extern int32_t double_input(int32_t input);

int main() {
    int input = 4;
    int output = double_input(input);
    printf("%d * 2 = %d\n", input, output);
    return 0;
}

Here we can see that C, like Rust, needs to declare the double_input function that Rust defined. Other than that though everything is ready to go! If you run make from the directory on GitHub you'll see these examples getting compiled and linked together and the final executable should run and print 4 * 2 = 8.

Rust's lack of a garbage collector and runtime enables this seamless transition from C to Rust. The external C code does not need to perform any setup on Rust's behalf, making the transition that much cheaper.

Beyond C

Up to now we've seen how FFI in Rust has zero overhead and how we can use Rust's concept of ownership to write safe bindings to C libraries. If you're not using C, however, you're still in luck! These features of Rust enable it to also be called from Python, Ruby, JavaScript, and many more languages.

When writing code in these languages, you sometimes want to speed up some component that's performance critical, but in the past this often required dropping all the way to C, and thereby giving up the memory safety, high-level abstractions, and ergonomics of these languages.

The fact that Rust can talk so easily with C, however, means that it is also viable for this sort of usage. One of Rust's first production users, Skylight, was able to improve the performance and memory usage of their data collection agent almost instantly by just using Rust, and the Rust code is all published as a Ruby gem.

Moving from a language like Python and Ruby down to C to optimize performance is often quite difficult as it's tough to ensure that the program won't crash in a difficult-to-debug way. Rust, however, not only brings zero cost FFI, but also makes it possible to retain the same safety guarantees as the original source language. In the long run, this should make it much easier for programmers in these languages to drop down and do some systems programming to squeeze out critical performance when they need it.

FFI is just one of many tools in the toolbox of Rust, but it's a key component to Rust's adoption as it allows Rust to seamlessly integrate with existing code bases today. I'm personally quite excited to see the benefits of Rust reach as many projects as possible!

Categorieën: Mozilla-nl planet

Emma Irwin: My year on Reps Council

Mozilla planet - fr, 24/04/2015 - 01:36

It’s been one year! An incredible year of learning, leading and helping evolve the Mozilla Reps program as a council member. As my term ends I want to share my experiences with those considering this same path, but also as a way to lend to the greater narrative of Reps as a leadership platform.

I could write 12 posts for each month of my term, but instead I thought it might be more helpful to say what I know for sure.

The 6 things I know for sure

(after 12 months on Reps Council)

1. Mozilla Reps Council Is a Journey of Learning and Inspiration

I assumed, when I first started council, that my workload would consist mostly of administrative tasks (although to be truthful there is a lot of that). I also assumed I would effortlessly lean on my existing leadership skills to ‘help out’ where needed. It turns out, I had a lot to learn and improve on – here are some of the new and sharpened skills I am emerging with:

  • Problem solving
  • Conflict Resolution/ Crisis Management
  • Communication
  • Strategy
  • Transparency
  • Project Planning
  • Task Management
  • Writing
  • Respecting Work-Life Balance
  • Debating Respectfully
  • Public Speaking
  • Facilitation
  • The art of saying ‘no’/when to step back
  • The art of ‘not dropping balls’ or knowing which balls will bounce back, and which will break
  • Being brave (aka talking to leadership despite nagging imposter syndrome)
  • Empathy
  • Planning for Diversity
  • Outreach
  • Teaching

2. 2015 is a (super) important year for Reps

Nurtured by the loving hands of 5 previous Reps councils, a strong mentorship structure, over 400 Reps and thousands of community members, the Mozilla Reps program has reached an important milestone as a recognized body of leadership across Mozilla. The clearly articulated vision of Reps as a ‘launch pad for leadership’ has pushed us to be more strategic in our goals. And we are. The next council, together with mentors, will be critical in executing these goals.

3. The voice of community is valued, and Mozilla is listening

In the past few months, we’ve worked with Mitchell Baker, Chris Beard, Mark Surman, David Slater, Mary-Ellen and others on everything from conflict resolution to VP interview and on-boarding processes. Reps Council is on the Mozilla leadership page. The Mozilla Reps call has been attended by Firefox and Brand teams in need of feedback. It’s not a coincidence, and it’s not casual – your voice matters. Reps as leaders have the ear of the entire organization, because Reps are the voice of their extended community.


I encourage Rep Mentors with loud minds to run for council this year.

4. Mozilla Reps is ever-evolving


When I joined Reps Council, I had a lot of ideas about what I would ‘fix’. And I laugh at myself now – ‘fixing’ is something we do to flaws, to errors and mistakes – but the Reps program is not an immovable force. It’s a living organism, alive with people, their ideas, inventions and actions. How we evolve, while aligning with the needs of project goals, is a bit like changing the tire on a moving car. If you are considering a run for council, it might help to envision ways you can evolve, improve and grow the program as it shifts, and in response to community vision for their own participation goals.

5. Changing minds is hard / Outreach matters

I can’t write a list like this without acknowledging my personal challenge of recognizing and trying to change ‘perception problems’. It was strange to move from what had been a fairly easy transition between community, Rep and mentor to Reps Council, where almost suddenly I was regarded as part of a bureaucratic structure. Perceptions of our extended community have also been challenging – the idea that Reps is somehow an isolated or special contributor group is contrary to the leadership platform we are really building.

Slowly we are changing minds, slowly outreach is making a difference – I am happy and optimistic about this.

6. Diversity Matters

Reps is an incredibly diverse community with diverse representation in many areas including age, geography and experience. Few other communities can compare. But, like much of the technology world, we struggle with the representation of women on our council and in our mentorship base. To be truly reflective of our community, and our world – to have the benefit of all perspectives – we need to encourage women leaders. As I leave council, my hope is that we will continue to prioritize women in leadership roles.

To the Reps community, mentors, the Reps team, Mozilla leadership and community, I thank you for this incredible opportunity to contribute and to grow. I plan to pay it forward.

Feature Image Credit: Fay Tandog

Categorieën: Mozilla-nl planet

Air Mozilla: Privacy Lab and Cryptoparty with guest speaker Melanie Ensign - How Security/Crypto Experts Can Communicate with Non-Technical Audiences

Mozilla planet - fr, 24/04/2015 - 01:30

Privacy Lab and Cryptoparty with guest speaker Melanie Ensign - How Security/Crypto Experts Can Communicate with Non-Technical Audiences Our April Privacy Lab will include a speaker and an optional and free Cryptoparty, hosted by Wildbee (https://wildbee.org/cryptoparty.html). Our speaker will be Melanie Ensign. Melanie's...

Categorieën: Mozilla-nl planet


L. David Baron: Thoughts on migrating to a secure Web

Mozilla planet - fr, 24/04/2015 - 00:12

Brad Hill asked what I and other candidates in the TAG election think of Tim Berners-Lee's article Web Security - "HTTPS Everywhere" harmful. The question seems worth answering, and I don't think an answer fits within a tweet. So this is what I think, even though I feel the topic is a bit outside my area of expertise:

  • The current path of switching content on the Web to being accessed through secure connections generally involves making the content available via http URLs also available via https URLs, redirecting http URLs to https ones, and (hopefully, although not all that frequently in reality) using HSTS to ensure that the user's future attempts to access HTTP resources get converted to HTTPS without any insecure connection being made. This is a bit hacky, and hasn't solved the problem of the initial insecure connection, but it mostly works, and doesn't degrade the security of anything we have today (e.g., bookmarks or links to https URLs).

  • It's not clear to me what the problem that Tim is trying to solve is. I think some of it is concern over the semantic Web (e.g., his concern over the “identity of the resource”), although there may be other concerns there that I don't understand. I'd tend to prioritize the interests of the browseable Web (with users counted in the billions) and other uses of the Web that are widespread, over those of the semantic Web.

  • There are good reasons for the partitioning that browsers do between http and https:

    • Some of the partitioning prevents attacks directly (for example, sending a cookie that should be sent only to an https site to its http equivalent could allow an active attacker to steal the information in that cookie). Likewise for many other attacks involving the same-origin policy, where http and https are considered different origins.
    • Some of it (e.g., identifying https pages that load resources over http as insecure) is intended to prevent large classes of mistakes that would otherwise be widespread and drastically reduce the security of the Web. Circa 2000, a common Web developer complaint about browser security UI was that a site couldn't be considered secure if an image was loaded over HTTP. This might have been fine if the image was the company logo (and the attack under consideration was avoiding theft of money or credentials rather than avoiding monitoring), but isn't fine if the image is a graph of a bank account balance or if the image's URL has authentication information in it. (On the other hand, if it were a script rather than an image, an active attacker could compromise the entire page if the script could be loaded without authentication.) I think a similar rationale applies for not having mechanisms to do authentication without encryption (even though there are many cases where that would be fine).

    It's not clear to me how Tim's proposal of making http secure would address these issues (and keep everything else working at the same time). For example, is a secure-http page same-origin with insecure-http on the same host, or with https, or neither? They may well be solvable, but I don't see how to solve them off the top of my head, and I think they'd need to be solved before actually pursuing this approach.

  • One problem that I think is worth solving is that HTTPS as a user-presentable prefix has largely failed. Banks tell their customers to go to links like "bofa.com/activate" or "wellsfargo.com/activate". (The first one doesn't even work if the user adds "https://". I guess there's a chance that the experience of existing users could be fixed with HSTS, but that's not the case today.) They do this for a good reason; each additional character (especially the strange characters) is going to reduce the chance the user succeeds at the task.

    It's possible Tim's proposal might help solve this, although it's not clear to me how it could do so with an active man-in-the-middle attacker. (It could help against passive attackers, as could browsers trying https before trying http.) In the long term, maybe the Web will get to a point where typing such URLs tries https and doesn't try http, but I think we're a long way away from a browser being able to do that without losing a large percentage of its users.

I think I basically understand the current approach of migrating to secure connections by migrating to https, which seems to be working, although slowly. I'm hopeful that Let's Encrypt will help speed this up. It's possible that the approach Tim is suggesting could lead to a faster migration to secure connections on the Web, although I don't see enough in Tim's article to evaluate its security and feasibility.

Categorieën: Mozilla-nl planet

TomTom Launches Online Maps and Navigation In HTML5 Through Partnership ... - GISuser.com (press release)

Nieuws verzameld via Google - to, 23/04/2015 - 22:43

TomTom Launches Online Maps and Navigation In HTML5 Through Partnership ...
GISuser.com (press release)
AMSTERDAM—TomTom (TOM2) today announces a partnership with Mozilla and Telefónica to bring its Maps Online and Nav Online apps to HTML5 powered Firefox OS smartphone devices. “TomTom is excited to be embracing the openness of HTML5 to ...

Categorieën: Mozilla-nl planet

Air Mozilla: German speaking community bi-weekly meeting

Mozilla planet - to, 23/04/2015 - 21:00

German speaking community bi-weekly meeting Bi-weekly meeting of the German-speaking community. ==== German speaking community bi-weekly meeting.

Categorieën: Mozilla-nl planet
