
Niko Matsakis: Maximally minimal specialization: always applicable impls

Mozilla planet - fr, 09/02/2018 - 06:00

So aturon wrote this beautiful post about what a good week it has been. In there, they wrote:

Breakthrough #2: @nikomatsakis had a eureka moment and figured out a path to make specialization sound, while still supporting its most important use cases (blog post forthcoming!). Again, this suddenly puts specialization on the map for Rust Epoch 2018.

Sheesh I wish they hadn’t written that! Now the pressure is on. Well, here goes nothing =).

Anyway, I’ve been thinking about the upcoming Rust Epoch. We’ve been iterating over the final list of features to be included and I think it seems pretty exciting. But there is one “fancy type system” feature that’s been languishing for some time: specialization. Accepted to much fanfare as RFC 1210, we’ve been kind of stuck since then trying to figure out how to solve an underlying soundness challenge.

As aturon wrote, I think (and emphasis on think!) I may have a solution. I call it the always applicable rule, but you might also call it maximally minimal specialization1.

Let’s be clear: this proposal does not support all the specialization use cases originally envisioned. As the phrase maximally minimal suggests, it works by focusing on a core set of impls and accepting those. But that’s better than most of its competitors! =) Better still, it leaves a route for future expansion.

The soundness problem

I’ll just cover the soundness problem very briefly; Aaron wrote an excellent blog post that covers the details. The crux of the problem is that code generation wants to erase regions, but the type checker doesn’t. This means that we can write specialization impls that depend on details of lifetimes, but we have no way to test at code generation time if those more specialized impls apply. A very simple example would be something like this:

impl<T> Trait for T { }

impl Trait for &'static str { }

At code generation time, all we know is that we have a &str – for some lifetime. We don’t know if it’s a static lifetime or not. The type checker is supposed to have assured us that we don’t have to know – that this lifetime is “big enough” to cover all the uses of the string.
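
To make that concrete, here is a minimal sketch (the trait and function names are invented for illustration; the specializing impl is commented out, since it is exactly what gets rejected):

trait Trait {
    fn hello(&self) { }
}

impl<T> Trait for T { }

// The specializing impl from above -- rejected under this proposal:
// impl Trait for &'static str { }

fn use_it<T: Trait>(t: T) {
    // This function is compiled once for `T = &str`, with the lifetime
    // erased. If the `&'static str` impl were allowed, there would be
    // no way to decide here which impl `t.hello()` should use.
    t.hello();
}

fn main() {
    let owned = String::from("hello");
    use_it(&owned[..]); // a short, non-'static lifetime
    use_it("world");    // a 'static lifetime -- same compiled code
}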

My proposal would reject the specializing impl above. I basically aim to solve this problem by guaranteeing that, just as today, code generation doesn’t have to care about specific lifetimes, because it knows that – whatever they are – if there is a potentially specializing impl, it will be applicable.

The “always applicable” test

The core idea is to change the rule for when overlap is allowed. In RFC 1210 the rule is something like this:

  • Distinct impls A and B are allowed to overlap if one of them specializes the other.

We have long intended to extend this via the idea of intersection impls, giving rise to a rule like:

  • Two distinct impls A and B are allowed to overlap if, for all types in their intersection:
    • there exists an applicable impl C and C specializes both A and B.2

My proposal is to extend that intersection rule with the always applicable test. I’m actually going to start with a simple version, and then I’ll discuss an important extension that makes it much more expressive.

  • Two distinct impls A and B are allowed to overlap if, for all types in their intersection:
    • there exists an applicable impl C and C specializes both A and B,
    • and that impl C is always applicable.

(We will see, by the way, that the precise definition of the specializes predicate doesn’t matter much for the purposes of my proposal here – any partial order will do.)

When is an impl always applicable?

Intuitively, an impl is always applicable if it does not impose any additional conditions on its input types beyond that they be well-formed – and in particular it doesn’t impose any equality constraints between parts of its input types. It also has to be fully generic with respect to the lifetimes involved.

Actually, I think the best way to explain it is in terms of the implied bounds proposal3 (RFC, blog post). The idea is roughly this: an impl is always applicable if it meets three conditions:

  • it relies only on implied bounds,
  • it is fully generic with respect to lifetimes,
  • it doesn’t repeat generic type parameters.

Let’s look at those three conditions.

Condition 1: Relies only on implied bounds.

Here is an example of an always applicable impl (which could therefore be used to specialize another impl):

struct Foo<T: Clone> { }

impl<T> SomeTrait for Foo<T> {
    // code in here can assume that `T: Clone` because of implied bounds
}

Here the impl works fine, because it adds no additional bounds beyond the T: Clone that is implied by the struct declaration.

If the impl adds new bounds that are not part of the struct, however, then it is not always applicable:

struct Foo<T: Clone> { }

impl<T: Copy> SomeTrait for Foo<T> {
    //  ^^^^^^^ new bound not declared on `Foo`,
    // hence *not* always applicable
}

Condition 2: Fully generic with respect to lifetimes.

Each lifetime used in the impl header must be a lifetime parameter, and each lifetime parameter can only be used once. So an impl like this is always applicable:

impl<'a, 'b> SomeTrait for &'a &'b u32 {
    // implied bounds let us assume that `'b: 'a`, as well
}

But the following impls are not always applicable:

impl<'a> SomeTrait for &'a &'a u32 {
    //                 ^^^^^^^^^^^ same lifetime used twice
}

impl SomeTrait for &'static str {
    //             ^^^^^^^^^^^^ not a lifetime parameter
}

Condition 3: Each type parameter can only be used once.

Using a type parameter more than once imposes “hidden” equality constraints between parts of the input types which in turn can lead to equality constraints between lifetimes. Therefore, an always applicable impl must use each type parameter only once, like this:

impl<T, U> SomeTrait for (T, U) { }

Repeating, as here, means the impl cannot be used to specialize:

impl<T> SomeTrait for (T, T) {
    //                ^^^^^^
    // `T` used twice: not always applicable
}

How can we think about this formally?

For each impl, we can create a Chalk goal that is provable if it is always applicable. I’ll define this here “by example”. Let’s consider a variant of the first example we saw:

struct Foo<T: Clone> { }

impl<T: Clone> SomeTrait for Foo<T> { }

As we saw before, this impl is always applicable, because the T: Clone where clause on the impl follows from the implied bounds of Foo<T>.

The recipe to transform this into a predicate is that we want to replace each use of a type/region parameter in the input types with a universally quantified type/region (note that two uses of the same type parameter would be replaced with two distinct types). This yields a “skolemized” set of input types T. We then check whether the impl could be applied to T.

In the case of our example, that means we would be trying to prove something like this:

// For each *use* of a type parameter or region in
// the input types, we add a 'forall' variable here.
// In this example, the only spot is `Foo<_>`, so we
// have one:
forall<A> {
    // We can assume that each of the input types (using those
    // forall variables) are well-formed:
    if (WellFormed(Foo<A>)) {
        // Now we have to see if the impl matches. To start,
        // we create existential variables for each of the
        // impl's generic parameters:
        exists<T> {
            // The types in the impl header must be equal...
            Foo<T> = Foo<A>,
            // ...and the where clauses on the impl must be provable.
            T: Clone,
        }
    }
}

Clearly, this is provable: we infer that T = A, and then we can prove that A: Clone because it follows from WellFormed(Foo<A>). Now if we look at the second example, which added T: Copy to the impl, we can see why we get an error. Here was the example:

struct Foo<T: Clone> { }

impl<T: Copy> SomeTrait for Foo<T> {
    //  ^^^^^^^ new bound not declared on `Foo`,
    // hence *not* always applicable
}

That example results in a query like:

forall<A> {
    if (WellFormed(Foo<A>)) {
        exists<T> {
            Foo<T> = Foo<A>,
            T: Copy, // <-- Not provable!
        }
    }
}

In this case, we fail to prove T: Copy, because it does not follow from WellFormed(Foo<A>).

As one last example, let’s look at the impl that repeats a type parameter:

impl<T> SomeTrait for (T, T) {
    // Not always applicable
}

The query that will result follows; what is interesting here is that the type (T, T) results in two forall variables, because it has two distinct uses of a type parameter (it just happens to be one parameter used twice):

forall<A, B> {
    if (WellFormed((A, B))) {
        exists<T> {
            (T, T) = (A, B) // <-- cannot be proven
        }
    }
}

What is accepted?

What this rule primarily does is allow you to specialize blanket impls with concrete types. For example, we currently have a From impl that says any type T can be converted to itself:

impl<T> From<T> for T { .. }

It would be nice to be able to define an impl that allows a value of the never type ! to be converted into any type (since such a value cannot exist in practice):

impl<T> From<!> for T { .. }

However, this impl overlaps with the reflexive impl. Therefore, we’d like to be able to provide an intersection impl defining what happens when you convert ! to ! specifically:

impl From<!> for ! { .. }

All of these impls would be legal in this proposal.
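
To spell that out, here is a rough sketch using a local stand-in trait, so it can be written down without colliding with the real From impls in std; the overlapping impls are commented out, since accepting them is exactly what this proposal enables:

// A stand-in for `std::convert::From`:
trait MyFrom<T> {
    fn my_from(t: T) -> Self;
}

// Reflexive blanket impl: every type can be converted to itself.
impl<T> MyFrom<T> for T {
    fn my_from(t: T) -> Self { t }
}

// Always applicable (each parameter used once, no extra bounds,
// no lifetimes), so the proposal would accept it -- but it overlaps
// with the impl above, so it needs specialization (plus the
// `never_type` feature) before it could compile:
// impl<T> MyFrom<!> for T {
//     fn my_from(t: !) -> Self { t } // `!` coerces to any type
// }

// The intersection impl, resolving the overlap at `T = !`:
// impl MyFrom<!> for ! {
//     fn my_from(t: !) -> Self { t }
// }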

Extension: Refining always applicable impls to consider the base impl

While it accepts some things, the always applicable rule can also be quite restrictive. For example, consider this pair of impls:

// Base impl:
impl<T> SomeTrait for T where T: 'static { }

// Specializing impl:
impl SomeTrait for &'static str { }

Here, the second impl wants to specialize the first, but it is not always applicable, because it specifies the 'static lifetime. And yet, it feels like this should be ok, since the base impl only applies to 'static things.

We can make this notion more formal by expanding the property to say that the specializing impl C must be always applicable with respect to the base impls. In this extended version of the predicate, the impl C is allowed to rely not only on the implied bounds, but on the bounds that appear in the base impl(s).

So, the impls above might result in a Chalk predicate like:

// One use of a lifetime in the specializing impl (`'static`),
// so we introduce one 'forall' lifetime:
forall<'a> {
    // Assuming the base impl applies:
    if (exists<T> { T = &'a str, T: 'static }) {
        // We have to prove that the specializing
        // impl's types can unify:
        &'a str = &'static str
    }
}

As it happens, the compiler today has logic that would let us deduce that, because we know that &'a str: 'static, then we know that 'a = 'static, and hence we could solve this clause successfully.

This rule also allows us to accept some cases where type parameters are repeated, though we’d have to upgrade chalk’s capability to let it prove those predicates fully. Consider this pair of impls from RFC 1210:

// Base impl:
impl<E, T> Extend<E, T> for Vec<E> where T: IntoIterator<Item=E> { .. }

// Specializing impl:
impl<'a, E> Extend<E, &'a [E]> for Vec<E> { .. }
//       ^         ^      ^ E repeated three times!

Here the specializing impl repeats the type parameter E three times! However, looking at the base impl, we can see that all of those repeats follow from the conditions on the base impl. The resulting chalk predicate would be:

// The fully general form of the specializing impl is
// > impl<A, 'b, C, D> Extend<A, &'b [C]> for Vec<D>
forall<A, 'b, C, D> {
    // Assuming the base impl applies:
    if (exists<E, T> { E = A, T = &'b [C], Vec<D> = Vec<E>, T: IntoIterator<Item=E> }) {
        // Can we prove the specializing impl unifications?
        exists<'a, E> {
            E = A,
            &'a [E] = &'b [C],
            Vec<E> = Vec<D>,
        }
    }
}

This predicate should be provable – but there is a definite catch. At the moment, these kinds of predicates fall outside the “Hereditary Harrop” (HH) predicates that Chalk can handle. HH predicates do not permit existential quantification and equality predicates as hypotheses (i.e., in an if (C) { ... }). I can however imagine some quick-n-dirty extensions that would cover these particular cases, and of course there are more powerful proving techniques out there that we could tinker with (though I might prefer to avoid that).

Extension: Reverse implied bounds rules

While the previous examples ought to be provable, there are some other cases that won’t work out without some further extension to Rust. Consider this pair of impls:

impl<T> Foo for T where T: Clone { }

impl<T> Foo for Vec<T> where T: Clone { }

Can we consider this second impl to be always applicable relative to the first? Effectively this boils down to asking whether knowing Vec<T>: Clone allows us to deduce that T: Clone – and right now, we can’t know that. The problem is that the impls we have only go one way. That is, given the following impl:

impl<T> Clone for Vec<T> where T: Clone { .. }

we get a program clause like

forall<T> { (Vec<T>: Clone) :- (T: Clone) }

but we need the reverse:

forall<T> { (T: Clone) :- (Vec<T>: Clone) }

This is basically an extension of implied bounds; but we’d have to be careful. If we just create those reverse rules for every impl, then it would mean that removing a bound from an impl is a breaking change, and that’d be a shame.

We could address this in a few ways. The most obvious is that we might permit people to annotate impls indicating that they represent minimal conditions (i.e., that removing a bound is a breaking change).

Alternatively, I feel like there is some sort of feature “waiting” out there that lets us make richer promises about what sorts of trait impls we might write in the future: this would be helpful also to coherence, since knowing what impls will not be written lets us permit more things in downstream crates. (For example, it’d be useful to know that Vec<T> will never be Copy.)

Extension: Designating traits as “specialization predicates”

However, even when we consider the base impl, and even if we have some solution to reverse rules, we still can’t cover the use case of having “overlapping blanket impls”, like these two:

impl<T> Skip for T where T: Read { .. }

impl<T> Skip for T where T: Read + Seek { .. }

Here we have a trait Skip that (presumably) lets us skip forward in a file. We can supply one default implementation that works for any reader, but it’s inefficient: it would just read and discard N bytes. It’d be nice if we could provide a more efficient version for those readers that implement Seek. Unfortunately, this second impl is not always applicable with respect to the first impl – it adds a new requirement, T: Seek, that does not follow from the bounds on the first impl nor the implied bounds.
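
For intuition, here is a sketch of what the two impl bodies might look like (the method name and signature are invented; the seekable impl is commented out because, without specialization, the overlap does not compile):

use std::io::{self, Read};

trait Skip {
    fn skip(&mut self, n: u64) -> io::Result<()>;
}

// Base impl: works for any reader, but must read
// and discard `n` bytes.
impl<T> Skip for T where T: Read {
    fn skip(&mut self, n: u64) -> io::Result<()> {
        io::copy(&mut self.by_ref().take(n), &mut io::sink())?;
        Ok(())
    }
}

// The more efficient impl we would like to add -- not always
// applicable, because `T: Seek` is a new bound that is neither
// implied nor present on the base impl:
// impl<T> Skip for T where T: Read + Seek {
//     fn skip(&mut self, n: u64) -> io::Result<()> {
//         self.seek(io::SeekFrom::Current(n as i64)).map(|_| ())
//     }
// }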

You might wonder why this is problematic in the first place. The danger is that some other crate might have an impl for Seek that places lifetime constraints, such as:

impl Seek for &'static Foo { }

Now at code generation time, we won’t be able to tell if that impl applies, since we’ll have erased the precise region.

However, what we could do is allow the Seek trait to be designated as a specialization predicate (perhaps with an attribute like #[specialization_predicate]). Traits marked as specialization predicates would be limited so that every one of their impls must be always applicable (our original predicate). This basically means that, e.g., a “reader” cannot conditionally implement Seek – it has to be always seekable, or never. When determining whether an impl is always applicable, we can ignore where clauses that pertain to #[specialization_predicate] traits.

Adding a #[specialization_predicate] attribute to an existing trait would be a breaking change; removing it would be one too. However, it would be possible to take existing traits and add “specialization predicate” subtraits. For example, if the Seek trait already existed, we might do this:

impl<T> Skip for T where T: Read { .. }

impl<T> Skip for T where T: Read + UnconditionalSeek { .. }

#[specialization_predicate]
trait UnconditionalSeek: Seek {
    fn seek_predicate(&self, n: usize) {
        self.seek(n);
    }
}

Now streams that implement seek unconditionally (probably all of them) can add impl UnconditionalSeek for MyStream { } and get the optimization. Not as automatic as we might like, but could be worse.

Default impls need not be always applicable

This last example illustrates an interesting point. RFC 1210 described not only specialization but also a more flexible form of defaults that go beyond default methods in trait definitions. The idea was that you can define lots of defaults using a default impl. So the UnconditionalSeek trait at the end of the last section might also have been expressed:

#[specialization_predicate]
trait UnconditionalSeek: Seek { }

default impl<T: Seek> UnconditionalSeek for T {
    fn seek_predicate(&self, n: usize) {
        self.seek(n);
    }
}

The interesting thing about default impls is that they are not (yet) a full impl. They only represent default methods that real impls can draw upon, but users still have to write a real impl somewhere. This means that they can be exempt from the rules about being always applicable – those rules will be enforced at the real impl point. Note for example that the default impl above is not always applicable, as it depends on Seek, which is not an implied bound anywhere.

Conclusion

I’ve presented a refinement of specialization in which we impose one extra condition on the specializing impl: not only must it be a subset of the base impl(s) that it specializes, it must be always applicable, which means basically that if we are given a set of types T where we know:

  • the base impl was proven by the type checker to apply to T
  • the types T were proven by the type checker to be well-formed
  • and the specialized impl unifies with the lifetime-erased versions of T

then we know that the specialized impl applies.

The beauty of this approach compared with past approaches is that it preserves the existing role of the type checker and the code generator. As today in Rust, the type checker always knows the full region details, but the code generator can just ignore them, and still be assured that all region data will be valid when it is accessed.

This implies for example that we don’t need to impose the restrictions that aturon discussed in their blog post: we can allow specialized associated types to be resolved in full by the type checker as long as they are not marked default, because there is no danger that the type checker and trans will come to different conclusions.

Thoughts?

I’ve opened an internals thread on this post. I’d love to hear whether you see a problem with this approach. I’d also like to hear about use cases that you have for specialization that you think may not fit into this approach.

Footnotes
  1. We don’t say it so much anymore, but in the olden days of Rust, the phrase “max min” was very “en vogue”; I think we picked it up from some ES6 proposals about the class syntax.

  2. Note: an impl is said to specialize itself.

  3. Let me give a shout out here to scalexm, who recently emerged with an elegant solution for how to model implied bounds in Chalk.


Air Mozilla: Bay Area Rust Meetup February 2018

Mozilla planet - fr, 09/02/2018 - 04:00

Bay Area Rust Meetup February 2018  - Matthew Fornaciari from Gremlin talking about a version of Chaos Monkey in Rust - George Morgan from Flipper talking about their embedded Rust...


Karl Dubost: Pour holy Web Compatibility in your CSS font

Mozilla planet - fr, 09/02/2018 - 02:00

Yet another Webcompat issue with characters being cut off at the bottom. It joins the other ones, such as cross characters not well centered in a rounded box, and many other cases. What is going on?

The sans-serif issue

All of these have the same pattern. They rely on the intrinsic font features to get the right design. So… this morning was another of this case. Take this very simple CSS rule:

.gsc-control-cse,
.gsc-control-cse .gsc-table-result {
    width: 100%;
    font-family: Arial, sans-serif;
    font-size: 13px;
}

Nothing fancy about it. It includes Arial, a widely used font and it gives a sans-serif fallback. It seems to be a sound and fail-safe choice.

Well… meet the land of mobile, where your font declaration doesn't seem that reliable anymore. Mobile browsers have different default fonts on Android.

The sans-serif doesn't mean the same thing in all browsers on the same OS.

For example, for sans-serif and western languages

  • Chrome: Roboto
  • Firefox: Clear Sans

If you use Chinese or Japanese characters, the default will be different.

Fix The Users Woes On Mobile

Why does this happen so often? Same story: the web developers didn't have the time or budget to test on all browsers. They probably tested on Chrome and Safari (iOS) and decided to take a pass on Firefox for Android. And because fonts have different features, they do not behave the same with regard to line-height, box sizes, and so on. Clear Sans and Roboto are different enough that this creates breakage on some sites.

If you test only on Chrome for Android (you should not), but let's say we have reached the shores of Friday… and it's time to deploy at 5pm, this is your fix:

.gsc-control-cse,
.gsc-control-cse .gsc-table-result {
    width: 100%;
    font-family: Arial, Roboto, sans-serif;
    font-size: 13px;
}

Name the fonts available on the mobile OSes you expect the design to work on. It's still not universally accessible and will not be reliable in all cases, but it will cover a lot of them. It will also make your Firefox for Android users less grumpy, and your Mondays will be brighter.

Otsukare!


Hacks.Mozilla.Org: Creating an Add-on for the Project Things Gateway

Mozilla planet - to, 08/02/2018 - 23:01

The Project Things Gateway exists as a platform to bring all of your IoT devices together under a unified umbrella, using a standardized HTTP-based API. Currently, the platform only has support for a limited number of devices, and we need your help expanding our reach! It is fairly straightforward to add support for new devices, and we will walk you through how to do so. The best part: you can use whatever programming language you’d like!

High-Level Concepts

Add-on

An Add-on is a collection of code that the Gateway runs to gain new features, usually a new adapter. This is loosely modeled after the add-on system in Firefox, where each add-on adds to the functionality of your Gateway in new and exciting ways.

Adapter

An Adapter is an object that manages communication with a device or set of devices. This could be very granular, such as one adapter object communicating with one GPIO pin, or it could be much more broad, such as one adapter communicating with any number of devices over WiFi. You decide!

Device

A Device is just that, a hardware device, such as a smart plug, light bulb, or temperature sensor.

Property

A Property is an individual property of a device, such as its on/off state, its energy usage, or its color.

Supported Languages

Add-ons have been written in Node.js, Python, and Rust so far, and official JavaScript and Python bindings are available on the gateway platform. If you want to skip ahead, you can check out the list of examples now. However, you are free to develop an add-on in whatever language you choose, provided the following:

  • Your add-on is properly packaged.
  • Your add-on package bundles all required dependencies that do not already exist on the gateway platform.
  • If your package contains any compiled binaries, they must be compiled for the armv6l architecture. All Raspberry Pi families are compatible with this architecture. The easiest way to do this would be to build your package on a Raspberry Pi 1/2/Zero.
Implementation: The Nitty Gritty

Evaluate Your Target Device

First, you need to think about the device(s) you’re trying to target.

  • Will your add-on be communicating with one or many devices?
  • How will the add-on communicate with the device(s)? Is a separate hardware dongle required?
    • For example, the Zigbee and Z-Wave adapters require a separate USB dongle to communicate with devices.
  • What properties do these devices have?
  • Is there an existing Thing type that you can advertise?
  • Are there existing libraries you can use to talk to your device?
    • You’d be surprised by how many NPM modules, Python modules, C/C++ libraries, etc. exist for communicating with IoT devices.

The key here is to gain a strong understanding of the devices you’re trying to support.

Start from an Example

The easiest way to start development is to start with one of the existing add-ons (listed further down). You can download, copy and paste, or git clone one of them into:

/home/pi/mozilla-iot/gateway/build/addons/

Alternatively, you can do your development on a different machine. Just make sure you test on the Raspberry Pi.

After doing so, you should edit the package.json file as appropriate. In particular, the name field needs to match the name of the directory you just created.

Next, begin to edit the code. The key parts of the add-on lifecycle are device creation and property updates. Device creation typically happens as part of a discovery process, whether that’s through SSDP, probing serial devices, or something else. After discovering devices, you need to build up their property lists, and make sure you handle property changes (that could be through events you get, or you may have to poll your devices). You also need to handle property updates from the user.

Restart the gateway process to test your changes:

$ sudo systemctl restart mozilla-iot-gateway.service

Test your add-on thoroughly. You can enable it through the Settings->Add-ons menu in the UI.

Get Your Add-on Published!

Run ./package.sh or whatever else you have to do to package up your add-on. Host the package somewhere, e.g. on GitHub as a release. Then submit a pull request or issue to the addon-list repository.

Notes
  • Your add-on will run in a separate process and communicate with the gateway process via nanomsg IPC. That should hopefully be irrelevant to you.
  • If your add-on process dies, it will automatically be restarted.
Examples

The Project Things team has built several add-ons that can serve as a good starting point and reference.

Node.js:

Python:

Rust:

References

Additional documentation, API references, etc., can be found here:

Find a bug in some of our software? Let us know! We’d love to have issues, or better yet, pull requests, filed to the appropriate Github repo.


Shing Lyu: Minimal React.js Without A Build Step (Updated)

Mozilla planet - to, 08/02/2018 - 22:39

Back in 2016, I wrote a post about how to write a React.js page without a build step. If I remember correctly, at that time the official React.js site had very little information about running React.js without Webpack, the in-browser Babel transpiler was not very stable, and JSXTransformer.js was being deprecated. After that post my focus turned to browser backend projects and I didn't touch React.js for a while. Now, 1.5 years later, while trying to update one of my React.js projects, I noticed that the official site has clearer instructions on how to use React.js without a build step. So I'm going to write an updated post here.

You can find the example code on GitHub.

1. Load React.js from CDN instead of npm

You can use the official minimal HTML template here. The most crucial bit is the importing of scripts:

<script src="https://unpkg.com/react@16/umd/react.development.js"></script>
<script src="https://unpkg.com/react-dom@16/umd/react-dom.development.js"></script>
<script src="https://unpkg.com/babel-standalone@6.15.0/babel.min.js"></script>

If you want better error messages, you might want to add the crossorigin attribute to the <script> tags, as suggested in the official document. Why this attribute, you ask? As described on MDN, it allows your page to log errors from CORS scripts loaded from the CDN.

If you are looking for better performance, load the *.production.min.js instead of *.development.js.

2. Get rid of JSX

I’m actually not that against JSX now, but if you don’t want to include the babel.min.js script, you can consider using the React.createElement function. Actually, all JSX elements are syntactic sugar for calling React.createElement(). Here are some examples:

<h1>Hello Word</h1>

can be written as

React.createElement('h1', null, 'Hello World')

And if you want to pass attributes around, you can do

<div onClick={this.props.clickHandler} data={this.state.data}>
  Click Me!
</div>

React.createElement('div', {
  'onClick': this.props.clickHandler,
  'data': this.state.data
}, 'Click Me!')

Of course you can have nested elements:

<div>
  <h1>Hello World</h1>
  <a>Click Me!</a>
</div>

React.createElement('div', null,
  React.createElement('h1', null, 'Hello World'),
  React.createElement('a', null, 'Click Me!')
)

You can read how this works in the official documentation.

3. Split the React.js code into separate files

In the official HTML template, they show how to write script directly in HTML like:

<html>
  <body>
    <div id="root"></div>
    <script type="text/babel">
      ReactDOM.render(
        <h1>Hello, world!</h1>,
        document.getElementById('root')
      );
    </script>
  </body>
</html>

But for real-world projects we usually don't want to throw everything into one big HTML file. So you can put everything between <script> and </script> into a separate JavaScript file, let's name it app.js, and load it in the original HTML like so:

<html>
  <body>
    <div id="root"></div>
    <script src="app.js" type="text/babel"></script>
  </body>
</html>

The pitfall here is that you must keep the type="text/babel" attribute if you want to use JSX. Otherwise the script will fail when it first reaches a JSX tag, resulting in an error like this:

SyntaxError: expected expression, got '<' app.js:2:2

Using 3rd-party NPM components

Modules with browser support

You can find tons of ready-made React components on NPM, but the quality varies. Some of them are released with browser support, for example Reactstrap, which contains Bootstrap 4 components wrapped in React. In its documentation you can see a “CDN” section with a CDN link, which should just work by adding it to a script tag:

<!-- react-transition-group is required by reactstrap -->
<script src="https://unpkg.com/react-transition-group@2.2.1/dist/react-transition-group.min.js" charset="utf-8"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/reactstrap/4.8.0/reactstrap.min.js" charset="utf-8"></script>

then you can find the components in a global variable Reactstrap:

<script type="text/babel" charset="utf-8">
  // "Import" the components from Reactstrap
  const {Button} = Reactstrap;

  // Render a Reactstrap Button element onto root
  ReactDOM.render(
    <Button color="danger">Hello, world!</Button>,
    document.getElementById('root')
  );
</script>

(In case you are curious, the first line is the destructuring assignment of objects in JavaScript.)

Of course it also works without JSX:

<script type="text/javascript" charset="utf-8">
  // "Import" the components from Reactstrap
  const {Button} = Reactstrap;

  // Render a Reactstrap Button element onto root
  ReactDOM.render(
    React.createElement(Button, {'color': 'danger'}, "Hello world!"),
    document.getElementById('root'),
  );
</script>

Modules without browser support

For modules without explicit browser support, you can still try to expose it to the browser with Browserify, as described in this post. Browserify is a tool that converts a Node.js module into something a browser can take. There are two tricks here:

  1. Use the --standalone option so Browserify will expose the component under the window namespace, so you don’t need a module system to use it.
  2. Use the browserify-global-shim plugin to strip all the usage of React and ReactDOM in the NPM module code, so it will use the React and ReactDOM we included using the <script> tags.

I’ll use a very simple React component on NPM, simple-react-modal, to illustrate this. First, we download this module to see what it looks like:

npm install simple-react-modal

If we go to node_modules/simple-react-modal, we can see a pre-built JavaScript package in the dist folder. Now we can install Browserify with npm install -g browserify. But we can't just run it yet, because the code uses require('react'), while we want to use our version loaded in the browser. So we need to run npm install browserify-global-shim and add the configuration to package.json:

// package.json
"browserify-global-shim": {
  "react": "React",
  "react-dom": "ReactDOM"
}

Now we can run

browserify node_modules/simple-react-modal \
  -o simple-react-modal-browser.js \
  --transform browserify-global-shim \
  --standalone Modal

We’ll get a simple-react-modal-browser.js file, which we can just load in the browser using the <script> tag. Then you can use the Modal like so:

<script type="text/javascript" charset="utf-8">
  // "Import" the Modal component
  const Modal = window.Modal.default;

  // Render a Modal element onto root
  ReactDOM.render(
    React.createElement(Modal, {
      'show': true,
      'closeOnOuterClick': true
    },
      React.createElement("h1", null, "Hello")
    ),
    document.getElementById('root')
  );
</script>

(There are some implementation detail about the simple-react-modal module in the above code, so don’t be worried if you don’t get everything.)

The benefits

Using this method, you can start prototyping by simply copying an HTML file. You don't need to install Node.js, NPM, and all the NPM modules that quickly bloat your small proof-of-concept page.

Secondly, this method is compatible with the React DevTools, which are available for both Firefox and Chrome, so debugging is much easier.

Finally, it's super easy to deploy the program. Simply drop the files onto any web server (or use GitHub Pages). The server doesn't even need to run Node and NPM; any plain HTTP server will be sufficient. Other people can also easily download the HTML file and start hacking. This is a very nice way to rapidly prototype complex UIs without spending an extra hour setting up all the build steps (and maybe wasting another 2 hours helping the team set up their environment).


Air Mozilla: Reps Weekly Meeting, 08 Feb 2018

Mozilla planet - to, 08/02/2018 - 17:00

Reps Weekly Meeting This is a weekly call with some of the Reps to discuss all matters about/affecting Reps and invite Reps to share their work with everyone.


Air Mozilla: Mozilla Science Lab February 2018 Bi-monthly Community-Call 20170208

Mozilla planet - to, 08/02/2018 - 17:00

Mozilla Science Lab February 2018 Bi-monthly Community-Call 20170208 Mozilla Science Lab Community Call, February 8


Mozilla Localization (L10N): L10N Report: February Edition

Mozilla planet - wo, 07/02/2018 - 22:31
Welcome!

New localizers

  • Kumar has recently joined us to localize in Angika. Welcome Kumar!
  • Francesca has joined Pontoon to localize Firefox in Friulan. Do you speak the language? Join her!

Are you a locale leader and want us to include new members in our upcoming reports? Contact us!

New community/locales added
  • Mixteco Yucuhiti (“meh”) locale was recently added to our l10n repositories and will soon have single-locale builds to test Firefox Android on!
  • Angika (“anp”) locale was added to Pontoon and will soon start to localize Focus for Android. Welcome!
  • Friulan (“fur”) has been enabled in Pontoon to localize Firefox, starting from old translations recovered from Pootle.
New content and projects

What’s new or coming up in Firefox desktop

Migration to FTL (Fluent)

In the past releases we reached a few small but important milestones for the Fluent project:

  • Firefox 58 was released on January 23 with the first ever Fluent string.
  • Firefox 59, which will be released on March 13, has 4 more Fluent strings. For this milestone we focused on the migration tools we created to seamlessly port translations from the old format (.properties, .DTD) to Fluent.

For Firefox 60, currently in Nightly, we aim to migrate as many strings as possible to Fluent for Firefox Preferences. The process for these migrations is detailed in this email to dev-l10n, and there are currently 2 patches almost ready to land, while a larger one for the General pane is in progress.

While Pontoon’s documentation already had a section dedicated to Fluent, constantly updated as the interface evolves, our documentation now has a section dedicated to Fluent for localizers, explaining the basic syntax and some of the specific features available in Gecko.

Plural forms

We already talked about plurals in the December report. The good news is that strings using the wrong number of plural forms are now reported on the l10n dashboard (example). Here’s a summary of all you need to know about plurals.

How plurals work in .properties files
Plural forms in Firefox and Firefox for Android are obtained using a hack on top of .properties files (plural forms are separated by a semicolon). For example:

#1 tab has arrived from #2;#1 tabs have arrived from #2

English has 2 plural forms, one for singular, and one for all other numbers. The situation is much more complex for other languages, reaching up to 5 or 6 plural forms. In Russian the same string has 3 forms, each one separated from the other by a semicolon:

С #2 получена #1 вкладка;С #2 получено #1 вкладки;С #2 получено #1 вкладок

The semicolon is a separator, not a standard punctuation element:

  • You should evaluate and translate each sentence separately. Some locales start the second sentence lowercase because of the semicolon, or with a leading space. Both are errors.
  • You shouldn’t replace the semicolon with a character from your script, or another punctuation sign (commas, periods). Again, that’s not a punctuation sign, it’s a separator.

Edge cases
Sometimes English only has one form, because the string is used for cases where the number is always bigger than 1.

;Close #1 tabs

Note that this string has still two plural forms, the first form (used for case ‘1’, or singular in English) is empty. That’s why the string starts with a semicolon. If your locale only has 1 form, you should drop the leading semicolon.

In other cases, the variable is indicated only in the second form:

Close one tab;Close #1 tabs

If your locale only has 1 form, or use the first case for more than ‘1’, use the second sentence as reference for your translation.

There are also cases of “poor” plural forms, where the plural is actually used as a replacement for logic, like “1 vs many”. These are bugs, and should be fixed. For example, this string was fixed in Firefox 59 (bug 658191).

Known limitations
Plural forms in Gecko are supported only in .properties files and JavaScript code (not C++).

What about devtools?
If your locale has more plural forms than English, and you’re copying and pasting English into DevTools strings, the l10n dashboard will show warnings.

You can ignore them, as there’s no way to exclude locales from DevTools, or fix them by creating the expected number of plural forms by copying the English text as many times as needed.

Future of plurals
With Fluent, plurals become much more flexible, allowing locales to create special cases beyond the number of forms expected for their language.

What’s new or coming up in mobile

You might have noticed that Focus (iOS/Android) has been on a hiatus since mid-December 2017. That’s because the small mobile team is focusing on Firefox for Amazon Fire TV development at the moment!

We should be kicking things off again some time in mid-February. A firm date is not confirmed yet, but stay tuned on our dev-l10n mailing list for an upcoming announcement!

In the meantime, this means we are not shipping new locales on Focus, and we won’t be generating screenshots until the schedule resumes.

For Firefox on Fire TV, we are still figuring out which locales are officially supported by Amazon, and we are going to set up the l10n repositories to open it up to Mozilla localizations. There should also be a language switcher in the works very soon.

Concerning the Firefox for iOS schedule, it’s almost time to kick-off l10n work for v11! Specific dates will be announced shortly – but expect strings to arrive towards the end of the month. March 29 will be the expected release date.

On the Firefox for Android front, we’ve now released v58. With this new version we bring you two new locales: Nepali (ne-NP) and Bengali from Bangladesh (bn-BD)!

We’re also in the process of adding Tagalog (tl), Khmer (km) and Mixteco Yucuhiti (meh) locales to all-locales to start Fennec single-locale builds.

What’s new or coming up in web projects
  • Marketing:
    • Firefox email: The team in charge of the monthly project targeting 6 locales will start following the standard l10n process: the email team will use Bugzilla to communicate the initial requests, Pontoon will host the content, and the l10n-drivers will send the request through the mailing list. Testing emails for verification purposes will be sent to those who worked on the project for the month. The process change has been communicated to the impacted communities. Thanks for responding so well to the change.
    • Regional single language request will also follow the standard process, moving localization tasks from Google docs to Pontoon. If you are pinged by marketing people for these requests through email or bugzilla, please let the l10n-drivers know. We want to make Pontoon the source of truth, the tool for community collaboration, for future localization references, consistency of terminology usage, for tracking contribution activity.
    • Mozilla.org has a slow start this year. Most updates have been cleanups and minor fixes. There have been discussions on redesigning the mozilla.org site so the entire site has a unified and modern look from one page to another. This challenges the current way of content delivery, which is at page level. More to share in the upcoming monthly reports.
  • AMO-Linter, a new project, is enabled on Pontoon. This feature targets add-on developers. As soon as information on the feature, the release cycle, and the staging server is available, the AMO documentation and Pontoon will be updated accordingly. In the meantime, report bugs by filing an issue.
  • Firefox Marketplace will be officially shut down on March 30th. Email communication was sent in English. However, a banner with the announcement was placed on the product in the top 5 languages.
What’s new or coming up in Foundation projects

Our 2017 fundraising campaign just finished, but we’re already kicking off this year’s campaign.
One area we want to improve is our communication with donors, so starting in February we will send a monthly donor newsletter. This will help us better communicate how donations are put to use, and build a trust relationship with our supporters.
We will also start raising money much earlier. Our first fundraising email will be a fun one for Valentine’s Day.

A quick update on other localized campaigns:

  • The *Privacy not included website is being redesigned to remove the holiday references, and some product reviews might be added soon.
  • We expect to have some actions this spring around GDPR in Europe, but there is no concrete plan yet.
  • We’ve got some news on the Copyright reform — the JURI Committee will be tentatively voting on March 27th, so we will do some promotion of our call tool over the next few weeks.

The final countdown has started for the Internet Health Report! The second edition is on its way and should be published in March, this time again in English, German, French and Spanish.

What’s new or coming up in Pontoon
  • On February 3, Pontoon passed 3,000 registered users. Congratulations to Balazs Zubak for becoming the 3,000th registered user of Pontoon!
  • We’re privileged to have VishalCR7, karabellyj and maiquynhtruong join the Pontoon community of contributors recently. Stay tuned for more details about the work they are doing coming up soon in a blog post!
Friends of the Lion

Image by Elio Qoshi

Shout out to Adrien G, aka Alpha, for his continuous dedication to French localization on Pontoon and his great progress! He is now an official team member, and we’re happy to have him take on more responsibilities. Congrats!

Know someone in your l10n community who’s been doing a great job and should appear here? Contact one of the l10n-drivers and we’ll make sure they get a shout-out (see list at the bottom)!

Useful Links Questions? Want to get involved?

Did you enjoy reading this report? Let us know how we can improve by reaching out to any one of the l10n-drivers listed above.


Hacks.Mozilla.Org: Forging Better Tools for the Web

Mozilla planet - wo, 07/02/2018 - 22:26
A Firefox DevTools Retrospective

2017 was a big year for Firefox DevTools. We updated and refined the UI, refactored three of the panels, squashed countless bugs, and shipped several new features. This work not only provides a faster and better DevTools experience, but lays the groundwork for some exciting new features and improvements for 2018 and beyond. We’re always striving to make tools and features that help developers build websites using the latest technologies and standards, including JavaScript frameworks and, of course, CSS Grid.

To better understand where we’re headed with Firefox Devtools, let’s take a quick look back.

2016

In 2016, the DevTools team kicked off an ambitious initiative to completely transition DevTools away from XUL and Firefox-specific APIs to modern web technologies. One of the first projects to emerge was debugger.html.

Debugger.html is not just an iteration of the old Firefox Debugger. The team threw everything out, created an empty repo, and set out to build a debugger from scratch that utilized reusable React components and a Redux store model.

The benefits of this modern architecture became obvious right away. Everything was more predictable, understandable, and testable. This approach also allows debugger.html to target more than just Firefox. It can target other platforms such as Chrome and Node.

We also shipped a new Responsive Design Mode in 2016 that was built using only modern web technologies.

2017

Last year, we continued to build on the work that was started in 2016 by building, and rebuilding parts of Firefox DevTools (and adding new features along the way). As a result, our developer tools are faster and more reliable. We also launched Firefox Quantum, which focused on browser speed, and performance.

Debugger

The debugger.html work that started in 2016 shipped to all channels with Firefox 56. We also added several new features and improvements, including better search tools, collapsed framework call-stacks, async stepping, and more.

Console

Just as with debugger.html, we shipped a brand-new Firefox console with Firefox Quantum. It has a new UI, and has been completely rewritten using React and Redux. This new console includes several new improvements such as the ability to collapse log groups, and the ability to inspect objects in context.

Network Monitor

We also shipped a new network monitor to all channels in Firefox 57. This new Network Monitor has a new UI, and is (you guessed it) built with modern web technologies such as React and Redux. It also has a more powerful filter UI, new Netmonitor columns, and more.

CSS Grid Layout Panel

Firefox 57 shipped with a new CSS Grid Layout Panel. CSS Grid is revolutionizing web design, and we wanted to equip designers and developers with powerful tools for building and inspecting CSS Grid layouts. You can read all about the panel features here; highlights include an overlay to visualize the grid, an interactive grid outline, displaying grid area names, and more.

Photon UI

We also did a complete visual refresh of the DevTools themes to coincide with the launch of Firefox Quantum and the new Photon UI. This refresh brings a design that is clean, slick, and easy to read.

2018 and Beyond

All of this work has set up an exciting future for Firefox DevTools. By utilizing modern web technologies, we can create, test, and deploy new features at a faster pace than when we were relying on XUL and Firefox-specific APIs.

So what’s next? Without giving too much away, here are just some of the areas we are focusing on:

Better Tools for Layouts and Design

This is 2018 and static designs made in a drawing program are being surpassed by more modern tools! Designing in the browser gives us the freedom to experiment, innovate, and build faster. Speaking with hundreds of developers over the past year, we’ve learned that there is a huge desire to bring better design tools to the browser.

We’ve been thrilled by overwhelmingly positive feedback around the CSS Grid Layout Panel and we’ve heard your requests for more tools that help design, build, and inspect layouts.

We are making a Firefox Inspector tool to make it easier to write Flexbox code. What do you want it to do the most? What’s the hardest part for you when struggling with Flexbox?
@jensimmons, 14 Nov 2017

I’m so pleased about this reaction to the Firefox Grid Inspector. That was the plan. We’ve just gotten started. More super-useful layout tools are coming in 2018.
@jensimmons, 24 Nov 2017

Better Tools for Frameworks

2017 was a banner year for JavaScript frameworks such as React and Vue. There are also older favorites such as Angular and Ember that continue to grow and improve. These frameworks are changing the way we build for the web, and we have ideas for how Firefox DevTools can better equip developers who work with frameworks.

An Even Better UI

The work on the Firefox DevTools UI will never be finished. We believe there is always room for improvement. We’ll continue to work with the Firefox Developer community to test and ship improvements.

New DevTools poll: Which of these three toolbar layouts do you prefer for the Network panel?
@violasong

More Projects on GitHub

We tried something new when we started building debugger.html. We decided to build the project in GitHub. Not only did we find a number of new contributors, but we received a lot of positive feedback about how easy it was to locate, manage, and work with the code. We will be looking for more opportunities to bring our projects to GitHub this year, so stay tuned.

Get Involved

Have an idea? Found a bug? Have a (gasp) complaint? We will be listening very closely to devtools users as we move into 2018 and we want to hear from you. Here are some of the ways you can join our community and get involved:

Join us on Slack

You can join our devtools.html Slack community. We also hang out on the #devtools channel on irc.mozilla.org

Follow us on Twitter

We have an official account that you can follow, but you can also follow various team members who will occasionally share ideas and ask for feedback. Follow @FirefoxDevTools here.

Contribute

If you want to get your hand dirty, you can become a contributor:

List of open bugs
GitHub

Download Firefox Developer Edition

Firefox Developer Edition is built specifically for developers. It provides early access to all of the great new features we have planned for 2018.

Thank you to everyone who has contributed so far. Your tweets, bug reports, feedback, criticisms, and suggestions matter and mean the world to us. We hope you’ll join us in 2018 as we continue our work to build amazing tools for developers.


Air Mozilla: Bugzilla Project Meeting, 07 Feb 2018

Mozilla planet - wo, 07/02/2018 - 22:00

Bugzilla Project Meeting The Bugzilla Project Developers meeting.


Wladimir Palant: Easy Passwords is now PfP: Pain-free Passwords

Mozilla planet - wo, 07/02/2018 - 20:28

With the important 2.0 milestone I decided to give my Easy Passwords project a more meaningful name. So now it is called PfP: Pain-free Passwords and even has its own website. And that’s the only thing most people will notice, because the most important changes in this release are well-hidden: the crypto powering the extension got an important upgrade. First of all, the PBKDF2 algorithm for generating passwords was dumped in favor of scrypt which is more resistant to brute-force attacks. Also, all metadata written by PfP as well as backups are encrypted now, so that they won’t even leak information about the websites used. Both changes required much consideration and took a while to implement, but now I am way more confident about the crypto than I was back when Easy Passwords 1.0 was released. Finally, there is now an online version compiled from the same source code as the extensions and having mostly the same functionality (yes, usability isn’t really great yet, the user interface wasn’t meant for this use case).

Now that the hard stuff is out of the way, what’s next? The plan for the next release is publishing PfP for Microsoft Edge (it’s working already but I need to figure out the packaging), adding sync functionality (all encrypted just like the backups, so that in theory any service where you can upload files could be used) and importing backups created with a different master password (important as a migration path when you change your master password). After that I want to look into creating an Android client as well as a Node-based command line interface. These new clients had to be pushed back because they are most useful with sync functionality available.


Air Mozilla: Weekly SUMO Community Meeting, 07 Feb 2018

Mozilla planet - wo, 07/02/2018 - 18:00

Weekly SUMO Community Meeting This is the SUMO weekly call


The Mozilla Blog: Announcing the Reality Redrawn Challenge Winners!

Mozilla planet - wo, 07/02/2018 - 17:01

I’m delighted to announce the winners of Mozilla’s Reality Redrawn Challenge after my fellow judges and I received entries from around the globe. Since we issued the challenge just two months ago we have been astonished by the quality and imagination behind proposals that use mixed reality and other media to make the power of misinformation and its potential impacts visible and visceral.

If you have tried to imagine the impact of fake news – even what it smells like – when it touches your world, I hope you will come to experience the Reality Redrawn exhibit at the Tech Museum of Innovation in San Jose. Our opening night runs from 5-9pm on May 17th and free tickets are available here. Keep an eye on Twitter @mozilla with the hashtag #RealityRedrawn for more details in the coming weeks. After opening night you can experience the exhibit in normal daily museum hours for a limited engagement of two weeks, 10am-5pm. We will be looking to bring the winning entries to life also for those who are not in the Bay Area.

The winner of our grand prize of $15,000 is Yosun Chang from San Francisco with Bubble Chaos. Yosun has won many competitions including the Salesforce Dreamforce 2011 Hackathon, Microsoft Build 2016 Hackathon and TechCrunch Disrupt 2016 Hackathon. She will use augmented reality and virtual reality to create an experience that allows the user to interact with misinformation in a creative new way.

Yosun says of her entry: “We iPhoneX face track a user’s face to puppeteer their avatar, then bot and VR crowdsource lipreading that avatar to form political sides. This powers the visuals of a global macroscopic view showing thousands of nodes transmitting to create misinformation. We present also the visceral version where the user can try to “echo” their scented-colored bubble in a “bubble chamber” to make the room smell like their scent with multiple pivoting SensaBubble machines.”

Our second prize joint semi-finalist is Stu Campbell (aka Sutu) from Roeburne in Western Australia. Sutu will receive $7,500 to complete the creation of his entry FAKING NEWS. He is known for ‘Nawlz’, a 24 episode interactive cyberpunk comic book series created for web and iPad. In 2016 he was commissioned by Marvel and Google to create Tilt Brush Virtual Reality paintings. He was also the feature subject of the 2014 documentary, ‘Cyber Dreaming’.

As Sutu explains: “The front page of a newspaper will be reprinted in a large format and mounted to the museum wall. Visitors will also find physical copies of the paper in the museum space. Visitors will be encouraged to download our EyeJack Augmented Reality app and then hold their devices over the paper to see the story augment in real time. Small fake news bots will animate across the page, rearranging and deleting words and inserting news words and images. The audience then has the option to share the new augmented news to their own social media channels, thus perpetuating its reach.”

Mario Ezekiel Hernandez from Austin also receives $7,500 to complete his entry, Where You Stand. Mario graduated from Texas State University in 2017 with a degree in Applied Mathematics. He currently works as a data analyst and is a member of the interactive media arts collective vûrv.

Mario’s entry uses TouchDesigner, Python, R, OpenCV, web cameras, projectors, and a Mac mini. Mario says of his entry: “Our solution seeks to shine a light on the voices of policymakers and allow participants to freely explore the content that is being promoted by their legislative representatives. The piece dynamically reacts to actor locations. As they move along the length of the piece tweets from each legislator are revealed and hidden. To highlight the polarization we group the legislators by party alignment so that the most partisan legislators are located at the far ends of the piece. As participants move away from the middle in either direction, they will see more tweets from increasingly partisan legislators.”

Emily Saltz, a UX designer at Bloomberg LP, will be traveling from New York with her entry Filter Bubble Roulette after receiving prize money of $5,000. Previously she was UX and Content Strategist at Pop Up Archive, an automatic speech recognition service and API acquired by Apple.

Emily says of her entry: “This social webVR platform plays into each user’s curiosity to peek into other social media filter bubbles, using content pulled from social media as conversational probes. It will enable immersive connection between people across diverse social and political networks. The project is based on the hypotheses that 1) users are curious to peek into the social media universes of others, 2) it’s harder to be a troll when you’re immersed in someone else’s 3D space, and 3) viewing another person’s filter bubble in context of their other interests will enable more reflection and empathy between groups.”

Rahul Bhargava is a researcher and technologist specializing in civic technology and data literacy at the MIT Center for Civic Media. There he leads technical development on projects ranging from interfaces for quantitative news analysis, to platforms for crowd-sourced sensing. Based in Boston, he also won $5,000 to create his entry Gobo: understanding social media algorithms.

Rahul says of his entry: “The public lacks a basic understanding of the algorithm-driven nature of most online platforms. In parallel, technology companies generally place blind trust in algorithms as “neutral” actors in content promotion. Our idea tackles this perfect storm with a card-driven interactive piece, where social media content is scored with a variety of algorithms and prompts to discuss how those can drive content filtering and promotion. Visitors are engaged to use these scores as inputs to construct their own meta-algorithm, deciding whether things like “gender” detection, “rudeness” ranking, or “sentiment” analysis would drive which content they want to see.”

The Reality Redrawn Challenge is part of the Mozilla Information Trust Initiative announced last year to build a movement to fight misinformation online. The initiative aims to stimulate work towards this goal on products, research, literacy and creative interventions.

The post Announcing the Reality Redrawn Challenge Winners! appeared first on The Mozilla Blog.

Categorieën: Mozilla-nl planet

Mozilla Thunderbird: What Thunderbird Learned at FOSDEM

Mozilla planet - wo, 07/02/2018 - 15:23

Hello everyone! I’m writing this following a visit to Brussels this past weekend for the Free and Open Source Software conference called FOSDEM. As far as I know it is one of the largest, if not the largest, FOSS conferences in Europe. It proved to be a great opportunity to discuss Thunderbird with a wide range of contributors, users, and interested developers – and the feedback I received at the event was fantastic (and helpful)!

First, some background: the Thunderbird team was stationed in the Mozilla booth, on the second floor of building K. We were next to the Apache Software Foundation and the Kopano Collaborative software booths (the Kopano folks gave us candy with “Mozilla” printed on it – very cool). We had hundreds of people stop by the booth and I got to ask a bunch of them what they thought of Thunderbird. Below are some insights I gained from talking to the FOSDEM attendees.

Feedback from FOSDEM

1. I thought the project was dead. What’s the plan for the future of Thunderbird?

This was the number one thing I heard repeatedly throughout the conference. This is not surprising: while the project has remained active following its split from Mozilla Corp, it has not been seen to push boundaries or make a lot of noise about its own initiatives. We, as the Thunderbird community, should be planning for the future and what that looks like – once we have a concrete roadmap, we should share it with the world to solicit interest and enthusiasm.

For fear of this question being misunderstood: it was never asked with malevolent intent or in a dismissive way (as far as I could tell). Most of the people who commented on the project being dead were generally interested in using Thunderbird (or still were), but didn’t realize anyone was actively doing development. Many people shared their relief, saying: “I was planning on having to move to something else for a mail client, but now that I’ve seen the project making plans, I’m going to stay with it.”

Currently, we have a lot to talk about regarding the future of Thunderbird. We have made new hires (yours truly included), we are hiring a developer to work on various parts of the project, and we are working with organizations like Monterail to get feedback on the interface. With the upcoming Thunderbird Council elections, the Community will get an opportunity to shape the leadership of the project as well.

2. I would like to see a mobile app.

The second most prevalent thing expressed to me at FOSDEM was the desire for a Thunderbird mobile app. When I asked what that might look like, the answers were uniformly along the lines of: “There is not a really good, open source email client on mobile. Thunderbird seems like a great project with the expertise to solve that.”

3. Where’s the forum?

I heard this a few times and was surprised at how adamant the people asking were. They pointed out that they were Thunderbird users, but weren’t really into mailing lists. It was reiterated to me a handful of times that Discourse allows you to respond via email or the website. As a result, I have begun working on setting something up.

The biggest barrier I see to making a forum a core part of the community effort is getting buy-in from MOST of the current contributors to the project. So, over the next week I’m going to try to get an idea of who is interested in participating and who is opposed.

4. I want built-in Encryption

This was a frequent request, asked in two forms: first, “How can I encrypt my Thunderbird Email?” and second, “Can you make encryption a default feature?” The frequency with which this was asked indicates that it is important to this segment of our users (open source, technical).

To those who are curious as to how to encrypt your mail currently: the answer is to use the Enigmail extension. In the future, we may be able to make this easier by building it into Thunderbird and making it possible to enable in the settings. But that is a discussion that the community and developers need to explore further.

Final Thoughts

In closing, I heard a great many things beyond the four key points above – but many were thoughts on specific bugs people experienced (you can file bugs here), or just comments on how people use mostly webmail these days. On that second point, I heard it so frequently that I began to wonder what more we could offer as a project to provide added value over what GMail, Inbox, and Outlook365 offer.

All around, FOSDEM was a great event: I met great people, heard amazing talks, and got to spread the good word of Thunderbird. I would love to hear the community’s ideas on what I heard – that means you, so please leave a comment below.

Categorieën: Mozilla-nl planet

Mozilla Marketing Engineering & Ops Blog: MDN Changelog for January 2018

Mozilla planet - wo, 07/02/2018 - 01:00

Here’s what happened in January to the code, data, and tools that support MDN Web Docs, along with the plan for February.

Done in January

Completed CSS Compatibility Data Migration and More

Thanks to Daniel D. Beck and his 83 Pull Requests, the CSS compatibility data is migrated to the browser-compat-data repository. This finishes Daniel’s current contract, and we hope to get his help again soon.

The newly announced MDN Product Advisory Board supports the Browser Compatibility Data project, and members are working to migrate more data. In January, we saw an increase in contributions, many from first-time contributors. The migration work jumped from 39% to 43% complete in January. See the contribution guide to learn how to help.
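
To give a sense of the data being migrated, here is a simplified sketch of the shape of a browser-compat-data entry, rendered as a Python dict for illustration. The real repository stores JSON with more fields, and the feature name and version numbers below are made up:

# Simplified, illustrative sketch of a browser-compat-data entry.
# The real repository stores JSON and has more fields; the feature
# name and version numbers here are hypothetical.
compat_entry = {
    "css": {
        "properties": {
            "example-property": {              # hypothetical feature
                "__compat": {
                    "support": {
                        "chrome":  {"version_added": "57"},
                        "firefox": {"version_added": "52"},
                        "safari":  {"version_added": False},  # not supported
                    }
                }
            }
        }
    }
}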

On January 23, we turned on the new browser compatibility tables for all users. The new presentation provides a good overview of feature support across desktop and mobile browsers, as well as JavaScript run-time environments like Node.js, while still letting implementors dive into the details.

Florian Scholz promoted the project with a blog post, and highlighted the compat-report addon by Eduardo Bouças that uses the data to highlight compatibility issues in a developer tools tab. Florian also gave a talk about the project on February 3 at FOSDEM 18. We’re excited to tell people about this new resource, and see what people will do with this data.


Shipped a New Method for Declaring Language Preference

If you use the language switcher on MDN, you’ll now be asked if you want to always view the site in that language. This was added by Safwan Rahman in PR 4321.


This preference goes into effect for our “locale-less” URLs. If you access https://developer.mozilla.org/docs/Web/HTML, MDN uses your browser’s preferred language, as set by the Accept-Language header. If it is set to Accept-Language: en-US,en;q=0.5, then you’ll get the English page at https://developer.mozilla.org/en-US/docs/Web/HTML, while Accept-Language: de-CH will send you to the German page at https://developer.mozilla.org/de/docs/Web/HTML. If you’ve set a preference with this new dialog box, the Accept-Language header will be ignored and you’ll get your preferred language for MDN.
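
As a rough illustration of how that negotiation works, here is a minimal Python sketch of picking a locale from an Accept-Language header. This is not MDN’s actual implementation, and the supported-locale list is a made-up subset:

# Minimal sketch of Accept-Language negotiation (illustrative only,
# not MDN's actual code). SUPPORTED_LOCALES is a made-up subset.
SUPPORTED_LOCALES = {"en-US", "de", "fr"}

def pick_locale(accept_language, default="en-US"):
    """Return the best supported locale for an Accept-Language header."""
    candidates = []
    for part in accept_language.split(","):
        pieces = part.strip().split(";")
        lang = pieces[0].strip()
        quality = 1.0
        if len(pieces) > 1 and pieces[1].strip().startswith("q="):
            quality = float(pieces[1].strip()[2:])
        candidates.append((quality, lang))
    for _, lang in sorted(candidates, reverse=True):
        if lang in SUPPORTED_LOCALES:
            return lang
        base = lang.split("-")[0]   # "de-CH" can fall back to "de"
        for locale in SUPPORTED_LOCALES:
            if locale.split("-")[0] == base:
                return locale
    return default

print(pick_locale("en-US,en;q=0.5"))   # en-US
print(pick_locale("de-CH"))            # de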

This preference is useful for MDN visitors who like to browse the web in their native language but read MDN in English. It doesn’t fix the issue entirely, though. If a search engine thinks you prefer German, for instance, it will pick the German translations of MDN pages, and send you to https://developer.mozilla.org/de/docs/Web/HTML. MDN respects the link and shows the German page, and the new language preference is not used.

We hope this makes MDN a little easier to use, but more will be needed to satisfy those who get the “wrong” page. I’m not convinced there is a solution that will work for everyone. I’ve suggested a web extension in bug 1432826, to allow configurable redirects, but it is unclear if this is the right solution. We’ll keep thinking about translations, and adjusting to visitors’ preferences.

Increased Availability of MDN

MDN easily serves millions of visitors a month, but struggles under some traffic patterns, such as a single visitor requesting every page on the site. We continue to make MDN more reliable despite these traffic spikes, using several different strategies.

The most direct method is to limit the number of requests. We’ve updated our rate limiting to return the HTTP 429 “Too Many Requests” code (PR 4614), to more clearly communicate when a client hits these limits. Dave Parfitt automated bans for users making thousands of requests a minute – far more than any legitimate scraper.
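
The idea behind that kind of limit can be sketched in a few lines of Python. This toy fixed-window counter is an illustration only; the window and threshold are made-up numbers, not MDN’s real configuration:

import time
from collections import defaultdict

WINDOW_SECONDS = 60      # made-up window size
MAX_REQUESTS = 400       # made-up threshold

_counters = defaultdict(lambda: [0.0, 0])   # client -> [window_start, count]

def check_rate_limit(client_ip):
    """Return an HTTP status code: 429 once a client exceeds its budget."""
    now = time.time()
    window_start, count = _counters[client_ip]
    if now - window_start >= WINDOW_SECONDS:
        _counters[client_ip] = [now, 1]      # start a fresh window
        return 200
    if count >= MAX_REQUESTS:
        return 429                           # Too Many Requests
    _counters[client_ip][1] = count + 1
    return 200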

Another strategy is to reduce the database load for each request, so that high traffic doesn’t slow down the database and all the page views. We’re reducing database usage by changing how async processes store state (PR 4615) and using long-lasting database connections to reduce time spent establishing per-request connections (PR 4644).
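
For the long-lasting connections, the relevant knob in Django is CONN_MAX_AGE. Here is a hedged settings.py sketch with illustrative values; the engine, database name and host are placeholders, not MDN’s real configuration:

# settings.py sketch: reuse database connections across requests
# instead of reconnecting for each one. All values are placeholders.
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.mysql",
        "NAME": "kuma",                      # hypothetical database name
        "HOST": "db.example.internal",       # hypothetical host
        "CONN_MAX_AGE": 600,                 # keep a connection up to 10 minutes
    }
}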

Safwan Rahman took a close look at the database usage for wiki pages, and made several changes to reduce both the number of queries and the size of the data transmitted from the database (PR 4630). This last change has significantly reduced the network traffic to the database.


All of these add up to a 10% to 15% improvement in server response time from December’s performance.

Ryan Johnson continued work on the long-term solution: serving MDN content from a CDN. This requires getting our caching headers just right (PR 4638). We hope to start shipping this in February. At that point, a high-traffic user may still slow down the servers, but most people will quickly get their content from the CDN instead.
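
Getting the headers “just right” means telling shared caches how long they may serve a page. Here is a minimal Django-style sketch; the header values are illustrative, not the ones MDN shipped:

# Sketch: caching headers that allow a CDN (a shared cache) to serve
# a page. The max-age values are illustrative only.
from django.http import HttpResponse

def wiki_document(request):
    response = HttpResponse("<html>...document body...</html>")
    # max-age applies to browsers; s-maxage applies to shared caches
    # such as a CDN.
    response["Cache-Control"] = "public, max-age=300, s-maxage=86400"
    return response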

Shipped Tweaks and Fixes

There were 326 PRs merged in January, 67 of them from first-time contributors.

Planned for February

Continue Development Projects

In February, we’ll continue working on our January projects. Our plans include:

  • Converting more compatibility data
  • Serving developer.mozilla.org from a CDN
  • Updating third-party libraries for compatibility with Django 1.11
  • Designing interactive examples for more complex scenarios
  • Preparing for a team meeting and “Hack on MDN” event in March

See the December report for more information on these projects.

Categorieën: Mozilla-nl planet

K Lars Lohn: Lars and the Real Internet of Things - Part 1

Mozilla planet - ti, 06/02/2018 - 22:08
This is the first in a series of blog postings about the Internet of Things (IoT). I'm going to cover some history, then talk about and demonstrate Mozilla's secure, privacy-protecting Things Gateway, and finally talk about writing the software for my own IoT devices to work with the Things Gateway.


First, though, my history with home automation:

When I was a teenager in the 1970s, I had an analog alarm clock with an electrical outlet on the back labeled "coffee". About ten minutes before the alarm would go off, it would turn on the power to the outlet. This was apparently to start a coffee maker that had been set up the night before. I, instead, used the outlet to turn on my record player so I could wake to music of my own selection. Ten years after the premiere of the Jetsons' automated utopia, this was the extent of home automation available to the average consumer.

By the late 1970s and into the 1980s, the landscape changed in consumer home automation.  A Scottish electronics company conceived of a remote control system that would communicate over power lines.  By the mid 1980s, the X10 system of controllers and devices was available at Radio Shack and many other stores.

I was an early adopter of this technology, automating lamps, ceiling lights and porch lights.  After the introduction of an RS-232 controller that allowed the early MS-DOS PCs to control lights, I was able to get porch lights to follow sunrise, sunset and daylight savings rules.

X10 was unreliable. Because it communicated over power lines, encoding its data into the momentary zero voltage between the peaks of alternating current, it maxed out at about 20 bits per second. Nearly anything could garble communication: the dishwasher, the television, the neighbor's electric drill. Many of the components were poorly manufactured. Wall switches not only completely lacked style and ergonomics, but would last only a year or so before requiring replacement. In 1990, a power surge during a thunderstorm wiped out almost all of my X10 devices. I was done with X10; it was too expensive and unreliable.

For the next twenty years, I lived just fine without home automation, but the industry advanced. Insteon, Z-Wave and Zigbee were all invented in the 2000s for home automation. Their high cost, and my soured experience with X10, kept me away.

In the last ten years, there has been a renaissance in home automation in connection with the Internet: the Internet of Things. I looked at the new options and saw they were still expensive, and that they had new flaws: security and privacy. I bought a couple of the Belkin Wemo devices that I could control with my iPhone and found they were, like X10, unreliable. Sometimes they'd work and sometimes they wouldn't. Then in 2013, a security flaw was found that could result in someone else taking control, or even invading the home network. The Wemo devices required a firmware security update, and after hurting my back crawling behind the couch to do the update, I decided they were not worth the effort. The Wemo devices were added to my local landfill.


I watched from the sidelines as more and more companies jumped into the IoT field. A little research showed how Z-Wave and Zigbee devices could be more secure, but with two competing, incompatible standards, how could I decide? I didn't want to buy the wrong thing and then suffer an orphaned system. I couldn't justify the expense.

What really got me interested again was the Philips Hue system of color-changeable lights. But the cost, coupled with Philips' on-again, off-again willingness to allow third-party products to interact with their hub, forestalled my adoption.

I held back until the Samsung SmartThings device was introduced. Here was a smart home hub that could talk to both Z-Wave and Zigbee devices. I added one to my Amazon shopping cart along with a number of lamp controller switches. I didn't press the buy button because I was looking for the flaw. Of course, there was one, a big one: the Internet was required. Since it relied on mobile phones to control the smart home hub, if the Internet was down, control of the devices stopped. Or so it seemed; the documentation was vague on the subject. I finally confirmed it by talking with an acquaintance who had the system. This system was not for me.

I was again an IoT wallflower, longing to dance but unwilling to step onto the dance floor.

In December of 2017, however, I saw a demonstration of a new experimental system from Mozilla called the Things Gateway. It offers protocol-agnostic control over IoT devices: it can control Z-Wave and Zigbee devices at the same time. The software runs on a computer, even a Raspberry Pi. Because it offers a web server on the local home network, any web browser on a phone, tablet or desktop machine at home can control it. Unlike most commercial IoT controllers, if the Internet is out, I can still control things while I'm home. As a plus, Mozilla offers a secure method of reaching the local Things Gateway web server from the Internet. For many folks, controlling things while away from home is important; for me, I could do without that feature.

The final convincing argument?  It's open source and completely customizable.   I cannot resist any longer.

My next blog posting will walk through the process of downloading and setting up a Mozilla Things Gateway.   I'll show how I connected  Z-Wave, Zigbee and Philips Hue lights into one smart home network.  Subsequent postings will show how I can use the Python programming language to enable new devices to join the Internet of Things.

I'm quite excited about this project.

Mozilla Things Gateway

Mozilla Hacks Blog about Things Gateway


Categorieën: Mozilla-nl planet

Hacks.Mozilla.Org: How to build your own private smart home with a Raspberry Pi and Mozilla’s Things Gateway

Mozilla planet - ti, 06/02/2018 - 18:07

Last year we announced Project Things by Mozilla. Project Things is a framework of software and services that can bridge the communication gap between connected devices by giving “things” URLs on the web.

Today I’m excited to tell you about the latest version of the Things Gateway and how you can use it to directly monitor and control your home over the web, without a middleman. Instead of installing a different mobile app for every smart home device you buy, you can manage all your devices through a single secure web interface. This blog post will explain how to build your own Web of Things gateway with a Raspberry Pi and use it to connect existing off-the-shelf smart home products from various different brands using the power of the open web.

There are lots of exciting new features in the latest version of the gateway, including a rules engine for setting ‘if this, then that’ style rules for how things interact, a floorplan view to lay out devices on a map of your home, experimental voice control and support for many new types of “things”. There’s also a brand new add-ons system for adding support for new protocols and devices, and a new way to safely authorise third party applications to access your gateway.

Hardware

The first thing to do is to get your hands on a Raspberry Pi® single board computer. The latest Raspberry Pi 3 has WiFi and Bluetooth support built in, as well as access to GPIO ports for direct hardware connections. This is not essential as you can use alternative developer boards, or even your laptop or desktop computer, but it currently provides the best experience.

If you want to use smart home devices using other protocols like Zigbee or Z-Wave, you will need to invest in USB dongles. For Zigbee we currently support the Digi XStick (ZB mesh version). For Z-Wave you should be able to use any OpenZWave compatible dongle, but so far we have only tested the Sigma Designs UZB Stick and the Aeotec Z-Stick (Gen5). Be sure to get the correct device for your region as Z-Wave operating frequencies can vary between countries.

You’ll also need a microSD card to flash the software onto! We recommend at least 4GB.

Then there’s the “things” themselves. The gateway already supports many different smart plugs, sensors and smart bulbs from lots of different brands using Zigbee, Z-Wave and WiFi. Take a look at the wiki for devices which have already been tested. If you would like to contribute, we are always looking for volunteers to help us test more devices. Let us know what other devices you’d like to see working and consider building your own adapter add-on to make it work! (see later).

If you’re not quite ready to splash out on all this hardware, but you want to try out the gateway software, there’s now a Virtual Things add-on you can install to add virtual things to your gateway.

Software

Next you’ll need to download the Things Gateway 0.3 software image for the Raspberry Pi and flash it onto your SD card. There are various ways of doing this but Etcher is a graphical application for Windows, Linux and MacOS which makes it easy and safe to do.

If you want to experiment with the gateway software on your laptop or desktop computer, you can follow the instructions on GitHub to download and build it yourself. We also have an experimental OpenWrt package and support for more platforms is coming soon. Get in touch if you’re targeting a different platform.

First Time Setup

Before booting up your gateway with the SD card inserted, ensure that any Zigbee or Z-Wave USB dongles are plugged in.

When you first boot the gateway, it acts as a WiFi hotspot broadcasting the network name (SSID) “Mozilla IoT Gateway”. You can connect to that WiFi hotspot with your laptop or smartphone, which should automatically direct you to a setup page. Alternatively, you can connect the Raspberry Pi directly to your network using a network cable and type gateway.local into your browser to begin the setup process.

First, you’re given the option to connect to a WiFi network.

If you choose to connect to a WiFi network you’ll be prompted for the WiFi password and then you’ll need to make sure you’re connected to that same network in order to continue setup.

Next, you’ll be asked to choose a unique subdomain for your gateway, which will automatically generate an SSL certificate for you using LetsEncrypt and set up a secure tunnel to the Internet so you can access the gateway remotely. You’ll be asked for an email address so you can reclaim your subdomain in future if necessary. You can also choose to use your own domain name if you don’t want to use the tunneling service, but you’ll need to generate your own SSL certificate and configure DNS yourself.

You will then be securely redirected to your new subdomain and you’ll be prompted to create your user account on the gateway.

You’ll then automatically be logged into the gateway and will be ready to start adding things. Note that the gateway’s web interface is a Progressive Web App that you can add to homescreen on your smartphone with Firefox.

Adding Things

To add devices to your gateway, click on the “+” icon at the bottom right of the screen. This will put all the attached adapters into pairing mode. Follow the instructions for your individual device to pair it with the gateway (this often involves pressing a button on the device while the gateway is in pairing mode).

Devices that have been successfully paired with the gateway will appear in the add device screen and you can give them a name of your choice before saving them on the gateway.

The devices you’ve added will then appear on the Things screen.

You can turn things on and off with a single tap, or click on the expand button to go to an expanded view of all the thing’s properties. For example, a smart plug has an on/off switch and reports its current power consumption, voltage, current and frequency.

With a dimmable colour light, you can turn the light on and off, set its colour, and set its brightness level.
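
Everything in that expanded view is also reachable over the gateway’s REST API. As a hedged sketch – the subdomain, thing id and access token below are placeholders, and the URL scheme follows the Web Thing API draft rather than verified gateway behaviour – toggling a smart plug from Python might look like this:

# Hedged sketch of setting a thing's "on" property via the gateway's
# REST API. Subdomain, thing id and token are hypothetical placeholders.
import requests

GATEWAY = "https://mygateway.mozilla-iot.org"   # placeholder subdomain
TOKEN = "YOUR_ACCESS_TOKEN"                     # placeholder token from the gateway
HEADERS = {
    "Authorization": "Bearer " + TOKEN,
    "Content-Type": "application/json",
}

def set_on(thing_id, value):
    """Set a thing's 'on' property to True or False."""
    url = "{}/things/{}/properties/on".format(GATEWAY, thing_id)
    response = requests.put(url, json={"on": value}, headers=HEADERS)
    response.raise_for_status()

set_on("smart-plug-1", True)   # turn the (hypothetical) plug on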

Rules Engine

By clicking on the main menu you can access the rules engine.

The rules engine allows you to set ‘if this, then that’ style rules for how devices interact with each other. For example, “If Smart Plug A turns on, turn on Smart Plug B”.
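
Conceptually, each rule is just a trigger paired with an effect. Here is a hypothetical data sketch of the “If Smart Plug A turns on, turn on Smart Plug B” rule; the gateway’s internal rule format may differ:

# Hypothetical sketch of a rule as data; the gateway's internal
# representation may differ. Thing ids are placeholders.
rule = {
    "name": "Plug B follows Plug A",
    "trigger": {"thing": "smart-plug-a", "property": "on", "equals": True},
    "effect":  {"thing": "smart-plug-b", "property": "on", "set": True},
}

def evaluate(rule, properties, set_property):
    """Apply the rule's effect whenever its trigger condition holds."""
    trigger, effect = rule["trigger"], rule["effect"]
    if properties[trigger["thing"]][trigger["property"]] == trigger["equals"]:
        set_property(effect["thing"], effect["property"], effect["set"])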

To create a rule, first click the “+” button at the bottom right of the rules screen. Then drag and drop things onto the screen and select the properties of the things you wish to connect together.

You can give your rule a name and then click back to get back to the rules screen where you’ll see your new rule has been added.

Floorplan

Clicking on the “floorplan” option from the main menu allows you to arrange devices on a floorplan of your home. Click the edit button at the bottom right of the screen to upload a floorplan image.

You’ll need to create the floorplan image yourself. This can be done with an online tool or graphics editor, or you can just scan a hand-drawn map of your home! An SVG file with white lines and a transparent background works best.

You can arrange devices on the floor plan by dragging them around the screen.

Just click “save” when you’re done and you’ll see all of your devices laid out. You can click on them to access their expanded view.

Add-ons

The gateway has an add-ons system so that you can extend its capabilities. It comes with the Zigbee and Z-Wave adapter add-ons installed by default, but you can add support for additional adapters through the add-ons system under “settings” in the main menu.


For example, there is a Virtual Things add-on which allows you to experiment with different types of web things without needing to buy any real hardware. Click the “+” button at the bottom right of the screen to see a list of available add-ons.

Click the “+ Add” button on any add-ons you want to install. When you navigate back to the add-ons screen you’ll see the list of add-ons that have been installed and you can enable or disable them.

In the next blog post, you’ll learn how to create, package, and share your own adapter add-ons in the programming language of your choice (e.g. JavaScript, Python or Rust).

Voice UI

The gateway also comes with experimental voice controls which are turned off by default. You can enable this feature through “experiments” in settings.

Once the “Speech Commands” experiment is turned on you’ll notice a microphone icon appear at the top right of the things screen.

If the smartphone or PC you’re using has a microphone you can tap the microphone and issue a voice command like “Turn kitchen on” to control devices connected to the gateway.

The voice control is still very experimental and doesn’t yet recognise a very wide range of vocabulary, so it’s best to try to stick to common words like kitchen, balcony, living room, etc. This is an area we’ll be working on improving in future, in collaboration with the Voice team at Mozilla.

Updates

Your gateway software should automatically keep itself up to date with over-the-air updates from Mozilla. You can see what version of the gateway software you’re running by clicking on “updates” in Settings.

What’s Coming Next?

In the next release, the Mozilla IoT team plans to create new gateway adapters to connect more existing smart home devices to the Web of Things. We are also starting work on a collection of software libraries in different programming languages, to help hackers and makers build their own native web things which directly expose the Web Thing API, using existing platforms like Arduino and Android Things. You will then be able to add these things to the gateway by their URL.

We will continue to contribute to standardisation of a Web Thing Description format and API via the W3C Web of Things Interest Group. By giving connected devices URLs on the web and using a standard data model and API, we can help create more interoperability on the Internet of Things.

The next blog post will explain how to build, package and share your own adapter add-on using the programming language of your choice, to add new capabilities to the Things Gateway.

How to Contribute

We need your help! The easiest way to contribute is to download the Things Gateway software image (0.3 at the time of writing) and test it out for yourself with a Raspberry Pi, to help us find bugs and suggest new features. You can view our source code and file issues on GitHub. You can also help us fix issues with pull requests and contribute your own adapters for the gateway.

If you want to ask questions, you can find us in #iot on irc.mozilla.org or the “Mozilla IoT” topic in Discourse. See iot.mozilla.org for more information and follow @MozillaIoT on Twitter if you want to be kept up to date with developments.

Happy hacking!

Categorieën: Mozilla-nl planet

The Mozilla Blog: Announcing “Project Things” – An open framework for connecting your devices to the web.

Mozilla planet - ti, 06/02/2018 - 18:06

Last year, we said that Mozilla is working to create a framework of software and services that can bridge the communication gap between connected devices. Today, we are pleased to announce that anyone can now build their own Things Gateway to control their connected device directly from the web.

We kicked off “Project Things” with the goal of building a decentralized ‘Internet of Things’ that is focused on security, privacy, and interoperability. Since our announcement last year, we have continued to engage in open and collaborative development with a community of makers, testers, contributors, and end-users, to build the foundation for this future.

Today’s launch makes it easy for anyone with a Raspberry Pi to build their own Things Gateway. In addition to web-based commands and controls, a new experimental feature shows off the power and ease of using voice-based commands. We believe this is the most natural way for users to interact with their smart home. Getting started is easy, and we recommend checking out this tutorial to get connected.

The Future of Connected Devices

Internet of Things (IoT) devices have become more popular over the last few years, but there is no single standard for how these devices should talk to each other. Each vendor typically creates a custom application that only works with their own brand. If the future of connected IoT devices continues to involve proprietary solutions, then costs will stay high, while the market remains fragmented and slow to grow. Consumers should not be locked into a specific product, brand, or platform. This will only lead to paying premium prices for something as simple as a “smart light bulb”.

We believe the future of connected devices should be more like the open web. The future should be decentralized, and should put the power and control into the hands of the people who use those devices. This is why we are committed to defining open standards and frameworks.

A Private “Internet of Things”

Anyone can build a Things Gateway using popular devices such as the Raspberry Pi. Once it is set up, it will guide you through the process of connecting to your network and adding your devices. The setup process will provide you with a secure URL that can be used to access and control your connected devices from anywhere.

Powerful New Features

Our latest release of the Things Gateway has several new features available. These features include:

  • The ability to use the microphone on your computer to issue voice commands
  • A rules engine for setting ‘If this, then that’ logic for how devices interact with each other
  • A floor-plan view to lay out devices on a map of your home
  • Additional device type support, such as smart plugs, dimmable and colored lights, multi-level switches and sensors, and “virtual” versions of them, in case you don’t have a real device
  • An all-new add-on system for supporting new protocols and devices
  • A new system for safely authorizing third-party applications (using OAuth)
Built for everyone, not just hackers

If you have been following our progress with Project Things, you’ll know that up to now, it was only really accessible to those with a good amount of technical knowledge. With today’s release, we have made it easy for anyone to get started on building their own Things Gateway to control their devices. We take care of the complicated stuff so that you can focus on the fun stuff such as automation, ‘if this, then that’ rules, adding a greater variety of devices, and more.

Getting Started

We have provided a full walkthrough of how to get started on building your own private smart home using a Raspberry Pi. You can view the complete walkthrough here.

If you have questions, or you would like to get involved with this project, you can join the #iot channel on irc.mozilla.org and participate in the development on GitHub. You can also follow @MozillaIoT on Twitter for the latest news.

For more information, please visit iot.mozilla.org.

The post Announcing “Project Things” – An open framework for connecting your devices to the web. appeared first on The Mozilla Blog.

Categorieën: Mozilla-nl planet

Daniel Stenberg: Nordic Free Software Award reborn

Mozilla planet - ti, 06/02/2018 - 14:32

Remember the glorious year 2009 when I won the Nordic Free Software Award?

This award tradition, started in 2007, was put on hiatus after 2010 (I believe); no awards have been handed out since, and we have not properly shown our appreciation for the free software heroes of the Nordic region in all that time.

The award has now been reignited by Jonas Öberg of FSFE and you’re all encouraged to nominate your favorite Nordic free software people!

Go ahead and do it right away! You only have until the end of February, so you had better do it now before you forget about it.

I’m honored to serve on the award jury together with previous award winners.

This year’s Nordic Free Software Award winner will be announced and handed their prize at the FOSS-North conference on April 23, 2018.

(Okay, yes, the “photo” is a montage and not actually showing a real trophy.)

Categorieën: Mozilla-nl planet
