Mozilla Nederland - the Dutch Mozilla community

Planet Mozilla - http://planet.mozilla.org/

Niko Matsakis: An experimental new type inference scheme for Rust

Wed, 09/07/2014 - 16:08

While on vacation, I've been working on an alternate type inference scheme for rustc. (Actually, I got it 99% working on the plane, and have been slowly poking at it ever since.) This scheme simplifies the code of the type inferencer dramatically and (I think) helps to meet our intuitions (as I will explain). It is, however, somewhat less flexible than the existing inference scheme, though all of rustc and all the libraries compile without any changes. The scheme will (I believe) make it much simpler to implement proper one-way matching for traits (explained later).

Note: Changing the type inference scheme doesn’t really mean much to end users. Roughly the same set of Rust code still compiles. So this post is really mostly of interest to rustc implementors.

The new scheme in a nutshell

The new scheme is fairly simple. It is based on the observation that most subtyping in Rust arises from lifetimes (though the scheme is extensible to other possible kinds of subtyping, e.g. virtual structs). It abandons unification and the H-M infrastructure and takes a different approach: when a type variable V is first related to some type T, we don’t set the value of V to T directly. Instead, we say that V is equal to some type U where U is derived by replacing all lifetimes in T with lifetime variables. We then relate T and U appropriately.

Let me give an example. Here are two variables whose type must be inferred:

'a: {  // 'a --> name of block's lifetime
    let x = 3;
    let y = &x;
    ...
}

Let’s say that the type of x is $X and the type of y is $Y, where $X and $Y are both inference variables. In that case, the first assignment generates the constraint that int <: $X and the second generates the constraint that &'a $X <: $Y. To resolve the first constraint, we would set $X directly to int. This is because there are no lifetimes in the type int. To resolve the second constraint, we would set $Y to &'0 int – here '0 represents a fresh lifetime variable. We would then say that &'a int <: &'0 int, which in turn implies that '0 <= 'a. After lifetime inference is complete, the types of x and y would be int and &'a int as expected.
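To make the mechanics concrete, here is a minimal, standalone sketch of the two operations involved: generalizing T into U by substituting fresh lifetime variables, and then relating T and U to collect region constraints. It is written in present-day Rust, and every name in it (Ty, Lifetime, InferCx, generalize, relate) is invented for illustration; this is not rustc's actual code, just the idea.

// A standalone sketch of the scheme described above (invented names).
#[derive(Clone, Debug, PartialEq)]
enum Lifetime {
    Named(&'static str), // e.g. 'a, the block's lifetime
    Infer(usize),        // e.g. '0, a fresh lifetime (region) variable
}

#[derive(Clone, Debug, PartialEq)]
enum Ty {
    Int,
    Ref(Lifetime, Box<Ty>), // &'r T
    Var(usize),             // $X, $Y, ...
}

struct InferCx {
    next_region: usize,
    // pairs ('x, 'y) meaning 'x <= 'y
    region_constraints: Vec<(Lifetime, Lifetime)>,
}

impl InferCx {
    fn fresh_region(&mut self) -> Lifetime {
        let r = Lifetime::Infer(self.next_region);
        self.next_region += 1;
        r
    }

    // Build U from T by replacing every lifetime with a fresh lifetime variable.
    fn generalize(&mut self, t: &Ty) -> Ty {
        match t {
            Ty::Int => Ty::Int,
            Ty::Var(v) => Ty::Var(*v),
            Ty::Ref(_, inner) => {
                let r = self.fresh_region();
                Ty::Ref(r, Box::new(self.generalize(inner)))
            }
        }
    }

    // Walk T (the subtype) and U (the supertype) in lockstep and record the
    // region constraints that "sub <: sup" implies.
    fn relate(&mut self, sub: &Ty, sup: &Ty) {
        if let (Ty::Ref(r1, t1), Ty::Ref(r2, t2)) = (sub, sup) {
            // &'r1 T <: &'r2 T requires 'r2 <= 'r1.
            self.region_constraints.push((r2.clone(), r1.clone()));
            self.relate(t1, t2);
        }
    }
}

fn main() {
    let mut cx = InferCx { next_region: 0, region_constraints: vec![] };
    // The second constraint from the example: &'a int <: $Y, with $Y unset.
    let t = Ty::Ref(Lifetime::Named("a"), Box::new(Ty::Int));
    // Set $Y to U = &'0 int, where '0 is a fresh lifetime variable ...
    let u = cx.generalize(&t);
    // ... and then relate T and U, which yields '0 <= 'a.
    cx.relate(&t, &u);
    println!("$Y = {:?}", u);
    println!("region constraints: {:?}", cx.region_constraints);
}

Running it resolves $Y to &'0 int and records the single region constraint '0 <= 'a, matching the prose above.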

Without unification, you might wonder what happens when two type variables are related that have not yet been associated with any concrete type. This is actually somewhat challenging to engineer, but it certainly does happen. For example, there might be some code like:

let mut x;         // type: $X
let mut y = None;  // type: Option<$0>
loop {
    if y.is_some() {
        x = y.unwrap();
        ...
    }
    ...
}

Here, at the point where we process x = y.unwrap(), we do not yet know the values of either $X or $0. We can say that the type of y.unwrap() will be $0, but we must now process the constraint that $0 <: $X. We do this by simply keeping a list of outstanding constraints. So neither $0 nor $X would (yet) be assigned a specific type, but we'd remember that they were related. Then, later, when either $0 or $X is set to some specific type T, we can go ahead and instantiate the other with U, where U is again derived from T by replacing all lifetimes with lifetime variables. Then we can relate T and U appropriately.
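Below is a similarly hedged sketch of that bookkeeping: a table of type variables, a list of outstanding constraints between still-unresolved variables, and a set_var step that, once one side becomes known, instantiates the other side with a generalized copy. As before, the names are invented for illustration, and the step that actually relates T and U afterwards is elided.

// A sketch (invented names, not rustc internals) of deferring a constraint
// between two type variables that are both still unknown.
#[derive(Clone, Debug)]
enum Ty {
    Int,
    Ref(usize, Box<Ty>), // &'n T, where usize is a lifetime-variable id
    Var(usize),          // an unresolved type variable such as $X or $0
}

struct InferCx {
    vars: Vec<Option<Ty>>,         // the value of each type variable, if known
    deferred: Vec<(usize, usize)>, // outstanding (sub, sup) pairs of variables
    next_region: usize,
}

impl InferCx {
    // Record "$sub <: $sup" when neither side is known yet.
    fn defer(&mut self, sub: usize, sup: usize) {
        self.deferred.push((sub, sup));
    }

    // Build U from T by replacing every lifetime with a fresh lifetime variable.
    fn generalize(&mut self, t: &Ty) -> Ty {
        match t {
            Ty::Int => Ty::Int,
            Ty::Var(v) => Ty::Var(*v),
            Ty::Ref(_, inner) => {
                let r = self.next_region;
                self.next_region += 1;
                Ty::Ref(r, Box::new(self.generalize(inner)))
            }
        }
    }

    // Assign a concrete type to a variable, then revisit deferred constraints:
    // any still-unknown variable related to it is instantiated with a
    // generalized copy. (Relating T and U afterwards is elided here.)
    fn set_var(&mut self, var: usize, ty: Ty) {
        self.vars[var] = Some(ty.clone());
        let related: Vec<(usize, usize)> = self
            .deferred
            .iter()
            .filter(|&&(sub, sup)| sub == var || sup == var)
            .cloned()
            .collect();
        for (sub, sup) in related {
            let other = if sub == var { sup } else { sub };
            if self.vars[other].is_none() {
                let u = self.generalize(&ty);
                self.set_var(other, u);
            }
        }
    }
}

fn main() {
    // let mut x;        // type: $X          -> variable 0
    // let mut y = None; // type: Option<$0>  -> $0 is variable 1
    let mut cx = InferCx {
        vars: vec![None, None],
        deferred: vec![],
        next_region: 3, // lifetime-variable ids 0..=2 are considered taken here
    };
    cx.defer(1, 0); // from x = y.unwrap(): $0 <: $X, both still unknown

    // Later, suppose $0 turns out to be &'2 int:
    cx.set_var(1, Ty::Ref(2, Box::new(Ty::Int)));
    println!("$X = {:?}", cx.vars[0]); // instantiated to &'3 int, '3 fresh
}

In this toy run, $0 <: $X is recorded while both sides are unknown; as soon as $0 is set to &'2 int, $X is instantiated to &'3 int with a fresh lifetime variable '3.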

If we wanted to extend the scheme to handle more kinds of inference beyond lifetimes, it can be done by adding new kinds of inference variables. For example, if we wanted to support subtyping between structs, we might add struct variables.

What advantages does this scheme have to offer?

The primary advantage of this scheme is that it is easier to think about for us compiler engineers. Every type variable is either set – in which case its type is known precisely – or unset – in which case its type is not known at all. In the current scheme, we track a lower- and upper-bound over time. This makes it hard to know just how much is really known about a type. Certainly I know that when I think about inference I still think of the state of a variable as a binary thing, even though I know that really it’s something which evolves.

What prompted me to consider this redesign was the need to support one-way matching as part of trait resolution. One-way matching is basically a way of saying: is there any substitution S such that T <: S(U) (whereas normal matching searches for a substitution applied to both sides, like S(T) <: S(U)). For example, to decide whether &'static int can use an impl written for &'a int, we look for a substitution such as S = {'a -> 'static} with &'static int <: S(&'a int); the left-hand side is never substituted into.

One-way matching is very complicated to support in the current inference scheme: after all, if there are type variables that appear in T or U which are partially constrained, we only know bounds on their eventual type. In practice, these bounds actually tell us a lot: for example, if a type variable has a lower bound of int, it actually tells us that the type variable is int, since in Rust's type system there are no super- or sub-types of int. However, encoding this sort of knowledge is rather complex – and ultimately amounts to precisely the same thing as this new inference scheme.

Another advantage is that there are various places in Rust's type checker where we query the current state of a type variable and make decisions as a result. For example, when processing *x, if the type of x is a type variable T, we would want to know the current state of T – is T known to be something inherently derefable (like &U or &mut U) or a struct that must implement the Deref trait? The current APIs for doing this bother me because they expose the bounds of U – but those bounds can change over time. This seems "risky" to me, since it's only sound for us to examine those bounds if we either (a) freeze the type of T or (b) are certain that we examine properties of the bound that will not change. This problem does not exist in the new inference scheme: anything that might change over time is abstracted into a new inference variable of its own.

What are the disadvantages?

One form of subtyping that exists in Rust is not amenable to this inference. It has to do with universal quantification and function types. Function types that are “more polymorphic” can be subtypes of functions that are “less polymorphic”. For example, if I have a function type like <'a> fn(&'a T) -> &'a uint, this indicates a function that takes a reference to T with any lifetime 'a and returns a reference to a uint with that same lifetime. This is a subtype of the function type fn(&'b T) -> &'b uint. While these two function types look similar, they are quite different: the former accepts a reference with any lifetime but the latter accepts only a reference with the specific lifetime 'b.
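For readers used to today's syntax (this post predates the for<'a> notation for function pointers), here is a small illustration of the direction of that subtype relation. It compiles with a current compiler; note that it only demonstrates the relation itself with explicit annotations, not the inference problem discussed next.

// The more polymorphic function type is a subtype of the less polymorphic one.
fn identity(x: &u32) -> &u32 { x }

fn demo<'b>(r: &'b u32) {
    // A function pointer that is polymorphic over the lifetime of its argument ...
    let poly: for<'a> fn(&'a u32) -> &'a u32 = identity;
    // ... can be used where a function over one specific lifetime 'b is expected.
    let mono: fn(&'b u32) -> &'b u32 = poly;
    let _ = mono(r);
}

fn main() {
    demo(&5);
}

The reverse assignment, coercing the 'b-specific function to the fully polymorphic type, is rejected, which is exactly the asymmetry described above.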

What this means is that today if you have a variable that is assigned many times from functions with varying amounts of polymorphism, we will generally infer its type correctly:

fn example<'b>(..) {
    let foo: <'a> |&'a T| -> &'a int = ...;
    let bar: |&'b T| -> &'b int = ...;
    let mut v;
    v = foo;
    v = bar;
    // type of v is inferred to be |&'b T| -> &'b int
}

However, this will not work in the newer scheme. Type ascription of some form would be required. As you can imagine, this is not a very common problem, and it did not arise in any existing code.

(I believe that there are situations in which the newer scheme infers correct types and the older scheme fails to compile; however, I was unable to come up with a good example.)

How does it perform?

I haven’t done extensive measurements. The newer scheme creates a lot of region variables. It seems to perform roughly the same as the older scheme, perhaps a bit slower – optimizing region inference may be able to help.


Doug Belshaw: Why Mozilla cares about Web Literacy [whitepaper]

Wed, 09/07/2014 - 15:41

One of my responsibilities as Web Literacy Lead at Mozilla is to provide some kind of theoretical/conceptual underpinning for why we do what we do. Since the start of the year, along with Karen Smith and some members of the community, I’ve been working on a Whitepaper entitled Why Mozilla cares about Web Literacy.

Webmaker whitepaper

The thing that took time wasn’t really the writing of it – Karen (a post-doc researcher) and I are used to knocking out words quickly – but the re-scoping and design of it. The latter is extremely important as this will serve as a template for future whitepapers. We were heavily influenced by P2PU’s reports around assessment, but used our own Makerstrap styling. I’d like to thank FuzzyFox for all his work around this!

Thanks also to all those colleagues and community members who gave feedback on earlier drafts of the whitepaper. It’s available under a Creative Commons Attribution 4.0 International license and you can fork/improve the template via the GitHub repository. We’re planning for the next whitepaper to be around learning pathways. Once that’s published, we’ll ensure there’s a friendlier way to access them - perhaps via a subdomain of webmaker.org.

Questions? Comments? I’m @dajbelshaw and you can email me at doug@mozillafoundation.org.


Gervase Markham: The Latest Airport Security Theatre

Wed, 09/07/2014 - 15:23

All passengers flying into or out of the UK are being advised to ensure electronic and electrical devices in hand luggage are sufficiently charged to be switched on.

All electronic devices? Including phones, right? So you must be concerned that something dangerous could be concealed inside a package the size of a phone. And including laptops, right? Which are more than big enough to contain said dangerous phone-sized electronics package in the CD drive bay, or the PCMCIA slot, and still work perfectly. Or, the evilness could even be occupying 90% of the body of the laptop, while the other 10% is taken up by an actual phone wired to the display and the power button which shows a pretty picture when the laptop is “switched on”.

Or are the security people going to make us all run 3 applications of their choice and take a selfie using the onboard camera to demonstrate that the device is actually fully working, and not just showing a static image?

I can’t see this as being difficult to engineer around. And meanwhile, it will cause even more problems trying to find charging points in airports. Particularly for people who are transferring from one long flight to another.


Just Browsing: Speeding Up Grunt Builds Using Watchify

Tue, 08/07/2014 - 18:47

Grunt is a great tool for automating boring tasks. Browserify is a magical bundler that allows you to require your modules, Node.js style, in your frontend code.

One of the most useful plugins for Grunt is grunt-contrib-watch. Simply put, it watches your file system and runs predefined commands whenever a change occurs (i.e. a file was deleted, added or updated). This comes in especially handy when you want to run your unit tests every time anything changes.

Browserify works by parsing your JS files and scanning for require and exports statements. Then it determines the dependency order of the modules and concatenates them together to create a “superbundle” JS file. Hence every time your code changes, you need to tell browserify to rebuild your superbundle or your changes will not be picked up by the browser.

Watching Your Build: the Naive Approach

When connecting Grunt with Browserify, it might be tempting to do something like this:

watch: {
  sources: {
    files: [
      '<%= srcDir %>/**/*.coffee',
      '<%= srcDir %>/**/*.js'
    ],
    tasks: ['browserify', 'unittest']
  }
}

And it would work. All your code would be parsed and processed and turned into a superbundle when a file changes. But there’s a problem here. Can you spot it? Hint: all your code would be parsed and processed and turned into a superbundle.

Yep. Sloooooow.

On my MacBook Air with SSD and 8GB of RAM, this takes about 4 seconds (and that’s after I made all the big dependencies such as jQuery or Angular external). That’s a long time to wait for feedback from your tests, but not long enough to go grab a coffee. The annoying kind of long, in other words. We can do better.

Enter Watchify

Watchify is to Browserify what grunt-contrib-watch is to Grunt. It watches the filesystem for you and recompiles the bundle when a change is detected. There is an important twist, however, and that is caching. Watchify remembers the parsed JS files, making the rebuild much faster (about ten times faster in my case).

There’s one caveat you have to look out for though. When you’re watching your files in order to run tests (which you still need to do via grunt-contrib-watch because Browserify only takes care of the bundling), make sure you target the resulting (browserified) files and not the source files. Otherwise your changes might not get detected by Grunt watch (on some platforms Watchify seems to “eat” the file system events and they don’t get through to grunt-contrib-watch).

In other words, do something like this:

watch: {
  sources: {
    files: [
      '<%= buildDir %>/**/*.coffee',
      '<%= buildDir %>/**/*.js'
    ],
    tasks: ['test']
  }
}

where test is an alias for (for example):

test: [
  'mocha_phantomjs',
  'notify:build_complete'
]

You should see a huge improvement in your build times.

Happy grunting!


Marco Zehe: Accessibility in Google Apps – an overview

Tue, 08/07/2014 - 18:03

I recently said that I would write a blog series about Google apps accessibility, providing some hints and caveats when it comes to using Google products such as GMail, Docs, and Drive in a web browser.

However, when I researched this topic further, I realized that the documentation Google provide on each of their products for screen reader users is actually quite comprehensive. So, instead of repeating what they already said, I’m going to provide some high-level tips and tricks, and links to the relevant documentation so you can look the relevant information up yourself if you need to.

There is really not much difference between Google Drive, GMail and the other consumer products and the enterprise-level offerings called Google Apps for Business. All of them share the same technology base. The good thing is that there is also no way for administrators to turn accessibility off entirely when they administer the Google Apps for Business setup for their company. And unless they mess with default settings, themes and other low-vision features should also work in both end user and enterprise environments.

A basic rule: Know your assistive technology!

This is one thing I notice pretty often when I start explaining certain web-related stuff to people, be they screen reader users or users of other assistive technologies. It is vital for your personal life, and even more so for your professional life, to know your assistive technology! As a screen reader user, just getting around a page by tabbing isn't enough to get around complex web applications efficiently and deal with stuff in a timely fashion. You need to be familiar with concepts such as the difference between a virtual document (or virtual buffer or browse mode document) and the forms or focus mode of your screen reader, especially when on Windows. You need to know at least some quick navigation commands available in most browse/virtual mode scenarios. You should also be familiar with what landmarks are and how to use them to navigate to certain sections of a page. If you just read this and don't know what I was talking about, consult your screen reader manual and keystroke reference now! If you are a person who requires training to memorize these things and isn't good at self-paced learning, go and get help and training, especially in professional environments. You will be much more proficient afterwards and provide much better services. And besides, it'll make you feel better because you will have a feeling of greater accomplishment and fewer frustrations. I promise!

Now with that out of the way, let’s move on to some specific accessibility info, shall we?

GMail

One of the things you'll be using most is GMail. If you want to use a desktop or mobile e-mail client because that is easiest for you, you can do so! Talk to your administrator if you're in a corporate or educational environment, or simply set up your GMail account in your preferred client. Today, most clients won't even require you to enter an IMAP or SMTP server any more, because they know which servers they need to talk to for GMail. So unless your administrator has turned off IMAP and SMTP access, which they most likely haven't, you should be able to just use your client of choice. Only if you want to add server-side e-mail filters and change other settings will you need to enter the web interface.

If you want to, or have to, use the web interface, don’t bother with the basic HTML view. It is so stripped down in functionality that the experience by today’s standards is less than pleasant. Instead, familiarize yourself with the GMail guide for screen reader users, and also have a look at the shortcuts for GMail. Note that the latter will only work if your screen reader’s browse or virtual mode is turned off. If you’re a screen reader user, experiment with which way works better for you, browse/virtual mode or non-virtual mode.

Personally, I found the usability of GMail quite improved in recent months compared to earlier times. I am particularly fond of the conversation threading capabilities and the power of the search, which can also be applied to filters.

Note that in some documentation, it is said that the chat portion of GMail is not accessible. However, I found that this seems to be outdated information, since the above guide very well states that Chat works, and describes some of its features. Best way to find out: Try it!

Contacts

Contacts are accessible on the web, too, but again, you can use your e-mail client's capabilities or extensions to sync your contacts as well.

Calendar

Google Calendar's Agenda View can be used with screen readers on the web, but Calendar, too, allows access from desktop or mobile CalDAV clients. The Google Calendar guide for screen reader users and Keyboard Shortcuts for Google Calendar provide the relevant info.

Google Docs and Sites

This is probably the most complicated suite of the Google offerings, but don't fear: they are accessible, and you can actually work with them nowadays. For this to work best, Google recommends using either JAWS or NVDA with Firefox or IE, or Chrome with ChromeVox. I tested, and while Safari and VoiceOver on OS X also provided some feedback, the experience wasn't as polished as one would hope. So if you're on the Mac, using Google Chrome and ChromeVox is probably your best bet.

Also, all of these apps work best if you do not rely on virtual/browse modes when on Windows. In NVDA, it's easy to just turn it off by pressing NVDA+Space. For JAWS, the shortcut is JAWSKey+Z. But since this has multiple settings, consult your JAWS manual to make this setting permanent for the Google Drive domain.

The documentation on Drive is extensive. I suggest starting at this hub and working your way through all linked documentation top to bottom. It's a lot at first, but you'll quickly get around and grasp the concepts, which are pretty consistent throughout.

Once you’re ready to dive into Docs, Sheets, Slides and the other office suite apps, use the Docs Getting Started document as a springboard to all the others and the in-depth documentation on Docs itself.

One note: in some places it is said that creating forms is not accessible yet. However, since there is documentation on that, too, the documents stating that creating forms isn't accessible are out of date. One of those, among others, is the Administrator Guide to Apps Accessibility.

I found that creating and working in documents and spreadsheets works amazingly well already. There are some problems sometimes with read-only documents which I’ve made the Docs team aware of, but since these are sometimes hard to reproduce, it may need some more time before this works a bit better. I found that, if you get stuck, alt-tabbing out of and back into your browser often clears things up. Sometimes, it might even be enough to just open the menu bar by pressing the Alt key.

Closing remarks

Like with any other office productivity suite, Google Docs is a pretty involved product. In a sense, it’s not less feature-rich than a desktop office suite of programs, only that it runs in a web browser. So in order to effectively use Google Apps, it cannot be said enough: Know your browser, and know your assistive technology! Just tabbing around won’t get you very far!

If you need more information not linked to above, here’s the entry page for all things Google accessibility in any of their apps, devices and services. From here, most of the other pages I mention above can also be found.

And one more piece of advice: If you know you'll be switching to Google Apps in the future in your company or government or educational institution, and want to get a head start, get yourself a GMail account if you don't have one. Once you have that, all of Google Drive, Docs, and the others are available to you as well to play around with. There's no better way than creating a safe environment and playing around in it! Remember, it's only a web application; you can't break any machines by using it! And if you do, you're up for some great reward from Google! :)

Enjoy!


Christian Heilmann: Have we lost our connection with the web? Let’s #webexcite

Tue, 08/07/2014 - 17:11

I love the web. I love building stuff in it using web standards. I learned the value of standards the hard way: building things when browser choices were IE4 or Netscape 3. The days when connections were slow enough that omitting quotes around attributes made a real difference to end users instead of being just an opportunity to have another controversial discussion thread. The days when you did everything possible – no matter how dirty – to make things look and work right. The days when the basic functionality of a product was the most important part of it – not if it looks shiny on retina or not.

Let's get excited

I am not alone. Many out there are card-carrying web developers who love doing what I do. And many have done it for a long, long time. Many of us don a blue beanie hat once a year to show our undying love for the standard work that made our lives much, much easier and predictable and testable in the past and now.

Enough with the backpatting

However, it seems we live in a terrible bubble of self-affirmation about just how awesome and ever-winning the web is. We're lacking proof. We build things to impress one another and seem to forget that what we do should, sooner rather than later, improve the experience of people surfing the web out there.

In places of perceived affluence (let’s not analyse how much of that is really covered-up recession and living on borrowed money) the web is very much losing mind-share.

Apps excite people

People don’t talk about “having been to a web site”; instead they talk about apps and are totally OK if the app is only available on one platform. Even worse, people consider themselves a better class than others when they have iOS over Android which dares to also offer cheaper hardware.

The web has become mainstream and boring; it is the thing you use, and not where you get your Oooohhhs and Aaaahhhhs.

Why is that? We live in amazing times:

  • New input types allow for much richer forms
  • Video and Audio in HTML5 have matured to a stage where you can embed a video without worrying about showing a broken grey box
  • Canvas allows us to create and manipulate graphics on the fly
  • WebRTC allows for Skype-like functionality straight in the browser.
  • With Web Audio we can create and manipulate music in the browser
  • SVG can now be embedded directly in HTML and doesn't need to be its own document, which gives us scalable vector graphics (something Flash was damn good at)
  • IndexedDB allows us to store data on the device
  • AppCache, despite all its flaws, allows for basic offline functionality
  • WebGL brings 3D environments to the web (again, let’s not forget VRML)
  • WebComponents hint at finally having a full-fledged Widget interface on the web.

Shown, but never told

The worry I have is that most of these technologies never really get applied in commercial, customer-facing products. Instead we build a lot of “technology demos” and “showcases” to inspire ourselves and prove that there is a “soon to come” future where all of this is mainstream.

This becomes even more frustrating when the showcases vanish or never get upgraded. Much of the stuff I showed people just two years ago only worked in WebKit and could easily be upgraded to work across all browsers, but we're already bored with it and move on to the next demo that shows the amazing, soon-to-be-real future.

I’m done with impressing other developers; I want the tech we put in browsers to be used for people out there. If we can’t do that, I think we failed as passionate web developers. I think we lost the connection to those we should serve. We don’t even experience the same web they do. We have fast macs with lots of RAM and Adblock enabled. We get excited about parallax web sites that suck the battery of a phone empty in 5 seconds. We happily look at a loading bar for a minute to get an amazing WebGL demo. Real people don’t do any of that. Let’s not kid ourselves.

Exciting, real products

I remember at the beginning of the standards movement we had showcase web sites that showed real, commercial, user-facing web sites and praised them for using standards. The first CSS layout driven sites, sites using clever roll-over techniques for zooming into product images, sites with very clean and semantic markup – that sort of thing. #HTML on ircnet had a “site of the day”, there was a “sightings” site explaining a weekly amazing web site, “snyggt” in Sweden showcased sites with tricky scripts and layout solutions.

I think it may be time to re-visit this idea. Instead of impressing one another with codepens, dribbbles and other in-crowd demos, let's tell one another about great commercial products, aimed not at web developers, that use up-to-date technology in a very useful and beautiful way.

That way we have an arsenal of beautiful and real things to show to people when they are confused why we like the web so much. The plan is simple:

  • If you find a beautiful example of modern tech used in the wild, tweet or post about it using the #webexcite hash tag
  • We can also set up a repository somewhere on GitHub once we have a collection going

Gervase Markham: Spending Our Money Twice

Tue, 08/07/2014 - 15:45

Mozilla Corporation is considering moving its email and calendaring infrastructure from an in-house solution to an outsourced one, seemingly primarily for cost but also for other reasons such as some long-standing bugs and issues. The in-house solution is corporate-backed open source, the outsourced solution under consideration is closed source. (The identities of the two vendors concerned are well-known, but are not relevant to appreciate the point I am about to make.) MoCo IT estimates the outsourced solution as one third of the price of doing it in-house, for equivalent capabilities and reliability.

I was pondering this, and the concept of value for money. Clearly, it makes sense that we avoid spending multiple hundreds of thousands of dollars that we don’t need to. That prospect makes the switch very attractive. Money we don’t spend on this can be used to further our mission. However, we also need to consider how the money we do spend on this furthers our mission.

Here’s what I mean: I understand that we don’t want to self-host. IT has enough to do. I also understand that it may be that no-one is offering to host an open source solution that meets our feature requirements. And the “Mozilla using proprietary software or web services” ship hasn’t just sailed, it’s made it to New York and is half way back and holding an evening cocktail party on the poop deck. However, when we do buy in proprietary software or services, I assert we should nevertheless aim to give our business to companies which are otherwise aligned with our values. That means whole-hearted support for open protocols and data formats, and for the open web. For example, it would be odd to be buying in services from a company who had refused to, or dragged their feet about, making their web sites work on Firefox for Android or Firefox OS.

If we deploy our money in this way, then we get to “spend it twice” – it gets us the service we are paying for, and it supports companies who will spend it again to bring about (part of) the vision of the world we want to see. So I think that a values alignment between our vendors and us (even if their product is not open source) is something we should consider strongly when outsourcing any service. It may give us better value for money even if it’s a little more expensive.


Byron Jones: happy bmo push day!

Tue, 08/07/2014 - 11:14

the following changes have been pushed to bugzilla.mozilla.org:

  • [1033258] bzexport fails to leave a comment when attaching a file using the BzAPI compatibility layer
  • [1003244] Creation of csv file attachement on new Mozilla Reps Swag Request
  • [1033955] pre-load all related bugs during show_bug initialisation
  • [1034678] Use of uninitialized value $_[0] in pattern match (m//) at Bugzilla/Util.pm line 74. The new value for request reminding interval is invalid: must be numeric.
  • [1033445] Certain webservice methods such as Bug.get and Bug.attachments should not use shadow db if user is logged in
  • [990980] create an extension for server-side filtering of bugmail

server-side bugmail filtering

accessible via the “Bugmail Filtering” user preference tab, this feature provides fine-grained control over what changes to bugs will result in an email notification.

for example, to never receive changes made to the “QA Whiteboard” field for bugs where you are not the assignee, add the following filter:

Field: QA Whiteboard
Product: __Any__
Component: __Any__
Relationship: Not Assignee
Action: Exclude

discuss these changes on mozilla.tools.bmo.


Filed under: bmo, mozilla
