Mozilla Nederland: the Dutch Mozilla community

Planet Mozilla - http://planet.mozilla.org/

Ben Hearsum: What's New with Balrog - September 14th, 2016

Wed, 14/09/2016 - 19:25

The pace of Balrog development has been increasing since the beginning of 2016. We're now at a point where we push new code to production nearly every week, and I'd like to start highlighting all the great work that's being done. The past week was particularly busy, so let's get into it!

The most exciting thing to me is Meet Mangukiya's work on adding support for substituting some runtime information when showing deprecation notices. This is Meet's first contribution to Balrog, and he's done a great job! This work will allow us to send users to better pages when we've deprecated support for their platform or OS.

Njira has continued to chip away at some UI papercuts, fixing some display issues with history pages, addressing some bad word wrapping on some pages, and reworking some dialogs to make better use of space and minimize scrolling.

A few weeks ago Johan suggested that it might be time to get rid of the submodule we use for UI and integrate it with the primary repository. This week he's done so, and it's already improved the workflow for developers.

For my part, I got the final parts of my Scheduled Changes work landed - a project that has been in the works since January. With it, we can pre-schedule changes to Rules, which will help minimize the potential for human error when we ship, and make it unnecessary for RelEng to be around just to hit a button. I also fixed a regression (that I introduced) that made it impossible to grant full admin access, oops!

Scheduled Changes UI

I also want to give a big thank you to Benson Wong for his help and expertise in getting the Balrog Agent deployed - it was a key piece in the Scheduled Changes work, and went pretty smoothly!


Air Mozilla: The Joy of Coding - Episode 71

Wed, 14/09/2016 - 19:00

The Joy of Coding - Episode 71 mconley livehacks on real Firefox bugs while thinking aloud.


Air Mozilla: Weekly SUMO Community Meeting Sept 14, 2016

Wed, 14/09/2016 - 18:00

Weekly SUMO Community Meeting Sept 14, 2016. This is the SUMO weekly call.


Myk Melez: The Once And Future GeckoView

Wed, 14/09/2016 - 17:13

GeckoView is an Android library for embedding Gecko into an Android app. Mark Finkle introduced it via GeckoView: Embedding Gecko in your Android Application back in 2013, and a variety of Fennec hackers have contributed to it, including Nick Alexander, who described a Maven repository for GeckoView in 2014. It’s also been reused, at least experimentally, by Joe Bowser to implement MozillaView – GeckoView Proof of Concept.

But GeckoView development hasn’t been a priority, and parts of it have bitrotted. It has also remained intertwined with Fennec, which makes it more complicated to reuse for another Android app. And the core WebView class in Android (along with the cross-platform implementation in Crosswalk) already addresses a variety of web rendering use cases for Android app developers, which complicates GeckoView’s value proposition.

Nevertheless, it may have an advantage for the subset of native Android apps that want to provide a consistent experience across the fragmented Android install base or take advantage of the features Gecko provides, like WebRTC, WebVR, and WebAssembly. More research (and perhaps some experimentation) will be needed to determine to what extent that’s true. But if “there’s gold in them thar hills,” then I want to mine it.

So Nick recently determined what it would take to completely separate GeckoView from Fennec, and he filed a bunch of bugs on the work. I then filed meta-bug 1291362 — standalone Gradle-based GeckoView library — to track those bugs along with the rest of the work required to build and distribute a standalone Gradle-based GeckoView library reusable by other Android apps. Nick, Jim Chen, and Randall Barker have already made some progress on that project.

It’s still early days, and I’m still pursuing the project’s prioritization (say that ten times fast). So I can’t yet predict when we’ll complete that work. But I’m excited to see the work underway, and I look forward to reporting on its progress!


Julia Vallera: Share your expertise with Mozilla Clubs around the world!

Wed, 14/09/2016 - 15:21

Club guides are an important part of the Mozilla Clubs program, as they provide direction and assistance for specific challenges that Clubs may face during day-to-day operation. We are looking for volunteers to help us author new guides and resources that will be shared globally. By learning more about the process and structure of our guides, we hope that you’ll collaborate on a Mozilla club guide soon!

Background

In early 2015, Mozilla Clubs staff began publishing a series of guides that provide Club leaders with helpful tips and resources they need to maintain their Clubs. Soon after, community members began assisting with, collaborating on and authoring these guides alongside staff.

Guides are created in response to inquiries from Club Captains and Regional Coordinators around challenges they face within their clubs. Some challenges are common across Clubs and others are specific to one Club. In either case, the Mozilla Clubs team tries to create guides that assist in overcoming those challenges. Once a guide is published, it is listed as a resource on the Mozilla Clubs’ website.

Screenshot of a guide co-authored by Simeon Oriko, Carolina Tejada Alvarez, Kristina Gorr and the Mozilla Clubs team

How are club guides used around the world?

At Mozilla Clubs, there is a growing list of guides and resources that help Club participants maintain Club activity around the world. These guides are in multiple languages and cover topics related to teaching the web, sustaining communities, growing partnerships, fostering collaborations and more.

Guides should be used and adapted as needed. Club leaders are free to choose which guides they use and don’t use. The information included in each guide is drawn from experienced community leaders who are willing to share their expertise. Guides will continue to evolve, and we welcome suggestions for how to improve them. The source code, template and content are free and available on GitHub.

Here are a few examples of how guides have been used:

  • A new Club Captain is wondering how to teach their Club learners about open practices so they read the “Teaching Web Literacy in the Open” guide for facilitation tips and activity ideas.
  • A librarian is interested in starting a Club in a library and uses the “Hosting a Mozilla Club in your Library” guide for tips and event ideas.
  • A Club wants to make a website, so they use the “Creating a website” guide to learn how to secure a domain, choose a web host and use a free website builder.
What is the process for creating club guides?

The process for creating guides is evolving and sometimes varies on a case-by-case basis. In general, it goes something like this:

Step 1: Club leaders make suggestions for new guides on our discussion forum.

Step 2: Mozilla Club staff respond to the suggestion and share any existing resources.

Step 3: If there are no existing resources, the suggestion is added to a list of upcoming guides.

Step 4: Staff seek experts from the community to contribute or help author the guide (in some cases this could be the person who made the suggestion).

Step 5: Once we find an expert (internal to Mozilla or a volunteer from the community) who is interested in collaborating, the guide is drafted in as little or as much time as needed.

We currently have 50+ guides and resources available and look forward to seeing that number grow. If you have ideas for guides and/or would like to contribute to them, please let us know here.


The Mozilla Blog: Commission Proposal to Reform Copyright is Inadequate

Wed, 14/09/2016 - 15:20

The draft directive released today thoroughly misses the goal to deliver a modern reform that would unlock creativity and innovation in the Single Market.

Today the EU Commission released their proposal for a reformed copyright framework. What has emerged from Brussels is disheartening. The proposal is more of a regression than the reform we need to support European businesses and Internet users.

To date, over 30,000 citizens have signed our petition urging the Commission to update EU copyright law for the 21st century. The Commission’s proposal needs substantial improvement.  We collectively call on the EU institutions to address the many deficits in the text released today in subsequent iterations of this political process.

The proposal fails to bring copyright in line with the 21st century

The proposal does little to address much-needed exceptions to copyright law. It provides some exceptions for education and preservation of cultural heritage. Still, a new exception for text and data mining (TDM), which would advance EU competitiveness and research, is limited to public interest research institutions (Article 3). This limitation could ultimately restrict, rather than accelerate, TDM to unlock research and innovation across sectors throughout Europe.

These exceptions are far from sufficient. There are no exceptions for panorama, parody, or remixing. We also regret that provisions which would add needed flexibility to the copyright system — such as a UGC (user-generated content) exception and a flexible user clause like an open norm, fair dealing or fair use — have not been included. Without robust exceptions, and provisions that bring flexibility and a future-proof element, copyright law will continue to chill innovation and experimentation.

Pursuing the ‘snippet tax’ on the EU level will undermine competition, access to knowledge

The proposal calls for ancillary copyright protection, or a ‘snippet tax’. Ancillary copyright would allow online publishers to copyright ‘press publications’, which is broadly defined to cover works that have the purpose of providing “information related to news or other topics and published in any media under the initiative, editorial responsibility and control of a service provider” (Article 2(4)). This content would remain under copyright for 20 years after its publication — an eternity online. This establishment of a new exclusive right would limit the free flow of knowledge, cripple competition, and hinder start-ups and small- and medium-sized businesses. It could, for example, require bloggers linking out to other sites to pay new and unnecessary fees for the right to direct additional traffic to existing sites, even though having the snippet would benefit both sides.

Ancillary copyright has already failed miserably in both Germany and Spain. Including such an expansive exclusive right at the EU level is puzzling.

The proposal establishes barriers to entry for startups, coders, and creators

Finally, the proposal calls for an increase in intermediaries’ liability. Streaming services like YouTube, Spotify, and Vimeo, or any ISPs that “provide to the public access to large amounts of works or other subject-matter uploaded by their users” (Article 13(1)), will be obliged to broker agreements with rightsholders for the use and protection of their works. Such measures could include the use of “effective content recognition technologies”, which imply universal monitoring and strict filtering technologies that identify and/or remove copyrighted content. This is technically challenging — and more importantly, would disrupt the very foundations that make many online activities possible in the EU, for example by putting user-generated content in the crosshairs of copyright takedowns. Only the largest companies would be able to afford the complex software required to comply if these measures are deemed obligatory, resulting in a further entrenchment of the power of large platforms at the expense of EU startups and free expression online.

These proposals, if adopted as they are, would deal a blow to EU startups, to independent coders, creators, and artists, and to the health of the internet as a driver for economic growth and innovation. The Parliament certainly has its work cut out for it. We reiterate the call from 24 organisations in a joint letter expressing many of these concerns and urge the European Commission to publish the results of the Related rights and Panorama exception public consultation.

We look forward to working toward a copyright reform that takes account of the range of stakeholders who are affected by copyright law. And we will continue to advocate for an EU copyright reform that accelerates innovation and creativity in the Digital Single Market.


Luis Villa: Copyleft and data: databases as poor subject

Wed, 14/09/2016 - 15:00

tl;dr: Open licensing works when you strike a healthy balance between obligations and reuse. Data, and how it is used, is different from software in ways that change that balance, making reasonable compromises in software (like attribution) suddenly become insanely difficult barriers.

In my last post, I wrote about how database law is a poor platform to build a global public copyleft license on top of. Of course, whether you can have copyleft in data only matters if copyleft in data is a good idea. When we compare software (where copyleft has worked reasonably well) to databases, we’ll see that databases are different in ways that make even “minor” obligations like attribution much more onerous.

Card Puncher from the 1920 US Census.
How works are combined

In software copyleft, the most common scenarios to evaluate are merging two large programs, or copying one small file into a much larger program. In this scenario, understanding how licenses work together is fairly straightforward: you have two licenses. If they can work together, great; if they can’t, then you don’t go forward, or, if it matters enough, you change the license on your own work to make it work.

In contrast, data is often combined in three ways that differ significantly from software:

  • Scale: Instead of a handful of projects, data is often combined from hundreds of sources, so doing a license conflicts analysis if any of those sources have conflicting obligations (like copyleft) is impractical. Peter Desmet did a great job of analyzing this in the context of an international bio-science dataset, which has 11,000+ data sources.
  • Boundaries: There are some cases where hundreds of pieces of software are combined (like operating systems and modern web services) but they have “natural” places to draw a boundary around the scope of the copyleft. Examples of this include the kernel-userspace boundary (useful when dealing with the GPL and Linux kernel), APIs (useful when dealing with the LGPL), or software-as-a-service (where no software is “distributed” in the classic sense at all). As a result, no one has to do much analysis of how those pieces fit together. In contrast, no natural “lines” have emerged around databases, so either you have copyleft that eats the entire combined dataset, or you have no copyleft. ODbL attempts to manage this with the concept of “independent” databases and produced works, but after this recent case I’m not sure even those tenuous attempts hold as a legal matter anymore.
  • Authorship: When you combine a handful of pieces of software, most of the time you also control the licensing of at least one of those pieces of software, and you can adjust the licensing of that piece as needed. (Widely-used exceptions to this rule, like OpenSSL, tend to be rare.) In other words, if you’re writing a Linux kernel driver, or a WordPress theme, you can choose the license to make sure it complies. Not necessarily the case in data combinations: if you’re making use of large public data sets, you’re often combining many other data sources where you aren’t the author. So if some of them have conflicting license obligations, you’re stuck.
How attribution is managed

Attribution in large software projects is painful enough that lawyers have written a lot on it, and open-source operating systems vendors have built somewhat elaborate systems to manage it. This isn’t just a problem for copyleft: it is also a problem for the supposedly easy case of attribution-only licenses.

Now, again, instead of dozens of authors, often employed by the same copyright-owner, imagine hundreds or thousands. And instead of combining these pieces in basically the same way each time you build the software, imagine that every time you run a different query, you have to provide different attribution data (because the relevant slices of data may have different sources or authors). That’s data!

The least-bad “solution” here is to (1) tag every field (not just data source) with licensing information, and (2) have data-reading software create new, accurate attribution information every time a new view into the data is created. (I actually know of at least one company that does this internally!) This is not impossible, but it is a big burden on data software developers, who must now include a lawyer in their product design team. Most of them will just go ahead and violate the licenses instead, pass the burden on to their users to figure out what the heck is going on, or both.
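
To make that burden concrete, here is a minimal sketch in Python of the "tag every field, rebuild attribution per view" approach described above. Everything in it is hypothetical: the field names, sources, licenses and authors do not refer to any real dataset, and a real system would also have to track license versions, terms, and compatibility.

    # Hypothetical sketch: per-field license tags plus per-view attribution.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Field:
        name: str
        value: object
        source: str    # dataset this field came from
        license: str   # license of that dataset
        author: str    # who must be credited

    def query(records, wanted):
        """Return a view containing only the requested fields."""
        return [[f for f in record if f.name in wanted] for record in records]

    def attribution(view):
        """Recompute attribution for exactly the fields present in this view."""
        return sorted({(f.source, f.license, f.author) for record in view for f in record})

    records = [
        [Field("species", "Apis mellifera", "dataset-a", "CC-BY 4.0", "Alice"),
         Field("location", "52.1N, 5.3E", "dataset-b", "ODbL 1.0", "Bob")],
        [Field("species", "Bombus terrestris", "dataset-a", "CC-BY 4.0", "Alice")],
    ]

    # A view that only touches "species" only owes attribution to dataset-a.
    print(attribution(query(records, {"species"})))

Even this toy version forces the query layer to be license-aware, which is exactly the burden on data software developers described above.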

Who creates data

Most software is either under a very standard and well-understood open source license, or is produced by a single entity (or often even a single person!) that retains copyright and can adjust that license based on their needs. So if you find a piece of software that you’d like to use, you can either (1) just read their standard FOSS license, or (2) call them up and ask them to change it. (They might not change it, but at least they can if they want to.) This helps make copyleft problems manageable: if you find a true incompatibility, you can often ask the source of the problem to fix it, or fix it yourself (by changing the license on your software).

Data sources typically can’t solve problems by relicensing, because many of the most important data sources are not authored by a single company or single author. In particular:

  • Governments: Lots of data is produced by governments, where licensing changes can literally require an act of the legislature. So if you do anything that goes against their license, or two different governments release data under conflicting licenses, you can’t just call up their lawyers and ask for a change.
  • Community collaborations: The biggest open software relicensing that’s ever been done (Mozilla) required getting permission from a few thousand people. Successful online collaboration projects can have 1-2 orders of magnitude more contributors than that, making relicensing hard. Wikidata solved this the right way: by going with CC0.
What is the bottom line?

Copyleft (and, to a lesser extent, attribution licenses) works when the obligations placed on a user are in balance with the benefits those users receive. If they aren’t in balance, the materials don’t get used. Ultimately, if the data does not get used, our egos feel good (we released this!) but no one benefits, and regardless of the license, no one gets attributed and no new material is released. Unfortunately, even minor requirements like attribution can throw the balance out of whack. So if we genuinely want to benefit the world with our data, we probably need to let it go.

So what to do?

So if data is legally hard to build a license for, and the nature of data makes copyleft (or even attribution!) hard, what to do? I’ll go into that in my next post.


Tantek Çelik: #XOXOFest 2016: Ten Overviews & Personal Perspectives

Wed, 14/09/2016 - 08:45

I braindumped my rough, incomplete, and barely personal impressions from XOXO 2016 last night: #XOXOfest 2016: Independent Creatives Inspired, Shared, Connected. I encourage you to read the following well-written XOXO overview posts and personal perspectives. In rough order of publication (or when I read them):

(Maybe open Ben Darlow’s XOXO 2016 Flickr Set to provide some visual context while you read these posts.)

  1. Casey Newton (The Verge): In praise of the internet's best festival, which is going away (posted before mine, but I deliberately didn’t read it til after I wrote my own first XOXO 2016 post).
  2. Sasha Laundy: xoxo from XOXO
  3. Nabil “Nadreck” Maynard: XOXO, XOXO
  4. Matt Haughey: Starving artists / Memories of XOXO 2016
  5. Courtney Patubo Kranzke: XOXO Festival Thoughts
  6. Zoe Landon: Hugs and Kisses / A Year of XOXO
  7. Clint Bush: Andy & Andy: The XOXO legacy
  8. Erin Mickelson: XOXO
  9. Dylan Wilbanks: Eight short-ish thoughts about XOXO 2016
  10. Doug Hanke: Obligatory XOXO retrospective

There are plenty of common themes across these posts, and lots I can personally relate to. For now I’ll leave you with just the list, no additional commentary. Go read these and see how they make you feel about XOXO. If you had the privilege of participating in XOXO this year, consider posting your thoughts as well.


Mozilla Addons Blog: WebExtensions and parity with Chrome

Wed, 14/09/2016 - 00:14

A core strength of Firefox is its extensibility. You can do more to customize your browsing experience with add-ons than in any other browser. It’s important to us, and our move to WebExtensions doesn’t change that. One of the first goals of implementing WebExtensions, however, is reaching parity with Chrome’s extension APIs.

Parity allows developers to write add-ons that work in browsers that support the same core APIs with minimum fuss. It doesn’t mean the APIs are identical, and I wanted to clarify the reasons why there are implementation differences between browsers.

Different browsers

Firefox and Chrome are different browsers, so some APIs from Chrome do not translate directly.

One example is tab highlighting. Chrome has this API because it has the concept of highlighted tabs, which Firefox does not. So for browser.tabs.onHighlighted, we fire the event on the active tab instead, as documented on MDN. It’s not the same functionality as Chrome, but that behavior makes the most sense for Firefox.

Another, more complicated example is private browsing mode. The equivalent in Chrome is called incognito mode, and extensions can declare one of several modes: spanning, split or not_allowed. Currently we throw an error if we see a manifest that requests a mode other than spanning, since that is the only mode Firefox currently supports. We do this to alert extension authors testing their extension that it won’t operate the way they expect.

Less popular APIs

Some APIs are more popular than others. With limited people and time we’ve had to focus on the APIs that we thought were the most important. At the beginning of this year we downloaded 10,000 publicly available versions of extensions off the Chrome store and examined the APIs called in those extensions. It’s not a perfect sample, but it gave us a good idea.

What we found was that there are some really popular APIs, like tabs, windows, and runtime, and there are some APIs that are less popular. One example is fontSettings.get, which is used in 7 out of the 10,000 (0.07%) add-ons. Compare that to tabs.create, which is used in 4,125 out of 10,000 (41.25%) add-ons.

We haven’t prioritized the development of the least-used APIs, but as always we welcome contributions from our community. To contribute to WebExtensions, check out our contribution page.

Deprecated APIs

There are some really popular APIs in extensions that are deprecated. It doesn’t make sense for us to implement APIs that are already deprecated and are going to be removed. In these cases, developers will need to update their extensions to use the new APIs. When they do, they will work in the supported browsers.

Some examples are in the extension API, most of which is replaced by the runtime API. For example, use runtime.sendMessage instead of extension.sendMessage; use runtime.onMessage instead of extension.onRequest; and so on.

W3C

WebExtensions APIs will never completely mirror Chrome’s extension APIs, for the reasons outlined above. We are, however, already reaching a point where the majority of Chrome extensions work in Firefox.

To make writing extensions for multiple browsers as easy as possible, Mozilla has been participating in a W3C community group for extension compatibility. Also participating in that group are representatives of Opera and Microsoft. We’ll be sending a representative to TPAC this month to take part in discussions about this community group so that we can work towards a common browser standard for browser extensions.

Update: please check the MDN page on incompatibilities.


Armen Zambrano: Increasing test coverage

Tue, 13/09/2016 - 23:27

Last quarter I spent some time increasing mozci's test coverage. Here are some notes I took to help me remember in the future how to do it.

Here's some of what I did:
  • Read Python's page about increasing test coverage
    • I wanted to learn what core Python recommends
    • What they recommend is using coverage.py
  • Quick start with coverage.py
    • "coverage run --source=mozci -m py.test test" to gather data
    • "coverage html" to generate an html report
    • "/path/to/firefox htmlcov/index.html" to see the report
  • NOTE: We have coverage reports from automation in coveralls.io
    • If you find code that needs to be ignored, read this.
      • Use "# pragma: no cover" on specific lines (see the short example after this list)
      • You can also create rules of exclusion
    • Once you get closer to 100% you might want to consider increasing branch coverage instead of line coverage
    • Once you pick a module to increase coverage for:
      • Keep making changes, then re-run "coverage run" and "coverage html".
      • Reload the html page to see the new results
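
To illustrate the exclusion mechanism mentioned in the list above, here is a small, hypothetical Python module (not taken from mozci) showing how "# pragma: no cover" marks code that coverage.py should skip when computing line coverage:

    # example.py -- hypothetical module, not part of mozci
    import sys

    def has_verbose_flag(argv):
        # Plain logic like this is easy to exercise from a unit test,
        # so it stays in the coverage report.
        return "--verbose" in argv

    def main():  # pragma: no cover
        # Entry points are awkward to unit test and add little value,
        # so we tell coverage.py to ignore this function entirely.
        print(has_verbose_flag(sys.argv))

    if __name__ == "__main__":  # pragma: no cover
        main()

Project-wide rules of exclusion (for example, skipping every if __name__ == "__main__": line) can go under exclude_lines in the [report] section of a .coveragerc file instead, so the pragma does not have to be repeated everywhere.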

After some work on this, I realized that my preferred place to improve tests is focusing on the simplest unit tests. I say this since integration tests require proper work and thinking about how to properly test them, rather than *just* increasing coverage for the sake of it.

This work by Zambrano Gasparnian, Armen is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License.

      The Mozilla Blog: Cybersecurity is a Shared Responsibility

Tue, 13/09/2016 - 16:23

      There have been far too many “incidents” recently that demonstrate the Internet is not as secure as it needs to be. Just in the past few weeks, we’ve seen countless headlines about online security breaches. From the alleged hack of the National Security Agency’s “cyberweapons” to the hack of the Democratic National Committee emails, and even recent iPhone security vulnerabilities, these stories reinforce how crucial it is to focus on security.

      Internet security is like a long chain and each link needs to be tested and re-tested to ensure its strength. When the chain is broken, bad things happen: a website that holds user credentials (e.g., email addresses and passwords) is compromised because of weak security; user credentials are stolen; and, those stolen credentials are then used to attack other websites to gain access to even more valuable information about the user.

      One weak link can break the chain of security and put Internet users at risk. The chain only remains strong if technology companies, governments, and users work together to keep the Internet as safe as it can be.

      Technology companies must focus on security.

      Technology companies need to develop proactive, pro-user cybersecurity technology solutions.

      We must invest in creating a secure platform. That means supporting things like adopting and standardizing secure protocols, building features that improve security, and empowering users with education and better tools for their security.

      At Mozilla, we have security features like phishing and malware protection built into Firefox. We started one of the first Bug Bounty programs in 2004 because we want to be informed about any vulnerabilities found in our software so we can fix them quickly. We also support the security of the broader open source ecosystem (not just Mozilla developed products). We launched the Secure Open Source (SOS) Fund as part of the Mozilla Open Source Support program to support security audits and the development of patches for widely used open source technologies.

      Still, there is always room for improvement. The recent headlines show that the threat to user safety online is real, and it’s increasing. We can all do better, and do more.

      Governments must work with technology companies.  

      Cybersecurity is a shared responsibility and governments need to do their part. Governments need to help by supporting security solutions that no individual company can tackle, instead of advancing policies that just create weak links in the chain.

Encryption, something we rely on to keep people’s information secure online every day, is under attack by governments because of concerns that it inadvertently protects the bad guys. Some governments have proposed actions that weaken encryption, as in the case between Apple and the FBI earlier this year. But encryption is not optional – and creating backdoors for governments, even for investigations, compromises the security of all Internet users.

      The Obama Administration just appointed the first Federal Chief Information Security officer as part of the Cybersecurity National Action Plan. I’m looking forward to seeing how this role and other efforts underway can help government and technology companies work better together, especially in the area of security vulnerabilities. Right now, there’s not a clear process for how governments disclose security vulnerabilities they discover to affected companies.

While lawful hacking by a government might offer a way to catch the bad guys, stockpiling vulnerabilities for long periods of time can further weaken that security chain. For example, the recent alleged attack and auction of the NSA’s “cyberweapons” resulted in the public release of code, files, and “zero day” vulnerabilities that gave companies like Cisco and Fortinet just that: zero days to develop fixes before they were possibly exploited by hackers. There aren’t transparent and accountable policies in place that ensure the government is handling vulnerabilities appropriately and disclosing them to affected companies. We need to make this a priority to protect user security online.

      Users can take easy and simple steps to strengthen the security chain.   

Governments and companies can’t do this without you. Users should always update their software to benefit from new security features and fixes, create strong passwords to guard their private information, and use available resources to become educated digital citizens. These steps don’t just protect people who care about their own security; they help create a more secure system and go a long way in making it harder to break the chain.

      Working together is the only way to protect the security of the Internet for the billions of people online. We’re dedicated to this as part of our mission and we will continue our work to advance these issues.


      Mozilla Release Management Team: Firefox 49 delayed

Tue, 13/09/2016 - 12:57

      The original 2016 Firefox release schedule had the release of Firefox 49 shipping on September 13, 2016. During our release qualification period for Firefox 49, we discovered a bug in the release that causes some desktop and Android users to see a slow script dialog more often than we deem acceptable. In order to allow time to address this issue, we have rescheduled the release of Firefox 49 to September 20, 2016.

      In order to accommodate this change, we will shorten the following development cycle by a week. No other scheduled release dates are impacted by this change.

      In parallel, Firefox ESR 45.4.0 is also delayed by a week.


      Tantek Çelik: #XOXOfest 2016: Independent Creatives Inspired, Shared, Connected

Tue, 13/09/2016 - 07:20

      Inspired, once again. This was the fifth XOXO Conference & Festival (my fourth, having missed last year).

      There’s too much about XOXO 2016 to fit into one "XOXO 2016" blog post. So much that there’s no way I’d finish if I tried.

      Outdoors on the last day of XOXO Festival 2016

      4-ish days of:

      Independent creatives giving moving, inspiring, vulnerable talks, showing their films with subsequent Q&A, performing live podcast shows (with audience participation!).

      Games, board games, video games, VR demos. And then everything person-to-person interactive. All the running into friends from past XOXOs (or dConstructs, or classic SXSWi), meetups putting IRL faces to Slack aliases.

      Friends connecting friends, making new friends, instantly bonding over particular creative passions, Slack channel inside jokes, rare future optimists, or morning rooftop yoga under a cloud-spotted blue sky.

      The walks between SE Portland venues. The wildly varying daily temperatures, sunny days hotter than predicted highs, cool windy nights colder than predicted lows. The attempts to be kind and minimally intrusive to local homeless.

      More conversations about challenging and vulnerable topics than small talk. Relating on shared losses. Tears. Hugs, lots of hugs.

      Something different happens when you put that many independent creatives in the same place, and curate & iterate for five years. New connections, between people, between ideas, the energy and exhaustion from both. A sense of a safer place.

I have so many learnings from all the above, and so many emergent patterns, swimming in my head that I’m having trouble sifting and untangling them. Strengths of creative partners and partnerships. Uncountable struggles. The disconnects between attention, popularity, money. The hope, support, and understanding instead of judgment.

      I'm hoping to write at least a few single-ish topic posts just to get something(s) posted before the energies fade and memories start to blur.


      This Week In Rust: This Week in Rust 147

Tue, 13/09/2016 - 06:00

      Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions.

      This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

News & Blog Posts

New Crates & Project Updates

Crate of the Week

      This week's crate of the week is tokio, a high-level asynchronous IO library based on futures. Thanks to notriddle for the suggestion.

      Submit your suggestions and votes for next week!

      Call for Participation

      Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

      Some of these tasks may also have mentors available, visit the task page for more information.

      If you are a Rust project owner and are looking for contributors, please submit tasks here.

      Updates from Rust Core

      84 pull requests were merged in the last two weeks.

      New Contributors
      • Cobrand
      • Jake Goldsborough
      • John Firebaugh
      • Justin LeFebvre
      • Kylo Ginsberg
      • Nicholas Nethercote
      • orbea
      • Richard Janis Goldschmidt
      • Ulrich Weigand
      Approved RFCs

      Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

      Final Comment Period

      Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now. This week's FCPs are:

New RFCs

Upcoming Events

      If you are running a Rust event please add it to the calendar to get it mentioned here. Email Erick Tryzelaar or Brian Anderson for access.

      fn work(on: RustProject) -> Money

      Tweet us at @ThisWeekInRust to get your job offers listed here!

      Quote of the Week

      No quote was selected for QotW.

      Submit your quotes for next week!

      This Week in Rust is edited by: nasa42, llogiq, and brson.


      Cameron Kaiser: Sierraspoof is here

Tue, 13/09/2016 - 01:41

      Sierraspoof is here for TenFourFox 45.4 (which is now live). And it's even a week early.

      Mozilla Open Design Blog: The Conversation About Design Has Changed

Mon, 12/09/2016 - 22:30

      Like many of us in the design community, I’ve followed along in recent years as seemingly countless companies have undertaken the exciting and often fraught challenge of redesigning their visual identities. A quick glance at the Before/After section of Brand New, the well-known design blog dedicated to the critique of such things, shows 216 projects chronicled year-to-date.

Some redesigns, like Google’s, have been well received, while others, such as Uber’s, have drawn an enormous amount of criticism from both the design community and the general public. These are interesting times for design, as the critique of our work has moved from something those of us in the trade might discuss with colleagues over dinner to something that anyone with an @handle and an opinion can weigh in on publicly over social media. On several occasions, this public discourse has taken such an extreme tone that Andrew Beck has described it as design crit as bloodsport.

      Designing in the Open

      Earlier this year I began consulting with non-profit Mozilla to tee up a logo redesign initiative. During that time, Mozilla’s Creative Director Tim Murray proposed the idea of designing in the open. His vision was to build off of the open source principles that are bedrock to Mozilla by applying them to the end-to-end process of an identity redesign. The idea was to be as transparent as possible with the process, the initial concepts, the refinement and the outcome, and to have an open, public dialog with many people as possible along the way. He would engage the typical stakeholders one would expect, such as Mozilla’s senior leadership, as well as Mozilla’s 10,000+ strong volunteer community. But Tim also wanted to reach beyond Mozillians. He invited not only the design community into the discussion, but anyone for whom the Mozilla mission – to keep the internet healthy, open and safe for all – resonates.

      Initially, his proposal made me slightly uncomfortable. I felt a mix of caution and curiosity and I had to ask myself: why?

      A Mix of Caution and Curiosity

      I was concerned that opening up earlier stages of the design process to that kind of public commentary (think stakeholders at scale) would negatively affect the work. And my hesitancy was also rooted in a lack of understanding as to what Mozilla was asking from the design community. I questioned how we as designers could meaningfully participate in a public dialog about design work. After all, by submitting a professional opinion on everything from initial thinking, to design exploration through concept and execution, weren’t we engaging in a kind of spec work?

      As for my curiosity, it was piqued by the opportunity to re-examine the methodology by which design outcomes are generated. Would a larger and more diverse conversation upfront in fact lead to a better outcome? And as design crit has gone mainstream and instantaneous thanks to social media, how can we show up in public conversations about design deliverables without compromising our point of view against spec work?

      Where Things Stand Now

The identity redesign is now well underway. johnson banks was selected as the agency partner, and Mozilla has indeed undertaken a fully transparent, moderated, and public design process. The first-round creative concepts were shared a week ago and were met with hundreds, if not thousands, of responses and a full news cycle in the design press.

      While the end result of this unconventional approach remains uncertain, we do know that Tim and team created a process that is true to Mozilla’s open source beliefs and the manifesto that guides the company’s conduct. And we know they are willing to withstand the outcome even if it rises to the level of bloodsport. For that, they should be commended.

As for the questions raised about spec work and the Mozilla initiative, if you’re aligned with Mozilla’s mission and choose to provide critique, then your participation as a practicing professional is an act of volunteerism. In their words…

      “What we’re seeking is input on work that’s in process. We welcome your feedback in a form that suits you best, be it words, napkin sketches, or Morse Code. We simply want to incorporate as many perspectives and voices into this open design process as possible. We don’t take any single contribution lightly. We hope you’ll agree that by helping Mozilla communicate its purpose better through design, you’ll be helping improve the future Internet.”

As for the larger questions raised by increasing public dialog about design, it’s up to each of us personally to determine how we participate and when. But all industries experience change, and design is no exception. By at least trying to understand Mozilla’s approach to this project and how it fits within a broader narrative, designers can use this as an opportunity to challenge long-held methodologies, and perhaps pave the way for new ones.


      Republished with permission from AIGA SF / The Professional Association for Design

       

      Photo credit: Wikimedia Commons “And Phoebus’ Tresses Stream Athwart the Glade”
