mozilla

The Dutch Mozilla community

Subscribe to the Mozilla planet feed
Planet Mozilla - http://planet.mozilla.org/
Updated: 1 day 2 min ago

Michael Kaply: First Beta of CCK2 2.1 Available

Thu, 14/05/2015 - 22:32

The first beta of the next CCK2 is available here.

This upgrade has three main areas of focus:

  1. Support for the new in-content preferences
  2. Removing the need for the distribution directory (except in the case of disabling safe mode)
  3. Support for new Firefox 38 features (not done yet)

Removing support for the distribution directory was a major internal change, so I would appreciate any testing you can do.

My plan is to finish support for a few Firefox 38 specific features and then release next week.

Categories: Mozilla-nl planet

The Servo Blog: This Week In Servo 32

Thu, 14/05/2015 - 22:30

In the past three weeks, we merged 141 pull requests.

Samsung OSG published another blog post by Lars and Mike. This one focuses on Servo’s support for embedding via the CEF API.

The Rust upgrade of doom is finally over. This brings us up to a Rust version from late April. We’ve now cleared all of the pre-1.0 breaking changes!

Firefox Nightly now has experimental support for components written in Rust. There’s a patch up to use Servo’s URL parser, and another team is working on media libraries.

Notable additions

New contributors
  • Emilio Cobos Álvarez
  • Allen Chen
  • Andrew Foote
  • William Galliher
  • Jinank Jain
  • Rucha Jogaikar
  • Cyryl Płotnicki-Chudyk
  • Jinwoo Song
  • Jacob Taylor-Hindle
  • Shivaji Vidhale
Screenshots

Having previously conquered rectangles, Servo's WebGL engine is now capable of drawing a triangle inside a rectangle.

Meetings

We’ve switched from Critic to Reviewable and it’s working pretty well.

Mozillians will be gathering in Whistler, BC next month, and we’ve started planning out how the Servo team will participate. We’re going to run Rust and Servo training sessions, as well as meetings with other teams to plan for the shared future of Gecko and Servo.

Aside from those ongoing topics, here’s the breakdown by date of what we’ve discussed:

April 27

  • Intermittent test failures on the builders
  • We talked about what it would take to use Bugzilla instead of GitHub Issues.
  • We discussed what to blog about next; suggestions are welcome!

May 4

  • The Rust upgrade of doom

May 11

  • Discussion with Brian Birtles about the emerging Web Animations API
  • We’re going to start assigning PRs to their reviewers on GitHub.
  • Status update on Rust in Gecko. The Gecko teams are doing most of the work :D
  • We talked about issues with the switch to Piston’s image library.

Air Mozilla: Participation at Mozilla

Thu, 14/05/2015 - 19:00

Participation at Mozilla The Participation Forum


Air Mozilla: Reps weekly

Thu, 14/05/2015 - 17:00

Reps weekly Weekly Mozilla Reps call


Air Mozilla: Participation Metrics at Mozilla

Thu, 14/05/2015 - 15:38

Participation Metrics at Mozilla A story of systems and data.


Mozilla Science Lab: #mozsprint Projects: The BioUno Project

Thu, 14/05/2015 - 15:00

This guest post is by Bruno Kinoshita and Ioannis Moutsatsos, leads on BioUno, one of the projects you’ll get the opportunity to jump into live at #mozsprint, June 4 & 5.

Complexity vs Quality: The Bumpy Relation of Scientific Software

Scientific software is used in physical, environmental, earth and life sciences on a daily basis to make important discoveries. Due to its highly specialized nature, scientific software is frequently developed by scientists with deep domain knowledge, but not necessarily deep knowledge in technologies and tools used by software engineers and developers that build more mainstream applications. As a result, scientific software tends to be highly customized, less flexible, complex, poorly tested, less documented and even less maintained in the long run [1].

Reproducible Computational Research

Many issues plaguing scientific software have been discussed in the literature, but the ability to reproduce computational discoveries has taken center stage in recent years [2]. The term reproducible computational research has been coined, and used as an umbrella concept for identifying and proposing solutions to issues that affect the reproducibility of computational scientific research.

Some Proposed Solutions

Although the challenge of reproducible computational research is multi-dimensional, some of the proposed solutions are rooted in existing, well established and robust software engineering solutions such as:

  1. Source code management (SCM)
  2. Computational Workflow Engines
  3. Scalable and distributed compute platforms
  4. Compute and storage hardware virtualization
  5. Centralized repositories of digital collections of scientific data

In addition, the organized and homogeneous tagging of scientific data with metadata (data about data) has been a well-established foundation for information retrieval and discovery. The development of consistent metadata and controlled vocabularies is another important component to searching, finding and using scientific data in a manner consistent with reproducible research.

Finally, (and to some degree an obvious requirement) reproducible computational research depends on the ability of other scientists or research experts to freely access the source code and scientific data used in generating new computational discoveries. These free and open access concepts have been championed by many in the software development community under the umbrella of the open-source community. Open-source code is meant to be a collaborative effort, where programmers improve upon the source code and share the changes within the community [3].

Project Mission

The BioUno open-source project seeks to improve scientific application automation, performance, reproducibility, usability, and management by applying and extending software engineering (SE) best practices in the field of scientific research applications. Deliverables from the project have found a variety of applications in life-science research (bioinformatics, genetics, drug discovery).

Project Objectives
  • We explore and apply software engineering best practices to support the project mission
  • We develop extensions to established SE tools, frameworks and technologies that directly support or indirectly enhance scientific applications
  • We develop APIs and integration points that empower scientific applications
  • We promote collaboration and reuse by contributing to existing open-source projects
  • We educate users through our blog, wiki, and presentations on the application of SE best practices in scientific applications
  • We advocate with software engineers for enabling SE tools and frameworks for use by scientists
Project Strategy

BioUno has pioneered the use of continuous integration tools and techniques to create reproducible computational pipelines and to manage computer clusters in support of scientific research applications.

In addition, BioUno has adopted a variety of software engineering best practices to achieve its objectives.

Finally, BioUno strives to minimize the open-source proliferation problem [4]. While the BioUno project covers a broad range of technologies and tools, it avoids this problem by actively contributing to existing open-source projects rather than starting new ones.

BioUno Objectives for Mozilla Science 2015 Global Sprint

The BioUno project is participating in the 2015 Mozilla Science Global Sprint (MSGS 2015) with three main objectives.

  1. Expose the MSGS participants to the BioUno strategy of using Jenkins, a popular continuous integration system, for managing and building reproducible scientific workflows
  2. Engage the MSGS participants in hands-on review and enhancement of the BioUno tool-kit (Jenkins plugins and API) and gather new ideas for its extension for research applications. In the process participants will gain valuable experience on how to create, maintain and debug Jenkins plugins for research applications.
  3. Create a lasting collaboration with MSGS participants and projects so that the BioUno project can continue to deliver on its mission statement with an expanded pool of active contributors and users.

Check out our etherpad with our ideas, issues and more information for the sprint. You can help us with suggestions, documentation, coding or testing – so you can help us even if you are not a programmer.

References

[1] [Computational science: Error… why scientific programming does not compute](http://www.nature.com/news/2010/101013/full/467775a.html) (Zeeya Merali, Nature: 2010)

[2] [Scientific Reproducibility through Computational Workflows and Shared Provenance Representations](http://www.evernote.com/l/AJ8x2KJTSTlGmbrFDKXSR709G2wRjbN32Tk/) (Yolanda Gil, NSF Workshop: 2010)

[3] Wikipedia on [Open source](http://en.wikipedia.org/wiki/Open_source) (accessed 5/7/2015)

[4] [The real Open-Source proliferation problem](http://gondwanaland.com/mlog/2013/10/22/open-source-proliferation-problem/) (Mike Linksvayer, Blog 2013)


About:Community: Announcing the MDN Fellows!

Thu, 14/05/2015 - 02:08

The Mozilla Developer Network (MDN) is one of the most frequently used resources for web developers for all things documentation and code. This year we’re making the rich content on MDN available to even more people. We’re developing beginning learning materials as well as a template (which we’re calling “Content Kits”) to make preparing presentations on web topics much easier.

As part of this effort, we also launched the MDN Fellowship last quarter. This is a 7-week pilot contribution program for advanced web developers to expand their expertise through curriculum development on MDN. MDN Fellows are experts who will continue to grow their skills and impact by teaching others about web technologies. Specifically, the Fellows will be developing Content Kits: collections of resources about specific topics related to web development that empower technical topic presenters.

After a lengthy process where we solicited applications and involved reviewers from across Mozilla, we’re delighted to announce our inaugural MDN Fellowship Fellows! Here they are in their own words – feel free to Tweet them a congratulations!

Steve Kinney, Curriculum Fellow (Colorado, U.S.A.) – @stevekinney

I am an instructor at the Turing School of Software and Design in Denver, Colorado, where I teach open web technologies. Prior to Turing, I hailed from the great state of New Jersey and was a New York City public school teacher for seven years, where I taught special education, science, and — eventually — JavaScript to students in high-need schools in Manhattan, Brooklyn, and Queens. I have a master’s degree in special education. I’ve spoken at EmberConf and RailsConf, and will be speaking at JSConf US at the end of the month. In my copious free time, I teach classes on web development with Girl Develop It.

István “Flaki” Szmozsánszky, Service Workers Fellow (Budapest, Hungary) – @slsoftworks

I’ve been following Service Workers’ journey since before it was cool, as a web developer and longtime contributor to Mozilla. Known as “Flaki” in the community, I’ve been evangelizing new technologies that establish the Open Web as a first-class citizen. As Service Workers seemingly play a key role in this battle, there is no better place to do this than at Mozilla, the most adamant proponent of the Open Web. During my Fellowship I hope to further previous work on MDN’s offline support, while helping explore Firefox OS’s reimagined architecture.

Ben Boyle, Test The Web Forward Fellow (Upper Caboolture, Australia) – @bboyle

I’m a front-end developer from Australia, making websites since 1998, primarily for the Queensland Government. Lots of forms, templates and QA. I also mentor front-end web development students at Thinkful. I got interested in automated quality control using custom stylesheets and scripts in Opera, then YUITest, and was then inspired by ThoughtWorks developers on a project when they introduced Selenium and automated acceptance tests in the browser. I’m excited to be helping Test The Web Forward as an opportunity to both learn and share. Because everything runs in the browser. Love the latest front-end frameworks? They don’t exist without web standards. I cannot sufficiently appreciate the work so many people have done creating a solid foundation for everything we (as web developers) often take for granted. I am really glad to have this chance to give back!

Vignesh Shanmugam, Web App Performance Fellow (Bangalore, India) – @_vigneshh

I’m a Web Developer from India focused on building the Web Performance platform at Flipkart, one of Asia’s leading e-commerce sites. I am also responsible for advocating front-end engineering best practices, developing tools that help identify performance bottlenecks, and analyzing metrics. I am an Open Source contributor with a deep research background in front-end performance and am happy to be a part of the MDN Fellowship program to contribute to MDN’s Web App Performance curriculum.

Greg Tatum, WebGL Fellow (Oklahoma, U.S.A) – @tatumcreative

My background is in contemporary sculpture, aquarium exhibit design, marketing, animation, and web development: in short, it is all over the place :)  But the central guiding principle behind my work is to find the middle ground between the technical and creative, and explore it to see what emerges. I am a Senior Web Developer at Cubic, a Tulsa, Oklahoma-based creative branding agency where I help create rich experiences for tourism and destinations. I applied for the MDN Fellowship because I’m passionate about the open web and inspired by the possibilities of using 3D for new, richer experiences online with the potential reach that WebGL can have. I really enjoy helping to build the creative coding community and hope to make it even easier for more people to get involved with my own passion of exploring creative code.


Kevin Ngo: Dropdown Component Using Custom Elements (vs. React)

Thu, 14/05/2015 - 02:00
Dropdowns have never been easier – or more native.

We have been building an increasing number of Custom Elements, or Web Components, over at the Firefox Marketplace (using a polyfill). Custom Elements are a W3C specification that allows you to define your own HTML elements. Using Custom Elements, rather than arbitrary JS, encourages modularity and testability, with portability and reusability as the enticement.

Over the last several months, I worked on revamping the UI for the Firefox Marketplace. Part of it was building a custom dropdown element that would allow users to filter apps based on platform compatibility. I wanted it to behave exactly like a <select> element, complete with its interface, but with the full license to style it however I needed.

In this post, I'll go over Custom Elements, introduce an interesting "proxy" pattern to extend native elements, and then compare Custom Elements with the currently reigning Component king, React.

Custom Select source code.

Building a Custom Element

Custom Elements are still a working draft, but there is a nice document.registerElement polyfill. Here is an extremely simple Custom Element that simply wraps a div and defines some interface on the element's prototype.

document.registerElement('input-wrapper', {
  prototype: Object.create(HTMLElement.prototype, {
    createdCallback: {
      value: function () {
        // Called after the component is "mounted" onto the DOM.
        this.appendChild(document.createElement('input'));
      }
    },
    input: {
      get: function () { return this.querySelector('input'); }
    },
    value: {
      get: function () { return this.input.value; },
      set: function (val) { this.input.value = val; }
    }
  })
});

var inputWrapper = document.createElement('input-wrapper');
document.body.appendChild(inputWrapper);

We define the interface using JavaScript's Object.create, extending the basic HTMLElement. The element simply wraps an input, and provides a getter and setter on the input's value. We drop it into the DOM, and it will natively have whatever interface we defined for it. So we could do something like inputWrapper.value = 5 to directly set the inner input's value. It's a basic example, but being able to create these native Custom Elements can go far in modular development.
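Since the snippet above needs a DOM to run, here is a DOM-free sketch of the same accessor pattern using plain objects, runnable in Node. The names (wrapperProto, makeWrapper, inner) are illustrative, not part of the Marketplace code:

```javascript
// A prototype built with Object.create whose `value` accessor forwards
// to an inner object, mimicking how the custom element forwards to its
// inner <input>.
const wrapperProto = Object.create(Object.prototype, {
  value: {
    get: function () { return this.inner.value; },
    set: function (val) { this.inner.value = val; }
  }
});

function makeWrapper() {
  const wrapper = Object.create(wrapperProto);
  wrapper.inner = { value: null }; // stands in for the inner <input>
  return wrapper;
}

const w = makeWrapper();
w.value = 5; // the setter forwards this to w.inner.value
```

Reading w.value afterwards goes through the getter and returns the inner object's value, which is the whole trick the custom element relies on.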

Proxy Pattern: Extending the Select Element by Rerouting Interface

Now that we have the gist of what a Custom Element is, let's see how we can use one to create a custom dropdown by extending the native <select> element.

Here's an example of how our element will be used in the HTML:

<custom-select name="my-select">
  <custom-selected>
    The current selected option is <custom-selected-text></custom-selected-text>
  </custom-selected>
  <optgroup>
    <option value="1">First value</option>
    <option value="2">Second value</option>
    <option value="3">Third value</option>
    <option value="4">Fourth value</option>
  </optgroup>
</custom-select>

What we'll do in the createdCallback (if you check the source code) is create an actual internal hidden select element, copying over the attributes defined on <custom-select>. Then we'll create <custom-options>, copying the original options into the hidden select. We extend the custom select's interface with an attribute pointing to the hidden select, like so:

select: {
  // Actual <select> element to proxy to; steal its interface.
  // Value set in the createdCallback.
  get: function () {
    return this._select;
  },
  set: function (select) {
    copyAttrs(select, this);
    this._select = select;
  }
},

This will allow our custom element to absorb the functionality of the native select element. All we have to do is implement the entire interface of the select element by routing to the internal select element.

function proxyInterface(destObj, properties, methods, key) {
  // Proxies destObj.<properties> and destObj.<methods>() to
  // destObj.<key>.
  properties.forEach(function (prop) {
    if (Object.getOwnPropertyDescriptor(destObj, prop)) {
      // Already defined.
      return;
    }
    // Set a property.
    Object.defineProperty(destObj, prop, {
      get: function () {
        return this[key][prop];
      }
    });
  });
  methods.forEach(function (method) {
    // Set a method, forwarding the call (and its arguments) to the
    // inner element.
    Object.defineProperty(destObj, method, {
      value: function () {
        return this[key][method].apply(this[key], arguments);
      }
    });
  });
}

proxyInterface(CustomSelectElement.prototype,
  ['autofocus', 'disabled', 'form', 'labels', 'length', 'multiple',
   'name', 'onchange', 'options', 'required', 'selectedIndex', 'size',
   'type', 'validationMessage', 'validity', 'willValidate'],
  ['add', 'blur', 'checkValidity', 'focus', 'item', 'namedItem',
   'remove', 'setCustomValidity'],
  'select');

proxyInterface will "route" the property lookups (the first array) and method calls (the second array) from the custom select element to the internal select element. Then all we need to do is keep our select element's value up-to-date while we interact with the custom element, and we can do things like customSelectElement.selectedIndex or customSelectElement.checkValidity() without manually implementing the interface.
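The routing itself has nothing DOM-specific about it, so here is a minimal sketch of the same pattern with plain objects, runnable in Node. The names (custom, _inner, greet) are illustrative only:

```javascript
// Forward property reads and method calls on destObj to destObj[key].
function proxyInterface(destObj, properties, methods, key) {
  properties.forEach(function (prop) {
    if (Object.getOwnPropertyDescriptor(destObj, prop)) { return; }
    Object.defineProperty(destObj, prop, {
      get: function () { return this[key][prop]; }
    });
  });
  methods.forEach(function (method) {
    Object.defineProperty(destObj, method, {
      value: function () {
        // Preserve arguments and keep the inner object as receiver.
        return this[key][method].apply(this[key], arguments);
      }
    });
  });
}

// `_inner` stands in for the hidden <select>.
const custom = {
  _inner: {
    name: "demo",
    greet: function (who) { return "hi " + who; }
  }
};
proxyInterface(custom, ["name"], ["greet"], "_inner");

custom.name;         // read is routed to custom._inner.name
custom.greet("you"); // call is dispatched to custom._inner.greet
```

The key detail is using apply with this[key] as the receiver, so forwarded methods run against the inner object rather than the wrapper.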

Note we could have simply looped over HTMLSelectElement.prototype rather than manually entering in each property and method name, but unfortunately that doesn't play well with some older browsers.

With all of this, we have a custom select element that is fully stylizable while having all the functionality of a native select element (because it extends it!).

Comparing Custom Elements to React

I love React and am using it for a couple of projects. How does Custom Elements compare to it?

  1. Custom Elements has no answer to React's JSX template/syntax. In most of our Custom Elements, we have to manually shuffle things around using the native DOM API. JSX is much, much easier.

  2. Custom Elements has no data-binding or automatic DOM updates whenever data changes. It's all imperative with Custom Elements: you have to listen for changes and manually update the DOM. React is, well, reactive. Whenever a component's state changes, so does its representation in the DOM.

  3. Nesting components is a bit harder with Custom Elements than with React. In React, it's natural for a component to render a component that renders a component that renders other components. With Custom Elements, it's a bit difficult to connect components together in a nice hierarchy, and the only communication you get is through events.

  4. Custom Elements, however, is smaller in KB. React is about 26KB after min+gzip, whereas a Custom Elements polyfill is maybe a few KB. Though the 26KB might be worth it, since you'll end up writing less code and you get the performance of virtual DOM diffing.

  5. Custom Elements has no answer to React Native.

They're both just as portable; both can be dropped into any framework, and both have similar interfaces for creating components. React is more powerful, though. In React, I really enjoy keeping data compartmentalized in state and passing data down as props. It's a loose comparison, but it's like combining the data-binding and template power of Angular with the ideas of Custom Elements.

However, it doesn't have to be one or the other either. Why not both? React can wrap Custom Elements if you want it to. As always, choose the best tools for the job. I'm always chasing and adopting the newest things like an excited dog going after the mailman, but despite that I can say React is a winner.


Air Mozilla: Quality Team (QA) Public Meeting

Wed, 13/05/2015 - 22:30

Quality Team (QA) Public Meeting This is the meeting where all the Mozilla quality teams meet, swap ideas, exchange notes on what is upcoming, and strategize around community building and...


Emma Irwin: Participation Team Heartbeat #1 – Demos!

Wed, 13/05/2015 - 21:22

As shared on the recent Participation Call, the Participation ‘Team’ is starting to work in heartbeats – mirroring the success of the Mozilla Foundation Team working ‘agile and open’. We just completed our first heartbeat, which included evaluation of the Heartbeat process and the new tool we’ll use to bring community into the center.

As you can see from the Heartbeat 'life cycle', the Demo is an important milestone of evaluation and measurement prior to starting the next cycle. Like the Webmaker team, we will be inviting contributors and streaming our Demos on Air Mozilla. Heartbeat #1 was about finding our feet in the process, so we apologize that demos for this cycle will come in the form of a blog post – but we also hope you're as excited as we are about the potential of working, collaborating and demoing in the open for the next cycle.


For each Heartbeat project on our team, we asked the following questions at the Demo:

  1. What did we do and why?
  2. What was the impact?
  3. What was learned?
  4. What are the next steps?

Our call went well over one hour, so I am providing the TL;DR version of the Demos.

Establish a solid Heartbeat process for Participation

We established our process for this first heartbeat, built out user stories (contributor, team and project), and evaluated the Mozilla Foundation tool, among others, to ensure our decision was informed. The emphasis was on leveraging the openness of the Heartbeat tool, while improving project-management efficiency by building on GitHub and GitHub issues.

Bangladesh Meetup

Planning for a Bangladesh Meetup is well underway for early June. The plan includes collaborating with the Foundation on Webmaker goals, and early development to support distributed leadership as the community grows.

Community Surveys

We are working on improving the planned community surveys that measure community health. Thanks to collaboration with the Metrics team and improved hypotheses, the questions are yielding better and more relevant answers. The plan is to go live by the end of the week. This is especially important with the recent changes to Regional Leadership.

Events

Collected information about existing processes, assets and products relating to events.  We don’t have a consistent way to measure events yet.  We drafted a survey, an agenda template and a document that explains the template for re-usability. We have a busy May and June filled with meetups where we can test out new things.

External Expert Engagement

This project was about identifying organizations outside of Mozilla who are doing incredible things with participation. We were especially focused not on the typical 'big players', but on those who can't be seen from Silicon Valley. We will be reaching out to the community and preparing them to identify and interview local orgs; we might be able to tie the Marketpulse interviewing course into this effort. We are also building a Participation advisory board of well-known experts in participation, whom we'll be inviting to the Whistler work week in June.

Firefox OS in Africa

Started the ball rolling for participation experiments in Africa.  SUMO workshops, intro to SUMO and internships at Universities + Firefox OS launch and Webmaker Club initiative.

Initial impact was great: lots of momentum around Firefox OS, positive reception to workshops, and a re-energized Mozilla Senegal. Timing matters; more time to plan in advance would be good, especially across initiatives.

Marketing approach Android: India

We used some time during the recent Indian community meetup to get ideas on what a product focused approach on Firefox for Android could look like in India. During a short workshop, we explored the product positioning, opportunities, challenges, and developed a series of personas that could be users. The next steps will involve exploring what a detailed set of experiments could be for this product in India.

Version 2! A new release of the app for collecting competitor phone data. Coming soon: release of supporting educational content and a refresh of existing training materials, supported by thoughtful participation design and a new community manager, Akshay, onboard (yay!). Work and planning to launch a 4-week participatory course, 'Interviewing Users', are on the horizon, thanks to the involvement of market research experts at Mozilla. One challenge: contending with the communication channels in target markets.

Participation Infrastructure

We set up a participation infrastructure stack on AWS to meet our user stories for developers, community and the support team. We can now deploy a new or updated version of an app in a quarter of the time. We learned that we can solidly build on best practices gathered within Mozilla, while working with volunteers in the open.

Leadership Workshop  – India Community

We tested an Actions to Impact workshop just last week at the Mozilla India Taskforce Meetup; many volunteers seemed to find the leadership workshop very useful and used it throughout the weekend. We still need to capture notes on leadership and write up the workshop facilitator's guide.

Reps/Regional 2.0

Conversations between Rosana, William and Brian, as well as the Reps team. Presented a slide deck outlining the hypothesis, to help shape what comes next.

Support External Comms of Firefox OS (Global Communications Group)

We want to better support communications and announcements about important issues in Mozilla, make them more valuable for the community, and provide a way to collect feedback. We recorded as many channels as we could; people saw many uses for this list and were enthusiastic. We learned that communities are more fragmented than we thought: lots of groups using lots of channels, so communicating centrally is especially difficult. Identifying NDA status is also challenging and complex. The volunteer members of this team are doing a fantastic job.

Volunteers at Whistler

Spoke with 25+ team leaders to get them thinking strategically about whom they invite, which came with a letter from Chris Beard. Wrangled a final list; 98% are now booked for travel. Many hands make great work: a team effort with Brianna, Francisco and Brian. As a result, teams are thinking of volunteers in a more strategic way, and the reasons behind invitations are clearer to the whole community. We will continue to work on a plan for the work week and volunteers.

Tracking Experiments – Participation Lab

This project is two-fold: creating a system to bring experiments currently going on in the organization into the 'Participation Lab', and tracking the experiments in a way that is beneficial to all. We are currently tracking 10 experiments, 7 focused and 3 distributed. People (community and staff) appear to feel supported, optimistic and grateful to have monitoring and support for their goals and hypotheses, and assistance measuring success. We will continue to track and offer support to move these along to their goals during the next heartbeat.


Mark Banner: Using eslint alongside the Firefox Hello code base to help productivity

Wed, 13/05/2015 - 21:19

On Firefox Hello, we recently added the eslint linter to be run against the Hello code base. We started off with a minimal set of rules, just enough to get something running. Now we're working on enabling more rules.

Since we enabled it, I feel like I’m able to iterate faster on patches. For example, if just as I finish typing I see something like:

eslint syntax error in Sublime

I know almost immediately that I've forgotten a closing bracket, and I don't have to run anything to find out – fewer run-edit-run cycles.

Now that I think about it, I'm realising it has also helped reduce the number of review nits on my patches, since trivial formatting mistakes, e.g. trailing whitespace or missing semicolons, are caught automatically.

Talking about reviews, as we’re running eslint on the Hello code, we just have to apply the patch, and run our tests, and we automatically get eslint output:

eslint output - no trailing spaces

Hopefully our patch authors will be running eslint before uploading the patch anyway, but this is an additional check, and a few fewer things that we need to look at during review, which helps speed up that cycle as well.

I’ve also put together a global config file for eslint (see below) that I use outside of the Hello code, on the rest of the Firefox code base (and other projects). This is enough that, when using it in my editor, it gives me a reasonable amount of information about bad syntax without complaining about everything.

I would definitely recommend giving it a try. My patches feel faster overall, and my test runs are for testing, not stupid-mistake catching!

Want more specific details about the setup and advantages? Read on…

My Setup

For my setup, I’ve recently switched to using Sublime. I used to use Aquamacs (an emacs variant), but when eslint came along, the UI for real-time linting within emacs didn’t really seem great.

I use sublime with the SublimeLinter and SublimeLinter-contrib-eslint packages. I’m told other editors have eslint integration as well, but I’ve not looked at any of them.

You need to have eslint installed globally, or at least in your path; other than that, just follow the installation instructions given on the SublimeLinter page.

One configuration change I did have to make to the global settings:

  • Open up a normal javascript (*.js) file.
  • Select “Preferences” -> “Settings – More” -> “Syntax Specific – User”
  • In the file that appears, set the configuration up as follows (or whatever suits you):
{ "extensions": [ "jsm", "jsx", "sjs" ] }

This makes sure sublime treats the .jsm and .jsx files as javascript files, which amongst other things turns on eslint for those files.

Global Configuration

I’ve uploaded my global configuration to a gist; if it changes I’ll update it there. It isn’t intended to catch everything – there are too many inconsistencies across the code base for that to be sensible at the moment. However, it does at least allow general syntax issues to be highlighted for most files – which is obviously useful in itself.
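As a sketch of the shape such a file takes (the specific rules and settings here are illustrative, not the contents of the actual gist), a minimal global .eslintrc along these lines catches general syntax problems without being noisy:

```json
{
  "env": {
    "browser": true
  },
  "rules": {
    "no-trailing-spaces": 2,
    "semi": [2, "always"],
    "no-undef": 1
  }
}
```

At the time of writing, eslint uses numeric severities: 0 disables a rule, 1 reports a warning, and 2 reports an error.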

I haven’t yet tried running it across the whole code base via eslint on the command line – there seems to be some sort of configuration issue that is messing it up and I’ve not tracked it down yet.

Firefox Hello’s Configuration

The configuration files for Hello can be found in the mozilla-central source. There are a few of these because we have both content and chrome code, and some of the content code is shared with a website that can be viewed by most browsers, and hence isn’t currently able to use all the ES6 features, whereas the chrome code can. This is another thing that eslint is good at enforcing.

Our eslint configuration is evolving at the moment, as we enable more rules, which we’re tracking in this bug.

Any Questions?

Feel free to ask any questions about eslint or the setup in the comments, or come and visit us in #loop on irc.mozilla.org (IRC info here).

Categorieën: Mozilla-nl planet

Air Mozilla: Product Coordination Meeting

wo, 13/05/2015 - 20:00

Product Coordination Meeting Duration: 10 minutes This is a weekly status meeting, every Wednesday, that helps coordinate the shipping of our products (across 4 release channels) in order...

Categorieën: Mozilla-nl planet

Air Mozilla: The Joy of Coding (mconley livehacks on Firefox) - Episode 14

wo, 13/05/2015 - 19:00

The Joy of Coding (mconley livehacks on Firefox) - Episode 14 Watch mconley livehack on Firefox Desktop bugs!

Categorieën: Mozilla-nl planet

Mozilla Addons Blog: Compatibility update: Firefox 38 and 38.0.5

wo, 13/05/2015 - 18:46

Following up on my previous post on Firefox 38 compatibility, I wanted to highlight an important change that I missed: in-content preferences. The preferences window is no more, and instead the main preferences UI is shown in a new tab. This only affects add-ons that overlay the preferences window, which should be rare.

Additionally, there’s an out-of-cycle release planned for June 2nd, which will go by the version number 38.0.5. This is 38 with a few additions, which you can see in the release notes. These additions shouldn’t conflict with extensions compatible with 38, but they could conflict with complete themes. The new release is now available on the beta channel, so we recommend you test your add-ons on that version.

Categorieën: Mozilla-nl planet

Mozilla Science Lab: Registration open for #mozsprint 2015!

wo, 13/05/2015 - 17:39


Join us as we learn to build projects helping researchers leverage the open web! Registration is now open for our second global sprint, June 4-5, 2015. We could use your help to make this year’s #mozsprint bigger and better than last year’s.

HOW TO REGISTER

Ready to join us for two days of learning and building together? To register:

  1. Go to our Planning Etherpad
  2. Find a listed site near you (sites start after line 162)
  3. Add your name/twitter/github under ‘Participants’ for that site

Can’t find a site near you? You can still join us remotely! Add your name under ‘Remote Participants’ at the bottom of the pad, and join in from anywhere.

Categorieën: Mozilla-nl planet

Gervase Markham: Anonymity and the Secure Web

wo, 13/05/2015 - 10:56

Ben Klemens has written an essay criticising Mozilla’s moves towards an HTTPS web. In particular, he is worried about the difficulty of setting up an HTTPS website and the fact that (as he sees it) getting a certificate requires the disclosure of personal information. There were some misunderstandings in his analysis, so I wanted to add a comment to clarify what we are actually planning to do, and how we are going to meet his concerns.

However, he wrote it on Medium. Medium does not have its own login system; it only permits federated login using Twitter or Facebook. Here’s the personal information I would have to give away to Medium (and the powers I would have to give it) in order to comment on his essay about the problems Mozilla are supposedly causing by requiring people to give away personal information:

twitter

Don’t like that? That’s OK, I could use Facebook login, if I was willing to give away:

facebook

So I’ll have to comment here and hope he sees it. (Anyone who has decided the tradeoffs on Medium are worth it could perhaps post the URL in a comment for me.)

The primary solution to his issues is Let’s Encrypt. With Let’s Encrypt, you will be able to get a cert, which works in 99%+ of browsers anyone uses, without needing to supply any personal information or to pay, and all at the effort of running a single command on the command line. That is, the command line of the machine (or VM) that you have rented from the service provider and to whom you gave your credit card details and make a monthly payment to put up your DIY site. That machine. And the cert will be for the domain name that you pay your registrar a yearly fee for, and to whom you have also provided your personal information. That domain name.
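To make “a single command” concrete – and to be clear, the client name and flags below are illustrative assumptions about what the tooling will look like, not the final interface – certificate issuance would be roughly:

```shell
# Illustrative sketch only: the eventual Let's Encrypt client and its exact
# flags are assumptions. The point is that issuance is one command, run on
# the same rented server that already has your details on file.
letsencrypt-auto certonly --webroot -w /var/www/example -d example.com
```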

If you have a source of free, no-information-required server hosting and free, no-information-required domain names (as Ben happens to for his Caltech Divinity School example), then it’s reasonable to say that you are a little inconvenienced if your HTTPS certificate is not also free and no-information-required. But most people doing homebrew DIY websites aren’t in that position – they have to rent such things. Once Let’s Encrypt is up and running, the situation with certificates will actually be easier and more anonymous than that with servers or domain names.

“Browsers no longer supporting HTTP” may well never happen, and it’s a long way off if it does. But insofar as the changes we do make are some small infringement on your right to build an insecure website, see it as a civic requirement, like passing a driving test. This is a barrier to someone just getting in a car and driving, but most would suggest it’s reasonable given the wider benefit to society of training those in control of potentially dangerous technology. Given the Great Cannon and similar technologies, which can repurpose accesses to any website as a DDOS tool, there are no websites which “don’t need to be secure”.

Categorieën: Mozilla-nl planet

John O'Duinn: Interviews about “Work from home” policies at Facebook, Virgin and yes, Yahoo!

wo, 13/05/2015 - 09:04

When I talk about “remoties”, I frequently get asked my thoughts on Yahoo’s now (in)famous “no more work-from-home” policy.

Richard Branson’s comments (Virgin, first video in the link) and Jackie Reses’ separate comments (Yahoo, 2:27 into the second video) confirm what I’d heard from multiple unofficial mutterings – that Yahoo’s now (in)famous “no more work from home” decree was actually intended as a way to jolt the company culture into action.

I also liked Sheryl Sandberg’s (Facebook) comments about how a successful remote workplace depends on having clear measures of successful results. Rather than valuing someone by how many hours they are seen working in the office, it is better to have a company culture where you measure people by results. This echoes comments I’ve seen from Jason Fried in his “Remote” book, comments I’ve made in my “we are all remoties” presentations, and things I’ve heard again and again from various long-term remote workers.

These two interviews discuss these points really well. The entire article is well worth a read, and both videos are only a few minutes long, so worth the quick watch.

Encouraging!

Categorieën: Mozilla-nl planet

Andy McKay: Rebuilding your application

wo, 13/05/2015 - 09:00

Building a complex, long-term application? Periodically setup your local development environment from scratch.

— Jason Garber (@jgarber) May 6, 2015

If you have a complex long-term application, it will be complex to set up. That means:

  • it gets harder to bring new developers on board
  • it gets harder to add in new pieces
  • bugs start to occur between different developers’ deployments
  • setting up the application becomes handed down knowledge

The worst thing is that you can’t see the technical debt that is piling up. It’s underneath your application, but because you never set up your development environment again, you don’t see it. You can stay productive; setting up the application becomes a rite of passage for other people.

Reduce the number of settings. Make it work well out of the box. Cut the setup down to as few steps as possible. Document it. Script it.

Once you've got the rebuild happening quickly, you won't fear setting up your application again.
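A hedged sketch of what “script it” can mean – the file names and steps here are invented for illustration, not from any particular project:

```shell
#!/bin/sh
# bootstrap.sh -- hypothetical one-step setup script; names are illustrative.
set -e                                    # stop at the first broken step

# Ship a working default config so the app runs out of the box:
printf '{ "port": 8000 }\n' > config.example.json
[ -f config.json ] || cp config.example.json config.json

echo "setup complete"
```

The point isn’t this particular script; it’s that every step a new developer would otherwise learn as handed-down knowledge is written down, automated, and fails loudly when it breaks.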

Categorieën: Mozilla-nl planet

Ian Bicking: A Product Journal: Community Participation

wo, 13/05/2015 - 07:00

I’m blogging about the development of a new product in Mozilla, look here for my other posts in this series

Generally at Mozilla we want to engage and activate our community to further what we do. Because all our work is open source, and we default to open on our planning, we have a lot of potential to include people in our work. But removing barriers to participation doesn’t make participation happen.

A couple reasons it’s particularly challenging:

  1. Volunteers and employees work at different paces. Employees can devote more time, and can have pressures to meet deadlines so that sometimes the work just needs to get done. So everything is going fast and a volunteer can have a hard time keeping up. Until the project is cancelled and then wham, the employees are all gone.

  2. Employees become acclimated to whatever processes are asked of them, because whether they like it or not that’s the expectation that comes with their paycheck. Sometimes employees put up with stupid shit as a result. And sometimes volunteers aren’t willing to make investments to their process even when it’s the smart thing to do, ‘cause who knows how long you’ll stick around?

  3. Employee work has to satisfy organizational goals. The organization can try to keep these aligned with mission goals, and keep the mission aligned with the community, but when push comes to shove the organization’s goals – including the goals that come from the executive team – are going to take priority for employees.

  4. Volunteers are unlikely to be devoted to Mozilla’s success. Instead they have their own goals that may intersect with Mozilla’s. This overlap may only occur on one project.  And while that’s serendipitous, limited overlap means a limit on the relationships those volunteers can build, and it’s the relationships that are most likely to retain and reward participation.

I have a theory that agency is one of the most important attractors to open source participation. Mozilla, because of its size and because it has a corporate structure, does not offer a lot of personal agency. Though in return it does offer some potential of leverage.

I am not sure what to do with respect to participation in PageShot. If I open things up more, will anyone care? What would people care about? Maybe people would care about building a product. Maybe the building blocks would be more interesting. We have an IRC channel, but we also meet regularly over video, which I think has been important for us to assimilate the concept and goals of the project. Are there other people who would care to show up?

I’m also somewhat conflicted about trying to bring people in. Where will PageShot end up? The project could be cancelled. It’s open source, sure, but is it interesting as open source if it’s a deadend addon with no backing site? Our design is focused on making something broadly appealing such that it could be included in the browser – and if things go well, the addon will be part of the browser itself. If that happens (and I hope it will!) even my own agency with respect to the project will be at threat. That’s what it means to get organizational support.

If the project was devolved into a set of libraries, it would be easier to contribute to, and easier for volunteers to find value in their participation. Each piece could be improved on its own, and can live on even if the product that inspired the library does not continue. People who use those libraries will maintain agency, because they can remix those libraries however they want, include them in whatever product of their own conception that they have. The problem: I don’t care about the libraries! And I don’t want this to be a technology demonstration, I want it to be a product demonstration, and libraries shift the focus to the wrong part.

Despite these challenges, I don’t want to give up on the potential of participation. I just doubt it would look like normal open source participation. I’ve expanded our participation section, including an invitation to our standup meetings. But mostly I need to know if anyone cares, and if you do: what do you care about, and what do you want from your participation?

Categorieën: Mozilla-nl planet
