Mozilla Nederland – the Dutch Mozilla community

Gervase Markham: A New Scam?

Mozilla planet - Sat, 06/01/2018 - 21:38

I got this email recently; I’m 99% sure it’s some new kind of scam, but it’s not one I’ve seen before. Anyone have any info? Seems like it’s not totally automated, and targets Christians. Or perhaps it’s some sort of cult recruitment? The email address looks very computer-generated (firstnamelastnamesixdigits@gmail.com).

Good morning,

I am writing in accordance to my favourite Christian website, I could do with sending you some documents regarding Christ. I am a Christian since the age of 28, when I got a knock at the door at my house by a group of males asking me to come to a Christian related event, I of course graciously accepted.

I have since opened up about my homosexuality which my local church somewhat accepted, as I am of course, one of the most devout members of the Church. I am very grateful to the church for helping me discover whom I really was at a time where I needed to discover who I was the most.

I would like to obtain your most recent address, as I have seen on your website that you have recently moved house (as of 2016) to a Loughborough address. I would like to send you some documents regarding my struggles with depression and then finding God and how much he helped me discover my real identity.

I thank you very much for your aid in helping me find God and Christ within myself, as you helped me a lot with your website and your various struggles, which gave me strength to succeed and to carry on in the name of Jesus Christ, our Lord and Saviour.

Hope to hear a reply soon,

Kind regards,

<name>

Categories: Mozilla-nl planet

Cameron Kaiser: Is PowerPC susceptible to Spectre? Yep.

Mozilla planet - Sat, 06/01/2018 - 21:09
UPDATE: Yes, TenFourFox will implement the relevant Spectre-hardening features being deployed to Firefox, and the changes to performance.now() will be part of FPR5 final. We also don't support SharedArrayBuffer, and right now are not likely to implement it any time soon.

UPDATE the 2nd: This post is getting a bit of attention and was really only intended as a quick skim, so if you're curious whether all PowerPC chips are vulnerable in the same fashion and why, read on for a deeper dive.

If you've been under a rock the last couple days, then you should read about Meltdown and Spectre (especially if you are using an Intel CPU).

Meltdown is specific to x86 processors made by Intel; it does not appear to affect AMD. But virtually every CPU going back decades that has a feature called speculative execution is vulnerable to some variant of the Spectre attack. In short, for those processors that execute "future" code downstream in anticipation of the results of certain branching operations, Spectre exploits the timing differences that occur when certain kinds of speculatively executed code change what's in the processor cache. The attacker may not be able to read the memory directly, but they can find out whether it's in the cache by looking at those differences (in broad strokes, data in the cache is accessed more quickly), and/or exploit those timing changes as a way of signalling the attacking software with the actual data itself. Although only certain kinds of code are vulnerable to this technique, an attacker could trick the processor into mistakenly speculatively executing code it wouldn't ordinarily run. These side effects are intrinsic to the processor's internal implementation of this feature, though the attack is made easier if you have the source code of the victim process, which is increasingly common.

The Power ISA is fundamentally vulnerable going back even to the days of the original PowerPC 601, as are virtually all current architectures, and there are no simple fixes. So what's the practical impact to Power Macs? Well, not much. As far as directly executing an attacking application goes, there are a billion more effective ways to write a Trojan horse than this, and it would have to be PowerPC-specific (possibly even CPU-family specific, due to microarchitectural changes) to be functional. It's certainly possible to devise JavaScript that could attack the cache in a similar fashion, especially since TenFourFox implements a PowerPC JIT, but such an attack would -- surprise! -- almost certainly have to be PowerPC-specific too, and the TenFourFox JIT doesn't easily give up the instruction sequences necessary. Either way, even if the attacker knew exactly which memory they wanted to read and went to its address immediately, the attack would be rather slow on a Power Mac, and you'd definitely notice the CPU usage whether it succeeded or not.

There are ways to stop speculative execution using certain instructions the processor must serialize, but this can seriously harm performance: speculative execution, after all, is a way to keep the processor busy with (hopefully) useful work while it waits for previous instructions to complete. On PowerPC, cache manipulation instructions, some kinds of special-purpose register accesses, and even instructions like b . (branch to the next instruction, essentially a no-op) can halt speculative execution with a sometimes notable time penalty. I think there may be ways we can harden the TenFourFox JIT by using these instructions selectively to reduce their overhead, though as I say, I don't find such attacks very practical on our geriatric machines in general.

Anyway, you can sleep well, because everybody's in the same boat. Perhaps it's time to dust off those old strict CPUs. The world needs a port of Classilla to the Commodore 64. :)


Jared Hirsch: An idea for fixing fake news with crypto

Mozilla planet - Sat, 06/01/2018 - 01:09

Here’s a sketch of an idea that could be used to implement a news trust system on the web. (This is apropos of Mozilla’s Information Trust Initiative, announced last year.)

Suppose we used public key crypto (digital badges?) to make some verifiable assertions about an online news item:

  • person X wrote the headline / article
  • news organization Y published the headline / article
  • person X is a credentialed journalist (degreed or affiliated with a news organization)
  • news organization Y is a credentialed media organization (not sure how to define this)
  • the headline / article has not been altered from its published form

The article could include a meta tag in the page’s header that links to a .well-known/news-trust file at a stable URL, and that file could then include everything needed to verify the assertions.
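The "has not been altered" assertion is the easiest one to make concrete. Here is a minimal Python sketch of how a verifier might check an article against a published digest; the record format, field names, and canonicalization step are all assumptions on my part, and a real deployment would use a public-key signature over the record rather than the bare hash shown here:

```python
import hashlib
import json

def canonicalize(text: str) -> bytes:
    # Assumption: normalize whitespace so harmless reformatting by the
    # publisher's CMS doesn't break verification.
    return "\n".join(line.strip() for line in text.strip().splitlines()).encode("utf-8")

def make_news_trust_record(article: str) -> str:
    # A hypothetical entry in the /.well-known/news-trust file.
    return json.dumps({
        "digest_alg": "sha256",
        "digest": hashlib.sha256(canonicalize(article)).hexdigest(),
    })

def verify_article(article: str, record_json: str) -> bool:
    # An aggregator would fetch the record from the URL in the meta tag,
    # then recompute the digest over the article it actually displays.
    record = json.loads(record_json)
    actual = hashlib.sha256(canonicalize(article)).hexdigest()
    return record.get("digest_alg") == "sha256" and record.get("digest") == actual

record = make_news_trust_record("Headline\n\nBody of the article.")
print(verify_article("Headline\n\nBody of the article.", record))          # True
print(verify_article("Altered headline\n\nBody of the article.", record))  # False
```

The same record could carry signed assertions about authorship and credentials; only the integrity check is sketched here.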

If this information were available to social media feeds / news aggregators, then it’d be easy to:

  • automatically calculate the trustworthiness of shared news items at internet scale
  • inform users which items are trustworthy (for example, show a colored border around the item)
  • factor trustworthiness into news feed algorithms (allow users to hide or show lower-scored items, based on user settings)

One unsolved issue here is who issues the digital credentials for media. I need to look into this.

What do you think? Is this a good idea, a silly idea, something that’s been done before?

I don’t have comments set up here, so you can let me know on twitter or via email.


Mozilla B-Team: Looking back at Bugzilla and BMO in 2017

Mozilla planet - Fri, 05/01/2018 - 06:46

dylanwh:

Recently in the Bugzilla Project meeting, Gerv informed us that he would be resigning, and it was pretty clear that my lack of technical leadership was the cause. While I am sad to see Gerv go, it did make me realize I need to write more about the things I do.

As is evident in this post, all of the things I’ve accomplished have been related to the BMO codebase and not upstream Bugzilla – which is why upstream must be rebased on BMO. See Bug 1427884 for one of the blockers to this.

Accessibility Changes

In 2017, we made over a dozen a11y changes, and I've heard from a well-known developer that using BMO with a screen reader is far superior to using other Bugzilla installations. :-)

Infrastructure Changes

BMO is quite happy to use carton to manage its Perl dependencies, and Docker to handle its system-level dependencies.

We’re quite close to being able to run on Kubernetes.

While the code is currently turned off in production, we also feature a very advanced query translator that allows the use of ElasticSearch to index all bugs and comments.

Performance Changes

I sort of wanted to turn each of these into a separate blog post, but I never got time for that – and I’m even more excited about writing about future work. But rather than just let them hide away in bugs, I thought I’d at least list them and give a short summary.

February
  • Bug 1336958 - HTML::Tree requires manual memory management or it leaks memory. I discovered this while looking at some unrelated code.
  • Bug 1335233 - I noticed that the job queue runner wasn’t calling the end-of-request cleanup code, and as a result it was also leaking memory.
March
  • Bug 1345181 - make html_quote() about five times faster.
  • Bug 1347570 - make it so apache in the dev/test environments didn’t need to restart after every request (by enforcing a minimum memory limit)
  • Bug 1350466 - switched JSON serialization to JSON::XS, which is nearly 1000 times faster.
  • Bug 1350467 - caused more modules (those provided by optional features) to be preloaded at apache startup.
  • Bug 1351695 - Pre-load “.htaccess” files and allow apache to ignore them
April
  • Bug 1355127 - rewrote a template that is in a tight loop to Perl.
  • Bug 1355134 - fetch all groups at once, rather than row-at-a-time.
  • Bug 1355137 - Cache objects that represent bug fields.
  • Bug 1355142 - Instead of using a regular expression to “trick” Perl’s string tainting system, use a module to directly flip the “taint” bit. This was hundreds of times faster.
  • Bug 1352264 - Compile all templates and store them in memory. This actually saved both CPU time and RAM, because the memory used by templates is shared by all workers on a given node.
May
  • Bug 1362151 - Cache bzapi configuration API, making ‘bz export’ commands (on developer machines) faster by 2-5 seconds.
  • Bug 1352907 - Rewrite the Bugzilla extension loading system. The previous one was incredibly inefficient.
June
  • Bug 1355169 - Mentored intern to implement token-bucket based rate limiting. Not strictly a performance thing but it reduced API abuse.
December
  • Bug 1426963 - Use a hash lookup to determine group membership, rather than searching an unsorted list.
  • Bug 1427230 - Avoid loading CGI::Carp, which made templates slow.
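The hash-lookup change above is a classic micro-optimization: a membership test against an unsorted list is O(n), while a hash lookup is O(1) on average. BMO's code is Perl, but the idea is language-agnostic; here is an illustrative Python sketch with made-up group names:

```python
import timeit

groups = [f"group-{i}" for i in range(5000)]  # unsorted list of group names
group_set = set(groups)                       # hash-based lookup structure

# Probe for the last element: the worst case for a linear scan,
# but an average case for the hash lookup.
list_time = timeit.timeit(lambda: "group-4999" in groups, number=1000)
set_time = timeit.timeit(lambda: "group-4999" in group_set, number=1000)

print(f"list scan: {list_time:.4f}s, hash lookup: {set_time:.4f}s")
```

On data like this the hash lookup is typically several orders of magnitude faster, which is why a one-line idea earns a spot in a performance roundup.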
Developer Experience Changes

My favorite communities optimize for fun. Frequently fun means being able to get things done. So in 2017 I did the following:

  • Made a Vagrant development environment setup that closely maps to BMO production.
      • I tested installing it on various machines – Linux, OS X, Windows.
      • I wrote a README explaining how to use it.
      • This dev environment has been tested by people with little or no experience with Bugzilla development.
  • I changed to a pull-request-based workflow. We use Bugzilla to track bugs and tasks, but not for code review.
  • I made it so the entire test suite could run against pull requests. This isn’t trivial: you have to work a bit harder to build docker images and run them without having any dockerhub credentials. (Pull requests don’t get any dockerhub credentials – I have to say this to make sure my friend ulfr doesn’t have a heart attack.)
  • I made sure that I understood how to use Atom and Visual Studio Code. I actually rather like the latter now – and, more importantly, it is easy to help out newcomers with these editors.
  • I adopted Perl::Critic for code linting and Perl::Tidy for code formatting, using the PBP ruleset for the latter. I also made it a point to not make code style a part of code review – let the machine do that.
Numbers

In the last year, we had almost 500 commits to the BMO repo, from 20 different people. Some people were new, and some were returning contributors (such as Sebastin Santy).


Mozilla Open Policy & Advocacy Blog: Mozilla statement on breach of Aadhaar data

Mozilla planet - Fri, 05/01/2018 - 05:08

Mozilla is deeply concerned about recent reports that a private citizen was able to easily access the private Aadhaar data of more than one billion Indian citizens as reported by The Tribune.

Despite declaring in November that the Aadhaar system had strict privacy controls, the Unique Identification Authority of India (UIDAI) has failed to protect the private details entrusted to them by Indians, proving the concerns that Mozilla and other organizations have been raising. Breaches like this demonstrate the urgent need for India to pass a strong data protection law.

Mozilla has been raising concerns about the security risks of companies using and integrating Aadhaar into their systems, and this latest, egregious breach should be a giant red flag to all companies as well as to the UIDAI and the Modi Government.

The post Mozilla statement on breach of Aadhaar data appeared first on Open Policy & Advocacy.


Niko Matsakis: Lessons from the impl period

Mozilla planet - Fri, 05/01/2018 - 00:00

So, as you likely know, we tried something new at the end of 2017. For roughly the final quarter of the year, we essentially stopped doing design work and instead decided to focus on implementation – what we called the “impl period”. We had two goals for the impl period: (a) to get a lot of high-value implementation work done and (b) to do that by expanding the size of our community and making it easy for new people to get involved. To that end, we spun up about 40 working groups – really a tremendous figure when you think about it – each devoted to a particular task.

For me personally, this was a very exciting three months. I really enjoyed the enthusiasm and excitement that was in the air. I also enjoyed the opportunity to work in a group of people collectively trying to get our goals done – one thing I’ve found working on an open-source project is that it is often a much more “isolated” experience than working in a more traditional company. The impl period really changed that feeling.

I wanted to write a brief post kind of laying out my experience and trying to dive a bit into what I felt worked well and what did not. I’d very much like to hear back from others who participated (or didn’t). I’ve opened up a dedicated thread on internals for discussion, please leave comments there!

TL;DR

If you don’t want to read the details, here are the major points:

  • Overall, the impl period worked great. Having structure to the year felt liberating and I think we should do more of it.
  • We need to grow and restructure the compiler team around the idea of mentoring and inclusion. I think having more focused working groups will be a key part of that.
  • We have work to do on making the compiler code base accessible, beginning with top-down documentation but also rustdoc.
  • We need to develop skills and strategies for how to split tasks up.
  • IRC isn’t great, but Gitter wasn’t either. The search for a better chat solution continues. =)
Worked well: establishing focus and structure to the year

Working on Rust often has this kind of firehose quality: so much is going on at once. At any one time, we are:

  • fixing bugs in existing code,
  • developing code for new features that have been designed,
  • discussing the minutiae and experience of some existing feature we may consider stabilizing,
  • designing new features and APIs via RFCs.

It can get pretty exhausting to keep all that in your head at once. I really enjoyed having a quarter to just focus on one thing – implementing. I would like us to introduce more structure into future years, so that we can have a time when we are just focused on design, and so forth.

I also appreciated that the impl period imposed a kind of “soft deadline”. I found that helpful for defining our scope. I felt like it ensured that difficult discussions did reach an end point.

That said, I don’t think we managed this deadline especially well this year. The final discussions were pretty frantic and it was hard – no, impossible – to keep up with all of them (I know I certainly couldn’t, and I work on Rust full time). Clearly in the future we need to manage the schedule better, and make sure that design work is happening at a more measured pace. I think that having more structure to the year can help with that, by ensuring that we do the design work at the time it needs to get done.

Worked well: newcomers developing key, important features

Earlier, I said that the goals of impl period were to (a) get a lot of high-value implementation work done and (b) to do that by expanding the size of our community. There is a bit of a tension there: if you have some high-value new feature, there is a tendency to think that we should have an established developer do it. After all, they know the codebase, and they will get it done the fastest. That is (often) true, but it is not the complete story.

What we wanted to do in the impl period was to focus on bringing new people into the project. Hopefully, many of those people will stick around, working on new projects, and eventually becoming experienced Rust compiler developers themselves. This increases our overall bandwidth and grows our community, making us stronger.

And even when people don’t have time to keep hacking on the Rust compiler, there are still advantages to developing through mentoring. The fact is that coding takes a lot of time. A single experienced developer can only really effectively code up a single feature at a time, but they can be mentoring many people at once.

Still, it must be said, there are plenty of people who just enjoy coding and who don’t particularly want to do mentoring. So obviously we should ensure we always have a place for experienced devs who just want to code.

Worked mostly well: smaller working groups

First and foremost, a key part of our plan was breaking up tasks into working groups. A working group was meant to be a small set of people focused on a common goal. The hope was that having smaller groups would make it easier for people to get involved and would also encourage more collaboration.

I felt the working groups worked best when they had relatively clear focus and an active leader: the NLL group is a good example. It was great to see the people in the chatrooms working together and starting to help one another out when more experienced devs weren’t available.

Other working group divisions worked less well. For example, there were a few groups in the compiler that were not specific to particular tasks, but rather parts of the compiler pipeline: WG-compiler-front, WG-compiler-middle, etc. Lots of people participated in those groups, and a lot got done, but the division into groups felt a bit more arbitrary to me. It wasn’t always clear where to put the tasks.

Going forward, I continue to think there is a role for working groups, but I think we should try to keep them focused on goals, not on the parts of the project that they touch.

Worked well: clear mentoring instructions

I’ve noticed something: if you tag a bug on the Rust issue tracker as E-Easy and leave a comment like “ping me on IRC”, it can easily sit there for years and years. But if you write some mentoring instructions – that is, lay out the steps to take – it will be closed, often within hours.

This makes total sense. You want to make sure that all the tools people need to hack on Rust are ready and immediately available. This way, when somebody says “I have a few hours, let me see if I can fix a bug in rustc”, they can seize the moment. If you say “ping me on IRC”, then it may well be that you are not available at that time. Or that may be intimidating. In general, every roadblock gives them a chance to get distracted.

Of course, ideally mentoring doesn’t stop at mentoring instructions. Especially for more complex projects, I often find myself scheduling times with people so that we can have an hour or two to discuss directly what is going on, often with screen sharing or a voice call. That doesn’t always work – timezones being what they are – but when it does, it can be a big win.

Clear problem: lack of leadership bandwidth

One problem we encountered is that there just weren’t enough experienced rustc developers who were willing and able to lead up working groups. Writing mentoring instructions is hard work. Breaking up a big task into smaller parts is hard work. This is a problem outside of the impl period too. It’s hard to balance all the maintenance, bug fixing, performance monitoring, and new feature development work that needs to get done.

I don’t see a real solution here other than growing the set of people who hack on rustc. I think this should be a top priority for us. I think we should try to incorporate the idea of “contributor accessibility” into our workflow wherever possible. In other words, we should have clear paths for (a) how to get started hacking on rustc and then (b) once you’ve gotten a few PRs under your belt, how to keep growing. The impl period focused on (a) and it’s clear we do pretty well there, but have room for improvement. Part (b) is harder, and I think we need to work on it.

Clear problem: rustc documentation

One problem that makes writing mentoring instructions very difficult is that the compiler is woefully underdocumented. At the start of the impl period, many of the basic idioms and concepts (e.g., what is “the HIR” or “the MIR”? what is this 'tcx I see everywhere?) were not written up at all. It’s somewhat better now, but not great.

We also lack documentation on common workflows. How do I build the compiler? How do I debug things and get debug logs? How do I run an individual test? Some of this exists, but not always in an easy-to-find place.

I think we really need to work on this. I’d like to form a working group and focus on it early this year – but more on that later. (If you’re interested in the idea of helping to document the compiler, though, please contact me, or stay tuned!)

Clear problem: some tasks are hard to subdivide

One thing we also found is that some tasks are just plain hard to subdivide. I think a good example of this was incremental compilation: it seems like, in principle, there ought to be a lot of things that can be done in parallel there. And we had some success with newcomers, for example, picking off tasks relating to testing and doing other refactorings. I think we need to work on better strategies here. Knowing how to structure tasks for massive participation is a skillset – not unrelated to coding, but clearly distinct from it. I don’t have answers yet, but I suspect we can gain experience with this as a community and find best practices.

In the case of NLL, the model that seemed to work best was to have one more experienced developer pushing on the “main trunk” of development (myself), while actively seeking places to spin out isolated tasks into issues that could be mentored. To avoid review and bors latency from slowing us down, we used a dedicated feature branch on my repo (nll-master), and I would periodically open up pull requests containing a variety of commits. This seemed to work out pretty well – oh, and by the way, the job is not done. If you’re still hoping to get involved, we’ve still got plenty of work to do. =) (Though most of those issues do not yet have mentoring instructions.)

Mixed bag: gitter and dedicated chat rooms

One key part of our experiment was moving from a small number of chat rooms on IRC (e.g., #rustc) to dedicated rooms on Gitter, one per working group. I had mixed feelings about this.

Let me start with the pros of Gitter itself:

  • Gitter means everybody has a persistent connection. It is great to be able to send someone a message when they may or may not be online, and get an answer sometime later.
  • Gitter means everything can be easily linked from the web. I love being able to make a link to some conversation with one click and copy it into a GitHub issue. I love being able to link to a Gitter chat room very easily.
  • Gitter means single sign on and only one name to remember. I love that I can just use people’s GitHub names, which makes it easier for me to then correlate their pull requests, or checkout their fork of Rust, etc.

But there are some pretty big cons, mostly having to do with Gitter being buggy. The Android client doesn’t deliver notifications (and maybe other clients as well). The IRC bridge seems to mostly work, but sometimes people get funny names (e.g., I think the Discord bridge has only one user?), or we hit other arbitrary limits.

Similarly, I felt like having dedicated rooms had pros and cons. On the one hand, it was really helpful to me personally. I find it hard to keep up with #rustc on IRC. I liked that I could be sure to read every message in WG-compiler-nll, but I could just skim over groups like WG-compiler-const that I was not directly involved in.

On the other hand, a bigger room offers more opportunity for “cross talk”. People have told me that they like having the chance to hear something interesting. And others found it was hard to follow all the rooms they were interested in.

Finally, I found that I personally still wound up doing a lot of mentoring over private messages. This is not ideal, because it doesn’t offer visibility to the rest of the group, and you can wind up repeating things, but – particularly when you’re discussing asynchronously – it’s often the most natural way to set things up.

I don’t know what’s the ideal solution here, but I do think there’s going to be a role for smaller chat rooms (though probably not based on Gitter).

Conclusion

The impl period was awesome. We got a lot of things done. And I do mean we: the vast majority of that work was done by newcomers to the community, many of whom had never worked on a compiler before. I loved the overall enthusiasm that was in the air. To me, it felt like what open source is supposed to be like.

Of course, though, there are things we can do better. I hope to drill into these more in later posts (or perhaps forum discussion), but I think the most important thing is that we need to think carefully about how to enable mentoring and inclusion throughout our team structure. I think we do quite well, but we can do better – and in particular we should think more about how to help people who have already done a few PRs take the next step.

Advertisement

As you may have heard, we’re trying something new this year. We’re encouraging people to write blog posts about what they think Rust ought to focus on for 2018 – if you do it, you can either tweet about it with the hashtag #Rust2018, or else e-mail community@rust-lang.org. I’m pretty excited about this; I’ve been enjoying reading the posts that have arrived thus far, and I plan to write a few of my own!


Hacks.Mozilla.Org: New flexbox guides on MDN

Mozilla planet - Thu, 04/01/2018 - 17:10

In preparation for CSS Grid shipping in browsers in March 2017, I worked on a number of guides and reference materials for the CSS Grid specification, which were published on MDN. With that material updated, we thought it would be nice to complete the documentation with similar guides for Flexbox, and so I updated the existing material to reflect the core use cases of Flexbox.

This works well; with the current state of the specs, Flexbox now sits alongside Grid and Box Alignment to form a new way of thinking about layout for the web. It is useful to reflect this in the documentation.

The new docs at a glance

I’ve added eight new guides to MDN:

One of the things I’ve tried to do in creating this material is to bring Flexbox into place as one part of an overall layout system. Prior to Grid shipping, Flexbox was seen as the spec to solve all of our layout problems, yet a lot of the difficulty in using Flexbox is when we try to use it to create the kind of two-dimensional layouts that Grid is designed for. Once again, we find ourselves fighting to persuade a layout method to do things it wasn’t designed to do.

Therefore, in these guides I have concentrated on the real use cases of Flexbox, looked at where Grid should be used instead, and also clarified how the specification works with Writing Modes, Box Alignment and ordering of items.

A syllabus for layout

I’ve been asked whether people should learn Flexbox first then move on to Grid Layout. I’d suggest instead learning the basics of each specification, and the reasons to use each layout method. In a production site you are likely to have some patterns that make sense to lay out using Flexbox and some using Grid.

On MDN you should start with Basic Concepts of flexbox along with the companion article for Grid — Basic concepts of grid layout. Next, take a look at the two articles that detail how Flexbox and Grid fit into the overall CSS Layout picture:

Having worked through these guides you will have a reasonable overview. As you start to create design patterns using the specifications, you can then dig into the more detailed guides for each specification.

Similarities between Flexbox and Grid

You will discover as you study Flexbox and Grid that there are many similarities between the specifications. This is by design; you’ll find this note in the Grid specification:

“If you notice any inconsistencies between this Grid Layout Module and the Flexible Box Layout Module, please report them to the CSSWG, as this is likely an error.”

The Box Alignment Properties that are part of the Flexbox spec have been moved into their own specification — Box Alignment Level 3. Grid uses that specification as a reference rather than duplicating the properties. This means that you should find the information about Aligning items in a flex container very similar to that found in Box alignment in Grid Layout. In many ways, alignment is easier to understand when working with Grid Layout as you always have the two axes in play.

This movement of properties works in the other direction too. In Grid we have the grid-gap property, a shorthand for setting grid-column-gap and grid-row-gap. During the course of this year these will be moved to the Box Alignment specification too, and renamed gap, column-gap and row-gap. Once browsers implement these for Flexbox, we will be able to have gaps in Flexbox as we do for Grid, rather than needing to use margins to create space between items.

Browser support for flexbox

Flexbox has excellent browser support at this point. I have detailed the things to look out for in Backwards compatibility of flexbox. Issues today tend to be in older versions of browsers which supported earlier versions of the specification under vendor prefixes. The issues are well documented at this point both in my MDN guide and on the Flexbugs site, which details interoperability issues and workarounds for them.

The Flexbox and Grid specifications are both now at Candidate Recommendation status. We don’t expect there to be large changes to either specification at this point. You should feel confident about learning and integrating Flexbox and Grid into production websites.

Moving to a new model of thinking about layout

With Flexbox and Grid, plus the related specifications of Box Alignment and Writing Modes, we have new layout models for the web, which have been designed to enable the types of layouts we need to create. Those of us who have been battling with floats to create layout for years have to shift our thinking in order to really take advantage of what is now possible, rather than trying to force it back into what is familiar. Whether your interest is in being able to implement more creative designs, or simply to streamline development of complex user interfaces I hope that the materials I’ve created will help you to gain a really thorough understanding of these specs.

Categorieën: Mozilla-nl planet

Air Mozilla: Reps Weekly Meeting, 04 Jan 2018

Mozilla planet - do, 04/01/2018 - 17:00

Reps Weekly Meeting This is a weekly call with some of the Reps to discuss all matters about/affecting Reps and invite Reps to share their work with everyone.

Categorieën: Mozilla-nl planet

Mozilla B-Team: happy bmo push day!

Mozilla planet - do, 04/01/2018 - 15:00

release tag

the following changes have been pushed to bugzilla.mozilla.org:

  • [1426390] Serve WOFF2 Fira Sans font
  • [1426963] Make Bugzilla::User->in_group() use a hash lookup
  • [1427230] Avoid loading CGI::Carp, which makes templates slow.
  • [1326233] nagios_blocker_checker.pl doesn’t fail NRPE gracefully with bad inputs.
  • [1426507] Upgrade BMO to HTML5
  • [1330293] Prevent nagios_blocker_checker.pl from running longer than 5 minutes (and log to sentry if it does)
  • [1426673] The logout link cannot be found as what Sessions page says
  • [1427656] Remove ZeroClipboard helper
  • [1426685] Fix regressions from fixed-positioning global header
  • [1427743] legacy phabbugz API code errors when trying to set an inactive review flag
  • [1427646] Remove Webmaker from product selector on Browse page and guided bug entry form
  • [1426518] Revisions can optionally not have a bug id so we need to make it optional in the type constraints of Revision.pm
  • [1423998] Add ‘Pocket’ to Business Unit drop down for Legal bugs
  • [1426475] Make unknown bug id / alias error message more obvious to prevent content spoofing
  • [1426409] github_secret key has no rate limiting

discuss these changes on mozilla.tools.bmo.

Categorieën: Mozilla-nl planet

David Burns: What makes a senior developer or senior engineer

Mozilla planet - do, 04/01/2018 - 12:52

Over the festive break I sent out this tweet.

I would fire the "senior" engineers for being over levelled. Senior engineers jobs are to mentor and build up the engineers below them. If they find that a burden then they need to re-evaluate their seniority. https://t.co/f0G5TtumRE

— David Burns (@AutomatedTester) December 22, 2017

The now-deleted quoted tweet went along the lines of "If you have 3 senior engineers earning $150k and a junior developer breaks the repository, is it worth the $60k for having a junior?". The original tweet, and the similar tweets that came out after it, show that there seems to be a disconnect between what some engineers, and even managers, believe a senior engineer should act like and what it really takes to be a senior or higher engineer.

The following is my belief, and luckily for me it is also the guide I am given by my employer: the seniority of an engineer has more to do with their interpersonal skills and less to do with their programming ability. So what do I mean by this? The answer is really simple.

A senior developer or senior engineer should be able to build up and mentor engineers who are "below" them. A senior engineer should be working to make the rest of their team senior engineers. This might actually mean that a senior engineer might do less programming than the more junior members in a team. This is normally accounted for by engineering management and even by project management. The time when they are not programming is now filled with architectural discussions, code reviews and general mentoring of newer members. These tasks might not be as tangible as producing code but it is just as important.

Whether you are on a management track or an individual contributor track, the further up you go, the more your job depends on doing less coding and more making sure that you raise everyone up with you. This is just how it all goes. After all, "a rising tide raises all ships".

Categorieën: Mozilla-nl planet

Mozilla Addons Blog: January’s Featured Extensions

Mozilla planet - do, 04/01/2018 - 02:34


Pick of the Month: Search by Image – Reverse Image Search

by Armin Sebastian
Powerful image search tool that’s capable of leveraging multiple engines, such as Google, Bing, Yandex, Baidu, and TinEye.

“I tried several ‘search by image’ add-ons and this one seems to be the best out there with a lot of features.”

Featured: Resurrect Pages

by Anthony Lieuallen
Bring back broken links and dead pages from previous internet lives!

“One of my favorite websites took down content from readers and I thought I’d never see those pages again. Three minutes and an add-on later I’m viewing everything as if it was never deleted. Seriously stunned and incredibly happy.”

Featured: VivaldiFox

by Tim Nguyen
Change the colors of Firefox pages with this adaptive interface design feature (akin to Vivaldi-style coloring).

“Definitely brings a bit more life to Firefox.”

Nominate your favorite add-ons

Featured add-ons are selected by a community board made up of add-on developers, users, and fans. Board members change every six months. Here’s further information on AMO’s featured content policies.

If you’d like to nominate an add-on for featuring, please send it to amo-featured [at] mozilla [dot] org for the board’s consideration. We welcome you to submit your own add-on!

The post January’s Featured Extensions appeared first on Mozilla Add-ons Blog.

Categorieën: Mozilla-nl planet

Mozilla Security Blog: Mitigations landing for new class of timing attack

Mozilla planet - do, 04/01/2018 - 01:23

Several recently-published research articles have demonstrated a new class of timing attacks (Meltdown and Spectre) that work on modern CPUs.  Our internal experiments confirm that it is possible to use similar techniques from Web content to read private information between different origins.  The full extent of this class of attack is still under investigation and we are working with security researchers and other browser vendors to fully understand the threat and fixes.  Since this new class of attacks involves measuring precise time intervals, as a partial, short-term, mitigation we are disabling or reducing the precision of several time sources in Firefox.  This includes both explicit sources, like performance.now(), and implicit sources that allow building high-resolution timers, viz., SharedArrayBuffer.

Specifically, in all release channels, starting with 57:

  • The resolution of performance.now() will be reduced to 20µs.
  • The SharedArrayBuffer feature is being disabled by default.
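To illustrate what "reducing the resolution" means, here is a rough page-level sketch of clamping timestamps to a 20µs grid. This is only an illustration of the idea; Firefox's actual mitigation is implemented in C++ inside Gecko, and the function names here are invented for the example.

```javascript
// 20 microseconds, expressed in milliseconds (the unit performance.now() uses).
const RESOLUTION_MS = 0.02;

// Round a high-resolution timestamp down to the nearest multiple of the
// resolution, destroying any sub-20µs information.
function coarsen(timestampMs) {
  return Math.floor(timestampMs / RESOLUTION_MS) * RESOLUTION_MS;
}

// A wrapped clock as a page would observe it after the mitigation.
function coarseNow() {
  return coarsen(performance.now());
}
```

Two timestamps less than 20µs apart can now coarsen to the same value, which is exactly what breaks the high-resolution timers these attacks rely on.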

Furthermore, other timing sources and time-fuzzing techniques are being worked on.

In the longer term, we have started experimenting with techniques to remove the information leak closer to the source, instead of just hiding the leak by disabling timers.  This project requires time to understand, implement and test, but might allow us to consider reenabling SharedArrayBuffer and the other high-resolution timers as these features provide important capabilities to the Web platform.

Update [January 4, 2018]: We have released the two timing-related mitigations described above with Firefox 57.0.4, Beta and Developers Edition 58.0b14, and Nightly 59.0a1 dated “2018-01-04” and later. Firefox 52 ESR does not support SharedArrayBuffer and is less at risk; the performance.now() mitigations will be included in the regularly scheduled Firefox 52.6 ESR release on January 23, 2018.

The post Mitigations landing for new class of timing attack appeared first on Mozilla Security Blog.

Categorieën: Mozilla-nl planet

The Rust Programming Language Blog: Announcing Rust 1.23

Mozilla planet - do, 04/01/2018 - 01:00

The Rust team is happy to announce a new version of Rust, 1.23.0. Rust is a systems programming language focused on safety, speed, and concurrency.

If you have a previous version of Rust installed via rustup, getting Rust 1.23.0 is as easy as:

$ rustup update stable

If you don’t have it already, you can get rustup from the appropriate page on our website, and check out the detailed release notes for 1.23.0 on GitHub.

What’s in 1.23.0 stable

New year, new Rust! For our first improvement today, we now avoid some unnecessary copies in certain situations. We’ve seen the memory usage of rustc drop 5-10% with this change; the effect may be different for your programs.

The documentation team has been on a long journey to move rustdoc to use CommonMark. Previously, rustdoc never guaranteed which markdown rendering engine it used, but we’re finally committing to CommonMark. As part of this release, we render the documentation with our previous renderer, Hoedown, but also render it with a CommonMark compliant renderer, and warn if there are any differences. There should be a way for you to modify the syntax you use to render correctly under both; we’re not aware of any situations where this is impossible. Docs team member Guillaume Gomez has written a blog post showing some common differences and how to solve them. In a future release, we will switch to using the CommonMark renderer by default. This warning landed in nightly in May of last year, and has been on by default since October of last year, so many crates have already fixed any issues that they’ve found.

In other documentation news, historically, Cargo’s docs have been a bit strange. Rather than being on doc.rust-lang.org, they’ve been at doc.crates.io. With this release, that’s changing. You can now find Cargo’s docs at doc.rust-lang.org/cargo. Additionally, they’ve been converted to the same format as our other long-form documentation. We’ll be adding a redirect from doc.crates.io to this page, and you can expect to see more improvements and updates to Cargo’s docs throughout the year.

See the detailed release notes for more.

Library stabilizations

As of Rust 1.0, a trait named AsciiExt existed to provide ASCII related functionality on u8, char, [u8], and str. To use it, you’d write code like this:

use std::ascii::AsciiExt;

let ascii = 'a';
let non_ascii = '❤';
let int_ascii = 97;

assert!(ascii.is_ascii());
assert!(!non_ascii.is_ascii());
assert!(int_ascii.is_ascii());

In Rust 1.23, these methods are now defined directly on those types, and so you no longer need to import the trait. Thanks to our stability guarantees, this trait still exists, so if you’d like to still support Rust versions before Rust 1.23, you can do this:

#[allow(unused_imports)]
use std::ascii::AsciiExt;

…to suppress the related warning. Once you drop support for older Rusts, you can remove both lines, and everything will continue to work.

Additionally, a few new APIs were stabilized this release:

See the detailed release notes for more.

Cargo features

cargo check can now check your unit tests.

cargo uninstall can now uninstall more than one package in one command.

See the detailed release notes for more.

Contributors to 1.23.0

Many people came together to create Rust 1.23. We couldn’t have done it without all of you. Thanks!

Categorieën: Mozilla-nl planet

Air Mozilla: Bugzilla Project Meeting, 03 Jan 2018

Mozilla planet - wo, 03/01/2018 - 22:00

Bugzilla Project Meeting The Bugzilla Project Developers meeting.

Categorieën: Mozilla-nl planet

Air Mozilla: Weekly SUMO Community Meeting, 03 Jan 2018

Mozilla planet - wo, 03/01/2018 - 18:00

Weekly SUMO Community Meeting This is the SUMO weekly call

Categorieën: Mozilla-nl planet

Mozilla GFX: WebRender newsletter #11

Mozilla planet - wo, 03/01/2018 - 14:40

Newsletter #11 is finally here, even later than usual due to an intense week in Austin where all of Mozilla’s staff and a few independent contributors gathered, followed by yours truly taking two weeks off.

Our focus before the Austin allhands was on performance, especially on Windows. We had some great results out of this and are shifting priorities back to correctness issues for a little while.

Notable WebRender changes
  • Martin added some clipping optimizations in #2104 and #2156.
  • Ethan improved the performance of rendering large ellipses.
  • Kvark implemented different texture upload strategies to be selected at runtime depending on the driver. This has a very large impact when using Windows.
  • Kvark worked around the slow depth clear implementation in ANGLE.
  • Glenn implemented splitting rectangle primitives, which allows moving a lot of pixels to the opaque pass and reduce overdraw.
  • Ethan sped up ellipse calculations in the shaders.
  • Morris implemented the drop-shadow() CSS filter.
  • Gankro introduced deserialize_from in serde for faster deserialization, and added it to WebRender.
  • Glenn added dual-source blending path for subpixel text when supported, yielding performance improvements when the text color is different between text runs.
  • Many people fixed a lot of bugs, too many for me to list them here.
Notable Gecko changes
  • Sotaro made Gecko use EGL_EXPERIMENTAL_PRESENT_PATH_FAST_ANGLE for WebRender. This avoids a full screen copy when presenting. With this change the peak fps of http://learningwebgl.com/lessons/lesson03/index.html on P50(Win10) was changed from 50fps to 60fps.
  • Sotaro prevented video elements from rendering at 60fps when they have a lower frame rate.
  • Jeff removed two copies of the display list (one of which happens on the main thread).
  • Kats removed a performance cliff resulting from linear search through clips. This drastically improves MazeSolver time (~57 seconds down to ~14 seconds).
  • Jeff removed a copy of the glyph buffer
  • Lots and lots of more fixes and improvements.
Enabling WebRender in Firefox Nightly

In about:config:

  • set “gfx.webrender.enabled” to true,
  • set “gfx.webrender.blob-images” to true,
  • set “image.mem.shared” to true,
  • if you are on Linux, set “layers.acceleration.force-enabled” to true.

Note that WebRender can only be enabled in Firefox Nightly.

Categorieën: Mozilla-nl planet

Ryan Harter: Managing Someday-Maybe Projects with a CLI

Mozilla planet - wo, 03/01/2018 - 09:00

I have a problem managing projects I'm interested in but don't have time for. For example, the CLI for generating slack alerts I posted about last year. Not really a priority, but helpful and not that complicated. I sat on that project for about a year before I could finally …

Categorieën: Mozilla-nl planet

The Rust Programming Language Blog: New Year's Rust: A Call for Community Blogposts

Mozilla planet - wo, 03/01/2018 - 01:00

‘Tis the season for people and communities to reflect and set goals, and the Rust team is no different. Last month, we published a blogpost about our accomplishments in 2017, and the teams have already begun brainstorming goals for next year.

Last year, the Rust team started a new tradition: defining a roadmap of goals for the upcoming year. We leveraged our RFC process to solicit community feedback. While we got a lot of awesome feedback on that RFC, we’d like to try something new in addition to the RFC process: a call for community blog posts for ideas of what the goals should be.

As open source software becomes more and more ubiquitous and popular, the Rust team is interested in exploring new and innovative ways to solicit community feedback and participation. We’re committed to extending and improving our community organization and outreach, and this effort is just the first of what we hope will be many iterations of new kinds of community feedback mechanisms.

#Rust2018

Starting today and running until the end of January we’d like to ask the community to write blogposts reflecting on Rust in 2017 and proposing goals and directions for Rust in 2018. These can take many forms:

  • A post on your personal or company blog
  • A Medium post
  • A GitHub gist
  • Or any other online writing platform you prefer.

We’re looking for posts on many topics:

  • Ideas for community programs
  • Language features
  • Documentation improvements
  • Ecosystem needs
  • Tooling enhancements
  • Or anything else Rust related you hope for in 2018 :D

A great example of what we’re looking for is this post, “Rust and the case for WebAssembly in 2018” by @mgattozzi or this post, “Rust in 2017” by Anonyfox.

You can write up these posts and email them to community@rust-lang.org or tweet them with the hashtag #Rust2018. We’ll aggregate any blog posts sent via email or with the hashtag in one big blog post here.

The Core team will be reading all of the submitted posts and using them to inform the initial roadmap RFC for 2018. Once the RFC is submitted, we’ll open up the normal RFC process, though if you want, you are welcome to write a post and link to it on the GitHub discussion.

Preliminary Timeline

We hope to get a draft roadmap RFC posted in mid January, so blog posts written before then would be the most useful. We expect discussion and final comment period of that RFC to last through at least the end of January, though, so blog posts until then will also be considered for ideas.

Dates are likely to change, but this is a general overview of the upcoming process:

  • Jan 3: call for posts!
  • throughout Jan: read current posts, draft roadmap RFC
  • mid Jan: post roadmap RFC on GitHub
  • late Jan: evaluate roadmap based on RFC comments
  • late Jan - early Feb: final RFC comment period
  • Feb: assuming discussion has reached a steady state, and we’ve found consensus, accept final roadmap

So, as we kick off 2018, if you find yourself in a reflective mood and your mind drifts to the future of Rust, please write up your thoughts and share them! We’re excited to hear where you want Rust to go this coming year!

Categorieën: Mozilla-nl planet

Shing Lyu: Taking notes with MkDocs

Mozilla planet - di, 02/01/2018 - 21:14

I’ve been using TiddlyWiki for note-taking for a few years. I use it to keep track of my technical notes and checklists. TiddlyWiki is a brilliant piece of software. It is a single HTML file with a note-taking interface, where the notes you take are stored directly in the HTML file itself, so you can easily carry (copy) the file around and easily deploy it online for sharing. However, most modern browsers don’t allow web pages to access the filesystem, so in order to let TiddlyWiki save the notes, you need to rely on browser extensions or a Dropbox integration service like TiddlyWiki in the Sky. But these still involve some friction.

So recently I started to look for other alternatives for note-taking. Here are some of the requirements I desire:

  1. Notes are stored in non-proprietary format. In case the service/software is discontinued, I can easily migrate them to other tool.
  2. Has some form of formatting, e.g. write in Markdown and render to HTML.
  3. Auto-generated table of contents and search functionality.
  4. Can be used offline and data is stored locally.
  5. Can be easily shared with other people.
  6. Can be version-controlled, and you won’t lose data if there is a conflict.

TiddlyWiki excels at 1 to 5, and I can easily sync it with Dropbox. However, I often forget to click “save” in TiddlyWiki, and in some cases when I accidentally create a conflict while syncing, Dropbox simply creates two copies and I have to merge them manually. There is also no version history, so it’s hard to merge and look into past changes.

Then I suddenly had an epiphany in the shower: all I need is a bunch of Markdown files, version controlled by git; then I can use a static site generator to render them as HTML, with a table of contents and client-side search. After some quick searching I found MkDocs, which is a Python-based static-site generator for project documentation. It also has a live-reloading development server, which I really love.

Using MkDocs

MkDocs is really straight-forward to setup and use (under Linux). To install, simply use pip (assuming you have Python and pip installed):

sudo pip install mkdocs

Then you can create your document folder using

mkdocs new <project name>

The generated folder will have the following structure:

<project name>
├── docs
│   └── index.md
└── mkdocs.yml

You can then start to write documents in the docs folder. You can create subfolders to organize the Markdown files. To view the generated HTML, cd into the project folder, then run mkdocs serve, the development server will start to serve on port 8000. Opening 127.0.0.1:8000 in your browser then you can see the document. You can also run mkdocs build to generate the static HTML into the sites folder. The folder can then be hosted using any server.


The mkdocs.yml file contains some configuration. For example, you can use the ReadTheDocs theme by adding the line:

theme: readthedocs


If you wish to open the generated site locally using the file:// protocol, you can add this line:

use_directory_urls: false

When you have a note named foo.md, the generated file will be /foo/index.html. If use_directory_urls is set to true, the URL for linking to the foo.md page will be /foo/, which is the more modern URL naming convention. But for this routing to work, you must host the files on a web server. If you want to open the files locally, you need to set the config to false, and all the links will be /foo/index.html instead.
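Putting the two options together, a minimal mkdocs.yml for locally-browsable notes might look like this (site_name is whatever you choose):

```yaml
site_name: My Notes
theme: readthedocs
use_directory_urls: false
```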

Migrating from TiddlyWiki

Moving all the notes from TiddlyWiki to MkDocs is very easy. If you are using TiddlyWiki 5.x+, you can go to “Tools” right under the search box, where there is an “export all” button. Export the tiddlers (notes) to CSV. Then you can use the tiddly2md script to convert the CSV to individual .md files. If your tiddlers have UTF-8 titles, you need to add a parameter encoding='utf-8' to the pd.read_csv() call in the script for it to work.


The exported Markdown files will lose the tag information, so if you are using tags as folder structure like me, you’ll have to manually create folders to arrange them. Some tiddlers using the old WikiText format will also be empty (I use Markdown in my TiddlyWiki, but there are some old ones from earlier versions). You can use ls -lS to see which files have no content and fix them manually. After the .md files are in place, run mkdocs as usual.
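For the curious, what tiddly2md does can be sketched with nothing but the standard library. This is my own simplified sketch, not the actual script; the column names "title" and "text" are assumptions, so check the header row of your export before using it.

```python
# Sketch: split a TiddlyWiki CSV export into one Markdown file per tiddler.
# Assumes the export has "title" and "text" columns -- verify against your file.
import csv
import pathlib

def safe_filename(title):
    # Tiddler titles may contain "/", which is not valid in a filename.
    return title.replace("/", "-") + ".md"

def tiddlers_to_markdown(csv_path, out_dir):
    out = pathlib.Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    with open(csv_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            # One .md file per tiddler, body written as-is.
            (out / safe_filename(row["title"])).write_text(
                row["text"], encoding="utf-8"
            )
```

The encoding="utf-8" arguments here play the same role as the pd.read_csv(encoding='utf-8') fix mentioned above.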

Conclusion

MkDocs is a pretty simple but powerful tool for note-taking. You get all the benefits of editing the notes in a plain text editor and having them version controlled by git, but you also get a nicely rendered HTML version with search functionality. One thing I miss from TiddlyWiki is the ability to generate a single HTML file containing all the notes. (MkDocs generates a folder of separate HTML, CSS and JS files.) There are some scripts, like mkdocs-combine, that claim to do this using Pandoc, but I haven’t tested them.

Categorieën: Mozilla-nl planet

Ryan Harter: Removing Disqus

Mozilla planet - di, 02/01/2018 - 09:00

I'm removing Disqus from this blog. Disqus allowed readers to post comments on articles. I added it because it was easy to do, but I no longer think it's worth keeping.

If you'd like to share your thoughts, feel free to shoot me an email at harterrt on gmail. I …

Categorieën: Mozilla-nl planet

Pagina's