Mozilla Nederland: De Nederlandse Mozilla-gemeenschap

Daniel Stenberg: curl up 2019 is over

Mozilla planet - ma, 01/04/2019 - 12:03

(I will update this blog post with more links to videos and PDFs to presentations as they get published, so come back later in case your favorite isn’t linked already.)

The third curl developers conference, curl up 2019, is now history. We gathered at the lovely Charles University in central Prague, where we sat down in an excellent classroom. After the HTTP symposium on the Friday, we spent the weekend diving deeper into protocols and curl details.

I started off the Saturday with The state of the curl project (YouTube): an overview of how we’re doing right now in terms of stats, graphs and numbers from different aspects, then something about what we’ve done over the last year, and a quick look at what’s not so good and what we could work on going forward.

James Fuller took the next session with his Newbie guide to contributing to libcurl presentation: things to consider and general best practices that could make your first steps into the project more likely to be pleasant!

Long term curl hacker Dan Fandrich (also known as “Daniel two” of the three Daniels we have among our top committers) followed up with Writing an effective curl test, where he detailed the different tests we have in curl, what they’re for, and a little about how to write such tests.

Sign seen at the curl up dinner reception Friday night

After that I was back behind the desk in the classroom that we used for this event and talked The Deprecation of legacy crap (YouTube): how and why we are removing things, some things we are removing and will soon remove, and finally a little explainer on our new concept and handling of “experimental” features.

Igor Chubin then explained his new project for us: curlator: a framework for console services (YouTube). It’s tooling that makes it easier to provide access to shell and console oriented services over the web, using curl.

Me again. Governance, money in the curl project and someone offering commercial support (YouTube) was a presentation about how we intend for the project to join a legal entity, SFC, a little about the money we have and what to spend it on, and why I feel it is good to keep the project separate from any commercial support ventures any of us might do!

While the list above might seem like more than enough, the day wasn’t over. Christian Schmitz also did his presentation on Using SSL root certificate from Mac/Windows.

Our local hero organizer James Fuller then spoiled us completely when we got to have dinner at a monastery with beer-brewing monks and excellent food. Good food, good company and curl-related dinner subjects. That’s almost heaven defined!

Sunday

Daylight saving time morning and you could tell. I’m sure it was not at all related to the beers from the night before…

James Fuller fired off the day by talking to us about Curlpipe (GitHub), a DSL for building HTTP execution pipelines.

The classroom we used for the curl up presentations and discussions during Saturday and Sunday.

Robin Marx then put in the next gear and entertained us for another hour with a protocol deep dive titled HTTP/3 (QUIC): the details (slides). For me personally this was exactly what I needed, as Robin has clearly kept up with more details and specifics in the QUIC and HTTP/3 protocol specifications than I’ve managed, and his talk helped the rest of the room get at least a little bit more in sync with current development.

Jakub Nesetril and Lukáš Linhart from Apiary then talked us through what they’re doing and thinking around web based APIs and how they and their customers use curl: Real World curl usage at Apiary.

Then I was up again and I got to explain to my fellow curl hackers about HTTP/3 in curl. Internal architecture, 3rd party libs and APIs.

Jakub Klímek explained current problems to us in very clear terms in his talk IRIs and IDNs: Problems of non-ASCII countries. Some of the problems involve curl, and while most of them have clear explanations, I think we have two lessons to learn from this: URLs are still as messy and undocumented as ever, and we might have some issues to fix in this area in curl.

To bring my fellow hackers up to speed on the details of the new API introduced last year, I then made a presentation called The new URL API.

Clearly overdoing it for a single weekend, I then got the honors of doing the last presentation of curl up 2019, and for an audience that was about to die from exhaustion I talked Internals: a walk-through of the architecture and what libcurl does when doing a transfer.

Summary

I ended up doing seven presentations during this single weekend. Not all of them stellar or delivered with elegance, but I hope they were still valuable to some. I did not steal anyone else’s time slot, as I would gladly have given up time if other speakers had wanted to say something. Let’s aim for more non-Daniel talkers next time!

A weekend like this is such a boost for inspiration, for morale and for my ego. All the friendly faces with the encouraging and appreciating comments will keep me going for a long time after this.

Thank you to our awesome and lovely event sponsors – shown in the curl up logo below! Without you, this sort of happening would not happen.

curl up 2020

I will of course want to see another curl up next year. There are no plans yet and we don’t know where to host it. I think it is valuable to move it around, but I think it is even more valuable that we have a friend on the ground in that particular city to help us out. Once this year’s event has sunk in properly and a month or two has passed, the planning and organization of next year’s conference will commence. Stay tuned, and if you want to help host us, do let me know!


Categorieën: Mozilla-nl planet

Mozilla Thunderbird: All Thunderbird Bugs Have Been Fixed!

Mozilla planet - ma, 01/04/2019 - 10:00
April Fools!

We still have open bugs, but we’d like your help to close them!

We are grateful to have a very active set of users who generate a lot of bug reports, and we are requesting your help in sorting them, an activity called bug triage. We’re holding “Bug Days” on April 8th (all day, EU and US timezones) and April 13th (EU and US timezones until 4pm EDT). During these bug days we will log on and work as a community to triage as many bugs as possible. All you’ll need is a Bugzilla account and Thunderbird Daily, and we’ll teach you the rest! With several of us working at the same time we can help each other in real time – answering questions, sharing ideas, and enjoying being with like-minded people.

No coding or special skills are required, and you don’t need to be an expert or long term user of Thunderbird.

Some things you’ll be doing if you participate:

  • Help other users by checking their bug reports to see if you can reproduce the behavior of their reported problem.
  • Get advice about your own bug report(s).
  • Learn the basics about Thunderbird troubleshooting and how to contribute.

We’re calling this the “Game of Bugs”, named after the popular show Game of Thrones – where we will try to “slay” all the bugs. Those who participate fully in the event will get a Thunderbird Game of Bugs t-shirt for their participation (with the design below).

 Game of Bugs T-shirt design

Thunderbird: Game of Bugs

Sorry for the joke! But we hope you’ll join us on the 8th or the 13th via #tb-qa on Mozilla’s IRC so that we can put these bugs in their place, which helps make Thunderbird even better. If you have any questions, feel free to email ryan@thunderbird.net.

P.S. If you are unable to participate in bug day you can still help by checking out our Get Involved page on the website and contributing in the way you’d like!

Categorieën: Mozilla-nl planet

Cameron Kaiser: TenFourFox FPR14b1 available (now with H.264 video)

Mozilla planet - ma, 01/04/2019 - 05:52
TenFourFox Feature Parity Release 14 beta 1 is now available (downloads, hashes, release notes).

I had originally plotted three main features for this release, but getting the urgent FPR13 SPR1 out set me back a few days with confidence testing and rebuilds, and I have business trips and some vacation time coming up, so I jettisoned the riskiest of the three features (a set of JavaScript updates and an ugly hack to get Github and other sites working fully again) and concentrated on the other two. I’ll be looking at that again for FPR15, so more on that later.

Before we get to the marquee features, though, there are two changes which you may not immediately notice. The first is a mitigation for a long-standing issue where some malicious sites keep popping up authentication modals using HTTP Auth. Essentially you can't do anything with the window until the modal is dealt with, so the site just asks for your credentials over and over, ultimately making the browser useless (as a means to make you call their "support line" where they can then social engineer their way into your computer). The ultimate solution is to make such things tab-modal rather than window-modal, but that's involved and sort of out of scope, so we now implement a similar change to what current Firefox does where there is a cap of three Cancels. If you cancel three times, the malicious site is not allowed to issue any more requests until you reload it. No actual data is leaked, assuming you don't type anything in, but it can be a nasty denial of service and it would have succeeded in ruining your day on TenFourFox just as easily as any other Firefox derivative. That said, just avoid iffy sites, yes?

The second change is more fundamental. For Firefox 66 Mozilla briefly experimented with setting a frame rate cap on low-end devices. Surprise, surprise: all of our systems are low-end devices! In FPR13 and prior, TenFourFox would try to push as many frames to the compositor as possible, no matter what it was trying to do, to achieve a 60fps target or better. However, probably none of our computers, with the possible exception of high-end G5s, were achieving 60fps consistently on most modern websites, and the browser would flail trying desperately to keep up. Instead, by setting a cap and enforcing it with software v-sync, frames aren't pushed as often and the browser can do more layout and rendering work per frame. Mozilla selected a 30fps cap, so that's what I selected as an arbitrary first cut. Some sites are less smooth, but many sites now render faster to first paint, particularly pages that do a lot of DOM transforms, because now the resulting visual changes are batched. This might seem like an obvious change to make, but the numbers had never been proven until then.

Mozilla ultimately abandoned this change in favour of a more flexible approach with the idle queue, but our older codebase doesn't support that, and we don't have the other issues they encountered anyway because we don't support Electrolysis or APZ. There are two things to look at: first, we shouldn't have the same scrolling issues because we scroll synchronously, but do report any regressions in default scrolling or obvious changes in scroll rate (include what you're doing the scrolling with, such as the scroll bar, a mouse scroll wheel or your laptop trackpad). The second thing to look at is whether the 30fps frame rate is the right setting for all systems. In particular, should 7400 or G3 systems be even lower, maybe 15fps? You can change this by setting layout.frame_rate to some other target frame rate value and restarting the browser. What setting seems to do best on your machine? Include RAM, OS and CPU speed. One other possibility is to look at reducing the target frame rate dynamically based on battery state, but this requires additional plumbing we don't support just yet.
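If you prefer to make the change persistent in a text editor rather than through about:config, a user.js file in the profile folder works too. A minimal sketch (the pref name is real; 15 is just an example value to experiment with):

```
// user.js in your TenFourFox profile folder -- applied at every startup.
// 15 is only an example; try different caps and see what your machine likes.
user_pref("layout.frame_rate", 15);
```

Remember that the browser has to be restarted for a new value to take effect.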

So now the main event: H.264 video support. Olga gets the credit here for the original code, which successfully loads our separately-distributed build of ffmpeg so that we don't run afoul of any licenses by including it with the core browser. My first cut of this had issues where the browser ran out of memory on sites that ran lots of H.264 video as images (and believe me, this is not at all uncommon these days), but I got our build of ffmpeg trimmed down enough that it can now load the Vimeo front page and other sites generally without issue. Download the TenFourFox MP4 Enabler for either G4/7450 or G5 (this is a bit of a misnomer since we also use ffmpeg for faster MP3 and VP3 decoding, but I didn't want it confused with Olga's preexisting FFmpeg Enabler), download FPR14b1, run the Enabler to install the libraries and then start FPR14b1. H.264 video should now "just work." However, do note there may be a few pieces left to add for compatibility (for example, Twitter videos used to work and then something changed and now they don't, and I don't know why, but Imgur, YouTube and Vimeo seem to work fine).

There are some things to keep in mind. While ffmpeg has very good AltiVec support, H.264 video tends to be more ubiquitous and run at higher bitrates, which cancel out the gains; I wouldn't expect dramatic performance improvements relative to WebM and while you may see them in FPR14 relative to FPR13 remember that we now have a frame rate cap which probably makes the decoder more efficient. As mentioned before, I only support G4/7450 (and of those, 1.25GHz and up) and G5 systems; a G4/7400 will have trouble keeping up even with low bitrates and there's probably no hope for G3 systems at all. The libraries provided are very stripped down both for performance and to reduce size and memory pressure, so they're not intended as a general purpose ffmpeg build (in particular, there are no encoders, multiplexers or protocols, some codecs have been removed, and VP8/VP9 are explicitly disabled since our in-tree hyped-up libvpx is faster). You can build your own libraries and put them into the installation location if you want additional features (see the wiki instructions for supported versions and the build process), and you may want to do this for WebM in particular if you want better quality since our build has the loop filter and other postprocessing cut down for speed, but I won't offer support for custom libraries and you'd be on your own if you hit a bug. Finally, the lockout code I wrote when I was running into memory pressure issues is still there and will still cancel all decoding H.264 instances if any one of them fails to get memory for a frame, hopefully averting a crash. This shouldn't happen much now with the slimmer libraries but I still recommend as much RAM as your system can hold (at least 2GB). Oh, and one other thing: foxboxes work fine with H.264!

Now, enjoy some of the Vimeo videos you previously could only watch with the old PopOutPlayer, back when it actually still worked. Here are four of my favourites: Vicious Cycle (PG-13 for pixelated robot blood), Fired On Mars (PG-13 for an F-bomb), Other Half (PG-13 for an F-bomb and oddly transected humans), and Unsatisfying (unsatisfying). I've picked these not only because they're entertaining but also because they run the gamut from hand-drawn animation to CGI to live action and give you an idea of how your system performs. However, I strongly recommend you click the gear icon and select 360p before you start playback, even on a 2005 G5; HD performance is still best summarized as "LOL."

At least one of you asked how to turn it off. Fortunately, if you never install the libraries, it'll never be "on" (the browser will work as before). If you do install them, and decide you prefer the browser without it, you can either delete the libraries from ~/Library/TenFourFox-FFmpeg (stop the browser first just in case it has them loaded) or set media.ffmpeg.enabled to false.

The other, almost anticlimactic change, is that you can now embed JavaScript in your AppleScripts and execute them in browser tabs with run JavaScript. While preparing this blog post I discovered an annoying bug in the AppleScript support, but since no one has reported it so far, it must not be something anyone's actually hitting (or I guess no one's using it much yet). It will be fixed for final which will come out parallel with Firefox 60.7/67 on or about May 14.

Categorieën: Mozilla-nl planet

Martin Giger: Sustainable smart home with the TXT

Mozilla planet - zo, 31/03/2019 - 19:28

fischertechnik launched the smart home kit last year. A very good move on a conceptual level. Smart home and IoT (internet of things) are rapidly growing technology sectors. The unique placement of the TXT allows it to be a perfect introductory platform to this world. However, the smart home platform from fischertechnik relies on a …

The post Sustainable smart home with the TXT appeared first on Humanoids beLog.

Categorieën: Mozilla-nl planet

Daniel Stenberg: The future of HTTP Symposium

Mozilla planet - za, 30/03/2019 - 07:26

This year’s version of curl up started a little differently: with an afternoon of HTTP presentations. The event took place the same week the IETF meeting had just ended here in Prague, so we got the opportunity to invite people who possibly otherwise wouldn’t have been here. Of course this was only possible thanks to our awesome sponsors, visible in the image above!

Lukáš Linhart from Apiary started out with “Web APIs: The Past, The Present and The Future”. A journey through XML-RPC, SOAP and more. One final conclusion might be that we’re not quite done yet…

James Fuller from MarkLogic talked about “The Defenestration of Hypermedia in HTTP”: how HTTP web technologies have changed over time while the HTTP paradigms have survived for a very long time.

I talked about DNS-over-HTTPS. A presentation similar to the one I did before at FOSDEM, but in a shorter time so I had to talk a little faster!

Mike Bishop from Akamai (editor of the HTTP/3 spec and a long time participant in the HTTPbis work) talked about “The evolution of HTTP (from HTTP/1 to HTTP/3)” from HTTP/0.9 to HTTP/3 and beyond.

Robin Marx then rounded off the series of presentations with his tongue-in-cheek “HTTP/3 (QUIC): too big to fail?!”, where he provided a long list of challenges for QUIC and HTTP/3 to get deployed and become successful.

We ended this afternoon session with a casual Q&A session with all the presenters discussing various aspects of HTTP, the web, REST, APIs and the benefits and deployment challenges of QUIC.

I think most of us learned things this afternoon, and we could leave the very elegant Charles University room enriched and with more food for thought about these technologies.

We ended the evening with snacks and drinks kindly provided by Apiary.

(This event was not streamed and not recorded on video, you had to be there in person to enjoy it.)


Categorieën: Mozilla-nl planet

Will Kahn-Greene: Code of conduct: supporting in projects

Mozilla planet - vr, 29/03/2019 - 23:00
CODE_OF_CONDUCT.md

This week, Mozilla added PRs to all the repositories that Mozilla has on GitHub that aren't forks, Servo, or Rust. The PRs add a CODE_OF_CONDUCT.md file and also include some instructions on what projects can do with it. This standardizes inclusion of the code of conduct text in all projects.

I'm a proponent of codes of conduct. I think they're really important. When I was working on Bleach with Greg, we added code of conduct text in September of 2017. We spent a bunch of time thinking about how to do that effectively and all the places that users might encounter Bleach.

I spent some time this week trying to figure out how to do what we did with Bleach in the context of the Mozilla standard. This blog post covers those thoughts.

This blog post covers Python-centric projects. Hopefully, some of this applies to other project types, too.

What we did in Bleach in 2017 and why

In September of 2017, Greg and I spent some time thinking about all the places the code of conduct text needs to show up and how to implement the text to cover as many of those as possible for Bleach.

PR #314 added two things:

  • a CODE_OF_CONDUCT.rst file
  • a copy of the text to the README

In doing this, the code of conduct shows up in the following places:

In this way, users could discover Bleach in a variety of different ways and it's very likely they'll see the code of conduct text before they interact with the Bleach community.

[1] It no longer shows up on the "new issue" page in GitHub. I don't know when that changed.

The Mozilla standard

The Mozilla standard applies to all repositories in Mozilla spaces on GitHub and is covered in the Repository Requirements wiki page.

It explicitly requires that you add a CODE_OF_CONDUCT.md file with the specified text in it to the root of the repository.

This makes sure that all repositories for Mozilla things have a code of conduct specified and also simplifies the work they need to do to enforce the requirement and update the text over time.

This week, a bot added PRs to all repositories that didn't have this file. Going forward, the bot will continue to notify repositories that are missing the file and will update the file's text if it ever gets updated.

How to work with the Mozilla standard

Let's go back and talk about Bleach. We added a file and a blurb to the README and that covered the following places:

With the new standard, we only get this:

In order to make sure the file is in the source tarball, you have to make sure it gets added; the bot doesn't make any changes to fix this. You can use check-manifest to help verify that it's included. You might have to adjust your MANIFEST.in file or something else in your build pipeline.
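As a sketch, assuming a standard setuptools project layout, the MANIFEST.in addition is a single line:

```
# MANIFEST.in -- ship the code of conduct (and other top-level docs) in the sdist
include CODE_OF_CONDUCT.md
```

Running check-manifest afterwards compares the source distribution's contents against what's in version control and flags anything missing.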

Because the Mozilla standard suggests they may change the text of the CODE_OF_CONDUCT.md file, it's a terrible idea to copy the contents of the file around your repository because that's a maintenance nightmare--so that idea is out.

It's hard to include .md files in reStructuredText contexts. You can't just add this to the long description of the setup.py file and you can't include it in a Sphinx project [2].

Greg and I chatted about this a bit and I think the best solution is to add minimal text that points to the CODE_OF_CONDUCT.md in GitHub to the README. Something like this:

Code of Conduct
===============

This project and repository is governed by Mozilla's code of conduct
and etiquette guidelines. For more details please see the
`CODE_OF_CONDUCT.md file <https://github.com/mozilla/bleach/blob/master/CODE_OF_CONDUCT.md>`_.

In Bleach, the long description set in setup.py includes the README:

def get_long_desc():
    desc = codecs.open('README.rst', encoding='utf-8').read()
    desc += '\n\n'
    desc += codecs.open('CHANGES', encoding='utf-8').read()
    return desc

...

setup(
    name='bleach',
    version=get_version(),
    description='An easy safelist-based HTML-sanitizing tool.',
    long_description=get_long_desc(),
    ...

In Bleach, the index.rst of the docs also includes the README:

.. include:: ../README.rst

Contents
========

.. toctree::
   :maxdepth: 2

   clean
   linkify
   goals
   dev
   changes

Indices and tables
==================

* :ref:`genindex`
* :ref:`search`

In this way, the README continues to have text about the code of conduct and the link goes to the file which is maintained by the bot. The README is included in the long description of setup.py so this code of conduct text shows up on the PyPI page. The README is included in the Sphinx docs so the code of conduct text shows up on the front page of the project documentation.

So now we've got code of conduct text pointing to the CODE_OF_CONDUCT.md file in all these places:

Plus the text will get updated automatically by the bot as changes are made.

Excellent!

[2] You can have Markdown files in a Sphinx project. It's fragile and finicky and requires a specific version of Commonmark. I think this avenue is not worth it. If I had to do this again, I'd be more inclined to run the Markdown file through pandoc and then include the result.

Future possibilities

GitHub has a Community Insights page for each project. This is the one for Bleach. There's a section for "Code of conduct", but you only get a green checkmark if and only if you use one of GitHub's pre-approved code of conduct files.

There's a discussion about that in their forums.

Is this checklist helpful to people? Does it mean something to have all these items checked off? Is there someone checking for this sort of thing? If so, then maybe we should get the Mozilla text approved?

Hope this helps!

I hope to roll this out for the projects I maintain on Monday.


Categorieën: Mozilla-nl planet

Jet Villegas: Yamanote: A software development and deployment system

Mozilla planet - vr, 29/03/2019 - 17:35

I left Mozilla back in July, 2018. There are many reasons for this decision, and I’ll talk about just one here: I decided to Help People Get Jobs. The following text is from a blog post I wrote at work. I have reposted it here, edited for length and content (Internal Indeed systems are not referenced.)

At Indeed, we now use a software development and deployment system called “Yamanote.” Yamanote takes its name from the Yamanote Line (山手線) in Tokyo, Japan. It is one of Tokyo’s busiest and most important lines, connecting most of Tokyo’s major stations and urban centers. The Yamanote line is a continuous railway loop. Trains which run clockwise are known as sotomawari (外回り, “outer circle”) and those counter-clockwise as uchi-mawari (内回り, “inner circle”). We deploy software on two lines as well: QA and PROD. Just like the Yamanote line, our goal is for our software trains to run reliably, safely, and securely, in a continuous ring of green.

The level of service and quality required to operate the Yamanote train line is what we aspire to and apply to our software development. We continuously improve our tools and processes. It is expected that code merging into any of our deployment branches is tested and ready for public use.

Our business (We Help People Get Jobs) is a very long-term concern (think in decades!) and we expect some of our software to have a service-life that can span many years. We optimize for lasting impact. We operate quickly and carefully.

The Yamanote line is a marvel of technology, with incredible innovations in electrical engineering and automated systems. Despite all of the technology used, the most important factors in the line’s reliability are the people who operate the system:

Every staff member on the line has the authority to stop and start the trains with a wave of their white glove. It’s this level of responsibility and care that makes all the difference. We aspire to bring that “white glove” treatment into our software development discipline, as the millions of people who depend on us also need to get to work every single day.

Thanks to Wikipedia for the Yamanote line history facts. I’m working on releasing the Yamanote tools as Open Source in a future post.

Categorieën: Mozilla-nl planet

Support.Mozilla.Org: Firefox services experiments on SUMO

Mozilla planet - vr, 29/03/2019 - 17:18

Over the last week or so, we’ve been promoting Firefox services on support.mozilla.org.

In this experiment, which we’re running for the next two weeks, we are promoting the free services Sync, Send and Monitor. These services fit perfectly into our mission: to help people take control of their online lives.

  • Firefox Sync allows Firefox users to instantly share preferences, bookmarks, history, passwords, open tabs and add-ons to other devices.
  • Firefox Send is a free encrypted file transfer service that allows people to safely share files from any browser.
  • Firefox Monitor allows you to check your email address against known data breaches across the globe. Optionally you can sign up to receive a full report of past breaches and new breach alerts.

The promotions are minimal and intended to not distract people from getting help with Firefox. So why promote anything at all on a support website when people are there to get help? People visit the support site when they have a problem, sure. But just as many are there to learn. Of the top articles that brought Firefox users to support.mozilla.org in the past month, half were about setting up Firefox and understanding its features.

This experiment is about understanding whether Firefox users on the support site can discover our connected services and find value in them. We are also monitoring whether the promotions are too distracting or interfere with the mission of support.mozilla.org. In the meantime, if you find issues with the content, please report it.

The test will run for the next two weeks and we will report back here and in our weekly SUMO meeting on the results and next steps.

Categorieën: Mozilla-nl planet

Hacks.Mozilla.Org: A Real-Time Wideband Neural Vocoder at 1.6 kb/s Using LPCNet

Mozilla planet - vr, 29/03/2019 - 09:08

This is an update on the LPCNet project, an efficient neural speech synthesizer from Mozilla’s Emerging Technologies group. In an earlier demo from late last year, we showed how LPCNet combines signal processing and deep learning to improve the efficiency of neural speech synthesis.

This time, we turn LPCNet into a very low-bitrate neural speech codec that’s actually usable on current hardware and even on phones (as described in this paper). It’s the first time a neural vocoder is able to run in real-time using just one CPU core on a phone (as opposed to a high-end GPU)! The resulting bitrate — just 1.6 kb/s — is about 10 times less than what wideband codecs typically use. The quality is much better than existing very low bitrate vocoders. In fact, it’s comparable to that of more traditional codecs using higher bitrates.

LPCNet sample player

Screenshot of a demo player that demonstrates the quality of LPCNet-coded speech

This new codec can be used to improve voice quality in countries with poor network connectivity. It can also be used as redundancy to improve robustness to packet loss for everyone. In storage applications, it can compress an hour-long podcast to just 720 kB (so you’ll still have room left on your floppy disk). With some further work, the technology behind LPCNet could help improve existing codecs at very low bitrates.
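The 720 kB figure is straight arithmetic from the bitrate; a quick sanity check:

```python
# One hour of speech at LPCNet's 1.6 kb/s
bitrate_kbps = 1.6        # kilobits per second
seconds = 60 * 60         # one hour
kilobits = bitrate_kbps * seconds   # total bits transferred, in kilobits
kilobytes = kilobits / 8            # 8 bits per byte
print(kilobytes)                    # → 720.0
```

For comparison, a typical wideband codec at 16 kb/s would need ten times that for the same hour.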

Learn more about our ongoing work and check out the playable demo in this article.

The post A Real-Time Wideband Neural Vocoder at 1.6 kb/s Using LPCNet appeared first on Mozilla Hacks - the Web developer blog.

Categorieën: Mozilla-nl planet

Nathan Froyd: a thousand and one quite modest ones

Mozilla planet - do, 28/03/2019 - 16:07

From The Reckoning, by David Halberstam:

Shaiken’s studies showed that the Japanese had made their great surge in the sixties and seventies, by which time the financial men had climbed to eminence within America’s industrial companies and had successfully subordinated the power of the manufacturing men. When the Japanese advantage in quality became obvious in the early eighties, it was fashionable among American managers to attribute it to the Japanese lead in robots, and it was true that Japanese were somewhat more robotized than the Americans. But in Shaiken’s opinion the Japanese success had come not from technology but from manufacturing skills. The Japanese had moved ahead of America when they were at a distinct disadvantage in technology. They had done it by slowly and systematically improving the process of the manufacturing in a thousand tiny increments. They had done it by being there, on the factory floor, as the Americans were not.

In that opinion Shaiken was joined by Don Lennox, the former Ford manufacturing man who had ended up at Harvester. Lennox had gone to Japan in the mid-seventies and been dazzled by what the Japanese had achieved in modernizing their factories. He was amazed not by the brilliance and originality of what they had done but by the practicality of it. Lennox’s visit had been an epiphany: He had suddenly envisioned the past twenty years in Japan, two decades of Japanese manufacturing engineers coming to work every day, busy, serious, being taken seriously by their superiors, being filled with the importance of the mission, improving the manufacturing in countless small ways. It was not that they had made one giant breakthrough, Lennox realized; they had made a thousand and one quite modest ones.

Categorieën: Mozilla-nl planet

Hacks.Mozilla.Org: Scroll Anchoring in Firefox 66

Mozilla planet - do, 28/03/2019 - 15:21

Firefox 66 was released on March 19th with a feature called scroll anchoring.

It’s based on a new CSS specification that was first implemented by Chrome, and is now available in Firefox.

Have you ever had this experience before?

You were reading headlines, but then an ad loads and moves what you were reading off the screen.

Or how about this?!

You rotate your phone, but now you can’t find the paragraph that you were just reading.

There’s a common cause for both of these issues.

Browsers scroll by tracking the distance you are from the top of the page. As you scroll around, the browser increases or decreases your distance from the top.

But what happens if an ad loads on a page above where you are reading?

The browser keeps you at the same distance from the top of the page, but now there is more content between what you’re reading and the top. In effect, this moves the visible part of the page up away from what you’re reading (and oftentimes into the ad that’s just loaded).

Or, what if you rotate your phone to portrait mode?

Now there’s much less horizontal space on the screen, and a paragraph that was 100px tall may now be 200px tall. If the paragraph you were reading was 1000px from the top of the page before rotating, it may now be 2000px from the top of the page after rotating. If the browser is still scrolled to 1000px, you’ll be looking at content far above where you were before.

The key insight to fixing these issues is that users don’t care what distance they are from the top of the page. They care about their position relative to the content they’re looking at!

Scroll anchoring works to anchor the user to the content that they’re looking at. As this content is moved by ads, screen rotations, screen resizes, or other causes, the page now scrolls to keep you at the same relative position to it.

Demos

Let’s take a look at some examples of scroll anchoring in action.

Here’s a page with a slider that changes the height of an element at the top of the page. Scroll anchoring prevents the element above the viewport from changing what you’re looking at.

Here’s a page using CSS animations and transforms to change the height of elements on the page. Scroll anchoring keeps you looking at the same paragraph even though it’s been moved by animations.

And finally, here’s the original video of screen rotation with scroll anchoring disabled, in contrast to the view with scroll anchoring enabled.

Notice how we jump to an unrelated section when scroll anchoring is disabled?

How it works

Scroll anchoring works by first selecting an element of the DOM to be the anchor node and then attempting to keep that node in the same relative position on the screen.

To choose an anchor node, scroll anchoring uses the anchor selection algorithm. The algorithm attempts to pick content that is small and near the top of the page. The exact steps are slightly complicated, but roughly it works by iterating over the elements in the DOM and choosing the first one that is visible on the screen.

When a new element is added to the page, or the screen is rotated/resized, the page’s layout needs to be recalculated. During this process, we check to see if the anchor node has been moved to a new location. If so, we scroll to keep the page in the same relative position to the anchor node.
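The adjustment step can be sketched roughly as follows (the function and variable names are our own illustration, not Firefox's internals): when layout moves the anchor node, the scroll position shifts by the same delta, so the anchor stays put on screen.

```javascript
// Given the anchor node's document-space offset before and after a reflow,
// compute the scroll position that keeps the anchor at the same place
// in the viewport.
function adjustScroll(scrollTop, anchorTopBefore, anchorTopAfter) {
  return scrollTop + (anchorTopAfter - anchorTopBefore);
}

// A 300px-tall ad loads above the anchor: the anchor's offset grows from
// 1200px to 1500px, and the scroll position follows it.
console.log(adjustScroll(1000, 1200, 1500)); // → 1300
```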

The end result is that changes to the layout of a page above the anchor node are not able to change the relative position of the anchor node on the screen.

Web compatibility

New features are great, but do they break websites for users?

This feature is an intervention. It breaks established behavior of the web to fix an annoyance for users.

It’s similar to how browsers worked to prevent popup-ads in the past, and the ongoing work to prevent autoplaying audio and video.

This type of workaround comes with some risk, as existing websites have expectations about how scrolling works.

Scroll anchoring mitigates the risk with several heuristics to disable the feature in situations that have caused problems with existing websites.

Additionally, a new CSS property has been introduced, overflow-anchor, to allow websites to opt-out of scroll anchoring.

To use it, just add overflow-anchor: none on any scrolling element where you don’t want to use scroll anchoring. Additionally, you can add overflow-anchor: none to specific elements that you want to exclude from being selected as anchor nodes.
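In CSS, those two opt-outs might look like this (the class names are made up for illustration):

```css
/* Disable scroll anchoring for a whole scrolling element */
.live-feed {
  overflow-anchor: none;
}

/* Exclude one element (say, an ad slot) from being selected as an
   anchor node, while leaving anchoring enabled for the page */
.ad-slot {
  overflow-anchor: none;
}
```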

Of course there are still possible incompatibilities with existing sites. If you see a new issue caused by scroll anchoring, please file a bug!

Future work

The version of scroll anchoring shipping now in Firefox 66 is our initial implementation. In the months ahead we will continue to improve it.

The most significant effort will involve improving the algorithm used to select an anchor.

Scroll anchoring is most effective when it selects an anchor that’s small and near the top of your screen.

  1. If the anchor is too large, it’s possible that content inside of it will expand or shrink in a way that we won’t adjust for.
  2. If the anchor is too far from the top of the screen, it’s possible that content below what you’re looking at will expand and cause unwanted scroll adjustments.

We’ve found that our implementation of the specification can select inadequate anchors on pages with table layouts or significant content inside of overflow: hidden.

This is due to a fuzzy area of the specification where we chose an approach different than Chrome’s implementation. This is one of the values of multiple browser implementations: We have gained significant experience with scroll anchoring and hope to bring that to the specification, to ensure it isn’t defined by the implementation details of only one browser.

The scroll anchoring feature in Firefox has been developed by many people. Thanks go out to Daniel Holbert, David Baron, Emilio Cobos Álvarez, and Hiroyuki Ikezoe for their guidance and many reviews.

The post Scroll Anchoring in Firefox 66 appeared first on Mozilla Hacks - the Web developer blog.

Categorieën: Mozilla-nl planet

The Mozilla Blog: Facebook and Google: This is What an Effective Ad Archive API Looks Like

Mozilla planet - do, 28/03/2019 - 06:00
Mozilla and a cohort of independent researchers have detailed the key traits that make for an effective ad archive API — and more transparent elections

 

On March 28 — after urging from dozens of civil society organizations — Facebook is set to launch its advertising archive API. This tool is intended to provide researchers, journalists, and users with transparency into political ads and audience targeting on Facebook.

Google also pledged to launch a similar tool ahead of the May 2019 EU Parliamentary elections (but postponed their initial March launch date). As disinformation continues to spread across online platforms with the potential to interfere with democratic elections, it’s critical that these tools are accessible and effective.

So today, Mozilla and a cohort of 10 independent researchers are publishing five guidelines that these APIs must meet in order to truly support election influence monitoring and independent research.

Says Ashley Boyd, Mozilla’s VP of Advocacy: “Researchers play a critical role in tracking and reporting disinformation, and then sharing this information with the public and public officials. These API guidelines — developed with technical and policy experts — represent baseline requirements which would empower researchers to better understand and document how disinformation spreads, how it influences elections, and how it impacts society.”

Boyd continues: “Our goal is to ensure lawmakers and the public can critically assess Facebook and Google’s transparency efforts. And, can hold the tech companies accountable if they fall short.”

The 10 experts are based at Oxford University, the University of Amsterdam, Vrije Universiteit Brussel, Stiftung Neue Verantwortung, and other institutions.

These guidelines are being shared publicly with European Commissioners Mariya Gabriel, Julian King, Andrus Ansip, and Vera Jourova, who are responsible for assessing how platforms are upholding their commitments under the EU Code of Practice on Disinformation. The guidelines are also being shared with Facebook and Google.

The guidelines + our letter to policymakers and the tech companies:

 

To: Google, Facebook, Twitter

Cc: Vice President of the European Commission Andrus Ansip, European Commissioners Mariya Gabriel, Vera Jourova, Julian King

We, the undersigned, are independent researchers investigating the wide variety of issues that are crucial to understanding the impact of disinformation on our societies. This includes research into:

  • inauthentic accounts and behaviour on social media,
  • political and issue-based advertising,
  • micro-targeting practices and ad placements by political parties and other entities,
  • the effectiveness of self-regulation measures to counter disinformation.

To do this work effectively, there must be fully functional, open APIs that enable advanced research and the development of tools to analyse political ads targeted to EU residents. This requires access to the full scope of data relevant to political advertising, and that access must be provided in a format that allows for rich analysis. Tools provided often lack the necessary data or, due to limited functionality, do not allow for analysis.

A functional, open API should have the following:

[1] Comprehensive political advertising content. The APIs should include paid political ads and issue-based ads, without limiting access on the basis of pre-selected topics or keywords. “Political” ads might include, but are not limited to:

  • direct electioneering content
  • candidates or holders of political office
  • matters of legislation or decisions of a court
  • functions of government

Non-paid, public content that is generated by users who are known political content purveyors should also be available.

[2] The content of the advertisement and information about targeting criteria, including:

  • The text, image, and/or video content and information about where the ad appeared (newsfeed, sidebar, etc.).
  • The targeting criteria used by advertisers to design their ad campaign, as well as information about the audience that the ad actually reached.
  • The number of impressions that an ad received within specific geographic and demographic criteria (e.g. within a political district, in a certain age range), broken down by paid vs. organic reach.
  • The number of engagements that an ad received, including user actions beyond viewing an ad.
  • Information about how much an advertiser paid to place the ad.
  • Information about microtargeting, including whether the ad was a/b tested and the different versions of the ad; if the ad used a lookalike audience; the features (race, gender, geography, etc.) used to create that audience; if the ad was directed at platform-defined user segments or interests, and the segments or interests used; or if the ad was targeted based on a user list the advertiser already possessed.

[3] Functionality to empower, not limit, research and analysis, including:

  • Unique identifiers associated with each advertisement and advertiser to allow for trend analysis over time and across platforms.
  • All images, videos, and other content in a machine-readable format accessible via a programmatic interface.
  • The ability to download a week’s worth of data in less than 12 hours and a day’s worth of data in less than two hours.
  • Bulk downloading functionality of all relevant content. It should be feasible to download all historical data within one week.
  • Search functionality by the text of the content itself, by the content author or by date range.

[4] Up-to-date and historical data access, including:

  • Availability of advertisements within 24 hours of publication.
  • Availability of advertisements going back 10 years.
  • APIs should be promptly fixed when they are broken.

[5] Public access. The API itself and any data collected from the API should be accessible to and shareable with the general public.

In the spirit of upholding the EU Code of Practice on Disinformation, we expect you to empower the research community by implementing open, functional APIs of the quality outlined in this letter — just as we expect elected officials and public authorities to fully support the release of such data in a privacy-compliant fashion to enable independent research and inform public debate. Your action on this is essential to ensuring the integrity of the upcoming European Parliamentary elections — as well as elections happening all around the globe — is upheld.

Yours Sincerely,

Mozilla Foundation

Co-written by

  1. Jef Ausloos (University of Amsterdam, NL)
  2. Chloe Colliver (Institute for Strategic Dialogue, UK)
  3. Laura Edelson (NYU Tandon School of Engineering, US)
  4. Sasha Havlicek (Institute for Strategic Dialogue, UK)
  5. Natali Helberger (University of Amsterdam, NL)
  6. Stefan Heumann (Stiftung Neue Verantwortung, GER)
  7. Sam Jeffers (Who Targets Me, UK)
  8. Rasmus Nielsen (University of Oxford, Reuters Institute for the Study of Journalism, UK)
  9. Alex Sängerlaub (Stiftung Neue Verantwortung, GER)
  10. Michael Veale (University College London and University of Birmingham, UK)
  11. Mathias Vermeulen (Vrije Universiteit Brussel, BE + Mozilla Foundation)

Co-signed by

  1. Alexandre Alaphilippe (EUDisinfoLab, BE)
  2. Jonathan Albright (Columbia University, US)
  3. Isabelle Augenstein (University of Copenhagen, DK)
  4. Dominik Batorski (University of Warsaw, PL)
  5. Anja Bechmann (Datalab, University of Aarhus, DK)
  6. Reuben Binns (University of Oxford, UK)
  7. Kalina Bontcheva (University of Sheffield, UK)
  8. Ruben Bouwmeester (Deutsche Welle, DE)
  9. Elda Brogi (European University Institute, IT)
  10. Axel Bruns (Queensland University of Technology, AUS)
  11. Paul-Olivier Dehaye (Personaldata.Io, BE)
  12. Leon Derczynski (IT University of Copenhagen, DK)
  13. José van Dijck (Utrecht University, NL)
  14. Marius Dragomir (Central European University, HU)
  15. Charles Ess (University of Oslo, NO)
  16. Aline Franzke (University Duisburg-Essen, DE)
  17. Erika Franklin Fowler (Wesleyan University, US)
  18. Michael Franz (Bowdoin College, US)
  19. Frederic Guerrero-Solé (Universitat Pompeu Fabra of Barcelona, ES)
  20. Jakub Grue Simonsen (University of Copenhagen, DK)
  21. Luca Hammer (University of Paderborn, GER)
  22. Vladan Joler (Share Lab, US)
  23. Steve Jones (University of Illinois at Chicago, US)
  24. Brian Keegan (University of Colorado, Boulder, US)
  25. Aleksi Knuutila (Open Knowledge, FI)
  26. Alex Krasodomski (Demos, UK)
  27. Aleksandra Kuczerawy (University of Leuven, CITIP, BE)
  28. Paddy Leersen (University of Amsterdam, NL)
  29. Christina Lioma (University of Copenhagen, DK)
  30. James Meese (University of Technology Sydney, AUS)
  31. Divina Meigs (Sorbonne Nouvelle University, FR)
  32. Marianela Milanes (Asociación por los Derechos Civiles, ARG)
  33. Carl Miller (Demos, UK)
  34. Aviv Ovadya (The Thoughtful Technology Project, US)
  35. Symeon Papadopoulos (Centre for Research and Technology Hellas, GR)
  36. Cameron Piercy (University of Kansas, US)
  37. Thomas Poell (University of Amsterdam, NL)
  38. Oreste Pollicino (Università Bocconi, IT)
  39. Travis Ridout (Washington State University, US)
  40. Luca Rossi (IT University of Copenhagen, DK)
  41. Yotam Shmargad (University of Arizona, US)
  42. Javier Ruiz Soler (European University Institute, IT)
  43. Damian Tambini (London School of Economics, UK)
  44. Emily Taylor (Chatham House, UK)
  45. Rebekah K. Tromble (Leiden University, NL)
  46. Claes De Vreese (University of Amsterdam, NL)
  47. Abby Wood (University of Southern California Gould School of Law, US)
  48. Amy X. Zhang (MIT CSAIL, US)
  49. Arkaitz Zubiaga (Queen Mary University of London, UK)

If you are a researcher working on these issues and want to co-sign this letter please contact us at mathias@mozillafoundation.org, and we will add names on a rolling basis to this letter.

This work is part of Mozilla’s larger effort to combat online disinformation ahead of the upcoming EU elections, and other elections around the world in 2019. We’re closely following the commitments these companies made in the EU Code of Practice on Disinformation, in order to influence the development and assessment of the most effective tools possible.

Earlier this year, Mozilla and dozens of our allies sent a public letter to Facebook, demanding the company make good on their promises to provide more transparency around political advertising. Mozilla also conducted an EU-wide survey on the state of misinformation, and found that nearly 84 percent of people polled suspected (or knew for certain) that they had seen misinformation while using the internet that very week.

The post Facebook and Google: This is What an Effective Ad Archive API Looks Like appeared first on The Mozilla Blog.

Categorieën: Mozilla-nl planet


Software update: Mozilla Firefox 66.0.2 - Computer - Downloads - Tweakers

Nieuws verzameld via Google - wo, 27/03/2019 - 20:22
Software update: Mozilla Firefox 66.0.2 - Computer - Downloads  Tweakers

Mozilla has released an update to version 66 of its Firefox web browser for the second time in a short period. Version 66.0.2 includes, among other things ...

Categorieën: Mozilla-nl planet

Mozilla brings free password manager Firefox Lockbox to Android - Techzine.nl

Nieuws verzameld via Google - wo, 27/03/2019 - 18:44
Mozilla brings free password manager Firefox Lockbox to Android  Techzine.nl

Mozilla has released its free password manager Firefox Lockbox for Android. The manager works together with the Firefox browser.

Categorieën: Mozilla-nl planet

Daniel Stenberg: curl goes 180

Mozilla planet - wo, 27/03/2019 - 08:08

The 180th public curl release is a patch release: 7.64.1. It has been 49 days since 7.64.0 shipped, and this is the first release since our 21st birthday last week. (Full changelog.)

Numbers

  • the 180th release
  • 2 changes
  • 49 days (total: 7,677)
  • 116 bug fixes (total: 5,029)
  • 184 commits (total: 24,111)
  • 0 new public libcurl functions (total: 80)
  • 2 new curl_easy_setopt() options (total: 267)
  • 1 new curl command line option (total: 221)
  • 49 contributors, 25 new (total: 1,929)
  • 25 authors, 10 new (total: 669)
  • 0 security fixes (total: 87)

News!

This is a patch release, but we still managed to introduce some fun news in this version. We ship brand new alt-svc support, which we encourage keen and curious users to enable in their builds and test out. We strongly discourage anyone from using the feature in production, as we reserve the right to change it before the EXPERIMENTAL label is removed. As mentioned in the blog post linked above, alt-svc is the official way to bootstrap into HTTP/3, so this is a fundamental stepping stone for supporting that protocol version in a future curl.
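Trying the experimental feature out might look something like this (a sketch based on the 7.64.1 documentation; the configure flag and cache file name are what we understand them to be, so adjust to your setup):

```shell
# Build curl with the experimental alt-svc code compiled in
./configure --enable-alt-svc
make

# Point the new --alt-svc option at a cache file; curl stores Alt-Svc
# response headers there and reuses them on later requests
./src/curl --alt-svc altsvc-cache.txt https://example.com/
```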

We also introduced brand new support for the Amiga-specific TLS backend AmiSSL, which is a port of OpenSSL to that platform.

Bug-fixes

With over a hundred bug-fixes landed in this period there are a lot to choose from, but some of the most fun and important ones from my point of view include the following.

connection check crash

This was a rather bad regression that occasionally caused crashes when libcurl would scan its connection cache for a live connection to reuse. Most likely to trigger with the Schannel backend.

connection sharing crash

The example source code that uses a shared connection cache among many threads was another crash regression. It turned out a thread could accidentally get hold of a connection already in private use by another thread…

“Expire in…” logs removed

Having the harmless but annoying text there was a mistake to begin with. It was a debug-only line that accidentally was pushed and not discovered in time. It’s history now.

curl -M manual removed

The tutorial-like manual piece that was previously included in the -M (or --manual) built-in command documentation, is no longer included. The output shown is now just the curl.1 man page. The reason for this is that the tutorial has gone a bit stale and there is now better updated and better explained documentation elsewhere. Primarily perhaps in everything curl. The online version of that document will eventually also be removed.

TLS terminology cleanups

We now refer to the Windows TLS backend as “Schannel” and the Apple macOS one as “Secure Transport” in all curl code and documentation. Those are the official names and those are the names people in general know them as. No more use of the former names that sometimes made people confused.

Shaving off bytes and mallocs

We rearranged the layout of a few structs and changed to using bitfields instead of booleans and more. This way, we managed to shrink two of the primary internal structs by 5% and 11% with no functionality change or loss.

Similarly, we removed a few mallocs, even in the common code path, so now the number of allocs for my regular test download of 4GB data over a localhost HTTP server claims fewer allocs than ever before.

Next?

We estimate that there will be a 7.65.0 release to ship 56 days from now. Then we will remove some deprecated features, perhaps add something new and quite surely fix a whole bunch of more bugs. Who knows what fun we will come up with at curl up this coming weekend?

Keep reporting. Keep posting pull-requests. We love them and you!

<figcaption>Brand new sticker shipment for curl up from our beloved sticker sponsor!</figcaption>


Categorieën: Mozilla-nl planet

Mozilla Addons Blog: Extensions in Firefox 67

Mozilla planet - wo, 27/03/2019 - 02:52

There are a couple of major changes coming to Firefox. One is in the current Beta 67 release, while the other is in the Nightly 68 release but is covered here as an early preview for extension developers.

Respecting User Privacy

The biggest change in release 67 is that Firefox now offers controls to determine which extensions run in private browsing windows. Prior to this release, all extensions ran in all windows, normal and private, which wasn’t in line with Mozilla’s commitment to user privacy. Starting with release 67, though, both developers and users have ways to specify which extensions are allowed to run in private windows.

Going Incognito

For extension developers, Firefox now fully supports the value not_allowed for the manifest `incognito` key. As with Chrome, specifying not_allowed in the manifest will prevent the extension from running in, or receiving events from, private windows.

The Mozilla Add-on Policies require that extensions not store browsing data or leak identity information to private windows. Depending on what features your extension provides, using not_allowed might be an easy way to guarantee that your extension adheres to the policy.

Note that Chrome’s split value for incognito is not supported in Firefox at this time.
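For illustration, a minimal manifest.json fragment using this key might look like the following (the extension name and version are placeholders):

```json
{
  "manifest_version": 2,
  "name": "example-extension",
  "version": "1.0",
  "incognito": "not_allowed"
}
```

With this in place, the extension simply never runs in private windows, so no special-case code is needed to satisfy the policy.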

Raising User Awareness

There are significant changes in Firefox’s behavior and user interface so that users can better see and control which extensions run in private windows.  Starting with release 67, any extension that is installed will be, by default, disallowed from running in private windows. The post-install door hanger, shown after an extension has been installed, now includes a checkbox asking the user if the extension should be allowed to run in private windows.

To avoid potentially breaking existing user workflows, extensions that are already installed when a user upgrades from a previous version of Firefox to version 67 will automatically be granted permission to run in private windows. Only newly installed extensions will be excluded from private windows by default and subject to the installation flow described above.

There are significant changes to the Add-ons Manager page (about:addons), too. First, a banner at the top of the page describes the new behavior in Firefox.

This banner will remain in Firefox for at least two releases to make sure all users have a chance to understand and get used to the new policy.

In addition, for each extension that is allowed to run in private windows, the Add-ons Manager will add a badge to the extension’s card indicating that it has this permission, as shown below.

The lack of a badge indicates that the extension is not allowed to run in private windows and will, therefore, only run in normal windows. To change the behavior and either grant or revoke permission to run in private windows, the user can click on an extension’s card to bring up its details.

On the detail page, the user can choose to either allow or disallow the extension to run in private windows.

Finally, to make sure that users of private windows are fully aware of the new extension behavior, Firefox will display a message the first time a user opens a new private window.

Proper Private Behavior

As a developer, you should take steps to ensure that, when the user has not granted your extension permission to run in private windows, it continues to work normally. If your extension depends on access to private windows, it is important to communicate this to your users, including the reasons why access is needed. You can use the extension.isAllowedIncognitoAccess API to determine whether users have granted your extension permission to run in private windows.
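As a sketch of that check, the snippet below wraps `extension.isAllowedIncognitoAccess` in a helper (the stub for `browser` is only there so the sketch runs outside Firefox; in a real extension you would use the global `browser` object directly, and the status strings are made up for illustration):

```javascript
// Fall back to a stub when the WebExtensions `browser` namespace
// is unavailable (i.e. when running outside Firefox).
const ext =
  typeof browser !== "undefined"
    ? browser
    : { extension: { isAllowedIncognitoAccess: async () => false } };

// Returns a short status string describing private-window access.
async function privateAccessStatus() {
  const allowed = await ext.extension.isAllowedIncognitoAccess();
  return allowed
    ? "private-window access granted"
    : "no private-window access";
}
```

An extension could call `privateAccessStatus()` at startup and, for example, hide private-window-only features when access has not been granted.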

Note that some WebExtension APIs may still affect private windows, even if the user has not granted the calling extension access to private windows. The browserSettings API is the best example of this, where an extension may make changes to the general behavior of Firefox, including how private windows behave, without needing permission to access private windows.

Finally, there is a known issue where some extensions that use the proxy.settings API require private browsing permission to use that API even in normal windows (all other proxy APIs work as expected). Mozilla is working to address this and will be reaching out to impacted developers.

User Scripts Are Coming

This is a bit of a teaser for Firefox 68, but after many months of design, implementation and testing, a WebExtensions user scripts API is just about ready. User scripts have been around for a very long time and are often closely associated with Firefox.  With the help of a user script extension such as Greasemonkey or Tampermonkey, users can find and install scripts that modify how sites look and/or work, all without having to write an extension themselves.

Support for user scripts is available by default in the Nightly version of Firefox 68, but can be enabled in both the current Firefox release (66) and Beta release (67) versions by setting the following preference in about:config:

extensions.webextensions.userScripts.enabled = true

This is a fairly complex feature and we would love for developers to give it a try as early as possible, which is why it’s being mentioned now. Documentation on MDN is still being developed, but below is a brief description of how this feature works.

Registering A User Script

The userScripts API provides a browser.userScripts.register API very similar to the browser.contentScripts.register API. It returns a promise which is resolved to an API object that provides an unregister method to unregister the script from all child processes.

const registeredUserScript = await browser.userScripts.register(
  userScriptOptions       // object
);
...
await registeredUserScript.unregister();

userScriptOptions is an object that represents the user scripts to register. It has the same syntax as the contentScript options supported by browser.contentScripts.register, which describe which web pages the scripts should be applied to, with two differences:

    • It does not support a css property (use browser.contentScripts.register to dynamically register/unregister stylesheets).
    • It supports an optional property, scriptMetadata, a plain JSON object which contains metadata properties associated with the registered user script.
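Putting those two differences together, a hypothetical registration with scriptMetadata might look like the following sketch (the file name, match pattern, and metadata values are made up, and the stub for `browser` only exists so the snippet runs outside Firefox):

```javascript
// Stub `browser.userScripts` when running outside Firefox.
const api =
  typeof browser !== "undefined"
    ? browser
    : {
        userScripts: {
          register: async () => ({ unregister: async () => {} }),
        },
      };

async function registerDemoUserScript() {
  // Same shape as contentScripts.register options, minus `css`,
  // plus the optional `scriptMetadata` object.
  const registered = await api.userScripts.register({
    js: [{ file: "demo-user-script.js" }],
    matches: ["*://example.com/*"],
    scriptMetadata: { name: "demo", theme: "dark" },
  });
  return registered; // exposes unregister()
}
```

The returned object's `unregister()` method removes the script from all child processes, mirroring the contentScripts API.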
Providing User Script Functionality

To support injected user scripts, an extension must provide a special kind of content script called an APIScript. Like a regular content script, it runs in the context of web pages and has access to the same WebExtensions APIs that are available to content scripts.

The APIScript is declared in the manifest using the user_scripts.api_script property:

manifest.json

{
  ...
  "user_scripts": {
    "api_script": "apiscript.js"
  }
}


The APIScript is executed automatically on any page matched by the userScripts.register API called from the same extension, and it runs before the user script itself is executed.

The userScripts API also provides a new event, browser.userScripts.onBeforeScript, which the APIScript can listen for. It is fired right before a matched user script is executed, allowing the APIScript to export custom API methods to the user script.

browser.userScripts.onBeforeScript.addListener(listener)
browser.userScripts.onBeforeScript.removeListener(listener)
browser.userScripts.onBeforeScript.hasListener(listener)

In the above API, listener is a function called right before a user script is executed. The function will be passed a single argument, a script object that represents the user script that matched a web page. The script object provides the following properties and methods:

  • metadata – The scriptMetadata property that was set when the user script was registered via the userScripts.register API.
  • global – Provides access to the isolated sandbox for this particular user script.
  • defineGlobals – An API method that exports an object containing globally available properties and methods to the user script sandbox.  This method must be called synchronously to guarantee that the user script has not already executed.
  • export – An API method that converts a given value into a value that the user script code is allowed to access. It can be used by exported API methods to return or resolve non-primitive values; the exported objects can also provide methods that the user script code is allowed to access and call.

The example below shows how a listener might work:

browser.userScripts.onBeforeScript.addListener(function (script) {
  // `script` is an API object that represents the userScript
  // that is going to be executed.

  // script.metadata gives access to the userScript metadata
  // (it returns the value of the scriptMetadata property from
  // the call to userScripts.register).

  // Export some global properties into the userScript sandbox
  // (this method has to be called synchronously from the
  // listener, otherwise the userScript may already have been
  // executed).
  script.defineGlobals({
    aGlobalPropertyAccessibleFromUserScriptCode: "prop value",
    myCustomAPIMethod(param1, param2) {
      // Custom methods exported from the API script can use the
      // WebExtensions APIs available to extension content scripts.
      browser.runtime.sendMessage(...);
      ...
      // Primitive values can be returned directly.
      return 123;
      ...
      // Non-primitive values have to be exported explicitly using
      // the export method provided by the script API object.
      return script.export({
        objKey1: {
          nestedProp: "nestedvalue",
        },
        // Explicitly exported objects can also provide methods.
        objMethod() { ... },
      });
    },
    async myAsyncMethod(param1, param2, param3) {
      // Exported methods can also be declared as async.
    },
  });
});

Miscellaneous Items

It was a busy release and besides the two major features detailed above, a number of smaller features (and fixes) also made it into Firefox 67.

Thank You

Within the WebExtensions API, a total of 74 bugs were closed in Firefox 67. Volunteer contributors continue to be an integral part of the effort and a huge thank you goes out to those who contributed to this release, including: Oriol Brufau, Shailja Agarwala, Edward Wu, violet.bugreport and rugk. The combined efforts of Mozilla and its amazing community members are what make Firefox the best browser in the world.

The post Extensions in Firefox 67 appeared first on Mozilla Add-ons Blog.

Categorieën: Mozilla-nl planet

Chris H-C: Eulogy for a 13-Year-Old Display

Mozilla planet - di, 26/03/2019 - 15:27

I was working for the Department of National Defence in Canada (specifically Defence Research and Development Canada) in early 2005 when I first plugged in my new xplio CM998 monitor. It was amazing.

Not only was it one of those new lightweight LCD monitors (I have since owned desks that weigh less), it supported resolutions up to 1280×1024 pixels natively and had both DVI and VGA ports!

It also generated enough heat in my basement apartment that I could notice it from across the room, but that was a plus in that cold Scarborough winter.

From there I moved it to an apartment. Another apartment. A home. And then another home. And then, finally, when I had stopped using it at home I started using it at work for Mozilla.

I liked its comfortable 5:4 aspect ratio, and the fact it wouldn’t wobble when I got up to get coffee.

On Friday it wouldn’t turn on. Well, it did turn on. Linux was assigning it desktop space, knew who it was and how big it was… but it wouldn’t display anything.

I would have liked to turn it off and on again, but the power switch hasn’t worked reliably since my daughter was born. So I did the next best thing and unplugged it and plugged it back in. It would display my Firefox wallpaper for just long enough for some capacitor to warm up or something, and then it would black out.

Nothing I could do would resuscitate it. No cable swaps, no buttons I could press, no whining or cajoling.

Here ends the 13-year service life of my venerable SXGA display.

Your service did not go unnoticed. Enjoy your recycling.

:chutten

Categorieën: Mozilla-nl planet

The Mozilla Blog: Firefox Lockbox Now on Android, Keeping your Passwords Safe

Mozilla planet - di, 26/03/2019 - 14:00

If you’re like most Firefox users, you have dozens if not hundreds of stored logins in your browser. When you use Firefox Accounts, you can take your logins with you on the web in Firefox Mobile. Today, many of those logins are the same ones used in the apps you download on mobile, so we’ve been working on making your various online identities work on your terms.

Today, we are excited to bring Firefox Lockbox to Android users, a secure app that keeps people’s passwords with them wherever they go.

Like Firefox Send, a free encrypted file transfer service which keeps people’s personal information private unveiled earlier this month, Firefox Lockbox is based on another successful Test Pilot experiment. Since its iOS launch, Firefox Lockbox has had more than 50K downloads, and was most recently optimized for iPad. As we continue to evolve our ecosystem of privacy-first solutions, Firefox Lockbox for Android was the next step in our efforts to give people the advantage when it comes to keeping them safe online with trusted tools and services from Firefox.

Unlike a traditional password manager, Firefox Lockbox is a simple app that gives easy access to the passwords you have already stored in your Firefox browser. No extra set-up necessary. This makes Firefox Lockbox the perfect solution for people who want to secure their personal information, but may not have time (or the recall) to choose and transfer all of their passwords into a password manager.

Here’s how Firefox Lockbox for Android works:

Retrieve your passwords on the go

Picture this: You volunteer your Netflix account for movie night at a friend’s but can’t remember your password. Rather than going through the reset process or accidentally locking yourself out with too many password attempts, Firefox Lockbox allows you to easily retrieve your password on your phone (now on both iOS and Android devices) so you can log in without a hitch. Plus, you can also use Face ID and fingerprint touch to unlock the app, making it easy to securely access your passwords.

Retrieve your password from your phone

Take your passwords with you on any device

Keeping track of your passwords across devices can be challenging, especially since many of us have tried to make them more complicated to protect ourselves against online data breaches. Say goodbye to the password “cheat sheet” stored as a note or “contact” in your phone or written on a sticky note on your desk. Firefox Lockbox works with autofill to make the transition from using your Firefox desktop browser to mobile seamless, by automatically filling in your passwords saved on desktop to your everyday apps like Facebook or Yelp, on your mobile device.

Automatically fill in saved passwords from your desktop

Keeping track of your passwords is one of the first steps to taking control of your online privacy and Firefox Lockbox for Android is here to help.

We hope you try Firefox Lockbox for Android today. You can download it on Google Play.

The post Firefox Lockbox Now on Android, Keeping your Passwords Safe appeared first on The Mozilla Blog.

Categorieën: Mozilla-nl planet

