mozilla

Mozilla Nederland LogoDe Nederlandse
Mozilla-gemeenschap

Mozilla Open Policy & Advocacy Blog: Ahead of the Federal Senate vote, Mozilla endorses approval of the Brazilian Data Protection Bill (PLC 53/2018)

Mozilla planet - di, 03/07/2018 - 04:28

The Brazilian Senate may vote as soon as this week on the Personal Data Protection Bill (PLC 53/2018), approved by the Chamber of Deputies on May 29 after nearly a decade of debate over various proposals on the subject. Although some aspects of the bill could still be improved, Mozilla believes the text represents a baseline data protection framework for Brazil, and we urge Brazilian regulators to approve it urgently.

Specifically, PLC 53/2018:

  1. Is the result of an inclusive consultation process open to Brazilian society, following the example of the Marco Civil da Internet (Law No. 12,965/2014). The discussion of PLC 53/2018 involved multiple stakeholders from government, the private sector, civil society, and academia. The bill has also received public support from various private-sector and civil-society organizations.
  2. Makes no distinction between the private sector and the government, applying its provisions to both equally. Creating broad exceptions for the public sector, as proposed in alternative bills, would end up diluting the law's effectiveness in safeguarding user rights. The federal government is arguably the largest collector of personal data in Brazil, and data collection is a mandatory requirement for access to services. The proximity of the 2018 elections and the absence of a data protection law raise concerns about the possible use of personal data to influence the electoral process. This point is especially important in light of the recent debates and revelations around Cambridge Analytica.
  3. Introduces a self-sufficient, independent, and robust national regulator. The effectiveness of a personal data protection framework depends, above all, on mechanisms to guarantee its obligations and rights. This includes a high degree of independence from the government, since the regulator must also have jurisdiction over the government's own data protection activities. We also welcome the introduction of a participatory, multi-stakeholder body responsible for issuing guidelines, ensuring transparency, and evaluating the implementation of the law.
  4. Establishes a robust set of rights for individuals, stressing the importance of obtaining user consent and requiring those responsible for data processing to respect the principles of data minimization, limitation of data use and collection, and database security. By qualifying consent as free, informed, and unequivocal, PLC 53/2018 not only sets a high standard for consent but also puts users in control of their data and online experiences. Finally, the bill also strengthens accountability mechanisms: (a) it places the burden on the processing agent to demonstrate the adoption and effectiveness of data protection measures, and (b) it allows users to access and rectify data about themselves, as well as to object to the processing of their data.
  5. Defines categories of sensitive personal data; on this point, it is good to see biometric data included in the list. We believe a stricter regime for sensitive data helps signal to those responsible for data processing that a higher level of protection and security will be required given the sensitivity of the information.

The lack of a comprehensive data protection law exposes Brazilian citizens to risks arising from the misuse of their personal data by both the government and private services. This is a timely and historic moment in which Brazil has the opportunity to finally pass a general data protection law that will safeguard the rights of Brazilians for generations to come.

This post was originally published in English.

The post Ahead of the Federal Senate vote, Mozilla endorses approval of the Brazilian Data Protection Bill (PLC 53/2018) appeared first on Open Policy & Advocacy.

Categorieën: Mozilla-nl planet

Karl Dubost: Five years

Mozilla planet - di, 03/07/2018 - 01:54

On July 2, 2013, I was hired by Mozilla on the Web Compatibility team. It has been 5 years. I didn't count the emails, the commits, the bugs opened and resolved. We do not necessary strive by what we accomplished, specifically when expressed in raw numbers. But there was a couple of transformations and skills that I have acquired during these last five years which are little gems for taking the next step.

Working is also a lot of failures, drawbacks, painful learning experiences. A working space is humanity. The material we work with (at least in computing) is mostly ideas conveyed by humans. Not the right word at the right moment, a wrong mood, a desire for a different outcome, we do fail. Then we try to rebuild, to protect ourselves. This delicate balance is though a risk worth taking on the long term.

I'm looking forward the next step, I really mean the next footstep. The one in the path, the one of the hikers, just the next one, which brings you closer from the next flower, the next grass, which transforms the landscape in an undetectable way. Breathing, discovering, learning, with tête-à-tête or alone.

Thanks to Mozilla and its community to allow me to share some of my values with some of yours. I'm very much looking forward the next day to continue this journey with you.

Otsukare!

Categorieën: Mozilla-nl planet

Mozilla Open Policy & Advocacy Blog: Ahead of Senate vote, Mozilla endorses Brazilian Data Protection Bill (PLC 53/2018)

Mozilla planet - di, 03/07/2018 - 00:03

As soon as this week, the Brazilian Senate may vote on Brazilian Data Protection Bill (PLC 53/2018), which was approved by the Chamber of Deputies on May 29th following nearly a decade of debate on various draft bills. While aspects of the bill will no doubt need to be refined and evolve with time, overall, Mozilla believes this bill represents a strong baseline data protection framework for Brazil, and we urge Brazilian policymakers to pass it quickly.

Specifically, this bill:

  1. Is the outcome of an inclusive and open consultation process, following the example of the landmark Brazilian Civil Rights Framework for the Internet (‘Marco Civil’). The consultation has involved multiple stakeholders from government, private sector, civil society, and academia. The bill has also received public support from various organizations in the private sector and civil society.
  2. Applies with equal strength to private sector and the government. Creating broad exceptions for government use of data, as proposed in alternative bills, would dilute the effectiveness of the data protection law to safeguard user rights. The government is arguably the largest data collector in Brazil, and government data collection is often mandatory for access to services. As the Brazilian general election approaches, some are concerned that in the absence of a data protection law, personal data could be used to influence the election. This is especially salient given the recent debates and revelations around Cambridge Analytica.
  3. Introduces a well-resourced, independent, and empowered national regulator. A strong enforcement mechanism is critical for any data protection framework to be effective. This includes a high degree of independence from the government, since the regulator should have jurisdiction over claims against the government as well. We also welcome the introduction of a participatory multi-stakeholder body to issue guidelines, ensure transparency, and evaluate the implementation of the law.
  4. Puts in place a robust framework of user rights with meaningful user consent at its core, requiring data controllers and processors to abide by the principles of data minimisation, purpose limitation, collection limitation, and data security. In particular, it includes a high standard of free, informed, and unequivocal consent, putting users in control of their data and online experiences. It also emphasizes mechanisms for accountability, putting the onus on the agent to demonstrate both the adoption and effectiveness of data protection measures, and allows for the user to access and rectify data about themselves as well as withdraw consent for any reason.
  5. Defines categories of sensitive personal data; in particular, it’s good to see biometric data included in this list. A stricter regime for certain categories of sensitive data is useful in order to signal to data controllers that a higher level of protection and security will be required given the sensitivity of the information.

The lack of a comprehensive data protection law exposes Brazilian citizens to risks of misuse of their personal data by both government and private services. This is a timely and historic moment where Brazil has the opportunity to finally pass a baseline data protection law that will safeguard the rights of Brazilians for generations to come.

Click here for a Portuguese translation of this post.

The post Ahead of Senate vote, Mozilla endorses Brazilian Data Protection Bill (PLC 53/2018) appeared first on Open Policy & Advocacy.

Categorieën: Mozilla-nl planet

Benjamin Bouvier: Making calls to WebAssembly fast and implementing anyref

Mozilla planet - ma, 02/07/2018 - 20:00

Since this is the end of the first half-year, I think it is a good time to reflect and show some work I've been doing over the last few months, apart from the regular batch of random issues, security bugs, reviews and the fixing of 24 bugs found by our …

Categorieën: Mozilla-nl planet

Mozilla Addons Blog: July’s Featured Extensions

Mozilla planet - ma, 02/07/2018 - 19:34


Pick of the Month: Midnight Lizard

by Pavel Agarkov
More than just dark mode, Midnight Lizard lets you customize the readability of the web in granular detail—adjust everything from color schemes to lighting contrast.

“This has got to be the best dark mode add-on out there, how is this not more popular? 10/10”

Featured: Black Menu for Google

by Carlos Jeurissen
Enjoy easy access to Google services like Search, Translate, Google+, and more without leaving the webpage you’re on.

“Awesome! Makes doing quick tasks with any Google app faster and simpler!”

Featured: Authenticator

by mymindstorm
Add an extra layer of security by generating two-step verification codes in Firefox.

“Thank you so much for making this. I would not be able to use many websites without it now days, literally, since I don’t use a smartphone. Thank you thank you thank you. Works wonderfully.”

Featured: Turbo Download Manager

by InBasic
A download manager with multi-threading support.

“One of the best.”

Featured: IP Address and Domain Information

by webdev7
Know the web you travel! See detailed information about every IP address, domain, and provider you encounter in the digital wild.

“The site provides valuable information and is a tool well worth having.”

If you’d like to nominate an extension for featuring, please send it to amo-featured [at] mozilla [dot] org for the board’s consideration. We welcome you to submit your own add-on!

The post July’s Featured Extensions appeared first on Mozilla Add-ons Blog.

Categorieën: Mozilla-nl planet

Mozilla Addons Blog: Larger image support on addons.mozilla.org

Mozilla planet - ma, 02/07/2018 - 19:16

Last week, we pushed an update that enables add-on developers to use larger image sizes on their add-on listings.

We hadn’t updated our size limits for many years, so the images on listing pages are fairly small. The image viewer on the new website design scales the screenshots to fit the viewport, which makes these limitations even more obvious.

For example, look at this old listing of mine.

Old listing image on new site

The image view on the new site. Everything in this screenshot is old.

The image below better reflects how the magnified screenshot looks on my browser tab.

All of the pixels

Ugh

After this fix, developers can upload images as large as they prefer. The maximum image display size on the site is 1280×800 pixels, which is what we recommend they upload. For other image sizes we recommend using the 1.6:1 ratio. If you want to update your listings to take advantage of larger image sizes, you might want to consider using these tips to give your listing a makeover to attract more users.

We look forward to beautiful, crisper images on add-on listing pages.

The post Larger image support on addons.mozilla.org appeared first on Mozilla Add-ons Blog.

Categorieën: Mozilla-nl planet

Mozilla Security Blog: Root Store Policy Updated

Mozilla planet - ma, 02/07/2018 - 18:00

After several months of discussion on the mozilla.dev.security.policy mailing list, our Root Store Policy governing Certification Authorities (CAs) that are trusted in Mozilla products has been updated. Version 2.6 has an effective date of July 1st, 2018.

More than one dozen issues were addressed in this update, including the following changes:

  • Section 2.2 “Validation Practices” now requires CAs with the email trust bit to clearly disclose their email address validation methods in their CP/CPS.
  • The use of IP Address validation methods defined by the CA has been banned in certain circumstances.
  • Methods used for IP Address validation must now be clearly specified in the CA’s CP/CPS.
  • Section 3.1 “Audits” increases the WebTrust EV minimum version to 1.6.0 and removes ETSI TS 102 042 and 101 456 from the list of acceptable audit schemes in favor of EN 319 411.
  • Section 3.1.4 “Public Audit Information” formalizes the requirement for an English language version of the audit statement supplied by the Auditor.
  • Section 5.2 “Forbidden and Required Practices” moves the existing ban on CA key pair generation for SSL certificates into our policy.
  • After January 1, 2019, CAs will be required to create separate intermediate certificates for issuing SSL and S/MIME certificates. Newly issued Intermediate certificates will need to be restricted with an EKU extension that doesn’t contain anyPolicy, or both serverAuth and emailProtection. Intermediate certificates issued prior to 2019 that do not comply with this requirement may continue to be used to issue new end-entity certificates.
  • Section 5.3.2 “Publicly Disclosed and Audited” clarifies that Mozilla expects newly issued intermediate certificates to be included on the CA’s next periodic audit report. As long as the CA has current audits, no special audit is required when issuing a new intermediate. This matches the requirements in the CA/Browser Forum’s Baseline Requirements (BR) section 8.1.
  • Section 7.1 “Inclusions” adds a requirement that roots being added to Mozilla’s program must have complied with Mozilla’s Root Store Policy from the time that they were created. This effectively means that roots in existence prior to 2014 that did not receive BR audits after 2013 are not eligible for inclusion in Mozilla’s program. Roots with documented BR violations may also be excluded from Mozilla’s root store under this policy.
  • Section 8 “CA Operational Changes” now requires notification when an intermediate CA certificate is transferred to a third party.

A comparison of all the policy changes is available here.

The post Root Store Policy Updated appeared first on Mozilla Security Blog.

Categorieën: Mozilla-nl planet

Chris H-C: Some More Very Satisfying Graphs

Mozilla planet - vr, 29/06/2018 - 20:14

I guess I just really like graphs that step downwards:

[Graph: Telemetry budget forecasting]

Earlier this week :mreid noticed that our Nightly population suddenly started sending us, on average, 150 fewer kilobytes (uncompressed) of data per ping. And they started doing this in the middle of the previous week.

Step 1 was to panic that we were missing information. However, no one had complained yet and we can usually count on things that break to break loudly, so we cautiously-optimistically put our panic away.

Step 2 was to see if the number of pings changed. It could be we were being flooded with twice as many pings at half the size, for the same volume. This was not the case:

[Graph: Telemetry budget forecasting (ping counts)]

Step 3 was to do some code archaeology to try and determine the “culprit” change that was checked into Firefox and resulted in us sending so much less data. We quickly hit upon the removal of BrowserUITelemetry and that was that.

…except… when I went to thank :Standard8 for removing BrowserUITelemetry and saving us and our users so much bandwidth, he was confused. To the best of his knowledge, BrowserUITelemetry was already not being sent. And then I remembered that, indeed, back in March :janerik had been responsible for stopping many things like BrowserUITelemetry from being sent (since they were unmaintained and unused).

So I fired up an analysis notebook and started poking to see if I could find out what parts of the payload had suddenly decreased in size. Eventually, I generated a plot that showed quite clearly that it was the keyedHistograms section that had decreased so radically.

[Graph: main_ping_size analysis notebook]

Around the same time :janerik found the culprit in the list of changes that went into the build: we are no longer sending a couple of incredibly-verbose keyed histograms because their information is now much more readily available in profiles.

The power of cleaning up old code: removing 150kb from the average “main” ping sent multiple times per day by each and every Firefox Nightly user.

Very satisfying.

:chutten

Categorieën: Mozilla-nl planet

Cameron Kaiser: Ad-blocker-blockers hit a new low. What's the solution?

Mozilla planet - vr, 29/06/2018 - 19:38
It may be the wrong day to slam the local newspapers, but this was what greeted me trying to click through to a linked newspaper article this morning on Firefox Android. The link I was sent was from the Riverside Press-Enterprise, but this appears to be throughout the entire network of the P-E's owners, the Southern California News Group (which includes the Orange County Register, San Bernardino Sun and Los Angeles Daily News):

That's obnoxious. SCNG is particularly notorious for not being very selective about ads and they tend to be colossally heavy and sometimes invasive; there's no way on this periodically green earth that I'm turning the adblocker off. I click "no thanks." The popover disappears, but what it was covering was this:

That's not me greeking the article so you can't see what article I was reading. The ad-blocker-blocker did it so that a clever user or add-on can't just set the ad-blocker-blocker's popover to display:none or something. The article is now incomprehensible text.

My first reaction is that any possibility I had of actually paying $1 for the 4 week subscription to any SCNG paper just went up in the flames of my great furious wrath (after all, this is a blog s**tpost). The funny part is that TenFourFox's basic adblock actually isn't defeated by this, probably because we're selective about what actually gets blocked and so the ad-blocker-blocker thinks ads are getting through. But our old systems are precisely those that need adblockers because of all the JavaScript (particularly) that modern ad systems lard their impressions up with. Anyway, to read the article I actually ended up looking at it on the G5. There was no way I was going to pay them for engaging in this kind of behaviour.

The second thought I had was, how do you handle this? I'm certainly sympathetic to the view that we need stronger local papers for better local governance, but print ads are a much different beast than the dreck that online ads are. (Yes, this blog has ads. I don't care if you block them or not.) Sure, I could have subscriptions to all the regional papers, or at least the ones that haven't p*ssed me off yet, but then I have to juggle all the memberships and multiple charges and that won't help me read papers not normally in my catchment area. I just want to click and read the news, just like I can anonymously pick up a paper and read it at the bar.

One way to solve this might be to have revenue sharing arrangements between ISPs and papers. It could be a mom-and-pop ISP and the local paper, if any of those or those still exist, or it could be a large ISP and a major national media group. Users on that ISP get free access (as a benefit of membership even), the paper gets a piece. Everyone else can subscribe if they want. This kind of thing already exists on Apple TV devices, after all: if I buy the Spectrum cable plan, I get those channels free on Apple TV over my Spectrum Internet access, or I pay if I don't. Why couldn't newspapers work this way?

Does net neutrality prohibit this?

Categorieën: Mozilla-nl planet

Mozilla VR Blog: This week in Mixed Reality: Issue 11

Mozilla planet - vr, 29/06/2018 - 17:46

This week, we're making great strides in adding new features and making a wide range of improvements and our new contributors are also helping us fix bugs.

Browsers

We are churning out new features and continuing to make UI changes to deliver the best possible experience on Firefox Reality by implementing the following:

  • Focus mode with the new design
  • Full screen mode and widget resizing
  • Reusable quad node which adds support for different scale modes
  • World Fade Out/In API and blitter
  • Back handler API
  • WidgetResizer utility node
  • Settings panel
  • A single window UI design with a browser window and bar below

Here is a sneak peek of Firefox Reality with focus mode, full screen mode and widget resizing with the new UX/UI:

Firefox Reality Focus mode, full screen mode and widget resizing from Imanol Fernández Gorostizaga on Vimeo.

Social

We are working toward content creation and content import updates on Hubs by Mozilla and added some new features:

  • Continued work on image and model spawning: animated GIFs, object deletion, proxy integration
  • Editor filesystem management feature complete, GLTF scene saving/loading, property editing
  • Migration to Maya GLTF exporter for architecture kit
  • Proof of concept of 3d spline generation and rendering for drawing tool
  • Media proxy (farspark) operationalized and deployed

Join our public WebVR Slack #social channel to participate in the discussion!

Content ecosystem

This week, we launched v1.4.0 of the Unity WebVR project, which adds a new example scene and the Unity code for swapping scenes for navigation.

Shout out to Kyle Reczek for contributing a patch that fixes the state of the VR camera and manager, which was not correct when exiting VR to switch scenes.

Found a critical bug? File it in our public GitHub repo or let us know on the public WebVR Slack #unity channel and as always, join us in our discussion!

Stay tuned for new features and improvements across our three areas!

Categorieën: Mozilla-nl planet

Dave Hunt: Python unit tests now running with Python 3 at Mozilla

Mozilla planet - vr, 29/06/2018 - 16:48

I’m excited to announce that you can now run the Python unit tests for packages in the Firefox source code against Python 3! This will allow us to gradually build support for Python 3, whilst ensuring that we don’t later regress. Any tests not currently passing in Python 3 are skipped with the condition skip-if = python == 3 in the manifest files, so if you’d like to see how they fail (and maybe provide a patch to fix some!) then you will need to remove that condition locally. Once you’ve done this, use the mach python-test command with the new optional argument --python. This will accept a version number of Python or a path to the binary. You will need to make sure you have the appropriate version of Python installed.

Once you’re ready to enable tests to run in TaskCluster, you can simply update the python-version value in taskcluster/ci/source-test/python.yml to include the major version numbers of Python to execute the tests against. At the current time our build machines have Python 2.7 and Python 3.5 available.

To summarise:

  1. Remove skip-if = python == 3 from manifest files. These are typically named manifest.ini or python.ini, and are usually found in the tests directory for the package.
  2. Run mach python-test --python=3 with your target path or subsuite.
  3. Fix the package(s) to support Python 3 and ensure the tests are passing
  4. Add Python 3 to the python-version for the appropriate job in taskcluster/ci/source-test/python.yml.

At the time of writing, pythonclock.org tells me that we have just over 18 months before Python 2.7 will be retired. What this actually means is still somewhat unknown, but it would be a good idea to check if your code is compatible with Python 3, and if it’s not, to do something about it. The Firefox build system at Mozilla uses Python, and it’s still some way from supporting Python 3. We have a lot of code, it’s going to be a long journey, and we could do with a bit of help!

Whilst we do plan to support Python 3 in the Firefox build system (see bug 1388447), my initial concern and focus has been the Python packages we distribute on the Python Package Index (PyPI). These are available to use outside of Mozilla’s build system, and therefore a lack of Python 3 support will prevent any users from adopting Python 3 in their projects. One such example is Treeherder, which uses mozlog for parsing log files. Treeherder is a django project, which recently dropped support for Python 2 (unless you’re using their long term support release, which will support Python 2 until 2020).

Updating these packages to support Python 3 isn’t necessarily that hard to do, especially with tools such as six, which provides utilities for handling the differences between Python 2 and Python 3. The problem has been that we had no way to run the tests against Python 3 in TaskCluster. This is no longer the case, and Python unit tests can now be run against Python 3!
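
For instance, a small compatibility shim using six might look like the sketch below. This is purely illustrative (the module and function names are made up for this example, not taken from any mozbase package), but it shows the kind of change involved:

```python
# Illustrative example only: a tiny module written to run on both
# Python 2 and Python 3 with the help of six.
from __future__ import absolute_import, print_function

import six


def describe(value):
    """Return a short description of `value` that works on Python 2 and 3."""
    if isinstance(value, six.string_types):
        # Covers str on Python 3, and both str and unicode on Python 2.
        return "string: %s" % value
    if isinstance(value, six.integer_types):
        # Covers int on Python 3, and both int and long on Python 2.
        return "integer: %d" % value
    return "other: %r" % (value,)


def iter_settings(settings):
    """Iterate over a dict without relying on Python 2's iteritems()."""
    for key, value in six.iteritems(settings):
        yield key, describe(value)


if __name__ == "__main__":
    for item in iter_settings({"timeout": 30, "log_level": u"debug"}):
        print(item)
```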

So far I have enabled Python 3 jobs for our mozbase unit tests (this includes the aforementioned mozlog), and our mozterm unit tests. There are still many tests in mozbase that are not passing in Python 3, so as mentioned above, these have been conditionally skipped in the manifest files. This will allow us to enable these tests as support is added, and this condition could even be used in the future if we have a package that doesn’t have full compatibility with Python 2.

Now that running the tests against multiple versions of Python is relatively easy, it’s a great time for me to encourage our community to help us with supporting Python 3. If you’d like to help, we have a tracking bug for all of our mozbase packages. Find a package you’d like to work on, read the comments to understand what you need and how to get set up, and let me know if you get stuck!

Categorieën: Mozilla-nl planet

Andy McKay: Pedestrians vs Drivers

Mozilla planet - vr, 29/06/2018 - 09:00

So often these days the debate is framed as "drivers vs cyclists" or "drivers vs pedestrians". Basically drivers vs everyone. Because if there's one thing we've learnt, it's that drivers think all the roads, parking spaces and infrastructure are for them and absolutely no-one else.

But you need to walk to get into a car. You need to get out of a car to walk when you get to your destination. Everyone becomes a pedestrian at some point. In fact everyone in a car gets out of a car, eventually.

So you can have a world where people go into their garages at their houses, drive to work and go into their garages there ... and no-one ever interacts with anything other than cars in the real world. Or perhaps you can treat pedestrians with respect. And people on bicycles. And just about everyone else.

Categorieën: Mozilla-nl planet

Mozilla B-Team: happy bmo push day!

Mozilla planet - do, 28/06/2018 - 19:47

happy bmo push day! Huge shout out to @BugzillaUX for all the UX enhancements in this release!

release tag

the following changes have been pushed to bugzilla.mozilla.org:

  • [1467297] variable masks earlier declaration in Feed.pm in Phabbugz extension
  • [1467271] When making a revision public, make the revision editable only by the bmo-editbugs-team project (editbugs)
  • [1456877] Add a wrapper around libcmark_gfm to Bugzilla
  • [1468818] Re-introduce is_markdown to the longdescs table (schema-only)
  • [

View On WordPress

Categorieën: Mozilla-nl planet

Hacks.Mozilla.Org: AV1: next generation video – The Constrained Directional Enhancement Filter

Mozilla planet - do, 28/06/2018 - 16:59


For those just joining us….
AV1 is a new general-purpose video codec developed by the Alliance for Open Media. The alliance began development of the new codec using Google’s VPX codecs, Cisco’s Thor codec, and Mozilla’s/Xiph.Org’s Daala codec as a starting point. AV1 leapfrogs the performance of VP9 and HEVC, making it a next-next-generation codec. The AV1 format is and will always be royalty-free with a permissive FOSS license.

This post was written originally as the second in an in-depth series of posts exploring AV1 and the underlying new technologies deployed for the first time in a production codec. An earlier post on the Xiph.org website looked at the Chroma from Luma prediction feature. Today we cover the Constrained Directional Enhancement Filter. If you’ve always wondered what goes into writing a codec, buckle your seat-belts, and prepare to be educated!

Filtering in AV1

Virtually all video codecs use enhancement filters to improve subjective output quality.

By ‘enhancement filters’ I mean techniques that do not necessarily encode image information or improve objective coding efficiency, but make the output look better in some way. Enhancement filters must be used carefully because they tend to lose some information, and for that reason they’re occasionally dismissed as a deceptive cheat used to make the output quality look better than it really is.

But that’s not fair. Enhancement filters are designed to mitigate or eliminate specific artifacts to which objective metrics are blind, but are obvious to the human eye. And even if filtering is a form of cheating, a good video codec needs all the practical, effective cheats it can deploy.

Filters are divided into multiple categories. First, filters can be normative or non-normative. A normative filter is a required part of the codec; it’s not possible to decode the video correctly without it. A non-normative filter is optional.

Second, filters are divided according to where they’re applied. There are preprocessing filters, applied to the input before coding begins, postprocessing filters applied to the output after decoding is complete, and in-loop or just loop filters that are an integrated part of the encoding process in the encoding loop. Preprocessing and postprocessing filters are usually non-normative and external to a codec. Loop filters are normative almost by definition and part of the codec itself; they’re used in the coding optimization process, and applied to the reference frames stored for inter-frame coding.

A diagram of the AV1 coding loop filters

AV1 uses three normative enhancement filters in the coding loop. The first, the deblocking filter, does what it says; it removes obvious bordering artifacts at the edges of coded blocks. Although the DCT is relatively well suited to compacting energy in natural images, it still tends to concentrate error at block edges. Remember that eliminating this blocking tendency was a major reason Daala used a lapped transform; AV1, however, is a more traditional codec with hard block edges. As a result, it needs a traditional deblocking filter to smooth the block edge artifacts away.

An example of blocking artifacts in a traditional DCT block-based codec. Errors at the edges of blocks are particularly noticeable as they form hard edges. Worse, the DCT (and other transforms in the DCT family) tend to concentrate error at block edges, compounding the problem.

 

The last of the three filters is the Loop Restoration filter. It consists of two configurable and switchable filters, a Wiener filter and a Self-Guided filter. Both are convolving filters that try to build a kernel to restore some lost quality of the original input image and are usually used for denoising and/or edge enhancement. For purposes of AV1, they’re effectively general-purpose denoising filters that remove DCT basis noise via a configurable amount of blurring.

The filter between the two, the Constrained Directional Enhancement Filter (CDEF), is the one we’re interested in here; like the loop restoration filter, it removes ringing and basis noise around sharp edges, but unlike the loop restoration filter, it’s directional. It can follow edges, as opposed to blindly filtering in all directions like most filters. This makes CDEF especially interesting; it’s the first practical and useful directional filter applied in video coding.

The Long and Winding Road

The CDEF story isn’t perfectly linear; it’s long and full of backtracks, asides, and dead ends. CDEF brings multiple research paths together, each providing an idea or an inspiration toward the final Constrained Directional Enhancement Filter in AV1. The ‘Directional’ aspect of CDEF is especially novel in implementation, but draws ideas and inspiration from several different places.

The whole point of transforming blocks of pixel data using the DCT and DCT-like transforms is to represent that block of pixels using fewer numbers. The DCT is pretty good at compacting the energy in most visual images, that is, it tends to collect spread out pixel patterns into just a few important output coefficients.

There are exceptions to the DCT’s compaction efficiency. To name the two most common examples, the DCT does not represent directional edges or patterns very well. If we plot the DCT output of a sharp diagonal edge, we find the output coefficients also form…. a sharp diagonal edge! The edge is different after transformation, but it’s still there and usually more complex than it started. Compaction defeated!

a sharp edge (left) and its DCT transform coefficients (right) illustrating the problem with sharp features

Sharp features are a traditional problem for DCT-based codecs as they do not compact well, if at all. Here we see a sharp edge (left) and its DCT transform coefficients (right). The energy of the original edge is spread through the DCT output in a directional rippling pattern.
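
A quick way to see this effect for yourself is to transform a small block containing a diagonal step edge. The sketch below assumes only NumPy and builds an 8×8 orthonormal DCT-II by hand (it is an illustration, not code from any codec); the point is that the edge's energy lands in many coefficients rather than compacting into a few:

```python
import numpy as np


def dct_matrix(n):
    """Orthonormal DCT-II basis matrix of size n x n."""
    k = np.arange(n).reshape(-1, 1)
    x = np.arange(n).reshape(1, -1)
    c = np.cos(np.pi * (2 * x + 1) * k / (2 * n))
    c[0, :] *= 1 / np.sqrt(2)
    return c * np.sqrt(2.0 / n)


def dct2(block):
    """Separable 2-D DCT-II: transform rows, then columns."""
    c = dct_matrix(block.shape[0])
    return c @ block @ c.T


# An 8x8 block with a hard diagonal step edge (0 on or below the diagonal, 255 above).
block = np.fromfunction(lambda i, j: np.where(j > i, 255.0, 0.0), (8, 8))

coeffs = dct2(block)
significant = np.abs(coeffs) > 1.0
print("non-negligible coefficients:", int(significant.sum()), "out of 64")
print(np.round(coeffs).astype(int))
```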

 

Over the past two decades, video codec research has increasingly looked at transforms, filters, and prediction methods that are inherently directional as a way of better representing directional edges and patterns, and correcting this fundamental limitation of the DCT.

Classic Directional Predictors

Directional intra prediction is probably one of the best known directional techniques used in modern video codecs. We’re all familiar with h.264’s and VP9’s directional prediction modes, where the codec predicts a directional pattern into a new block, based on the surrounding pixel pattern of already decoded blocks. The goal is to remove (or greatly reduce) the energy contained in hard, directional edges before transforming the block. By predicting and removing features that can’t be compacted, we improve the overall efficiency of the codec.

AVC/H.264 intra prediction modes, illustrating modes 0-8.

Illustration of directional prediction modes available in AVC/H.264 for 4×4 blocks. The predictor extends values taken from a one-pixel-wide strip of neighboring pixels into the predicted block in one of eight directions, plus an averaging mode for simple DC prediction.
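
As a rough illustration of the idea (simplified, and not the exact AVC/H.264 arithmetic, which adds rounding and edge-filtering rules), a few such modes can be sketched as plain copies of the neighboring strip into the block:

```python
import numpy as np


def predict_4x4(above, left, mode):
    """Toy directional intra predictor for a 4x4 block.

    `above` is the row of 4 reconstructed pixels above the block,
    `left` is the column of 4 reconstructed pixels to its left.
    Only a few simplified modes are shown; real codecs define more,
    with extra rounding and edge-filtering rules.
    """
    above = np.asarray(above, dtype=float)
    left = np.asarray(left, dtype=float)
    pred = np.zeros((4, 4))

    if mode == "dc":            # average of all neighbors
        pred[:] = (above.sum() + left.sum()) / 8.0
    elif mode == "vertical":    # copy the row above straight down
        pred[:] = above
    elif mode == "horizontal":  # copy the left column straight across
        pred[:] = left.reshape(-1, 1)
    elif mode == "diag_down_left":  # propagate along a 45-degree diagonal
        for i in range(4):
            for j in range(4):
                # each predicted pixel takes the neighbor its diagonal points at
                pred[i, j] = above[min(i + j, 3)]
    else:
        raise ValueError("unknown mode: %s" % mode)
    return pred


print(predict_4x4([10, 20, 30, 40], [10, 10, 10, 10], "vertical"))
```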

 

Motion compensation, an even older idea, is also a form of directional prediction, though we seldom think of it that way. It displaces blocks in specific directions, again to predict and remove energy prior to the DCT. This block displacement is directional and filtered, and like directional intra-prediction, uses carefully constructed resampling filters when the displacement isn’t an integer number of pixels.

Directional Filters

As noted earlier, video codecs make heavy use of filtering to remove blocking artifacts and basis noise. Although the filters work on a 2D plane, the filters themselves tend to be separable, that is, they’re usually 1D filters that are run horizontally and vertically in separate steps.

Directional filtering attempts to run filters in directions besides just horizontal and vertical. The technique is already common in image processing, where noise removal and special effects filters are often edge- and direction-aware. However, these directional filters are often based on filtering the output of directional transforms, for example, the [somewhat musty] image denoising filters I wrote based on dual-tree complex wavelets.

The directional filters in which we’re most interested for video coding need to work on pixels directly, following along a direction, rather than filtering the frequency-domain output of a directional transform. Once you try to design such a beast, you quickly hit the first Big Design Question: how do you ‘follow’ directions other than horizontal and vertical, when your filter tap positions no longer land squarely on pixels arranged in a grid?

One possibility is the classic approach used in high-quality image processing: transform the filter kernel and resample the pixel space as needed. One might even argue this is the only ‘correct’ or ‘complete’ answer. It’s used in subpel motion compensation, which cannot get good results without at least decent resampling, and in directional prediction which typically uses a fast approximation.

That said, even a fast approximation is expensive when you don’t need to do it, so avoiding the resampling step is a worthy goal if possible. The speed penalty is part of the reason we’ve not seen directional filtering in video coding practice.

Directional Transforms

Directional transforms attempt to fix the DCT’s edge compaction problems in the transform itself.

Experimental directional transforms fall into two categories. There are the transforms that use inherently directional bases, such as directional wavelets. These transforms tend to be oversampled/overcomplete, that is, they produce more output data than they take input data— usually massively more. That might seem like working backwards; you want to reduce the amount of data, not increase it! But these transforms still compact the energy, and the encoder still chooses some small subset of the output to encode, so it’s really no different from usual lossy DCT coding. That said, overcomplete transforms tend to be expensive in terms of memory and computation, and for this reason, they’ve not taken hold in mainstream video coding.

The second category of directional transform takes a regular, non-directional transform such as the DCT, and modifies it by altering the input or output. The alteration can be in the form of resampling, a matrix multiplication (which can be viewed as a specialized form of resampling), or juggling of the order of the input data.

It’s this last idea that’s the most powerful, because it’s fast. There’s no actual math to do when simply rearranging numbers.


Two examples implementing directional transforms in different directions using pixel and coefficient reshuffling, rather than a resampling filter. This example is from An Overview of Directional Transforms in Image Coding, by Xu, Zeng, and Wu.

 

A few practical complications make implementation tricky. Rearranging a square to make a diagonal edge into a [mostly] vertical or horizontal line results in a non-square matrix of numbers as an input. Conceptually, that’s not a problem; the 2D DCT is separable, and since we can run the row and column transforms independently, we can simply use different sized 1D DCTs for each length row and column, as in the figure above. In practice this means we’d need a different DCT factorization for every possible column length, and shortly after realizing that, the hardware team throws you out a window.
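
To make the reshuffling idea concrete, here is a small illustrative sketch (not the scheme from any particular paper): the block is sheared so that lines parallel to the main diagonal become columns, and each resulting column, now a different length, gets its own 1-D DCT:

```python
import numpy as np


def dct_matrix(n):
    """Orthonormal DCT-II basis matrix of size n x n."""
    k = np.arange(n).reshape(-1, 1)
    x = np.arange(n).reshape(1, -1)
    c = np.cos(np.pi * (2 * x + 1) * k / (2 * n))
    c[0, :] *= 1 / np.sqrt(2)
    return c * np.sqrt(2.0 / n)


def shear_columns(block):
    """Rearrange a square block so the main diagonal becomes vertical.

    Pixels are grouped by (j - i), so each diagonal of the block becomes
    one column.  The result is a list of columns of varying length rather
    than a square matrix.
    """
    n = block.shape[0]
    columns = [[] for _ in range(2 * n - 1)]
    for i in range(n):
        for j in range(n):
            columns[j - i + (n - 1)].append(block[i, j])
    return [np.array(col, dtype=float) for col in columns]


def directional_transform(block):
    """Apply a 1-D DCT of the appropriate length to each sheared column."""
    return [dct_matrix(len(col)) @ col for col in shear_columns(block)]


# A block whose energy lies along the main diagonal.
block = np.where(np.eye(8, dtype=bool), 255.0, 0.0)
for col_coeffs in directional_transform(block):
    print(np.round(col_coeffs, 1))
```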

There are also other ways of handling the non-squareness of a rearrangement, or coming up with resampling schemes that keep the input square or only operate on the output. Most of the directional transform papers mentioned below are concerned with the various schemes for doing so.

But here’s where the story of directional transforms mostly ends for now. Once you work around the various complications of directional transforms and deploy something practical, they don’t work well in a modern codec for an unexpected reason: They compete with variable blocksize for gains. That is, in a codec with a fixed blocksize, adding directional transforms alone gets impressive efficiency gains. Adding variable blocksize alone gets even better gains. Combining variable blocksize and directional transforms gets no benefit over variable blocksize alone. Variable blocksize has already effectively eliminated the same redundancies exploited by directional transforms, at least the ones we currently have, and done a better job of it.

Nathan Egge and I both experimented extensively with directional transforms during Daala research. I approached the problem from both the input and output side, using sparse matrix multiplications to transform the outputs of diagonal edges into a vertical/horizontal arrangement. Nathan ran tests on mainstream directional approaches with rearranged inputs. We came to the same conclusion: there was no objective or subjective gain to be had for the additional complexity.

Directional transforms may have been a failure in Daala (and other codecs), but the research happened to address a question posed earlier: How to filter quickly along edges without a costly resampling step? The answer: don’t resample. Approximate the angle by moving along the nearest whole pixel. Approximate the transformed kernel by literally or conceptually rearranging pixels. This approach introduces some aliasing, but it works well enough, and it’s fast enough.

Directional Predictors, part 2: The Daala Chronicles

The Daala side of the CDEF story began while trying to do something entirely different: normal, boring, directional intra-prediction. Or at least what passed for normal in the Daala codec.

I wrote about Daala’s frequency-domain intra prediction scheme at the time we were just beginning to work on it. The math behind the scheme works; there was never any concern about that. However, a naive implementation requires an enormous matrix multiplication that was far too expensive for a production codec. We hoped that sparsifying— eliminating matrix elements that didn’t contribute much to the prediction— could reduce the computational cost to a few percent of the full multiply.

Sparsification didn’t work as hoped. At least as we implemented it, sparsification simply lost too much information for the technique to be practical.

Of course, Daala still needed some form of intra-prediction, and Jean-Marc Valin had a new idea: A stand-alone prediction codec, one that worked in the spatial domain, layered onto the frequency-domain Daala codec. As a kind of symbiont that worked in tandem with but had no dependencies on the Daala codec, it was not constrained by Daala’s lapping and frequency domain requirements. This became Intra Paint.

A photo of Sydney Harbor with some interesting painting-like features created by the algorithm

An example of the Intra Paint prediction algorithm as applied to a photograph of Sydney Harbor. The visual output is clearly directional and follows the edges and features in the original image well, producing a pleasing (if somewhat odd) result with crisp edges.

 

The way intra paint worked was also novel; it coded 1-dimensional vectors along only the edges of blocks, then swept the pattern along the selected direction. It was much like squirting down a line of colored paint dots, then sweeping the paint in different directions across the open areas.

Intra paint was promising and produced some stunningly beautiful results on its own, but again wasn’t efficient enough to work as a standard intra predictor. It simply didn’t gain back enough bits over the bits it had to use to code its own information.

A gray image showing areas of difference between the Sydney Harbor photo and the Intra-Paint result

Difference between the original Sydney Harbor photo and the Intra Paint result. Despite the visually pleasing output of Intra Paint, we see that it is not an objectively super-precise predictor. The difference between the original photo and the intra-paint result is fairly high, even along many edges that it appeared to reproduce well.

 

The intra paint ‘failure’ again planted the seed of a different idea; although the painting may not be objectively precise enough for a predictor, much of its output looked subjectively quite good. Perhaps the paint technique could be used as a post-processing filter to improve subjective visual quality? Intra paint follows strong edges very well, and so could potentially be used to eliminate basis noise that tends to be strongest along the strongest edges. This is the idea behind the original Daala paint-deringing filter, which eventually leads to CDEF itself.

There’s one more interesting mention on the topic of directional prediction, although it too is currently a dead-end for video coding. David Schleef implemented an interesting edge/direction aware resampling filter called Edge-Directed Interpolation (EDI). Other codecs (such as the VPx series and for a while AV1) have experimented with downsampled reference frames, transmitting the reference in a downsampled state to save coding bits, and then upsampling the reference for use at full resolution. We’d hoped that much-improved upsampling/interpolation provided by EDI would improve the technique to the point it was useful. We also hoped to use EDI as an improved subpel interpolation filter for motion compensation. Sadly, those ideas remain an unfulfilled dream.

Bridging the Gap, Merging the Threads

At this point, I’ve described all the major background needed to approach CDEF, but chronologically the story involves some further wandering in the desert. Intra paint gave rise to the original Daala paint-dering filter, which reimplemented the intra-paint algorithm to perform deringing as a post-filter. Paint-dering proved to be far too slow to use in production.

As a result, we packed up the lessons we learned from intra paint and finally abandoned the line of experimentation. Daala imported Thor’s CLPF for a time, and then Jean-Marc built a second, much faster Daala deringing filter based on the intra-paint edge direction search (which was fast and worked well) and a Conditional Replacement Filter. The CRF is inspired somewhat by a median filter and produces results similar to a bilateral filter, but is inherently highly vectorizable and therefore much faster.

A series of graphs showing the original signal and the effects of various filters

Demonstration of a 7-tap linear filter vs the constrained replacement filter as applied to a noisy 1-dimensional signal, where the noise is intended to simulate the effects of quantization on the original signal.

 

The final Daala deringing filter used two 1-dimensional CRF filters, a 7-tap filter run in the direction of the edge, and a weaker 5-tap filter run across it. Both filters operate on whole pixels only, performing no resampling. At that point, the Daala deringing filter began to look a lot like what we now know as CDEF.
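
One way to read "conditional replacement" is sketched below; this is an interpretation with made-up tap weights, not the Daala or CDEF code. Any tap that differs too much from the center pixel is replaced by the center pixel before an ordinary weighted average is applied, so strong edges survive the smoothing:

```python
import numpy as np


def crf_1d(signal, weights, threshold):
    """Toy 1-D conditional replacement filter.

    `weights` is a symmetric odd-length kernel (e.g. 7 taps).  Any tap whose
    value differs from the center pixel by more than `threshold` is replaced
    by the center pixel before the weighted average, so large edges are
    preserved instead of being smeared by the filter.
    """
    signal = np.asarray(signal, dtype=float)
    weights = np.asarray(weights, dtype=float)
    radius = len(weights) // 2
    out = signal.copy()
    for i in range(radius, len(signal) - radius):
        center = signal[i]
        taps = signal[i - radius:i + radius + 1].copy()
        # Conditional replacement: reject taps that are too different.
        taps[np.abs(taps - center) > threshold] = center
        out[i] = np.dot(weights, taps) / weights.sum()
    return out


noisy_step = [0, 2, -1, 1, 0, 100, 99, 101, 98, 100]
print(crf_1d(noisy_step, weights=[1, 2, 3, 4, 3, 2, 1], threshold=10))
```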

We’d recently submitted Daala to AOM as an input codec, and this intermediate filter became the AV1 daala_dering experiment. Cisco also submitted their own deringing filter, the Constrained Low-Pass Filter (CLPF) from the Thor codec. For some time the two deringing filters coexisted in the AV1 experimental codebase where they could be individually enabled, and even run together. This led both to noticing useful synergies in their operation, as well as additional similarities in various stages of the filters.

And so, we finally arrive at CDEF: The merging of Cisco’s CLPF filter and the second version of the Daala deringing filter into a single, high-performance, direction-aware deringing filter.

Modern CDEF

The CDEF filter is simple and bears a deep resemblance to our preceding filters. It is built out of three pieces (directional search, the constrained replacement/lowpass filter, and integer-pixel tap placement) that we’ve used before. Given the lengthy background preamble to this point, you might almost look at the finished CDEF and think, “Is that it? Where’s the rest?” CDEF is an example of gains available by getting the details of a filter exactly right as opposed to just making it more and more complex. Simple and effective is a good place to be.

Direction search

CDEF operates in a specific direction, and so it is necessary to determine that direction. The search algorithm used is the same as from intra paint and paint-dering, and there are eight possible directions.

filtering direction with discrete lines of operation for each direction

The eight possible filtering directions of the current CDEF filter. The numbered lines in each directional block correspond to the ‘k’ parameter within the direction search.

 

We determine the filter direction by making “directional” variants of the input block, one for each direction, where all of the pixels along a line in the chosen direction are forced to have the same value. Then we pick the direction where the result most closely matches the original block. That is, for each direction d, we first find the average value of the pixels in each line k, and then sum, along each line, the squared error between a given pixel value and the average value of that pixel line.

Example illustrating determination of CDEF direction

An example process of selecting the direction d that best matches the input block. First we determine the average pixel value for each line of operation k for each direction. This is illustrated above by setting each pixel of a given line k to that average value. Then, we sum the error for a given direction, pixel by pixel, by subtracting the input value from the average value. The direction with the lowest error/variance is selected as the best direction.

 

This gives us the total squared error, and the lowest total squared error is the direction we choose. Though the pictured example above does so, there’s no reason to convert the squared error to variance; each direction considers the same number of pixels, so both will choose the same answer. Save the extra division!

This is the intuitive, long-way-around to compute the directional error. We can simplify the mechanical process down to the following equation:

E_d^2 = \sum_{p} x_p^2 \;-\; \sum_{k} \frac{1}{N_{d,k}} \Bigl( \sum_{p \in P_{d,k}} x_p \Bigr)^2

In this equation, E is the error, p is a pixel, x_p is the value of a pixel, k is one of the numbered lines in the directional diagram above, P_{d,k} is the set of pixels in line k for direction d, and N_{d,k} is the cardinality of (the number of pixels in) the line k for direction d. This equation can be simplified in practice; for example the first term is the same for each given d. In the end, the AV1 implementation of CDEF currently requires 5.875 additions and 1.9375 multiplications per pixel and can be deeply vectorized, resulting in a total cost less than an 8×8 DCT.
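
In code, the direction search described above might look like the following sketch. The mapping from pixel position to line number is only filled in for a few easy directions here (the real eight-direction assignments are the ones in the numbered diagrams above), so treat it as an illustration of the search, not of AV1's implementation:

```python
import numpy as np

# Placeholder mapping from pixel (i, j) to line index k, one lambda per
# direction.  Only four easy directions are filled in for illustration;
# the real CDEF directions use the assignments shown in the diagrams above.
LINE_INDEX = {
    0: lambda i, j: i,          # horizontal lines
    1: lambda i, j: j,          # vertical lines
    2: lambda i, j: i + j,      # one diagonal
    3: lambda i, j: i - j + 7,  # the other diagonal
}


def directional_error(block, direction):
    """Sum of squared differences from each line's mean for one direction."""
    lines = {}
    for i in range(block.shape[0]):
        for j in range(block.shape[1]):
            lines.setdefault(LINE_INDEX[direction](i, j), []).append(block[i, j])
    return sum(((np.array(px) - np.mean(px)) ** 2).sum() for px in lines.values())


def best_direction(block):
    """Pick the direction whose per-line means best explain the block."""
    errors = {d: directional_error(block, d) for d in LINE_INDEX}
    return min(errors, key=errors.get)


block = np.where(np.eye(8, dtype=bool), 255.0, 0.0)  # bright line along the main diagonal
print(best_direction(block))
```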

Filter taps

The CDEF filter works pixel-by-pixel across a full block. The direction d selects the specific directional filter to be used, each consisting of a set of filter taps (that is, input pixel locations) and tap weights.

CDEF conceptually builds a directional filter out of two 1-dimensional filters. A primary filter is run along the chosen direction, like in the Daala deringing filter. The secondary filter is run twice in a cross-pattern, at 45° angles to the primary filter, like in Thor’s CLPF.

Illustration of primary and secondary filter directions and taps overlaid on top of CDEF filter direction

Illustration of primary and secondary 1-D filter directionality in relation to selected direction d. The primary filter runs along the selected filter direction, the secondary filters run across the selected direction at a 45° angle. Every pixel in the block is filtered identically.

 

The filters run at angles that often place the ideal tap locations between pixels. Rather than resampling, we choose the nearest exact pixel location, taking care to build a symmetric filter kernel.

Each tap in a filter also has a fixed weight. The filtering process takes the input value at each tap, applies the constraint function, multiplies the result by the tap’s fixed weight, and then adds this output value to the pixel being filtered.

illustration of primary and secondary taps

Primary and secondary tap locations and fixed weights (w) by filter direction. For primary taps and even Strengths a = 2 and b = 4, whereas for odd Strengths a = 3 and b = 3. The filtered pixel is shown in gray.

 

In practice, the primary and secondary filters are not run separately, but combined into a single filter kernel that’s run in one step.

Constraint function

CDEF uses a constrained low-pass filter in which the value of each filter tap is first processed through a constraint function parameterized by the difference d between the tap value and the pixel being filtered, the filter strength S, and the filter damping parameter D. The constraint function is designed to deemphasize or outright reject consideration of pixels that are too different from the pixel being filtered. Tap value differences within a certain range from the center pixel value (as set by the Strength parameter S) are wholly considered. Value differences that fall between the Strength and Damping parameters are deemphasized. Finally, tap value differences beyond the Damping parameter are ignored.

An illustration of the constraint function

An illustration of the constraint function. In both figures, the difference (d) between the center pixel and the tap pixel being considered is along the x axis. The output value of the constraint function is along y. The figure on the left illustrates the effect of varying the Strength (S) parameter. The figure on the right demonstrates the effect of varying Damping (D).

 

The output value of the constraint function is then multiplied by the fixed weight associated with each tap position relative to the center pixel. Finally the resulting values (one for each tap) are added to the center filtered pixel, giving us the final, filtered pixel value. It all rolls up into:

y(i,j) = x(i,j) + \mathrm{round}\Bigl( \sum_{m,n} w^{(p)}_{d,m,n} \, f\bigl(x(m,n) - x(i,j),\, S^{(p)},\, D\bigr) \;+\; \sum_{m,n} w^{(s)}_{d,m,n} \, f\bigl(x(m,n) - x(i,j),\, S^{(s)},\, D\bigr) \Bigr)

…where the introduced (p) and (s) mark values for the primary and secondary sets of taps.

There are a few additional implementation details regarding rounding and clipping not needed for understanding; if you’re intending to implement CDEF they can of course be found in the full CDEF paper.
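
To tie the pieces together, here is a non-normative sketch of the per-pixel filtering step. The tap offsets, weights, and strengths are passed in as parameters rather than reproduced from the diagrams, and the constraint function follows the shift-based shape described above; the normative definitions, including the exact rounding and clipping, are in the CDEF paper and the AV1 specification:

```python
import math


def constrain(diff, strength, damping):
    """Deemphasize taps that differ too much from the center pixel.

    Small differences (relative to `strength`) pass through, larger ones
    are progressively reduced, and very large ones are ignored entirely.
    Sketch only; see the CDEF paper for the normative definition.
    """
    if strength == 0:
        return 0
    shift = max(0, damping - int(math.log2(strength)))
    magnitude = min(abs(diff), max(0, strength - (abs(diff) >> shift)))
    return magnitude if diff >= 0 else -magnitude


def cdef_pixel(img, i, j, taps, strengths, damping):
    """Filter one pixel.

    `taps` is a list of (di, dj, weight, kind) tuples, where kind selects
    the primary or secondary strength; the actual offsets and weights for
    each direction are the ones shown in the tap diagrams above.
    """
    total = 0
    for di, dj, weight, kind in taps:
        diff = img[i + di][j + dj] - img[i][j]
        total += weight * constrain(diff, strengths[kind], damping)
    # The real filter normalizes and rounds; a plain shift stands in here.
    return img[i][j] + (total >> 4)


img = [[16 * (r > 2) for _ in range(8)] for r in range(8)]    # horizontal edge
taps = [(-1, 0, 2, "primary"), (1, 0, 2, "primary"),
        (0, -1, 1, "secondary"), (0, 1, 1, "secondary")]      # made-up taps
print(cdef_pixel(img, 3, 3, taps, {"primary": 4, "secondary": 2}, damping=3))
```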

Results

CDEF is intended to remove or reduce basis noise and ringing around hard edges in an image without blurring or damaging the edge. As used in AV1 right now, the effect is subtle but consistent. It may be possible to lean more heavily on CDEF in the future.

An example illustrating application of CDEF to a picture with ringing artifacts

An example of ringing/basis noise reduction in an encode of the image Fruits. The first inset closeup shows the area without processing by CDEF, the second inset shows the same area after CDEF.

 

The quantitative value of any enhancement filter must be determined via subjective testing. Better objective metrics numbers as well wouldn’t exactly be shocking, but the kind of visual improvements that motivate CDEF are mostly outside the evaluation ability of primitive objective testing tools such as PSNR or SSIM.

As such, we conducted multiple rounds of subjective testing, first during the development of CDEF (when Daala dering and Thor CLPF were still technically competitors) and then more extensive testing of the merged CDEF filter. Because CDEF is a new filter that isn’t present at all in previous generations of codecs, testing primarily consisted of AV1 with CDEF enabled, vs AV1 without CDEF.

A series of graphs showing test results of AV1 with and without CDEF

Subjective A-B comparison results (with ties) for CDEF vs. no CDEF for the high-latency configuration.

 

Subjective results show a statistically significant (p<.05) improvement for 3 out of 6 clips. This normally corresponds to a 5-10% improvement in coding efficiency, a fairly large gain for a single tool added to an otherwise mature codec.

Objective testing, as expected, shows more modest improvements of approximately 1%; however, objective testing is primarily useful only insofar as it agrees with subjective results. Subjective testing is the gold standard, and the subjective results are clear.

Testing also shows that CDEF performs better when encoding with fewer codec ‘tools’; like directional transforms, CDEF is competing for coding gains with other, more-complex techniques within AV1. As CDEF is simple, small, and fast, it may provide future means to reduce the complexity of AV1 encoders. In terms of decoder complexity, CDEF represents between 3% and 10% of the AV1 decoder depending on the configuration.

Additional Resources
  1. Xiph.Org’s standard ‘derf’ test sets, hosted at media.xiph.org
  2. Automated testing harness and metrics used by Daala and AV1 development: Are We Compressed Yet?
  3. The AV1 Constrained Directional Enhancement Filter (CDEF)
    Steinar Midtskogen, Jean-Marc Valin, October 2017
  4. CDEF Presentation Slide Deck for ICASSP 2018, Steinar Midtskogen, Jean-Marc Valin
  5. A Deringing Filter for Daala and Beyond, Jean-Marc Valin
    This is an earlier deringing filter developed during research for the Daala codec that contributed to the CDEF used in AV1.
  6. Daala: Painting Images For Fun and Profit, Jean-Marc Valin
    A yet earlier intra-paint-based enhancement filter that led to the Daala deringing filter, which in turn led to CDEF
  7. Intra Paint Deringing Filter, Jean-Marc Valin 2015
    Notes on the enhancement/deringing filter built out of the Daala Intra Paint prediction experiment
  8. Guided Image Filtering, Kaiming He, Jian Sun, Xiaoou Tang, 2013
  9. Direction-Adaptive Discrete Wavelet Transform for Image Compression, Chuo-Ling Chang, Bernd Girod, IEEE Transactions on Image Processing, vol. 16, no. 5, May 2007
  10. Direction-adaptive transforms for image communication, Chuo-Ling Chang, Stanford PhD dissertation 2009
    This dissertation presents a good summary of the state of the art of directional transforms in 2009; sadly it appears there are no online-accessible copies.
  11. Direction-Adaptive Partitioned Block Transform for Color Image Coding, Chuo-Ling Chang, Mina Makar, Sam S. Tsai, Bernd Girod, IEEE Transactions on Image Processing, vol. 19, no. 7, July 2010
  12. Pattern-based Assembled DCT scheme with DC prediction and adaptive mode coding, Zhibo Chen, Xiaozhong Xu
    Note this paper is behind the IEEE paywall
  13. Direction-Adaptive Transforms for Coding Prediction Residuals, Robert A. Cohen, Sven Klomp, Anthony Vetro, Huifang Sun, Proceedings of 2010 IEEE 17th International Conference on Image Processing, September 26-29, 2010, Hong Kong
  14. An Orientation-Selective Orthogonal Lapped Transform, Dietmar Kunz 2008
    Note this paper is behind the IEEE paywall.
  15. Rate-Distortion Analysis of Directional Wavelets, Arian Maleki, Boshra Rajaei, Hamid Reza Pourreza, IEEE Transactions on Image Processing, vol. 21, no. 2, February 2012
  16. Theoretical Analysis of Trend Vanishing Moments for Directional Orthogonal Transforms, Shogo Muramatsu, Dandan Han, Tomoya Kobayashi, Hisakazu Kikuchi
    Note that this paper is behind the IEEE paywall. However, a ‘poster’ version of the paper is freely available.
  17. An Overview of Directional Transforms in Image Coding, Jizheng Xu, Bing Zeng, Feng Wu
  18. Directional Filtering Transform for Image/Intra-Frame Compression, Xiulian Peng, Jizheng Xu, Feng Wu, IEEE Transactions on Image Processing, Vol. 19, No. 11, November 2010
    Note that this paper is behind the IEEE paywall.
  19. Approximation and Compression with Sparse Orthonormal Transforms, O. G. Sezer, O. G. Guleryuz, Yucel Altunbasak, 2008
  20. Robust Learning of 2-D Separable Transforms for Next-Generation Video Coding, O. G. Sezer, R. Cohen, A. Vetro, March 2011
  21. Joint sparsity-based optimization of a set of orthonormal 2-D separable block transforms, Joel Sole, Peng Yin, Yunfei Zheng, Cristina Gomila, 2009
    Note that this paper is behind the IEEE paywall.
  22. Directional Lapped Transforms for Image Coding, Jizheng Xu, Feng Wu, Jie Liang, Wenjun Zhang, IEEE Transactions on Image Processing, April 2008
  23. Directional Discrete Cosine Transforms—A New Framework for Image Coding, Bing Zeng, Jingjing Fu, IEEE Transactions on Circuits and Systems for Video Technology, April 2008
  24. The Dual-Tree Complex Wavelet Transform, Ivan W. Selesnick, Richard G. Baraniuk, and Nick G. Kingsbury, IEEE Signal Processing Magazine, November 2005
Categorieën: Mozilla-nl planet

Onno Ekker: Garbage

Mozilla planet - do, 28/06/2018 - 13:05

No, I didn’t put Mozilla out with the garbage.

IMG_3603

I only put my Mozilla stickers out on the garbage, so I can more easily recognize my trash bin when recollecting it after dark…

IMG_3607

Categorieën: Mozilla-nl planet

Gervase Markham: We Win

Mozilla planet - do, 28/06/2018 - 11:10

Mozilla has been making a big effort in the past few years to make privacy and data (ab)use a first-class concern in the public consciousness. I think we can safely say that when a staid company like Barclays Bank is using dancing vegetables on national broadcast TV in a privacy-focussed advert, we have won that battle…

Categorieën: Mozilla-nl planet

Firefox Nightly: Protecting Your Privacy in Firefox Pre-Release

Mozilla planet - do, 28/06/2018 - 01:44

As a matter of principle, we’ve built Firefox to work without collecting information about the people who use it and their browsing habits. Operating in this way is the right thing to do, but it makes it hard to infer what Firefox users do and want so that we can make improvements to the browser and its features. We need this information to compete effectively, but we have to do it in a way that respects our users’ privacy. That is why experimentation in our pre-release channels like Nightly, Beta and Developer Edition is so critical.

Release only gives us partial insight; pre-release helps with the bigger picture.

One outcome of the unified telemetry project that we finished last September was to streamline data collection as it takes place in the different channels of Firefox. As part of that project, we created four categories of data: Category 1 “technical data”, Category 2 “interaction data”, Category 3 “web activity data”, and Category 4 “highly sensitive data” which includes information that can identify a person. These categories apply to all Firefox data collection including telemetry (data that Firefox sends Mozilla by default) and Shield Studies (a Mozilla program to test rough features and ideas on small numbers of Firefox users).

The release channel of Firefox that hundreds of millions of people use sends us Category 1 and 2 technical and interaction data by default. The latter is especially useful so that we can understand how people interact with menus, prompts, features, and core browser functions. Because this telemetry data is limited, it is not enough to make fully informed decisions about product changes.

This is why we rely on our pre-release channels to collect, when necessary, additional Category 3 web activity data or run studies on particular features with unique privacy properties.  Gathering this information is critical so that we can understand the real-world impact of new ideas and technologies on a limited audience before deploying to all Firefox users.

Even when we collect or share data in pre-release, privacy comes first.

Any new Firefox data collection must go through a rigorous process. The lean data practices that we follow mean that we minimize collection, secure data, limit data sharing, clearly explain what we’re doing, and provide user controls. For example, Shield Studies are controlled weekly Firefox studies that answer specific questions using the minimum amount of data needed on the smallest relevant sample size. Every study is reviewed and signed off by a data scientist, a QA engineer, and a Firefox Peer. The majority of studies stay within Category 1 and Category 2 data; for anything sensitive, even in pre-release, additional sign-off by our Legal and Trust teams is required.

We don’t compromise our principles when we work with partners. Privacy and security threats on the web are evolving, and so are we, in order to protect our users. This includes partnering with others to provide expertise that we don’t have. We require partners who work with us to uphold the same privacy and accountability standards we’ve set for ourselves.

What’s next?

The Firefox Privacy Notice has always said that pre-release has different privacy characteristics, but we are going to update this to clarify what that means. We’ll do the same on the landing pages for pre-release, because anyone who is uncomfortable with additional data collection or sharing should instead download the release version of Firefox.

We are deeply grateful to our community of pre-release users who put up with unstable builds, report issues, and contribute much needed data. Ultimately, it is the insights from our most passionate users and advocates in pre-release that allow us to offer a better product to all users, with less data collection in the long run.

Categorieën: Mozilla-nl planet

Ryan Harter: If you can't do it in a day, you can't do it

Mozilla planet - wo, 27/06/2018 - 23:21

I was talking with Mark Reid about some of the problems with Coding in a GUI. He nailed part of the problem with a soundbite too good not to share:

"If you can't do it in a day, you can't do it."

This is a persistent problem with tools that make you code in a GUI. These tools are great for working on bite-sized problems, but the workflow becomes painful when the problem needs to be broken into pieces and attacked separately.

Part of the problem is that I can't test the code. That means I need to understand how each change will affect the entire code base. It's impossible to compartmentalize.

GUI's also make it difficult to split a problem across people. If I can't track changes easily it's impossible to tell whether my changes conflict with a peer's changes.

So look out, bad tools are insidious! If you find yourself abandoning an analysis because it's hard to refactor, consider choosing a different toolchain next time. Especially if it's because there's no easy way to move your code out of a GUI!

Categorieën: Mozilla-nl planet

Armen Zambrano: Workshop experience at Smashing Conf

Mozilla planet - wo, 27/06/2018 - 21:43

This week I attended Toronto’s first Smashing Conf.

One of the many posters around the event

On Monday I attended one of the pre-conference workshops, Dan Mall’s “Design workflow for a multi-device world”. Dan guided us through the process of defining a problem and brainstorming objectives and key results (aka OKRs), and then had us work together to build some of what we decided to tackle. We divided the whole classroom into five or six teams of 5 to 7 members each. Each team had various skillsets (e.g. designers and coders).

Development time

In my team, the “bike shed” team, we decided to rewrite TTC’s trip planning feature. We did not manage to finish the product; however, we did build one of the three objectives and partially complete another. The team had two people who did compositions, two people who could code, and one person helping us collaborate and coordinate.

This exercise included a few things that were new to me. For instance, I worked within a team to create a product rather than building a feature by myself.

The button found in codepen (at the top) versus what I ended up with (at the bottom)

It was also a new experience for me to work closely with a designer. We chose to build a multi-option toggle feature to mix transit methods. The process started with him writing down on paper what he had in mind. I tried building a prototype from scratch to see if I understood what he wanted. I did not get it quite right the first time, so we decided to search codepen for something similar. Once we found something we liked, I started iterating on the code while he prepared the icons for me to use. By the end we had something that worked, but we did not have enough time to complete it. This is the codepen I forked and this is the unfinished feature where I left it.

One of our objectives and key results

This exercise was a very humbling experience, as I felt the pressure to produce something for a designer (Scott from Motorola services) who was right there beside me while I was “the coding expert”. I put coding expert in quotes as I barely have a year of frontend experience. I started from a forked pen that had roughly what Scott wanted; however, I knew I was going to face a difficult time before long. The codepen had been written using non-standard languages (pugjs and SCSS) instead of standard HTML & CSS. Another difficulty I knew I was going to face was that I did not have experience turning an image into a toggle button; the forked pen only had text inside the buttons. I deferred solving that by dealing only with text labels at first, and addressed other issues before integrating the icons, which would require some extra research.

It was also the first time I worked with another coder (Sheneille Patil) in a fast-paced environment. We needed to quickly figure out our own development workflow. Creating a GitHub repository was not an option, as she was not comfortable with it. We decided to turn to codepen.io and build features that would not conflict with each other. The final plan was to collect our different pieces of code and merge them into a single pen, but we did not have enough time to get to this.

I hope you found something interesting in this post. It’s not my typical programming-related post. I’m very grateful to SmashingConf for lining up such great speakers and practical workshops, and to Mozilla for supporting my learning.

Categorieën: Mozilla-nl planet

Mozilla Localization (L10N): L10N Report: June Edition

Mozilla planet - wo, 27/06/2018 - 21:39

Please note some of the information provided in this report may be subject to change as we are sometimes sharing information about projects that are still in early stages and are not final yet.

Welcome!

New localizers

  • Lots of new contributors joined us through the Common Voice project: Jimi (Danish), Fran (Icelandic), Kaz (Japanese), Joshua (Kyrgyz), Niko (Komi-Zyrian), Sardana (Sakha), Bjartur (Faroese), Donald (Hong Kong), Gregor (Slovenian), Jack (Erzya), Kelly (Hakha Chin), and a few in the Korean team. Welcome to you all! You are the reason the Common Voice project is expanding at a record pace in both the number of locales and the diversity of contributors.
  • We have new contributors helping to revive Afrikaans l10n. Welcome Jean, Stefan and Vincent!

Are you a locale leader and want us to include new members in our upcoming reports? Contact us!

New community/locales added
  • Erzya
  • Faroese
  • Hakha Chin
  • Komi-Zyrian
  • Kyrgyz
  • Sakha
  • Quechua Chanka
  • Aymara
  • K’iche’
New content and projects

What’s new or coming up in Firefox desktop

Firefox 61 has been released on June 26th. This also means that Firefox 62 is in Beta, while 63 is on Nightly, and that’s where you should focus your work for localization and testing.

This development cycle will be a little longer than usual, to account for Summer holidays in the Northern hemisphere: the deadline to provide your localization for Beta is August 21st, with Firefox 62 planned for release on September 5th. That’s a Wednesday rather than a Tuesday, the usual release day for Mozilla products, because September 3rd is a public holiday in the US (Labor Day).

Now, diving into updates and new content. There are several new in-product pages:

  • We talked about Shield Studies in the l10n report for May. about:studies is now available for localization.
  • Issues can occur when multiple instances of Firefox are open, and a pending update is applied to one of them. A new page, about:restartrequired, was created to inform the user that a restart is needed to avoid such problems.
  • Activity Stream is expanding beyond the New Tab page. One of the directions is to experiment with a new in-product first-run experience, about:welcome, replacing the existing one based on web content served from mozilla.org.

One important note about Activity Stream (about:newtab and about:welcome): unlike other strings in Firefox, once translated they are not used directly in the following Nightly, they need a new build of Activity Stream (for now). That normally happens once a week. For this reason, if you’re targeting Beta, make sure to prioritize localization of activity-stream/newtab.properties.

What’s new or coming up in Test Pilot

Two new Test Pilot experiments were published on the website about 3 weeks ago: Color and Side View. While the experiments themselves were not localized, new content was added to the website. Unfortunately, that happened with poor timing, resulting in English content being displayed on the website. Making things worse, the current cycle is longer than usual (cycles are normally 2 weeks long) because most people were traveling for the bi-annual Mozilla All Hands.

We realize that this is less than ideal, and we are trying to set up a system to avoid repeating these mistakes in the future.

What’s new or coming up in mobile

As mentioned in the section about Firefox desktop, Firefox 61 has been released on June 26th, which means that Firefox for Android 61 updates will start rolling out to users progressively. Please refer to the section about Firefox desktop above since the Firefox for Android release cycle is the same, and the comments concerning localization apply as well.

We’ve also recently decided to stop updating the What’s New content for Firefox for Android (due to low visibility and low interest in localizing this section, and the fact that it does not affect the number of downloads – amongst other things). This means the locales that were supported by the Play Store will no longer need to localize the updated version for Beta each time we release a new version. Instead, we’ve opted to display a generic message.

On the Firefox iOS front, English from Canada (en-CA) was added to the ever-growing list of shipping locales with the new v12. Congratulations on that! Next update (and so, new strings) is slowly creeping up, so stay tuned for more.

Focus Android locales are also continuously growing, with Afrikaans (af), Pai-pai (pai), Punjabi (pa-IN), Quechua Chanka (quy) and Aymara (ay) teams having started to localize. Note that there are public Nightlies available on the Play Store that you can test your work on. Instructions on how to do that are here.

And finally, after months of silence localization-wise, Focus iOS will get new strings soon! However at the moment, we are not opening the project back up to new locales as there is no clearly defined schedule, and we cannot guarantee by when new locales can be added once completed.

What’s new or coming up in web projects

Legal documentation:

  • An FAQ page on privacy and data collection practice at Mozilla will be made available for localization for all languages. This will come soon.
  • There are twelve locales that are identified as priority locales based on Firefox desktop user size: de, es-ES, fr, it, id, ja, nl, pl, pt-BR, ru, tr, zh-CN. This means the Privacy Notice and other key legal documents will be supported and updated in these languages on an ongoing basis. The rest of the previously supported locales will remain available, but will be redirected to English for the latest updated versions.
  • The Privacy landing page is going through a makeover. The right-side panel will be revised by archiving documentation for EOL products.

Common Voice:

  • We launched multi-language voice collection in German, French, and Welsh
  • Since the launch in May, we have started collecting in 7 more languages, for a total of 11 languages
  • The site is now live in 41 locales, with 16 more on the way
  • After the San Francisco All Hands, a whole new contribution portal will be launched soon
  • In the near future, we will roll out a new Homepage, and a brand new User Profile section (with stats and leaderboard), so we will need lots of help localizing this!
What’s new or coming up in Foundation projects

Copyright campaign! The battle to fix copyright started a few days ago, and while we were disappointed by the JURI Committee vote, we’re now mobilizing citizens to ask a larger group of MEPs to reject Article 13 during the EU Parliament plenary on July 5th. We can still win!

In the second half of 2018, the Advocacy team will have a bigger focus on both Europe and company misbehavior around data, so we can expect more campaigns being localized. It’s also an opportunity to mobilize internal resources to plan and build localization support on the new foundation website.

If you would like to learn more, you can watch Jon Lloyd, Advocacy Campaigns Manager, talk briefly during the Foundation All-Hands in Toronto about the recent campaign wins and the strategy for the next 6 months:

On fundraising, the current plan for the coming month is to communicate an update to existing donors, do various testing around it, and then send a broader fundraising ask towards the end of July/beginning of August. The donate website will soon get a makeover that will fix some layout issues with currencies and generally provide more space for localization, which is good news!

Several foundation websites will also soon get a unified navigation header to match the one on foundation.mozilla.org.

What’s new or coming up in Pontoon

Making unreviewed suggestions discoverable

Reviewing pending suggestions regularly is important and making them discoverable is the first step to get there. That’s why we got rid of the misleading Suggested count, which didn’t include suggestions to Translated strings, and started exposing Unreviewed suggestions in a new sortable column in dashboards. It’s represented by a lightbulb, which is painted blue if unreviewed suggestions are present.

Tags help you prioritize your work

To help you prioritize your work, we rolled out Tags. The idea is pretty simple: we define a set of tags with set priority, which are then assigned to translation resources (files). Effectively, that assigns priority to each string. Currently, tags are only enabled for Firefox, which you’ll notice by the Tags tab available in the Firefox localization dashboard and filters.

Pontoon Tools 3.2.0

Pontoon Tools is a must-have add-on for all Pontoon users, which allows you to stay up-to-date with localization activity even when you don’t use Pontoon. Michal Stanke just released a new version, which is now also available for Chrome and Chromium-based browsers. Additionally, it also brings support for the aforementioned Unreviewed state and enables system notifications by default.

Support for localizing WebExtensions

Extensions for Firefox are built using the WebExtensions API, a cross-browser system for developing extensions. The WebExtensions API has a rather handy module available for internationalizing extensions – i18n. It stores translations in messages.json files, which are now supported in Pontoon. For more technical details on internationalizing WebExtensions, see this MDN page.

Redesigning Pontoon homepage

As mentioned in last month’s report, Pramit Singhi is working on the Pontoon homepage redesign as part of a Google Summer of Code project. Based on the impressive amount of feedback collected during his research among Pontoon users, he came up with a proposal for the new homepage and would like to hear your thoughts. Please consider the proposal a wireframe: he’s more interested in hearing what you think about the overall page structure and content, and less in how you like the fonts and colors.

Events
  • The French l10n team will be gathering in Paris at the end of July to discuss many topics, including improvements to the current localization and participation process to make it even easier for newcomers, improvements to team communication, and some training for the most recent team members. This event is part of our 2018 l10n community events, so don’t forget to start planning yours and make your request!
  • Indonesia (Jakarta): L10N-drivers are currently organizing a sprint with the local community, in order to add Javanese and Sundanese to Firefox Rocket shipping locales – and thus hopefully help expand our user base in Indonesia. Do you know anyone who would like to help localize in these two languages? Get in touch!
  • Want to showcase an event coming up that your community is participating in? Reach out to any l10n-driver and we’ll include that (see links to emails at the bottom of this report)
Friends of the Lion

Image by Elio Qoshi

Francis, who speaks many languages, joined the l10n community not long ago thanks to the Common Voice project. He has been actively involved in bringing in new contributors and introducing new localization communities to Mozilla. He also makes sure new contributors have a simple onboarding process, so they can contribute right away and see the fruits of their work quickly. Additionally, Francis files issues and fixes bugs for the project on GitHub. Thank you!

Know someone in your l10n community who’s been doing a great job and should appear here? Contact one of the l10n-drivers and we’ll make sure they get a shout-out (see list at the bottom)!

Useful Links

Questions? Want to get involved?

 

Did you enjoy reading this report? Let us know how we can improve by reaching out to any one of the l10n-drivers listed above.

Categorieën: Mozilla-nl planet
