Planet Mozilla - http://planet.mozilla.org/

Ehsan Akhgari: Quantum Flow Engineering Newsletter #12

Fri, 09/06/2017 - 08:39

It has been a few weeks since I last gave an update about our progress on reducing the amount of slow synchronous IPC messages that we send across our processes.  That isn’t because there has been little to talk about; quite the contrary, so much great work has happened here that for a while I decided it was better to highlight other ongoing work instead.  But now, as the development cycle of Firefox 55 draws to a close, it’s time to have another look at where we stand on this issue.

I’ve prepared a new Sync IPC Analysis for today, including data from both JS- and C++-initiated sync IPCs.  The first bit of unfortunate news is that the historical data in the spreadsheet has been lost, because the server hosting the data had a few hiccups and Google Spreadsheets seems to really not like that.  The second bit of unfortunate news is that our hope that disabling non-multiprocess-compatible add-ons by default in Nightly would reduce some of the noise in this data doesn’t seem to have panned out.  The data still shows a lot of synchronous IPC triggered from JS as before, and the lion’s share of it is messages that, judging from their names, are clearly coming from add-ons.  My guess is that Nightly users have turned these add-ons back on manually.  So we will have to live with the noise in the data for now (this is an issue we unfortunately have to struggle with when dealing with a lot of telemetry data; here is another recent example that wasted some time and energy).

This time I won’t give a percentage-based breakdown, because now that many of these bugs have been fixed, the impact of really commonly occurring IPC messages, such as the one we have for document.cookie, makes the earlier method of exploring the data pointless.  (You can explore the pie chart to get a quick sense of why; I’ll just say that that message alone is now 55% of the chart, and it plus the second one together form 75% of the data.)  This is a great problem to have, of course: it means that we’re now starting to get to the “long tail” part of this issue.

The current top offenders, besides the aforementioned bug (on which great progress is still being made, by the way!), are add-on/browser CPOW messages, two graphics initialization messages that we send at content process startup, NotifyIMEFocus, which is in the process of being fixed, and window.open(), which I’ve spent weeks on but for which I have yet to fix all of our tests before I can land my changes (I’ve also temporarily given up on it, looking for something that isn’t this bug to work on for a little while!).  Besides those, if you look at the dependency list of the tracker bug, there are many other bugs that are very close to being fixed.  Firefox 55 is going to be much better from this perspective, and I hope future releases will improve on that!

The other effort that is moving ahead quite fast is optimizing for Speedometer V2.  See the chart of our progress on AreWeFastYet.com:

Last week, our score on this chart was about 84.  Now we are at about 91.  Not bad for a week’s worth of work!  If you’re curious to follow along, see our tracker bug.  Also, Speedometer is a very JS-heavy benchmark, so a lot of the bugs that are filed and fixed for it happen inside SpiderMonkey; watching the SpiderMonkey-specific tracker bug is probably a good idea as well.

It’s time for a short performance story!  This one is about technical debt.  I’ve looked at many performance bugs over the past few months of the Quantum Flow project, and in many cases the solution has turned out to be simply deleting the slow code; that’s it!  It turns out that as a large code base ages, a lot of code stops serving any purpose, but nobody discovers this because it’s impractical to audit every single line of code with scrutiny.  Some of this unnecessary code is bound to have severe performance issues, and when it does, your software ends up carrying that cruft for years!  Here are a few examples: a function call taking 2.7 seconds on a cold startup doing something that became unnecessary once we dropped support for Windows XP and Vista, some migration code that was doing synchronous IO during every startup to migrate users of Firefox 34 and older to a newer version, and an outdated telemetry probe that turned out to no longer be in use, scheduling many unnecessary timers and causing unneeded jank.

I’ve been thinking about what to do about these issues.  The first step is to fix them, which is what we are busy doing now, but finding these issues typically requires some work, and it would be nice if we had a systematic way of dealing with some of them.  For example, wouldn’t it be nice if we had a MINIMUM_WINDOWS macro that controlled all Windows-specific code in the tree?  In the case of my earlier example, the original code would have checked that macro against the minimum version it required (7 or higher), and when we bumped MINIMUM_WINDOWS up to 7 along with our release requirements, such code would turn itself into preprocessor waste (hurray!).  Of course, the hard part is finding all the code that needs to abide by this macro, and the harder part is enforcing it consistently going forward!  Some of the other issues aren’t possible to deal with this way, so we need to get better at detecting them.  Definitely some food for thought!
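To make that concrete, here is a hedged sketch of how such a macro might gate version-specific code.  MINIMUM_WINDOWS is hypothetical (it does not exist in the tree today), and the values follow the common Windows version-code convention where 0x0601 means Windows 7:

// Hypothetical sketch: MINIMUM_WINDOWS is not a real macro in mozilla-central.
// Version codes follow the usual convention: 0x0501 = XP, 0x0600 = Vista,
// 0x0601 = Windows 7.
#define MINIMUM_WINDOWS 0x0601

#if MINIMUM_WINDOWS < 0x0601
// Compiled only while we still support something older than Windows 7;
// bumping MINIMUM_WINDOWS to 0x0601 turns this block into preprocessor
// waste automatically.
void WorkAroundPreWin7Quirk() { /* ... */ }
#endif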

I’ll stop here, and move on to acknowledge the great work of all of you who helped make Firefox faster this past week!  As per usual, apologies to those whom I’m forgetting to mention here:


Justin Dolske: Photon Engineering Newsletter #5

Fri, 09/06/2017 - 01:11

Time for a solid update #5! So far the Photon project appears to be well on track — our work is scoped to what we think is achievable for Firefox 57, and we’re generally fixing bugs at a good rate.

Search Box?

If you’ve been paying attention to any of the Photon mockups and design specs floating around, you may have noticed the conspicuous absence of the search box in the toolbar (i.e. there’s just a location field and buttons). What’s up with that?


For some time now, we’ve been working on improving searching directly from the location field. You can already search from there by simply entering your search term, see search suggestions as you type, and click the icons of other search engines to do a one-off search with them instead of your default. [The one-off search feature has been in Nightly for a while, and will start shipping in Firefox 55.] The location bar now can do everything the search box can, and more. So at this point the search box is a vestigial leftover from how browsers worked 10+ years ago, and we’d like to remove it to reclaim precious UI space. Today, no other major browser ships with both a location field and search box.

That said, we’re being careful about understanding the impact of removing the search box, since it’s been such a long-standing feature. We’re currently running a series of user studies to make sure we understand how users search, and that the unified search/location bar meets their needs. And since Mozilla works closely with our search engine partners, we also want to make sure any changes to how users search are not a surprise.

Finally, I should note that this is about removing the search box as the default for _new_ Firefox users. Photon won’t be removing the search box entirely; you’ll still be able to add it back through Customize Mode if you so desire. (Please put down your pitchforks and torches. Thank you.) We’re still discussing what to do for existing users… There’s a trade-off between providing a fresh, clean, and modern experience as part of the upgrade to Photon (especially for users who haven’t been using the search box), and removing a UI element that some people have come to expect and use.

Recent Changes

Menus/structure:
  • Hamburger panel is now feature complete!
    • An exit/quit menu item was added (except on macOS, as the native menubar handles this).
    • A restyled zoom control was added.
    • One small feature-not-yet-complete: the library subview is reusing the same content as the library panel, and so won’t be complete until the library panel itself is complete.
    • A number of smaller bugs and regressions fixed.
  • Initial version of the new Library panel has landed
    • It still needs a lot of work, items are missing, and the styling isn’t done. Landing this early allows us to unblock work in other areas of Photon (notably animations) that need to interact with this button.
    • We haven’t placed it into the toolbar by default yet. Until we do so, if you want to play with it you’ll need to manually add it from Customization mode.
  • We’re getting ready to enable the Photon structure pref by default on Nightly, and are just fixing the last tests so that everything is green on our CI infrastructure. Soon!
Animation:
  • Landed a patch that allows us to move more of our animations to the compositor and off of the main thread. Previously, this optimization was only allowed when the item’s width was narrower than the window; now it’s based on available area. (Our animations are using very wide sprite sheet “filmstrips” which required this — more about this in a future update).
  • Work continues on animations for downloads toolbar button, stop/reload button, and page loading indicator. The first two have gone up for review, and unfortunately we found some issues with the plan for the downloads button which requires further work.
Preferences:
  • Finalized spec for the preferences reorg version 2. The team will now begin implementation of the changes.
  • The team is working on improving search highlighting in about:preferences to include sub-dialogs and fixing some highlight/tooltip glitches.
  • The spec for Search behavior is also being finalized.
Visual redesign:

Onboarding:
  • Enabled the basic onboarding overlay on about:newtab and about:home. Now you can see a little fox icon on the top-left corner on about:newtab and about:home on Nightly! (Here’s the full spec for what will eventually be implemented. We’re working on getting the first version ready for Firefox 56.)
  • Finished creating the message architecture so that the Auto-migration code can talk with Activity Stream
Performance:

That’s all for this week!



Michael Kelly: Q is Scary

Thu, 08/06/2017 - 21:41

q is the hands-down winner of my "Libraries I'm Terrified Of" award. It's a Python library for outputting debugging information while running a program.

On the surface, everything seems fine. It logs everything to /tmp/q (configurable), which you can watch with tail -f. The basic form of q is passing it a variable:

import q

foo = 7
q(foo)

Take a good long look at that code sample, and then answer me this: What is the type of q?

If you said "callable module", you are right. Also, that is not a thing that exists in Python.

Also, check out the output in /tmp/q:

0.0s <module>: foo=7

It knows the variable name. It also knows that it's being called at the module level; if we were in a function, <module> would be replaced with the name of the function.

You can also divide (/) or bitwise OR (|) values with q to log them as well. And you can decorate a function with it to trace the arguments and return value. It also has a method, q.d(), that starts an interactive session.
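For instance, based on the behaviors just described, all of the following log to /tmp/q (the add function here is just an illustration):

import q

foo = 7
q / foo    # logging via the division operator
q | foo    # same idea with bitwise OR

@q         # as a decorator, q traces arguments and return values
def add(a, b):
    return a + b

add(2, 3)
# q.d() at this point would start the interactive session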

And it does all this in under 400 lines, the majority of which is either a docstring or code to format the output.

Spooky Spooky.

How in the Hell

So first, let’s get this callable module stuff out of the way. Here are the last two lines in q.py:

# Install the Q() object in sys.modules so that "import q" gives a callable q.
sys.modules['q'] = Q()

Turns out sys.modules is a dictionary with all the loaded modules, and you can just stuff it with whatever nonsense you like.
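As a quick illustration of the trick (the module name shouty is invented for this example), any object can be planted in sys.modules and then imported; combined with __call__, the import itself becomes callable:

import sys

class Shouty:
    def __call__(self, message):
        print(message.upper())

# Stuff an instance where the import machinery will find it.
sys.modules['shouty'] = Shouty()

import shouty
shouty("a callable module, just like q")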

The Q class itself is super-fun. Check out the declaration:

# When we insert Q() into sys.modules, all the globals become None, so we
# have to keep everything we use inside the Q class.
class Q(object):
    __doc__ = __doc__  # from the module's __doc__ above
    import ast
    import code
    import inspect
    import os
    import pydoc
    import sys
    import random
    import re
    import time
    import functools

"When we insert Q() into sys.modules, all the globals become None"

What? Why?! I mean I can see how that's not an issue for modules, which are usually the only things inside sys.modules, but still. I tried chasing this down, but the entire sys module is written in C, and that ain't my business.

Most of the other bits inside Q are straightforward by comparison: a few helpers for outputting stuff cleanly, overrides for __truediv__ and __or__ for those weird operator versions of logging, etc. If you’ve never heard of callable types [1] before, that’s the reason why an instance of this class can be both called as a function and treated as a value.

So what's __call__ do?

Ghost Magic

def __call__(self, *args):
    """If invoked as a decorator on a function, adds tracing output to
    the function; otherwise immediately prints out the arguments."""
    info = self.inspect.getframeinfo(self.sys._getframe(1), context=9)
    # ... snip ...

Welcome to the inspect module. Turns out, Python has a built-in module that lets you get all sorts of fun info about objects, classes, etc. It also lets you get info about stack frames, which store the state of each subroutine in the chain of subroutine calls that led to running the code that's currently executing.

Here, q is using a CPython-specific function sys._getframe to get a frame object for the code that called q, and then using inspect to get info about that code.

# info.index is the index of the line containing the end of the call
# expression, so this gets a few lines up to the end of the expression.
lines = ['']
if info.code_context:
    lines = info.code_context[:info.index + 1]

# If we see "@q" on a single line, behave like a trace decorator.
for line in lines:
    if line.strip() in ('@q', '@q()') and args:
        return self.trace(args[0])

...and then it just does a text search of the source code to figure out if it was called as a function or as a decorator. Because it can't just guess by the type of the argument being passed (you might want to log a function object), and it can't just return a callable that can be used as a decorator either.

trace is pretty normal, whatever that means. It just logs the intercepted arguments and return value / raised exception.

# Otherwise, search for the beginning of the call expression; once it
# parses, use the expressions in the call to label the debugging
# output.
for i in range(1, len(lines) + 1):
    labels = self.get_call_exprs(''.join(lines[-i:]).replace('\n', ''))
    if labels:
        break
self.show(info.function, args, labels)
return args and args[0]

The last bit pulls out labels from the source code; this is how q knows the name of the variable that you pass in. I'm not going to go line-by-line through get_call_exprs, but it uses the ast module to parse the function call into an Abstract Syntax Tree, and walks through that to find the variable names.

It goes without saying that you should never do any of this. Ever. Nothing is sacred when it comes to debugging, though, and q is incredibly useful when you're having trouble getting your program to print anything out sanely.

Also, if you're ever bored on a nice summer evening, check out the list of modules in the Python standard library. It's got everything:

  1. Check out this page and search for "Callable Types" and/or __call__.


Mike Hoye: A Security Question

Thu, 08/06/2017 - 17:06

To my shame, I don’t have a certificate for my blog yet, but as I was flipping through some referer logs I realized that I don’t understand something about HTTPS.

I was looking into the fact that I sometimes – about 1% of the time – see non-S HTTP referers from Twitter’s t.co URL shortener, which I assume means that somebody’s getting man-in-the-middled somehow, and there’s not much I can do about it. But then I realized the implications of my not having a cert.

My understanding of how this works, per RFC 7231, is that:

A user agent MUST NOT send a Referer header field in an unsecured HTTP request if the referring page was received with a secure protocol.

Per the W3C as well:

Requests from TLS-protected clients to non-potentially trustworthy URLs, on the other hand, will contain no referrer information. A Referer HTTP header will not be sent.

So, if that’s true and I have no certificate on my site, then in theory I should never see any HTTPS entries in my referer logs? Right?

Except: I do. All the time, from every browser vendor, feed reader or type of device, and if my logs are full of this then I bet yours are too.

What am I not understanding here? It’s not possible, there is just no way for me to believe that it’s two thousand and seventeen and I’m the only person who’s ever noticed this. I have to be missing something.

What is it?

FAST UPDATE: My colleagues refer me to this piece of the puzzle I hadn’t been aware of, and Francois Marier’s longer post on the subject. Thanks, everyone! That explains it.

SECOND UPDATE: Well, it turns out it doesn’t completely explain it. Digging into the data and filtering out anything referred via Twitter, Google or Facebook, I’m left with two broad buckets. The first is almost entirely made up of feed readers; it turns out that most, and maybe almost all, feed aggregators do the wrong thing here. I’m going to have to look into that, because it’s possible I can solve this problem at the root.

The second is one really persistent person using Firefox 15. Who are you, guy? Why don’t you upgrade? Can I help? Email me if I can help.


Air Mozilla: Mozilla Science Lab June 2017 Bi-Monthly Community Call

Thu, 08/06/2017 - 17:00

Mozilla Science Lab June 2017 Bi-Monthly Community Call


Hacks.Mozilla.Org: Cross-browser extensions, available now in Firefox

Thu, 08/06/2017 - 16:54

We’re modernizing the way developers build extensions for Firefox! We call the new APIs WebExtensions, because they’re written using the technologies of the Web: HTML, CSS, and JavaScript. And just like the technologies of the Web, you can write one codebase that works in multiple places.

WebExtensions APIs are inspired by the existing Google Chrome extension APIs, and are supported by Opera, Firefox, and Microsoft Edge. We’re working to standardize these existing APIs as well as proposing new ones! Our goal is to make extensions as easy to share between browsers as the pages they browse, and powerful enough to let people customize their browsers to match their needs.

Want to know more?
  • Build WebExtensions
  • Port an existing Chrome extension

Air Mozilla: Reps Weekly Meeting Jun. 08, 2017

Thu, 08/06/2017 - 16:00

This is a weekly call with some of the Reps to discuss all matters about/affecting Reps and invite Reps to share their work with everyone.


The Mozilla Blog: Increasing Momentum Around Tech Policy

Thu, 08/06/2017 - 14:30
Mozilla’s new tech policy fellowship brings together leading experts to advance Internet health around the world

 

Strong government policies and leadership are key to making the Internet a global public resource that is open and accessible to all.

To advance this work from the front lines, some of the world’s experts on these issues joined government service. These dedicated public servants have made major progress in recent years on issues like net neutrality, open data and the digital economy.

But as governments transition and government leaders move on, we risk losing momentum or even backsliding on progress made. To sustain that momentum and invest in those leaders, today the Mozilla Foundation officially launches a new Tech Policy Fellowship. The program is designed to give individuals with deep expertise in government and Internet policy the support and structure they need to continue their Internet health work.

The fellows, who hail from around the globe, will spend the next year working independently on a range of tech policy issues. They will collaborate closely with Mozilla’s policy and advocacy teams, as well as the broader Mozilla network and other key organizations in tech policy. Each fellow will bring their expertise to important topics currently at issue in the United States and around the world.

For example:

Fellow Gigi Sohn brings nearly 30 years of experience, most recently at the Federal Communications Commission (FCC), dedicated to defending and preserving fundamental competition and innovation policies for broadband Internet access. At a time when we are moving closer to a closed Internet in the United States, her expertise is more valuable than ever.

Fellow Alan Davidson will draw on his extensive professional history working to advance a free and open digital economy to support his work on education and advocacy strategies to combat Internet policy risks.

With the wave of data collection and use fast growing in government and the private sector, fellow Linet Kwamboka will analyze East African government practices for the collection, handling and publishing of data. She will develop contextual best practices for data governance and management.

Meet the initial cohort of the Tech Policy Fellows here and below, and keep an eye on the Tech Policy Fellowship website for ways to collaborate in this work.

 

Our Mozilla Tech Policy Fellows

 

Alan Davidson | @abdavdson

Alan will work to produce a census of major Internet policy risks and will engage in advocacy and educational strategy to minimize those risks. Alan is also a Fellow at New America in Washington, D.C. Until January 2017, he served as the first Director of Digital Economy at the U.S. Department of Commerce and a Senior Advisor to the Secretary of Commerce. Prior to joining the department, Davidson was the director of the Open Technology Institute at New America. Earlier, Davidson opened Google’s Washington policy office in 2005 and led the company’s public policy and government relations efforts in North and South America. He was previously Associate Director of the Center for Democracy and Technology. Alan has a bachelor’s degree in mathematics and computer science and a master’s degree in technology and policy from the Massachusetts Institute of Technology (MIT). He is a graduate of Yale Law School.

 

Credit: New America

Amina Fazlullah

Amina Fazlullah will work to promote policies that support broadband connectivity in rural and vulnerable communities in the United States. Amina joins the fellowship from her most recent role as Policy Advisor to the National Digital Inclusion Alliance, where she led efforts to develop policies that support broadband deployment, digital inclusion, and digital equity efforts across the United States. Amina has worked on a broad range of Internet policy issues including Universal Service, consumer protection, antitrust, net neutrality, spectrum policy and children’s online privacy. She has testified before Congress, the Federal Communications Commission, the Department of Commerce and Federal Trade Commission. Amina was formerly the Benton Foundation’s Director of Policy in Washington, D.C., where she worked to further government policies to address communication needs of vulnerable communities. Before that, Amina worked with the U.S. Public Interest Research Group, for the Honorable Chief Judge James M. Rosenbaum of the U.S. District Court of Minnesota and at the Federal Communications Commission. She is a graduate of the University of Minnesota Law School and Pennsylvania State University.

 

Camille Fischer | @camfisch

Camille will be working to promote individual rights to privacy, security and free speech on the Internet. In the last year of the Obama Administration, Camille led the National Economic Council’s approach to consumers’ economic and civil rights on the Internet and in emerging technologies. She represented consumers’ voices in discussions with other federal agencies regarding law enforcement access to data, including encryption and international law enforcement agreements. She has run commercial privacy and security campaigns, like the BuySecure Initiative to increase consumers’ financial security, and also worked to promote an economic voice within national security policy and to advocate for due process protections within surveillance and digital access reform. Before entering the government as a Presidential Management Fellow, Camille graduated from Georgetown University Law Center where she wrote state legislation for the privacy-protective commercial use of facial recognition technology. Camille is also an amateur photographer in D.C.

 

Caroline Holland

Caroline will be working to promote a healthy internet by exploring current competition issues related to the Internet ecosystem. Caroline served most recently as Chief Counsel for Competition Policy and Intergovernmental Relations at the U.S. Department of Justice Antitrust Division. In that role, she was involved in several high-profile matters while overseeing the Division’s competition policy and advocacy efforts, interagency policy initiatives, and congressional relations. Caroline previously served as Chief Counsel and Staff Director of the Senate Antitrust Subcommittee where she advised the committee chairmen on a wide variety of competition issues related to telecommunications, technology and intellectual property. Before taking on this role, she was a counsel on the Senate Judiciary Committee and an attorney in private practice focusing on public policy and regulatory work. Caroline holds a J.D. from Georgetown University Law Center and a B.A. in Public Policy from Trinity College in Hartford, Connecticut. Between college and law school, Caroline served in the Antitrust Division as an honors paralegal and as Clerk of the Senate Antitrust Subcommittee.

 

Linet Kwamboka | @linetdata

Linet will work on understanding the policies that guide data collection and dissemination in East Africa (Kenya, Uganda, Tanzania and Rwanda). Through this, she aims to publish policy recommendations on existing policies, proposed policy amendments and a report outlining her findings. Linet is the Founder and CEO of DataScience LTD, which builds information systems to generate and use data to discover intelligent insights about people, products and services for resource allocation and decision making. She was previously the Kenya Open Data Initiative Project Coordinator for the Government of Kenya at the Kenya ICT Authority. Linet is also a director of the World Data Lab–Africa, working to make data personal, tangible and actionable to help citizens make better informed choices about their lives. She also consults with the UNDP in the Strengthening Electoral Processes in Kenya Project, offering support to the Independent Electoral Boundaries Commission in information systems and technology. She has worked at the World Bank as a GIS and Technology Consultant and was a Software Engineering Fellow at Carnegie Mellon University, Pittsburgh. Her background is in computer science, data analysis and Geographical Information Systems. Linet has been recognized as an unsung hero by the American Embassy in Kenya for her efforts to encourage more women into technology and computing, was a finalist for the Bloomberg award for global open data champions, and is a member of the Open Data Institute Global Open Data Leaders’ Network.

 

Terah Lyons | @terahlyons

Terah will work on advancing policy and governance around the future of machine intelligence, with a specific focus on coordination in international governance of AI. Her work targets questions related to the responsible development and deployment of AI and machine learning, including how society can minimize the risks of AI while maximizing its benefits, and what AI development and advanced automation means for humankind across cultural and political boundaries. Terah is a former Policy Advisor to the U.S. Chief Technology Officer in the White House Office of Science and Technology Policy (OSTP). She most recently led a policy portfolio in the Obama Administration focused on machine intelligence, including AI, robotics, and intelligent transportation systems. In her work at OSTP, Terah helped establish and direct the White House Future of Artificial Intelligence Initiative, oversaw robotics policy and regulatory matters, led the Administration’s work from the White House on civil and commercial unmanned aircraft systems/drone integration into the U.S. airspace system, and advised on Federal automated vehicles policy. She also advised on issues related to diversity and inclusion in the technology industry and entrepreneurial ecosystem. Prior to her work at the White House, Terah was a Fellow with the Harvard School of Engineering and Applied Sciences based in Cape Town, South Africa. She is a graduate of Harvard University, where she currently sits on the Board of Directors of the Harvard Alumni Association.

 

Marilia Monteiro

Marilia will be analyzing consumer protection and competition policy to contribute to the development of sustainable public policies and innovation. From 2013-15, she was Policy Manager at the Brazilian Ministry of Justice’s Consumer Office, coordinating public policies for consumer protection in digital markets and law enforcement actions targeting ISPs and Internet applications. She has researched the intersection between innovation technologies and society in different areas: current democratic innovations in Latin America regarding e-participation at the Wissenschaftszentrum Berlin für Sozialforschung, and the development of public policies on health privacy and data protection at the “Privacy Brazil” project with the Internet Lab in partnership with Ford Foundation in Brazil. She is a board member at Coding Rights, a Brazilian-born, women-led, think-and-do tank, and is active in Internet Governance fora. Marilia holds a Master in Public Policy from the Hertie School of Governance in Berlin focusing on policy analysis and a bachelor’s in Law from Fundação Getulio Vargas School of Law in Rio de Janeiro; she specialises in digital rights.

 

Jason Schultz | @lawgeek

Jason will analyze the impacts and effects of new technologies such as artificial intelligence/machine learning and the Internet of Things through the lenses of consumer protection, civil liberties, innovation, and competition. His research aims to help policymakers navigate these important legal concerns while still allowing for open innovation and for competition to thrive. Jason is a Professor of Clinical Law, Director of NYU’s Technology Law & Policy Clinic, and Co-Director of the Engelberg Center on Innovation Law & Policy. His clinical projects, research, and writing primarily focus on the ongoing struggles to balance traditional areas of law such as intellectual property, consumer protection, and privacy with the public interest in free expression, access to knowledge, civil rights, and innovation in light of new technologies and the challenges they pose. During the 2016-2017 academic year, Jason was on leave at the White House Office of Science and Technology Policy, where he served as Senior Advisor on Innovation and Intellectual Property to former U.S. Chief Technology Officer Megan Smith. With Aaron Perzanowski, he is the author of The End of Ownership: Personal Property in the Digital Economy (MIT Press 2016), which argues for retaining consumer property rights in a marketplace that increasingly threatens them. Prior to joining NYU, Jason was an Assistant Clinical Professor of Law and Director of the Samuelson Law, Technology & Public Policy Clinic at the UC Berkeley School of Law (Boalt Hall). Before joining Boalt Hall, he was a Senior Staff Attorney at the Electronic Frontier Foundation and before that practiced intellectual property law at the firm of Fish & Richardson, PC. He also served as a clerk to the Honorable D. Lowell Jensen of the Northern District of California. He is a member of the American Law Institute.

 

Gigi Sohn | @gigibsohn

Gigi will be working to promote an open Internet in the United States. She is one of the nation’s leading public advocates for open, affordable, and democratic communications networks. Gigi is also a Distinguished Fellow at the Georgetown Law Institute for Technology Law & Policy and an Open Society Foundations Leadership in Government Fellow. For nearly 30 years, Gigi has worked across the United States to defend and preserve the fundamental competition and innovation policies that have made broadband Internet access more ubiquitous, competitive, affordable, open, and protective of user privacy. Most recently, Gigi was Counselor to the former Chairman of the U.S. Federal Communications Commission, Tom Wheeler, who she advised on a wide range of Internet, telecommunications and media issues. Gigi was named by the Daily Dot in 2015 as one of the “Heroes Who Saved the Internet” in recognition of her role in the FCC’s adoption of the strongest-ever net neutrality rules. Gigi co-founded and served as CEO of Public Knowledge, the leading communications policy advocacy organization. She was previously a Project Specialist in the Ford Foundation’s Media, Arts and Culture unit and Executive Director of the Media Access Project, the first public interest law firm in the communications space. Gigi holds a B.S. in Broadcasting and Film, Summa Cum Laude, from the Boston University College of Communication and a J.D. from the University of Pennsylvania Law School.

 

Cori Zarek | @corizarek

Cori is the Senior Fellow leading the Tech Policy Fellows team and serving as a liaison with the Mozilla Foundation. Her work as a fellow will focus on the intersection of tech policy and transparency. Before joining Mozilla, Cori was Deputy U.S. Chief Technology Officer at the White House where she led the team’s work to build a more digital, open, and collaborative government. Cori also coordinated U.S. involvement with the global Open Government Partnership, a 75-country initiative driving greater transparency and accountability around the world. Previously, she was an attorney at the U.S. National Archives, working on open government and freedom of information policy.  Before joining the U.S. government, Cori was the Freedom of Information Director at The Reporters Committee for Freedom of the Press where she assisted journalists with legal issues, and she also practiced for a Washington law firm. Cori received her B.A. from the University of Iowa where she was editor of the student-run newspaper, The Daily Iowan. Cori also received her J.D. from the University of Iowa where she wrote for the Iowa Law Review and The Des Moines Register. She was inducted into the Freedom of Information Hall of Fame in 2016. Cori is also the President of the D.C. Open Government Coalition and teaches a media law class at American University.

The post Increasing Momentum Around Tech Policy appeared first on The Mozilla Blog.


About:Community: Firefox 54 new contributors

Thu, 08/06/2017 - 05:06

With the release of Firefox 54, we are pleased to welcome the 36 developers who contributed their first code change to Firefox in this release, 33 of whom were brand new volunteers! Please join us in thanking each of these diligent and enthusiastic individuals, and take a look at their contributions:


The Rust Programming Language Blog: Announcing Rust 1.18

Thu, 08/06/2017 - 02:00

The Rust team is happy to announce the latest version of Rust, 1.18.0. Rust is a systems programming language focused on safety, speed, and concurrency.

If you have a previous version of Rust installed, getting Rust 1.18 is as easy as:

$ rustup update stable

If you don’t have it already, you can get rustup from the appropriate page on our website, and check out the detailed release notes for 1.18.0 on GitHub.

What’s in 1.18.0 stable

As usual, Rust 1.18.0 is a collection of improvements, cleanups, and new features.

One of the largest changes is a long time coming: core team members Carol Nichols and Steve Klabnik have been writing a new edition of “The Rust Programming Language”, the official book about Rust. It’s being written openly on GitHub, and has over a hundred contributors in total. This release includes the first draft of the second edition in our online documentation. 19 out of 20 chapters have a draft; the draft of chapter 20 will land in Rust 1.19. When the book is done, a print version will be made available through No Starch Press, if you’d like a paper copy. We’re still working with the editors at No Starch to improve the text, but we wanted to start getting a wider audience now.

The new edition is a complete re-write from the ground up, using the last two years of knowledge we’ve gained from teaching people Rust. You’ll find brand-new explanations for a lot of Rust’s core concepts, new projects to build, and all kinds of other good stuff. Please check it out and let us know what you think!

As for the language itself, an old feature has learned some new tricks: the pub keyword has been expanded a bit. Experienced Rustaceans will know that items are private by default in Rust, and you can use the pub keyword to make them public. In Rust 1.18.0, pub has gained a new form:

pub(crate) bar;

The bit inside of () is a ‘restriction’, which refines the notion of how this is made public. Using the crate keyword like the example above means that bar would be public to the entire crate, but not outside of it. This makes it easier to declare APIs that are “public to your crate”, but not exposed to your users. This was possible with the existing module system, but often very awkward.

You can also specify a path, like this:

pub(in a::b::c) foo;

This means “usable within the hierarchy of a::b::c, but not elsewhere.” This feature was defined in RFC 1422 and is documented in the reference.
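Putting the two forms together, here is a minimal sketch. It uses the restriction-path syntax as it worked when 1.18 shipped; later editions spell the path with a crate:: prefix:

mod a {
    pub mod b {
        // Visible anywhere in this crate, but not to downstream users.
        pub(crate) fn crate_wide() {}

        // Visible only within the a::b hierarchy.
        pub(in a::b) fn local_only() {}
    }

    pub fn caller() {
        b::crate_wide();      // fine: same crate
        // b::local_only();   // error: module `a` is outside the a::b hierarchy
    }
}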

For our Windows users, Rust 1.18.0 has a new attribute, #![windows_subsystem]. It works like this:

#![windows_subsystem = "console"]
#![windows_subsystem = "windows"]

These control the /SUBSYSTEM flag in the linker. For now, only "console" and "windows" are supported.

When is this useful? In the simplest terms, if you’re developing a graphical application, and do not specify "windows", a console window would flash up upon your application’s start. With this flag, it won’t.

Finally, Rust’s tuples, enum variant fields, and structs (without #[repr]) have always had an unspecified layout. We’ve turned on automatic re-ordering, which can result in smaller sizes through reducing padding. Consider a struct like this:

struct Suboptimal(u8, u16, u8);

In previous versions of Rust on the x86_64 platform, this struct would have the size of six bytes. But looking at the source, you’d expect it to have four. The extra two bytes come from padding; given that we have a u16 here, it should be aligned to two bytes. But in this case, it’s at offset one. To move it to offset two, another byte of padding is placed after the first u8. To give the whole struct a proper alignment, another byte is added after the second u8 as well, giving us 1 + 1 (padding) + 2 + 1 + 1 (padding) = 6 bytes.

But what if our struct looked like this?

struct Optimal(u8, u8, u16);

This struct is properly aligned; the u16 lies on a two byte boundary, and so does the entire struct. No padding is needed. This gives us 1 + 1 + 2 = 4 bytes.

When designing Rust, we left the details of memory layout undefined for just this reason. Because we didn’t commit to a particular layout, we can make improvements to it, such as in this case where the compiler can optimize Suboptimal into Optimal automatically. And if you check the sizes of Suboptimal and Optimal on Rust 1.18.0, you’ll see that they both have a size of four bytes.
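You can check this yourself with std::mem::size_of; a minimal sketch using the structs from the examples above:

use std::mem::size_of;

struct Suboptimal(u8, u16, u8);
struct Optimal(u8, u8, u16);

fn main() {
    // On Rust 1.18, both print 4; before automatic re-ordering,
    // Suboptimal was 6 bytes on x86_64 because of the padding
    // described above.
    println!("Suboptimal: {}", size_of::<Suboptimal>());
    println!("Optimal: {}", size_of::<Optimal>());
}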

We’ve been planning this change for a while; previous versions of Rust included this optimization on the nightly channel, but some people wrote unsafe code that assumed the exact details of the representation. We rolled it back while we fixed all instances of this that we know about, but if you find some code breaks due to this, please let us know so we can help fix it! Structs used for FFI can be given the #[repr(C)] annotation to prevent reordering, in addition to C-compatible field layout.

We’ve been planning on moving rustdoc to use a CommonMark compliant markdown parser for a long time now. However, just switching over can introduce regressions where the CommonMark spec differs from our existing parser, Hoedown. As part of the transition plan, a new flag has been added to rustdoc, --enable-commonmark. This will use the new parser instead of the old one. Please give it a try! As far as we know, both parsers will produce identical results, but we’d be interested in knowing if you find a scenario where the rendered results differ!
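If you want to experiment, one way to pass the flag through Cargo in a standard project (a sketch, assuming a typical Cargo setup) is:

$ cargo rustdoc -- --enable-commonmark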

Finally, compiling rustc itself is now 15%-20% faster. Each commit message in this PR goes over the details; there were some inefficiencies, and now they’ve been cleaned up.

See the detailed release notes for more.

Library stabilizations

Seven new APIs were stabilized this release:

See the detailed release notes for more.

Cargo features

Cargo has added support for the Pijul VCS, which is written in Rust. cargo new my-awesome-project --vcs=pijul will get you going!

To supplement the --all flag, Cargo now has several new flags such as --bins, --examples, --tests, and --benches, which will let you build all programs of that type.

Finally, Cargo now supports Haiku and Android!

See the detailed release notes for more.

Contributors to 1.18.0

Many people came together to create Rust 1.18. We couldn’t have done it without all of you. Thanks!


Air Mozilla: Bugzilla Project Meeting, 07 Jun 2017

Wed, 07/06/2017 - 22:00

The Bugzilla Project Developers meeting.


Air Mozilla: The Joy of Coding - Episode 101

Wed, 07/06/2017 - 19:00

mconley livehacks on real Firefox bugs while thinking aloud.


Air Mozilla: Weekly SUMO Community Meeting June 07, 2017

Wed, 07/06/2017 - 18:00

This is the SUMO weekly call.


Mozilla Open Policy & Advocacy Blog: Engaging on e-Privacy at the European Parliament

Wed, 07/06/2017 - 17:26

Last week, I participated in the European Parliament’s Technical Roundtable regarding the draft e-Privacy Regulation currently under consideration – specifically, I joined the discussion on “cookies”. The Roundtable was hosted by lead Rapporteur on the file, MEP Marju Lauristin (Socialists and Democrats, Estonia), and MEP Michal Boni (European People’s Party, Poland). It was designed to bring together a range of stakeholders to inform the Parliament’s consideration of what could be a major change to how Europe regulates the privacy and security of communications online, related to but with a different scope and purpose than the recently adopted General Data Protection Regulation (GDPR).

Below the fold is a brief overview of my intervention, which describes our proposed changes for some of the key aspects of the Regulation, including how it handles “cookies”, and more generally how to deliver maximal benefits for the privacy and security of communications, with minimum unnecessary or problematic complexities for technology design and engineering. I covered the following three points:

  1. We support incentives for companies to offer privacy protective options to users.
  2. The e-Privacy Regulation must be future-proofed by ensuring technological neutrality.
  3. Browsers are not gatekeepers nor ad-blockers; we are user agents.

The current legal instrument on this topic, the e-Privacy Directive, leaves much to be desired when it comes to effective privacy protections and user benefit, illustrated quite prominently by the “cookie banner” which users click through to “consent” to the use of cookies by a Web site.  The e-Privacy Regulation is an important piece of legislation – for Mozilla, for Europe, and ultimately, for the health of the global internet.  We support the EU’s ambitious vision, and we will continue working with the Parliament, the Commission, and the Council by sharing our views and experiences with investing in privacy online.  We hope that the Regulation will contribute to a better communications ecosystem, one that offers meaningful control, transparency, and choice to individuals, and helps to rebuild trust online.

1 – We support incentives for companies to offer privacy protective options to users.

We view one of the primary objectives of the Regulation to be catalyzing more offerings of privacy protective technologies and services for users. We strongly support this objective. This is the approach we take with Firefox: Users can browse in regular mode, which permits Web sites to place cookies, or in private browsing mode, which has our Tracking Protection technology built in. We invest in making sure that both options are desirable user experiences, and the user is free to choose which they go with – and can switch between them at will, and use both at the same time. We’d like to see more of this in the industry, and welcome the language in Article 10(1) of the draft Regulation which we believe is intended to encourage this.

2 – The e-Privacy Regulation must be future-proofed by ensuring technological neutrality.

One of the principles that shaped the current e-Privacy Directive was technological neutrality. It’s critical that the Regulation similarly follow this principle, to ensure practical application and to keep it future-proof. It should therefore focus on the underlying privacy risk to users created by cross-site and cross-device tracking, rather than on specific technologies that create that risk. To achieve that, the current draft of the Regulation would benefit from two changes.

First, the Parliament should revise references to specific tracking techniques, like first and third party cookies, to ensure that other forms of tracking aren’t overlooked. While blocking third party cookies may seem at first glance to be low-hanging fruit for better protecting user privacy and security online — see this Firefox add-on called Lightbeam, which demonstrates the number of first and third party sites that can “follow” you online — there are a number of different ways a user can be tracked online, and third party cookies are only one implementation (albeit a common one). Device fingerprinting, for example, creates a unique, persistent identifier that undermines user consent mechanisms and requires a regulatory solution. Similarly, advertising identifiers, a pervasive tracking tool on mobile platforms, are currently not addressed. The Regulation should use terminology that more accurately captures the targeted behavior, not just one possible implementation of tracking.

Second, the Regulation includes a particular focus on Web browsers (such as Recitals 22-24), without proper consideration of the diversity of forms of online communications today. We aren’t suggesting that the Regulation exclude Web browsing, of course. But to focus on one particular client-side software technology risks missing other technology with significant privacy implications, such as tracking facilitated by mobile operating systems or cloud services accessed via mobile apps. Keeping a principle-based approach will ensure that the Regulation doesn’t impose a specific solution that does not meaningfully deliver on transparency, choice, and control outside of the Web browsing context.

3 – Browsers are not gatekeepers nor ad-blockers; we are user agents.

Building on the above, the Parliament ought to view the Web browser in a manner that reflects its place in the technology ecosystem. Web browsers are user agents facilitating the communication between internet users and Web sites. For example, Firefox offers deep customisation options, and its goal is to put the user in the driver’s seat. Similarly, Firefox private browsing mode includes Tracking Protection technology, which blocks certain third party trackers through a blacklist (learn more about our approach to content blocking here). Both of these are user agent features, embedded in code shipped to users and run on their local devices – neither is a service that we functionally intermediate or operate as it is used. It’s not constructive from a regulatory perspective, nor an accurate understanding of the technology, to describe Web browsers as gatekeepers in the way the Regulation does today.

The post Engaging on e-Privacy at the European Parliament appeared first on Open Policy & Advocacy.


Jan Varga: A Time for Mourning Again.

Wed, 07/06/2017 - 16:52

Our dog Aida passed away today from kidney failure. I knew there was little chance of helping her, but I really hoped she would magically overcome it.

She was the greatest dog I’ve ever had. She was a family member, a friend … I will miss her terribly.

Rest in peace our beloved Aida…

Jan

 



Hacks.Mozilla.Org: Introducing FilterBubbler: A WebExtension built using React/Redux

wo, 07/06/2017 - 16:47

A  few months ago my long time Free Software associate, Don Marti, called me about an idea for a WebExtension. WebExtensions is the really cool new standard for browser extensions that Mozilla and the Chrome team are collaborating on (as well as Opera, Edge and a number of other major browsers). The WebExtensions API lets you write add-ons using the same JavaScript and HTML methodologies you use to implement any other web site.

Don’s idea was basically to build a text analysis toolkit with the new WebExtensions API. This toolkit would let you monitor various browser activities and resources (history, bookmarks, etc.) and then let you use text analysis modules to discover patterns in your own browsing history. The idea was to turn the tables on the kinds of sophisticated analysis that advertisers do with the everyday browsing activities we take for granted. Big companies are using advanced techniques to model user behavior and control the content they receive, in order to manipulate outcomes like the time a user spends on the system or the ads they see. If we provided tools for users to do this with their own browsing data, they would have a better chance to understand their own behaviors and a greater awareness of when external systems are trying to manipulate them.

The other major goal would be to provide a well-documented example of using the new WebExtensions API.  The more I read about WebExtensions the more I realized they represent a game-changer for moving web browsing intelligence “out to the edge”. All sorts of analysis and automation can be done with WebExtensions in a way that potentially lets the tools be used on any of the popular modern web browsers. About the only thing I saw missing was a way to easily collaborate around these “recipes” for analysing web content. I suggested we create a WordPress plugin that would supply a RESTful interface for sharing classifications and the basic plan for “FilterBubbler” was born.

Our initial prototype was a proof of concept that used an extremely basic HTML pop-up and a Bayesian classifier. This version proved that we could provide useful classification of web page content based on hand-loaded corpora, but it was clear that we would need additional tooling to get to a “consumer” feel. Before we could start adding important features like remote recipe servers, classification displays and configuration screens, we clearly needed to make some decisions about our infrastructure. In this article, I will cover our efforts to provide a modern UI environment and the challenges that posed when working with WebExtensions.

React/Redux

The React framework and its associated Flux implementation took the HTML UI world by storm when Facebook released the tool as Free Software in 2013. React was originally deployed in 2011 as part of the newsfeed display infrastructure on Facebook itself. Since then the library has found use in Instagram, Netflix, AirBnB and many other popular services. The tool revolves around a strategy called Flux which tightly defines the way state is updated in an application.

Flux is a strategy, not an actual implementation, and there are many libraries that provide its functionality. One of the most popular libraries today is Redux. The Redux core value is a simplified universal view of the application state. Because there is a single state for the application, the behavior that results from a series of action events is completely deterministic and predictable. This makes your application easier to reason about, test and debug. A full discussion of the concepts behind React and Redux is beyond the scope of this article, so if you are just getting started, I would recommend that you read the Redux introductory material or check out Dan Abramov’s excellent introductory course at Egghead.

Integrating with WebExtensions

Digging into the WebExtensions framework, one of the first hurdles is that the UI pop-up and config page context is separate from the background context. The state of the UI context is recreated each time you open and close the UI displays. Communication between the UI context and the background script context is achieved via a message-passing architecture.

The state of the FilterBubbler extension will be stored in the background context but we’ll need to bind that state to UI elements in the pop-up and config page context. Alexander Ivantsov’s Redux-WebExt project offers one solution for this problem. His tool provides an abstraction layer between the UI and background pages with a proxy. The proxy gives the appearance of direct access to the Redux store in the UI, but it actually forwards actions to the background context, and then sends resulting state modifications generated by the reducers back to the UI context.
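Conceptually, the UI side of that proxy looks something like the sketch below. This illustrates the message-passing idea only and is not Redux-WebExt’s actual API; the message shapes ('DISPATCH', 'STATE_UPDATE') are invented for the example:

// Illustrative sketch only -- not the real Redux-WebExt implementation.
// The UI context gets a store-shaped facade whose dispatch() forwards
// serialized actions to the background page over runtime messaging.
function createProxyStore() {
    let state = {};
    const listeners = [];

    // The background context runs the real reducers, then broadcasts
    // the resulting state back to every UI context.
    browser.runtime.onMessage.addListener(message => {
        if (message.type === 'STATE_UPDATE') {
            state = message.state;
            listeners.forEach(listener => listener());
        }
    });

    return {
        getState: () => state,
        subscribe: listener => listeners.push(listener),
        dispatch: action =>
            browser.runtime.sendMessage({ type: 'DISPATCH', action }),
    };
}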

Action mapping

It took me some effort to get things working across the Redux-WebExt bridge. The React components that run in the UI contexts think they are talking to a Redux store; in fact, it’s a facade that is exchanging messages with your background context. The action objects that you think are headed to your reducers are actually serialized into messages, sent to the background context and then unpacked and delivered to the store. Once the reducers finish their state modifications, the resulting state is packed up and sent back to the proxy so that it can update the state of the UI peers.

Redux-WebExt puts a mapping table in the middle of this process that lets you modify how action events from the front-end get delivered to the store. In some cases (i.e., asynchronous operations) you really need this mapping to separate out actions that can’t be serialized into message objects (like callback functions).

In some cases this may be a straight mapping that only copies the data from a UI action event, like this one from FilterBubbler’s mappings in store.js:

actions[formActionTypes.TOUCH] = (data) => {
    return {
        type: formActionTypes.TOUCH,
        ...data
    };
}

Or, you may need to map that UI action to something completely different, like this mapping that calls an asynchronous function only available in the background store:

actions[UI_REQUEST_ACTIVE_URL] = requestActiveUrl;

In short, be mindful of the mapper! It took me a few hours to get my head wrapped around its purpose. Understanding this is critical if you want to use React/Redux in your extension as we are.

This arrangement makes it possible to use standard React/Redux tooling with minimal changes and configuration. Existing sophisticated libraries for form-handling and other major UI tasks can plug into the WebExtension environment without any knowledge of the underlying message-based connectivity. One example tool we have already integrated is Redux Form, which provides a full system for managing form input with validation and the other services you would expect in a modern development effort.

Having established that we can use a major UI toolkit without starting from scratch, our next concern is to make things look good. Google's Material Design is one popular look-and-feel standard, and the React ecosystem has the popular Material UI, which implements the Google standard as a set of React components. This gives us the ability to produce great looking UI popups and config screens without having to develop a new UI toolkit.
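As a taste of what that looks like, a pop-up component can pull in ready-made Material Design widgets. This sketch targets the 0.x material-ui API linked in the references; component names differ in later versions, and the button label is ours:

import React from 'react';
import MuiThemeProvider from 'material-ui/styles/MuiThemeProvider';
import RaisedButton from 'material-ui/RaisedButton';

// A pop-up panel that inherits the Material Design look and feel.
const Popup = () => (
    <MuiThemeProvider>
        <RaisedButton label="Classify this page" primary={true} />
    </MuiThemeProvider>
);

export default Popup;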

Get thunky

Some of the operations we need to perform are callback-based, which makes them asynchronous. In the React/Redux model this presents some issues. Action generator functions and reducers are designed to do their work immediately when called. Solutions like providing access to the store within an action generator and calling dispatch in a callback are considered an anti-pattern. One popular solution to this problem is the Redux-Thunk middleware. Adding Redux-Thunk to your application is easy: you just pass it in when you create the store.

import { createStore, applyMiddleware } from 'redux'
import thunk from 'redux-thunk'

const store = createStore(
    reducers,
    applyMiddleware(thunk)
)

With Redux-Thunk installed you are provided with a new style of action generators in which you return a function to the store that will later be passed a dispatch function. This inversion of control allows Redux to stay in the driver’s seat when it comes to sequencing your asynchronous operations with other actions in the queue. As an example, here’s a function that requests the URL of the current tab and then dispatches a request to set the result in the UI:

export function requestActiveUrl() {
    return dispatch => {
        return browser.tabs.query({active: true}, tabs => {
            return dispatch(activeUrl(tabs[0].url));
        })
    }
}

The activeUrl() function looks more typical:

export function activeUrl(url) {
    return {
        type: ACTIVE_URL,
        url
    }
}

Since WebExtensions span several different contexts and communicate with asynchronous messaging, a tool like Redux-Thunk is indispensable.

Debugging WebExtensions

Debugging WebExtensions presents a few new challenges, and the details differ a little depending on the browser. Whichever one you use, the first major difference is that the background context of the extension has no visible page and must be specifically selected for debugging. Let's walk through getting started with that process on Firefox and Chrome.

Firefox

On Firefox, you can access your extension by typing "about:debugging" into the browser's URL field. This page will allow you to load an unpacked extension with the "Load Temporary Add-on" button (or you can use the handy web-ext tool, which lets you start the process from the command line). Pressing the "Debug" button here will bring up a source debugger for your extension. With FilterBubbler, we are using the flexible webpack build tool to take advantage of the latest JavaScript features. webpack uses the Babel transpiler to convert new JavaScript language features into code that is compatible with current browsers. This means that the sources run by the browser are significantly altered from their originals. Be sure to select the "Show Original Sources" option from the preferences menu in the debugger or your code will seem very unfamiliar!
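Source maps are what make "Show Original Sources" work. If you are following along with a similar setup, a minimal webpack sketch looks something like this (the entry point and loader options are illustrative, not FilterBubbler's actual config):

// webpack.config.js
module.exports = {
    entry: './src/background.js',
    output: { filename: 'background.js' },
    devtool: 'source-map',   // emit source maps so the debugger can show the originals
    module: {
        rules: [
            { test: /\.js$/, exclude: /node_modules/, use: 'babel-loader' }
        ]
    }
};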

Once you have selected that option, you should see your original source files. From here you can set breakpoints and do everything else you would expect.

Chrome

On Chrome it's all basically the same idea, with just a few small differences in the UI. First go to the main menu, dig down to "More tools" and then select "Extensions". That will take you to the extension listing page, where the "Inspect views" section will allow you to bring up a debugger for your background scripts.

Where the Firefox debugger shows all of your background and foreground activity in one place, the Chrome environment does things a little differently. The foreground UI view is activated by right-clicking the icon of your WebExtension and selecting the “Inspect popup” option.

From there things are pretty much as you would expect. If you have written JavaScript applications and used the browser's built-in developer tools, then you should find the experience pretty familiar.

Classification materials

With all our new infrastructure in place and a working debugger, we were back on track adding features to FilterBubbler. One of our goals for the prototype is to provide the API that recipes run against. The main ingredients for FilterBubbler recipes are:

One or more sources: A source provides a stream of classification events on given URLs. The prototype provides a simple source which will emit a classification request any time the browser switches to a particular page. Other possible sources could include a source that scans a social network or a newsfeed for content, a stream of email messages or a segment of the user’s browser history.

A classifier: The classifier takes content from a source and returns an array of label and strength value pairs. If the array is empty, the system assumes that the classifier was not able to produce a match.

One or more corpora: FilterBubbler corpora provide a list of URLs with label and strength values. The labels and strength values are used to train the classifier.

One or more sinks: A sink is the destination for the classification events. The prototype includes a simple sink that connects a given classifier to a UI widget, which displays the classifications in the WebExtensions pop-up. Other possible sinks could generate outbound emails for certain classification label matches or write URLs into a bookmark folder named with the classification label. (A rough sketch of how these pieces might look in code follows below.)
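To make the shapes concrete, here is a purely hypothetical sketch of a classifier and a sink; none of these names come from the actual FilterBubbler code, which lives in the GitHub repository listed below:

// Hypothetical classifier: returns an array of { label, strength } pairs.
// An empty array means no classification could be produced.
function naiveBayesClassifier(content) {
    const scores = scoreAgainstCorpora(content);   // assumed training/scoring helper
    return scores
        .filter(s => s.strength > 0)
        .map(s => ({ label: s.label, strength: s.strength }));
}

// Hypothetical sink: receives classification events and forwards
// them to the pop-up UI over the messaging channel.
function popupSink(url, classifications) {
    browser.runtime.sendMessage({ type: 'CLASSIFIED', url, classifications });
}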

As an example, a configuration that wires the default page source through a classifier and into the pop-up sink could tell you whether the page you are currently looking at is "awesome" or "stupid"!

Passing on the knowledge

The configurations for these arrangements are called “recipes” and can be loaded into your local configuration. A recipe is defined with a simple JSON format like so:

{
    "recipe-version": "0.1",
    "name": "My Recipe",
    "classifier": "naive-bayes",
    "corpora": [
        "http://mywpblog.com/filterbubbler/v1/corpora/fake-news",
        "http://mywpblog.com/filterbubbler/v1/corpora/ufos",
        "http://mywpblog.com/filterbubbler/v1/corpora/food-blog"
    ],
    "source": [ "default" ],
    "sink": [ "default" ]
}

The simple bit of demo code above can help a user differentiate between fake news, UFO sightings and food blogs (more of a problem than you might expect in some cities). In the initial prototype, the classifiers, sources and sinks must be one of the provided implementations and cannot be loaded dynamically. In upcoming articles, we will expand this functionality and describe the challenges these activities present in the WebExtensions environment.

References

FilterBubbler on GitHub: https://github.com/filterbubbler/filterbubbler-web-ext
React-based Material UI: http://www.material-ui.com/
Redux WebExt: https://github.com/ivantsov/redux-webext
Redux Form: http://redux-form.com/6.7.0/
Mozilla browser polyfill: https://github.com/mozilla/webextension-polyfill
