So I’d love to see brotli supported as a Content-Encoding in curl too, and then we basically just have to write some conditional code to detect the brotli library, add the adaptation code for it, and we should be in a good position. But…
There is (was) no brotli library!
It turns out the brotli team just writes their code to be linked with their own tools, without producing a library or making it easy to install and use for third-party applications.
We can’t have it like that! I rolled up my imaginary sleeves (imaginary since my swag tshirt doesn’t really have sleeves) and I now offer libbrotli to the world. It is just a bunch of files and a build system that sucks in the brotli upstream repo as a submodule and then it builds a decoder library (brotlidec) and an encoder library (brotlienc) out of them. So there’s no code of our own here. Just building on top of the great stuff done by others.
It’s not complicated. It’s nothing fancy. But now you can configure, make and make install two libraries, and I can go on and write a curl adaptation for this library so that we can get brotli support done. Ideally, this (making a library) is something the brotli project will do on their own at some point, but until they do I don’t mind handling this.
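For the curious, building and installing the two libraries is the usual autotools routine. A sketch (the repository URL is my best recollection, and the exact steps may differ per release):

```shell
git clone --recursive https://github.com/bagder/libbrotli  # pulls brotli in as a submodule
cd libbrotli
./autogen.sh       # generate the configure script
./configure
make
sudo make install  # installs brotlidec and brotlienc
```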
Mozilla takes a position on the change of Facebook’s Internet.org to Free ...
Mozilla recently made a statement taking a position on the zero-rating model (subsidized access). In its view, this model, in which users get free access to predetermined applications and sites and which limits their online experience, ...
How do we bring together learning and advocacy? I wanted to start this conversation by giving examples of campaigns I’ve worked on in the past.
Case Study 1: How much do you know about Iran quiz
This campaign happened at Berim.org (which means “let’s go” in Farsi, and whose mission was to support Iranian social innovators). We were running a campaign to avoid war with Iran. What we found was that we were getting a lot of emails from our members who were confusing Iran with other groups or countries around the world. For example, our members would write to us and say they wouldn’t take action anymore until women in Iran were allowed to drive. It was a clear case of misinformation, as women in Saudi Arabia are restricted from driving, not women in Iran.
Our goal was to shatter misinformation about Iran in a way that was fun and that didn’t make our members – 90% of whom were not from an Iranian background – defensive.
As a result we launched a 10-question BuzzFeed-style quiz titled “How much do you know about Iran?” We sent the quiz to our email list of 70,000+ people. The quiz was taken about 20,000 times and had a completion rate that was close to 90%. After the success of the quiz in our network it came to the notice of Upworthy, who shared the quiz, and then it reached over 100,000 people.
The majority of people got about 50% of the questions in the quiz right. This was intentional, as we wanted our members to be challenged and realize that some of their assumptions were wrong. Our assumption was that doing so in the format of a fun quiz would be less confronting and more likely to sink in than doing it through a ‘mythbusting checklist.’ The feedback we received from members tended to indicate we were right:
Great! Offerings like this which let us see the people and culture and humanity of Iran are the way. Thanks! – Pat
Sara, very nice introduction to Iran. It’s a good way to begin the process to break down barriers. -Dan
Great quiz…I missed two. Sent it on to about 30 other folks. – Bob
The quiz was a great way to stimulate my thinking about Iran and correcting my misconceptions and adding to my knowledge about its products, way of life, historical events, etc. Would love to see more. – Sondra
How does this relate to our work at Mozilla?
I wanted to use this case study to illustrate one tactic for using online organizing to educate people at scale. Right now we’re only conceptualizing the advocacy list as a way to mobilize people around legislative action – but in order to really build relationships and deep connections we can’t just ask people to take action; we need to think about how we can serve our community.
That’s why I think it would be great for us to come together and create some learning goals for our list. For example – after a year of being on the list, do we want people to be able to articulate what net neutrality is? Do we want them to know 5 things they can do to secure their privacy?
While the strategy and benchmarks are something we need to develop together – here are some tactical ideas to help illustrate the potential of this collaboration:
- Sending people a list of fun facts/anecdotes that relate to the open web that they can talk about with their families during Thanksgiving
- Creating a list of gift ideas that will help people learn more about the open web during the holiday season.
- Running a campaign to ask people to make privacy a new year’s resolution and creating small things they can do each week to realize that resolution.
A few months ago, Joel Maher announced the Perfherder summer of contribution. We wrapped things up there a few weeks ago, so I guess it’s about time I wrote up a bit about how things went.
As a reminder, the idea of the summer of contribution was to give a set of contributors the opportunity to make a substantial contribution to a project we were working on (in this case, the Perfherder performance sheriffing system). We would ask that they sign up to do 5-10 hours of work a week for at least 8 weeks. In return, Joel and I would make ourselves available as mentors to answer questions about the project whenever they ran into trouble.
To get things rolling, I split off a bunch of work that we felt would be reasonable to do by a contributor into bugs of varying difficulty levels (assigning them a bugzilla whiteboard tag ateam-summer-of-contribution). When someone first expressed interest in working on the project, I’d assign them a relatively easy front end one, just to cover the basics of working with the project (checking out code, making a change, submitting a PR to github). Then I’d assign them slightly harder or more complex tasks which dealt with other parts of the codebase, the nature of which depended on what they wanted to learn more about. Perfherder essentially has two components: a data storage and analysis backend written in Python and Django, and a web-based frontend written in JS and Angular. There was (still is) lots to do on both, which gave contributors lots of choice.
This system worked pretty well for attracting people. I think we got at least 5 people interested and contributing useful patches within the first couple of weeks. In general I think onboarding went well. Having good documentation for Perfherder / Treeherder on the wiki certainly helped. We had lots of the usual problems getting people familiar with git and submitting proper pull requests: we use a somewhat clumsy combination of bugzilla and github to manage treeherder issues (we “attach” PRs to bugs as plaintext), which can be a bit offputting to newcomers. But once they got past these issues, things went relatively smoothly.
A few weeks in, I set up a fortnightly skype call for people to join and update status and ask questions. This proved to be quite useful: it let me and Joel articulate the higher-level vision for the project to people (which can be difficult to summarize in text) but more importantly it was also a great opportunity for people to ask questions and raise concerns about the project in a free-form, high-bandwidth environment. In general I’m not a big fan of meetings (especially status report meetings) but I think these were pretty useful. Being able to hear someone else’s voice definitely goes a long way to establishing trust that you just can’t get in the same way over email and irc.
I think our biggest challenge was retention. Due to (understandable) time commitments and constraints only one person (Mike Ling) was really able to stick with it until the end. Still, I’m pretty happy with that success rate: if you stop and think about it, even a 10-hour a week time investment is a fair bit to ask. Some of the people who didn’t quite make it were quite awesome, I hope they come back some day.
On that note, a special thanks to Mike Ling for sticking with us this long (he’s still around and doing useful things long after the program ended). He’s done some really fantastic work inside Perfherder and the project is much better for it. I think my two favorite features that he wrote are the improved test chooser, which I talked about a few months ago, and a “get related platform/branch” feature, which is a big time saver when trying to determine when a performance regression was first introduced.
I took the time to do a short email interview with him last week. Here’s what he had to say:
1. Tell us a little bit about yourself. Where do you live? What is it you do when not contributing to Perfherder?
I’m a postgraduate student of NanChang HangKong university in China whose major is Internet of things. Actually,there are a lot of things I would like to do when I am AFK, play basketball, video game, reading books and listening music, just name it ; )
2. How did you find out about the ateam summer of contribution program?
well, I remember when I still a new comer of treeherder, I totally don’t know how to start my contribution. So, I just go to treeherder irc and ask for advice. As I recall, emorley and jfrench talk with me and give me a lot of hints. Then Will (wlach) send me an Email about ateam summer of contribution and perfherder. He told me it’s a good opportunity to learn more about treeherder and how to work like a team! I almost jump out of bed (I receive that email just before get asleep) and reply with YES. Thank you Will!
3. What did you find most challenging in the summer of contribution?
I think the most challenging thing is I not only need to know how to code but also need to know how treeherder actually work. It’s a awesome project and there are a ton of things I haven’t heard before (i.e T-test, regression). So I still have a long way to go before I familiar with it.
4. What advice would you give to future ateam contributors?
The only thing you need to do is bring your question to irc and ask. Do not hesitate to ask for help if you need it! All the people in here are nice and willing to help. Enjoy it!
TaskCluster is Mozilla’s task queuing, scheduling and execution service. It allows the user to submit a DAG representing a task graph that describes a set of tasks, their dependencies, and how to execute them, and it schedules them to run in the needed order on a number of slave machines.
For a while now, some of the continuous integration tasks have been running on TaskCluster, and I recently set out to enable static analysis optimized builds on Linux64 on top of TaskCluster. I had previously added a similar job for debug builds on OS X in buildbot, and I am amazed at how much the experience has improved! It is truly easy to add a new type of job now as a developer without being familiar with buildbot or anything like that. I’m writing this post to share my experience of how I did this.
The process of scheduling jobs in TaskCluster starts with a slave downloading a specific revision of a tree and running the ./mach taskcluster-graph command to generate a task graph definition. This is what happens in the “gecko-decision” jobs that you can see on TreeHerder. The mentioned task graph is computed using the task definition information in testing/taskcluster. All of the definitions are in YAML, and I found the naming of variables relatively easy to understand. The build definitions are located in testing/taskcluster/tasks/builds and after some poking around, I found linux64_clobber.yml.
If you look closely at that file, a lot of things are clear from the names. Here are important things that this file defines:
- $inherits: These files have a single inheritance structure that allows you to refactor the common functionality into “base” definitions.
- A lot of things have “linux64” in their name. This gave me a good starting point when I was trying to add a “linux64-st-an” (a made-up name) build by copying the existing definition.
- payload.image contains the name of the docker image that this build runs. This is handy to know if you want to run the build locally (yes, you can do that!).
- It points to builds/releng_base_linux_64_builds.py which contains the actual build definition.
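To make this concrete, here is a stripped-down sketch of what such a YAML definition looks like; the field names follow the ones discussed above, but the base file name and values are invented for illustration:

```yaml
# Hypothetical, simplified task definition modeled on linux64_clobber.yml
$inherits:
  from: 'tasks/builds/base_linux64.yml'   # single-inheritance "base" definition
task:
  payload:
    image: 'desktop-build'                # docker image the build runs in
    env:
      MOZHARNESS_CONFIG: 'builds/releng_base_linux_64_builds.py'
```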
Looking at the build definition file, you will find the steps run in the build, whether the build should trigger unit tests or Talos jobs, the environment variables used during the build, and most importantly the mozconfig and tooltool manifest paths. (In case you’re not familiar with Tooltool, it lets you upload your own tools to be used during the build. These can be new experimental toolchains, custom programs your build needs to run, and so on, which is useful for things such as performing actions on the build outputs.)
This basically gave me everything I needed to define my new build type, and I did that in bug 1203390, and these builds are now visible on TreeHerder as “[Tier-2](S)” on Linux64. This is the gist of what I came up with.
I think this is really powerful since it finally allows you to fully control what happens in a job. For example, you can use this to create new build/test types on TreeHerder, do try pushes that test changes to the environment a job runs in, do highly custom tasks such as creating code coverage results, which requires a custom build step and custom test steps and uploading of custom artifacts! Doing this under the old BuildBot system is unheard of. Even if you went out of your way to learn how to do that, as I understand it, there was a maximum number of build types that we were getting close to which prevented us from adding new job types as needed! And it was much much harder to iterate on (as I did when I was working on this on the try server bootstrapping a whole new build type!) as your changes to BuildBot configs needed to be manually deployed.
Another thing to note is that I found out all of the above pretty much by myself, and didn’t even have to learn every bit of what I encountered in the files that I copied and repurposed! This was extremely straightforward. I’m already on my way to add another build type (using Ted’s bleeding edge Linux to OS X cross compiling support)! I did hit hurdles along the way but almost none of them were related to TaskCluster, and with the few ones that were, I was shooting myself in the foot and Dustin quickly helped me out. (Thanks, Dustin!)
Another neat feature of TaskCluster is the inspector tool. In TreeHerder, you can click on a TaskCluster job, go to Job Details, and click on “Inspect Task”. You’ll see a page like this. In that tool you can do a number of neat things. One is that it shows you a “live.log” file which is the live log of what the slave is doing. This means that you can see what’s happening in close to real time, without having to wait for the whole job to finish before you can inspect the log. Another neat feature is the “Run locally” commands that show you how to run the job in a local docker container. That will allow you to reproduce the exact same environment as the ones we use on the infrastructure.
I highly encourage people to start thinking about the ways they can harness this power. I look forward to seeing what we’ll come up with!
Bi-weekly meeting to talk about the state of Mozilla, the community and its projects.
Firefox fixes a flaw present in the browser for 14 years
The Mozilla Foundation announced the fix for a bug that had been present in Firefox for 14 years and which, according to the company, consumed memory when the user enabled the Adblock Plus extension. With the flaw, the program created to block ...
Mozilla launches test version of its privacy protector – Boa Informação (press release) (Blog)
Firefox fixes bug more than 14 years old in the browser – Globo.com
New “secret mode” is in the Firefox beta – BR-Linux
Software update: Pale Moon 25.7.1
Version 25.7.1 of Pale Moon has been released. This web browser uses the Mozilla Firefox source code, but is optimized for modern hardware. The Windows version of Mozilla Firefox is, after all, developed ...
14% HTTP/2 thanks to nginx ?
The --libcurl flaw is fixed (and it was GONE from github for a few hours)
No, the cheat sheet cannot be in the man page. But…
bug of the week: the http/2 performance fix
option of the week: -k
Talking at the GOTO Conference next week
This scenario is simplified for purposes of demonstration.
I have 3 machines: A, B, and C. A is my laptop, B is a bastion, and C is a server that I only access through the bastion.
I use an SSH keypair helpfully named AB to get from me@A to me@B. On B, I su to user. I then use an SSH keypair named BC to get from user@B to user@C.
I do not wish to store the BC private key on host B.

SSH Agent Forwarding
I have keys AB and BC on host A, where I start. Host A is running ssh-agent, which is installed by default on most Linux distributions.

me@A$ ssh-add ~/.ssh/AB  # Add keypair AB to ssh-agent's keychain
me@A$ ssh-add ~/.ssh/BC  # Add keypair BC to the keychain
me@A$ ssh -A me@B        # Forward my ssh-agent
Now I’m logged into host B and have access to the AB and BC keypairs. An attacker who gains access to B after I log out will have no way to steal the BC keypair, unlike what would happen if that keypair was stored on B.
See here for pretty pictures explaining in more detail how agent forwarding works.
Anyways, I could now ssh me@C with no problem. But if I sudo su user, my agent is no longer forwarded, so I can’t then use the key that I added back on A!

Switch user while preserving environment variables

me@B$ sudo -E su user
user@B$ sudo -E ssh user@C

What?
The -E flag to sudo preserves the environment variables of the user you’re logged in as. ssh-agent uses a socket whose name is of the form /tmp/ssh-AbCdE/agent.12345 to call back to host A when it’s time to do the handshake involving key BC, and the socket’s name is stored in me‘s SSH_AUTH_SOCK environment variable. So by telling sudo to preserve environment variables when switching user, we allow user to pass ssh handshake stuff back to A, where the BC key is available.
Why is sudo -E required to ssh to C? Because /tmp/ssh-AbCdE/agent.12345 is owned by me:me, and only the file’s owner may read, write, or execute it. Additionally, the socket itself (agent.12345) is owned by me:me, and is not writable by others.
If you must run ssh on B without sudo, chown -R the /tmp/ssh-AbCdE directory to the user who needs to end up using the socket. Making it world readable/writable would allow any user on the system to use any key currently added to the ssh-agent on A, which is a terrible idea.
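To make that concrete, here is a small sketch; the socket path is the made-up example from this post, and “user” is a hypothetical user name:

```shell
# Pretend we have a forwarded agent socket (path format as described above)
export SSH_AUTH_SOCK=/tmp/ssh-AbCdE/agent.12345

# The socket path is always recoverable from the environment
printenv SSH_AUTH_SOCK

# Hand the socket directory over to the other user (hypothetical user "user"):
# chown -R user /tmp/ssh-AbCdE
```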
For what it’s worth, the actual value of /tmp/ssh-AbCdE/agent.12345 is available at any time in this workflow as the result of printenv | grep SSH_AUTH_SOCK | cut -f2 -d =.

The Catch
Did you see what just happened there? An arbitrary user with sudo on B just gained access to all the keys added to ssh-agent on A. Simon pointed out that the right way to address this issue is to use ProxyCommand instead of agent forwarding.

No, I really don’t want my keys accessible on B
See man ssh_config for more of the details on ProxyCommand. In ~/.ssh/config on A, I can put:

Host B
    User me
    Hostname 111.222.333.444

Host C
    User user
    Hostname 222.333.444.555
    Port 2222
    ProxyCommand ssh -q -W %h:%p B
So then, on A, I can ssh C and be forwarded through B transparently.
The Participation Team was created back in January of this year with an ambitious mandate to simultaneously a) get more impact, for Mozilla’s mission and its volunteers, from core contributor participation methods we’re using today, and b) to find and develop new ways that participation can work at Mozilla.
This mandate stands on the shoulders of people and teams who lead this work around Mozilla in the past, including the Community Building Team. As a contrast with these past approaches, our team concentrates staff from around Mozilla, has a dedicated budget, and has the strong support of leadership, reporting to Mitchell Baker (the Executive Chair) and Mark Surman (CEO of the foundation).
For the first half of the year, our approach was to work with and learn from many different teams throughout Mozilla. From Dhaka to Dakar — and everywhere in between — we supported teams and volunteers around the world to increase their effectiveness. From MarketPulse to the Webmaker App launches we worked with different teams within Mozilla to test new approaches to building participation, including testing out what community education could look like. Over this time we talked with/interviewed over 150 staff around Mozilla, generated 40+ tangible participation ideas we’d want to test, and provided “design for participation” consulting sessions with 20+ teams during the Whistler all-hands.
Toward the end of July, we took stock of where we were. We established a set of themes for the rest of 2015 (and maybe beyond), focused especially on enabling Mozilla’s Core Contributors, and I put in place a new team structure.

Themes:
- Focus – We will partner with a small number of functional teams and work disproportionately with a small number of communities. We will commit to these teams and communities for longer and go deeper.
- Leaders – As a small staff team, we can magnify our impact by identifying and working with volunteer leaders around Mozilla (those Mozillians who engage and influence many more Mozillians). This will start with collecting information about our communities and having 1:1’s with 200+ Mozillians, and proceed to building more formal leadership and learning initiatives.
- Learning – We’re continuing the work of the Participation Lab, having both focused experiments and paying attention to the new approaches to participation being tested by staff and volunteer Mozillians all around the organization. The emphasis will be on synthesizing lessons about high impact participation, and helping those lessons be applied throughout Mozilla.
- Open and Effective – We’re investing in improving how we work as a team and our individual skills. A big part of this is building on the agile “heartbeat” method innovated by the foundation, powered by GitHub. Another part of this is solidifying our participation technology group and starting to play a role of aligning similar participation technologies around Mozilla.
You can see these themes reflected in our Q3 Objectives and Key Results.

Team structure:
The Participation Team is focused on activating, growing and increasing the effectiveness of our community of core contributors. Our modified team structure has 5 areas/groups, each with a Lead and a bottom-line accountability; Leads include George Roter (acting) and Lucy Harris. You’ll note that all of these team members are staff – our aim in the coming months is to integrate core contributors into this structure, including existing leadership structures like the ReMo Council.

- Participation Partners – Bottom line: Catalyze participation with product and functional teams to deliver and sustain impact
- Global-Local Organizing (includes Reps and Regional Communities) – Bottom line: Grow the capacity of Mozilla’s communities to engage volunteers and have impact
- Developing Leaders – Bottom line: Grow the capacity of Mozilla’s volunteer leaders and volunteers to have impact
- Participation Technology – Bottom line: Enable large scale, high impact participation at Mozilla through technology
- Performance and Learning – Bottom line: Develop a high performing team, and drive learning and synthesize best practice through the Participation Lab
We have also established a Leadership and Strategy group accountable for:
- Making decisions on team objectives, priorities and resourcing
- Nurturing a culture of high performance through standard setting and role modelling
This is made up of Rosana Ardila, Lucy Harris, Brian King, Pierros Papadeas, William Quiviger and myself.
As always, I’m excited to hear your feedback on any of this — it is most certainly a work in progress. We also need your help:
- If you’re a staff/functional team or volunteer team trying something new with participation, please get in touch!
- If you’re a core contributor/volunteer, take a look at these volunteer tasks.
- If you have ideas on what the team’s priorities should be over the coming quarter(s), please send me an email — .
As always, feel free to reach out to any member of the team; find us on IRC at #participation; follow along with what we’re doing on the Blog and by following [@MozParticipate on Twitter](https://twitter.com/mozparticipate); have a conversation on Discourse; or follow/jump into any issues on GitHub.
In the last week, we landed 37 PRs in the Servo repository!
In addition to a rustup by Manish and a lot of great cleanup, we also saw:
- Glennw fixed a bug where animations continued forever at full blast
- Martin Robinson landed the first bits of his massive ongoing stacking context / display list refactoring work
Servo on Windows! Courtesy of Vladimir Vukicevic.
Text shaping improvements in Servo:
At last week’s meeting, we discussed the outcomes from the Paris layout meetup, how to approach submodule updates, and trying to reduce the horrible enlistment experience with downloading Skia.
The Monday Project Meeting
These are the notes of my talk at SmartWebConf in Romania. Part 1 covered how Impostor Syndrome cripples us in using what we hear about at conferences. It also covered how our training and onboarding focuses on coding. And how it lacks in social skills and individuality. This post talks about the current state of affairs. We have a lot of great stuff to play with but instead of using it we always chase the next.
This is part 2 of 3.
- Part 1: never stop learning and do it your way
- Part 2: you got what you asked for, time to use it
- Part 3: give up on the idea of control and become active
When reading about the state of the web there is no lack of doom-and-gloom posts. Native development is often quoted as “eating our lunch”. Native-only interaction models are sold to us as things “people use these days”. Many of them are dependent on hardware or protected by patents. But they look amazing and in comparison the web seems to fall behind.

The web doesn’t need to compete everywhere
This is true, but it is also not surprising. Flash showed many things that were possible that HTML/CSS/JS couldn’t do. Most of these were interesting experiments. They looked like a grand idea at the time. And they went away without an outcry from users. What native environments have versus what we do on the web is a comparison the web can’t win. And it shouldn’t try to.
The web per definition is independent of hardware and interaction model. Native environments aren’t – on the contrary. Success on native is about strict control. You control the interaction, the distribution and what the user can and can’t see. You can lock out users and not let them get to the next level unless they pay for it or buy the next version of your app or OS. The web is a medium that puts the user in control. Native apps and environments do not. They give users an easy-to-digest experience. An experience controlled by commercial ideas and company goals. Yes, the experience is beautiful in a lot of cases. But all you get is a perishable good. The maintainer of the app controls what stays in older versions and when you have to pay for the next version. The maintainers of the OS dictate what an app can and can not do. Any app can close down and take your data with it. This is much harder on the web as data gets archived and distributed.

The web’s not cool anymore – and that’s OK
Evolution happens and we are seeing this right now. Browsers on desktop machines are not the end-all of human-computer interaction. That is one way of consuming and contributing to the web. The web is ubiquitous now. That means it is not as exciting for people as it was for us when we discovered and formed it. It is plumbing. How much do you know about the electricity and water grid that feeds your house? You never cared to learn about this – and this is exactly how people feel about the web now.
This doesn’t mean the web is dead – it just means it is something people use. So our job should be to make that experience as easy as possible. We need to provide a good service people can trust and rely on. Our aim should be reliability, not flights of fancy.
It is interesting to go back to the promises HTML5 gave us, back when it was the big hype and the replacement for Flash/Flex. When you do this, you’ll find a lot of great things that we have now without realising it. We complained when they didn’t work, and now that we have them – nobody seems to use them.

Re-visiting forms
Take forms for example. You can see the demos I’m about to show here on GitHub.
When it comes down to it, most “apps” in their basic form are just this: forms. You enter data, you get data back. Games are the exception to this, but they are only a small part of what we use the web for.
When I started as a web developer forms meant you entered some data. Then you submitted the form and you got an error message telling you what fields you forgot and what you did wrong.
<form action="/cgi-bin/formmail.pl">
  <ul class="error">
    <li>There were some errors:
      <ul>
        <li><a href="#name">Name is required</a></li>
        <li><a href="#birthday">Birthday needs to be in the format of DD/MM/YYYY</a></li>
        <li><a href="#phone">Phone can't have any characters but 0-9</a></li>
        <li><a href="#age">Age needs to be a number</a></li>
      </ul>
    </li>
  </ul>
  <p><label for="name">Contact Name *</label>
     <input type="text" id="name" name="name"></p>
  <p><label for="bday">Birthday</label>
     <input type="text" id="bday" name="bday"></p>
  <p><label for="lcolour">Label Colour</label>
     <input type="text" id="lcolour" name="lcolour"></p>
  <p><label for="phone">Phone</label>
     <input type="text" id="phone" name="phone"></p>
  <p><label for="age">Age</label>
     <input type="text" id="age" name="age"></p>
  <p class="sendoff">
    <input type="submit" value="add to contacts">
  </p>
</form>
This doesn’t look like much, but let’s just remember a few things here:
- Using labels we make this form available to all kinds of users, independent of ability
- You create a larger hit target for mobile users. A radio button with a label next to it means users can tap the word instead of trying to hit the small round interface element.
- As you use IDs to link labels and elements (unless you nest one in the other), you also have a free target to link to in your error links
- With a submit button you enable users to either hit the button or press enter to send the form. If you use your keyboard, that’s a pretty natural way of ending the annoying data-entry part.
HTML5 supercharged forms. One amazing thing is the required attribute we can put on any form field to make it mandatory and stop the form from submitting until it is filled in. We can define patterns for validation and we have higher-fidelity form types that render as use-case-specific widgets. If a browser doesn’t support those, all the end user gets is an input field. No harm done, as they can just type the content.
In addition to this, browsers added conveniences for users. Browsers remember content for aptly named and typed input elements so you don’t have to type in your telephone number repeatedly. This gives us quite an incredible user experience. A feature we fail to value as it appears so obvious.
Take this example.
<form action="/cgi-bin/formmail.pl">
  <p><label for="name">Contact Name *</label>
     <input type="text" required id="name" name="name"></p>
  <p><label for="bday">Birthday</label>
     <input type="date" id="bday" name="bday" placeholder="DD/MM/YYYY"></p>
  <p><label for="lcolour">Label Colour</label>
     <input type="color" id="lcolour" name="lcolour"></p>
  <p><label for="phone">Phone</label>
     <input type="tel" id="phone" name="phone"></p>
  <p><label for="age">Age</label>
     <input type="number" id="age" name="age"></p>
  <p class="sendoff">
    <input type="submit" value="add to contacts">
  </p>
</form>
There’s a lot of cool stuff happening here:
- The birthday date field has a placeholder telling the user what format is expected. You can type a date in or use the up and down arrows to enter it. The form automatically realises that there is no 13th month and that some months have fewer than 31 days. Some browsers even give you a full calendar popup.
- The colour picker is just that – a visual, high-fidelity colour picker (yes, I keep typing this “wrong”)
- The tel and number types not only limit the allowed characters, but also switch to the appropriate on-screen keyboards on mobile devices.
That’s a lot of great interaction we get for free. What about cutting down on the display of data to make the best of the limited space we have?
Originally, this is what we had select boxes for; they render well, but are not fun to use. As someone living in England and having to wonder whether it is “England”, “Great Britain” or “United Kingdom” in a massive list of countries, I know exactly how that feels. Especially on small touch or stylus devices they can be very annoying.
<form action="/cgi-bin/formmail.pl">
  <p>
    <label for="lang">Language</label>
    <select id="lang" name="lang">
      <option>arabic</option>
      <option>bulgarian</option>
      <option>catalan</option>
      […]
      <option>kinyarwanda</option>
      <option>wolof</option>
      <option>dari</option>
      <option>scottish_gaelic</option>
    </select>
  </p>
  <p class="sendoff"> <input type="submit" value="add to contacts"> </p>
</form>
However, as someone who uses the keyboard to navigate through forms, I learned early on that these days select boxes have become more intelligent. Instead of having to scroll through them by clicking the tiny arrows or using the arrow keys, you can start typing the first letters of the option you want to choose. That way you can select much faster.
This only works with words beginning with the letter sequence you type. A proper autocomplete should also match character sequences in the middle of an option. For this, HTML5 has a new element called datalist.
<form action="/cgi-bin/formmail.pl">
  <p>
    <label for="lang">Language</label>
    <input type="text" name="lang" id="lang" list="languages">
    <datalist id="languages">
      <option>arabic</option>
      <option>bulgarian</option>
      <option>catalan</option>
      […]
      <option>kinyarwanda</option>
      <option>wolof</option>
      <option>dari</option>
      <option>scottish_gaelic</option>
    </datalist>
  </p>
  <p class="sendoff"> <input type="submit" value="add to contacts"> </p>
</form>
This one extends an input element with a list attribute and works just like you’d expect it to.
There is an interesting concept here. Instead of giving the select box the same feature and rolling the two up into the combo box control that exists in other UI libraries, the HTML5 working group chose to enhance the input element. This is consistent with the other new input types.
However, it feels odd that for browsers that don’t support the datalist element all this content in the page would be useless. Jeremy Keith thought the same and came up with a pattern that allows for a select element in older browsers and a datalist in newer ones:
<form action="/cgi-bin/formmail.pl">
  <p>
    <label for="lang">Language</label>
    <datalist id="languages">
      <select name="lang">
        <option>arabic</option>
        <option>bulgarian</option>
        <option>catalan</option>
        […]
        <option>kinyarwanda</option>
        <option>wolof</option>
        <option>dari</option>
        <option>scottish_gaelic</option>
      </select>
      <div>or specify: </div>
    </datalist>
    <input type="text" name="lang" id="lang" list="languages">
  </p>
  <p class="sendoff"> <input type="submit" value="add to contacts"> </p>
</form>
This works as a datalist in HTML5 compliant browsers.
In older browsers, you get a sensible fallback, re-using all the option elements that are in the document.
This is not witchcraft, but is based on a firm understanding of how HTML and CSS work. Both are fault tolerant: if a mistake happens, it gets skipped and the rest of the document or style sheet keeps getting applied.
In this case, older browsers don’t know what a datalist is. All they see is a select box and an input element, as browsers render the content of unknown elements. The unknown list attribute on the input element isn’t understood, so the browser skips that, too.
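If you want to check this from script, a common feature test (a sketch for browser environments) is to see whether a freshly created datalist exposes an options collection:

```html
<script>
// Supporting browsers give datalist elements an options collection;
// older browsers treat datalist as an unknown element and don't.
var supportsDatalist = 'options' in document.createElement('datalist');
if (!supportsDatalist) {
  // Older browser: the select inside the datalist is rendered
  // and used as the fallback, so nothing else needs to happen.
}
</script>
```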
HTML5 browsers see a datalist element. Per the standard, all it can contain are option elements. That’s why neither the select, nor the input, nor the text above it get rendered. They are not valid there, so the browser removes them. Everybody wins.

A craving for control
Browsers and the standards they implement are full of clever and beautiful things like that these days. And we loudly and angrily demanded to have them when they were being defined. We tested, we complained, we showed what needed to be done to make tomorrow work today, and then we forgot about it and moved on to chase the next innovation.
How come that happens repeatedly? Why don’t we, at some point, stop and see how many great toys we have to play with? It is pretty simple: control.
But more on that in part 3 of this post.
Recently someone asked us how to change the master password, and today we share with you the steps to reset it in the browser and in the mail client. In case you didn't know, a master password lets you protect your locally stored usernames and passwords. If you have forgotten your master password, you must reset it.
For security reasons, and to prevent theft of your data, resetting your master password deletes all the usernames and passwords you have stored locally.
Steps to reset the password in Firefox:
- In the Firefox address bar, enter the following address:
- Press the Enter key.
- The "Reset Master Password" page will appear. Click Reset to reset your master password.
Steps to reset the password in Thunderbird:
- In the menu, go to Herramientas (Tools) and choose Consola de errores (Error Console).
- In the Code field, type openDialog("chrome://pippki/content/resetpassword.xul") and click Evaluate. A confirmation window will appear where you can reset the master password.
Finally, if you want to further strengthen your security and privacy in Firefox, you can install some of the add-ons published in our gallery.
The custom made Firefox cufflinks have arrived! I’ll be working out shipping costs, and posting them out this week.
Mozilla ships new Firefox browser - kills off 14-year-old bug
After 14 years, Mozilla has now found the time to deal with a bug that affects many Firefox users. 28 September 2015 at 11:03. Kim Stensdal ...
Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us an email! Want to get involved? We love contributions.
- If you use unsafe, you should be using compiletest.
- Running Rust on the Rumprun unikernel.
- Survey of licenses used by Rust projects on crates.io.
- An introduction to timely dataflow, part 3. Learn more about timely dataflow by writing a breadth-first search on random graphs.
- These weeks in Servo 34.
- Get data from a URL in Rust.
- Debugger state machine in Rust.
- rust-todomvc. Implementation of TodoMVC in Rust in the browser.
- zas. A tool to help with local web development, inspired by Pow.
- Serve. Command line utility to serve the files in the current directory.
- Rodio. Rust audio playback library.
- io-providers. Defines "provider" traits and implementations for different types of I/O operations.
- rust-sorty. A Rust lint to help with the sorting of uses, mods & crate declarations.
- walkdir. Rust library for walking directories recursively.
88 pull requests were merged in the last week.

Notable changes
- Correctly walk import lists in AST visitors.
- Remove region variable leaks from higher_ranked_sub().
- Always pass /DEBUG flag to MSVC linker.
- Do not drop_in_place elements of Vec<T> if T doesn't need dropping.
- Make function pointers implement traits for up to 12 parameters.
- Use BufWriter in fasta-redux for a nice speedup.
- Upgrade hoedown to 3.0.5.
- Add no_default_libraries target linker option.
- Remove the deprecated box(PLACE) syntax.
- Implement AsMut for Vec.
New Contributors

- Amit Aryeh Levy
- David Elliott
- Reza Akhavan
- Sebastian Wicki
- Xavier Shay
Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:
- Promote the libc crate from the nursery.
- Add an internationalization framework to the Rust compiler.
- Add an alias attribute to #[link] and -l.
Upcoming Events

- 9/30. RustBerlin Hack and Learn.
- 10/1. Rust Meetup Hamburg: Rusty Project Presentations.
- 10/12. Seattle Rust Meetup.
No jobs listed for this week. Tweet us at @ThisWeekInRust to get your job offers listed here!

Crate of the Week
I'd like to add an appeal to all supporters of "repeatable tests": don't let the worthy goal of repeatability override the worthier goal of actually finding bugs. Your deterministic tests usually cannot even make a dent in the vast space of possible inputs. With a bit of randomness thrown in, you can greatly improve your chances and thus make your tests more valuable. Also, with quickcheck you get to see a minimized input that makes your test fail, which you can then easily turn into a repeatable test.

Quote of the Week
If one regards Rust as a critique to C++, it certainly should be seen as a constructive critique. — llogiq on /r/cpp.
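The idea behind quickcheck mentioned above (random inputs plus a property that must always hold) can be sketched in plain Rust without the crate. This is not quickcheck's API, just the concept, using a toy generator:

```rust
/// Property under test: reversing a vector twice gives the original back.
fn prop_double_reverse(v: &[u32]) -> bool {
    let mut r: Vec<u32> = v.to_vec();
    r.reverse();
    r.reverse();
    r == v
}

/// Tiny linear congruential generator so the sketch has no dependencies;
/// quickcheck uses a proper RNG and can also shrink failing inputs.
struct Lcg(u64);
impl Lcg {
    fn next(&mut self) -> u64 {
        self.0 = self
            .0
            .wrapping_mul(6364136223846793005)
            .wrapping_add(1442695040888963407);
        self.0
    }
}

fn main() {
    let mut rng = Lcg(42);
    for _ in 0..100 {
        // Generate a random vector of random length and check the property.
        let len = (rng.next() % 16) as usize;
        let v: Vec<u32> = (0..len).map(|_| (rng.next() % 1000) as u32).collect();
        assert!(prop_double_reverse(&v), "property failed for {:?}", v);
    }
    println!("100 random cases passed");
}
```

Because quickcheck additionally shrinks a failing input to a minimal counterexample, the failure it reports is usually small enough to paste straight into a deterministic regression test.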
Mozilla overhauls online code editor Thimble
Mozilla has relaunched its online code editor Thimble. As part of the Mozilla Learning Network, Thimble is aimed at HTML beginners and makes it easy to create and publish web pages directly in the browser. Mozilla ...
Firefox has supported transform-origin on html elements since version 16 (even earlier if you count -moz-transform-origin), but it’s been a bit hit and miss using it on SVG elements.
Percentage units on SVG elements did not work at all, and neither did center, of course, since that’s just an alias for 50%.
Fortunately that’s all about to change. Firefox 41 and 42 have a new pref svg.transform-origin.enabled that you can use to enable transform-origin support for SVG elements. Even better, Firefox 43 will not require a pref at all, it will support transform-origin straight out of the box.
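As a sketch of what that enables (markup and values are made up for illustration), you could rotate an SVG rectangle around the centre of the canvas with percentages, which previously would simply have been ignored:

```html
<svg width="200" height="200">
  <style>
    /* With SVG transform-origin support (behind the
       svg.transform-origin.enabled pref in Firefox 41/42,
       on by default from Firefox 43) these percentage values
       are honoured instead of being ignored. Here 50% 50% is
       the centre of the 200x200 canvas, which is also the
       centre of the rect below. */
    rect {
      transform: rotate(45deg);
      transform-origin: 50% 50%;
    }
  </style>
  <rect x="50" y="50" width="100" height="100" fill="teal"/>
</svg>
```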