During the 38 cycle, we are going to publish a release between 38 & 39 (called 38.0.5).
In order to continue the development of 38 & 38.0.5 in parallel, we merged mozilla-beta (m-b) into mozilla-release (m-r).
Before the merge:
- m-b = 38.0.0 beta
- m-r = 37.0.2
After the merge:
- m-b = 38.0.5 beta (even if we won't build any for now)
- m-r = 38.0 beta (next one being beta 7)
We will do regular m-r => m-b merges to make sure 38.0.5 is up to date.
This does not impact aurora (aka 39). In case we have to make a new 37 dot release, we would use a relbranch.
The m-b tree is closed to avoid any confusion.
Last but not least, uplift requests for 38 should be filed against mozilla-release, while requests for 38.0.5 should be filed against mozilla-beta. However, release managers and sheriffs will translate the information if an uplift request is filed against the wrong branch.
The schedule has been updated.
I’m blogging about the development of a new product in Mozilla, look here for my other posts in this series
I teeter between thinking big about PageShot and thinking small. The benefit of thinking small is focus: how can this tool provide value to people who don't yet know whether it would provide any value? And: how do we get it done?
Still, I can’t help thinking big too. The web gave us this incredible way to talk about how we experience the web: the URL. An incredible amount of stuff has been built on that: search and sharing and archiving, ways to draw people into content and let people skim. Indexes, summaries, APIs, and everyone gets to mint their own URLs and accept anyone else’s URLs, pointing to anything.
But not everyone gets to mint URLs. Developers and site owners get to do that. If something doesn’t have a URL, you can’t point to it. And every URL is a pointer, a kind of promise that the site owner has to deliver on, and sometimes doesn’t choose to, or they lose interest.
I want PageShot to give a capability to users, the ability to address anything, because PageShot captures the state of any page at a moment, not an address so someone else can try to recreate that page. The frozen page that PageShot saves is handy for things like capturing or highlighting parts of the page, which I think is the feature people will find attractive, but that’s just a subset of what you might want to do with a snapshot of web content. So I also hope it will be a building block. When you put content into PageShot, you will know it is well formed, you will know it is static and available, you can point to exact locations and recover those locations later. And all via a tool that is accessible to anyone, not just developers. I think there are neat things to be built on that. (And if you do too, I’d be interested in hearing about your thoughts.)
In the past two weeks, we merged 100 pull requests
Lars and Mike wrote a blog post on Servo on the Samsung OSG blog.
Notable additions
- Diego started off our WebGL implementation
- Patrick improved scrolling performance
- Guro Bokum added timeline support to our Firefox devtools implementation
- Mátyás added support for rectangles, lineCap / lineJoin, global compositing/blending, clipping path and save/restore for canvas
- Ms2ger added support for Servo-specific wpt tests, the replacement for our contenttest suite, which was removed
- Josh implemented child reparenting and node removal
- Joe Seaton extended CSS animation support to most CSS properties
- Patrick tweaked block size computation for height:auto elements with padding and layout of nested elements with different vertical-align
- Ms2ger added a JS runtime struct
- Anthony Ramine made another set of DOM changes
This is a simple demo of our new WebGL support.
Meetings
- We’re trying out Reviewable for code review instead of Critic. It’s pretty neat!
- Homu is working out very well for us
- We ought to have some new team members soon!
- Integration with the Firefox timeline devtool has landed
Yesterday I discussed the second Supreme Court oral argument I attended in a recent trip to the Supreme Court. Today I describe the basic controversy in the first oral argument I attended, in a case potentially implicating the First Amendment. First Amendment law is complicated, so this is the first of several posts on the case.
Texas specialty license plates
State license plates, affixed to vehicles to permit legal use on public roads, typically come in one or very few standard designs. But in many states you can purchase a specialty plate with special imagery, designs, coloring, &c. (Specialty plates are distinct from “vanity” plates. A vanity plate has custom letters and numbers, e.g. a vegetarian might request LUVTOFU.) Some state legislatures direct that specialty designs delivering particular messages be offered. And some state legislatures enact laws that permit organizations or individuals to design specialty plates.
The state of Texas sells both legislatively-requested designs and designs ordered by organizations or individuals. (The latter kind requires an $8000 bond, covering ramp-up costs until a thousand plates are sold.) The DMVB evaluates designs for compliance with legislated criteria: for example, reflectivity and legibility concerns. One criterion allows (but does not require) Texas to reject “offensive” plates.
The department may refuse to create a new specialty license plate if the design might be offensive to any member of the public. (Texas Transportation Code § 504.801(c))
An “offensive” specialty plate design
Texas rejected one particular design for just this reason. As they say, a picture is worth a thousand words: the Texas Sons of Confederate Veterans’s proposed specialty plate, incorporating a Confederate flag. (Yes, Texas — including Rick Perry and Greg Abbott both — rejected this design.)
For those unfamiliar with American imagery: the central feature of the Texas SCV insignia is the Confederate flag. Evoking many things, but in some minds chiefly representative of revanchist desire to resurrect Southern racism, Jim Crow, and the rest of that sordid time. Such minds naturally find the Confederate flag offensive.
Is the SCV actually racist? (Assuming you don’t construe mere use of the flag as prima facie evidence.) A spokesman denies the claim. Web searches find some who disagree, and others who believe the group is (or was) of divided view. I find no explicit denunciation of racism on the SCV’s website, but I searched only very briefly. Form your own conclusions.
Tomorrow, specialty plate programs in the courts, and the parties’ arguments.
Duration: 10 minutes This is a weekly status meeting, every Wednesday, that helps coordinate the shipping of our products (across 4 release channels) in order...
A smaller beta release.
In this release, we disabled screen sharing (it will arrive with 38.0.5); reading list and reading view are going to be disabled in beta 7. We also took some stability fixes (as usual) and some polishing patches.
- 32 changesets
- 71 files changed
- 857 insertions
- 313 deletions
Extension / Occurrences:
- js: 25
- cpp: 11
- jsm: 7
- java: 6
- mn: 5
- ini: 4
- h: 4
- html: 2
- css: 2
- xul: 1
- list: 1
- json: 1
- idl: 1
Module / Occurrences:
- browser: 29
- toolkit: 9
- mobile: 7
- dom: 7
- js: 6
- testing: 4
- layout: 3
- widget: 2
- modules: 1
- media: 1
- docshell: 1
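Tallies like the ones above are straightforward to derive from the list of changed file paths. A minimal sketch (the file list here is invented for illustration; the real list would come from the repository history):

```python
from collections import Counter
from pathlib import PurePosixPath

# Hypothetical sample of changed paths; in practice this would be the
# full list of files touched by the beta's changesets.
changed_files = [
    "browser/base/content/browser.js",
    "browser/components/loop/MozLoopService.jsm",
    "toolkit/mozapps/update/nsUpdateService.js",
    "dom/media/webrtc/MediaEngine.cpp",
    "mobile/android/base/GeckoEditable.java",
]

# Tally by file extension (the "Extension / Occurrences" table).
by_extension = Counter(PurePosixPath(f).suffix.lstrip(".") for f in changed_files)

# Tally by top-level module (the "Module / Occurrences" table).
by_module = Counter(f.split("/", 1)[0] for f in changed_files)

print(by_extension.most_common())
print(by_module.most_common())
```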
List of changesets:
- Carsten Tomcat Book: Bug 1155679 - "mozharness update to ref 4f1cf3369955" on a CLOSED TREE. r=ryanvm, a=test-only - 6103268d785d
- Terrence Cole: Bug 1152177 - Make jsid and Value pre barriers symetrical. r=jonco, a=abillings - d79194507f32
- Mats Palmgren: Bug 1153478 - Part 1: Add nsInlineFrame::StealFrame and make it deal with being called on the wrong parent for aChild (due to lazy reparenting). r=roc, a=sledru - 18b8b10f2fbd
- Mats Palmgren: Bug 1153478 - Part 2: Remove useless assertions. r=roc, a=sledru - e1dd0d7756c5
- Mike Shal: Bug 1152031 - Bump mozharness.json to 23dee28169d6. a=test-only - 4411b07ee6bd
- Gijs Kruitbosch: Bug 1153900 - Fix IE cookies migration. a=sylvestre - 55837b9aa111
- Jim Chen: Bug 1072529 - Only create GeckoEditable once. r=esawin, a=sledru - 69e54b268783
- Paul Adenot: Bug 1136360 - Take into account the output device latency in the clock, and be more robust about rounding error accumulation, in cubeb_wasapi.cpp. r=kinetik, a=sledru - fff936b47a9f
- Mike de Boer: Bug 1155195 - Disable Loop screensharing for Fx38. r=Standard8, a=sledru - 6a5c3aa5b912
- Gijs Kruitbosch: Bug 1153900 - Add fixes to tests for aurora. rs=me, a=RyanVM - b158e9bdd8a0
- Robert Strong: Bug 1154591 - getCanStageUpdates has incorrect checks for Windows. r=spohl, a=sledru - 86d3b1103197
- Ed Lee: Bug 1152145 - Filter for specific suggested tiles adgroups/buckets/frecent_sites lists with display name. r=adw, a=sylvestre - e66ad17db13f
- Gijs Kruitbosch: Bug 1147487 - Don't bother sending reader mode updates when isArticle is false. r=margaret, a=sledru - 125ec6c54576
- Ehsan Akhgari: Bug 1151873 - Stop forcing text/plain-only content being copied to the clipboard when an ancestor of the selected node has significant whitespace. r=roc, a=sledru - 7e31d76c4d7b
- Margaret Leibovic: Bug 785549 - Use textContent instead of innerHTML to set domain and credits in reader view. r=Gijs, a=sledru - 38e095acde46
- Paul Kerr [:pkerr]: Bug 1154482 - about:webrtc intermittently throws a js type error. r=jib, a=sledru - 899ee022ed4c
- Jared Wein: Bug 1134501 - UITour: Force page into Reader View automatically whenever the ReaderView/ReadingList tour page is loaded. r=gijs, a=dolske - e5d6dc48f6de
- Gijs Kruitbosch: Bug 1152219 - Make reader mode node limit a pref, turn off entirely for desktop because of isProbablyReaderable. r=margaret, a=sledru - 4a98323f8e68
- Gijs Kruitbosch: Bug 1124217 - Don't gather telemetry for windows that have died. r=mconley, a=sledru - 849bf3c58408
- Blake Winton: Bug 1149068 - Use the correct font for the Sans Serif font button. ui-r=maritz, r=jaws, r=margaret, a=sledru - 44de10db57a6
- Gijs Kruitbosch: Bug 1155692 - Include latest Readability/JSDOMParser changes into m-c. a=sledru - eb5e2063637b
- Bas Schouten: Bug 1150376 - Do not try to use D3D11 for popup windows. r=jrmuizel, a=sledru - 746934eab883
- Bas Schouten: Bug 1155228 - Only use basic OMTC for popups when using WARP. r=jrmuizel, a=sledru - 4dc8d874746b
- Olli Pettay: Bug 1153688 - Treat JS Symbol as void on C++ side of Variant. r=bholley, a=abillings - 18af6cfb3b86
- Chenxia Liu: Bug 1154980 - Localize first run pager titles. r=ally, a=sledru - 65cf03fc2bc9
- Gijs Kruitbosch: Bug 1141031 - Fix in-content prefs dialogs overflowing. r=jaws, a=sledru - 9117f9af554e
- Boris Zbarsky: Bug 1155788 - Make the Ion inner-window optimizations work again. r=efaust, a=sledru - e4192150f53a
- Gijs Kruitbosch: Bug 1150520 - Disable EME for Windows XP. r=dolske, a=sledru - 704989f295eb
- Luke Wagner: Bug 1152280 - OdinMonkey: tighten changeHeap mask validation. r=bbouvier, a=abillings - 5dc0d44c8dbd
- Boris Zbarsky: Bug 1154366 - Pass in a JSContext to StructuredCloneContainer::InitFromJSVal so it will throw its exceptions somewhere where people might see them. r=bholley, ba=sledru - 72f1b4086067
- Ryan VanderMeulen: Bug 1150376 - Fix rebase typo. a=bustage - f3dd042acc18
- Ralph Giles: Bug 1144875 - Disable EME on ESR releases. r=dolske, a=sledru - 630336da65f2
Watch mconley livehack on Firefox Desktop bugs!
For a long time now, I’ve been thinking about three big challenges in open science:
- Coding is hard enough by any measure – coding for sharing & reuse is even more demanding. Given that our traditional education system isn’t yet imparting these skills to scientists & researchers, and given that it takes sustained practice over a long time to integrate these skills into our research, how can we help build those skills at scale?
- Many students and early career researchers feel intensely isolated and unsupported in their efforts to learn to code, leading to fear of embarrassment before their colleagues, struggles with imposter syndrome, and uncertainty on how or even if to proceed with their research careers.
- The production of open source software in support of open science is not enough on its own; we also need to lower the barriers to discoverability and collaboration so that those projects actually get reused, as was done at the NCEAS Codefest last year – but we need to do it at scale and at home, without requiring expensive trips to conferences.
At some level, these are all the same problem: they are all endemic to a fragmented community. Taken all together, the scientific community has a huge amount of programming knowledge, but it’s split up across individuals who rarely have the opportunity to share that knowledge. Crippling self-doubt often arises not from genuine inadequacy, but from a loss of perspective that comes from working in isolation, where it becomes possible to imagine that we are the worst of all our peers. And as we saw at the NCEAS event, the so-called discoverability problem evaporates very quickly with even a small group of people pooling their experience.
The skills & knowledge we need are there in pieces – we have to find a way to assemble them in a way that elevates us all. The Mozilla Science Lab thinks we can do this via a loose federation of Study Groups.
Our Powers Combined
I started thinking about Study Groups last Autumn, after a conversation with Rachel Sanders (PyLadies San Francisco). Sanders described regular small PyLadies meetups where learners support each other as they explore a tutorial, project or idea; the emphasis is on communal, participatory learning, lecturing and leadership roles take a distant back seat, and learning occurs over the long term. By blending these ideas with something like the journal clubs familiar to many academics, I think we can build Study Groups that powerfully address the questions I started with. I’d like Study Groups to do a few things:
- Promote learning via a network effect of skill sharing. By highlighting the authentic, practice-driven use of code, tools and packages led by the researchers who actually use them in the wild, we create an exchange of skills that scales, grows richer and tracks real scientific practice the more people participate.
- Create and normalize a custom of discussing code as a research object. Scientists and researchers need forums where the focus is on code and the methodologies surrounding it, in order to create space for the conversations that lead to discovering new tools and improving personal practice.
- Acknowledge the ongoing process of learning to code by putting that learning process out in the open & making it shared among colleagues, in order to dispel the misconception that these skills are intuitive, obvious, or in any way inherent.
In practice, these things can be achieved by getting together in an open meetup anywhere from once a month to once a week, where individuals can lead follow-along demos, where there is co-working space to explore and experiment together, and where everyone feels comfortable asking the group for ideas and help.
Predecessors & Beta Tests
A number of powerful examples of similar groups predate this project, and I had the good fortune to learn from them over the past several months. Noam Ross leads the Davis R User’s Group, a tremendously successful R meetup that has generated a wealth of teaching content on R over the past few years; Ross also organized a recent Ask Us Anything panel on the Mozilla Science Forum, and invited the leads from a number of different similar programs to sit in and share their stories and experiences. I met Rob Johnson and others behind Data Science Hobart while I was in Australia recently; DaSH is doing an amazing job of pulling in speakers and demo leaders from an eclectic range of disciplines and interests, to great effect. And I’ve recently had the privilege of sitting in on lessons from the UBC Earth & Ocean Science coding workout group, which informed my thinking around community-led demos on tools as they are actually used, such as Kathi Unglert’s work on awk and Nancy Soontiens’s basemap demo.
From these examples and others, I and a team of people at UBC began discussing what a Study Group could look like. For the first few weeks, we met over beers at a university pub, in the Hacky Hour tradition started by our colleagues in Melbourne at the Research Bazaar. Enthusiasm was high – people were very keen to have a place to come and learn about coding in the lab, and find out what that would look like. Soon, with the help of many but particularly with the energetic leadership and community organizing of Amy Lee, we had booked our first event; Andrew MacDonald led a packed (and about 2/3 female) room through introductory R, and within 24 hours attendees had stepped up to volunteer to lead half a dozen further sessions on more advanced topics in R from their research.
There was no shortage of enthusiasm at UBC for the opportunities a Study Group presented, and I see no reason why UBC should be a unique case; the Mozilla Science Lab is prepared to help support and iterate on similar efforts where you are. All that’s required to start a Study Group at your home institution is your leadership.
— Amy Lee (@minisciencegirl) April 15, 2015
Your Turn
In order to support you as you start your own Study Group, the Mozilla Science Lab has a collection of tools for you:
- We’ve built a template website using GitHub Pages that you can fork and remix for your own use. Not only is the website served automagically from GitHub, but we took a page from Nodeschool.io, and set things up to direct conversation & event listings to your issue tracker, thus adding a free message board & mailing list. Check out the Vancouver R Study Group’s use of the page; setup instructions are in the README, as well as on YouTube – and as always, feel free to open an issue or contact us at email@example.com if something isn’t working for you.
- We’ve written a first draft of the Study Group Handbook, which pulls in lessons learned from other groups and guides newcomers through the process of setting up their own, including a step-by-step guide for your first few events, lesson resources, and more. This is a work in progress, and it’ll only get better as more people try it out and send us feedback!
- We have begun to collect lesson plans & resources delivered in similar meetups for reuse community-wide. If you’d like to maintain your own lessons, send us a link and we’ll point to your work from our Study Group Handbook; if you’d rather we do the maintenance for you, send a pull request to our collection and we’ll make sure your work helps elevate the entire community.
- Finally, get on the map! Whether you start a Study Group with our tools, or you’re in one running on its own, send us a link and a location and we’ll add you to the map of Study Groups Worldwide, so others in your community can find your meetup, and we can all see the global community that is emerging around working together.
We’re very much looking forward to working with you to help you spool up your own Study Group, and to learning from your experiences how to make this program what the research community needs it to be; we hope you’ll join us.
At the end of last year, Cassie raised the question of ‘how to measure quality?’ on our metrics mailing list, which is an excellent question. And like the best questions, I come back to it often. So, I figured it needed a blog post.
There are a bunch of tactical opportunities to measure quality in various processes, like the QA data you might extract from a production line for example. And while those details interest me, this thought process always bubbles up to the aggregate concept: what’s a consistent measure of quality across any product or service?
I have a short answer, but while you’re here I’ll walk you through how I get there. Including some examples of things I think are of high quality.
One of the reasons this question is interesting is that it’s quite common to divide data into quantitative and qualitative buckets, often splitting the crisp metrics we use as our KPIs from the things we think indicate real quality. But if you care about quality, and you operate at ‘scale’, you need a quantitative measure of quality.
On that note, in a small business or on a small project, the quality feedback loop is often direct to the people making design decisions that affect quality. You can look at the customers in your bakery and get a feel for the quality of your business and products. This is why small initiatives are sometimes immensely high in quality but then deteriorate as they attempt to replicate and scale what they do.
What I’m thinking about here is how to measure quality at scale.
Some things of quality, IMHO:
This axe is wonderful. As my office is also my workshop, this axe is usually near to hand. It will soon be hung on the wall. Not because I am preparing for the zombie apocalypse, but because it is both useful as a tool, and as a visual reminder about what it means to build quality products. If this ramble of mine isn’t enough of a distraction, watch Why Values are Important to understand how this axe relates to measures of quality especially in product design.
This toaster is also wonderful. We’ve had this toaster more than 10 years now, and it works perfectly. If it were to break, I can get the parts locally and service it myself (it’s deliberately built to last and be repaired). It was an expensive initial purchase, but works out cheap in the long run. If it broke today, I would fix it. If I couldn’t fix it for some extreme reason, I would buy the same toaster in a blink. It is a high quality product.
This is the espresso coffee I drink every day. Not the tin, it’s another brand that comes in a bag. It has been consistently good for a couple of years until the last two weeks when the grind has been finer than usual and it keeps blocking the machine. It was a high-quality product in my mind, until recently. I’ll let another batch pass through the supermarket shelves and try it again. Otherwise I’ll switch.
This spatula looks like a novelty product and typically I don’t think very much of novelty products in place of useful tools, but it’s actually a high quality product. It was a gift, and we use it a lot and it just works really well. If it went missing today, I’d want to get another one the same. Saying that, it’s surprisingly expensive for a spatula. I’ve only just looked at the price, as a result of writing this. I think I’d pay that price though.
All of those examples are relatively expensive products within their respective categories, but price is not the measure of quality, even if price sometimes correlates with quality. I’ll get on to this.
How about things of quality that are not expensive in this way?
What is quality music, or art, or literature to you? Is it something new you enjoy today? Or something you enjoyed several years ago? I personally think it’s the combination of those two things. And I posit that you can’t know the real quality of something until enough time has passed. Though ‘enough time’ varies by product.
Ten years ago, I thought all the music I listened to was of high quality. Re-listening today, I think some of it was high-quality. As an exercise, listen to some music you haven’t for a while, and think about which tracks you enjoy for the nostalgia and which you enjoy for the music itself.
In the past, we had to rely on sales as a measure of the popularity of music. But like price, sales doesn’t always relate to quality. Initial popularity indicates potential quality, but not quality in itself (or it indicates manipulation of the audience via effective marketing). Though there are debates around streaming music services and artist payment, we do now have data points about the ongoing value of music beyond the initial parting of listener from cash. I think this can do interesting things for the quality of music overall. And in particular that the future is bleak for album filler tracks when you’re paid per stream.
Another question I enjoy thinking about is why over the centuries, some art has lasting value, and other art doesn’t. But I think I’ve taken enough tangents for now.
So, to join this up.
My view is that quality is reflected by loyalty. And for most products and services, end-user loyalty is something you can measure and optimize for.
Loyalty comes from building things that both last, and continue to be used.
Every other measurable detail about quality adds up to that.
Reducing the defect rate of component X by 10% doesn’t matter unless it impacts end-user loyalty.
It’s harder to measure, but this is true even for things which are specifically designed not to last; in particular, “experiences”: a once-in-a-lifetime trip, a festival, a learning experience, and so on. If these experiences are of high quality, the memory lasts and you re-live them and re-use them many times over. You tell stories of the experience and you refer your friends. You are loyal to the experience.
Bringing this back to work.
For MoFo colleagues reading this, our organization goals this year already point us towards Quality. We use the industry term ‘Retention’. We have targets for Retention Rates and Ongoing Teaching Activity (i.e. retained teachers). And while the word ‘retention’ sounds a bit cold and business-like, it’s really the same thing as measuring ‘loyalty’. I like the word loyalty but people have different views about it (in particular whether it’s earned or expected).
This overarching theme also aligns nicely with the overall Mozilla goal of increasing the ‘number of long term relationships’ we hold with our users.
Language is interesting though. Thinking about a ‘20% user loyalty rate’ 7 days after sign-up focuses my mind slightly differently than a ‘20% retention rate’. ‘Retention’ can sound a bit too much like ‘detention’, which might explain why so many businesses strive for consumer ‘lock-in’ as part of their business model.
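To make the metric concrete, here is a sketch of how a "20% user loyalty rate 7 days after sign-up" might be computed from sign-up and activity timestamps. The data and the function name are invented for illustration; any real pipeline would pull these from actual logs:

```python
from datetime import date, timedelta

def day7_retention_rate(signups, activity, window=timedelta(days=7)):
    """Fraction of users seen again at least `window` after signing up."""
    retained = sum(
        1
        for user, signed_up in signups.items()
        if any(seen >= signed_up + window for seen in activity.get(user, []))
    )
    return retained / len(signups) if signups else 0.0

# Hypothetical sign-up dates and later activity dates per user.
signups = {
    "ada": date(2015, 4, 1),
    "ben": date(2015, 4, 1),
    "cal": date(2015, 4, 2),
    "dot": date(2015, 4, 3),
    "eli": date(2015, 4, 3),
}
activity = {
    "ada": [date(2015, 4, 2), date(2015, 4, 9)],  # seen on day 8: retained
    "ben": [date(2015, 4, 3)],                    # only day 2: not retained
    "cal": [],                                    # never returned
}

print(f"{day7_retention_rate(signups, activity):.0%}")  # → 20%
```

The interesting design choice is the window: a shorter one inflates the number with novelty visits, while a longer one starts to measure the loyalty this post is actually about.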
When I talked to OpenMatt about this recently, he put a better MoFo frame on it than loyalty: Retention is a measure of how much people love what we’re doing. When we set goals for increasing retention rate, we are committing to building things people love so much that they keep coming back for more.
- You can measure quality by measuring loyalty
- I’m happy retention rates are one of our KPIs this year
My next post will look more specifically at the numbers and how retention rates factor into product growth.
And I’ll try not to make it another essay.
I released rr 3.1 just now. It contains reverse execution support, but I'm not yet convinced that feature is stable enough to release rr 4.0. (Some people are already using it with joy, but it's much more fun to use when you can really trust it to not blow up your debugging session, and I don't yet.) We needed to do a minor release since we've added a lot of bug fixes and system-call support since 3.0, and for Firefox development rr 3.0 is almost unusable now. In particular various kinds of sandboxing support are being added to desktop Linux Firefox (using unshare and seccomp syscalls) and supporting those in rr was nontrivial.
In the future we plan to do more frequent minor releases to ensure low-risk bug fixes and expanded system-call support are quickly made available in a release.
An increasing number of Mozilla developers are trying rr, which is great. However, I need to figure out a way to measure how much rr is being used successfully, and the impact it's having.
Since 2012, pioneering educators and web activists have been reflecting and developing answers to the question, “What is web literacy?”
These conversations have shaped our Web Literacy Map, a guiding document that outlines the skills and competencies that are essential to reading, writing, and participating on the Web.
Just the other week, we wrapped up improvements to the Web Literacy Map, proudly unveiling version 1.5. Thank you to all who contributed to that discussion, and to Doug Belshaw for facilitating it.
As we design and test offerings to foster web literacy, we are also determining how these skills fit into a larger web journey. Prompted by user research in Bangladesh, India, Kenya, and beyond, we’re asking: What skill levels and attitudes encourage people to learn more about web literacy? And how can one wield the Web after learning its fundamentals?
Mozilla believes this is an important question to reflect on in the open. With this blog post, we’d like to start a series of discussions, and warmly invite you to think this through with us.
What is the Web Journey?
As we talked to 356 people in four different countries (India, Bangladesh, Kenya, and Brazil) over the past six months, we learned how people perceive and use the Web in their daily lives. Our research teams identified common patterns, and we gathered them into one framework called “The Web Journey.”
This framework outlines five stages of engagement with the Web:
- Unaware: Have never heard of the Web, and have no idea what it is (for example, these smartphone owners in Bangladesh)
- No use: Are aware of the existence of the Web, but do not use it, whether from rejection (“the Web is not for me, women don’t go online”), inability (“I can’t afford data”), or perceived inability (“The Web is only for businessmen”)
- Basic use: Are online, and are stuck in the “social media bubble,” unaware of what else is possible (Internet = Facebook). These users have little understanding of the Web, and don’t leverage its full range of possibilities
- Leverage: Are able to seize the opportunities the Web has to offer to improve their quality of life (to find jobs, to learn, or to grow their business)
- Creation: From the tinkerer to the web developer, creators understand how to build the Web and are able to make it their own
You can read the full details of the Web Journey, with constraints and triggers, in the Webmaker Field Research Report from India.
Why do the Web Literacy Map and the Web Journey fit together?
While the Web Literacy Map explores the skills needed, the Web Journey describes various stages of engagement with the Web. It appears certain skills may be more necessary for some stages of the Web Journey. For example: Is there a list of skills that people need to acquire to move from “Basic use” to “Leverage”?
As we continue to research digital literacy in Chicago and London (April – August 2015), we’ll seek to understand how to couple skills listed in the Web Literacy Map with steps of engagement outlined in the Web Journey. Bridging the two can help us empower Mozilla Clubs all around the world.
What are the discussion questions?
To kick off the conversation, consider the following:
- Literacy isn’t an on/off state. It’s more a continuum, and there are many learning pathways. How can this nuance be illustrated and made more intuitive?
- How can we leverage the personal motivators highlighted along the Web Journey to propose interest-driven learning pathways?
- Millions of people think Facebook is the Internet. How can the Web Literacy Map be a guide for these learners to know more and do more with the Web?
- As web literacy skills and competencies increase throughout a learner’s journey, and as people participate in web cultures, particular attitudes emerge and evolve. What are those nuances of web culture? How might we determine a “fluency” in the Web?
- How does the journey continue after someone has learned the fundamentals of the Web? How can they begin to participate in their community and share that knowledge forward? How can mentorship, and eventually leadership, be a more explicit part of a web journey? How do confidence and ability to teach others become part of the web journey?