Mozilla Nederland - The Dutch Mozilla community

Planet Mozilla - http://planet.mozilla.org/
Updated: 2 months 5 days ago

Daniel Stenberg: curl + hackerone = TRUE

Mon, 22/04/2019 - 17:03

There seems to be no end to updates about bug bounties in the curl project these days. Not long ago I mentioned the then-new program that, sadly enough, was cancelled only a few months after its birth.

Now we are back with a new and refreshed bug bounty program! The curl bug bounty program reborn.

This new program, which hopefully will manage to survive a while, is set up in cooperation with the major bug bounty player out there: hackerone.

Basic rules

If you find or suspect a security-related issue in curl or libcurl, report it! (And don’t speak about it in public at all until an agreed future date.)

You’re entitled to ask for a bounty for each and every valid and confirmed security problem that wasn’t already reported and that exists in the latest public release.

The curl security team will assess the report and the problem, and will award money depending on bug severity and other details.

Where does the money come from?

We intend to use funds from wherever we can get them. The Hackerone Internet Bug Bounty program helps us, donations collected over at opencollective will be used, and so will dedicated company sponsorships.

We will of course also greatly appreciate any direct sponsorships from companies for this program. You can help curl get even better by adding funds to the bounty program and helping us reward hard-working researchers.

Why bounties at all?

We compete for the security researchers’ time and attention with other projects, both open and proprietary. The projects that can help put food on these researchers’ tables might have a better chance of getting them to use their tools, time, skills and fingers to find our problems instead of someone else’s.

Finding and disclosing security problems can be very time and resource consuming. We want to make it less likely that people give up their attempts before they find anything. We can help full and part time security engineers sustain their livelihood by paying for the fruits of their labor. At least a little bit.

Only released code?

The state of the code repository in git is not eligible for bounties. We need to allow developers to make mistakes and to experiment a little in the git repository, while we expect and want every actual public release to be free of security vulnerabilities.

So yes, the obvious downside of this is that someone could spot an issue in git, decide not to report it since it doesn’t pay anything, hope that the flaw lingers around and ships in the release – and then report it and claim the reward money. I think we just have to trust that this will not become standard practice, and if we do notice someone trying to exploit the bounty in this manner, we can consider counter-measures then.

How about money for the patches?

There’s of course always a discussion as to why we should pay anyone for bugs at all, and then why we pay only for reported security problems and not for the heroes who authored the code in the first place, nor for the good people who write the patches that fix the reported issues. Those are valid questions. We would of course rather pay every contributor a lot of money, but we don’t have the funds for that. Getting funding for this kind of dedicated bug bounty seems to be doable, whereas a generic “pay the contributors” fund is trickier: it is harder to attract money for, and it is also really hard to distribute fairly in an open project of curl’s nature.

How much money?

At the start of this program the award amounts are as follows. We reward up to these amounts for vulnerabilities of the following severity levels (a small illustrative sketch follows the list):

Critical: 2,000 USD
High: 1,500 USD
Medium: 1,000 USD
Low: 500 USD
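
As a rough illustration of the table above, here is a minimal sketch in Rust. It is purely illustrative: the curl project does not publish any such code, and the figures are the maximum awards at launch, which may change.

```rust
// Illustrative only: a severity-to-maximum-award mapping mirroring the
// bounty table above. Not part of the curl project itself.
#[derive(Debug, Clone, Copy)]
enum Severity {
    Critical,
    High,
    Medium,
    Low,
}

// Maximum award in USD for a confirmed vulnerability of the given severity.
fn max_award_usd(severity: Severity) -> u32 {
    match severity {
        Severity::Critical => 2_000,
        Severity::High => 1_500,
        Severity::Medium => 1_000,
        Severity::Low => 500,
    }
}

fn main() {
    for s in [Severity::Critical, Severity::High, Severity::Medium, Severity::Low] {
        println!("{:?}: up to {} USD", s, max_award_usd(s));
    }
}
```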

Depending on how things go, how fast we drain the fund and how much companies help us refill, the amounts may change over time.

Found a security flaw?

Report it!

Categories: Mozilla-nl planet

Niko Matsakis: AiC: Collaborative summary documents

Mon, 22/04/2019 - 06:00

In my previous post, I talked about the idea of mapping the solution space:

When we talk about the RFC process, we always emphasize that the point of RFC discussion is not to select the best answer; rather, the point is to map the solution space. That is, to explore what the possible tradeoffs are and to really look for alternatives. This mapping process also means exploring the ups and downs of the current solutions on the table.

One of the challenges I see with how we often do design is that this “solution space” is actually quite implicit. We are exploring it through comments, but each comment is only tracing out one path through the terrain. I wanted to see if we could try to represent the solution space explicitly. This post is a kind of “experience report” on one such experiment, what I am calling a collaborative summary document (in contrast to the more standard summary comment that we often do).

The idea: a collaborative summary document

I’ll get into the details below, but the basic idea was to create a shared document that tried to present, in a neutral fashion, the arguments for and against a particular change. I asked people to stop commenting in the thread and instead read over the document, look for things they disagreed with, and offer suggestions for how it could be improved.

My hope was that we could not only get a thorough summary from the process, but also do something deeper: change the focus of the conversation from “advocating for a particular point of view” towards “trying to ensure a complete and fair summary”. I figured that after this period was done, people would likely go back to being advocates for their position, but at least for some time we could try to put those feelings aside.

So how did it go?

Overall, I felt very positive about the experience and I am keen to try it again. I think that something like “collaborative summary documents” could become a standard part of our process. Still, I think it’s going to take some practice trying this a few times to figure out the best structure. Moreover, I think it is not a silver bullet: to realize the full potential, we’re going to have to make other changes too.

What I did in depth

What I did more specifically was to create a Dropbox Paper document. This document contained my best effort at summarizing the issue at hand, but it was not meant to be just my work. The idea was that we would all jointly try to produce the best summary we could.

After that, I made an announcement on the original thread asking people to participate in the document. Specifically, as the document states, the idea was for people to do something like this:

  • Read the document, looking for things they didn’t agree with or felt were unfairly represented.
  • Leave a comment explaining their concern or, better, supply alternate wording that they did agree with.
    • The intention was always to preserve what they felt was the sense of the initial comment, but to make it more precise or less judgemental.

I was then playing the role of editor, taking these comments and trying to incorporate them into the whole. The idea was that, as people edited the document, we would gradually approach a fixed point, where there was nothing left to edit.

Structure of the shared document

Initially, when I created the document, I structured it into two sections – basically “pro” and “con”. The issue at hand was a particular change to the Futures API (the details don’t matter here). In this case, the first section advocated for the change, and the second section advocated against it. So, something like this (for a fictional problem):

Pro:

We should make this change because of X and Y. The options we have now (X1, X2) aren’t satisfying because of problem Z.

Con:

This change isn’t needed. While it would make X easier, there are already other useful ways to solve that problem (such as X1, X2). Similarly, the goal of Y isn’t very desirable in the first place because of A, B, and C.

I quickly found this structure rather limiting. It made it hard to compare the arguments – as you can see here, there are often “references” between the two sections (e.g., the con section refers to the argument X and tries to rebut it). Trying to compare and consider these points required a lot of jumping back and forth between the sections.

Using nested bullets to match up arguments

So I decided to restructure the document to integrate the arguments for and against. I created nesting to show when one point was directly in response to another. For example, it might read like this (this is not an actual point; those were much more detailed):

  • Pro: We should make this change because of X.
    • Con: However, there is already the option of X1 and X2 to satisfy that use-case.
      • Pro: But X1 and X2 suffer from Z.
  • Pro: We should make this change because of Y and Z.
    • Con: Those goals aren’t as important because of A, B, and C.

Furthermore, I tried to make the first bullet point a bit special – it would be the one that encapsulated the heart of the dispute, from my POV, with the later bullet points getting progressively more into the weeds.

Nested bullets felt better, but we can do better still I bet

I definitely preferred the structure of nested bullets to the original structure, but it didn’t feel perfect. For one thing, it requires me to summarize each argument into a single paragraph. Sometimes this felt “squished”. I didn’t love the repeated “pro” and “con”. Also, things don’t always fit neatly into a tree; sometimes I had to “cross-reference” between points on the tree (e.g., referencing another bullet that had a detailed look at the trade-offs).

If I were to do this again, I might tinker a bit more with the format. The most extreme option would be to try and use a “wiki-like” format. This would allow for free inter-linking, of course, and would let us hide details into a recursive structure. But I worry it’s too much freedom.

Adding “narratives” on top of the “core facts”

One thing I found that surprised me a bit: the summary document aimed to summarize the “core facts” of the discussion – in so doing, I hoped to summarize the two sides of the argument. But I found that facts alone cannot give a “complete” summary: to give a complete summary, you also need to present those facts “in context”. Or, put another way, you also need to explain the weighting that each side puts on the facts.

In other words, the document did a good job of enumerating the various concerns and “facets” of the discussion. But it didn’t do a good job of explaining why you might fall on one side or the other.

I tried to address this by crafting a “summary comment” on the main thread. This comment had a very specific form. It began by trying to identify the “core tradeoff” – the crux of the disagreement (a rough code sketch of that tradeoff follows the quoted bullets below):

So the core tradeoff here is this:

  • By leaving the design as is, we keep it as simple and ergonomic as it can be;
    • but, if we wish to pass implicit parameters to the future when polling, we must use TLS.
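
To make that tradeoff concrete, here is a minimal Rust sketch. It is not taken from the actual Futures proposal; the trait names, the RequestId type, and the overall shape are invented for illustration. It contrasts keeping a lean polling signature and smuggling implicit data through thread-local storage (TLS) with widening the signature so the data is passed explicitly.

```rust
use std::cell::RefCell;

// A hypothetical piece of "implicit" data a runtime might want to hand
// to futures while polling them. Invented for this sketch.
#[derive(Clone, Debug)]
struct RequestId(u64);

// Option A: keep the polling signature minimal and smuggle the data
// through thread-local storage (TLS).
thread_local! {
    static CURRENT_REQUEST: RefCell<Option<RequestId>> = RefCell::new(None);
}

trait SimplePoll {
    // Ergonomic: no extra parameter to thread through every combinator.
    fn poll(&mut self) -> bool;
}

// Option B: widen the signature so the data is passed explicitly.
trait PollWithContext {
    // Explicit: every implementation and combinator must accept and
    // forward the extra argument, but no TLS is needed.
    fn poll(&mut self, req: &RequestId) -> bool;
}

struct LogAndFinish;

impl SimplePoll for LogAndFinish {
    fn poll(&mut self) -> bool {
        // Reads whatever the "executor" stashed in TLS before polling.
        CURRENT_REQUEST.with(|slot| println!("polled under {:?}", slot.borrow()));
        true
    }
}

impl PollWithContext for LogAndFinish {
    fn poll(&mut self, req: &RequestId) -> bool {
        println!("polled under {:?}", req);
        true
    }
}

fn main() {
    let mut f = LogAndFinish;

    // Option A: the executor sets TLS, then polls with the short signature.
    CURRENT_REQUEST.with(|slot| *slot.borrow_mut() = Some(RequestId(1)));
    SimplePoll::poll(&mut f);

    // Option B: the executor passes the data directly through the signature.
    PollWithContext::poll(&mut f, &RequestId(2));
}
```

With option A every future keeps the short, ergonomic poll signature; with option B the data is visible in the types, but every implementation has to accept and forward it.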

The comment then identifies some of the “facets” of the space, which different people weight in different ways:

So, which way you fall will depend on

  • how important you think it is for Future to be ergonomic
    • and naturally how much of an ergonomic hit you believe this to be
  • how likely you think it is for us to want to add implicit parameters
  • how much of a problem you think it is to use TLS for those implicit parameters

And then it tried to tell a series of “narratives”. Basically to tell the story of each group that was involved and why that led them to assign different weights to those points above. Those weights in turn led to a different opinion on the overall issue.

For example:

I think a number of people feel that, by now, between Rust and other ecosystems, we have a pretty good handle on what sort of data we want to thread around and what the best way is to do it. Further, they feel that TLS or passing parameters explicitly is the best solution approach for those cases. Therefore, they prefer to leave the design as is, and keep things simple. (More details in the doc, of course.)

Or, on the other side:

Others, however, feel like there is additional data they want to pass implicitly and they do not feel convinced that TLS is the best choice, and that this concern outweighs the ergonomic costs. Therefore, they would rather adopt the PR and keep our options open.

Finally, it’s worth noting that there aren’t always just two sides. In fact, in this case I identified a third camp:

Finally, I think there is a third position that says that this controversy just isn’t that important. The performance hit of TLS, if you wind up using it, seems to be minimal. Similarly, the clarity/ergonomics of Future are not as critical, as users who write async fn will not implement it directly, and/or perhaps the effect is not so large. These folks probably could go either way, but would mostly like us to stop debating it and start building stuff. =)

One downside of writing the narratives in a standard summary comment was that they were not “part of” the main document. It feels to me like these narratives are a pretty key part of the whole thing; in fact, it was only once I added them that I really felt I started to understand why one might choose one way or the other when it came to this decision.

If I were to do this again, I would make narratives more of a first-class entity in the document itself. I think I would also focus on some other “meta-level reasoning”, such as fears and risks. It’s worth asking, for any given decision, “what if we make the wrong call?” – e.g., in this case, what happens if we decide not to future-proof, but then we regret it; and conversely, what happens if we decide to add future proofing, but we never use it?

We never achieved “shared ownership” of the summary

One of my goals was that we could, at least for a moment, disconnect people from their particular position and turn their attention towards the goal of achieving a shared and complete summary. I didn’t feel that we were very successful in this goal.

For one thing, most participants simply left comments on parts they disagreed with; they didn’t themselves suggest alternate wording. That meant that I personally had to take their complaint and try to find some “middle ground” that accommodated the concern but preserved the original point. This was stressful for me and a lot of work. More importantly, it meant that most people continued to interact with the document as advocates for their point-of-view, rather than trying to step back and advocate for the completeness of the summary.

In other words: when you see a sentence you disagree with, it is easy to say that you disagree with it. It is much harder to rephrase it in a way that you do agree with – but which still preserves (what you believe to be) the original intent. Doing so requires you to think about what the other person likely meant, and how you can preserve that.

However, one possible reason that people may have been reluctant to offer suggestions is that, often, it was hard to make “small edits” that addressed people’s concerns. Especially early on, I found that, in order to address some comment, I would have to make larger restructurings. For example, taking a small sentence and expanding it to a bullet point of its own.

Finally, some people who were active on the thread didn’t participate in the doc. Or, if they did, they did so by leaving comments on the original GitHub thread. This is not surprising: I was asking people to do something new and unfamiliar. Also, this whole process played out relatively quickly, and I suspect some people just didn’t even see the document before it was done.

If I were to do this again, I would want to start it earlier in the process. I would also want to consider synchronous meetings, where we could try to process edits as a group (but I think it would take some thought to figure out how to run such a meeting).

In terms of functioning asynchronously, I would probably use a Google Doc instead of a Dropbox Paper document. Google Docs have a better workflow for suggesting edits, I believe, as well as a richer permissions model.

Finally, I would try to draw a harder line in trying to get people to “own” the document and suggest edits of their own. I think the challenge of trying to neutrally represent someone else’s point of view is pretty powerful.

Concluding remarks

Conducting this exercise taught me some key lessons:

  • We should experiment with the best way to describe the back-and-forth (I found it better to put closely related points together, for example, rather than grouping the arguments into ‘pro and con’).
  • We should include not only the “core facts” but also the “narratives” that weave those facts together.
  • We should do this summary process earlier and we should try to find better ways to encourage participation.

Overall, I felt very good about the idea of “collaborative summary documents”. I think they are a clear improvement over the “summary comment”, which was the prior state of the art.

If nothing else, the quality of the summary itself was greatly improved by being a collaborative document. I felt like I had a pretty good understanding of the question when I started, but getting feedback from others on the things they felt I misunderstood, or just the places where my writing was unclear, was very useful.

But of course my aims run larger. I hope that we can change how design work feels, by encouraging all of us to deeply understand the design space (and to understand what motivates the other side). My experiment with this summary document left me feeling pretty convinced that it could be a part of the solution.

Feedback

I’ve created a discussion thread on the internals forum where you can leave questions or comments. I’ll definitely read them and I will try to respond, though I often get overwhelmed [1], so don’t feel offended if I fail to do so.

Footnotes
  1. So many things, so little time.

Categories: Mozilla-nl planet

The Servo Blog: This Month In Servo 128

Mon, 22/04/2019 - 02:30

In the past month, we merged 189 PRs in the Servo organization’s repositories.

Planning and Status

Our roadmap is available online, including the team’s plans for 2019.

This week’s status updates are here.

Exciting works in progress

Notable Additions
  • ferjm added support for replaying media that has ended.
  • ferjm fixed a panic that could occur when playing audio on certain platforms.
  • nox ensured a source of unsafety in layout now panics instead of causing undefined behaviour.
  • soniasinglad removed the need for OpenSSL binaries to be present in order to run tests.
  • ceyusa implemented support for EGL-only hardware accelerated media playback.
  • Manishearth improved the transform-related parts of the WebXR implementation.
  • AZWN exposed some hidden unsafe behaviour in Promise-related APIs.
  • ferjm added support for using MediaStream as media sources.
  • georgeroman implemented support for the media canPlayType API.
  • JHBalaji and others added support for value curve automation in the WebAudio API.
  • jdm implemented a sampling profiler.
  • gterzian made the sampling profiler limit the total number of samples stored.
  • Manishearth fixed a race in the WebRTC backend.
  • kamal-umudlu added support for using the fullscreen capabilities of the OS for the Fullscreen API.
  • jdm extended the set of supported GStreamer packages on Windows.
  • pylbrecht added measurements for layout queries that are forced to wait on an ongoing layout operation to complete.
  • TheGoddessInari improved the MSVC detection in Servo’s build system on Windows.
  • sbansal3096 fixed a panic when importing a stylesheet via CSSOM APIs.
  • georgeroman implemented the missing XMLSerializer API.
  • KwanEsq fixed a web compatibility issue with a CSSOM API.
  • aditj added support for the DeleteCookies WebDriver API.
  • peterjoel redesigned the preferences support to better handle preferences at compile time.
  • gterzian added a thread pool for the network code.
  • lucasfantacuci refactored a bunch of code that makes network requests to use a builder pattern.
  • cdeler implemented the missing DOMException constructor API.
  • gterzian and jdm added Linux support to the thread sampling implementation.
New Contributors

Interested in helping build a web browser? Take a look at our curated list of issues that are good for new contributors!

Categories: Mozilla-nl planet

Christopher Arnold

Sat, 20/04/2019 - 00:59


At last year’s Game Developers Conference I had the chance to experience new immersive video environments that are being created by game developers releasing titles for the new Oculus, HTC Vive and Google Daydream platforms. One developer at the conference, Opaque Multimedia, demonstrated "Earthlight", which gave the participant an opportunity to crawl on the outside of the International Space Station as the earth rotated below. In the simulation, a Microsoft Kinect sensor was following the position of my hands. But what I saw in the visor was that my hands were enclosed in an astronaut’s suit. The visual experience was so compelling that when my hands missed the rungs of the ladder I felt a palpable sense of urgency because the environment was so realistically depicted. (The space station was rendered as a scale model of the actual space station using the "Unreal" game physics engine.) The experience was so far beyond what I’d experienced a decade ago with crowd-sourced simulated environments like Second Life, where artists create 3D worlds in a server-hosted environment that other people could visit as avatars.
Since that time I’ve seen some fascinating demonstrations at Mozilla’s Virtual Reality developer events. I’ve had the chance to witness a 360 degree video of a skydive, used the WoofbertVR application to visit real art gallery collections displayed in a simulated art gallery, spectated a simulated launch and lunar landing of Apollo 11, and browsed 360 photography depicting dozens of fascinating destinations around the globe. This is quite a compelling and satisfying way to experience visual splendor depicted spatially. With the New York Times and IMAX now entering the industry, we can anticipate an incredible surfeit of media content to take us to places in the world we might never have a chance to go.
Still, the experience of these simulated spaces seems very ethereal. Which brings me to another emerging field. At Mozilla Festival in London a few years ago, I had a chance to meet Yasuaki Kakehi of Keio University in Japan, who was demonstrating a haptic feedback device called Techtile. The Techtile was akin to a microphone for physical feedback that could then be transmitted over the web to another mirror device. When he put marbles in one cup, another person holding an empty cup could feel the rattle of the marbles as if the same marble impacts were happening on the sides of the empty cup held by the observer. The sense was so realistic, it was hard to believe that it was entirely synthesized and transmitted over the Internet. Subsequently, at the Consumer Electronics Show, I witnessed another of these haptic speakers. But this one conveyed the sense not by mirroring precise physical impacts, but by giving precisely timed pulses, which the holder could feel as an implied sense of force direction without the device actually moving the user's hand at all. It was a haptic illusion instead of a precise physical sensation.
As haptics work advances, it has the potential to impact common everyday experiences beyond the theoretical and experimental demonstrations I experienced. Haptic devices are available in the new Honda cars on sale this year as Road Departure Mitigation, whereby steering wheels can simulate rumble strips on the sides of a lane just by sensing the painted lines on the pavement with cameras. I am also very excited to see this field expand to include music. At Ryerson University's SMART lab, Dr. Maria Karam, Dr. Deborah Fels and Dr. Frank Russo applied the concepts of haptics and somatosensory depiction of music to people who didn't have the capability of appreciating music aurally. Their first product, called the Emoti-chair, breaks up the frequency range of music to depict different audio qualities spatially across the listener's back. This is based on the concept that the human cochlea is essentially a large coiled surface upon which sounds of different frequencies resonate and are felt at different locations. While I don't have perfect pitch, I think having a spatial perception of tonal scale would allow me to develop a cognitive sense of pitch correctness to compensate, using a listening aid like this. Fortunately, Dr. Karam is advancing this work to introduce new form factors to the commercial market in coming years.
Over many years I have had the chance to study various forms of folk percussion. One of the most interesting drumming experiences I have had was a visit to Lombok, Indonesia, where I had the chance to see a Gamelan performance in a small village along with the large Gendang Belek drums accompanying. The Gendang Belek is a large barrel drum worn with a strap that goes over the shoulders. When the drum is struck, the reverberation is so fierce and powerful that it shakes the entire body, resonating through the spine. I had an opportunity to study Japanese Taiko while living in Japan. The taiko resonates in the listener through the chest. But the experience of bone conduction through the spine is altogether a more intense way to experience rhythm.
Because I am such an avid fan of physical experiences of music, I frequently gravitate toward bass-heavy music. I tend to play it on a sub-woofer-heavy car stereo, or seek out experiences to hear this music in nightclub or festival performances where large speakers animate the lower frequencies of music. I can imagine that if more people had the physical experience of drumming that I've had, instead of just the auditory experience of it, more people would enjoy making music themselves.

As more innovators like TADs Inc. (an offshoot of the Ryerson University project) bring physical experiences of music to the general consumer, I look forward to experiencing my music in greater depth.

Categories: Mozilla-nl planet

Christopher Arnold: How a speech-based internet will change our perceptions

Sat, 20/04/2019 - 00:52
A long time ago I remember reading Stephen Pinker discussing the evolution of language. I had read Beowulf (https://en.wikipedia.org/wiki/Beowulf), Chaucer and Shakespeare, so I was quite interested in these linguistic adaptations over time. Language shifts rapidly through the ages, to the point that even the English of 500 years ago sounds foreign to us now. His thesis in the piece was about how language is going to shift toward the Chinese pronunciation of it. Essentially, the majority of speakers will determine the rules of the language’s direction. There are more Chinese speakers in the world than native English speakers, so as they adopt and adapt the language, more of us will speak like the greater factions of our language’s custodians. The future speakers of English will determine its course. By force of “majority rules”, language will go in the direction of its greatest use, which will be the Pangea of the global populace seeking common linguistic currency with others of foreign tongues. Just as the US dollar is an “exchange currency” standard at present between foreign economies, English is the shortest path between any two ESL speakers, no matter their background.
Subsequently, I heard these concepts reiterated in a Scientific American podcast. The concept there was that English, when spoken by those who learned it as a second language, is easier for other speakers to understand than native-spoken English. British, Indian, Irish, Aussie, New Zealand and American English are relics in a shift, very fast, away from all of them. As much as we appreciate each, they are all toast. Corners will be cut and idiomatic usage will be lost, as the fastest path to information conveyance determines the path that language takes in its evolution. English will continue to be a mutt language flavored by those who adopt and co-opt it. Ultimately this means that no matter what the original language was, the common use of it will set the rules of the future. So we can say goodbye to grammar as native speakers know it. There is a greater shift happening than our traditions. And we must brace ourselves as this evolution takes us with it to a linguistic future determined by others.
I’m a person who has greatly appreciated idiomatic and aphoristic usage of English. So I’m one of those now-old codgers who cringes at the gradual degradation of language. But I’m listening to an evolution in process, a shift toward a language of broader and greater utility. So the cringes I feel are reactions to the time-saving adaptations of our language as it becomes something greater than it has been in the past. Brits likely thought and felt the same as their linguistic empire expanded. Now it is just a slightly stranger shift.
This evening I was in the kitchen, and I decided to ask Amazon Alexa to play some Led Zeppelin. This was a band that existed in the 1970s, the era during which I grew up. I knew their entire corpus very well. So when I started hearing one of my favorite songs, I knew this was not what I had asked for. It was a good rendering for sure, but it was not Robert Plant singing. Puzzled, I asked Alexa who was playing. She responded “Lez Zeppelin”. This was a new band to me. A very good cover band, I admit. (You can read about them here: http://www.lezzeppelin.com/) But why hadn’t Alexa responded to my initial request? Was it because Atlantic Records hadn’t licensed Led Zeppelin’s actual catalog for Amazon Prime subscribers?
Two things struck me.  First, we aren’t going to be tailoring our English to Chinese ESL common speech patterns as Mr. Pinker predicted.  We’re probably also going to be shifting our speech patterns to what Alexa, Siri, Cortana and Google Home can actually understand.  They are the new ESL vector that we hadn't anticipated a decade ago.  It is their use of English that will become conventional, as English is already the de facto language of computing, and therefore our language is now the slave to code.
What this means for that band (that used to be called Zeppelin) is that such entity will no longer be discoverable.  In the future, if people say “Led Zeppelin” to Alexa, she’ll respond with Lez Zeppelin (the rights-available version of the band formerly known as "Led Zeppelin").  Give humanity 100 years or so, and the idea of a band called Led Zeppelin will seem strange to folk.  Five generations removed, nobody will care who the original author was.  The "rights" holder will be irrelevant.  The only thing that will matter in 100 years is what the bot suggests.
Our language isn't ours.  It is the path to the convenient.  In bot speak, names are approximate and rights (ignoring the stalwart protectors) are meaningless.  Our concepts of trademarks, rights ownership, etc. are going to be steam-rolled by other factors, other "agents" acting at the user's behest.  The language and the needs of the spontaneous are immediate!
Categories: Mozilla-nl planet

Christopher Arnold: My 20 years of web

Sat, 20/04/2019 - 00:40
Twenty years ago I resigned from my former job at a financial news wire to pursue a career in San Francisco. We were transitioning our news service (Jiji Press, a Japanese wire service similar to Reuters) to being a web-based news site. I had followed the rise and fall of Netscape and the Department of Justice anti-trust case over Microsoft's bundling of IE with Windows. But what clinched it for me was Congressional testimony by the Chairman of the Federal Reserve (the US central bank) about his inability to forecast the potential growth of the Internet.

Working in the Japanese press at the time gave me a keen interest in international trade.  Prime Minister Hashimoto negotiated with United States Trade Representative Mickey Cantor to enhance trade relations and reduce protectionist tariffs that the countries used to artificially subsidize domestic industries.  Japan was the second largest global economy at the time.  I realized that if I was going to play a role in international trade it was probably going to be in Japan or on the west coast of the US.
I decided that, because Silicon Valley was where much of the industry growth in internet technology was happening, I had to relocate there if I wanted to engage in this industry. So I packed up all my belongings and moved to San Francisco to start my new career.

At the time, there were hundreds of small agencies that would build websites for companies seeking to establish or expand their internet presence.  I worked with one of these agencies to build Japanese versions of clients' English websites.  My goal was to focus my work on businesses seeking international expansion.

At the time, I met a search engine called LookSmart, which aspired to offer business-to-business search engines to major portals. (Business-to-Business is often abbreviated B2B and is a tactic of supporting companies that have their own direct consumers, called business-to-consumer, which is abbreviated B2C.)  Their model was similar to Yahoo.com, but instead of trying to get everyone to visit one website directly, they wanted to distribute the search infrastructure to other companies, combining the aggregate resources needed to support hundreds of companies into one single platform that was customized on demand for those other portals.

At the time LookSmart had only English language web search.  So I proposed launching their first foreign language search engine and entering the Japanese market to compete with Yahoo!'s largest established user base outside the US.  Looksmart's President had strong confidence in my proposal and expanded her team to include a Japanese division to invest in the Japanese market launch.  After we delivered our first version of the search engine, Microsoft's MSN licensed it to power their Japanese portal and Looksmart expanded their offerings to include B2B search services for Latin America and Europe.

I moved to Tokyo, where I networked with the other major portals of Japan to power their web search as well.  Because at the time Yahoo! Japan wasn't offering such a service, a dozen companies signed up to use our search engine.  Once the combined reach of Looksmart Japan rivaled that of the destination website of Yahoo! Japan, our management brokered a deal for LookSmart Japan to join Yahoo! Japan.  (I didn't negotiate that deal by the way.  Corporate mergers and acquisitions tend to happen at the board level.)

By this time Google was freshly independent of its exclusive contract with Yahoo! to provide what they called "algorithmic backfill" for the Yahoo! Directory service that Jerry Yang and David Filo had pioneered at Stanford University. Google started a B2C portal and began offering their own B2B publishing service by acquiring Yahoo! partner Applied Semantics, giving them the ability to put Google ads into every webpage on the internet without needing users to conduct searches anymore. Yahoo!, fearing competition from Google in B2B search, acquired the Inktomi, Altavista, Overture and Fast search engines, three of which were leading B2B search companies. At this point Yahoo!'s Overture division hired me to work on market launches across Asia Pacific beyond Japan.

With Yahoo! I had excellent experiences negotiating search contracts with companies in Japan, Korea, China, Australia, India and Brazil before moving into their Corporate Partnerships team to focus on the US search distribution partners.

Then in 2007 Apple launched their first iPhone.  Yahoo! had been operating a lightweight mobile search engine for html that was optimized for being shown on mobile phones.  One of my projects in Japan had been to introduce Yahoo!'s mobile search platform as an expansion to the Overture platform.  However, with the ability for the iPhone to actually show full web pages, the market was obviously going to shift.

I and several of my colleagues became captivated by the potential to develop specifically for the iPhone ecosystem.  So I resigned from Yahoo! to launch my own company, ncubeeight.  Similar to the work I had been doing at LookSmart and prior, we focused on companies that had already launched on the desktop internet that were now seeking to expand to the mobile internet ecosystem.

Being a developer in a nascent ecosystem was fascinating. But it's much more complex than the open internet, because discovery of content on the phone depends on going through a marketplace, which is something like a business directory. Apple and Google knew there were great business models in being a discovery gateway for this specific type of content. Going "direct to consumer" is an amazing challenge of marketing on small-screen devices. And gaining visibility in Apple iTunes and Google Play is an even more challenging marketing problem than publicizing your services on the desktop Internet.

Next I joined Mozilla to work on Firefox platform partnerships. It has been fascinating working with this team, which originated from the Netscape browser in the 1990s and transformed into an open-source non-profit focused on advancing internet technology in conjunction with, rather than solely in competition with, Netscape's former competitors.

What is interesting to the outside perspective is most likely that companies that used to compete against each other for engagement (by which I mean your attention) are now unified in the idea of working together to enhance the ecosystem of the web.  Google, Mozilla and Apple now all embrace open source for the development of their web rendering engines.  Now these companies are beholden to an ecosystem of developers who create end-user experiences as well as the underlying platforms that each company provides as a custodian of the ecosystem.   The combined goals of a broad collaborative ecosystem are more important and impactful than any single platform or company.  A side note: Amazon is active in the wings here, basing their software on spin-off code from Google's Android open source software.  Also, after their mobile phone platform faltered they started focusing on a space where they could completely pioneer a new web-interface, voice.

When I first came to the web, much of what it was made up of was static html.  Over the past decade, web pages shifted to dynamically assembled pages and content feeds determined by individual user customizations.    This is a fascinating transition that I witnessed while at Yahoo! which has been the subject of many books.   (My favorite being Sarah Lacy's Once You're Lucky, Twice You're Good.)

Sometimes in reflective moments, one thinks back to what one's own personal legacy will be. In this industry, dramatic shifts happen every three months. Websites and services I used to enjoy tremendously 10 or 20 years ago have long since been acquired, shut down or pivoted into something new. So what's going to exist that you could look back on after 100 years? Probably very little, except for the content that has been created by website developers themselves. It is the diversity of accessible web content that brings us every day to the shores of the world wide web.

There is a service called the Internet Archive that registers historical versions of web pages.  I wonder what the current web will look like from the future perspective, in this current era of dynamically-customized feeds that differ based on the user viewing them.  If an alien landed on Earth and started surfing the history of the Internet Archive's "Wayback Machine", I imagine they'll see a dramatic drop-off in content that was published in static form after 2010.

The amazing thing about the Internet is the creativity it brings out of the people who engage with it.  Back when I started telling the story of the web to people, I realized I needed to have my own web page.  So I needed to figure out what I wanted to amplify to the world.  Because I admired folk percussion that I'd seen while I was living in Japan, I decided to make my website about the drums of the world.  I used a web editor called Geocities to create this web page you see at right.  I decided to leave in its original 1999 Geocities template design for posterity's sake.  Since then my drum pursuits have expanded to include various other web projects including a YouTube channel dedicated to traditional folk percussion.  A flickr channel dedicated to drum photos.  Subsequently I launched a Soundcloud channel and a Mixcloud DJ channel for sharing music I'd composed or discovered over the decades.

The funny thing is, when I created this website, people found me who I never would have met or found otherwise. I got emails from people around the globe who were interested in identifying drums they'd found. Even Cirque du Soleil wrote me asking for advice on drums they should use in their performances!

Since I'd opened the curtains on my music exploration, I started traveling around to regions of the world that had unique percussion styles.  What had started as a small web development project became a broader crusade in my life, taking me to various remote corners of the world I never would have traveled to otherwise.  And naturally, this spawned a new website with another Youtube channel dedicated to travel videos.

The web is an amazing place where we can express ourselves, discover and broaden our passions and of course connect to others across the continents. 

When I first decided to leave the journalism industry, it was because I believed the industry itself was inherently about waiting for other people to do or say interesting things. In the industry I pursued, the audience was waiting for me to do that interesting thing myself. The Internet is tremendously valuable as a medium. It has been an amazing 20 years watching it evolve. I'm very proud to have had a small part in its story. I'm riveted to see where it goes in the next two decades! And I'm even more riveted to see where I go, with its help.

On the web, the journey you start seldom ends where you thought it would go!

Categories: Mozilla-nl planet

Mozilla Localization (L10N): L10n report: April edition

Fri, 19/04/2019 - 15:17

Please note some of the information provided in this report may be subject to change as we are sometimes sharing information about projects that are still in early stages and are not final yet.

Welcome!

New localizers

Are you a locale leader and want us to include new members in our upcoming reports? Contact us!

New content and projects

What’s new or coming up in Firefox desktop

The deadline to ship localization updates in Firefox 67 is quickly approaching (April 30). Firefox 68 is going to be an ESR version, so it’s particularly important to ship the best localization possible. The deadline for that will be June 25.

The migration to Fluent is progressing steadily, and we are approaching two important milestones:

  • 3 thousand FTL messages.
  • Less than 2 thousand DTD strings.
What’s new or coming up in mobile

Lots of things have been happening on the mobile front, and much more is going to follow shortly.

One of the first things we’d like to call out is that Fenix browser strings have arrived for localization! While work has been opened up to only a small subset of locales, you can expect us to add more progressively, quite soon. More details around this can be found here and here.

We’ve also exposed strings for Firefox Reality, Mozilla’s mixed-reality browser! Also open to only a subset of locales, we expect to be able to add more locales once the in-app locale switcher is in place. Read more about this here.

There are more new and exciting projects coming up in the next few weeks, so as usual, stay tuned to the Dev.l10n mailing list for more announcements!

Concerning existing projects: the Firefox iOS v17 l10n cycle is going to start within the next few days, so keep an eye out on your Pontoon folder.

And concerning Fennec, just like for Firefox desktop, the deadline to ship localization updates in Firefox 67 is quickly approaching (April 30). Please read the section above for more details.

What’s new or coming up in web projects

Mozilla.org
  • firefox/whatsnew_67.lang: The page must be fully localized by 3 May to be included in the Firefox 67 product release.
  • navigation.lang: The file has been available on Pontoon for more than 2 months. This newly designed navigation menu will be switched on whether it is localized or not. This means every page you browse on mozilla.org will show the new layout. If the file is not fully localized, you will see the menu mixed with English text.
  • Three new pages will be opened up for localization in select locales: adblocker,  browser history and what is a browser. Be on the lookout on Pontoon.
What’s new or coming up in SuMo

What’s new or coming up in Fluent

Fluent Syntax 1.0 has been published! The syntax is now stable. Thanks to everyone who shared their feedback about Fluent in the past; you have made Fluent better for everyone. We published a blog post on Mozilla Hacks with more details about this release and about Fluent in general.
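
For anyone who has not seen the syntax yet, here is a tiny invented example of a Fluent (FTL) resource. It is embedded in a Rust string constant purely so it can be shown and printed; the message identifiers and wording are illustrative and are not taken from Firefox or any other Mozilla project.

```rust
// A made-up Fluent (FTL) resource held in a Rust constant for illustration.
const EXAMPLE_FTL: &str = r#"
# A simple message with no arguments.
hello = Hello, world!

# A message that selects a variant based on a number argument.
unread-emails = { $count ->
    [one] You have one unread email.
   *[other] You have { $count } unread emails.
}
"#;

fn main() {
    // In a real application this text would be parsed by a Fluent
    // implementation and messages looked up by id; here we just print it.
    println!("{}", EXAMPLE_FTL);
}
```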

Fluent is already used in over 3000 messages in Firefox, as well as in Firefox Send and Common Voice. If you localize these projects, chances are you already localized Fluent messages. Thanks to the efforts of Matjaž and Adrian, Fluent is already well-supported in Pontoon. We continue to improve the Fluent experience in Pontoon and we’re open to your feedback about how to make it best-in-class.

You can learn more about the Fluent Syntax on the project’s website, through the Syntax Guide, and in the Mozilla localizer documentation. If you want to quickly see it in action, try the Fluent Playground—an online editor with shareable Fluent snippets.

Events
  • Want to showcase an event coming up that your community is participating in? Reach out to any l10n-driver and we’ll include that (see links to emails at the bottom of this report)
Friends of the Lion
  • Kudos to Sonia who introduced Mozilla and Pontoon to her fellow attendees. She ran a short workshop on localization at the Dive into Open Source event held in Jalandhar, India, in late March. After the event, she onboarded and mentored Anushka, Jasmine, and Sanja, who have started contributing to various projects in Punjabi.

Know someone in your l10n community who’s been doing a great job and should appear here? Contact one of the l10n-drivers and we’ll make sure they get a shout-out (see list at the bottom)!

Useful Links

Questions? Want to get involved?

Did you enjoy reading this report? Let us know how we can improve by reaching out to any one of the l10n-drivers listed above.

Categories: Mozilla-nl planet
