Mozilla Nederland - The Dutch Mozilla community

Mozilla Fosters the Next Generation of Women in Emerging Technologies

Mozilla Blog - Fri, 25/01/2019 - 17:39

At Mozilla, we want to empower people to create technology that reflects the diversity of the world we live in. Today we’re excited to announce the release of the Inclusive Development Space toolkit. This is a way for anyone around the world to set up their own pop-up studio to support diverse creators.

The XR Studio was a first-of-its-kind pop-up at Mozilla’s San Francisco office in the summer of 2018. It provided a deeply needed space for women and gender non-binary people to collaborate, learn and create projects using virtual reality, augmented reality, and artificial intelligence.

The XR Studio program was founded to offer a jump-start for women creators, providing access to mentors, equipment, ideas, and a community with others like them. Including a wide range of ages, technical abilities, and backgrounds was essential to the program experience.

Inclusive spaces are needed in the tech industry. In technology maker-spaces, eighty percent of makers are men. As technologies like VR and AI become more widespread, it’s crucial that a variety of viewpoints are represented to eliminate biases from lack of diversity.

The XR Studio cohort had round-the-clock access to high quality VR, AR, and mixed reality hardware, as well as mentorship from experts in the field. The group came together weekly to share experiences and connect with leading industry experts like Unity’s Timoni West, Fast.ai’s Rachel Thomas, and VR pioneer Brenda Laurel.

We received more than 100 applications in a little over two weeks and accepted 32 participants. Many who applied cited the chance to experiment with futuristic tools as the most important reason for applying to the program, with career development a close second.

“I couldn’t imagine XR Studio being with any other organization. Don’t know if it would have had as much success if it wasn’t with Mozilla. That really accentuated the program.” – Tyler Musgrave, recently named Futurist in residence at ARVR Women.

Projects ranged from improving bias awareness in education to self-defense training, criminal justice system education, identifying police surveillance, and more. Participants felt the safe and supportive environment gave them a unique advantage in technology creation. “With Mozilla’s XR Studio, I am surrounded by women just as passionate and supportive about creating XR products as I am,” said Neilda Pacquing, Founder and CEO of MindGlow, Inc., a company that focuses on safety training using immersive experiences. “There’s no other place like it and I feel I’ve gone further in creating my products than I would have without it.”

So what’s next?

The Mozilla XR Studio program offered an opportunity to learn and build confidence, overcome imposter syndrome, and make amazing projects. We learned lessons about architecting an inclusive space that we plan to use to create future Mozilla spaces that will support underrepresented groups in creating with emerging technologies.

Mozilla is also sponsoring the women in VR brunch at the Sundance Film Festival this Sunday. It will be a great opportunity to learn, collaborate, and build fellowship with women from around the world. If you will be in the area, please reach out and say hello.

Want to create your own inclusive development space in your community, city or company? Check out our toolkit.

The post Mozilla Fosters the Next Generation of Women in Emerging Technologies appeared first on The Mozilla Blog.

Categories: Mozilla-nl planet

Microsoft man: Mozilla must come out of its ivory tower - AG Connect

News gathered via Google - Fri, 25/01/2019 - 09:00

A program manager at Edge and IE maker Microsoft thinks it is high time that Firefox maker Mozilla came down from its 'philosophical ivory tower'. The open source ...


The Firefox Frontier: Fast vs private? Have it all with Firefox.

Mozilla planet - Thu, 24/01/2019 - 22:52

Two years ago there weren’t many options when it came to a fast vs private browser. If you wanted fast internet, you had to give up privacy. If you went … Read more

The post Fast vs private? Have it all with Firefox. appeared first on The Firefox Frontier.


Mozilla Future Releases Blog: Clarifying the Future of Firefox Screenshots

Mozilla planet - Thu, 24/01/2019 - 22:48

Screenshots has been a popular part of Firefox since its launch in Firefox 56 in September 2017. Last year alone it was used by more than 20 million people to take nearly 180 million screenshots! The feature grew in popularity each month as new users discovered it in Firefox.

So it’s not surprising that any hints of changes coming to how we administer this popular feature generated interest from developers, press and everyday Firefox users. We want to take this opportunity to clarify exactly what the future holds for Screenshots.

What is happening to Screenshots?

The Screenshots feature is not being removed from Firefox.

Screenshots users will still be able to crop shots, capture visible parts of pages and even capture full web pages. Users will continue to be able to download these images and copy them to their clipboard.

What is changing is that in 2019 users will no longer have the option to save screenshots to a standalone server hosted by Firefox. Previously, shots could be saved to our server, expiring after two weeks unless a user expressly chose to save them for longer.

Why are we making this change?

While some users made use of the save-to-server feature, downloading and copying shots to clipboard have become far more popular options for our users. We’ve decided to simplify the Screenshots service by focusing on these two options and sunsetting the Screenshots server in 2019.

Where did the confusion come from?

We’re an open source organization so sometimes when we’re contemplating changes that will enhance the experience of our users, information is shared while we’re still noodling the right path forward. That was the case here. In response to user feedback, we had planned to change the “Save” button on Screenshots to “Upload” to better indicate that shots would be saved to a server. When we decided that we’d no longer be offering the save-to-server option for screenshots, we shelved the button copy change.

User feedback about the button copy had nothing to do with the removal of the server. We are choosing to take the latter step simply because the copy to clipboard and download options are considerably more popular and we want to offer a simpler user experience.

OK, so when do I have to clear out the “attic”?

Starting in Firefox 67, which will be released in May, users will no longer be able to upload shots to the Screenshots server. Pre-release users will see these changes starting in February, as Firefox 67 enters Nightly.

Starting in February, we will also alert users who have shots saved to the server with messaging about how to export their saved shots.

Users will have until late summer to export any permanently saved shots they have on the Screenshots server. You can visit our support site for additional information on how to manage this transition.

How are you gonna make it up to me? What’s coming next?

Screenshots quickly became a popular tool in Firefox. Look for new features like keyboard shortcuts and improved shot preview UI coming soon. We’re also interested in finding new ways to let Firefox users know the feature is there, and are planning experiments to highlight Screenshots as one of many tools that make Firefox unique.

The post Clarifying the Future of Firefox Screenshots appeared first on Future Releases.



Support.Mozilla.Org: [Important] Changes to the SUMO staff team

Mozilla planet - Thu, 24/01/2019 - 18:47

TL;DR

  • Social Community Manager changes: Konstantina and Kiki will be taking over Social Community Management. As of today, Rachel has left Mozilla as an employee.
  • L10n/KB Community Manager changes: Ruben will be taking over Community Management for KB translations. As of today, Michal has left Mozilla as an employee.
  • SUMO community call to introduce Konstantina, Kiki and Ruben on the 24th of January at 9 am PST.
  • If you have questions or concerns, please join the conversation on the SUMO forums or on SUMO Discourse.

Today we’d like to announce some changes to the SUMO staff team. Rachel McGuigan and Michał Dziewoński will be leaving Mozilla.

Rachel and Michal have been crucial to our efforts to create and run SUMO for many years. Rachel first showed great talent with her work on FxOS support. Her drive with our social support team has been crucial to the support of Firefox releases. Michal’s drive and passion for languages have ensured that the SUMO KB has fantastic language coverage, and that support for the free, open browser that is Firefox is available to more people. We wish Rachel and Michal all the best on their next adventures and thank them for their contributions to Mozilla.

With these changes, we will be thinking about how best to organize the SUMO team. Rest assured, we will continue investing in community management and will be growing the overall size of the SUMO team throughout 2019.

In the meantime, Konstantina, Kiki and Ruben will be stepping in temporarily while we seek to backfill these roles, helping us ensure we keep full focus on our work and continue working on our projects with you all.

We are confident in the positive future of SUMO in Mozilla, and we remain excited about the many new products and platforms we will introduce support for.  We have an incredible opportunity in front of us to continue delivering huge impact for Mozilla in 2019 and are looking forward to making this real with all of you.

Keep rocking the helpful web!


Mozilla GFX: WebRender newsletter #37

Mozilla planet - Thu, 24/01/2019 - 17:41

Hi! Last week I mentioned picture caching landing in nightly, and I am happy to report that it didn’t get backed out (never to be taken for granted with a change of that importance) and it’s here to stay.
Another rather hot topic that didn’t appear in the newsletter was Jeff and Matt’s long investigation of content frame time telemetry numbers. It turned into a real saga, featuring performance improvements but also a lot of adjustments to the way we do the measurements to make sure that we get apples-to-apples comparisons of Firefox running with and without WebRender. The content frame time metric is important because it correlates with user perception of stuttering, and we now have solid measurements backing that WebRender improves this metric.

Notable WebRender and Gecko changes
  • Bobby did various code cleanups and improvements.
  • Chris wrote a prototype Windows app to test resizing a child HWND in a child process and figure out how to do that without glitches.
  • Matt fixed an SVG filter clipping issue.
  • Matt enabled SVG filters to be processed on the GPU in more cases.
  • Andrew fixed a pixel snapping issue with transforms.
  • Andrew fixed a blob image crash.
  • Emilio fixed a bug with perspective transforms.
  • Glenn included root content clip rect in picture caching world bounds.
  • Glenn added support for multiple dirty rects in picture caching.
  • Glenn fixed adding extremely large primitives to picture caching tile dependencies.
  • Glenn skipped some redundant work during picture caching updates.
  • Glenn removed unused clear color mode.
  • Glenn reduced invalidation caused by world clip rects.
  • Glenn fixed an invalidation issue with picture caching when encountering a blur filter.
  • Glenn avoided interning text run primitives due to scrolled offset field.
  • Sotaro improved the performance of large animated SVGs in some cases.

Ongoing work

The team keeps going through the remaining blockers (7 P2 bugs and 20 P3 bugs at the time of writing).

Enabling WebRender in Firefox Nightly

In about:config, set the pref “gfx.webrender.all” to true and restart the browser.

Reporting bugs

The best place to report bugs related to WebRender in Firefox is the Graphics :: WebRender component in bugzilla.
Note that it is possible to log in with a github account.


Hacks.Mozilla.Org: Cameras, Sensors & What’s Next for Mozilla’s Things Gateway

Mozilla planet - Thu, 24/01/2019 - 17:20

Today the Mozilla IoT team is happy to announce the 0.7 release of the Things Gateway. This latest release brings experimental support for IP cameras, as well as support for a wider range of sensors. We’ve also got some exciting news on where the project is heading next.

Camera Support

With 0.7, you can now view video streams and get snapshots from IP cameras that follow the ONVIF standard, such as the Foscam R2.

To enable ONVIF support, install the ONVIF add-on via Settings > Add-ons in the gateway’s web interface.

Set up your camera as per the manufacturer’s instructions, including a username and password if it’s required. (Always remember to change from the default if there is one!) Then, you can click the “Configure” button on the ONVIF add-on (see above) to enter your login details in the form shown below:

Once the adapter is configured you should be able to add your device in the usual way, by clicking on the + button on the Things screen. When your camera appears you can give it a name before saving it:

When you click on the video camera you will see icons for an image snapshot and/or video stream:

Click on the icons and the image or video stream will pop up on the screen. When viewing an image property, you can click the reload button in the bottom left to reload the latest snapshot:

Video camera support is still experimental at this point as we look to optimise video performance, refine the UI and support a wider range of hardware. If running on the Raspberry Pi you can expect to see a noticeable delay on video streams as it transcodes video into a web friendly format. We’d appreciate your help testing with different cameras and giving us feedback to help improve this feature.

Sensors

Things Gateway 0.7 also comes with support for a wider range of sensors.

We have added support for temperature sensors (e.g. Eve Degree, Eve Room and the SmartThings Multipurpose sensor).

And we have added support for leak sensors (e.g. the SmartThings Water Leak Sensor and the Fibaro Flood Sensor).

This means you can also now create new types of rules in the rules engine, for example to turn on a fan when temperature reaches a certain level, or be notified if a leak is detected.

Thing Description Changes

For developers, this release brings some changes to the Thing Description format used to advertise the properties, actions, and events web things support.

Rather than providing a single URL in an href member, each Property, Action and Event object can now provide an array of links with an href, rel and mediaType for each Link object. This is particularly useful for the new Camera and VideoCamera capabilities, which can provide links to an image resource or video stream. Below is an example of a Thing Description for a video camera that supports both new capabilities.

{
  "@context": "https://iot.mozilla.org/schemas/",
  "@type": ["Camera", "VideoCamera"],
  "name": "Web Camera",
  "description": "My web camera",
  "properties": {
    "video": {
      "@type": "VideoProperty",
      "title": "Stream",
      "links": [{
        "href": "rtsp://example.com/things/camera/properties/video.mp4",
        "mediaType": "video/mp4"
      }]
    },
    "image": {
      "@type": "ImageProperty",
      "title": "Snapshot",
      "links": [{
        "href": "http://example.com/things/camera/properties/image.jpg",
        "mediaType": "image/jpg"
      }]
    }
  }
}

You may also notice that label has been renamed to title to be more in line with the latest W3C draft of the Thing Description specification.

We make an effort to retain backwards compatibility where possible, but please expect more changes like this as we rapidly evolve the Thing Description specification.
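
For a consumer of Thing Descriptions, handling both the old single-`href` form and the new `links` array is straightforward. Here is a sketch in plain JavaScript; `resolvePropertyHref` is a made-up helper for illustration, not part of the gateway API:

```javascript
// Hypothetical helper: resolve the URL of a property from a Thing
// Description, accepting both the old single-`href` form and the new
// `links` array introduced in this release.
function resolvePropertyHref(property, mediaType) {
  // New form: an array of Link objects with href/rel/mediaType.
  if (Array.isArray(property.links)) {
    const link = property.links.find(
      l => !mediaType || l.mediaType === mediaType
    );
    return link ? link.href : null;
  }
  // Old form: a single href member.
  return property.href || null;
}

const videoProperty = {
  '@type': 'VideoProperty',
  title: 'Stream',
  links: [{
    href: 'rtsp://example.com/things/camera/properties/video.mp4',
    mediaType: 'video/mp4'
  }]
};

console.log(resolvePropertyHref(videoProperty, 'video/mp4'));
// → rtsp://example.com/things/camera/properties/video.mp4
```

A consumer written this way keeps working against both generations of the format.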

What’s Next

We’ve been delighted with the response we’ve seen to Project Things from hacker and maker communities in 2018. Thank you so much for all the contributions you’ve made in reporting bugs, implementing new features and building your own adapter add-ons and web things. Also thanks to you, a Project Things tutorial on Mozilla Hacks was our most read blog post of 2018!

Taking things (pun intended) to the next level in 2019, a big focus for our team will be to evolve the current Things Gateway application into a software distribution for wireless routers. By integrating all the smart home features we have built directly into your wireless router, we believe we can provide even more value in the areas of family internet safety and home network health.

In 2019, you can expect to see more effort go into the OpenWrt port of the Things Gateway to create our very own software distribution for “smart routers” which integrate smart home capabilities. We’ll start with new features for configuring your gateway as a wireless access point and all of the other features you’d expect from a wireless router. We anticipate many more new features to emerge as we develop this distribution, and explore all the value that a Mozilla trusted personal agent for your whole home network could provide.

We will keep generating Raspberry Pi builds of our ongoing quarterly releases for the foreseeable future, because that’s what most of our current users are using and that plucky little developer board is still close to our hearts. But look out for support for new hardware platforms coming soon.

For now, you can download the new 0.7 release from our website. If you have a Things Gateway already set up on a Raspberry Pi it should update itself automatically.

Happy hacking!

The post Cameras, Sensors & What’s Next for Mozilla’s Things Gateway appeared first on Mozilla Hacks - the Web developer blog.


Aaron Klotz: 2018 Roundup: Q2, Part 1

Mozilla planet - Thu, 24/01/2019 - 02:30

This is the second post in my “2018 Roundup” series. For an index of all entries, please see my blog entry for Q1.

Refactoring the DLL Interceptor

As I have alluded to previously, Gecko includes a Detours-style API hooking mechanism for Windows. In Gecko, this code is referred to as the “DLL Interceptor.” We use the DLL interceptor to instrument various functions within our own processes. As a prerequisite for future DLL injection mitigations, I needed to spend a good chunk of Q2 refactoring this code. While I was in there, I took the opportunity to improve the interceptor’s memory efficiency, thus benefitting the Fission MemShrink project. [When these changes landed, we were not yet tracking the memory savings, but I will include a rough estimate later in this post.]

A Brief Overview of Detours-style API Hooking

While many distinct function hooking techniques are used in the Windows ecosystem, the Detours-style hook is one of the most effective and most popular. While I am not going to go into too many specifics here, I’d like to offer a quick overview. In this description, “target” is the function being hooked.

Here is what happens when a function is detoured:

  1. Allocate a chunk of memory to serve as a “trampoline.” We must be able to adjust the protection attributes on that memory.

  2. Disassemble enough of the target to make room for a jmp instruction. On 32-bit x86 processors, this requires 5 bytes. x86-64 is more complicated, but generally, to jmp to an absolute address, we try to make room for 13 bytes.

  3. Copy the instructions from step 2 over to the trampoline.

  4. At the beginning of the target function, write a jmp to the hook function.

  5. Append additional instructions to the trampoline that, when executed, will cause the processor to jump back to the first valid instruction after the jmp written in step 4.

  6. If the hook function wants to pass control on to the original target function, it calls the trampoline.

Note that these steps don’t occur exactly in the order specified above; I selected the above ordering in an effort to simplify my description.
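
At a much higher level, the hook-then-trampoline control flow is analogous to wrapping a function in a dynamic language. A loose JavaScript sketch (only an analogy; the real interceptor patches machine code, and all names below are made up):

```javascript
// Loose JavaScript analogy of a Detours-style hook: the "trampoline" is
// simply a saved reference to the original function, and "writing the jmp"
// becomes replacing the property on the target object.
const calls = [];

const target = {
  // The function being hooked.
  greet(name) { return `Hello, ${name}`; }
};

function hookMethod(obj, key, hook) {
  const trampoline = obj[key].bind(obj);             // preserve the original
  obj[key] = (...args) => hook(trampoline, ...args); // redirect all calls
  return trampoline;
}

hookMethod(target, 'greet', (original, name) => {
  calls.push(name);                    // instrumentation added by the hook
  return original(name.toUpperCase()); // pass control to the original
});

console.log(target.greet('world')); // → Hello, WORLD
console.log(calls);                 // → [ 'world' ]
```

The real mechanism has to do all of this at the machine-code level, which is why the disassembly and trampoline steps above exist.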

Here is my attempt at visualizing the control flow of a detoured function on x86-64:

http://dblohm7.ca/images/detours_hook.svg

Refactoring

Previously, the DLL interceptor relied on directly manipulating pointers in order to read and write the various instructions involved in the hook. In bug 1432653 I changed things so that the memory operations are parameterized based on two orthogonal concepts:

  • In-process vs out-of-process memory access: I wanted to be able to abstract reads and writes such that we could optionally set a hook in another process from our own.
  • Virtual memory allocation scheme: I wanted to be able to change how trampoline memory was allocated. Previously, each instance of WindowsDllInterceptor allocated its own page of memory for trampolines, but each instance also typically only sets one or two hooks. This means that most of the 4KiB page was unused. Furthermore, since Windows allocates blocks of pages on a 64KiB boundary, this wasted a lot of precious virtual address space in our 32-bit builds.

By refactoring and parameterizing these operations, we ended up with the following combinations:

  • In-process memory access, each WindowsDllInterceptor instance receives its own trampoline space;
  • In-process memory access, all WindowsDllInterceptor instances within a module share trampoline space;
  • Out-of-process memory access, each WindowsDllInterceptor instance receives its own trampoline space;
  • Out-of-process memory access, all WindowsDllInterceptor instances within a module share trampoline space (currently not implemented as this option is not particularly useful at the moment).

Instead of directly manipulating pointers, we now use instances of ReadOnlyTargetFunction, WritableTargetFunction, and Trampoline to manipulate our code/data. Those classes in turn use the memory management and virtual memory allocation policies to perform the actual reading and writing.

Memory Management Policies

The interceptor now supports two policies, MMPolicyInProcess and MMPolicyOutOfProcess. Each policy must implement the following memory operations:

  • Read
  • Write
  • Change protection attributes
  • Reserve trampoline space
  • Commit trampoline space

MMPolicyInProcess is implemented using memcpy for read and write, VirtualProtect for protection attribute changes, and VirtualAlloc for reserving and committing trampoline space.

MMPolicyOutOfProcess uses ReadProcessMemory and WriteProcessMemory for read and write. As a perf optimization, we try to batch reads and writes together to reduce the system call traffic. We obviously use VirtualProtectEx to adjust protection attributes in the other process.
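
In JavaScript terms, this parameterization is essentially the strategy pattern: the patching logic is written once against an abstract policy, and each policy supplies the concrete read/write operations. A loose sketch (the class and function names below are made up for illustration; the real policies are C++ classes operating on raw process memory):

```javascript
// Direct access, analogous to MMPolicyInProcess using memcpy.
class InProcessPolicy {
  constructor(memory) { this.memory = memory; } // a Uint8Array
  read(offset, length) { return this.memory.slice(offset, offset + length); }
  write(offset, bytes) { this.memory.set(bytes, offset); }
}

// Analogous to MMPolicyOutOfProcess batching writes to cut system call
// traffic: writes are queued and only applied when flush() is called.
class BatchingPolicy {
  constructor(memory) { this.memory = memory; this.pending = []; }
  read(offset, length) { return this.memory.slice(offset, offset + length); }
  write(offset, bytes) { this.pending.push([offset, bytes]); }
  flush() {
    for (const [offset, bytes] of this.pending) this.memory.set(bytes, offset);
    this.pending = [];
  }
}

// "Patch" logic written once, parameterized by policy.
function patchFirstBytes(policy, bytes) { policy.write(0, bytes); }

const mem = new Uint8Array(8);
const policy = new BatchingPolicy(mem);
patchFirstBytes(policy, [0xe9, 0x01]); // queued, not yet applied
console.log(mem[0]); // → 0
policy.flush();
console.log(mem[0]); // → 233
```

The payoff is the same as in the C++ code: the hooking logic never needs to know whether it is touching its own memory or someone else's.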

Out-of-process trampoline reservation and commitment, however, is a bit different and is worth a separate call-out. We allocate trampoline space using shared memory. It is mapped into the local process with read+write permissions using MapViewOfFile. The memory is mapped into the remote process as read+execute using some code that I wrote in bug 1451511 that either uses NtMapViewOfSection or MapViewOfFile2, depending on availability. Individual pages from those chunks are then committed via VirtualAlloc in the local process and VirtualAllocEx in the remote process. This scheme enables us to read and write to trampoline memory directly, without needing to do cross-process reads and writes!

VM Sharing Policies

The code for these policies is a lot simpler than the code for the memory management policies. We now have VMSharingPolicyUnique and VMSharingPolicyShared. Each of these policies must implement the following operations:

  • Reserve space for up to N trampolines of size K;
  • Obtain a Trampoline object for the next available K-byte trampoline slot;
  • Return an iterable collection of all extant trampolines.

VMSharingPolicyShared is actually implemented by delegating to a static instance of VMSharingPolicyUnique.

Implications of Refactoring

To determine the performance implications, I added timings to our DLL Interceptor unit test. I was very happy to see that, despite the additional layers of abstraction, the C++ compiler’s optimizer was doing its job: There was no performance impact whatsoever!

Once the refactoring was complete, I switched the default VM Sharing Policy for WindowsDllInterceptor over to VMSharingPolicyShared in bug 1451524.

Browsing today’s mozilla-central tip, I count 14 locations where we instantiate interceptors inside xul.dll. Given that not all interceptors are necessarily instantiated at once, I am now offering a worst-case back-of-the-napkin estimate of the memory savings:

  • Each interceptor would likely be consuming 4KiB (most of which is unused) of committed VM. Due to Windows’ 64 KiB allocation granularity, each interceptor would be leaving a further 60KiB of address space in a free but unusable state. Assuming all 14 interceptors were actually instantiated, they would thus consume a combined 56KiB of committed VM and 840KiB of free but unusable address space.
  • By sharing trampoline VM, the interceptors would consume only 4KiB combined and waste only 60KiB of address space, thus yielding savings of 52KiB in committed memory and 780KiB in addressable memory.
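
These back-of-the-napkin numbers are easy to sanity-check in a few lines of JavaScript (the constants simply restate the figures above):

```javascript
// Sanity check of the back-of-the-napkin estimate above.
const interceptors = 14;
const committedPerInterceptorKiB = 4;   // one VM page each (mostly unused)
const granularityKiB = 64;              // Windows allocation granularity
const wastedPerInterceptorKiB = granularityKiB - committedPerInterceptorKiB;

const committedBefore = interceptors * committedPerInterceptorKiB; // 56 KiB
const wastedBefore = interceptors * wastedPerInterceptorKiB;       // 840 KiB

// With a shared trampoline region, one 4 KiB page serves all interceptors.
const committedAfter = committedPerInterceptorKiB;                 // 4 KiB
const wastedAfter = wastedPerInterceptorKiB;                       // 60 KiB

console.log(committedBefore - committedAfter); // → 52 (KiB committed saved)
console.log(wastedBefore - wastedAfter);       // → 780 (KiB address space saved)
```
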

Oh, and One More Thing

Another problem that I discovered during this refactoring was bug 1459335. It turns out that some of the interceptor’s callers were not distinguishing between the “I have not set this hook yet” and “I attempted to set this hook but it failed” scenarios. Across several call sites, I discovered that our code would repeatedly retry setting hooks even when they had previously failed, leaking trampoline space!

To fix this, I modified the interceptor’s interface so that we use one-time initialization APIs to set hooks; since landing this bug, it is no longer possible for clients of the DLL interceptor to set a hook that had previously failed to be set.

Quantifying the memory costs of this bug is… non-trivial, but it suffices to say that fixing this bug probably resulted in the savings of at least a few hundred KiB in committed VM on affected machines.

That’s it for today’s post, folks! Thanks for reading! Coming up in Q2, Part 2: Implementing a Skeletal Launcher Process


About:Community: Firefox 65 new contributors

Mozilla planet - Wed, 23/01/2019 - 23:21

With the release of Firefox 65, we are pleased to welcome the 32 developers who contributed their first code change to Firefox in this release, 27 of whom were brand new volunteers! Please join us in thanking each of these diligent and enthusiastic individuals, and take a look at their contributions:


Mozilla B-Team: happy bmo push day!

Mozilla planet - Wed, 23/01/2019 - 23:13

It’s hard to believe, but we’ve landed nearly 70 commits this year. In this update, comments get a make-over, APIs got faster, and certain types of bots are shown the door. Also, bug fixes.

release tag

the following changes have been pushed to bugzilla.mozilla.org:

  • [1511261] request queue page shows ‘Bugzilla::User=HASH(…)’ instead of username
  • [1520856] “Opt out of these emails” at bottom of overdue request nagging emails doesn’t open desired page
  • [1520011] Phabbugz panel short description missing
  • [1518886] Remove outdated build plan code from PhabBugz extension used to move…

View On WordPress


Mozilla VR Blog: How I made Jingle Smash

Mozilla planet - wo, 23/01/2019 - 21:00
How I made Jingle Smash

When advocating a new technology I always try to use it in the way that real world developers will, and for WebVR (the VR-only precursor to WebXR), building a game is currently one of the best ways to do that. So for the winter holidays I built a game, Jingle Smash, a classic block tumbling game. If you haven't played it yet, put on your headset and give it a try. Now an overview of how I built it.


This article is part of my ongoing series of medium difficulty ThreeJS tutorials. I’ve long wanted something in between the intro “How to draw a cube” and “Let’s fill the screen with shader madness” levels. So here it is.

ThreeJS

Jingle Smash is written in ThreeJS using WebVR and some common boilerplate that I use in all of my demos. I chose to use ThreeJS directly instead of A-Frame because I knew I would be adding custom textures, custom geometry, and a custom control scheme. While it is possible to do this with A-Frame, I'd be writing so much code at the ThreeJS level that it was easier to cut out the middle man.

Physics

Jingle Smash is an Angry Birds style game where you lob an object at blocks to knock them over and destroy targets. Once you have destroyed the required targets you move on to the next level. Seems simple enough, and for a 2D side-view game like Angry Birds it is. I remember enough of my particle physics from school to write a simple 2D physics simulator, but 3D collisions are way beyond me. I needed a physics engine.

After evaluating the options I settled on Cannon.js because it's 100% JavaScript and has no dependencies on the UI. It simply calculates the positions of objects in space and puts your code in charge of stepping through time. This made it very easy to integrate with ThreeJS. It even has an example.

Graphics

In previous games I have used 3D models created by an artist. For Jingle Smash I created everything in code. The background, blocks, and ornaments all use either standard or generated geometry. All of the textures except for the sky background are also generated on the fly using 2D HTML Canvas, then converted into textures.

I went with a purely generated approach because it let me easily mess with UV values to create different effects and use exactly the colors I wanted. In a future blog I'll dive deep into how they work. Here is a quick example of generating an ornament texture:

const canvas = document.createElement('canvas')
canvas.width = 64
canvas.height = 16
const c = canvas.getContext('2d')

// draw the striped ornament pattern
c.fillStyle = 'black'
c.fillRect(0, 0, canvas.width, canvas.height)
c.fillStyle = 'red'
c.fillRect(0, 0, 30, canvas.height)
c.fillStyle = 'white'
c.fillRect(30, 0, 4, canvas.height)
c.fillStyle = 'green'
c.fillRect(34, 0, 30, canvas.height)

// wrap the canvas in a ThreeJS texture that repeats around the ornament
this.textures.ornament1 = new THREE.CanvasTexture(canvas)
this.textures.ornament1.wrapS = THREE.RepeatWrapping
this.textures.ornament1.repeat.set(8, 1)

How I made Jingle Smash

Level Editor

Most block games are 2D. The player has a view of the entire game board. Once you enter 3D, however, the blocks obscure the ones behind them. This means level design is completely different. The only way to see what a level looks like is to actually jump into VR and see it. That meant I really needed a way to edit the level from within VR, just as the player would see it.

To make this work I built a simple (and ugly) level editor inside of VR. This required building a small 2D UI toolkit for the editor controls. Thanks to using HTML canvas this turned out not to be too difficult.

How I made Jingle Smash

Next Steps

I'm pretty happy with how Jingle Smash turned out. Lots of people played it at the Mozilla All-hands and said they had fun. I did some performance optimization and was able to get the game up to about 50fps, but there is still more work to do (which I'll cover soon in another post).

Jingle Smash proved that we can make fun games that run in WebVR, and that load very quickly (on a good connection the entire game should load in less than 2 seconds). You can see the full (but messy) code of Jingle Smash in my WebXR Experiments repo.

While you wait for future updates on Jingle Smash, you might want to watch my new YouTube series on How to make VR with the Web.


Hacks.Mozilla.Org: Fearless Security: Memory Safety

Mozilla planet - wo, 23/01/2019 - 16:00
Fearless Security

Last year, Mozilla shipped Quantum CSS in Firefox, which was the culmination of 8 years of investment in Rust, a memory-safe systems programming language, and over a year of rewriting a major browser component in Rust. Until now, all major browser engines have been written in C++, mostly for performance reasons. However, with great performance comes great (memory) responsibility: C++ programmers have to manually manage memory, which opens a Pandora’s box of vulnerabilities. Rust not only prevents these kinds of errors, but the techniques it uses to do so also prevent data races, allowing programmers to reason more effectively about parallel code.

With great performance comes great memory responsibility

In the coming weeks, this three-part series will examine memory safety and thread safety, and close with a case study of the potential security benefits gained from rewriting Firefox’s CSS engine in Rust.

What Is Memory Safety?

When we talk about building secure applications, we often focus on memory safety. Informally, this means that in all possible executions of a program, there is no access to invalid memory. Violations include:

  • use after free
  • null pointer dereference
  • using uninitialized memory
  • double free
  • buffer overflow

For a more formal definition, see Michael Hicks’ What is memory safety post and The Meaning of Memory Safety, a paper that formalizes memory safety.

Memory violations like these can cause programs to crash unexpectedly and can be exploited to alter intended behavior. Potential consequences of a memory-related bug include information leakage, arbitrary code execution, and remote code execution.

Managing Memory

Memory management is crucial to both the performance and the security of applications. This section will discuss the basic memory model. One key concept is pointers. A pointer is a variable that stores a memory address. If we visit that memory address, there will be some data there, so we say that the pointer is a reference to (or points to) that data. Just like a home address shows people where to find you, a memory address shows a program where to find data.

Everything in a program is located at a particular memory address, including code instructions. Pointer misuse can cause serious security vulnerabilities, including information leakage and arbitrary code execution.
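As a small illustration (a Rust sketch; the concept itself is language-independent), a reference stores exactly such an address, and dereferencing it follows the address back to the data:

```rust
// Following (dereferencing) a pointer yields the data stored at that address.
fn follow(p: &i32) -> i32 {
    *p
}

fn main() {
    let x = 7;
    let r = &x; // r stores the memory address of x
    // {:p} prints the address itself; follow(r) reads the data it points to
    println!("address: {:p}, value: {}", r, follow(r));
}
```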

Allocation/free

When we create a variable, the program needs to allocate enough space in memory to store the data for that variable. Since the memory owned by each process is finite, we also need some way of reclaiming resources (or freeing them). When memory is freed, it becomes available to store new data, but the old data can still exist until it is overwritten.

Buffers

A buffer is a contiguous area of memory that stores multiple instances of the same data type. For example, the phrase “My cat is Batman” would be stored in a 16-byte buffer. Buffers are defined by a starting memory address and a length; because the data stored in memory next to a buffer could be unrelated, it’s important to ensure we don’t read or write past the buffer boundaries.

Control Flow

Programs are composed of subroutines, which are executed in a particular order. At the end of a subroutine, the computer jumps to a stored address (called the return address) for the next part of code that should be executed. When we jump to the return address, one of three things happens:

  1. The process continues as expected (the return address was not corrupted).
  2. The process crashes (the return address was altered to point at non-executable memory).
  3. The process continues, but not as expected (the return address was altered and control flow changed).
How languages achieve memory safety

We often think of programming languages on a spectrum. On one end, languages like C/C++ are efficient, but require manual memory management; on the other, interpreted languages use automatic memory management (like reference counting or garbage collection [GC]), but pay the price in performance. Even languages with highly optimized garbage collectors can’t match the performance of non-GC’d languages.

Manually

Some languages (like C) require programmers to manually manage memory by specifying when to allocate resources, how much to allocate, and when to free the resources. This gives the programmer very fine-grained control over how their implementation uses resources, enabling fast and efficient code. However, this approach is prone to mistakes, particularly in complex codebases.

Mistakes that are easy to make include:

  • forgetting that resources have been freed and trying to use them
  • not allocating enough space to store data
  • reading past the boundary of a buffer

Shake hands with danger!
A safety video candidate for manual memory management

Smart pointers

A smart pointer is a pointer with additional information to help prevent memory mismanagement. These can be used for automated memory management and bounds checking. Unlike raw pointers, a smart pointer is able to self-destruct, instead of waiting for the programmer to manually destroy it.

There’s no single smart pointer type—a smart pointer is any type that wraps a raw pointer in some practical abstraction. Some smart pointers use reference counting to count how many variables are using the data owned by a variable, while others implement a scoping policy to constrain a pointer lifetime to a particular scope.

In reference counting, the object’s resources are reclaimed when the last reference to the object is destroyed. Basic reference counting implementations can suffer from performance and space overhead, and can be difficult to use in multi-threaded environments. Situations where objects refer to each other (cyclical references) can prohibit either object’s reference count from ever reaching zero, which requires more sophisticated methods.
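This counting behavior can be observed directly with Rust's standard-library `Rc` smart pointer; a minimal sketch (function name is illustrative):

```rust
use std::rc::Rc;

// Returns the strong count while two handles to the same data are alive.
fn count_with_two_handles() -> usize {
    let a = Rc::new(String::from("shared"));
    let b = Rc::clone(&a);           // no deep copy: this just bumps the count
    let peak = Rc::strong_count(&a); // 2: both a and b point at the String
    drop(b);                         // destroying a reference lowers the count
    assert_eq!(Rc::strong_count(&a), 1);
    peak
} // a goes out of scope here: the count hits 0 and the String is reclaimed

fn main() {
    println!("strong count at peak: {}", count_with_two_handles());
}
```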

Garbage Collection

Some languages (like Java, Go, Python) are garbage collected. A part of the runtime environment, named the garbage collector (GC), traces variables to determine what resources are reachable in a graph that represents references between objects. Once an object is no longer reachable, its resources are not needed and the GC reclaims the underlying memory to reuse in the future. All allocations and deallocations occur without explicit programmer instruction.

While a GC ensures that memory is always used validly, it doesn’t reclaim memory in the most efficient way. The last time an object is used could occur much earlier than when it is freed by the GC. Garbage collection has a performance overhead that can be prohibitive for performance critical applications; it requires up to 5x as much memory to avoid a runtime performance penalty.

Ownership

To achieve both performance and memory safety, Rust uses a concept called ownership. More formally, the ownership model is an example of an affine type system. All Rust code follows certain ownership rules that allow the compiler to manage memory without incurring runtime costs:

  1. Each value has a variable, called the owner.
  2. There can only be one owner at a time.
  3. When the owner goes out of scope, the value will be dropped.

Values can be moved or borrowed between variables. These rules are enforced by a part of the compiler called the borrow checker.

When a variable goes out of scope, Rust frees that memory. In the following example, when s1 and s2 go out of scope, they would both try to free the same memory, resulting in a double free error. To prevent this, when a value is moved out of a variable, the previous owner becomes invalid. If the programmer then attempts to use the invalid variable, the compiler will reject the code. This can be avoided by creating a deep copy of the data or by using references.

Example 1: Moving ownership

let s1 = String::from("hello");
let s2 = s1;
// won't compile because s1 is now invalid
println!("{}, world!", s1);
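As a sketch of the two escape hatches mentioned above, deep-copying the data or borrowing it both leave s1 valid (function names here are illustrative):

```rust
// Fix 1: clone creates a deep copy, so both variables own their own data.
fn deep_copy() -> (String, String) {
    let s1 = String::from("hello");
    let s2 = s1.clone(); // s2 gets its own heap allocation
    (s1, s2)             // both are valid owners
}

// Fix 2: borrowing reads the data without taking ownership.
fn borrow_len(s: &String) -> usize {
    s.len()
}

fn main() {
    let (s1, s2) = deep_copy();
    println!("{}, world! {}, world!", s1, s2);
    println!("len via borrow: {}", borrow_len(&s1)); // s1 is still usable here
}
```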

Another set of rules verified by the borrow checker pertains to variable lifetimes. Rust prohibits the use of uninitialized variables and dangling pointers, which can cause a program to reference unintended data. If the code in the example below compiled, r would reference memory that is deallocated when x goes out of scope—a dangling pointer. The compiler tracks scopes to ensure that all borrows are valid, occasionally requiring the programmer to explicitly annotate variable lifetimes.

Example 2: A dangling pointer

let r;
{
    let x = 5;
    r = &x;
}
println!("r: {}", r);

The ownership model provides a strong foundation for ensuring that memory is accessed appropriately, preventing undefined behavior.

Memory Vulnerabilities

The main consequences of memory vulnerabilities include:

  1. Crash: accessing invalid memory can make applications terminate unexpectedly
  2. Information leakage: inadvertently exposing non-public data, including sensitive information like passwords
  3. Arbitrary code execution (ACE): allows an attacker to execute arbitrary commands on a target machine; when this is possible over a network, we call it a remote code execution (RCE)

Another type of problem that can appear is memory leakage, which occurs when memory is allocated, but not released after the program is finished using it. It’s possible to use up all available memory this way. Without any remaining memory, legitimate resource requests will be blocked, causing a denial of service. This is a memory-related problem, but one that can’t be addressed by programming languages.

The best case scenario with most memory errors is that an application will crash harmlessly—this isn’t a good best case. However, the worst case scenario is that an attacker can gain control of the program through the vulnerability (which could lead to further attacks).

Misusing Free (use-after-free, double free)

This subclass of vulnerabilities occurs when some resource has been freed, but its memory position is still referenced. It’s a powerful exploitation method that can lead to out of bounds access, information leakage, code execution and more.

Garbage-collected and reference-counted languages prevent the use of invalid pointers by only destroying unreachable objects (which can have a performance penalty), while manually managed languages are particularly susceptible to invalid pointer use (particularly in complex codebases). Rust’s borrow checker doesn’t allow object destruction as long as references to the object exist, which means bugs like these are prevented at compile time.
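A minimal sketch of that compile-time guarantee: while a reference to a value is live, the value cannot be dropped (the function name is illustrative):

```rust
// first borrows v, so v cannot be freed while first is still in use.
fn sum_first_two() -> i32 {
    let v = vec![1, 2, 3];
    let first = &v[0];
    // drop(v); // uncommenting this fails to compile: v is borrowed by first
    first + v[1] // 1 + 2
}

fn main() {
    println!("{}", sum_first_two());
}
```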

Uninitialized variables

If a variable is used prior to initialization, the data it contains could be anything—including random garbage or previously discarded data, resulting in information leakage (these are sometimes called wild pointers). Often, memory managed languages use a default initialization routine that is run after allocation to prevent these problems.

Like C, most variables in Rust are uninitialized until assignment—unlike C, you can’t read them prior to initialization. The following code will fail to compile:

Example 3: Using an uninitialized variable

fn main() {
    let x: i32;
    println!("{}", x);
}

Null pointers

When an application dereferences a pointer that turns out to be null, it usually just accesses garbage that will cause a crash. In some cases, these vulnerabilities can lead to arbitrary code execution. Rust has two types of pointers: references and raw pointers. References are safe to access, while raw pointers can be problematic.

Rust prevents null pointer dereferencing two ways:

  1. Avoiding nullable pointers
  2. Avoiding raw pointer dereferencing

Rust avoids nullable pointers by replacing them with a special Option type. In order to manipulate the possibly-null value inside of an Option, the language requires the programmer to explicitly handle the null case or the program will not compile.
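A small sketch of what that explicit handling looks like (the function name is illustrative):

```rust
// The compiler rejects any match on an Option that omits the None arm.
fn describe(val: Option<i32>) -> String {
    match val {
        Some(n) => format!("got {}", n),
        None => String::from("nothing here"), // skipping this arm won't compile
    }
}

fn main() {
    println!("{}", describe(Some(5)));
    println!("{}", describe(None));
}
```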

When we can’t avoid nullable pointers (for example, when interacting with non-Rust code), what can we do? Try to isolate the damage. Any dereferencing raw pointers must occur in an unsafe block. This keyword relaxes Rust’s guarantees to allow some operations that could cause undefined behavior (like dereferencing a raw pointer).
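A minimal sketch of that isolation: creating a raw pointer is safe, but dereferencing it only compiles inside an unsafe block (the function name is illustrative):

```rust
fn read_through_raw(x: i32) -> i32 {
    let p: *const i32 = &x; // taking a raw pointer: safe
    unsafe { *p }           // dereferencing it: only allowed in unsafe code
}

fn main() {
    println!("{}", read_through_raw(42));
}
```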

Everything the borrow checker touches... what about that shadowy place? That's an unsafe block. You must never go there, Simba.

Buffer overflow

While the other vulnerabilities discussed here are prevented by restricting access to undefined memory, a buffer overflow inappropriately accesses legally allocated memory. Like a use-after-free bug, an out-of-bounds access can also be problematic because it may read freed memory that hasn't been reallocated yet, and hence still contains sensitive information that is supposed to no longer exist.

A buffer overflow simply means an out-of-bounds access. Due to how buffers are stored in memory, they often lead to information leakage, which could include sensitive data such as passwords. More severe instances can allow ACE/RCE vulnerabilities by overwriting the instruction pointer.

Example 4: Buffer overflow (C code)

#include <stdio.h>

int main() {
    int buf[] = {0, 1, 2, 3, 4};
    // print out of bounds
    printf("Out of bounds: %d\n", buf[10]);
    // write out of bounds
    buf[10] = 10;
    printf("Out of bounds: %d\n", buf[10]);
    return 0;
}

The simplest defense against a buffer overflow is to always require a bounds check when accessing elements, but this adds a runtime performance penalty.

How does Rust handle this? The built-in buffer types in Rust’s standard library require a bounds check for any random access, but also provide iterator APIs that can reduce the impact of these bounds checks over multiple sequential accesses. These choices ensure that out-of-bounds reads and writes are impossible for these types. Rust promotes patterns that lead to bounds checks only occurring in those places where a programmer would almost certainly have to manually place them in C/C++.
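A short sketch of those APIs: indexing panics on an out-of-bounds access instead of reading garbage, `get` returns an `Option`, and iterators traverse without a per-element index check (the function name is illustrative):

```rust
// Safe alternatives to Example 4's C out-of-bounds access.
fn lookup(buf: &[i32], i: usize) -> Option<i32> {
    buf.get(i).copied() // None instead of undefined behavior
}

fn main() {
    let buf = [0, 1, 2, 3, 4];
    assert_eq!(lookup(&buf, 2), Some(2));
    assert_eq!(lookup(&buf, 10), None); // buf[10] would panic, never read garbage
    // Iteration is bounds-safe without checking an index on every element:
    let sum: i32 = buf.iter().sum();
    println!("sum = {}", sum);
}
```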

Memory safety is only half the battle

Memory safety violations open programs to security vulnerabilities like unintentional data leakage and remote code execution. There are various ways to ensure memory safety, including smart pointers and garbage collection. You can even formally prove memory safety. While some languages have accepted slower performance as a tradeoff for memory safety, Rust's ownership system achieves memory safety while minimizing the performance cost.

Unfortunately, memory errors are only part of the story when we talk about writing secure code. The next post in this series will discuss concurrency attacks and thread safety.

Exploiting Memory: In-depth resources

Heap memory and exploitation
Smashing the stack for fun and profit
Analogies of Information Security
Intro to use after free vulnerabilities

The post Fearless Security: Memory Safety appeared first on Mozilla Hacks - the Web developer blog.


Daniel Glazman: WebExtensions v3 considered harmful

Mozilla planet - wo, 23/01/2019 - 11:47

The Open Web Platform is a careful and fragile construction billions of people, including millions of implementors rely on. HTML, CSS, JavaScript, the Document Object Model, the Web API and more are all standardized one way or another; that means vendors and stakeholders gather around a table to discuss all changes and that these changes must pass quality and/or availability criteria to be considered "shippable".

One notable absentee from the list of Web Standards is WebExtensions. WebExtensions is the generalized name for Google Chrome Extensions, which became mainstream when Google achieved dominance over the desktop browser market and Mozilla abandoned its own, much more powerful, add-ons system based on XUL and privileged scripts.

As a reminder, the WebExtension API allows coders to implement extensions to the browser based on:

  • HTML/CSS/JS for each and every dialog created by the extension, including the ones "integrated" into the browser's UI
  • a dual model with "background scripts" with more privileges than "content scripts" that get added to visited web pages
  • a new API (the WebExtension API) that offers - and rather strictly controls - access to information that is not otherwise reachable from JavaScript
  • a permissions model that declares which parts of the aforementioned API the extension uses and which remote URLs the embedded scripts can access
  • a URL model that puts everything in the extension under a chrome-extension:// URL
  • a review process (on the Google Chrome Extension store) supposed to block harmful code and more

A while ago, at a time when Microsoft still had its own rendering engine, it initiated a Community Group on WebExtensions at the World Wide Web Consortium (W3C). With members from most browser vendors plus a few others, this seemed to be a very positive move not only for implementors but also for users.

But unfortunately, that effort went nowhere. Between the lack of commitment from other browser vendors (Google in particular), Microsoft abandoning its own rendering engine, and a lax Community Group instead of a formal W3C Working Group, the WebExtension draft specification has been in limbo for a while now, and WebExtensions clearly remain the poor relation of Web Standards, even though most people have at least one browser extension installed (usually some sort of ad blocker).

Today, Google is pushing a deep change to its WebExtension model:

  • Background HTML pages will be deprecated in favor of Service Workers. That change alone implies a complete rearchitecture of existing extensions and will also impact their ability to create and manage the dialogs their UX model requires.
  • The webRequest API, which billions of users rely on daily to block advertisement, trackers or undesirable content, is at stake and is to be replaced by a new declarative API that will no longer allow extensions to monitor the requested resources. At a time when the advertisement model on the Web is harmed by ad blockers, one can only wonder whether this change is triggered by technical considerations alone or whether ad strategy is also behind it... Furthermore, it will be limited to a few tens of thousands of declarations, far below the number of trackers and advertisement scripts in the wild today.
  • Some heavily used APIs will be removed, without consideration for usage metrics or the cost of change to implementors
  • Even the description of the top level of an extension (aka the "browser action" and the "page action") will change and impact extension vendors
  • All of this is, for the time being, decided on Google's side alone, with little or no visible contact with the other WebExtension host (Mozilla) or the thousands of (free or commercial) WebExtension providers. There is even a "migration plans" document, but it's not publicly available, the link being access-restricted

On the webRequest part specifically, all major actors of the ad-blocking and security landscape are up in arms (see also the chromium-extensions Google group). We at Privowny are also deeply concerned by the proposed v3 changes. Even Amnesty International complained in a recent message! To me, the most important message posted in reply to the proposed changes is the following one:

Hi, we are the developer of a child-protection add-on, which strives to make the Internet safer for minors. This change would cripple our efforts on Chrome.

Talk about "don't be evil"...

All of that gives a set of very bad signals to third-party implementors, including us at Privowny:

  1. WebExtensions are not a mature part of the Open Web Platform. They completely lack stability, and software vendors willing to build on them must be ready for changes, at any time, that may threaten their very existence
  2. WebExtensions are fully in the hands of Google, which can and will change them at any time based on its own interests alone. They are not a Web Standard.
  3. Google is ready to make WebExtensions diverge from cross-browser interoperability at any time, killing precisely what brought vendors like us at Privowny to WebExtensions.
  4. Google Chrome is not what it seems to be: a browser based on an Open Source project that protects users, promotes openness and can serve as a basic tool for web citizens' protection.

Reading the above, and given that Google can push through changes of such magnitude with little or no impact study on vendors like us, we consider that WebExtensions are no longer a safe development platform. We will probably soon study extracting most of our code into a native desktop application, leaving only the minimum minimorum in the browser extension to communicate with web pages and, of course, with our native app.

After Mozilla severely harmed its amazing add-ons ecosystem (remember, it triggered the success of Firefox), and after Apple partly walked away from JavaScript-based Safari extensions, jeopardizing its add-ons ecosystem so much that it is anemic (I could even say dying), Google is making a move that is harmful to Chrome extension vendors. What is striking here is that Google is making the very same mistake Mozilla did: no prior discussion with stakeholders (read: extension implementors), release of a draft spec that was obviously going to trigger strong reactions, unmeasured impact (complexity, time and finances) on implementors, and more and more restrictions on what is possible, with only a too-limited set of new features in return.

On the legal side of things, this unilateral change could probably even qualify as "abuse of dominant position" under article 102 of the European Union's TFEU, and could then cost Google a lot, really a lot...

The Open Web Platform is alive and vibrant. The Browser Extension ecosystem is in jail, subject to unpredictable harmful changes decided by one single actor. This must change, it's not viable any more.


Daniel Stenberg: HTTP/3 talk on video

Mozilla planet - wo, 23/01/2019 - 10:12

Yesterday, I attracted an audience big enough to fill the largest presentation room GOTO 10 has, which means about one hundred interested souls.

The subject of the day was HTTP/3. The event was filmed with a Mevo camera, and I captured the presentation directly from my laptop as well; I then stitched the two sources together into this final version late last night. As you’ll notice, the sound isn’t awesome and the rest of the “production” isn’t exactly top notch either, but hey, I don’t think it matters too much.

Photo captions: “I’ll talk about HTTP/3” (Photo by Jon Åslund); “I’m Daniel Stenberg. I was handed a medal from the Swedish king in 2017 for my work on…” (Photo by OpenTokix); “HTTP/2 vs HTTP/3” (Photo by OpenTokix); “Some of the challenges to deploy HTTP/3 are…” (Photo by Jonathan Sulo)

The slide set can also be viewed on slideshare.


Ian Bicking: We Need Open Hosting Platforms

Mozilla planet - wo, 23/01/2019 - 07:00

In Bringing people back to the open web Chris states:

But most users don’t care about the principles or implementation of an open web, at least not in those terms. Most people don’t see themselves as ever having left the open web behind, and if you told them to try to get back to it, they wouldn’t know what to do or why it was worthwhile.

No matter how much it might be in their long-term self interest, it’s not up to the casual Internet user to figure that out. Instead, it’s up to the developers, designers, entrepreneurs and technology leaders to create a version of the open web that also happens to be the best version of the web.

I think he’s starting with a reasonable, positive call: we can’t just decry the state of things, we have to make things. And we have to make good things. The open web should be better.

I fear a moralizing approach to advocacy pushes people away, makes it harder for people to care about the values we are espousing. When we frame something as depressing or hopeless we encourage people to pay attention to other things. So yes: the open web should be the best web.

But ignoring my advice, I’m going to point out a depressing fact: open source products aren’t successful. Open source is not in line to be part of any solution.

Open Source has done a lot for developers, but it’s not present on the surface of the web – the surface that people interact with, and that defines the “open web”. Actual sites. Actual interfaces. Open source is used everywhere except at the point of interaction with actual people.

Why is open source so absent?

One big problem: the web isn’t software. The web is deployed software, running on servers.

If, as a creator of software, I want to share what I’ve done with everyone on the web – not just with other developers – then I actually have to deploy that software somewhere. But if maintaining open source is difficult and unsustainable, hosting that software is even worse.

I could create a whole company to support the service. But at that point I’m not a developer, I’m an “entrepreneur”. That’s more of a pain in the ass than giving stuff away.

For open source developers to build the open web we need a platform that allows us to actually give the tools we’ve created to everyone. Because of the hosting problem all our open source work is mediated through commercial entities, and we have this world where the web is very much built on open source, and yet that does nothing to make it more open.

An open hosting platform is not a specification, it is not a protocol, it is not a piece of software. It is actual hosting. It is people who deal with abuse, security, takedown notices, denial of service attacks, naming, bill paying, authentication and recovery, and are committed to ongoing improvements to the platform. Those are the things that separate software from a running service, and only running services can participate in the open web.

I don’t think decentralization, federation, or P2P is important or probably even desirable. I think these are ways to avoid the work of hosting, and they succeed to the degree no one uses the resulting software and so no work has to be done. It’s better to start with a working product.

Would hosting change things? Probably not enough: products aren’t just software, and open source development still struggles to include a diversity of skills and the consistent delivery of effort to make a product. Here, I have no suggestions. But still, an open, public, accessible hosting platform would be a start.


Firefox removes misleading button in screenshot functionality - Techzine.nl

News collected via Google - Tue, 22/01/2019 - 19:23
Firefox removes misleading button in screenshot functionality  Techzine.nl

After months of complaints, Mozilla is finally giving in. It is removing a misleading 'dark pattern' from the Firefox feature with which users.


Hacks.Mozilla.Org: How to make VR with the web, a new video series

Mozilla planet - di, 22/01/2019 - 16:51

Virtual reality (VR) seems complicated, but with a few JavaScript libraries and tools, and the power of WebGL, you can make very nice VR scenes that can be viewed and shared in a headset like an Oculus Go or HTC Vive, in a desktop web browser, or on your smartphone. Let me show you how:

In this new YouTube series, How to make a virtual reality project in your browser with three.js and WebVR, I’ll take you through building an interactive birthday card in seven short tutorials, complete with code and examples to get you started. The whole series clocks in under 60 minutes. We begin by getting a basic cube on the screen, add some nice 3D models, set up lights and navigation, then finally add music.

All you need are basic JavaScript skills and an internet connection.

Here’s the whole series. Come join me:

1: Learn how to build virtual reality scenes on the web with WebVR and JavaScript

2: Set up your WebVR workflow and code to build a virtual reality birthday card

3: Using a WebVR editor (Spoke) to create a fun 3D birthday card

4: How to create realistic lighting in a virtual reality scene

5: How to move around in virtual reality using teleportation to navigate your scene

6: Adding text and text effects to your WebVR scene with a few lines of code

7: How to add finishing touches like sound and sky to your WebVR scene
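The first steps the series walks through (a basic cube on screen, lighting, a render loop that works both on desktop and in a headset) can be sketched in a few lines of three.js. This is a hedged sketch, not the series’ actual code: it assumes the WebVR-era three.js API (`renderer.vr`, since replaced by WebXR’s `renderer.xr`), and it guards the WebGL setup so the pure animation math can run anywhere.

```javascript
// Minimal three.js scene sketch: a spinning cube, in the spirit of the series.
// Assumption (not from the post): a WebVR-era three.js build where the
// renderer exposes `renderer.vr`; current releases use WebXR (`renderer.xr`).
let THREE = null;
try { THREE = require('three'); } catch (e) { /* three.js not installed */ }

// Pure helper: advance a rotation angle by dt seconds at `speed` rad/s.
// Kept free of WebGL so the loop math can run (and be checked) headlessly.
function spinStep(rotationY, dt, speed = 0.5) {
  return rotationY + speed * dt;
}

if (THREE && typeof document !== 'undefined') {
  const scene = new THREE.Scene();
  const camera = new THREE.PerspectiveCamera(
    70, window.innerWidth / window.innerHeight, 0.1, 100);
  const renderer = new THREE.WebGLRenderer({ antialias: true });
  renderer.setSize(window.innerWidth, window.innerHeight);
  renderer.vr.enabled = true; // WebVR-era flag; WebXR builds use renderer.xr
  document.body.appendChild(renderer.domElement);

  // Step 1 of the series: a basic cube on the screen.
  const cube = new THREE.Mesh(
    new THREE.BoxGeometry(1, 1, 1),
    new THREE.MeshStandardMaterial({ color: 0x00aaff }));
  cube.position.z = -3; // place it in front of the camera
  scene.add(cube);

  // Step 4: lighting.
  scene.add(new THREE.DirectionalLight(0xffffff, 1));
  scene.add(new THREE.AmbientLight(0x404040));

  // Render loop; setAnimationLoop also drives the headset when presenting.
  let last = performance.now();
  renderer.setAnimationLoop(() => {
    const now = performance.now();
    cube.rotation.y = spinStep(cube.rotation.y, (now - last) / 1000);
    last = now;
    renderer.render(scene, camera);
  });
}

if (typeof module !== 'undefined') module.exports = { spinStep };
```

The same loop renders on a desktop browser, a phone, or a headset; only the `vr.enabled` flag (and an “Enter VR” button, covered in the series) differs.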


To learn how to make more cool stuff with web technologies, subscribe to Mozilla Hacks on YouTube. And if you want to get more involved in learning to create mixed reality experiences for the web, you can follow @MozillaReality on Twitter for news, articles, and updates.

The post How to make VR with the web, a new video series appeared first on Mozilla Hacks - the Web developer blog.


Millions of Windows users run outdated software - Security.nl

News gathered via Google - Tue, 22/01/2019 - 15:53

Millions of Windows users run outdated software with all kinds of security vulnerabilities, according to research by anti-virus company Avast covering 163 million ...


The Mozilla Blog: The Coral Project is Moving to Vox Media

Mozilla planet - Tue, 22/01/2019 - 15:27

Since 2015, the Mozilla Foundation has incubated The Coral Project to support journalism and improve online dialog around the world through privacy-centered, open source software. Originally founded as a two-year collaboration between Mozilla, The New York Times and the Washington Post, it became entirely a Mozilla project in 2017.

Over the past 3.5 years, The Coral Project has developed two software tools, published a series of guides and best practices, and grown a community of journalism technologists around the world advancing privacy and better online conversation.

Coral’s first tool, Ask, has been used by journalists in several countries, including by the Spotlight team at the Boston Globe, whose series on racism used Ask on seven different occasions and was a finalist for the Pulitzer Prize in Local Reporting.

The Coral Project’s main tool, the Talk platform, now powers the comments for nearly 50 newsrooms in 11 countries, including The Wall Street Journal, the Washington Post, The Intercept, and the Globe and Mail. The Coral Project has also collaborated with academics and technologists, running events and working with researchers to reduce online harassment and raise the quality of conversation on the decentralized web.

After 3.5 years at Mozilla, the time is right for Coral software to move further into the journalism space, and grow with the support of an organization grounded in that industry. And so, in January, the entire Coral Project team will join Vox Media, a leading media company with deep ties in online community engagement.

Under Vox Media’s stewardship, The Coral Project will receive the backing of a large company with an unrivaled collection of journalists as well as experience in the area of Software as a Service. This combination will help specifically to grow the adoption of Coral’s commenting platform Talk, while continuing as an open source project that respects user privacy.

The Coral Project has built a community of journalists and technologists who care deeply about improving the quality of online conversation. Mozilla will continue to support and highlight the work of this community as champions of a healthy, humane internet that is accessible to all.

We are excited for the new phase of The Coral Project at Vox Media, and hope you will join us in celebrating its success so far, and in supporting our shared vision for a better internet.

The post The Coral Project is Moving to Vox Media appeared first on The Mozilla Blog.

