Planet Mozilla - http://planet.mozilla.org/

Gregory Szorc: On Monolithic Repositories

Tue, 09/09/2014 - 12:00

When companies or organizations deploy version control, they have to make many choices. One of them is how many repositories to create. Your choices are essentially: a) a single, monolithic repository that holds everything; b) many separate, smaller repositories that hold all the individual parts; or c) something in between.

The prevailing convention today (especially in the open source realm) is to create many separate and loosely coupled repositories, each repository mapping to a specific product or service. That does seem reasonable: if you were organizing files on your filesystem, you would group them by functionality or role (photos, music, documents, etc). And, version control tools are functionally filesystems. So it makes sense to draw repository boundaries at directory/role levels.

Further reinforcing the separate repository convention is the scaling behavior of our version control tools. Git, the popular tool in open source these days, doesn't scale well to very large repositories due to - among other things - not having narrow clones (fetching a subset of files). It scales well enough to the overwhelming majority of projects. But if you are a large organization generating lots of data (read: gigabytes of data over hundreds of thousands of files and commits) for version control, Git is unsuitable in its current form. Other tools (like Mercurial) don't currently fare that much better (although Mercurial has plans to tackle these scaling vectors).

Despite popular convention and even limitations in tools, companies like Google and Facebook opt to run large, monolithic repositories. Google runs Perforce. Facebook is on Mercurial, or at least is in the process of migrating to Mercurial.

Why do these companies run monolithic repositories? In Google's words:

We have a single large depot with almost all of Google's projects on it. This aids agile development and is much loved by our users, since it allows almost anyone to easily view almost any code, allows projects to share code, and allows engineers to move freely from project to project. Documentation and data is stored on the server as well as code.

So, monolithic repositories are all about moving fast and getting things done more efficiently. In other words, monolithic repositories increase developer productivity.

Furthermore, monolithic repositories are also more compatible with the ebb and flow of large organizations and large software projects. Components, features, products, and teams come and go, merge and split. The only constant is change. And if you are maintaining separate repositories that attempt to map to this ever-changing organizational topology, you are going to have a bad time. Either you'll be constantly copying, moving, merging, splitting, etc data and repositories. Or your repositories will be organized in a very non-logical and non-intuitive manner. That translates to overhead and lost productivity. I think that monolithic repositories handle the realities of large organizations much better. Big change or reorganization you want to reflect? You can make a single, atomic, history-preserving commit to move things around. I think that's much more manageable, especially when you consider the difficulty and annoyance of history-preserving changes across repositories.

Naysayers will decry monolithic repositories on principled and practical grounds.

The principled camp will say that separate repositories constitute a loosely coupled (dare I say service oriented) architecture that maps better to how software is consumed, assembled, and deployed and that erecting barriers in the form of separate repositories deliberately enforces this architecture. I agree. However, you can still maintain a loosely coupled architecture with monolithic repositories. The Subversion model of checking out a single tree from a larger repository proves this. Furthermore, I would say architecture decisions should be enforced by people (via code review, etc), not via version control repository topology. I believe this principled argument against monolithic repositories to be rather weak.

The principled camp living in the open source realm may also decry monolithic repositories as an affront to the spirit of open source. They would say that a monolithic repository creates unfairly strong ties to the organization that operates it and creates barriers to forking, etc. This may be true. But monolithic repositories don't intrinsically infringe on the basic software freedoms, organizations do. Therefore, I find this principled argument rather weak.

The practical camp will say that monolithic repositories just don't scale or aren't suitable for general audiences. These concerns are real.

Fully distributed version control systems (every commit on every machine) definitely don't scale past certain limits. Depending on your repository and user base, your scaling limits include disk space (repository data terabytes in size), bandwidth (repository data terabytes in size), filesystem (repository hundreds of thousands or millions of files), CPU and memory (operations on large repositories take too many system resources), and many heads/branches (tools like Git and Mercurial don't scale well to tens of thousands of heads/branches). These limitations with fully distributed version control are why distributed version control tools like Git and Mercurial support a partially-distributed mode that behaves more like your classical server-client model, like those employed by Subversion, Perforce, etc. Git supports shallow clone and sparse checkout. Mercurial supports shallow clone (via remotefilelog) and has planned support for narrow clone and sparse checkout in the next release or two. Of course, you can avoid the scaling limitations of distributed version control by employing a non-distributed tool, such as Subversion. Many companies continue to reach this conclusion today. However, users adapted to the distributed workflow would likely be up in arms (they would probably use tools like hg-subversion or git-svn to maintain their workflows). So, while scaling of version control can be a real concern, there are solutions and workarounds. However, they do involve falling back to a partially-distributed model.
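
To make those workarounds concrete, here is roughly what the shallow clone and sparse checkout mechanisms mentioned above look like with Git's stock commands (the repository URL and the checked-out path are placeholders, not a real repository):

# Shallow clone: fetch only the most recent commit's history.
git clone --depth 1 https://example.com/big-repo.git
cd big-repo

# Sparse checkout: materialize only part of the working tree.
git config core.sparseCheckout true
echo "browser/" >> .git/info/sparse-checkout
git read-tree -mu HEAD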

Another concern with monolithic repositories is user access control. You inevitably have code or data that is more sensitive and want to limit who can change or even access it. Separate repositories seem to facilitate a simpler model: per-repository access control. With monolithic repositories, you have to worry about per-directory/subtree permissions, an increased risk of data leaking, etc. This concern is more real with distributed version control, as distributed data and access control aren't naturally compatible. But these issues can be resolved. And if the tooling supports it, there is only a semantic difference between managing access control between repositories versus components of a single repository.

When it comes to repository hosting conventions, I agree with Google and Facebook: I prefer monolithic repositories. When I am interacting with version control, I just want to get stuff done. I don't want to waste time dealing with multiple commands to manage multiple repositories. I don't want to waste time or expend cognitive load dealing with submodule, subrepository, or big files management. I don't want to waste time trying to find and reuse code, data, or documentation. I want everything at my fingertips, where it can be easily discovered, inspected, and used. Monolithic repositories facilitate these workflows more than separate repositories and make me more productive as a result.

Now, if only all the tools and processes we use and love would work with monolithic repositories...


Mozilla Release Management Team: Firefox 33 beta1 to beta2

Tue, 09/09/2014 - 11:05

  • 31 changesets
  • 72 files changed
  • 2046 insertions
  • 532 deletions

Extensions (occurrences): js (23), cpp (12), html (5), h (5), mn (4), jsm (4), ini (4), xhtml (3), java (3), jsx (2), xml (1), sh (1), list (1), css (1)

Modules (occurrences): browser (36), security (10), mobile (5), gfx (4), content (4), layout (3), toolkit (2), services (2), dom (2), netwerk (1)

List of changesets:

  • Jeff Muizelaar: Bug 1057716 - d3d11: Properly copy the background. r=bas, a=sledru - 9eb4dff42df0
  • Richard Newman: Bug 993885 - Refactor SendTabActivity to avoid a race condition. r=mcomella, a=sledru - 764591e4e7f3
  • Lucas Rocha: Bug 1050780 - Avoid disabled items in GeckoMenu's adapter. r=margaret, a=sledru - 7cf512b6b64c
  • Tim Nguyen: Bug 891258 - Use Australis styling for the findbar buttons. r=Unfocused, a=sledru - 4815ff146c57
  • Dave Townsend: Backing out Bug 891258 due to broken styling issues on OSX. r=backout - e7d6edff44d3
  • Cosmin Malutan: Bug 1062224 - [tps] Fix test_tabs.js for non-existent testcase pages. r=hskupin a=testonly DONTBUILD - 292839cc6594
  • David Keeler: Bug 1057128 - Add --clobber to generate_certs.sh, disabled by default (don't unnecessarily regenerate all certificates). r=rbarnes, a=sledru - 3f1e228fac54
  • David Keeler: Bug 1009161 - mozilla::pkix: Allow the Netscape certificate type extension if more standardized information is present. r=briansmith, a=sledru - 03029d16e697
  • JW Wang: Bug 1034957 - Don't spin decode task queue waiting for audio frames since it hangs with gstreamer 1.0. r=cpearce, a=sledru - 46ffe60377d9
  • Neil Rashbrook: Bug 1054289 - Scroll to the current ref, not the original one. r=smaug, a=sledru - 8865201cd18e
  • Neil Rashbrook: Bug 1054289 - Add testcase. r=smaug, a=sledru - e47ff024eec1
  • Jan-Ivar Bruaroey: Bug 1060708 - Detect user and environment cameras on Android. r=gcp, r=blassey, r=snorp, a=sledru - fbc322c42d06
  • Mark Finkle: Bug 1063893 - Enable casting on beta and release. r=rnewman a=mfinkle - 32560f800b2e
  • Ed Lee: Bug 1062683 - Remove urls from new tab pings [r=adw a=lmandel] - c81810e5f3a5
  • Bas Schouten: Bug 1040187 - Combine update regions properly when upload hasn't executed yet. r=nical, a=lmandel - 872fe12f9214
  • Matt Woodrow: Bug 1060114 - Fix partial surface uploading through BufferTextureClient. r=Bas, a=lmandel - 09d840603713
  • Chenxia Liu: Bug 1060678 - Notify Gecko when browser history is cleared from HistoryPanel. r=margaret, a=lmandel - 957e1ef7f769
  • Gijs Kruitbosch: Bug 1035536 - Add blank theme file for net error pages. r=Unfocused, a=lmandel - f9e4f36ba116
  • Ryan VanderMeulen: Backed out changeset 09d840603713 (Bug 1060114) for bustage. - c3ecb4c952ec
  • Matt Woodrow: Bug 1060114 - Fix partial surface uploading through BufferTextureClient. r=Bas, a=lmandel - bca701646487
  • Chris Karlof: Bug 1056523 - Ensure sync credentials are reset during reauth flow. r=markh, a=lmandel - 8b409f2dfcb1
  • Steve Workman: Bug 1058099 - Cancel CacheStorageService::mPurgeTimer if it's still set during shutdown. r=mayhemer, a=lmandel - ede2300e8733
  • Michael Comella: Bug 1046017 - Backed out changesets 1c213218173f & 8588817f7f86 (bugs 1017427 & 1006797). a=lmandel - 7984a6ceffb8
  • Randell Jesup: Bug 1063971 - Allow SetRemoteDescription to omit callbacks again. r=jib, a=lmandel - 880228a5208a
  • Richard Newman: Bug 1045085 - Remove main Product Announcements code. r=mcomella, a=lmandel - 776ddfd41f21
  • Ryan VanderMeulen: Backed out changeset 776ddfd41f21 (Bug 1045085) for Android bustage. - 70930f30da0e
  • Benjamin Smedberg: Bug 1012924 - Experiments should cancel their XMLHttpRequest on shutdown and should also set a reasonable timeout on them. r=gfritzsche, a=lmandel - db5539e42eb5
  • Mark Banner: Bug 1022594 - Part 1: Change Loop's incoming call handling to get the call details before displaying the incoming call UI. r=nperriault, a=lmandel - e0ad01b2e26e
  • Mark Banner: Bug 1022594 - Part 2: Desktop client needs ability to decline an incoming call - set up a basic websocket protocol and use for both desktop and standalone UI. r=dmose, a=lmandel - 062929c9ff5d
  • Mark Banner: Bug 1045643 - Part 1: Notify the Loop server when the desktop client accepts the call, so that it can update the call status. r=nperriault, a=lmandel - be539410c211
  • Mark Banner: Bug 1045643 - Part 2: Notify the Loop server when the client has local media up and remote media being received, so that it can update the call connection status. r=nperriault, a=lmandel - d820ef3b256d


Byron Jones: happy bmo push day!

Tue, 09/09/2014 - 08:41

the following changes have been pushed to bugzilla.mozilla.org:

  • [913647] Deploy YUI 3.17.2 for BMO
  • [1054138] add the ability to filter on “fields containing the string”
  • [1062344] contrib/reorg-tools/sync* do not clear memcached
  • [1051058] Auto-CC Erica Choe into Finance Review and Master Kick-Off Bugs

discuss these changes on mozilla.tools.bmo.

 

the new bugmail filtering ability allows you to filter on specific flags:

[screenshot: bugmail filtering with substrings]

these two rules will prevent bugzilla from emailing you the changes to the “qa whiteboard” field or the “qe-verify” flag for bugs where you aren’t the assignee.


Filed under: bmo, mozilla

Matt Brubeck: Let's build a browser engine! Part 5: Boxes

Tue, 09/09/2014 - 01:16

This is the latest in a series of articles about writing a simple HTML rendering engine.

This article will begin the layout module, which takes the style tree and translates it into a bunch of rectangles in a two-dimensional space. This is a big module, so I’m going to split it into several articles. Also, some of the code I share in this article may need to change as I write the code for the later parts.

The layout module’s input is the style tree from Part 4, and its output is yet another tree, the layout tree. This takes us one step further in our mini rendering pipeline.

I’ll start by talking about the basic HTML/CSS layout model. If you’ve ever learned to develop web pages you might be familiar with this already—but it may look a bit different from the implementer’s point of view.

The Box Model

Layout is all about boxes. A box is a rectangular section of a web page. It has a width, a height, and a position on the page. This rectangle is called the content area because it’s where the box’s content is drawn. The content may be text, image, video, or other boxes.

A box may also have padding, borders, and margins surrounding its content area. The CSS spec has a diagram showing how all these layers fit together.

Robinson stores a box’s content area and surrounding areas in the following structure. [Rust note: f32 is a 32-bit floating point type.]

// CSS box model. All sizes are in px.

struct Dimensions {
    // Top left corner of the content area, relative to the document origin:
    x: f32,
    y: f32,

    // Content area size:
    width: f32,
    height: f32,

    // Surrounding edges:
    padding: EdgeSizes,
    border: EdgeSizes,
    margin: EdgeSizes,
}

struct EdgeSizes {
    left: f32,
    right: f32,
    top: f32,
    bottom: f32,
}
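
To make the arithmetic behind these fields concrete, here is a small sketch (my own illustration, not part of robinson's code) of how the padding and border edges could be combined with the content area to compute the border box, using the structs above:

impl Dimensions {
    /// Illustrative helper (not in robinson): the rectangle covered by the
    /// content area plus its padding and borders, as (x, y, width, height).
    fn border_box(&self) -> (f32, f32, f32, f32) {
        (self.x - self.padding.left - self.border.left,
         self.y - self.padding.top - self.border.top,
         self.width + self.padding.left + self.padding.right
                    + self.border.left + self.border.right,
         self.height + self.padding.top + self.padding.bottom
                     + self.border.top + self.border.bottom)
    }
}

Expanding by the margin edges in the same way would give the margin box.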

Block and Inline Layout

Note: This section contains diagrams that won't make sense if you are reading them without the associated visual styles. If you are reading this in a feed reader, try opening the original page in a regular browser tab. I also included text descriptions for those of you using screen readers or other assistive technologies.

The CSS display property determines which type of box an element generates. CSS defines several box types, each with its own layout rules. I’m only going to talk about two of them: block and inline.

I’ll use this bit of pseudo-HTML to illustrate the difference:

<container>
  <a></a>
  <b></b>
  <c></c>
  <d></d>
</container>

Block boxes are placed vertically within their container, from top to bottom.

a, b, c, d { display: block; }

Description: The diagram below shows four rectangles in a vertical stack.

a b c d

Inline boxes are placed horizontally within their container, from left to right. If they reach the right edge of the container, they will wrap around and continue on a new line below.

a, b, c, d { display: inline; }

Description: The diagram below shows boxes `a`, `b`, and `c` in a horizontal line from left to right, and box `d` in the next line.

a b c d

Each box must contain only block children, or only inline children. When a DOM element contains a mix of block and inline children, the layout engine inserts anonymous boxes to separate the two types. (These boxes are “anonymous” because they aren’t associated with nodes in the DOM tree.)

In this example, the inline boxes b and c are surrounded by an anonymous block box, shown in pink:

a { display: block; }
b, c { display: inline; }
d { display: block; }

Description: The diagram below shows three boxes in a vertical stack. The first is labeled `a`; the second contains two boxes in a horizontal row labeled `b` and `c`; the third box in the stack is labeled `d`.

a b c d

Note that content grows vertically by default. That is, adding children to a container generally makes it taller, not wider. Another way to say this is that, by default, the width of a block or line depends on its container’s width, while the height of a container depends on its children’s heights.

This gets more complicated if you override the default values for properties like width and height, and way more complicated if you want to support features like vertical writing.

The Layout Tree

The layout tree is a collection of boxes. A box has dimensions, and it may contain child boxes.

struct LayoutBox<'a> {
    dimensions: Dimensions,
    box_type: BoxType<'a>,
    children: Vec<LayoutBox<'a>>,
}

A box can be a block node, an inline node, or an anonymous block box. (This will need to change when I implement text layout, because line wrapping can cause a single inline node to split into multiple boxes. But it will do for now.)

enum BoxType<'a> {
    BlockNode(&'a StyledNode<'a>),
    InlineNode(&'a StyledNode<'a>),
    AnonymousBlock,
}

To build the layout tree, we need to look at the display property for each DOM node. I added some code to the style module to get the display value for a node. If there’s no specified value it returns the initial value, 'inline'.

enum Display {
    Inline,
    Block,
    DisplayNone,
}

impl StyledNode {
    /// Return the specified value of a property if it exists, otherwise `None`.
    fn value(&self, name: &str) -> Option<Value> {
        self.specified_values.find_equiv(&name).map(|v| v.clone())
    }

    /// The value of the `display` property (defaults to inline).
    fn display(&self) -> Display {
        match self.value("display") {
            Some(Keyword(s)) => match s.as_slice() {
                "block" => Block,
                "none" => DisplayNone,
                _ => Inline
            },
            _ => Inline
        }
    }
}

Now we can walk through the style tree, build a LayoutBox for each node, and then insert boxes for the node’s children. If a node’s display property is set to 'none' then it is not included in the layout tree.

/// Build the tree of LayoutBoxes, but don't perform any layout calculations yet.
fn build_layout_tree<'a>(style_node: &'a StyledNode<'a>) -> LayoutBox<'a> {
    // Create the root box.
    let mut root = LayoutBox::new(match style_node.display() {
        Block => BlockNode(style_node),
        Inline => InlineNode(style_node),
        DisplayNone => fail!("Root node has display: none.")
    });

    // Create the descendant boxes.
    for child in style_node.children.iter() {
        match child.display() {
            Block => root.children.push(build_layout_tree(child)),
            Inline => root.get_inline_container().children.push(build_layout_tree(child)),
            DisplayNone => {} // Skip nodes with `display: none;`
        }
    }
    return root;
}

impl LayoutBox {
    /// Constructor function
    fn new(box_type: BoxType) -> LayoutBox {
        LayoutBox {
            box_type: box_type,
            dimensions: Default::default(), // initially set all fields to 0.0
            children: Vec::new(),
        }
    }
}

If a block node contains an inline child, create an anonymous block box to contain it. If there are several inline children in a row, put them all in the same anonymous container.

impl LayoutBox {
    /// Where a new inline child should go.
    fn get_inline_container(&mut self) -> &mut LayoutBox {
        match self.box_type {
            InlineNode(_) | AnonymousBlock => self,
            BlockNode(_) => {
                // If we've just generated an anonymous block box, keep using it.
                // Otherwise, create a new one.
                match self.children.last() {
                    Some(&LayoutBox { box_type: AnonymousBlock, .. }) => {}
                    _ => self.children.push(LayoutBox::new(AnonymousBlock))
                }
                self.children.mut_last().unwrap()
            }
        }
    }
}

This is intentionally simplified in a number of ways from the standard CSS box generation algorithm. For example, it doesn’t handle the case where an inline box contains a block-level child. Also, it generates an unnecessary anonymous box if a block-level node has only inline children.

To Be Continued…

Whew, that took longer than I expected. I think I’ll stop here for now, but don’t worry: Part 6 is coming soon, and will cover block-level layout.

Once block layout is finished, we could jump ahead to the next stage of the pipeline: painting! I think I might do that, because then we can finally see the rendering engine’s output as pretty pictures instead of just numbers.

However, the pictures will just be a bunch of colored rectangles, unless we finish the layout module by implementing inline layout and text layout. If I don’t implement those before moving on to painting, I hope to come back to them afterward.


Jeff Walden: Quote of the day

Tue, 09/09/2014 - 00:56

Snipped from irrelevant context:

<jorendorff> In this case I see nearby code asserting that IsCompiled() is true, so I think I have it right

Assertions do more than point out mistakes in code. They also document that code’s intended behavior, permitting faster iteration and modification to that code by future users. Assertions are often more valuable as documentation than they are as a means to detect bugs. (Although not always. *eyes fuzzers beadily*)

So don’t just assert the tricky requirements: assert the more-obvious ones, too. You may save the next person changing the code (and the person reviewing it, who could be you!) a lot of time.
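
As a small illustration (the Script type, its IsCompiled() method, and the function are hypothetical stand-ins echoing the quote above; MOZ_ASSERT is Gecko's debug assertion macro), asserting the obvious precondition alongside the subtle one tells the next reader exactly what the code expects:

#include "mozilla/Assertions.h"

struct Script {
  bool IsCompiled() const;
  // ...
};

void EmitCodeFor(Script* script) {
  MOZ_ASSERT(script);                // obvious, but documents the contract
  MOZ_ASSERT(script->IsCompiled());  // the subtle requirement nearby code relies on
  // ... generate code for the script ...
}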


David Boswell: Creating community contribution challenges

Tue, 09/09/2014 - 00:32

There is something magical about how anyone anywhere can contribute to Mozilla—people show up and help you with something you’re doing or offer you something completely new and unexpected.

The Code Rush documentary has a great example of this from the time when the Mozilla project first launched. Netscape opened its code to the world in the hope that people would contribute, but there was no guarantee that anyone would help.

One of the first signs they had that this was working was when Stuart Parmenter started contributing by rewriting a key part of the code and this accelerated development work by months. (This is about 27 minutes into the documentary.)

[image: Code Rush scene with Stuart Parmenter (pavlov)]

It is hard to plan and schedule around magic though. This year we’ve been building up a participation system that will help make contributions more reliable and predictable, so that teams can plan and schedule around leveraging the Mozilla community.

Pathways, tools and education are part of that system. Something else we’re trying is contribution challenges. These will identify unmet needs where scale and asynchronous activities can provide impact in the short term and where there is strong interest within the volunteer community.

The challenges will also specify the when, where, who and how of the idea, so that we can intentionally design for participation at the beginning and have a prepared way that we’re rallying people to take action.

For next steps, leadership of the Mozilla Reps program is meeting in Berlin from September 12-14 and they’ll be working on this concept as well as on some specific challenge ideas. There will be more to share after that.

[image: RemoCamp Berlin]

If you’re interested in helping with this and want to get involved, take a look at the contribution challenges etherpad for more background and a list of challenge ideas. Then join the community building mailing list and share your thoughts, comments and questions.



Nathan Froyd: xpcom and move constructors

Mon, 08/09/2014 - 21:15

Benjamin Smedberg recently announced that he was handing over XPCOM module ownership duties to me.  XPCOM contains basic data structures used throughout the Mozilla codebase, so changes to its code can have wide-ranging effects.  I’m honored to have been given responsibility for a core piece of the Gecko platform.

One issue that’s come up recently and I’m sure will continue to come up is changing XPCOM data structures to support two new C++11 features, rvalue references and their killer app, move constructors.  If you aren’t familiar with C++11’s new rvalue references feature, I highly recommend C++ Rvalue References Explained.  Move constructors are already being put to good use elsewhere in the codebase, notably mozilla::UniquePtr, which can be used to replace XPCOM’s nsAutoPtr and nsAutoRef (bug 1055035).  And some XPCOM data structures have received the move constructor treatment, notably nsRefPtr (bug 980753) and nsTArray (bug 982212).
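
For readers who haven't used rvalue references before, here is a minimal, self-contained illustration (plain C++11, not Gecko code) of what a move constructor buys you: the new object steals the source's resources instead of copying them.

#include <cstddef>
#include <utility>

class Buffer {
public:
  explicit Buffer(size_t aLength)
    : mData(new char[aLength]), mLength(aLength) {}

  // Move constructor: bind to an rvalue (a temporary or the result of
  // std::move), take ownership of the allocation, and leave the source empty.
  Buffer(Buffer&& aOther)
    : mData(aOther.mData), mLength(aOther.mLength) {
    aOther.mData = nullptr;
    aOther.mLength = 0;
  }

  // Copying disabled, so ownership transfers are always explicit moves.
  Buffer(const Buffer&) = delete;
  Buffer& operator=(const Buffer&) = delete;

  ~Buffer() { delete[] mData; }

private:
  char* mData;
  size_t mLength;
};

Buffer MakeBuffer() {
  Buffer b(1024);
  return b; // moved out (or elided), never deep-copied
}

int main() {
  Buffer a = MakeBuffer();  // move-constructed from the returned value
  Buffer b = std::move(a);  // explicit move; `a` no longer owns the data
  return 0;
}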

A recent discussion and the associated bug, however, decided that the core reference-counted smart pointer class in XPCOM, nsCOMPtr, shouldn’t support move constructors.  Move constructors could have replaced the already_AddRefed usage associated with nsCOMPtr, such as:

already_AddRefed<nsIMyInterface>
NS_DoSomething(...)
{
  nsCOMPtr<nsIMyInterface> interface = ...;
  // do some initialization stuff
  return interface.forget();
}

with the slightly shorter:

nsCOMPtr<nsIMyInterface>
NS_DoSomething(...)
{
  nsCOMPtr<nsIMyInterface> interface = ...;
  // do some initialization stuff
  return interface;
}

There were two primary arguments against move constructor support.  The first argument was that the explicitness of having to call .forget() on an nsCOMPtr (along with the explicitness of the already_AddRefed type), rather than returning it, is valuable for the code author, the patch reviewer, and subsequent readers of the code.  When dealing with ownership issues in C++, it pays to be more explicit, rather than less.  The second argument was that due to the implicit conversion of nsCOMPtr<T> to a bare T* pointer (a common pattern in smart pointer classes), returning nsCOMPtr<T> from functions makes it potentially easy to write buggy code:

// What really happens in the below piece of code is something like:
//
// nsIMyInterface* p;
// {
//   nsCOMPtr<nsIMyInterface> tmp(NS_DoSomething(...));
//   p = tmp.get();
// }
//
// which is bad if NS_DoSomething is returning the only ref to the object.
// p now points to deleted memory, which is a security risk.
nsIMyInterface* p = NS_DoSomething(...);

(I should note that we can return nsCOMPtr<T> from functions today, and in most cases, thanks to compiler optimizations, it will be as efficient as returning already_AddRefed.  But Gecko culture is such that a function returning nsCOMPtr<T> would be quite unusual, and therefore unlikely to pass code review.)

The changes to add move constructors to nsRefPtr and nsTArray?  They were reviewed by me.  And the nixing of move constructors for nsCOMPtr?  That was also done by me (with a lot of feedback from other people).

I accept the charge of inconsistency.  However, I offer the following defense.  In the case of nsTArray, there are no ownership issues like there are with nsCOMPtr: you either own the array, or you don’t, so many of the issues raised about nsCOMPtr don’t apply in that case.

For the case of nsRefPtr, it is true that I didn’t seek out as much input from other people before approving the patch.  But the nsRefPtr patch was also done without the explicit goal of removing already_AddRefed from the code base, which made it both smaller in scope and more palatable.  Also, my hunch is that nsRefPtr is used somewhat less than nsCOMPtr (although this may be changing somewhat given other improvements in the codebase, like WebIDL), and so it provides an interesting testbed for whether move constructors and/or less explicit transfers of ownership are as much of a problem as argued above.


Henrik Skupin: Firefox Automation report – week 29/30 2014

Mon, 08/09/2014 - 20:37

In this post you can find an overview of the work that happened in the Firefox Automation team during weeks 29 and 30.

Highlights

During week 29 it was time again to merge the mozmill-tests branches to support the upcoming release of Firefox 31.0. All the necessary work was handled on bug 1036881, which also included the creation of the new esr31 branch. Accordingly, we also had to update our mozmill-ci system and got that support landed on production.

The RelEng team asked us if we could help in setting up Mozmill update tests for testing the new update server aka Balrog. Henrik investigated the necessary tasks, and implemented the override-update-url feature in our tests and the mozmill-automation update script. Finally he was able to release mozmill-automation 2.6.0.2 two hours before heading out for 2 weeks of vacation. That means Mozmill CI could be used to test updates for the new update server.

Individual Updates

For more granular updates of each individual team member please visit our weekly team etherpad for week 29 and week 30.

Meeting Details

If you are interested in further details and discussions you might also want to have a look at the meeting agenda, the video recording, and notes from the Firefox Automation meetings of week 29 and week 30.

