
MikeK's software notebook

What you will find

This used to be the place where I wrote stuff I was thinking about while working on the Mozilla project.

Maybe in the near future I'll start to update it again as I'm involved in a couple of new open-source projects - updates pending...

Building with scripts, scratchbox and LNG (last known good)

Mozilla coding hints Posted on 15 Jun, 2010 23:03:06

(Thanks to Alex Sallin, this post is also available in Czech here.)

This blog post is about how I build using scripts instead of calling make directly.

Last known good

When you do “random” pulls from mozilla-central you are rarely sure what the state of the tree is before you pull. It takes some time before the tinderbox gets updated, it might be burning red on a platform you don’t care about, or it might not have a build for the special configuration you are working on.

To solve this problem I have two cron scripts running on my machine: one that pulls the latest and greatest from mozilla-central every half hour, and one that tries to build whatever was pulled. If the build succeeds with all the configurations that are important to me, it tags the revision so I can update to it whenever I need to. That means that at all times I can update to a known good revision. I do incremental builds, and while that does not always give the same result as a clean build, it is close enough for day-to-day use.
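Wired up with cron, the two jobs could look something like this (the script names and minute fields here are illustrative, not my actual crontab):

```
# Pull the latest mozilla-central every half hour:
0,30 * * * *  $HOME/bin/lng-pull.sh
# Try to build whatever was pulled, and tag on success:
15 * * * *    $HOME/bin/lng-build.sh
```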

(The rest of this post is mostly relevant in a Linux environment)

Doing scratchbox builds without being in scratchbox

The trickiest part of creating the scripts was making the compile happen in scratchbox (when building for Maemo) – I was already doing:

$ sudo mount --bind ~/MozillaCode /scratchbox/users/mike/home/mike/MozillaCode

This lets me share a single copy of the source code between scratchbox and my normal development directory. (The above command makes ~/MozillaCode contain the same content inside and outside scratchbox for a user called “mike”. Remember to create the empty target directory before executing the command, and note that you can’t do it the other way around, mounting the scratchbox directory in your home directory.)

My first attempt at building in scratchbox, without first logging into scratchbox, involved calling a script running in scratchbox that did the building – but then I found that it is actually possible to keep the same environment inside scratchbox as you were in when launching it. This is done with the -k flag, and you can even shift into a specific directory with the -d flag.

So to build, my script sets up the correct MOZCONFIG and then executes something like:

$ scratchbox -d "$sourceDir" -k make -f client.mk

where sourceDir points to the mozilla-central that I want to build.

Doing scratchbox builds without being in scratchbox from a cron script

Now, my Linux knowledge is too limited to know the reason why – but in order to run the scratchbox command from a script run from cron, you need to first do:

export USER=mike

and then the scratchbox command – that is, if you happen to share my user name; otherwise substitute your own.

Only getting the relevant output

A build produces a lot of output that is not very relevant in most cases. What matters most are the errors, which can be hidden inside that output – and they hide especially well if you are running multi-threaded builds.

The first thing I did was to add a:

mk_add_options MOZ_MAKE_FLAGS="--no-print-directory"

to my mozconfig files, as that takes a lot of unneeded information away – I’m sure it’s useful for some people to see where the build script is, but not for me.

The other thing I did was to pipe all the standard output to a file, leaving only the error output, by executing make like:

$ make -f client.mk > stdoutLogFile.txt

But as I wanted my cron script to be able to inspect the error output to detect failures or successes, I also needed to redirect the error stream; that is done with the 2> redirection:

$ make -f client.mk > stdoutLogFile.txt 2> stderrLogFile.txt
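To see the two redirections in isolation, here is a tiny stand-in command that writes to both streams (file names as in the make example above; the echo lines are just placeholders for real build output):

```shell
# Write one line to stdout and one to stderr,
# then send each stream to its own file.
( echo "normal build output"
  echo "error: something failed" 1>&2 ) > stdoutLogFile.txt 2> stderrLogFile.txt

cat stderrLogFile.txt   # only the error line ends up here
```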

In my actual script the files are renamed so I can see which build generated them, and _OK is appended to the filename if the build was successful, _Fail if there were build errors. This enables me to see at a glance which platforms are currently building and which are failing, and to get the build errors I just need to open the file(s) ending in _Fail. Very easy and very convenient (I can even see which target is being built at the moment, as that gets a _Building suffix).
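A minimal sketch of that naming scheme – the function name, the Result_* pattern, and the stand-in build commands below are mine for illustration, not taken from the actual script:

```shell
# Run a build command, capture its stderr, and mark the log _OK or _Fail.
build_and_mark() {
  target="$1"; shift
  log="Result_${target}"
  rm -f "${log}_OK" "${log}_Fail"        # clear markers from the previous run
  if "$@" > /dev/null 2> "${log}_Building"; then
    mv "${log}_Building" "${log}_OK"     # build succeeded
  else
    mv "${log}_Building" "${log}_Fail"   # build failed; errors are in the file
  fi
}

build_and_mark FF_PD  true                                   # stand-in for a success
build_and_mark FFM_NR sh -c 'echo "error: boom" 1>&2; exit 1'  # stand-in for a failure
```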

At this very moment the Result* content of my LatestBuild directory is like:


These correspond to the “stderrLogFile.txt” in the above make example, but for each platform I’m building for.

It can be seen that right now everything seems fine for the platforms I care about in my daily work.

What I can do now to get the latest code in my working directory is to execute:

$ hg pull
$ hg update -r mikek-lng

(I always clone from my local repository rather than directly from mozilla-central when I start a new directory.) This way I can be almost sure that any failures after the update are due to errors in the patch I’m working on, and not something coming from mozilla-central – which was my main goal.

Building with a script instead of the command line

Previously I used to have several command line windows open, one configured for each target that I wanted to build for (like Firefox Mobile PC, Firefox Mobile PC Qt version, Firefox Mobile Maemo Release version, …). It was confusing, and I was never sure which platforms I had kicked off builds on, or which window belonged to which platform.

As creating the LNG scripts had given me a basic (and dangerous) knowledge of writing shell scripts on my Ubuntu box, I got the idea of creating a single script that could do all the building – and so iBuild was created. What I have now is a simple tool I can run that reports back to me, in a very simple way, whether the build is a success or not.

I can do the quick and dirty, that takes forever:

$ ./iBuild all

This will simply build all the targets that I find relevant, as seen above, or I can specify what I’m currently interested in, like:

$ ./iBuild FFM_NR FFM_NR_QT FF_PD
(FF_* = Firefox, FFM_* = Firefox Mobile, *_P* = PC, *_N* = Maemo (Nokia), *_?R = Release, *_?D = Debug, *_QT = Qt version; otherwise it defaults to Gtk.) It’s all done with a big table and a number of mozconfigs, but I hope one day it will auto-generate the mozconfigs.
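The big table could be sketched as a shell case statement along these lines (the function name and the mozconfig file names are made up for illustration; my actual table differs):

```shell
# Hypothetical argument -> mozconfig lookup, as a shell case table.
config_for() {
  case "$1" in
    FF_PD)     echo mozconfig.firefox-pc-debug ;;
    FFM_NR)    echo mozconfig.mobile-maemo-release ;;
    FFM_NR_QT) echo mozconfig.mobile-maemo-release-qt ;;
    *)         echo "unknown target: $1" 1>&2; return 1 ;;
  esac
}

# Point the build at the right mozconfig before invoking make:
MOZCONFIG="$(config_for FF_PD)"
export MOZCONFIG
```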

A build cycle now looks something like (Note, scratchbox Maemo builds are handled inline):

Building from /home/mike/MozillaCode/100614
parameter is FFM_NR FFM_NR_QT FF_PD

Building FFM_NR
FireFox Mobile Nokia Release
Using Scratchbox for building
Build success

Building FFM_NR_QT
FireFox Mobile QT Nokia Release
Using Scratchbox for building
Build success

Building FF_PD
Firefox PC Debug
Build success

All builds were ok

The bottom line is the important one – it tells me at a single glance that everything went well, where before I needed to go through each individual command line window. If a build had failed, it is easy to see which one, and I have the error log saved in a file on the drive so I can see the exact error message.

My iBuild script is by no means finished: it contains very little error handling, it is constantly evolving, and while it works for me it might not work for anyone else. And while I didn’t intend for it to launch an ICBM attack when executed, it might – use at your own risk.

However, feel free to be inspired by it and create your own modified version – it can be found here and must be executed in the directory that contains your mozilla-central, e.g. the directory below where you would usually execute make from.

The currentSyncTarget() function contains the translation between command line arguments and mozconfig files, and the buildAll() function defines what the “all” command will build.

The script that is called from cron does a

hg pull
hg update -C

and then calls iBuild all to do the actual building. It’s protected against multiple executions in the same simplified way as used in the iBuild script: with a lock file.
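The lock-file trick can be sketched like this – a simplification, guessing at the details, that uses mkdir (which is atomic) so two overlapping cron runs can never both acquire the lock:

```shell
# Minimal lock-file guard: only one build runs at a time.
LOCK="./ibuild.lock"                      # hypothetical lock path

acquire() { mkdir "$LOCK" 2>/dev/null; }  # fails if the lock already exists
release() { rmdir "$LOCK"; }

if acquire; then
  echo "got the lock, building..."
  # ... pull and build steps would go here ...
  release
else
  echo "another build is already running, skipping this run"
fi
```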

If iBuild is successful it then uses hg to tag the version

hg tag -f mikek-lng

Any feedback will, as usual, be appreciated.

nsAutoPtr clever in its own way

Mozilla coding hints Posted on 13 Jul, 2009 16:20:17

So today I learned the hard way the meaning of nsAutoPtr<>. I started to use it when I copied a piece of code from another component that did something similar to what I was doing. What I didn’t realize was the true purpose of nsAutoPtr<>, which led to a… shall we say crash in Fennec!

I (wrongly) assumed it was some magic kind of pointer that you could assign to and use as a normal pointer – well, you can, if you know how it’s supposed to work.

I imagine that nsAutoPtr was created to help developers prevent one of the common mistakes, namely forgetting to release (delete) an object that has been created (new’ed) dynamically. It behaves very badly, however, when you try to store pointers to the object in multiple places.

Let me first explain how I now understand nsAutoPtr<>. An nsAutoPtr<> should be seen as a simple pointer that remembers whatever pointer you assign to it – but if it already holds a pointer when you assign something new to it (NULL or another pointer), it deletes whatever it held previously:

// myObj auto-initialised to NULL
nsAutoPtr<myType> myObj;

myObj = new myType(A);
// myObj now holds a pointer to myType(A)

myObj = new myType(B);
// The previous content myType(A) has
// been deleted and myObj now holds a
// pointer to myType(B)

myObj = NULL;
// myType(B) is now deleted

If you get a pointer back through an argument in a function call, this is the way to do it:

// prototype for example func
void MyFuncReturningAnObject(myType **);

// When calling the function

// Don’t do like this:
MyFuncReturningAnObject(&myObj); // THIS IS WRONG!!!!

// Do like this:
MyFuncReturningAnObject(getter_Transfers(myObj)); // This is CORRECT!!!!

// Or

// Declare a tmp pointer as a normal raw pointer
myType *tmpObj;
// Get a pointer to the object you wish to keep
MyFuncReturningAnObject(&tmpObj);
// Store the pointer
myObj = tmpObj;

Hence the good thing about nsAutoPtr is that as long as you only have one pointer to each object, you are fine and within its intended use. But when you need more complex patterns, you had better be very careful about ownership and lifetime – or use something else.

Let me illustrate with an example:

nsAutoPtr<myType> myObj1;
nsAutoPtr<myType> myObj2;

myObj1 = new myType(A);
myObj2 = myObj1;
// BE CAREFUL!!! – myObj1 now holds a NULL
// pointer (ownership was transferred to myObj2)

or another bad usage:

nsAutoPtr<myType> myObj;
myType* myRawPointer;

myObj = new myType(B);
myRawPointer = myObj;
// So far so good, myObj and myRawPointer both point to the same object

myObj = NULL;
// What myRawPointer points to has now been deleted!

or totally wrong as if you could scale wrongness (don’t try this at home):

nsAutoPtr<myType> myObj1;
nsAutoPtr<myType> myObj2;
myType* myRawPointer;

myObj1 = new myType(C);
myRawPointer = myObj1;
myObj2 = myRawPointer;
// So far so good, all point to myType(C)
// but beware – your code is doomed –
// as in “crash pending”!!!

myObj1 = NULL;
// The object is now gone, but even if you don’t
// use any of the other variables, the code WILL
// go wrong when myObj2 goes out of scope, as
// the nsAutoPtr<> will try to delete whatever
// myObj2 points to at that time – assigning NULL
// to myObj2 will only make it crash faster

So this last one was what I attempted, with the two nsAutoPtrs wrapped into some third-party code, different threads, and a couple of function calls – a lesson was learned 🙂

Happy coding!


Mozilla coding hints Posted on 22 Apr, 2009 23:14:28

Looking at the Mozilla code, you have probably come across the NS_DECL_ISUPPORTS and NS_DECL_ISUPPORTS_INHERITED macros. These are actually not Mozilla specific but rather part of XPCOM.

The purpose of these macros is reference counting and interface detection.

So instead of implementing the:

NS_IMETHOD QueryInterface(REFNSIID aIID, void** aInstancePtr);
NS_IMETHOD_(nsrefcnt) AddRef(void);
NS_IMETHOD_(nsrefcnt) Release(void);

functions that are declared in nsISupports, you add the NS_DECL_ISUPPORTS macro to your class definition, like:

class nsMyBasicClass : public nsISupports
{
public:
  // Basic refcount and interface detection macro
  NS_DECL_ISUPPORTS
  ...
};

Now, if you inherit from an interface that inherits from nsISupports, then you need to specify this interface too:

class nsMyInterfaceImplementerClass : public nsIMyGreatInterface
{
public:
  // Basic refcount and interface detection macro
  NS_DECL_ISUPPORTS
  // Prototypes for the functions declared in nsIMyGreatInterface
  NS_DECL_NSIMYGREATINTERFACE
  ...
};

If you inherit from multiple interfaces then you just list them all instead of NS_DECL_NSIMYGREATINTERFACE in the example above.

The above “macros” take care of the prototyping of the functions, you also need to use some “macros” to implement the body of the functions.

In the case where there is a direct inheritance from nsISupports the “macro” should be:

NS_IMPL_ISUPPORTS0(nsMyBasicClass)

(Just put it anywhere in your source file)

If you implement multiple interfaces you replace the 0 in the end of the name with the number of interfaces that you implement, and list the names of these interfaces after the name of the class that implements them:

NS_IMPL_ISUPPORTS1(nsMyInterfaceImplementerClass, nsIMyGreatInterface)

In the case where you inherit from multiple classes that already implement the nsISupports interface, you can get an ambiguity as to which functions should do the reference counting – to solve this you must use the NS_DECL_ISUPPORTS_INHERITED “macro” instead of the plain NS_DECL_ISUPPORTS “macro”.

Remember that all pointers to interfaces/classes should use a reference with the type:

nsCOMPtr<nsMyType>

rather than a nsMyType*, as you don’t want to take care of the reference counting manually.