I just returned home this evening after spending four days in Copenhagen. The weekend was for the Mozilla Maemo Danish Weekend, which was a really nice event, and it was great to meet people from the community. The event was hosted at the ITU in Copenhagen, and pictures are available on Flickr here.
For Linux, try oprofile (a command-line tool) or sysprof (GUI-based).
My favorite for Windows is still AQTime.
This morning I saw a nice video presentation about the concepts driving development in the open-source Mozilla organization/community, if that is the right way to describe it…
It is the kind of presentation that makes you think afterwards, if you ignore the sales talk on Firefox and Mozilla built into it 😉
Why didn’t I have the option to take this kind of class when I went to university?
As I posted some hours ago, I was quite frustrated with the build system, and that frustration hasn’t lessened in the meantime. Initially I got it linking again on my system back home, but as that is a 64-bit Ubuntu machine and my laptop only runs a 32-bit version, it didn’t help me much except for verifying that the source code was OK. (My outgoing internet connection is too slow for running any GUI apps remotely; I’ll have to upgrade it when I get back home.)
So, suspecting the build system, I tried about every trick I knew of to trigger whatever it is that needs to be triggered: deleting the objdir, deleting the generated/cached files in the source tree… but I still got the same linker error. I even went to the point of using a hex editor to verify that the versions of the libraries I was linking against did indeed contain the name the linker was complaining about. They did…
So in a last desperate attempt to bypass the problem, I changed the order of the packages in the “configure.in” file, so that the package (“gstreamer-base-0.10”) containing the library (“libgstbase-0.10”) with the missing symbol (“gst_push_src_get_type”) became the first package mentioned. A rebuild later (now on my fast-but-slow-compared-to-my-desktop laptop), the error was gone and the software linked.
Guess I shouldn’t have been that surprised, as it was the same pattern I saw earlier today. Being curious, I tried to move it back, and guess what… it still compiled and linked. One word: ARggggghhh. Another day almost wasted.
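For what it’s worth, one plausible mechanism here is classic link order: a traditional Unix linker resolves undefined symbols left to right, so a library generally has to appear on the link line after the objects that need its symbols, and the package order in configure.in can end up controlling that link line through pkg-config. (Though, since moving it back later still linked, stale build state was evidently also in play.) A hypothetical fragment to illustrate; the PKG_CHECK_MODULES macro is real autoconf/pkg-config, but the variable name and package list are made up, not taken from the actual Fennec configure.in:

```
# Hypothetical configure.in snippet -- illustrative only.
# pkg-config emits libraries in the order the packages are listed,
# so putting gstreamer-base-0.10 first puts -lgstbase-0.10 early
# on the link line.
PKG_CHECK_MODULES(GSTREAMER,
                  gstreamer-base-0.10
                  gstreamer-0.10)
```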
…otherwise they would have stopped as they felt the pain when they pulled it out
Now, why do I have a sudden need to express my frustrations with build systems? They are, after all, made to help us, not to frustrate us.
My problem is that, beyond building a single file, I have not yet worked with any build system that I have been able to trust (I mean really trust), in the sense that when a build fails, you just know it is because you did something wrong, not because you were unlucky with the build process itself.
So last week, back in Denmark, I had this perfectly fine source tree that built without any issues on my desktop. I copied it to my USB stick and took it and my laptop with me on the plane to California. I arrived at the Mozilla headquarters, copied the contents of the USB stick onto my laptop, rebuilt… error…
Oh well, I thought, maybe I was too bold copying the objdir from one machine to another. So I deleted the objdir and did a rebuild. Still an error… hmm… I did a remote login to my desktop back in Denmark, deleted the objdir, and did a rebuild => error… arghh… what was happening?
Not having too much confidence in build systems, I tried again, with the same result: a linking failure. About a day of frustration later, I copied the source tree on the remote machine into a new directory, deleted the objdir, and did a new build. Do you know what… now it succeeded.
Talking to one of the guys here, it seems it is a good idea to keep your objdir out of your source dir. I have now reconfigured my build to do that. But come on… why did it stop working in the first place? Why did I have to waste most of a day on a non-issue?
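For reference, the usual way to keep the objdir out of the source tree in Mozilla builds is a .mozconfig file in the source root; the mk_add_options syntax is the standard mechanism, but the path and -j value below are illustrative, not taken from my actual setup:

```
# .mozconfig -- illustrative values, not my actual configuration
mk_add_options MOZ_OBJDIR=@TOPSRCDIR@/../objdir-fennec
mk_add_options MOZ_MAKE_FLAGS="-j4"   # parallel make across 4 cores
```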
I want to do coding, not fight with the tools. Why isn’t it possible to make a deterministic build system?
As I wrote a few days ago, I was considering updating my desktop PC to run Linux, so I decided to give it a go, and I’m happy I did 🙂
The performance is so much better than what I’m used to: I can do a full build of Fennec, from scratch, in 10 minutes; it used to take about 40 on my laptop. Doing the build is actually the first time I have seen all 4 cores of my CPU max out. I’m used to running Windows Vista on this machine and have never seen the CPU usage go above 60-70%.
A full rules-based source compare of two source trees takes 5-6 minutes with Beyond Compare running in Wine.
The only problem I have seen so far is that Source Insight running under Wine sometimes fails to open the menu when I use my mouse (actually my Wacom tablet), but then it’s probably better anyway if I use the keyboard shortcuts 🙂
I’m running a 64-bit version of Ubuntu, so I need to do some magic to get Scratchbox running, but until then I can continue to use the laptop for these builds.
I don’t seem to be the only one with an opinion on the way we use Bugzilla; there has been quite some activity on the subject in the mozilla.governance newsgroup over the last few days. I posted a couple of new entries to the subject “Moving patches from posted-to-bug to in-the-tree: a process discussion” as a follow-up to the one I posted last week.
What I try to argue for in there is making the process as simple as possible to follow, always having someone responsible for each individual bug, and automating it as much as possible. Who wouldn’t want that?
I just posted a reply to m.d.platform about the bug tracking process:
On 27-03-2009 06:46, Jeff Walden wrote:
> A bug was filed, a patch was posted, but it didn’t get in the tree until
> “too late” (for a not-worst-case-scenario definition of the phrase, but
> still way too close to it for comfort). Some questions to ponder:
> When did we fail to keep things moving along?
> How do we make the process clearer and more discoverable for new hands?
> What documentation needs to be modified to help that clarification?
> How and where should such documentation be found, and where should we
> position links to it on our sites to make it easy to find?
> How should responsibility for making sure posted patches get in the tree
> be split up?
> Are there any changes we should make to the super-review process (or the
> review process) to ensure patches don’t fall through the cracks?
> Where do we document any such changes? (Do we even *have* any
> documentation on doing non-super-review reviews and what the goal of a
> review is?)
> Those are just for starters, and I’m sure there are others people can
> bring up as discussion happens.
> All comments are appreciated.
Being relatively green in the Mozilla processes, I don’t know the details of how it is handled today, but it seems to depend a lot on humans doing things that could happen automatically. Another problem is documentation of the process: I have been told parts of the process, but I have never seen the document that describes it. Now, this might be because I didn’t search for it; I’ll get back to that in a sec.
So the issues I see are:
1) Lack of automation
2) You need to pull information to know the process (especially bad when the process changes)
What I would like to see is this integrated into Bugzilla. Let me explain:
– I’ll assume that we have a well-defined process, stating which states a bug can enter from a given other state, and the reasons/triggers for each state change. (I think it is out of scope to discuss the specific states here; the point is that there are states.)
– Now, any active bug must at all times have a person or group responsible for it. (By active I mean a bug that is not yet fixed-and-in-the-tree, and not closed for another reason, like “working as intended”.)
– This means that if you are not responsible for a bug, you are not expected to do anything with it. So when you request a review on a bug, you are no longer responsible for it, unless it comes back to you with a failed review. This could be automated: if you set the review flag, and the reviewer then fails the review (setting the “failed review” state), the bug is automatically assigned back to the one who requested the review.
– When you get assigned to a bug (for validation, testing, reviewing, fixing, committing, whatever reason), you should receive an e-mail, where you can see at a glance, from the title of the e-mail, that it is about something you (or a group you are a member of) are being assigned to.
– When you open a bug that you are assigned to, you could get a description that explains what you are supposed to do, the text being defined by the state the bug is in: “Verify the bug”, “Review attached patch”, etc.
You then only get the options for changing states that are defined by the process, with a description of why you would want to move it to that state. E.g. for a bug you are assigned to fix, you could have:
1) A fix for the bug has been attached => state=waiting for review, assignedTo=reviewer
2) You can’t reproduce the bug => state=more info wanted, assignedTo=reporter
3) You don’t have time to work on it now => state=waiting for time, assignedTo=you
4) You have accepted there is an issue and are working on fixing it => state=accepted working on it, assignedTo=you
etc. Note that the above choices are only to illustrate the principle; the actual states are again out of scope for this discussion.
The above should give visibility to the process. There is only one rule you must know: if you get a bug assigned, it is your responsibility to move it to the next state. You are told everything else you need to know when you open the bug in Bugzilla.
If the bug at any state gets assigned to a group instead of an individual, anyone in that group can change the assignment to an individual person in the group (so we don’t get multiple people working on the same issue unintentionally).
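The workflow above is essentially a state machine: each state defines the allowed next states, and each transition says who becomes responsible. A minimal sketch in Python, where the state names and routing rules are taken loosely from the examples above (they are illustrative, not an actual Bugzilla feature):

```python
# Allowed transitions per state, and who gets responsibility after each move.
TRANSITIONS = {
    "accepted-working-on-it": {
        "waiting-for-review": "reviewer",   # a fix has been attached
        "more-info-wanted": "reporter",     # can't reproduce the bug
        "waiting-for-time": "assignee",     # no time to work on it now
    },
    "waiting-for-review": {
        "failed-review": "requester",       # review failed: back to the requester
        "ready-to-commit": "committer",     # review passed
    },
}

def move(bug, new_state, people):
    """Move a bug to new_state and reassign responsibility accordingly."""
    allowed = TRANSITIONS.get(bug["state"], {})
    if new_state not in allowed:
        raise ValueError(f"cannot go from {bug['state']} to {new_state}")
    bug["state"] = new_state
    # The process guarantees someone is always responsible:
    bug["assigned_to"] = people[allowed[new_state]]
    return bug

bug = {"state": "accepted-working-on-it", "assigned_to": "dev@example.org"}
people = {"reviewer": "rev@example.org", "reporter": "rep@example.org",
          "assignee": "dev@example.org", "requester": "dev@example.org"}
move(bug, "waiting-for-review", people)
print(bug["state"], bug["assigned_to"])  # waiting-for-review rev@example.org
```

The point of the table is that the UI only ever offers the keys of `TRANSITIONS[current_state]`, each with its human-readable reason.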
So how do we ensure that bugs don’t get forgotten by individuals or stay in the system forever (or past an important deadline)?
– Assuming someone is responsible for any given product and/or area, these people should be able to set up triggers with time limits, so that if a high-impact bug stays in the review state for more than a specified time, the one responsible for the product gets notified and can choose to take appropriate action.
– In my view, it should never be e.g. a developer’s responsibility to prioritize for or push a reviewer; it must be the project that does this kind of prioritization.
– It is the same if you report an issue: you are only expected to give a useful description of the issue. In my view, the reporter should not push the fixing by any means other than perhaps setting the initial severity of the bug; it is the responsibility of the project, or the owner of the area, to do the prioritization with respect to the other tasks at hand.
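The time-limit triggers could be as simple as a periodic query over the bug database. A sketch of the idea, where the field names, the seven-day limit, and the shape of the bug records are all hypothetical:

```python
from datetime import datetime, timedelta

REVIEW_LIMIT = timedelta(days=7)  # hypothetical per-product limit

def stale_bugs(bugs, now, state="waiting-for-review", limit=REVIEW_LIMIT):
    """Return bugs that have sat in `state` for longer than `limit`."""
    return [b for b in bugs
            if b["state"] == state and now - b["entered_state"] > limit]

now = datetime(2009, 4, 1)
bugs = [
    {"id": 1, "state": "waiting-for-review",
     "entered_state": datetime(2009, 3, 20)},   # 12 days: stale
    {"id": 2, "state": "waiting-for-review",
     "entered_state": datetime(2009, 3, 30)},   # 2 days: fine
]
overdue = stale_bugs(bugs, now)
print([b["id"] for b in overdue])  # [1]
```

A cron job running this query could then e-mail the product owner with the overdue list.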
I have written a little about automation above, e.g. the triggers the project owner can set if relevant bugs get stalled in the system, and the e-mails that are sent with tags in the title when you get assigned something new (at least we have the e-mails today).
But the automation could also extend to choosing whom to assign bugs to at any given state. So if I create a new bug, it could automatically get assigned to the group (or individual) responsible for the area the bug is reported in (with the e-mail notification).
When a bug is moved to the review state, the reviewer could be chosen from the files that are patched, etc.
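Picking a reviewer from the patched files could work off a per-directory ownership map, much like module ownership today. A sketch; the directory names and owner groups below are made up for illustration:

```python
# Map directory prefixes to the group responsible for reviewing them.
OWNERS = {
    "content/media": "media-reviewers",
    "mobile": "fennec-reviewers",
}

def pick_reviewer(patched_files, owners=OWNERS, default="triage"):
    """Return the owner of the longest matching directory prefix, if any."""
    best = ""
    reviewer = default
    for path in patched_files:
        for prefix, owner in owners.items():
            if path.startswith(prefix + "/") and len(prefix) > len(best):
                best, reviewer = prefix, owner
    return reviewer

print(pick_reviewer(["content/media/nsMediaStream.cpp"]))  # media-reviewers
print(pick_reviewer(["docs/readme.txt"]))                  # triage
```

Longest-prefix matching lets a more specific subdirectory override a broad area owner, and the default catches patches nobody has claimed.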