Wednesday, December 30, 2009

Unofficial 2009 in Review


Shortly we will be entering the Space Year 2010, so here is my unofficial review of this last year from the point of view of a DS fan and part-time homebrew coder. Please don't feel offended if I've left out your game or application. There's been a lot to get through.

This roundup was partly done from my recollections of homebrew in the last 12 months, and partly from trawling through my Google Reader feeds. I haven't been able to try out all of the releases this year so let me know in the comments if you feel particularly hard done by.

While creating this post, PDRoms has been very useful. This site does a really great job of keeping the whole scene ticking over. Kojote has consistently posted interesting stuff, especially when you consider that he also covers a load of other portable platforms. Keep up the good work dude!

Other sites of note are Drunken Coders and NintendoMax (That's their DS RSS feed). Writing your own DS programs seems to be all the rage in France and NintendoMax's site has had the scoop on the latest news most of the year. We Hack DSi has also been an interesting read, though their posts are rather less frequent and tend to mix in some of the less salubrious aspects of the scene.

I've tried to link to the latest release for each game or application where I can. This way you don't end up downloading an initial, buggier release if there is a newer one available. This does mean that the text of the link ("Version 0.9" or whatever) doesn't always match up with the latest release on the target page, so keep that in mind if you are a history weeny.


DSi image The year began with news reports on the latest revision of the DS hardware, the DSi, which was already out in Japan. With 2 cameras and better wifi, it promised to be an interesting (if rather conservative) upgrade and everyone was looking forward to the releases in Europe and the US.

The homebrew scene started the year off with the arrival of the deadline for the Winter 2008 Compo. Nyarla's Snowride shoot 'em up came in first place, with the judges being particularly impressed by its graphics, music and Christmassy theme. It is a great game too; definitely download it if you haven't had the chance to play it yet. Damn those penguins! It has the added advantage of being seasonal if you play it now. Bonus.

Around the middle of January Okiwi was officially abandoned, marking the end of the DS browser wars. Writing a web browser for a console with just 4MB of memory and a TCP library that is not as battle hardened as its big-brother PC counterparts is an exercise in frustration, so I know exactly where Pedro was coming from. I've abandoned Bunjalloo several times already, but keep coming back for more punishment.

Towards the end of the month the moonbooks project resurfaced after having the website hacked the previous year. In a strange homage to Geocities (perhaps) it still shows "under construction".

There were lots of homebrew releases in January. Some that stood out for me were:

  • Powder 110 - Jeff continued to improve his fantastic rogue-like throughout 2009, albeit at a slower rate than in previous years.

  • Moonshell 2.0 - several beta releases marked the return of the ubiquitous menu/launcher/mp3/video player

  • AemioDA - vector graphics emulator with an unpronounceable name, but it lets you play classics like Tempest, Asteroids and Lunar Lander. Lunar Lander sadly seems to ignore all my attempts to control the craft. Bummer.

  • Brix - ace physics game where you have to make towers topple over while carefully balanced sticks of dynamite land in the right place. Plays much better than my description, luckily.

  • maketens - mathsy fun with the numbers 1 to 5. An original puzzler that is fully playable, but lacks that last little bit of polish to make it a real classic. Still good though.


I did want to mention DS1942 at this point too, an emulator for the arcade classic 1942. I have been completely unable to get it to run though, with the emulator complaining about a missing "srb-03.m3" despite it being right there. Ho hum. You might have more luck. It did mark the start of a 2009 trend: stand-alone emulators for individual arcade games, or sets of games. More later.


No hardware news for February, but the NDS homebrew scene was in full swing. Several of the games released in January got more updates.


I have fond memories of playing through Tetrominout, which is an amusing cross between Tetris and Breakout. Certainly it was a surprising twist on the usual "hello world" titles that new game coders often cut their teeth on. Also out this month was a new version of the devkitARM toolchain and libraries.

Other homebrew of note in February were:

  • And's PDF reader, which bent the laws of computer science to squeeze the rather bloated PDF format onto the DS.

  • Glubies Planet. We don't often see 3D homebrew games on the DS, so this one stood out from the crowd. The puzzley gameplay isn't bad either.


March was a slow month, with only a few updates to already released programs including And's PDF reader.

QuirkDS wasn't really new but it did get a big update right at the end of March. It's a remake of the old Game Boy title Kwirk and plays a bit like Sokoban (a bit, not really much). I'm completely hopeless at it, so good job it has a level skip. If anyone knows how you're supposed to move the yellow blocks that appear from about level 3 onwards, then please let me know in the comments.

News began to trickle through of a great looking remake of the Commodore 64 shooter Warhawk.


The big news as we entered the second quarter of the year was the launch of the Nintendo DSi in Europe and the US. Almost right away we received confirmation of our worst fears: the current generation of flash devices didn't work on the new console!

It wasn't really that much of a shock of course, as Nintendo had threatened this already. In Japan there had already been confirmation of the bad news too. Still, it hit home when the console was finally out over here.

The Acekard had a working flash card that they had not-so-secretly been developing since the Japanese launch. Soon after this, R4 and EZ Flash released new flash cards that bypassed the DSi's stricter security. Homebrew was back on the cards.

Sadly all of these devices only gave (and still give) access to the DS hardware, not the new DSi features. Although there would later be exploits to get small programs running in DSi mode, at the end of the year we are still in the dark when it comes to details about the DSi hardware from an unofficial developer's point of view. This means that there is still no solution for connecting to the net with the DSi's faster network hardware, nor can we use secure WPA Wifi connections, nor can we slurp out all those photos and all that data that is stored on the built in RAM.


Homebrew out this month included an update to @gentakojima's DSTwitter app. Pretty cool, even if the jury is still out on the usefulness of Twitter. You should follow me on twitter here, if that's your bag. I don't say much though, mostly just complain about Dr Who. Don't try and visit gentakojima's old Acdrtux site though, you just get Rick Roll'd.

DSPack added to the growing list of arcade emulators released this year, this time emulating Pac-Man and derivatives from the MAME collection. Finding ROMS for these emulators proved to be a bit trickier - not everyone has the, er, technical know-how to back up the collection of arcade games that they keep in their spare room. Yes. Ahem.

The emulation is excellent. That just leaves the limitations of the DS screen to contend with. It would've been nice if there had been an option to keep the screen fixed in one spot, losing the top banner that takes up about a third of the top screen. As it is, not all of the screen fits on at once, and the play area scrolls to show Pac on the bottom screen at all times. I suppose a fixed screen would have meant the yellow one traversing the area in the middle of the 2 screens, which could have been quite confusing. Oh well. Good games, great emulator, worth checking out.


In May we got another update to devkitARM and its support libraries. There were no major additions this time round, but it was a solid maintenance release.

Powder had its Nelson release in May. And I suppose it was quite unlucky for us Powder fans; we wouldn't get another update until well into November.


After more than 6 months of silence, Wee Basic made a come back. This novel application lets you code in BASIC directly on your DS. Perhaps it is about as far from the cutting edge of computer science as you can get, but I found it a nostalgic amusement. You know what would make something like this even cooler? Auto-completion and some online help. You should see my .vim directory though, so perhaps my expectations are a bit far fetched.


Atmos was another cool-looking DS puzzle game that came out in June. The initial releases were multiplayer only, but the idea seemed original. The game could be described as a cross between othello and one of those colour matching games like Columns. Sadly the only record of its existence now is that page on PDRoms. Let this be a lesson for would-be coders: whatever you do, don't just post your game exclusively to some random forum! Especially if it requires log-in to download the title. You have probably spent a great deal of time writing your game, the least you can do is to take a few more minutes to create a more permanent home for your work on Google Sites or some other free web hosting service (yeah, like Geocities. Ha.)

Speaking of downloads, GameUp looks like a really great way to keep up to date with the latest DS homebrew. I've only discovered it recently, and it hasn't been updated since July, but the idea is a winner. It's a front end to a web site where you can download and rate homebrew titles, so no need to faff around with flash cards and swapping them between the PC and the DS. A bit like an unofficial "AppStore" application, I suppose. The only real problem is that it depends on the GameUp website being active and people updating the database with new homebrew. Sadly it seems that new programs have not been posted for quite some time.


The first rumours of DSi homebrew started to appear this month. It would be a while before more details were released, and all we got to see at this point was a video. Skeptics everywhere battled wits on the YouTube comments thread.


If we want to talk about games we could actually play, then the amazingly good Warhawk was finally released in July. Based on the C64 classic and coded entirely in ARM assembly, it was one of the best pieces of homebrew to come out all year. In fact for my money it was only topped by the group's second release later in the year...

Line Wars

The release of Line Wars DS added to the underrepresented 3D space game genre on the DS. Its author, Patrick Aalto, ported the game from the original x86 assembly code to C for the DS. He cheated by having the original source code, I bet ;-) The fact that he wrote that original code is not to be sneezed at, of course. Line Wars runs at a really smooth frame rate, making use of the DS's hardware 3D and lighting effects. Unlike Elite, in LW you take part in single "missions" that get progressively more difficult, with the focus here being on the action, the shootin' and the blastin'. Well worth the download either way.


facebook DSi

One of the highlights of the month of August, at least for us DSi owners, was the release of a new system menu. This upgrade, version 1.4, added Facebook support to the built in camera program. By connecting to a local Wifi network and then entering your Facebook account details to the DSi, you could upload any of the really terrible quality photos you'd taken. The DSi browser was also updated to a new minor version. Though details on the exact changes are non-existent, I suspect one of the changes was to make logging in on web-sites a lot more difficult; the updated browser seems to forget cookies almost instantly.

However, this was not the most dramatic feature of System Menu 1.4. Oh no. What really caught the eye was the new "blocks all DSi flash cards" feature that nobody asked for. And so began the next step in the cat-and-mouse game that card manufacturers and Nintendo were playing. Thanks to good design and forethought most of these next-gen flash cards had a way to update the firmware. They managed to patch their way out of the 1.4 straitjacket within a few days, but it did look pretty bleak for a while back there.

Other news in August came from WinterMute, who released his hack for DSi Classic Word Games. This hack allows a small amount of assembly code to run on the DSi in actual DSi mode. It exploits an error in the save game handling of a commercial title, which means that its viability for running DSi code in a more mainstream way is a tad limited - you have to own Classic Word Games and the hardware to upload save games to the cartridge to make use of the exploit. Cool nonetheless.

Regular DS homebrew releases continued in August. There were quite a few demos and unfinished early releases of games, but also updates to some old favourites, including DronDS. DronDS is a Tron light-cycles game that takes place in full 3D (on a 2D plane of course, you can't start riding up the walls). In the latest releases you can even play online against other Dronners. I've never coincided with anyone, but then I'm antisocial. If you have friends on IRC or even on Twitter (hey look, a use for Twitter!) then maybe you could gather a posse and have a game.


From the makers of Warhawk, a sneak preview video of another remake "Manic Miner - The Lost Levels" had us pining for its release back in September.

Nano Lua caused a flurry of activity as well this month. Lua is a great little language (even if it does have its warts. 1-indexed arrays, I am looking at you) and Nano Lua enabled people to write games for their DSes without getting their hands dirty with C or C++. I've not had a chance to check Nano Lua out in detail - it seems to be a bit lacking on documentation, natch - but people have released more things with it than with the previous DS-Lua efforts, so it must be doing something right.


Blockman Gets Screenshot

The floodgates opened again in October. Blockman Gets, a pacman/puzzle game, combined 2 genres (classic arcade and puzzle) to produce an infuriatingly difficult mind bender. No ghosts on the first level, and yet it's more difficult than the original!

For novelty value, Mario Bros Lemmings DS was pretty hard to beat. These are Mario themed levels for Lemmings DS, complete with Koopas, Goombas, Bowser, and Mario, all rendered in their full 8-bit NES glory. I suspect some sort of automatic process was used, as the levels don't really stand out as brilliant Lemmings puzzles (too many ∞ abilities, some levels cause the game to hang, unbalanced requirements), but the amusement factor is high and all the SMB1 levels seem to be in there.

Now I'm not a big fan of Pang, but Pang DS does a good job of emulating the arcade game. Once more, make sure you only use carefully created backups of those Pang bootleg arcade boards you have next to the washing machine. Don't just Google "Pang MAME ROMs" or anything, okay?

GBA veteran coder FluBBa ported his SEGA Master System/Game Gear emulator over to the DS. It's called S8DS and is very good, as you might expect. The DS has a large enough screen that the squashed graphics from the GBA version are a thing of the past. S8DS uses the usual DLDI/libFAT features, so there's no need to inject the MS/GG games into the NDS ROM - you can just read them from your flash card. There are loads of great MS games out there, which of course you er, back up from the original cartridges you own (this is getting silly). I spent many a happy hour wasting batteries playing Fantastic Dizzy on the Game Gear.

Red gameplay

I didn't see many people mention Red when it came out in October, but it is really good. Although it is a quasi-official conversion of a Flash game, it feels at home on the DS. Using the stylus, you control the defences of a base at the bottom of the screen. This base appears to be some sort of last stand against an onslaught of meteors on a red planet somewhere. Story be damned! The base is armed with a cannon, and by shooting balls that look like a deadly paper/spit combination at the oncoming rocks, you can deflect them away from the base. By charging up the cannon, you can shoot bigger wads at the meteors, giving you a better chance of deflecting them. There are some power-ups in there as well to mix things up a bit. It plays like a faster, more physicsy version of Missile Command. You have no excuses, go and play this one.

Manic Miner in the Lost Level Bouncy-Bouncy

Last, but by no means least, is the epic Manic Miner in the Lost Levels. Created by the same chaps that created Warhawk earlier in the year, together with input from Amiga Power's Stuart Campbell, MMLL gathers all the best bits from the myriad of Manic Miner ports and packages them up with all the pizzazz of a first-party Nintendo game. You get first class graphics and sound, snazzy menus, unlockable bonuses, hidden levels, high scores, time attacks, historical notes, insider jokes, 80s film references, sly digs at the C64... it's all there. It is impossible to imagine how they could have done more justice to good ol' miner Willy. And I'm not even especially fond of the original Manic Miner! (ducks the pick-axe thrown from the back row) But the way it is presented here, with its skippable levels and modern day graphics, makes it much more accessible. All this without compromising the classic pixel-perfect, difficulty-turned-up-to-11 gameplay. In a word: "stunning".


Woopsi 0.43

Woopsi had many updates throughout the year, and it's good to see Ant updating his GUI library. I still haven't completely ruled out a port to the Woopsi library for Bunjalloo - maybe in 2010? For me all it's really lacking is some way to support UTF-8, which is a tricky problem when you don't want to support STL collections.

Another teaser was released from the team behind the Manic Miner remake in November. This time they were going to have a crack at The Detective. I'll admit now that I had not heard of the game before, despite being an ex-Commodore 64 owner. Still, with their pedigree, even if the original was pants I'm looking forward to this one.

Munky Blocks screenshot

From the creator of retro-platform/puzzler Platdude, Munky Blocks also came out in November. I would describe the gameplay as somewhere between Sokoban and Columns, with a bit of Yoshi thrown in. Though the gameplay is simple to grasp and has only a few concepts, the levels start to get fiendish quite quickly. Viewed in side-on 2D, you control a Monkey/Cat thing that can swallow blocks and climb up platforms. The idea is to place like-coloured blocks next to one another in groups of 3 or more so that they disappear. When you have swallowed a block, you can then walk around with it in your gut, but you can only climb up a single floor level when you're this full. After regurgitating the block you can move freely again. Later levels mix things up with switches that open doors, keys you can collect to unlock other doors, and so on. A very good game, and strangely claustrophobic I found.

In other, non-playable news, the first DSi-mode game was announced, although it is only in video form. Perhaps 2010 will see this taken to the next level? Let's hope so.

A new emulator was announced by Patrick Aalto (of Line Wars fame) back at the start of November. This time the emulation target was the x86! Specifically, the 80286 with MCGA graphics and SoundBlaster audio. Quite an achievement. Full details of DSx86 are on Patrick's site. The current tech demo includes the DOS version of Line Wars, though his blog also shows screenshots of other programs and games running, including SYSINFO and Paratrooper. Hopefully 2010 will see more improvements and news.


I'm writing this in December. Past tense confusion overload.

Another great puzzle game appeared at the start of December, called Cogito. This has nothing to do with version control, but is a tricky sliding block puzzle game. It is somewhere between a Rubik's cube and those plastic sliding block puzzles. You have to align rows of coloured blocks on the bottom screen in the way that they are shown on the top screen. The trick is that you can only move entire rows at once. It is simple, but much more complicated to do than you first expect. Another great game.

Red Temple probably came out at the end of November, but it got renamed and had a new release right at the start of December too, so here it is. Besides, it's my blog so I'll put it where I like :-) Now I really quite liked Red Temple. The game starts off with a few effects on the start screen, then you are taken away to the world overview screen to select your start level. Once chosen, the game proper starts off. It is a straightforward collect-'em-up where you play a snake/worm thing that goes around eating a load of fruit. As snakes and worms often do, I imagine.

The artwork is a colourful enough tile-based 2D affair (described rather unfairly as "crap" by the author; the graphics aren't that bad! At least they are original, which counts for a lot IMO), and the sounds are made up of some quality coder-generated-sounds and grunts. At least that's what it sounded like - the "level win" and "you're dead" sounds being especially amusing. It has a nice Amiga PD game feel to it, the sounds reminding me of the credits files that occasionally described how the authors went about making each sound ("falling in water - dropping an orange in a mug of coffee", that sort of thing). Not bad, not bad at all. There are a few videos on youtube if you don't fancy downloading the game to try it. But you should.

Ripholes in Rubbish

The great-looking Ripholes in Rubbish was a surprise release. It came out of nowhere pretty much completed. The game is a simple platform game, with an added twist that on some screens you can interact with the background. The "rip" in the name comes from this idea - you can rip holes in the background and go through to the other side. The later levels use this technique as part of the puzzles to get past otherwise impossible sections. Other puzzles involve capturing and moving clouds. The graphics use a distinctive hand-drawn look and the sound is varied and original. Sadly the game has a few bugs that cause it to either hang or do strange things, such as all objects on levels disappearing suddenly. Still, there have been further updates improving the stability, and as it is the game is a playable work of art.

The Detective Game

The last big release of the year was headsoft's The Detective Game. I'd not even heard of the original game, so this was something of an unknown. The graphics are great (Cook's bouncing chest aside) and the game is classic 8-bit "WTF do I have to do?!". It is an admirable remake. Sadly without the heritage of Manic Miner it doesn't have that elusive wow-factor, as they say on TV. I must say that Ben (AKA headcaze) has been great in fixing a few glitches that made the game unplayable on the EZ Flash cartridges, so extra kudos there.


Barring any last minute surprises, my game of the year award would go to Manic Miner in the Lost Levels. Not an easy choice; there have been a lot of fine chunks of code running on my DS this year.

I hope 2010 has as good a crop of games as this year has had. I know a lot of DS coders bemoan the death of the system, saying that everyone has moved on to smart phones, but I'm sure the ol' DS still has a few years left in her yet. Happy holidays, and happy homebrewing!

Wednesday, December 23, 2009

Bunjalloo release v0.8

I've just uploaded a new release of Bunjalloo. This contains all the performance improvements I've been working on recently as well as a few other changes. Full details are in the changelog.

One of the changes is a fix to the updater. Sadly I broke it in the last release. This means you will have to update manually by downloading the zip file and unpacking it to your flash card. I've added some automated tests during this release to make sure I don't mess it up again.

You can download the latest version here.

Thursday, November 19, 2009

Setting environment variables in Java

Have you ever noticed that the Java System class doesn't let you set environment variables? You can retrieve them using getenv() but there is no equivalent setenv() function.
First off, what is the environment? The manual entry for environ(7) describes the environment as:
an array of strings [that] have the form name=value. Common examples are USER (the name of the logged-in user), HOME (A user's login directory) and PATH (the sequence of directory prefixes that many programs use to search for a file known by an incomplete pathname).
It turns out that when you start up the JVM, it copies this environment into its own Map of Strings. The actual container it uses is an unmodifiable map, probably to be extra safe.
So in a running Java application we have 2 environments: the JVM copy that you can read via System.getenv() and the underlying environment that lives in the C library.
If we want to change the JVM's copy, we can do so using reflection. Or at least we should be able to, just as long as our code is not running in a sandbox. In that case you'd be right out of luck. Anyway, the code to fetch a modifiable copy of the environment could look like this:
import java.lang.reflect.Field;
import java.util.Map;
import java.util.HashMap;

public class Environment {
    @SuppressWarnings("unchecked")
    public static Map<String, String> getenv() {
        try {
            Map<String, String> unmodifiable = System.getenv();
            Class<?> cu = unmodifiable.getClass();
            Field m = cu.getDeclaredField("m");
            m.setAccessible(true);
            return (Map<String, String>) m.get(unmodifiable);
        } catch (Exception e) {
        }
        return new HashMap<String, String>();
    }
}
Calling System.getenv() returns us an UnmodifiableMap of Strings containing the copy of the C environment. In order to get to its squishy modifiable heart, we access the m member variable, which is modifiable. This is because UnmodifiableMap uses the proxy pattern - it implements the Map interface, but delegates all calls for retrieving values off to a member variable Map that contains the values and does all the real work. The set methods throw UnsupportedOperationException. By accessing that m we can change the JVM's copy of the environment from under its very nose. Heh heh.
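You can see the same proxy behaviour with Collections.unmodifiableMap, which is exactly the kind of wrapper System.getenv() hands back. This is just a toy sketch - the map contents here are invented for the example:

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

public class ProxyDemo {
    public static void main(String[] args) {
        Map<String, String> backing = new HashMap<String, String>();
        backing.put("HOME", "/home/rq");

        // The wrapper delegates reads to `backing` but rejects writes,
        // just like the map System.getenv() returns.
        Map<String, String> view = Collections.unmodifiableMap(backing);
        System.out.println(view.get("HOME"));
        try {
            view.put("HOME", "/tmp");
        } catch (UnsupportedOperationException e) {
            System.out.println("put rejected");
        }

        // A write through the backing map *is* visible via the view,
        // which is why grabbing the inner "m" field works.
        backing.put("EDITOR", "vim");
        System.out.println(view.get("EDITOR"));
    }
}
```

Poke the backing map and the "unmodifiable" view changes with it - that's the loophole the reflection trick exploits.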
Not so fast. On Windows this is slightly different. The environment there contains a different Map that allows you to search for variables in a case-insensitive way. System.getenv() returns one Map (the unmodifiable one, same as Linux) whereas System.getenv("name") looks up the value in the case-insensitive Map. In order to create a robust update-env implementation we should update both of these Maps. This needs some more reflection to get the Map in question out of the ProcessEnvironment, probably like this:
@SuppressWarnings("unchecked")
public static Map<String, String> getwinenv() {
    try {
        Class<?> sc = Class.forName("java.lang.ProcessEnvironment");
        Field caseinsensitive = sc.getDeclaredField("theCaseInsensitiveEnvironment");
        caseinsensitive.setAccessible(true);
        return (Map<String, String>) caseinsensitive.get(null);
    } catch (Exception e) {
    }
    return new HashMap<String, String>();
}
An example of how this may be useful without going any further would be if you have the DISPLAY environment variable set but you cannot use it (for example you connect via ssh -X, start a background process, then disconnect closing the X session connection too). In this situation even if you tell java.awt to run headless it may see that DISPLAY is set, try to use it and throw an exception. By clearing DISPLAY we can use headless methods to create off-screen graphics, to send them to a printer or to save a fake screen shot to file maybe for automated testing.
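As a quick sanity check of that headless route, here is a minimal sketch. It sidesteps DISPLAY entirely by forcing the java.awt.headless property before any AWT class is touched; the image size is arbitrary:

```java
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;

public class OffScreen {
    public static void main(String[] args) {
        // Force headless mode before AWT initialises, so a stale
        // DISPLAY variable cannot trip us up.
        System.setProperty("java.awt.headless", "true");

        // Off-screen drawing works fine with no X server reachable.
        BufferedImage img = new BufferedImage(32, 32, BufferedImage.TYPE_INT_RGB);
        Graphics2D g = img.createGraphics();
        g.fillRect(0, 0, 32, 32);
        g.dispose();
        System.out.println(img.getWidth() + "x" + img.getHeight());
    }
}
```

From here you could hand the BufferedImage to ImageIO.write() to save that fake screen shot.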
However this is not enough to affect the environment for any child processes. They will still only see the original, unmodified environment that the JVM craftily made a copy of at start-up. The comment in java.lang.ProcessEnvironment says it all:
// We cache the C environment.  This means that subsequent calls
// to putenv/setenv from C will not be visible from Java code.
Grrrr! By far the easiest approach here is to just use the ProcessBuilder. This lets you change the environment before launching the child process. Win. End of post.
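A minimal sketch of the ProcessBuilder route (the variable name MY_FLAG is made up for the example):

```java
import java.util.Map;

public class ChildEnv {
    public static void main(String[] args) {
        // ProcessBuilder gives each child its own modifiable copy of
        // the environment; editing it never touches the JVM's copy.
        ProcessBuilder pb = new ProcessBuilder("printenv");
        Map<String, String> env = pb.environment();
        env.put("MY_FLAG", "on");  // visible only to the child
        env.remove("DISPLAY");     // deletions work too
        System.out.println(pb.environment().get("MY_FLAG"));
    }
}
```

Calling pb.start() would then launch the child with the edited environment, while System.getenv("MY_FLAG") in the parent still returns null.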
No! I really want to change the underlying C environment for no good reason!
In order to change that we have to resort to either JNI or JNA. Let's start with JNA; it is the easier of the two and needs fewer tools. Just download a jar file and with the Java Compiler you're good to hack.
In case you have not heard of it, JNA is to Java what ctypes is to Python - a byte-code library that lets you open native, compiled, shared libraries and call the C routines within directly from Java code. JNI on the other hand requires you to create C++ functions, compile these, then call them from Java.
Armed with the JNA classes we can wrap the standard setenv() and unsetenv() functions from the C library.
import com.sun.jna.Library;
import com.sun.jna.Native;

public class Environment {
    public interface LibC extends Library {
        public int setenv(String name, String value, int overwrite);
        public int unsetenv(String name);
    }
    static LibC libc = (LibC) Native.loadLibrary("c", LibC.class);
}
That works fine on Linux, but on Windows we have to take a different approach. There, neither setenv nor unsetenv exists; instead we have to call _putenv. This function accepts a "name=value" string, and if we pass "name=" we can delete a variable from the environment. Unfortunately this multi-platform approach messes up the code a fair amount. Here is one way to do it:
// Inside the Environment class...
public interface WinLibC extends Library {
    public int _putenv(String name);
}
public interface LinuxLibC extends Library {
    public int setenv(String name, String value, int overwrite);
    public int unsetenv(String name);
}
static public class POSIX {
    static Object libc;
    static {
        if (System.getProperty("os.name").equals("Linux")) {
            libc = Native.loadLibrary("c", LinuxLibC.class);
        } else {
            libc = Native.loadLibrary("msvcrt", WinLibC.class);
        }
    }
    public int setenv(String name, String value, int overwrite) {
        if (libc instanceof LinuxLibC) {
            return ((LinuxLibC) libc).setenv(name, value, overwrite);
        } else {
            return ((WinLibC) libc)._putenv(name + "=" + value);
        }
    }
    public int unsetenv(String name) {
        if (libc instanceof LinuxLibC) {
            return ((LinuxLibC) libc).unsetenv(name);
        } else {
            return ((WinLibC) libc)._putenv(name + "=");
        }
    }
}
static POSIX libc = new POSIX();
Here we use JNA to load either libc on Linux or the msvcrt DLL (which contains _putenv) on Windows. There are ugly casts in there, and other OSes are left as an exercise to the reader, but this means that I can call POSIX.setenv() or unsetenv() and have it work.
To complete the picture, the JNI equivalent of this would be:
public class Environment {
    public static class LibC {
        public native int setenv(String name, String value, int overwrite);
        public native int unsetenv(String name);
        LibC() {
            System.loadLibrary("Environment_LibC");
        }
    }
    static LibC libc = new LibC();
}
The call to System.loadLibrary() loads a dynamic/shared library. On Linux it looks for "" and on Windows "Environment_LibC.dll". The implementation of those native calls could be like this C++ code:
#include "Environment_LibC.h"
#include <stdlib.h>
#ifdef WINDOWS
#include <string>
#endif

struct JavaString {
    JavaString(JNIEnv *env, jstring val)
        : m_env(env), m_val(val), m_ptr(env->GetStringUTFChars(val, 0)) {}
    ~JavaString() {
        m_env->ReleaseStringUTFChars(m_val, m_ptr);
    }
    operator const char *() const { return m_ptr; }
    JNIEnv *m_env;
    jstring m_val;
    const char *m_ptr;
};

JNIEXPORT jint JNICALL Java_Environment_00024LibC_setenv
  (JNIEnv *env, jobject obj, jstring name, jstring value, jint overwrite)
{
    JavaString namep(env, name);
    JavaString valuep(env, value);
#ifdef WINDOWS
    std::string s(namep);
    s += "=";
    s += valuep;
    int res = _putenv(s.c_str());
#else
    int res = setenv(namep, valuep, overwrite);
#endif
    return res;
}

JNIEXPORT jint JNICALL Java_Environment_00024LibC_unsetenv
  (JNIEnv *env, jobject obj, jstring name)
{
    JavaString namep(env, name);
#ifdef WINDOWS
    std::string s(namep);
    s += "=";
    int res = _putenv(s.c_str());
#else
    int res = unsetenv(namep);
#endif
    return res;
}
You generate the header files using javah - that gives you the strange function names needed - and compile the code as C++ to produce a shared library:
javah Environment
g++ -shared -o libEnvironment_LibC.so Environment_LibC.cpp -I$(JAVA_HOME)/include -I$(JAVA_HOME)/include/linux
This library needs to be in one of the usual places to work - somewhere where it can be found by dlopen(). So either in /usr/lib, a directory where ldconfig looks, or in one of the paths in LD_LIBRARY_PATH. This depends on the OS you are using. For Windows, Solaris or Mac you need a whole different set of flags and incantations. You can see why I lean towards JNA, even though it has its own problems. For the record this is how to compile the JNI library on Windows using MinGW:
g++ -DWINDOWS -Wl,--kill-at -shared -o Environment_LibC.dll -I$(JAVA_HOME)/include/win32 -I$(JAVA_HOME)/include
That --kill-at switch is a real gotcha. Without it the function symbol that the MinGW compiler produces is not the one that the JVM was expecting. On Windows the library itself must be in the current directory, or in one of the directories listed in the PATH variable.
As you can see we repeat the whole setenv-and-unsetenv-do-not-exist dance and use _putenv() for both. Here I fudge it with a bit of ifdeffing. Meh.
Now that we have a way to call the C library's setenv() and unsetenv() (or equivalent), let's wrap it all up. Here are the final setenv() and unsetenv() functions that update the C environment and the Java one too:
// inside the Environment class...
public static int unsetenv(String name) {
    Map<String, String> map = getenv();
    map.remove(name);
    Map<String, String> env2 = getwinenv();
    env2.remove(name);
    return libc.unsetenv(name);
}

public static int setenv(String name, String value, boolean overwrite) {
    if (name.lastIndexOf("=") != -1) {
        throw new IllegalArgumentException(
                "Environment variable cannot contain '='");
    }
    Map<String, String> map = getenv();
    boolean contains = map.containsKey(name);
    if (!contains || overwrite) {
        map.put(name, value);
        Map<String, String> env2 = getwinenv();
        env2.put(name, value);
    }
    return libc.setenv(name, value, overwrite ? 1 : 0);
}
Curiously enough, ProcessEnvironment is wired up to validate the values that you add to the "unmodifiable" Map, but the case-insensitive equivalent on Windows is not validated. If you try to add an invalid environment variable, such as one with a name that contains =, only the unmodifiable map throws an IllegalArgumentException. This makes it fairly robust, as the nasty name doesn't trickle down to the underlying C environment, but on Windows we have to do an extra check manually.
I've uploaded a tarball with all the files mentioned on here together with the dependencies to my GBA Remakes site. So now you've no excuse to not go off setting environment variables like mad.
Of course you shouldn't really do any of this. This post was 60% "if you really need to", 40% "might be useful". ProcessBuilder is the way to go for changing the environment in child processes.
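For child processes that really is painless. Here's a minimal sketch (the variable name and the use of printenv, a Unix-only command, are just for illustration):

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.util.Map;

public class Child {
    public static void main(String[] args) throws Exception {
        // printenv is assumed to be on the PATH (Unix-like systems only)
        ProcessBuilder pb = new ProcessBuilder("printenv", "BUNJALLOO_HOME");
        // environment() returns a modifiable copy of this process's
        // environment - no reflection tricks needed for the child.
        Map<String, String> env = pb.environment();
        env.put("BUNJALLOO_HOME", "/tmp/bunjalloo");
        Process p = pb.start();
        BufferedReader in = new BufferedReader(
                new InputStreamReader(p.getInputStream()));
        System.out.println(in.readLine()); // the child sees the new value
        p.waitFor();
    }
}
```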
The only thing that you might want to do is change the environment in a running Java process, but even then it is probably easier to create a wrapper script or batch file that fiddles the environment prior to launching the JVM. Using reflection to access all those inner member variables is pretty flaky - if they change their names in some future version, your code will stop working. Happy hacking :-)

Wednesday, November 11, 2009


If you have been following the changes that I've been pushing out to Bunjalloo lately, you will have noticed that quite a few are aimed at optimising parts of the code. There was a lot of low-hanging fruit in there, and the rendering of reddit threads with 200+ comments was starting to annoy me.

There's an old rule when it comes to optimising your code: don't. Second rule, same as the first. The third rule of optimisation is to profile your code first. So that's what I did.

The good old way to see where the bottlenecks are is to use gprof. You compile with profiling enabled by passing a couple of flags to gcc, run the program, and it spits out a profile on exit. You then run the gprof command-line tool, which interprets the profile to give you a table of hot spots. From commit 5beaffd581ed it looked something like this:

Each sample counts as 0.01 seconds.
  %   cumulative   self              self     total
 time   seconds   seconds     calls  ms/call  ms/call  name
10.00       0.02      0.02    103228     0.00     0.00  Font::valueToIndex(unsigned int) const
10.00       0.04      0.02     68648     0.00     0.00  std::_List_iterator<HtmlElement*>::operator--()
 7.50       0.06      0.01    701479     0.00     0.00  std::less<unsigned int>::operator()(unsigned int const&, unsigned int const&) const

Right there you can see that Font::valueToIndex was taking a lot of time, but I already knew that that method could be improved. The other C++ STL calls were trickier to track down. I use all sorts of STL containers, so the problem could have been anywhere...

A couple of years ago Google released their performance analysis tools. One of the advantages google-perftools' CPU profiler has over gprof is that the output shows function calls line-by-line in a nifty graphical representation. Running the profiler on the same code gave this output:

thumbnail of the analysis
Here is the original full-size image

From there, I could trace the call graph back from the "slow" STL call to my code and see that the use of a std::set was causing problems. It was overkill anyway and a simple change improved things there.

The different output in the perftools analysis is probably down to the way it works compared to gprof. With gprof, a call to the profiling library is added to each of your program's functions, which means you need to recompile all of your code. With the Google tools you don't need to recompile; you just link the profiler library into your executable. It then hooks into the start of the program and uses timers and signals to sample the code, recording traces at the current point of execution. Code that takes longer collects more "hits", but the result is not exactly the same as gprof's output.

This is why being able to compile NDS code on the PC is a really good idea. These things are not impossible on the DS of course - counting the hblanks your slow function takes by changing the background color is a classic technique - but in general running unit tests, debugging, and doing performance analysis are much tougher to do on an embedded device.

Enough waffle. All this means that the next version of Bunjalloo may be a bit nippier, hopefully.

Friday, November 06, 2009

NDS thumbnail preview in Nautilus

When you insert a cartridge into a Nintendo DS, the start menu shows a little image or logo of the game. Homebrew games for the DS can also add their own logos and these show up when you view them in Moonshell or whatever menu your cartridge happens to use.

Ages ago I wrote a little script to show these images as the preview icon for NDS files in the Gnome file browser Nautilus. The script is about a hundred lines of Python and it's available here.

The setup file in that download tarball associates the icon-extracting command to the MIME-type of the Nintendo DS file. If you use an older distro then you may have to associate the script with the generic application/octet-stream MIME type. The script filters by file name anyway. This association is not done using the usual Nautilus scripting method, where you add a script to ~/.gnome2/nautilus-scripts/, but rather using gconf variables.
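Concretely, the keys involved live under the gconf thumbnailers directory, along these lines (the command value here is a placeholder - the exact path depends on where the setup script installs the extractor; %i and %o are gconf's input-file and output-image substitutions):

```
/desktop/gnome/thumbnailers/application@x-nintendo-ds-rom/enable   = true
/desktop/gnome/thumbnailers/application@x-nintendo-ds-rom/command  = "/path/to/nds-thumbnailer %i %o"
```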

If you need to uninstall the icon preview for some reason, you can run:

gconftool-2 -u /desktop/gnome/thumbnailers/application@x-nintendo-ds-rom

Oh, and while we're here I recommend you go and play the absolutely brilliant nostalgia-fest that is Manic Miner in the Lost Levels.

Sunday, August 16, 2009

Game Boy Remakes moves home

Geocities has been around in one form or another since about 1995, but Yahoo has recently announced that the web hosting service will close this October. With this in mind, I've moved all my Game Boy Advance remakes and BBC emulation to a new home on Google Sites.

Do you remember when people said "don't forget to update your bookmarks!" when they changed their website address? Now nobody does that because everyone uses a search engine to find what they need. That means that when Geocities closes, all the search-juice for the old site will be lost. So here I'm sowing a little search seed, which will hopefully grow and let people find these remakes when Yahoo finally pulls the plug. Closing down Geocities seems a really odd thing to do, by the way. The bandwidth limits were tiny (4MB per hour!) and the sites were full to the brim with adverts. It must have been costing Yahoo next to nothing to host all of them.

Anyway the migration was completely manual, but hopefully I haven't missed anything and the new web site should have all of the old content on it.

As I moved all of the content over, I was reading through a lot of the stuff I had up on the old site. I really miss remaking these old games, I'd love to get back into that again one more time. Reading through old assembly code figuring out the algorithms used and rewriting them in C is pretty good fun. And at the end you get a game to play too. I have a few ideas for a remake, but I don't want to promise anything. I know from experience that announcing something that is a work in progress, or even less than that, can suck the life out of the effort.

Monday, July 20, 2009

Migration to Mercurial

Liquid Metal, VeniceImage by ms sdb via Flickr

I've taken the plunge and decided to integrate all of the Bunjalloo code onto the Google Code site. This meant I migrated from the original Github-hosted Git repository to make use of the new Mercurial support that GC added not so long ago.

The main reason I migrated was to get all the issue tracking, wiki and code changes on to one site. I really had 2 choices: migrate the main site to Github and use the issues and wiki on there, or migrate the code to Google Code and use Mercurial. I quite like Github and I think it is amazing that there are so many brilliant free hosting services, but I really prefer the Google Code interface from a user's point of view. It is generally less cluttered.

So what does this change get us? First off, we lose GitHub's famous "social coding" features. I had one fork in 2 years, with 0 additional commits, so no great loss there. You can still email patches to the discussion list, and with either DVCS the author information is retained. What I gain is that I can now reference changes more easily in bug reports and close tickets straight from commit messages. Assuming I even write any more code or fix any bugs ;-)

I had a couple of pet peeves with Mercurial, mostly that I really missed gitk's features. Luckily I discovered what I assume everyone else must use: TortoiseHg. This provides an "hgtk log" command on Linux that works a lot better than the "hg view" history viewer, which is a port of a really ancient version of gitk. The most important feature that hgtk has is a refresh button, so I don't have to keep killing and restarting the application each time I make some changes.

Some other areas that I saw as weaknesses were to do with Mercurial's history fiddling, or lack thereof. However I decided that I'm probably better off not messing with history too much anyway. Lately I've tended to avoid doing that in Git and I get more done, even if the revision log is not as clean as it could be. I finally understood the way to do this using Mercurial Queues anyway, even if it is a bit more fiddly.

My final missing feature was the "commit -v" feature, which shows you the patch of the commit that you are making without having to open up a separate console. This hasn't been fixed, but I've worked around it by writing a Vim script to do something similar. Pressing "K" shows the diff of the current tree in a new buffer. This actually works out pretty well as I can see a patch and write a comment at the same time, rather than having to jump back to the top as I did with the "commit -v" thing.

To do the actual Git to Mercurial conversion I used the "hg convert" extension that is included with core Mercurial. That worked flawlessly and made switching really easy. The conversion guide on the Google Code support site has detailed steps on what to do when converting from Subversion, but I'll describe a few gotchas that I found with the Git to Hg transition.

The recently released Mercurial 1.3 includes a tiny patch that I wrote to generate a slightly nicer log for Git conversions, so I used that version. You see, Git tracks both the author of a patch and the person who made the actual commit, but Mercurial only tracks the "user". The user is equivalent to Git's "committer" by default, and the author is assumed to be the same dude. When you ran hg convert, it added a line like "committer: A.Committer" to the Mercurial log message for every commit, even if the author and committer were one and the same in the original Git repository. It looked a bit silly when 99.9% of the commits were my own. So my patch makes sure that the "committer" line only gets added if the author of the original change is not the same as the committer of the change.

Another interesting niggle: Git has 2 different types of tag; lightweight and annotated. Annotated tags can also be gpg signed, but that wasn't the case in my repository. The difference between lightweight and annotated tags is pretty subtle. As far as I understand it, a lightweight tag is simply a reference to a commit ID, while an annotated tag also has its own blob in the Git database.

By default "git tag mytag HEAD" will create a tag of the lightweight variety. This is apparently the Wrong Thing To Do and one of the few places that Git's default behaviour is not the best option. You really should pass the "-a" option to create an annotated tag. Suffice to say that I used the default lightweight type of tags for quite a while until I discovered my mistake. The "hg convert" extension doesn't convert lightweight tags at all, it only converts the chunky annotated kind. This is possibly by design (maybe by misunderstanding?) as it would be easy to fix the convert extension to convert either type of tag.

The easiest workaround for me was to just convert my git lightweight tags to annotated tags in the source git repository using the "--force" option to overwrite the old ones. The convert process picked these up and converted them over correctly. Interestingly enough Justin Williams had posted about a similar problem and his timing was perfect to ask it over on
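The tag upgrade itself is quick. Here's a sketch in a scratch repository (the tag and commit names are made up):

```shell
# Demo: turn a lightweight tag into an annotated one in a scratch repo.
cd "$(mktemp -d)" && git init -q .
git -c user.name=me -c user.email=me@example.com \
    commit -q --allow-empty -m "demo commit"
git tag v0.1                     # lightweight: just a pointer to a commit
git -c user.name=me -c user.email=me@example.com \
    tag -a -f -m "version 0.1" v0.1 v0.1^{}   # --force overwrites it, annotated
git cat-file -t refs/tags/v0.1   # prints "tag" for annotated, "commit" for lightweight
```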

Now that I've used DVCS a bit more and the novelty of branching has worn off, I decided that I wanted the minimal number of heads in my new Mercurial repository. I also wanted to maintain as much of the released history as possible. Luckily the history was mostly linear. I did create a couple of branches for maintenance releases of Bunjalloo early on, but after about version 0.4 I just made releases from the trunk.

Originally the repository was in Subversion and I pulled in the tag branches with git-svn too. This led to a few branch stubs with a single commit ("creating tag blah") and a corresponding Git tag that I must have created at some point later on. I used the Mercurial Queues extension to trim these out of the history where applicable, so that the final repo has just 2 heads - the main trunk and an old, closed maintenance branch from the 0.3 days.

Oh, and when you install Mercurial from source on Ubuntu (possibly on any Debian derivative?) it rather inexplicably creates an /etc/mercurial/hgrc file that enables all of the extensions. This led me to (re)discover a bug with the inotify extension when used in conjunction with Mercurial Queues. My solution was to simply disable the inotify extension (in fact, just removing the /etc/mercurial directory and enabling what you need in $HOME/.hgrc is a better idea overall).

Anyway, feel free to check out the code and send me your patches to fix all of those open issues! :-)

Wednesday, July 08, 2009

Bunjalloo 0.7.5

hand_and_bugcolorImage by beneneuman via Flickr

I've just put up yet another new version of Bunjalloo. This one fixes a load of bugs that caused lots of top pages to be broken. In particular you can log in to GMail again. Yay!

Changes in 0.7.5:
  • Improvements to caching - logging in to GMail works again
  • Clicking preference icon goes straight to preferences
  • Fix encoding problems that caused crashes
  • Fixed lots of non-ascii character keyboard bugs
  • Fix configuration changes that use escapable % characters
You may have to manually fix the download path in your configuration settings. This is because the download path could have become messed up and show all path separators as %2F instead of /.

In the next release I want to fix cookies so that you don't have to enter your name and password into all the sites that you log in to. I changed the password on my google account from something like "password" to a good strong one and it's a pain typing it in all the time on the DS ;-)

Monday, July 06, 2009

Creative Zen Mozaic on Ubuntu 9.04 Jaunty

Where Birthday Happens!Image by ♪ Sleeping Sun ♪ via Flickr
I've just been given a Creative Zen Mozaic portable audio player for my birthday (it's a bit early, but I'm not complaining!). I've had a Zen Micro for a few years now and it's a pretty decent gizmo. Small enough to fit in my pocket, has stood up to several falls, has a UI that gets out of the way and it works with Gnomad2.

Update for Ubuntu 9.10: The Mozaic now runs out of the box, no changes needed. I'll leave this here for posterity.

After plugging the new Mozaic in to my laptop, which runs the latest release of Ubuntu (9.04 Jaunty Jackalope at the time of writing), I discovered that (horror!) it wasn't recognised by Gnomad2. It seems like it's a common problem. The solution is sort of linked to on that bug report, but the help is in French and the file you have to change has moved since that was written. You couldn't make up a better "open source documentation sucks" anecdote even if you tried.

So... using your favourite editor, open up the file /lib/udev/rules.d/45-libmtp8.rules and add the magic lines:
# Creative ZEN Mozaic
ATTR{idVendor}=="041e", ATTR{idProduct}=="4161", SYMLINK+="libmtp-%k", MODE="660", GROUP="audio"
You'll have to do that with super powers, so use something like this to open the file with the correct permissions using gedit:

gksu gedit /lib/udev/rules.d/45-libmtp8.rules

Add the lines right before the first "# Creative ZEN" line that is already in there, just to be on the safe side. The rule can actually be derived from first principles, because if you run lsusb it'll say something like:

Bus 001 Device 004: ID 041e:4161 Creative Technology, Ltd

Which ties in pretty nicely to the line you have to add to the 45-libmtp8.rules file.

Anyway, after adding that, unplug the Mozaic and plug it back in again. That forces udev to reread the settings you have just changed. Start Gnomad2 and hopefully it will recognise your player and list all the tracks on it. Yippee!

Thursday, July 02, 2009

Like buses

What a disaster! You wait ages for a release of Bunjalloo, then 2 arrive in 2 days. Let me tell you how this happened.

I use version control for all of the main code. All my own code, that is. I also have a big bunch of downloadable 3rd party libraries that Bunjalloo needs. The build system only deals with my own code, it doesn't handle the 3rd party libraries that I install to $DEVKITPRO/libnds and assume are pretty much fixed. So stuff like libpng, jpeg, zlib, the matrix ssl library, all of these have to be downloaded, compiled and installed separately. I have a few patches that I've applied on top of these libraries too. All of these steps are handled by a script that downloads the source files, patches them if needed, compiles and installs. These patches are changes that would either never be accepted in the core library - removing printfs for example, or changing the Makefiles for the DS - or in the case of matrix, the upstream authors don't seem to have any real "community" or way to send them patches.

These libraries are all relatively stable, and when I upgrade devkitPro I can just run my script to install the latest versions. I've also put a tarball of the compiled code on the Google Code site to make your life easier if you want to compile Bunjalloo from source.

There are some libraries here that I've not yet mentioned and that I mostly take "as is". These are the core devkitPro libraries for the NDS - libnds, libfat, and friends. But! I have also been patching dswifi since about Bunjalloo v0.7 to fix an issue with non blocking sockets that just can't be worked around.

Thanks to an oversight I didn't apply the patch when I installed the latest dswifi 0.3.9, which meant that the sockets are not dealt with properly. I assume a socket has connected before it really has, and start shoving data in before it's ready. This has the result that after a few page loads sockets just stop connecting, presumably because I've filled up some buffers with junk. Pages no longer load and the whole thing grinds to a halt.

The solution? Short term, I've patched dswifi again and uploaded v0.7.4. Longer term, I've added a bug report to devkitpro on sourceforge. Hopefully my patch can get integrated and I don't have to apply the changes manually each time I upgrade the DS toolchain. I should have done this a while ago really.

Wednesday, July 01, 2009

Bunjalloo 0.7.3 released

I've released a new version of Bunjalloo on the Google Code site. This is a fairly minor release that mostly updates the code for devkitARM r26 and the various library updates. Full details are in the ChangeLog over on the wiki.

So it's a minor release, but Bunjalloo itself has gone through quite a lot since 0.7.2 came out in December. Development of new features went on hiatus while I tried to incorporate the Woopsi GUI library. After that failed to really work - mostly down to design decisions on both parts; Woopsi uses the DS hardware for really fast scrolling whereas I want flicker free double buffering - I started to go down this crazy architecture astronomy path that went nowhere either.

Now I want to get back to basics. I want to start releasing versions a bit more regularly. I'm going to try getting a version out every month from now on, even if it is crap! Let's see how long I can last. You may notice the new banner logo that can be seen when using moonshell and other menu programs. That is thanks to Sam and I'd like to get some more community contributions. The only way for that to happen is if Bunjalloo is alive and kicking, and I think regular releases is the best way for that.

Sunday, June 28, 2009

EZ Flash Vi and libnds 1.3.6

The versions of libnds and the default ARM7 core that were released as part of devkitPro last week (24 June 2009) have a problem with the NDS touch screen controls on the EZ Flash Vi cartridge.

The symptom is seen best if you run the nds-example "touch_test.nds" from the devkitPro distribution. The keys value shows 00002000 in the top right, as if you were holding down the stylus, but in fact no touches are registered. Until an official fix is released by the EZ Flash coders, the workaround is to use devkitPro's own homebrew menu application. It's GPL code, so there's no real conspiracy going on here! In fact it'd be nice to see this menu become the de facto standard in the homebrew world, but it is very rudimentary and only shows a textual listing of the files on your card.

So on to the instructions to get the latest homebrew working. I'll assume that right now you've downloaded and installed all the latest devkitPro stuff and you're sat there wondering why your touch screen controls don't work.

First, download the latest EZ Flash DLDI driver. You'll have to download the "Kernel 1.90" zip file, extract it, and find the Ez5s.dldi file. Next, download the Homebrew Menu code. This is only available in the devkitPro Subversion repository.

svn co
cd HomebrewMenu
make

Once that is compiled, you have to patch it with the previously extracted Ez5s.dldi file (if you use a high capacity SD card, use the Ez5sdhc.dldi instead). Do that using the dlditool in $DEVKITARM/bin:

$DEVKITARM/bin/dlditool /path/to/ez5190ob11/moonshell/Ez5s.dldi HomebrewMenu.nds

Now copy that file onto your card, but copy it as the ez5sys.bin file. This means that when the card boots up, it will boot our HomebrewMenu instead of the regular EZ Flash/Moonshell executable.

Hopefully the next release of libnds will fix the issue so that homebrew written with it is compatible with all cartridges out of the box. Even better would be if the Homebrew Menu started to see regular updates and became a really cool and useful file browser.

Monday, June 15, 2009

Lint and CMake

I've just updated my cmake-lint project with version 1.2. This fixed a bug that I introduced when I added in the ability to specify defaults in a .cmakelintrc file. This was one of those silly errors that using a compiled language would have caught, but in the end better unit tests would have caught it too. I'm not really a rabid TDD-weenie but automated tests that check for basic breakage are really handy. Python makes it dead easy to mock out unwanted and unrelated code too. Unit testing C++ is hellish in comparison.

As I was saying, cmake-lint is a really basic lint tool to root out common bad practices in CMake files and suggest some better practices. It also comes with some handy Vim plugins for working with CMake. One is a better version of the omni completion code I previously posted. The other is a :CMakeHelp command that helps you learn what all the CMake commands mean. There's no documentation for these plugins, but it should be obvious how they work. In Vim, type :CMakeHelp followed by the name of a command, or partial name and tab to complete. When you edit a CMakeLists.txt file, type CTRL-X CTRL-O to start up the omni completion.

I wrote these Vim scripts because CMake is a pretty good build system but there aren't that many IDE tools available. Actually, CMake is about my favourite build system ATM, it's quite a close call with Waf though. CMake's cross-compiling is probably what just pips Waf for any new stuff I do.

CMake lets you set up a regular build that compiles for the PC, and just by passing in the name of a file that describes a different compiler via the command line you can compile the same code for the DS. With Waf you have to faff about writing more code, cloning environments, fiddling with variants... That gets a bit messy and it's all manual, especially if you want to compile with debug flags too.
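That "file that describes a different compiler" is a toolchain file. A minimal sketch for the DS might look something like this (the devkitARM paths, tool names and file name here are assumptions, adjust to taste):

```cmake
# nds-toolchain.cmake - cross-compile for the Nintendo DS
set(CMAKE_SYSTEM_NAME Generic)
set(CMAKE_SYSTEM_PROCESSOR arm)
set(CMAKE_C_COMPILER "$ENV{DEVKITARM}/bin/arm-eabi-gcc")
set(CMAKE_CXX_COMPILER "$ENV{DEVKITARM}/bin/arm-eabi-g++")
# Only look for libraries and headers in the target environment
set(CMAKE_FIND_ROOT_PATH "$ENV{DEVKITPRO}/libnds")
set(CMAKE_FIND_ROOT_PATH_MODE_LIBRARY ONLY)
set(CMAKE_FIND_ROOT_PATH_MODE_INCLUDE ONLY)
```

Then "cmake -DCMAKE_TOOLCHAIN_FILE=nds-toolchain.cmake ." builds for the DS, while a plain "cmake ." builds for the PC.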

Waf does ease the pain of integrating multiple modules inside the same source tree though, and it doesn't compile templates down to Make (which seems slightly crazy). Google's gyp does similar craziness for Chrome, converting Python templates into Makefiles, SCons input files (!) or proprietary IDE project files. It makes you wonder why they didn't just go with CMake and add to it. I would have liked to have seen a CMake generator that spat out SConstructs, for example.

Sunday, April 26, 2009


Google Code is going to add Mercurial support soon; in fact it's already available for some pilot projects. I'm more familiar with Git though, so I needed a good translation guide and I found the Git HG Rosetta Stone. That isn't bad, but there are still some gaps, so I thought I'd share my experience fiddling around with Mercurial to try and pad out some of them. This is a big, long, dull technical entry. Sorry.

The first thing I tried was to convert a small Git repository to Mercurial. The way to do this is to use the convert extension.

Now a bit of a rant about extensions. This goes for Mercurial, but I think it applies to Bazaar too. The issue I have is that in order to activate extensions in Mercurial, you have to add a line to a file in your home directory called .hgrc. This seems counter-intuitive to me. If these extensions are part of the core, and they are installed and everything, then why do I have to activate them by changing a preferences file? It's probably to keep the core set of commands simple or something, but it adds needless complexity to the whole process.

The documentation on the conversion page mentions converting from CVS, Subversion and Darcs, but there is no mention of converting from Git. I wonder if this is because nobody would really want to convert from Git to something else? ;-) Anyway, the conversion was easy for a trivial repository, just run "hg convert /path/to/repo".

This creates a new directory repo-hg that contains your shiny new Mercurial-converted repository. The directory is initially empty, except for the .hg directory, which is the equivalent of the .git directory in a Git repository. The odd thing here is that running "hg status" in this empty directory reports no changes - compare this to "git status" in the same situation, which would tell you "hey, you've deleted all of your files!". You have to run "hg checkout" to get all your files back in the directory, which is pretty much the same as the "git reset --hard" idiom that you'd use in the same situation.

Converting my Bunjalloo repository wasn't as easy. I've used a few branches, and some of the history came from subversion originally and there are strange tag/branches there. Mercurial doesn't seem to like this at all and just creates a load of branch-less "things". There's probably a good way to do this, but it would need some investigation.

After compiling and running "hg status" I had a load of unknown files, marked with things like "? build/foo.o". I tried the simplest solution: "mv .gitignore .hgignore". This almost worked, but it gave me the following strange error:

could not start inotify server: /tmp/repo-hg/.hgignore: invalid pattern (relre): *.[oa]
abort: /tmp/repo-hg/.hgignore: invalid pattern (relre): *.[oa]

By default Mercurial uses Python's regular expression syntax, not globs. That probably makes some sense - less effort to code, I bet. The error message about inotify is a strange red herring though. The fix was to add a "syntax: glob" line to the top of the file.
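In other words, the fixed .hgignore starts with the syntax line and then carries the old glob patterns over unchanged. Something like this (only the first pattern comes from the error message above; the build/ entry is illustrative, matching the unknown object files mentioned earlier):

```
syntax: glob
*.[oa]
build/
```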

After this I made a change in my code to see how Mercurial handles the normal workflow. Git's diff command shows differences in colour. Well, that's not quite true - by default it doesn't; you have to set a preference by running something like "git config color.ui auto". Mercurial does colour too, but you have to edit the .hgrc file, adding "hgext.color=" to the extensions section.

All these properties and preferences can either go in your global $HOME/.hgrc file or in the repository-local .hg/hgrc file. There's no core "hg config" command, but there is an extension to do it. It is a second class citizen that isn't shipped with the core Mercurial and requires an extra not-core-Python library (I suspect this is why it isn't in the default Mercurial). In git you can run the core command "git config --global" to add a config option to your global file, or by default it changes the repository local configuration in the .git directory. It makes things a bit more newbie friendly than editing strange dot files. I can't imagine an ex-Subversion user being comfortable editing a file below .hg, for example. In subversion you just don't go futzing about in the .svn directories.

Once all that was done, I decided to commit the change I had made. Here hg doesn't use git's staging area. That's fine. It is something that works well, but a lot of people don't get it. Mercurial uses the classic approach to this - pass what you want to commit on the command line - which works too. What I do miss here from git is the ability to run "commit -v". With the -v option, you get shown the patch that you are committing as well as the file names. This saves running "$VCS diff" in a separate shell, which is what I always ended up doing with Subversion. A definite regression from git here.

After committing a change or two, I sometimes run gitk to see what is going on overall. Mercurial ships "hgk", which is a really ancient version of gitk tuned for Mercurial commands instead of Git ones. Unfortunately it isn't turned on by default and, you guessed it, you have to edit the .hgrc file. Not only that, but the examples given in the Mercurial wiki are not entirely correct, at least when you install from source. The paths they give for the extension don't exist - the real location is $PREFIX/lib/python$VERSION/site-packages/hgext/, where PREFIX is usually /usr/local and VERSION is the Python version, probably 2.5. But even if you set that in your .hgrc, "hg view" doesn't work. You also need to copy the file "hgk" from the source contrib directory into your path. This is quite awkward. If something ships with the core, it should be installed and Just Work (TM).
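Assuming a source install with the default prefix and Python 2.5, the configuration that eventually worked looked something like this (adjust the path to match your system):

```ini
# ~/.hgrc - point the hgk extension at where it actually got installed
[extensions]
hgk = /usr/local/lib/python2.5/site-packages/hgext/hgk.py
```

Even with this, "hg view" only works once the "hgk" Tcl script from the source's contrib directory has been copied somewhere on your $PATH.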

Sadly hgk is not as good as the current gitk. It's missing a lot of the search options, the yellow "you are here" ball on the current checkout, status bars for slower updates, and probably some other features that I've forgotten for the moment. Running Help shows "About gitk" and that it is version 1.2 (C) 2005 - the current gitk must be about version 1.80 now and is in active development. The hgk equivalent is about 4 years behind and appears to be more or less abandoned.

The other graphical interface that git has is the "git gui" command. This is really needed for managing the index well, adding chunks of patches and so on. The index is handled automatically in Mercurial, so that's a big chunk of usage that you wouldn't need a gui for. There is an extension - hg record - that emulates the "git add -i" behaviour for adding partial patches. A hg gui would be handy for using that recording extension, if nothing else, but there's nothing included by default. The only likely candidate is hgct, which has not been updated for a couple of years.
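Like everything else, the record extension ships with Mercurial but has to be switched on by hand:

```ini
# ~/.hgrc - enable interactive hunk selection, roughly git's "add -i"/"add -p"
[extensions]
hgext.record =
```

After that, "hg record" walks through each changed hunk in the working copy and commits only the ones you approve.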

Another command I use a lot is "git rebase -i". This lets you squash up commits, change the order of unrelated commits, drop patches, break
up monolithic changesets into smaller ones, and so on. It is really good and easy to use. You run "git rebase -i someid" and get back a
list of commits to do stuff with, something like this:

pick 84d3267 Add the key release waits back in
squash 63656f4 This seems to work on desmume at least
pick d34b4f3 WIP, damn sprites do not show up...
edit c9b6718 Mostly works

# Rebase 7c1db7e..c9b6718 onto 7c1db7e
# Commands:
# p, pick = use commit
# e, edit = use commit, but stop for amending
# s, squash = use commit, but meld into previous commit
# If you remove a line here THAT COMMIT WILL BE LOST.
# However, if you remove everything, the rebase will be aborted.

Mercurial doesn't have rebase, but on the other hand it does have the incomprehensible queues system. I've read the documentation, but I just don't get it at all. You need to remember a whole bunch of new Q commands, and you need to initialise the repository to tell it that you want to use queues. Then you can add, push and pop patches. I dunno. It just feels too heavyweight and requires too much planning up front.
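For the record, the MQ workflow looks roughly like this. First the extension has to be enabled:

```ini
# ~/.hgrc - Mercurial Queues also ships disabled by default
[extensions]
hgext.mq =
```

Then, after an "hg qinit" in the repository, you create a patch with "hg qnew some-fix.patch" (the name here is made up), edit files and capture them with "hg qrefresh", and shuffle the stack with "hg qpop" and "hg qpush" - which is exactly the pushing and popping complained about above.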

The rebase command was added to the git core late in the game, but it integrates seamlessly with everything else (except submodules, but git submodules have other problems anyway). You don't need to plan to use git-rebase, you just do it. The reordering is done in your editor - you don't have to push and pop patches to get them in the right order, you just move them about as lines of text. If it all goes wrong, you can run "git rebase --abort", and you don't get screwed over by having all these strange extra patch commits, which is what MQ tends to do.

Now to push things to a new Bitbucket account, since Google Code doesn't let everyone use Mercurial yet. Creating the empty repository on Bitbucket was dead simple. Pushing was equally simple - "hg push" followed by the repository URL. You enter a username and password, which is far easier than the git equivalent, where you have to generate an ssh key, perhaps fiddle with ~/.ssh/config to tell SSH to use the user "git" when it connects to the GitHub server, and so on. I can see how this would be simpler for someone used to Subversion over HTTP. The Mercurial documentation on the push command seems easier to follow than the git equivalent, too.

Running that push will just send changes to the remote repository - it doesn't record where the remote lives, the equivalent of a "git remote add origin". To do that you have to edit .hg/hgrc and add a "[paths]" section pointing to the remote URL. The convention is to assign the URL to a key named "default", i.e.

default =
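Filled in with a made-up Bitbucket URL, the whole section looks something like this:

```ini
# .hg/hgrc - repository-local configuration
[paths]
default = https://bitbucket.org/someuser/somerepo
```

With that in place, a bare "hg push" or "hg pull" uses the default URL, much like git's "origin" remote.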

To see which changes have not been pushed to the remote repo with git, you can use gitk or git log --decorate. Both of these show where the remote head is and where the local one is. By inspection you can tell which commits are missing from the remote branch. The equivalent in Mercurial is to use "hg outgoing", which tells you which changes have yet to be pushed. Ah, another difference for the gitk/hgk list here: hgk doesn't show the remote branches at all. Anyway, Mercurial's outgoing command is handy. You could mostly emulate it in git using "git log origin/master..HEAD", but you need to know the name of the remote branch, which is surprisingly non-trivial in some cases.

Another feature of Git that I tend to use is creating patches to send to other people. I mostly use this together with git-svn: that way I can clone a Subversion repository, make my local commits, and then send patches of those commits off in bug reports or to mailing lists or whatever. The commands are "git format-patch" and "git send-email". Mercurial has a single command for both, "hg email" (from the bundled patchbomb extension), which by default also sends the email, like git-send-email. Sometimes you need to attach patches to HTML forms rather than emailing them, so generating files is my preferred approach. By default git generates one file per commit - in fact I think that is the only way it works - while Mercurial only ever generates one file, with multiple patches inside. A nice option in Mercurial is "-o", which automatically selects the patches that are not yet upstream on the remote branch. Anyway, the command lines are subtly different, and the output is similar-but-not-the-same. Git has its one-patch-per-file approach, where you get out what you put in, plus it is less irritating in the number of questions it asks you (none). Mercurial creates a patch-bomb mbox and asks you for a To: email address, even if you don't want to email the file to anyone. It is probably a personal preference thing, but I like the way Git does it better here. Both are usable, and both scale better than "svn diff > ~/whatever.patch".

Another gotcha that caught me out was that "hg add" without arguments adds all untracked files for commit. This is like "git add .", but normally commands without arguments don't do naughty things. I didn't like the default behaviour here - it was too surprising, especially if your .hgignore file is not set up right. To fix it, you have to run "hg revert -a", which forgets about the added files.


Here's a big ol' list of the subtle and not-so-subtle differences that bugged me. I really wanted to use a table, but Blogger doesn't do tables without adding a bunch of blank lines...

Git .gitignore
Hg .hgignore, syntax: glob to get the same behaviour as git

Git .git/config, ~/.gitconfig, use git-config to modify the values
Hg .hg/hgrc, ~/.hgrc, hg config is a non-core extension

Git git commit -v
Hg hg diff | less; hg commit

Git gitk
Hg hg view - there is some set up involving .hgrc and placing hgk in the path.

Git git rebase
Hg Mercurial Queues, kind of.

Git git push URL ; git remote add origin URL
Hg hg push URL; $EDITOR .hg/hgrc ; [paths] default = URL

Git gitk, git log origin/master..HEAD
Hg hg outgoing

Git git format-patch RANGE
Hg hg email -m filename -o

Git git gui
Hg Nothing equivalent. There's "hg record" for "git add -i", but it is less user friendly.

Git git add . ; Note the dot
Hg hg add ; No dot needed. Take care with that! You'll have to run "hg revert -a" to fix the mess

The main arguments for using Mercurial over Git are better Windows support and ease of use. I can't comment on how well either works on Windows, but on Linux Mercurial worked more or less as expected. As for the second argument, that hg is much easier than git... I don't know. Maybe. I'm tainted now that I more or less grok DVCS. But given the difficulty in setting up most of the core plugins and the lack of good core GUIs (gitk/git-gui), I disagree. The basic workflow is more or less the same, but Git has a batteries-included feel while Mercurial is more limited - mostly down to the defaults, though, since once you enable the optional extensions there isn't that much real difference. Besides, I think that for someone used to Subversion who has never used a DVCS, either tool would be overwhelming at first.

What I do know is that the GUI tools that come with Git are a huge help for getting up to speed with what is going on "under the hood". Seeing all the commits and their hierarchy in gitk is far easier than reading the log or viewing the gitweb page. Especially if you start doing rebases and history changing stuff. Maybe Mercurial's one-branch approach and lack of history-munging commands makes all these extras unnecessary. Who knows. Time will probably tell.

Either way it's time to get stuck in to learning Mercurial. I'll definitely start using Mercurial when Google Code offers it to everyone, because the current GitHub/Google Code integration works but it's a bit ad hoc. It's missing automatic links from issues to commits, the source tab looks a bit wrong, the Updates feed doesn't update - little things like that. I like the GitHub UI for code - you get the last change right there on the front page - but for non-coders I think it could be a bit daunting. Google Code's interface is cleaner and easier to navigate. Also, Subversion there is missing features like real tags (no, the /tags directory is not the same), and a couple of projects I've got on Google Code still use the default Subversion just because it's easier to set up. Mercurial will be a nice bonus feature.