Wednesday, October 13, 2010

Android Native Coding in C

When I read questions like "How do I share code between Android and iOS?" on stackoverflow.com, and the answers are variations on the theme "the NDK is not for creating cross-platform apps", it makes me sad. The NDK is an excellent way to write cross-platform games. Here is a little insight into the approach I've taken with my so-far unreleased Chaos port.

Chaos running on the HTC Magic

To code a game in C on Android, you first have to write a Java Activity with a View. This can be either a regular View or an OpenGL-ified one; this explanation uses the GLSurfaceView. You then use the Java Native Interface (JNI) to call from Java into your C code, which you compile using the Android Native Development Kit (NDK). The remaining problem is: how do I draw pixels?

You have 2 options (3 if you are willing to target Android 2.2+): drawing pixels to a Canvas, drawing to an OpenGL ES texture, or drawing directly to the pixel buffer of a Bitmap. This last option is similar to the first, but is faster, and is only available from Android 2.2 "Froyo" onwards.

Assuming you want to draw a screen that is smaller than the Android native screen size and scale this up, the OpenGL version is the fastest and most compatible of the 3 choices. Using OpenGL from C code is actually cleaner than from Java, as you do not need to worry about adding gl. to all the GL function calls (gl.glActiveTexture for example, where gl is an instance of javax.microedition.khronos.opengles.GL10). You also don't have to worry about the arrays you pass to GL functions being on the Java heap rather than being the required native arrays. This means you don't have to deal with all the ByteBuffer calls that clog up the Java OpenGL examples.

You will need at least 3 native functions to draw in OpenGL: a "main loop" that runs your game's code, a "screen resized" function and a "screen render" function.

The main loop can be the classic while (1) { update_state(); wait_vsync(); }. The screen resized function is called when the Android device is rotated or the screen otherwise needs setting up again. The screen render function is called once per frame.

The main loop and the render function both take no arguments; the screen resize function takes a width and a height. The Java native declarations for these calls look like this:


private static native void native_start();
private static native void native_gl_resize(int w, int h);
private static native void native_gl_render();

static {
    System.loadLibrary("mybuffer");
}

Now you have to write these in C and somehow register them with the Dalvik VM (or "Dalek VM" as I often misread it. Exterminate!). Dalvik uses the same approach to binding native methods as the Java VM: it opens a native library with dlopen() and looks for the symbol JNI_OnLoad, as well as for functions with "mangled" names that match the native declarations. The library loaded here will be "libmybuffer.so". You can either implement your functions with the mangled names, or register them with a call to RegisterNatives in the JNI_OnLoad function. There are many JNI tutorials on the net, so I won't repeat all the details here. Whichever way you choose, you still need the native declarations in your Java source code. My examples use RegisterNatives, as it gives cleaner C function names.
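For completeness, a JNI_OnLoad using RegisterNatives might look something like this. This is just a sketch to show the shape of the registration table - the class name org/example/GlBufferView is an assumption (use whatever class holds your native declarations), and the fragment only does anything when loaded into a running VM:

```c
#include <jni.h>

/* Forward declarations of the native implementations shown later. */
void JNICALL native_start(JNIEnv *env, jclass clazz);
void JNICALL native_gl_resize(JNIEnv *env, jclass clazz, jint w, jint h);
void JNICALL native_gl_render(JNIEnv *env, jclass clazz);

/* One entry per Java "native" declaration: name, JNI signature, pointer. */
static const JNINativeMethod s_methods[] = {
    { "native_start",     "()V",   (void *)native_start },
    { "native_gl_resize", "(II)V", (void *)native_gl_resize },
    { "native_gl_render", "()V",   (void *)native_gl_render },
};

jint JNI_OnLoad(JavaVM *vm, void *reserved)
{
    JNIEnv *env = NULL;
    if ((*vm)->GetEnv(vm, (void **)&env, JNI_VERSION_1_4) != JNI_OK)
        return -1;
    /* Adjust the class name to match your own package. */
    jclass clazz = (*env)->FindClass(env, "org/example/GlBufferView");
    if (clazz == NULL)
        return -1;
    (*env)->RegisterNatives(env, clazz, s_methods,
            sizeof(s_methods) / sizeof(s_methods[0]));
    return JNI_VERSION_1_4;
}
```

The "(II)V" signature means "takes two ints, returns void"; get these wrong and the registration fails at load time rather than at call time, which is at least easy to debug.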

In the constructor of your Java GLSurfaceView, you should call the main loop (in C) from a separate thread - this way the loop does not block the main thread of your Android application, and the Android OS won't kill the app for being unresponsive. It is important that the main loop's C code does no OpenGL manipulation, as that can also crash the application; all GL manipulation is done in the render call. The main-loop thread can change whatever C state it likes, and the render call later reads this state to draw the final screen.


public GlBufferView(Context context, AttributeSet attrs) {
    super(context, attrs);
    (new Thread() {
        @Override
        public void run() {
            native_start();
        }
    }).start();
    setRenderer(new MyRenderer());
}

The implementation of your GLSurfaceView.Renderer class simply delegates to the native functions and should look like this:

class MyRenderer implements GLSurfaceView.Renderer {
    @Override
    public void onSurfaceCreated(GL10 gl, EGLConfig c) { /* do nothing */ }

    @Override
    public void onSurfaceChanged(GL10 gl, int w, int h) {
        native_gl_resize(w, h);
    }

    @Override
    public void onDrawFrame(GL10 gl) {
        native_gl_render();
    }
}

The onSurfaceCreated method is not used; onSurfaceChanged is what the OpenGL implementation really uses to indicate that a screen should be set up. The onDrawFrame method is called once per frame, at somewhere between 30 and 60 FPS (if you're lucky).
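If you want to know what rate you are actually getting, counting frames in the render call will tell you. Here is a minimal sketch - fps_tick is a made-up helper, not part of any Android or GL API; it takes the current time in milliseconds as an argument so the counting logic can be tested off-device:

```c
/* Count frames and return the measured FPS once per second, or -1
 * while a full second has not yet elapsed. Uses static state, so
 * call it from the render thread only. */
static int fps_tick(long now_ms)
{
    static long start_ms = -1;
    static int frames = 0;
    if (start_ms < 0)
        start_ms = now_ms;
    frames++;
    if (now_ms - start_ms >= 1000) {
        int fps = frames;
        frames = 0;
        start_ms = now_ms;
        return fps;
    }
    return -1;
}
```

On Android you could feed it milliseconds derived from clock_gettime(CLOCK_MONOTONIC, ...) and log non-negative results with __android_log_print.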

Now you can forget about Java (until you need to handle input, but that's another story) and write the rest of your game in C. The native_gl_resize function should grab a texture and set up the simplest rendering scenario it can. Experimentation has shown that this is not too shabby:


#define TEXTURE_WIDTH 512
#define TEXTURE_HEIGHT 256
#define MY_SCREEN_WIDTH 272
#define MY_SCREEN_HEIGHT 208

static int s_w;
static int s_h;
static GLuint s_texture;

void JNICALL native_gl_resize(JNIEnv *env, jclass clazz, jint w, jint h)
{
    glEnable(GL_TEXTURE_2D);
    glGenTextures(1, &s_texture);
    glBindTexture(GL_TEXTURE_2D, s_texture);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glShadeModel(GL_FLAT);
    glColor4x(0x10000, 0x10000, 0x10000, 0x10000);
    int rect[4] = {0, MY_SCREEN_HEIGHT, MY_SCREEN_WIDTH, -MY_SCREEN_HEIGHT};
    glTexParameteriv(GL_TEXTURE_2D, GL_TEXTURE_CROP_RECT_OES, rect);
    glTexImage2D(GL_TEXTURE_2D,          /* target */
            0,                           /* level */
            GL_RGB,                      /* internal format */
            TEXTURE_WIDTH,               /* width */
            TEXTURE_HEIGHT,              /* height */
            0,                           /* border */
            GL_RGB,                      /* format */
            GL_UNSIGNED_SHORT_5_6_5,     /* type */
            NULL);                       /* pixels */
    /* store the actual width and height of the screen */
    s_w = w;
    s_h = h;
}

You can also call glDisable to turn off fog, depth testing and other 3D features, but it doesn't seem to make much difference. The glEnable(GL_TEXTURE_2D) call enables texturing; you need this as you'll be drawing your pixels into a texture. glGenTextures and glBindTexture get a handle to a texture and set it as the current one. The 2 glTexParameterf calls are needed to make the texture actually show up on hardware - cargo cult coding here; without them the texture is just a white square. Similarly, glShadeModel and glColor4x are needed to have any chance of your texture showing up, either on hardware or on the emulator. Presumably if the surface has no colour it is not drawn at all.

The rect[4] array and associated glTexParameteriv call crop the texture to the given rectangle. The MY_SCREEN_XX values depend on your "emulated" screen size, but must be smaller than the texture. The TEXTURE_XXX sizes should be powers of 2 (256, 512, 1024) to work on hardware; anything else may work on the emulator, but will fail miserably on the real thing. The rectangle is inverted here so that the final texture shows up the right way round. The call to glTexImage2D allocates the texture memory in video RAM; passing NULL means nothing is copied there yet. The native Android pixel format is RGB565, which means 5 bits of red, 6 of green and 5 of blue. That's almost the Nintendo DS and GBA pixel format - just 1 bit different! Using this colour format speeds up the frame rate from less than 30 FPS to a more respectable 50-60 FPS.
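To give a concrete idea of the format, here is how you might pack 8-bit-per-channel colour values into an RGB565 pixel - make_rgb565 is my own helper name, not an Android API:

```c
#include <stdint.h>

/* Pack 8-bit r, g, b channels into a 16-bit RGB565 pixel by keeping
 * the top 5, 6 and 5 bits of each channel respectively:
 * bits 15-11 red, bits 10-5 green, bits 4-0 blue. */
static uint16_t make_rgb565(uint8_t r, uint8_t g, uint8_t b)
{
    return (uint16_t)(((r & 0xf8) << 8) | ((g & 0xfc) << 3) | (b >> 3));
}
```

For example, make_rgb565(255, 0, 0) gives 0xf800 - all 5 red bits set and nothing else.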

Now the render code. This uses glDrawTexiOES, an OpenGL ES extension that renders a texture straight to the screen. It is the fastest way to do things here, as there is no real 3D going on; it just draws your texture directly to the screen.


static unsigned short s_pixels[MY_SCREEN_WIDTH * MY_SCREEN_HEIGHT];
#define S_PIXELS_SIZE sizeof(s_pixels)

void render_pixels(unsigned short *pixels); /* your game's draw routine */

void JNICALL native_gl_render(JNIEnv *env UNUSED, jclass clazz UNUSED)
{
    memset(s_pixels, 0, S_PIXELS_SIZE);
    render_pixels(s_pixels);
    glClear(GL_COLOR_BUFFER_BIT);
    glTexSubImage2D(GL_TEXTURE_2D,       /* target */
            0,                           /* level */
            0,                           /* xoffset */
            0,                           /* yoffset */
            MY_SCREEN_WIDTH,             /* width */
            MY_SCREEN_HEIGHT,            /* height */
            GL_RGB,                      /* format */
            GL_UNSIGNED_SHORT_5_6_5,     /* type */
            s_pixels);                   /* pixels */
    glDrawTexiOES(0, 0, 0, s_w, s_h);
    /* tell the game logic thread to carry on */
    pthread_cond_signal(&s_vsync_cond);
}

The memset clears out old pixel values; if you were careful and kept track of dirty areas, refreshing only those, it could be omitted. I'm keeping things simple here, though, and clearing the whole screen each time. The render_pixels routine does whatever it takes to draw your game's pixels into the s_pixels array in RGB565 format. The glClear call is not strictly necessary, but it may help speed up the pipeline, as the hardware knows not to bother keeping any old values; experimentation shows that leaving it in doesn't hurt the frame rate, at least. The glTexSubImage2D call copies the s_pixels data into video memory, updating only the area indicated rather than the whole texture (if you do update the whole texture, it is actually faster to call glTexImage2D). Finally, glDrawTexiOES draws the texture to the screen, scaled to the screen size.
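If you did want to track dirty areas, a single growing bounding box is often enough. A sketch, with names of my own invention:

```c
#include <limits.h>

/* A dirty bounding box, grown as pixels are plotted and reset after
 * each texture upload. The "empty" state is encoded as x0 > x1. */
struct dirty_rect { int x0, y0, x1, y1; };

static void dirty_reset(struct dirty_rect *d)
{
    d->x0 = d->y0 = INT_MAX;
    d->x1 = d->y1 = INT_MIN;
}

/* Grow the box to include the plotted pixel (x, y). */
static void dirty_add(struct dirty_rect *d, int x, int y)
{
    if (x < d->x0) d->x0 = x;
    if (y < d->y0) d->y0 = y;
    if (x > d->x1) d->x1 = x;
    if (y > d->y1) d->y1 = y;
}

static int dirty_empty(const struct dirty_rect *d)
{
    return d->x0 > d->x1;
}
```

After rendering a frame you would upload only rows y0 to y1. One caveat: OpenGL ES 1.1 has no GL_UNPACK_ROW_LENGTH, so the simplest approach is to upload full-width strips covering the dirty rows rather than the exact sub-rectangle.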

The final pthread_cond_signal tells our vsync wait to wake up. I haven't mentioned this yet, but in order to have a GBA- or DS-like coding experience, it is vital to wait on the screen refresh. The implementation is simple, as Android lets you use all the usual pthread calls from the world of Linux: create a mutex and a condition variable at the start of the main loop, and have wait_vsync lock the mutex and wait for a signal on the condition.

#define UNUSED __attribute__((unused))

static pthread_mutex_t s_vsync_mutex;
static pthread_cond_t s_vsync_cond;

static void wait_vsync()
{
    pthread_mutex_lock(&s_vsync_mutex);
    pthread_cond_wait(&s_vsync_cond, &s_vsync_mutex);
    pthread_mutex_unlock(&s_vsync_mutex);
}

void JNICALL native_start(JNIEnv *env UNUSED, jclass clazz UNUSED)
{
    /* init the mutex and condition */
    pthread_cond_init(&s_vsync_cond, NULL);
    pthread_mutex_init(&s_vsync_mutex, NULL);

    while (1) {
        /* game code goes here */
        wait_vsync();
    }
}

That ensures the main loop runs in step with the screen refresh, with the game logic paced by the render thread.

Obviously, if you want to write code from scratch that will only ever run on Android, there is no point jumping through these hoops - just use Java and forget about the NDK. However, if you want to port existing code to Android, or you don't want to write new code that is tied to a single platform, this approach is IMO the best way to go about it. It makes Android an almost decent platform for writing old-skool games :-)

I've made a compilable example of this code available on github: https://github.com/richq/glbuffer

17 comments:

  1. Anonymous, 1:01 a.m.

    Nice example, do you have zipped example sources I could compile and try run on Android device.
    thx

  2. I meant to get round to doing this, so thanks for reminding me. Here's the code: https://github.com/richq/glbuffer/tree

    You can get a zip or tarball from this page https://github.com/richq/glbuffer/downloads

  3. Anonymous, 9:32 a.m.

    Thanks for the good explanation and code sample. Anyway, I've been testing this on Nexus One. On 2.2.1 and 2.2.2 the fps is as low as 8.0. Any quick thoughts how to fix this to run on decent frame rate? At least fps2d from Market runs on 60 fps.

  4. I recently learned that newer devices are limited by fill rate and you should aim for 30fps. Is the frame rate a constant 8fps, or does it just occasionally show that low value? If it's the odd spurious 8fps reading, it could be an anomaly. If it is fixed at ~10fps, then there is something terribly wrong and I'd like to find out what too. My Android game in the Market uses this technique (more or less) and also uses vblank for timing purposes, so if it runs dog-slow on some phones that would be a bit lame.

  5. Anonymous, 4:09 p.m.

    At first: Sorry for mixing up with the numbers. I was trying out some things to speed up the sample and fps dropped to 8. Actually your original code runs on Nexus One at 15 fps. There might be occasional drop to 14.33334.

    Today I asked a friend with HTC Desire HD to run the same apk and it runs at constant 38 fps. SDK r08 and NDK r4b&r5b.

    On my device native_gl_render() takes ~64 ms, while it spends ~2 ms outside that function between updates.

    On Desire HD native_gl_render() takes ~24 ms, while it spends ~2 ms outside that function between updates.

  6. Wow that is pretty bad. Could be related to this thread? http://osdir.com/ml/Android-Developers/2010-04/msg02973.html The conclusion there was "don't do this, it is too slow" :-(

  7. Anonymous, 11:17 a.m.

    Apparently yes. The good news is, that the same code runs on Creative Ziio 7" (Android 2.1) at 60 fps.

  8. "In the constructor of your Java GLSurfaceView, you should call the main loop (in C) from a separate thread - this way the loop does not block the main thread of your Android application and the Android OS won't kill it for being non-responsive."

    Nope. GLSurfaceView starts another thread for its render + present, and that thread calls your callback. You'll never get killed for being non-responsive for doing too much work in GLSurfaceView.onDrawFrame().

    "That ensures that main loop can wait on screen redraws, which avoids tearing."

    Nope. It's impossible to get screen tearing on an Android device with GLSurfaceView, since it does the gl swap for you.

    "You have 2 options (3 if you are willing to target Android 2.2+): drawing pixels to a Canvas, drawing to an OpenGL ES texture, or drawing directly to the pixel buffer of a Bitmap."

    What about just making native ogl calls that render to your color + depth buffers? Why are you introducing render-to-texture? You don't need half of the stuff you're talking about. You certainly don't need glDrawTexiOES() at all, which is good because a lot of people on this thread are complaining about its performance.

    When you initialize OGL, just query the device for the native resolution, create color + depth buffers of the correct sizes, and clear/render to those. When you're done, GLSurfaceView will present them.

    Maybe you should read up on some OGL tutorials, you'll see that none of them use the methods you're describing (render to texture) just to get basic sprite/tri rendering up and running.

  9. Thanks for the feedback C. You're absolutely right about the tearing, that's not an issue at all on Android.

    Re: non-responsiveness, I was probably not clear enough. What I'm using here is a 3 threaded model.

    1. the Android GUI thread, which you cannot block or face a "not responding" dialogue;
    2. the thread that is started from the GLSurfaceView (not inside the GLSurfaceView.Renderer, which is what you thought I meant) - this is where the non-OpenGL-calling native code executes;
    3. the thread that renders the screen, calling GLSurfaceView.Renderer.onDrawFrame, which calls into the native code that uses OpenGL.

    The non-responsiveness occurs if you were to run (2) directly from the GLSurfaceView constructor without running it in a new thread. Sure the renderer code is in a separate thread, but that is not the concern here.

    Re: frame-buffer vs texture. For this simple example it is overkill, but it does have an advantage - if you use the native size of the screen, create a frame buffer of that size, and draw pixels there directly, then you'd have to scale the graphics yourself depending on the screen size instead of having the texture renderer upscale them for you. I'll certainly look into this though, because I do think there must be an easier way than what I've posted here.

    As for reading tutorials - if you have any links to good tutorials do let me know, there's not a lot of good information on how to do a lot of this kind of thing.

  10. "The non-responsiveness occurs if you were to run (2) directly from the GLSurfaceView constructor without running it in a new thread. Sure the renderer code is in a separate thread, but that is not the concern here."

    Stop creating threads, except for your input thread. Run your sim and render, sequentially, inside of GLSurfaceView.onDrawFrame(). You don't need to create extra threads at all, that's what I'm saying. You'll never be tagged as nonresponsive, and you don't need to coordinate your sim + render code synchronization with cond vars.

    "if you use the native size of the screen, create a frame buffer of that size, and draw pixels there directly, then you'd have to scale the graphics yourself depending on the screen size instead of having the texture renderer upscale them for you."

    Nope. You need to set up your own 2D coordinate system and render your sprites to that. A correctly set up orthographic projection matrix, applied to your world-space sprite coordinates inside of your vertex shader, will take care of all of your scaling issues on all devices, irrespective of resolution. All OGLES 2.0 GPUs are optimized for this path, you won't get bizarre performance hits.

    You'll be happier if you start thinking about the graphics hardware in the way that it's built to be used, as a 3D triangle transform + rasterizer engine. Your game happens to only take place in one specific Z plane, making it 2D.

  11. Well, I think we'll agree to disagree on the threading - each to whatever works best for the way they code. http://replicaisland.blogspot.com/2009/10/rendering-with-two-threads.html

    The rendering ideas sound good, and I'd love to see some code examples of what you mean.

  12. Anonymous, 3:20 p.m.

    Hi,
    Thanks for the post i have a question, do you mean copyPixelsToBuffer and copyPixelsFromBuffer with "drawing directly to the pixel buffer of a Bitmap" ?

  13. I mean the "AndroidBitmapInfo", "AndroidBitmap_getInfo", and "AndroidBitmap_lockPixels" calls in the NDK. Calling lockPixels gives you a pointer to the Bitmap's pixels. These calls can only be used with the NDK on Froyo upwards.

  14. Anonymous, 6:31 p.m.

    hello C,,
    could you please send us more details about your rendering idea,, and what if my scene data are in 2D array, shouldn't i copy it first to 2D texture?? how can i render it to buffer array directly,,

    Thanks in advance

  15. Anonymous, 7:14 a.m.

    DrawPixels / ReadPixels is not available in OGL ES 1.1/2.0.

  16. I think LINUX is betrayed by Android-Google. C/C++ is not fully supported, that is foolish and unmoral. Why do you choose Java and, but kernel is LINUX (created by C.) If Android-Google has honor, please create new kernel for yourself, and you can do what you want. If you choose kernel is LINUX, the first, you must fully support for C/C++.

  17. I tend to agree with you The Lost Generation - the problem is all of the closed source drivers, which mean something like Debian-Android is impossible. Cyanogenmod is not the same, it has a dubious copyright situation. The pretty-much-dead Replicant project would be the proper way to go.

