Thursday, June 16, 2011

Image Transparency & Performance (Frame Rate) Solved

Moving from an Android phone to an Android tablet, I have been having a hell of a time trying to keep my frame rate stable at about 60 FPS.  Search as I might on the internet, no one seems to have asked or answered this question, so let me post it here:
"How can I draw many Sprites/Bitmaps that contain transparency on a (SurfaceView) canvas in Android without having my framerate (FPS) go to hell?"
Things worked fine on my phone (58 FPS), but deploying the same app to my tablet, the framerate chugs to below 30 FPS.  I assumed it had something to do with the larger screen.  (However, I assumed the video card would be "beefier" on the tablet, so I couldn't quite understand what was going on.)

Searching the internet, I found this article from Romain Guy about rendering performance.  In general, rendering an RGB_565 image can be 1.5 to 2x faster than the other formats (depending on the screen you are rendering to).


That made some sense, but then I had read many times that transparency is not possible in the RGB_565 format (which is not true, by the way; it just does not contain an alpha channel, a big difference).  The ARGB_4444 format literally means 4 bits for each of Alpha, Red, Green, and Blue;
RGB_565 means 5 bits Red, 6 bits Green, 5 bits Blue.
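To make those layouts concrete, here is the bit math in plain Java (the helper names are mine, not Android API):

```java
/** Packs 8-bit channels into the 16-bit formats described above:
 *  RGB_565 (5 red, 6 green, 5 blue, no alpha) and ARGB_4444
 *  (4 bits per channel, including alpha). Illustrative only. */
class PixelFormats {

    /** Pack 8-bit channels into a 16-bit RGB_565 value (alpha is discarded). */
    static int toRgb565(int r, int g, int b) {
        return ((r >> 3) << 11) | ((g >> 2) << 5) | (b >> 3);
    }

    /** Pack 8-bit channels into a 16-bit ARGB_4444 value (4 bits each). */
    static int toArgb4444(int a, int r, int g, int b) {
        return ((a >> 4) << 12) | ((r >> 4) << 8) | ((g >> 4) << 4) | (b >> 4);
    }
}
```

So an RGB_565 pixel throws away the low 3 bits of red/blue and 2 bits of green, and has nowhere at all to keep alpha.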

RGB_565 does have a pixel value that can be treated as "transparent" (the int 0), so images stored as PNGs can be read in (with transparency), or you can create a bitmap yourself:


Bitmap bitmap = Bitmap.createBitmap(width, height, Config.RGB_565);

Anyway, this has made a HUGE difference; too bad it took this many days of struggling with other workarounds... hopefully someone else out there will find this info and it will be a help to them.

Added note:
In Gingerbread, all windows are 32-bit; OpenGL stays 16-bit for compatibility:
http://www.curious-creature.org/2010/12/04/gingerbread-and-32-bits-windows/

Side note: playing Plants versus Zombies on my Android tablet, I think they are suffering from this problem (using the wrong assets to draw on the screen); the framerate practically kills this stellar experience for me.

Thursday, April 21, 2011

Optimizing the Development Process for Android

Developing applications for Android (using the .apk Android package deployable) is optimized for a smaller memory footprint and for runtime performance.  Here's a good video explaining an APK:
http://sites.google.com/site/io/inside-the-android-application-framework In the presentation, an APK is called "an island", which is an appropriate metaphor.  In this way, assets and code are constructed and linked/bound together so developers can quickly access resources (layouts, images, etc.) using a main (precompiled) registry (i.e. R.strings.name).  This convenience does much for making sure you get things correct at compile time, so you don't push out a half-baked application which is missing a string, image, sound, or other asset.  (And that's a good thing.)

Because the APK structure of an Android application is optimal for storage footprint and runtime performance, it is unfortunately sub-optimal for another aspect: flexibility and quick iterations.
Let me explain (through a use case).

Designing layouts in Android provides convenient ways of accessing resources (images, text, etc. in the res directory) directly from the Layout.xml file.  In Eclipse, there is even a "real time" editor and viewer which lets you lay out components (buttons, text, images) and see what they "should" look like on the Android device.  This is no doubt a nice feature and time saver... assuming what you are looking at in the preview pane is what you get on the device (which in my experience is not the case).  In practice, what ends up happening is that the UI will:

  • look one way in the Layout preview pane, 
  • a different way in the emulator, 
  • and a third way running on a specific target device.

Going through the development process with an emulator, the Layout preview, and a target phone (HTC Droid Incredible), trying to get the layout to look right everywhere was tedious:
1) make one change to the layout.xml,
2) redeploy to the phone,
3) enter information in the application until I get to the page I am testing (frequently the last one in the app),
4) realize the page doesn't look quite right, and go back to step 1.

I must have spent a good 1/3 of my time just tweaking the UI to try to make things look right (and this was in development.)

Then, when the application was in QA, this continued.  The QA leads were using 1st-gen Android phones, and regardless of my emulator settings, those screens were impossible to emulate, so I ended up receiving feedback such as "the character needs to move up more" and "the guy's hand is getting cut off on the right".  I'm not blaming QA, they were stellar (they would even take pictures of what the screens looked like), but as an architect/developer, it was death by 1000 paper cuts.


The sad part is that I was making the most minor changes (i.e. tweaking the number of dips (device-independent pixels) between the header image and the content, etc.) in some XML file... this was EJB deployment descriptors all over again.
In addition, we had internationalization issues, and the text (for things like buttons and screen verbiage) changed frequently; for each (simple) change, we had the following process:
1) (QA) identified the problem
2) (QA) wrote up a JIRA (screenshots, etc.)
3) (QA) assigned the JIRA
4) (Management) scheduled and assigned the JIRA
5) (Developer) changed a single XML file
6) (Developer) redeployed the app to a dev phone
7) (Developer) checked in the change
8) (Developer) updated the JIRA status
9) (Developer) crossed fingers / prayed it fixed the problem (if not, goto 1)

What I would love is to be able to make all of these changes (to things like layouts, strings, etc.), push out a debug version of the application, and allow changes to be made at runtime (until things are refined to the point where a production version is available).

Alternatively:
Deploy a "debug" version of the application which resolves assets (layouts, strings, etc.) locally or from a server, so effectively you can change (simple) assets at runtime.  To accomplish this, we are looking at a combination of Google App Engine and C2DM http://www.youtube.com/watch?v=51F5LWzJqjg&feature=related
... where C2DM just notifies your Android device that it should go out and reload some asset from the server. More later.
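A minimal sketch of what that debug-mode resolution could look like, in plain Java (all class and method names here are hypothetical, and the networking is omitted): lookups go through an override table that the server push can update, falling back to the values baked into the APK.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

/** Debug-only asset resolver: server-pushed overrides win over the
 *  values compiled into the app. All names are hypothetical. */
class DebugAssetResolver {

    private final Map<String, String> packaged;   // values baked into the APK
    private final Map<String, String> overrides = new ConcurrentHashMap<>();

    DebugAssetResolver(Map<String, String> packaged) {
        this.packaged = packaged;
    }

    /** Called when a C2DM-style ping says an asset changed on the server. */
    void applyServerUpdate(String key, String newValue) {
        overrides.put(key, newValue);
    }

    /** The app asks here instead of R.string.*; overrides take priority. */
    String resolve(String key) {
        String v = overrides.get(key);
        return v != null ? v : packaged.get(key);
    }
}
```

The production build would skip this class entirely and read resources the normal way.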

Tuesday, April 12, 2011

The Editor != The Game, The Game != The Editor

A design goal for real-time editing is that the game (in its finished, running form) does not contain the infrastructure and overhead of the editor.  Likewise, the editor should not be "the game + some debug code" running within a "debug" container.  That way, one can be in a "broken" state without affecting the other (they are decoupled).  The reason is that it is difficult for the game engine to handle not only loading the assets, but also maintaining a solid framerate while operating the debug machinery... and you don't want the most-tested version of the game to be the version that is not being shipped.  Also, I don't want to build an image/sound/map editor inside the game; I want the state of the game to be exported, so I can edit the game map with an appropriate tool, push game map changes to a server, and have them reloaded into the game.

One approach I considered was having two separate projects, each using a separate Application class, with both referencing a "library" project containing the main code (i.e. Activity and Views, etc.) and assets.  It's not a bad idea, other than the potential for the code to diverge (from one version to the other), leaving you with a perfectly good running debug version (which gets the majority of iterations) and a relatively "untested" production version.

The approach I am taking now is to have the "editors" introduce themselves into the code at bootstrap, which only happens if the development project is on the classpath.  So there is only one place in which the production (running) version of the code differs from the debug version... here is the (current) wiring code for the "production" version:


public class BeaconDependencyProvider implements IBeaconDependencyProvider {

    GameRenderer gr;
    GameTiltListenerConfig tiltConfig;
    GameTiltListener tilt;
    GameMapLoader mapLoader;
    GameState gameState;
    Simulation sim;

    @Override
    public void setUp(Application application) {
        gr = new GameRenderer(new GameMapRenderer(new SolidColorTileRenderer(), new BallRenderer()));
        tiltConfig = new GameTiltListenerConfig();
        tilt = new GameTiltListener(tiltConfig);
        mapLoader = new GameMapLoader(application);
        gameState = new GameState();
        sim = new Simulation(mapLoader, tilt, gameState);
    }
}


The Debug version is "wired" together differently:



/** Provides the debug dependencies for Beacon */
public class BeaconDebugDependencyProvider implements IBeaconDependencyProvider {

    /** All editors for the game */
    public CompositeEditor ce;

    IGameRenderer gr;
    GameTiltListenerConfig tiltConfig;
    GameTiltListener tilt;
    IGameMapLoader mapLoader;
    GameState gameState;
    Simulation sim;

    @Override
    public void setUp(Application application) {
        Log.e("BeaconDebugModule", "Configuring Dependencies");

        TileColorPalette tcp = new TileColorPalette();
        BallRenderer br = new BallRenderer();
        SolidColorTileRenderer sctr = new SolidColorTileRenderer(tcp);
        GameMapRenderer gmr = new GameMapRenderer(sctr, br);
        gr = new GameRenderer(gmr);
        tiltConfig = new GameTiltListenerConfig();
        tilt = new GameTiltListener(tiltConfig);
        GameMapEditor gme = new GameMapEditor();
        mapLoader = new EditableGameMapLoader(application, gme);
        gameState = new GameState();
        sim = new Simulation(mapLoader, tilt, gameState);

        ce = new CompositeEditor();

        FieldEditor tcpe = new FieldEditor(tcp);
        GameTiltConfigEditor gte = new GameTiltConfigEditor(tiltConfig);
        FieldEditor gse = new FieldEditor(gameState);

        ce.addEditor("tiltConfig", gte);
        ce.addEditor("tileColorPalette", tcpe);
        ce.addEditor("gameState", gse);
        ce.addEditor("gameMap", gme);
    }
}

None of the editor code exists in the production distro; in addition, none of the overhead of creating and wiring the editors is incurred by the main game while running.

One aspect I need to explore is forcing a refresh of the screen.  At the moment, my gameMap class is created and stored, and each time the frame is drawn, the cached bitmap is presented on the screen.  I need a "generic" avenue for forcing the game to realize that a recently loaded asset is dirty and trigger a re-render of the screen.
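One simple way to get that "generic" avenue is a check-and-clear dirty flag shared between the editors and the render loop; a sketch (names are mine):

```java
import java.util.concurrent.atomic.AtomicBoolean;

/** Editors mark the flag after mutating state; the render loop consumes it
 *  once per frame to decide whether to rebuild its cached bitmap. */
class DirtyFlag {

    private final AtomicBoolean dirty = new AtomicBoolean(false);

    /** Called by an editor (possibly from another thread) after a change. */
    void mark() {
        dirty.set(true);
    }

    /** Called once per frame by the renderer; returns true at most once per
     *  mark(), so the expensive rebuild happens exactly one time. */
    boolean consume() {
        return dirty.getAndSet(false);
    }
}
```

The renderer's draw call would then start with `if (flag.consume()) rebuildCachedBitmap();` before blitting the cache.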


Monday, April 11, 2011

Runtime State Editing Android

I was happy to find some more supporting evidence that rapid iteration is the key to developing good software.  Here's a great article from Gamasutra about the development of tools for Dead Rising 2:
"Typically a tool will be some sort of viewer that allows a designer to tweak their content data. That data is then compiled, built, or baked, and somehow makes its way into the game. Often this process is long -- minutes if not hours -- involves restarting the game, and usually involves having a programmer enter some secret code.
It seemed much better to have a designer move objects around at run time from the comfort of their PC, so we developed a communications protocol that could talk between a game console and the PC.  
Tools could now make the game do things (like spawn objects, move their locations, change an items attributes, etc.) at runtime."

I think games are a prime example of why designing tools for real time is so powerful.  Game developers spend a good deal of time developing the foundation (physics, rules, etc.), then folks who aren't programmers (sometimes artists or level designers) populate the game with content.  If those iterations are slow (as described in the article), the process becomes painfully tedious.  The easier it is for people to make changes, the more likely you will end up with a refined product in the end.

Normally (and especially with Android), software is optimized for the finished/deployed application (i.e. performance, load times, memory footprint, etc.), and those are good things to focus on for a finished product.  However, having all of the resources local inside a single distro requires a recompile, repackage, and redeploy each time, which is not efficient for creating software.  What I want to offer is a way to optimize for quick iterations, in a way that can still produce a refined finished product.

Originally I was looking to create some simple telemetry for determining what areas of the game are too difficult, returning information about the game as it is being played.  Taking this a step further, it seemed logical that if I could monitor the state of the game, why not allow real-time editing as well?  (If you can identify an issue using telemetry, why couldn't you "fix" and retest it in real time?)


The system I have in mind is client-server based.  The Android device will have a Service (call it the RealTimeChangeService) that can be bound to by the application (only in development/debug mode).  During bootstrap, the application class will resolve all of the dependencies, including all of the application "state": it will not only wire components (code) together, but also resolve variables (i.e. touchSensitivity), images (headerleft.gif), sounds (alarm.au), etc.

For example, I have a "state" class which contains the variables for dealing with the accelerometer:


/** The configuration properties of a GameTiltListener */
public class GameTiltListenerConfig {

    /** threshold to determine if the x tilt encountered should cause movement (i.e. dead zone); should be > 0 */
    public float xTiltDampener = 0.3f;

    /** threshold to determine if the y tilt encountered should cause movement (i.e. dead zone); should be > 0 */
    public float yTiltDampener = 0.3f;

    /** factor applied to the xTilt (assuming xTilt > xTiltDampener); (> 1.0 = speedup) (< 1.0 = slowdown); should be > 0 */
    public float xTiltSpeedFactor = 1.0f;
}


GameTiltListenerConfig is created at bootstrap and used within the game to modify the character movement.  In real-time development mode, in addition to creating and setting these values, the application will also create and register an "Editor" which holds a reference to the GameTiltListenerConfig.  This editor is capable of exporting the "specification" (i.e. the Editor is aware of all of the properties available on the GameTiltListenerConfig object), exporting the state of the object (at the moment I'm using JSON as the data format), and accepting changes to the state (i.e. allowing someone to say editor.set("yTiltDampener", 0.5)).


This is where Java reflection comes in handy... I created a simple class (FieldEditor) that takes any JavaBean in its constructor and introspects the class (in this case a GameTiltListenerConfig) to find all of its fields (names and types).  The FieldEditor can accomplish all of these tasks: export the specification of the object, export its state, and accept changes to update the values of the fields.
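A stripped-down sketch of such a reflective FieldEditor (error handling and the JSON specification export omitted; the TiltConfig bean here is an illustrative stand-in for GameTiltListenerConfig):

```java
import java.lang.reflect.Field;

/** Example bean, mirroring GameTiltListenerConfig from the post. */
class TiltConfig {
    public float xTiltDampener = 0.3f;
}

/** Minimal reflective editor: lists a bean's public fields and
 *  gets/sets them by name. */
class FieldEditor {

    private final Object target;

    FieldEditor(Object target) {
        this.target = target;
    }

    /** The "specification": the names of the editable public fields. */
    String[] fieldNames() {
        Field[] fields = target.getClass().getFields();
        String[] names = new String[fields.length];
        for (int i = 0; i < fields.length; i++) {
            names[i] = fields[i].getName();
        }
        return names;
    }

    /** Mutate a field at runtime, e.g. set("xTiltDampener", 0.5f). */
    void set(String name, Object value) throws Exception {
        target.getClass().getField(name).set(target, value);
    }

    Object get(String name) throws Exception {
        return target.getClass().getField(name).get(target);
    }
}
```

Because everything goes through Field objects, one class can edit any stateful game object without knowing its type up front.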

So, on bootstrap:
1) The state is created and the game entities are wired together
2) Editors are created with references to the game state
3) The application binds to the "RealTimeChange" service
4) The specification(s) are exported (there are many specifications for the many stateful game objects)
5) The state of the game is exported
6) The game listens for changes coming from the "RealTimeChange" Service
---On change: The appropriate Editor will process the change

The Service will perform all of the required server connectivity: it will upload the specifications and state to the server, and the state can be changed on the server (through any appropriate client).  The Service (on the Android device) will poll the server at set intervals to determine whether changes are required; if so, the client requests the updates, and the appropriate editor mutates the state of the program at runtime.
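The dispatch side of that poll loop could look roughly like this (plain Java; the networking and JSON parsing are left out, and all names here are hypothetical):

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

/** Each poll returns a batch of (editor, field, value) changes; the
 *  dispatcher routes every change to the registered editor for that name. */
class ChangeDispatcher {

    /** One change pulled down from the server. */
    static class Change {
        final String editor, field, value;
        Change(String editor, String field, String value) {
            this.editor = editor;
            this.field = field;
            this.value = value;
        }
    }

    /** What every editor (field editor, map editor, ...) would implement. */
    interface Editor {
        void set(String field, String value);
    }

    private final Map<String, Editor> editors = new HashMap<>();

    void register(String name, Editor e) {
        editors.put(name, e);
    }

    /** Apply one polled batch; changes for unknown editors are skipped. */
    void apply(List<Change> batch) {
        for (Change c : batch) {
            Editor e = editors.get(c.editor);
            if (e != null) {
                e.set(c.field, c.value);
            }
        }
    }
}
```

The registrations would mirror the CompositeEditor wiring from the earlier post ("tiltConfig", "gameState", and so on).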

...Later I'll describe more about the specifications, how they are tied to the data, and the server-side editors (a dynamic UI editor that is created at runtime).

Friday, April 8, 2011

RoboGuice (Gone, baby, Gone)

So, after fiddling around a bit with RoboGuice, I looked at how to design the application without incurring the cost of using "extends" and without the extraneous jars and annotations.  What I ended up with was (actually) simple: just wire all of the components together in the "Application" class:

public class BeaconApplication extends Application {
    public IGameMapLoader mapLoader;

    @Override
    public void onCreate() {
        super.onCreate();
        // build in onCreate(), not the constructor, so the Context is ready
        mapLoader = new GameMapLoader(this);
    }
}


...then in my Activity:

public class BeaconActivity extends Activity {
    IGameMapLoader gameMapLoader;

    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        BeaconApplication ba = (BeaconApplication) getApplication();
        this.gameMapLoader = ba.mapLoader;
    }
}

Generally speaking, I don't like that the Activity now needs to know about the Application, but I sure like it a helluva lot more than:


public class BeaconApplication extends RoboApplication {
    protected void addApplicationModules(List modules) {
        modules.add(new BeaconModule(this));
    }
}


public class BeaconModule extends AbstractAndroidModule {
    private Application application;

    public BeaconModule(Application application) {
        this.application = application;
    }

    protected void configure() {
        IGameMapLoader mapLoader = new GameMapLoader(this.application);
        bind(IGameMapLoader.class).toInstance(mapLoader);
    }
}


public class BeaconActivity extends RoboActivity {
    @Inject IGameMapLoader gameMapLoader;
}



I could go back and utilize a "Module" to construct and swap out dependencies depending on the environment, but for the time being I'm not coupled to all this extension and annotation nonsense.  I could also use "get()" methods rather than instance variables on the Application class, which would let me lazily create things (although in practice that's usually not a good idea).  Anyway, for normal operation, the tradeoff of using Guice/RoboGuice in an Android application just seems like a bad idea.  (You pay a lot for not so much.)
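For reference, the get()-style lazy accessor would look something like this (a sketch with a stand-in interface for the real Android types):

```java
/** Lazy-accessor alternative to a public instance variable on the
 *  Application class. The interface is a stand-in for the real loader. */
class LazyProvider {

    interface IGameMapLoader { }

    private IGameMapLoader mapLoader;   // created on first use

    /** Lazily construct the loader (note: not thread-safe as written). */
    IGameMapLoader getMapLoader() {
        if (mapLoader == null) {
            mapLoader = new IGameMapLoader() { };   // stand-in for new GameMapLoader(this)
        }
        return mapLoader;
    }
}
```

Every caller after the first gets the same cached instance, which is also why lazy creation can bite you: the construction cost lands on whichever frame happens to ask first.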

RoboGuice and Why Android Needs DI (at its Core)

So I've been playing around with RoboGuice, which is built on the semi-popular Guice dependency injection framework.  This is my first experience with Guice.  I've always thought Crazy Bob was a smart dude and figured it'd be nice to give Guice a try; I never gave it a chance before because Spring caught on like wildfire.

So my impressions so far are... well, good and bad.

I like:

  • modules: instead of XML files (as in Spring), you write a Module class, and this class binds things together
  • everything happens in the Application class, which makes sense


I don't like:
extends -- I have to use the RoboGuice class hierarchy, i.e.
public class BeaconActivity extends RoboActivity

verses
public class BeaconActivity extends Activity


annotations -- they make the code less portable, and I'd rather not have code with silly @Inject tags everywhere

More specifically, this matters in cases where you want to configure "library" code (you are using a library module and you need to configure it in your code or module)... Also, if you develop library code (i.e. the ContourMap generator I created), I don't want to put @Inject annotations everywhere and force people to add this dependency to get it to work on Android.
(Side note: I think this is why Crazy Bob has been trying to make @Inject a standard Java annotation, because (I fear) people don't want to ship library code that uses Guice, since it forces library users to also include Guice.)

lifecycle -- you need to understand how RoboGuice configures itself and when in the Android application lifecycle that happens.  I originally had a bunch of NPEs because I was trying to log a value in onCreate() that wasn't available until onStart().

Frankly, I understand where the RoboGuice team is going with the project; it's based on the idea that they can make things "more convenient", get rid of boilerplate, and make code "more testable".  But you also have to make sacrifices... new jar dependencies on guice-2.0-no-aop.jar and roboguice-1.1.1.jar.


Here's the creator giving an overview


It got me thinking, though: what Android really needs is a good DI framework, not some "bolt-on" solution (no offense, but that is what RoboGuice is).  Android needs DI at its very core.

Although people currently argue that DI systems are too "heavyweight" or "resource intensive", keep in mind how Android currently handles the R resource (all of this is done "pre"-deployment)... the same could apply to dependency injection, if done correctly.  OK, well... more later (on why Android needs DI).

Sunday, April 3, 2011

Dependency Injection Android (Rapid Prototyping)

In order to achieve the nirvana of real-time iterations for Android (making changes to Android applications at runtime without recompiling/redeploying), I'm designing a client-server model to provide state/properties and "publish" changes to the client (Android device).  The key to all of this is dependency injection.  If all objects are configured and wired together from the outside (as dependency injection dictates), then, in "real-time editing mode", changes can be applied to the program state.

In "production mode" there will be no client-server architecture; all of the configuration will be done locally (also through dependency injection).  In development/debug/real-time-editing mode, the configuration will be either:
1) configured locally, then uploaded to the server, with the server polled for changes (publish & poll), or
2) configured remotely, "pulling" the configuration/properties from the remote server (pull & poll).

So, off the bat, we have a few issues to contend with.  First, we don't want all the client-server code to be integrated and deployed with the production application.  I will use the strategy described in an earlier post of development/release-aware builds to accomplish this.  (Basically I have multiple application deployment "topologies", each consisting of potentially many Eclipse projects.)

the "debug" deployment topology version of the release, which contains:
1) the base project
2) any "library" projects needed to run the application
3) a debug Android library project containing overrides and configuration, as well as any library code and dependencies for the client-server architecture (this is an Android "library project")

the "production/release" version of the project, which contains:
1) the base project
2) any "library" projects needed to run the application

The main "inflection point", or the way I provide this indirection, is at runtime during initialization: I check for the existence of a specific class which is only available in the debug project (let's call it DebugMode.class).  If this class exists, it is created, and its constructor performs all of the additional configuration and initialization, as well as any property overrides, to make the application run in "client-server mode".

So building and deploying the application in release mode requires nothing special; building and deploying in debug mode requires you to change the classpath to add the debug project (that way none of the debug-specific code is deployed to the release app unnecessarily).  This can be done either in Eclipse directly or in Maven/Ant (with different release targets).
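The class-existence check itself is just a few lines of reflection; a sketch (the class name is illustrative):

```java
/** At bootstrap, look for a class that only exists in the debug project;
 *  if it loads, instantiate it so its constructor can wire up the
 *  client-server extras. */
class DebugModeDetector {

    /** Returns the instantiated debug hook, or null in a release build. */
    static Object initDebugModeIfPresent(String className) {
        try {
            return Class.forName(className).getDeclaredConstructor().newInstance();
        } catch (Exception e) {
            return null;   // class not on the classpath: production mode
        }
    }
}
```

The release build pays only the cost of one failed Class.forName lookup; everything else lives in the debug project.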

Another issue I am working on is selecting a DI framework which works for Android; at the moment I am using RoboGuice... which I'm starting to like.  Those @Inject tags make things relatively easy to read.

package example.roboguice;

import roboguice.activity.RoboActivity;
import android.os.Bundle;
import android.util.Log;

import com.google.inject.Inject;

public class MyRoboGuiceActivity extends RoboActivity {
    @Inject protected MyInterface someInterface;
 
    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.main);
    }
   
    public void onStart() {
     super.onStart();
     Log.e("Guice", someInterface.getName());
    }
}
...I also like that "Modules" (which "wire" everything together) are relatively straightforward:

package example.roboguice;

import android.util.Log;
import roboguice.config.AbstractAndroidModule;

public class MyRoboGuiceModule extends AbstractAndroidModule {
    protected void configure() {
        Log.e("MyModule", "Loading Module ... SomeInterface");
        bind(MyInterface.class).to(MyImpl.class);
    }
}


We'll see how things turn out.

Saturday, April 2, 2011

Android Rapid Prototyping

For me, the key to developing great software is the ability to iterate quickly.  There are two competing forces here: discipline and time.  You must be disciplined enough to keep iterating until the software is 100% right (not "good enough" or 90%).  Then there is time, which tells you that 90% IS good enough, since you've got a laundry list of other (and potentially more interesting) problems to solve.

One of the first things I identified while using Android was "speed".  At first I thought, "Wow, I can get an application up and running on Android quickly"... but then, after iterating and making simple (minor) changes to text, variables, and parameters (and having to redeploy every time), I was reminded of the old EJB days, where you'd make a change (to an EJB or deployment descriptor), then cross your fingers and hope it'd deploy.

For example, I find myself "adjusting" some algorithms for the sensitivity of the accelerometer (i.e. where is the dead zone, what would I consider the maximum tilt that still allows the player to view the screen) through trial and error... and each minor adjustment requires a recompilation (fast) and a deployment to the phone (slow).  So I might make a minor adjustment to a single number, and it takes about a minute to get that change up and running... that's not very efficient.


I realized that, for me to be more efficient, I've got to be able to iterate more quickly... So my first attempt at rapid prototyping for Android was to avoid deploying to Android entirely.  To do this, I wrote an abstraction layer around Android's API specifics (mostly the handling of graphics/bitmaps) so I could deploy my code on my desktop to a JFrame (which is instantaneous).  When I got everything working 100% on my desktop, I "ported" the code over to Android (I fleshed out the Android abstractions).  This worked, but there were some problems with this approach:
1) The amount of code in the middleware became kind of a mess (even though it was limited to simple operations).
2) Using interfaces slows things down on Android (and speed is key).
3) I developed a sub-optimal, "more generic" solution to work on both platforms (Java desktop, Android) rather than an optimal one (based on the speed and efficiencies provided by Dalvik).
4) This approach really won't work for something like the camera or accelerometer, so it's fine for "plain vanilla" applications, but not a general approach for Android.
5) I had to write a lot of ugly "factory" code and other boilerplate to allow me to swap the implementations in and out (I could have avoided this by using dependency injection... more on that later).

Segue:
If someone asks me, "What's the difference between a 'developer' and an 'architect'?", my answer is: "The differentiating factor is that an architect can improve efficiencies in the development process (strategic), while developers focus on the tactical aspects of coding."


Even though I was successful in creating the contour generator, the whole process wasn't ideal, and I need a new way to approach the rapid-prototyping-on-Android problem.  Ideally, I would be able to update simple things (like text, variables, settings) in real time without having to rebuild, repackage, and redeploy the app every time.

My next approach uses a real-time client-server model (in development) where the Android application/device is a client and gets all of its settings/assets from a server.  The server can be updated in real time, and the application polls it for changes, which take effect immediately.  At first I want to provide simple property settings (i.e. general assets like strings, etc.), but in the future maybe the entire wiring of the application could be provided remotely... (we'll see how far down the rabbit hole I get).

Wednesday, March 23, 2011

Image Processing (Extension of Contour Plotting)

Wrote a few extra lines of code to test out the image-processing capabilities using the contour plotter.
Not bad for a few hours of work:
I'm not too keen on posting my actual likeness on the internet, but this is obfuscated enough.  The 75x75 image took 30 msec to process and print... anyway, I can see how being able to quickly run through images could have a number of practical purposes.  I'm pretty sure the folks who do image/retina scans use something similar; Microsoft's Kinect body tracking likely uses some fast implementation of this; likewise, OCR solutions need to differentiate between signal and noise in an image.


It's kinda neat having this power at your fingertips.  From an artistic standpoint, overlaying or converting filmed or captured images and running them through some code to spit out contours could yield some interesting stuff.  (Keep in mind, the implementation I'm using is not attempting to smooth out edges, but that would be an easy step to make things smoother.)  Alright, the code is in a "good enough" state; now let's get the Android port working.

Sunday, March 20, 2011

Contour Plot with Perlin Noise (Android)

SUCCESS!... Alright, I finally got it working: a mechanism for generating random "terrain" using Perlin noise, as well as a mechanism for plotting contours... here's a look:



Generated contour map from a 400x400 Perlin noise matrix; took 484 ms


...Generally speaking, I wouldn't be too happy saying anything takes half a second (484 ms) on my desktop, but this is not a normal case... usually you will not have an exact 1-to-1 correspondence between your contour map and the data (image); normally you'll get a much smaller matrix/list of elevations/points (say 80x80) and you'll need a contour map for that.  Here's a look (the generated bitmap is in the top left):


...For this scenario, the generation of the 80x80 matrix took 16 ms.  (I could make this look "better" if, instead of painting 1x1 pixels, I painted 5x5 pixels.)  I generated another image and upped the tile size:

...this took about 31 milliseconds to generate the contour.

You might argue that the image above is rather smooth in comparison, so I fooled around with the Perlin noise "turbulence" and generated the following image (using 4x4 tiles and a 100x100 Perlin-noise-generated matrix to produce a 400x400 contour map image)...
...this took 41 milliseconds.  Acceptable.

So a few things had to come together to make this happen.  First, I had to adapt the W.V. Snyder CONREC routine to work in a "non-Java-applet" mode; second, I had to fiddle with the implementations of Perlin noise to find which one was the best fit.  Anyway, I'm not "there" yet (currently it's running in a Java JFrame and I'm using an in-memory BufferedImage), but I built all the AWT-specific stuff behind an abstraction, so I'm nearly there.  Originally I was going to use some Processing-based "blobification" tools, but those were slow, didn't scale (beyond a 256x256 matrix on my desktop... ewww), and did a lot of "new"s, which kill Android.
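For anyone curious, the core step that CONREC-style routines repeat for every cell edge is a linear interpolation of where a contour level crosses between two elevation samples; a simplified sketch (not Snyder's actual code):

```java
/** Where does a contour level cross the edge between two sampled
 *  elevations? The routine walks every cell edge asking this. */
class ContourEdge {

    /** Fraction (0..1) along the edge from v0 to v1 where `level` crosses,
     *  or -1 if the level does not cross this edge. */
    static double crossing(double v0, double v1, double level) {
        if ((level - v0) * (level - v1) > 0) return -1;   // same side: no crossing
        if (v0 == v1) return -1;                          // flat edge: degenerate
        return (level - v0) / (v1 - v0);
    }
}
```

Connect the crossing points of each cell with drawLine calls and the contour emerges, which is also why the result looks jaggy: it is thousands of tiny straight segments.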

Anyway, it's nice to have a generic, configurable contour-plotting mechanism for Java/Android (I can configure the number of contour lines, etc.).  It can also be used for image manipulation: instead of "feeding" it Perlin noise, feed it an image, and it will plot contours based on HSB (Hue, Saturation, Brightness).  Or feed it actual elevations (I think from the USGS), and it can develop a map like the one above from that data.  The big thing is... it's fast, and it'll run on Android (even though I haven't tested that yet).

So I'm gonna try to figure out a way to make the contours a little less jaggy.  (At the moment, these circular-looking contours are actually the result of a bunch of individual drawLine calls, so I might see some optimizations using Bézier curves and drawPoly; gotta check on that, though.)

FWIW, a 2000x2000 matrix with 4,000,000 unique elevations took 17,735 msec to contour (and I still didn't run out of memory... the blob detector I was using prior choked on images much larger than 256x256).

Well, I need a beer; will work on it tomorrow.  Gotta celebrate the little victories!

Saturday, March 19, 2011

Graphics AWT/Swing/Android interoperability woes

Gotta take the good with the bad, I guess, but I've got a gripe, and maybe writing this post will help me get it out of my system.

Interoperability for Android graphics is a mess... I sometimes wonder whether they change the API just to annoy me.  For instance, in Android, colors are treated as ints, but in AWT, Color is an object/class; likewise, drawing on the canvas in Android is similar to, but different from, AWT.  So what I'm asking for is a simple middleware interface API that allows you to target both, i.e. an interface that abstracts drawing to a canvas (whether an AWT/Swing canvas or an Android canvas).

Here's why: I'm writing this middleware (a contour-plotting application) that should work for BOTH a Java desktop application AND Android, but in order to do so, I have to write all this abstraction around the implementation... (wait a minute, what happened to "write once, run anywhere"?)

This "contour engine" (given a matrix of elevations) generates a contour map and presents contour graphs on the screen.


I'm basically stealing (like all great artists do) code from W.V. Snyder... who wrote it in FORTRAN... in 1978.  Now, I'm not doing crazy OpenGL graphics or anything like that; I'm doing the most basic of the basics: setting colors, drawing lines, drawing the occasional text.  But it's a pain to separate the logic out to write to separate canvas implementations (one for AWT, one for Android).
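The kind of middleware interface I have in mind might look like this; only the AWT side is shown (an Android implementation would wrap android.graphics.Canvas), and all the names are my own:

```java
import java.awt.Color;
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;

/** The drawing surface the contour engine would target; each platform
 *  supplies an implementation. Colors are ints (ARGB), Android-style. */
interface IPlotCanvas {
    void setColor(int argb);
    void drawLine(int x1, int y1, int x2, int y2);
}

/** AWT/Swing implementation backed by a BufferedImage. */
class AwtPlotCanvas implements IPlotCanvas {

    private final Graphics2D g;

    AwtPlotCanvas(BufferedImage image) {
        this.g = image.createGraphics();
    }

    @Override
    public void setColor(int argb) {
        g.setColor(new Color(argb, true));   // translate the int into an AWT Color
    }

    @Override
    public void drawLine(int x1, int y1, int x2, int y2) {
        g.drawLine(x1, y1, x2, y2);
    }
}
```

The contour code then only ever sees IPlotCanvas, so the same plotting logic can be pointed at a JFrame on the desktop or a SurfaceView on the device.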

And this is just the beginning; I also ran into major differences trying to do many other "simple" operations... Maybe I'll write one, but you'd think Google could go ahead and whip one up just to make me happy.