Mini Compiler and Stack Machine VM

This past weekend, I achieved a long-standing goal… I created a working compiler and virtual machine for a made-up mini language.  The target language is intentionally tiny and minimal.

Here are some buzzwords: it’s an entirely handwritten recursive descent compiler. For expressions I use a slightly strange shunting-yard implementation that I dreamed up a while back.  The target ISA is stack based, similar to Java, Pascal, or early Lua.  There is only one data type… the 32-bit integer.

At this point I’ll just throw up a link to the repo:
Be warned, however, that there is a pungent reek of code smell emanating from there (it was a weekend project after all), so gas masks are advisable.

Some important constraints of the language: only integers are supported, there are no globals, arrays, or anything of the sort, and semantic checking is completely ignored.

It also helps to see a code fragment of the target language.  Here is the factorial example I used to prove to myself that recursion worked fine:

function factoral(val)
    if (val <= 1)
        return 1
        return val * factoral(val - 1)

function main()
    return factoral(4)

The above code currently translates into the following instructions:

-- factoral
0000 INS_GETV   -1
0005 INS_CONST  1
0010 INS_LEQ   
0011 INS_NOT   
0012 INS_CJMP   37
0017 INS_CONST  1
0022 INS_RET    1
0027 INS_CONST  1
0032 INS_CJMP   64
0037 INS_GETV   -1
0042 INS_GETV   -1
0047 INS_CONST  1
0052 INS_SUB   
0053 INS_CALL   0
0058 INS_MUL   
0059 INS_RET    1
0064 INS_CONST  0
0069 INS_RET    1

-- main
0074 INS_CONST  4
0079 INS_CALL   0
0084 INS_RET    0
0089 INS_CONST  0
0094 INS_RET    0

The syntax of my mini language is somewhat similar to Lua, whose code I always found nice to look at.  I first learned real programming using BlitzBasic, so I have a soft spot for BASIC syntax.  Also, the author of BlitzBasic, Mark Sibly, is one of my heroes!

One of the more interesting aspects of this little project was the design and implementation of the stack machine the VM emulates. I have read a fair bit about stack machines in the past, but somehow I never quite grokked their operation.  My main sticking point was the ABI: how are parameters passed and used, how are locals stored and used, how is a frame constructed?

I had visions that I would need loads of stacks for different things: locals, return points, arguments, and so on.  This turned out to be way off, and I was able to make a stack machine that uses just two stacks; even this could certainly be reduced to one with a bit of rework.  I think there are advantages to maintaining multiple stacks, however.

Things clicked for me when I realized I could use one stack both for computing expressions and for storing locals and arguments.  The second stack stores procedure frames, each capturing a ‘frame pointer’ and a ‘return location’. The top procedure frame on the stack always refers to the function currently being executed.

In my system the frame pointer is fairly conventional: procedure arguments sit immediately below it on the stack, and local variables sit at and immediately above it.  This makes code generation simple, as accessing locals and arguments becomes independent of the top of the stack, and of each other.

During the course of execution the stack may look as follows:

08 scratch <-- top of stack is scratch for expression
07 var0    <-- FP (top)
06 arg1
05 arg0
04 scratch <-- may have scratch space if call part of incomplete expression
03 var1
02 var0    <-- FP (older)
01 arg1
00 arg0

Scratch space is the term I’m assigning to the fragments of a not-yet-fully-evaluated expression.  For something like ‘a+b’, both a and b will live on the stack before they are combined by the addition.  If a function is called as part of an expression, these fragments will still hang around until it returns and evaluation resumes.

My stack machine has two instructions, GETV and SETV, that provide everything I need to work with arguments and locals.  Let’s look at the operation of GETV:

    GETV <int32_t operand>:
    1. index = frame.fp + operand
    2. push(stack[index])

GETV will index the stack at FP+&lt;operand&gt; and push that element onto the top of the stack.  This has the effect that operands &gt;= 0 refer to locals and operands &lt; 0 refer to arguments.

    SETV <int32_t operand>:
    1. index = frame.fp + operand
    2. stack[index] = pop()

SETV is much the same; however, it pops the top item off the stack and overwrites the element on the stack that it indexes.

With these two instructions, my code can easily read and write arguments and locals.  Perfect!
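To make the pseudocode concrete, here is a minimal C++ sketch of the two instructions; the `machine_t` shape and the function names are my own illustration, not code from the repo:

```cpp
#include <cstdint>
#include <vector>

// Value stack plus the frame pointer of the running function.
struct machine_t {
    std::vector<int32_t> stack;
    int32_t fp = 0;
};

// GETV <operand>: push stack[fp + operand] onto the top of the stack.
// operand >= 0 reads a local, operand < 0 reads an argument.
void ins_getv( machine_t & m, int32_t operand ) {
    m.stack.push_back( m.stack[ m.fp + operand ] );
}

// SETV <operand>: pop the top of the stack into stack[fp + operand].
void ins_setv( machine_t & m, int32_t operand ) {
    m.stack[ m.fp + operand ] = m.stack.back();
    m.stack.pop_back();
}
```

With the stack laid out as in the diagram above (arguments below the FP, locals at and above it), GETV -1 reads the last argument and SETV 0 writes the first local.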

Unlike a regular register ISA, local variables are not allocated on the stack by moving a stack pointer; all that is needed to allocate a local is to push an item onto the stack and not pop it.  This bumps the top of the stack, reserving space which can now be indexed relative to the FP.  In my mini language a variable declaration is preceded by the ‘var’ keyword, which makes it really easy to know that we don’t need to pop anything after the statement has finished.  The value will happily sit there on the stack and can now be assigned to and read from.
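For example (my own guess at the emitted sequence, using the opcode names from the listing above, not the compiler’s verified output), a declaration like ‘var x = 5’ only needs to leave its initializer on the stack:

```
INS_CONST  5    -- push 5; never popped, so it becomes the local at FP+0
INS_GETV   0    -- a later read of x
INS_SETV   0    -- a later write to x
```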

One more trouble point is returning from a function… how do we know how to clean up the stack?

I took the easy route and made my RET instruction fairly complex.  RET takes one operand: the number of items to discard from the stack.  In this manner RET can discard all local variables and arguments, similar to a callee-cleanup calling convention.  The parser knows how many arguments and locals have been declared, so it becomes trivial to emit this instruction.

There was one kink in this plan, however… if we want to return a value from a function, how do we do that?  Since the return value will be sitting on the top of the stack at the point we return, we can’t simply discard all of the top elements.

My RET instruction operates as follows:

RET &lt;int32_t operand&gt;:
    1. save the top item on the stack (return value)
    2. discard &lt;operand&gt; items from the stack
    3. push the previously saved top item
    4. pc = frame.return_location
    5. discard the top procedure frame

That is a fairly meaty RET instruction, but I didn’t spend too long thinking about it and it seemed justified.
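As a hedged C++ sketch of those five steps (the frame and machine shapes repeat my earlier assumptions and are not the repo’s code):

```cpp
#include <cstdint>
#include <vector>

// One record per active call; field names are assumed from the description.
struct frame_t {
    int32_t fp;              // frame pointer into the value stack
    int32_t return_location; // pc to resume at in the caller
};

struct machine_t {
    std::vector<int32_t> stack;
    std::vector<frame_t> frames;
    int32_t pc = 0;
};

// RET <operand>: discard <operand> items but keep the return value on top.
void ins_ret( machine_t & m, int32_t operand ) {
    int32_t result = m.stack.back();            // 1. save the return value
    m.stack.pop_back();
    m.stack.resize( m.stack.size() - operand ); // 2. discard <operand> items
    m.stack.push_back( result );                // 3. push the saved value back
    m.pc = m.frames.back().return_location;     // 4. resume at the caller
    m.frames.pop_back();                        // 5. discard the top frame
}
```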

To simplify things in my compiler, I enforce that every function must return a value, which defaults to 0 if no return statement is present.  This is crude, and implemented by emitting a { CONST 0, RET } instruction pair at the end of every function.

I believe this is the same trick that Lua uses.  It may seem wasteful, but it keeps things simple, stupid.

So to conclude, how good is my mini language?  Well…

…it’s crap…

…but it works…

…and I learned a ton!
I would chalk this up as a complete success.

The code is highly smelly, but hey, this was a weekend project!  It’s up on GitHub.

Kentucky Route Zero – Behind the scenes

I had a bit of fun here making several renders using the assets from the beautiful Kentucky Route Zero by Cardboard Computer, whose aesthetic captivates me immensely. (click for hi-res)

I used 3DRipper to grab a frame from the game in .obj format, and then went at it with Cinema4D.  The game plays out in pseudo-2D, which presented a challenge since each scene seems to be composed of many flat 2D layers, as shown below.  This could be me misunderstanding 3DRipper and its output being dumped post-projection, but I imagine such a tool should correct for that and transform back into modelview space before dumping.

I’m still not sure how they achieve their flat-shaded 2D look.  I imagine they could disconnect a model based on colour groups and set their vertex colours, or use UVs to index a colour texture atlas.

The modeling of Conway (the main character) is fantastic.  I have so much to learn.


Grep’n for macros

A colleague of mine spent a fair bit of time using grep to track down some functions in a very large body of source code.  His greps were turning up blank, and after some time he discovered the functions were generated by macros at compile time.

It’s fun to think that this could have been solved a little quicker if he had grep’d the output of the preprocessor:

gcc -E source_file.c | grep -B3 -A3 "thing to search for"


Pico-8

Pico-8 is a fantasy console from the creator of Voxatron.  It’s a virtual console, similar in many respects to the Game Boy Color, since it features tile/sprite based graphics, a 128×128 screen, and a 4-channel synth.

One part of the Pico-8 that is particularly cool is that games written for it can be distributed as regular PNG images.  Here are two examples:


It’s not immediately obvious how the programs are stored inside the images themselves; however, it seems this is a fine example of steganography:

Steganography (US /ˌstɛ.ɡʌnˈɔː.ɡrʌ.fi/, UK /ˌstɛɡ.ənˈɒɡ.rə.fi/) is the practice of concealing a file, message, image, or video within another file, message, image, or video. The word steganography combines the Ancient Greek words steganos (στεγανός), meaning “covered, concealed, or protected”, and graphein (γράφειν) meaning “writing”.

A simple scheme for hiding data in pictures is to hijack the least significant bits of the color channels and use them to transport the data, which has the lowest visually perceivable impact.  I had a hunch that was what was happening here.

The quickest way I could think of to validate my hypothesis was to take two cartridges (the two above) and blend them using difference mode in Artweaver.  I hoped that would cut out some of the cartridge surround and make the data somewhat more visible.  I also upped the contrast for good measure, so those least significant bits move towards the more significant bits and become brighter.

Here is the result:


Structured noise is readily apparent, and is most certainly the hidden program data.  Taking a quick look at the range of colors present in the noise, I would guess that two bits of each color channel are being hijacked; for RGBA that would allow one full byte to be hidden per pixel.  At an image size of 160×205, that would allow ~32k of data to be stored.  That middle band of data also seems very random, leading me to guess it may be compressed.
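To make the guessed scheme concrete, here is a small C++ sketch of hiding and recovering one byte per RGBA pixel; the two-bits-per-channel layout and the channel ordering are my assumptions, not the documented Pico-8 cartridge format:

```cpp
#include <cstdint>

// Hide one byte in an RGBA pixel by overwriting the two least-significant
// bits of each channel (R carries the low bits, A the high bits).
void embed_byte( uint8_t & r, uint8_t & g, uint8_t & b, uint8_t & a, uint8_t v ) {
    r = ( r & 0xFC ) | (   v        & 3 );
    g = ( g & 0xFC ) | ( ( v >> 2 ) & 3 );
    b = ( b & 0xFC ) | ( ( v >> 4 ) & 3 );
    a = ( a & 0xFC ) | ( ( v >> 6 ) & 3 );
}

// Recover the hidden byte by concatenating the low two bits of each channel.
uint8_t extract_byte( uint8_t r, uint8_t g, uint8_t b, uint8_t a ) {
    return uint8_t( (   r        & 3 ) |
                    ( ( g & 3 ) << 2 ) |
                    ( ( b & 3 ) << 4 ) |
                    ( ( a & 3 ) << 6 ) );
}
```

Since only the low two bits of each channel change, the visible colour shifts by at most 3/255 per channel, which is why the payload is invisible until you difference-blend and boost the contrast.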

I might one day whip up a program to try and extract and decode the actual data from these cartridges.

I love the idea of a fantasy virtual console, and I love the idea of distributing picture “cartridges” with their programs embedded.  Hats off to Lexaloffle, I think this is just fantastic stuff.

Simple safe printf snippet

A little prototype for an idea I had: using some C++11 features to make printf a bit safer.

#include <initializer_list>
#include <stdio.h>

struct val_t {

    enum type_t {
        e_int,
        e_float,
        e_cstr,
    } type_;

    union {
        int          int_;
        float        float_;
        const char * cstr_;
    };

    val_t( int v )          : type_( e_int   ), int_  ( v ) {}
    val_t( float v )        : type_( e_float ), float_( v ) {}
    val_t( const char * v ) : type_( e_cstr  ), cstr_ ( v ) {}
};

bool print( const char * fmt, std::initializer_list<val_t> vargs ) {

    // iterate over the format string
    for ( ; *fmt; fmt++ ) {
        // check for format specifier
        if ( *fmt == '%' ) {
            // argument index
            int index = fmt[1] - '0';
            if (index < 0 || index >= int(vargs.size()))
                return false;
            // access argument
            const val_t & v = vargs.begin()[index];
            switch (v.type_) {
            case (val_t::e_int  ): printf("%d", v.int_   ); break;
            case (val_t::e_float): printf("%f", v.float_ ); break;
            case (val_t::e_cstr ): printf("%s", v.cstr_  ); break;
            default:
                return false;
            }
            // skip argument index
            fmt++;
        }
        else {
            // output character
            putchar( *fmt );
        }
    }
    return true;
}

int main( void ) {
    print( "%0 and %1, %0, %2", { 3, 0.1f, "hello world" } );
    return 0;
}
The output is:

3 and 0.100000, 3, hello world

OpenGL Rasterizer – Part 1 – The Setup

I love software rendering, and I also love pushing myself to learn new things.  True to myself, I hatched a crazy idea a number of weeks ago: could I find a game that uses a small subset of OpenGL, and implement that subset myself using software rasterization for all of the drawing operations?  I had read a lot about OpenGL but lacked practical experience with it, and theory and practice rarely overlap completely.  This project would force me to learn OpenGL in every detail and from an unusual perspective.

The game I picked was Quake2, since it was one of the first OpenGL-enabled games, is very well documented, open source, and still actively developed.  Quake2 uses a small subset of OpenGL 1.1 which is entirely immediate mode and about as minimal as I could have found.
I chose the yquake2 fork, a very clean, highly portable, 64-bit compatible version of the Quake2 engine.

I checked out the repo, built the source with MinGW using their makefiles, and soon had my own 64-bit executable to become my victim.

The first thing I needed was an idea of just how much GL I would have to implement.  I fired up Dependency Walker to have a look at which functions Quake2 imports from OpenGL32.dll: just 59 functions, from an API which contains hundreds.  That was a good start, and I was sure that not all of them would be needed before I could have something up and running on my screen.

My first task was to create an OpenGL32.dll that satisfies yquake2’s import requirements and place it in the same directory, causing yquake2 to load my OpenGL instead of the system one.

Functions can be exported from a DLL easily as follows:

extern "C" {
__declspec( dllexport ) void __stdcall glEnable( GLenum );

There are a few things going on here which I will explain.  C++ uses a mechanism known as name mangling, where the compiler modifies a function’s symbolic name to encode details of its arguments and nesting, to avoid name collisions with other functions.  I am sure it’s much more complicated than this, but I’m not overly familiar with the details of mangling.  The extern “C” directive gives the function C linkage, so its name follows the C conventions, which are relatively unmangled; I will come back to this in just a moment.

The next interesting part is __declspec( dllexport ), which instructs the compiler to add this function to the DLL’s export table.  As Windows loads a program it tries to resolve all functions in the executable’s import table, which involves walking the export table of the required DLL and looking for a matching function.  If a match is found then that import is said to be resolved.

On Windows, the OpenGL API uses the __stdcall calling convention, just like the rest of the Windows API. A calling convention is an agreement between two functions, the caller and the callee, about how they will share registers and stack space during a function call. __cdecl is the default calling convention used by Visual Studio, so I have to explicitly state that I want to use a different one.

After I had exported all 59 OpenGL functions in this way, I fired up Dependency Walker so that I could inspect my DLL’s export table.  Unfortunately all of my functions were still being decorated in the C __stdcall convention, which prepends an ‘_’ to the function name and appends ‘@’ followed by the total number of bytes of its arguments (glEnable, for example, becomes _glEnable@4).  This decoration can be circumvented using a .def file, which tells the linker how to construct the export table of a DLL.  For my purposes the .def file need only contain a list of function names, and they will be exported without any decoration applied.

The .def file is no more complicated than this:
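The original listing did not survive in this copy, but as an illustration (showing just a few of the 59 names; the real file would list them all) it would look something like:

```
EXPORTS
glAlphaFunc
glBegin
glEnable
glEnd
```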


The workflow I would need is something like the following:

Write code
Compile dll
Copy to yquake2 folder
Execute yquake2
Attach and debug

Visual Studio allows you to specify a debug target when you execute a project, which is handy because I can’t execute my DLL directly.  I set the debug target to yquake2, so that when I launch the project it starts Quake2, which in turn loads my DLL, at which point Visual Studio resolves the debug symbols for it, allowing me to debug the code in my DLL normally.

One tricky point in the workflow is the copy step, where I needed to take my freshly compiled DLL and transfer it to the yquake2 folder.  This would be a real bottleneck in my development and really annoying.  Some friends at work suggested the perfect solution… symlinks.

Windows has the command ‘mklink’, which creates a file in the current directory that actually points at a file residing elsewhere on disk.  I could use a symlink to effectively place an OpenGL32.dll in the same directory as yquake2, but have it point to the opengl32.dll in my compilation directory.

The command for this is:

cd d:/my_yquake2_dir/bin
mklink opengl32.dll d:/my_project_dir/bin/opengl32.dll

Thus my workflow would be reduced to the following fairly easy steps:

Write code
Compile dll
Debug project

At this stage I had a stub OpenGL32.dll which satisfied the Windows loader, and a neat workflow for my development.  Now the real development could begin.  The video below shows some early progress of my OpenGL implementation.

My current implementation moves far beyond this, and I will try to document the steps I went through, adding features such as:

Threading and screen binning.
Perspective correct texture mapping.
OpenGL blend modes.
SSE vectorization.
Avoiding combinatorial explosion.

Here are some great links that were invaluable during my development:


In the next article I will explain why OpenGL is not enough by itself, and why I needed to dip into the Windows API.  Stay tuned.