GCC programming - doscalls / ioscalls / writing to video memory

Started by megatron-uk, July 04, 2020, 10:04:53 PM

Previous topic - Next topic


I'm playing around with the Lydux GCC toolchain (patched with the struct alignment fix) and I've hit a function that doesn't appear to work as documented.

I'm using _dos_curdrv() and _dos_curdir(int drive, char *buffer) to save information about where I currently am:

        /* save curdrive */
        old_drive = _dos_curdrv();
        if (old_drive < 0){
                printf("Unable to save current drive [status:%d]\n", old_drive);
                return 0;
        } else {
                printf("Saved current drive [%d]\n", old_drive);
                /* save curdir */
                status = _dos_curdir(old_drive, old_dir_buffer);
                if (status < 0){
                        printf("Unable to save current directory [status:%d]\n", status);
                        return 0;
                } else {
                        printf("Saved current directory [%s]\n", old_dir_buffer);
                }
        }

After this, old_drive has the drive number I'm currently on (3 == D:, in the case of my emulated xm6-g setup). However I never get anything copied to my directory buffer:

char old_dir_buffer[DIR_BUFFER_SIZE];

Where DIR_BUFFER_SIZE is 65, from the PUNI docs. I'm working in a sub-directory of D: (let's say D:\Games), but the subdir name is never recorded. From the documentation, I should get "Games" stored in the buffer (i.e. minus the drive prefix and leading/trailing slashes), but it's always null after the call.

The prototype for the function looks like:

extern int      _dos_curdir (int, char *);

... and looking at the definition in newlib it appears as:

| int _dos_curdir (int, char *);
.global _dos_curdir
.type _dos_curdir,@function
_dos_curdir:
        move.l  %sp@(8), %sp@-
        move.w  %sp@(10), %sp@-
        .short  0xff47
        addq.l  #6, %sp
        rts

It's some 20+ years since I did any m68k assembly (and that was only a couple of weeks on a single module for my undergraduate degree!), and I'm afraid I've lost all of that knowledge, so I don't know if the prototype matches what the assembly implementation is saying.


There's definitely something weird going on with the call to _dos_curdir().

I'm running the code from drive D: (drive number 3, as reported by _dos_curdrv()).

If I change the call from _dos_curdir(3, buffer) to _dos_curdir(0, buffer), I get the name of the current directory on the current drive (3, in this case) in the buffer.

The documentation states that the first parameter indicates the drive whose current directory should be interrogated (sensible, since Human68k, like DOS, keeps a current directory on every active drive). However, I've found that it doesn't actually report anything unless you pass drive number 0, and then it reports the current directory of the current drive (regardless of you specifying drive 0).


Would anyone be so kind as to try the same call from either GCC 1.x or 2.x, in case it's the particular implementation of libdos using those assembly shims in Lydux's version of newlib?


This is what I observe running the code from the following locations:


Both D:\ and A:\ should result in no 'current dir' being found, since we're at the root of the drive. But... D:\Games should result in a current directory of 'Games', the same as we see whilst on drive A:\ in the 'Games' subdir.

Oh, hold on... drive number 0 as supplied to _dos_curdrv is not the same as drive number 0 as supplied to _dos_curdir .... ARGH!!!

0 = 'current' drive when supplied to curdir, drive A: when supplied to curdrv
1 = drive A to curdir, drive B to curdrv
25 = drive Y to curdir, drive Z to curdrv
26 = drive Z to curdir, NA to curdrv

So, if you save the current drive selection using curdrv, add 1 to get the 'real' drive number that you have to pass to curdir or similar (I haven't been through the rest of the drv/dir functions yet to see which camp they fall into: 0- or 1-indexed).

Programmers love their 'off by one' errors, don't they?


Okay, got two fixes for functions defined in newlib for the Lydux toolchain, if anyone wants them:

| int _dos_files (struct _dos_filbuf *, const char *, int);
.global _dos_files
.type _dos_files,@function
_dos_files:
        move.w  %sp@(14), %sp@-
        move.l  %sp@(10), %sp@-
        move.l  %sp@(10), %sp@-
        .short  0xff4e
        lea     %sp@(10), %sp
        rts

| int _dos_nfiles (struct _dos_filbuf *);
.global _dos_nfiles
.type _dos_nfiles,@function
_dos_nfiles:
        move.l  %sp@(4), %sp@-
        .short  0xff4f
        addq.l  #4, %sp
        rts

Both functions (files.S and nfiles.S, in the newlib source directory newlib-1.19.0-human68k/newlib/libc/sys/human68k/libdos) are named incorrectly.

In their original implementations they both share the incorrect .global name and label _dos_filbuf, which is actually the name of one of their parameters. I've updated their .global attribute as well as their label to match the C prototype naming convention (and the Puni docs).

At the moment I'm just assembling the source files separately and linking them in to override those defined in libdos.a (which, due to the wrong names being applied, don't actually exist, so you get an undefined reference error at link time), but at some point we'll need to merge these fixes back into a central repository.

The new functions have been tested and work correctly (at least as far as my use-case goes: finding all subdirectories on a given drive and path), matching against the filespec and attributes passed to _dos_files, then looping through the results via _dos_nfiles.

Concrete example below:

/* list files with attribute 0x10 == directory
   and wildcard name. */
struct _dos_filbuf buffer;
int status;
int go;

go = 1;
status = _dos_files(&buffer, "*.*", 0x10);
if (status >= 0){
        printf("%s.%d\t Search for files returned [status:%d]\n", __FILE__, __LINE__, status);
        while (go == 1){
                status = _dos_nfiles(&buffer);
                if (status == 0){
                        printf("name: %s\n", buffer.name);
                } else {
                        go = 0;
                }
        }
} else {
        printf("%s.%d\t Search for files returned no entries [status:%d]\n", __FILE__, __LINE__, status);
}


FYI, it also looks like the same issue affects _dos_exfiles() and _dos_exnfiles(), i.e. the function is labelled with the name of one of its parameters rather than the actual function name.


Back when I was messing around with the Lydux toolchain I also found a few bugs in the DOS/BIOS call functions in C. I just circumvented them by looking up the Japanese documentation and making the call myself, reading the registers directly, or some such.

Lydux put this thing together quickly and it was never really tested extensively so it has bugs here and there. A shame, really, but I think I can count on one hand how many people in the world have ever actually done anything substantial with the toolchain.


That's a shame, as it's far closer to what most people would consider a modern development toolchain: recent C syntax support, built as a cross-compiler, not relying on an ancient assembler, etc.

Yes, there are issues, such as not being able to link objects from period compilers, but that's something you'd expect. I've done bits and pieces of C development for things like the Atari ST and MS-DOS, and being able to link in something produced by someone else's compiler with an up-to-date GCC isn't really something I'd anticipate working.

So far, with the obvious gotchas (above) out of the way, it seems to be working quite nicely.


What we really need is a set of simple guides for using the graphics hardware:

  • Getting bitmaps onto the gvram screens
  • Displaying fonts/text on the text screen
  • Loading a sprite from disk and moving it around the sprite screen

I know there are various bits and pieces around, but a lot of it is still in Japanese, makes no mention of the required data formats, or is purely focused on assembly.


Back in the day there were some good English language web sources with that sort of information around. Most have unfortunately vanished over time.

The best resources are Google Translate, the Japanese documentation, and an emulator with debug features that lets you view all graphics planes individually, preview memory contents and manipulate memory locations.

You'll get far using those resources. But when it comes to more advanced stuff like DMA transfers and manipulation of the rasterization hardware, it gets a lot more complicated.

I had just gotten to that point with my projects before I stopped working on them.


Okay, so I dived in and started poking values at pixels in GVRAM (mode 8, 512x512, 16bit, single graphics page).

int gfx_init(int verbose){
        return 0;
}

void gfx_checkerboard(){
        int x;
        int y;
        int bit;
        uint16_t super;
        bit = 1;
        super = _dos_super(0);
        gvram = (volatile uint16_t *) GVRAM_START;
        for(y = 0; y < GFX_ROWS; y++){
                for(x = 0; x < GFX_COLS; x++){
                        if (bit){
                                *gvram = 0x0000;
                        } else {
                                *gvram = 0xFFFF;
                        }
                        gvram++;
                        // Flip b/w to w/b for next column
                        bit = 1 - bit;
                }
                // Flip b/w to w/b for next row
                bit = 1 - bit;
        }
}
And that gives me what I was expecting:

However, moving on to colour, what format are the RGB values stored in?


So looking at the Doom source it's GRBA: 5 bits each of G, R and B, and 1 for alpha. Is that right?


I never played with that layer, but it sounds logical enough. Should be easy enough to test: just fill the screen with a gradient for each colour.


The last bit is a 'feature' bit. With nothing special enabled it shifts the colours up a bit so they're a little brighter. There are also bits in video controller register 2 that let you do either semi-transparency (which is a fixed transparency %) or special priority mode, which puts pixels from the topmost GV or TVRAM layer on top of all other layers. You should look at the 'puni' docs and/or Inside X68000 on the Internet Archive. I put some info here ages ago: vidcon registers

the 'puni' docs are a summary, roughly, of most of the hardware features. Inside X68000 is basically the definitive documentation.
'puni' docs
Inside X68000


Thanks neko, so we actually have 15 bits of colourspace to work with (only 32768 colours) + brightness. That's good to know.

I was struggling yesterday to come up with a way of getting all possible colours on the screen... what I tried was looping through all pixels whilst maintaining a counter of 0-65535. Each pixel would get set to a value (via the X68_RGB macro) built from values 0-31 (the upper 5 bits of each of the three bytes the counter was split into). That *should* have given me every combination several times over, but the output rainbow effect only seemed to have a maximum of around 700 colours.

I must have something wrong with the algorithm as it looks obviously wrong.


16-bit mode is swizzled. I can't remember the actual layout off the top of my head. Generally speaking I would probably suggest not using it for anything other than static images since you have to access two words per pixel. Palette modes use the normal GRBi format without anything funky and the low 8/4 bits of each 16-bits in GVRAM are used to choose the index. Also, 0x0000 is punch-through, so totally transparent, when written to the first index of each 16-color palette. I found some weird stuff you can do with the VIDCON registers to use other colors as punch-through but it was years ago and I don't remember.


This is my attempt at putting a pixel of every possible colour on screen:

void gfx_rainbow(){
        uint16_t super;
        uint8_t r,g,b;
        unsigned r_mask, g_mask, b_mask;
        uint16_t ii = 0;
        super = _dos_super(0);
        r_mask = ((1 << 5) - 1) << 11;
        g_mask = ((1 << 5) - 1) << 6;
        b_mask = ((1 << 5) - 1) << 1;
        for(gvram = GVRAM_START; gvram < GVRAM_END; gvram++){
                ii++;
                if (ii >= 65535){
                        ii = 0;
                }
                // Shift each byte left 3, we only combine the 5 msb for the 15bit+alpha value
                r = (uint8_t) (((ii & r_mask) >> 11) << 3);
                g = (uint8_t) (((ii & g_mask) >> 6) << 3);
                b = (uint8_t) (((ii & b_mask) >> 1) << 3);
                *gvram = rgb2grb(r,g,b,1);
        }
}

... and I'm using the X68_RGB macro from the Doom source:

#define rgb2grb(r, g, b, i) ( (((b)&0xF8)>>2) | (((g)&0xF8)<<8) | (((r)&0xF8)<<3) | (i) )

I'm having a hard time figuring out where I'm losing the information that should give me 32*32*32 colours.


Quote from: neko68k on July 07, 2020, 06:05:28 PM
16-bit mode is swizzled. I can't remember the actual layout off the top of my head. Generally speaking I would probably suggest not using it for anything other than static images since you have to access two words per pixel. Palette modes use the normal GRBi format without anything funky and the low 8/4 bits of each 16-bits in GVRAM are used to choose the index. Also, 0x0000 is punch-through, so totally transparent, when written to the first index of each 16-color palette. I found some weird stuff you can do with the VIDCON registers to use other colors as punch-through but it was years ago and I don't remember.

Thanks for the info. I'm in the process of trying to write a decent game browser/launcher, so the intention was to use GVRAM to hold the background frame and to load game covers into (these would be static, other than when selecting a new game, when a new cover would be loaded). The game names would be loaded into TVRAM (still to dive into that!) and selector images, buttons, etc. would be in sprite RAM.


it looks like you put the low byte of the first two pixels in the first word and the high byte of the first two pixels in the second word. so like...

$e82000 1.b -- PL = $00
$e82001 1.b -- PL = $01
$e82002 1.b PH = $00 --
$e82003 1.b PH = $01 --

It's described, briefly, near the top of _iomap.man in the 'puni' docs. So ya know, those are just Shift-JIS text files, not man pages or Word docs as the extensions suggest.


Yes, I've got those docs, as well as the Inside/Outside manual, and the Programmers Reference, and the GCC for Games... and ... and ... :D

I generally just use wxmedit to view S-JIS stuff, but google translate is only partially effective in getting any data out of them! I've found 'better' descriptions of the various system calls in the 'green' Programmers Reference manual, but it's still often just a case of trial and error to work out what the description is trying to convey!

So if I understand what you're saying about 16bit (or 15bit+bright ;) ) mode, I'm currently writing a single 16bit value to a given GVRAM location, where I should instead be writing that value's low and high bytes to locations x and x+2, and the next value's to x+1 and x+3. So I need two pointers really: gvram_low and gvram_high. Would that layout imply that the physical video memory is two interleaved banks, to improve performance?


I would assume that, yeah, it's interleaved banks. I'm not sure offhand if you can do 8-bit writes to GVRAM; you'll have to try it. Otherwise you need to pack the two bytes into a word before you write them, which sucks. I guess ideally you'd pre-swizzle your images on disk and just load them straight in. Inconvenient, but faster at run time.


Are we sure the multiplexed, odd/even, 16bit pixel mode refers to the operation of writing to GVRAM?

In the Inside X68000 book, on pages 169 and 170, the structure diagram of GVRAM appears to show a single pixel mapped to a single 16bit word, e.g. at 0xC00000-0xC00001, with the next pixel in its entirety at 0xC00002-0xC00003.

This is with a single graphics 'page' in 16bit colour mode - and the rest of the GVRAM region 'blanked off', unlike 256 and 16 colour modes, of course.

The only place I see the multiplexed 16bit rgb mode described is on page 217/218, which appears to describe the structure of the colour palette region (0xE82000 - 0xE821FF). I don't think I need to actually do anything with that myself, do I?

I could be wrong, however, as the book doesn't lend itself well to OCR translation and I'm using my phone camera to try and pick out the key words.

This all said, I'm struggling to work out how to write a pixel in a single desired colour; all my efforts end in the wrong colour.

Attempt #1
uint8_t r,g,b;
uint16_t i;
volatile uint16_t *gvram = (volatile uint16_t *) GVRAM_START;
r = 0x00;
g = 0x00;
b = 0xFF;
i = rgb2grb(r,g,b,1); // Flatten to 15bit + intensity in GRBI format
*gvram = i;

So for three attempts at displaying solid colours of red, green or blue, I see the following:

r=0x00, g=0x00, b=0xFF

r=0x00, g=0xFF, b=0x00

r=0xFF, g=0x00, b=0x00

If I swap to:

i = rgb2rgb(r,g,b,1); // Flatten to 15bit + intensity in RGBI format

r=0x00, g=0x00, b=0xFF

r=0x00, g=0xFF, b=0x00

r=0xFF, g=0x00, b=0x00


Where rgb2grb and rgb2rgb are defined as:

/* Merge 3 8bit values to 15bit + intensity in GRB format */
/* Keep only the 5 msb of each value */
#define rgb2grb(r, g, b, i) ( (((b)&0xF8)>>2) | (((g)&0xF8)<<8) | (((r)&0xF8)<<3) | (i) )

/* Merge 3 8bit values to 15bit + intensity in RGB format */
/* Keep only the 5 msb of each value */
#define rgb2rgb(r, g, b, i) ( (((r)&0xF8)<<8) | (((g)&0xF8)<<3) | (((b)&0xF8)>>2) | (i) )


Okay, I'm just going to go back and hide in my corner.

Turns out I have been trying to do 16bit colour in mode 8, rather than 12....


Here's a 64k image to look at instead:


Source code:

volatile uint16_t *gvram; // Pointer to a GVRAM location
#define CRT_MODE 12 // 512x512 65535 colour
#define GFX_ROWS 512
#define GFX_COLS 512
#define GVRAM_START 0xC00000 // Start of graphics vram
#define GVRAM_END 0xC7FFFF // End of graphics vram
#define rgb2grb(r, g, b, i) ( (((b)&0xF8)>>2) | (((g)&0xF8)<<8) | (((r)&0xF8)<<3) | (i) )

int gfx_init(int verbose){
        return 0;
}

void gfx_rainbow(){
        int x, y;
        uint16_t super;
        uint8_t r,g,b;
        unsigned r_mask, g_mask, b_mask;
        uint16_t i, ii;
        super = _dos_super(0);

        // Construct masks needed to extract 5 bits each of red, green and blue from a 16bit int
        r_mask = ((1 << 5) - 1) << 10;
        g_mask = ((1 << 5) - 1) << 5;
        b_mask = ((1 << 5) - 1) << 0;
        gvram = (volatile uint16_t *) GVRAM_START;
        i = 0;

        for(y = 0; y < GFX_ROWS; y++){
                for(x = 0; x < GFX_COLS; x++){
                        i++; // Counter which generates our colour
                        if (i >= 65535){
                                i = 0;
                        }

                        // Shift each byte left 3, we only combine the 5 msb for the 15bit+alpha value
                        r = (uint8_t) (((i & r_mask) >> 10) << 3);
                        g = (uint8_t) (((i & g_mask) >> 5) << 3);
                        b = (uint8_t) (((i & b_mask) >> 0) << 3);

                        // Convert r,g,b values to x68000 grbi
                        ii = rgb2grb(r,g,b,1);

                        // Write single 16bit word in grbi format
                        *gvram = ii;

                        // Writing in 16bit word mode, so step by +1
                        gvram++;
                }
        }
}
Some things to take away:

  • GVRAM writes are 16bit
  • It doesn't appear that 16bit pixels are multiplexed in GVRAM - is that just BGRAM perhaps? The documentation isn't clear
  • The intensity bit emulates the effect of true 16bit mode, but obviously you don't have a full 16bit palette to choose from; a bit like the old Speccy mode BRIGHT :)
  • Don't try and do 16bit colours in 8bit mode... it doesn't work for some reason ;)


Ah, I really am a big fan of how obscure and obtuse the design of old computer/console graphics hardware is.

The X68000 is the king standing on the hill of esoteric hardware design.

It has so many graphical planes, and each one somehow works differently from the others.

You really can do some wild stuff with parallax scrolling if you use the hardware to its full potential. But hardly anyone did. The second level of Akumajo is the best attempt I can think of.

Imagine what wizard level developers could achieve if the system was more popular, especially in the west.


I will say that writing to GVRAM seems slow (I am of course setting pixel-by-pixel, so that's the absolute worst-case scenario). I don't know if it can be scrolled or manipulated via the video registers like the other graphics memory so it does seem more limited than BGRAM, TVRAM or sprites.

For my purposes I just want to use GVRAM to hold a nice menu frame, background image, etc., so I don't need to manipulate it much once loaded, and speed is not a major factor.


I've started documenting what I'm finding out, along with sample code (e.g. for writing pixels to GVRAM, getting that 16bit colour matrix on screen, the various dos calls, etc):


I'll be happy to transfer them to the wiki once I have time.


Okay, so I can draw single pixels at points, translate x/y co-ords into GVRAM addresses, draw filled and unfilled rectangles, fill the screen with solid colours or gradients. Great.

Now moving on to the next piece of the puzzle: loading image data assets from disk. I'm using a simple BMP function (based on this); it currently includes math.h, which I'd really rather do without if possible, but for the moment I can live with it.

My (heavily modified) code loads the data okay, correctly seeks to the attributes for bpp, height, width and the padding before the data section, then loops over the pixel data in the image section... but on calling fclose() at the end of reading the data the program ends with an address error:


Now, up to this point the fopen, fseek and fread implementations from newlib-1.19, as included with the Lydux toolchain, have all worked as expected. If I remove the fclose() call at the end of my bmp function, it works and passes the data back to the caller without the address error:


Just wondering if anyone else had come across it? Before I delve into the newlib implementation to see if there's anything obviously broken.


Okay, so that's weird, if I move fopen() and the corresponding fclose() from inside the bmp function to outside, and simply pass in the file handle, fclose() works as expected; no address error.

So previously it was (pseudo code):

int main(){
    bitmap_struct bmp;
    loadbmp(&bmp);
}

loadbmp(bitmap_struct *bmp){
    f = fopen("file");
    // do other stuff with bitmap data;
    fclose(f);
}

All I've done is move the fopen and fclose calls outside, and pass in the open file handle:

int main(){
    bitmap_struct bmp;
    f = fopen("file");
    loadbmp(f, &bmp);
    fclose(f);
}

loadbmp(FILE *f, bitmap_struct *bmp){
    // do other stuff;
}



Okay, so it's not quite there yet:

  - Image still in 16bit RGB format; I haven't yet put anything in place to downsample to 15bit GRBi, which is why the colours are wrong. I suspect I'll put the logic for that into the BMP function, so it's always ready in native format by the time it gets back to the caller.

  - I think I'm either loading from the file wrongly, or writing back to GVRAM with the wrong size, or something like that, which is causing the offset pixels

Still... I feel a particular sense of achievement, having come from nothing just over a week ago:


It's the boxart to Alshark btw, if anyone is wondering.


Getting closer now that I've swapped the endianness of the image data (BMP defines little-endian):


Still some things to work out. I'm clearly processing the data byte-by-byte somewhere, whereas it should all be treated as 16bit words... that's got to be the source of the interlaced rows and offset data.

I need to review the data-reading logic where it's pulling the pixels from the file and appending them to my pixel array.

Slow progress is still progress though.


A bit further again; sorted out the 'interlacing' type effects, and it looks like all the colour is there, but not yet remapped to grbi:


I seem to be getting alternating scanlines in the image, and when viewing the contents of memory it's clear that I'm getting one image-width's-worth of empty bytes in between every valid line of data (hence the image being vertically stretched and missing the bottom half of the box art).

It must be to do with the reverse-offset through the pixel buffer that I'm using to read the BMP data in reverse (BMP stores data bottom up, so the top row of pixels is the very last in the file). I suspect I'm jumping too far back into the pixel buffer each time I'm reading a new line from the file.

That's the next bug to track down, I think.

Also, the aspect ratio was driving me insane for a while; I was convinced that I was only getting partial content of the various boxes/rectangles I was drawing... no, it was because they're horizontally stretched in the 512x512 aspect ratio as displayed by XM6 (I still wish there was a decent emulator that runs natively in Linux, rather than having to run it via Wine...).


Got the missing rows and therefore the half-height image issue sorted now. It was an alignment issue between the pixel buffer (uint16_t *) and the (uint8_t *) pointer used to index it.

I've converted both the buffer and the access pointer over to (uint8_t *), which solves it, but it does mean that when I come to access an individual pixel (2 bytes) I need to either cast back or access pointer and pointer+1 to get both bytes, which is a bit of a chore.



Should be just a case of applying the rgb565 to grb555 conversion now; you can see in the second bitmap that red and blue are still flipped.




That's it: a 16bit BMP loaded from disk, byte-swapped from little- to big-endian, colour truncated to 5 bits for each of r/g/b, and then copied to an x,y screen coordinate in GVRAM.

The next thing is to try and work out if I can do the byteswapping or colour conversion any quicker. At the moment I'm doing this after reading all of the pixel data into memory:

for(i = 0; i < n_pixels; i++){
        // Remember, each pixel is actually 2 bytes for our 16bit mode
        pixel = (uint16_t) (((bmp_ptr[0] & 0xFF) << 8) | (bmp_ptr[1] & 0xFF));

        // Swap from the native little-endian BMP data to big-endian
        pixel = swap_int16(pixel);

        r = (((pixel & r_mask565) >> 11) << 3);
        g = (((pixel & g_mask565) >> 5) << 2);
        b = (((pixel & b_mask565) >> 0) << 3);
        pixel = rgb888_2grb(r, g, b, 1);

        bmp_ptr[0] = ((pixel & 0xFF00) >> 8);
        bmp_ptr[1] = ((pixel & 0x00FF));
        bmp_ptr += bmpdata->bytespp;
}

Where swap_int16(), rgb888_2grb() and the bitmasks are defined as the following macros:

#define r_mask565 (((1 << 5) - 1) << 11)
#define g_mask565 (((1 << 6) - 1) << 5)
#define b_mask565 (((1 << 5) - 1) << 0)

#define rgb888_2grb(r, g, b, i) ( (((b)&0xF8)>>2) | (((g)&0xF8)<<8) | (((r)&0xF8)<<3) | (i) )

#define swap_int16(i) ((uint16_t)(((i) << 8) | (((i) >> 8) & 0xFF)))

And just to show that the pixelation-like distortion in the emulator is caused by the rendering/stretching in XM6, and not the image display algorithm or some deficiency in the X68000 colour format, here's the same BMP open in an image viewer, next to the Graphic Buffer window of XM6, next to the rendered screen output:



Yeah, doing endian conversion and rgb565 to grb555 conversion on the fly isn't super fast.

I just timed a full screen, 512x512 image at around 11 seconds to display:


... almost 10 seconds of that is the conversion. Although I have to say that I'm really pleased with the output.

The smaller, 1/4 to 1/3 screen sized images (like my cover art example) load from disk in around 250-500ms, with the endian and RGB conversion taking that up to 2-3 seconds or thereabouts. I think that's reasonable for the intended use as a game browser/launcher (the main code isn't written yet!). I'll probably write a callback that fires after a game has been selected and user input has been idle for 250-500ms, then loads the BMP from disk, so normal browsing/scrolling doesn't trigger it.

Still, if I can try to speed it up at all, it's a bonus.


Once pre-processed you could just transfer the image to the graphics memory using DMA so as to not lose menu responsiveness.

Actually, way back I wanted to do a similar graphical game launcher myself, but in the end I gave it up in favour of a simple text-based one. I've set it up to start on boot on my machine. If I had continued with the graphical version I would probably never have finished, because it was so much more work.


I'll probably write some GVRAM-to-GVRAM bitmap copy/move functions to make life easier for copying and moving sections of screen around once loaded, then move on to TVRAM and sprites, as they'll be the bulk of the stuff responding to user input.


In case anyone is interested, I've updated my wiki with all the functions I've written, plus examples and expected output:

  • Using dos.h to search and query files/directories
  • Using iocs.h to change screen mode
  • Generating an X68000 GRB+i pixel value
  • Turning X/Y coordinates into GVRAM addresses
  • Drawing points on the screen
  • Drawing filled/unfilled rectangles
  • Full-screen colour gradients
  • Loading BMP images
  • Screen-to-screen copying


When I'm a bit further on I'll add the same content to the Gamesx wiki pages... it's better to have it in multiple locations.

If anyone has any pointers for getting started using TVRAM (loading / using custom fonts/bitmaps) or the use of PCG RAM for defining/moving sprites, I'd be really appreciative.


For sprites and background tiles you might want to look for my old Gankutsuou project thread on here. I believe I linked the source code for the project. Might be handy to look through.

I started it sometime in 2013 I think.


TVRAM is 4-bit planar indexing into the first GVRAM palette (the first 16 colors). It supports a "simultaneous access mask" for writing more than one plane at a time with a single write and a "write mask" for doing easy patterning. It also supports hardware raster copy which I've seen used for fast text scrolling. There are some nice diagrams and explanation in Inside X68000.

PCG is 4-bit 1-dimensional indexing into the selected PCG palette (0-15). The nametable items reference which of the 16 palettes the item uses. IIRC sprite nametables are in their own little chunk of RAM but BG nametables are at the bottom of PCG. So BG layers eat up the bottom 1/4 to 1/2 of PCG depending on how you set things up. These are, of course, also well described in Inside and to a bit lesser extent the 'puni' docs depending on what version of them you have.

For loading custom fonts to use with stuff like _dos_print or printf you should probably just use HFONT (link) or HIOCS from the command line. I think the font format is described at the end of the graphics chapter in Inside. There is some explanation in the HFONT documentation also.

You probably won't need it but if you intend to do wild and crazy sprite things you should look into the XSP (link) library. It's the sprite multiplexing code used in ChoRenSha68k and Puti'n Plin among other things.

That reminds me, Inside X68000 OCRs very well with OmniPage. That's an expensive piece of software, but maybe you can find it, uh, cheap somewhere ;)