Nintendo 64 by Virtual Boy: Episode IV

It’s time for class… Welcome to Virtual Boy 101!

Hey! It’s Aubrey again, and we’re now at Episode IV of this series. If you’re new here, I really suggest starting from the beginning and working your way through the episodes. Then you’ll have at least as rough an idea as I do of just what’s going on here. Here are some links to help you out on that journey:

* — Episode 01 — Episode 02 — Episode 03 — *
* — Episode 04 — *

As a reminder, here’s an overview of the project!

Anyways, let’s dive straight into the good stuff, because there’s a fair amount of information I want to get through this time around and I’m only willing to make you read so much in a single entry of this series! Okay, actually, before we dive in I just want to give a slight cautionary statement about today’s episode: things are going to get very technical, and I am only so clever, meaning there is a fair chance I’ve gotten something mixed up in here. I’ll do the best I can, but take what I’m saying with a grain of salt, and if something seems incorrect, let me know in the comments!

First things first, I want to get into the video decoder component of this project. As I’ve mentioned in previous episodes I see this particular component as, by far, the most complex layer of the project. It is here that we have to process the video signal provided by the hub, digesting it for the Virtual Boy’s displays. I suppose the best place to start then is with the Virtual Boy, its displays, and how they work.

The Virtual Boy’s display capabilities are (surprisingly) not terribly complicated in theory. A strip of 224 tiny, bright red LEDs is paired with a magnifying lens and an oscillating mirror which, in turn, reflects the LEDs’ light at a 90-degree angle to the viewer’s eye. Of course, there are two of these LED-based mechanisms (one for each eye). A servo circuit makes the mirror oscillate at a steady 50Hz while generating the appropriate signals for another circuit to send the image data to the LED array. Through this process, the light of the 224 LEDs is translated into a column of pixels in the displayed image. As the Virtual Boy’s resolution is 384x224, a frame is complete when 384 columns have been translated. With the mirror oscillating at 50Hz, that means approximately 19,200 columns are translated per second, mechanically providing the Virtual Boy’s smooth 50 frames per second of video. That humming noise you can hear coming from your Virtual Boy? That’s the sound of the mirrors oscillating like all hell’s broken loose so that your human eyes can pick up the full image that they do.
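As a quick sanity check on that arithmetic, here’s a throwaway sketch (nothing Virtual Boy-specific, just the figures from this paragraph):

```c
#include <assert.h>

/* Virtual Boy display figures from this episode */
#define COLUMNS_PER_FRAME 384
#define FRAMES_PER_SECOND 50
#define ROWS_PER_COLUMN   224

/* Columns swept out per second by each display's mirror:
 * 384 columns/frame * 50 frames/second */
int columns_per_second(void) {
    return COLUMNS_PER_FRAME * FRAMES_PER_SECOND;
}

/* Pixels lit per second on each 224-LED strip */
long pixels_per_second(void) {
    return (long)columns_per_second() * ROWS_PER_COLUMN;
}
```

That works out to 19,200 columns (over four million pixels) per second, per eye, all from one strip of LEDs and a wobbling mirror.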

That’s not all there is to the Virtual Boy’s display, though! The Virtual Boy, if nothing else, is well known for coming out of the box with “3D graphics”.

Courtesy of Wikipedia, here is a simplified illustration of the parallax of an object against a distant background due to a perspective shift. When viewed from “Viewpoint A”, the object appears to be in front of the blue square. When the viewpoint is changed to “Viewpoint B”, the object appears to have moved in front of the red square.

The Virtual Boy achieved these “3D graphics” through an effect known as parallax, which is the apparent displacement of an object’s position when viewed along two different lines of sight. Parallax is actually the quintessential component in the process of stereopsis, by which we humans gain a perception of depth. This is key to how the Virtual Boy’s 3D works as well. The frame shown on one of the Virtual Boy’s displays is slightly different from the frame shown on the other in terms of the horizontal positioning of the graphics. When our brain processes these visuals and the horizontal disparity between them, we develop a false sense of depth in the presented image: the Virtual Boy’s “3D”.
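To make that concrete, here’s a purely illustrative sketch. The real Virtual Boy lets software position graphics independently per eye, but the core idea can be faked by shifting a layer horizontally in opposite directions for each eye (the function name and the 0-fill behavior are my own inventions for illustration):

```c
/* Illustrative only: produce left- and right-eye views of a layer by
 * shifting it horizontally by +/- disparity pixels. A larger disparity
 * reads as "closer" to the viewer. Pixels shifted in from outside the
 * layer are filled with 0 (black). */
void make_stereo_pair(int w, int h,
                      unsigned char src[h][w],
                      unsigned char left[h][w],
                      unsigned char right[h][w],
                      int disparity) {
    for (int y = 0; y < h; y++) {
        for (int x = 0; x < w; x++) {
            int xl = x - disparity;  /* left eye: layer nudged right */
            int xr = x + disparity;  /* right eye: layer nudged left */
            left[y][x]  = (xl >= 0 && xl < w) ? src[y][xl] : 0;
            right[y][x] = (xr >= 0 && xr < w) ? src[y][xr] : 0;
        }
    }
}
```

Feed the two outputs to the two displays and your brain does the rest.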

At this point you’re probably starting to see a flaw in my Gameduino-based solution: to achieve the 3D effect the Virtual Boy is capable of, I would need two Gameduinos, one to render the image for each display, since the video feeds for the two displays should differ from one another. In my defense, this current Gameduino architecture is a stepping stone towards project completion; remember, this is still development for Mark I. I want to work out the process of rendering an image to a Virtual Boy at all before I begin investing in more hardware or diving into deeper technical challenges, so for now the Virtual Boy’s 3D capability goes unused while I learn how to piece the rest of this project together.

Moving along: Furrtek (the modder I mentioned in Episode I who modified a Virtual Boy for video output) has published a lot of information on how the Virtual Boy operates, graphically speaking. Poring over this information has started to round out my understanding of how this all comes together. First comes Furrtek’s work with the VPU, where he discovered that the pixel data bus is shared between the Virtual Boy’s two displays. The pixel clock revealed to Furrtek that bursts were occurring at 100Hz, not 50Hz (which would match the operation of the displays, as you’ll recall), and so each burst along the bus comes with a select signal that indicates which display it is intended for. Given that, we now know that an image is made up of 384 individual 224-pixel columns, and that the pixel bus is responsible for dishing them out left and right. So how do we start feeding video into the Virtual Boy, you ask? First, let’s recall a certain image from Episode II of this series:

The highlight here is a reminder of one particular piece of information about the Virtual Boy: it displays in 2-bit (four color) monochrome. To know that is to know that each pixel is represented in 2 bits, and as a bit is the smallest unit of data (simply a one or a zero), each pixel can be thought of as a pair of signals that are either on or off. Thanks again to Furrtek’s work we also know that the pixel bus outputs 8 pixels at a time, meaning a 224-pixel column is constructed in 28 pulses of pixel data, which together form one burst every 5.6 microseconds: one 224-pixel column drawn to the display it was intended for. Flipping that around, we now know exactly what we need to produce: 224-pixel columns, each sent as 28 pulses of 8 pixels, with each pixel communicated as the 2 bits that it is, plus the select signal to indicate which display that pixel column is intended for.
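The 8-pixels-at-a-time framing can be sketched as a packing routine. A word of caution: the bit ordering below (pixel 0 in the two lowest bits) is my assumption for illustration; the real mapping of pixels to pins follows Furrtek’s pinout table:

```c
#include <stdint.h>

/* Pack 8 two-bit pixels (values 0-3) into one 16-bit word for the pixel
 * bus. Pixel i occupies bits 2i and 2i+1 here, which is an assumed
 * ordering; the real one is defined by the ribbon cable pinout.
 * A full 224-pixel column is 28 of these words. */
uint16_t pack_pixels(const uint8_t px[8]) {
    uint16_t word = 0;
    for (int i = 0; i < 8; i++) {
        word |= (uint16_t)(px[i] & 0x3) << (2 * i);
    }
    return word;
}
```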

A massive thanks to Furrtek for this table, illustrating the pinout for the ribbon cable connection from the pixel bus, as numbered from the main board’s connector. Notice there are 16 pins for pixel data (one pair of pins defining each of the 8 pixels that the bus handles at a time).

Okay, I don’t know about you, but I’m getting pretty excited now — we can concretely outline what the decoder needs to produce as an end result, and we already know what the decoder needs to accept as an input. When we have our inputs and outputs defined, the rest is simple algebra: that is, we know what we’re given and what we need to create from that, so we just need to solve for the “x” in the equation.

So, let’s think through this even further. Our input is what the mod hub is outputting: a 2-bit monochrome image with a resolution of 288x224 through a VGA port at 60Hz. If we connect a VGA cable to that port, and plug it into the VGA port we’re going to mod onto the Virtual Boy, we just need to understand what to do with that incoming VGA signal. If we were to back up for a second and look carefully at that VGA cable’s connector, we find ourselves looking at three rows totaling fifteen pins. These pins serve a variety of functions, but there are five of them that we are particularly interested in, as they carry the active signals that we’ll be needing: three analog signals and two digital signals. The three analog signals are the red, the blue, and the green values of the image, while the two digital signals are the horizontal and vertical sync signals. I know what you’re thinking: do we really care about the blue and the green values of the image? The answer to that is maybe (I’ll explain later). For now, let’s discard all notion of the Virtual Boy and pretend we’re planning on rendering the Gameduino’s output to a standard computer monitor, because here is where things get interesting, and it will be easiest to break this down into smaller, easier-to-understand chunks.
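Since the Virtual Boy only has four intensity levels anyway, here’s a sketch of the color question, assuming the decoder digitizes the analog red line into an 8-bit value first (that ADC step, and the function name, are my assumptions):

```c
#include <stdint.h>

/* Assume an ADC has already digitized the VGA red line into 0-255.
 * Collapse that into the Virtual Boy's four intensity levels (0-3)
 * by keeping only the top two bits. */
uint8_t quantize_red(uint8_t red8) {
    return red8 >> 6;
}
```

Whether we end up folding the green and blue lines into that value too is the “maybe” from above.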

The vertical sync signal (one of the five VGA signals we’ll be working with) is what tells our display to start drawing a new frame; essentially, to start drawing pixels at x,y coordinate (0,0) of the display. The horizontal sync signal, meanwhile, is what tells our display to start drawing the next row of pixels. With our Gameduino outputting at 288x224, this means it sends a horizontal sync signal at the end of every row of 288 pixels, and that it does this 224 times to draw a complete frame on the display before a vertical sync signal sets us back to coordinate (0,0) and we run through the process again, drawing yet another frame, row by row.
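In other words, the two sync signals are all we need to know where each incoming pixel belongs. A minimal sketch of that bookkeeping (the hsync/vsync flags are assumed to be sampled from the VGA lines once per pixel clock; this ignores real-world porch intervals around each sync pulse):

```c
/* Track the (x, y) position of the incoming pixel stream from the
 * two sync signals. Call once per pixel clock. */
typedef struct { int x, y; } beam_pos;

void advance(beam_pos *p, int hsync, int vsync) {
    if (vsync) {
        p->x = 0; p->y = 0;      /* new frame: back to (0,0) */
    } else if (hsync) {
        p->x = 0; p->y += 1;     /* end of row: start the next one */
    } else {
        p->x += 1;               /* next pixel within the row */
    }
}
```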

Okay, now we arrive at the question we’ve all been holding our breath over: if the VGA signal is a stream of pixels row by row, how can we draw that pixel data column by column?

Let’s walk through the process again, but with the Virtual Boy in mind this time. On a vertical sync signal, we know a number of things; perhaps most obviously, we know a frame has been transmitted in full. On each vertical sync, another 224 rows of 288 pixels have been fed down the line. For the Virtual Boy, we will have to buffer that frame and write the buffered frame, column by column, to the Virtual Boy’s displays. That sounds slow, doesn’t it? Fortunately, we have several factors operating in our favor here. First is the fact that the VGA signal operates at, as I mentioned earlier, 60Hz, while our Virtual Boy, as you’ll recall, has its displays operating at 50Hz. That means our input runs at sixty cycles per second, while our output only needs to run at fifty. Second, our input resolution is smaller: there are only 288 columns, meaning we only really need to think for 288 of the possible 384 columns we draw. Given the difference, we can write 48 empty columns, write the 288 columns from the VGA frame, and then write another 48 empty columns. This means there are 96 empty columns we can write in between each frame we receive over VGA. Remembering that a column is drawn in 5.6 microseconds, that gives us a whopping 537.6 microseconds (over half a millisecond) to buffer the next frame, which is coming in faster than we need to be drawing to begin with.
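The budget arithmetic, spelled out (taking the 5.6-microsecond column time from above as given):

```c
/* 48 empty columns on each side of the 288 real ones */
#define EMPTY_COLUMNS (48 + 48)
/* Time to draw one column, per Furrtek's measurements */
#define US_PER_COLUMN 5.6

/* Idle time per output frame, in microseconds, available for
 * buffering the next incoming VGA frame */
double buffer_budget_us(void) {
    return EMPTY_COLUMNS * US_PER_COLUMN;  /* 96 * 5.6 = 537.6 */
}
```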

Let’s write some pseudocode to sketch out at a high level what we need to do. First let’s pretend we have the functions readPixel(), which reads the incoming VGA signal and returns a “pixel” object, and write(), which accepts an array of pixel objects and handles writing them out to the Virtual Boy.

//Container for incoming frames: 224 rows of 288 pixels
pixel framebuffer[224][288];
//Container for outgoing columns
pixel column[224];

int x = 0, y = 0;
//Here we’ll populate framebuffer while vsync is false
//Once vsync triggers, we’ve got a full frame in the framebuffer
//so we can move on to writing columns
while (vsync == false) {
    framebuffer[y][x++] = readPixel();
    if (x == 288) { y++; x = 0; }
}

//48 empty columns to pad the left of the frame
for (int i = 0; i < 48; i++) write(null);

//write out the columns in framebuffer
for (int i = 0; i < 288; i++) {
    for (int j = 0; j < 224; j++) {
        column[j] = framebuffer[j][i];
    }
    write(column);
}

//48 empty columns to pad the right of the frame
for (int i = 0; i < 48; i++) write(null);

//and repeat.

Of course, we would want some of that execution to run in parallel, but that’s an approximate look at what needs to happen. So, cool: now we have at least an idea of what we’re doing from start to finish in this project. At least roughly, we know about modifying Angrylion to stream properly encoded pixel data to the Gameduino, we know about the Gameduino using a pixel-addressed graphics setup to construct an image to output over VGA, and we know how the video decoder needs to operate to process that VGA signal for the Virtual Boy!
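On that “run in parallel” note: in practice this usually means double buffering, capturing the incoming VGA frame into one buffer while the previous frame is drained out column by column, then swapping at each vsync. A minimal sketch of just the swap bookkeeping (the capture and write loops themselves are the pseudocode above):

```c
#include <stdint.h>

#define ROWS 224
#define COLS 288

/* Two framebuffers: one fills from VGA while the other is drained
 * to the Virtual Boy. */
typedef uint8_t frame_t[ROWS][COLS];

static frame_t buffers[2];
static int capture_idx = 0;  /* index of the buffer currently filling */

frame_t *capture_buffer(void) { return &buffers[capture_idx]; }
frame_t *display_buffer(void) { return &buffers[1 - capture_idx]; }

/* Call on each vsync: the just-completed frame becomes the display
 * frame, and the old display buffer is recycled for capture. */
void swap_buffers(void) { capture_idx = 1 - capture_idx; }
```

That half-millisecond window of empty columns is where the swap would happen.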

I think that will be it for today’s update — that felt like a fair heap of information all at once! Next time we’ll start working on the implementation of Mark I of the mod hub, and maybe get to some other implementation work depending on which of my incoming packages show up. See you then!

Hey, I’m Aubrey! I write about gaming, comics, programming, and LGBTQ+ issues.
