Nintendo 64 by Virtual Boy: Episode II

Four colors, three dimensions, two consoles, and one loose idea of how it all fits together

Aubrey B
Jul 21, 2018

Last time, in Nintendo 64 by Virtual Boy: Episode I, I essentially wrote up my concept for this project (I'd suggest reading that before this so you have a rough idea of what's going on here). Sure, I had explored modifying the graphics plugin on the Nintendo 64 emulation end of things a little, but overall I wasn't very far along in this project; I was just really excited about it.

This time, I'll go over some of my early development efforts and explain in a little more depth how it all comes together, or at least how I hope it all comes together.

First, I thought I'd talk a bit about my inspiration for the project, which really comes from the seemingly unrelated project Depth3D. Not all that long ago I had the exciting realization that Depth3D (and ReShade in general) was compatible with Angrylion RDP Plus (and Nintendo 64 emulation in general); you can see my article How to Play (Modded) Ocarina of Time for more on that. In any case, I was tinkering around with applying everything from depth of field to ambient occlusion onto a pixel-perfect emulation of a handful of Nintendo 64 games when I got around to playing with Depth3D, which let me play games in anaglyph 3D. Of course, Depth3D can do much more than that, but anaglyph 3D is a real weakness of mine. If you're unfamiliar, anaglyph 3D is in fact that "old school" 3D where you pop on those red and cyan glasses to look at a corresponding image like this:

Hopefully you have some anaglyph 3D glasses lying around to look at this with.

And it was at this point that I realized that a convincing “3D” effect could be generated for a Nintendo 64 game without too much hassle, and hey, Depth3D is open source so I can even learn from its code how to do some of this. Of course, the Virtual Boy’s 3D is a fair bit different from anaglyph 3D, but a little bit of challenge keeps things interesting.

In any case, coming back to the present, I've been working away at putting together a more technical design for this project, based on my preliminary work on encoding the Nintendo 64's graphics for output to the Virtual Boy. I generated a quick comparison of emulated images here to give you a sense of what all goes into this encoding (and that there's still more to do):

Tweaking and refining that encoding is one thing, but it's a whole other story to actually understand what needs to be tweaked and refined (it's hard to infer what the image will look like on actual hardware), so I started work on the hardware end of things. Whether it's the most practical idea, or will even work, remains to be seen, but at this stage I'm thinking I can make use of my Gameduino here. That has at least a couple of advantages: first, I already have the Gameduino, and second, I'm familiar enough with the Arduino platform in general that piecing things together should hopefully come a little easier.

The Gameduino shield, as it appears on the Arduino Playground.

Gameduino, according to the excamera site, "connects your Arduino to a VGA monitor and speakers, giving powerful sprite and tile-based graphics for video game creation." Despite that description, the Gameduino actually can be used to output pixel-addressed graphics! They even provide an example, bitmap, that illustrates the setup for pixel-addressed graphics in the form of a 256x256-pixel, 4-color bitmap. That sounds pretty convenient, doesn't it? Of course, this is done by arranging all 256 sprites available to the Gameduino in a 16x16 grid, with each sprite itself being a 16x16 grid of pixels, giving a grand total resolution of 256x256 pixels. At this point I had to do a slight bit of thinking to come up with an arrangement of sprites in an 18x14 grid instead of the example's 16x16. An 18x14 grid only uses 252 of the available 256 sprites, but it does create a total resolution of 288x224 pixels, which should fit nicely within the bounds of the Virtual Boy's native 384x224 resolution. It also isn't terribly far off from the Nintendo 64's native 4:3 aspect ratio.

So, to back up for a second, what am I doing exactly? Sure, a 288x224 image makes sense for the Gameduino, but the Gameduino outputs a VGA signal, so what good is that for the Virtual Boy? My line of thinking is this: my modified Angrylion won't output a "video signal" in the sense that we're used to. It's not like you could plug an HDMI cable into your PC and run the Virtual Boy like a monitor with a resolution of 288x224; well, maybe you could, but I feel like that would be a gratuitous amount of work. What I'm actually thinking is that my modified Angrylion will output a data stream of pixel data, which I'll run over USB to my Arduino/Gameduino, which will in turn process that stream into the 288x224 red-dyed monochrome image and output it via VGA. Additionally, this creates a testable point prior to the modded Virtual Boy, in that I can feed the VGA signal out to a regular VGA-compatible monitor (probably my old Trinitron CRT, as I think it should support such a resolution; if not, I can play around with one of my scalers as well). Overall the process looks something like this (where I've yet to give much thought to the process's third and admittedly most complex stage):

The next step after generating the VGA signal is to install a specialized VGA input on the Virtual Boy. What I like about this approach is that it enables me to mix and match the modded Virtual Boy with different devices, since in the long run my Arduino/Gameduino solution almost certainly isn't the best one. There are problems with this approach, no doubt. First of all, it won't be easy to accept a generic VGA signal and process it for the Virtual Boy. I can say that without even having begun investigating the format the Virtual Boy is actually going to accept, but it is certainly a unique format for pixel data, not at all a plug-and-play type of thing. In any case, I do want the Virtual Boy to accept a generic video signal of some kind, so I suppose this is a path I'm going to have to walk anyway. Also, this approach has problems when you consider how I could possibly manage any sort of 3D effect through it.

So I guess for now, I’m developing an experimental prototype for my project, Virtual Boy Input Mod: Mark 1! (I’ll come up with a better name later on). Mark 1 will simply be a project to learn how to take an image and fire it up on everyone’s favorite pair of black and red binoculars. From there, I’ll move on to a more refined Mark 2 which will worry more about using the Virtual Boy to its fullest extent.

In any case, first things first: I dug out my Gameduino and began the process of modifying Angrylion to write a stream down the line to my Arduino, along with writing the corresponding program to accept that stream and process it into an actual visual! For the curious, Angrylion, according to its GitHub page, is made up of 93.5% C code, followed by 5.6% C++, while the Arduino programming language is also essentially C/C++. Now, admittedly, I am first and foremost a .NET developer, so C# is my comfort zone, but I seem to be able to feel my way through C/C++ without too much difficulty, and I have the fortunate experience of having previously worked with both Angrylion and an Arduino project to rip images from a Game Boy Camera to a PC, so I'm not working my way through this entirely unfamiliar with my surroundings.

I realize at this point that Episode II of this series is now like twice the length of Episode I, so I'll leave off here. Next time I'll aim to get into the technical architecture for the project and really start getting into gear with the development process! Hopefully by then my new Virtual Boy and Virtual Boy parts will have arrived in the mail too. See you next time!
