Categories
Geek / Technical Linux Game Development

Wine Talk

This Thursday I am giving a talk on using Wine to run Windows applications on Linux. Wine stands for Wine Is Not an Emulator: it is an implementation of the Win32 API, which means applications run at essentially normal speed rather than taking the performance hit of an actual emulator or a virtual machine like VMware.

That is, if they run properly. Wine is still not at version 1.0, which means that nothing is expected to work. Still, a great many apps are available as can be seen in the Application Database.
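For anyone who hasn't tried it, running a Windows program under Wine generally amounts to running the installer and then the installed executable through the wine command. Something like the following, where the paths and names are just placeholders, since the fake Windows drive normally lives under ~/.wine/drive_c:

$ wine /media/cdrom/setup.exe
$ cd ~/.wine/drive_c/Program\ Files/SomeGame
$ wine SomeGame.exe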

My first test was to install Starcraft. I found a few how-tos, but they aren't terribly consistent, and it seems that the authoritative document is years old. Fortunately it wasn't hard to install and run. The only problem I've found is that Battle.net isn't usable: I get a black screen, and the mouse cursor disappears, so I can't join games. Cedega, previously called WineX, is a closed-source fork of the Wine code. Its database lists Starcraft as highly rated and doesn't mention the Battle.net problem. Still, I am not ready to shell out $5/month just to have Windows games I've already paid for running on my GNU/Linux system, especially when some months I don't get to play said games, and especially when there are plenty of native apps already. Also, it seems that newer versions of Wine tend to break Battle.net support whenever it does start working. I'll wait.

My next test was Wizardry 8. I love the Wizardry series, and I feel bad that I haven’t played 8 very much. It works perfectly according to the Wine application database, so I set out to install it. I had forgotten how long it took to install this game! This is one of those 3 CD games which take up a lot of space on my hard drive. At 50% I had already added 1GB to my home directory (well, including Starcraft and the rest of Wine, but still).

Unfortunately, when I try to run it, Wizardry 8 thinks I am running a debugger and wants me to stop doing so. I ended up having to find a no-CD crack in order to play this game, which goes to show how annoying copy protection can be for legitimate owners of games. In the end, I managed to get it to play, and aside from a few sound crackles during the opening video, it ran flawlessly. I am very pleased, but I wish I didn't have to go looking for a no-CD crack in the first place. Still, it shows a legitimate use of what would otherwise be considered illegal under the DMCA.

And Total Annihilation’s setup program wouldn’t even run on the version of Wine that I was using (20050310).

Wine just isn’t ready yet, but then no one pretends it is. Still, I have the ability to play one game as it was meant to be played, and another one works fine in single-player mode. Total Annihilation was possible at one time, so I can look forward to it in the future, I’m sure. Not bad for an incomplete project.

I’ve updated the Wine-Wiki for Wizardry 8, Starcraft, and Total Annihilation. Someone else already added their own comments about Starcraft. Wikis rule.

Categories
Geek / Technical General

DePaul Linux Community Install Fest

May 14th, this Saturday, will be the DePaul Linux Community Install Fest. It’s open to everyone, so if you’re in the Chicagoland area, check out the information on DLC’s website. Here’s a handy map to the event.

Categories
Geek / Technical

Snapback2 How-To

Not too long ago I created my own automated backup script. Shortly afterwards, helpful people sent me links to other, more robust scripts that have been written. One of those was called Snapback2.

Snapback2 is a backup script based on rsync and hard-links. I could explain what that means, but why reinvent an already well invented wheel? Again?

My original script was alright, but it worked by making an exact copy of everything each time it ran. For my 4 GB home directory, backing it up weekly over the course of a month would result in a backup directory that is 16-20 GB in size! That’s a lot of wasted space, especially when some files don’t change at all.

Snapback2 uses hard links and only stores the changes from one backup to the next, which means that if I only changed 30MB worth of files, the next backup only takes up about 30MB of new space. If nothing changed, almost no new space is used at all. Clearly this method is superior to what I had written.
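I can't resist at least showing the trick, though. Here is a rough sketch of the idea, using made-up paths and not Snapback2's actual code:

$ rsync -a --delete --link-dest=/backups/daily.1/ /home/ /backups/daily.0/

Files that haven't changed since the previous snapshot in daily.1 are hard-linked into daily.0 instead of being copied again, so every snapshot looks like a complete copy of /home while only the changed files take up new space.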

Setting up Snapback2 is supposed to be very simple, but I found that the documentation assumes you know what you’re doing. The following is my Snapback2 How-To:

You can download Snapback2 from http://search.cpan.org/~mikeh/Snapback2-0.5/, the latest version as of this writing. Technically you should be able to use Perl to download it from CPAN, but I didn't. Most of the prerequisites should already be on your Linux-based system. According to the documentation, you'll need:

  • the GNU toolset, including cp, rm, and mv
  • rsync 2.5.7 or higher
  • ssh
  • Perl 5.8 or higher
  • the Perl module Config::ApacheFormat
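If you aren't sure what your system already has, a few quick commands will tell you. The last one simply tries to load Config::ApacheFormat and will complain if the module is missing:

$ rsync --version
$ perl -v
$ perl -MConfig::ApacheFormat -e 'print "installed\n"'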

On my Debian Sarge system, I have rsync 2.6.4, so your distribution will likely have at least 2.5.7. Similarly, I have Perl 5.8.4. The one thing that you need to do is get and install Config::ApacheFormat. To do so, make sure you have root privileges and run:

# perl -MCPAN -e 'install Config::ApacheFormat'

If it is the first time you’ve used CPAN through Perl, you will be prompted to configure it. If you aren’t sure, you can simply cancel the configuration step and it apparently grabs some defaults just fine. Any and all dependencies will also be installed.

Once you have all of the prerequisites, you can install Snapback2. Again, you could probably do the same thing above to grab it from CPAN, and it will probably grab Config::ApacheFormat for you, but as I didn’t do that, I won’t cover it here.

If you grabbed the tar.gz file from the link I provided above, you should run the following:

# tar xzf Snapback2-0.5.tar.gz

It will create a directory called Snapback2-0.5. The README tells you what to do, but for completeness, here are the next steps:

# cd Snapback2-0.5
# perl Makefile.PL
# make
# make test
# make install

Snapback2 should now be installed on your system. If it isn't, double-check that you have all of the prerequisites. The make test step runs the test suite before installing, so if something failed, the test output should tell you why. Even if the install succeeded, Snapback2 isn't going to do anything yet. You now need to make a configuration file.

You can read the documentation on setting up the configuration file in the man page for snapback2, or you can view it online.

Here is my file, snapback.conf:

Hourlies 4
Dailies 7
Weeklies 4
Monthlies 12
AutoTime Yes

AdminEmail gberardi

LogFile /var/log/snapback.log
ChargeFile /var/log/snapback.charges

Exclude core.*

SnapbackRoot /etc/snapback

DestinationList /home/gberardi/LauraGB

<Backup 192.168.2.41>
Directory /home
</Backup>

I didn’t change it much from what was in Snapback2-0.5/examples. I installed Snapback2 on the machine called MariaGB. MariaGB will connect to 192.168.2.41, which is the IP address of my main machine called LauraGB. This is why the DestinationList refers to LauraGB. If I wanted to backup another system, say BobGB, I would keep those backups separate in their own directory called BobGB. Normally, the ssh/rsync request would ask for a password. When I setup the backups to run automatically, it won’t be useful to me if I need to be present to login. You can do the following to create a secure public/private key pair:

$ ssh-keygen -t rsa

The above line will create keys based on RSA encryption, although you could alternatively use DSA. You will be prompted for a passphrase, which is optional; still, a good passphrase is much better than none at all. Using the defaults, you should now have two files in your .ssh directory: id_rsa and id_rsa.pub. The first one is your private key. DO NOT give it to anyone. The second one is your public key, which you can give to anyone. When setting up key-based authentication, you append the contents of this file to the server's .ssh/authorized_keys file. The next time you log in, instead of being prompted for a password, you will find yourself at a prompt, ready to work. For more detailed information, read this document about using key-based authentication with SSH.
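In my case, that means getting the public key from the machine running the backups over to LauraGB. Something along these lines should work, run as whichever user will actually be making the connection (see the caveats further down about using root):

$ cat ~/.ssh/id_rsa.pub | ssh 192.168.2.41 'cat >> ~/.ssh/authorized_keys'
$ ssh 192.168.2.41

The second command is just a test: if the key took, you should land at a prompt on LauraGB without being asked for a password (or it will ask for your key's passphrase instead, if you set one).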

So now, if I run the following command on MariaGB:

# snapback2

It will back up any changes from LauraGB's /home directory to MariaGB. If, however, it hasn't been an hour since the last backup, it won't do anything.

Still, manually running this command isn’t very useful, and while I could install a cron job to run snapback2, I will instead make sure that snapback_loop is running. It acts as a daemon, checking to see if a file gets created in /tmp/backups. Now I can create the following entry in my crontab:

# Create file for snapback_loop to run
0,30 * * * * touch /tmp/backups/snapback

So now, every 30 minutes, I create the file /tmp/backups/snapback, which snapback_loop will take as its cue to delete that file and run snapback2. Then snapback2 will make a backup if there has been enough time since the last backup was made.

Now, I have automated backups that run regularly. Some caveats:

  • Verify that snapback2 is in /usr/local/bin. On my system, snapback2 would run manually, but snapback_loop would output errors to /tmp/backups/errors that weren't too clear. I had to create a symlink in /usr/local/bin pointing to /usr/bin/snapback2 in order to get it to run (the exact commands are just after this list).
  • Make sure snapback_loop is running with root privileges. It has to call snapback2, which will need access to files in /var/log and other directories which will have restricted access. If you run it as a regular user, you may get errors. You could also change the location of the log file, but /var/log is a standard spot to keep such output.
  • Because you are running it with root privileges, you'll need to make sure it is root's public key that ends up in the other machine's authorized_keys file, rather than your regular user's. Otherwise, you'll get "permission denied" errors when rsync tries to connect to the other machine.
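For reference, this is roughly what sorting out the first two caveats looked like on my system; your paths may differ depending on where snapback2 actually got installed:

# ln -s /usr/bin/snapback2 /usr/local/bin/snapback2
# ps aux | grep snapback_loop

The ps line should show snapback_loop running as root. If it shows your regular user, or nothing at all, that explains the errors.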

If you don’t use a second computer, you can always use a second hard drive instead. Either way, you now have an effortless system for automating your backups!

Categories
Geek / Technical

Automating Backups: I Reinvented the Wheel

I already knew that I had reinvented the wheel when it came to writing my automated backup solution. I spent some time researching this topic, and I found a number of resources.

Of course, as soon as I posted my results on the LUNI mailing list, I got email from helpful people pointing out other projects that are already out there and well tested. They are way better than what I had, and much more optimized. My solution was making multiple full copies of my data, resulting in multiple gigabytes of storage, whereas some of these solutions are clever and use much less space.

So in true blog fashion, here are some links:

Categories
Geek / Technical

Automating Backups for Fun and Profit

At the end of last month, I mentioned that I had bought a new hard drive for the purposes of backing up my data. I just now installed it in my second computer. Was getting the backup system in place important? Yes, absolutely! I don’t think that a hard drive might fail. I know it WILL fail.

While I could simply copy files from one drive to the other manually, that depends on me doing so regularly. I'm only human, and I can forget or get sick, resulting in potentially lost data. Computers are meant to do repetitive tasks really well, so why not automate the backup process? My presence won't be necessary, so backups can take place even if I go on vacation for a few weeks or months. The backups can also be scheduled to run when I won't be using the computer: copying lots of data while trying to write code, check email, and listen to music at the same time can be annoyingly slow, but if it happens while I am sleeping, it won't affect my productivity at all. Also, while the computer can faithfully run the same steps successfully each time, I might mess up and run the steps in the wrong order if I do them manually. So with an automated system, my backups can be regularly recurring, convenient, and reliable. Much better than what I could do on my own week after week. Those three concerns became my three goals for this system.

I’ll go over my plan, but first I’ll provide some background information.

Some Background Information
LauraGB is my main Debian GNU/Linux machine. MariaGB is currently my Windows machine. The reason for the female names is because I am more a computer enthusiast than a car enthusiast. People name their cars after women, so I thought it was appropriate to name my computers similarly. For the record, my car’s name is Caroline, but she doesn’t get nearly the same care as Laura or Maria. My initials make up the suffix, GB. I guess I am not very creative, as my blog, my old QBasic game review site, and my future shareware company will all have GB. Names can change of course, but now I see I am on a tangent, so let’s get back to the backup plan.

LauraGB has two hard drives. She can run on the 40GB drive, as it has the entire filesystem on it, but the 120GB drive acts as a huge repository for data that I would like to keep. Files like my reviews for Game Tunnel, the game downloads, my music collection, funny videos I’ve found online, etc. It’s a huge drive.

For the most part, the 120GB drive has simply held backup copies of data from the 40GB drive. I also had data from an old laptop on there. Then I started collecting files there that don't have a second copy anywhere. If I lose that 120GB drive, I could recover some of the files, but there is no recovery for a LOT of the data. Losing that drive spells doom.

At this point, MariaGB can become much more useful than its current role as my Games OS. With the new 160GB drive, I can now have at least two copies of any data I own.

The Backup System
I spent the past few weeks or months looking up information on automating backups. I wanted something simple and non-proprietary, and so I decided to go with standard tools like tar and cron. I found information on sites like About.com, IBM, and others. I’m also interested in automating builds for projects, and so I got a lot of ideas from the Creatures 3 project.

On Unix-based systems, there is a program called cron which allows you to automate certain programs to run at specific times. Each user on the system can create his/her own crontab file, and cron will check it and run the appropriate commands when specified. For example, here is a portion of my crontab file on LauraGB:

# Backup /home/gberardi every Monday, 5:30 AM
30 5 * * 1 /home/gberardi/Scripts/BackupLaura.sh

The first line is a comment, signified by the # at the beginning, which means that cron will ignore it. The second line is what cron reads. The first part tells cron when to run, and the second part tells cron what to run. The first part, according to the man page for crontab, is divided as follows:

field          allowed values
-----          --------------
minute         0-59
hour           0-23
day of month   1-31
month          1-12
day of week    0-7

So as you can see, I have told cron that my job will run at 5:30 AM on Mondays (30 5 * * 1). The asterisks tell cron to accept any value for those fields.

The second part tells cron to run a Bash script I called BackupLaura.sh, which is where most of the work gets done.

Essentially, it gets the day of the month (1-31) and figures out which week of the month it is (1-5). There are five because it is possible to have five Mondays in a single month. Once it figures out which week it is, it goes to my 120GB drive and removes the contents of the appropriate backup directory. I called them week1, week2, and so on. It then copies all of the files from my home directory to the weekX directory using the rsync utility. I use rsync because a standard copy utility would change the file metadata, making it look like every file was touched at the moment of the backup. Rsync keeps the same permissions and timestamps the files had before the backup.
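The script itself isn't anything fancy. Here is a stripped-down sketch of the idea, not my exact script, with a placeholder path for the backup drive:

#!/bin/bash
# Figure out which week of the month it is (1-5) from the day of the month
DAY=$(date +%d)
WEEK=$(( (10#$DAY - 1) / 7 + 1 ))

# Placeholder path for the weekly backup directory on the 120GB drive
BACKUP_DIR=/backup/week$WEEK

# Empty out this week's directory, then sync the home directory into it
rm -rf $BACKUP_DIR
mkdir -p $BACKUP_DIR
rsync -a /home/gberardi/ $BACKUP_DIR/

The -a (archive) option is what preserves the permissions and timestamps.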

So tomorrow at 5:30 AM, this script will run. As it will find that the date is 11 (April 11th), it knows that it is in the second week. So the directory week2 will be emptied, and then all files will be copied from my home directory to week2.

That’s all well and good, but you have probably noted that every week, the weekly backup erases the same week’s backup from the month before. When this script runs on May 9th, week2 will be erased, losing all of April’s 2nd week backup data! I’m ok with that.

Here’s why: every 1st Monday of the month, the script will make the week1 backup, but it will ALSO make a monthly backup. It takes week1 and runs another utility on it called tar. Multiple files can be combined into one giant file by using tar. The resulting file can also be compressed. Most people on Windows will create .zip files using similar utilities, but tar fits the Unix programming philosophy: a robust tool that does one thing really well.

Usually tar is used with utilities like gzip or bzip2, but sometimes compression is avoided for stability reasons in corporate environments. Compressing might save you a lot of space, but on the off chance that something goes wrong, you can lose data. And if that data is important enough, the risk isn’t worth the space savings.

In my case, I decided to use bzip2, since it generally compresses better than gzip or the compression used by .zip files. If I corrupt something, it isn't the end of the world, since bzip2 ships with a recovery tool (bzip2recover) for salvaging damaged archives. So the script takes the week1 directory and compresses it into a file with the date embedded in the name. The appropriate line in the script is:

tar -cvvjf $BACKUP_DIR_ROOT/monthly/LauraGBHomeBackup-$(date +%F).tar.bz2 $BACKUPDIR

The $BACKUP* variables are defined earlier in the Bash script; they basically tell the script where to go in my filesystem. The file created is named LauraGBHomeBackup-, followed by the date in YYYY-MM-DD format, followed by .tar.bz2; the script runs date +%F and inserts the result into the filename. The file is placed in the directory called monthly. I can manually clean that directory out as necessary, and since the dates are all unique, none of the monthly backups will get erased and overwritten like the weekly backups.

Conclusion
And so my backups are now automated. Every week, a copy of my home directory is synced to a second hard drive. Every month, a copy of the data is compressed into a single file named by date, making it easy for me to find older data. What's more, cron will send me an email to let me know that the job ran and what the output was. If I forgot to mount the second hard drive for some reason and the script couldn't copy files over, I'll know about it Monday morning when I wake up.

Once MariaGB is up and running, I can configure the two systems to work together on the monthly backups. After the .tar.bz2 file is created, I can copy it from LauraGB's monthly directory over to MariaGB, into a directory to be determined later. Of course, normally when I copy a file from one machine to another, I need to manually enter a username and password. I can get around this by letting the two systems consider each other "trusted", meaning the connection between them can be automated, which is consistent with one of the goals of this system. I'm very proud of what I have created, and I am also excited about what I can do in the future.
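Once that trusted connection is set up, the copy itself should only need to be one more line at the end of the backup script, something along these lines. The destination directory here is hypothetical, since I haven't decided on it yet:

scp $BACKUP_DIR_ROOT/monthly/LauraGBHomeBackup-$(date +%F).tar.bz2 MariaGB:/some/backup/directory/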

Currently LauraGB's home directory has images, code, homework files, game downloads, and other files taking up 4GB of space, which clearly won't fit on a CD-ROM uncompressed. To improve my backup system, I will need to purchase a DVD burner. Then I can place a blank DVD in the drive and have the computer automatically burn the monthly backup when it occurs, giving me another copy of my data, one that I can bring anywhere. Ideally, remote backups would complete my system, but I think I will use those only for specific data, such as my programming projects and business data when I get some. Losing my music files and pictures in a house fire isn't going to be as big of a concern as the fire itself, but losing sales info, game code, and the customer database, essentially THE BUSINESS, would be something that I probably won't easily recover from.

For now, I am mitigating the disaster of a failing hard drive, which is my main concern. If you do not have your own backup system in place, either something homemade like my setup or through some commercial product, you’re walking on thin ice. Remember, hard drive failure isn’t just a potential event. It’s a certainty. It WILL fail. Prepare accordingly.

Categories
Games Geek / Technical

LAN Par-tay! or Stupid Processor…

This past weekend I went to a LAN party at my friend’s dorm. For those of you new to the term, you basically take your PC to this party and everyone connects to a Local Area Network (hence the LAN part) to play games for hours on end. This party was in the basement of the dormitory, and it started at 2PM on Saturday and ended at 12PM on Sunday. I think. It was a long night. B-)

While my Debian GNU/Linux system has a new video card and has a slightly faster processor than my Windows system, the fact was that we were playing games, and a lot of them aren’t available for GNU/Linux yet. So I took my Windows machine.

I had problems right away. My Windows machine didn't have as complete a cooling system as my Debian system. I barely use it, so there was never a need. And I had played games on it before. Yet this weekend of all weekends, games would crash to the desktop. At first I thought Windows 98 might be the problem. I've refused to install Windows XP for reasons I may go into another day, but at the urging of others, I installed Windows XP. Luckily I had brought the CD that I got for free for attending some Microsoft seminar on .NET. Games still crashed to the desktop. So rule out the OS.

I opened the system and found that the ATI Radeon 8500 video card was really, really hot. I think the ribbon cables were blocking airflow. Someone had a spare GeForce 2 MX, so I installed that. I still had crashes, so I opened the case to find that the video card was hot after only a few moments in the system. Was it overheating?

My friend let me use his fan to cool my system. It was funny seeing the case opened and a giant fan blowing into the system, but it kept it quite cool. Unfortunately, games would still crash to the desktop. So rule out overheating.

Someone else insisted I should lower the clock speed on my processor. I had a motherboard that allowed me to flip switches to lower the speed, but I didn't want to do it at first. I paid money for an AMD XP 2100+, so why lower it? Well, it did the trick. Games stopped crashing, and it still ran fast enough to handle games like Alien vs Predator 2 and Unreal Tournament 2004. I'm still upset that I had to lower the speed, but apparently the processor overheats otherwise. Perhaps the CPU fan isn't working well anymore.

Me and my computer woes, eh? Two other people had some issues, but we all eventually got to play.

I haven’t been to a LAN party in a long time, and these days it is less likely since I work 40 hour weeks. It was a completely different situation when I just had school to worry about. It was a good time. I think everyone should attend a LAN party. Besides reminding you that playing games is important if you are going to make good games, it also reminds us that gaming is every bit as social an activity as any other. The next time someone tells you to “put down the controller, go outside, and get a life” remind them that playing video games with friends is a bit more healthy than getting overly drunk at bars and smoking. And arguably more fun.

Categories
Geek / Technical

My Computer is Back and Badder Than Ever!

Last time I mentioned how my Debian GNU/Linux system needed to be upgraded due to a malfunctioning video card.

I am happy to say that my system is running quite fine now. Here is a listing of some of the upgrades and changes:

  • nVidia GeForce2 GTS to nVidia GeForce FX 5500
  • Linux kernel 2.4.24 to Linux kernel 2.6.11
  • Deprecated OSS sound drivers to the new standard ALSA
  • A restrictive hard drive partition scheme to a less restrictive one

The video card runs amazingly well, although my true test is to verify that Quake 3 Arena will run on the system. So far, Tuxracer and Frozen Bubble run fine. The 2.6 kernel is a nice step up from 2.4, since the system responds faster and a lot more hardware is supported. The OSS sound drivers, for example, have been deprecated in 2.6, but I've always used them before since ALSA was iffy at best, requiring a separate download and kernel recompile. Now ALSA is actually part of the kernel, and I am pleased that my system is so much more up to date. Debian is the distro that people make fun of for being less than on the cutting edge, but it makes up for it by being stable.

The hard drive partition scheme I had before made things difficult. I had a lot of space for my home directory, which is where I keep data files and the like, but much less for /usr, where programs are installed, and /tmp, which made installing new programs difficult. The new scheme no longer gives /usr and /tmp their own cramped partitions, so they share a LOT more space, especially since I have a bigger hard drive to store data on anyway.

So my system is faster, more convenient, and more compatible with what’s new in the world of technology. Games that were out of reach before are now playable. Not bad for an emergency repair.

Categories
Geek / Technical

Stupid Video Card…

Long story short: I don’t have a working GNU/Linux system at the moment.

Long story long:

Two days ago, my Debian GNU/Linux system worked fine.

Yesterday, the video card was acting up.

The display was a little off, as if the monitor cable was getting interference. I switched cables with the Windows 98 machine I have, so I eliminated the monitor and the cable as the culprits. The GeForce2 GTS that I had won in my first ever eBay auction years ago had finally started to fail.

Just to make sure, I tried to reinstall the Nvidia drivers. For those of you who don't know, driver installation on a Linux-based system is not a matter of downloading an executable and running it. Nvidia actually does provide something like that, but the drivers have to be made part of the kernel, usually as a module, and that module has to be compiled against the kernel you are running.

Well, it complained that the compiler used to build the kernel wasn't the same as the compiler currently on my system. It's been a while, and I've upgraded Debian a few times, so that made sense. I decided to try recompiling my kernel, since I hadn't done that in a long time and I had been meaning to add some extra features, such as USB drive support, anyway.

Recompiled, rebooted, and voila! I upgraded the kernel from version 2.4.24 to version 2.4.27, and I was surprised that it only took a few minutes to do so since I’ve had older/slower machines take a half hour or more. It turns out that it was a good thing it was so fast. I apparently forgot to add network support for my onboard ethernet. Whoops.

I attempted to recompile, but then I got strange errors about modules not existing, even though I followed the same exact steps. So I tried installing one of the older kernels, since I still have some of the packages I had created in the past. Still no network support? ARGH!!

I’ve been meaning to do a fresh install of Debian on this machine anyway. The hard drive partition scheme is more limiting than I had originally anticipated years ago (who thought it was a good idea to make /tmp only 50MB?!).

Since the video card was failing, I decided it was time to get a new one. So I went to Fry’s which was 20 minutes away from my house. I bought a GeForce FX 5500 (w00t!), as well as a Western Digital 160GB drive and some quieter case fans. I bought the drive because I don’t want to end up like Lachlan Gemmell. I already have a 120GB drive that holds a lot of my files. Some of it is backups from my main drive, but a lot of it is made up of data that doesn’t have a copy anywhere. I would hate to lose the .ogg files I’ve ripped from CD or bought from Audio Lunchbox, the games I’ve reviewed for Game Tunnel, pictures of me and my friends, and of course my Subversion repositories for the projects I have been working on. I opted for a second huge hard drive since it would be easier and faster to make at least a second copy of my data. I can decide to get a DVD burner later.

And the fans? My system sounded like a jet engine starting up. I’ve been meaning to fix that problem as well.

Last night I installed the fans, and it was definitely a lot quieter. I decided to leave off the video card and hard drive until today, since it was getting late and I needed to figure out how to set up the drive in the first place.

So at the moment, I have a system that can't connect to the Internet. It still can't display anything properly, so it is rendered useless for the most part. All because I tried to fix it when the video card acted up.

The good thing is that I’ve made it quieter, and when I am through with it, it will be even more powerful than before. Doom 3 and Unreal Tournament 2004 are now more easily within my grasp. And I will finally get around to designing an automated backup system. Phoenix rising, indeed.

Categories
Game Development Games Geek / Technical

I’m Live not-at-the-GDC!!

The big news in game development these days revolves around the Game Developers Conference. A number of indie developers have covered the event, including David "RM" Michael, Saralah, Xemu, and Thomas Warfield. I've had to read about it and look at pictures of people I've met in person or online, missing out on the fun.

I’ve read a few of the writeups that David Michael wrote for GameDev.net, and I intend to read the rest. I’ve also been reading Game Tunnel’s IGF coverage, including interviews and day-by-day news. It’s just like being there…only not.

Congratulations go out to those who made it to IGF finals! Some amazing games have been made by indie game developers, and they serve as an inspiration to the rest of us. This time next year, I hope to attend.

Categories
Games Geek / Technical General

1UP.com presents The Essential 50

The Essential 50 is 1UP.com's compilation of the 50 most influential games in video game history. I feel bad because some of the games on the list I've only read about, such as Battlezone or Prince of Persia. Others bring back good memories, such as Super Mario Bros. and Pac-Man. I am making a point to go back and play the games I own that I haven't gotten a chance to play yet, such as Final Fantasy 7. I was a Nintendo fanboy when it came out, so I refused to touch it, but sometime last year I saw a PC version of it for under $20. I still haven't played it.

It’s sad when you look back on the highlights of gaming and realize that you weren’t there for even half of it. Still, I have some good memories of some good gaming, and there is no reason for me to miss out on what’s to come. B-)