Which Distro? An Introduction to Picking “a Linux”

Every few days, someone on the Linux users group on Facebook posts a question that goes something like this: “I’m new to Linux. Can you recommend a good distribution for blah?”, where blah is usually gaming, media, or learning Linux. Like many people who are new to Linux, I tried out a lot of distros when I was first exploring the Linux world. Eventually, I came to the conclusion that it doesn’t matter which distro you choose. Below, I’ll explain a bit more about what a distro is, what kinds there are, and why it does and doesn’t matter.

What is a Distribution?

It’s hidden right in the name, but it’s not immediately obvious: a distribution of Linux is the Linux kernel, packaged up and distributed in a usable form. Linux, by itself, is just a kernel: the part of an operating system that manages the hardware and provides interfaces for the various pieces of software and hardware that make the computer usable. If you were to boot the kernel all by itself, the machine would start and do nothing. It would spin the fans and hard disks faster or slower, handle new device connections, and maybe even accept user input, but the computer wouldn’t be usable, because the software that interacts with the user (you) is not part of the kernel – no text terminal, no on-screen menus or mouse pointer, no nothing. Once the kernel is running, it spawns other programs that handle user interaction. This is the basic model of an operating system.

So, apart from some basic patches and alterations that the makers of Linux distros might make to their specific release of the kernel, the underlying kernel is basically the same from distro to distro. The difference is in the software they install around it, and that’s also one of the reasons Linux is more customizable than any other popular operating system today. For example, there are quite a few different graphical interfaces available for Linux: KDE*, Gnome, Unity, XFCE – these are all interfaces that look and behave differently. And that’s just a very, very small portion of the desktop environments available. So two different distros with the exact same kernel can look and behave very differently on the surface. This is, to a large extent, how various Linux distros differ: the packages included with the base system, and their initial configurations. Ubuntu and Kubuntu, for example, are two distinct distros, yet they are mostly the same except that Ubuntu ships with the Unity desktop environment, and Kubuntu ships with KDE. One could easily uninstall Unity from Ubuntu and install KDE instead.
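
On Ubuntu, for example, that swap is only a couple of commands. Something like this should do it (a rough sketch: kubuntu-desktop is the metapackage that pulls in KDE and the usual Kubuntu applications, and removing Unity afterward is entirely optional):

sudo apt-get install kubuntu-desktop   # pull in KDE alongside Unity
# log out, pick the KDE session on the login screen, and if you like it:
sudo apt-get remove unity              # drop the Unity shell itself
sudo apt-get autoremove                # clean up packages nothing depends on anymore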

Major Differences: Package Management

Arguably the most fundamental distinction between various distros is the way they facilitate software installation. On a base Linux system with no package manager, the way you install software is by copying the executable binary and its shared objects to the proper locations. This isn’t the easiest or most convenient thing to do, as it doesn’t allow you to easily keep track of which software package is installed where, or what version it is. So most Linux distributions ship with a package manager like apt (Debian-based), yum (Red Hat), portage (Gentoo), or pacman (Arch). These package managers not only install packages from a central repository – where everything has been checked for compatibility with the rest of the repository – but they install all the dependencies as well. Everyone who has ever tried to install a package from source on a freshly installed system will tell you that this is a great relief, and saves hours of hunting online for tarballs**.
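
To make that concrete, here’s roughly what day-to-day package management looks like with apt on a Debian-based system (vlc here is just an example package):

sudo apt-get update          # refresh the package lists from the repositories
sudo apt-get install vlc     # install the package plus everything it depends on
apt-cache depends vlc        # see what those dependencies actually are
dpkg -l vlc                  # check what's installed, and which version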

Which package manager you choose is largely a matter of preference, and this is the basis on which I think you should make your decision. Debian-based distributions come with “apt”, which is – from my biased perspective – a reliable, relatively easy-to-use package manager. Distribution upgrades (i.e. major version upgrades) can be done without nuking the system and starting over, it supports multiple architectures on the same system (e.g. installing 32-bit packages on a 64-bit machine), and it is pretty painless. Even the most frustrating problems can sometimes be solved by throwing around a bunch of apt commands semi-brainlessly.
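
For the curious, here is a rough sketch of both features (the release codenames are placeholders, multiarch needs a reasonably recent dpkg, and you should back everything up before attempting a distribution upgrade):

# in-place distribution upgrade: point sources.list at the new release, then upgrade
sudo sed -i 's/old-codename/new-codename/g' /etc/apt/sources.list
sudo apt-get update
sudo apt-get dist-upgrade

# multiarch: 32-bit packages on a 64-bit install
sudo dpkg --add-architecture i386
sudo apt-get update
sudo apt-get install libc6:i386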

Red Hat/Fedora-like distributions (RHEL, Fedora, CentOS, and Oracle Linux) use “yum”, which installs rpm packages. (openSUSE is also rpm-based, but uses its own tool, zypper.) The last time I used yum was years ago, so my opinion isn’t worth much in this regard. I’ve heard that you used to have to do major Fedora upgrades by nuking the system – that is, wiping it and starting over – but I don’t know if that’s still the case. If you’re a beginner, I’d recommend using one of the other options, but this is my wholly biased opinion; take it with a grain of salt (and maybe a trial of Fedora in a VM).

Gentoo (and probably some other distros based on Gentoo) use “portage”. Portage is pretty freaking cool, in that it compiles every single package from source. It’s a somewhat agonizing experience (although not as much so on today’s faster machines), especially if you want to install a huge software package like KDE. But the benefit of doing things this way is that every binary on your system is optimized specifically for the box sitting in front of you (or under your desk, or wherever it is you have the thing). It’s more useful if you actually know what you’re doing, and can manipulate the various compiler flags effectively, but I’m sure there’s some speed-up even if you don’t entirely know what’s going on under the hood. Gentoo is my favorite distro for learning the ins and outs of Linux, and if you’re a first-timer and really want to dive into Linux and get a good head start, I can’t recommend enough that you take the time to do a full, manual Gentoo install. Just… uh… don’t be discouraged if you fail the first time. Or the second. You’ll learn a TON, trust me.
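
For a taste of what that looks like: the knobs live in /etc/portage/make.conf (plain /etc/make.conf on older installs), and the flags below are common examples rather than recommendations:

# /etc/portage/make.conf
CFLAGS="-O2 -march=native -pipe"   # optimize for the CPU in this exact box
CXXFLAGS="${CFLAGS}"
MAKEOPTS="-j4"                     # parallel build jobs, roughly one per core
USE="X alsa -gnome"                # globally enable/disable optional features

# then build a package (and everything it depends on) from source:
# emerge --ask app-editors/vim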

My experience with other package managers like pacman is minimal. I used Arch for a while, and it’s a very nice distro; it strikes something of a balance between Gentoo and more user-friendly distros like Debian, a best of both worlds.

Smaller Distros

The Internet is replete with smaller distros with funny names, and there are too many to mention. Most of them are offshoots of one of the main distributions I’ve described above, with various configuration changes. There are some medium-sized distributions as well (Linux Mint, Puppy Linux, etc.) which tend to do a pretty decent job, and are sometimes designed for very specific situations. Puppy Linux and Damn Small Linux, for example, are designed to be very small and lightweight, and are especially useful for booting from a CD or USB key to do system repairs. Linux Mint, in particular, is a refreshing spin on Ubuntu. I tend not to trust the really small distros, though (the ones you’ve never heard of with websites straight out of the ’90s), because I’m dubious as to whether they’ll continue to be supported in the future, and whether they’ve been tested thoroughly. There are probably good ones out there; I just don’t shop around too much anymore.

Choices

In many ways, it all comes down to choices, and the number of them you want to make. If what you want is a plug-and-play operating system that isn’t Windows or Mac OS X, go with Ubuntu, Linux Mint, Fedora, or a similar distro that has a one-and-done type install: you pop the disk in the drive, set up your language, time zone, and login credentials, and away you go. These distros have default packages that support most of your day-to-day needs, and it’s fairly easy to install components that aren’t pre-installed. They work on most of the common hardware out of the box, and they have a lot of online support options.

If, on the other hand, you want to make the choices yourself, choose a distro like Gentoo, Arch, or Debian. Gentoo and Arch, in particular, don’t even choose a default desktop environment for you, so you can choose any configuration you want right from the beginning without having to undo someone else’s work. One time, I installed Gentoo only to realize that I had left the driver for my hard drive controller out of the kernel configuration, so the system couldn’t boot: that’s how much control you have. Debian has some base packages that install a very minimal system, as well as some options that will install a lot of common packages for you. It’s more immediately usable than the other two, but allows you to install a minimal system if you want.
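
For reference, the step I botched looks roughly like this on Gentoo (just a sketch; the exact menu paths move around between kernel versions):

cd /usr/src/linux
make menuconfig        # Device Drivers -> Serial ATA and Parallel ATA drivers:
                       # make sure your disk controller (e.g. AHCI) is built in
make && make modules_install && make install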

At the far end of the spectrum of choices is LFS: Linux From Scratch. You compile everything, from the toolchain to the kernel, from source, and gradually build up the disk until you have a working operating system. I’ve never done this, but it’s always been in the back of my mind. You can find resources for doing it here: http://www.linuxfromscratch.org/

Stability

One last thing I want to mention is stability. Aside from package management, stability is probably the most important dimension of a distro, and I would be remiss if I didn’t talk about it just a bit. If you’re cycling through different distros exploring the Linux world, you might not care too much about stability. Honestly, if you play around with things enough, you’re going to wreck your distro no matter how stable it is. But if you’re looking to install Linux on a machine you care about, stability is very important.

Because the distro packagers are usually on the same team (or are the same people) as the people who maintain their distro’s package repositories, their attitudes and values contribute to how stable the resulting collection of packages will be. Debian, for example, is known for being fairly conservative and FLOSS (Free Libre Open Source Software) fanatical, which makes for a very stable, very reliable system, and makes it a bit harder to install proprietary software (not much harder, though.) Ubuntu, on the other hand, is less gun-shy, and uses more up-to-date packages at the expense of a slightly increased probability that their packages will have unresolved bugs. It’s worth doing some research to find out the attitude of a prospective distribution’s repository maintainers before making your final choice.

Stability is the main reason I forsook Ubuntu years ago, and now only use Debian. Ubuntu is the only operating system I have ever installed (aside from Windows) which has crashed during or right after installation***. It’s a great distro that is paving the way for lots of innovation and publicity in the Linux and Open-Source world, and it has become a stepping stone (at the very least) for new users, but I don’t like the way they choose their packages, or the defaults they install. And, if I’m honest, even though it’s a small and easily fixable issue, Unity absolutely drives me up the wall.

Conclusion

Hopefully this will help you choose the distro that’s right for you. You should play around with a few of them and read up on them (Wikipedia is a great place to do this) before picking the one you intend to use forever and ever… and always know that you can change your mind at any time. As you learn to use Linux, you’ll probably realize that you wish you had done certain things differently during your installation, so you’ll be itching to re-install after a while anyway.

If you’re curious, as you might already have guessed, I install Debian on everything I get my hands on: my desktop, my parents’ computers, Raspberry Pis – hell, I’d install it on my toaster if I could. After administering around 20 Debian machines during my two years as a SysAdmin, I’ve come to appreciate its elegant simplicity and robustness, and I wouldn’t replace it with Ubuntu if you paid me. But that’s just my opinion; I encourage you to draw your own.

*It has been pointed out – and rightly so – that the K Desktop Environment (formerly referred to as simply “KDE”) is now properly called “KDE SC”, for “KDE Software Compilation”. For simplicity, however, and since it is still referred to in the Debian repository and popularly as KDE, I’ve left the incorrect acronym as is.

**A member of the Linux Facebook group pointed out that newcomers to Linux might not know what a “tarball” is. Tarball is slang for an archive created with the Unix tar utility, usually compressed with gzip or bzip2, with extensions like .tar, .tar.gz, or .tar.bz2. Source code for many open-source packages comes packaged in a tarball.

***It’s true, I’ve had many a Gentoo installation crash on me at or before startup, but that was always because I had done something stupid, and was entirely my fault. The same opportunity doesn’t really exist in Ubuntu; I’ve had an installation crash, then succeed after installing again with the same options, for no discernible reason.

How to Heat Your Living Room with Computers

It’s time I explained what all this cluster computer stuff is about. It’s kind of an insane project, but bear with me. Some of you may have played the Wikipedia game “Seven Steps to Jesus.” Basically, it’s a special case of the Wikipedia game Wikirace, where you try to get from one article to another as quickly as possible. In Seven Steps to Jesus, you try to get to the Wikipedia article about Jesus in seven clicks or less. Why Jesus? Why seven? Who knows, who cares. But the point is that it’s fun for about ten minutes.

This ridiculous game wandered into my mind the other day, and I started thinking, “I wonder how you could create a computer program that would play Seven Steps to Jesus…” The answer depends. If you give the program access to the entire Wikipedia database, the task is trivial: a breadth-first search over the link graph will spit out the minimum number of steps from one article to another, and quickly. But what if you don’t give the program access to the entire Wikipedia database? What if it has to go to an article, and choose which link to click just like humans do: with “intuition?”*

As you might have guessed, the task becomes more complicated. Now we’re talking about machine learning (FUN!). I started to think about how you could train an AI to assign a “relatedness” value to every article on Wikipedia by giving it access to the Wikipedia database and having it traverse the links from one article to another. If you’ve taken a class on AI, you know that eventually this relatedness value will converge to the shortest-path distance. Basically, this program will do nothing useful… except heat my living room.

Except! Except that I’m going to train an RL agent in parallel. That’s the only thing that might be novel about this (other than the fact that I’m introducing the “Seven Steps to Jesus” task to the AI world.) Ordinarily, you would want the agent to learn sequentially, because if the links change, you want the agent to learn with the changes. But in this case, I really don’t give a damn about sequential updates. Also, this task is stationary (the distance between any two articles in my downloaded Wikipedia database will never change,) so updating sequentially doesn’t matter all that much.

So what you should get from this is that this project is a HUGE waste of time. But it’s fun, and I’m learning about graph databases, and RMI, and I got to build and run a computing cluster. Maybe there’s a real task for which this approach is suited. I’m not sure, though. Usually in RL, you have to run a trial many times in order to test an approach, so there’s really no point in distributing the processing within a single trial. In other words, if you’re going to run 100 trials of your schmancy new algorithm, you might as well just run independent trials on five different machines until you finish, rather than splitting up the computation (which adds overhead) into five different parts.

The point is, I’m having fun. Leave me alone.

Discipline Week Update: Today was day four of Discipline Week, and so far so good. I’ve been trying to avoid napping, because I want to really embrace this 7am to 11pm schedule I’ve got going, but today I really needed a nap. I ended up sleeping for maybe an hour and a half, which is really too much, but we’ll see how things go tomorrow. I’ll write a more detailed post tomorrow about how Discipline Week is going, but I thought I’d let you know that it’s still a thing, and it’s going well!

*Yes, there are algorithms that do this quickly, but you’re still missing the point: the point is, this will be fun. Fun, I tell you, FUN!

Further Blending of OS X and Linux

Those of you who have ever seen me using my laptop, or who have read my blog before, probably know that I use Linux on my MacBook Pro and love it (Linux and the MacBook Pro.) A few months ago I switched to Linux as my main OS, and I now use it 95% of the time that I’m using my computer. However, there are still a few issues with the setup, one of which is that there are still some tasks that I like or need to use Mac OS for, namely video editing and Photoshop (I like Gimp, but it’s no Photoshop.) There’s no easy way to remedy that situation, so the only choice is to boot up into Mac OS when I want to use those features, and then back into Linux afterward. But then there’s the problem of files. My usual method of sudo -s and then copying files that I need via command-line becomes tiresome, and if there’s a file that I created or updated in Linux and that I want to use on OS X, I have to use a USB key. Well no more!

I had been considering creating a separate partition for my home directory and configuring both Linux and OS X to use it, but I think they might end up fighting over it, especially since Linux can’t write to journaled HFS+ partitions, and Mac OS X can’t even see ext partitions. So I came up with another solution. I made all the directories I wanted in OS X (which is where I still have the majority of my documents) read-write (steps below), and then symbolically linked them to my home directory in Linux. So far it works like a charm! Here are the steps:

First, boot into OS X. You need to disable journaling on your Macintosh HD partition. Warning! Disabling journaling can lead to file system corruption! You should definitely back up your files at this point, and more frequently than usual after doing this. Open a terminal and type:

sudo diskutil disableJournal Macintosh\ HD

Now reboot into Linux. If you’re going to use your Mac HD to store all your Documents and such, then you don’t want to have to fiddle with mounting the partition by hand every time. Fortunately, we can make mounting it painless by giving the partition a permanent mount point and an entry in /etc/fstab. So whip out your favorite text editor (with sudo) and edit /etc/fstab. Add the following line:

/dev/sda2     /mnt/MacintoshHD    hfsplus    rw,nodev,nosuid,uhelper=udisks    0    0

If you haven’t done anything crazy to your partition scheme, then your Mac HD partition should be /dev/sda2. It’s important that you write “MacintoshHD” without spaces. I tried “Macintosh\ HD” and fstab didn’t like it (fstab wants literal spaces escaped as \040, so leaving the space out entirely is simpler). Now do “sudo mkdir /mnt/MacintoshHD” (again, no spaces) to create the mount point. Great! Now you can simply “sudo mount /dev/sda2” and your Mac HD will be mounted.

The next step is to make all your files on Mac HD readable and writable by your Linux user. This is the somewhat sketchy part. You need to change to your OS X home directory on the mounted partition and “chmod -R go+rwx” all the directories that you want access to. The first thing I tried was “chmod -R g+rw” and then adding my user to the dialout group (the OS X “staff” group and Linux’s “dialout” group share the numeric group ID 20, which is why the files show up as group “dialout”), but apparently either I am missing one of the intricacies of user administration on Linux or there is something more going on here. No, giving every user on the system read-write access to all your files probably isn’t the best idea. If you’re hosting a server on your Mac, or you have a multiuser system, then I strongly suggest that you don’t do this. I’m not hosting any public servers on my laptop, and I don’t mind taking this particular risk for a very important bit of added functionality, so I did it. Ultimately it’s up to you.

The final step is to symbolically link the directories to your home folder on Linux. Take all the files and directories you want to save out of the directories you want to replace in Linux (assuming you want to use the same directories on Linux and OS X,) and then delete the directories themselves. All you need to do now is “ln -s /mnt/MacintoshHD/Users/your_user/Directory ~/Directory” for all the directories you want.
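
To recap the Linux side in one place (a sketch: your_user and Documents are placeholders for your own OS X username and directories):

sudo mkdir /mnt/MacintoshHD                        # create the mount point (no spaces)
sudo mount /dev/sda2                               # uses the fstab entry above
sudo chmod -R go+rwx /mnt/MacintoshHD/Users/your_user/Documents
rm -r ~/Documents                                  # only after moving out anything you want to keep!
ln -s /mnt/MacintoshHD/Users/your_user/Documents ~/Documents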

I’ve done this for my Documents and Music directories, and if it works I’ll probably do the same for Desktop, Downloads, and Pictures. The only downside I’ve noticed so far is that you can’t trash files from the directories on the HFS+ partition; you have to delete them straight away. It’s not too bad, but I wouldn’t be the first person to accidentally delete something important. And I don’t think there’s a way to undelete a file on HFS+ like there is on ext. Other than that it’s working great!

UPDATE: This post used to say that you have to do “chmod -R go+rw”, when in fact you have to use “chmod -R go+rwx”. The execute bit is what lets you descend into a directory, so without it you won’t be able to access any of your OS X user’s subdirectories.

UPDATE 2: I’ve also noticed that reading from and writing to my Mac OS partition is a little slow. Although I don’t know for sure, I imagine the overhead comes from Linux’s HFS+ driver. I get a transfer rate of about 20 Mb/s, which is still pretty fast, and not noticeable in my day-to-day usage, but it’s something to consider.

Power Management FTW!

Power Management on Linux can be a bit of an issue, sometimes. It certainly was for my MacBook Pro, which ran really hot, and only got 1.5 hours of normal usage on Ubuntu 10.04 compared to 6 hours of the same on Mac OS. Needless to say, this is unacceptable. So I fixed it. Here are the steps that worked for me.

First, the kernel version that comes with Ubuntu Lucid has a bug that causes excessive kernel ticks. Kernel ticks wake the CPU up, which wastes power, so if you keep the kernel ticks to a minimum, you won’t waste as much power. Simple enough, right? So to fix the bug (at the time of this writing), you have to install Brian Rogers’ patched kernel. Add the repository to your sources list by typing:

# sudo add-apt-repository ppa:brian-rogers/power
# sudo aptitude update

Then you’ll need to install the following:

linux-headers-2.6.35-power+18
linux-headers-2.6.35-power+18-generic
linux-image-2.6.35-power+18-generic

Then reboot into your new kernel. No more useless interrupts! This will most likely knock out your STA (wireless) drivers, and your touchpad drivers (if you’ve installed the one that allows you to click and drag, rather than the default, i.e. useless, one.) To fix this, you must download the Broadcom STA drivers and both patches. Then you have to download another patch here: http://bugs.gentoo.org/attachment.cgi?id=232555, go into the src/wl/sys directory, and execute[1]:

# patch < ~/path/to/wl_linux.c.diff

Then edit src/include/linuxver.h and change the line that says “#include <linux/autoconf.h>” to “#include <generated/autoconf.h>”. Finally, compile with:

# make
# sudo make install

Then you’ll have to disable the b43 and ssb modules by typing:

# sudo bash -c 'echo blacklist b43 >> /etc/modprobe.d/blacklist.conf'
# sudo bash -c 'echo blacklist ssb >> /etc/modprobe.d/blacklist.conf'

Now to fix the touchpad. Ubuntu, by default, uses the bcm5974 module to handle the touchpad. Sadly, this driver doesn’t support click and drag with the Apple touchpad, unless you use an experimental version. However, the experimental version doesn’t play nicely with the kernel we just installed (in fact it won’t ever compile), so we’ll use another driver called multitouch, which allows two-finger click, two-finger scrolling, regular clicking, and click-and-drag[2]. So type in the following commands:

# sudo aptitude install xserver-xorg-dev
# git clone http://bitmath.org/git/multitouch.git
# cd multitouch
# make && sudo make install

Now you need to add the following lines to /etc/X11/xorg.conf:

Section "InputClass"
        MatchIsTouchpad "true"
        Identifier "touchpad"
        Driver "multitouch"
EndSection

Now reboot again, and you should have click-and-drag support! I’m not sure how this whole modern xorg.conf thing works, so it might get erased when you update X11. In that case, you can add it again; just remember what you did. I noticed that the mouse was quite a bit faster with the multitouch driver, so I tweaked the acceleration and sensitivity in the Gnome mouse configuration (System->Preferences->Mouse.)

The new kernel should get your power usage to around 20W. To take it down to 17W, you can change your SATA link power management policy and increase the VM Dirty Writeback time thusly:

# sudo bash -c 'echo 1500 > /proc/sys/vm/dirty_writeback_centisecs'
# sudo bash -c 'echo min_power > /sys/class/scsi_host/host0/link_power_management_policy'

You might want to put these in a script of some sort that runs at startup, but I haven’t done this yet.
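
If you do want to script it, one low-tech option on Ubuntu is to drop the two lines into /etc/rc.local, which runs as root at the end of boot (just a sketch; keep the exit 0 at the end and make sure the file is executable):

#!/bin/sh -e
# /etc/rc.local -- executed once at the end of the boot process
echo 1500 > /proc/sys/vm/dirty_writeback_centisecs
echo min_power > /sys/class/scsi_host/host0/link_power_management_policy
exit 0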

Finally, go into System->Administration->Hardware Drivers, and install the latest NVIDIA driver. Reboot. If this took you a few minutes to do, your MacBook should be pretty hot by now. After rebooting, just for fun, let it sit for a few minutes – running all the while – and come back to it. It will feel a lot cooler. I can hardly tell that my computer is on right now, and I’ve been using it for a few hours. According to my calculations, I should now get 6.6 hours of battery life. In real life this is probably less than accurate, but it’s a decent estimate based on the capacity of my battery (4816 mAh, or 17.34 kC, for the electrical engineers among us) and my current power usage (an average of around 10 to 12 watts). I’ll try to actually test this over the next few days and post an update of my actual battery usage.

Power Flower Demo

Yeah, that's right, you know you like the plaid fedora and shades.

For the past few months, there has been an idea rattling around in the back of my mind. The idea was to build a performance art-ish music device that could be controlled by “planting” fake plants in fake pots. Well, last Tuesday, the day before my first day at McGill, I decided that it was time to build it. I used USB extension cables, the female end in the pot and the male end on the stem of the flower, to enable the user to “plant” the flowers. I wanted each flower to somehow generate the music, but that turned out to be more complicated than I wanted the project to be, so instead I connected the signal lines on each of the flowers, and then connected the signal lines on the female ports to an Arduino. When a flower is plugged in, its circuit is completed, and the Arduino tests the pins to detect this. Whenever the state of one of the ports changes, the Arduino sends the list of which ports are plugged in to a Python script on the computer.

The Python script sporks a ChucK shred, which plays the music. ChucK is an interpreted programming language that is specifically designed for audio generation, which makes it perfect for this project. ChucK is beautifully simple to use and learn, the virtual machine can be run in a loop as a server, and scripts can be added to or removed from it on the fly. This is similar to “forking a thread,” but in ChucK you “spork a shred.” Because the creators are awesome that way.
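
From the shell, the on-the-fly part looks roughly like this (the .ck file names are made up, and the exact flags may vary between ChucK versions, so check chuck --help):

chuck --loop &        # start the ChucK VM as a server, waiting for shreds
chuck + flower1.ck    # spork a shred into the running VM
chuck + flower2.ck    # add another; they play concurrently
chuck ^               # print the VM status (shred IDs, how long they've run)
chuck - 1             # remove the shred with ID 1
chuck --kill          # shut the VM down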

I plan to write my own shreds, but I wanted to create a quick demo video, so I used a few of the sample shreds that come with ChucK. Unoriginal? Yes. Lazy? Yes. But I gave full credit for the shreds to their creators (Ge Wang and Perry Cook), and I plan to create my own shreds in the near future that will be more fitting to the medium. Besides, my goal was to demonstrate the hardware, not the software. (As a note, I did look for copyright information in the shreds, and didn’t find any. I don’t think there is a copyright issue, since the shreds are just software creating the music, and not the music itself, but I could be wrong. If I am, please, please, PLEASE let me know, and I will rectify the situation.)

Paper towels as DIY softboxes FTW!

I recorded a video of a small piece performed with the above example shreds, and it turned out pretty well! See it here: http://www.youtube.com/watch?v=R-WKUTi2_X0 (ignore the note at the bottom of the video claiming that the song used is “Twitter” by Watchmen; it is not). I have listened to that particular song, and while the bassline is quite similar, the song is completely different. This is frustrating for me, because it discredits my achievement and makes it look like I’m simply ripping someone else off. However, I understand that YouTube has a lot of copyright issues to deal with, so I’ll be patient and wait for my dispute to work its way through YouTube HQ.

Update: YouTube has removed the notice saying that my video contained copyrighted material. Thank you, YouTube! I’m not sure when it happened exactly, but I’m impressed that they managed to get it fixed so quickly over the Labor Day weekend.

Yes, I’m Still Here

“I should post something on my blog every single Wednesday,” I said to myself a few weeks ago. “I can just write things whenever I get time and set them to post automatically. That way I can post on a schedule, but I don’t have to write on a schedule. It’s perfect!” Needless to say, that plan went directly to hell. A combination of school and outside activities to which I had committed myself weeks ago conspired to usurp any control I had over my schedule. Sadly, this means that my projects over the past few weeks have been a little thin. Since this was intended to be a blog detailing my adventures in building, hacking, cracking, designing, soldering, CAD-ing, Makerbot-ing, woodworking, sewing, etc, even though my projects aren’t all that exciting, I’ll post a few to show off what I’ve been doing for the past few weeks.

I tore apart a hard drive and stole its magnets. I needed magnets.

  • Installing Ubuntu 10.04 Beta. I ❤ Ubuntu. 10.04 includes alsa drivers that make my Mac happy and, by extension, make me happy. I can now listen to sound again, and everything else just works. My power consumption is about 9.8W, which gives me an estimated battery life of 4.5 hours or so. Not quite up to Mac standards, but with a little tweaking I’ll get there soon enough.
  • Kite Aerial Photography – Yes! Yes! Yes! Attempt number two was a WIN!! This time I built my rig out of aluminum and built it to be much more stable. Check out the pictures here. They’re great! Try number three is coming up soon – I plan to take pictures outside Marianopolis (drop me a line if you want in.)
  • The Gedanalyzer – For the Marianopolis Laptop Orchestra (MLOrK) this year, I tried to write a program called the Gedanalyzer. (I’ll get to the tried part in a minute.) Basically, it watches the network for data, and then plays a sound whenever it sees information from a certain computer. This is similar to what I did for MLOrK last year, except this time the sounds were going to play to a beat, and it would also display an image whenever it received data, creating a Gedan-like presentation. Why didn’t it work? Because the only Java PCap library (the part that listens to the data) I could find only works on Linux and Windows, and this was all before I installed Ubuntu 10.04. I tried installing Ubuntu 9.10 on Parallels and VirtualBox, but neither could play sound from Java well at all. I’m planning to finish this project soon.
  • Medea – Yeah, that’s right, I do theatre too. I’ve been known to act on occasion, but my real passion is technical theatre. I like operating the light board. I really like operating the light board. It makes me happy. Right now my light board consists of a USB to DMX converter box and a program called Chameleon. Aside from the lighting for PreMed: The Musical, this was some of the most kick-ass lighting I’ve done at Marianopolis thus far.

    It didn't look so dark and dingy in real life. And I can't claim credit for the set, which was beautiful.

  • Contemplation – About what I want to do this summer! My plans include building a 4-bit computer from transistors (as much as possible; there might be a few TTL IC’s in there as well,) trying to hook a BeagleBoard up to a portable touch-screen and powering it with LiPoly batteries (can you say “LiPad?” Linux iPad? Hell yeah!), fixing my Makerbot (the plastruder plastruded itself apart,) building a futuristic looking Laptop podium specifically designed for Skyping, and a few other things I’ve been thinking about.

So that’s all I’ve been up to. Come to think of it, I did manage to do quite a few things when I wasn’t working on things for school and outside of school. At any rate, this summer will bring more projects, more fun things, and more sleep. I can’t wait.

This is Gallagherrrr. He's a sheep. He says, "Hello."