Which Distro? An Introduction to Picking “a Linux”

Every few days, someone on the Linux users group on Facebook posts a question that goes something like this: “I’m new to Linux. Can you recommend a good distribution for blah?” – where blah is usually something like gaming, media, or learning Linux. Like many people who are new to Linux, I tried out a lot of distros when I was first exploring the Linux world. Eventually, I came to the conclusion that it doesn’t matter which distro you choose. Below, I’ll explain a bit more about what a distro is, what kinds there are, and why it does and doesn’t matter.

What is a Distribution?

It’s hidden right in the name, but it’s not immediately obvious: a distribution of Linux is the Linux kernel, packaged up and distributed in a usable form. Linux, by itself, is just a kernel: the part of an operating system that manages the hardware and provides the interfaces that let software use it. If you were to boot the kernel all by itself, the machine would start and do nothing. It would spin the fans and hard disks faster or slower, handle new device connections, and maybe even accept user input, but the computer wouldn’t be usable, because the software that interacts with the user (you) is not part of the kernel – no text terminal, no on-screen menus or mouse pointer, no nothing. Once the kernel is running, it spawns other programs that handle user interaction. This is a basic model of how operating systems work.

So, apart from some basic patches and alterations that the makers of Linux distros might make to their specific release of the kernel, the underlying kernel is basically the same from distro to distro. The difference is in the software they install around it, and that’s also one of the reasons Linux is more customizable than any other popular operating system today. For example, there are quite a few different graphical interfaces available for Linux – KDE*, Gnome, Unity, XFCE – and they all look and behave differently. And that’s just a very, very small portion of the desktop environments out there. So two different distros with the exact same kernel can look and behave very differently on the surface. This is, to a large extent, how various Linux distros differ: the packages included with the base system, and their initial configurations. Ubuntu and Kubuntu, for example, are two distinct distros, yet they are mostly the same except that Ubuntu ships with the Unity desktop environment and Kubuntu ships with KDE. One could easily uninstall Unity from Ubuntu and install KDE.

Major Differences: Package Management

Arguably the most fundamental distinction between various distros is the way they facilitate software installation. On a base Linux system with no package manager, the way you install software is by copying the executable binary and its shared objects to the proper locations. This isn’t the easiest or most convenient thing to do, as it doesn’t let you easily keep track of which software package is installed where, or what version it is. So most Linux distributions ship with a package manager like apt (Debian-based), yum (Red Hat), portage (Gentoo), or pacman (Arch). These package managers will not only install packages from a central repository – all of which have been checked for compatibility with the other software in the repository – but they will install all the dependencies as well. Anyone who has ever tried to install a package from source on a freshly installed system will tell you that this is a great relief, and saves hours of hunting online for tarballs**.

Which package manager you choose is largely a matter of preference, and this is the basis on which I think you should make your decision. Debian-based distributions come with “apt”, which is – from my biased perspective – a reliable, relatively easy-to-use package manager. Distribution upgrades (i.e. major upgrades) can be done without nuking the system and starting over, apt supports multiple architectures on the same system (e.g. installing 32-bit packages on a 64-bit machine), and the whole thing is pretty painless. Even the most frustrating problems can sometimes be solved by throwing around a bunch of apt commands semi-brainlessly.

Red Hat/Fedora-like distributions (RHEL, Fedora, CentOS, and Oracle Linux) use “yum”, which installs rpm files. (openSUSE is also rpm-based, though it uses zypper rather than yum.) The last time I used yum was years ago, so my opinion isn’t worth much in this regard. I’ve heard that you have to nuke a Fedora system in order to do a distribution upgrade – that is, you have to do major upgrades by wiping the system and starting over – but I don’t know if that’s still the case. If you’re a beginner, I’d recommend using one of the other options, but this is my wholly biased opinion; take it with a grain of salt (and maybe a trial of Fedora in a VM).

Gentoo (and probably some other distros based on Gentoo) use “portage”. Portage is pretty freaking cool, in that it compiles every single package from source. It’s a somewhat agonizing experience (although less so on today’s faster machines), especially if you want to install a huge software package like KDE. But the benefit of doing things this way is that every binary on your system is optimized specifically for the box sitting in front of you (or under your desk, or wherever it is you have the thing). It’s more useful if you actually know what you’re doing and can manipulate the various compiler flags effectively, but I’m sure there’s some speed-up even if you don’t entirely know what’s going on under the hood. Gentoo is my favorite distro for learning the ins and outs of Linux, and if you’re a first-timer and really want to dive into Linux and get a good head start, I can’t recommend enough that you take the time to do a full, manual Gentoo install. Just… uh… don’t be discouraged if you fail the first time. Or the second. You’ll learn a TON, trust me.

My experience with other package managers like pacman is minimal. I used Arch for a while, and it’s a very nice distro. It’s something of the best of both worlds between Gentoo and more user-friendly distros like Debian.

Smaller Distros

The Internet is replete with smaller distros with funny names – too many to mention. Most of them are offshoots of one of the main distributions I’ve described above, with various configuration changes. There are some medium-sized distributions as well (Linux Mint, Puppy Linux, etc.) which tend to do a pretty decent job, and are sometimes designed for very specific situations. Puppy Linux and Damn Small Linux, for example, are designed to be very small and lightweight, and are especially useful for booting from a CD or USB key to do system repairs. Linux Mint, in particular, is a refreshing spin on Ubuntu. I tend not to trust the really small distros, though (the ones you’ve never heard of, with websites straight out of the ’90s), because I’m dubious as to whether they’ll continue to be supported in the future, and whether they’ve been tested thoroughly. There are probably good ones out there; I just don’t shop around too much anymore.

Choices

In many ways, it all comes down to choices, and the number of them you want to make. If what you want is a plug-and-play operating system that isn’t Windows or Mac OS X, go with Ubuntu, Linux Mint, Fedora, or a similar distro that has a one-and-done type install: you pop the disk in the drive, set up your language, time zone, and login credentials, and away you go. These distros have default packages that support most of your day-to-day needs, and it’s fairly easy to install components that aren’t pre-installed. They work on most of the common hardware out of the box, and they have a lot of online support options.

If, on the other hand, you want to make the choices yourself, choose a distro like Gentoo, Arch, or Debian. Gentoo and Arch, in particular, don’t even choose a default desktop environment for you, so you can choose any configuration you want right from the beginning without having to undo someone else’s work. One time, I installed Gentoo only to realize that I had disabled the kernel configuration for my hard drive controller, so the system couldn’t boot: that’s how much control you have. Debian has some base packages that install a very minimal system, as well as some options that will install a lot of common packages for you. It’s more immediately usable than the other two, but allows you to install a minimal system if you want.

At the far end of the spectrum of choices is LFS: Linux From Scratch. You compile the kernel from scratch, and gradually start loading things on the disk until you have a working operating system. I’ve never done this, but it’s always been in the back of my mind. You can find resources for doing that here: http://www.linuxfromscratch.org/

Stability

One last thing I want to mention is stability. Stability is probably the second most important dimension of a distro, and I would be remiss if I didn’t talk about it just a bit. If you’re cycling through different distros exploring the Linux world, you might not care too much about stability. Honestly, if you play around with things enough, you’re going to wreck your distro no matter how stable it is. But if you’re installing Linux on a machine you care about, stability is very important.

Because the distro packagers are usually on the same team (or are the same people) as those who maintain their distro’s package repositories, their attitudes and values contribute to how stable the resulting collection of packages will be. Debian, for example, is known for being fairly conservative and fanatical about FLOSS (Free/Libre Open Source Software), which makes for a very stable, very reliable system, and makes it a bit harder to install proprietary software (not much harder, though). Ubuntu, on the other hand, is less gun-shy, and uses more up-to-date packages at the expense of a slightly increased probability that its packages will have unresolved bugs. It’s worth doing some research to find out the attitude of a prospective distribution’s repository maintainers before making your final choice.

Stability is the main reason I forsook Ubuntu years ago, and now only use Debian. Ubuntu is the only operating system I have ever installed (aside from Windows) which has crashed during or right after installation***. It’s a great distro that is paving the way for lots of innovation and publicity in the Linux and open-source world, and it has become a stepping stone (at the very least) for new users, but I don’t like the way they choose their packages, or the default packages that are installed. And, if I’m honest, even though it’s a small and easily fixable issue, Unity absolutely drives me up the wall.

Conclusion

Hopefully this will help you choose the distro that’s right for you. You should play around with a few of them and read up on them (Wikipedia is a great place to do this) before picking the one you intend to use for ever and ever… and always know that you can change your mind at any time. As you learn to use Linux, you’ll likely realize that you wish you had done certain things differently during your installation, so you’ll likely be itching to re-install after a while anyway.

If you’re curious, as you might already have guessed, I install Debian on everything I get my hands on: my desktop, my parents’ computers, Raspberry Pis – hell, I’d install it on my toaster if I could. After administering around 20 Debian machines during my two years as a sysadmin, I’ve come to appreciate its elegant simplicity and robustness, and I wouldn’t replace it with Ubuntu if you paid me. But that’s just my opinion; I encourage you to draw your own.

*It has been pointed out – and rightly so – that the K Desktop Environment (formerly referred to as simply “KDE”) is now properly called “KDE SC”, for “KDE Software Compilation”. For simplicity, however, and since it is still referred to in the Debian repository and popularly as KDE, I’ve left the incorrect acronym as is.

**A member of the Linux Facebook group pointed out that newcomers to Linux might not know what a “tarball” is. Tarball is slang for an archive created with the Unix tar utility (often compressed as well), usually with the extension .tar, .tar.gz, or .tar.bz2. Source code for many open-source packages comes packaged in a tar archive.

***It’s true, I’ve had many a Gentoo installation crash on me at or before startup, but that was always because I had done something stupid, and was entirely my fault. The same opportunity doesn’t really exist in Ubuntu; I’ve had an installation crash once, then succeed after installing again with the same options, for no discernible reason.


Projects: I Need Them

Yesterday, I went to McGill’s Tech Fair. All manner of companies were there, scouting talent and taking applications, looking for young students who need jobs. Needless to say, they found plenty, and I was one of them.

It was the first time I had ever been to the Tech Fair, and I wasn’t entirely sure what to expect. I’ve been told my CV is impressive, but I didn’t know what companies would be looking for, or even what – exactly – I was looking for. Among the myriad mining companies (I counted around three gold mining companies) and engineering firms, I managed to find a few software companies that interested me. I chatted, asked questions, and tried to make myself seem knowledgeable, curious, and passionate. The one question I wasn’t prepared for, however, was the question I expected to be the most prepared for: what projects have you done lately?

I’ve always thought of myself as someone who does projects. I’ve always done projects. Ever since I was a kid, I’ve been building things, even to the detriment of my own schooling. I have all but four volumes of Make Magazine that have ever been published. MacGyver is my hero. On any given day, I would rather code for five hours on one of my own projects than for one hour on an assignment for school, and yet I couldn’t think of a single project that I have done recently for my own interest, and of my own volition.

This realization has been a long time coming, I think. School and extracurricular activities (read: work) have sucked up a lot of the time I would ordinarily spend hacking and coding. And when I’m not studying or working, I’m usually too lazy and worn out to start working on something else. Sure, I’ve written some little programs here and there on the weekends or over the summer – coding is a part of life for me – but I haven’t really built any of the super cool, outlandish, crazy awesome projects that I used to build when I was younger and cared less about school. And that’s a shame.

So today is when it changes. This evening, I’m going to blow a TON of time that I could be spending on a DOZEN other things, writing a program that I’ve wanted to write for a few weeks. I’m not going to finish it tonight – I may not even finish it in a week, a month, or a year – but I’m going to start it, and I’m going to have fun. I’m going to tap into my passion again: I’m going to focus on what’s important.

My Starving Brain

This morning, as I ate my breakfast, I realized I was way too tired to go running. In fact, after staring out the window for about ten minutes, I realized that what I needed was a nap. The morning nap is a double-edged sword. Sometimes I really need a nap in the morning, but the price of napping early in the morning is that, unless I’m super crazy tired, I’ll become groggy and spend the rest of the day as a coffee zombie. Today, I was not quite as tired as I thought I was, and I turned into a coffee zombie.

After my nap, I got up, did some work, cleaned, read for a while, Googled my professors for next semester’s classes (I’m taking a class from one of the guys who invented quantum teleportation… yeah, I know), and then went to meet a friend for coffee… half an hour late. I’m rarely late for things, mainly because I know that I naturally tend toward lateness. I overcome this tendency by being compulsively early to appointments, classes, meetings, and outings with friends. Today, I was late because – although I knew we were meeting at 4:30 – in my mind I was supposed to leave at 4:30.

Combined with the fact that yesterday I forgot my violin when I left for my violin lesson, I was starting to worry about my mental health. All sorts of possibilities ran through my mind – not enough sleep, too much sleep, the new vitamins I’m taking – and then I realized what the most likely culprit was. I thought about what I’d eaten today: eggs and lentils for breakfast, a bowl of lentils, ham, and sauerkraut for lunch. That’s it. According to my calculations, that’s about 1600 Calories, most of it protein. It sounds about right, for having eaten a third of my allotted food for the day, but when you consider that 1) the Calorie is a ridiculous way of gauging food value*, and 2) eating since has made me feel more awake and like my brain is working, I think it wasn’t enough.

As soon as I got home from missing coffee, I bought a sugary mocha frappuccino and a rice crispy square, and I felt like my brain went from around 30% efficiency to 80%. Yes, the caffeine helped, but I had two cups of coffee this morning and it did very little for me. I’m pretty sure it’s the carbs.

So it seems there are two ways to fix this. First, I could pound down lentils like there’s no tomorrow; I’d guess I would need to double my lentil consumption to get enough carbs. Second, I could start eating some foods with a higher glycemic index (rice, for one). Either way, I think altering my diet to include more carbs will help.

I think it’s safe to say that today’s segment of Discipline Week was rough, but it was not a failure. Why wasn’t it a failure? After all, I didn’t run, I hardly woke up at 7, and I didn’t accomplish any of the other cornerstones of Discipline Week 2 that I’ve run into so far. It was a success because I learned something. This isn’t just about practising discipline, this is about learning how to be more disciplined. It’s about learning to control my urge to put things off, learning to increase my concentration, and improve my life and studies. Learning that I need to alter my diet in order to stick to my goals is just as valuable – if not more so – than sticking to them in the first place.

Sometimes a failure is even more valuable than a success. When you succeed, you might not necessarily know why. If you can duplicate those circumstances in the future, maybe you can even succeed again. But when you fail, examine your failure, tweak some variables, and try again, you learn more about the problem as a whole. And once you know enough about the problem as a whole, you can manipulate it to your favour.

I’m thinking about converting my morning run into a morning walk. A morning run seems like a big, huge step toward a life goal, and while it’s exciting, I don’t think it’s realistic to jump into it right away. Running at night is fine, but especially this week while my aunt is visiting from Chicago, I know I’m not going to get to sleep in time for a 7:30am run. We’ll see what happens tomorrow.

 

*I’m no biochemist, but protein, carbohydrates, fat, and fibre are all broken down by the body differently. Trying to assign a single scalar value to the energy they produce in the human body seems pretty silly. Also, different people seem to use different nutritional components differently, so assuming that everyone needs X amount of protein, fat, and carbs seems silly as well. Come at me, nutrition majors.

How to Heat Your Livingroom with Computers

It’s time I explained what all this cluster computer stuff is about. It’s kind of an insane project, but bear with me. Some of you may have played the Wikipedia game “Seven Steps to Jesus.” Basically, it’s a special case of the Wikipedia game Wikirace, where you try to get from one article to another as quickly as possible. In Seven Steps to Jesus, you try to get to the Wikipedia article about Jesus in seven clicks or fewer. Why Jesus? Why seven? Who knows, who cares. But the point is that it’s fun for about ten minutes.

This ridiculous game wandered into my mind the other day, and I started thinking, “I wonder how you could create a computer program that would play Seven Steps to Jesus…” The answer: it depends. If you give the program access to the entire Wikipedia database, then the task is trivial: it’s simple and quite fast to spit out the minimum number of steps from one article to another. But what if you don’t give the program access to the entire Wikipedia database? What if it has to go to an article and choose which link to click just like humans do: with “intuition”?*
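To show why the full-database version is trivial: with the whole link graph in hand, Seven Steps to Jesus is just breadth-first search. The articles and links below are a tiny made-up graph standing in for the real Wikipedia database, but the algorithm is the standard one.

```python
from collections import deque

# A made-up link graph standing in for the Wikipedia database.
LINKS = {
    "Potato": ["Ireland", "Vegetable"],
    "Ireland": ["Catholicism", "Europe"],
    "Catholicism": ["Jesus"],
    "Vegetable": ["Plant"],
    "Europe": [],
    "Plant": [],
    "Jesus": [],
}

def min_clicks(start, goal):
    """Minimum number of link clicks from start to goal, or None."""
    queue = deque([(start, 0)])
    seen = {start}
    while queue:
        article, clicks = queue.popleft()
        if article == goal:
            return clicks
        for link in LINKS.get(article, []):
            if link not in seen:
                seen.add(link)
                queue.append((link, clicks + 1))
    return None

print(min_clicks("Potato", "Jesus"))  # 3
```

On this graph, Potato reaches Jesus in three clicks – well under the seven-click budget. The interesting problem only appears once you take the database away.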

As you might have guessed, the task becomes more complicated. Now we’re talking about machine learning (FUN!). I started to think about how you could train an AI to assign a “relatedness” value to every article on Wikipedia by giving it access to the Wikipedia database and having it traverse the links from one article to another. If you’ve taken a class on AI, you know that eventually this relatedness value will converge to the shortest path. Basically, this program will do nothing useful… except heat my living room.
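Here’s a minimal sketch of that convergence claim, on a made-up graph: if each click costs 1 and the target is worth 0, repeatedly updating each article’s value from its neighbors settles at exactly the shortest-path distance to the target (so “relatedness” is just negative distance). This is the tabular dynamic-programming core of the idea, not the RL agent itself.

```python
INF = float("inf")

# A made-up link graph; "Jesus" is the target article.
LINKS = {
    "A": ["B", "C"],
    "B": ["D"],
    "C": ["D"],
    "D": ["Jesus"],
    "Jesus": [],
}

def distances(goal, links):
    """Sweep value updates until they stop changing."""
    V = {page: (0 if page == goal else INF) for page in links}
    changed = True
    while changed:
        changed = False
        for page, outgoing in links.items():
            if page == goal or not outgoing:
                continue
            best = 1 + min(V[n] for n in outgoing)  # cost of one click
            if best < V[page]:
                V[page] = best
                changed = True
    return V

print(distances("Jesus", LINKS))
# {'A': 3, 'B': 2, 'C': 2, 'D': 1, 'Jesus': 0}
```

Each sweep propagates the target’s value one more hop outward, which is why the fixed point is the shortest-path distance from every article.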

Except! Except that I’m going to train an RL agent in parallel. That’s the only thing that might be novel about this (other than the fact that I’m introducing the “Seven Steps to Jesus” task to the AI world). Ordinarily, you would want the agent to learn sequentially, because if the links change, you want the agent to learn with the changes. But in this case, I really don’t give a damn about sequential updates. Also, this task is stationary (the distance between any two articles in my downloaded Wikipedia database will never change), so updating sequentially doesn’t matter all that much.

So what you should get from this is that this project is a HUGE waste of time. But it’s fun, and I’m learning about graph databases and RMI, and I got to build and run a computing cluster. Maybe there’s a real task for which this approach is suited; I’m not sure, though. Usually in RL, you have to run a trial many times in order to test an approach, so there’s really no point in distributing the actual processing. In other words, if you’re going to run 100 trials of your schmancy new algorithm, you might as well just run one trial on each of five different machines until you finish, rather than splitting up the computation (which adds overhead) into five different parts.

The point is, I’m having fun. Leave me alone.

Discipline Week Update: Today was day four of Discipline Week, and so far so good. I’ve been trying to avoid napping, because I want to really embrace this 7am to 11pm schedule I’ve got going, but today I really needed a nap. I ended up sleeping for maybe an hour and a half, which is really too much, but we’ll see how things go tomorrow. I’ll write a more detailed post tomorrow about how Discipline Week is going, but I thought I’d let you know that it’s still a thing, and it’s going well!

*Yes, there are algorithms that do this quickly, but you’re still missing the point: the point is, this will be fun. Fun, I tell you, FUN!

My Livingroom Is A Supercomputer

As a computer science student, I like the idea of being able to run my code on the biggest, baddest computers there are. Unfortunately, most of my code doesn’t need more than my laptop. In fact, many of the programs I write would run just about as well on my phone, or an Apple II. However, sometimes I do write programs that require gobs of raw computing power. Last night, I conceived an idea for just such a program. Before I even started programming, I realized that my only hope of finishing computation on the large dataset I wanted to process (all the links on Wikipedia, many times) was to spread the computation across multiple computers. I needed a cluster. So I built one.

A cluster (short for computing cluster), for those who don’t know, is basically a bunch of computers connected via a network. You could think of the Internet as a computing cluster, but the important difference between the Internet and a useful computing cluster is that a useful cluster works toward a unified goal. As you might have guessed, part of the beauty of a cluster is that the computers involved don’t have to be all that powerful by themselves to be useful. Because my family doesn’t usually get rid of computers (precisely for occasions such as this), we had three old laptops lying around, all of which I confiscated for my cluster. Add my laptop to that, and I have a cluster of six cores. It’s not a whole lot, but it may just be enough. I also have an old switch (what most people would call a “router”) that I saved from the trash during high school, which I’m using to connect the computers together. After installing Debian Linux, and Puppet to manage the configurations on all the machines, I’m almost ready to run cluster computations!

In order to run a cluster, you need a server to distribute the data you want to process, and clients to actually do the processing. In this case, I’m talking about the server and client software, not machines. So I have to write a series of programs that will split up my data and process it on each of the cluster computers. It’s simple enough when you use Java’s RMI (Remote Method Invocation). The server maintains a collection of the processed data, which is updated whenever the clients finish processing. Once a client has finished its data, it sends back its results and requests more.
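The real cluster uses Java RMI, but the work-pulling pattern itself fits in a few lines of any language. Here’s a sketch in Python with threads standing in for the client machines and a queue standing in for the server; the “processing” is a stand-in (summing numbers), not the actual Wikipedia computation.

```python
import threading
import queue

def serve(data, chunk_size, workers):
    """Split data into chunks; workers pull, process, and report back."""
    chunks = queue.Queue()
    for i in range(0, len(data), chunk_size):
        chunks.put(data[i:i + chunk_size])

    results = []
    lock = threading.Lock()

    def client():
        while True:
            try:
                chunk = chunks.get_nowait()  # request more data
            except queue.Empty:
                return  # no work left: this client is done
            partial = sum(chunk)  # stand-in for the real processing
            with lock:
                results.append(partial)  # send the result back

    threads = [threading.Thread(target=client) for _ in range(workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return sum(results)

print(serve(list(range(100)), chunk_size=10, workers=3))  # 4950
```

Because each client asks for a new chunk only when it finishes the last one, a fast machine naturally chews through more chunks than a slow one – which is exactly why my laptop and the old Athlon can share the same queue.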

This strategy has a lot of advantages, including the fact that I don’t have to store the entire Wikipedia database on more than one machine. Since the data is sent in chunks, I can store it all on one of the two computers in my cluster that can handle that much data. Also, the clients can work asynchronously. Since the computers don’t have to wait for each other before getting new data to work on, my laptop can crunch along at 2.53GHz per core, processing as much data as it wants, while the poor little AMD Athlon slogs through whatever data it can handle.

My livingroom table, which currently houses four computers and a switch. Delightful.

My cluster is called “End of the World,” or EotW for short. I was trying to think of a theme for the hostnames of the computers in the network, and I settled on characters from Haruki Murakami’s novels. Since there’s a “place” in his book Hard-Boiled Wonderland and the End of the World called “End of the World,” I figured that would be fitting. The nodes are named toru, watanabe, and sumire.

How much did all this cost? $20 for a power strip. Even if I didn’t have all the laptops, I could have waited until the morning, gone to the recycling center, and picked up a bunch of computers and a switch for free. That’s what I love about cluster computing: with just a little recycled hardware, you can create a pretty fast computer. It’s true, my tiny computing cluster barely rivals a new, mid-line desktop computer with an Intel Xeon processor, but it’s still faster than just my laptop. And hopefully, with all those computers working full tilt for a few days, I’ll be able to crunch through my data*.

 

*As I said before, I’ll discuss what I’m trying to do at another time. Trying to explain it here would make this post too long, and I want some tangible results before I start talking about it.

Valentine’s Day: Musical Hearts

Here it is, ladies and gentlemen: this year’s Valentine’s Day computer art project. For the past two years, I’ve done some sort of computer art project for Valentine’s Day. The first was a 3D printed heart with a red LED inside. Last year, I recorded my entire bus trip to school, and wrote a program to select only frames with a certain amount of red in them, and compile them into a video. This year’s project was slightly more ambitious.

My original idea was to take videos of artistically interesting things on my walk to school. A computer program would then find trackable points on the video, and “stick” hearts to those objects. The sizes of the hearts would correspond to the amplitudes of certain frequency ranges in a song that would play in the background… a song which I would compose and create on my computer.

Several factors led to my cutting out the motion tracking entirely (namely that OpenCV is complicated, and Adobe After Effects hates me.) However, the hearts respond to the sound of a song that I composed and performed using ChucK, which is a programming language for creating sound. If you hadn’t already guessed, the red heart is low frequency, pink is midrange, and white/light pink is high. While the result isn’t nearly as cool as my original idea would have been, I think it’s pretty nifty, and it was fun to make. I also learned about some valuable tools in Processing.
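For the curious, the core trick – splitting a frame of audio into low/mid/high bands and using each band’s energy to size a heart – can be sketched in a few lines. The actual project did this in Processing with ChucK audio; this is a self-contained stand-in with a naive DFT, a made-up test tone, and made-up band edges.

```python
import math

def dft_magnitudes(samples):
    """Naive DFT magnitude spectrum (fine for a short frame)."""
    n = len(samples)
    mags = []
    for k in range(n // 2):
        re = sum(s * math.cos(2 * math.pi * k * i / n) for i, s in enumerate(samples))
        im = -sum(s * math.sin(2 * math.pi * k * i / n) for i, s in enumerate(samples))
        mags.append(math.hypot(re, im) / n)
    return mags

def band_energies(samples, rate, edges=(250, 2000)):
    """Energy below, between, and above the band edges (Hz)."""
    mags = dft_magnitudes(samples)
    bands = [0.0, 0.0, 0.0]  # low (red), mid (pink), high (white)
    for k, m in enumerate(mags):
        freq = k * rate / len(samples)
        band = 0 if freq < edges[0] else (1 if freq < edges[1] else 2)
        bands[band] += m * m
    return bands

# A frame containing a 100 Hz tone: the low band (the red heart) wins.
rate = 8000
frame = [math.sin(2 * math.pi * 100 * i / rate) for i in range(256)]
low, mid, high = band_energies(frame, rate)
print(low > mid and low > high)  # True
```

Each video frame would map these three energies to the sizes of the red, pink, and white hearts; any FFT library does the same job much faster than this illustrative DFT.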

To record the videos, I walked from Westmount to McGill at 7:40am, in -15 degree weather, without gloves. When I got to McGill, I could hardly feel my hands. So it’s safe to say that I put a lot of effort into this video.

(There was initially a problem with the upload, which has now been corrected.)

For the sake of comparison, for last year’s video I used two programs, one of which I wrote myself (Cinelerra, and a Python script for selecting video frames based on color composition). For this year’s video, I used a grand total of eight. In order of usage (more or less): ChucK, miniAudicle, Soundflower, Audacity, Processing, Adobe After Effects, Adobe Premiere*, and a Java program that I wrote in Processing.

*I am currently upset with Adobe. As a student, I cannot afford a license for the Adobe Creative Suite, a marvelous collection of software. I also have neither the intention nor the ability to make a profit from the work I would do with the Creative Suite. If I spent my free time playing with Photoshop, After Effects, and all the other neat programs on my own computer, however, I might be able to make money from my work some day, at which point I would buy the Creative Suite so I could profit from my art. On the other hand, there isn’t a chance in hell I’m going to spend $899 on the Master Collection just to tinker. Therefore, it’s in Adobe’s best interest to offer a FREE (as in beer) version of the Master Collection to students for strictly non-profit, educational use. So there, Adobe: the ball is in your court.

Where Am I?!

You may have been asking yourself “Where has he been?! It’s been an entire week or so since his last blog post!!” You would be right to ask yourself that question, and I’m going to provide you with an answer. I’ve been around.

A Roomba/Create rather precariously carrying my laptop. I made sure to drive it slowly so my computer didn't fall off. It's safe to say that this was the smartest vacuum cleaner in the building.

A lot has happened since we last talked, so try to keep up. First, I started working at McGill as a research assistant, as I said I would. I haven’t started any of the actual machine learning stuff yet, though; right now I’m getting used to writing simple behaviors for the Roomba (e.g. move until you hit something, find a wall and crawl along it, etc.) so that we can use the sensor data collected during the behaviors to try some simple learning algorithms. It’s been a lot of fun so far. I’ve been riding my new used bike to work, which has been fantastic. It’s an oldish, classic-looking bike with an awesome front lamp that’s powered by a dynamo. I’m planning to put a capacitor in the lamp as well so it will stay lit while I’m stopped at traffic lights, but I’m holding off on that for now.

Yes, that's correct, I did model it after a Fender Telecaster. (Fender, please don't sue me: this is a one-off and a modified design. Also, imitation is the finest form of flattery, and you should think this is as awesome as I do.)

Plans for the one uke to rule them all are almost complete! I’m building a solid-body ukulele with pickups I made from popsicle sticks (which actually work quite well), and I’m hoping to have it finished by the end of the summer, but right now I’m still in the drawing/planning stage.

Feast your eyes on that delicious cake and pastry cream. You know you want it.

Several weeks ago I made a cake with two duck eggs and a goose egg, and then filled it with pastry cream made from one goose egg. It actually wasn’t as good as cake and pastry cream made with chicken eggs; the goose eggs lack a certain element of taste that cakes want, but the texture was the same. They were whiter than cakes or pastry cream with chicken eggs though, since duck and goose eggs have lighter yolks. Either way, it was a delicious excuse to eat cake. (Like you need an excuse to eat cake. Sheesh.)

They were delicious. And having eight of them for practically nothing made them taste that much better.

A few weeks before that, I made spring rolls. Once I realized that using a sushi roller for the wrappers was a horrible idea (they stick to the bamboo like no one’s business), it actually went quite well. They weren’t entirely what I was hoping for, but they were very good nonetheless. The mayonnaise sauce I made was delicious, and I’m proud to say I made it with no instruction at all.

It’s finally spring, and spring is rapidly turning into summer. Things are blooming, people are sneezing, and there are more people out on bike paths than I think I’ve ever seen. I saw some business guys in their suits riding along the bike path discussing business things, so I’m hoping this means biking will be the trendy thing to do in the next few years. Maybe Montreal will become another Holland. I’m not convinced it’ll go that far, but we can always hope.

So that’s it for my little summer update. I’m hoping to post some of the cool projects that have been rolling around in my head as soon as I can get them going. Until then, here’s an awesome picture of a tulip tree.

This is a tulip tree down the street from my house. It's pretty startling, especially since it has no leaves. Apparently the leaves develop after the flowers bloom and die.