Thursday, April 29, 2010

Serious Uptime

I came across this article (Humming away since 1993) today; it definitely caught my eye and brought back fond memories of the start of my career.

It's about a server shipped by Stratus Computer that has been up and running since 1993. I consider my career to have started at Stratus (first as a Co-op, then part-time through the rest of college), and that was in fact in 1993. So this computer shipped from the first company I worked for, around the time I started working, and it has been running for my entire career so far. Crazy!

The money quote: "Around Y2K we thought it might be time to update the hardware, but we just didn’t get around to it."

You usually think in terms of whether something you worked on might still have the code out there running somewhere (I'm sure I have code dating back to 1996 running on Nortel Contivity switches somewhere out there, and code dating back to 2000 running on CIENA switches). But to think about an actual instance of hardware up and running nonstop for that long just kicks it up to a whole new level.

This made me reflect on my time at Stratus. It was a great place to start, and it was another era. It was before open source was really widespread, and before you could go online and get answers to your programming problems almost instantly. Everything was in your head, in-house, or in a book on your shelf, and all the expertise needed to be inside the company.

From a technology point of view it was great experience. It not only helped ingrain *how* to think about high availability and fault tolerance, but also that it *should* be thought about in the first place. The best lesson is probably that it forces you to think at a full-system level. Everything was redundant in the hardware - power supplies, memories, CPUs, backplane, boards. Anything could fail and be removed and replaced without the system missing a beat.

Now, this is super expensive of course. And around that time (1993-94) Stratus itself was moving away from mainframes with fault-tolerance to high availability clustering approaches. But still - it's cool as hell that there are kids out there driving cars around that were born after this thing first booted.

I worked in the HAL (High Availability LAN) group, mostly around the development of FDDI - itself a fault tolerant networking technology.

I got to work at the application level, in kernel code, and - what I especially enjoyed - in an embedded environment: the firmware running on the FDDI board itself.

It was a great kick-start to my career because Stratus had layoffs followed by attrition, which left me the sole software engineer running the FDDI project for a good chunk of time, right around when my Co-op stint ended (after which I was a part-time software engineer). Even better, the hardware engineers who had started the project had also left the company. There was great fun to be made of the fact that the project was being led forward by a Co-op and a couple of lab techs.

I suppose if it had been 2003, then the cool thing would have been to drop out of college and start my own company. But in 1993, having responsibility for a full project within a large company was good enough! I loved having a challenge to rise to and doing it.

In the end, though, I can't believe the article didn't say whether this server is running VOS or FTX. I bet VOS.

Saturday, April 10, 2010

A Strange Anniversary

A couple of days ago marked the two-year anniversary of my last paid workday. Well, hopefully not my last *ever*. I joked with my wife that we'd celebrate by not going out to dinner.

I'm lucky enough that this was voluntary. It has resulted in the best two years of my life, without a doubt.

The plan at the time was to find an idea that would become a company I would start. I thought it would actually be one of the ideas I already had. I had worked only for startups - three of them - since 1996, the year after I graduated college.

I was looking to get out of networking and telecommunications specialties and work in the more general consumer internet space.

We found out my wife was pregnant before I made the plunge, but that timing was really perfect. It allowed me to be working on my own projects at home through the pregnancy and has allowed me to be working from home through the entirety of my daughter's life so far (17 months).

I worked on a few of my own projects independently, and worried my wife a little bit when I'd move from one to the next - just when she was getting used to the idea that the one I had been working on was going to be "the one".

The whole time I figured the worst case was I was acquiring experience in *lots* of new technologies that I'd put to good use at some point. And as it turned out, that was the perfect warm up for what came next.

Last fall I got introduced to someone looking for a technical cofounder, and the result is now Yieldbot.

Which is a great way to mark the two-year point. We recently launched in private beta, and we're learning a ton from our first customer experiences and being pushed by customer demand. Which is good, as it should mean not too much longer before my little family can spend some money again.

It's been a heckuva two years - wouldn't change a thing.

Wednesday, April 7, 2010

Long Tail

This isn't about the usual "long tail" you hear about, but about something similar that for some reason I find amusing - having just received a check for $39.78.

Back in 1998-99 I wrote a book on a VPN networking protocol called L2TP (Layer-2 Tunneling Protocol). I had written (in C/C++) our L2TP (as well as PPTP and L2F) implementations at my first startup, New Oak Communications, and then had gotten involved in the IETF process around the standardization of L2TP.

I wasn't looking to write a book, but based on some I-D's I had written on extensions to the base L2TP at the time, I was contacted by an editor in an email and asked if I'd think about writing a book. I had no illusions this would be a best seller, and said yes expecting (correctly) that it wouldn't really pay back in money the time I spent on it, and that it would be a useful experience.

This makes me think of the long tail in two ways. First, this is obviously a niche subject. I wrote it for the audience of software developers who would be implementing the L2TP protocol, and secondarily for those that might be involved in the network planning around deploying its use. Yeah, there's gonna be a lot of those.

Honestly, I completely forget I even wrote a book unless one of two things happens. First, someone says (for some random reason) something about "your book" to me; it usually takes me a good 5 seconds to realize what they're even talking about. Second, twice a year I get the royalty statement. I say "statement" because it only becomes a royalty "check" when more than $25 has accrued to me.

So that's the second, and most significant, way it seems like a "long tail" to me. Because a full 11 years later, the royalties still dribble in on this seriously niche-subject book. I wonder how much longer the poor publisher will need to keep sending me these statements.

It's most interesting to me just to see that 11 copies sold in the second half of 2009. It must've been the holiday season. I'm really surprised it's more than zero at this point.

It's sold 4046 copies over these 11 years (technically only 10.5 so far), which is pretty cool. All in all it was worth the experience. The most surreal thing was one day years ago coming home to a small package that came in the mail from the publisher that contained a few copies of the Japanese translation. That was worth it just for the joke from my mother that she understood about as much of it as the original in English.

The thing I'm happiest about is the positive reviews it got on Amazon. That's what I was most worried about back then: someone's potentially negative take on something I put real effort into, posted for all to see. That was a new thing back then, and it was scary.

Anyway, whenever this statement comes it just makes me chuckle. I was pretty sure I wouldn't break the $25 threshold again and get a check. I've got to imagine though that this one really will be the last one.

Saturday, April 3, 2010

MongoDB Sequences

I came across an issue today with MongoDB, the first one where SQL would have had a simple answer - sequences.

If you're familiar with SQL, you know this is very simple: declare a column as an auto-increment column, or use a sequence. With MongoDB there's no built-in capability for this (as of 1.4.0, which I'm using now - I wouldn't be surprised to see some built-in sequence capability in a future release).

MongoDB does have a collection type that preserves insertion order, the "capped" collection, but it's not really meant for this purpose.

I found some hints online about how this could be done. Since none of those were satisfying and I ended up coming up with my own relatively simple way, I thought I'd share it as food for thought.

The approach is to have a javascript function saved to the database that can be called from the client to do our bidding. The client calls db.eval() to invoke this function to insert the object for us.
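As a sketch, storing such a function can be done through MongoDB's special system.js collection, which makes it available by name to code run via db.eval() (the name insertWithSeq here is just my choice for illustration):

```javascript
// Store the function server-side in the special system.js collection.
// Functions saved there are in scope for code invoked with db.eval().
// "insertWithSeq" is just an illustrative name.
db.system.js.save({
    _id: "insertWithSeq",
    value: function(coll, obj) {
        // ... the insert function shown below ...
    }
});
```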

To set this up, I created a collection named "sequences" where the "_id" of each entry is the name of another collection in the database that I want to be sequenced. The collection just needs to have an entry with an initial value for the sequence (take your pick) before the insert function ever gets called.

For instance, if I had a collection named "foo" I would start with an entry like:
{_id: "foo", seq: 1}

To insert an object into the database I invoke db.eval() passing the function the name of the collection to insert into, and the object to be inserted.

The function that does the insert is:

function(coll, obj) {
    var s = db.sequences.findOne({_id: coll});
    s.seq++;
    db.sequences.save(s);
    obj['seq'] = s.seq;
    db[coll].insert(obj);
    return {'seq': s.seq,
            'error_status': db.runCommand("getlasterror")};
}

I'm returning the sequence that was allocated (which I keep track of in my use case in the case where the insert was successful) and the error information associated with the insert. That way if the insert failed for some reason (like an index uniqueness constraint violation) I still find out about it.
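Putting it together, a shell session might look something like this sketch (the collection and field names are placeholders, and it assumes the function was stored in system.js under the name insertWithSeq):

```javascript
// Seed the sequence for collection "foo" once, before any inserts:
db.sequences.save({_id: "foo", seq: 1});

// Insert through db.eval(); stored functions are in scope server-side:
var result = db.eval(function() {
    return insertWithSeq("foo", {name: "bar"});
});

// result.seq is the allocated sequence number, and result.error_status
// tells us whether the insert itself actually succeeded.
```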

Some caveats are probably in order.

I'm not in a sharded environment (yet), and when I am I suspect I will have to revisit this.

This also isn't the most efficient approach for high performance because db.eval() monopolizes mongod, so depending on the database usage pattern this could be pretty disruptive. On the other hand, this mongod behavior effectively acts as a lock and means calling this function will be atomic. I'm going to wait and see if this is actually a performance issue in my environment, however, before implementing an approach that brings more complexity into the application.

Whatever the case, I thought this was an interesting way to solve the problem, as it's a pretty straightforward analog to SQL sequence functionality.

By the time I need to do it differently, maybe there will be a native way to do so in MongoDB.