Monday, October 11, 2010

team++ ftw!

Today's big news out of the Yieldbot offices is the addition of Soren Macbeth (@sorenmacbeth) to the team. Ok, so we don't have offices. If we did though...

Taking a step from being 2 people to being 3 people is huge. Taking that step like we just did, priceless. And now I get to work with @jonathanmendez *and* @sorenmacbeth every day. Woot!

In my startup experiences the early DNA of the team is the difference. Which is why having Soren join us just has "win" written all over it. From our first conversation it was clear our development philosophies were in alignment. I can't wait for Soren's mark to be made on the code base and on the product. Exciting times ahead!

Thursday, September 23, 2010

A Small Thought on Meaning, Humans, and Machines

I was attending Mongo Boston this week when a simple typographical oddity sent me on an interesting (and luckily brief) train of thought.

The simple occurrence of "mb" on a slide to indicate "megabytes" (it's in this presentation). First came amusement, because conventionally 'm' is milli and 'b' is bits. Of course expressing something in millibits is generally useless. So of course a human reading it (unless you're weird like me) doesn't give it a second thought.

Then I thought about the difference in how quickly I (and everyone else) make the right assumption about what "mb" in this context is really intended to mean.

And therein lies the huge challenge on the machine side if you're doing data mining and analytics. How do you infer intention automatically and properly, especially when you don't know ahead of time what type of mistakes or inaccuracies are going to be involved? Just the type of thing I've been having fun figuring out over the past year - at least for a particular problem space.

But something else interesting crystallized for me too at a human level. That this auto-correction that we all do so that we don't sit there confused by how one would break something into 200 millibit chunks is also the cause of many of our problems.

We need to be able to jump to conclusions in order to get past "mb" without having to ask a presenter for clarification, but we are also prone to jumping to conclusions when we shouldn't - particularly when attributing some intention to someone else's action.

Two sides of the same coin, one useful and the other problematic. Mostly we can't have one without the other, though. I guess with practice we hopefully get better at figuring out when to jump and when not to - though I think way too many people never figure out when not to jump.

If there were any typos above I hope your brain autocorrected them for you so you didn't notice.

Wednesday, September 8, 2010

The Best Wisdom is Portable

By which I mean that it can be put in multiple contexts and keep its status as wisdom.

I'm reminded of this over and over reading Seth Godin's book "Linchpin". I was expecting it would be a good book, but only a few pages in I was as excited about reading the book as maybe I've ever been starting off on a book. It just resonated so well with me, gave better voice than I could to vague thoughts that I've had, and went in brilliant directions I would have never thought of.

I'm now most of the way through, and I look back and it seems like almost at every page I wanted to stop and jot down at least one quote. Though I haven't (until now) because I couldn't bring myself to stop the flow of reading it.

The highly portable quote I hit today:

"Great bosses and world-class organizations hire motivated people, set high expectations, and give their people room to become remarkable."

This is one of the thoughts that so obviously resonates with me right now as we're thinking about putting together the team for our startup to take it to the next level.

But I feel like its true wisdom is that it resonates just as strongly in the completely different context of being a parent. It just strikes a chord with how I think about how we want to approach raising our almost-2-year-old daughter.

All children are motivated learners; they are naturally curious. So you get that for free. Setting high expectations and giving her room to become remarkable is where the magic can happen. I get the impression that too many parents forgot that over the last 20 years, and I hope that it's changing now.

This isn't the first case that this portability of wisdom in Linchpin hit me, and I think that is the real genius of the book.

Wednesday, September 1, 2010

Mosques, Corporations, and the First Amendment

Earlier this year I agreed with the Supreme Court majority decision on Citizens United that on First Amendment grounds Corporations and Unions could not be limited in their funding of campaigns.

The common argument made against the ruling, which I disagreed with, was the simple one that it declared "corporations are people" and that this was ridiculous - and that the First Amendment should only, and was only meant to, apply to individuals.

I felt that the inclusion of "the press" in the original document indicates that these Rights, while grounded in the rights of the individual, do actually extend to aggregations of individuals as well.

Fast forward to today and we have the "ground zero mosque" issue. Here as well I feel that under the First Amendment, this *organization* has the right to build their Community Center/Mosque wherever they see fit, and many are arguing the same. And it isn't because I/we think "churches are people" (or that "congregations are people"). Note that no single person is building or funding the community center. The First Amendment here is being invoked properly as applied to this group of people, and this should kill the "corporations aren't people" line of attack on the Supreme Court decision on Citizens United.

To put it in perspective, imagine the precedent had the Supreme Court ruled the opposite way on Citizens United. The decision would have been that the government can severely restrict the rights of aggregations of people, so long as the rights of the individuals are preserved in some more narrow sense.

Using this logic you could then say that the government could have the right to put restrictions on where a mosque gets built, perhaps "within reason" or some vague qualifier. For instance, it would be easy to argue using this logic that the government could require the mosque be built 10 blocks away from the proposed location, as there would be little argument that it would put the individual members in a position of not being able to enjoy "free exercise" of their religion - it would just be a little extra commute (for some of them) after all. If you disagree with that, imagine it again but with 1 or 2 blocks instead of 10.

Now one argument would be that churches exist for the express purpose of the exercise of religion and so it is more obvious that their rights more directly derive from the rights of the individual members. But then in the Citizens United case the Corporation in question was one that was created for the express purpose of putting out a political message. So its rights too were very directly derivative of the rights of the individuals that funded it.

Anyway, this is really a complex issue, much more so than "the Supreme Court thinks Corporations are people" line of reasoning.

Personally, I definitely do think that there is an overall problem of money and politics mixing. As I wrote about recently I think that transparency is a great part of the solution, as you saw in that recent case of Target dipping its toe into political contributions and the public backlash.

If we don't yet have full transparency on the money trail in politics, we absolutely should put every measure in place to make it fully transparent. The power of transparency in this age of information technology should not, I think, be underestimated.

I also think that it would be healthy to explore options to amend the Constitution to be more specific about rules around elections in particular, since they are an aspect of the direct running of government. The bar is somewhat high to get that started, and it should be. But that mechanism is there for us to make changes that we deem important enough (though as Prohibition showed, we can certainly be temporarily pretty stupid about things).

But as far as the first amendment goes, I'm happy to have it consistently apply to individuals and to the organizations that individuals choose to create and belong to.

Wednesday, August 25, 2010

Go Into (Technical) Debt With Eyes Wide Open!

This morning Bijan Sabet had a good blog post on being open and honest about "technical debt".

I think (and commented there) that it's important to get into such debt with your eyes open and plans for how to address it in the future.

If the context isn't clear, what's being talked about here is basically hastily put-together technology that accomplishes an early version of what was needed of a product, but is considered "debt" because it cannot remain in the state it is in. It is carried as a kind of liability because at some point, either because new features are required or because it isn't scalable, time and effort will need to be *paid* in order to make it more robust.

Having gone through the process of building multiple software systems from the ground up has clarified for me how technical debt should be planned for. And nothing clarified it more than working on a large project with a hefty amount of inherited technical debt that we successfully cleared off the books - and the monumental effort that it took to do so.

I find it important as you are putting a system together to consciously make the tradeoffs that are necessary and have a plan for how the code (and technology choices) would be iterated and scaled up when the needs arise.

As with monetary debt, this is akin to advising that you not take a loan without having a plan for how you're going to pay it back.

Along the way there are also very smart choices you can make to help reduce that debt. Layering and using adapters is one great example. It can be expensive to retrofit an adapter model onto something that was written assuming it would only ever have to interact with one particular technology. So abstracting early, even if you are in a hurry, will help you with your later "payment plan" at a fairly low up-front cost.
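As a rough sketch of what I mean (hypothetical names, in Python, not pulled from any real code base), the adapter is just a thin interface that the rest of the app codes against:

class DataStore(object):
    """Adapter interface the rest of the app codes against."""
    def save(self, key, value):
        raise NotImplementedError

    def load(self, key):
        raise NotImplementedError


class InMemoryStore(DataStore):
    """Quick-and-dirty implementation that gets an early version out the door."""
    def __init__(self):
        self._data = {}

    def save(self, key, value):
        self._data[key] = value

    def load(self, key):
        return self._data.get(key)


# Later a MongoStore or SqlStore subclass can be swapped in without touching
# any of the callers - that's the up-front abstraction paying off.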

Another good example is having an MVC model on your UI, which will help later when you need to put an API in place (and reuse the "M" and the "C").
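Same idea in miniature (again hypothetical names, no particular framework assumed): keep the "M" and the "C" free of presentation, so a later API is just one more thin view on top.

import json

def get_report(report_id, rows):
    """The 'C': shape the data once, knowing nothing about how it gets rendered."""
    return {"id": report_id, "count": len(rows), "rows": rows}

def report_html(report_id, rows):
    """Thin HTML view over the controller."""
    data = get_report(report_id, rows)
    return "<h1>Report %s</h1><p>%d rows</p>" % (data["id"], data["count"])

def report_json(report_id, rows):
    """The later API endpoint reuses the exact same controller path."""
    return json.dumps(get_report(report_id, rows))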

Something to watch out for is piling debt on top of debt (don't charge your mortgage to your credit card!). That is, when the feature comes up that needs to utilize another technology that is in need of repair, instead of making matters worse you should seriously consider at least starting to pay off the debt on the older technology.

The important bottom line is take the debt seriously and don't enter into it lightly. Yes, it's necessary, but have a plan. Things will come up you didn't expect, and you'll suddenly find liabilities where you didn't know they existed. But then you'll be freer to deal with those unexpected things since you're in better shape overall. And your future self will likely thank you for it.

Thursday, August 19, 2010

Decentralized Beats Centralized

The current uproar around Target and its political contribution that ended up supporting a Republican candidate whose beliefs many disagree with is, I think, very instructive and holds lots of lessons. (CNN info about it here.)

The biggest for me is that it shows how transparency and decentralized action can be a healthier political force in the country than centralized enforcement.

Other than an unfortunate glossing over of some important nuance (more on that below), I think this is playing out exactly as I would have hoped after the Supreme Court ruling earlier this year that corporations could make political contributions.

Even though I'm concerned with large corporations unduly influencing our government, I felt the Supreme Court ruling was correct. Previously, the regulations around what corporations could and couldn't do were too difficult to decipher. It's a difficult problem to solve by centralized fiat because there are many subjective dimensions to defining what might be allowed.

But what is important is the requirement for transparency. This puts the information out there to be dealt with in a decentralized way, which is exactly what is happening now. And, of course, there's never been a better time to make information available that could lead to decentralized forces making their voices known.

Basically the Supreme Court crowd-sourced the problem. They essentially said that instead of a central definition of what is acceptable, we will require transparency and let the masses do the job of keeping corporations in check. And the masses, unbeholden to "Mainstream Media" and armed with the tools of social networks and the virality that comes with them, have never been better prepared to do this.

The good lesson for corporations here is that they can suffer reputational harm by making political donations, even ones that might seem innocuous. This should serve as a brake on their making such contributions.

In fact, this is where the lack of nuance comes in.

The reality of the money flow is that Target contributed to a pro-Business group in Minnesota called "MN Forward" (www.mnforward.com). MN Forward then contributed to the ads for the politician in question. Their issues page lists: Tax Reform, Spending Reform, and Education Reform. Noble goals all.

The bad part about this loss of nuance in the story generally is that it means that people are assuming that Target and its principals have an anti-gay agenda. An objective look suggests that they contributed to a pro-business organization, who in turn backed a pro-business candidate who also holds non-progressive social policy positions (specifically, against gay marriage).

Now, the *good* part about this loss of nuance is that it will serve as a warning to other corporations. They will realize that their motivations for making a particular contribution won't matter as they can be undone by the perceptions about a candidate that might ultimately benefit from the contribution (even if at least once removed).

Let's hope we all keep doing our job and using all of the information we can get our hands on to our advantage, and solving the problem of corporate meddling in politics by being more a part of the process that can solve it.

Tuesday, August 10, 2010

SE is to CS as ME is to Physics

The past week I have had thoughts rolling around spurred by this post by Andrew Parker. The general theme of which is "how little a computer science education matches up to the real-world building of large scale applications." I thought it brought up a really good issue.

For me the issue comes down to whether you see a CS program as vocational in nature. Back when I graduated (in '95) I thought the CS program at my school (WPI) was too vocational in nature.

At WPI you do a major project your senior year as part of your graduation requirement. I've always leaned to the theoretical and mathematical side, and for my Physics degree my project was in the area of Quantum Mechanical energy degeneracies in multiple dimensions. No, I don't remember the equations in detail. But it landed me a published paper in the American Journal of Physics, so that was cool.

When it came time to do the major project for my CS degree I was the only one in my class not to work on some type of coding project. Instead my project was "Complexity Analysis of the Primitive Recursive Functions." I got all the coding experience I would need by being in a Co-op program and then staying on to work part-time after that, writing embedded firmware code in Assembly and C, and Unix (SVR4) drivers and application code in C. And I think that's the right mix.

(And yeah, I know, with degrees in those two fields you'd think I would've leaned to Quantum Computation or something, but I'm not *that* theoretically minded :-)

It seems to me the argument is around the "S" in CS being "Science". There's a real difference between "Technology" and "Science" that we should recognize.

This is obvious when you look at Physics (which I majored in) and Mechanical Engineering (which I also majored in briefly before switching that one to CS). In ME there is more of a lean toward technology. There's no technology in Physics itself, other than around the instruments for running experiments.

I'd argue that the field of computation needs just the same type of split (with cross-over training, of course) that exists between Physics and ME.

I think there's already been one split that's happened that makes sense, which is in the field of Electrical Engineering with a focus on computers and lower-level programming - I think I've seen this referred to as ECE.

If we then split off "Software Engineering" as a full-fledged major, that feels like: SE is to CS as ME is to Physics.

Let's be honest, there'd be a *ton* more SE than CS majors, but who cares? At WPI when I was there we probably had at least a 10:1 ratio of ME to Physics graduates. Just like ME majors have to start with some Physics courses, SE majors would have to fulfill the basic CS courses.

The upside is that the SE majors would have a clear focus and would probably come out with much stronger industry skills that would enable them to start building things right away. Again, pretty close to the analogy to ME and Physics.

I'm sure there must be places where this split of SE off of CS is already happening. If that is the case, it should just be happening even faster :-).

I was also motivated to write this by a couple of mathematically-oriented CS stories that came out in the past week. The first was around the attempted P!=NP proof, which brought back memories of my complexity theory studies. For the latest on that, this blog is following it.

The other story that caught my eye was around a proof put forward that any position of the Rubik's Cube can be solved in 20 moves or less (http://www.cube20.org/).

They served as reminders to me that a CS education is much more than learning about pointers, or whether you can create anonymous classes in Java easily, or how to work with a source control system, or how to provision and manage a Hadoop cluster.

Honestly, compared to when I graduated (it wasn't *that* long ago was it?) all of that can be learned and played with online. With guides, and open source projects looking for you to contribute. And, as a friend reminded me today, things like Google's Summer of Code to really go head down into implementation.

I'm sure there's never been a better time to learn about and become a great software developer. And really I think there's more and more proof that on the "technology" side (as opposed to "science" side) it doesn't even take going to school anymore.

Tuesday, August 3, 2010

Memory Lane

Here's to the launch of the TRS-80 back on 8/3/1977 (Wired article: http://goo.gl/njnJ). I remember the model well because I had a "pre-owned" one back in the early 80's.

At the time it was an awesome upgrade from my first computer, the Timex Sinclair 1000 (although I had the 16K expansion unit for $50 for that baby!)

The funny thing about the TRS-80 is that I liked it because it looked like a "real" computer. More like what you could see in the movies. Given everything started with "War Games" for me (like everyone else at the time?) I think it makes sense.

I regret that I didn't have an Apple computer back in the 80's. But I did follow an eclectic path from the TS/1000 to the TRS-80, Coleco Adam, Leading Edge IBM clone, and then finally a 386, and eventually a Pentium.

At the same time my online experience started in about 1986 on a teletype we had at school with an acoustic-coupled modem dialed into a small PDP-11 network. High School brought with it a VAX cluster, college was networked Sun workstations (Gopher was all the rage!), and finally the Internet.

Some signposts I can measure my life by.

Tuesday, July 13, 2010

Bootstrapped Travellin'

I'm getting ready for a trip to NYC "on a bootstrap" (I'm thinking this makes more sense than "on a shoestring" in this case, given we're still "bootstrapping" our startup).

Circumstances have me dropping my mother off at Logan airport during rush hour, with a trip to NYC planned for the next day. Having friends that live in Revere (very close to Logan and Boston) and NYC is making for an interesting itinerary.

It won't be "planes, trains, and automobiles" but "trains, buses, and automobiles" (and "air mattresses"). Well, "planes" too if you count dropping my mother off at the airport...

It's looking like:

  • Drive in to airport

  • Spend the night near Boston, where car will be left

  • Train (T) into Boston in the morning

  • Bus from Boston to NYC

  • Spend the night over at another friend's place in NYC

  • Bus from NYC to Boston

  • Train (T) back to friend #1's house

  • Drive home!


You can't book that on Kayak.com!

Thank goodness for friends, the T, and Boltbus.

Friday, June 25, 2010

Idea vs. Execution

Last year I was on a flight and the in-flight entertainment switched after the movie to some other types of shows. One of them was around interesting engineering feats and had a feature on a submersible car. It could basically drive into water and then navigate as a submarine (shallow depth naturally) under the water.

A few rows ahead of us I heard a woman turn to her friend and exclaim "hey, I thought of that!"

This betrays, I think, a common propensity to value an idea over execution. Or at least to underestimate the challenges in the execution. Imagine all of the myriad engineering challenges that had to be overcome in order to execute on this idea of a submersible car. And then someone tries to sum it up with "I thought of that too".

For me this story embodies the increasingly common maxim for the entrepreneur that it is the execution that matters. Share your idea. Get feedback. Remember that everyone else has their own ideas too. And if somebody pulls off something that you had thought of but didn't execute on - be impressed with them, not with yourself for having had the idea.

Friday, June 11, 2010

Symmetry and Code Refactoring

Whenever I make large refactoring changes within a project I think about them in terms of symmetries. I'm not sure anyone else would find that useful, but it's worth a shot.

While getting my Physics undergrad degree there were two things that really held my fascination, quantum physics in general and the role of symmetry in natural laws.

In my opinion the concept of symmetries in nature as the basis of conservation laws was one of the most elegant findings in Physics. We owe that to Emmy Noether, almost 100 years ago (between this, Relativity, and Quantum Mechanics, the early 20th century in Physics was just a ridiculously productive period of progress in scientific understanding).

Since then, I've found it interesting, and sometimes useful, to think about things in terms of symmetries. For instance, I think a fully rational ethical philosophy can be built on top of expressing our existence in terms of symmetries (the ideas behind "walking 1000 miles in someone else's shoes" or "The Golden Rule" I see as rooted in this).

But back to coding...

I've sometimes found myself needing to make major changes to a complex system. This can always, as they say, be "fraught with peril." The first (and biggest) of these started with a monolithic Java app that wasn't built on a framework, represented objects with maps, had manual construction of SQL for database access, used CORBA for Java client communication, and had little vertical layering of levels of abstraction and little horizontal separation of logically separate spheres of influence.

Certain types of changes to that type of system were just a nightmare to contend with. A particularly obvious example is how changes to the object model that affected the storage in the database might require changes in several seemingly unrelated places in the code where SQL strings were manually constructed.

The team I led transformed the app (which we had already started shipping) to be J2EE based, with an EJB interface, employing an ORM layer, with standard Java communication between client and server, and good horizontal and vertical layering, all while adding functionality for continued evolution of the product.

To me, the key to successfully making these changes was symmetry. And the reason why is because symmetry essentially boils down to the invariance of some observable for a given type of transformation to a system.

The simple examples of symmetry are those of geometry. A circle is fully rotationally symmetric because no matter how many degrees you rotate it, its appearance remains the same. A square has a rotational symmetry of the 4th order, because you can rotate it 90 degrees and its appearance remains the same.

And this has to do with code refactoring? Sure. The idea was essentially to map out a series of transformations in the code, with a plan for what behavior of the system was going to be preserved for each particular change. This made it possible to test each change to make sure that it preserved the behaviors it was meant to.

I saw each coding change as rotating the square - and the goal was to rotate it the full 90 degrees so that it still looked like a square. Even if what happened behind the scenes involved major structural change.

It's certainly possible (who knows, maybe advisable) not to think of this in terms of symmetry, but that's just how I do it. The key is really to try not to attack everything at once, but instead to split the work into changes where each can be tested to confirm it hasn't broken the behaviors it shouldn't have affected.
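A toy illustration (hypothetical code, not from the project described above) of what a testable, behavior-preserving change looks like: pin the observable you care about with a test, then make the structural change underneath it.

import unittest

# Before: SQL built by hand wherever it was needed.
def find_user_v1(conn, name):
    return conn.query("SELECT * FROM users WHERE name = '%s'" % name)

# After: same observable behavior, routed through one query-building layer.
def build_select(table, **where):
    clauses = " AND ".join("%s = '%s'" % kv for kv in sorted(where.items()))
    return "SELECT * FROM %s WHERE %s" % (table, clauses)

def find_user_v2(conn, name):
    return conn.query(build_select("users", name=name))

class FakeConn(object):
    def query(self, sql):
        return sql  # echo the SQL back so the test can compare both paths

class RefactorSymmetryTest(unittest.TestCase):
    def test_query_is_invariant_under_the_refactor(self):
        conn = FakeConn()
        self.assertEqual(find_user_v1(conn, "ada"), find_user_v2(conn, "ada"))

if __name__ == "__main__":
    unittest.main()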

For me, thinking about it in terms of symmetry provides discipline in terms of defining how much change should be taken on at any given time. But your mileage may vary.

Monday, May 24, 2010

Small suggestion on Python nested code blocks

I've been coding in Python for a while now, and there's one thing that I've found problematic with regard to nested code blocks. It comes down to the age-old Python debate about whether you like Python's "indentation level" approach to determining what block of code a statement belongs to.

In general I actually like the Python approach. For one thing it removes the debates (going back to at least C) when developing in most languages around what style to use for braces. Especially for people who don't realize that K&R is the way to go.

The problem I have is when multiple levels of nesting end up looking like an overhanging cliff that you can fall off of if you're not careful:

if one_thing:
    do_something()
    if another_thing:
        do_another_something()
        if third_thing():
            do_yet_another_thing()
    do_thing_after_cliff()

I have two problems with this. The first is that in some cases it's harder to keep track of what level of nesting a statement is in. The bigger one is that if the statement do_thing_after_cliff() accidentally gets moved to another indentation level, it can be very non-obvious. This can come up when moving chunks of code from one indentation level to another.

I've started using a simple trick to help with this, and for me it really helps visually. Basically I use a double comment "##" where I would normally put the "}" if I was working in another language.

For me this provides the visual cue that helps. And it also should make it obvious enough if statements accidentally get modified to the wrong indentation level.

if one_thing:
    do_something()
    if another_thing:
        do_another_something()
        if third_thing():
            do_yet_another_thing()
        ##
    ##
    do_thing_after_cliff()
##

I don't use it absolutely everywhere, but it helps in the cases where your eye doesn't immediately feel comfortable with how blocks might line up.

Tuesday, May 18, 2010

First startup nostalgia

Going through the bootstrap process on our company has brought back memories of starting day at my first startup.

It was back in '96. Back then not everyone was doing a startup or had been in one before, so it felt really new. I remember being surprised that we wouldn't be working in a garage or a basement. And that there'd be a 401(k) and health plan. I thought it was supposed to be tough?

Though I signed on before the Series A closed I wasn't to start until it did close. That felt like a looooonnngg wait. Probably about 4 weeks in reality.

The great memory that I have of that time was reporting for the first day of work, which was the first day there was office space. Walked into the office space, and there was - nothing. No cubicles. No chairs. No desks. No computers. A big empty room. (Cool! This WAS going to be a startup after all!)

Our VP of engineering brought in donuts that morning along with some chairs and a whiteboard and we were off and running... By the end of the first week we had cubes, desks, chairs, and some plans.

There's nothing like the feeling of starting from a blank slate. I'm on the 4th time with that feeling and it never gets old. There's also an excitement to getting by with less while you help get everything in place. In my second startup my personal laptop running Windows 98 served as our corporate Internet gateway using dialup while we waited for our T1 install!

The first three times there were network devices to install, desks to help put together, and of course fundamental early decisions to make sitting or standing around a whiteboard.

This time that's all replaced pretty much with services to provision in the cloud. No office space, so no office network. Email and collaboration via Google. Product deployment on servers in far off data centers.

One thing remains the same though. That great feeling of the fresh start. Balancing priorities and trade-offs and trying to maximize the value of what you're going to build.

Very addictive - I highly recommend it.

Monday, May 3, 2010

Unintentional Google Voice Humor

Been using Google Voice for a couple months and it can be somewhat amusing when the voice recognition for the voicemail transcription doesn't quite pick it up right. Had a good one yesterday though.

"If you i just wondering if you could give me a favor. When you come back or when you're on your way back. Can you pick me up like I D cap promo much at all at Starbucks just totally wiped out. I have no idea why I feel so it I think I've been drugs. But anyway, I, I'm gonna trying to the blood down for a nap this morning and I think, up opportunity. Thank you. I mean to you later. Bye"

As a friend pointed out - notice that it didn't have a problem picking out "Starbucks". No surprise, since Google's all about the advertising!

The full corrected message isn't worth it, but for the record, "I D cap promo much at all" is actually "a decaf Caramel Macchiato".

The best message transcription fail might be this next one. Although in this case the system can't be blamed, because it was my wife and 17-month-old daughter speaking in Portuguese.

"I saw what at the cottage for the dangers visa. Thank you. Bye bye these pages knows. Bye bye. Tom O these Aunt Karen these whoa whoa. Hey, but cos he meant 9 bye bye delivering basis hello."

If the system were a human it would have been pulling its hair out on that one!

Saturday, May 1, 2010

"Statistics is the new grammar" - Wired

This article in Wired this month is right along the lines I've been thinking for a while now. Our current education priorities so poorly set our kids up to deal with the world of today.

I truly think it's not useful to have opinions on a large range of topics without understanding how statistics and probability factor in.

If we're clueless about correlation vs. causation, anecdotal vs. statistical, how can we make sense of the world?

This dovetails with the thoughts I put down last month on the Tyranny of Intuition. If we can't make good educated judgments on something, then we're left with making our best uneducated judgments instead. Which I think is setting ourselves up for an epic fail.

Thursday, April 29, 2010

Serious Uptime

I saw this article (Humming away since 1993) today and it definitely caught my eye and brought back fond memories of the start of my career.

It's about a server shipped by Stratus Computer that has been up and running since 1993. I consider my career as having started at Stratus (first as a Co-op then part-time through the rest of college), and it was in fact 1993. So this computer shipped from the company I first started working for around the time I started working and has been running my entire career so far. Crazy!

The money quote: "Around Y2K we thought it might be time to update the hardware, but we just didn’t get around to it."

You usually think in terms of whether something you worked on might still have the code out there running somewhere (I'm sure I have code dating back to 1996 running on Nortel Contivity switches somewhere out there, and code dating back to 2000 running on CIENA switches). But to think about an actual instance of hardware up and running nonstop for that long just kicks it up to a whole new level.

This made me reflect back on my time at Stratus. It was a great place to start, and back in another era. It was before open source really took off, and before you could go online and get answers to your programming problems almost instantly. Everything was in your head, in-house, or in a book on your shelf, and all the expertise needed to be inside the company.

From a technology point of view it was great experience. It not only helped ingrain *how* to think about high availability and fault tolerance, but also that it *should* be thought about in the first place. The best lesson is probably that it forces you to think at a full-system level. Everything was redundant in the hardware - power supplies, memories, CPUs, backplane, boards. Anything could fail and be removed and replaced without the system missing a beat.

Now, this is super expensive of course. And around that time (1993-94) Stratus itself was moving away from mainframes with fault-tolerance to high availability clustering approaches. But still - it's cool as hell that there are kids out there driving cars around that were born after this thing first booted.

I worked in the HAL (High Availability LAN) group, mostly around the development of FDDI - itself a fault tolerant networking technology.

I got to work at the application level, in kernel code, and - what I especially enjoyed - in an embedded environment: the firmware running on the FDDI board itself.

It was a great kick-start to my career because Stratus had layoffs followed by attrition, which left me as the sole software engineer running the FDDI project for a good chunk of time, right around when my Co-op stint ended (after which I stayed on as a part-time software engineer). Even better, the hardware engineers who had started the project had also left the company. There was great fun to be made of the fact that the project was being led forward by a Co-op and a couple of lab techs.

I suppose if it had been 2003, then the cool thing would be to drop out of college and start my own company. But in 1993, having responsibility of a full project within a large company was good enough! I loved having a challenge to rise to and doing it.

In the end I can't believe the article didn't say whether this server is running VOS or FTX though. I bet VOS.

Saturday, April 10, 2010

A Strange Anniversary

A couple of days ago marked the two-year anniversary of my last paid workday. Well, hopefully not my last *ever*. I joked with my wife that we'd celebrate by not going out to dinner.

I'm lucky enough that this was voluntary. It has resulted in the best two years of my life, without a doubt.

The plan at the time was to find an idea that would become a company that I would start. At that point I thought it would actually be one of the ideas that I had at the time. I had only worked for startups, 3 of them, since 1996 - the year after I graduated college.

I was looking to get out of networking and telecommunications specialties and work in the more general consumer internet space.

We found out my wife was pregnant before I made the plunge, but that timing was really perfect. It allowed me to be working on my own projects at home through the pregnancy and has allowed me to be working from home through the entirety of my daughter's life so far (17 months).

I worked on a few of my own projects independently, and worried my wife a little bit when I'd move from one to the next - just when she was getting used to the idea that the one I had been working on was going to be "the one".

The whole time I figured the worst case was I was acquiring experience in *lots* of new technologies that I'd put to good use at some point. And as it turned out, that was the perfect warm up for what came next.

Last fall I got introduced to someone looking for a technical cofounder, and the result is now Yieldbot.

Which is a great way to mark the two-year point. We launched recently, in private Beta, and we're learning a ton from our first customer experiences and being pushed by customer demand. Which is good, as it should mean not too much longer before my little family can spend some money again.

It's been a heckuva two years - wouldn't change a thing.

Wednesday, April 7, 2010

Long Tail

This isn't about the usual "long tail" you hear about, but about something similar that for some reason I find amusing - having just received a check for $39.78.

Back in 1998-99 I wrote a book on a VPN networking protocol called L2TP (Layer-2 Tunneling Protocol). I had written (in C/C++) our L2TP (as well as PPTP and L2F) implementations at my first startup, New Oak Communications, and then had gotten involved in the IETF process around the standardization of L2TP.

I wasn't looking to write a book, but based on some I-D's I had written on extensions to the base L2TP at the time, I was contacted by an editor in an email and asked if I'd think about writing a book. I had no delusions this would be a best seller, and said yes expecting (correctly) that it wouldn't really pay back in money for the time I spent on it, and that it would be a useful experience.

This makes me think of the long tail in two ways. First, this is obviously a niche subject. I wrote it for the audience of software developers who would be implementing the L2TP protocol, and secondarily for those that might be involved in the network planning around deploying its use. Yeah, there's gonna be a lot of those.

Honestly, I completely forget I even wrote a book unless one of two things happens. First, someone says (for some random reason) something about "your book" to me. It usually takes me a good 5 seconds to know what they're even talking about. Second, twice a year when I get the royalty statement. I say "statement" because it only becomes a royalty "check" when it accrues > $25 due to me.

So that's the second, and most significant, way it seems like a "long tail" to me. Because a full 11 years later the royalty stuff still dribbles in on this seriously niche-subject book. I wonder how much longer the poor publisher will need to keep sending me these statements.

It's most interesting to me just to see that 11 copies sold the second half of 2009. It must've been the holiday season. I'm really surprised it is more than zero at this point.

It's sold 4046 copies over these 11 years (technically only 10.5 so far), which is pretty cool. All in all it was worth the experience. The most surreal thing was one day years ago coming home to a small package that came in the mail from the publisher that contained a few copies of the Japanese translation. That was worth it just for the joke from my mother that she understood about as much of it as the original in English.

The thing I'm happiest about is the positive reviews it got on Amazon. That's what I was most worried about back then - someone's potentially negative take on something I had put real effort into, sitting out there with direct access, for all to see. That was a new thing back then, and it was scary.

Anyway, whenever this statement comes it just makes me chuckle. I was pretty sure I wouldn't break the $25 threshold again and get a check. I've got to imagine though that this one really will be the last one.

Saturday, April 3, 2010

MongoDB Sequences

I came across an issue today with MongoDB, the first one where SQL would have had a simple answer - sequences.

If you're familiar with SQL you know that this is very simple to do by declaring a column as an auto-incrementing key (or using a sequence). With MongoDB there's no built-in capability for this (as of 1.4.0, which I'm using now - I wouldn't be surprised to see some built-in sequence capability in a future release).

MongoDB does have a case that guarantees order, a "capped" collection, but it's not really meant for this purpose.

I found some hints online about how this could be done. Since none of those were satisfying and I ended up coming up with my own relatively simple way, I thought I'd share it as food for thought.

The approach is to have a javascript function saved to the database that can be called from the client to do our bidding. The client calls db.eval() to invoke this function to insert the object for us.

To set this up, I created a collection named "sequences" where the "_id" of each entry is the name of another collection in the database that I want to be sequenced. The collection just needs to have an entry with an initial value for the sequence (take your pick) before the insert function ever gets called.

For instance, if I had a collection named "foo" I would start with an entry like:
{_id: "foo", seq: 1}

To insert an object into the database I invoke db.eval() passing the function the name of the collection to insert into, and the object to be inserted.

The function that does the insert is:

function(coll, obj) {
    var s = db.sequences.findOne({_id: coll});
    s.seq++;
    db.sequences.save(s);
    obj['seq'] = s.seq;
    db[coll].insert(obj);
    return {'seq': s.seq,
            'error_status': db.runCommand("getlasterror")};
}

I'm returning the sequence that was allocated (which I keep track of in my use case when the insert was successful) and the error information associated with the insert. That way, if the insert failed for some reason (like an index uniqueness constraint violation) I still find out about it.
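For completeness, here's roughly what the call looks like from a Python client with pymongo (just a sketch, with a couple of assumptions: it uses the Database.eval() wrapper that pymongo exposed in this era, and it passes the function inline via a Code object rather than calling the copy saved to the database):

from pymongo import Connection
from pymongo.code import Code  # in newer pymongo versions this lives in bson.code

# The same server-side function as above, wrapped for eval().
INSERT_WITH_SEQ = Code("""
function(coll, obj) {
    var s = db.sequences.findOne({_id: coll});
    s.seq++;
    db.sequences.save(s);
    obj['seq'] = s.seq;
    db[coll].insert(obj);
    return {'seq': s.seq,
            'error_status': db.runCommand("getlasterror")};
}
""")

db = Connection()['mydb']

# Seed the sequence once per collection (here for a hypothetical "foo").
db.sequences.save({'_id': 'foo', 'seq': 1})

result = db.eval(INSERT_WITH_SEQ, 'foo', {'name': 'first sequenced doc'})
# result['seq'] holds the allocated sequence number;
# result['error_status'] carries the getlasterror info for the insert.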

Some caveats are probably in order.

I'm not in a sharded environment (yet), and when I am I suspect I will have to revisit this.

This also isn't the most efficient approach for high performance because db.eval() monopolizes mongod, so depending on the database usage pattern this could be pretty disruptive. On the other hand, this mongod behavior effectively acts as a lock and means calling this function will be atomic. I'm going to wait and see if this is actually a performance issue in my environment, however, before implementing an approach that brings more complexity into the application.

Whatever the case, I thought this was an interesting way of solving the problem to think about, as it's a pretty straightforward analog to the SQL sequence functionality.

By the time I need to do it differently, maybe there will be a native way to do so in MongoDB.

Wednesday, March 31, 2010

Homemade Anti-siphon Check Valve (zip-tie + latex glove)

A tale about bootstrapping your startup.

The weather lately has been super rainy, which unfortunately comes with some water in our basement. After our sump pump broke down I was able to switch in a utility pump and move the float-switch from the sump pump to that and it worked well.

The one problem I had, though, was that the utility pump uses a standard garden hose and neither I, nor my local hardware stores, had an attachment to make it not siphon when it wasn't running.

What this means is that once the utility pump emptied the well it sits in and switched off, all the water that was in the hose would flow back down and fill the well halfway back up. Obviously this means that half the work the utility pump was doing went into moving that half-well of water back out. Basically it was working twice as much as it needed to.

I put up with this for a bit, but it stayed in the back of my head because this is a problem that should be solvable.

Today I hit upon it - using a latex glove to block the end of the hose when the water stopped so that air couldn't enter the end of the hose and let the water flow backward.

Here's the recipe:
  1. cut the tips off all the fingers of a latex glove, except the middle finger;
  2. put the latex glove over the hose with the end of the hose down as far into the middle finger as you can get it;
  3. stretch it fairly tight and ziptie the wrist of the glove around the hose to hold it in place.


I'm mainly posting this because I didn't have much luck searching for homemade solutions online. So maybe this method will catch on. Just send me a $1 royalty any time you use it. ;-) My guess is the "parts" probably total about $0.05 in cost.

Here's what happens. When the water turns on, the pressure expands the middle finger part and the water flows up and out the other fingers. When the water stops, the lack of pressure lets the glove contract back into place, with the middle finger back around the hose. When the water wants to flow backward in the hose, the negative pressure sucks the tip of the middle finger back into the hose where it forms a tight seal around the end of the hose and stops it.

Of course, you could use duct-tape instead of a zip-tie - but I like to use zip-ties whenever possible.

Here's why this is better than store-bought! I was able to do this on the end of a hose that had been cut to length, with an irregular shaped end and no threads.

I just couldn't resist a little video to show clearly how it works. This is the one I did around the hose with the cut end. Yes, it is giving you the finger.

Direct link: here



The funny thing about this whole exercise is that it reminded me of what it's like bootstrapping a startup.

There's always a way to solve a problem. The solution you think of first you might not be able to get access to or afford. But remember that you may just have a latex glove and some zip-ties lying around that might even do the job better.

Wednesday, March 24, 2010

Pulling back the veil

Today we started to reveal what our company is working on. Jonathan, CEO & Founder, went into it in an interview here: http://www.adexchanger.com/platforms/yieldbot/

Pretty exciting to see info out there describing what we've been working on for months. Just as exciting is the data that we're starting to get from our private Beta.

Publicly talking about the product + customers using the platform. Two great tastes that taste great together.

Monday, March 15, 2010

The Web is still in Beta

You hear the 2.0 and 3.0 labels used so much to describe "The Web" that you can actually start to believe it.

But it got me thinking, if you step back and look at the state of the web as you would a product, can it even be considered to be 1.0 yet?

I think the 1.0/2.0/3.0 mentality put too much emphasis on the "techy" point of view, and not enough on that of "normals" (See Chris Dixon's post "Techies and normals").

I'd make the case that what is commonly referred to as "Web 1.0" (the late-90's "bubble" period) should be considered "Alpha". In this phase everyone was trying straightforward ports of offline functionality, bringing it online. The result was too much investment, weak business plans, etc., etc. Most people bothering to read this probably know it well.

For most of the past decade and up to now we've had massive adoption of online services and much broader demographics coming online and using services (such as the distribution that Facebook has).

There's no doubt that the online experience has gotten better and better, and ever more useful services are becoming available (and ever more intricate and involved time sinks). But I think this period should be considered "Beta", and that's still where we are.

I think the main problem is that the web is still a set of disjointed tools that we creators are still figuring out which to build, and that regular users are still trying to find.

That's a fine state to be in, it just isn't "2.0".

There's a few areas that stick out to me in particular (some around booking travel or the struggles of the "news" industry), and one we're building a company around is online advertising.

The Alpha phase of the Web saw flashing gifs, popups, and "punch the monkey" type ads. The Beta phase has seen Google massively capture the value of the advertising dollar with a truly revolutionary model around a service everyone uses (search).

The problem has been the sucking sound of money flowing away from the publishers in this model. Before the web hits 1.0 it needs a model where publishers can have a sustainable business. We're creating a platform that will do just that, putting that sucking sound into reverse and allowing publishers to capture the value of their audience. Our little part of getting the Web to 1.0. :-)

Saturday, March 13, 2010

Tyranny of Intuition (part two)

In my first post on intuition I mentioned how it fascinates me how often our intuition is wrong. That fascination started with the natural sciences, but when it comes to social and political science it becomes even more interesting - mostly because it stops being personal, and because reliance on intuition across the population, I think, leads to bad policy decisions.

I thought it would be interesting to touch on some cases off the top of my head where intuition leads us (or has led us) down mistaken paths.

Availability of pornography increases sexual crimes. I was reminded of this one this week when I saw an article (http://www.the-scientist.com/article/display/57169/) on this topic citing a study that found the opposite. It's not the first study I've seen referenced that found the same. Same goes for violent movies/games/music and crime rates.

Constantly praising kids will raise their self esteem. We went through quite a period of "all kids need to win" which I think was a real problem. I've read a lot in recent years about how the opposite actually is true. Kids know when they're being gamed, and the praise is devalued if they know they didn't do much to get it. They lose their incentive to try harder. They will also give up earlier on hard problems because they don't know what it means to be challenged. This is an area that seems to be correcting, and the Pixar movie The Incredibles did a great job on this theme. Self-esteem is important of course - and more important is earning it. For some reason our intuition was that we could create self-esteem out of thin air.

College financial aid makes college more affordable. Intuitively it makes sense, if you ignore (purposely or not) fundamental dynamics of economics. Means-tested assistance does make sense, but the system we have has clearly gotten out of control. College tuition inflation has persisted well above the general inflation rate because higher education has turned into a government-subsidized industry. I actually think this one will change. It might be painful, like the real estate bubble bursting, but by the time my daughter is of college age (17 years!) I expect this to be much different. Technology will eventually undercut the traditional model, and I think you're starting to see that.

The price of real estate will always go up. We're living the effects of that mistaken intuition (or maybe just what was previously a massively-shared assumption), but how can you not mention it?

Our neighborhoods are more dangerous now than when I was a kid. To see the levels of protection applied to today's kids you'd think so anyway. In second grade (7 years old) I walked more than a mile to school, along and across major roads, in a city. Now we have parents in the suburbs waiting with their kids for the bus at the end of their driveway. Of course I think we mostly know that it's the availability of information about bad things happening that has actually gone up. We're bombarded by media that sells us the fear that is mostly what we'll respond to with buying.

Paper money is worth something. Maybe this is less an intuition, and more a shared scam we've mostly bought into. But all past fiat currencies have been hyperinflated out of existence.

The heads of our government are less greedy than the heads of our corporations. I think we have this intuition because we naturally want to believe that someone is looking out for us, and that government (or at least our US government) is automatically benevolent. The reality is that people are mostly all the same. This intuition causes us not to be critical enough of the power government assumes for itself.

I want to finish up with the punchline - why I think we live under a tyranny of intuition and why it's probably inevitable (which, admittedly, is disappointing). It's this:

* Most people are fine not looking past their intuitions;
* They will elect representatives who cater to those intuitions;
* In many cases these intuitions are wrong.

As a result, the major policy decisions that control the ways we live our lives and limit our freedom are, and will continue to be, driven mainly by the mistaken intuitions of the majority of the population.

It's not a happy story, but it's a conclusion I've been coming to more and more these days. Could I be wrong? I'd actually hope so. Is my intuition that most people are fine not looking past their intuitions wrong? My experience says no - and the level of discourse of recent policy discussions that I've witnessed reinforces this for me.

Well, at least I got that off my chest.

Friday, March 12, 2010

Tyranny of Intuition (part one)

I've long had a fascination with intuition and how often it can be wrong. I think it stems back to when I first learned in grade school that a feather and a lead ball dropped in a vacuum would fall at the same rate. Confounding factors lead us all to have the intuition that this is not true. I remember we all argued with the teacher at first, and we were all wrong of course.

This realization that intuition can be so utterly wrong is I think what drew me to science and drove me to a degree in Physics, where I especially was drawn to that most anti-intuitive of areas: Quantum Mechanics. That was cool stuff.

In software development I've seen intuition get in the way in two major areas - and when you're a better developer you realize these exist and you combat them. The first is thinking that you know ahead of time what code needs to be optimized. You don't. You know once the code gets exercised under real or close-to-real circumstances and you can measure the bottlenecks. The second is thinking you know what type of user interface your customer is going to want. You don't. You will after you see them flail around with one you've built (the more experience you have, the closer you'll get the first time, of course).
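On that first point, measuring is the antidote to guessing. A generic sketch (nothing project-specific) of what "measure the bottlenecks" looks like in Python:

import cProfile
import pstats

def handle_requests():
    # Stand-in for exercising the system under realistic load.
    total = 0
    for i in range(100000):
        total += i * i
    return total

# Profile the real workload, then let the numbers (not intuition) pick
# what to optimize.
cProfile.run('handle_requests()', 'profile.out')
pstats.Stats('profile.out').sort_stats('cumulative').print_stats(10)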

The thing is, complex systems (which include aggregations of people) behave in ways that often are not intuitive. Often confounding factors cloud your judgment of how they'll behave. The more you know and the deeper you can think about the interactions, the better. That's the good news.

The "problem" of intuition, as I see it, is that we have to rely on it. Reality is too complex to be able to think through everything in full. Even though our intution is going to be wrong, we often have to rely on it.

Because of that, I think the problem of intuition isn't one that can be solved, but it's one of those things where at least acknowledging the problem will help you avoid some pitfalls.

Experience teaches us when to trust our intuition. We won't always be right, so we should always be on the lookout for when intuition might be leading us astray.

While any given person can combat over-reliance on intuition, it's the operation of intuition in populations as a whole and its effect on public policy decisions that makes it a tyranny.

More thoughts on intuition coming in the next post.

Tuesday, February 23, 2010

Everybody was A/B Testing - Hee-yah!

(Sung to the tune of "Everybody was Kung-Fu Fighting")

Last month I read an article in The Economist (here: How to combat the natural tendency to procrastinate) about a study on procrastination that centered around farmers in Africa, and sought to answer a question about how policy could be arranged to increase fertilizer use (the usage of which is in the farmers' self interest).

The economists devised a scheme in which farmers paid the full market price for fertiliser, but had it delivered to their homes by a non-governmental organisation at no additional cost. A subset received this “discount” at harvest time, while another group were also offered free delivery, but only when planting time was imminent. Still others were offered a 50% subsidy on the market price, an approach commonly taken by governments to encourage fertiliser use. As the model of time-inconsistent preferences predicted, the offer of free delivery early in the season pushed up usage of fertiliser by 11 percentage points over a control group who were not offered anything. The same discount late in the season, however, had a statistically insignificant effect. A 50% subsidy later in the season, a much costlier policy than free delivery, pushed up usage by about as much as the early discount.

Something that really struck me about this was how strongly it parallels the type of A/B testing that is becoming ever more popular on the web (my latest venture is steeped in that world, so I've got it on the brain anyway).

I've read about plenty of studies where a group opts in to a study, maybe for some payment, and then plays some games that are meant to uncover some aspect of psychology. There's a control group, etc.

But what struck me here is that these were farmers just going about their business. I'm not sure if they were privy to the study that was being conducted. But even if they were, they weren't "opting in" in the usual way. This gives it a striking resemblance to A/B testing online, which happens without the knowledge of the web user. They aren't taking surveys or playing a game; they're just going about their business, with a specific behavior being tested and analyzed for the purpose of optimizing some specific outcome.
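To make the parallel concrete, here's a minimal Python sketch of how the online version typically works - the experiment name, the bucketing scheme, and the counts are all made up purely for illustration (though I picked counts that mirror the study's 11-point lift):

    import hashlib
    from math import sqrt

    def assign_variant(user_id, experiment="free_delivery_timing"):
        # Deterministic bucketing: the user never opts in or even notices
        digest = hashlib.md5(f"{experiment}:{user_id}".encode()).hexdigest()
        return "A" if int(digest, 16) % 2 == 0 else "B"

    def z_score(conv_a, n_a, conv_b, n_b):
        # Two-proportion z-test on the conversion rates of the two groups
        p_a, p_b = conv_a / n_a, conv_b / n_b
        pooled = (conv_a + conv_b) / (n_a + n_b)
        se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
        return (p_b - p_a) / se

    print(assign_variant("user-42"))
    print(round(z_score(conv_a=450, n_a=1000, conv_b=560, n_b=1000), 2))

Swap "was the offer shown early or late" for "which button color did they see" and it's the same machinery the economists were using in the field.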

In the world of public policy it also points to a more practical mindset that could be effective if used wisely. It's more a scientific mindset than a political one, so maybe there is little hope that meaningful public policy would happen this way (although I can't help but note that it's obviously related to the States vs. Federal question within our own US system - states as the laboratories where different policies get tried).

It's clear that in most ways the web isn't inventing something completely new with this A/B testing thing. But the web does provide unprecedented opportunities to use and learn from A/B testing.

Even though it wasn't invented there, I do think the lessons in A/B testing learned from the rapidly innovating online world will find their way back into the non-online world more and more. It should be a two-way street, though. I'm sure some online marketers could learn some good lessons from the observed behavior of African farmers too.

If nothing else, spreading the mindset that we don't necessarily know the answers at the start, and that we should identify a goal to test and optimize for, would be a great thing. In the world of public policy, this could make a world of difference.

Maybe it could even help get us off a road paved with good intentions, and instead onto a path toward the goals we're actually aiming at.

Monday, February 15, 2010

Time Contraction

The last few months I've been coding away in my home office with time flying past me almost unnoticed. Two things really brought home the effect this is having on my time perception.

One is that my wife has pointed out a number of times lately where I've told somebody that something happened "a couple weeks ago" when in fact it could have happened as much as a couple months ago.

The other was that when we went to Florida for a family visit last week for 5 days I came back feeling like I had taken about 2 weeks off. This even though I brought my laptop and still did some work.

I generally don't mind this time contraction, especially in the Winter. What's great about it is that it's caused by getting a lot done and enjoying it. "Time flies when you're having fun," as they say.

It's great to measure the passage of time by how much new stuff is working rather than how many times the earth has spun around.

Saturday, January 30, 2010

Some thoughts on free markets

I'm certainly not a pro economist - but who would want to have been one these days? But thinking about economics is at least kind of a hobby of mine, and has been since way before the crisis - though of course the last couple of years have provided a lot to think about.

I was contemplating this article in the New Yorker interviewing Eugene Fama from the "Chicago School" of economics, in which he's mostly on the defensive.

At issue is whether the "efficient market hypothesis" is in tatters after the financial crisis (the most recent one, though the question dates back at least to 1987).

As the article summarizes the theory: "[it] says prices of financial assets accurately reflect all of the available information about economic fundamentals."

I think there are some things to be said in its defense. The point that can get forgotten is the role of the Government - its presence in the market and what that means.

I've read the point, and am sympathetic to it, that the government bailout of the S&L banks in the late 1980s is what emboldened the banks in the run-up to our recent financial crisis. It caused them to under-price risk and to take risks they shouldn't have. In other words, the presence of the Government as a "bailout" agent made it an actor in the market and part of the "information available about economic fundamentals". Arguments about whether the *recent* bailouts are a slippery slope should remember we were *already* on that slippery slope.

In addition, the presence of Fannie and Freddie in the housing market, and the loosening of restrictions on getting loans, also made the Government a factor in the market, one which strongly boosted the demand side of the equation.

Then there's the role of the Government in controlling the cost of credit, another distortion, obviously stimulative during the housing boom.

In fact, as I remember it (and I hadn't heard much about this until recently), it was the repricing of ARM loans, sub-prime or otherwise, once rates began to be raised that precipitated the financial crisis of the last couple of years.

For me this all makes it hard to judge free markets as having been at the root of the financial crisis, the markets not having been free and all.

That isn't to say that Government doesn't have a role, such as with regulations like Glass-Steagall - and it seems like some of that will be coming back. Hopefully we will judge regulation not by "how much" but by "how good".

Saturday, January 9, 2010

Optimizing the Common Case

For some reason I realized yesterday that my behavior of wearing shorts and sneakers all year (in New England, even in winter, and yes even when shoveling snow) is a manifestation of optimizing for the common case.

At some level I knew this, but never quite drew the parallel with writing code.

I, like most people, spend most of my time indoors or between places in a warm car. My dress is therefore optimized for comfort indoors. In fact, in the winter, indoors is usually kept warmer than it is in the summer (though not at our house, especially before my daughter came along).

Shoveling the snow in sneakers isn't a problem. I'm always *behind* the shovel after all. I rarely actually have to step in snow above my ankle.

The few minutes I'm generally out in the cold just don't seem worth optimizing for at the cost of being too warm the rest of the time.

And it always struck me as funny that on days when it snowed, people would spend the whole day tromping around the office in their boots. All because of half an hour in the morning (likely spent walking through already shoveled/plowed areas from their car to the door at work).

Of course, conditions can get cold enough, such as with wind in January. But those are outlier cases certainly.

When coding, optimizing for the common case is important to keep in mind. As well as not optimizing too early (before you really know what needs optimizing) of course.
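A toy Python sketch of what that looks like in code - the normalization example is my own, chosen purely for illustration:

    import unicodedata

    def normalize(text):
        # Fast path for the common case: most input here is plain ASCII,
        # so skip the comparatively expensive Unicode normalization
        if text.isascii():
            return text
        # Outlier case: pay the full cost only when it's actually needed
        return unicodedata.normalize("NFKC", text)

    print(normalize("plain old ascii"))       # common case, fast path
    print(normalize("re\u0301sume\u0301"))    # rare case, slow path

The wind-in-January equivalent is that the slow path still has to work correctly; you just don't dress for it every day.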

Sitting in my shorts in the winter while coding will now hopefully serve as a reminder of that! At least until I smarten up and move somewhere warmer someday...