We're the Stupid Ones: Facebook, Google, and Our Failure as Developers

Be Stupid (photo by michiel)

Update July 16, 2022

I wrote this a long time ago -- February 15th, 2010 -- and I tend to view my own opinions with suspicion once they're more than a couple of days old.

However, I think there is still a lot of value in what I wrote here, and in the comments that followed.

I feel a personal moral responsibility to serve the needs of the users of software I create.

I am tasked with creating tools to help them solve problems they have.

That's my job. I have done it for 25 years, and I am pretty good at it.

The technology stacks have moved around, but the job hasn't changed.

Normally I try to chew on an idea for a post for a few days; it lets me sort out my thoughts and form some kind of thesis. I’m totally not doing this here, though, so I should preface this with a note that I could be completely off-base. But I don’t think so.

Discussion about how we interact with computers heated up recently with the introduction of the iPad. Lots of nerdy types (myself included) were frustrated that Apple had introduced not a tablet “computer,” but a big iPod Touch. They’re both computers, of course, but the way we interact with them is different: the modern computer interface uses a multitasking windowing motif, and the iPod/iPad interface is fullscreen and single-task focused.

As a Nerdy Power User, I am well-versed in how to navigate a multitasking interface, and for the most part I understand how and why it works the way it does. I, in fact, enjoy learning about the intricacies of these kinds of systems. So when I use a single-task interface like that of the iPod Touch, I frequently bash my noggin against the barriers it imposes. Copying a URL from the web browser to my Twitter client takes orders of magnitude longer than it would on OS X or Windows, for example.

What I’ve learned from interacting with most computer users, though, is that they do not give a rat’s ass about how computers work. They want to accomplish certain tasks, and will do this in the way that is most sensible and direct for them. And the way they end up accomplishing these tasks within the multitasking window motif is typically not the way I would do it.

The recent fiasco on ReadWriteWeb, where a RWW article became the first Google result for “facebook login,” is a classic example of this. And, unfortunately, so is the reaction of most Learned Computer Fellows: one of mockery and derision, admonishing the confused users for being stupid, incompetent, or lazy.

I’ll admit that I took some glee when I first saw the numerous comments on the article; I love a humorous clusterfuck as much as the next guy. But seeing some of the reactions by the Very Smart Computer People, I began to realize that We Are Not Getting It. Consider:

These people have better things to do with their days than tweaking out the spacing in their browser toolbars. A computer for them is a utility. One that is increasingly complex, and one that is used because it’s the only option for accomplishing certain things – not because it’s a good option.

It’s kind of like the Photoshop Problem: when people want to crop a picture, we give them Photoshop. Photoshop is a behemoth application with nearly every image editing and touchup function imaginable, and it is terribly complex. Photoshop is an impressive tool, but only a tiny percentage of people need the power it offers. The vast majority just want to crop their ex-husband from the photo and let their friends look at it. But even iPhoto, the poster child for Apps So Easy Your Grandparents Can Use Them, continues to pile on features and complexity.

When folks need an elevator, we should give them an elevator, not an airplane. We’ve been giving them airplanes for 30 years, and then laughing at them for being too stupid to fly them right.

I think we’re the stupid ones.


Update

As I said at the start, I wrote this piece a bit off the cuff, so upon further review I think I could have made it a bit clearer. First, a couple great rebuttals I read:

I posted a comment on Phil Crissman’s blog, which I think explains a bit further what I’m thinking, and addresses the notion that some learning may still be required. To copy and paste myself:

I certainly don’t think that the computer can become (anytime soon) a magic box that anticipates our whims, nor do I think that people shouldn’t have to learn some things.

What I do think is that the interface modern desktop OSes provide is simply overwhelming for most users, to the point that it’s very challenging to learn how to accomplish tasks without a very significant investment of time. Driving would be a good example of a task that does require an investment of time, but is not so overwhelming that the vast majority of people fundamentally get it wrong: you don’t see people steering with their feet, or accelerating and braking with the radio. I’d argue that modern computer interfaces, in a rush to offer flexibility and capability, make it possible to steer with your hands, feet, teeth, and knees — and don’t make it particularly clear which one is best.

Update 2

Some more responses:

Feel free to forward me others; I think I’ve given up trying to track them down for now.