I wrote this a long time ago -- February 15th, 2010 -- and I tend to view with suspicion my opinions about anything that is more than a couple days old.
However, I think there is still a lot of value in what I wrote here, and in the comments that followed.
I feel a personal moral responsibility to serve the needs of the users of software I create.
I am tasked with creating tools to help them solve problems they have.
That's my job. I have done it for 25 years, and I am pretty good at it.
The technology stacks have moved around, but the job hasn't changed.
Normally I try to chew on an idea for a post for a few days; it lets me sort out my thoughts and form some kind of thesis. I’m totally not doing this here, though, so I should preface this with a note that I could be completely off-base. But I don’t think so.
Discussion about how we interact with computers heated up recently with the introduction of the iPad. Lots of nerdy types (myself included) were frustrated that Apple had introduced not a tablet “computer,” but a big iPod Touch. They’re both computers, of course, but the way we interact with them is different: the modern computer interface uses a multitasking windowing motif, and the iPod/iPad interface is fullscreen and single-task focused.
As a Nerdy Power User, I am well-versed in how to navigate a multitasking interface, and for the most part I understand how and why it works the way it does. I, in fact, enjoy learning about the intricacies of these kinds of systems. So when I use a single-task interface like that of the iPod Touch, I frequently bash my noggin against the barriers it imposes. Copying a URL from the web browser to my Twitter client takes orders of magnitude longer than it would on OS X or Windows, for example.
What I’ve learned from interacting with most computer users, though, is that they do not give a rat’s ass about how computers work. They want to accomplish certain tasks, and will do this in the way that is most sensible and direct for them. And the way they end up accomplishing these tasks within the multitasking window motif is typically not the way I would do it.
The recent fiasco on ReadWriteWeb, where a RWW article became the first Google result for “facebook login,” is a classic example of this. And, unfortunately, so is the reaction of most Learned Computer Fellows: one of mockery and derision, admonishing the confused users for being stupid, incompetent, or lazy.
I’ll admit that I took some glee when I first saw the numerous comments on the article; I love a humorous clusterfuck as much as the next guy. But seeing some of the reactions by the Very Smart Computer People, I began to realize that We Are Not Getting It. Consider:
Isn’t this really a failure of Google? How did it become so easy to game search engine results that an article about Facebook and AOL became the first result for ‘facebook login,’ instead of the obvious thing people are actually looking for?
How is it the fault of the users when we present them with multiple, barely-differentiated text fields within the same window? Is it really surprising that they don’t understand the differences between each? And is it surprising that they choose to use the one which works with more natural language, rather than entering syntactically-unnatural domain names?
There is LOADS of anecdotal evidence that most users simply use search engines as a sort of natural language CLI. Shouldn’t we be designing interfaces that work in the way most natural for the majority of users?
These people have better things to do with their days than tweaking out the spacing in their browser toolbars. A computer for them is a utility. One that is increasingly complex, and one that is used because it’s the only option for accomplishing certain things – not because it’s a good option.
It’s kind of like the Photoshop Problem: when people want to crop a picture, we give them Photoshop. Photoshop is a behemoth application with nearly every image editing and touchup function imaginable, and it is terribly complex. Now Photoshop is an impressive tool, but only a very tiny percentage of people need the power it offers. The vast majority just want to crop their ex-husband from the photo and let their friends look at it. But even iPhoto, the poster child for Apps So Easy Your Grandparents Can Use Them, continues to pile on features and complexity.
When folks need an elevator, we should give them an elevator, not an airplane. We’ve been giving them airplanes for 30 years, and then laughing at them for being too stupid to fly them right.
I think we’re the stupid ones.
As I said at the start, I wrote this piece a bit off the cuff, so upon further review I think I could have made it a bit clearer. First, a couple great rebuttals I read:
I posted a comment on Phil Crissman’s blog, which I think explains a bit further what I’m thinking, and addresses the notion that some learning may still be required. To copy and paste myself:
I certainly don’t think that the computer can become (anytime soon) a magic box that determines our whims, nor do I think that people shouldn’t have to learn some things.
What I do think is that the interface modern OSes provide is simply overwhelming for most users, to the point that it’s very challenging to learn how to accomplish tasks without a very significant investment of time. Driving would be a good example of a task that does require investment of time, but is not so overwhelming that the vast majority of people fundamentally get it wrong: you don’t see people steering with their feet, or accelerating and braking with the radio. I’d argue that modern computer interfaces, in a rush to offer flexibility and capability, make it possible to steer with your hands, feet, teeth, and knees — and don’t make it particularly clear which one is best.
Some more responses:
Feel free to forward me others; I think I’ve given up trying to track them down for now.
Very thought-provoking, and you’ve given me a kind of constructive guilt here. Reminds me of this book: http://amzn.com/0672326140
I couldn’t agree more. The majority of computer users want a simple, basic interface that is clean, so they can get in, get out, and get on with their lives.
I don’t give them Photoshop. I give them IrfanView. It does the job and gets the hell out of the way.
Your point is well taken. Power features for power users.
The problem is, how to tell which a particular user will be? Kids in particular should have the ability to “dig deeper.” What concerns me isn’t that the iPod or the iPad is too simple; it’s that if it’s the only computing device that someone grows up WITH, how do they learn to manipulate the cybersphere from a position of control AT ALL?
Products like Palm’s Ares become more and more important in that context.
You have a divide in the software development world as entrenched as the current divisions in the political world. There is the “make it as flexible as possible” camp and the “keep it simple” camp. The former is represented by the open source development community, cramming as much functionality into what they build as inhumanly possible, giving developers more options than they could ever possibly use in a lifetime, making it just as difficult to do something commonplace as it would be to do something complex. The latter camp is represented by commercial software vendors like Microsoft who push componentized solutions that offer supposed out-of-the-box simplicity but make anything beyond the simplistic nearly impossible to achieve. Ideally, we could build software development platforms that still make simple things simple but simultaneously enable more sophisticated tasks in a methodical way. But we don’t do that. Instead we have as our alternatives either overly complex open source APIs with steep learning curves or simplistic componentized platforms not capable of much beyond the trivial.
The point being: if we as software developers can’t find the happy medium between “keeping the simple things simple” and “enabling the complex in a methodical way” for ourselves, how can we expect to write programs, webapps, and user interfaces that achieve that happy medium for end-users?
I generally agree with your points, but I think anyone who recommends/gives a non-techy a copy of Photoshop just so that they can crop a photo is the stupid one!
Something free and fairly small like Paint.NET does the trick, and if Microsoft had better understanding of what users want, they would have incorporated cropping into their OS photo viewer (as they already have with rotation).
I’ve run into the same thing with web2project. The class of user that wants to use or comes from BaseCamp has no concern for, or concept of, what a Gantt chart is. Their idea of project management is simply a checklist.
Towards that goal, we’ve tried to figure out good ways to implement normal vs advanced modes but there are some tough things to balance in there. You have to determine which features belong on which side of that divide and then you have to teach the users to figure out which set they need.
My 0.02.
@Dave Nattris
I would agree that it’s stupid, but:
Wonderful post. Really. I think Apple’s ahead of the game in bringing really useful functionality to users who consider tech a pain in the ass. I get the underlying issues we geeks have with their design decisions, but I think we need to accept that they didn’t design it for us.
I’m totally exhausted from having my own business plan torn apart all day by venture capitalists, so I’ll just thank you for putting this idea out there in a way that’s aimed at the tech crowd. We keep acting like pushing the usability threshold further away from the ability of the average Joe will somehow improve things for everyone. In fact, the most interesting technology will bring the masses one or two steps closer to loving technology for the same reasons we all do: it makes our lives that much easier.
I commend your example regarding the RWW “facebook login” debacle. Jolie O’Dell made a similar point in her follow-up article “The Internet is Hard” on RWW. And if you read the comments there (and on the original article), it fell on deaf ears. Most of the developers, designers, and power-users were convinced that these people were just stupid and that “you cannot and should not design for stupid”. It’s precisely that sentiment that perpetuates the problem.
Looking at Dave Nattris’ comment about giving someone Paint.NET over Photoshop sort of reinforces why Apple has been so successful with the iPhone/iPod Touch. Any user can enter the App Store and simply search for “crop photo” to find an application that can do this. The iPad will have the same AppStore and users will be able to search for apps by task/function, download them, and run them. And it’s pretty handy for the average user who doesn’t read the tech rags that all applications have ratings, reviews, and number of downloads.
@Funkatron sorry, I don’t get what you’re saying. Just because it’s happened doesn’t make it right! I don’t expect all people to understand Photoshop - in fact far from it, as a general IT contractor, I charge people to use the software on their behalf because they don’t have that knowledge/expertise. What I do expect, though, is that if people want to do things themselves, they need to learn how to do so instead of expecting everything to magically happen. If you want some food, you may not have to go out and pick it from the ground, but you do have to find your way to the supermarket.
@Ankush I don’t think that situation will be likely, as a ‘dumb’ user who wants to crop their photo probably does not know that ‘crop’ is the word for trimming their photo. People normally think of a crop being something that’s grown on a farm (and of course, the reason why it’s called that is because it gets cropped at harvest time). If we really want to make things easier for people to use on a really dumb level, we have to change the language too. Search engines only work well when you know how to describe what you’re searching for.
@Dave, my feeling is that the complexity presented by the average computer is orders of magnitude beyond going to a grocery store. I’d also say that a successful grocery store adapts to the way people tend to shop.
I’m of course not suggesting that there will never be any learning curve, or that computers should be usable by infants. I do think that current computer interfaces, for many reasons, have far too steep a learning curve for most users. There is, I believe, a balance to be struck, and right now we’re far off that balance.
@Dave I agree with you - and perhaps the semantics inherited by the software industry are too domain specific for the average user. This is a core usability issue in human factors. Words mean different things to different people. You and I say “crop” while someone else might think of “trim” or “chop” or “cut” or maybe even “resize”.
So it furthers the point that @funkatron made about search technology being able to understand context and intent. For example, if someone types, “I want to trim my pictures” into Google or the AppStore’s search, shouldn’t they return items that pertain to cropping images?
How we get there is another question.
@Dave, I wrote a reply to this article last night that takes the same stance, more or less: http://blog.cursingnerds.com/2010/02/reply-to-funkatrons-analysis-of.html
@Funkatron, thanks for getting the discussion going!
@dave - dude, if that’s your attitude towards the people who use what you develop, be it a program or a web site - that if they’re too stupid to use the brilliant stuff you’ve written, that’s not your problem - then boy are you in the wrong business! your attitude seems to be that THEY need to learn to use what you wrote, no matter how bloated and complex it may be, instead of YOU needing to learn to tailor what you write for your audience, and make it usable for both people like them and for people at your level of sophistication. no wonder web interfaces are so badly designed these days with attitudes like this so common!
Your general idea is sound but I disagree with your conclusions.
In the Facebook incident, I feel that as a society, we already pander enough to ignorance. If we attempt to compensate for users who are unable to even discern what website they are on, then we only encourage the spreading of that behavior and we create a software ecosystem that will become utterly and ridiculously complex as we try to account for every possible human error and move to solve it.
As engineers, we should seek to educate the consumers of our products, rather than encourage them to stay in the dark.
But on your point in general, yes, most people just use computers as utility, but that doesn’t mean they should be absolved from obeying basic rules about what the hell URL they are accessing.
I commented, on a friend’s Web log posting about the same topic, that the iPad hews precisely to Apple’s explicitly-stated lifelong goal of making computers into appliances, like toasters or microwave ovens. Well, maybe not like microwave ovens—the user interfaces on them often tend to be too abstruse. For that matter, how many readers here remember VCR interfaces and the blinking 12:00? Tunnel-visioned developers have been with us as long as the field of consumer electronics has existed.
@Michael,
I wouldn’t disagree with that. I suppose my feeling at the moment is that the complexity of the interface makes it extremely difficult to educate. As someone who works in infosec a fair bit, education of users comes up a lot, and over the years I’ve felt more and more that the whole model is just fucked.
Admittedly, I’m not offering much direction, and I suspect much smarter people than me will have better ideas about where to go from here.
Funny how some of the geeks here say they’d recommend a simpler program than Photoshop but don’t even seem to see the problem in the fact that this simpler program would still run on the computer (airplane); and this right after reading the article.
I can’t wait ‘til March 2011 to see how the PC world looks 1 year after Apple started to sell the iPad.
You are perfectly right. I know people who use the browser’s home page (Google) to open each and every web page, and who, when they want to google something, enter “Google” in the search box.
Spot on. Even as a software developer, I’m looking forward to simpler interfaces and devices. I waste too much time keeping my MacBook software up-to-date and figuring out how to do this or that.
It’s taken me a while to get to this point, but after having worked for several years now with intelligent people who find standard Windows machines confusing, I’ve come to the conclusion that most computer interfaces are too complex.
I recently read an editorial on Engadget (http://www.engadget.com/2010/01/27/editorial-engadget-on-the-ipad/) where a dozen or so of their staff - ‘journalist’ would be too kind a term - gleefully explain how the iPad is not for them.
Of course it’s not for them. It’s for everyone in the known universe who doesn’t read or care about Engadget, its readers and the products it follows. It’s for all the people out there who want less computing in their lives and not more.
It’s not for the millions of people that like or need to use Windows, Mac OS X or Linux. It’s for the billions that don’t. The billions that would rather do anything else on Christmas Day than install Direct X updates so that the kids’ new game runs on the family PC. The billions of people who would rather have their teeth pulled than struggle with security updates and 8-button mice. The billions of people who just want to share pictures of their grandchildren and book tickets for a movie.
In short it’s for normal people, and they’re in the majority. We, the developers, designers, content creators and enthusiasts have to understand that we are not normal computer users. We’re the modern-day equivalent of the auto mechanic, the kit-car fan, or the tuning enthusiast and we’re sorely outnumbered, by both the young and the old who just want computing to be easier. And for it to waste less of their time.
I can’t see why the computer industry has such a hard time understanding this.
I used to think users were stupid, and that one could hardly place the blame at Redmond’s feet whenever they did something daft and got another obvious virus. Then, as I saw smarter and smarter people fall for it, I realised that this is how it happened.
Microsoft assumed their users would be savvy and relaxed certain things to allow users to bodge around more and this meant more vectors that the regular folk could catch an infection from. Of course it’s the fault of those who coded Windows and allowed stupid things to happen, and IT departments the world over had to wallpaper over this issue day-in, day-out.
Stuff like this simply shouldn’t happen, and it’s our job, as IT people, to make sure it doesn’t. Much like the dumbing down of cars (oh crap, a car analogy; please forgive me) didn’t put mechanics out of business or drive people (oops, a pun - it’s ok, I’m nearly done) away from designing and improving new cars, the simplification of consumer computers will not lead to a dearth of new programmers. Geeks will be geeks, and the rest can get on with their lives and jobs.
My mother calls me monthly to tell me that her “google” isn’t working right.
She means she has been signed out of her iGoogle account.
I’m really astonished by how many people have proclaimed the end of desktop computing as we know it, even though so few of them have actually used an iPad.
I agree that there are many things wrong with the usability of modern computers, but did you ever stop to think that some people really are just stupid? Every single person I know can understand the difference between a URL and a Google search.
What’s ironic is that you, like many others, have begun to suggest that we should design computers for the bottom 1% of users (in other words, the ones Googling “facebook login”). How is this any better than designing computers for the top 1% of users (the geeks)?
I’d say at least 80% of users (yes, even my mother) use multitasking to some degree, even if it’s limited to a browser window and an instant messaging client, or a word processor and a web browser. And I’m sure that those who don’t understand what multitasking is could have a sufficient grasp of it if they bothered to use their brains for 30 minutes to learn something. Why don’t we just cover cars in rubber, set the speed limit to 10 mph, and let anyone drive without a license?
Yet now that the iPad has come around, everyone is proclaiming how the obvious solution was to just start removing functionality. The iPad doesn’t abstract the manual transmission into an automatic transmission (which is the obligatory car metaphor I keep hearing), it removes the transmission.
If Apple had done something truly revolutionary, do you think there would be so many people arguing about its functionality and usability? And please don’t give me this “well back in the day CLI users whined about how inefficient the Mac’s GUI was” because, even today, some things are better done by the CLI, and nothing is preventing me from using a CLI when I want to.
But to be fair, I have not used an iPad either. The only way to know for sure is to wait and see.
My take on the “just a big iPod Touch” thing: http://blog.insightvr.com/?p=224
@Rick Boatwright — I see this concern come up a lot (what if kids grow up with this kind of tinkering-hostile computing device). I understand it, since I grew up with TRS-80s and computers that you virtually had to tinker with in order to get anything done.
But I think this gives kids, and tinkerers in general, a bit too little credit. People who want to tinker with things will be able to, I suspect; for a few days earlier this week I had a “Hello World” app I wrote installed on my iPhone. Even if “computing appliances” become the norm, there will obviously be people developing applications for them, and there will almost certainly be ways for people to come up with “cottage” applications for them.
My only concern with the iPhone OS devices is with respect to developer licensing — I think they really need a “hobbyist” license of, say, $25 one-time, rather than $99/year, to let people create and sign developer-only certificates that can be used to install apps on individually-provisioned devices. But there’s no guarantee that the Way Things Are Now is the Way Things Will Always Be — and of course on non-Apple devices it isn’t the way things are now. (One could even argue that the recent rewrite of Google Voice as an HTML5 app for the iPhone suggests that it isn’t entirely the way things are now even on Apple devices.)
@dave
“What I do expect, though, is that if people want to do things themselves, they need to learn how to do so instead of expecting everything to magically happen. If you want some food, you may not have to go out and pick it from the ground, but you do have to find your way to the supermarket.”
Once upon a time people did grow food in their own gardens. Then supermarkets (food stores) came along and some ridiculed the idea that food users would buy stuff shipped for miles when they could just walk out their back door and pick it.
The world changed. Everyone now thinks a FoodStore is normal.
@Ian,
Do keep in mind I didn’t suggest the iPad was a, or the, solution. Far from it.
In addition, I am definitely not denying that some effort may be involved, nor am I suggesting that computers should be designed for “the bottom 1%” (although I think you vastly underestimate both the intelligence and the number of users who have difficulty with modern OSes).
I’m not talking about computers for infants. I’m talking about computers for non-enthusiasts.
Excellent observations Ed! This web-wide discussion really reveals people’s critical thinking skills, or the lack thereof. It’s fascinating. I love it.
But get this… I’ve been thinking about the old-school premise that it’s the Marketing Department that should determine what a product should or shouldn’t do related to optimizing sales. Stop and think about coming at it from that angle and this discussion gets even more interesting. AMD (Apple’s Marketing Department, uh, Steve Jobs?) sees a vast sea of untapped customers -> the non-geeks and tinkerers we’re talking about here. They want to optimize for that market.
The iPhone has multitasked from day one, but only under Apple’s control. People usually cite the desire to play Pandora in the background while doing something else. But the AMD says no, we want you to play our iTunes content in the background.
Push Notifications is a brilliant device to delay the need for free-for-all app multitasking and retain control of the interface. Apple also made it so you don’t have to leave an app by providing developers with access to the camera, music, photos, contacts, email, maps, the web, etc, delaying the need for multitasking further still.
And is the iPad’s lovely design faithful and true to the ideal of human-computer interaction, or is it also meeting the goals of the AMD as well?
We have a choice, and the half-dozen existing free-for-all mobile interfaces give the tinkerers what they want, at a cost (e.g. app store spam). But I’m glad there’s at least one company that’s being a butthole about NOT putting out just another free-for-all environment, whether it’s for an ideal or for a dollar, or both.
And now everyone’s all ooh-ahh about Windows Phone 7 Series (gotta stop and chuckle for a minute) but in the coming months its fundamental flaws will be revealed and it will, again, be another interface for young people, I suspect.
I’ve been nerding around computers for most of my life, and I’ve started using Google as my URL bar. Why? It requires less thought. And that is precisely all the reason I need.
I’m ready for my flying cars and personal jetpacks and computing interfaces as simple as toasters. I just don’t want to bust out the CLI when I’m questing for information.
Please, make it simpler for the average Joe to find what he’s looking for. That does mean writing software to the lowest common denominator— but it doesn’t mean the audience is necessarily as stupid as a sack of hammers. Would you ask your local baker or carpenter to put up with virus- and adware-infested cheap-ass Windows machines, and still mock them for trying a notional shortcut of using a browser’s search bar to get to the Google home page?
Hell, I’ve done that too.
Designing computing interfaces is hard, especially when the target is inching toward simplicity but still requires a silly number of use cases behind the curtain. But that’s where we’re headed…. no two ways about it. So let’s live up to the challenge.
It is valuable for readers to be able to choose their level of engagement — with a book or article, with software.
With a book, this means formatting it so it can be easily skimmed.
With software, this means being able to find the most common operations easily and choose whether (and when) to move to the more complex operations. Photoshop Elements is a good compromise between power and simplicity.
Microsoft’s approach using customizable tool bars, however, strikes me as awkward — replacing flexibility with a confusing interface where changes have mainly eccentric significance. It is a difficult thing, offering both stability and flexibility.
One of the needs of users is to have commands stay in place (Stay!) and not appear and disappear or move around in different documents.
The comment I left on the aforementioned site…
Maybe instead of calling people dumb, we should see this as a learning experience. The Internet is a vast world created by the nerdery that broke into the mainstream. Is it really surprising that someone’s mom or sister doesn’t use it the same way you do?
When you open a browser window for the first time, you are brought to either Yahoo, Google or MSN or maybe Bing, and what happens with no user interaction at all?… focus is put on the search bar by the browser. Users are taught from the moment they first get online that the way to find stuff is with the browser search bar.
There is no indication that someone can even type in the address bar. Just because it’s white I am supposed to know I can type there? And if I want to send someone a link, I don’t even need to copy and paste it, I just hit file -> send link. Who would even think to interact with the bar?
Maybe if we studied what was happening here, we would work on creating a better user experience, instead of just assuming most people are dumb.
Creating simpler user interfaces isn’t “pandering to ignorance.” It’s recognizing that while you may enjoy computers for their own sake, other people don’t. For them, computers are only a means to an end.
This is as it should be. You’re the one with a job in the industry. They’re the end users who pay for your expertise. If someone’s got to find computers and software interesting for their own sake, better it should be you.
You know what the alternative is? Have you ever noticed that the fashion industry is run, not just by, but for, people who find fashion interesting for its own sake? That probably isn’t you. You just want clothing that will look reasonably good on you, mesh with your existing clothing, and not require you to memorize a bunch of weird rules or go broke paying for dry cleaning.
So much for the beginning of the day, when you’re getting dressed. At the end of the day, you hope the former Eng. Lit. and Cinema majors who keep a professional eye on your entertainment don’t forget that you want a decent, fast-moving story and some good lines of dialogue, and that the cook at your local restaurant doesn’t go hog-wild with organ meats and obscure vegetables.
In short, forgive as you hope to be forgiven. We all look like stupid barbarians to someone.
One thing people keep forgetting is iPad is a computerized pad of paper, in the same way an iPod is a computerized music box. There is no excuse for adding a computer to these everyday items and making them harder to use. You don’t have to learn how to make paper, ink, or how to set hot type in order to read a book.
> if [iPad is] the only computing device that someone grows up WITH how do they learn to manipulate the cybersphere from a position of control AT ALL?
That is the same exact “dumbing down” argument that was made against the World Wide Web in the 1990’s and the Mac in the 1980’s and the personal computer in the 1970’s. You’re making me think the iPad will be a huge success.
> What I do expect, though, is that if people want to do things themselves, they need to learn how to do so instead of expecting everything to magically happen.
People seem pretty happy to learn how to do things themselves on an iPhone. But wow do they hate learning how to do things with a PC. The difference is the additional CS/I-T work that is forced upon them or not.
> you, like many others, have begun to suggest that we should design computers for the bottom 1% of users
That is a perfect example of computer science nerd imperialism at its worst.
People who don’t want to learn computer science are more like 90%, and they’re not below you in any way, many are much, much smarter than you. They have special skills that aren’t computer science skills. You should not look down on them because they don’t want to swing the same CS/I-T wrench that you swing.
I’m working with an I-T group at a law firm right now, and there isn’t a single lawyer here who can run a PC. We can say they are stupid or we can admit that there hasn’t been a computer good enough to replace their common task of annotating 300 page documents. Word/Acrobat with a mouse is not it. iPad may do it with a virtual 300 page document they can manipulate like the paper version.
I play the piano and often have a piano keyboard plugged into my computer. Imagine if that was the primary computer interface and I said no computer for you until you learn to play the piano!
> As engineers, we should seek to educate the consumers of our products, rather than encourage them to stay in the dark.
This is the same logic that a cat uses when it brings you a mouse. Do you eat the mouse?
People are not “in the dark” about engineering. You will not enlighten them by teaching them engineering. They’ve heard of it. They are not interested, or else they would be engineers.
The solution is to design your products so that the users don’t have to be engineers. Otherwise you are just being lazy, pushing unfinished engineering work down the line to the user.
> I’d say at least 80% of users (yes, even my mother) use multitasking to some degree,
Every single iPhone OS user, all 100%, uses multitasking all day long. A very, very common task is to talk on the phone with someone while reading an email they sent you earlier. It’s common to have the iPod playing all the time, and iTunes downloading a movie while you do something else.
Second, you’re talking about computer tasks, not user tasks. Users multitask with their iPhones by running 50 apps in a day as they go about their business and the device never stalls, never slows down, never interrupts their multitasking by asking them to help the computer manage its multiple tasks.
Further, the success of the App Store is because users are running MORE apps than they do on Mac/PC, not less. Users who have no 3rd party apps on their Mac/PC have dozens or hundreds on their iPhone.
The truth is, most users do not know what it means for an app to be “running” or not. That is deeply computer science based. You have to know about RAM and processes. They don’t know that the reason their computer has slowed down is an app they used 3 days ago for 5 minutes is still running.
> Yet now that the iPad has come around, everyone is proclaiming how the obvious solution was to just start removing functionality
No, no functionality has been removed. The only difference between iPhone OS and Mac OS is the top layer, the user interface, which is built for touch. The bottom layers are the same OS X.
> nothing is preventing me from using a CLI when I want to.
CLI is there but it’s an add-on. The Cisco guy already drooled all over this right after the iPad launch. You can run a whole PC desktop over VPN on iPad and have 52 Linux CLIs if you want.
Your PC doesn’t have touch but you don’t say that Dell is preventing you from using touch, do you? You add it if you want it.
Okay, that’s weird. My comment posted without its opening line: “Funkatron is right.” I can only hope the existence of that line can be inferred from the rest of my remarks.
Terrific article. Smart phones and the iPad have me thinking a lot about the possibilities of computing again. I’m reading Alan Cooper’s “About Face” for the first time.
A couple of his Design Principles:
“Software should behave like a considerate human being.” Have you ever seen a human try to multitask and be fully engaged with you?
“Managing disk and files is not a user goal.” “Data/files are located on the operating system” vs. “my letters and presentations are immediately available when I use my software.”
Hey Ed, thanks for the mention & the reply. Like I said in a comment, I think I actually agree with almost everything you’ve said here; I think it was mostly the idea that a failure on the part of Google or developers was in some way responsible for the fallout that occurred that inspired the post.
Great post and great comments here; I’m hoping to add a follow up post with a few more developed thoughts along these lines soon.
@Phil,
Right on, glad to hear it.
@Teresa,
Weird; thanks for following up on that. I’ll keep an eye in case others report such an issue.
Everyone is overlooking the fact that readers can use Facebook Connect to leave comments on the ReadWriteWeb page in question. No wonder people were confused!
It’s not just that they Googled “Facebook Login” and came to that RWW page; it’s also that there is a legitimate Facebook login form at the bottom of the page—with the Facebook logo!
There are (at least) two failures of design here: 1. We haven’t taught people how to use a URL to determine what page they’re on. 2. We haven’t effectively communicated that you can use federated login schemes, like Facebook Connect or OpenID, to login to various sites with a single, universal key. This is a pretty complex concept that I don’t expect a lot of people to understand—in terms of how it really works behind the scenes.
In other words, there are two web design problems that need better solutions: 1. Helping people find what they want. 2. Simplifying or eliminating the need to login.
“That is a perfect example of computer science nerd imperialism at its worst.”
Yes, and saying that children shouldn’t be allowed to drive is adult imperialism at its worst. The difficulty of making such a system safe for children to use far outweighs the benefits of kids driving around by themselves.
“People who don’t want to learn computer science are more like 90%, and they’re not below you in any way, many are much, much smarter than you. They have special skills that aren’t computer science skills. You should not look down on them because they don’t want to swing the same CS/I-T wrench that you swing.”
Thank you for insulting my intelligence, and assuming I’m some IT drone. Again, I know plenty of people who know nothing about the nitty-gritty of computers, yet somehow seem to manage to use a computer. Similarly, I have almost no understanding of how an automobile works. Miraculously, I can still drive one. I’d hazard a guess that most creative professionals are power users. And yet… I see very few of them with CS degrees.
“iPad may do it with a virtual 300 page document they can manipulate like the paper version.”
Yes! The touch interface really is amazing! And I don’t mean that sarcastically, either. But why does functionality for those of us who can take advantage of it have to suffer?
“Every single iPhone OS user, all 100%, uses multitasking all day long.” … “Second, you’re talking about computer tasks, not user tasks. Users multitask with their iPhones by running 50 apps in a day as they go about their business and the device never stalls, never slows down, never interrupts their multitasking by asking them to help the computer manage its multiple tasks.”
No, you are talking about computer tasks, not user tasks. I’m well aware that the iPhone can perform certain tasks in the background. And for the iPhone, this is sufficient. The iPad, on the other hand, has ample screen real estate and a significantly faster processor than the iPhone. If I want to view two documents side by side, I can’t. It doesn’t even matter if they’re both Pages documents. If I want to use an email as a reference for something I’m writing… I can’t. If I want to perform research on the Internet and make an outline or write notes… I can’t.
Or more accurately, I can. After switching to the home screen, finding the other app I’m trying to work in, and opening that. And then I have to switch back again. That way of working is fine for the iPhone, but I don’t think it’s the best solution for the iPad. In fact, functionality would be far improved if there were only a way to quickly switch contexts… the apps don’t even need to run in the background so long as they can maintain a persistent state.
“No, no functionality has been removed. The only difference between iPhone OS and Mac OS is the top layer, the user interface, which is built for touch. The bottom layers are the same OS X.”
I’m arguing that the graphical interface that is to be the successor to the multi-windowed interfaces of today is far too simplified. Again, I have to jump through hoops to do something as simple as work with two documents at once. I don’t care if it won’t run three servers and a filesharing client in the background while idling on 22 irc channels for three weeks. You may find this hard to believe, but there are legitimate reasons for multitasking that aren’t only for “computer science nerd imperialists”.
“CLI is there but it’s an add-on. The Cisco guy already drooled all over this right after the iPad launch. You can run a whole PC desktop over VPN on iPad and have 52 Linux CLI’s if you want.”
If the rest of my life is to be spent jumping through hoops to connect to some server to work on something that doesn’t fit the iPhone OS’s unitasking restriction, I might as well quit while I’m ahead.
“Your PC doesn’t have touch but you don’t say that Dell is preventing you from using touch, do you? You add it if you want it.”
If I want to change something about Mac OS X, I can. In fact, I don’t even have to use Mac OS X… I can do whatever I want to my hardware. But the iPad won’t even let the hackers have their fun. It’s Apple’s way or the highway. I can’t even launch my own applications without Apple’s consent and $99 in their direction. Yes, I’m not being forced to use the iPad, but why start down such a slippery slope?
I think Apple deserves a lot of credit for the UX, but the locked-downness of it is a terrible trend, something that could destroy the exact openness and freedom that made something like the iPad viable. What if, in the end, tools like the iPad turn TCP/IP into Apple/IP®? Because, after all, that’s where the strategy of Cocoa Touch is headed. You don’t need open standards when people can only do a limited number of things with a device like the iPad.
Freedom includes the right to be stupid. There is no such right in the iPad world. If we have to put up with a fraction of the people trying to log into a news article, I say it’s well worth it. If this is being stupid, I am stupid and remain wary of those who would have me be “smart”.
Fair warning:
Thanks!
I remember watching a broadcast of the first woman in space, via the space shuttle, and the male newscaster said to the female newscaster “this is an amazing day for women around the world” to which she replied “it won’t be amazing until we no longer talk about it.”
Funkatron —
I’ve rechecked the text of my first comment. It isn’t just missing its first line. It’s also missing a couple of minor changes I made further on in the text. That is: it’s my first draft.
Sequence: I wrote the first draft in the text entry box, reviewed it in “preview” mode, and added a new first line and made some other minor changes to it in the text entry box. I then hit “submit” without previewing my edited version.
My guess is that the version in “preview” was what got posted, rather than the revised version in the text entry box.
Imagine being an auto-manufacturer and deciding for whom to design your cars: professional mechanics (IT Pros), tinkerers (designers & developers), sports car enthusiasts who insist upon manual transmissions (geeks), or the other 90% who deal with cars because they’re a necessary evil, required to get from one place to another (typical users whose only application is a browser).
There are markets for each and sometimes you can create multiple products, but if you have to pick just one, it may be possible to find overlaps, abstract the advanced features, or just accept that your product won’t please everyone.
@Teresa, I’m sorry. That’s frustrating. Comments seem more than a little mucked up. I’ll have to mess with them.
Well said, Ed. I feel dumb.
“I’d argue that modern computer interfaces, in a rush to offer flexibility and capability, make it possible to steer with your hands, feet, teeth, and knees — and don’t make it particularly clear which one is best.”
I fully agree. Computers are created by nerds for nerds who think that regular humans should think like they do, but don’t and mostly can’t.
It’s the same as someone who knows a spoken language that is different from the one most people grew up with (pick a country whose language is quite different from your own) and who expects you to learn that language in order to use their product. It just isn’t a sane premise.
Ian,
I am an IT professional, having worked at desktop/user support for over 15 years on both Mac and Windows OSes.
I agree that your attitude displays the very worst of the old guard’s “I’m king of the hill” mentality.
Face it, IT pros aren’t king any more. There are a LOT more of the regular folks than there are IT pros. Software SHOULD be written for the lowest common denominator.
Not to say that apps for the geeks won’t be written. Geez, we’ve gotta have tools to do our jobs too!
But,
“If I want to change something about Mac OS X, I can. In fact, I don’t even have to use Mac OS X… I can do whatever I want to my hardware. But the iPad won’t even let the hackers have their fun. It’s Apple’s way or the highway. I can’t even launch my own applications without Apple’s consent and $99 in their direction. Yes, I’m not being forced to use the iPad, but why start down such a slippery slope?”
Obviously, that tool (the iPad) hasn’t been designed for YOU, but for the mass of users that Apple wants to target, which is the point of this article! I strongly feel that the iPad WILL be a huge success, and marks the future (or the beginnings of such) of computing.
Remember the KISS principle? Apple’s got it down pat, and anybody that wants to sell computers and software to run on them in the future had better remember it, cause that’s what people want.
I’ve been in support roles helping the common user for over 15 years, and most of them were WAY smarter than me. But they don’t have the TIME to learn my way of doing things, nor do they want to even if they did. They prefer for it to be easy, simple, so they can go on with the really important things they want to do.
Klutzing around with a box that requires a full-time desktop tech to keep it running isn’t, believe me, what they really want; it’s the LAST thing.
And they shouldn’t have to, either.
Ed is fully right about this.
@OlsonBW No, if you visit another country and you want to get an authentic and fairly priced experience, you learn the lingo and you get treated like a citizen. If you want everything spoken to you in English, you will pay the price, and won’t get exactly what you want because some nuances get lost in translation.
The same goes for Information Technology - you can pay over the odds for an Apple mobile device that speaks your language, but won’t do exactly what you might need of it, or you can buy a Microsoft/Google/Linux OS device that gives you much more control but you will have to figure a few things out.
If you want to drive a car, you have to take a written and practical test (in the UK at least), to prove you know how to operate it. Why? Because you could cause harm to yourself and others. The same goes for guns, certain types of machinery etc. I don’t see why digital devices are any different, or the Internet for that matter. It is unreasonable to expect a system to cater for your every need without having to learn a single thing. I would love there to be a legally required qualification to use an online device (as again you can possibly harm others as well as yourself) - and it’s a tribute to most user interfaces that users can usually figure them out pretty well without any formal training.
@Rich Rosen I totally and absolutely don’t agree with your statement that Microsoft makes simple programs which Just Work™. I am an employee of a company working on Linux and I am always shocked by how endlessly complicated using Windows is and how incredibly buggy their UI is (meaning bugs in the design of the UI, not software bugs). Of course, for my work I use a computer with slightly different tools than what I would expect my mom to use, but the stupidity and useless complexity of most programs which are claimed to be just for “normal people” (Picasa, Microsoft Office 2007 … omg :() just astonishes me. And people are paying for this? Wow!
“Dave Nattriss @OlsonBW No, if you visit another country and you want to get an authentic and fairly priced experience, you learn the lingo and you get treated like a citizen. If you want everything spoken to you in English, you will pay the price, and won’t get exactly what you want because some nuances get lost in translation.”
No Dave. You have it wrong. You are the foreigner invading another country with your other language. These people that are using computers … they existed BEFORE you and they will exist AFTER you. You are the one that didn’t learn their language, the native language and you expect everyone around you to learn your, not their language. That’s why computer software is so ****** up.
@OlsonBW No, sorry, it is not natural or normal for an organic human creature to edit a digital image on an LCD using a piece of plastic with a ball inside. This is new territory here for the human race, with new rules.
The foreign country is the digital world, and it has its own languages. Software tries to model the real world as much as possible, but at the end of the day there are abstract concepts in software that either don’t exist in the real world, or that do but the common person rarely if ever encounters them.
Computers have allowed us to become image manipulators, for instance. This used to be something that only skilled professionals could do, so it’s perfectly fair that if you want to have a go at it yourself, now that the equipment is available cheaply/for free, you should have some idea about how it works. Of course, you can just figure it out for yourself, as most people do, but that’s your choice. Photoshop is for skilled professionals, not untrained amateurs.
Computers are not mind-readers… yet.
@Matěj Cepl - I guess I wasn’t clear about what aspects of the Microsoft product line I was referring to. Not their Office software, which is most certainly bloated and overly complicated, and guilty of every one of the crimes that sadly some people here are proudly advocating. I was comparing the tools developers use for building applications: the typical (and this IS a generalization) open source API/framework for developing software, designed to be SO powerful and flexible that it becomes a Sisyphean/Herculean task just to do something trivial (like set up and run the “Hello World” app), versus a Microsoft .NET style “widgety” approach in which a salesman can walk in and say “See? I’ve just built you a storefront app! Buy this from me!” - but when you try to customize the damned thing according to your requirements, armies of outside Microsoft-trained consultants need to be thrown at the project to make it the way you want it (and it still doesn’t work in the end).
My point was that if we as developers cannot get our act together to supply ourselves with APIs and frameworks that are “right-sized” for what we’re trying to do, that make simple tasks simple but are still customizable and extensible in an orderly fashion when more advanced work needs to be done, then how can we expect to build software for END-USERS that is right-sized and right-functioned for them?
i dont think the car example is really very accurate. sure, operating a motor vehicle isn’t tremendously difficult, but we still make everyone take a class to learn how to drive and we make them pass two tests before giving them the privilege to drive on roads. we do this because driving is a complex activity and a lot of people screw it up, even after extensive training.
sure, it’s easy to say that it’s our fault for not making it simpler. but we’re fighting battles between simplicity and utility. sure, you can simplify something until it can’t do anything at all. a computer with one button is simple. but it can’t do much. it’s easy to say that it all should be simpler, but maybe simple isn’t as possible as you think. maybe complex is what is necessary and the solution is that some people will have to take a class and pass two tests before being able to navigate the system well enough to accomplish their goals.
I’m not going to reply to the earlier posts but I will reply to this one.
“Rich Rosen @Matěj Cepl - I guess I wasn’t clear about what aspects of the Microsoft product line I was referring to. Not their Office software which is most certainly bloated and overly complicated and as guilty as every one of the crimes that sadly some people here are proudly advocating. I was comparing the tools developers use for building application:”
Rich - I couldn’t agree more. The concept of how you make programs with current compilers, ANY compilers, is still in the dark ages at best. Until compilers get up to and past the Model T stage there isn’t a whole lot of hope that most programmers will be able to make a really good program that isn’t way too slow or too buggy.
Way too much of a developers time is spent on memory management. In fact any time at all that is spent on memory management is wasted time.
LET ME FINISH.
When you drive to a store or even go to a website, do you write down every single thing about what you are going to do to get there? Do you manage all of your thought processes? Do you manage every inch of finger movement to get the job done?
If you do, you need serious help or have some kind of brain issue. I don’t mean that in a mean way, just an honest way.
In real life you concentrate on what you want to get done. Where you want to go either in a car or on the internet. Your brain has learned long ago how to do most stuff behind the scenes. For the most part, unless you are having an off day, you don’t pay attention to the little details. Or at least don’t have to.
Now, with compilers, you DO have to think about all the little things. How long have programmers been programming? Since the 50s or even before? It is about time that compilers get their act together and take care of ALL memory management and all the little things that you shouldn’t have to worry about. A programmers focus should only have to be on what they are trying to accomplish.
Until they are freed to do exactly that, well, they won’t have the time to be truly innovative.
PS: Yes I know there is compiler X or Y. No compiler is up to snuff yet. I won’t argue about it. If you think otherwise you just have your head in the sand and are a glutton for punishment. Be honest though. Don’t you Wish that you could concentrate on just what you want to create instead of all the other stuff you have to do just to get it to work? That’s what I am talking about. Programmers of the future won’t have to do this. The question is how long it will take.
@OlsonBW: While I think your comment may be considered ancillary to the subject of this particular thread, I couldn’t agree with you more. My focus was not so much on the compilers (though even languages that supposedly manage memory and such for you will still require tinkering when it comes to real world performance tuning) but on the higher-level abstractions for building applications. Our choices seem to be:
1) A Microsofty salesy widget-driven “out-of-the-box” framework that a salesperson could easily demonstrate to a potential client. “Look, here’s your whole website, it will just take a few weeks to customize it the way you want. Where’s my money?” In the end, the project will require years of efforts with more bodies brought in than necessary because it turns out customization of these pre-built widgets is nearly impossible meaning you have to work from scratch.
2) An open source and/or Java/JEE based approach that has all the bells and whistles you could possibly imagine. When it comes time to demo it to management, it takes four weeks to get the Hello World app running and another month to showcase a skeleton of the site looking as if… well, as if developers designed it. It also requires years of effort because the lack of solid documentation makes it impossible to get anything done, plus the codebase changes over the lifetime of the project without regard to backward compatibility.
All this verbiage is ALSO an ancillary point relative to the subject matter of this thread, but the real point is this: if we as developers cannot build application development platforms FOR OURSELVES that are right-sized and right-functioned, what hope is there that we can build applications and interfaces for end-users that have those same qualities? That’s the point I was trying to make.
Meanwhile, apropos another comment of yours, @OlsonBW: never mind what @Dave has to say, you got it right on the money. “You are the foreigner invading another country with your other language. You are the one that didn’t learn their language, the native language and you expect everyone around you to learn your, not their language. That’s why computer software is so ****** up.” Well said.
Here’s a test: If users of your email client, social networking interface, or other messaging program try to drag an image from another window into messages they are composing, and this doesn’t work in your application the way you wrote it, do you:
A) “educate” the user that what they’re doing is wrong, by badgering them with explanations about the HTTP protocol, Einsteinian relativity, the internal combustion engine, and how PHP works (or doesn’t) - in the hopes that these people (who you obviously think are not as smart as you are) will understand the world the way you do and learn the “right” way to use your product, or…
B) observe usage patterns for your product in the user community and enhance the product to provide the functionality that users want and need based on those observations, learning from what your users expect from your product rather than dictating to them how they should use it?
You’d be surprised how many developers would answer A!
(Actually, no, you wouldn’t. :-( Look at the discussions some people spawned off of this one, unapologetically mocking users, referring to them as ignorant, and posturing as the uber-intelligent vanguard protecting the world from a future out of the film “Idiocracy”. Laughable.)
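To make option B concrete, here is a minimal, purely hypothetical sketch (in TypeScript, using plain DOM APIs) of what “just let the drop work” might look like in a web-based compose box. The element ID and the insertImage helper are invented for illustration, and how an image actually gets uploaded or attached is left to the application; this is a sketch of the idea, not anyone’s real implementation.

```typescript
// Hypothetical sketch only: accept an image dragged into a compose area
// instead of silently ignoring it. "compose-box" and insertImage() are
// invented names, not any product's real API.
const composeBox = document.getElementById("compose-box") as HTMLElement;

composeBox.addEventListener("dragover", (event: DragEvent) => {
  // Without preventDefault() the browser refuses the drop by default.
  event.preventDefault();
});

composeBox.addEventListener("drop", (event: DragEvent) => {
  event.preventDefault();
  const dt = event.dataTransfer;
  if (!dt) return;

  // An image dragged from another browser window usually arrives as a URL...
  const url = dt.getData("text/uri-list");
  if (url) {
    insertImage(url);
    return;
  }

  // ...while an image dragged from the desktop arrives as a File.
  for (const file of Array.from(dt.files)) {
    if (!file.type.startsWith("image/")) continue; // only handle images here
    const reader = new FileReader();
    reader.onload = () => insertImage(reader.result as string);
    reader.readAsDataURL(file);
  }
});

// Placeholder: attaching or uploading the dropped image is an application
// decision, not part of this sketch.
function insertImage(src: string): void {
  const img = document.createElement("img");
  img.src = src;
  composeBox.appendChild(img);
}
```

None of this is difficult; the point is that the handful of lines above is usually less work than writing the explanation for why the drop “shouldn’t” work.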
I’m sure some out there are shouting “but I’m a knowledgeable developer, that’s something I would have known to do in my code!” The point is that this is just an example, one based on knowledge we already have about user behaviors. There are examples out there that we haven’t witnessed yet, and it’s arrogant and foolhardy to believe that you as a developer will know everything about what the users should be doing with your code, before they do.
It’s somewhat refreshing to see that many developers ARE getting the idea that the user is the focus, not them; that the “right” way to use software emerges and evolves from observations about what users do and want to do with software, not from Biblical edicts from developers that dictate right vs. wrong. At the same time, it’s sad to see some developers get even more entrenched in their deprecated attitudes about who needs to be “educated” about what.
I love this post. Finally, someone who gets it. Computers are tools. Tools for living life (or working a job). They are wonderfully complex and maddeningly simple. It’s fine for developers to say you have to at least know such and such to use the software. But once it’s in the wild and lots of people are using it for something it was not designed for, or in a way that wasn’t anticipated, it would behoove the developers to take that into account rather than call the users stupid. Using Google to find the login page for something I forgot to bookmark is a really common example of this; I might do it dozens of times for the same page until I finally get around to bookmarking it, and I am sure I am not alone. I think a lot of functionality could be gained from looking at how people actually use technology rather than demeaning them for not doing it how they are supposed to.
I believe it was Pris in Blade Runner who said to J.F. Sebastian, “We’re not computers; we’re physical.” And to Philip K. Dick’s point, humans are physical as well. Part of being physical and interfacing with the physical world is the expectation that the interfaces around us are relatively dumb and static. Computer interfaces are adaptive and dynamic, driven by user choices and increasingly by algorithmic inference, and they present information and feedback in dynamic ways. This dynamism is the strength of computers, but it’s also a weakness if the interaction is so abstract that it alienates or confuses the user.
The foreign country analogy is fairly accurate. However, isn’t the entire point of technology to lower the bar to the point where interaction is easy and seamless, so we don’t have to worry about the language we speak? Otherwise, what’s the point of using technology? If I want to read the news and I have to jump through hoops to do it, why not just pick up the newspaper instead?
@Rich, bravo! I’ve tangled with my share of obstinate, condescending purists who couldn’t care less about users and their input (including one of the spawners of bad attitude you mention in your post). Your all-too-realistic A/B test is one the majority of developers I’ve worked with would sadly fail. I also agree with you about development tools; we are in a quagmire where the options available to us sit at opposite poles, neither of which can accomplish what we want.
@Victoria, bravo to you too. I don’t know if you realize how reasonable it would be to have web applications like the Google home page do exactly what you describe. A cookie could record the observed usage pattern: every time you search for “facebook login”, you click the link to the Facebook login page. The next time you perform that search, an area at the top of the page could highlight the link you usually click, even if it is no longer the top result.
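For what it’s worth, here is a minimal, purely hypothetical TypeScript sketch of that idea: remember which result a person habitually clicks for a given query, and surface that link at the top next time. It uses localStorage rather than a cookie only to keep the example short, and every name in it is invented; it is not a description of how any real search engine behaves.

```typescript
// Hypothetical sketch: track which result a user habitually clicks for a
// query, so the page can pin that link above the organic results next time.
type ClickCounts = Record<string, Record<string, number>>; // query -> url -> clicks

const STORAGE_KEY = "habitual-result-clicks"; // invented name

function loadCounts(): ClickCounts {
  return JSON.parse(localStorage.getItem(STORAGE_KEY) ?? "{}");
}

// Call this whenever the user clicks a search result.
function recordClick(query: string, url: string): void {
  const counts = loadCounts();
  counts[query] = counts[query] ?? {};
  counts[query][url] = (counts[query][url] ?? 0) + 1;
  localStorage.setItem(STORAGE_KEY, JSON.stringify(counts));
}

// If one result has been chosen at least `threshold` times for this query,
// return it so the page can highlight it, even if it no longer ranks first.
function habitualResult(query: string, threshold = 3): string | null {
  const forQuery = loadCounts()[query];
  if (!forQuery) return null;
  const [url, clicks] = Object.entries(forQuery).sort((a, b) => b[1] - a[1])[0];
  return clicks >= threshold ? url : null;
}

// Example: after a few searches for "facebook login" that all end in the same
// click, the login page gets pinned at the top for that user.
recordClick("facebook login", "https://www.facebook.com/login");
console.log(habitualResult("facebook login")); // null until the threshold is met
```

The mechanism itself is trivial; whether a search front end should do it at all is another question.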
But can’t you already hear the screams and yowls of the condescending purists? “NO NO NO NO NO!!! A search engine front end isn’t supposed to do things like that, that’s WRONG, it offends my religious sensibilities, please let me hit this ignorant user over the head until she understands the RIGHT way to use this page!” In calling users stupid and ignorant for not following a purist religious code about how the internet should be used, these people ensure that they will never be asked to work on user-facing software projects that MATTER. At least we can only hope they don’t!
Last but not least, @Ankush, I would change your wording only slightly: Computer interfaces are SUPPOSED TO BE adaptive and dynamic. When they are, that’s fantastic, and when they’re not, that’s one place where there’s room for improvement.
@Sviergn - I stand corrected. However, I would argue that this comes down to the gap between the users’ and designer’s mental or conceptual model. Here’s a nice little article about the subject: http://www.interaction-design.org/encyclopedia/mental_models.html
@Ankush, I wasn’t trying to correct you, just noting a disparity between the ideal and the reality. Interesting link.
Great post; I came via a link on my fav writing site.
Someone’s post in the great list above, about kids learning to use “easy way” simple interfaces before learning how the process works, reminds me of a tale from my own days of learning how to use computers …
Let us return to when dinosaurs walked the earth:
I never touched a computer before college. In college, I had to learn … GASP … DOS as my method of interfacing with the blinking green (or amber) screen. I learned it fairly quickly and easily and soon could do just about everything I needed. (To put this in perspective, I installed my first hard drive with a screwdriver and a butterknife, and thought those 10 megs of computing might would last forever.)
Then came Windows 3.1. I was in the biz world by then and disdained the cute little cursors. I knew DOS, after all!
A couple of years later, a fresh-out-of-school colleague threw an F-5 fit when his computer locked up. He thought his files were gone forever because nothing happened when he clicked on the document icon.
I pulled out my butterknife (figuratively, not literally), pulled up a DOS prompt and rescued all of his files onto a floppy in about a dozen keystrokes. He looked at me like I was a wizard and a genius all wrapped up into one package. Gee … I’m awesome …
The moral of this tale (other than helping me avoid work)? Yes, the computer is a tool. However, I think it is important to make kids muddle through some of the hard stuff before you hand them the easy stuff on a platter, like learning your multiplication tables before picking up a calculator.
On the flip side, though, overloading power programs for everyday use is just as bad. I am a fairly competent Photoshop and Dreamweaver user. However, I no longer want to start with a blank screen every time I need a web page. Since I am comfortable with the fundamentals, I’ve picked up a couple of cheap WYSIWYG programs that incorporate the basics of Photoshop, Dreamweaver, and Flash into a point-drag-click-publish workflow that saves me a lot of time and debugging.
Rambling rant over! Great post, I’ll be back. Terri
I have to agree with your point to some extent.
Yes, we should give users the simplest interface so they can use whatever the software is. I’ve tried that approach, and it tends to work… for a while. After that, the user tells you how restrictive the software is and how nice it would be if it could do “whatever” out of the box. So you try to accommodate the user and add the new functionality, which makes the interface a bit more complex. And then another one. And another one. And finally you end up making the software even more complex than it would have been in the first place, by piling functionality onto an interface that was never designed to hold that much.
Ed, I agree with your assessment.
Emmanuel, we as developers need to learn how to do market research and to be psychologists. If the user says they want X, they may not really want it. Many psychologists will tell you that people may wish for something, but their behavior tells a different story. We all want an airplane, but do you want to train for two-plus years, or less than a month, to get into the driver’s seat? We need to build cars for people even when they demand a plane… we are the stupid ones!
phil crissman’s entire site contains drop-shadowed text. I’m fairly sure you can consider his opinion invalid.
Nice article.