Predicting the future of computing is difficult, and these days possibly the word “computing” itself brings along baggage of its own.
Today’s computers don’t spend a lot of their time computing. What we find useful in them is further removed from computing with every succeeding generation of these systems.
Think about the implications of Moore’s Law. When computing power increases so quickly, things change. At 10x computing power, everything is faster. At 100x computing power, new things become possible.
It’s hard to imagine the “new things” that 100x computing power makes possible. We’re saddled with our old conceptions of user interfaces, input devices, and of work itself.
Today, there is a lot of argument about touch-interface devices and the future of “real computers”. Real computers are, of course, the ones that we’re used to. They are powerful machines that have keyboards and mice, controlling window- and icon-based user interfaces in which we do serious work. The touch-interface devices are different, and clearly only good for consumption, right?
Or are they? What if these new touch-interface devices are capable of more? What if other interfaces are possible? What is it about the keyboard and mouse that is so necessary to “serious work”?
A keyboard is a very poor interface device that we’ve learned to use extremely well. The common QWERTY layout of the keys was originally created to prevent jamming of the swinging arms of machines called typewriters. Most of us would have a hard time typing on a mechanical typewriter today. But we are so used to the key layout that even when demonstrably better keyboard layouts are invented[cite] that would make us faster and more efficient, very few of us put in the effort to learn them. Similarly, the mouse is a pretty poor pointing device — but it’s what we’re used to.
Reimagining even something as simple as text input is very difficult. We tend to jump to flights of fantasy: solutions that sound like something out of 19th-century science fiction. If we’re lucky, we’re as prescient as Jules Verne, but having a good concept for the distant future doesn’t always help us get there.
We do know that people who are uninterested in computers and technology take to the iPad immediately — they just “get it”. The touches and gestures are easy to understand, not because they map conceptually to how we manipulate real objects, but because the engineers and designers at Apple have attended so carefully to the responsiveness of the interface and the way objects on the screen move and change. There’s a lot of computing power going into making that all just right, and a lot of brain power that went into the details.
You don’t get to a result like iOS on the iPad by asking people what they want. You get it by re-imagining what they need.