Tog on Software Design

Excerpt from Tog on Software Design

The Shape of Tomorrow's Computers

For description & reviews, or to order now, visit
the Tog on Software Design page at

In January 1996, Addison-Wesley published Tog on Software Design, exploring the Starfire project in detail. The following is excerpted from the chapter, "The Shape of Tomorrow's Computers."

Requiem for a Heavyweight

Dateline: Atlanta. May 23, 2003. Today, the last desktop
computer rolled down the production line and into history.
Even those of us who long anticipated their demise were unprepared
for how quickly the unwieldy behemoths actually went. It was as 
if, on a single day, people woke up no longer seeing any need for 
the glass monsters. While a few hold-outs continued buying them--
people isolated geographically, financially, or chronologically--
for most of us, the desktop computer suddenly became a thing of 
the past. Still, this reporter experienced a twinge of sadness 
watching the last worker on the last line experience a twinge of 
pain as he lowered the heavy monitor into a waiting box. 

We visited the hangout favored by many of the former assembly 
workers at the plant. The Ferguson Chiropractic Institute stands 
on a bluff overlooking Silicon Valley, long its primary source of 
patients--and revenue. We spoke with Clara Tydewaller, a rising 
assembler until the fateful day she toppled attempting to lift a 
435 pound, 26" color monitor by herself. "I always told 
them that big screens were going to disappear. Why I remember the 

Hold everything. That's not quite what is going to happen. True, the desktop computer is going to disappear, but not because of portables. In fact, I expect portables, as we know them today, will also disappear, and the sooner, the better. People cannot reasonably be expected to constantly haul around a six-pound portable that requires a battery change every three and a half minutes.

In the next decade, portables will shrink in weight, power consumption, and thickness. The need for back lighting will be eliminated. The need for even a screen, in many cases, will be eliminated as powerful computers shrink down to the size of wristwatches, with all interaction being done through voice.

Reduction in size will not be the whole answer, however. Screens will also be getting bigger.

Big Needs, Big Screens

Stationary computers will be with us for a long, long time. Just as theaters gained a competitive advantage by supplying their spectators with wide-screen color and high-definition pictures, so, too, will desktop hardware manufacturers. As I write this, I'm using a system with three contiguous 21-inch displays. The three screens are filled with letters, articles, illustrations, and various viewports onto information systems, all directly related to the task at hand. Being able to see them all at once offers me a clear advantage over people who must view their cyberspace through a 9" to 16" keyhole. (Moving to a large display area is as much a one-way street as moving from floppy to hard disk. You'll never go back.)

In the not-too-distant future, stationary computer systems with displays as tiny as 20-inch glass monitors may well be the stuff of history museums, not homes and offices. Today, such a prediction may seem fanciful, but only because of a limitation of the moment: yield problems (high manufacturing failure rates) currently plague flat-panel production, keeping sizes small and prices high. Several lines of research promise to raise yield effectively to 100%. (One approach is to use perhaps four "lights" per pixel, so that should a single light prove defective, the pixel would suffer a 25% drop in intensity, not a 100% drop.) Once yield is no longer a problem, panels will grow in size, increase in density, and drop in price.
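
The arithmetic behind that redundancy claim can be sketched. The per-light defect rate and panel resolution below are assumed figures for illustration only; the text supplies neither:

```python
# Illustrative sketch (assumed numbers, not from the text): why
# per-pixel redundancy rescues panel yield. Assume each "light"
# fails independently with probability p. Without redundancy, one
# dead light rejects the panel; with four lights per pixel, a
# single failure merely dims that pixel by 25%, so a panel is
# rejected only when some pixel loses two or more lights.

p = 1e-6                 # assumed defect rate per light
pixels = 1280 * 1024     # assumed panel resolution

# No redundancy: panel ships only if every light is good.
yield_plain = (1 - p) ** pixels

# Four lights per pixel: a pixel fails only if 2+ of its 4 lights fail.
p_pixel_bad = 1 - ((1 - p) ** 4 + 4 * p * (1 - p) ** 3)
yield_redundant = (1 - p_pixel_bad) ** pixels

print(f"yield without redundancy: {yield_plain:.1%}")
print(f"yield with 4 lights/pixel: {yield_redundant:.4%}")
```

Under these assumptions, a defect rate that would scrap most panels outright leaves redundant panels shipping at essentially 100%, which is the point of the approach.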

The Really, Really Big Display

Both P. J. Plauger and Mark Weiser of Xerox PARC have been crying out against the mad rush to miniaturization. During the Starfire project, I became interested in carrying their work one step further by revisiting the assumptions that have led to our current planar designs.

Today's real-world desktop surfaces are invariably flat and level. Why? Certainly not to make it easier for us to access every inch of their surface. When we sit at our desks, we typically have a prime area within the sweep of our arms, with diminishing real estate values toward the back and sides of our desks. Items shoved into the far corners are likely to remain in residence, unread for days, if not decades.

Desks are level because gravity tends to make paper slide downhill. Drafting tables are not level but tilted, requiring their users to tape, clamp, or hook their paper and tools. The pay-off for this decrease in document mobility is a large increase in draftsperson mobility. Draftspersons typically alternate between pacing the length of their drafting tables and wheeling back and forth on mobile high stools. In either case, they fluidly command the entire surface of their table, even if they must still reach for the extremities.

Cyberspace has no inherent gravity. Therefore, cyberspace displays can be set at the most convenient angle for the user. Today's angle is most often vertical (the face of a CRT or flat panel display), with users indirectly interacting via keyboard, mouse, or trackball. The other, increasingly popular, alternative is a horizontal or hand-held display with which a user interacts by use of touch or a stylus directly applied to the display surface.

In designing an easy-access large-surface display, I wanted to make use of both horizontal and vertical display area. Horizontal displays enable comfortable direct interaction and can also double as a conventional desktop, where people can continue to open terrestrial mail and sign the occasional letter. But vertical surfaces have their own payoffs.

Work done by Abigail Sellen and William Buxton at Xerox EuroPARC, confirmed by the Solaris Live project at Sun, has shown the need for vertical surfaces for desktop video communication. The EuroPARC experiments have used small "towers," topped with three- or four-inch monitors, elevated to the user's eye level. For conference calls, several of these towers can be set apart in a rough circle on a standard desktop to simulate the normal separation between speakers. This enables people to tell at whom you are looking at any given moment by the angular difference in eye gaze. It also allows you to whisper to a single member of the conference by leaning toward their tower. The other participants can no longer hear you, although they can see what you are up to, just as in "real life."

In our experiments at Sun that led to the Starfire film, Frank Ludolph used a wide-screen vertical display with images of other callers that, rather than being straight on, were "keystoned," or angled somewhat away from the viewer and toward each other. The result was a heightened sense of eye gaze.

Having horizontal and vertical display areas also enables users to move their work between orientations, allowing them to shift their body and neck positions, relieving physical stress and discomfort. However, perhaps the biggest reason for building a large display incorporating both orientations is accessibility. If you swing your arms around your body, you will find that they describe the inside of two large, overlapping spheres, each having its center at the joint of your shoulder and arm. The greatest surface area you can access without moving your entire body fits within these contours.

We built Julie's Starfire display inside the limits of the spheres. The result was a semi-circular workspace that surrounds the user, curving side-to-side, like a personal Cinerama screen, as well as vertically, beginning as a flat horizontal surface in the center, where the user sits, but sweeping gently upwards until, near arm's length, the wall becomes vertical.

Even with its large area--equivalent to the square footage of a normal desktop--test subjects were able to touch any spot on its surface without undue stretching.

Different areas of the screen will call for different pointing devices. The human arm weighs a good five or ten pounds. A user accessing a mouse twice per minute during the course of a typical six-hour work day touches the mouse more than 14,400 times per month. Imagine having to lift your arms all the way up in the air 14,400 times per month. This would bring about repetitive stress injuries on a scale undreamed of today.
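
The 14,400 figure checks out, assuming twenty working days per month (a figure the text does not state):

```python
# Back-of-the-envelope check of the mouse-touch figure.
touches_per_minute = 2
minutes_per_day = 6 * 60           # six-hour work day
work_days_per_month = 20           # assumption, not from the text

touches_per_day = touches_per_minute * minutes_per_day      # 720
touches_per_month = touches_per_day * work_days_per_month   # 14,400
print(touches_per_month)
```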

Our conjecture was that people would tend to do their more intensive work on the level part of Julie's workspace, saving the vertical surface for communication, reference, and viewing. When people did want to move to their vertical surface, they could, as today, use an indirect pointing device such as a mouse, so that their hands could remain horizontal and supported.

The display was based on 300-dot-per-inch resolution, already working in the laboratory today. We will eventually reach densities of 1200 or 2400 dots per inch. This would allow scaling the desk back in size without reduction in information. With a smaller overall desk, a user could look out over the top without obstruction, while still seeing the same level of detail. 1200 to 2400 dots per inch also begins to enable good 3D graphic display. On today's low-resolution monitors, all but the foreground appears out of focus, not because it really is out of focus, but simply because there are not sufficient pixels to define it.
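
The scaling argument is simple proportionality: at a fixed pixel count, physical size shrinks as density grows. The pixel count below is an assumed figure for illustration:

```python
# Sketch of the density-scaling argument: a display holding a fixed
# number of pixels shrinks in inverse proportion to its dots per inch.
pixels_across = 12000   # assumed horizontal pixel count, for illustration

widths = {dpi: pixels_across / dpi for dpi in (300, 1200, 2400)}
for dpi, inches in widths.items():
    print(f"{dpi} dpi -> {inches:.0f} inches wide")
```

At 2400 dpi the same information fits in one eighth the width it needs at 300 dpi, which is what lets the desk shrink without losing detail.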

Sensitive Surface Displays

We have grown up in this industry with the reality that we can never have enough memory or enough processing power. We tend to let that spill over into a belief that we can never have enough of anything. Not so. Human perceptual parameters have true upper limits. The 16,777,216 colors we can display with 24 bits per pixel exceed the number of colors a human being can perceive. 24-bit color solves the problem. Likewise, we will reach an upper limit on display density. It may prove to be 2400 dpi or even 4800 dpi, but it will stop. A 30-frames-per-second rate does not solve the flicker problem, but 120 frames per second probably will.
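
The color count falls straight out of the encoding: 8 bits each for red, green, and blue.

```python
# Where the 16,777,216 figure comes from: 24 bits per pixel,
# split as 8 bits per color channel.
bits_per_channel = 8
channels = 3                                  # red, green, blue
colors = 2 ** (bits_per_channel * channels)   # 2^24
print(f"{colors:,}")
```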

With high refresh rates, 1200- to 2400-dot-per-inch resolution, zero parallax, and 24-bit color, our display problems will be solved. Then we can turn our attention to "weaving in" other sensory and motor devices, producing a "sensitive surface," an amalgam capable of being simultaneously a high-definition display, scanner, pressure sensor, and tactile feedback generator. Add to it an array of cameras--or even an array of lenses within the surface matrix--and you have the makings of a first-class workspace.

What would it be like to sit at tomorrow's Starfire display, experiencing a sensitive-surface display? You can see one obvious advantage in the film. When Julie wants to scan in the newspaper, it instantly becomes part of her cyberspace when she flips it over and rubs it. The intelligent surface knows to "read" it, converting it into an optical-character-recognized and optical-graphic-recognized cyber-document.

Scanning will probably be the first added capability. It only requires the addition of a fourth element to the current red-blue-green triad: a photocell. Scanning today requires a special piece of equipment, with its own special drivers, and its own special software. Starfire is about simplicity. By adding scanning capability to the display, a major source of physical and mental clutter is removed forever.

What if you want to reach for a cyber-document that is to the side of the desk? If it were real paper, you would probably glance over long enough to see that your fingers were somewhere near the document. Then you would bring your eyes back to begin lining up where you want the document to end up, leaving to your fingers the task of finding the edge of the document so you can slide it to the new position. The same could work with a sensitive surface display: once you have touched the paper with your hand, the surface deforms into "goose bumps" above the paper. You can then slide your hand, without looking, until you find the edge of the goose bumps. A quick flick will send the paper on its way, where your other hand can "catch" it, indicated by the arrival of the goose bumps beneath your fingers.

If you want to "straighten" the paper, give it a twist with your fingers. The agent in the system will show you when you have reached a position where the paper is tangential to your body. Through working with it? Flick it away from you as Julie did and it will slide up the vertical wall of the display until it pins at the top.

Have you asked your computer agent to present you with a viewport onto Financial News Network? It will find a spot on your intelligent surface other than where your coffee cup happens to be resting.

Want to use your mouse? Use your mouse. Would you rather use voice, or gestures, or paint with an electronic brush? Do so. The interfaces of tomorrow will allow you to work the way you feel most comfortable at the moment. They will also allow those with limitations to do the same. Today, enabling technology often means building special devices for special people, but the same flexibility that allows the normally-able to shift rapidly among methods will bring the differently-abled fully into the fold. Enabling technology can and should be approached as a technique for helping everyone.

Tomorrow's machines will also become "aware" of activities beyond the confines of the screen. Armed with an array of input sensors--built-in "awareness" devices in bookcases, desk drawers, cars, and refrigerators--your personal agents will be able to extend their domain well beyond the edges of your sensitive surface. This increasing power will generate even more need for privacy and security.

Copyright 1998 Quailwood Design
All rights reserved.