Future/User Interface

Steadily improving user interfaces allow us to benefit from the ongoing computer revolution. By 2020, computers as we know them are likely to disappear, replaced by ubiquitous computing.

Output

Computer displays are extremely versatile: they can show static content, support editing and the creation of new content, and present text, images and video. There is no end to what can be done with screens once the technology is in place.

Screen size

In a sense, changes in screen size form one of the main struggles in the evolution of interfaces. There are two main development directions:

  • smaller screens for portable devices
  • larger screens for rooms

Progress in this field is easiest to envision by describing the ultimate goals. Smaller screens need a thin (possibly flexible), low-power, bright, high-contrast, high-resolution display; the goal is something that looks like paper but can also glow in low light. Currently this niche is filled by various e-paper technologies, some of which are already used in digital watches and e-book readers.

Larger screens need to be very large, bright and high-resolution; ultimately they should be arbitrarily large, light and bright, so that they can be placed on any surface. Currently this segment is covered by large plasma and LCD displays, as well as projectors. Smart boards are already practical for use in education and business.

Augmented reality

Of course, one way to solve the problem of screen size is to abandon the idea of a screen altogether and project the image directly to the eye (using glasses with built-in projectors or retinal laser projectors). With augmented reality, information can be overlaid on real-world images, and the real world can even incorporate parts of the virtual world.

An interface can include both virtual (digital) objects and real (physical) ones. It can use a combination of projectors, video surfaces and wearable displays. Examples include a real book with virtual 3D images, and an interface for moving real objects such as business cards or texts onto a computer display (instantly digitizing them) and for moving virtual objects back out. [1]

3D displays

Ever since Star Wars first showed the 3D hologram of Princess Leia pleading for help, engineers have tried to replicate the technology in real life.

Probably the most impressive attempt so far is the 3D technology from AIST (National Institute of Advanced Industrial Science and Technology, Japan). It creates fully 3D images consisting of up to 100 dots in the air, using laser-induced plasma generated at the focal point of a focused laser beam.

Input

Improvement (or replacement) of the tired workhorses of input — the keyboard and the mouse — is long overdue.

Two technologies that receive most of the attention are speech recognition and handwriting recognition.

  • Most displays in the future will be touch-sensitive and will allow more complex interaction than simple point-and-click, for example multi-touch surfaces based on FTIR (frustrated total internal reflection).
  • An interesting concept is touch-based interaction with a physics-based desktop interface. [2]
  • A new and promising field is gesture control. Using a tracking technology (such as laser tracking [3]), the computer can see and recognise the gestures the user makes. This would allow more direct control of virtual objects than a mouse does; for example, the user could use both hands and several fingers simultaneously.
  • Microsoft has a 3D input technology called 'Touchlight' under development. Using three cameras, it can gauge hand position in 3D space, allowing models to be built with the hands.
  • Equipment similar to existing barcode scanners can track fingertip motion in 3D in 'Minority Report' fashion. The technique uses a tip-tracking mirror pair and a saccade mirror pair: the 'saccade' moves the laser point in a circular fashion, and backscattered light gives an indication of the distance, position and orientation of the target.
  • Eye tracking is another interesting solution, in which several cameras track the user's eyeballs and translate their movements into mouse or cursor movements (see the sketch after this list). [4]
  • Commercial-grade eye-tracking software will soon be integrated into computer games and, according to researchers at Queen's University in Canada, will allow programs to respond to people's intentions. That claim is not as fanciful as it may sound: eye-movement decoding is one of the key elements of the (non-computer-based) Neuro-Linguistic Programming (NLP) method of body-language reading.
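
As a rough illustration of how the tracking-based input ideas above translate into software, here is a minimal sketch (in Python) that maps normalized gaze or fingertip coordinates from a hypothetical tracker onto screen cursor positions, with simple exponential smoothing to suppress jitter. The tracker output format, the screen resolution and the smoothing constant are illustrative assumptions, not details of any of the systems cited above.

    # Minimal sketch: mapping normalized gaze (or fingertip) coordinates from a
    # hypothetical tracker onto screen cursor positions. Exponential smoothing
    # suppresses sensor jitter. All names and constants are assumptions.

    SCREEN_W, SCREEN_H = 1920, 1080  # assumed display resolution
    ALPHA = 0.2                      # smoothing factor: lower = steadier but laggier

    class GazeCursor:
        """Translate normalized (0..1) tracker coordinates into screen pixels."""

        def __init__(self):
            self.x, self.y = SCREEN_W / 2, SCREEN_H / 2  # start at screen centre

        def update(self, gaze_x, gaze_y):
            # Map the normalized tracker sample to pixel coordinates.
            target_x = gaze_x * SCREEN_W
            target_y = gaze_y * SCREEN_H
            # Blend the new sample with the previous cursor state.
            self.x += ALPHA * (target_x - self.x)
            self.y += ALPHA * (target_y - self.y)
            return int(self.x), int(self.y)

    if __name__ == "__main__":
        cursor = GazeCursor()
        # Simulated tracker samples drifting toward the upper-right corner.
        for sample in [(0.5, 0.5), (0.6, 0.4), (0.7, 0.3), (0.8, 0.2)]:
            print(cursor.update(*sample))

The same smoothing-and-mapping step would sit between any of the trackers described above (laser, camera or eye tracking) and the rest of the interface; only the source of the raw coordinates changes.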

These technologies require algorithms capable of simulating and controlling a more complex model than traditional window- and object-based interfaces use. An example of the flexibility that comes from sufficiently advanced algorithms is Teddy [5], a 3D modelling program that even a child can use after a few minutes of training.
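
To make the idea of a richer underlying model concrete, the following sketch (in Python, with invented names and constants) shows the kind of per-frame simulation behind a physics-based desktop such as the one mentioned in the list above: instead of snapping an icon to the pointer, each frame applies a spring force toward the pointer and damps the icon's velocity, so dragged objects accelerate, overshoot slightly and settle naturally. This is an assumption-laden illustration, not the method of the work cited in [2].

    # Minimal sketch of a physics-based drag: the icon is pulled toward the
    # pointer by a spring and slowed by damping, instead of jumping to it.
    # All names and constants are illustrative assumptions.

    DT = 1 / 60        # simulation timestep (60 frames per second)
    STIFFNESS = 40.0   # spring constant pulling the icon toward the pointer
    DAMPING = 8.0      # velocity damping, so the icon settles instead of oscillating

    class DraggableIcon:
        def __init__(self, x, y):
            self.x, self.y = x, y
            self.vx, self.vy = 0.0, 0.0

        def step(self, pointer_x, pointer_y):
            # Spring force toward the pointer, damped by the current velocity
            # (semi-implicit Euler integration).
            ax = STIFFNESS * (pointer_x - self.x) - DAMPING * self.vx
            ay = STIFFNESS * (pointer_y - self.y) - DAMPING * self.vy
            self.vx += ax * DT
            self.vy += ay * DT
            self.x += self.vx * DT
            self.y += self.vy * DT

    if __name__ == "__main__":
        icon = DraggableIcon(0.0, 0.0)
        for _ in range(120):           # two simulated seconds
            icon.step(100.0, 50.0)     # pointer held at (100, 50)
        print(round(icon.x, 1), round(icon.y, 1))  # the icon has settled near the pointer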

A similar level of flexibility (and naturalness) can be achieved in graph-based interfaces, making the management of data, information and knowledge trivially easy.

Windows Vista

Microsoft's new operating system, Windows Vista, has added 3D rendering to its interface: a user can scroll through windows in three dimensions with the mouse. Its new rendering technology could make it easier for people to create 3D worlds.

Unsorted predictions

  • High quality A3 flat displays - using digital paper.
  • Colour video display with > 2000 x 2000 pixels
  • Large, wall hung high definition colour displays
  • Wide screen (>100 in) with contrast ratio of > 10:1
  • Video walls, including living area use of VR (scenes)
  • Electronic notebook, contrast = paper even after power off
  • Video playback over network at 10 x normal speed
  • Many people sharing a virtual space
  • Positioning sound at any point in space
  • 3D TV without need for special glasses
  • Personal audio-visual interfaces well developed
  • 3D video conferencing
  • Computer link to biological sensory organs
  • Use of cheap holograms to convey 3D images
  • Full integration of processing, audio and video equipment