Stephen Brewster (Co-I)

Glasgow Interactive Systems Group
Department of Computing Science
University of Glasgow
G12 8QQ, UK
stephen@dcs.gla.ac.uk

Overview

Human-computer interaction (HCI) is the part of computing science that deals with the ways in which people use and interact with technology (and interact with other people through technology). It ranges from application software, through the user interface (traditionally the screen, mouse, keyboard, windows, etc.), to how technology is used in society. Multimodal HCI focuses in particular on how users interact with devices, and on engaging as many of their senses (beyond just the eyes) and actuators (currently the fingers on a keyboard or mouse) as possible. In everyday life we use all of our senses to gain rich information about the world, so we should aim to create user interfaces that are equally flexible. For example, we may create computer interfaces that allow us to see, hear (speech and non-speech sounds), touch, smell or even taste; for input to the computer we might use speech and gesture alongside the keyboard.

There are many reasons to draw on as many of our human capabilities as we can. We evolved our five senses to give us different types of information about the world, so why limit ourselves to vision alone when using a computer? In some cases a purely visual display is inappropriate: a user with a visual impairment, for example, may need a different form of display. Even an able-bodied person may be 'situationally impaired'. Outside in bright sunlight the screen of a phone may be hard to read; gloves worn in cold weather make the keyboard of a phone hard to use; and using a mobile phone while walking down the street is difficult, because the user must watch where they are going as well as the screen of their device. Different types of information are suited to different forms of presentation and interaction. The aim of multimodal HCI is to create richer and more flexible interaction with technology, allowing more people to do more things with their devices.

Issues and Interests

Multimodal HCI can provide a range of new ways to interact with desktop computers, mobile devices and virtual environments. We are always looking for interesting problems to which we can apply our solutions.
Of particular interest to the 'Touching the Untouchable' project, multimodal HCI could provide new ways of experiencing objects and artefacts through the sense of touch, using haptics. Two kinds of display are relevant: tactile displays (similar to the vibration motors in mobile phones, but of much higher quality and with more than a single point of vibration; these can present different textures or tactile icons, see Figure 1) and force-feedback displays (which allow users to feel virtual solid objects that seem hard and have different surface properties, textures, etc.; see Figure 2, and the rendering sketch that follows it). With these we could make very fragile objects available to scholars, allow visitors who live far from museums to feel objects at a distance, let visually impaired and blind people feel exhibits that are normally behind glass, and allow museums to show off a range of artefacts that are currently in storage for lack of space.


Figure 1: A range of tactile devices. Left is the C2 from EAI, a high-quality vibration motor. The centre shows three small pin arrays mounted on a mouse. Right shows a bracelet with an internal vibration motor from LM Technologies.


Figure 2: Two force-feedback devices. Left shows the Sensable PHANTOM, right a Logitech Wingman force-feedback mouse.
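To give a feel for how a force-feedback device makes a virtual object seem solid, the sketch below shows the standard penalty-based (spring) rendering idea in Python. It is illustrative only: the sphere, the stiffness value and the function names are our own assumptions for this document, not the API of any particular device (real devices such as the PHANTOM are driven through their own libraries in a control loop running at around 1 kHz).

    import numpy as np

    # Minimal penalty-based (spring) haptic rendering sketch. Illustrative
    # only: the virtual object is a sphere, and when the haptic cursor
    # penetrates its surface we command a force that pushes the cursor
    # back out (Hooke's law), which the hand feels as a solid surface.

    SPHERE_CENTRE = np.array([0.0, 0.0, 0.0])  # centre in workspace (m)
    SPHERE_RADIUS = 0.05                       # a 5 cm virtual object
    STIFFNESS = 800.0                          # N/m; higher feels harder

    def contact_force(cursor_pos):
        """Force (N, 3D) to command for the current cursor position."""
        offset = cursor_pos - SPHERE_CENTRE
        dist = np.linalg.norm(offset)
        penetration = SPHERE_RADIUS - dist
        if penetration <= 0.0 or dist == 0.0:
            return np.zeros(3)                 # outside the object: free space
        normal = offset / dist                 # surface normal at the contact
        return STIFFNESS * penetration * normal

    # 1 cm inside the surface: roughly an 8 N push back out along +z.
    print(contact_force(np.array([0.0, 0.0, 0.04])))

Varying the stiffness changes how hard the surface feels, and modulating the force across the surface (or adding friction) gives different surface properties and textures.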

In a previous research project (Brewster, 2002) we used the Wingman force-feedback mouse to give the public access to museum artefacts that could not be touched, and to allow visually impaired people to feel objects held in cases. Audio offers another form of display. At Glasgow we have been investigating the use of three-dimensional sound (played through headphones) to present audio information all around a user, creating an immersive audio experience. This can be done from a mobile phone, so users could interact with audio (and tactile) aspects of artefacts using devices they bring with them to the museum.
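The Python sketch below illustrates the time- and level-difference cues that underlie headphone-based spatial sound. Real 3D audio systems convolve sounds with full head-related transfer functions (HRTFs); the constants, the simplified level model and the function names here are illustrative assumptions only.

    import numpy as np

    # Minimal sketch of placing a sound to the left or right over
    # headphones using interaural time and level differences (ITD/ILD).

    SAMPLE_RATE = 44100
    HEAD_RADIUS = 0.09        # metres, approximate
    SPEED_OF_SOUND = 343.0    # m/s

    def spatialise(mono, azimuth_deg):
        """Pan a mono signal: azimuth 0 = straight ahead, +90 = hard right."""
        az = np.radians(azimuth_deg)
        # Woodworth's approximation to the interaural time difference.
        itd = (HEAD_RADIUS / SPEED_OF_SOUND) * (abs(az) + np.sin(abs(az)))
        delay = int(round(itd * SAMPLE_RATE))              # in samples
        # Crude level difference: up to ~10 dB quieter at the far ear.
        near = 1.0
        far = 10 ** (-(abs(azimuth_deg) / 90.0) * 10.0 / 20.0)
        direct = np.concatenate([mono, np.zeros(delay)])   # near ear
        delayed = np.concatenate([np.zeros(delay), mono])  # far ear
        if azimuth_deg >= 0:                               # source on right
            left, right = far * delayed, near * direct
        else:
            left, right = near * direct, far * delayed
        return np.stack([left, right], axis=1)             # stereo signal

    # Place a one-second 440 Hz tone 60 degrees to the user's right.
    t = np.arange(SAMPLE_RATE) / SAMPLE_RATE
    stereo = spatialise(np.sin(2 * np.pi * 440 * t), 60)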
Figure 3 shows a smell display device. This could be used to present smells alongside objects to enhance realism, or to bring out important aspects such as the material of construction. There is little work on smell as a display technique, but it has potential because of its strong link to emotion and memory.


Figure 3: Smell output device from DaleAir.

For input we have studied the use of gestures alongside keyboards and mice. These could be drawn on the touchscreen of a mobile phone with a finger, made by moving a device around, or made with movements of other parts of the body. We generally use sensors (accelerometers, magnetometers) that are already built into some mobile phones, or sensor packs that can be attached to different parts of the body (see Figure 4; a simple detection sketch follows it). Gestures can also be tracked by cameras where these can be mounted in the environment. Gestures are a very flexible form of input; for example, it is very natural to point at something to find out more about it.


Figure 4: A SHAKE sensor pack for gesture recognition.
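As a concrete illustration of gesture sensing, the Python sketch below detects a 'shake' gesture and estimates device tilt from raw 3-axis accelerometer samples. The thresholds, window size and function names are our own assumptions for illustration, not the interface of the SHAKE pack or of any particular phone.

    import math

    # Illustrative gesture detection from a 3-axis accelerometer.

    GRAVITY = 9.81                    # m/s^2
    SHAKE_THRESHOLD = 2.5 * GRAVITY   # magnitude that counts as a shake

    def magnitude(ax, ay, az):
        return math.sqrt(ax * ax + ay * ay + az * az)

    def detect_shake(samples, window=5):
        """True if several consecutive samples exceed the threshold."""
        run = 0
        for ax, ay, az in samples:
            run = run + 1 if magnitude(ax, ay, az) > SHAKE_THRESHOLD else 0
            if run >= window:
                return True
        return False

    def tilt_angles(ax, ay, az):
        """Pitch and roll (degrees) from the gravity vector when at rest."""
        pitch = math.degrees(math.atan2(-ax, math.sqrt(ay * ay + az * az)))
        roll = math.degrees(math.atan2(ay, az))
        return pitch, roll

    # Lying flat, the accelerometer reads roughly (0, 0, g): no tilt.
    print(tilt_angles(0.0, 0.0, GRAVITY))    # -> (0.0, 0.0)

More elaborate gestures are typically recognised by comparing such sensor streams against trained templates, but the same raw signals are the starting point.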

Further reading

1. Brewster, S.A. Chapter 30: The impact of haptic 'touching' technology on cultural applications. In Hemsley, J., Cappellini, V. and Stanke, G. (eds.) Digital Applications for Cultural Heritage Institutions. (2005) Ashgate Press, UK, pp 273-284. ISBN 0754633594
http://www.dcs.gla.ac.uk/~stephen/papers/EVA2001.pdf This paper describes some of the background to force-feedback research and shows some of the simple museum exhibits we have worked on.
2. Brewster, S.A., Wall, S., Brown, L. and Hoggan, E. Tactile Displays. The Engineering Handbook on Smart Technology for Aging, Disability and Independence (Helal, A., Mokhtari, M. and Abdulrazak, B. eds), John Wiley & Sons. 2008. ISBN 0471711551.
http://www.dcs.gla.ac.uk/~stephen/papers/wiley.pdf This paper gives some background on tactile displays and what can be done with them.
3. Plimmer, B., Crossan, A., Brewster, S.A. and Blagojevic, R. Multimodal collaborative handwriting training for visually-impaired people. In Proceedings of ACM CHI 2008 (Florence, Italy). ACM Press Addison-Wesley, pp 393-402.
http://www.dcs.gla.ac.uk/~stephen/papers/CHI2008_crossan.pdf http://www.dcs.gla.ac.uk/~stephen/videos/CHI2008_crossan.wmv
This paper gives an example of how a haptic device can be used with visually impaired children.

Links

www.dcs.gla.ac.uk/~stephen