Sonic Arts Research Centre
Queen’s University Belfast
Belfast BT7 1NN
Multimodal interaction, the many-faceted sensory experience we have when manipulating objects in the world, is something we rarely stop to consider. Filling a glass of water requires the use of different senses, or ‘sensory modalities’, to monitor our progress, and each brings to the interaction the ability to track unique properties: vision provides feedback about the amount of water and the remaining space in the glass, audition provides temporal cues about the progress of the interaction, and touch allows us to assess the change in mass, and in the distribution of mass, as the glass we are holding fills. We know that the glass is full from the interactions between these different modalities and from past experience of glass-filling interactions that were unsuccessful. Thus, knowing that the glass is full is directly derived from acting on the glass, i.e. from being involved in the action of glass-filling. This active knowledge, also called enactive knowledge, can only be acquired by acting in the world (Varela et al., 1991).
The concept of such body-mediated or embodied interaction, of the coupling of interface and actor, has become increasingly relevant within the domain of human-computer interaction. One only has to scan the shelves of any phone or computer-game shop to see how many manufacturers are actively promoting devices that capture and utilise bodily movements, from handwriting to throwing.
There are many directions in which Human Computer Interaction design is developing beyond the Graphical User Interface, all of which move toward a greater integration of the body’s motion and its sense of its own motion. Virtual Reality (VR) approaches a situation in which the user is drawn into a high-quality, animated 3D world on the display; in the extreme, the display migrates onto the user's body as goggles, headphones and even clothing. A second approach, that of Augmented Reality (AR), recognises that computation is embodied in physical devices that exist as elements in the physical world, and that the physical configuration of these computational devices is a major determinant of their usability. A third approach, that of Enactive Interface design [Enactive], extends this notion by emphasising a subtle shift of the seat of embodiment from the world of the application to the body of the user, acknowledging that the body of the user, with its associated sensorimotor capacities and its ability to acquire and recall bodily knowledge, is a crucial and often underutilised resource in interaction design.
Reaching into the Past
The attraction of considering the Enactive approach for a project such as ‘Touching the Untouchable’ is that it may be possible to literally allow a visitor to an exhibition to ‘reach’ into the past. Given that Enactive Interface design involves building interfaces to computational devices that specifically build upon the kind of knowing that is acquired by doing (Essl and O’Modhrain, 2006; www.enactivenetwork.org),
a person might, by means of a computationally mediated interaction, be able to wear a garment or wield a tool that would have existed and thereby experience at first hand both the potential and the limitations of the materials and objects that were available to our ancestors. Enactive interfaces make it possible to design such forms of interaction with computational devices because they start from the premise that the user must be able to gain knowledge by acting on the computationally enhanced environment through the interface. Furthermore, the coupling of perception and action that is a central tenet of acquiring enactive knowledge in the real world is no less important for the computationally enhanced interactions that enactive interfaces support. Thus it follows that an enactive interface must be able to interpret actions and respond with appropriate reactions to build for the user an experience that is robust with respect to the coupling of perception and action. As with real-world interactions, enactive interfaces may be expected to represent their world in a richly multimodal way, providing multisensory cues that reflect the various properties of their constructed environments. They are not in and of themselves ‘enactive’, but are designed specifically to promote the building up of enactive knowledge for those who use them.
Varela, F. J., E. Thompson and E. Rosch, The Embodied Mind: Cognitive Science and Human Experience, MIT Press, 1991.
In this book, the authors lay out the basis of a theory of Enactive Knowledge that links cognitive processing of actions and their environmental consequences to the perceptual correlates of this tight coupling between action and perception.
Essl, G., and S. O’Modhrain, An enactive approach to the design of new tangible musical instruments, Organised Sound 11 (2006) 285–296.
In this paper we present the case for employing the Enactive Approach to the design of digital musical instruments. In particular we discuss the development of a series of prototype instruments and installations that tightly couple perception and action by means of shared physical models for generating both the sound and feel of an interaction.
www.enactivenetwork.org
This website presents the work of the Enactive project, an EU Network of Excellence that ran from January 2004 until December 2008, focusing specifically on the development of practical ways of instantiating the theory of Enaction in the design of human-computer interfaces.