Today I’m very excited to be at CHItaly 2017 in Cagliari!
The conference opened yesterday with three workshops and the doctoral consortium. We had a very interesting and stimulating workshop on Designing, Implementing and Evaluating Mid-Air Gestures and Speech-Based Interaction, where we discussed several aspects of mid-air gestural and speech interaction: the accuracy and robustness of automatic recognition techniques, the design of the interaction, fatigue and exertion, feedback, guidance and feedforward, and what it means to design natural user interfaces (NUIs).
Today, the main conference opened with a keynote speech by Michel Beaudouin-Lafon entitled “Towards Unified Principles of Interaction”. Michel Beaudouin-Lafon is Professor of Computer Science, Classe Exceptionnelle, at Université Paris-Sud and a senior fellow of the Institut Universitaire de France.
The talk started with a simple but sometimes overlooked observation: although today’s computers are used for a wide range of very different tasks, with diverse and innovative interaction styles, and by users with very different characteristics, they still rely on user interfaces that were designed for office workers in the ’70s and the ’80s.
Indeed, we have folders, a bin and a “desktop” on our computers because the interface was designed for its original users, secretaries, and because we usually struggle to leave the comfort zone we are familiar with. But what are the characteristics of future interaction?
Today’s interaction style is mainly graphical (i.e., GUIs), but something is starting to change: consider, for example, the work done in the field of voice interaction (think of Apple’s Siri), or new devices like Microsoft’s HoloLens. Things have been moving in research too. For instance, steps have been made towards augmented reality, tangible interaction and embodied interaction. A few examples of such advances in research:
- Skinput by Chris Harrison, which turns the body into an input surface, e.g., letting you control your phone through an interface projected on your hand;
- PaperTonnetz by Jérémie Garcia, which exploits interactive paper to compose music;
- HoloDesk by Microsoft Research, which creates the illusion of directly interacting with 3D graphics;
- RoomAlive by Microsoft Research, for interactive projection mapping that dynamically adapts content to the room;
- inFORM by the Tangible Media Group of MIT, on physical telepresence;
- Zooids by the Stanford Shape Lab, which introduces swarm user interfaces.
So, something is moving in research, but the question remains: what’s going to be the next thing? Which style will replace GUIs? According to Michel: none, if one big problem is not overcome first. And this problem is finding a way to combine all these different styles so that they can enrich each other; in other words, finding significant commonalities across these diverse interaction styles.
Walled gardens and information silos
Think for example about how email works: anyone with an email account can send an email to anyone else with an email account. With email, the information flow follows this simple and straightforward path:
sender’s computer -> server -> Internet -> server -> receiver’s computer.
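To make the openness concrete, here is a minimal sketch of sending a message over SMTP with Python’s standard library (the addresses and the server host below are hypothetical placeholders): precisely because email is defined by an open protocol, any client can hand a message to any standards-compliant server.

```python
# Minimal sketch: email is an open protocol (SMTP), so any client can
# talk to any server. Addresses and host are hypothetical placeholders.
import smtplib
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "alice@example.org"
msg["To"] = "bob@example.net"  # a different provider: it still works
msg["Subject"] = "Hello across providers"
msg.set_content("sender -> server -> Internet -> server -> receiver")

# Hand the message to the sender's outgoing server; from there, SMTP
# relays it to the receiver's server, whoever operates it.
with smtplib.SMTP("smtp.example.org") as server:
    server.send_message(msg)
```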
Now, consider what happens instead in other widespread and more novel systems, like Facebook. Unlike email, Facebook is not defined by a protocol: if you don’t accept all of Facebook’s terms and conditions you don’t get to use it, and within Facebook you can communicate only with other Facebook users. The same happens with the cloud: we can store our things, for example, on Google Drive, OR Dropbox, OR box.net. We don’t own anything, and if we’re not connected we can’t access our data – we are completely disempowered! This reminded me of the provocative book that Jeremy Rifkin published in 2000, The Age of Access: The New Culture of Hypercapitalism Where All of Life Is a Paid-For Experience, where the author first argued that the ownership of physical property was coming to an end.
So, back to Michel Beaudouin-Lafon’s talk, if you look at the overall picture, we are less in control now than before.
Unified principles of interaction
Considering this situation, how can we support interoperability and end-user appropriation? According to Michel, we need unified principles!
Today most things are designed for ONE user, using ONE computer at a time, to do ONE task at a time. But we need to embrace multi-device and multi-user interaction:
[O]ne is [N]ot [E]nough
So, how do we find these unified principles? Michel took into consideration the way we interact with the physical world, that is, through language and through physical actions, using our hands. But physical action is often indirect, mediated by tools (which we use for a wide range of things, like moving around, cooking or drawing), and tools are extremely powerful: for millennia human beings have been creating tools, and even tools for creating tools 😉
Research has also shown that tools are really internalized by our brain: if we are in a room with a number of tools around, our neurons will fire when we look at them. That is, we internalize tools as part of our body; we appropriate objects as tools.
Also, the physical world is flexible: a pencil can be used as a pencil or as a ruler, a mug can be used as a mug or as a paper weight. And we do that naturally.
But in the virtual world, things are much more rigid, even though we call it software (which is not so soft after all). Of course, some software is flexible (e.g., Excel can be used to create animated ninja turtle warriors).
Back in 1997, Michel Beaudouin-Lafon and his colleagues started to explore instrumental interaction, defining it as interaction mediated by instruments. The scroll bar is a good example of mediated interaction: it has no physical parallel in the real world; we created it purposefully to scroll long documents.
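As a toy illustration of this idea (the classes below are my own sketch, not code from the talk), an instrument can be modelled as an object that translates the user’s physical action into an operation on a domain object:

```python
# Illustrative sketch of instrumental interaction (my own naming, not
# from the talk): the scroll bar mediates between a physical action
# (dragging the thumb) and the domain object (the document's viewport).

class Document:
    def __init__(self, num_lines: int):
        self.num_lines = num_lines
        self.first_visible_line = 0

class ScrollBarInstrument:
    """Translates thumb drags into viewport movements."""
    def __init__(self, doc: Document, visible_lines: int = 40):
        self.doc = doc
        self.visible_lines = visible_lines

    def drag_thumb_to(self, fraction: float) -> None:
        # fraction in [0, 1]: position of the thumb along the track
        max_first = max(0, self.doc.num_lines - self.visible_lines)
        self.doc.first_visible_line = round(fraction * max_first)

doc = Document(num_lines=1000)
scrollbar = ScrollBarInstrument(doc)
scrollbar.drag_thumb_to(0.5)   # the user drags the thumb halfway down
print(doc.first_visible_line)  # 480
```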
Later, they proposed three unified design principles:
- Reification: turning concepts into objects that can be directly manipulated, for example magnetic guidelines to align elements; in this regard, they presented StickyLines;
- Polymorphism: making instruments work with different content. For example, think of colour pickers: every tool has its own (Photoshop’s colour picker cannot be used in Excel, but it should be, because the purpose of a colour picker is really to pick a colour from the interface, regardless of the specific software you’re in); see the sketch after this list;
- Reuse: capturing and reusing interaction patterns.
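To give a feel for the polymorphism principle (the toy types below are my own invention, not from the talk), one can imagine a single colour-picker instrument, defined once and applicable to any content that exposes a colour, regardless of the application that owns it:

```python
# Sketch of polymorphism: one instrument, heterogeneous content.
# These toy classes stand in for objects owned by different applications.

class Shape:                       # e.g., a drawing-app shape
    def __init__(self):
        self.color = "#000000"

class SpreadsheetCell:             # e.g., a spreadsheet cell
    def __init__(self):
        self.color = "#ffffff"

class ColorPickerInstrument:
    """Works on anything with a `color` attribute: the instrument is
    defined once and reused across applications."""
    def __init__(self, picked: str):
        self.picked = picked

    def apply(self, target) -> None:
        target.color = self.picked

picker = ColorPickerInstrument("#ff6600")
for obj in (Shape(), SpreadsheetCell()):
    picker.apply(obj)              # same instrument, different content
```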
Michel also talked about information substrates. What are information substrates? In music, you need a reference system to be able to read it: the staff. For a painter, the final artwork is made of different layers of colours. The same happens in Photoshop with its layers, but also in Excel, where the different information substrates are made of all the hidden functions: tables, graphs, shapes and pixels are all different levels of representation, for which one could imagine different tools able to interact with each level of content (for example, entering values in the table, setting the type of the graph, setting the colour of a shape, or painting over the pixels).
The conceptual model of the digital workspace envisioned by Michel Beaudouin-Lafon combines substrates to manage digital information at the different levels of abstraction, with different instruments that allow the user to manipulate substrates, and also environments to organize substrates and instruments.
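A very loose sketch of this model (my own simplification, not the talk’s formalization): a workspace stacks substrates, i.e., levels of representation, and each instrument manipulates the level it understands:

```python
# Loose sketch of substrates + instruments (my own simplification).
# Each key is a substrate: a different level of representation of the
# same workspace; each function is an instrument bound to one level.

workspace = {
    "table":  {"values": [[1, 2], [3, 4]]},  # data level
    "chart":  {"kind": "bar"},               # graphical-abstraction level
    "pixels": {"canvas": []},                # raw paint level
}

def enter_value(ws, row, col, value):   # instrument for the table level
    ws["table"]["values"][row][col] = value

def set_chart_kind(ws, kind):           # instrument for the chart level
    ws["chart"]["kind"] = kind

enter_value(workspace, 0, 1, 42)    # edit the data
set_chart_kind(workspace, "line")   # restyle the chart independently
```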
One example that goes in this direction is Musink by Theophanis Tsandilas, Catherine Letondal and Wendy E. Mackay, a tool for music composition that makes creative use of paper and allows composers to move smoothly from their own paper drawings to the software tool.
In conclusion, according to Michel Beaudouin-Lafon, to augment and empower the human intellect, we need unified principles of interaction in order to make content and functions sharable, even across platforms and systems.
After this thought-provoking talk, the conference continued with three sessions: personalization & user context, smart environments, and assistive scenarios.
Tomorrow will be the last day of the conference, with a talk by Marianna Obrist on “Mastering the Senses in HCI: Towards Multisensory Interfaces”.