Guest Post: 'Can we afford not to think about affordances? Usability lessons from the ecological psychology of design'...by Dominik Lukes

About

This blog post is a preamble to my session at the ALT 2020 Online Summit on what we can learn from UX principles for making more accessible and inclusive learning experiences. It outlines some of the key concepts and gives background for thinking about them.

Affordances and meaning

We often throw the word affordance into conversation casually when we talk about what we can do with something. But this devalues a very powerful concept that can help us rethink how we design virtual learning experiences to be more accessible.

Definition

Affordance is the property of an object that allows the user to interact with it in order to achieve a particular goal without conscious deliberation. It is about the immediate moment of interaction and it allows for what has been called "directly meaningful perception". But it is not straightforwardly a fixed feature of the object: it only really makes sense to talk about an affordance of something with respect to the user's needs and aims.

How not to use 'affordances'

Affordance is often used in the sense 'what X lets me do'. So we may say things like 'contacting students' is an 'affordance of email'. This seems to make sense because I can also say 'email affords me the chance to contact students'. But it really just dresses up a much more mundane phrase like 'I can use email to contact students'. There's nothing wrong with this but it is a waste of a perfectly good word that can help us capture something not at all obvious.

Example of Affordance

Let's talk about a mug. I can use it to drink but that is not its affordance. What I'm interested in is what makes a mug meaningful to me when I see it. I can grasp it with my hand, lift it, pour liquid into it, scoop liquid from a container with it, bring it to my lips, tilt it up, etc. I can also put it on a piece of paper to stop it from flying away. When I see a mug, none of these things come to mind. I just see a mug. The mug's affordances only become its affordances when I see it with a goal or a task in mind. I want to take a drink, I want to empty a bucket of water, I want to hold something light in place. Then the mug presents itself to me as having affordances. The mug may have certain features, like a handle that facilitates grasping, or a spout that facilitates pouring. If I can naturally employ these features for the purposes they were intended for, then it makes sense to speak of them as the affordances of the mug.

Affordances and design

Physical objects and their affordances

We can easily see how this applies to the design of physical objects. Buttons, levers, or handles all become more useful if users can approach them as directly meaningful without having to consult some mental manual. We can turn a knob and the very physical motion gives us feedback on the direction, speed of turning, resistance, end of turn, etc. Those, too, are affordances of the physical. We walk into a classroom and naturally face in the direction of the teacher, side by side with other students, combining the physical affordances of space, vision, and sound with the social affordances of interaction.

Virtual objects and their affordances

But things become much more complicated when we start thinking about virtual objects. These do not have any affordances that have direct meaning to us just by virtue of their physical interaction with our body. The mediating devices like computers or phones do have physical affordances but we need to find a meaningful link between those and the virtual objects we use them to manipulate. These links have to be learned and internalised before we can meaningfully interact with virtual environments.

Origins of the term

Affordances and ecological psychology

The concept of 'affordance' was developed by James Gibson as part of his program of 'ecological psychology'. Gibson was trying to bring psychology and perception closer together and show how much more sense our ability to navigate the world makes when viewed from the ground up. Affordances make the world meaningful even to animals without higher cognitive faculties, and just because we possess these faculties does not mean we use them to pick up mugs and open doors. In fact, this insight was used by Rodney Brooks in his work on bottom-up robots at MIT, which directly led to the Roomba (and preceded usable neural-network-based AI by over a decade).

Affordances and user experience design

But Gibson did not leave us with a good theory of how we learn these meanings of the world around us. Some of them seem natural, like the fact that the ground holds us up or concave surfaces hold water, but the use of the handle of a mug is a learned activity. This gap, which persists in ecological psychology, would make 'affordance' a much less useful practical concept if it were not for its expansion into the field of design (still evoking some controversy 30 years later). This was done by Don Norman in the now classic 'The Design of Everyday Things' (originally released as 'The Psychology of Everyday Things'), which became one of the foundations of the then emerging field of Human-Computer Interaction (HCI) studies and usability design (now UX).

Norman added the dimension of 'mental representation', something Gibson programmatically resisted. And while Gibson's arguments against 'mental models' have a lot of theoretical merit, affordances without them are just too cumbersome for practical use. Norman showed that if we think about the internal models users have of virtual objects or even of mechanical machines, we can design interfaces with features that become their affordances - that is, that allow for directly meaningful interaction (these are my words rather than Norman's).

VLEs and PLEs

To see how this works in practice, let's compare the 'virtual learning environment' (VLE) with a 'physical learning environment' (PLE or, perhaps more accurately, the classroom). The physical classroom is filled with objects that are directly meaningful to us. Chairs, windows, whiteboards, desks - we don't have to think about how to use them. We just do. When I walk into a room, I may spend time deciding which seat to sit in, but I will just know which way to face or where to place the piece of paper I want to take notes on.

Now let's think about what happens when I land on the front page of a course in a VLE. The only natural affordance is text that presents itself for me to read. Everything else has to be mediated by the text, unless the design of the page and the course guides me, for instance with prominent forward/back buttons, calendar widgets, or a hierarchical structure.

What’s more, this ‘space’ is only accessible via information in my memory, i.e. where I keep the link. In comparison, the physical classroom sits in a space with people indicating the direction of travel and visible notices, and the social space it occupies also makes it easy to ask for directions. I’m also occupying the same space as others and present affordances to other people who can ask me, follow me, or offer me help. Virtual environments designed without these differences in mind will be much less easy to use.

The raised hand conundrum

These things are not always very easy or straightforward to solve. For me, a very visceral illustration is what I call the 'raised hand conundrum' in webinar interfaces. I've been running webinars since 2010 and I have yet to come up with a way to consistently explain to a group of people how to raise and lower the hand to attract the speaker's attention.

On the surface, nothing could be simpler. Make a button with the picture of a hand. The button is as close as we can come to a directly meaningful affordance in computer interfaces and the picture of a hand also seems to have a meaningful link to the desired action. And usually, saying 'click the hand button to raise your hand' is enough. The problem comes when the time arrives to lower the hand. People often forget to click the button again, but just as often, they cannot tell what the state of the button is from what it shows them visually. Even adding colours or arrows does not help. Does the down arrow mean that the hand is down or that I should click it to lower the hand? Green indicates an ON state by convention, but on a button, does it indicate that it is on, or that I should push it to turn it on?
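
To make the conundrum concrete, here is a minimal sketch in TypeScript of the two labelling strategies. All the names are hypothetical and nothing here is taken from a real webinar platform; it simply contrasts a button that describes its state with one that describes the action a click will perform:

```typescript
// A hand-raise control modelled two ways; all names are hypothetical.
type HandState = "raised" | "lowered";

// Ambiguous: the label mirrors the state, so the user cannot tell whether
// "Hand down" reports the current state or the action the click performs.
function renderStateLabelled(state: HandState): string {
  return state === "raised" ? "Hand up" : "Hand down";
}

// Less ambiguous: the button names the action a click will take, and the
// current state is reported separately, e.g. in a status line.
function renderActionLabelled(state: HandState): {
  buttonLabel: string;
  statusText: string;
} {
  return state === "raised"
    ? { buttonLabel: "Lower hand", statusText: "Your hand is raised" }
    : { buttonLabel: "Raise hand", statusText: "Your hand is lowered" };
}
```

Even this only shifts the problem, since the user still has to read the label. But separating 'what is the state' from 'what will this click do' removes at least one layer of ambiguity.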

None of this is a problem in the classroom. The bodily affordance of the arm gives a student constant feedback as to whether their hand is raised. And they naturally lower it when called upon to speak (with some humorous exceptions). Their hand is also directly visible to everybody else, who can help magnify the signal with noises to alert the teacher should they have missed it. The teacher also knows that a raised hand signals a question not yet answered, whereas in a virtual Zoom or Teams call, they have to remember that the person has already asked their question and simply didn't click the button again. No platform has solved this. I have suggested to several manufacturers that the hand be automatically lowered when the person who raised it speaks or chats, but none of them have implemented it.
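
For what it's worth, here is a sketch of what that auto-lowering logic might look like. The participant model and event names are entirely my own assumptions, not any platform's actual API:

```typescript
// Hypothetical participant model and meeting events; no real platform API.
interface Participant {
  id: string;
  handRaised: boolean;
}

type MeetingEvent =
  | { kind: "startedSpeaking"; participantId: string }
  | { kind: "sentChatMessage"; participantId: string };

// Lower a participant's hand automatically once they have had their turn,
// i.e. as soon as they either speak or post a chat message.
function handleEvent(
  participants: Map<string, Participant>,
  event: MeetingEvent
): void {
  const participant = participants.get(event.participantId);
  if (participant && participant.handRaised) {
    participant.handRaised = false; // the signal has served its purpose
    // A real implementation would also notify the UI so that the button
    // label and the participant list update immediately.
  }
}
```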

Practical UX heuristics to help design VLEs

So can we actually design virtual interfaces that are directly meaningful? Thinking more deeply about affordances is a good start but we also need some more practical heuristics. UX literature is full of these. In my session, I want to focus on two ways of organising these heuristics: 1. The Knowledge Gap and 2. Idiomatic design.

The Knowledge Gap

Often, we hear that usable systems need to be 'intuitive'. It might seem that this 'intuitiveness' is the same as being directly meaningful through an affordance. But that is not the case. This way of thinking ignores the fact that an affordance is a combination of the properties of the object and the user's aims, which include any prior knowledge. In fact, the demand that interfaces be intuitive would make usable design impossible because different users are in different situations and therefore have different intuitions.

Thus any design process needs to start with a focus on what the UX expert Jared Spool has called The Knowledge Gap. This is the gap between what the user knows and what they need to know in order to complete the task. The aim of the designer is to provide enough cues to help users bridge the gap. This could be something as simple as using language the user is familiar with, but sometimes it needs to come in the form of instructions or even training.

In other words, we need to work to make virtual interfaces directly meaningful to their users. It won't happen on its own. And for that we need to pay close attention to what users do and how they think and speak about the interfaces we present them with.

Idiomatic design

One way to think about bridging the knowledge gap to bring more meaning into users' interactions with machines is 'idiomatic design'. It is an idea developed by Alan Cooper and colleagues in the book 'About Face'. Cooper et al. start with the same rejection of 'intuitive' design. Instead, they propose we think in terms of idioms. Idioms are set phrases whose meaning is not easy to figure out from their component words. But you only need to hear them used in the right context once or twice to know what they 'mean'. 'Kick the bucket' is a famous idiom of this type.

A good example of idiomatic design is the pinch-to-zoom gesture on touch screens popularised by Apple's iPhone. There is nothing natural or intuitive about it. There are no examples in nature where you can make an object bigger or smaller by pinching it. However, you only need to see it once to find it completely 'natural' and 'intuitive'. And in fact, Apple was so successful in bridging the knowledge gap through advertising and socialising this feature that by the time the first users got their iPhones, it seemed completely natural to them.

Sometimes the bridging can be achieved with just trial and error, as in the laudable new trend in booking interfaces where, to select the start and end of the booking, the user just has to click twice on a calendar widget. The first click sets the start and the second click the end of the booking period. There is nothing obvious about it, but two clicks should be enough to teach the user how this works, provided the system gives enough feedback on what happens after each click.
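
The interaction is, in effect, a tiny two-state machine. The sketch below, with names of my own invention, shows the logic together with the feedback that makes the trial-and-error learning work:

```typescript
// A minimal two-click date-range picker; all names are illustrative.
interface BookingSelection {
  start?: Date;
  end?: Date;
}

// Each click either starts a new selection or completes the current one.
// The returned feedback string is what teaches the user the idiom.
function handleDayClick(
  sel: BookingSelection,
  day: Date
): { sel: BookingSelection; feedback: string } {
  const { start, end } = sel;
  if (start === undefined || end !== undefined) {
    // First click, or a click after a completed selection: start over.
    return {
      sel: { start: day },
      feedback: `Check-in: ${day.toDateString()}. Now pick a check-out date.`,
    };
  }
  if (day.getTime() <= start.getTime()) {
    // Clicking on or before the start restarts the selection rather than failing.
    return { sel: { start: day }, feedback: `Check-in moved to ${day.toDateString()}.` };
  }
  // Second click: complete the range.
  return {
    sel: { start, end: day },
    feedback: `Selected ${start.toDateString()} to ${day.toDateString()}.`,
  };
}
```

Note that the feedback message after the first click is doing the teaching; without it, the idiom would be much harder to pick up.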

Usability heuristics

But most of the time, even idiomatic design has to do more to bridge the knowledge gap than mention a feature a few times or rely on users to figure it out. We need to look deeper. Books like About Face and many others help. Jakob Nielsen's 10 Usability Heuristics are also a good start and we will look at them very closely during my session. They can be used to evaluate virtual spaces and interfaces without relying on the evaluators' personal preferences.
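
To give a flavour of how such an evaluation can be made systematic, here is a sketch of one way to record findings against Nielsen's heuristics. The data structure is my own illustration, not part of Nielsen's method, though the heuristic names and the 0-4 severity scale are his:

```typescript
// Nielsen's ten usability heuristics (1994) as evaluation categories.
const HEURISTICS = [
  "Visibility of system status",
  "Match between system and the real world",
  "User control and freedom",
  "Consistency and standards",
  "Error prevention",
  "Recognition rather than recall",
  "Flexibility and efficiency of use",
  "Aesthetic and minimalist design",
  "Help users recognize, diagnose, and recover from errors",
  "Help and documentation",
] as const;

// One finding from a heuristic evaluation of, say, a VLE course page.
interface Finding {
  heuristic: (typeof HEURISTICS)[number];
  location: string; // where in the interface the problem appears
  severity: 0 | 1 | 2 | 3 | 4; // Nielsen's scale: 0 = not a problem, 4 = catastrophe
  note: string;
}

// Example: the raised-hand conundrum is, among other things, a failure
// of "Visibility of system status".
const raisedHandFinding: Finding = {
  heuristic: "Visibility of system status",
  location: "Webinar toolbar, hand button",
  severity: 3,
  note: "Users cannot tell from the button whether their hand is raised.",
};
```

Tying each finding to a named heuristic is what keeps the evaluation anchored to shared criteria rather than the evaluator's taste.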

But whichever framework we are using to make systems more usable, we always need to be mindful of the need to pay attention to the meaning our users want to derive from using them. And for that the concept of 'affordance' is fundamental.

About the author

Dominik Lukeš (@techczech) is a Digital Learning Technologist at Oxford University's Said Business School. He has previously worked in the field of dyslexia and has also contributed to the study of metaphor. He blogs on MetaphorHacker.net about questions of cognition and Digiknow.sbsblogs.co.uk about his work in digital learning technology.

Where to read more

Where to read more about affordances

Unfortunately, there is no one definitive description of affordances that captures all the aspects of this concept.

Perhaps the best place to start is the 2013 edition of Norman's 'The Design of Everyday Things'. In it, he not only describes what he means by the term but also addresses the differences between his and Gibson's conceptions of it, as well as some common misconceptions that have arisen since he introduced it.

To get some background on what Gibson had in mind by directly meaningful perception, the chapter on Theory of Affordances in his 1979 book 'The Ecological Approach to Visual Perception' is a good place to start.

But to truly appreciate its aims, I suggest Harry Heft's 'Ecological Psychology in Context' which outlines not only Gibson's own work but also that of his predecessors going back to William James.

To see some of the discussions about how the term affordances can be used or misused in the context of learning technology, there was a useful exchange in 2004 between Conole and Dyke, advocating for a maximalist interpretation, and Boyle and Cook, who suggested a narrower focus which I'm also proposing here.

I have also tried to use the notion in my recent piece on why online learning can present so many challenges.

Boyle, T. and Cook, J. (2004) ‘Understanding and using technological affordances: a commentary on Conole and Dyke’, Research in Learning Technology, 12(3). doi: 10.3402/rlt.v12i3.11260.

Conole, G. and Dyke, M. (2004) ‘Understanding and using technological affordances: a response to Boyle and Cook’, Research in Learning Technology, 12(3). doi: 10.3402/rlt.v12i3.11261.

Conole, G. and Dyke, M. (2004) ‘What are the affordances of information and communication technologies?’, Research in Learning Technology, 12(2). doi: 10.3402/rlt.v12i2.11246.

Gibson, J. J. (1979) The ecological approach to visual perception. Hillsdale, NJ; London: Erlbaum.

Heft, H. (2016) Ecological psychology in context: James Gibson, Roger Barker, and the legacy of William James’s radical empiricism. London: Routledge (Resources for ecological psychology).

Lukeš, D. (2020) ‘No back row, no corridor – metaphors for online teaching and learning’, Oxford Magazine. No 422, Eighth Week, Trinity Term, pp. 5–8.

Norman, D. A. (2013) The design of everyday things. New York, NY: Basic Books.

Where to read more about UX and HCI

UX and HCI are vast arenas and can be quite difficult to navigate. About Face by Alan Cooper and colleagues is a clear statement of some of the most important principles, including affordances, idiomatic design, user research, and personas (which Cooper introduced into the field). It contains many examples of useful and less useful design.

Of course, going back to Don Norman's foundational book is also important, although it focuses much less on digital design. Don Norman also led an early MOOC on Udacity, now available on YouTube.

To get a sense of the depth of the field, a browse through the monumental two-volume Handbook of Human Computer Interaction may also be very useful.

However, some of the most up-to-date thinking about UX is happening on the web alongside books and academic articles. Here I would recommend the work of Jared Spool, Jakob Nielsen (who runs a company together with Don Norman), and websites like UX Matters.

Cooper, A. et al. (2014) About face: the essentials of interaction design. Fourth edition. Indianapolis, Indiana: Wiley.

Nielsen, J. (1994) 10 Heuristics for User Interface Design, Nielsen Norman Group. Available at: https://www.nngroup.com/articles/ten-usability-heuristics/ (Accessed: 30 April 2020).

Norman, K. L. and Kirakowski, J. (2018) The Wiley handbook of human computer interaction. Hoboken, New Jersey: Wiley Blackwell (Ebook central).

Spool, J. (2005) ‘What Makes a Design Seem “Intuitive”?’, UX Articles by UIE, 10 January. Available at: https://articles.uie.com/design_intuitive/ (Accessed: 10 February 2020).

Spool, J. (2011) ‘Riding the Magic Escalator of Acquired Knowledge’, UX Articles by UIE, 2 November. Available at: https://articles.uie.com/magic_escalator/ (Accessed: 10 February 2020).
