Future Playful Interactive Environments for Animals

The other week Patricia Pons, whom I met at ACE’14, sent me a book chapter (from More Playful User Interfaces) on the future of playful interactive environments for animals. The authors (Patricia Pons, Javier Jaen and Alejandro Catala) propose, through a set of guidelines, a framework for meaningful playful environments called Intelligent Playful Environments for Animals (IPE4A).

The chapter beautifully outlines animal games within Animal Computer Interaction (ACI), from the history of unintended games for animals through to custom-built systems and ecosystems. It then goes on to cover more recent machine interfaces, such as the Ai Project, Pawsabilities and LonelyDog@Home. The analysis of these projects leads the authors to set out their IPE4A framework through an extensive list of requirements and their features. Recent research projects are then analysed against the proposed framework to evaluate the current state of ACI. Finally, the chapter raises some important questions about designing games for animals and looks at scenarios in which IPE4A could be used (e.g. therapy and training).

Altogether the chapter presents a very forward-looking framework with some good ideas. However, there is a gap in the chapter: how to put these ideas into practice. For example, a system that senses animals’ emotions and adapts to them: how could this be done? To shed some light on this question and on similar thoughts that came to mind while reading, I asked the authors, Patricia Pons, Javier Jaen and Alejandro Catala, a few questions about the chapter.

Interview with the authors

‘ACI community should rely on their most natural and intrinsic behaviour: play.’ While I agree with the concept of play being an innate behaviour in most animals, what do you see as natural to pets?

In our opinion, an animal’s natural behavior could be understood as an action the animal can physically and cognitively perform without effort, and which the animal performs in its daily routine without having been taught it. The challenge, as with traditional Human-Computer Interaction, is to find the right artifacts, understood as integrated spaces and devices that pets perceive as coherent interaction environments and whose affordances (the relationships between the artifacts and the intentions, capabilities and perceptions of the animals) can be naturally identified with little or no training. For example, if we take a look at Mancini’s work on Cancer Detection Dogs (http://dl.acm.org/citation.cfm?id=2702562), they centered the dogs’ interaction on the sniffing behavior, which is natural for them (physically possible and extremely usual). If, instead of just sniffing, the dog had to push a regular button (see Fig. 1) with its paw after sniffing a positive sample, we would not consider it a natural behavior for the dog: dogs do not use their paws as we use our hands; therefore, although easy to learn, it would require them to perform new physical actions they are not used to performing. We could consider pushing a button with the nose a more natural behavior for the dog, or perhaps adapting the location, size and materials of the button to fit the dog’s convenience and comfort (see https://youtu.be/_vYdDrMHC0U?t=55).

Fig. 1. Dog pushing a regular button

Your chapter centres on the newly developed IPE4A system, whose self-evolving factor I love. What currently designed system do you see as closest to this model?

When we devised the playful environments for animals as intelligent and self-evolving systems, we were inspired by work within the Ambient Intelligence (AmI) field. Mark Weiser, in his visionary 1991 work “The Computer for the 21st Century”, wrote: “Machines that fit the human environment, instead of forcing humans to enter theirs, will make using a computer as refreshing as taking a walk in the woods.” For us as humans, we have conceived smart environments where a virtual intelligence takes care of our duties and routines, easing our tasks and adapting to our preferences. We have all dreamed about that intelligent house which wakes us up by opening the blinds, prepares coffee for us while we are taking a shower, and cleans itself while we are out. Smart environments for humans are meant to learn and evolve towards adapting to our preferences. In the case of smart environments for animals, we still have a long way to go. However, we could follow the same process we followed to create smart environments for humans: study how sensing devices could gather information about the users, process this information in order to extract preferences and discard unwanted behaviours, and develop appropriate actuator mechanisms which make the system adapt to the context.
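To make that pipeline concrete, here is a minimal sketch of the sense, process and actuate loop the authors describe. Everything in it is a placeholder of my own devising (the sensor readings, the engagement estimate and the actuator), not part of the IPE4A work:

```python
# Minimal sketch of the sense -> infer -> adapt loop described above.
# All device interfaces here are hypothetical placeholders, not a real API.

import random
import time

def read_sensors():
    """Gather raw observations about the animal (stubbed with random data)."""
    return {
        "position": (random.uniform(0, 5), random.uniform(0, 5)),
        "activity_level": random.random(),  # 0 = resting, 1 = very active
    }

def update_preferences(preferences, observation):
    """Keep a running estimate of how much the animal engages with play."""
    alpha = 0.1  # smoothing factor: how quickly old behaviour is forgotten
    preferences["engagement"] = (
        (1 - alpha) * preferences["engagement"]
        + alpha * observation["activity_level"]
    )
    return preferences

def actuate(preferences):
    """Adapt the environment: only trigger stimuli when the animal is engaged."""
    if preferences["engagement"] > 0.5:
        print("Triggering a playful stimulus (e.g. a moving light)")
    else:
        print("Staying quiet: the animal seems to be resting")

preferences = {"engagement": 0.5}
for _ in range(3):  # a real environment would run this loop continuously
    observation = read_sensors()
    preferences = update_preferences(preferences, observation)
    actuate(preferences)
    time.sleep(0.1)
```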

Within the IPE4A model you mention that there should be sense-guided stimuli. How would you measure the appropriate stimuli?

The appropriate stimuli depend on the animal species and several aspects of their context. At the moment we are focusing our research on cats, and we are currently conducting a set of experiments to gather observational feedback about cats’ reactions to different kinds of stimuli. We have defined a list of potentially appealing stimuli of different natures: sound, movement, light, virtual content, etc. We let the cat move around and freely explore the room where the experiment takes place, but at intervals we present one of the stimuli repeatedly for a while and observe the cat’s reaction to it. With this study, we would like to come up with an initial taxonomy of which stimuli could be most appropriate for a cat, according to the cat’s personality, age and living conditions. The results must be based on observational cues, because we are unable to get the cats to fill in questionnaires about their preferences :) For this reason, we are being assisted by feline caretakers and therapists from our local shelter, who help us to understand the cats’ reactions and preferences.
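As an aside, here is one way such session observations could be recorded and aggregated into a per-cat stimulus taxonomy. The stimulus categories come from the answer above, but the reaction codes and scoring are my own illustrative assumptions:

```python
# Sketch of how observational data from a session might be recorded, so that
# reactions can later be aggregated into a per-cat stimulus taxonomy.
# The reaction codes and their numeric scores are illustrative assumptions.

from collections import defaultdict

STIMULI = ["sound", "movement", "light", "virtual_content"]
REACTIONS = {"approach": 1, "play": 2, "ignore": 0, "avoid": -1}

def summarise(trials):
    """Average the coded reactions per stimulus type for one cat."""
    scores = defaultdict(list)
    for stimulus, reaction in trials:
        scores[stimulus].append(REACTIONS[reaction])
    return {s: sum(v) / len(v) for s, v in scores.items()}

# Example session: (stimulus presented, caretaker-coded reaction)
session = [
    ("light", "play"),
    ("light", "approach"),
    ("sound", "avoid"),
    ("movement", "play"),
]
print(summarise(session))
# {'light': 1.5, 'sound': -1.0, 'movement': 2.0}
```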

In games such as Felino by Michelle Westerlaken, there is participation from both the animal and the human. Within your chapter (section 4.1) you mention that this gaming together strengthens the animal-human bond. Do you think, for example in Felino, the cat knows that the human is controlling part of the game?

We do not believe they really know we are controlling part of the game, but we believe they have the notion that the human is somehow participating in the game. One of our experiences during the stimuli experiment was very remarkable in this sense. During one of the sessions with a cat, a human subject was controlling a small drone with a mobile phone, moving it so the cat could chase it. To our surprise, the cat sometimes came near her and seemed curious about what she had in her hands, which was the mobile phone acting as a remote controller. Maybe the cat saw the human moving her hands and looking at it, and understood this as part of the game. We think something similar occurs in the case of Felino. As the cat and the human interact over the same space (in this case, the tablet screen), perhaps the human’s interaction with the tablet alone is enough for the cat to assume they are playing together.

Again in section 4.1 (a question I am really interested in, by the way), you say that the ACI machinery should sense the animal’s emotions: how do you see a system sensing emotions? Biomechanics?

This is a very interesting question, as this problem has been the focus of very intense research activity since the term “affective computing” was coined in 1997. In the pioneering work of Rosalind Picard (MIT), physiological data acquisition has been a traditional way of inferring emotional states, but probably the most distinct and rich source of affective information in the case of humans is our facial response. For animals, we would start with less invasive methods such as body posture recognition and analysis, movement patterns, facial expressions, etc. For example, you can get a lot of information about dogs’ and cats’ emotional states by looking at their tails. But biomechanics is not the only thing that could help in this task: the sounds the animal emits are also a source of information. Analyzing barking patterns in dogs might help us understand when they are barking out of happiness, annoyance or sadness, and the same could happen with cats’ meowing. Of course, this is only an idea and there is still a lot of work to be done. But hopefully ACI will help in transferring the knowledge we have about animals into software systems capable of recognizing and detecting the animals’ emotional states in a reliable way.
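Purely as a toy illustration of that last idea, the sketch below maps hand-picked acoustic features of barking to emotional labels. The features, thresholds and labels are all invented for the example; a real system would have to learn them from recordings labelled by animal behaviour experts:

```python
# Toy illustration of mapping vocalisation features to emotional states.
# The features and thresholds below are invented for the example; a real
# system would learn them from labelled recordings.

def classify_bark(pitch_hz, barks_per_second, bark_duration_s):
    """Very rough rule-of-thumb classifier over hand-picked acoustic features."""
    if pitch_hz > 600 and barks_per_second > 3:
        return "excitement/happiness"   # rapid, high-pitched barking
    if pitch_hz < 300 and bark_duration_s > 1.0:
        return "warning/annoyance"      # low, drawn-out barking
    return "unclassified"

print(classify_bark(pitch_hz=750, barks_per_second=4, bark_duration_s=0.3))
# excitement/happiness
```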

How do you see animals withdrawing consent? Do you think they would understand this concept?

It is easy for us to see when an animal does not want to play anymore. They walk away from the playful activity, or they get angry and annoyed if you insist on continuing the activity. We believe that detecting an animal’s unwillingness to play should be done by analyzing their behaviors and reactions. As withdrawal is a human concept, animals can’t explicitly say or indicate to us that they don’t want to play. We will therefore need to rely on their natural and intrinsic responses to learn how they express to us that they don’t want to play anymore, and use these responses to detect withdrawal in our systems. This is one of the things we are considering and implementing at the moment in our prototype system for cats.
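One very simple reading of “walking away” as withdrawal could look like the sketch below. The distance threshold and timeout are illustrative assumptions of mine, not values from the authors’ prototype:

```python
# Sketch of detecting play withdrawal from behaviour, as described above:
# if the animal stays away from the play area long enough, the system treats
# that as withdrawn consent and pauses the activity. The distance threshold
# and timeout are illustrative assumptions.

WITHDRAWAL_DISTANCE_M = 2.0   # how far from the toy counts as "walked away"
WITHDRAWAL_TIMEOUT_S = 30.0   # how long away before we pause the game

def has_withdrawn(track):
    """track: list of (timestamp_s, distance_to_toy_m) samples, in time order."""
    away_since = None
    for t, distance in track:
        if distance > WITHDRAWAL_DISTANCE_M:
            if away_since is None:
                away_since = t
            elif t - away_since >= WITHDRAWAL_TIMEOUT_S:
                return True
        else:
            away_since = None  # the animal came back: reset the timer
    return False

samples = [(0, 0.5), (10, 2.5), (25, 3.0), (45, 3.2)]
print(has_withdrawn(samples))  # True: away from t=10 to t=45 (35 s)
```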

Do you think it is ethical to put ACI in wild domains? While thinking about this question, I also thought it would be a really good idea to have an adaptation within a zoo that lets wild animals play with zoo animals to help them learn ordinary behaviour. What do you think of this idea?

Regarding ACI in wild domains, we should be really careful and deeply consider what benefits an ACI system could give to the animals in their ecosystem, and above all, whether those animals really need technological support to improve their wellbeing. Perhaps it is still too soon to discuss or envision such a system, and we should first focus our efforts on understanding the needs of the animals nearest to us. On the other hand, regarding intra-species communication, there are several questions to be answered before we could rely on wild animals communicating with semi-wild animals. The first that comes to mind is that of self-awareness: would the animals recognize themselves in a picture or mirror? Would the animals recognize other animals of the same species? Would the animals be capable of understanding that the consequences of their actions produce real-time reactions from the other animal? Although it is an interesting perspective, there are several steps we have to figure out before establishing those links.

How do you define meaningful interaction in animals? I’m not sure how I would know what is meaningful to an animal without biasing it by what is meaningful for me.

We could consider meaningful interactions to be the ones that cause reactions on the animal’s side of the interplay. In order to know how the animal interpreted that interaction, i.e. the meaning the interaction had for the animal, we should again rely on animal psychology and behavior. It is really hard not to project our own feelings onto what the animal seems to feel, and the most impartial and least obtrusive way to do it for now is based on observable physiological responses. Indeed, in our experiments we are being helped by a cat therapist in order to read the body language of the cats when facing the interactions we present to them.

You mention that gradually introducing machines into an animal’s environment helps the animal not perceive them as a danger. In Donna J. Haraway’s book, there is a case study of a woman who works with apes. When observing them, she initially took the approach of trying to blend in, but this made the apes more wary of her. So instead she took the approach of mimicking their behaviour in a non-aggressive, friendly way. This allowed her to record them behaving naturally, as they did not perceive her as a threat. Do you think that, instead of gradually introducing the machine into the environment, the environment could, as you mentioned when talking about adapting to their behaviour, behave in a way that displays friendly body language?

Both methods should be applied together. As the environment would be composed of several digital devices, those devices should start to interact with the animal gradually, behaving in a way the animal does not perceive as aggressive or intrusive. This is one of the goals of the experiment we are currently conducting with cats: testing different devices with different interactive modalities allows us to observe which devices and/or interactions scare the animal or cause annoyance. We are also observing which interactions seem best for letting the cat get comfortable with the device and play with it. An intelligent environment should be able to perform this process of trial and error in order to introduce a new interactive device in the friendliest possible way for the animal. Again, as we stated in our answer to your first question, the challenge is to provide the right artifacts exhibiting the most adequate affordances for the actual context a particular animal is currently in. This is the main challenge that the field of “Ambient Intelligence for Animals” will need to address in the coming years.
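To close, here is a minimal sketch of what that trial-and-error introduction policy might look like: a device steps up its interaction intensity only while the animal shows no fear, and backs off otherwise. The intensity levels and the fear flag are illustrative assumptions, not the authors’ design:

```python
# Sketch of a gradual, trial-and-error device introduction policy:
# a new device only increases its interaction intensity when the last
# session produced no aversive reaction. Levels and flags are illustrative.

INTENSITY_LEVELS = ["silent presence", "soft light", "slow movement", "full play mode"]

def next_intensity(current_level, animal_showed_fear):
    """Step the device up one level, or back off if the animal was scared."""
    if animal_showed_fear:
        return max(current_level - 1, 0)   # retreat to a gentler behaviour
    return min(current_level + 1, len(INTENSITY_LEVELS) - 1)

level = 0
for fear in [False, False, True, False]:   # observations from four sessions
    level = next_intensity(level, fear)
    print(INTENSITY_LEVELS[level])
# soft light, slow movement, soft light, slow movement
```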


I would like to thank the authors so much for sharing this interesting conversation with me. Find below the links to their chapter and the full Springer book:

Chapter: Envisioning Future Playful Interactive Environments for Animals http://link.springer.com/chapter/10.1007%2F978-981-287-546-4_6

Book: More Playful User Interfaces http://link.springer.com/book/10.1007/978-981-287-546-4

Written by Ilyena Hirskyj-Douglas on 4th June 2015
