For an up-to-date record, please also refer to my Google Scholar page.

Social Cognition in the Age of Human-Robot Interaction.
Henschel*, Hortensius*, & Cross, Trends in Neurosciences, 2020


Artificial intelligence advances have led to robots endowed with increasingly sophisticated social abilities. These machines speak to our innate desire to perceive social cues in the environment, as well as the promise of robots enhancing our daily lives. However, a strong mismatch still exists between our expectations and the reality of social robots. We argue that careful delineation of the neurocognitive mechanisms supporting human-robot interaction will enable us to gather insights critical for optimising social encounters between humans and robots. To achieve this, the field must incorporate human neuroscience tools including mobile neuroimaging to explore long-term, embodied human-robot interaction in situ. New analytical neuroimaging approaches will enable characterisation of social cognition representations on a finer scale using sensitive and adequate categorical comparisons (human, animal, tool, or object). The future of social robotics is undeniably exciting, and insights from human neuroscience research will bring us closer to interacting and collaborating with socially sophisticated robots.


[Paper] [Audio Paper]

Faces do not attract more attention than non-social distractors in the Stroop task.

Henschel, Bargel, & Cross

This is a preprint, i.e. a paper that has not yet undergone peer review.


As robots begin to receive citizenship, be treated as beloved pets, and be given a place at Japanese family tables, it is becoming clear that these machines are taking on increasingly social roles. While human-robot interaction research relies heavily on self-report measures for assessing people’s perception of robots, there is a distinct lack of robust cognitive and behavioural measures to gauge the scope and limits of social motivation towards artificial agents. Here we adapted Conty and colleagues’ (2010) social version of the classic Stroop paradigm, in which we showed four kinds of distractor images above incongruent and neutral words: human faces, robot faces, object faces (for example, a cloud with facial features) and flowers (control). We predicted that social stimuli, like human faces, would be especially salient and draw attention away from the to-be-processed words. A repeated-measures ANOVA indicated that the task worked (the Stroop effect was observed), and a distractor-dependent enhancement of Stroop interference emerged. Planned contrasts indicated that specifically human faces presented above incongruent words significantly slowed participants’ reaction times. To investigate this small effect further, we conducted a second study (N=51) with a larger stimulus set. While the main effect replicated, with the incongruent condition slowing participants’ reaction times, we did not observe an interaction effect of the social distractors (human faces) drawing more attention than the other distractor types. We question the suitability of this task as a robust measure of social motivation and discuss our findings in light of the conflicting results of Hietanen and colleagues (2016).


[Preprint] [Data]

No evidence for enhanced likeability and social motivation towards robots after synchrony experience.

Henschel, & Cross, Interaction Studies, 2020


A wealth of social psychology studies suggests that moving in synchrony with another person can positively influence their likeability and prosocial behavior towards them. Recently, human-robot interaction (HRI) researchers have started to develop real-time, adaptive synchronous movement algorithms for social robots. However, little is known about how socially beneficial synchronous movements with a robot actually are. We predicted that moving in synchrony with a robot would improve its likeability and participants’ social motivation towards the robot, as measured by the number of questions asked during a free interaction period. Using a between-subjects design, we implemented the synchrony manipulation via a drawing task. Contrary to predictions, we found no evidence that participants who moved in synchrony with the robot rated it as more likeable or asked it more questions. By including validated behavioral and neural measures, future studies can generate a better and more objective estimation of synchrony’s effects on rapport with social robots.

[Preprint] [Data] [Paper]

The causal role of the somatosensory cortex in prosocial behaviour.

Gallo, Paracampo, Müller-Pinzler, Severo, Blömer, Fernandes-Henriques, Henschel, ... & Gazzola, eLife, 2018


Witnessing another person’s suffering elicits vicarious brain activity in areas that are active when we ourselves are in pain. Whether this activity influences prosocial behavior remains the subject of debate. Here participants witnessed a confederate express pain through a reaction of the swatted hand or through a facial expression, and could decide to reduce that pain by donating money. Participants donated more money on trials in which the confederate expressed more pain. Electroencephalography showed that activity of the primary somatosensory cortex (SI) hand region explains variance in donation. Transcranial magnetic stimulation (TMS) showed that altering this activity interferes with the pain–donation coupling only when pain is expressed by the hand. High-definition transcranial direct current stimulation (HD-tDCS) showed that altering SI activity also interferes with pain perception. These experiments show that vicarious somatosensory activations contribute to prosocial decision-making, and suggest that they do so by helping to transform observed reactions of affected body parts into accurate perceptions of pain that are necessary for decision-making.


Click the boxes to see media coverage of my research: