In the 21st century, our daily interactions increasingly involve communicating with artificial agents equipped with spoken language interfaces. What assumptions do we spontaneously "bring to the table" in these interactions? Do we presume an artificial partner will respect the same conversational conventions as humans when it comes to things like (i) turn-taking, (ii) the understanding that "what is meant" often goes beyond what is said, and (iii) the expectation that speakers and listeners optimize communication by providing sufficient but not excessive information?
Our work in this area explores aspects of real-time interpretation as humans interact with a Furhat Model II robot that we have (perhaps predictably) named "Pal". Our trusty team of programmers helps us develop experiments in which listeners follow instructions from the robot, who is capable of producing many natural behaviours. These include mouth movements that are fully synchronized with speech, natural shifts in gaze and head position, and a range of emotional facial expressions. The robot's speech characteristics can also be altered, including the apparent gender and age of the voice, the language and dialect being spoken, the presence or absence of a foreign accent, intonation, and rate of speech. Our multidisciplinary research in this area draws heavily on the insights we have developed in our work on human-human communication.
Saryazdi, R., Nuque, J., & Chambers, C.G. (2019). Interaction with a robot partner: Age-related differences in communicative perspective taking. Paper presented at the APA Conference on Technology, Mind & Society, Washington, DC.
(More coming soon!)