Dr. Manuel Giuliani is Professor in Embedded Cognitive AI for Robotics at the Bristol Robotics Laboratory, University of the West of England, Bristol. Before coming to Bristol, he led the Human-Robot Interaction group at the Center for Human-Computer Interaction, Department of Computer Sciences, University of Salzburg. He received a Master of Arts in computational linguistics from Ludwig-Maximilians-Universität München, and a Master of Science and a PhD in computer science from Technische Universität München. He worked on the European projects JAST (Joint Action Science and Technology), JAMES (Joint Action for Multimodal Embodied Social Systems), and ReMeDi (Remote Medical Diagnostician), as well as the Austrian Christian Doppler Laboratory "Contextual Interfaces". His research interests include human-robot interaction, social robotics, natural language processing, multimodal fusion, multimodal output generation, and robot architectures.
The two general topics of my research are human-robot interaction and social robotics. Within these, I am mainly interested in multimodal fusion: every robot that is built to interact with humans needs to understand information from several input channels at once, for example from speech and gesture recognition. Combining these channels into a single interpretation is called multimodal fusion.
In this picture you can see me together with the JAST robot. I am pointing at a green cube while saying "please take this". Even for this simple interaction the robot needs to understand both language and gestures. Furthermore, it has to recognise the object I am pointing to, and it needs further information about that object, for example whether it can pick the object up with its grippers.
But it doesn't stop there. For a meaningful interaction, the robot needs even more abilities: it needs to know whether picking up the green cube is useful in the given situation. Maybe the cube does not fit into the current assembly plan that the robot and I are following. But how should the robot react? Should it just pick up the green cube, even though it knows that this is wrong? Or should it risk annoying me and simply tell me that the green cube is not necessary for the current building step?
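To make this concrete, here is a minimal sketch in Python of how such a fusion step could look. It is a hypothetical illustration only: the names `SpeechHypothesis`, `PointingGesture`, `WorldObject`, and `fuse`, and all thresholds, are my own assumptions for this example, not code from the JAST project.

```python
from dataclasses import dataclass

@dataclass
class SpeechHypothesis:
    text: str          # recognised utterance
    confidence: float  # recogniser score in [0, 1]
    timestamp: float   # seconds since the interaction started

@dataclass
class PointingGesture:
    target_object: str  # object id resolved by the vision system
    confidence: float
    timestamp: float

@dataclass
class WorldObject:
    object_id: str
    graspable: bool  # can the robot pick it up with its grippers?
    in_plan: bool    # is it needed for the current assembly step?

def fuse(speech, gesture, world, max_gap=1.5, min_conf=0.5):
    """Fuse a spoken command with a pointing gesture.

    A deictic word such as 'this' is only resolvable if a sufficiently
    confident gesture occurred close enough in time to the utterance.
    """
    if speech.confidence < min_conf or gesture.confidence < min_conf:
        return ("say", "Sorry, I did not understand you.")
    if "this" in speech.text and abs(speech.timestamp - gesture.timestamp) <= max_gap:
        obj = world[gesture.target_object]
        if not obj.graspable:
            return ("say", f"I cannot pick up the {obj.object_id} with my grippers.")
        if not obj.in_plan:
            # The socially aware option: explain instead of blindly complying.
            return ("say", f"The {obj.object_id} is not needed for the current building step.")
        return ("pick_up", obj.object_id)
    return ("say", "Which object do you mean?")

world = {"green cube": WorldObject("green cube", graspable=True, in_plan=False)}
speech = SpeechHypothesis("please take this", confidence=0.92, timestamp=10.2)
gesture = PointingGesture("green cube", confidence=0.88, timestamp=10.4)
print(fuse(speech, gesture, world))
# ('say', 'The green cube is not needed for the current building step.')
```

Note how the last two branches capture exactly the dilemma above: the fused interpretation alone is not enough, and the dialogue strategy still has to decide whether to comply or to explain. Besides multimodal fusion, my research covers several related topics: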
- Social robotics. How should the robot react to a human's words and gestures? How do humans perceive the robot's movements and actions? Together with my colleagues, I did research on how the words a robot says and the role it takes in the interaction affect the way humans perceive it.
- Knowledge representation. How can we represent the robot's knowledge about its human interaction partners and about its environment? In my PhD thesis as well as in my publications, I studied different ways to represent knowledge about human utterances and about the robot's own actions; a small sketch follows this list.
- Robot architectures. I am also interested in the architectures and methods that have to be implemented to realise multimodal fusion in a human-robot interaction system.
- Safety issues. Since robots are typically heavy machines that could harm humans, research on safety principles is indispensable for human-robot interaction.
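To illustrate the knowledge representation point above, here is a minimal frame-style sketch of how knowledge about a human utterance and about a robot action could be stored in one uniform formalism. Again, this is a hedged illustration: the `Frame` class, its slot names, and the `query` helper are hypothetical examples, not the representation from my thesis.

```python
from dataclasses import dataclass, field

@dataclass
class Frame:
    """A minimal frame-style knowledge entry with free-form slots."""
    frame_type: str  # e.g. "utterance" or "robot_action"
    slots: dict = field(default_factory=dict)

# Knowledge about a human utterance: who spoke, what was said, the
# recognised intent, and the object the deictic reference resolved to.
utterance = Frame("utterance", {
    "speaker": "human",
    "text": "please take this",
    "intent": "request(pick_up)",
    "referent": "green cube",
})

# Knowledge about one of the robot's own actions, stored in the same
# formalism so that dialogue history and plans can be queried uniformly.
action = Frame("robot_action", {
    "actor": "robot",
    "action": "pick_up",
    "object": "green cube",
    "outcome": "success",
})

def query(frames, **constraints):
    """Return all frames whose slots match every given constraint."""
    return [f for f in frames
            if all(f.slots.get(k) == v for k, v in constraints.items())]

history = [utterance, action]
print(query(history, referent="green cube"))  # finds the grounded utterance
print(query(history, action="pick_up"))       # finds the robot's own action
```

Keeping utterances and robot actions in the same structure is what makes it possible to ask questions that span both, such as which request led to which action.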