Plenary speakers

Dr Rebekah Wegener (Paris Lodron University Salzburg)

Rebekah Wegener is Assistant Professor at Paris Lodron University Salzburg, and her research focuses on multimodal interaction across contexts, computer-mediated communication and human-computer interaction. In addition to theoretical work in linguistics, she looks at applications in human-centred and explainable AI and contextual computing, particularly in medical and educational domains. Outside academia, Rebekah was project manager and head linguist in language technology and medical informatics, working on the design and development of tools for data collection and annotation and on building and annotating large datasets for industry and government.


The Semiotic Machine: Technology and multimodal interaction in context

Human interaction is inherently multimodal, and if we want to integrate technology into human sense-making processes in a meaningful way, what kinds of theories, models, and methods for studying multimodal interaction do we need? Bateman (2012) points out that “most discussions of multimodal analyses and multimodal meaning-making still proceed without an explicit consideration of just what the ‘mode’ of multimodality is referring to”, which may be because it seems obvious or because development is coming from different perspectives, with different ultimate goals. However, when we want to put multimodality to work in technological development, this becomes problematic. This is particularly true if any attempt is being made at multimodal alignment to form multimodal ensembles: two terms which are themselves understood in very different ways. Here I take up Bateman’s (2012, 2016) call for clarity on theoretical and methodological issues in multimodality, first giving an overview of our work towards an analytical model that separates different concerns, namely technologically mediated production and reception, human sensory-motor dispositions, and semiotic representations. In this model, I make the distinction between modality, codality and mediality and situate these within context. To demonstrate the purpose of such a model for representing multimodality, and why it is helpful for the machine learning and explicit knowledge representation tasks that we make use of, we draw on the example of CLAra, a multimodal smart listening system that we are building (Cassens and Wegener, 2018). CLAra is an active listening assistant that can automatically extract contextually important information from an interaction using multimodal ensembles (Hansen and Salamon, 1990) and a rich model of context.
In order to preserve privacy and reduce the need for costly data as much as possible, we utilise privileged learning techniques, which make use of input from multiple modalities during training, learn the alignments, and rely on the learned associations at run-time without access to the full feature set used during learning (Vapnik and Vashist, 2009). Finally, I will demonstrate how the integration of rich theoretical models and access to costly, human-annotated data, in addition to data that can easily be perceived by machines, makes this an example of development following true ‘smart data’ principles, which utilise the strength of good modelling and context to reduce the amount of data that is needed to achieve good results.
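The privileged-learning idea described above can be sketched in a few lines. This is a minimal illustration, not CLAra's actual implementation: it approximates Vapnik and Vashist's learning-using-privileged-information setting with a simple teacher-student (distillation-style) scheme, where a "teacher" sees both the cheap base modality and the costly privileged modality during training, and a "student" learns to reproduce the teacher's judgements from the base modality alone. All variable names and the synthetic data are assumptions made for the sketch.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression, Ridge

rng = np.random.default_rng(0)

# Synthetic stand-in data: base features (cheap, machine-perceivable,
# e.g. audio) plus privileged features (costly human annotations)
# that are only available at training time.
n = 500
X_base = rng.normal(size=(n, 5))
X_priv = X_base[:, :2] + rng.normal(scale=0.1, size=(n, 2))  # correlated
y = (X_base[:, 0] + X_base[:, 1] > 0).astype(int)

# Teacher: trained with both modalities.
teacher = LogisticRegression().fit(np.hstack([X_base, X_priv]), y)
soft = teacher.predict_proba(np.hstack([X_base, X_priv]))[:, 1]

# Student: learns the teacher's soft outputs from the base modality
# alone, so the learned alignment survives without privileged input.
student = Ridge().fit(X_base, soft)

# Run-time: only the base modality is available.
X_new = rng.normal(size=(3, 5))
preds = (student.predict(X_new) > 0.5).astype(int)
```

The design point mirrors the abstract: the privileged channel shapes what is learned, but is never needed at deployment, which is what reduces both the run-time data requirements and the privacy exposure.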

Prof. Kay O’Halloran (University of Liverpool)

Professor Kay O’Halloran is Chair Professor and Head of Department of Communication and Media in the School of the Arts at the University of Liverpool. She is an internationally recognized leading academic in the field of multimodal analysis, involving the study of the interaction of language with other resources in texts, interactions and events. A key focus of her work is the development of digital tools and techniques for multimodal analysis. Kay is developing mixed-methods approaches that combine multimodal analysis, data mining and visualisation for big data analytics within and across different media platforms.


Multimodality: Informing design, policymaking and activism for digital media

Members of society use digital media for every facet of their lives while being watched, analysed and manipulated by those who have designed and own the digital platforms. For example, data science and AI are being used to determine what information is made available and to whom. As a consequence, we have moved to unparalleled imbalances in power, knowledge and wealth arising from the “unauthorised privatization of the division of learning in society today” (Zuboff, 2019, p. 192). The question arises as to the role of multimodality today. Perhaps it is to assist with the development of explainable AI algorithms (with clarity about the results) to understand the distribution and filtering of information, together with its inherent biases and possibilities for social change (O’Halloran, 2023). This would enable a step change in research methodologies for understanding the co-development of digital media and society. The aim is to inform design, policymaking, and activism around future digital media technologies based on principles of inclusion, equality, transparency, privacy, social solidarity, health and wellbeing, sustainability, and preservation of the natural world. At the moment, the digital ecosystem makes it difficult to understand and interpret the legacies of digital media and their social, cultural, political and economic impact. From this perspective, it is evident that multimodality has a major role to play in mitigating the risks and leveraging the benefits of digital media for the foreseeable future.

Dr. Inés Olza (University of Navarra)

Inés Olza is Senior Research Fellow in Language and Cognition at the Institute for Culture and Society (ICS) of the University of Navarra (Spain), where she directs the Multimodal Pragmatics Lab. She earned her PhD in Spanish Linguistics in 2009 (University of Navarra), and she has been an invited researcher at several universities and institutions, among others the University of Antwerp (2006, 2007), the University of Bergen (2011), the University of Birmingham (UK, 2012), and the European Court of Justice (Luxembourg, 2017). She has authored several dozen publications on pragmatics, cognitive linguistics, corpus linguistics and discourse analysis. Her research has focused so far on Spanish, English, German and French. Currently, she is Principal Investigator of the Knowledge Generation Project MultiNeg, funded by the Spanish Ministry of Science (PGC2018-095703-B-I00) and centred on the detection and analysis of multimodal patterns for negation and disagreement in (semi)spontaneous interaction. Since 2013, she has been a leading member of the Spanish chapter of the Red Hen Lab for the Study of Multimodal Communication.


Systemic-functional approaches, cognitive approaches, or both? Recent developments in gesture analysis

In this talk, I will relate and contrast two different methodological approaches to gesture analysis in face-to-face interaction: (1) the cognitive and psycholinguistic view of gesture and sign as an integrated part of human communicative behavior (cf. Müller et al. 2013, Sandler 2018, among many others); and (2) a series of social semiotic approaches to multimodal communication that have recently shifted their focus onto paralanguage, relying on the apparatus of Systemic Functional Linguistics (Martinec 2004; Martin & Zappavigna 2019; Lim 2019; Farsani, Lange & Meaney 2022). The two approaches rely on different models of communication (e.g. the relationship and semiotic hierarchy between verbal and non-verbal cues) and diverge in considering language as a process and/or a product. I claim, however, that both accounts also share relevant epistemological groundings and can be successfully integrated into more efficient explanations of the multimodal nature of language and communication. To show this, I will focus on research conducted by the CREATIME and MultiNeg projects on multimodal patterns for the representation of time and the expression of disagreement in interaction, where a dynamic approach to informativity, the speech-gesture interface and the semiotic-cognitive-pragmatic continuum has been applied successfully (see Pagán et al. 2020; Olza 2022).