Grounded Meaning in Real-Time Communication
What's in a word? Much less than you might think. Although we rarely notice it, our interpretation of words, phrases, and utterances involves an unconscious array of mental acrobatics that helps us infer what a talker means. Viewed this way, the task of understanding language in real time requires a range of cognitive mechanisms that rapidly integrate incoming speech with available multi-modal information. This allows us to readily understand a phrase like "Roll the duck..." in relation to a scene like the one below, even though, divorced from the situation, this sequence of words is comparatively rare (about 1/100th as common as "roll the dice", for instance, according to Google).
Notice also that we do not require any further information to identify the intended referent, even though another toy duck is present. This is because the information elsewhere in the utterance (the verb "roll") is sufficient to pick out the intended duck based on its properties. Our research has shown that this kind of common-sense reasoning has broad effects on the time course and character of language processing. Yet because scientists tend to study language in non-situated contexts (without concrete referents or tasks), many formal models of language and cognition overlook these considerations and their implications for the nature and architecture of language processing mechanisms. Our ongoing work explores these issues using a methodology in which eye movements are recorded as listeners follow spoken instructions. This technique allows us to observe the ongoing operations of mental processing systems at the millisecond level.
Some of our recent work draws on phenomena from linguistic semantics and pragmatics to explore cases where communicative partners differ in their knowledge of what an object actually is. For example, look at the object on the bottom left below: it's a light bulb, right? Wrong! It's a candle (see the other photograph on the right of the same image, taken from a different perspective). Even if listeners know it is in fact a candle, how hard is it to identify that object in real time when a speaker refers to it as a "candle"? (Is it harder than if it were a typical candle?) And what if you knew the speaker believed it was a light bulb, yet the speaker began saying the word "candle" (or even a word that starts with the same sounds, like "candy")? Could you keep yourself from (even briefly) considering that the speaker might be referring to that object? A line of work on the impact of knowledge discrepancies in spoken conversation explores these and related issues.
Chambers, C.G. (2016). The role of affordances in visually situated language comprehension. In P. Knoeferle, P. Pyykkönen-Klauck, & M. Crocker (Eds.), Visually situated language comprehension (pp. 205-226). Amsterdam: John Benjamins.
Mozuraitis, M., Chambers, C.G., & Daneman, M. (2015). Privileged versus shared knowledge about object identity in real-time referential processing. Cognition, 142, 148-165.
Heller, D. & Chambers, C.G. (2014). Would a blue kite by any other name be just as blue? Effects of descriptive choices on subsequent referential behavior. Journal of Memory and Language, 70, 53-67.
Tsang, C., & Chambers, C.G. (2011). Appearances aren't everything: When perceptual and linguistic cues conflict during referential processing. Journal of Experimental Psychology: Learning, Memory, and Cognition, 37, 1065-1080.
Chambers, C.G., & Cooke, H. (2009). Lexical competition during second-language listening: Sentence context, but not proficiency, constrains interference from the native lexicon. Journal of Experimental Psychology: Learning, Memory, and Cognition, 35, 1029-1040.
Chambers, C.G., & San Juan, V. (2008). Perception and presupposition in real-time language comprehension: Insights from anticipatory processing. Cognition, 108, 26-50.
Chambers, C.G., Tanenhaus, M.K., & Magnuson, J.S. (2004). Actions and affordances in syntactic ambiguity resolution. Journal of Experimental Psychology: Learning, Memory, and Cognition, 30, 687-696.