Spring 2007 Seminar

In May 2007, over a dozen researchers from computer science, linguistics, and psychology presented work related to this project. The presenters included undergraduates, graduate students, postdocs, and faculty members. Below is the program for the May 7 conference, with links to the slides that were presented (click on titles). ALL WORK IS IN PROGRESS/IN DEVELOPMENT--PLEASE DO NOT CITE.

Abstract: The task of recognizing spoken words is notoriously difficult. Once dialectal variation is considered, the difficulty of this task increases. When living in a new dialect region, however, processing difficulties associated with dialectal variation dissipate over time. Through a series of priming tasks (form priming, semantic priming, and long-term repetition priming), we examine the general issue of dialectal variation in spoken word recognition, while investigating the role of experience in perception and representation. The main questions we address are: (1) How are cross-dialect variants recognized and stored? and (2) How are these variants accommodated by listeners with different levels of exposure to the dialect? Three claims are made based on the results: (1) Dialect production is not always representative of dialect perception and representation, (2) Experience strongly affects a listener's ability to recognize and represent spoken words, and (3) There is a general benefit to being the "ideal" variant, even if this variant is not the most common one.

Abstract: The question at the heart of speech perception is: How do listeners arrive at stable percepts from an acoustic signal that is continuously variable? We propose that speech perception processes may attempt to learn those properties of the speaker (rather than of the signal) that provide stable cues to invariance in speech. In the absence of other information about the speaker, the system relies on episodic order, representing those properties present during early experience with the speaker. But this 'first-impressions' heuristic can be overridden when variation is attributable to something pragmatic and temporary (a pen in the speaker's mouth), rather than to a speaker's permanent attributes.



Abstract: In a card task conducted in English, native Korean speakers produced different degrees of vowel duration in two phonetic contexts depending on their conversational partners (English vs. Korean). This demonstrates that the Korean speakers were sensitive to their interlocutors' phonological knowledge and adapted their speech to them.



Abstract: Addressee behavior shapes what speakers say; when speakers interact with a distracted addressee, the quality of their narrations decreases (Bavelas, Coates, & Johnson, 2000). Is this decrease due to the lack of addressee feedback, or to speakers’ expectations about whether addressees are engaged? In 39 dyads (32 men and 46 women), speakers told addressees two jokes. Addressees were either attentive or distracted by a second task, and speakers expected their addressees to be either attentive or distracted. When addressees were distracted, they gave less feedback. Nevertheless, when speakers expected distracted addressees, they put more time into the task, but only when addressees were actually distracted (not when addressees who were expected to be distracted were actually attentive). Moreover, speakers interacting with attentive addressees told the jokes with more vivid details when they expected attentive addressees than when they expected distracted ones, or when they were interacting with distracted addressees. These results suggest that speakers’ narratives are shaped not only by addressees’ feedback, but also by how speakers construe a lack of feedback on the part of a distracted addressee.

Abstract: In spontaneous speaking, new information tends to be expressed clearly, while given, predictable, or previously referred-to information tends to be attenuated. Current debate concerns whether this given/new effect is due to audience design (speakers adapting to addressees by taking their knowledge and needs into account), or is simply an automatic process by which speakers do what is easiest for themselves. In 20 groups of 3, one person retold the same Road Runner cartoon story twice to one partner (so the second retelling was to an Old Addressee) and once to another (New Addressee), counterbalanced for hypermnesia (Addressee1/Addressee1/Addressee2 or Addressee1/Addressee2/Addressee1). We compared events realized, words, details, perspectives, and word lengths (for lexically identical expressions) for a given speaker across all 3 retellings. We also compared the gesture space, iconic precision, and distribution of gesture types for a given speaker across the 3 retellings. In general, stories retold to Old Addressees were attenuated compared to those retold to New Addressees. We conclude that the given/new attenuation effect takes place at various levels of representation that may lend themselves differently to adjustments intended for the speaker or the addressee. Overall, the partner-specific adjustments we report in speech and gesture have implications for understanding the alignment of these modalities and extending the scope of audience design to include nonverbal behavior.

Abstract: Code-switching and borrowing are phenomena that appear in Deaf signers’ communication and an expected part of their culture. Sign language interpreters are users of both ASL and English, and they also employ cross-linguistic strategies in interpretation, including English mouthing, fingerspelling, and borrowing (Weisenberg 2002; Davis 2003). Preliminary observations show that language mixing is now an expected behavior in webcam interpretation via nationwide services like 'video-relay'. Equipment constraints and FCC regulations appear to be driving this language mixing. Furthermore, interpreters’ use of the language of the “automated world” is reaching deaf persons, and deaf callers are ‘experimenting’ with automated language, sacrificing accuracy for speed.

Judgments based on common ground
Matt Jacovina
4:10-4:20
Abstract: Common ground is information that is known to be shared between two individuals. When making judgments for others (such as when purchasing a gift), we sometimes make inferences based on this common ground. In this study, we looked at how the amount of information in common ground influences how accurately one can predict a partner’s preferences. Conversational partners asked each other four or twelve questions and then made choices about the other’s preferences from a display of similar items. Those with more information were less accurate at choosing the item their partner chose.

Abstract: Most of what we know about the architecture of the language production system, and the extent to which speakers plan utterances in advance of articulation, comes from studies of speech without addressees (monologue). However, it is possible that the pressure to take addressees’ needs into account could shift the scope of planning in language production. I will briefly present some results from a study in which speakers described pictures with an addressee present (dialog) or without an addressee present (monologue).


The RavenCalendar System
Amanda Stent
4:45-4:55
Abstract: The RavenCalendar System is a multimodal dialog system for maintaining your Google Calendar. It is based on the RavenClaw/Olympus dialog framework from CMU. We have adapted this architecture and several of its components to better support user adaptation. I will demonstrate the system and discuss planned development. We want your ideas for adaptation experiments in this system!

Abstract: I will describe our Rate-a-Course system. Preliminary data suggest that system users who are allowed relatively more initiative (with respect to the questions they answer) provide both qualitatively and quantitatively different responses than their peers who are allowed less initiative.

Summaries in the Rate-a-Course System
Patricia Ding
5:05-5:15
Abstract: Currently, the Rate-a-Course system lets users evaluate courses but not hear course evaluations.  I have designed several summaries that are being integrated into the system.  We hope to continue this work over the summer.