Assistant Professor of Psycholinguistics at the University of Buffalo, USA
Abstract:
Every time we speak, we find a way to turn an abstract message in our heads into a structured sequence of sounds that a listener can understand. In other words, we must assign a phonological or phonetic form to our message. What is this process like, and how can we learn about it? Traditionally, these production processes have been studied by gaining insights from speech errors or from speech onset latencies. In this talk, I present two lines of research arguing that we can also learn about these production processes from the prosodic and suprasegmental properties of the message. In the first part of the talk, I present research that uses priming to investigate the mental representations a speaker needs in order to encode prosodic phenomena into their message. In the second half of the talk, I propose that word durations can reflect the difficulty faced by the production system during phonological encoding, and I present a computational model showing how a single process can account for durational patterns that have traditionally been treated as separate in the literature: phonological overlap lengthening and repetition reduction. Together, these lines of research show that how speakers say something can reveal important information about the cognitive processes engaged in spoken language.
Study type:
Original research |
Article topic:
Philosophy of Mind and Cognitive Linguistics | Received: 1404/9/5 | Accepted: 1404/9/10 | Published: 1404/9/10