Linguistic inferences from pro-speech music: Musical gestures generate scalar implicatures, presuppositions, supplements, and homogeneity inferences
Language has a rich typology of inference types. It was recently shown that subjects can divide the informational content of new visual stimuli among the various slots of this typology: when gestures or visual animations replace specific words in a sentence, they trigger the very same inference types as language alone. How general are the relevant triggering algorithms? We show that they extend to the auditory modality and to music cognition. We tested whether pro-speech musical gestures, i.e. musical excerpts that replace words in sentences, give rise to the same inferences, and found that the full typology can be replicated with pro-speech music: both minimal and complex musical excerpts behave just like language, gestures, and visual animations with respect to the logical behavior of their content when embedded in sentences. Specifically, pro-speech music can generate scalar implicatures, presuppositions, supplements, and homogeneity inferences.
Co-written with Janek Guerrini
Publication details
- Publisher: Springer Nature, Linguistics and Philosophy 46, 989-1026