

Parts of this work were developed from the doctoral thesis of S.V.

We exploit the phenomenon of cross-modal, cross-language activation to examine the dynamics of language processing. Previous within-language work showed that seeing a sign coactivates phonologically related signs, just as hearing a spoken word coactivates phonologically related words. In this study, we conducted a series of eye-tracking experiments using the visual world paradigm to investigate the time course of cross-language coactivation in hearing bimodal bilinguals (Spanish–Spanish Sign Language) and unimodal bilinguals (Spanish/Basque). The aim was to gauge whether (and how) seeing a sign could coactivate words and, conversely, whether hearing a word could coactivate signs, and how such cross-language coactivation patterns differ from within-language coactivation. The results revealed cross-language, cross-modal activation in both directions. Furthermore, comparison with previous findings of within-language lexical coactivation for spoken and signed language showed how the impact of temporal structure changes across modalities. Spoken word activation follows the temporal structure of that word only when the word itself is heard; for signs, the temporal structure of the sign does not govern the time course of lexical access (location coactivation precedes handshape coactivation), even when the sign is seen. We provide evidence that, instead, this pattern of activation is motivated by how common the sublexical units of the signs are in the lexicon. These results reveal the interaction between the perceptual properties of the explicit signal and structural linguistic properties. Examining languages across modalities illustrates how this interaction impacts language processing.
