Auditory Interference in Chinese-English Code-Switching: Immersion as Help and Hindrance
By ALEXANDRA EKSHTEYN
The purpose of this study is to identify and analyze the effects of ambient-language audio on word identification and recall in Chinese-English bilinguals and English monolinguals. Thirty participants completed a lexical decision task, judging whether word pairs in Mandarin Chinese and American English were the same word, under three auditory environments: Chinese, English, and no audio. Reaction time and accuracy were analyzed for the relationship between successful word identification and the auditory interference present during each block. English audio produced the largest interference effect, yielding the lowest accuracy of the three conditions in both bilinguals and monolinguals, exceeding the effects of Chinese audio and silence. These results elaborate on the line between immersion in a second language as a help and that same immersion as a hindrance, and further the investigation into the underlying mechanisms of bilingual language processing.
Keywords: bilingualism, Chinese-English, code-switching, lexical decision task
Supported by the language-learning industry, bi- or multilingualism has been credited with benefits in overall cognition, such as earlier development of selective attention in bilingual children relative to their monolingual counterparts, and the delay of dementia symptoms in elderly, at-risk patients by an average of four years (Bialystok, 1999; Bialystok & Craik, 2010). However, despite the touted benefits of bilingualism, the actual code-switching inherent to bilingual processing is still under debate, as are the asymmetry in processing the two languages and the potential deficits in accuracy that may result (Marian & Shook, 2012). This debate runs in tandem with discussions of the most efficient and productive way to produce bilinguals: individuals who can actively process and interact in both languages outside of a structured classroom setting. Most reports point to immersion, whether by teaching classes entirely in the target language or by surrounding a student with target-language media during personal study time, as the most effective and natural way to supplement and encourage language learning (Lazaruk, 2007; Thomas & Collier, 2003). The cost of switching back into the dominant language (L1) from the weaker language (L2) in bilingual speakers is, counterintuitively, larger than that of the inverse (Meuter & Allport, 1999). Current research also reports a persistence of the previous language when the participant switches between the two, indicating significant proactive interference when moving between two mental sets and the necessity of active, conscious cognitive processes to facilitate the switch. In particular, the dominant language leaves an imprint on tasks conducted in L2, because the participant must devote more energy to inhibiting L1 recall.
When further compared against monolinguals in a general task-switching study, bilinguals incur less of this cognitive cost in switching between tasks, presumably because of prior experience switching between linguistic sets (Prior & MacWhinney, 2010). That study compared monolinguals and bilinguals on a task-switching paradigm; while the groups did not differ in the single-task sections, in the mixed-task sections bilingual participants showed a smaller cost when switching between tasks. This, in conjunction with Meuter and Allport's study, suggests that while there is a cost in switching between languages, repeated practice in suppressing one language in order to think in another may benefit the overall ability to switch between cognitive tasks. Further, while studies have shown that bilinguals can switch tasks with less cognitive cost than monolinguals, there may be interference in recalling one language because of environmental priming in the other. When auditory interference was added to a sentence-processing task, whether in a language shared by monolingual and bilingual children or in a language unknown to both groups, bilingual children demonstrated greater proficiency in focusing on the task despite the distraction (Filippi et al., 2014). However, outside of anecdotal evidence from bilinguals attempting to code-switch through auditory interference, there is a lack of research on bilingual interference processing where the distractor languages are the two that the participants themselves speak (and on whether the concurrent processing of both the distraction and the target task dampens the advantage bilinguals are lauded for). This, in combination with the advocacy for immersion in the target language with minimal dominant-language use, highlights a gap in the understanding of bilingual processing.
Specifically, where in the process of mastering a second language does immersion become a hindrance instead of a help, and does the language of immersion determine the magnitude of the interference? The purpose of this study is twofold: to use auditory interference in Chinese and English to measure the effect on reaction time (RT) to word pairs, and to determine whether opposing effects on accuracy are present between Chinese-English bilinguals and English monolinguals coached in Chinese. The hypothesis is likewise twofold: for monolinguals, exposure to L2 audio would assist the recall of study words for the lexical decision task, increasing accuracy and lowering RT, while exposure to L1 audio would inhibit recall. For bilinguals, exposure to either language would inhibit recall, lowering accuracy and increasing RT.
Thirty participants were recruited from the University of Utah, Westminster College, Salt Lake Community College, and Weber State University (17 female, 10 male, 3 other). Participants were of "traditional" college age, between 18 and 24 years (M = 20.23, SD = 1.59), and were evenly divided into monolingual and Chinese-English bilingual subgroups. The fifteen monolingual participants were defined as not being fluent in any language besides English, disregarding introductory foreign language classes taken as part of college requirements. Bilingual participants were required to be either native speakers of both languages, or natively fluent in one language with at least two years of extensive experience in the other.
Participants were asked to complete a lexical decision task to determine whether two words presented on the screen were the same word in both Mandarin Chinese and American English. The monolingual and bilingual groups took two different tests catering to their level of comprehension of Chinese. The monolingual task consisted of 28 words in Mandarin Chinese, presented in pinyin (the phonetic romanization of Chinese characters). Pinyin was used instead of the original Hanzi characters to limit the potential confound of an orthographic shift, and to prevent monolinguals from memorizing the shapes of the characters instead of actually understanding their semantic meaning. The vocabulary consisted of numbers (1-10), colors, cardinal directions, and seasons; words were selected because most included category markers hinting at their definition. For example, color words contain the suffix sè, so a participant presented with hóngsè (red) would already be cued to look for a corresponding color word in English. Similarly, this remedied the confusion inherent in Mandarin homophones, where the category suffix tiān in dōngtiān (winter) differentiates it from dōng (east). Monolingual participants were also given a study guide upon enrollment in the study for personal review, as well as an allotted 20 minutes to review again before the task. The study guide introduced the vocabulary using pinyin and pictures, without English definitions. The bilingual task was considerably more difficult, with 232 words taken from the New Practical Chinese Reader textbook, also in pinyin. Words included colors, animals, foods, locations, pronouns, and situational verbs, and corresponded to an A2/B1 level on the Common European Framework of Reference for Languages (CEFRL): advanced-beginner/early-intermediate understanding, with the ability to carry out basic conversations about personal interests, occupation, and daily interactions (Council of Europe, 2011).
Bilingual participants were not provided with a study guide prior to the task, under the assumption that the words presented would be ones they used, and had a solid understanding of, in daily interactions. Finally, both monolingual and bilingual participants were exposed to three different auditory conditions: Chinese, English, and a block of no audio to establish a testing baseline. Audio for both the Chinese and English blocks was spliced together from four different news broadcasts or talk shows in the respective language, overlaid on one another, so that participants would be exposed to the vocabulary and prosody of each language without being distracted by following one concise narrative within the interference.
Participants were asked to complete three blocks of a computerized lexical decision task to identify words in Mandarin pinyin and English, with each block corresponding to a different auditory interference. Using a clicker box ("1" for correct, "4" for incorrect), participants responded to two words on the screen, one in each language. If both words were the same in Chinese and English (for example, píngguǒ and apple), participants would click "1". If the words were different (píngguǒ and elephant), they would select "4". Each word-pair trial was present on the screen for three seconds. The monolingual task had a total of 168 trials, with 56 per auditory condition, and the bilingual task had 464 trials, with 154 per auditory condition.
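The per-block scoring described above (match/mismatch judgments on a clicker box, with accuracy and reaction time per condition) can be sketched in a few lines of Python. The trial records and field names below are hypothetical; the paper does not specify the experiment software used.

```python
# Sketch of per-condition scoring for the lexical decision task.
# Trial records are hypothetical illustrations, not the study's data.

def score_block(trials):
    """Return (accuracy, mean RT in ms) for a list of trial records.

    Each trial is a dict with:
      is_match  - True if the pinyin/English pair are the same word
      response  - "1" (same) or "4" (different), from the clicker box
      rt_ms     - reaction time in milliseconds
    """
    correct = [t for t in trials
               if t["response"] == ("1" if t["is_match"] else "4")]
    accuracy = len(correct) / len(trials)
    mean_rt = sum(t["rt_ms"] for t in trials) / len(trials)
    return accuracy, mean_rt

# Example: a tiny hypothetical block of four trials.
block = [
    {"is_match": True,  "response": "1", "rt_ms": 950},
    {"is_match": False, "response": "4", "rt_ms": 1100},
    {"is_match": True,  "response": "4", "rt_ms": 1300},  # missed match
    {"is_match": False, "response": "4", "rt_ms": 1000},
]
acc, rt = score_block(block)  # acc = 0.75, mean RT = 1087.5 ms
```

In the actual study this scoring would run once per auditory condition (Chinese, English, no audio) for each participant, producing the per-condition means analyzed below.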
Resulting data for both tasks were analyzed for accuracy and RT for each group (bilingual and monolingual) in each condition (Chinese, English, no audio) using a multivariable ANOVA. The mean accuracy scores for the bilingual participants were 90%, 89.5%, and 91.2% across the Chinese, English, and no-audio conditions respectively, with RTs of 1082.1 ms, 1074 ms, and 991.2 ms. The mean monolingual scores were accuracies of 83.9%, 83.1%, and 88.9%, with RTs of 1311.9 ms, 1304.9 ms, and 1200.2 ms across the same three conditions. There was a significant difference in accuracy between the monolingual and bilingual groups for the Chinese and English conditions [F(1,28) = 8.1, p < 0.05; F(1,28) = 9.7, p < 0.05]. In addition, there was a significant difference in RT between the monolingual and bilingual groups for the Chinese, English, and no-audio conditions [F(1,28) = 5.1, p < 0.05; F(1,28) = 5.7, p < 0.05; F(1,28) = 7.1, p < 0.05].
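The between-group tests reported above are one-way ANOVAs with df = (1, 28), i.e., two groups of 15 participants each. A minimal sketch of that computation is below; the per-participant accuracy scores are invented for illustration, since only the group means are published.

```python
# Sketch of a two-group one-way ANOVA on accuracy scores.
# All scores below are hypothetical; the study reports only group means.

def one_way_anova(group_a, group_b):
    """F statistic for a two-group one-way ANOVA, df = (1, n_a + n_b - 2)."""
    n_a, n_b = len(group_a), len(group_b)
    mean_a = sum(group_a) / n_a
    mean_b = sum(group_b) / n_b
    grand = (sum(group_a) + sum(group_b)) / (n_a + n_b)
    # Between-group sum of squares: group sizes times squared mean offsets.
    ss_between = n_a * (mean_a - grand) ** 2 + n_b * (mean_b - grand) ** 2
    # Within-group sum of squares: squared deviations from each group mean.
    ss_within = (sum((x - mean_a) ** 2 for x in group_a)
                 + sum((x - mean_b) ** 2 for x in group_b))
    df_between, df_within = 1, n_a + n_b - 2
    return (ss_between / df_between) / (ss_within / df_within)

# Hypothetical accuracy scores for 15 bilinguals and 15 monolinguals,
# matching the paper's df = (1, 28).
bilinguals = [0.93, 0.89, 0.91, 0.90, 0.88, 0.92, 0.90, 0.91,
              0.89, 0.90, 0.92, 0.88, 0.91, 0.90, 0.89]
monolinguals = [0.85, 0.82, 0.84, 0.83, 0.86, 0.81, 0.84, 0.83,
                0.85, 0.82, 0.84, 0.83, 0.86, 0.82, 0.84]
f_stat = one_way_anova(bilinguals, monolinguals)  # large F -> group difference
```

With df = (1, 28), an F above roughly 4.2 corresponds to p < 0.05, which is the threshold applied to the reported comparisons.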
Analysis of the results supports both parts of the hypothesis. Bilinguals regularly outperformed the monolingual participants with respect to both accuracy and RT, which can be attributed both to more in-depth familiarity with both languages and to a better ability to code-switch while parsing auditory interference (as supported by Filippi et al., 2014). However, bilingual and monolingual results follow the same pattern across the three conditions. The English condition produced the largest interference effect in both groups, yielding the lowest accuracies, and the no-audio condition had the lowest interference effect, with high accuracies and low RTs. This is understandable, given that the common test-taking environment in United States classrooms is a silent one, so participants were more accurate and faster in a more predictable, standard environment with no distractions. Finally, both groups experienced a speed/accuracy trade-off in the Chinese audio block: accuracy was higher than in the English condition, but participants slowed down in order to be more accurate, resulting in the highest RTs. In the case of the bilingual participants, this can be explained by the fact that 10 of the 15 bilinguals were originally native English speakers who learned Chinese later in life. Therefore, despite extensive experience (an average of 3 years) using Chinese on a regular basis, they still benefited to some extent from the immersive Chinese audio. From these results, it can be concluded that interference from L1 (English for both monolinguals and bilinguals, as the dominant and most commonly used language in their daily lives) reduced their ability to accurately code-switch between the two languages, while exposure to L2 (Chinese for the monolinguals and most bilinguals) allowed for high recall accuracy in the monolinguals and an increase in accuracy relative to the English condition in the bilinguals, at the expense of their speed.
It can be said, then, that auditory interference in L1/L2 affects the speed with which both bilinguals-in-training and fluent bilinguals can code-switch between their two languages, more so than the accuracy with which they do so. These results also indicate that language learning and processing incorporate multiple modalities in tandem to acquire and apply linguistic knowledge, and they speak to the variety of teaching methods and classroom tools that must be incorporated to learn a language fluently. Further, this research into the underlying processing of bilingual code-switching calls into question the general benefits of bilingualism for larger cognitive processes and task-switching, specifically the potential deficits faced by bilinguals in comparison to their monolingual counterparts.
This study was limited by an inability to reliably measure the intensiveness of self-study in the monolingual group. Monolinguals were provided the study guide upon enrollment and were also given time before the task to review, but it cannot be known how long or how well they used the study guide, or how that may have affected their performance. Also, as previously stated, two-thirds of the bilinguals were originally native English speakers, and because they still benefited from the Chinese audio immersion to an extent, this study was not able to measure a truly balanced bilingual response arising from simultaneous language learning and symmetrical code-switching costs. Finally, this study only examined single-word matching, and used Mandarin pinyin instead of the original Hanzi characters. Future research could incorporate full-sentence verification to examine differences in L1 and L2 syntax structure or orthographic shifts, and how those differences and more extensive processes would be affected by auditory interference.
- Bialystok, E. (1999). Cognitive Complexity and Attentional Control in the Bilingual Mind. Child Development, 70(3), 636-644. doi:10.1111/1467-8624.00046
- Bialystok, E., & Craik, F. I. (2010). Cognitive and Linguistic Processing in the Bilingual Mind. Current Directions in Psychological Science, 19(1), 19-23. doi:10.1177/0963721409358571
- Council of Europe (2011). Common European Framework of Reference for Languages: Learning, Teaching, Assessment. Council of Europe.
- Filippi, R., Morris, J., Richardson, F. M., Bright, P., Thomas, M. S., Karmiloff-Smith, A., & Marian, V. (2014). Bilingual children show an advantage in controlling verbal interference during spoken language comprehension. Bilingualism: Language and Cognition, 18(3), 490-501.
- Lazaruk, W. (2007). Linguistic, Academic, and Cognitive Benefits of French Immersion. Canadian Modern Language Review, 63(5), 605-627.
- Marian, V., & Shook, A. (2012). The Cognitive Benefits of Being Bilingual. Cerebrum: The Dana Forum on Brain Science, 2012(13).
- Meuter, R. F., & Allport, A. (1999). Bilingual Language Switching in Naming: Asymmetrical Costs of Language Selection. Journal of Memory and Language, 40(1), 25-40.
- Prior, A., & MacWhinney, B. (2010). A bilingual advantage in task switching. Bilingualism: Language and Cognition, 13(2), 253-262.
- Thomas, W. P., & Collier, V. P. (2003). The Multiple Benefits of Dual Language. Educational Leadership, 61(2), 61-64.