A typical kind of L2 learning mistake is to use L1 syntax and semantics to generate an L2 sentence. Some people may think that every type of L2 mistake is related to one's L1. However, outside the domain of phonology, I'm not convinced that's the case. Take myself, for example. I speak Chinese as my L1, an SVO language. Yet when I started learning Japanese at the age of 18 and was exposed to lots of L2 SOV sentences, the SOV word order did not feel strange to me at all. Likewise, when I was learning English, I did not end up making lots of mistakes that look like Chinese syntax or semantics transplanted into English. Instead, I picked up the correct usage of the subtle details of the language without being aware of many of its differences from Chinese (much like a native speaker who is unaware of them). So what happened here is not that I was (consciously or unconsciously) comparing English to Chinese as I learned it; rather, English sentences were presented to me holistically, with all their details (phonological, syntactic, semantic, pragmatic, etc.), and my brain's statistical learning mechanism extracted that information rather implicitly and unconsciously. On this account, for example, Japanese cannot get confused with Chinese for me, because the SOV word order is associated only with Japanese sounds. That is a good strategy for learning a new language: make good use of your brain's native machinery. And that strategy is much better than an explicitly grammar-based, analytical approach, which is a different route entirely, I believe.
Recently, many people in our family in Beijing have been hospitalized with various diseases. My father (who is alive and well in Singapore) gave me some life advice: first, don't tire yourself out with work, but spend time enjoying your life as well, even while you're young; second, don't constrain yourself too much — if you want something, go and do it; third, don't buy into any health advice without considering whether it suits you (this points to the fact that many people today die on the treadmill, while my grandpa, who is doing well at the age of 92, has never exercised a day in his life). In general, be good to yourself and enjoy your life!
It hit me yesterday, while watching the animated movie "The Painting" at the Goethe Institute, that human word recognition is not an isolated, simple problem, but a highly integrated brain process. In this movie, there is an imagined world inside every painting on the wall, and within one such world, where the painter did not finish the work, there are three classes of people. The names of the three classes are presented to the audience in speech only, without their written forms: All-done, Halfy, and Sketchy (characters in the painting that are fully done, half done, and only a sketch). It took me a few minutes and a few instances of usage before I realized that these spoken words correspond to the three written forms listed above. The first time a character said "I'm a Halfy", I had no idea what the sound 'halfy' corresponded to in English, or how to spell it. Similarly, when they said "she's an All-done", I had no idea the sound was 'all done', because syntactically and semantically I had never heard the sound sequence 'all done' used as a noun, and I had no experience with a noun in such a syntactic position that maps to any English word I know sounding like 'all done'. So I failed at the task of word recognition even though I do, in fact, know these words. What does this tell us? First of all, I think recognizing a word embedded in a syntactic structure and a semantic/pragmatic context is probably different from recognizing a word in isolation. In this case, in order to map the sound to the form 'all done', a good amount of syntactic, semantic, and pragmatic information is needed (and much of that information may not be presented linguistically; in this movie, for instance, visual information helps decode the difference between an 'All-done' and a 'Sketchy').
Second, mapping the sound 'all done' to the abstract label 'all done' is apparently not a simple task whose difficulty depends only on whether the speech signal is varied or distorted. Imagine hearing a British person, an Indian person, and a standard American speaker say it. American English sounds the most familiar to me, and in the movie it was delivered in a standard, almost newscaster-like accent. I still could not effectively map the sound of 'all done' to any abstract label (in fact, the label 'all done', or any other label, never even occurred to me as a possible candidate for this sound). In other words, variation or no variation in the speech signal, I simply cannot recognize this word effectively without the corresponding syntactic, semantic, and pragmatic information. You could say that syntax was blocking me from activating the label 'all done', so that it was not even slightly activated as a candidate. It seems, then, that even under exemplar theory we have to consider the possibility that in the brain's word recognition, not only phonetic detail and social information are stored together, but syntactic, semantic, and pragmatic information as well. In other words, the word is stored in a truly holistic fashion, such that if one or more of these aspects (or constraints) fails to match, then no matter how clean the speech signal is, you still cannot activate and access that label even slightly.
I recently listened to a Brain Science Podcast episode interviewing cognitive linguist Ben Bergen (UCSD, Cognitive Science). Basically, how we retrieve word meaning is a puzzle. People have previously proposed dictionary-like approaches and mentalese-style approaches, and both have internal logical problems. Bergen's point about embodied cognition is simple: when we interpret the semantic value of a sentence, our brain tries to re-create a virtual experience of what the language encodes. This idea is not new: we know from other areas of neurocognition that the brain regions responsible for, say, imagining something and actually seeing it overlap. One classic demonstration is the Perky effect, in which imagining an object (say, a banana on a wall) interferes with perceiving a faint real image of the same object (alternatively, you can get a priming effect when one precedes the other in time). One piece of evidence that this holds for language processing is that people with lesions in the visual cortex show impaired processing of visual words as well. Another point he brought up is that syntactic structures themselves carry meaning: two ways of saying the same thing are usually not exactly identical semantically and pragmatically. That is the effect