Corpus search results (sorted by the first word after the keyword)

Click a serial number to open the corresponding PubMed page.
1 y mapping the alphabetic characters onto the spoken word.
2 ing, and integration of a heard sound with a spoken word.
3 itten text before a degraded (noise-vocoded) spoken word.
4 EG data while human participants listened to spoken words.
5 uals report color experiences when they hear spoken words.
6 r the acoustic-phonetic cues at the onset of spoken words.
7 about listeners' brain activity as they hear spoken words.
8  pictures, indicating their understanding of spoken words.
9 into neural mechanisms contributing to human spoken language.
10 hen it comes to an important human attribute-spoken language.
11 y integration in sign language compared with spoken language.
12 tational primitive for the representation of spoken language.
13  but no other animal, make meaningful use of spoken language.
14 eme difficulties producing and understanding spoken language.
15 unique in their ability to communicate using spoken language.
16 ationships among sign language, gesture, and spoken language.
17 ind from birth responds to touch, sound, and spoken language.
18 "visual") brain regions respond to sound and spoken language.
19 namics adjusts to the temporal properties of spoken language.
20 scripts which encode the sound properties of spoken language.
21 behaviorally relevant oscillatory tuning for spoken language.
22 ocalize very much like human infants acquire spoken language.
23  explain the evolutionary advantage of human spoken language.
24 amental difference versus human gestures and spoken language [1, 5] that suggests these features have
25 e phenotypes were (1) phonemic awareness (of spoken words); (2) phonological decoding (of printed non
26          Response patterns discriminative of spoken words across language were limited to localized c
27 loped by deaf individuals who cannot acquire spoken language and have not been exposed to sign langua
28 with both the perception of visual words and spoken language, and it examines how such functional cha
29 e neural parallel between birdsong and human spoken language, and they have important consequences fo
30                    By contrast, responses to spoken language are present by 4 years of age and are no
31 work of neural structures, regardless of how spoken words are represented orthographically in a writi
32                                    We used a spoken word as a repeating "standard" and periodically i
33 panzee (Pan troglodytes) that recognizes 128 spoken words, asking whether she could understand such s
34  Functional flexibility is a sine qua non in spoken language, because all words or sentences can be p
35 ille words and occipital cortex responded to spoken words but not differentially with "old"/"new" rec
36 y tool humans use to exchange information is spoken language, but the exact speed of the neuronal mec
37 ests that the semantic representation of the spoken words can be activated automatically in the late
38  Learning to read requires an awareness that spoken words can be decomposed into the phonologic const
39 - is iconic, highly variable, and similar to spoken language co-speech gesture.
40 onclude that in the absence of visual input, spoken language colonizes the visual system during brain
41 ngs support the dual neurocognitive model of spoken language comprehension and emphasize the importan
42              Electrophysiological studies of spoken language comprehension have identified an event-r
43  issue within a dual neurocognitive model of spoken language comprehension in which core linguistic f
44 ating children's syntactic processing during spoken language comprehension, and a wealth of research
45 of hearing loss on neural systems supporting spoken language comprehension, beginning with age-relate
46 plore the brain regions that are involved in spoken language comprehension, fractionating this system
47 ed from processes of semantic integration in spoken language comprehension.
48                                    Toddlers' spoken word comprehension was examined in the context of
49 age in this posterior perisylvian region and spoken word comprehension.
50 al coordinate data for lip shape during four spoken words decomposed into seven visemes (which includ
51              We explored the neural basis of spoken language deficits in children with reading diffic
52 ded woman with left-hemisphere dominance for spoken language, demonstrated a dissociation between spo
53 n V4/V8 when imagining colors in response to spoken words, despite overtraining on word-color associa
54  to produce songs in a manner reminiscent of spoken language development in humans.
55 ification (hearing aids) that can facilitate spoken language development in young children with sever
56 itudinal, and multidimensional assessment of spoken language development over a 3-year period in chil
57     Two groups of participants learned novel spoken words (e.g., cathedruke) that overlapped phonolog
58 y to do so depends on the structure of their spoken language (English vs. Hebrew).
59       Poor hearing acuity reduces memory for spoken words, even when the words are presented with eno
60 s.;>This concept implies vocal continuity of spoken language evolution at the motor level, elucidatin
61 ver, it has not been clear whether it is the spoken word forms or the meanings (or both) of nouns and
62 ths by number of phonemes and graphemes, and spoken-word frequencies.
63 cabulary and learning the sound structure of spoken language go hand in hand as language acquisition
64                            The processing of spoken language has been attributed to areas in the supe
65 s on one of the first steps in comprehending spoken language: How do listeners extract the most funda
66 ic evolution of this crucial prerequisite of spoken language: (i) monosynaptic refinement of the proj
67 elation to performance on a standard test of spoken language in 16 chronic aphasic patients both befo
68 ce of "visual" cortex responses to sound and spoken language in blind children and adolescents.
69 e of auditory feedback in the development of spoken language in humans is striking.
70                      Participants recognized spoken words in a visual world task while their brains w
71 n predominantly based on written text or the spoken word increasing numbers are now drawing on visual
72 f a comprehensive theory of the evolution of spoken language" indicated in their conclusion by Ackerm
73 word recognition propose that the onset of a spoken word initiates a continuous process of activation
74        SIGNIFICANCE STATEMENT: Understanding spoken words involves complex processes that transform t
75                                Understanding spoken words involves complex processes that transform t
76                                              Spoken language is a central part of our everyday lives,
77 ndings suggest that occipital plasticity for spoken language is independent of plasticity for Braille
78 ed that one of the fundamental properties of spoken language is the arbitrary relation between sound
79 in young children was associated with better spoken language learning than would be predicted from th
80 sleep in the consolidation of a naturalistic spoken-language learning task that produces generalizati
81 poral regions in which symbolic gestures and spoken words may be mapped onto common, corresponding co
82 old control was critical to the evolution of spoken language, much as it today allows us to learn vow
83                           Yet, in evolution, spoken language must have emerged from neural mechanisms
84 nts evidence that audiovisual integration in spoken language occurs when one modality (vision) acts o
85 e compare with gesture, on the one hand, and spoken language on the other?
86 ; and that the "language of thought" maps to spoken language or symbol systems.
87              The answer may take the form of spoken words or a nonverbal signal such as a hand moveme
88 grated in cognitive and/or motor theories on spoken language origins and with more analogous nonhuman
89 f the roles assigned to the basal ganglia in spoken language parallel very well their contribution to
90 r implantation showed greater improvement in spoken language performance (10.4; 95% confidence interv
91           Our observers identify printed and spoken words presented concurrently or separately.
92                                   Learning a spoken language presupposes efficient auditory functions
93 g ability on the neural processes supporting spoken language processing in humans, we used functional
94 activate orthographic representations during spoken language processing, while those with reading dif
95 y focusing on the role of orthography during spoken language processing.
96                            Certain models of spoken-language processing, like those for many other pe
97                                  We compared spoken language production (Speech) with multiple baseli
98 identify spatiotemporal networks involved in spoken language production in humans.
99                                              Spoken language production is a complex brain function t
100  in cognitive control specific to sentential spoken language production.
101 ss large-scale cortical networks involved in spoken word production.
102 ns: clinical diagnosis, language impairment (spoken language quotient <85) and reading discrepancy (n
103                   In toddlers, as in adults, spoken words rapidly evoke their referents.
104 and semantic processes in Chinese disyllabic spoken word recognition are modulated by top-down mechan
105         Although it is well established that spoken word recognition engages the superior, middle, an
106      We propose a predictive coding model of spoken word recognition in which STG neurons represent t
107              Influential cognitive models of spoken word recognition propose that the onset of a spok
108                                              Spoken word recognition requires complex, invariant repr
109 ates for higher cognitive functions, such as spoken word recognition.
110 mics of lexical activations during real-time spoken-word recognition in a visual context.
111 ws one to project the internal processing of spoken-word recognition onto a two-dimensional layout of
112 his idea, we proposed that AV integration in spoken language reflects visually induced weighting of p
113  The relationship between these gestures and spoken language remains unclear.
114                                Understanding spoken language requires a complex series of processing
115                                Understanding spoken language requires the rapid integration of inform
116                                              Spoken language samples were obtained using the Cookie T
117                                              Spoken language samples were obtained using the Cookie T
118 ear implantation, some deaf children develop spoken language skills approaching those of their hearin
119                       Studies of written and spoken language suggest that nonidentical brain networks
120  continuous goal-directed hand movement in a spoken-language task, online accrual of acoustic-phoneti
121                       By coarse-graining the spoken word testimony into synonym sets and dividing the
122 ological awareness, the auditory analysis of spoken language that relates the sounds of language to p
123 opriate behavior, they have difficulty using spoken language to explain why it is inappropriate.
124                   The ability of written and spoken words to access the same semantic meaning provide
125 subjects, we compared semantic processing of spoken words to equivalent processing of environmental s
126                                         As a spoken word unfolds over time, it is temporarily consist
127          By contrast, occipital responses to spoken language were maximal by age 4 and were not relat
128          Vocal learning is a key property of spoken language, which might also be present in nonhuman
129                         Humans can recognize spoken words with unmatched speed and accuracy.
130 s have argued that sign is no different from spoken language, with all of the same linguistic structu
131          Response patterns discriminative of spoken words within language were distributed in multipl
132  fMRI response patterns that enable decoding spoken words within languages (within-language discrimin
133 tantiation of written language processes and spoken language, working memory and other cognitive skil
134 with dyslexia for a wide variety of stimuli, spoken words, written words, visual objects, and faces.
135                            Using a number of spoken word-written word matching paradigms, her compreh
