Autism Reveals Social Roots of Language
Temple Grandin, who teaches animal science at Colorado State University and is autistic, says it's taken her a lifetime to speak in a way that sounds natural to others.
Scientists say that's probably not a coincidence.
There's growing evidence that language depends as much on the brain circuits that help us navigate a cocktail party as those that conjugate verbs.
One of the people who believes that evidence is Temple Grandin. She teaches animal science at Colorado State University and has written several best-selling books. She's also autistic.
Grandin says it has taken her most of her life to reach the point where she can speak with other people in a way that sounds natural. She says that's because she's had to learn language without the social abilities most people have.
Grandin didn't begin speaking until she was 3 ½ years old. Her first words referred to things, not people, she says.
"I'd point at something that I wanted, you know like a piece of candy or whatever, and say, 'there,'" Grandin says.
She wasn't using language to reach out to her parents or to other children, the way most kids do, so she didn't have the same motivation to talk.
A Tool for Information, or Attention?
When Grandin finally did become interested in words, it was because they provided a way to get information, not attention.
"When I was in third grade, I had trouble with reading, so mother taught me how to read," she says. It opened up a world full of "so many interesting things," she recalls: "I used to like to get the World Book Encyclopedia and read it."
But the encyclopedia taught her little about using language to make friends. Even when she got to high school, chit-chat and gossip meant nothing to her.
She says that made her teenage years the worst part of her life. "Kids teased me, called me tape recorder because when I talked it was kind of like just using the same phrases."
She also kept talking, without letting other people respond.
Grandin and many others with autism have no problem with the mechanics of language, says Dr. V.S. Ramachandran, a neuroscientist at the University of California, San Diego. But they don't understand what's really going on in many conversations.
"That's one of the hallmarks of autism," he says, "difficulty with social interaction, manifest both in spoken language and in just lack of empathy. The ability to understand other minds would be one way of describing it."
The Role of Mind Reading
Ramachandran says it's hard to use language if you don't have any idea what someone else is thinking and feeling.
That may seem obvious. But in the past, researchers have treated language as if it were primarily a system of rules. They assumed that people spoke because every human brain came pre-wired with a "universal grammar."
Now, a growing number of researchers, including Ramachandran, argue that the social and emotional aspects of language are at least as important as the rules for stringing words together.
Emotional Neurons
Ramachandran says one reason for the new thinking is a new understanding of the human brain. He says it's become clear that babies' brains are programmed to imitate.
"You stick your tongue out at a newborn baby, very often the newborn baby will stick its tongue out," he says.
Similarly, babies return smiles and often make sounds when someone speaks to them.
A few years ago, scientists found a biological explanation for this phenomenon: specialized brain cells called mirror neurons.
These neurons fire when you do things such as sticking your tongue out. They also fire when you watch someone else stick their tongue out.
And mirror neurons can reflect emotions as well as physical actions. Experiments show that some of the same cells that fire when we feel pain also fire when we see another person in pain.
But people with autism appear to have faulty mirror neurons. That may be why they have trouble putting themselves in someone else's shoes. And Ramachandran says without that ability, a lot of what you can accomplish with language disappears.
"You have to be aware of the effects that your words are having on the other person's mind," he says. Otherwise, how could we use words to manipulate other people?
Picking Up Non-Verbal Cues
Temple Grandin has learned to compensate for her difficulty.
Early in her career, she spoke to people on the phone instead of face to face. That way she didn't miss messages conveyed through eye contact or body language.
But even on the phone, people may not say what they mean. The phrase "I'm fine" sometimes means just the opposite.
So Grandin taught herself to listen very closely to a person's tone of voice.
"When I had a client that I thought might be angry with me, I'd call him up just so I could listen to his voice," she says. "If it had a certain little whine sound in it I'd go, 'Oh he's still angry with me.'"
Over time, Grandin has developed a catalogue of signals she uses to figure out what people are thinking. She checks to see if they are fidgeting during a lecture, or making eye contact during a conversation, or folding their arms during an argument -- emotional cues most of us register automatically.
"I always keep learning," Grandin says. "People ask for the single magic breakthrough. There isn't one. I keep learning every day how I think and feel is different. It's all through logic, trial and error, intellect."
Intellect can only take her so far, though. Grandin says she still has trouble with certain types of conversations.
"Just a couple of years ago I went out to dinner with some salesmen, and these people were absolutely totally social," she says. "They talked for three hours about sports-themed nothing. There was no informational content in what they were talking about. It was a lot of silly jokes about the color of medication and the color of different team mascots. It was boring for me."
Social Motivation for Language
The salesmen were using language as a way of bonding with one another -- not a way to share information. Scientists say this sort of behavior may explain how humans developed language in the first place.
Bonding is something most animals do. For example, apes bond by grooming each other. And one theory has it that early humans began to augment their grooming with affectionate gestures and sounds that eventually led to primitive language.
Ramachandran says there are some gaps in that hypothesis. Like how people got from grunts to grammar.
"The difficult part is to try to disentangle the notion that emotional empathy merely gives you motivation, a reason to talk to somebody, versus an absolutely critical role in the emergence of language," he says.
Ramachandran suspects it's the latter because empathy is what allows people to understand the intention behind an action or a phrase.
For example, he says, when we see someone reach for a peanut, empathy helps us decide if they intend to eat it, or throw it at us. And when we hear someone use a string of words, empathy tells us whether to take the words literally or figuratively.
Ramachandran says people who lack empathy also lack the ability to read another person's intentions -- whether physical or linguistic.
"Not only do they have problems understanding an action like reaching for a peanut," he says, "but also a metaphor like reaching for the stars."
Grandin doesn't use metaphors very often, even though she has mastered the mechanics of language. Grandin says she will never fully understand the social aspects of language, including other people's intentions. And that means language will never offer her more than a rough translation of what other people are trying to say.
Produced by NPR's Anna Vigran
The study co-authored by Ramachandran:
"EEG evidence for mirror neuron dysfunction in autism spectrum disorder"
http://cbc.ucsd.edu/ramapubs.html
2. Materials and methods
2.1. Subjects
Our original sample consisted of 11 individuals with ASD and 13 age- and gender-matched control subjects. All subjects in the study were male. The ASD group was composed of ten individuals diagnosed with autism and one individual diagnosed with Asperger's syndrome. One subject with autism and two control subjects were excluded prior to analysis due to excessive movement artifacts that resulted in an inability to obtain sufficient EEG data. One additional control subject was excluded prior to analysis due to a technical malfunction in the EEG system. Therefore, our final sample consisted of 10 individuals with ASD and 10 age- and gender-matched controls. Subjects ranged in age from 6–47 years (ASD: M = 16.6, SD = 13.0; Control: M = 16.5, SD = 13.6; t(18) = 0.017, P > 0.98). One individual was left-handed in the ASD group, while in the control group 3 individuals were left-handed.
ASD subjects were recruited through the Cure Autism Now Foundation, the San Diego Regional Center for the Developmentally Disabled, and the Autism Research Institute. Control subjects were recruited through the UCSD Center for Human Development subject pool and the local community. Individuals were included in the ASD group if they were diagnosed with either autism or Asperger's syndrome by a clinical psychologist. Subjects met DSM-IV criteria for a diagnosis of Autistic disorder or Asperger's disorder [3]. In addition, subjects in the ASD group exhibited the following diagnostic behaviors at the time of testing, including, but not limited to, awkward use of pragmatics, intonation, and pitch in communication, lack of initiation of social interactions, and obsessive preoccupation with the order and specific details of the study. All subjects were considered high-functioning, defined as having age-appropriate verbal comprehension and production abilities and an IQ greater than 80 as assessed by either school assessments or psychometric evaluations from a clinician. Subjects without age-appropriate verbal comprehension and production abilities were excluded from the study. Subjects were given age-appropriate consent/assents (for subjects under the age of 18). In addition, in order to ensure that subjects understood the procedure and the tasks involved, a picture board was created and the study was fully explained, in age-appropriate language, prior to the subjects' participation. This project was reviewed and approved by the UCSD Human Research Protections Program.
2.2. Procedure
EEG data were collected during four conditions: (1) Moving own hand: Subjects opened and closed their right hand with the fingers and thumb held straight, opening and closing from the palm of the hand at a rate of approximately 1 Hz. Subjects watched their hand at a comfortable viewing distance, the hand held at eye level. (2) Watching a video of a moving hand: Subjects viewed a black and white video of an experimenter opening and closing the right hand in the same manner as subjects moved their own hand. Videos were presented at a viewing distance of 96 cm, and the hand subtended 5° of visual angle when open and 2° when closed. The hand was medium gray (8.6 cd/m²) on a black background (3.5 cd/m²). (3) Watching a video of two bouncing balls: Two light gray balls (32.9 cd/m²) on a black background (1.0 cd/m²) moved vertically towards each other, touched in the middle of the screen, then moved apart to their initial starting position. This motion was visually equivalent to the trajectory taken by the tips of the fingers and thumb in the hand video. The ball stimulus subtended 2° of visual angle when touching in the middle of the screen and 5° at its maximal point of separation. (4) Watching visual white noise: Full-screen television static (mean luminance 3.7 cd/m²) was presented as a baseline condition. All videos were 80 s in length, and both the ball and hand videos moved at a rate of 1 Hz. All conditions were presented twice in order to obtain enough clean EEG data for analyses, and the order of the conditions was counterbalanced across subjects, with the constraint that the self-movement condition always followed the watch condition so that the subjects had a model on which to base their movement.
To ensure that subjects attended to the video stimuli during the watching hand movement and bouncing balls conditions, they were asked to engage in a continuous performance task. Between four and six times during the 80-s video, the stimuli stopped moving for one cycle (a period of 1 s). Subjects were asked to count the number of times stimuli stopped moving and report the number of stops to the experimenter at the end of the block.
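As a rough illustration of the block-ordering constraint described above, here is a minimal Python sketch. It is hypothetical, not the authors' procedure: the paper does not specify how the counterbalanced orders were generated, and this sketch assumes one reading of the constraint, namely that a watch-hand block appears somewhere before any self-movement block.

```python
import random

# Hypothetical sketch: each of the four conditions is presented twice, and a
# watch-hand block must come before any self-movement block so that subjects
# have a model on which to base their own movement. The randomization scheme
# here is invented for illustration.
CONDITIONS = ["self_hand", "watch_hand", "watch_balls", "white_noise"]

def make_block_order(rng):
    """Shuffle the 8 blocks until the first watch-hand block precedes the
    first self-movement block."""
    order = CONDITIONS * 2  # every condition presented twice
    while True:
        rng.shuffle(order)
        if order.index("watch_hand") < order.index("self_hand"):
            return list(order)

print(make_block_order(random.Random(0)))
```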
2.3. EEG data acquisition and analysis
Disk electrodes were applied to the face above and below the eye and behind each ear (mastoids). The mastoids were used as reference electrodes. Data were collected from 13 electrodes embedded in a cap, at the following scalp positions: F3, Fz, F4, C3, Cz, C4, P3, Pz, P4, T5, T6, O1, and O2, using the international 10–20 method of electrode placement. Following placement of the cap, electrolytic gel was applied at each electrode site and the skin surface was lightly abraded to reduce the impedance of the electrode-skin contact. The impedances on all electrodes were measured and confirmed to be less than 10 kΩ both before and after testing. Once the electrodes were in place, subjects were seated inside an acoustically and electromagnetically shielded testing chamber.
EEG was recorded and analyzed using a Neuroscan Synamps system (bandpass 0.1–30 Hz). Data were collected for approximately 160 s per condition at a sampling rate of 500 Hz. EEG oscillations in the 8–13 Hz frequency recorded over occipital cortex are influenced by states of expectancy and awareness [31]. Since the mu frequency band overlaps with the posterior alpha band and the generator for posterior alpha is stronger than that for mu, it is possible that recordings from C3, Cz, and C4 might be affected by this posterior activity. Therefore, the first and last 10 s of each block of data were removed from all subjects to eliminate the possibility of attentional transients due to initiation and termination of the stimulus. A 1-min segment of data following the initial 10 s was obtained and combined with the other trial of the same condition, resulting in one 2-min segment of data per condition. Eye blink and eye and head movements were manually identified in the EOG recording, and EEG artifacts during these intervals were removed prior to analysis. Data were coded in such a way that the analysis was blind to the subjects' diagnosis. Data were only analyzed if there was sufficient clean data with no movement or eye blink artifacts. For each cleaned segment, the integrated power in the 8–13 Hz range was computed using a Fast Fourier Transform. Data were segmented into epochs of 2 s beginning at the start of the segment. Fast Fourier Transforms were performed on the epoched data (1024 points). A cosine window was used to control for artifacts resulting from data splicing.
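The power computation described above can be pictured with a short Python/NumPy sketch. This is not the authors' code: the use of a Hann taper as the "cosine window," the zero-padding of each 1000-sample epoch to a 1024-point FFT, and the function name are assumptions made for illustration.

```python
import numpy as np

FS = 500        # sampling rate (Hz), as reported
EPOCH_S = 2     # epoch length in seconds
N_FFT = 1024    # FFT length reported in the paper

def mu_band_power(segment, fs=FS):
    """Mean integrated 8-13 Hz power across 2-s epochs of one cleaned channel."""
    segment = np.asarray(segment, dtype=float)
    samples_per_epoch = int(EPOCH_S * fs)              # 1000 samples per epoch
    n_epochs = len(segment) // samples_per_epoch
    freqs = np.fft.rfftfreq(N_FFT, d=1.0 / fs)
    band = (freqs >= 8) & (freqs <= 13)                # mu band bins
    window = np.hanning(samples_per_epoch)             # cosine taper (assumed Hann)
    powers = []
    for i in range(n_epochs):
        epoch = segment[i * samples_per_epoch:(i + 1) * samples_per_epoch]
        spectrum = np.fft.rfft(epoch * window, n=N_FFT)
        powers.append(np.sum(np.abs(spectrum[band]) ** 2))  # integrated band power
    return float(np.mean(powers))

# Example on a synthetic 2-min segment of noise:
print(mu_band_power(np.random.randn(120 * FS)))
```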
Two measures of mu suppression were calculated. First, we calculated the ratio of the power during the observed hand movement and self hand movement conditions relative to the power during the baseline condition. Second, we calculated the ratio of the power during the observed and self hand movement conditions relative to the power in the ball condition. A ratio was used to control for variability in absolute mu power as a result of individual differences such as scalp thickness and electrode impedance, as opposed to mirror neuron activity. The ratio to the ball condition was computed in order to control for the attention to counting or any effects due to stimulus stopping during the continuous performance task and processing of directional motion. Since ratio data are inherently non-normal as a result of lower bounding, a log transform was used for analysis. A log ratio of less than zero indicates suppression, whereas a value of zero indicates no suppression and values greater than zero indicate enhancement.
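A minimal sketch of the suppression index itself, assuming a base-10 log (the paper says only that a log transform was used) and using made-up power values purely for illustration:

```python
import numpy as np

def mu_suppression(condition_power, reference_power):
    """Log ratio of mu power in a condition relative to a reference
    (white-noise baseline or bouncing-ball control). Base 10 is an assumption."""
    return np.log10(condition_power / reference_power)

# Hypothetical numbers, not data from the study:
observed_hand = 4.2   # integrated 8-13 Hz power while watching the hand video
baseline = 6.0        # power during the white-noise baseline
print(mu_suppression(observed_hand, baseline))  # negative value -> suppression
```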
3. Results
3.1. Behavioral performance
To ensure that the subjects were attending to the stimuli during the hand and ball conditions, they were asked to count the number of times the stimuli stopped moving. Since all subjects performed with 100% accuracy on this continuous performance task, we infer that any differences found in mu suppression are not due to differences in attending to the stimuli.
3.2. Mu suppression
Power in the mu frequency at scalp locations corresponding to sensorimotor cortex (C3, Cz, and C4) during the self-initiated action and watching action conditions was compared to power during the baseline (visual white noise) condition by forming the log ratio of the power in these conditions for both groups (Figs. 1A, B). Although data were obtained from electrodes across the scalp, mu rhythm is defined as oscillations measured over sensorimotor cortex, thus only data from C3, Cz, and C4 are presented.
The control group (Fig. 1A) showed significant suppression from baseline in mu oscillations at each electrode during both the self-initiated hand movement condition (C3 t(9) = 3.97, P < 0.002; Cz t(9) = 2.85, P < 0.01; C4 t(9) = 4.00, P < 0.002) and observed hand movement condition (C3 t(9) = 3.99, P < 0.002; Cz t(9) = 3.21, P < 0.005; C4 t(9) = 2.78, P < 0.01). The ASD group (Fig. 1B) also showed significant mu suppression during the self-initiated hand movement condition (C3 t(9) = 2.27, P < 0.03; Cz t(9) = 1.91, P < 0.05; C4 t(9) = 2.50, P < 0.02). Unlike controls, the ASD group did not show significant suppression during the observed hand movement condition (C3 t(9) = 0.64, P > 0.25; Cz t(9) = 0.98, P > 0.15; C4 t(9) = 0.74, P > 0.20). The failure to find suppression in the ASD group was not due to differences in baseline mu power (C3 t(9) = 0.99, P > 0.30; Cz t(9) = 0.69, P > 0.50; C4 t(9) = 0.47, P > 0.50). Lastly, neither group showed significant suppression from baseline during the non-biological motion (bouncing balls) condition (ASD: C3 t(9) = 0.73, P > 0.20; Cz t(9) = 0.49, P > 0.65; C4 t(9) = 0.25, P > 0.40; Control: C3 t(9) = 1.45, P > 0.08; Cz t(9) = 0.54, P > 0.30; C4 t(9) = 0.00, P > 0.50).
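For concreteness, the comparisons reported above amount to a one-sample t-test of each group's ten log ratios against zero (no suppression) at each electrode. The sketch below uses invented numbers, and the one-tailed P value is an assumption based on the reported values; it is not the study's data or analysis code.

```python
import numpy as np
from scipy import stats

# Invented log suppression ratios for ten subjects at one electrode,
# used only to show the mechanics of the test.
log_ratios_c3 = np.array([-0.21, -0.10, -0.35, -0.05, -0.18,
                          -0.27, -0.09, -0.15, -0.30, -0.12])

# One-sample t-test against zero (df = n - 1 = 9).
t_stat, p_two_sided = stats.ttest_1samp(log_ratios_c3, popmean=0.0)
p_one_sided = p_two_sided / 2 if t_stat < 0 else 1 - p_two_sided / 2
print(f"t(9) = {t_stat:.2f}, one-sided P = {p_one_sided:.4f}")
```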
The discovery of mirror neurons seems to answer a long-held psychological and philosophical question: how do people learn language? That we are "programmed to imitate" sheds light on how language learning "gets off the ground." Grandin has been able to learn language with "faulty" mirror neurons, and the fact that she has built a catalogue of human behaviors and the emotions they correlate with is of particular interest to me. This proves that even someone who does not intuitively empathize with others is still able to recognize the behavioral criteria for different emotions and link those criteria correctly with expressive descriptions such as "angry" or "in pain." This simple point turns out to have tremendous impact, again, on a long-held philosophical debate about language and how words, namely words for sensations, come to mean anything.
I am curious how autistic children learn to follow a pointed finger and how to point themselves. The article mentions Grandin pointing to things but it seems that understanding pointing (as most very young children do) must have to do with mirror neurons as well. How would someone with "faulty" mirror neurons learn this?