Saturday, July 25, 2009

Brain Develops Motor Memory For Prosthetics


ScienceDaily (July 24, 2009) — "Practice makes perfect" is the maxim drummed into students struggling to learn a new motor skill - be it riding a bike or developing a killer backhand in tennis. Stunning new research now reveals that the brain can also achieve this motor memory with a prosthetic device, providing hope that physically disabled people can one day master control of artificial limbs with greater ease.
In this study, macaque monkeys learned to use brain signals to move a computer cursor to various targets. The researchers found that the brain could develop a mental map of a solution to achieve the task with high proficiency, and that it then adhered to that neural pattern without deviation, much as a driver sticks to a given route when commuting to work.
The study, conducted by scientists at the University of California, Berkeley, addresses a fundamental question about whether the brain can establish a stable, neural map of a motor task to make control of an artificial limb more intuitive.
"When your own body performs motor tasks repeatedly, the movements become almost automatic," said study principal investigator Jose Carmena, a UC Berkeley assistant professor with joint appointments in the Department of Electrical Engineering and Computer Sciences, the Helen Wills Neuroscience Institute, and the Program in Cognitive Science. "The profound part of our study is that this is all happening with something that is not part of one's own body. We have demonstrated that the brain is able to form a motor memory to control a disembodied device in a way that mirrors how it controls its own body. That has never been shown before."
Researchers in the field of brain-machine interfaces, including Carmena, have made significant strides in recent years in the effort to improve the lives of people with physical disabilities. An April 2009 survey by the Christopher and Dana Reeve Foundation found that nearly 1.3 million people in the United States suffer from some form of paralysis caused by spinal cord injury. When other causes of restricted movement are considered, such as stroke, multiple sclerosis and cerebral palsy, the number of Americans affected jumps to 5.6 million, the survey found.
Already, researchers have demonstrated that rodents, non-human primates and humans are able to control robotic devices or computer cursors in real time using only brain signals. But what had not been clear before was whether such a skill had been consolidated as a motor memory. The new study suggests that the brain is capable of creating a stable, mental representation of a disembodied device so that it can be controlled with little effort.
To demonstrate this, Carmena and Karunesh Ganguly, a post-doctoral fellow in Carmena's laboratory, used a mathematical model, or "decoder," that remained static during the length of the study, and they paired it with a stable group of neurons in the brain. The decoder, analogous to a simplified spinal cord, translated the signals from the brain's motor cortex into movement of the cursor.
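The release does not spell out the decoder's mathematics. As a minimal sketch of the general idea, a fixed linear mapping from the firing rates of a stable neural ensemble to cursor velocity, fit once and then held constant, might look like the following (all data here are simulated stand-ins for real recordings):

```python
import numpy as np

# Hypothetical illustration: a fixed linear decoder mapping the firing
# rates of a stable population of motor-cortex neurons to 2-D cursor
# velocity. The study's actual decoder is not specified in this article.

rng = np.random.default_rng(0)
n_neurons, n_samples = 40, 500

# Simulated training data: population firing rates and the cursor
# velocities they accompanied.
rates = rng.poisson(lam=5.0, size=(n_samples, n_neurons)).astype(float)
true_w = rng.normal(size=(n_neurons, 2))
velocity = rates @ true_w + rng.normal(scale=0.5, size=(n_samples, 2))

# Fit the decoder once by least squares, then hold it fixed -- the
# analogue of the static decoder paired with a stable neural ensemble.
w, *_ = np.linalg.lstsq(rates, velocity, rcond=None)

def decode(rate_vector):
    """Translate one sample of population firing rates into a cursor velocity."""
    return rate_vector @ w

print(decode(rates[0]))  # estimated (vx, vy) for the first sample
```

Holding both the decoder weights and the recorded ensemble fixed is what let the study ask whether the brain itself converges on a stable solution.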
It took about four to five days of practice for the monkeys to master precise control of the cursor. Once they did, they completed the task easily and quickly for the next two weeks.
As the tasks were being completed, the researchers were able to monitor the changes in activity of individual neurons involved in controlling the cursor. They could tell which cells were firing when the cursor moved in specific directions. The researchers noticed that when the animals became proficient at the task, the neural patterns involved in the "solution" stabilized.
"The solution adopted is what the brain returned to repeatedly," said Carmena.
That stability is one of three major features scientists associate with motor memory. It is all too familiar to music teachers and athletic coaches who try to help their students "unlearn" improper form or technique: once a motor memory has been consolidated, it can be difficult to change.
Other characteristics of motor memory include the ability for it to be rapidly recalled upon demand and its resistance to interference when new skills are learned. All three elements were demonstrated in the UC Berkeley study.
In the weeks after they achieved proficiency, the primates exhibited rapid recall by immediately completing their learned task on the first try. "They did it from the get-go; there was no need to retrain them," said Carmena.
Real-life examples of resistance to interference, the third feature of motor memory, include people who return to an automatic transmission car after learning how to drive stick-shift. In the study, the researchers presented a new decoder - marked by a different colored cursor - two weeks after the monkeys showed mastery of the first decoder.
As the monkeys were mastering the new decoder, the researchers would suddenly switch back to the original decoder and saw that the monkeys could immediately perform the task without missing a beat. The monkeys could easily switch back and forth between the two decoders, showing a level of neural plasticity never before associated with the control of a prosthetic device, the researchers said.
"This is a study that says that maybe one day, we can really think of the ultimate neuroprosthetic device that humans can use to perform many different tasks in a more natural way," said Carmena.
Yet, the researchers acknowledged that prosthetic devices will not match what millions of years of evolution have accomplished to enable animal brains to control body movement. The complexity of wiring one's brain to properly control the body is made clear whenever one watches an infant's haphazard attempts to find its own hands and feet.
"Nevertheless, beyond its clinical applications, which are very clear, this line of research sheds light on how the brain assembles and organizes neurons, and how it forms a motor memory to control the prosthetic device," Carmena said. "These are important, fundamental questions about how the brain learns in general.
This study was supported by the Christopher and Dana Reeve Foundation, the American Heart Association and the American Stroke Association.
Journal reference:
Ganguly K, Carmena JM. Emergence of a Stable Cortical Map for Neuroprosthetic Control. PLoS Biology, 2009; 7(7): e1000153. DOI: 10.1371/journal.pbio.1000153
Adapted from materials provided by University of California - Berkeley.

Wednesday, July 22, 2009

Brain's Center For Perceiving 3-D Motion Is Identified


ScienceDaily (July 21, 2009) — Ducking a punch or a thrown spear calls on the power of the human brain to process 3-D motion: perceiving an object moving in three dimensions, threatening or not, is critical to survival. It also leads to a lot of fun at 3-D movies.
Neuroscientists have now pinpointed where and how the brain processes 3-D motion using specially developed computer displays and an fMRI (functional magnetic resonance imaging) machine to scan the brain.
They found, surprisingly, that 3-D motion processing occurs in an area in the brain—located just behind the left and right ears—long thought to only be responsible for processing two-dimensional motion (up, down, left and right).
This area, known simply as MT+, and its underlying neuron circuitry are so well studied that most scientists had concluded that 3-D motion must be processed elsewhere. Until now.
"Our research suggests that a large set of rich and important functions related to 3-D motion perception may have been previously overlooked in MT+," says Alexander Huk, assistant professor of neurobiology. "Given how much we already know about MT+, this research gives us strong clues about how the brain processes 3-D motion."
For the study, Huk and his colleagues had people watch 3-D visualizations while lying motionless for one or two hours in an MRI scanner fitted with a customized stereovision projection system.
The fMRI scans revealed that the MT+ area had intense neural activity when participants perceived objects (in this case, small dots) moving toward and away from their eyes. Colorized images of participants' brains show the MT+ area awash in bright blue.
The tests also revealed how the MT+ area processes 3-D motion: it simultaneously encodes two types of cues coming from moving objects.
First, there is a mismatch between what the left and right eyes see, called binocular disparity. (When you alternate between closing your left and right eye, objects appear to jump back and forth.) For a moving object, the brain calculates the change in this mismatch over time.
Second, an object speeding directly toward the eyes will simultaneously move across the left eye's retina from right to left and across the right eye's retina from left to right.
"The brain is using both of these ways to add 3-D motion up," says Huk. "It's seeing a change in position over time, and it's seeing opposite motions falling on the two retinas."
That processing comes together in the MT+ area.
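As a toy illustration of those two cues (with geometry invented for the example, not taken from the study), the sketch below tracks a dot approaching along the midline and recovers both the growing binocular disparity and the opposite-signed retinal velocities:

```python
import numpy as np

# Toy geometry, not data from the study: eyes 6 cm apart, a dot
# approaching along the midline from 2.0 m to 0.5 m.
half_sep = 0.03                         # half the interocular distance (m)
z = np.linspace(2.0, 0.5, 6)            # viewing distance over time (m)

# Horizontal image position (visual angle, radians) in each eye.
x_left = np.arctan2(half_sep, z)        # image drifts one way in the left eye...
x_right = -np.arctan2(half_sep, z)      # ...and the opposite way in the right eye

disparity = x_left - x_right            # cue 1: binocular disparity...
d_disparity = np.diff(disparity)        # ...and its change over time

vel_left = np.diff(x_left)              # cue 2: retinal velocities
vel_right = np.diff(x_right)

print(np.all(d_disparity > 0))          # disparity grows as the dot nears: True
print(np.all(vel_left * vel_right < 0)) # the two eyes see opposite motion: True
```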
"Who cares if the tiger or the spear is going from side to side?" says Lawrence Cormack, associate professor of psychology. "The most important kind of motion you can see is something coming at you, and this critical process has been elusive to us. Now we are beginning to understand where it occurs in the brain."
Huk, Cormack, and post-doctoral researcher and lead author Bas Rokers published their findings in Nature Neuroscience online the week of July 7. They are members of the Institute for Neuroscience and Center for Perceptual Systems. The research was supported by a National Science Foundation CAREER Award to Huk.
Adapted from materials provided by University of Texas at Austin, via EurekAlert!, a service of AAAS.

Friday, July 17, 2009

Scientists discover why we never forget how to ride a bicycle


(PhysOrg.com) -- You never forget how to ride a bicycle - and now a University of Aberdeen-led team of neuroscientists has discovered why.
Their research, published this month in Nature Neuroscience, has identified a key nerve cell in the brain that controls the formation of memories for skills such as riding a bicycle, skiing or eating with chopsticks.
When one acquires a new skill like riding a bicycle, the cerebellum is the part of the brain needed to learn the co-ordinated movement.
The research team, which includes scientists from the Universities of Aberdeen, Rotterdam, London, Turin and New York, has been working to understand the connections between nerve cells in the cerebellum that enable learning.
They discovered that one particular type of nerve cell - the so-called molecular layer interneuron - acts as a "gatekeeper", controlling the electrical signals that leave the cerebellum. Molecular layer interneurons transform these signals into a language that can be laid down as a memory in other parts of the brain.
Dr Peer Wulff, who led the research in Aberdeen together with Prof. Bill Wisden at the University's Institute of Medical Sciences, said: "What we were interested in was finding out how memories are encoded in the brain. We found that there is a cell which structures the signal output from the cerebellum into a particular code that is engraved as memory for a newly learned motor skill."
The discovery could pave the way for advances in prosthetic devices that mimic normal brain functions, which could benefit those who have suffered brain disorders such as stroke or multiple sclerosis.
Dr Wulff said: "To understand the way that the normal brain works and processes information helps the development of brain-computer interfaces as prosthetic devices to carry out the natural brain functions missing in patients who have suffered a stroke or have multiple sclerosis.
"Our results are very important for people interested in how the brain processes information and produces and stores memories. One day these findings could be applied to the building of prosthetic devices by other research teams."
Provided by University of Aberdeen

Entirely New Way To Study Brain Function Developed


ScienceDaily (July 16, 2009) — Scientists at Duke University and the University of North Carolina have devised a chemical technique that promises to allow neuroscientists to discover the function of any population of neurons in an animal brain, and provide clues to treating and preventing brain disease.
With the technique they describe in the journal Neuron online on July 15, scientists will be able to noninvasively activate entire populations of individual types of neurons within a brain structure.
"We have discovered a method in which systemic administration of an otherwise inert chemical to a mutant mouse selectively activates a single group of neurons," said James McNamara, M.D., chairman of the Duke Department of Neurobiology and co-senior author of the paper. "Elaborating on this method promises to let scientists engineer different kinds of mutant mice in which single groups of neurons will be activated by this chemical, so scientists can understand the behaviors mediated by each of these groups."
Right now, most scientists gain knowledge of brain function by correlating brain activity with certain behaviors; connecting a damaged brain area with an observed loss of function; or activating entire brain structures invasively and observing the resulting behavior.
Knowing what a particular type of neuron in a specific brain region does will help researchers find the root of certain diseases so they can be effectively treated, said McNamara, an expert in epilepsy. He pointed out that the human brain contains billions of neurons that are organized into thousands of distinct groups that need to be studied.
Four years ago, co-senior author Bryan Roth, M.D., Ph.D., and colleagues at UNC set out to create a cell receptor activated by an inert drug, but not by anything else. "Basically we wanted to create a chemical switch," said Roth, who is the Michael Hooker Distinguished Professor of Pharmacology at UNC-Chapel Hill.
"We wanted to put this switch into neurons so we could selectively turn them on to study the brain," said Roth, who was trained as a psychiatrist. "At the time, this idea was science fiction."
They used yeast genetics to evolve a specific receptor that could react with a specific chemical, because yeast quickly produces new generations. "If the theory of evolution were not true, this experiment would not have worked," Roth added.
The lab then worked to create a similar receptor in mice. In the initial attempt to create mice that expressed the receptor, the lab targeted receptor expression to neurons in the hippocampus and cortex of the brain. The receptor was designed to be activated by the drug clozapine-N-oxide (CNO), which has no other effects on the mice and no effects on normal neurons, those without the receptor.
Roth asked a student to inject the mice with CNO. They expected to register some type of change in neuronal activity, but were very surprised to see the mice have seizures. Suddenly, they had a model for studying epilepsy.
Roth immediately looked for epilepsy experts to collaborate with and contacted McNamara at Duke. Together they worked on the system, injecting CNO, which crosses the blood-brain barrier, to regulate the activity of deep-brain neurons in mice. With this model, the scientists were able to examine neuronal activity leading to seizures and activity that occurred during seizures.
This receptor was designed for experimental use with animals. "Based on what we learn from animal models of disease, we could get better target treatments for humans," said Georgia Alexander, Ph.D., a postdoctoral fellow in Duke Neurobiology and co-lead author. "The great thing about these drug-activated receptors is that they can be applied to study any disease state, not just epilepsy. With this, you could try to selectively activate other populations of neurons, in an animal model of Parkinson's disease, for example." Roth said that the technique is not limited to neurons and brains, and is being used to study other cells in the body as well.
Alexander said researchers now can ask which areas of the brain are most susceptible to and critical to seizure generation, "because we can use similar techniques to inactivate or silence neurons, too."
For example, some people with seizures have a portion of their temporal lobes removed from their brains. "Now we can ask, 'Is there a different part of the brain or population of neurons we could selectively silence that would be an even better way to treat epilepsy patients?'" Alexander said.
Other authors include Miguel A. Nicolelis of the Duke Department of Neurobiology; John Hartmann of the UNC School of Medicine; co-lead author Sarah C. Rogan, Blaine N. Armbruster, Ying Pei and John A. Allen of the UNC Department of Pharmacology; Sheryl S. Moy of the UNC Department of Psychiatry; Randal J. Nonneman of the Neurodevelopmental Disorders Research Center; and Atheir I. Abbas of the Department of Biochemistry at Case Western Reserve University.
This work was funded by the National Institutes of Health and the National Alliance for Research into Schizophrenia and Depression.
Adapted from materials provided by Duke University Medical Center.

Classifying 'Clicks' In African Languages To Clear Up 100-year-old Mystery


ScienceDaily (July 16, 2009) — A new way to classify sounds in some human languages may solve a problem that has plagued linguists for nearly 100 years--how to accurately describe click sounds distinct to certain African languages.
Cornell University professor Amanda Miller and her colleagues recently used new high-speed, ultrasound imaging of the human tongue to precisely categorize sounds produced by the Nuu language speakers of southern Africa's Kalahari Desert. The research potentially could change how linguists describe "click languages" and help speech scientists understand the physics of speech production.
The African languages studied by Miller use a series of consonants called "clicks" which are unlike most consonants in that they are produced with air going into the mouth rather than out. The Nuu clicks, produced using both the front and back of the tongue, are difficult to characterize.
"When we say 'k' or 't,' the sound is produced by air breathing out of our lungs," said Miller. "But click sounds are produced by breathing in and creating suction within a cavity formed between the front and back parts of the tongue. While linguists knew this, most didn't want to accept it was something people controlled." So they loosely classified these click consonants using imprecise groupings.
"For nearly a century, some of these sounds fell into an imprecise catch-all category that included every type of modification ever reported in a click language," said Miller. "The movements of the tongue at the front of the mouth were quite accurately classified. But tongue movements at the back part of the mouth were not classified properly."
The reason was that prior tools were either too large to carry to fieldwork situations in Southern Africa, or too unsafe. Ultrasound imaging changed that by allowing Miller's research team to use safer, faster, non-invasive technology in the field to view the back part of the tongue.
Early ultrasound tools captured images at only about 30 frames per second, and thus were not able to keep up with the tongue's speed in fast sounds like clicks. The new ultrasound imaging tool is capable of capturing more than 125 frames per second, producing clearer images.
Miller and her colleagues used the high-speed ultrasound imaging to group the clicks more accurately. Her colleagues included Johanna Brugman, Cornell University; Bonny Sands, Northern Arizona University; Levi Namaseb, The University of Namibia; Mats Exter, University of Cologne; and Chris Collins, New York University.
"We wanted to classify clicks in the same way we classify other consonants," said Miller, who was a visiting faculty member at the University of British Columbia during the 2008-2009 academic year. "We think we've been pretty successful in doing that."
Nuu is severely endangered with fewer than 10 remaining speakers, all of whom are more than 60 years of age. Linguists are working diligently to document the unique aspects of this language before it disappears.
She explains her findings in the online version of the Journal of the International Phonetic Association posted on July 10. The National Science Foundation supports the research.
Adapted from materials provided by National Science Foundation.

Learning Is Both Social And Computational, Supported By Neural Systems Linking People


ScienceDaily (July 16, 2009) — Education is on the cusp of a transformation because of recent scientific findings in neuroscience, psychology, and machine learning that are converging to create foundations for a new science of learning.
Writing in the July 17 edition of the journal Science, researchers report that this shift is being driven by three principles emerging from cross-disciplinary work: learning is computational, learning is social, and learning is supported by brain circuits linking perception and action that connect people to one another. This new science of learning, the researchers believe, may shed light on the origins of human intelligence.
"We are not left alone to understand the world like Robinson Crusoe was on his island," said Andrew Meltzoff, lead author of the paper and co-director of the University of Washington's Institute for Learning and Brain Sciences. "These principles support learning across the life span and are particularly important in explaining children's rapid learning in two unique domains of human intelligence, language and social understanding.
"Social interaction is more important than we previously thought and underpins early learning. Research has shown that humans learn best from other humans, and a large part of this is timing, sensitive timing between a parent or a tutor and the child," said Meltzoff, who is a developmental psychologist.
"We are trying to understand how the child's brain works – how computational abilities are changed in the presence of another person, and trying to use these three principles as leverage for learning and improving education," added co-author Patricia Kuhl, a neuroscientist and co-director of the UW's Institute for Learning and Brain Sciences.
University of California, San Diego robotics engineer Javier Movellan and neuroscientist-biologist Terrence Sejnowski are co-authors. The research was funded by the National Science Foundation and the National Institute of Child Health and Human Development. The National Science Foundation has funded large-scale science of learning centers at both universities.
The Science paper cites numerous recent advances in neuroscience, psychology, machine learning and education. For example, Kuhl said people don't realize how computational and social factors interact during learning.
"We have a computer between our shoulders and our brains are taking in statistics all the time without our knowing it. Babies learn simply by listening, for example. They learn the sounds and words of their language by picking up probabilistic information as they listen to us talk to them. Babies at 8 months are calculating statistically and learning," Kuhl said.
But there are limits. Kuhl's work has shown that babies gather statistics and learn when exposed to a second language face to face from a real person, but not when they view that person on television.
"A person can get more information by looking at another person face to face," she said. "We are digging to understand the social element and what does it mean about us and our evolution."
Apparently babies need other people to learn. They take in more information by looking at another person face to face than by looking at that person on a big plasma TV screen," she said. "We are now trying to understand why the brain works this way, and what it means about us and our evolution."
Meltzoff said an important component of human intelligence is that humans are built so they don't have to figure out everything by themselves.
"A major role we play as parents is teaching children where the important things are for them to learn," he said. "One way we do this is through joint visual attention or eye-gaze. This is a social mechanism and children can find what's important – we call them informational 'hot spots' – by following the gaze of another person. By being connected to others we also learn by example and imitation."
Infants, he said, learn by mixing self-discovery with observations of other people for problem-solving.
"We can learn what to do by watching others, and we also can come to understand other people through our own actions," Meltzoff said. "Learning is bi-directional."
The researchers believe that aspects of informal learning, the ways people, particularly children, learn outside school, need to be brought into the classroom.
"Educators know children spend 80 percent of their waking time away from school and children are learning deeply and enthusiastically in museums, in community centers, from online games and in all sorts of venues. A lot of this learning is highly social and clues from informal learning may be applied to school to enhance learning. Why is it that a kid who is so good at figuring out baseball batting averages is failing math in school?" said Meltzoff.
Even though it appears that babies do not learn from television, technology can play a big role in the science of learning. Research is showing that children are more receptive to learning from social robots, robots that are more human in appearance and more interactive.
"The more that interacting with a machine feels like interacting with a human, the more children – and maybe adults – learn," said Kuhl. "Someday we may understand how technology can help us learn a new language at any age, and, if we could, there are countless schools around the world in which that would be helpful."
"Science is trying to understand the magic of social interaction in human learning," said Meltzoff. "But when it does we hope to embody some of what we learn into technology. Kids today are using high-powered technology – Facebook, Twitter and text messaging – to enhance social interaction. Using technology, children are learning to solve problems collaboratively. Technology also allows us to have a distributed network from which to draw information, a world of knowledge."
Adapted from materials provided by University of Washington.

New Science Of Learning Offers Preview Of Tomorrow's Classroom


ScienceDaily (July 16, 2009) — Of all the qualities that distinguish humans from other species, how we learn is one of the most significant. In the July 17, 2009 issue of the journal Science, researchers who are at the forefront of neuroscience, psychology, education, and machine learning have synthesized a new science of learning that is already reshaping how we think about learning and creating opportunities to re-imagine the classroom for the 21st century.
“To understand how children learn and improve our educational system, we need to understand what all of these fields can contribute,” explains Howard Hughes Medical Institute investigator Terrence J. Sejnowski, Ph.D., professor and head of the Computational Neurobiology Laboratory at the Salk Institute for Biological Studies and co-director of the Temporal Dynamics of Learning Center (TDLC) at the University of California, San Diego, which is sponsored by the National Science Foundation. “Our brains have evolved to learn and adapt to new environments; if we can create the right environment for a child, magic happens.”
The paper is the first major publication to emerge from a unique collaboration between the TDLC and the University of Washington’s Learning in Informal and Formal Environments (LIFE) Center. The TDLC focuses on the study of learning—from neurons to humans and robots—treating the element of time as a crucial component of the learning process. This work complements the psychological research on child development that is the principal focus of the LIFE Center. Both have been funded as part of the NSF’s Science of Learning initiative.
Among the key insights that the authors highlight are three principles to guide the study of human learning across a range of areas and ages: learning is computational (machine learning provides a unique framework to understand the computational skills that infants and young children possess that allow them to infer structured models of their environment); learning is social (a finding supported by studies showing that the extent to which children interact with and learn from a robot depends on how social and responsive its behavior is); and learning is supported by brain circuits linking perception and action (human learning is grounded in the incredibly complex brain machinery that supports perception and action and that requires continuous adaptation and plasticity).
As the only species to engage in organized learning, such as schools and tutoring, Homo sapiens also draws on three uniquely human social skills that are fundamental to how we learn and develop: imitation, which accelerates learning and multiplies learning opportunities; shared attention, which facilitates social learning; and empathy and social emotions, which are critical to understanding human intelligence and appear to be present even in prelinguistic children.
These and other advances in our understanding of learning are now contributing to the development of machines that are themselves capable of learning and, more significantly, of teaching. Already these “social robots,” which interface with humans through dialogue or other forms of communication and behave in ways that humans are comfortable with, are being used on an experimental basis as surrogate teachers, helping preschool-age children master basic skills such as the names of the colors, new vocabulary, and singing simple songs.
“Social interaction is key to everything,” Sejnowski says. “The technology to merge the social with the instructional is out there, but it hasn’t been brought to bear on the classroom to create a personalized, individualized environment for each student.” He foresees a time when these social robots may offer personalized pedagogy tailored to the needs of each child and help track the student’s mastery of curriculum. “By developing a very sophisticated computational model of a child’s mind we can help improve that child’s performance.”
“For this new science to have an impact it is critical that researchers and engineers embed themselves in educational environments for sustained periods of time,” says coauthor Javier Movellan, Ph.D., co-PI of TDLC’s Social Interaction Network and director of the Machine Perception Laboratory at UC San Diego. “The old approach of scientists doing laboratory experiments and telling teachers what to do will simply not work. Scientists and engineers have a great deal to learn from educators and from daily life in the classroom.” Movellan is collaborating with teachers at the UC San Diego Early Childhood Education Center to develop social robots that assist teachers and create new learning opportunities for children.
What makes social interaction such a powerful catalyst for learning, how to embody key elements in technology to improve learning, and how to capitalize on social factors to teach children better and foster their innate curiosity remain central questions in the new science of learning.
“Our hope is that applying this new knowledge to learning will enhance educators’ ability to provide a much richer and more interesting intellectual and cultural life for everyone,” Sejnowski says.
Researchers who also contributed to this work include Andrew N. Meltzoff, D.Phil., and Patricia K. Kuhl, Ph.D., co-PI and PI, respectively, of the Learning in Informal and Formal Environments (LIFE) Center at the University of Washington.
About the Temporal Dynamics of Learning Center
The Temporal Dynamics of Learning Center, in operation since 2006 as one of six Science of Learning centers across the country, is funded by the National Science Foundation.
The TDLC mission is to develop a new science of learning that treats time as a crucial component in the learning process, on time scales that range from milliseconds to years. There is also a particular focus on inreach from the classroom into the labs and translation of the science back into the classroom.
Adapted from materials provided by Salk Institute, via EurekAlert!, a service of AAAS.

Multitasking Ability Can Be Improved Through Training


ScienceDaily (July 16, 2009) — Training increases brain processing speed and improves our ability to multitask, new research from Vanderbilt University published in the June 15 issue of Neuron indicates.
"We found that a key limitation to efficient multitasking is the speed with which our prefrontal cortex processes information, and that this speed can be drastically increased through training and practice,” Paul E. Dux, a former research fellow at Vanderbilt, and now a faculty member at the University of Queensland in Brisbane, Australia, and co-author of the study, said. “Specifically, we found that with training, the 'thinking' regions of our brain become very fast at doing each task, thereby quickly freeing them up to take on other tasks."
To understand what was occurring in the brain when multitasking efficiency improved, the researchers trained seven people daily for two weeks on two simple tasks — selecting an appropriate finger response to different images, and selecting an appropriate vocal response (syllables) to the presentation of different sounds. The tasks were done either separately or together (multitasking situation). Scans of the individuals’ brains were conducted three times over the two weeks using functional magnetic resonance imaging (fMRI) while they were performing the tasks.
Before practice, the participants showed strong dual-task interference—slowing down of one or both tasks when they attempted to perform them together. As a result of practice and training, however, the individuals became very quick not only at doing each of the two tasks separately, but also at doing them together. In other words, they became very efficient multitaskers.
The fMRI data indicate that these gains were the result of information being processed more quickly and efficiently through the prefrontal cortex.
"Our results imply that the fundamental reason we are lousy multitaskers is because our brains process each task slowly, creating a bottleneck at the central stage of decision making," René Marois, associate professor of psychology at Vanderbilt University and co-author of the study, said. “Practice enables our brain to process each task more quickly through this bottleneck, speeding up performance overall.”
The researchers also found the subjects, while appearing to multitask simultaneously, were not actually doing so.
"Our findings also suggest that, even after extensive practice, our brain does not really do two tasks at once,” Dux said. “It is still processing one task at a time, but it does it so fast it gives us the illusion we are doing two tasks simultaneously."
The researchers noted that though their results showed increased efficiency in the posterior prefrontal cortex, this effect and multitasking itself are likely not supported solely by this brain area.
“It is conceivable, for example, that more anterior regions of prefrontal cortex become involved as tasks become more abstract and require greater levels of cognitive control,” Marois said.
Dux completed this study while conducting post-doctoral research at Vanderbilt. Michael Tombu, Stephenie Harrison and Frank Tong, all of the Department of Psychology at Vanderbilt, and Baxter Rodgers of the Vanderbilt University Institute of Imaging Science and Department of Radiology and Radiological Sciences also co-authored the study. Marois, Tombu, Harrison and Tong are members of the Vanderbilt Vision Research Center and the Vanderbilt Center for Integrative and Cognitive Neurosciences.
The research was funded by the National Institute of Mental Health.
Adapted from materials provided by Vanderbilt University.

Wednesday, July 15, 2009

Brain Emotion Circuit Sparks As Teen Girls Size Up Peers



ScienceDaily (July 15, 2009) — What is going on in teenagers' brains as their drive for peer approval begins to eclipse their family affiliations? Brain scans of teens sizing each other up reveal an emotion circuit activating more in girls as they grow older, but not in boys. The study by Daniel Pine, M.D., of the National Institute of Mental Health (NIMH), part of National Institutes of Health, and colleagues, shows how emotion circuitry diverges in the male and female brain during a developmental stage in which girls are at increased risk for developing mood and anxiety disorders.

"During this time of heightened sensitivity to interpersonal stress and peers' perceptions, girls are becoming increasingly preoccupied with how individual peers view them, while boys tend to become more focused on their status within group pecking orders," explained Pine. "However, in the study, the prospect of interacting with peers activated brain circuitry involved in approaching others, rather than circuitry responsible for withdrawal and fear, which is associated with anxiety and depression."
Pine, Amanda Guyer, Ph.D., Eric Nelson, Ph.D., and colleagues at NIMH and Georgia State University report on one of the first studies to reveal the workings of the teen brain in a simulated real-world social interaction, in the July 2009 issue of the journal Child Development.
Thirty-four psychiatrically healthy males and females, aged 9 to 17, were ostensibly participating in a study of teenagers' communications via Internet chat rooms. They were told that after an fMRI (functional magnetic resonance imaging) scan, which visualizes brain activity, they would chat online with another teen from a collaborating study site. Each participant was asked to rate his or her interest in communicating with each of 40 teens presented on a computer screen, so they could be matched with a high-interest participant.
Two weeks later, the teens viewed the same faces while in an fMRI scanner. But this time they were asked to instead rate how interested they surmised each of the other prospective chatters would be in interacting with them.
Only after they exited the scanner did they learn that, in fact, the faces were of actors, not study participants, and that there would be no Internet chat. The scenario was intended to keep the teens engaged, maintaining a high level of anticipation and motivation during the tasks. This helped to ensure that the scanner would detect contrasts in brain circuit responses to high-interest versus low-interest peers.
Although the faces were selected by the researchers for their happy expressions, their attractiveness was random, so that they appeared to be a mix of typical peers encountered by teens.
As expected, the teen participants deemed the same faces they initially chose as high interest to be the peers most interested in interacting with them. Older participants tended to choose more faces of the opposite sex than younger ones. When they appraised anticipated interest from peers of high interest compared with low interest, older females showed more brain activity than younger females in circuitry that processes social emotion.
"This developmental shift suggested a change in socio-emotional calculus from avoidance to approach," noted Pine. The circuit is made up of the nucleus accumbens (reward and motivation), hypothalamus (hormonal activation), hippocampus (social memory) and insula (visceral/subjective feelings).
By contrast, males showed little change in the activity of most of these circuit areas with age, except for a decrease in activation of the insula. This may reflect a waning of interpersonal emotional ties over time in teenage males, as they shift their interest to groups, suggest Pine and colleagues.
"In females, absence of activation in areas associated with mood and anxiety disorders, such as the amygdala, suggests that emotional responses to peers may be driven more by a brain network related to approach than to one related to fear and withdrawal," said Pine. "This reflects resilience to psychosocial stress among healthy female adolescents during this vulnerable period."
Adapted from materials provided by NIH/National Institute of Mental Health.

Monday, July 13, 2009

Fussy Baby? Linking Genes, Brain And Behavior In Children


ScienceDaily (July 13, 2009) — It comes as no surprise that some babies are more difficult to soothe than others but frustrated parents may be relieved to know that this is not necessarily an indication of their parenting skills. According to a new report in Psychological Science, children's temperament may be due in part to a combination of a certain gene and a specific pattern of brain activity.
The pattern of brain activity in the frontal cortex of the brain has been associated with various types of temperament in children. For example, infants who have more activity in the left frontal cortex are characterized as temperamentally "easy" and are easily calmed down. Conversely, infants with greater activity in the right half of the frontal cortex are temperamentally "negative" and are easily distressed and more difficult to soothe.
In this study, Louis Schmidt from McMaster University and his colleagues investigated the interaction between brain activity and the DRD4 gene to see if it predicted children's temperament. In a number of previous studies, the longer version (or allele) of this gene had been linked to increased sensory responsiveness, risk-seeking behavior, and attention problems in children. In the present study, brain activity was measured in 9-month-old infants via electroencephalography (EEG) recordings. When the children were 48 months old, their mothers completed questionnaires regarding their behavior and DNA samples were taken from the children for analysis of the DRD4 gene.
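The article does not say how the left/right pattern was quantified; a conventional EEG measure is the frontal alpha asymmetry score, the log of right-hemisphere alpha power minus the log of left-hemisphere alpha power, with alpha power taken as inversely related to cortical activity. A sketch on synthetic signals, assuming that convention:

```python
import numpy as np
from scipy.signal import welch

# Hedged sketch: frontal alpha asymmetry, ln(right alpha) - ln(left alpha).
# Because alpha power is taken to be inversely related to cortical activity,
# positive scores indicate relatively greater LEFT frontal activity. The
# study's exact metric is not specified here; the signals are synthetic.
fs = 250.0                              # sampling rate (Hz)
t = np.arange(0, 30, 1 / fs)
rng = np.random.default_rng(1)

left = 0.5 * np.sin(2 * np.pi * 10 * t) + rng.normal(size=t.size)   # F3-like
right = 1.5 * np.sin(2 * np.pi * 10 * t) + rng.normal(size=t.size)  # F4-like

def alpha_power(x):
    """Mean power in the 8-13 Hz band from a Welch spectrum."""
    f, pxx = welch(x, fs=fs, nperseg=1024)
    band = (f >= 8) & (f <= 13)
    return pxx[band].mean()

asymmetry = np.log(alpha_power(right)) - np.log(alpha_power(left))
print(asymmetry)  # > 0 here: more right alpha, i.e. greater left activity
```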
The results reveal interesting relations among brain activity, behavior, and the DRD4 gene. Among children who exhibited more activity in the left frontal cortex at 9 months, those who had the long version of the DRD4 gene were more soothable at 48 months than those who possessed the shorter version of the gene. However, the children with the long version of the DRD4 gene who had more activity in the right frontal cortex were the least soothable and exhibited more attention problems compared to the other children.
These findings indicate that the long version of the DRD4 gene may act as a moderator of children's temperament. The authors note that the "results suggest that it is possible that the DRD4 long allele plays different roles (for better and for worse) in child temperament" depending on internal conditions (the environment inside their bodies) and conclude that the pattern of brain activity (that is, greater activation in left or right frontal cortex) may influence whether this gene is a protective factor or a risk factor for soothability and attention problems. The authors cautioned that there are likely other factors that interact with these two measures in predicting children's temperament.
Journal reference:
Schmidt et al. Linking Gene, Brain, and Behavior: DRD4, Frontal Asymmetry, and Temperament. Psychological Science, 2009; 20 (7): 831 DOI: 10.1111/j.1467-9280.2009.02374.x
Adapted from materials provided by Association for Psychological Science.

Why It Is Easy To Encode New Memories But Hard To Hold Onto Them

SOURCE

ScienceDaily (July 13, 2009) — Memories aren't made of actin filaments. But their assembly is crucial for long-term potentiation (LTP), an increase in synapse sensitivity that researchers think helps to lay down memories. In the July 13, 2009 issue of the Journal of Cell Biology, Rex et al. reveal that LTP's actin reorganization occurs in two stages that are controlled by different pathways, a discovery that helps explain why it is easy to encode new memories but hard to hold onto them.
If you can't seem to forget those ABBA lyrics you heard in seventh grade but can't remember Lincoln's Gettysburg address, the vagaries of LTP might be to blame. Neuroscientists think that the process, in which a brain synapse becomes more potent after repeated stimulation, underlies the formation and stabilization of new memories. LTP involves changes in the anatomy of synapses and dendritic spines, a process that depends on reorganization of the supporting actin cytoskeleton. However, researchers didn't know what controlled these changes.
Rex et al. tackled the question by dosing slices of rat hippocampus with adenosine, a naturally occurring signal that squelches LTP. Adenosine prevents phosphorylation and inactivation of cofilin, an inhibitor of actin filament assembly, the team found. Cofilin's involvement, in turn, implicates signaling cascades headed by GTPases, such as the RhoA-ROCK and Rac-PAK pathways. The researchers showed that a ROCK inhibitor stalled actin polymerization and resulted in a short-lived LTP. A Rac-blocking compound had no effect.
That doesn't mean the Rac-PAK pathway isn't involved in LTP, however. The team discovered that the Rac inhibitor prolonged cells' vulnerability to a molecule that prevents the stabilization of new actin filaments. That result led Rex et al. to conclude that the two pathways exert their effects at different points. The Rho-ROCK pathway initiates the cytoskeletal changes of LTP, and the Rac-PAK pathway solidifies them so that heightened synapse sensitivity can persist. The researchers hypothesize that one pathway encodes memories, while the other makes sure they stick around.
Journal reference: Rex, C.S., et al. 2009. J. Cell Biol. doi:10.1083/jcb.200901084.
Adapted from materials provided by Rockefeller University Press, via EurekAlert!, a service of AAAS.

Friday, July 10, 2009

Map Of Your Brain May Reveal Early Mental Illness


ScienceDaily (July 10, 2009) — John Csernansky wants to take your measurements. Not the circumference of your chest, waist and hips. No, this doctor wants to stretch a tape measure around your hippocampus, thalamus and prefrontal cortex.
OK, maybe not literally a tape measure, but he does want to chart the dimensions of the many structures in the human brain. From those measurements -- obtained from an MRI scan -- Csernansky will produce a map of the unique dips, swells and crevasses of the brains of individuals that he hopes will provide the first scientific tool for early and more definite diagnosis of mental disorders such as schizophrenia. Diagnosing the beginning stage of mental disorders remains elusive, although this is when they are most treatable.
The shapes and measurements of brain structures can reveal how they function. Thus, Csernansky hopes his brain maps will reveal how the brains of humans with and without major mental disorders differ from each other and the time frame over which those differences develop.
Diagnosing psychiatric disorders currently is more art than science, said Csernansky, M.D., the chair of psychiatry and behavioral sciences at the Northwestern University Feinberg School of Medicine and of psychiatry at the Stone Institute of Psychiatry at Northwestern Memorial Hospital. Unlike a heart attack, for example, which can be identified with an EKG and a blood test for cardiac enzymes, psychiatric illness is diagnosed by asking a patient about his symptoms and history.
"That's akin to diagnosing a heart attack by asking people when their pain came and where it was located," Csernansky said. "We would like to have the same kinds of tools that every other field of medicine has."
To that end, he is heading a National Institute of Mental Health study to measure the differences between the structure of the schizophrenic and the normal brain, to be able to more quickly identify schizophrenia in its early stages and see if the medications used to treat the illness halt its devastating advance.
Schizophrenia usually starts in the late teens or early 20s and affects about 1 percent of the population. If the disease is caught early and treated with the most effective antipsychotic medications and psychotherapy, the patient has the best chance for recovery.
Current treatments are evaluated on whether the patients' symptoms improve over several months. Csernansky, however, wants to take a longer and broader view.
"What we want to know is whether a few years later are you more able to work, are you better able to return to school?" he said. "If you take these medicines for years at a time, is your life better than if you had not taken them? We want to understand the effects of the medicines we give on the biological progression of the disease. We think that's what ultimately determines how well someone does."
Psychotic and mood disorders are life-long illnesses and require management throughout a person's life.
Csernansky is recruiting 100 new subjects, half with early-stage schizophrenia and half who are healthy, to map their brain topography and compare the differences and changes over two years.
"The brain is very plastic and is constantly remodeling itself. Any changes we see in a disease has to be compared in a background of normal changes of brain structures," said Csernansky, who also is the Lizzie Gilman Professor of Psychiatry and Behavioral Sciences.
He said a brain map of schizophrenia would enable doctors to make the diagnosis with more confidence as well as catch it earlier.
"Like every other illness, psychiatric illnesses don't blossom in their full form overnight. They come on gradually," he said. "You don't need a biomarker to tell you that you have breast cancer, if you can feel a tumor that is the size of a golf ball. But who wants to discover an illness that advanced? A biomarker of the schizophrenic brain structure would help us define it, especially in cases where the symptoms are mild or fleeting."
In the past, comparing MRI brain maps was done painstakingly by hand. A technician used a light pen and attempted to trace and manually measure the boundaries of structures in the brain.
"It was very laborious and you had to have an expert in your laboratory," Csernansky explained. Now he is teaching computers to do the work, speeding the process and enhancing accuracy.
Csernansky's previous research has already shown that the brains of schizophrenic patients have abnormalities in the shape and asymmetry of the hippocampus, a part of the brain that is critical to spatial learning and awareness, navigation and the memory of events.
"People with schizophrenia also have problems with interpretation, attention and controls and thought and memory. So the thalamus is another natural structure to study," said Lei Wang, assistant professor of psychiatry and behavioral sciences, and of radiology, at Northwestern's Feinberg School. Wang works with Csernansky on brain mapping.
Csernansky says, "Understanding what changes in brain structure occur very early in the course of schizophrenia and how medication may or may not affect these structures as time goes by will help us reduce the uncertainty of psychiatric diagnosis and improve the selection of treatments."
Adapted from materials provided by Northwestern University, via EurekAlert!, a service of AAAS.

Newborn Brain Cells Improve Our Ability To Navigate Our Environment


ScienceDaily (July 9, 2009) — Although the fact that we generate new brain cells throughout life is no longer disputed, their purpose has been the topic of much debate. Now, an international collaboration of researchers made a big leap forward in understanding what all these newborn neurons might actually do. Their study, published in the July 10, 2009, issue of the journal Science, illustrates how these young cells improve our ability to navigate our environment.
"We believe that new brain cells help us to distinguish between memories that are closely related in space," says senior author Fred H. Gage, Ph.D., a professor in the Laboratory for Genetics at the Salk Institute and the Vi and John Adler Chair for Research on Age-Related Neurodegenerative Diseases, who co-directed the study with Timothy J. Bussey, Ph.D., a senior lecturer in the Department of Experimental Psychology at the University of Cambridge, UK, and Roger A. Barker, PhD., honorary consultant in Neurology at Addenbrookes Hospital and Lecturer at the University of Cambridge.
When the first clues emerged that adult human brains continually sprout new neurons, one of the central tenets of neuroscience—we are born with all the brain cells we'll ever have—was about to be overturned. Although it is never easy to shift a paradigm, a decade later the question is no longer whether neurogenesis exists but rather what all these new cells are actually good for.
"Adding new neurons could be a very problematic process if they don't integrate properly into the existing neural circuitry," says Gage. "There must be a clear benefit to outweigh the potential risk."
The most active area of neurogenesis lies within the hippocampus, a small seahorse-shaped area located deep within the brain. It processes and distributes memory to appropriate storage sections in the brain after readying the information for efficient recall. "Every day, we have countless experiences that involve time, emotion, intent, olfaction and many other dimensions," says Gage. "All the information comes from the cortex and is channeled through the hippocampus. There, they are packaged together before they are passed back out to the cortex where they are stored."
Previous studies by a number of laboratories including Gage's had shown that new neurons somehow contribute to hippocampus-dependent learning and memory but the exact function remained unclear.
The dentate gyrus is the first relay station in the hippocampus for information coming from the cortex. While passing through, incoming signals are split up and distributed among 10 times as many cells. This process, called pattern separation, is thought to help the brain separate individual events that are part of incoming memories. "Since the dentate gyrus also happens to be the place where neurogenesis is occurring, we originally thought that adding new neurons could help with the pattern separation," says Gage.
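A rough illustration of that idea (the population sizes and winner-take-all sparsification below are assumptions for the demo, not claims about dentate gyrus biology): expanding two similar dense inputs into a tenfold larger, sparsely active population yields codes that overlap less than the inputs do:

```python
import numpy as np

# Illustrative pattern separation by expansion recoding: two similar
# inputs are projected into a population ten times larger and sparsified
# by keeping only the most active units, reducing their overlap.
rng = np.random.default_rng(2)

n_in, n_out, k = 100, 1000, 50          # 10x expansion, 5% of units active

a = rng.normal(size=n_in)
b = a + 0.3 * rng.normal(size=n_in)     # a similar, slightly different input

proj = rng.normal(size=(n_in, n_out))   # random divergent projection

def sparse_code(x):
    """Expand, then keep only the k most active output units (binary code)."""
    drive = x @ proj
    code = np.zeros(n_out)
    code[np.argsort(drive)[-k:]] = 1.0
    return code

def overlap(u, v):
    """Cosine similarity between two codes."""
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

print(overlap(a, b))                            # inputs: highly similar
print(overlap(sparse_code(a), sparse_code(b)))  # codes: noticeably less so
```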
This hypothesis allowed graduate student Claire Clelland, who divided her time between the La Jolla and the Cambridge labs, to design experiments that would specifically challenge this function of the dentate gyrus using different behavioral tasks and two distinct strategies to selectively shut down neurogenesis in the dentate gyrus.
In the first set of experiments, mice had to learn the location of a food reward that was presented relative to the location of an earlier reward within an eight-armed radial maze. "Mice without neurogenesis had no trouble finding the new location as long as it was far enough from the original location," says Clelland, "but couldn't differentiate between the two when they were close to each other."
A touch screen experiment confirmed the inability of neurogenesis-deficient mice to discriminate between locations in close proximity to each other but also revealed that these mice had no problem recalling spatial information in general. "Neurogenesis helps us to make finer distinctions and appears to play a very specific role in forming spatial memories," says Clelland. Adds Gage, "There is value in knowing something about the relationship between separate events and the closer they get the more important this information becomes."
But pattern separation might not be the only role that new neurons have in adult brain function: a computer model simulating the neuronal circuits in the dentate gyrus based on all available biological information suggested an additional function. "To our surprise, it turned out that newborn neurons actually form a link between individual elements of episodes occurring closely in time," says Gage.
Given this, he and his team are now planning experiments to see whether new neurons are also critical for coding temporal or contextual relationships.
Researchers who also contributed to the work include M. Choi, A. Fragniere, and P. Tyers in the Centre for Brain Repair at the University of Cambridge, UK, C. Romberg and L. M Saksida in the Department of Experimental Psychology at the University of Cambridge, UK, graduate student G. Dane Clemenson Jr. in the Laboratory of Genetics at the Salk Institute for Biological Studies and assistant professor Sebastian Jessberger, M.D. at the Institute of Cell Biology at the Swiss Federal Institute of Technology in Zurich, Switzerland.
Adapted from materials provided by Salk Institute, via EurekAlert!, a service of AAAS.

Wednesday, July 8, 2009

Finding Fear: Neuroscientists Locate Where It Is Stored In The Brain


ScienceDaily (July 8, 2009) — Fear is a powerful emotion, and neuroscientists have for the first time located the neurons responsible for fear conditioning in the mammalian brain. Fear conditioning is a form of Pavlovian, or associative, learning and is considered to be a model system for understanding human phobias, post-traumatic stress disorder and other anxiety disorders.
Using an imaging technique that enabled them to trace the process of neural activation in the brains of rats, University of Washington researchers have pinpointed the basolateral nucleus, in the region of the brain called the amygdala, as the place where fear conditioning is encoded.
Neuroscientists previously suspected that both the amygdala and another brain region, the dorsal hippocampus, were where cues get associated when fear memories are formed. But the new work indicates that the role of the hippocampus is to process and transmit information about conditioned stimuli to the amygdala, said Ilene Bernstein, corresponding author of the new study and a UW professor of psychology.
The study was published July 6 in PLoS ONE, a journal of the Public Library of Science.
Associative conditioning is a basic form of learning across the animal kingdom and is regularly used in studying how brain circuits can change as a result of experience. In earlier research, UW neuroscientists looked at taste aversion, another associative learning behavior, and found that neurons critical to how people and animals learn from experience are located in the amygdala.
The new work was designed to look for where information about conditioned and unconditioned stimuli converges in the brain as fear memories are formed. The researchers used four groups of rats, placing individual rodents inside a chamber for 30 minutes. Three of the groups had never seen the chamber before.
When control rats were placed in the chamber, they explored it, became less active and some fell asleep. A delayed shock group also explored the chamber, became less active and after 26 minutes received an electric shock through the floor of the chamber. The third group was acclimated to the chamber by a series of 10 prior visits and then went through the same procedure as the delayed shock rats. The final group was shocked immediately upon being introduced inside the chamber.
The following day the rats were individually returned to the chamber and the researchers observed them for freezing behavior. Freezing, or not moving, is the most common behavioral measure of fear in rodents. The only rats that exhibited robust freezing were those that received the delayed shock in a chamber which was unfamiliar to them.
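Freezing is typically scored as the fraction of time an animal spends motionless. As a rough illustration, here is a sketch of how percent freezing might be computed from a per-frame motion-energy trace; the motion threshold and the one-second bout criterion are illustrative assumptions, not parameters from this study.

```python
# A minimal sketch of scoring "percent freezing" from a motion-energy
# trace (e.g., frame-to-frame pixel change in a video recording).
import numpy as np

def percent_freezing(motion, fps=30, threshold=0.05, min_bout_s=1.0):
    """Percentage of the session spent in freezing bouts.

    motion: 1-D array of per-frame motion energy.
    A frame counts as frozen only if it sits inside a run of
    sub-threshold frames lasting at least min_bout_s seconds.
    """
    frozen = motion < threshold
    min_frames = int(min_bout_s * fps)
    scored = np.zeros_like(frozen)
    run_start = None
    # Append a sentinel False so a run ending at the last frame closes.
    for i, f in enumerate(np.append(frozen, False)):
        if f and run_start is None:
            run_start = i
        elif not f and run_start is not None:
            if i - run_start >= min_frames:
                scored[run_start:i] = True
            run_start = None
    return 100.0 * scored.mean()
```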
"We didn't know if we could delay the shock for 26 minutes and get a fear reaction after just one trial. I thought it would be impossible to do this with fear conditioning," said Bernstein. "This allowed us to ask where information converged in the brain."
To do this, the researchers repeated the procedure, but then killed the rats and took slices of their brains, applying Arc catFISH (cellular compartment analysis of temporal activity by fluorescence in situ hybridization), an imaging technique that allowed them to follow the pattern of neural activation in the animals.
Only the delayed shock group displayed evidence of converging activation from the conditioned stimulus (the chamber) and the unconditioned stimulus (the shock). The experiment showed that animals can acquire a long-term fear when a novel context is paired with a shock 26 minutes later, but not when a familiar context is matched with a shock.
"Fear learning and taste aversion learning are both examples of associative learning in which two experiences occur together. Often they are learned very rapidly because they are critical to survival, such as avoiding dangerous places or toxic foods," said Bernstein.
"People have phobias that often are set off by cues from something bad that happened to them, such as being scared by a snake or being in a dark alley. So they develop an anxiety disorder," she said.
"By understanding the process of fear conditioning we might learn how to treat anxiety by making the conditioning weaker or to go away. It is also a tool for learning about these brain cells and the underlying mechanism of fear conditioning."
Co-authors of the study, all at the UW, are Sabiha Barot, who just completed her doctoral studies; Ain Chung, a doctoral student; and Jeansok Kim, an associate professor of psychology.
Journal reference:
Sabiha K. Barot, Ain Chung, Jeansok J. Kim, Ilene L. Bernstein. Functional Imaging of Stimulus Convergence in Amygdalar Neurons during Pavlovian Fear Conditioning. PLoS ONE, 2009; 4 (7): e6156 DOI: 10.1371/journal.pone.0006156
Adapted from materials provided by University of Washington.

Tuesday, July 7, 2009

Songbirds Reveal How Practice Improves Performance


ScienceDaily (July 6, 2009) — Learning complex skills like playing an instrument requires a sequence of movements that can take years to master. Last year, MIT neuroscientists reported that by studying the chirps of tiny songbirds, they were able to identify how two distinct brain circuits contribute to this type of trial-and-error learning in different stages of life.
Now, the researchers have gained new insights into a specific mechanism behind this learning. In a paper being published in the Proceedings of the National Academy of Sciences during the week of July 6, the scientists report that as zebra finches fine-tune their songs, the brain initially stores improvements in one brain pathway before transferring this learned information to the motor pathway for long-term storage.
The work could further our understanding of the complicated circuitry of the basal ganglia, brain structures that play a key role in learning and habit formation in humans. The basal ganglia are also linked to disorders like Parkinson's disease, obsessive-compulsive disorder and drug addiction.
"Birds provide a great system to study the fundamental mechanisms of how the basal ganglia contributes to learning," said senior author Michale Fee, an investigator in the McGovern Institute for Brain Research at MIT. "Our results support the idea that the basal ganglia are the gateway through which newly acquired information affects our actions."
Young zebra finches learn to sing by mimicking their fathers, whose song contains multiple syllables in a particular sequence. Like the babbling of human babies, young birds initially produce a disorganized stream of tones, but after practicing thousands of times they master the syllables and rhythms of their father's song. Previous studies with finches have identified two distinct brain circuits that contribute to this behavior. A motor pathway is responsible for producing the song, and a separate pathway is essential for learning to imitate the father. This learning pathway, called the anterior forebrain pathway (AFP), has similarities to basal ganglia circuits in humans.
"For this study, we wanted to know how these two pathways work together as the bird is learning," explained first author Aaron Andalman, a graduate student in Fee's lab. "So we trained the birds to learn a new variation in their song and then we inactivated the AFP circuit to see how it was contributing to the learning."
To train the birds, researchers monitored their singing and delivered white noise whenever a bird sang a particular syllable at a lower pitch than usual.
"The bird hears this unexpected noise, thinks it made a 'mistake', and on future attempts gradually adjusts the pitch of that syllable upward to avoid repeating that error," Fee said. "Over many days we can train the bird to move the pitch of the syllable up and down the musical scale."
On a particular day, after four hours of training in which the birds learned to raise the pitch, the researchers temporarily inactivated the AFP with a short-acting drug (tetrodotoxin, a neurotoxin that comes from the puffer fish). The pitch immediately slipped back to where it had been at the start of that day's training session — suggesting that the recently learned changes were stored within the AFP.
Listen to the birds adjust the pitch of their song here: http://web.mit.edu/feelab/media/andalmanandfee.html
But the researchers found that over the course of 24 hours, the brain transferred the newly learned information from the AFP to the motor pathway, which was storing all of the accumulated pitch changes from previous training sessions.
Fee compares the effect to how recent edits to a document are temporarily stored in a computer's dynamic memory and then saved regularly to the hard drive. It is the accumulation of changes in the motor pathway "hard drive" that constitutes the development of a new skill.
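The analogy can be captured in a toy model: a fast store that accumulates within-day learning and a slow store that receives it at consolidation. In the entirely illustrative sketch below, inactivating the fast store before consolidation erases only that day's changes, reproducing the pattern the researchers observed.

```python
# A toy two-stage memory: fast store (AFP-like) and slow store
# (motor-pathway-like). Illustrative only.
class TwoStageMemory:
    def __init__(self):
        self.fast = 0.0   # recent, labile changes ("dynamic memory")
        self.slow = 0.0   # consolidated changes ("hard drive")

    def practice(self, delta):
        self.fast += delta          # within-day learning

    def consolidate(self):          # e.g., overnight
        self.slow += self.fast
        self.fast = 0.0

    def inactivate_fast(self):      # e.g., drug inactivation of the AFP
        self.fast = 0.0

    @property
    def pitch_shift(self):
        return self.fast + self.slow

m = TwoStageMemory()
m.practice(+1.0)        # day 1 training
m.consolidate()
m.practice(+1.0)        # day 2 training
m.inactivate_fast()     # pitch falls back to the start of day 2
print(m.pitch_shift)    # 1.0: only day 1's consolidated change remains
```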
The NIH and Friends of McGovern Institute supported this research.
Adapted from materials provided by Massachusetts Institute of Technology.

Sunday, July 5, 2009

Paralyzed People Using Computers, Amputees Controlling Bionic Limbs, With Microelectrodes On (Not In) Brain


ScienceDaily (July 6, 2009) — Experimental devices that read brain signals have helped paralyzed people use computers and may let amputees control bionic limbs. But existing devices use tiny electrodes that poke into the brain. Now, a University of Utah study shows that brain signals controlling arm movements can be detected accurately using new microelectrodes that sit on the brain but don't penetrate it.
"The unique thing about this technology is that it provides lots of information out of the brain without having to put the electrodes into the brain," says Bradley Greger, an assistant professor of bioengineering and coauthor of the study. "That lets neurosurgeons put this device under the skull but over brain areas where it would be risky to place penetrating electrodes: areas that control speech, memory and other cognitive functions."
For example, the new array of microelectrodes someday might be placed over the brain's speech center in patients who cannot communicate because they are paralyzed by spinal injury, stroke, Lou Gehrig's disease or other disorders, he adds. The electrodes would send speech signals to a computer that would convert the thoughts to audible words.
For people who have lost a limb or are paralyzed, "this device should allow a high level of control over a prosthetic limb or computer interface," Greger says. "It will enable amputees or people with severe paralysis to interact with their environment using a prosthetic arm or a computer interface that decodes signals from the brain."
The study is scheduled for online publication July 1 in the journal Neurosurgical Focus.
The findings represent "a modest step" toward use of the new microelectrodes in systems that convert the thoughts of amputees and paralyzed people into signals that control lifelike prosthetic limbs, computers or other devices to assist people with disabilities, says University of Utah neurosurgeon Paul A. House, the study's lead author.
"The most optimistic case would be a few years before you would have a dedicated system," he says, noting more work is needed to refine computer software that interprets brain signals so they can be converted into actions, like moving an arm.
An Advance over the Penetrating Utah Electrode Array
Such technology already has been developed in experimental form using small arrays of penetrating electrodes that stick into the brain. The University of Utah pioneered development of the 100-electrode Utah Electrode Array used to read signals from the brain cells of paralyzed people. In experiments in Massachusetts, researchers used the small, brain-penetrating electrode array to help paralyzed people move a computer cursor, operate a robotic arm and communicate.
Meanwhile, researchers at the University of Utah and elsewhere are working on a $55 million Pentagon project to develop a lifelike bionic arm that war veterans and other amputees would control with their thoughts, just like a real arm. Scientists are debating whether the prosthetic devices should be controlled from nerve signals collected by electrodes in or on the brain, or by electrodes planted in the residual limb.
The new study was funded partly by the Defense Advanced Research Projects Agency's bionic arm project, and by the National Science Foundation and Blackrock Microsystems, which provided the system to record brain waves.
House and Greger conducted the research with Spencer Kellis, a doctoral student in electrical and computer engineering; Kyle Thomson, a doctoral student in bioengineering; and Richard Brown, professor of electrical and computer engineering and dean of the university's College of Engineering.
Microelectrodes on the Brain May Last Longer than Those Poking Inside
Not only are the existing, penetrating electrode arrays undesirable for use over critical brain areas that control speech and memory, but the electrodes likely wear out faster if they are penetrating brain tissue rather than sitting atop it, Greger and House say. Nonpenetrating electrodes may allow a longer life for devices that will help disabled people use their own thoughts to control computers, robotic limbs or other machines.
"If you're going to have your skull opened up, would you like something put in that is going to last three years or 10 years?" Greger asks.
"No one has proven that this technology will last longer," House says. "But we are very optimistic that by being less invasive, it certainly should last longer and provide a more durable interface with the brain."
The new kind of array is called a microECoG – because it involves tiny or "micro" versions of the much larger electrodes used for electrocorticography, or ECoG, developed a half century ago.
For patients with severe epileptic seizures that are not controlled by medication, surgeons remove part of the skull or cranium and place a silicone mat containing ECoG electrodes over the brain for days to weeks while the cranium is held in place but not reattached. The large electrodes – each several millimeters in diameter – do not penetrate the brain but detect abnormal electrical activity and allow surgeons to locate and remove a small portion of the brain causing the seizures.
ECoG and microECoG represent an intermediate step between electrodes that poke into the brain and EEG (electroencephalography), in which electrodes are placed on the scalp. Because of distortion as brain signals pass through the skull and as patients move, EEG isn't considered adequate for helping disabled people control devices.
The regular-size ECoG electrodes are too large to detect many of the discrete nerve impulses controlling the arms or other body movements. So the researchers designed and tested microECoGs in two severe epilepsy patients who already were undergoing craniotomies.
The epilepsy patients were having conventional ECoG electrodes placed on their brains anyway, so they allowed House to place the microECoG electrode arrays at the same time because "they were brave enough and kind enough to help us develop the technology for people who are paralyzed or have amputations," Greger says.
The researchers tested how well the microelectrodes could detect nerve signals from the brain that control arm movements. The two epilepsy patients sat up in their hospital beds and used one arm to move a wireless computer "mouse" over a high-quality electronic draftsman's tablet in front of them. The patients were told to reach their arm to one of two targets: one was forward to the left and the other was forward to the right.
The patients' arm movements were recorded on the tablet and fed into a computer, which also analyzed the signals coming from the microelectrodes placed on the area of each patient's brain controlling arm and hand movement.
The study showed that the microECoG electrodes could be used to distinguish brain signals ordering the arm to reach to the right or left, based on differences such as the power or amplitude of the brain waves.
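A generic version of this kind of decoding is easy to sketch: compute spectral power in a band on each channel during the reach, then train a linear classifier to label trials left or right. The 70-150 Hz band and the least-squares decision rule below are assumptions for illustration, not the study's actual analysis pipeline.

```python
# Band power per channel -> linear classifier for left vs. right reaches.
import numpy as np

def band_power(trial, rate, lo, hi):
    """Mean spectral power in [lo, hi] Hz for each channel.

    trial: array of shape (n_channels, n_samples)
    """
    spec = np.abs(np.fft.rfft(trial, axis=1)) ** 2
    freqs = np.fft.rfftfreq(trial.shape[1], d=1.0 / rate)
    band = (freqs >= lo) & (freqs <= hi)
    return spec[:, band].mean(axis=1)

def features(trials, rate):
    # The band is an assumption; high-gamma is often movement-related.
    return np.array([band_power(t, rate, 70.0, 150.0) for t in trials])

def fit_linear(X, y):
    """Least-squares linear decision rule; y holds 0 = left, 1 = right."""
    X1 = np.hstack([X, np.ones((len(X), 1))])   # append a bias column
    w, *_ = np.linalg.lstsq(X1, 2.0 * y - 1.0, rcond=None)
    return w

def predict(w, X):
    X1 = np.hstack([X, np.ones((len(X), 1))])
    return (X1 @ w > 0).astype(int)             # 1 = "right"
```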
The microelectrodes were formed in grid-like arrays embedded in rubbery clear silicone. The arrays were over parts of the brain controlling one arm and hand.
The first patient received two identical arrays, each with 16 microelectrodes arranged in a four-by-four square. Individual electrodes were spaced 1 millimeter apart (about one-25th of an inch). Patient 1 had the ECoG and microECoG implants for a few weeks. The findings indicated the electrodes were so close that neighboring microelectrodes picked up the same signals.
So, months later, the second patient received one array containing about 30 electrodes, spaced 2 millimeters apart. This patient wore the array for several days.
"We were trying to understand how to get the most information out of the brain," says Greger. The study indicates optimal spacing is 2 to 3 millimeters between electrodes, he adds.
Once the researchers develop more refined software to decode brain signals detected by microECoG in real time, it will be tested by asking severe epilepsy patients to control a "virtual reality arm" in a computer using their thoughts.
Adapted from materials provided by University of Utah.

Scientists Develop Echolocation In Humans To Aid The Blind


ScienceDaily (July 6, 2009) — A team of researchers from the University of Alcalá de Henares (UAH) has shown scientifically that human beings can develop echolocation, the system of acoustic signals used by dolphins and bats to explore their surroundings. Producing certain kinds of tongue clicks helps people to identify objects around them without needing to see them, something which would be especially useful for the blind.
“In certain circumstances, we humans could rival bats in our echolocation or biosonar capacity”, Juan Antonio Martínez, lead author of the study and a researcher at the Superior Polytechnic School of the UAH, tells SINC. The team led by this scientist has started a series of tests, the first of their kind in the world, to make use of human beings’ under-exploited echolocation skills.
In the first study, published in the journal Acta Acustica united with Acustica, the team analyses the physical properties of various sounds, and proposes the most effective of these for use in echolocation. “The almost ideal sound is the ‘palate click’, a click made by placing the tip of the tongue on the palate, just behind the teeth, and moving it quickly backwards, although it is often done downwards, which is wrong”, Martínez explains.
The researcher says that palate clicks “are very similar to the sounds made by dolphins, although on a different scale, as these animals have specially-adapted organs and can produce 200 clicks per second, while we can only produce three or four”. By using echolocation, “which is three-dimensional, and makes it possible to ‘see’ through materials that are opaque to visible radiation” it is possible to measure the distance of an object based on the time that elapses between the emission of a sound wave and an echo being received of this wave as it is reflected from the object.
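The underlying arithmetic is the standard sonar relation: the echo delay covers the round trip, so the object lies at half the distance sound travels in that time. A two-line illustration (assuming sound travels at roughly 343 m/s in air):

```python
# The round-trip arithmetic behind echolocation.
SPEED_OF_SOUND_M_S = 343.0  # assumed speed of sound in air, ~20 °C

def distance_from_echo(delay_s):
    # The click travels out and back, so halve the path length.
    return SPEED_OF_SOUND_M_S * delay_s / 2.0

# An echo arriving 10 ms after the click puts the object ~1.7 m away.
print(distance_from_echo(0.010))  # 1.715
```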
In order to learn how to emit, receive and interpret sounds, the scientists are developing a method that uses a series of protocols. The first step is for the individual to know how to make and identify his or her own sounds (they are different for each person), and later to know how to use them to distinguish between objects according to their geometrical properties “as is done by ships’ sonar”.
Some blind people had previously taught themselves how to use echolocation “by trial and error”. The best-known cases of these are the Americans Daniel Kish, the only blind person to have been awarded a certificate to act as a guide for other blind people, and Ben Underwood, who was considered to be the world’s best “echolocator” until he died at the start of 2009.
However, no special physical skills are required in order to develop this skill. “Two hours per day for a couple of weeks are enough to distinguish whether you have an object in front of you, and within another two weeks you can tell the difference between trees and a pavement”, Martínez tells SINC.
The scientist recommends starting with the typical “sh” sound used to ask someone to be quiet. While making this sound, the effect of moving a pen back and forth in front of the mouth can be noticed straight away. The phenomenon is similar to travelling in a car with the windows down, which makes it possible to “hear” gaps in the verge of the road.
The next level is to learn how to master the “palate clicks”. To make sure echoes from the tongue clicks are properly interpreted, the researchers are working with a laser pointer, which shows the part of an object at which the sound should be aimed.
A new way of seeing the world
Martínez has told SINC that his team is now working to help deaf and blind people to use this method in the future, because echoes are perceived not only through the ear, but also through vibrations in the tongue and bones. “For these kinds of people in particular, and for all of us in general, this would be a new way of perceiving the world”.
Another of the team’s research areas involves establishing the biological limits of human echolocation ability, “and the first results indicate that detailed resolution using this method could even rival that of sight itself”. In fact, the researchers started out by being able to tell if there was someone standing in front of them, but now can detect certain internal structures, such as bones, and even “certain objects inside a bag”.
The scientists recognise that they are still at the very early stages, but the possibilities that would be opened up by the development of echolocation in humans are enormous. This technique will be very practical not only for the blind, but also for professionals such as firemen (enabling them to find exit points through smoke) and rescue teams, or simply for people lost in fog.
A better understanding of the mental mechanisms used in echolocation could also help to design new medical imaging technologies or scanners, which make use of the great penetration capacity of clicks. Martínez stresses that these sounds “are so penetrating that, even in environments as noisy as the metro, one can sense discontinuities in the platform or tunnels”.
Journal reference:
Rojas et al. Physical Analysis of Several Organic Signals for Human Echolocation: Oral Vacuum Pulses. Acta Acustica united with Acustica, 2009; 95 (2): 325 DOI: 10.3813/AAA.918155
Adapted from materials provided by Plataforma SINC, via AlphaGalileo.

Saturday, July 4, 2009

A Young Brain For An Old Bee


ScienceDaily (July 5, 2009) — We are all familiar with the fact that cognitive function declines as we get older. Moreover, recent studies have shown that the specific kind of daily activities we engage in during the course of our lives appears to influence the extent of this decline. A team of researchers from Technische Universität Berlin are studying how division of labour among honey bees affects their learning performance as they age.
Surprisingly, they have found that, by switching their social role, aging honey bees can keep their learning ability intact or even improve it. The scientists are planning to use the bees as a model to study general aging processes in the brain, and they even hope the insects may provide clues about how to prevent these processes. Dr. Ricarda Scheiner, leader of the research team, presented these findings at the Society for Experimental Biology Annual Meeting in Glasgow on July 1st 2009.
The oldest bees in a colony are the foragers - a role that demands a great deal of energy. With increasing foraging duration, their capacity for associative learning was found to decrease. On the other hand, no decline was observed in nurse bees that remain inside the hive taking care of the brood and the queen, even though their age was the same as that of their foraging sisters. When the scientists artificially forced a subset of the foragers to revert to nursing tasks, they discovered that their learning performance improved again, demonstrating a remarkable plasticity in the bees' brain circuits.
"The honey bee is a great model", explains Dr. Scheiner, "because we can learn a lot about social organisation from it and because it allows us to revert individuals into a 'younger' stage. If we remove all of the nurse bees of a colony, some of the foragers will revert to nursing behaviour and their brains become 'young' again. We thus hope to study the mechanisms responsible for age-dependent effects, like oxidative damage, and also to discover new ways to act against these aging processes."
Adapted from materials provided by Society for Experimental Biology, via EurekAlert!, a service of AAAS.

Friday, July 3, 2009

Human-like Brain Disturbances In Insects: Locusts Shed Light On Migraines, Stroke And Epilepsy


ScienceDaily (July 3, 2009) — A similarity in brain disturbance between insects and people suffering from migraines, stroke and epilepsy points the way toward new drug therapies to address these conditions.
Queen's University biologists studying the locust have found that these human disorders are linked by a brain disturbance during which nerve cells shut down. This also occurs in locusts when they go into a coma after exposure to extreme conditions such as high temperatures or lack of oxygen.
The Queen's study shows that the ability of the insects to resist entering the coma, and the speed of their recovery, can be manipulated using drugs that target one of the cellular signaling pathways in the brain.
"This suggests that similar treatments in humans might be able to modify the thresholds or severity of migraine and stroke," says Gary Armstrong, who is completing his PhD research in Biology professor Mel Robertson's laboratory. "What particularly excites me is that in one of our locust models, inhibition of the targeted pathway completely suppresses the brain disturbance in 70 per cent of animals," adds Dr. Robertson.
The Queen's research team previously demonstrated that locusts go into a coma as a way of shutting down and conserving energy when conditions are dangerous. The cellular responses in the locust are similar to the response of brain cells at the onset of a migraine.
Noting that it's hard to drown insects – due to their ability to remain safely in a coma under water for several hours – Mr. Armstrong says, "It's intriguing that human neural problems may share their mechanistic roots with the process insects use to survive flash floods."
The Queen's study is published in the current edition of the Journal of Neuroscience. Other researchers on the team are Corinne Rodgers and Tomas Money who are also in Dr. Robertson's laboratory. The research was supported by the Natural Sciences and Engineering Research Council of Canada (NSERC).
Adapted from materials provided by Queen's University.

Thursday, July 2, 2009

New Actions Of Neurochemicals Discovered


ScienceDaily (July 3, 2009) — Although the tiny roundworm Caenorhabditis elegans has only 302 neurons in its entire nervous system, studies of this simple animal have significantly advanced our understanding of human brain function because it shares many genes and neurochemical signaling molecules with humans. Now MIT researchers have found novel C. elegans neurochemical receptors, the discovery of which could lead to new therapeutic targets for psychiatric disorders if similar receptors are found in humans.
Dopamine and serotonin are members of a class of neurochemicals called biogenic amines, which function in neuronal circuitry throughout the brain. Many drugs used to treat psychiatric disorders, including depression and schizophrenia, target these signaling systems, as do cocaine and other drugs of abuse. Scientists have long known of a class of biogenic-amine receptors that are G protein-coupled receptors (GPCRs) and that, when activated, trigger a slow but long-lasting cascade of intracellular events that modulate nervous system activity.
A study in the July 3 issue of Science has found that in C. elegans these chemicals also act on receptors of a fundamentally different type. These receptors are chloride channels that open and close quickly in response to the binding of a neurochemical messenger. By allowing the passage of negatively charged chloride ions across the cell membrane, chloride channels can rapidly inactivate nerve cells.
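Why a chloride channel inhibits so quickly can be seen in a toy membrane model: opening a conductance whose reversal potential sits at or below rest pulls the voltage toward it and shunts excitatory drive. The sketch below is a minimal illustration of that principle; the parameter values are generic textbook numbers, not measurements from the study.

```python
# A minimal leaky-membrane sketch of fast chloride-mediated inhibition.
import numpy as np

E_LEAK, E_CL = -65.0, -70.0   # reversal potentials, mV
G_LEAK = 1.0                  # leak conductance, arbitrary units
TAU = 10.0                    # membrane time scale, ms

def simulate(i_ext, g_cl, t_ms=100.0, dt=0.1):
    """Integrate dV/dt = (g_leak*(E_leak - V) + g_cl*(E_cl - V) + I) / tau."""
    v = E_LEAK
    trace = []
    for _ in range(int(t_ms / dt)):
        dv = (G_LEAK * (E_LEAK - v) + g_cl * (E_CL - v) + i_ext) / TAU
        v += dv * dt
        trace.append(v)
    return np.array(trace)

# The same excitatory drive depolarizes the cell far less once the
# chloride conductance opens (e.g., serotonin binding MOD-1):
print(simulate(i_ext=10.0, g_cl=0.0)[-1])   # ~ -55 mV: depolarized
print(simulate(i_ext=10.0, g_cl=4.0)[-1])   # ~ -67 mV: clamped near E_Cl
```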
"These results underscore the importance of determining whether, as in the C. elegans nervous system, a diversity of biogenic amine-gated chloride channels function in the human brain," said H. Robert Horvitz of the McGovern Institute for Brain Research at MIT and senior author of the study. "If so, such channels might define novel therapeutic targets for neuropsychiatric disorders, such as depression and schizophrenia."
In 2000, Horvitz's group discovered that serotonin activates a chloride channel they called MOD-1, which inhibits neuronal activity in C. elegans.
In the current study, Niels Ringstad and Namiko Abe, a postdoctoral researcher and an undergraduate in Horvitz's laboratory, respectively, looked for other ion channels that could be receptors for biogenic amines. Using both in vitro and in vivo methods, they surveyed the functions of 26 ion channels similar to MOD-1 and found three additional ion channels with an affinity for biogenic amines: dopamine activates one, serotonin another, and tyramine (the role of which in the human brain is unknown) a third. All three were chloride channels, like MOD-1.
"We now have four members of a family of chloride channels that can act as receptors for biogenic amines in the worm," Ringstad said. "That these neurochemicals activate both GPCRs and ion channels means that they can have very complex actions in the nervous system, both as slow-acting neuromodulators and as fast-acting inhibitory neurotransmitters."
It is unknown as yet whether an equivalent to this new class of worm receptor exists in the human brain, but Horvitz points out that worms have proved remarkably informative for providing insights into human biology. In 2002, Horvitz shared the Nobel Prize in Physiology or Medicine for the discovery, based on studies of C. elegans, of the mechanism of programmed cell death, a central feature of some neurodegenerative diseases and many other disorders in humans.
"Historically, studies of C. elegans have delineated mechanisms of neurotransmission that subsequently proved to be conserved in humans," says Horvitz, the David H. Koch Professor of Biology at MIT and a Howard Hughes Medical Institute Investigator. "The next step is to look for chloride channels controlled by biogenic amines in mammalian neurons."
This study was supported by the NIH, the Howard Hughes Medical Institute, the Life Sciences Research Foundation, and The Medical Foundation.
Adapted from materials provided by McGovern Institute for Brain Research.