Thursday, March 26, 2009

Forget It! A Biochemical Pathway For Blocking Your Worst Fears?


ScienceDaily (Mar. 26, 2009) — A receptor for glutamate, the most prominent neurotransmitter in the brain, plays a key role in the process of "unlearning," report researchers at the Salk Institute for Biological Studies.
Their findings, published in the current issue of the Journal of Neuroscience, could eventually help scientists develop new drug therapies to treat a variety of disorders, including phobias and anxiety disorders, particularly post-traumatic stress disorder.
"Most studies focus on 'learning,' but the 'unlearning' process is probably just as important and much less understood," says Stephen F. Heinemann, Ph.D., a professor in the Molecular Neurobiology Laboratory, who led the study. "Most people agree that failure to 'unlearn' is a hallmark of post-traumatic stress disorders and if we had a drug that affects this gene it could help soldiers coming back from the war to 'unlearn' their fear memories."
Post-traumatic stress disorder, or PTSD, is an anxiety disorder that can develop after exposure to a terrifying event or ordeal in which grave physical harm occurred or was threatened. PTSD affects approximately 5.2 million Americans, according to the National Institutes of Health, and as many as one in eight returning soldiers suffer from it.
But you don't have to be a combat soldier to develop anxiety disorders such as PTSD. Any bad experience in daily life is a learning experience that can result in anxiety disorders. If traumatic memories persist inappropriately, sensory cues, sometimes not even recognized consciously, trigger recall of the distressing memories and the associated stress and fear.
As a way of modeling anxiety disorders in humans, researchers train mice to fear a tone by coupling it with a foot shock. If this fear conditioning is followed by repeated exposure to the tone without aversive consequences, the fear will subside, a behavioral change called fear extinction or inhibitory learning.
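To make the paradigm concrete, here is a minimal sketch in Python of one textbook way to think about extinction: fear as an associative strength that decays with each unreinforced tone presentation. The decay rates are invented for illustration and are not the Salk group's model.

```python
# Toy model of fear extinction: fear is an associative strength that
# decays on each tone-alone trial. Rates are illustrative only.

def extinction_curve(trials, fear=1.0, unlearning_rate=0.3):
    """Return the fear level after each unreinforced tone presentation."""
    curve = []
    for _ in range(trials):
        fear *= 1.0 - unlearning_rate  # tone without shock weakens the association
        curve.append(round(fear, 3))
    return curve

print(extinction_curve(8, unlearning_rate=0.30))  # normal mice: fear fades
print(extinction_curve(8, unlearning_rate=0.02))  # impaired "unlearning": fear persists
```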
Heinemann and his team were particularly interested in whether mGluR5, short for metabotropic glutamate receptor 5, which had been shown to be involved in several forms of behavioral learning, also plays a role in inhibitory learning. "Inhibitory learning is thought to be a parallel learning mechanism that requires the acquisition of new information as well as the suppression of previously acquired experiences to be able to adapt to novel situations or environments," says Heinemann.
When senior research associate and first author Jian Xu, Ph.D., put mice lacking the gene for mGluR5 through the fear-extinction drill, they were unable to shake off their fear of the now-harmless tone. "We could train the mice to be afraid of the tone but they were unable to erase the association between the tone and the negative experience," he says.
In a second series of experiments, Xu tested whether deleting mGluR5 also affected the animals' ability to learn new spatial information. He first trained mice to find a hidden platform placed in a fixed location in a water maze. Although it took mutant mice slightly longer than control animals to remember the position of the submerged platform, after several days of training the mutants finally got the hang of it and were able to find it almost as quickly as the controls.
Xu then moved the platform to a different location in the water maze and re-trained the animals. He observed that normal animals quickly adjusted their searching strategy once they realized that the platform had been moved to a different spot. The mice lacking mGluR5, however, just couldn't get it into their heads that the platform was no longer there and kept coming back to the original location. It took them several more trials until they finally gave up searching in the old location.
"Mice without mGluR5 had severe deficits in tasks that required them to 'unlearn' what they had just learned," explains Xu. "We believe that the same mechanism is perturbed in PTSD and that mGluR could provide a potential target for therapeutic intervention."
In addition to Xu and Heinemann, postdoctoral researchers Yongling Zhu, Ph.D., and Anis Contractor, Ph.D., contributed to the research.
Journal reference:
Jian Xu, Yongling Zhu, Anis Contractor, and Stephen F. Heinemann. mGluR5 Has a Critical Role in Inhibitory Learning. Journal of Neuroscience, 2009; 29 (12): 3676 DOI: 10.1523/JNEUROSCI.5716-08.2009
Adapted from materials provided by Salk Institute.

Touch Helps Make The Connection Between Sight And Hearing

ScienceDaily (Mar. 25, 2009) — The sense of touch allows us to make a better connection between sight and hearing and therefore helps adults to learn to read. This is what has just been shown by the team of Édouard Gentaz, CNRS researcher at the Laboratoire de Psychologie et Neurocognition in Grenoble (CNRS/Université Pierre Mendès France de Grenoble/Université de Savoie).
These results, published March 16th in the journal PLoS ONE, should improve learning methods, both for children learning to read and adults learning foreign languages.
To read words that are new to us, we have to learn to associate a visual stimulus (a letter, or grapheme) with its corresponding auditory stimulus (the sound, or phoneme). When visual stimuli can be explored both visually and by touch, adults learn arbitrary associations between auditory and visual stimuli more efficiently. The researchers reached this conclusion from an experiment on thirty French-speaking adults. They first compared two learning methods, in which the adults had to learn 15 new visual stimuli, inspired by Japanese characters, and their 15 corresponding sounds (new auditory stimuli with no associated meaning). The two methods differed in the senses used to explore the visual stimuli: the first, "classic", method used only vision, while the second, "multisensory", method used touch as well as vision. After the learning phase, the researchers measured the performance of each adult using different tests (1). They found that all the participants had acquired an above-chance ability to recognize the visual and auditory stimuli with both methods.
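For readers wondering what "above-chance" means here: if, as the footnotes suggest, each recognition trial is a forced choice among the target and five new items, chance is one in six, and a one-sided binomial test gives the probability of a score arising by guessing alone. A minimal sketch (the trial counts and the six-alternative assumption are ours, not the paper's):

```python
# Binomial check of above-chance recognition. Assumes a forced choice
# among the target plus 5 new items (chance = 1/6); counts are invented.
from math import comb

def p_at_least(k, n, p):
    """One-sided binomial probability P(X >= k) of k or more correct in n trials."""
    return sum(comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(k, n + 1))

n_trials, n_correct, chance = 15, 9, 1 / 6
print(f"P(>= {n_correct}/{n_trials} by guessing) = {p_at_least(n_correct, n_trials, chance):.4f}")
```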
The researchers then tested the participants with two further tests (2), this time measuring their capacity to learn associations between the visual and auditory stimuli. The results showed that the subjects were capable of learning the associations with both learning methods, but that their performance was much better with the "multisensory" method. When the subjects were given the same tests a week after the learning phase, the results were the same.
These results support those already found by the same team in work with young children. The explanation lies in the specific properties of the haptic sense (3) in the hands, which plays a "cementing" role between sight and hearing, favoring the connection between the senses. What goes on in the brain, and the neuronal mechanism involved, remains to be explored: the researchers plan to develop a protocol that will let them use fMRI (4) to identify the areas of the cortex that are activated during "multisensory" learning.
(1) The first two tests respectively measured the learning capacity for visual and auditory stimuli using recognition tests. In a visual test, a visual stimulus had to be recognized among 5 new visual stimuli. In an auditory test, the target had to be recognized among 5 new sounds.
(2) In the "visual-auditory" test, the subject was presented with a visual stimulus and had to recognize its corresponding sound among 5 other sounds. In the "auditory-visual" test, the opposite was done.
(3) Or tactile-kinesthetic. "Haptic" corresponds to the sense of touch, used to feel the letters.
(4) Functional magnetic resonance imaging: the application of magnetic resonance imagery to study the function of the brain.
Journal reference:
Fredembach, B., Boisferon, A. and Gentaz, E. Learning of Arbitrary Association between Visual and Auditory Novel Stimuli in Adults: The “Bond Effect” of Haptic Exploration. PLoS ONE, 2009; 4 (3): e4844 DOI: 10.1371/journal.pone.0004844
Adapted from materials provided by CNRS (Délégation Paris Michel-Ange).

Why We Have Difficulty Recognizing Faces In Photo Negatives


ScienceDaily (Mar. 25, 2009) — Humans excel at recognizing faces, but how we do this has been an abiding mystery in neuroscience and psychology. In an effort to explain our success in this area, researchers are taking a closer look at how and why we fail.
A new study from MIT looks at a particularly striking instance of failure: our impaired ability to recognize faces in photographic negatives. The study, which appears in the Proceedings of the National Academy of Sciences this week, suggests that a large part of the answer might lie in the brain's reliance on a certain kind of image feature.
The work could potentially lead to better computer vision systems for settings as diverse as industrial quality control and object and face detection. On a different front, the results and methodologies could help researchers probe face-perception skills in children with autism, who are often reported to have difficulty analyzing facial information.
Anyone who remembers the days before digital photography has probably noticed that it's much harder to identify people in photographic negatives than in normal photographs. "You have not taken away any information, but somehow these faces are much harder to recognize," says Pawan Sinha, an associate professor of brain and cognitive sciences and senior author of the PNAS study.
Sinha has previously studied light and dark relationships between different parts of the face, and found that in nearly every normal lighting condition, a person's eyes appear darker than the forehead and cheeks. He theorized that photo negatives are hard to recognize because they disrupt these very strong regularities around the eyes.
To test this idea, Sinha and his colleagues asked subjects to identify photographs of famous people in not only positive and negative images, but also in a third type of image in which the celebrities' eyes were restored to their original levels of luminance, while the rest of the photo remained in negative.
Subjects had a much easier time recognizing these "contrast chimera" images. According to Sinha, that's because the light/dark relationships between the eyes and surrounding areas are the same as they would be in a normal image.
Similar contrast relationships can be found in other parts of the face, primarily the mouth, but those relationships are not as consistent. "The relationships around the eyes seem to be particularly significant," says Sinha.
Other studies have shown that people with autism tend to focus on the mouths of people they are looking at, rather than the eyes, so the new findings could help explain why autistic people have such difficulty recognizing faces, says Sinha.
The findings also suggest that neuronal responses in the brain may be based on these relationships between different parts of the face. The team found that when they scanned the brains of people performing the recognition task, regions associated with facial processing (the fusiform face areas) were far more active when looking at the contrast chimeras than when looking at pure negatives.
Other authors of the paper are Sharon Gilad of the Weizmann Institute of Science in Israel and MIT postdoctoral associate Ming Meng, both of whom contributed equally to the work.
The research was funded by the Alfred P. Sloan Foundation and the Jim and Marilyn Simons Foundation.
Adapted from materials provided by Massachusetts Institute of Technology.

Brain Wave Patterns Can Predict Blunders, New Study Finds


ScienceDaily (Mar. 25, 2009) — A distinct alpha-wave pattern occurs in two brain regions just before subjects make mistakes on attention-demanding tests, according to a new study.
From spilling a cup of coffee to failing to notice a stop sign, everyone makes an occasional error due to lack of attention. Now a team led by a researcher at the University of California, Davis, in collaboration with the Donders Institute in the Netherlands, has found a distinct electric signature in the brain which predicts that such an error is about to be made.
The discovery could prove useful in a variety of applications, from developing monitoring devices that alert air traffic control operators that their attention is flagging, to devising new strategies to help children cope with attention deficit hyperactivity disorder (ADHD). The work will be posted online on March 23 by the journal Human Brain Mapping as part of a special issue highlighting innovations in electromagnetic brain imaging that will be published in May.
How the brain responds to mistakes has been the subject of numerous studies, said Ali Mazaheri, a research fellow at the UC Davis Center for Mind and Brain. "But what I was looking for was the state the brain is in before a mistake is made," he said, "because that's what can tell us what produces the error."
Working with colleagues at the Donders Institute for Brain, Cognition and Behavior at Radboud University, where he was a Ph.D. student at the time, Mazaheri recruited 14 students into his study. While they took an attention-demanding test, Mazaheri recorded their brain activity using MEG — magnetoencephalography — a non-invasive brain-wave recording technique similar to, but more sensitive than, electroencephalography (EEG), the technique commonly used in hospitals to detect seizures.
The test, known as the "sustained attention response task," was developed in the 1990s to evaluate brain damage, ADHD and other neurological disorders. As participants sit at a computer for an hour, a random number from 1 to 9 flashes onto the screen every two seconds. The object is to tap a button as soon as any number except 5 appears.
The test is so monotonous, Mazaheri said, that even when a 5 showed up, his subjects spontaneously hit the button an average of 40 percent of the time.
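The task's logic is simple enough to sketch in a few lines of Python. This console version is only a stand-in for the precisely timed psychophysics software actually used, but it shows the structure: a random digit every two seconds, with the response withheld only on 5.

```python
# Console sketch of the sustained attention response task (SART).
# Real implementations use precisely timed stimulus software; here the
# inter-stimulus interval and response capture are simplified.
import random
import time

def run_sart(n_trials=20, isi=2.0, no_go_digit=5):
    commission_errors = 0
    for _ in range(n_trials):
        digit = random.randint(1, 9)
        reply = input(f"{digit} -> Enter to tap, type 'w' to withhold: ")
        tapped = reply.strip() == ""
        if digit == no_go_digit and tapped:
            commission_errors += 1  # tapped on a 5: the lapse studied here
        time.sleep(isi)
    print(f"Commission errors on {no_go_digit}s: {commission_errors}")

run_sart()
```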
By analyzing the recorded MEG data, the research team found that about a second before these errors were committed, brain waves in two regions were stronger than when the subjects correctly refrained from hitting the button. In the back of the head (the occipital region), alpha wave activity was about 25 percent stronger, and in the middle region, the sensorimotor cortex, there was a corresponding increase in the brain's mu wave activity.
"The alpha and mu rhythms are what happen when the brain runs on idle," Mazaheri explained. "Say you're sitting in a room and you close your eyes. That causes a huge alpha rhythm to rev up in the back of your head. But the second you open your eyes, it drops dramatically, because now you're looking at things and your neurons have visual input to process."
The team also found that errors triggered immediate changes in wave activity in the front region of the brain, which appeared to drive down alpha activity in the rear region. "It looks as if the brain is saying, 'Pay attention!' and then reducing the likelihood of another mistake," Mazaheri said.
It shouldn't take too many years to incorporate these findings into practical applications, Mazaheri said. For example, a wireless EEG could be deployed at an air traffic controller's station to trigger an alert when it senses that alpha activity is beginning to regularly exceed a certain level.
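Such an alert need not be sophisticated. A toy version of the idea, with an invented threshold and window, would fire whenever alpha power stays high across most of a sliding window:

```python
# Toy attention monitor: alert when alpha power exceeds a threshold in
# most of the last few measurements. Threshold and window are invented.
from collections import deque

def attention_alerts(power_stream, threshold=2.0, window=5, min_hits=4):
    recent = deque(maxlen=window)
    for i, power in enumerate(power_stream):
        recent.append(power > threshold)
        if len(recent) == window and sum(recent) >= min_hits:
            yield i  # time index at which attention is flagged as waning

readings = [1.1, 1.4, 2.5, 2.7, 2.6, 2.8, 1.0, 2.9]
print(list(attention_alerts(readings)))
```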
It could also provide new therapies for children with ADHD, he said. "Instead of watching behavior — which is an imprecise measure of attention — we can monitor these alpha waves, which tell us that attention is waning. And that can help us design therapies as well as evaluate the efficacy of various treatments, whether it's training or drugs."
Collaborating in the study were Ingrid Nieuwenhuis, Hanneke van Dijk and principal investigator Ole Jensen, all at the Donders Institute.
Support for this work was provided by the Netherlands Organization for Scientific Research (NWO) and the BrainGain Smart Mix Programme of the Netherlands Ministry of Economic Affairs. Ali Mazaheri is currently funded by a Rubicon grant from NWO.
Journal reference:
Ali Mazaheri, Ingrid L.C. Nieuwenhuis, Hanneke van Dijk, Ole Jensen. Prestimulus alpha and mu activity predicts failure to inhibit motor responses. Human Brain Mapping, 2009; DOI: 10.1002/hbm.20763
Adapted from materials provided by University of California - Davis, via EurekAlert!, a service of AAAS.

Early Brain Marker For Familial Form Of Depression: Structural Changes In Brain's Cortex


ScienceDaily (Mar. 26, 2009) — Findings from one of the largest-ever imaging studies of depression indicate that a structural difference in the brain – a thinning of the right hemisphere – appears to be linked to a higher risk for depression, according to new research at Columbia University Medical Center and the New York State Psychiatric Institute.
The research was led by Myrna Weissman, Ph.D., professor of epidemiology in psychiatry at Columbia University College of Physicians and Surgeons, director of the Division of Epidemiology at the New York State Psychiatric Institute, and co-senior author of the study; and by Bradley Peterson, M.D., director of Child & Adolescent Psychiatry and of MRI Research in the Department of Psychiatry at Columbia University Medical Center and the New York State Psychiatric Institute, and first author of the study.
In findings published in the upcoming early online edition of the Proceedings of the National Academy of Sciences (PNAS), the researchers report that people at high risk of developing depression had a 28 percent thinning of the right cortex, the brain's outermost surface, compared to people with no known risk.
The drastic reduction, which the researchers say is on par with the loss of brain matter typically observed in Alzheimer's disease and schizophrenia, came as a surprise. "The difference was so great that at first we almost didn't believe it. But we checked and re-checked all of our data, and we looked for all possible alternative explanations, and still the difference was there," said Dr. Peterson.
Dr. Peterson says the thinner cortex may increase the risk of developing depression by disrupting a person's ability to pay attention to, and interpret, social and emotional cues from other people. Additional tests measured each person's level of inattention to and memory for such cues. The less brain material a person had in the right cortex, the worse they performed on the attention and memory tests.
The study compared the thickness of the cortex by imaging the brains of 131 subjects, aged 6 to 54, with and without a family history of depression. Structural differences were observed in the biological offspring of depressed subjects but were not found in the biological offspring of those who were not depressed.
One of the goals of the study was to determine whether structural abnormalities in the brain predispose people to depression or are a consequence of the illness. Dr. Peterson said, "Because previous biological studies only focused on a relatively small number of individuals who already suffered from depression, their findings were unable to tease out whether those differences represented the causes of depressive illness, or a consequence."
The study found that thinning on the right side of the brain did not correlate with actual depression, only an increased risk for the illness. It was the subjects who exhibited an additional reduction in brain matter on the left side who went on to develop depression or anxiety.
"Our findings suggest rather strongly that if you have thinning in the right hemisphere of the brain, you may be predisposed to depression and may also have some cognitive and inattention issues. The more thinning you have, the greater the cognitive problems. If you have additional thinning in the same region of the left hemisphere, that seems to tip you over from having a vulnerability to developing symptoms of an overt illness," said Dr. Peterson.
Imaging Done on Participants of One of the Longest Multi-Generational Studies of Depression
Participants were drawn from "Children at High and Low Risk of Depression," a study begun 27 years ago by Dr. Weissman, while she was at Yale, to examine the familial risk for depression. She identified people with moderate to severe depression, as well as people with no mental illness, and followed these families for more than 25 years. Dr. Weissman found that depression was transmitted across the generations in the high-risk families and, at the 20-year follow-up, invited Dr. Peterson to collaborate on imaging the participants. The study now includes grandparents, their children and grandchildren.
Future Clinical Implications of the Findings
Commenting on the potential clinical implications of the findings, Dr. Peterson said, "If the mechanism–or pathway to illness–indeed runs from the thinning of the cortex to these cognitive problems that affect a person's attention and their ability to interpret social and emotional cues – it would suggest that there may be potential treatments or novel uses of already existing treatments for intervention. For example, either behavioral therapies that aim to improve attention and memory and/or stimulant medications currently used for attention-deficit/hyperactivity disorder (ADHD), may surface as possible treatments for people who have familial depression and this pattern of cortical thinning, in a highly personalized form of medical decision-making and treatment, for it may be that treating their inattention could improve their processing of social information. This conjecture is entirely speculative at this point, but it is a logical hypothesis to test based on the findings from this study."
Next Steps
Using functional magnetic resonance imaging (fMRI) with 152 subjects, aged 12 to 20, with and without a family history of depression, Dr. Peterson and Dr. Weissman plan to learn more about the pattern of thinning by observing the circuits of functional activation during attentional tasks to look at how these groups differ.
Rescanning the subjects in the future is also expected to allow researchers to determine whether the reduction in brain matter relates to neurons rather than other supporting cells in the brain, known as glia. In addition, specific behavioral and cognitive testing can help to identify more definitively the causal pathways that lead from thinning of the cortex to depression.
Drs. Peterson, Weissman, and their colleagues also plan to study the DNA of these subjects to determine if there is a particular gene that contributes to having an elevated risk for depression. The researchers can then investigate whether individuals with this depression risk gene have more thinning in the cortex.
Background
A highly familial illness, depression is a leading cause of disability worldwide for persons 15 to 44 years of age, and is associated with increased mortality resulting from cardiovascular disease, poor personal care and suicide. Early onset of depression, which occurs before young adulthood, tends to be familial and is usually characterized as being more chronic and having greater severity.
Until now, there have been no studies of brain structure in depression which have focused on cortical thickness.
This study was supported by funding from a grant from the National Institute of Mental Health of the National Institutes of Health.
Adapted from materials provided by Columbia University Medical Center.

Sunday, March 22, 2009

Living Model Of Basic Units Of Human Brain Created

ScienceDaily (Mar. 22, 2009) — Researchers in the School of Life & Health Sciences at Aston University in Birmingham, UK are developing a novel way to model how the human brain works by creating a living representation of the brain.

They are using cells, originally from a tumour, which have been ‘reprogrammed’ to stop multiplying. Using the same natural molecule the body uses to stimulate cellular development, the cells are turned into a co-culture of nerve cells and astrocytes - the most basic units of the human brain.
These co-cultures can be developed into tiny, connected balls of cells called neurospheres, which can process information - at a very simple level, the basis of thought. The research process does not require animal testing and since 2007 has been generously supported by the Humane Research Trust.
In the future, the tiny three-dimensional cell clusters, which are essentially very small models of the human nervous system, could be used to develop new treatments for diseases including Alzheimer’s disease, motor neurone disease and Parkinson’s disease. These progressive and debilitating neurodegenerative conditions are becoming more common as the population of the UK ages.
Professor Michael Coleman, who is leading the research team, said: ‘We are aiming to be able to study the human brain at the most basic level, using an actual living human cellular system. Cells have to be alive and operating efficiently to enable us to really understand how the brain works. In the longer term we hope that our procedure can be used to help us understand how conditions such as Alzheimer’s and other neurodegenerative diseases develop. At the moment, most people are only too aware that current treatments for these conditions do not halt their progress and often have side-effects. We hope that our technique will provide scientists with a new and highly relevant human experimental model to help us understand the brain better and develop new drugs and treatments to tackle neurodegenerative disease.’
Adapted from materials provided by Aston University.

Wednesday, March 18, 2009

Brain On A Chip?



ScienceDaily (Mar. 18, 2009) — How does the human brain run itself without any software? Find that out, say European researchers, and a whole new field of neural computing will open up. A prototype ‘brain on a chip’ is already working.

“We know that the brain has amazing computational capabilities,” remarks Karlheinz Meier, a physicist at Heidelberg University. “Clearly there is something to learn from biology. I believe that the systems we are going to develop could form part of a new revolution in information technology.”
It’s a strong claim, but Meier is coordinating the EU-supported FACETS project which brings together scientists from 15 institutions in seven countries to do just that. Inspired by research in neuroscience, they are building a ‘neural’ computer that will work just like the brain but on a much smaller scale.
The human brain is often likened to a computer, but it differs from everyday computers in three important ways: it consumes very little power, it works well even if components fail, and it seems to work without any software.
How does it do that? Nobody yet knows, but a team within FACETS is completing an exhaustive study of brain cells – neurons – to find out exactly how they work, how they connect to each other and how the network can ‘learn’ to do new things.
Mapping brain cells
“We are now in a situation like molecular biology was a few years ago when people started to map the human genome and make the data available,” Meier says. “Our colleagues are recording data from neural tissues describing the neurons and synapses and their connectivity. This is being done almost on an industrial scale, recording data from many, many neural cells and putting them in databases.”
Meanwhile, another FACETS group is developing simplified mathematical models that will accurately describe the complex behaviour that is being uncovered. Although the neurons could be modelled in detail, they would be far too complicated to implement either in software or hardware.
The goal is to use these models to build a ‘neural computer’ which emulates the brain. The first effort is a network of 300 neurons and half a million synapses on a single chip. The team used analogue electronics to represent the neurons and digital electronics to represent communications between them. It’s a unique combination.
Since the neurons are so small, the system runs 100,000 times faster than the biological equivalent and 10 million times faster than a software simulation. “We can simulate a day in one second,” Meier notes.
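For comparison, the software route that the chip outpaces looks something like the sketch below: a leaky integrate-and-fire neuron, the style of simplified model such hardware implements in analog silicon. All constants are illustrative, not FACETS parameters.

```python
# Minimal leaky integrate-and-fire neuron, the kind of simplified model
# that neuromorphic hardware emulates. Constants are illustrative only.
import numpy as np

def simulate_lif(input_current, dt=1e-4, tau=0.02, v_rest=-0.065,
                 v_thresh=-0.050, v_reset=-0.065, resistance=1e7):
    """Integrate the membrane equation; return spike times in seconds."""
    v, spikes = v_rest, []
    for step, current in enumerate(input_current):
        v += (-(v - v_rest) + resistance * current) * dt / tau
        if v >= v_thresh:        # threshold crossed: the neuron fires
            spikes.append(step * dt)
            v = v_reset          # membrane potential resets after a spike
    return spikes

one_second = np.full(10_000, 2e-9)  # one second of constant 2 nA input
print(f"{len(simulate_lif(one_second))} spikes in one simulated second")
```

Looping over thousands of time steps per simulated second, for every neuron, is exactly the cost that dedicated analog circuitry avoids, which is where speed-up factors like those quoted above come from.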
The network is already being used by FACETS researchers to do experiments over the internet without needing to travel to Heidelberg.
New type of computing
But this ‘stage 1’ network was designed before the results came in from the mapping and modelling work. Now the team are working on stage 2, a network of 200,000 neurons and 50 million synapses that will incorporate all the neuroscience discoveries made so far.
To build it, the team is creating its network on a single 20cm silicon disk, a ‘wafer’, of the type normally used to mass-produce chips before they are cut out of the wafer and packaged. This approach will make for a more compact device.
So-called ‘wafer-scale integration’ has rarely been used this way before, because such a large circuit will certainly have manufacturing flaws. "Our chips will have faults but they are each likely to affect only a single synapse or a single connection in the network," Meier points out. "We can easily live with that. So we exploit the fault tolerance and use the entire wafer as a neural network."
How could we use a neural computer? Meier stresses that digital computers are built on principles that simply do not apply to devices modelled on the brain. To make them work requires a completely new theory of computing. Yet another FACETS group is already on the case. “Once you understand the basic principles you may hope to develop the hardware further, because biology has not necessarily found the best solution.”
Beyond the brain?
Practical neural computers could be only five years away. “The first step could be a little add-on to your computer at home, a device to handle very complex input data and to provide a simple decision,” Meier says. “A typical thing could be an internet search.”
In the longer term, he sees applications for neural computers wherever there are complex and difficult decisions to be made. Companies could use them, for example, to explore the consequences of critical business decisions before they are taken. In today’s gloomy economic climate, many companies will wish they already had one!
The FACETS project, which is supported by the EU’s Sixth Framework Programme for research, is due to end in August 2009 but the partners have agreed to continue working together for another year. They eventually hope to secure a follow-on project with support from both the European Commission and national agencies.
Meanwhile, the consortium has just obtained funding from the EU’s Marie Curie initiative to set up a four-year Initial Training Network to train PhD students in the interdisciplinary skills needed for research in this area.
Where could this go? Meier points out that neural computing, with its low-power demands and tolerance of faults, may make it possible to reduce components to molecular size. “We may then be able to make computing devices which are radically different and have amazing performance which, at some point, may approach the performance of the human brain – or even go beyond it!”
Adapted from materials provided by ICT Results.

Guitarists' Brains Swing Together


ScienceDaily (Mar. 18, 2009) — When musicians play along together it isn't just their instruments that are in time – their brain waves are too. New research shows how EEG readouts from pairs of guitarists become more synchronized, a finding with wider potential implications for how our brains interact when we do.
Ulman Lindenberger, Viktor Müller, and Shu-Chen Li from the Max Planck Institute for Human Development in Berlin along with Walter Gruber from the University of Salzburg used electroencephalography (EEG) to record the brain electrical activity in eight pairs of guitarists. Each of the pairs played a short jazz-fusion melody together up to 60 times while the EEG picked up their brain waves via electrodes on their scalps.
The similarities among the brainwaves' phase, both within and between the brains of the musicians, increased significantly: first when listening to a metronome beat in preparation; and secondly as they began to play together. The brains' frontal and central regions showed the strongest synchronization patterns, as the researchers expected. However the temporal and parietal regions also showed relatively high synchronization in at least half of the pairs of musicians. The regions may be involved in processes supporting the coordinated action between players, or in enjoying the music.
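One standard way to quantify such between-signal coupling is the phase-locking value: extract each signal's instantaneous phase with a Hilbert transform and measure how stable the phase difference is. A sketch on synthetic data follows; the paper's exact metric may differ.

```python
# Phase-locking value (PLV) between two signals: 1 = phases locked,
# near 0 = unrelated. Synthetic "guitarists" share an 8 Hz rhythm.
import numpy as np
from scipy.signal import hilbert

def phase_locking_value(x, y):
    phase_diff = np.angle(hilbert(x)) - np.angle(hilbert(y))
    return float(np.abs(np.mean(np.exp(1j * phase_diff))))

t = np.arange(0, 2, 1 / 500)        # two seconds at 500 Hz
rng = np.random.default_rng(1)
shared = np.sin(2 * np.pi * 8 * t)  # common rhythm, e.g. the metronome beat
guitarist_a = shared + 0.3 * rng.standard_normal(t.size)
guitarist_b = shared + 0.3 * rng.standard_normal(t.size)

print(phase_locking_value(guitarist_a, guitarist_b))               # close to 1
print(phase_locking_value(guitarist_a, rng.standard_normal(t.size)))  # much lower
```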
"Our findings show that interpersonally coordinated actions are preceded and accompanied by between-brain oscillatory couplings," says Ulman Lindenberger. The results don't show whether this coupling occurs in response to the beat of the metronome and music, and as a result of watching each others' movements and listening to each others' music, or whether the brain synchronization takes place first and causes the coordinated performance. Although individual's brains have been observed getting tuning into music before, this is the first time musicians have been measured jointly in concert.
Journal reference:
Ulman Lindenberger, Shu-Chen Li, Walter Gruber and Viktor Müller. Brains Swinging in Concert: Cortical Phase Synchronization While Playing Guitar. BMC Neuroscience (in press)
Adapted from materials provided by BMC Neuroscience, via EurekAlert!, a service of AAAS.

More Evidence That Intelligence Is Largely Inherited: Researchers Find That Genes Determine Brain's Processing Speed


ScienceDaily (Mar. 18, 2009) — They say a picture tells a thousand stories, but can it also tell how smart you are? Actually, say UCLA researchers, it can.
In a study published recently in the Journal of Neuroscience, UCLA neurology professor Paul Thompson and colleagues used a new type of brain-imaging scanner to show that intelligence is strongly influenced by the quality of the brain's axons, or wiring that sends signals throughout the brain. The faster the signaling, the faster the brain processes information. And since the integrity of the brain's wiring is influenced by genes, the genes we inherit play a far greater role in intelligence than was previously thought.
Genes appear to influence intelligence by determining how well nerve axons are encased in myelin — the fatty sheath of "insulation" that coats our axons and allows for fast signaling bursts in our brains. The thicker the myelin, the faster the nerve impulses.
Thompson and his colleagues scanned the brains of 23 sets of identical twins and 23 sets of fraternal twins. Since identical twins share the same genes while fraternal twins share about half their genes, the researchers were able to compare each group to show that myelin integrity was determined genetically in many parts of the brain that are key for intelligence. These include the parietal lobes, which are responsible for spatial reasoning, visual processing and logic, and the corpus callosum, which pulls together information from both sides of the body.
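The logic of the twin comparison can be captured in one line. A classic back-of-envelope estimate (not necessarily the method the UCLA team used) is Falconer's formula: heritability is twice the difference between identical-twin and fraternal-twin correlations. The correlations below are invented for illustration.

```python
# Falconer's formula: heritability from twin correlations.
# h^2 = 2 * (r_MZ - r_DZ), the excess identical-twin similarity
# attributable to their extra shared genes. Values are hypothetical.

def falconer_heritability(r_identical, r_fraternal):
    return 2.0 * (r_identical - r_fraternal)

r_mz = 0.80  # hypothetical myelin-integrity correlation, identical twins
r_dz = 0.45  # hypothetical correlation, fraternal twins
print(f"estimated heritability: {falconer_heritability(r_mz, r_dz):.2f}")
```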
The researchers used a faster version of a type of scanner called a HARDI (high-angular resolution diffusion imaging) — think of an MRI machine on steroids — that takes scans of the brain at a much higher resolution than a standard MRI. While an MRI scan shows the volume of different tissues in the brain by measuring the amount of water present, HARDI tracks how water diffuses through the brain's white matter — a way to measure the quality of its myelin.
"HARDI measures water diffusion," said Thompson, who is also a member of the UCLA Laboratory of Neuro-Imaging. "If the water diffuses rapidly in a specific direction, it tells us that the brain has very fast connections. If it diffuses more broadly, that's an indication of slower signaling, and lower intelligence."
"So it gives us a picture of one's mental speed," he said.
Because the myelination of brain circuits follows an inverted U-shaped trajectory, peaking in middle age and then slowly beginning to decline, Thompson believes identifying the genes that promote high-integrity myelin is critical to forestalling brain diseases like multiple sclerosis and autism, which have been linked to the breakdown of myelin.
"The whole point of this research," Thompson said, "is to give us insight into brain diseases."
He said his team has already narrowed down the number of gene candidates that may influence myelin growth.
And could this someday lead to a therapy that could make us smarter, enhancing our intelligence?
"It's a long way off but within the realm of the possible," Thompson said.
Journal reference:
Ming-Chang Chiang, Marina Barysheva, David W. Shattuck, Agatha D. Lee, Sarah K. Madsen, Christina Avedissian, Andrea D. Klunder, Arthur W. Toga, Katie L. McMahon, Greig I. de Zubicaray, Margaret J. Wright, Anuj Srivastava, Nikolay Balov, and Paul M. Thompson. Genetics of Brain Fiber Architecture and Intellectual Performance. Journal of Neuroscience, 2009; 29 (7): 2212 DOI: 10.1523/JNEUROSCI.4184-08.2009
Adapted from materials provided by University of California - Los Angeles. Original article written by Mark Wheeler.

Tuesday, March 17, 2009

Where Does Consciousness Come From?


ScienceDaily (Mar. 17, 2009) — Consciousness arises as an emergent property of the human mind. Yet basic questions about the precise timing, location and dynamics of the neural event(s) allowing conscious access to information are not clearly and unequivocally determined.
Some neuroscientists have even argued that consciousness may arise from a single "seat" in the brain, though the prevailing view attributes it to a more global network property.
Do the neural correlates of consciousness correspond to late or early brain events following perception? Do they necessarily involve coherent activity across different regions of the brain, or can they be restricted to local patterns of reverberating activity?
A new paper suggests that four specific, separate processes combine as a "signature" of conscious activity. By studying the neural activity of people who were presented with two different types of stimuli – one that could be perceived consciously, and one that could not – Dr. Gaillard of INSERM and colleagues show that these four processes occur only in the conscious perception condition.
This new work addresses the neural correlates of consciousness at an unprecedented resolution, using intra-cerebral electrophysiological recordings of neural activity. These challenging experiments were possible because patients with epilepsy who were already undergoing medical procedures requiring implantation of recording electrodes agreed to participate in the study. The authors presented them with visually masked and unmasked printed words, then measured the changes in their brain activity and the level of awareness of seeing the words. This method offers a unique opportunity to measure neural correlates of conscious access with optimal spatial and temporal resolutions. When comparing neural activity elicited by masked and unmasked words, they could isolate four converging and complementary electrophysiological markers characterizing conscious access 300 ms after word perception.
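Schematically, the contrast works like this: average the recordings separately for masked and unmasked words and find when the two traces diverge. The synthetic data below build in a "late ignition" after 300 ms; everything about the signals is invented for illustration, and this is not the authors' analysis.

```python
# Schematic masked-vs-unmasked comparison on synthetic trials: a late
# (>300 ms) component is present only when the word is consciously seen.
import numpy as np

fs = 1000                       # assumed 1 kHz sampling
t = np.arange(0, 0.6, 1 / fs)   # 0-600 ms after word onset
rng = np.random.default_rng(2)

def trials(seen, n=50):
    late = np.where(t > 0.3, np.sin(2 * np.pi * 5 * (t - 0.3)), 0.0)
    return (late if seen else 0.0) + 0.5 * rng.standard_normal((n, t.size))

unmasked = trials(seen=True).mean(axis=0)
masked = trials(seen=False).mean(axis=0)
diverge = t[np.argmax(np.abs(unmasked - masked) > 0.3)]
print(f"conditions diverge around {diverge * 1000:.0f} ms")
```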
All of these measures may provide distinct glimpses into the same distributed state of long-distance reverberation. Indeed, it seems to be the convergence of these measures in a late time window (after 300 ms), rather than the mere presence of any single one of them, which best characterizes conscious trials. "The present work suggests that, rather than hoping for a putative unique marker – the neural correlate of consciousness – a more mature view of conscious processing should consider that it relates to a brain-scale distributed pattern of coherent brain activation," explained neuroscientist Lionel Naccache, one of the authors of the paper.
The late ignition of a state of long distance coherence demonstrated here during conscious access is in line with the Global Workspace Theory, proposed by Stanislas Dehaene, Jean-Pierre Changeux, and Lionel Naccache.
Journal reference:
Gaillard et al. Converging Intracranial Markers of Conscious Access. PLoS Biology, 2009; 7 (3): e61 DOI: 10.1371/journal.pbio.1000061
Adapted from materials provided by Public Library of Science, via EurekAlert!, a service of AAAS.

Cause For Severe Pediatric Epilepsy Disorder Identified

ScienceDaily (Mar. 16, 2009) — Researchers at the University of California, San Diego School of Medicine have discovered that convulsive seizures in a form of severe epilepsy are generated, not on the brain's surface as expected, but from within the memory-forming hippocampus.

The scientists hope that their findings – based on a mouse model of severe epilepsy – may someday pave the way for improved treatments of childhood epilepsy, which affects more than two percent of children worldwide. Their study was published online by the Proceedings of the National Academy of Sciences (PNAS) the week of March 16.
"A parent of an epileptic child will tell you that they think their child is going to die during their attacks," said senior author Joseph Gleeson, MD, director of the Neurogenetics Laboratory at the UC San Diego School of Medicine, professor in the department of neurosciences and Howard Hughes Medical Institute Investigator. "Parents of children with epilepsy, especially the most severe types of epilepsy, are desperate for a deeper understanding of the causes of the problems and for the development of new treatments."
One of the major causes of epilepsy in children is an alteration in the development of the cerebral cortex. The cerebral cortex is the main folded part of the brain, containing a large percentage of brain cells, and is integral to purposeful actions and thoughts. However, this complex structure is subject to all kinds of defects in development, many of them due to defective genes and many associated with epilepsy.
Cortical dysplasia, meaning disordered development of the cerebral cortex, is identified in 25 to 40 percent of children with the most severe and difficult-to-treat forms of epilepsy. These children often come to the attention of specialists due to stagnation in the acquisition of language and balance skills and accompanying epilepsy. The symptoms displayed by these children can range from very subtle – such as small muscle jerks or eyelid fluttering – to dramatic whole body, tonic-clonic spasms (a series of contractions and relaxations of the muscle) that can affect basic bodily function.
The Gleeson team, led by researchers Geraldine Kerjan, PhD, and Hiroyuki Koizumi, PhD, has been studying a disorder called "lissencephaly." (In Greek, leios means smooth, and kephale means head.) Children with lissencephaly have a smooth brain surface that lacks the normal hills and valleys characteristic of the human brain. The researchers recently succeeded in developing a mouse model that showed some of the features of this disorder, usually the first step toward understanding the cause of a genetic disorder. But the severe epilepsy associated with lissencephaly was never displayed in any of the previous animals, so the team kept removing gene after gene until they hit upon a strain that showed epilepsy.
"We study the gene "doublecortin," which is defective in some forms of epilepsy and mental retardation in humans," said Kerjan, lead author of the study. "However, only after we removed a combination of two of the genes in the doublecortin family did we uncover epilepsy."
According to Gleeson, the findings were dramatic, as almost none of the mice in this strain survived to adulthood. Thinking that the deaths might be due to epilepsy, the scientists recorded electroencephalograms, which measure electrical activity produced by the firing of neurons in the brain, and found severe epilepsy in all of the mice tested. Even more surprising was the site of the epileptic focus – or site from which the seizures were generated – which was located beneath the surface of the brain, in the hippocampus.
"Researchers had thought that the cause of the seizures in this disease must be the brain surface, since this is the part that looks the most abnormal on brain MRI scans," said Gleeson. "However, we found that the epilepsy focus was actually deeper in the brain, within the hippocampus, the main memory-forming site."
The research team intends to continue studying the mice, to explore potential mechanisms and to use this model to test new treatments.
Additional contributors to the study include Edward B. Han and Stephen F. Heinemann, the Salk Institute; Celine M. Dubé and Tallie Z. Baram, UC Irvine; and Stevan N. Djakovic and Gentry N. Patrick, UC San Diego Department of Neurobiology.
The study was funded in part by the National Institutes of Health, the Burroughs Wellcome Fund, the Howard Hughes Medical Institute and the Epilepsy Foundation.
Adapted from materials provided by University of California - San Diego, via EurekAlert!, a service of AAAS.


Monday, March 16, 2009

Stress May Cause The Brain To Become Disconnected


ScienceDaily (Mar. 16, 2009) — Does stress damage the brain? In the March 1st issue of Biological Psychiatry, a paper by Tibor Hajszan and colleagues adds an important new chapter to this debate.
This issue emerged in the 1990s as an important clinical question with the observation by J. Douglas Bremner and colleagues, then at the VA National Center for Posttraumatic Stress Disorder (PTSD), that hippocampal volume was reduced in combat veterans with PTSD. This finding was replicated by several, but not all, groups. In particular, it did not appear that this change was associated with acute PTSD.
The importance of this finding was further called into question as a group associated with the Harvard Medical School found that reduced hippocampal volume predicted risk for PTSD among twins, rather than emerging as a consequence of PTSD. Yet limitations of this twin study reduced the strength of this inference, as there were relatively high rates of early life trauma in the twins without combat-related PTSD, i.e., a potential environmental source for the reductions in hippocampal volume associated with later risk for PTSD. This group also showed that cortical volume reductions in other brain regions, such as the pregenual anterior cingulate cortex, were more clearly linked to trauma than were the hippocampal changes in these twins.
“This collection of clinical findings highlights an important limitation of clinical neuroimaging studies. These studies have the ability to raise important questions about brain structure in a general sense, but we still rely on studies of postmortem human tissue and animal research to determine the specific nature of neural changes,” explains Dr. John Krystal, Editor of Biological Psychiatry and affiliated with both Yale University School of Medicine and the VA Connecticut Healthcare System.
This is where research conducted in animals has provided critical information. Initial data by investigators, such as Robert Sapolsky at Stanford University, suggested that stress might promote the death of neurons, suggesting that the volume reductions in patients with PTSD might reflect the loss of nerve cells. More recent research by Bruce McEwen and colleagues at Rockefeller University indicates that stress can cause neurons to shrink or retract their connections. This could be critically important to the ability of these neurons to work together in highly inter-connected networks. But what is the link between this type of “neural remodeling” and the behavioral changes that follow extreme stress exposure?
The new paper by Hajszan and colleagues at Yale University suggests that in learned helplessness, an animal model for depression and PTSD, stress-related reductions in synapses in the hippocampus are directly related to the emergence of depression-like behavior. These data help to make the case that stress-related changes in the structure of nerve cells may have important behavioral consequences, explains Dr. Hajszan.
“The importance of our findings is derived from the well-known fact that synapses have a great potential for rapid changes, which may underlie sudden mood swings. More importantly, it is feasible to restore hippocampal synapses in a very short period of time (hours or even minutes), which opens up exciting new avenues for developing rapid-acting antidepressants that may provide immediate relief from depressive symptoms.”
It cannot yet be said that reductions in cortical volumes in patients with PTSD reflect reductions in the number of synapses. However, these findings underscore the potential importance of studying post-mortem human tissue to determine whether humans also show this pattern of neural changes. Dr. Krystal notes that “settling this issue could help us to better understand recent epidemiologic data suggesting that most of the adjustment problems of soldiers returning from Iraq and Afghanistan with mild traumatic brain injury (TBI) or post-concussive syndrome are attributable to PTSD.”
He adds, “We have tended to think of PTSD and mild TBI as unrelated at the neural level. However, with growing evidence from animal studies that PTSD may be associated with loss of neural connections, it may turn out that PTSD and mild TBI are two distinct, but interacting, ways that soldiers might be affected by their combat experience.” Research is ongoing in the authors’ lab and in others as they continue to make progress in understanding how the brain is affected by depression and stress, and in developing targeted medications.
Adapted from materials provided by Elsevier, via AlphaGalileo.

Reward Elicits Unconscious Learning In Humans


ScienceDaily (Mar. 16, 2009) — A new study challenges the prevailing assumption that you must pay attention to something in order to learn it. The research, published in the March 12th issue of the journal Neuron, demonstrates that stimulus-reward pairing can elicit visual learning in adults, even without awareness of the stimulus presentation or reward contingencies.
"Recent studies have raised the question of whether visual skill learning requires an active goal directed process or whether learning can occur automatically without any task, stimulus awareness, or goal directed behavior," says study author Dr. Aaron Seitz from the Department of Psychology at the University of California, Riverside. Dr. Seitz and colleagues Drs. Dongho Kim and Takeo Watanabe from Boston University designed a novel experimental paradigm to take the "task" out of perceptual learning.
Study participants were asked to view a computer monitor, maintain their gaze on a central spot and enjoy the occasional drop of water that was delivered to their mouths through a tube. The drop of water was considered a reward because subjects were required to abstain from eating and drinking for five hours before the experimental session. The visual stimuli that were paired with the liquid rewards were viewed with one eye and were imperceptible to the subjects because contour rich patterns were continuously flashed to the other eye.
"The use of this procedure allowed us to examine the specific hypothesis that reward-related learning signals are sufficient to cause improvements in visual sensitivity for visual stimuli paired with rewards," explains Dr. Seitz. The researchers found that stimulus-reward pairing was sufficient to cause learning, even when the subject was not aware of the learned stimuli or stimulus-reward conditions. The learning effects were specific to the eye receiving the stimuli, a condition indicative of an early, monocular stage of visual processing.
These results suggest that automatic reinforcement mechanisms (such as the reinforcement signals released at times of reward), rather than directed attention, determine improvements in sensory skills.
"Our findings support the suggestion that visual skill learning is generally an unconscious process and that goal-directed factors, such as directed attention, serve mostly to bias how learning takes place rather than actually gating the learning process," hypothesizes Dr. Seitz. The authors are careful to acknowledge that future studies are required.
The researchers include Aaron R. Seitz, Boston University, Boston, MA, University of California, Riverside, Riverside, CA; Dongho Kim, Boston University, Boston, MA, University of California, Riverside, Riverside, CA; and Takeo Watanabe, Boston University, Boston, MA.
Adapted from materials provided by Cell Press, via EurekAlert!, a service of AAAS.

Nanotechnology Coating Could Lead To Better Brain Implants To Treat Diseases


ScienceDaily (Mar. 16, 2009) — Biomedical and materials engineers at the University of Michigan have developed a nanotech coating for brain implants that helps the devices operate longer and could improve treatment for deafness, paralysis, blindness, epilepsy and Parkinson's disease.
Currently, brain implants can treat Parkinson's disease, depression and epilepsy. These and the next generation of the devices operate in one of two ways. Either they stimulate neurons with electrical impulses to override the brain's own signals, or they record what working neurons are transmitting to non-working parts of the brain and reroute that signal.
On-scalp and brain-surface electrodes are giving way to brain-penetrating microelectrodes that can communicate with individual neurons, offering hope for more precise control of signals.
In recent years, researchers at other institutions have demonstrated that these implanted microelectrodes can let a paralyzed person use thought to control a computer mouse and move a wheelchair. Michigan researchers say their coating can most immediately improve this type of microelectrode.
Mohammad Reza Abidian, a post-doctoral researcher in the Department of Biomedical Engineering who is among the developers of the new coating, says the reliability of today's brain-penetrating microelectrodes often begins to decline after they're in place for only a few months.
"You want to be able to use these for at least a couple years," Abidian said. "Current technology doesn't allow this in most cases because of how the tissues of the brain respond to the implants. The goal is to increase their efficiency and their lifespans."
The new coating Abidian and his colleagues developed is made of three components that together allow electrodes to interface more smoothly with the brain. The coating is made of a special electrically-conductive nanoscale polymer called PEDOT; a natural, gel-like buffer called alginate hydrogel; and biodegradable nanofibers loaded with a controlled-release anti-inflammatory drug.
The PEDOT in the coating enables the electrodes to operate with less electrical resistance than current models, which means they can communicate more clearly with individual neurons.
The alginate hydrogel, partially derived from algae, gives the electrodes mechanical properties more similar to actual brain tissue than the current technology. That means coated neural electrodes would cause less tissue damage.
The biodegradable, drug-loaded nanofibers fight the "encapsulation" that occurs when the immune system tells the body to envelop foreign materials. Encapsulation is another reason these electrodes can stop functioning properly. The nanofibers fight this response well because they work with the alginate hydrogel to release the anti-inflammatory drugs in a controlled, sustained fashion as the nanofibers themselves break down.
"Penetrating microelectrodes provide a means to record from individual neurons, and in doing so, there is the potential to record extremely precise information about a movement or an intended movement. The open question in our field is what is the trade-off: How much invasiveness can be tolerated in exchange for more precision?" said Daryl Kipke, a professor in the Department of Biomedical Engineering and the director of the U-M Center for Neural Communication Technology.
In these experiments, the Michigan researchers applied their coating to microelectrodes provided by the U-M Center for Neural Communication Technology.
A paper on this research, called "Multifunctional Nanobiomaterials for Neural Interfaces," is published in Advanced Functional Materials. It is the cover story of the February 24 issue.
Abidian's co-author is David Martin, a professor of Materials Science and Engineering; Biomedical Engineering; and Macromolecular Science and Engineering. Biotectix, a U-M spin-off company founded by Martin, is actively working to commercialize coatings related to those discussed in this paper. This research is supported by the National Institutes of Health, the Army Research Office Multi-disciplinary University Research Initiative and College of Engineering Translational Research funding.
Adapted from materials provided by University of Michigan.

Sunday, March 15, 2009

Tiny Brain Region Key To Fear Of Rivals And Predators


ScienceDaily (Mar. 15, 2009) — Mice lose their fear of territorial rivals when a tiny piece of their brain is neutralized, a new study reports.
The study adds to evidence that primal fear responses do not depend on the amygdala – long favored by fear researchers – but on an obscure corner of the primeval brain.
A group of neuroscientists led by Larry Swanson of the University of Southern California studied the brain activity of rats and mice exposed to cats, or to rival rodents defending their territory.
Both experiences activated neurons in the dorsal premammillary nucleus, part of an ancient brain region called the hypothalamus.
Swanson's group then made tiny lesions in the same area. Rodents with those lesions behaved very differently.
"These animals are not afraid of a predator," Swanson said. "It's almost like they go up and shake hands with a predator."
Loss of fear of cats in rodents with such lesions had been observed before. More important for studies of social interaction, the study extended the finding to male rats that wandered into another male's territory.
Instead of adopting the usual passive pose, the intruder frequently stood upright and boxed with the resident male, avoided exposing his neck and back, and came back for more even when losing.
"It's amazing that these lesions appear to abolish innate fear responses," said Swanson, who added: "The same basic circuitry is found in primates and people that we find in rats and mice."
The study was slated for online publication the week of March 9 in Proceedings of the National Academy of Sciences.
Swanson predicted that his group's findings would shift some research away from the amygdala, a major target of fear studies for the past 30 years.
"This is a new perspective on what part of the brain controls fear," he said.
He explained that most amygdala studies have focused on a different type of fear, which might more accurately be called caution or risk aversion.
In those studies, animals receive an electric shock to their feet. When placed in the same environment a few days later, they display caution and increased activity of the amygdala.
But the emotion experienced in that case may differ from the response to a physical attack.
"We're not just dealing with one system that controls all fear," Swanson said.
Swanson and collaborators have been studying the role of the hypothalamus in the fear response since 1992.
Because of its role in basic survival functions such as feeding, reproduction and the sleep-wake cycle, the hypothalamus seems a plausible candidate for fear studies.
Yet, said Swanson, "nobody's paid any attention to it."
The PNAS study is the most recent of several by Swanson on fear and the hypothalamus. The few other researchers in the area include Newton Canteras of the University of Sao Paulo in Brazil, who collaborated with Swanson on the PNAS study, as well as Robert and Caroline Blanchard of the University of Hawaii.
The other authors on the PNAS study were Simone Motta, Marina Goto, Flavia Gouveia and Marcus Baldo, all from the University of Sao Paulo.
The Brazilian government funded the study.
Journal reference:
Simone C. Motta, Marina Goto, Flavia V. Gouveia, Marcus V. C. Baldo, Newton S. Canteras, and Larry W. Swanson. Dissecting the brain's fear system reveals the hypothalamus is critical for responding in subordinate conspecific intruders. Proceedings of the National Academy of Sciences, 2009; DOI: 10.1073/pnas.0900939106
Adapted from materials provided by University of Southern California.

'The Unexpected Outcome' Is A Key To Human Learning

ScienceDaily (Mar. 15, 2009) — The human brain’s sensitivity to unexpected outcomes plays a fundamental role in the ability to adapt and learn new behaviors, according to a new study by a team of psychologists and neuroscientists from the University of Pennsylvania.
Using a computer-based card game and microelectrodes to observe neuronal activity of the brain, the Penn study, published March 13 in the journal Science, suggests that neurons in the human substantia nigra, or SN, play a central role in reward-based learning, modulating learning based on the discrepancy between the expected and the realized outcome.
“This is the first study to directly record neural activity underlying this learning process in humans, confirming the hypothesized role of the basal ganglia, which includes the SN, in models of reinforcement learning, addiction and other disorders involving reward-seeking behavior,” said lead author Kareem Zaghloul, postdoctoral fellow in neurosurgery at Penn’s School of Medicine. “By responding to unexpected financial rewards, these cells encode information that seems to help participants maximize reward in the probabilistic learning task.”
Learning, previously studied in animal models, seems to occur when dopaminergic neurons, which drive a larger basal ganglia circuit, are activated in response to unexpected rewards and depressed after the unexpected omission of reward. Put simply, a lucky win seems to be retained better than a probable loss.
"Similar to an economic theory, where efficient markets respond to unexpected events and expected events have no effect, we found that the dopaminergic system of the human brain seems to be wired in a similar rational manner -- tuned to learn whenever anything unexpected happens but not when things are predictable," said Michael J. Kahana, senior author and professor of psychology at Penn’s School of Arts and Sciences.
Zaghloul worked with Kahana and Gordon Baltuch, associate professor of neurosurgery, in a unique collaboration among departments of psychology, neurosurgery and bioengineering. They used microelectrode recordings obtained during deep brain stimulation surgery of Parkinson’s patients to study neuronal activity in the SN, the midbrain structure that plays an important role in movement, as well as reward and addiction. Patients with Parkinson’s disease show impaired learning from both positive and negative feedback in cognitive tasks due to the degenerative nature of their disease and the decreased number of dopaminergic neurons.
The recordings were analyzed to determine whether responses were affected by reward expectation. Participants were asked to choose between red and blue decks of cards presented on a computer screen, one of which carried a higher probability of yielding a financial reward than the other. If the draw of a card yielded a reward, a stack of gold coins was displayed along with an audible ring of a cash register and a counter showing accumulated virtual earnings. If the draw did not yield a reward or if no choice was made, the screen turned blank and participants heard a buzz.
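The prediction-error account sketched above can be made concrete with a toy model. The following is a minimal simulation of a two-deck task in the spirit of the Penn experiment, using a simple Rescorla-Wagner learner whose value estimates are nudged by the reward prediction error; the reward probabilities, learning rate and choice rule are all illustrative assumptions, not parameters from the study.

    import random

    REWARD_PROB = {"red": 0.7, "blue": 0.3}  # assumed deck payoffs, not the study's
    ALPHA = 0.1     # learning rate (assumed)
    EPSILON = 0.1   # occasional random exploration (assumed)

    values = {"red": 0.0, "blue": 0.0}  # expected reward for each deck

    for trial in range(200):
        # Usually choose the deck currently believed to pay more.
        if random.random() < EPSILON:
            deck = random.choice(["red", "blue"])
        else:
            deck = max(values, key=values.get)

        reward = 1.0 if random.random() < REWARD_PROB[deck] else 0.0

        # Reward prediction error: positive for an unexpected win, negative
        # for an unexpected omission -- the quantity the SN neurons appeared
        # to encode.
        delta = reward - values[deck]
        values[deck] += ALPHA * delta

    print(values)  # the richer deck's estimate drifts toward 0.7

Once the estimates match the true payoffs, the prediction error, and with it the learning signal, shrinks toward zero -- the "tuned to the unexpected" behavior Kahana describes.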
“This new way to measure dopaminergic neuron activity has helped us gain a greater understanding of fundamental cognitive activity," said Baltuch, director of the Penn Medicine Center for Functional and Restorative Neurosurgery.
The work is supported by grants from the National Institutes of Health, the Conte Center and the Dana Foundation.
Journal reference:
Kareem A. Zaghloul, Justin A. Blanco, Christoph T. Weidemann, Kathryn McGill, Jurg L. Jaggi, Gordon H. Baltuch, and Michael J. Kahana. Human Substantia Nigra Neurons Encode Unexpected Financial Rewards. Science, 2009; 323 (5920): 1496 DOI: 10.1126/science.1167342
Adapted from materials provided by University of Pennsylvania.

Saturday, March 14, 2009

What Drives Brain Changes In Macular Degeneration?


ScienceDaily (Mar. 13, 2009) — In macular degeneration, the most common form of adult blindness, patients progressively lose vision in the center of their visual field, thereby depriving the corresponding part of the visual cortex of input. Previously, researchers discovered that the deprived neurons begin responding to visual input from another spot on the retina — evidence of plasticity in the adult cortex.
Just how such plasticity occurred was unknown, but a new MIT study sheds light on the underlying neural mechanism.
"This study shows us one way that the brain changes when its inputs change. Neurons seem to 'want' to receive input: when their usual input disappears, they start responding to the next best thing," said Nancy Kanwisher of the McGovern Institute for Brain Research at MIT and senior author of the study appearing in the March 4 issue of the Journal of Neuroscience.
"Our study shows that the changes we see in neural response in people with MD are probably driven by the lack of input to a population of neurons, not by a change in visual information processing strategy," said Kanwisher, the Ellen Swallow Richards Professor of Cognitive Neuroscience in MIT's Department of Brain and Cognitive Sciences.
Macular degeneration affects 1.75 million people in the United States alone. Loss of vision begins in the fovea of the retina — the central area providing high acuity vision that we use for reading and other visually demanding tasks. Patients typically compensate by using an adjacent patch of undamaged retina. This "preferred retinal locus" (PRL) is often below the blind region in the visual field, leading patients to roll their eyes upward to look at someone's face, for example.
The visual cortex has a map of the visual field on the retina, and in macular degeneration the neurons mapping to the fovea no longer receive input. But several labs, including Kanwisher's, previously found that the neurons in the visual cortex that once responded only to input from central vision begin responding to stimuli at the PRL. In other words, the visual map has reorganized.
"We wanted to know if the chronic, prior use of the PRL causes the cortical change that we had observed in the past, according to what we call the use-dependent hypothesis," said first author Daniel D. Dilks, a postdoctoral fellow in the Kanwisher lab. "Or, do the deprived neurons respond to stimulation at any peripheral location, regardless of prior visual behavior, according to the use-independent hypothesis?"
The previous studies could not answer this question because they had only tested patients' PRL. This new study tests both the PRL and another peripheral location, using functional magnetic resonance imaging (fMRI) to scan two macular degeneration patients who had no central vision, and consequently had a deprived central visual cortex.
Because patients habitually use the PRL like a new fovea, the deprived cortex might have responded preferentially to this location.
But that is not what the researchers found. Instead, the deprived region responded equally to stimuli at both the preferred and nonpreferred locations.
This finding suggests that the long-term change in visual behavior is not driving the brain's remapping. Instead, the brain changes appear to be a relatively passive response to visual deprivation.
"Macular degeneration is a great opportunity to learn more about plasticity in the adult cortex." Kanwisher said. If scientists could one day develop technologies to replace the lost light-sensitive cells in the fovea, patients might be able to recover central vision since the neurons there are still alive and well.
Chris Baker of the Laboratory of Brain and Cognition (NIMH) and Eli Peli of the Schepens Eye Research Institute also contributed to this study, which was supported by the NIH, Kirschstein-NRSA, and Dr. and Mrs. Joseph Byrne.
Adapted from materials provided by Massachusetts Institute of Technology.

A human failure, seen at face value

Research probes why we have difficulty recognizing faces in photo negatives.

Anne Trafton, MIT News Office, March 13, 2009
Humans excel at recognizing faces, but how we do this has been an abiding mystery in neuroscience and psychology. In an effort to explain our success in this area, researchers are taking a closer look at how and why we fail.
A new study from MIT looks at a particularly striking instance of failure: our impaired ability to recognize faces in photographic negatives. The study, which appears in the Proceedings of the National Academy of Sciences this week, suggests that a large part of the answer might lie in the brain's reliance on a certain kind of image feature.
The work could eventually inform computer vision systems in settings as diverse as industrial quality control and object and face detection. On a different front, the results and methodologies could help researchers probe face-perception skills in children with autism, who are often reported to have difficulty analyzing facial information.
Anyone who remembers the days before digital photography has probably noticed that it's much harder to identify people in photographic negatives than in normal photographs. "You have not taken away any information, but somehow these faces are much harder to recognize," says Pawan Sinha, an associate professor of brain and cognitive sciences and senior author of the PNAS study.
Sinha has previously studied light and dark relationships between different parts of the face, and found that in nearly every normal lighting condition, a person's eyes appear darker than the forehead and cheeks. He theorized that photo negatives are hard to recognize because they disrupt these very strong regularities around the eyes.
To test this idea, Sinha and his colleagues asked subjects to identify photographs of famous people in not only positive and negative images, but also in a third type of image in which the celebrities' eyes were restored to their original levels of luminance, while the rest of the photo remained in negative.
Subjects had a much easier time recognizing these "contrast chimera" images. According to Sinha, that's because the light/dark relationships between the eyes and surrounding areas are the same as they would be in a normal image.
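As a rough illustration of how such a stimulus can be constructed (a sketch under assumptions -- the study's actual image-processing pipeline and eye-region coordinates are not given here), one can invert a grayscale photograph and then copy the original pixel values back into a hand-marked eye region:

    import numpy as np

    def make_contrast_chimera(image, eye_box):
        # image:   2-D uint8 array of luminance values (0-255).
        # eye_box: (top, bottom, left, right) bounds of the eye region,
        #          assumed to be marked by hand for this sketch.
        negative = 255 - image          # full photographic negative
        chimera = negative.copy()
        top, bottom, left, right = eye_box
        # Restore the eyes to their original luminance, preserving the
        # eyes-darker-than-surround regularity the study points to.
        chimera[top:bottom, left:right] = image[top:bottom, left:right]
        return chimera

Everything outside the eye box keeps its reversed contrast, so any gain in recognizability can be attributed to the restored eye region alone.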
Similar contrast relationships can be found in other parts of the face, primarily the mouth, but those relationships are not as consistent. "The relationships around the eyes seem to be particularly significant," says Sinha.
Other studies have shown that people with autism tend to focus on the mouths of people they are looking at, rather than the eyes, so the new findings could help explain why autistic people have such difficulty recognizing faces, says Sinha.
The findings also suggest that neuronal responses in the brain may be based on these relationships between different parts of the face. The team found that when they scanned the brains of people performing the recognition task, regions associated with facial processing (the fusiform face areas) were far more active when looking at the contrast chimeras than when looking at pure negatives.
Other authors of the paper are Sharon Gilad of the Weizmann Institute of Science in Israel and MIT postdoctoral associate Ming Meng, both of whom contributed equally to the work.
The research was funded by the Alfred P. Sloan Foundation and the Jim and Marilyn Simons Foundation.

Friday, March 13, 2009

'Mind-reading' Experiment Highlights How Brain Records Memories

ScienceDaily (Mar. 13, 2009) — It may be possible to "read" a person's memories just by looking at brain activity, according to research carried out by Wellcome Trust scientists. In a study published in the journal Current Biology, they show that our memories are recorded in regular patterns, a finding which challenges current scientific thinking.
Demis Hassabis and Professor Eleanor Maguire at the Wellcome Trust Centre for Neuroimaging at UCL (University College London) have previously studied the role of a small area of the brain known as the hippocampus which is crucial for navigation, memory recall and imagining future events. Now, the researchers have shown how the hippocampus records memory.
When we move around, nerve cells (neurons) known as "place cells", which are located in the hippocampus, activate to tell us where we are. Hassabis, Maguire and colleagues used an fMRI scanner, which measures changes in blood flow within the brain, to examine the activity of these place cells as a volunteer navigated around a virtual reality environment. The data were then analysed by a computer algorithm developed by Demis Hassabis.
"We asked whether we could see any interesting patterns in the neural activity that could tell us what the participants were thinking, or in this case where they were," explains Professor Maguire, a Wellcome Trust Senior Research Fellow. "Surprisingly, just by looking at the brain data we could predict exactly where they were in the virtual reality environment. In other words, we could 'read' their spatial memories."
Earlier studies in rats have shown that spatial memories – how we remember where we are – are recorded in the hippocampus. However, these animal studies, which recorded from individual neurons or at most a few dozen at a time, implied that there was no structure to the way these memories are recorded. Hassabis and Maguire's work appears to overturn this school of thought.
"fMRI scanners enable us to see the bigger picture of what is happening in people's brains," she says. " By looking at activity over tens of thousands of neurons, we can see that there must be a functional structure – a pattern – to how these memories are encoded. Otherwise, our experiment simply would not have been possible to do."
Professor Maguire believes that this research opens up a range of possibilities of seeing how actual memories are encoded across the neurons, looking beyond spatial memories to more enriched memories of the past or visualisations of the future.
"Understanding how we as humans record our memories is critical to helping us learn how information is processed in the hippocampus and how our memories are eroded by diseases such as Alzheimer's," added Demis Hassabis.
"It's also a small step towards the idea of mind reading, because just by looking at neural activity, we are able to say what someone is thinking."
Professor Maguire led a study a number of years ago which examined the brains of London taxi drivers, who spend years learning "The Knowledge" (the maze of London streets). She showed that in these cabbies, an area to the rear of the hippocampus was enlarged, suggesting that this was the area involved in learning location and direction. In the new study, Hassabis, Maguire and colleagues found that the patterns relating to spatial memory were located in this same area, suggesting that the rear of the hippocampus plays a key role in representing the layout of spatial environments.
Journal reference:
Hassabis, D. et al. Decoding neuronal ensembles in the human hippocampus. Current Biology, 12 March 2009
Adapted from materials provided by Wellcome Trust, via EurekAlert!, a service of AAAS.

Neuroscientists Map Intelligence In The Brain

ScienceDaily (Mar. 12, 2009) — Neuroscientists at the California Institute of Technology (Caltech) have conducted the most comprehensive brain mapping to date of the cognitive abilities measured by the Wechsler Adult Intelligence Scale (WAIS), the most widely used intelligence test in the world. The results offer new insight into how the various factors that comprise an "intelligence quotient" (IQ) score depend on particular regions of the brain.
Neuroscientist Ralph Adolphs, Bren Professor of Psychology and Neuroscience and professor of biology at Caltech, Caltech postdoctoral scholar Jan Gläscher, and their colleagues compiled the maps using detailed magnetic resonance imaging (MRI) and computerized tomography (CT) brain scans of 241 neurological patients recruited from the University of Iowa's extensive brain-lesion registry.
All of the patients had some degree of cognitive impairment from events such as strokes, tumor resection, and traumatic brain injury, as assessed with the WAIS. The WAIS test is composed of four indices of intelligence, each consisting of several subtests, which together produce a full-scale IQ score. The four indices are the verbal comprehension index, which represents the ability to understand and produce speech and use language; the perceptual organization index, which involves visual and spatial processing, such as the ability to perceive complex figures; the working memory index, which represents the ability to hold information temporarily in mind (similar to short-term memory); and the processing speed index.
The researchers first transferred the brain scans of all 241 patients to a common reference frame, an approach pioneered by neuroscientist Hanna Damasio of the University of Southern California, a coauthor of the study. Using a technique called voxel-based lesion-symptom mapping (a voxel is the three-dimensional analog of a pixel, representing a volume of about 1 cubic millimeter), Adolphs and his colleagues then correlated the location of brain injuries with scores on each of the four WAIS indices.
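In outline, the technique asks, voxel by voxel, whether patients whose lesions include that voxel score worse than patients whose lesions spare it. A minimal sketch on synthetic data follows; real analyses add corrections for multiple comparisons and more careful statistics, and all of the numbers here are invented.

    import numpy as np
    from scipy.stats import ttest_ind

    rng = np.random.default_rng(0)

    # Synthetic stand-in: 241 patients, a flattened volume of 1,000 voxels,
    # and one behavioural score (think: a WAIS index) per patient.
    n_patients, n_voxels = 241, 1000
    lesioned = rng.random((n_patients, n_voxels)) < 0.05  # True = voxel damaged
    scores = rng.normal(100, 15, size=n_patients)

    # Voxel-based lesion-symptom mapping: at each voxel, compare the scores
    # of patients with vs. without damage there.
    t_map = np.full(n_voxels, np.nan)
    for v in range(n_voxels):
        hit, spared = scores[lesioned[:, v]], scores[~lesioned[:, v]]
        if hit.size >= 5:  # minimum-overlap threshold (an assumed cutoff)
            t_map[v] = ttest_ind(hit, spared).statistic

    # Strongly negative t values mark voxels whose damage predicts low scores.
    print(np.nanmin(t_map))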
"The first question we asked was if there are any parts of the brain that are critically important for these indices or if they are very distributed, with intelligence processed globally in a way that can't be mapped," Adolphs says. With the exception of processing speed, which appears scattered throughout the brain, the lesion mapping showed that the other three cognitive indices really do depend on specific brain regions.
For example, lesions in the left frontal cortex were associated with lower scores on the verbal comprehension index; lesions in the left frontal and parietal cortex (located behind the frontal lobe) were associated with lower scores on the working memory index; and lesions in the right parietal cortex were associated with lower scores on the perceptual organization index.
Somewhat surprisingly, the study revealed a large amount of overlap in the brain regions responsible for verbal comprehension and working memory, which suggests that these two now-separate measures of cognitive ability may actually represent the same type of intelligence, at least as assessed using the WAIS.
The details about the structure of intelligence provided by the study could be useful in future revisions of the WAIS test so that its various subtests are grouped on the basis of neuroanatomical similarity rather than on behavior, as is the case now.
In addition, the brain maps produced by the study could be used as a diagnostic aid. Clinicians could combine the maps with their patients' Wechsler test results to help localize likely areas of brain damage. "It wouldn't be sufficient to be diagnostic, but it would provide information that clinicians could definitely use about what parts of the brain are dysfunctional," Adolphs says.
The converse--using brain-scan results to predict the IQ of patients as measured by the Wechsler test--may also be possible. Although the results wouldn't be as clear-cut as they are in patients with brain lesions, Adolphs says, "you could take a large sample of healthy brains and measure the relative volumes of specific brain areas and draw some associations with these IQ factors."
The work was supported in part by the Akademie der Naturforscher Leopoldina, the National Institutes of Health, and the Gordon and Betty Moore Foundation.
Journal reference:
Jan Gläscher, Daniel Tranel, Lynn K. Paul, David Rudrauf, Chris Rorden, Amanda Hornaday, Thomas Grabowski, Hanna Damasio, Ralph Adolphs. Lesion Mapping of Cognitive Abilities Linked to Intelligence. Neuron, 2009; 61 (5): 681-691 DOI: 10.1016/j.neuron.2009.01.026
Adapted from materials provided by California Institute of Technology, via EurekAlert!, a service of AAAS.

Spotless Mind? Fear Memories In Humans Weakened With Beta-blocker Propranolol

ScienceDaily (Mar. 12, 2009) — A team of Dutch researchers led by Merel Kindt has successfully reduced the fear response. They weakened fear memories in human volunteers by administering the beta-blocker propranolol. Interestingly, the fear response does not return over the course of time.

The findings were published in the March 2009 issue of Nature Neuroscience.
Until recently, it was assumed that the fear memory could not be deleted. However, Kindt's team has demonstrated that changes can indeed be effected in the emotional memory of human beings.

Storing changes:
Before fear memories are stored in the long-term memory, there is a temporary labile phase during which the protein synthesis that ‘records’ the memories takes place. The traditional idea was that the memory is established after this phase and can, therefore, no longer be altered. However, this protein synthesis also occurs when memories are retrieved, so there is once again a labile phase at that moment. The researchers managed to intervene successfully in this phase.
During their experiments the researchers showed images of two different spiders to the volunteers. One of the spider images was accompanied by a pain stimulus and the other was not. Eventually the volunteers exhibited a startle response (fear) upon seeing the first spider even without the pain stimulus being administered. A fear of this spider had therefore been acquired.
One day later the fear memory was reactivated, as a result of which the protein synthesis occurred again. Just before the reactivation, the human volunteers were administered the beta-blocker propranolol. On the third day it was found that the volunteers who had been administered propranolol no longer exhibited a fear response on seeing the spider, unlike the control group who had been administered a placebo. The group that had received propranolol but whose memory was not reactivated still exhibited a strong startle response. The fear response was measured using two electrodes under the eye that measured the eye-blinking reflex. The response measured is one directly initiated by the amygdala, the emotional centre of the brain.

Searching in deleted items:
Cognitive behavioural therapy is currently the prevailing and most effective method for treating anxiety disorders. During such a treatment the patient is exposed to the fear-eliciting stimulus without the feared consequence occurring. This method frequently only achieves short-term results and the fears often return over the course of time.
Interestingly, after the treatment with propranolol and memory reactivation, fear memories could no longer be recalled by means of a much-used method in which the pain stimuli alone are re-administered. This indicates that the fear memory was either completely erased or could no longer be found in the memory. It should be noted, however, that the volunteers could still remember the association between the spider and the pain stimulus, but that this no longer elicited any emotional response. In the next phase of the research, Kindt and her colleagues will investigate the long-term effects of administering propranolol.

Treatment of anxiety disorders:
The researchers expect that the results from this study can contribute to a new procedure for the treatment of patients with anxiety disorders. The method intervenes in the memory in a completely different way to conventional treatments. Whereas the traditional cognitive behavioural therapies frequently focus on the creation of new memories, this method focuses on the weakening of the existing emotional memory.
In 2007, Merel Kindt received a Vici grant from NWO for her innovative research. This study was carried out by Merel Kindt, Marieke Soeter and Bram Vervliet at the Universiteit van Amsterdam.

Journal reference:
Merel Kindt, Marieke Soeter & Bram Vervliet. Beyond extinction: erasing human fear responses and preventing the return of fear. Nature Neuroscience, 2009; 12 (3): 256 DOI: 10.1038/nn.2271
Adapted from materials provided by Netherlands Organization for Scientific Research.

High IQ Linked To Reduced Risk Of Death

ScienceDaily (Mar. 13, 2009) — A study of one million Swedish men has revealed a strong link between cognitive ability and the risk of death, suggesting that government initiatives to increase education opportunities may also have health benefits.
Dr David Batty, a Wellcome Trust research fellow at the MRC Social and Public Health Sciences Unit in Glasgow, and colleagues, found that a lower IQ was strongly associated with a higher risk of death from causes such as accidents, coronary heart disease and suicide.
The researchers studied data from one million Swedish men conscripted into the army at the age of 18. After taking into account whether a person had grown up in a safer, more affluent environment, they found that only education influenced the relationship between IQ and death.
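"Taking into account" here means statistical adjustment: the IQ-mortality association is re-estimated with candidate confounders entered as covariates, and an attenuated IQ coefficient suggests the covariate explains part of the link. The sketch below shows that logic with a Cox proportional-hazards model on fabricated data using the lifelines library; the variable names, effect sizes and censoring rule are all invented for illustration and do not reproduce the study's actual model.

    import numpy as np
    import pandas as pd
    from lifelines import CoxPHFitter

    rng = np.random.default_rng(0)
    n = 1000

    # Fabricated cohort: education correlates with IQ, and both lower the hazard.
    iq = rng.normal(100, 15, n)
    education = 9 + 0.1 * (iq - 100) + rng.normal(0, 2, n)
    hazard = np.exp(-0.01 * iq - 0.05 * education)
    time = rng.exponential(1.0 / hazard)

    df = pd.DataFrame({
        "iq": iq,
        "education": education,
        "duration": np.minimum(time, 40.0),   # censor follow-up at 40 years
        "died": (time < 40.0).astype(int),
    })

    # Unadjusted model: IQ alone.
    CoxPHFitter().fit(df[["iq", "duration", "died"]],
                      duration_col="duration", event_col="died").print_summary()

    # Adjusted model: if the IQ coefficient shrinks toward zero once education
    # is added, education accounts for part of the IQ-mortality association.
    CoxPHFitter().fit(df, duration_col="duration", event_col="died").print_summary()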
The researchers say the link between IQ and mortality could be partially attributed to the healthier behaviours displayed by those who score higher on IQ tests.
"People with higher IQ test scores tend to be less likely to smoke or drink alcohol heavily, they eat better diets, and they are more physically active. So they have a range of better behaviours that may partly explain their lower mortality risk," says Dr Batty.
Previous studies have suggested that preschool education programmes and better nourishment can raise IQ scores. The study suggests this may also have previously unforeseen health benefits, further validating government efforts to improve living conditions and education.
Dr Batty suggests there may also be benefits from simplifying health information for the public.
"If you believe the association between IQ and mortality is at least partially explained by people with a lower IQ having worse behaviours - which is plausible - then it might be that the messages used to change health behaviours are too complicated," he says.
"Messages about diet, including how much or what type of alcohol is beneficial, aren't simple, and the array of strategies available for quitting smoking are diverse and actually quite complicated. If you clarify the options available to people who want to, say, quit smoking, in the short term that may have an effect."
A second study, also co-authored by Dr Batty, used data from more than 4000 US soldiers and followed them for 15 years. The study found the same relationship between IQ scores and mortality, as well as a significant association between higher neuroticism and increased mortality risk.
Journal references:
Batty et al. IQ in Early Adulthood, Socioeconomic Position, and Unintentional Injury Mortality by Middle Age: A Cohort Study of More Than 1 Million Swedish Men. American Journal of Epidemiology, 2008; 169 (5): 606 DOI: 10.1093/aje/kwn381
Weiss et al. Emotionally Stable, Intelligent Men Live Longer: The Vietnam Experience Study Cohort. Psychosomatic Medicine, 2009; DOI: 10.1097/PSY.0b013e318198de78
Adapted from materials provided by Wellcome Trust.