For immediate release.
NEWS RELEASE
NR-41-07 (11/04/07)
For more information, please contact Sara Harris at (619) 525-6325 or sharris@sfn.org
NEW WAYS OF ACQUIRING, ANALYZING, VISUALIZING,
AND COMMUNICATING DATA ADVANCE FRONTIERS IN NEUROSCIENCE
SAN DIEGO, November 4, 2007 - Advances in neuroscience are being fueled by powerful new methods for acquiring, analyzing, visualizing, and communicating research information, say some of the field's leading scientists.
"Technological advances are accelerating the pace of discovery and allowing experiments that were not even dreamed of a decade ago," says David Van Essen, president of the Society for Neuroscience. "They occur across many realms, ranging from the increasingly rapid sequencing of genes and proteins to the amazing ability to visualize brain structure and function in living animals."
These advances have helped foster greater integration among the diverse approaches that clarify brain function in health and disease. For example, the ability to make a variety of conditional gene knockouts in mice has inspired countless collaborative projects whose teams have cutting-edge expertise in genetics, physiology, anatomy, development, pharmacology, and behavior.
The presidential special lectures at this year's annual meeting illustrate how leading neuroscientists conceptualize and use new technologies to advance the field. They focus on three general areas -- neuroinformatics, neuroanatomy, and neural computation.
Neuroinformatics is a relatively new field that provides tremendous opportunities for coping with the extraordinary flood of neuroscience-related data facing every neuroscientist. Improved methods for data mining and data sharing will play a key role in accelerating progress in neuroscience research during the 21st century. The second area is neuroanatomy, where recent advances in imaging-based circuit analysis are opening new horizons using both optical and magnetic resonance imaging (MRI)-based approaches. The third area is neural computation -- how specific circuits transfer information and carry out the computations that underlie brain function.
"A grand goal in neuroscience research is to understand how the interplay of structural, chemical, and electrical signals in nervous tissue gives rise to behavior," says Mark Ellisman, PhD, of the University of California, San Diego. "We are rapidly approaching this horizon as neuroscientists make use of an increasingly powerful arsenal for obtaining data-from the level of molecules to nervous systems -- and engage in the arduous and challenging process of adapting and assembling neuroscience data at all scales of resolution and across disciplines into computerized databases."
One unifying strategy for integrating neuroscience data has been to provide a multiscale structural or spatial scaffold on which existing and accruing elements of neuroscience knowledge can be located, and their relationships explored, from any network-linked computer. However, Ellisman says, even data taken at similar scales from different sites, such as structural and functional magnetic resonance imaging (fMRI) data, are difficult to merge in the absence of agreed-upon standards that would allow such noninvasive brain imaging data to be brought together. Similarly, efforts to integrate multiscale data from different methods using a common spatial framework are hampered by incomplete descriptions of the microanatomy of nervous systems.
Progress toward overcoming these obstacles and integrating neuroscience knowledge is found in key projects including the Biomedical Informatics Research Network (BIRN) and the Cell-Centered Database (CCDB). The BIRN is a broad initiative sponsored by the National Institutes of Health that fosters large-scale collaborations in biomedical science by building on networks linking investigators to advanced information technology resources. Large-scale, cross-institutional imaging studies involving human subjects are underway on Alzheimer's disease, schizophrenia, and autism, using structural and functional MRI. Other BIRN projects focus on mouse models relevant to multiple sclerosis, Alzheimer's disease, and Parkinson's disease. They bring together data from multiple imaging methods, from whole-brain MRI to molecular-scale electron microscopy, providing a more complete anatomical continuum to help organize data coming from modern genomic and proteomic methods.
The CCDB takes up the important challenge of providing a spatial scaffold for assembling knowledge pertaining to structures on a minute scale -- dimensions that encompass macromolecular complexes, organelles, synapses, and other integral components of the fantastically complex structure we call the brain.
"This effort, in concert with other community efforts to build accurate computerized representations of the brain and all its elements, will provide continuous structural scaffolds for organizing brain information," Ellisman says. "Recent studies demonstrate that such multiscale models, assembled in computers from multiple data sources, can enable the testing of hypotheses about the fundamental properties of the nervous system that are pertinent to both normal and abnormal brain function."
Two presidential lectures will discuss how advances in neuroimaging are opening up new vistas for deciphering the complex circuitry of the brain.
Heidi Johansen-Berg, DPhil, of the Oxford Centre for Functional MRI of the Brain in England, has pioneered the development and application of new brain scanning technology that allows scientists to determine the "wiring patterns" of the living human brain. The brain is made up of many different regions, each with a specialized role to play in the functioning of the whole organism. These regions communicate with each other via a complex network of fiber pathways.
"Defining these pathways is crucial to our understanding of how the brain works, as the communications network dictates how we receive and evaluate information about the world and how we produce co-coordinated responses," Johansen-Berg says.
Until recently, it was possible to visualize these wiring patterns only by dissecting postmortem brains or by performing invasive tracer studies in animals. Now, a new type of brain scan called diffusion MRI allows researchers to visualize connections noninvasively in the living human brain. This technique works by measuring the diffusion of water molecules in the brain. Because water molecules diffuse more quickly along a brain fiber than across it, images can be used to estimate the routes taken by the fiber pathways. The technique for tracing these fibers using diffusion MRI, known as tractography, opens up many new possibilities for testing how brain pathways develop and age, how they vary between individuals, and how they are disrupted in disease.
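As a rough illustration of that principle -- not Johansen-Berg's actual analysis pipeline -- the Python sketch below builds a single hypothetical diffusion tensor, computes its fractional anisotropy (a standard measure of how directional the diffusion is), and takes one streamline step along the tensor's principal direction. The tensor values, starting position, and step size are invented for illustration.

```python
# Toy illustration of diffusion-tensor tractography (hypothetical values).
# Water diffuses faster along a fiber than across it, so the tensor's
# largest eigenvector approximates the local fiber direction.
import numpy as np

# Hypothetical diffusion tensor at one voxel (units: 10^-3 mm^2/s).
D = np.array([[1.7, 0.0, 0.0],
              [0.0, 0.3, 0.0],
              [0.0, 0.0, 0.3]])

eigenvalues, eigenvectors = np.linalg.eigh(D)
l1, l2, l3 = sorted(eigenvalues, reverse=True)

# Fractional anisotropy: 0 for isotropic diffusion, near 1 for a single fiber bundle.
mean_d = (l1 + l2 + l3) / 3.0
fa = np.sqrt(1.5 * ((l1 - mean_d)**2 + (l2 - mean_d)**2 + (l3 - mean_d)**2)
             / (l1**2 + l2**2 + l3**2))

# One streamline step: move a small distance along the principal diffusion direction.
fiber_direction = eigenvectors[:, np.argmax(eigenvalues)]
position = np.array([10.0, 12.0, 8.0])   # current point in mm (invented)
step_size = 0.5                           # mm (invented)
next_position = position + step_size * fiber_direction

print(f"FA = {fa:.2f}, next step lands at {next_position}")
```

Real tractography software repeats a step like this voxel by voxel through the brain, typically ending a streamline when the anisotropy falls below a threshold or the path bends too sharply.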
Diffusion tractography has already supplied novel insights into human brain anatomy and how it is affected by disease. For example, one important goal in neuroscience is to define the borders between different regions of the cerebral cortex, as these regions perform different functions and can be affected differently by disease. These borders were first mapped a century ago by anatomists who inspected slices of postmortem brain under a microscope and plotted where the patterns of nerve cells changed. Like early maps of the earth's surface, these initial maps of the brain were inaccurate in many regions.
Progress in refining these borders has been slow, and until recently it was not possible to define these boundaries in living people. "However, we know that each region has a unique wiring pattern, and so now, by tracing the connections of the cerebral cortex using diffusion tractography and detecting where the wiring patterns change, it is possible to define regional borders in the living human brain for the first time," says Johansen-Berg. "Such noninvasive definition of brain regions will help inform functional brain mapping studies and has potential clinical application, for example, in improving neurosurgical targeting."
Karel Svoboda, PhD, of the Howard Hughes Medical Institute (HHMI) at Janelia Farm in Virginia, focuses on circuit analyses at the cellular level using novel optical techniques. The human brain contains about 100 billion neurons, each neuron connected to thousands of other neurons through tiny junctions called synapses. Synapses are critical for most aspects of brain function, and defects at synapses are thought to underlie many types of neurodevelopmental and neurodegenerative disorders. The typical synapse consists of a presynaptic terminal and a postsynaptic dendritic spine. Synapses are tiny -- approximately 1 micrometer -- and tightly packed in the brain.
Synapses also are among the most dynamic structures in the brain, and the timescales of synaptic plasticity have a huge range. "At the fast end of the spectrum, neurotransmitter release can be modulated in a use-dependent manner in as little as a few milliseconds, a phenomenon that is thought to contribute to short-term memory," Svoboda says. "On the slow end of the spectrum, changes in synaptic strength are believed to encode long-term memories, implying that the structure and function of some synapses might be maintained over years."
Now, time-lapse images of synapses are beginning to provide answers to some long-standing questions about synaptic function and plasticity. For example, a technique called 2-photon calcium imaging has for the first time allowed scientists to measure the spread of neurotransmitter in tissue, revealing that synaptic transmission is point to point, i.e., synapse specific. Time-lapse imaging also shows that some synapses can persist for years, perhaps for as long as the animal lives, providing a structural correlate for long-term memory storage. Other synapses appear and disappear.
In other research, Sebastian Seung, PhD, of the Massachusetts Institute of Technology and HHMI, looks at new approaches to neural computation. In the past few decades, mathematical models of neural networks have been used to demonstrate a number of basic principles: 1) The synaptic connections of a network can be organized to support certain patterns of neural activity, which in turn generate behavior; 2) activity-dependent synaptic plasticity can enable a network to self-organize; and 3) a network can iteratively improve its performance via reward-dependent synaptic plasticity.
How do brains learn skills from repeated practice? According to recent theories of "hedonistic" and "empiric" synapses, one way is through the reinforcement of random variations in neural activity by a global reward signal. If the synapses in a neural network are modified according to specific rules prescribed by these theories, the network can learn to maximize reward. The theories draw from a long tradition of research on reinforcement learning in computer science and are conceptually related to the "selfish" gene of Darwinian evolution.
For decades, the idea of the Hebb synapse has dominated the discourse of neuroscientists about learning and memory. Canadian psychologist Donald Hebb theorized in 1949 that if, during a learning experience, two neurons tend to be active simultaneously, the synapse between them might be strengthened, linking the cells into the same circuit.
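To make the contrast concrete, here is a minimal, hypothetical Python sketch of the two kinds of learning rule discussed above: a classic Hebbian update, which strengthens a synapse when presynaptic and postsynaptic activity coincide, and a reward-modulated update in the spirit of the "hedonistic" synapse, in which a random variation in activity is reinforced by a global reward signal. The single-synapse setup, learning rate, and reward function are invented and greatly simplify the theories Seung describes.

```python
# Minimal contrast between Hebbian and reward-modulated plasticity.
# All numbers and the single-synapse setup are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
eta = 0.01   # learning rate (invented)

# Hebbian rule: strengthen the synapse when pre- and postsynaptic cells fire together.
def hebbian_update(w, pre, post):
    return w + eta * pre * post

# Reward-modulated rule: keep a random perturbation of activity in proportion
# to how much it improved a global reward signal.
def reward_update(w, pre, reward, baseline, noise):
    return w + eta * (reward - baseline) * noise * pre

w = 0.5
pre, post = 1.0, 1.0
w = hebbian_update(w, pre, post)          # coincident activity -> stronger synapse

noise = rng.normal(0.0, 0.1)              # random variation in the neuron's output
output = w * pre + noise
reward = -(output - 1.5)**2               # hypothetical task: drive output toward 1.5
baseline = -(w * pre - 1.5)**2            # reward expected without the perturbation
w = reward_update(w, pre, reward, baseline, noise)

print(f"updated weight: {w:.3f}")
```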
Theories of hedonistic and empiric synapses are meant to help neuroscientists think outside the Hebbian box and to search for other types of synaptic plasticity that could be important for learning. A promising model system is the zebra finch, a songbird that learns to sing through extensive practice. Recent studies suggest that song learning requires a specific brain area that performs "experiments" on the song-generating brain areas, says Seung. "These 'experiments' are used for trial-and-error learning, and might be conducted by the hypothetical empiric synapses."
Today, the greatest challenge facing the science of neural networks is to improve our ability to test theories. Although theorists have formulated many plausible hypotheses, it remains impossible or impractical to test most of them conclusively. "Fortunately, the experimental methods available to neuroscientists are developing rapidly, driven by the convergence of genetics, imaging, and computation," Seung says. "Soon, it may become possible to test neural network theories in more direct ways."
Because of emerging technological advances, it is becoming possible to construct maps of the brain that are more detailed than ever before. These maps, called connectomes, describe the structure of the brain's neural networks at the level of individual neurons and synapses. Connectomes are determined by imaging a specimen of brain tissue in three dimensions at nanoscale resolution, and then using computational methods to extract information about neural connectivity from the images. The computational challenges are daunting, since brain volumes as small as a cubic millimeter can yield data sets containing terabytes (a million million bytes) or petabytes (a million billion bytes) of information.
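A back-of-the-envelope calculation shows where such numbers come from; the voxel size and bytes per voxel below are assumptions chosen only for illustration.

```python
# Back-of-the-envelope estimate of connectomics data volume.
# Voxel size and bytes per voxel are illustrative assumptions.
voxel_nm = 10                           # assumed isotropic voxel size in nanometers
side_nm = 1_000_000                     # 1 millimeter expressed in nanometers
voxels_per_side = side_nm // voxel_nm   # 100,000 voxels along each edge
total_voxels = voxels_per_side ** 3     # about 1e15 voxels in a cubic millimeter
bytes_per_voxel = 1                     # assumed 8-bit grayscale imaging
total_bytes = total_voxels * bytes_per_voxel

print(f"{total_voxels:.1e} voxels -> about {total_bytes / 1e15:.0f} petabyte(s) of raw data")
```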
"In my laboratory, image analysis algorithms for connectomics are created using the machine learning approach," Seung says. "If automated image analysis could be made sufficiently reliable, this would help give rise to a new field called connectomics, defined by the high-throughput generation of data about neural connectivity, and the subsequent mining of that data for knowledge about the brain."