paper-mining nips knowledge-graph by maker-knowledge-mining
Conference on Neural Information Processing Systems
The Conference on Neural Information Processing Systems (NIPS) is a machine learning and computational neuroscience conference held every December. The conference is a single-track meeting that includes invited talks as well as oral and poster presentations of refereed papers. It began in 1987 as a computational cognitive science conference and was held in Denver, United States until 2000. From 2001 until 2010 the conference was held in Vancouver, Canada, and in 2011 in Granada, Spain. In 2012 the conference moved to Lake Tahoe for two years, and in 2014 it was held in Montreal, Canada.
Papers in early NIPS proceedings tended to use neural networks as a tool for understanding how the human brain works, which attracted researchers with interests in biological learning systems as well as those interested in artificial learning systems. Since then, the biological and artificial systems research streams have diverged, and recent NIPS proceedings are dominated by papers on machine learning, artificial intelligence and statistics, although computational neuroscience remains an aspect of the conference.
Besides machine learning and neuroscience, other fields represented at NIPS include cognitive science, psychology, computer vision, statistical linguistics, and information theory. The 'Neural' in the NIPS acronym is now something of a historical relic, and the conference spans a wide range of topics.
The proceedings from the conferences have been published in book form by MIT Press and Morgan Kaufmann under the name Advances in Neural Information Processing Systems.
The meeting is related to the COSYNE meetings held in Salt Lake City, United States.
from wiki http://en.wikipedia.org/wiki/Conference_on_Neural_Information_Processing_Systems
Computational neuroscience
Computational neuroscience is the study of brain function in terms of the information processing properties of the structures that make up the nervous system. It is an interdisciplinary science that links the diverse fields of neuroscience, cognitive science, and psychology with electrical engineering, computer science, mathematics, and physics.
Computational neuroscience is distinct from psychological connectionism and from learning theories of disciplines such as machine learning, neural networks, and computational learning theory in that it emphasizes descriptions of functional and biologically realistic neurons (and neural systems) and their physiology and dynamics. These models capture the essential features of the biological system at multiple spatio-temporal scales, from membrane currents, proteins, and chemical coupling to network oscillations, columnar and topographic architecture, and learning and memory.
These computational models are used to frame hypotheses that can be directly tested by biological and/or psychological experiments.
History
The term "computational neuroscience" was introduced by Eric L. Schwartz, who organized a conference, held in 1985 in Carmel, California, at the request of the Systems Development Foundation to provide a summary of the current status of a field which until that point was referred to by a variety of names, such as neural modeling, brain theory and neural networks. The proceedings of this definitional meeting were published in 1990 as the book Computational Neuroscience. The first open international meeting focused on Computational Neuroscience was organized by James M. Bower and John Miller in San Francisco, California in 1989 and has continued each year since as the annual CNS meeting The first graduate educational program in computational neuroscience was organized as the Computational and Neural Systems Ph.D. program at the California Institute of Technology in 1985.
The early historical roots of the field can be traced to the work of people such as Louis Lapicque, Hodgkin & Huxley, Hubel & Wiesel, and David Marr, to name a few. Lapicque introduced the integrate-and-fire model of the neuron in a seminal article published in 1907; because of its simplicity, this model is still one of the most popular models in computational neuroscience for both cellular and network studies, as well as in mathematical neuroscience (see the review article published for the centenary of Lapicque's original paper). About 40 years later, Hodgkin & Huxley developed the voltage clamp and created the first biophysical model of the action potential. Hubel & Wiesel discovered that neurons in the primary visual cortex, the first cortical area to process information coming from the retina, have oriented receptive fields and are organized in columns. David Marr's work focused on the interactions between neurons, suggesting computational approaches to the study of how functional groups of neurons within the hippocampus and neocortex interact, store, process, and transmit information. Computational modeling of biophysically realistic neurons and dendrites began with the work of Wilfrid Rall, with the first multicompartmental model using cable theory.
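In modern notation, Lapicque's leaky integrate-and-fire model reduces the neuron to a single membrane equation (the symbols below follow textbook convention and are not taken from any particular source):

\tau_m \frac{dV}{dt} = -(V - V_{\mathrm{rest}}) + R_m I(t)

A spike is registered, and V is reset to a value V_{\mathrm{reset}}, whenever V crosses a threshold V_{\mathrm{th}}; the subthreshold dynamics are simply those of a leaky RC circuit, which is what makes the model so tractable.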
Major topics
Research in computational neuroscience can be roughly categorized into several lines of inquiry. Most computational neuroscientists collaborate closely with experimentalists in analyzing novel data and synthesizing new models of biological phenomena.
Single-neuron modeling
Even single neurons have complex biophysical characteristics. Hodgkin and Huxley's original model employed only two voltage-sensitive currents, the fast-acting sodium and the delayed-rectifier potassium. Though successful in predicting the timing and qualitative features of the action potential, it nevertheless failed to predict a number of important features such as adaptation and shunting. Scientists now believe that there is a wide variety of voltage-sensitive currents, and the implications of the differing dynamics, modulations, and sensitivities of these currents are an important topic of computational neuroscience.
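For reference, the standard form of the Hodgkin-Huxley membrane equation captures exactly the two active currents mentioned above plus a passive leak (the conductance symbols are the conventional ones, not notation from this text):

C_m \frac{dV}{dt} = -\bar{g}_{\mathrm{Na}} m^3 h (V - E_{\mathrm{Na}}) - \bar{g}_{\mathrm{K}} n^4 (V - E_{\mathrm{K}}) - \bar{g}_L (V - E_L) + I_{\mathrm{ext}}

Each gating variable x \in \{m, h, n\} relaxes as dx/dt = \alpha_x(V)(1 - x) - \beta_x(V) x, with voltage-dependent rates \alpha_x and \beta_x that Hodgkin and Huxley fit to voltage-clamp data.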
The computational functions of complex dendrites are also under intense investigation. There is a large body of literature regarding how different currents interact with geometric properties of neurons.
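Much of this literature builds on the passive cable theory introduced by Rall (mentioned in the History section), in which the potential along a dendrite of uniform diameter obeys

\lambda^2 \frac{\partial^2 V}{\partial x^2} = \tau_m \frac{\partial V}{\partial t} + V

where \lambda is the electrotonic length constant and \tau_m the membrane time constant; branching geometry and active currents enter through boundary conditions and additional terms.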
Some models also track biochemical pathways at very small scales, such as those of dendritic spines or synaptic clefts.
There are many software packages, such as GENESIS and NEURON, that allow rapid and systematic in silico modeling of realistic neurons. Blue Brain, a project founded by Henry Markram from the École Polytechnique Fédérale de Lausanne, aims to construct a biophysically detailed simulation of a cortical column on the Blue Gene supercomputer.
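As an illustration of what these packages make routine, here is a minimal single-compartment sketch using NEURON's Python interface; the morphology, stimulus, and duration are arbitrary demonstration values, not parameters from any published model:

from neuron import h

h.load_file("stdrun.hoc")                 # NEURON's standard run system

soma = h.Section(name="soma")             # one cylindrical compartment
soma.L = soma.diam = 20                   # microns; arbitrary demo geometry
soma.insert("hh")                         # built-in Hodgkin-Huxley channels

stim = h.IClamp(soma(0.5))                # current clamp at mid-soma
stim.delay, stim.dur, stim.amp = 5, 1, 0.3   # ms, ms, nA

v = h.Vector().record(soma(0.5)._ref_v)   # membrane potential trace
t = h.Vector().record(h._ref_t)           # time stamps

h.finitialize(-65)                        # start at rest (mV)
h.continuerun(40)                         # simulate 40 ms

print(max(v))                             # a peak above 0 mV indicates a spike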
Development, axonal patterning, and guidance
How do axons and dendrites form during development? How do axons know where to target, and how do they reach those targets? How do neurons migrate to their proper positions in the central and peripheral nervous systems? How do synapses form? We know from molecular biology that distinct parts of the nervous system release distinct chemical cues, from growth factors to hormones, that modulate and influence the growth and development of functional connections between neurons.
Theoretical investigations into the formation and patterning of synaptic connection and morphology are still nascent. One hypothesis that has recently garnered some attention is the minimal wiring hypothesis, which postulates that the formation of axons and dendrites effectively minimizes resource allocation while maintaining maximal information storage.
Sensory processing
Early theoretical models of sensory processing are credited to Horace Barlow. Somewhat similar to the minimal wiring hypothesis described in the preceding section, Barlow understood the processing of the early sensory systems to be a form of efficient coding, in which neurons encoded information in a way that minimized the number of spikes. Experimental and computational work has since supported this hypothesis in one form or another.
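A later formalization in this efficient-coding tradition, not Barlow's original statement, is the sparse coding objective, which trades reconstruction fidelity against the number of active units:

\min_{\mathbf{a}} \; \lVert \mathbf{x} - D\mathbf{a} \rVert_2^2 + \lambda \lVert \mathbf{a} \rVert_1

Here \mathbf{x} is the sensory input, the columns of D are learned features, \mathbf{a} is the vector of neural activations, and \lambda sets the cost of each additional active unit.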
Current research in sensory processing is divided between biophysical modelling of different subsystems and more theoretical modelling of perception. Current models of perception suggest that the brain performs some form of Bayesian inference, integrating different sources of sensory information to generate our perception of the physical world.
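A standard toy version of such Bayesian integration combines two noisy Gaussian cues about the same quantity, weighting each by its reliability; the cue values below are made up purely for illustration:

def fuse(mu1, var1, mu2, var2):
    """Posterior mean and variance for two independent Gaussian cues."""
    w1 = var2 / (var1 + var2)            # the more reliable cue gets more weight
    mu = w1 * mu1 + (1 - w1) * mu2
    var = var1 * var2 / (var1 + var2)    # the fused estimate is less uncertain
    return mu, var

# e.g. a precise visual cue vs. a noisy auditory cue to object location
print(fuse(10.0, 1.0, 14.0, 4.0))        # -> (10.8, 0.8)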
Memory and synaptic plasticity
Early models of memory are primarily based on the postulates of Hebbian learning. Biologically relevant models such as the Hopfield net have been developed to address the properties of associative (content-addressable) memory that occur in biological systems. These attempts focus primarily on the formation of medium- and long-term memory, localized in the hippocampus. Models of working memory, relying on theories of network oscillations and persistent activity, have been built to capture some features of the prefrontal cortex in context-dependent memory.
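A minimal sketch of a Hopfield network shows the content-addressable idea at work: patterns are stored with a Hebbian outer-product rule and recalled from corrupted cues (network size, pattern count, and update schedule here are arbitrary demonstration choices):

import numpy as np

rng = np.random.default_rng(0)
N = 100                                      # binary (+/-1) neurons
patterns = rng.choice([-1, 1], size=(3, N))  # three random stored memories

W = sum(np.outer(p, p) for p in patterns) / N   # Hebbian outer-product rule
np.fill_diagonal(W, 0)                          # no self-connections

cue = patterns[0].copy()                     # corrupt a memory: flip 15% of bits
flip = rng.choice(N, size=15, replace=False)
cue[flip] *= -1

state = cue.copy()
for _ in range(10):                          # asynchronous recall sweeps
    for i in rng.permutation(N):
        state[i] = 1 if W[i] @ state >= 0 else -1

print(np.mean(state == patterns[0]))         # fraction recovered, typically 1.0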
A major problem in neurophysiological memory is how it is maintained and changed across multiple time scales. Unstable synapses are easy to train but also prone to stochastic disruption. Stable synapses forget less easily, but they are also harder to consolidate. One recent computational hypothesis involves cascades of plasticity that allow synapses to function at multiple time scales. Stereochemically detailed models of the acetylcholine-receptor-based synapse, using the Monte Carlo method and working at the time scale of microseconds, have been built. It is likely that computational tools will contribute greatly to our understanding of how synapses function and change in response to external stimuli in the coming decades.
Behaviors of networks
Biological neurons are connected to each other in a complex, recurrent fashion. Unlike in most artificial neural networks, these connections are sparse and usually specific. It is not known how information is transmitted through such sparsely connected networks, nor what the computational functions of these specific connectivity patterns are, if any.
The interactions of neurons in a small network can often be reduced to simple models such as the Ising model. The statistical mechanics of such simple systems are well characterized theoretically. Some recent evidence suggests that the dynamics of arbitrary neuronal networks can be reduced to pairwise interactions. It is not known, however, whether such descriptive dynamics impart any important computational function. With the emergence of two-photon microscopy and calcium imaging, we now have powerful experimental methods with which to test new theories regarding neuronal networks.
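Concretely, the pairwise (Ising-type) description assigns each binary network state s a probability proportional to exp(\sum_i h_i s_i + \sum_{i<j} J_{ij} s_i s_j). The sketch below draws samples from such a model with a Metropolis update; the biases and couplings are random numbers chosen purely for illustration:

import numpy as np

rng = np.random.default_rng(1)
N = 20                                   # binary neurons, s_i in {-1, +1}
h = rng.normal(0, 0.1, N)                # biases (illustrative values)
J = rng.normal(0, 0.1, (N, N))
J = (J + J.T) / 2                        # symmetric pairwise couplings
np.fill_diagonal(J, 0)

s = rng.choice([-1, 1], N)
for _ in range(50_000):                  # Metropolis sampling
    i = rng.integers(N)
    dE = 2 * s[i] * (h[i] + J[i] @ s)    # energy change if s_i is flipped
    if dE <= 0 or rng.random() < np.exp(-dE):
        s[i] = -s[i]

print(s)                                 # one sample from the pairwise model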
In some cases the complex interactions between inhibitory and excitatory neurons can be simplified using mean-field theory, which gives rise to population models of neural networks. While many neurotheorists prefer such models for their reduced complexity, others argue that uncovering structure-function relations depends on including as much neuronal and network structure as possible. Models of this type are typically built in large simulation platforms such as GENESIS or NEURON. There have been some attempts to provide unified methods that bridge and integrate these levels of complexity.
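The Wilson-Cowan equations are the classic example of such a population model; the sketch below integrates a simplified form with forward Euler, with coupling weights and inputs chosen arbitrarily for illustration:

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))      # population activation function

# Arbitrary illustrative parameters, not values from a fitted model
w_ee, w_ei, w_ie, w_ii = 12.0, 10.0, 10.0, 2.0   # coupling strengths
P_e, P_i = 2.0, 0.0                      # external drive to each population
tau, dt, T = 10.0, 0.1, 200.0            # time constant and integration setup

E, I = 0.1, 0.1                          # excitatory / inhibitory mean rates
for _ in range(int(T / dt)):             # forward-Euler integration
    dE = (-E + sigmoid(w_ee * E - w_ei * I + P_e)) / tau
    dI = (-I + sigmoid(w_ie * E - w_ii * I + P_i)) / tau
    E, I = E + dt * dE, I + dt * dI

print(E, I)                              # settles to a fixed point or limit cycle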
Cognition, discrimination, and learning
Computational modeling of higher cognitive functions has only recently begun. Experimental data comes primarily from single-unit recording in primates. The frontal lobe and parietal lobe function as integrators of information from multiple sensory modalities. There are some tentative ideas regarding how simple mutually inhibitory functional circuits in these areas may carry out biologically relevant computation.
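One way to caricature those ideas is as two mutually inhibitory accumulators racing to a decision threshold, in the spirit of leaky competing accumulator models; every number below (evidence, leak, inhibition, noise, threshold) is an illustrative toy value:

import numpy as np

rng = np.random.default_rng(2)
evidence = 0.15                          # stimulus favors choice 1
leak, inhibition = 0.1, 0.2              # self-decay and mutual suppression
threshold, dt = 1.0, 1.0

x1 = x2 = 0.0                            # activities of the two populations
for step in range(10_000):
    n1, n2 = rng.normal(0, 0.05, 2)      # noisy inputs
    x1 += dt * (evidence - leak * x1 - inhibition * x2) + n1
    x2 += dt * (0.0 - leak * x2 - inhibition * x1) + n2
    x1, x2 = max(x1, 0.0), max(x2, 0.0)  # firing rates cannot be negative
    if x1 > threshold or x2 > threshold:
        break

print("choice:", 1 if x1 > x2 else 2, "after", step + 1, "steps")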
The brain seems to be able to discriminate and adapt particularly well in certain contexts. For instance, human beings seem to have an enormous capacity for memorizing and recognizing faces. One of the key goals of computational neuroscience is to dissect how biological systems carry out these complex computations efficiently and potentially replicate these processes in building intelligent machines.
The brain's large-scale organizational principles are illuminated by many fields, including biology, psychology, and clinical practice. Integrative neuroscience attempts to consolidate these observations through unified descriptive models and databases of behavioral measures and recordings. These are the bases for some quantitative modeling of large-scale brain activity.
The Computational Representational Understanding of Mind (CRUM) is another attempt at modeling human cognition through simulated processes, such as acquired rule-based systems and the manipulation of visual representations in decision making.
Consciousness
One of the ultimate goals of psychology and neuroscience is to be able to explain the everyday experience of conscious life. Francis Crick and Christof Koch made some attempts at formulating a consistent framework for future work on the neural correlates of consciousness (NCC), though much of the work in this field remains speculative.
See also
Biological neuron models
Brain-computer interface
Connectionism
Mind uploading
Neural coding
Neural engineering
Neural network
Neurocomputational speech processing
Neuroinformatics
Simulated reality
Technological singularity, a hypothesized point at which artificial intelligence would exceed the capabilities of the human brain
from wiki http://en.wikipedia.org/wiki/Computational_neuroscience