23
Design Considerations for a Computational Architecture of Human Cognition

Narayan Srinivasa

Center for Neural and Emergent Systems, HRL Laboratories LLC, USA

23.1 Introduction

How does the brain produce cognitive behavior, and is it possible to abstract cognition using computers? This question has intrigued us for a very long time. Cognition is related to processes such as thinking, reasoning, memorizing, problem-solving, analyzing, and applying knowledge. Most attempts to understand brain function from a cognitive perspective describe it as the computation of behavioral responses from internal representations of stimuli and stored representations of information from past experience. The origins of this description can be traced back to two key developments in the early twentieth century. Alan Turing's pioneering work [1] in machine theory defined computation as formally equivalent to the manipulation of symbols in a temporary buffer. Similarly, the pioneering work on telephone communication by Shannon and Weaver [2] resulted in a formal definition of information in which the informational content of a signal is inversely related to the probability of that signal arising from randomness.

These developments launched computer science into prominence, and as computers grew in functional complexity, the analogy between computers and the brain became widely recognized. The basic premise for this analogy was that both computers and the brain receive information from the external environment and act upon this information in complex ways. It soon appeared that digital computers had joined human brains as the only examples of systems capable of complex reasoning. This analogy between computers and the brain (also known as the computer metaphor) provided a candidate mechanism to explain cognition as akin to a digital computer program that manipulates internal representations according to a set of rules. The computer metaphor also appeared to map naturally onto memory and provided an explanation for Descartes' mind-body dualism [3], in which mental entities are akin to software, whereas the physical mechanisms found in the brain are akin to hardware.

The extensive use of the computer metaphor has led to the application of symbolic computation and serial processing to the construction of human-like adaptive behaviors. The task of brain science has accordingly become focused on answering the question of how the brain computes [4]. The key issues, such as serial versus parallel processing, analog versus digital coding, and symbolic versus nonsymbolic representations, are being addressed using the computer metaphor, wherein perception is akin to input, action is akin to output, and cognition is akin to computation. However, despite several decades of research aimed at developing machines that exhibit cognitive behaviors, traditional algorithms derived from the computer metaphor have shown very limited utility in complex, real-world environments. This impasse has prompted a rethinking of how cognitive behavior might be realized in machines that seek to emulate the brain.

23.2 Features of Biological Computation

To make progress in understanding human cognition, it is important to realize how different biological computation is from the digital computers of today (see Figure 23.1 for a summary). The brain is composed of very noisy analog computing elements, including neurons and synapses. Neurons operate as relaxation oscillators. Synapses are implicated in memory formation in the brain and can resolve only three to four bits of information each [5]. The dynamics of these elements are asynchronous [6] and thus clock-free [7]. Despite being clock-free, coordination can emerge via interactions between neurons because they operate as oscillators. The interaction can be very weak, and sometimes hardly perceptible, but it most often causes a qualitative transition: a neuron adjusts its rhythm in conformity with the rhythms of other neurons. This adjustment of rhythms due to an interaction is the essence of synchronization and is a universal phenomenon found in nature.
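To make this concrete, the sketch below integrates two weakly coupled phase oscillators using the standard Kuramoto model, a common abstraction of such rhythmic units. The model choice and all parameter values are illustrative assumptions, not taken from the chapter: above a modest coupling strength the two oscillators adjust their rhythms and settle into a constant phase difference (phase locking), whereas weaker coupling leaves them drifting.

```python
# A minimal sketch (illustrative, not the chapter's model) of how two weakly coupled
# oscillators with slightly different natural frequencies adjust their rhythms.
import numpy as np

def kuramoto_pair(w1=1.00, w2=1.05, K=0.1, dt=0.01, steps=20000):
    """Integrate two coupled phase oscillators and return the final phase gap."""
    th1, th2 = 0.0, np.pi / 2            # arbitrary initial phases
    for _ in range(steps):
        d1 = w1 + K * np.sin(th2 - th1)  # each oscillator is pulled toward the other
        d2 = w2 + K * np.sin(th1 - th2)
        th1 += d1 * dt
        th2 += d2 * dt
    return np.angle(np.exp(1j * (th2 - th1)))  # phase difference wrapped to (-pi, pi]

print("gap with stronger coupling:", kuramoto_pair(K=0.1))   # small, constant gap: locked
print("gap with weak coupling    :", kuramoto_pair(K=0.01))  # too weak to lock: gap drifts
```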

img

Figure 23.1 A summary comparison of various aspects of computation in a digital computer and in the brain

The scale of these interactions can span from a few neurons all the way to the entire network of neurons in the brain at various time instants, depending upon the context of the brain's interaction with its environment. This implies that the brain is a complex physical system. It is also known that the brain exhibits persistent asynchronous background activity (PABA) even in the absence of external inputs, and careful analysis has revealed rich spatio-temporal patterns of activity related to past experience in what was previously dismissed as background noise [8,9]. Recent insights into the relationship between PABA and brain organization suggest that the brain operates in a regime of criticality, without a single, dominant temporal or spatial scale [10–13], as reflected in power spectra with “1/f noise” (see Figure 23.2). In other words, the dynamics of brain interactions are scale-free.

img

Figure 23.2 Plot showing the inverse relationship between the scale of interaction (x axis) and the number of neurons interacting at each scale (y axis) in a typical cortical laminar simulation. The slope of this relationship is −3/2, which is characteristic of scale-free behavior in cortical networks found in the brain
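The scale-free relationship sketched in Figure 23.2 can be checked numerically: a power law with exponent −3/2 appears as a straight line with that slope in log-log coordinates. The sketch below fits such a slope to synthetic counts; the data are fabricated for illustration only and do not come from a simulation in the chapter.

```python
# Hedged sketch: verifying a power-law exponent by linear regression in log-log
# coordinates. The 'counts' array is synthetic (counts ~ scale^(-3/2) with noise),
# standing in for the neurons-per-scale data of the kind plotted in Figure 23.2.
import numpy as np

rng = np.random.default_rng(0)
scales = np.logspace(0, 3, 30)                      # interaction scales (arbitrary units)
counts = scales ** (-1.5) * rng.lognormal(0, 0.1, scales.size)

slope, intercept = np.polyfit(np.log10(scales), np.log10(counts), 1)
print(f"estimated exponent: {slope:.2f}")           # close to -1.5, i.e. scale-free
```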

Memory and computation are integrated at all scales, unlike in digital systems where they are clearly separated. Synapses are implicated in memory formation through biophysical changes to the receptors of the postsynaptic neuron. These changes can be both short-term and long-term in nature. In the short-term case, the synaptic efficacy at a given postsynaptic neuron is modulated depending upon the dynamics of the available neurotransmitter resources and the fraction of these resources that are utilized, based on the frequency of presynaptic action potentials [14]. A number of recent experimental studies [15–20] suggest that repeated pairing of pre- and postsynaptic activity in the form of action potentials, or spikes, can lead to long-term changes in synaptic efficacy as well. The sign and magnitude of the change in synaptic efficacy depend on the relative timing between the pre- and postsynaptic spikes; this is known as spike timing-dependent plasticity (STDP). The synaptic conductances represent a form of fully distributed memory, with no single synapse corresponding to any particular representation or encoding. Because memory is distributed across synapses in this way, there is, in general, no single synapse or single neural firing activity that corresponds to a particular item or concept [7]. This means that the brain is symbol-free.
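As an illustration of the STDP rule just described, the sketch below uses the standard pair-based exponential window from the STDP literature; the amplitudes and time constants are generic illustrative choices, not values given in the chapter. A presynaptic spike shortly before a postsynaptic spike potentiates the synapse, while the reverse order depresses it.

```python
# Minimal pair-based STDP sketch (generic exponential window; illustrative parameters).
import numpy as np

A_PLUS, A_MINUS = 0.01, 0.012      # maximum potentiation / depression per spike pair
TAU_PLUS, TAU_MINUS = 20.0, 20.0   # time constants of the STDP window (ms)

def stdp_dw(t_pre, t_post):
    """Weight change for one pre/post spike pair; dt > 0 means pre precedes post."""
    dt = t_post - t_pre
    if dt > 0:
        return A_PLUS * np.exp(-dt / TAU_PLUS)     # pre-before-post: potentiation (LTP)
    else:
        return -A_MINUS * np.exp(dt / TAU_MINUS)   # post-before-pre: depression (LTD)

print(stdp_dw(t_pre=10.0, t_post=15.0))   # positive change: potentiation
print(stdp_dw(t_pre=15.0, t_post=10.0))   # negative change: depression
```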

Brain structure has evolved with the blueprint of a cortex composed of laminar circuits that have six layers of neurons, with great variety in both the types of neurons and the types of synaptic receptors [7]. This basic and unique cortical structure appears to repeat itself throughout the cortex and has been established and preserved for many millions of years. This self-similar or scale-free property of cortex enables the brain to exhibit macroscopic behavior independent of the microscopic details of the cortex. Furthermore, the brain is organized in a form that is neither completely regular nor completely random. To interpolate between these two extremes, the concept of small-world networks was introduced [21]. In these networks, neurons are densely coupled locally but, in addition, are also connected through sparse long-range connections linking physically distant brain regions. The resulting small-world networks have intermediate connectivity properties, exhibiting a high degree of clustering (as in regular networks) together with a small average distance between vertices (as in random networks). The connection patterns of the cerebral cortex consist of pathways linking neuronal populations across multiple levels of scale, from whole brain regions to local mini-columns [22]. There is mounting evidence that the brain is indeed organized as a small-world network [23–25]. In fact, there is also evidence for complete spatial synchronization when connections with small synaptic path lengths (i.e., number of connections between synapses) are enabled between several small worlds [26].

This organization of the brain into a small-world network follows two major principles: the degree of local clustering and the degree of separation (i.e., synaptic path length). The two are in competition, but both are needed to achieve large-scale traffic with minimal wiring [7]. Local clustering consists of strongly interacting laminar modules, but sequential communication through them is highly inefficient. However, keeping the synaptic path length fairly constant with brain size (Figure 23.3) is necessary for maintaining efficient global communication. This feature also allows effective integration of heterogeneous and nonlocal sources of information for multiple goals and is a hallmark of human-like cognition [7,27]. The brain thus operates in a grid-free fashion.

img

Figure 23.3 A small-world network is shown (top), where the black circles represent locally densely connected thalamocortical circuits (bottom left). Each laminar layer within any thalamocortical circuit can be simply summarized as an excitatory–inhibitory (EI) network with densely connected neurons within it
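The trade-off between local clustering and short path length can be made concrete with the classic Watts–Strogatz construction [21]. The sketch below, which assumes the networkx package and uses illustrative node counts and rewiring probabilities, shows that rewiring a small fraction of local connections into long-range links keeps clustering high while sharply reducing the average path length — the small-world regime discussed above.

```python
# Hedged sketch of the small-world trade-off using the Watts-Strogatz model [21].
# Requires the networkx package; all sizes and probabilities are illustrative.
import networkx as nx

n, k = 1000, 10                    # 1000 "neurons", each locally wired to 10 neighbours
for p in (0.0, 0.01, 0.1, 1.0):    # fraction of edges rewired into long-range links
    G = nx.connected_watts_strogatz_graph(n, k, p, seed=1)
    C = nx.average_clustering(G)               # local clustering (high in regular networks)
    L = nx.average_shortest_path_length(G)     # "degrees of separation" between nodes
    print(f"p={p:<4}: clustering={C:.3f}, path length={L:.2f}")
# Around p ~ 0.01-0.1 the graph keeps high clustering but gains short path lengths.
```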

In summary, the brain exhibits clock-free, scale-free, symbol-free, and grid-free dynamics that evolved to enable animals to face the challenges of a continuously changing environment. These features are very different from those of digital computers, and thus theories of cognition based on the computer metaphor differ substantially from what is actually found in the brain.

23.3 Evolution of Behavior as a Basis for Cognitive Architecture Design

To design an architecture, inspired by the mammalian brain, that can explain the emergence of complex cognitive behaviors, it is important to first consider the types of behavior involved. At the most basic level, there are two types of behavior. The first is reflexive in nature: it is inborn, genetically determined in detail, and involves centers in the spinal cord and the base of the brain. The second is learned from individual experience via brain–body–environment interactions and is neither inborn nor genetically determined.

To survive, animals had to develop different forms of behavior and produce different results depending on the context. The evolution of the nerve cell was important because it was uniquely positioned to influence the senses and the motor apparatus in an extremely energy-efficient fashion. The complexity of an animal's behavioral repertoire grew with the capacity of nerve cells to influence each other and the body in order to realize new ways of producing action. The evolution of more complex forms of nerve cells and associated cellular mechanisms with clock-free, scale-free, symbol-free, and grid-free dynamics offered a rich set of variations on the existing modes of operation of the nervous system [28]. This meant that the complex nervous system, or brain, was now able to self-organize via nonlinear relationships between its constituent components. The more complex the brain became, the more ways it could self-organize, and this in turn expanded the repertoire of behaviors the animal could exhibit in response to rapidly changing environments. This evolutionary process thus enabled animals to survive far more robustly under changing conditions.

Cognitive behavior can best be understood within the context of experience gained during continuous interactions between the brain, the body, and the environment (or BBE; see Figure 23.4). This is because the brain, the body it controls, and the environment have co-evolved via learning to have extensive matching between their properties. Thus the nervous system alone cannot be the focus for understanding cognitive behavior. Feedback from body movements and the dynamical properties of the environment itself also play a vital role in the generation of cognitive behavior. The role of the nervous system is not to direct or program behavior but to shape it and to evoke appropriate patterns and possibilities of dynamics from the entire coupled system. Thus, credit or blame for cognitive behavior is not assigned to any one piece of the coupled system but to the BBE system as a whole.

img

Figure 23.4 Emergence of cognitive behavior should be grounded in the idea of brain–body–environment interactions. Credit for the emergence of proper behavior given the context is not assigned to any one piece of the coupled system but to the brain–body–environment system as a whole, as in the case of the frog catching a fly with its hyper-elastic tongue with exquisite sensorimotor control and timing

23.4 Considerations for a Cognitive Architecture

The dependence of cognitive behavior on BBE interactions implies that the most basic requirement for the system is timing. In other words, the system must be capable of coordinating its actions across these three components such that proper cognitive behaviors are realized subject to constraints. But given that a basic feature of biological computation is its clock-free nature, this coordination has to emerge via interactions between the various elements of the architecture. This is possible if the basic elements in the architecture operate as oscillators. Radio and electrical equipment, violins in an orchestra, chirping crickets, and numerous man-made systems such as lasers share a common feature: they produce rhythms [29–31]. As mentioned in Section 23.2, even weakly interacting oscillators (provided they are self-maintaining, like neurons in the brain) can synchronize to a common phase of oscillation (also referred to as phase locking [29]). It is possible to create phase-locked oscillatory clusters by weakly coupling several physical oscillators such as spin torque oscillators [32–34] and resonant body oscillators [35]. But the flexibility demanded of cognitive behaviors during interaction implies that a single clock driving everything in the architecture is not a viable solution. The alternative trick that evolution has produced in the brain is to provide intrinsic mechanisms that modulate the interactions between these oscillators in a flexible fashion based on contingencies. This is akin to modulating the coupling strength between weakly coupled physical oscillators.

The oscillators produced in the brain correspond to the neural impulse trains that serve as carriers of the results of local interactions. For example, neurons and neural circuits exhibit endogenous oscillatory properties without any external input [7,36]. Various chemicals are known to modulate the effects of neurotransmitters [37–42]. The scale-free aspect of brain computation enables a wide range of temporal and spatial variations in modulation, from gap junctions at the fast end to volume transmitters, which diffuse through intercellular fluid rather than across a synapse, at the slow end. These mechanisms may control the graded, rather than all-or-none, release of transmitters [42–44], resulting in modulatory control of population activity such that a wide range of oscillatory rhythms can be generated. In addition, small-world connectivity aids this process by rapidly linking distant regions, enabling synchronization across disparate parts of the brain. This idea of small-world networks has been leveraged in several recent FPGA-type implementations, including reducing the delay in FPGA routing structures [45]. This in turn means that various functional forms of the same basic architecture (Figure 23.5) can be realized, which is essential for the generation of cognitive behaviors under various contingencies.

img

Figure 23.5 An example of a small-world network composed of four EI networks is shown on the left, marked “original network.” Here each EI network interacts with all other EI networks in a fully connected fashion. Depending on the context in which the animal finds itself, the architecture is flexible enough to change this fully connected pattern through context-dependent modulation of oscillations in the network
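A hedged sketch of the idea behind Figure 23.5 follows: each EI network is reduced to a single phase oscillator, and a context-dependent gain matrix gates the coupling between them, so that different subsets phase-lock under different contingencies. The four-oscillator reduction and all parameter values are assumptions made for illustration, not the chapter's model.

```python
# Illustrative sketch (not the chapter's model): four "EI networks", each reduced to one
# Kuramoto phase oscillator. A context-dependent gain matrix modulates the coupling,
# so different contexts yield different phase-locked functional configurations.
import numpy as np

def simulate(gate, K=0.5, dt=0.01, steps=50000, seed=0):
    rng = np.random.default_rng(seed)
    w = np.array([1.00, 1.02, 1.30, 1.32])       # natural frequencies of the 4 networks
    theta = rng.uniform(0, 2 * np.pi, 4)
    for _ in range(steps):
        diff = theta[None, :] - theta[:, None]   # pairwise phase differences theta_j - theta_i
        theta += (w + K * (gate * np.sin(diff)).sum(axis=1)) * dt
    return theta % (2 * np.pi)

fully_on = np.ones((4, 4))                       # "original network": all-to-all coupling
two_pairs = np.block([[np.ones((2, 2)), np.zeros((2, 2))],
                      [np.zeros((2, 2)), np.ones((2, 2))]])  # context gates off cross links

print("all-to-all:", np.round(simulate(fully_on), 2))
print("two pairs :", np.round(simulate(two_pairs), 2))
# With the cross-coupling gated off, oscillators 0-1 and 2-3 lock within their pairs
# but drift relative to the other pair: one context-dependent functional configuration.
```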

The design of a cognitive architecture based on the features described above involves some key trade-offs. The first trade-off is the type of oscillator used for the neuron-like basic elements. The choices range from CMOS-based neural circuits that operate as relaxation-type oscillators [46] to nanoscale oscillators [32,35]. Nanoscale oscillators offer much higher operating frequencies with a very small footprint, which can enable much higher throughput. Furthermore, these nanoscale oscillators are self-maintaining: their ground state is an energy-minimizing oscillatory state with its own natural frequency. When several such oscillators are weakly coupled, the system can synchronize while minimizing the energy consumed in the process, making the design energy efficient.

Another trade-off concerns the options for coupling neuron-like elements, both for synchronization and for modulatory influences on the architecture. The coupling between the neurons could be electrical (as in [46]) or through the substrate [32,35]. While electrical coupling gives more controllability, substrate-based coupling can be faster and more energy efficient. The same trade-offs come into play when we consider modulatory influences. In the brain, a variety of mechanisms operating at varying scales of influence appear to carry modulatory signals. These could be broadcast either electrically or via other means, including wireless- or substrate-based coupling.

23.5 Emergent Cognition

The most widely held view of cognition, as promoted by cognitive psychology, is the idea that there is a single executive system [47,48] responsible for planning and decision-making that resides primarily in the frontal lobes [49–51]. This system is assumed to be independent of the systems responsible for action, such as the sensorimotor control system [52,53]. This idea has undergone a major revision, however, in light of more compelling recent evidence from neuroscience and neurophysiological experiments suggesting a more distributed participation of various systems, including several cortical and subcortical areas of the brain, during cognitive acts such as decision-making [54]. This idea of distributed participation is also closely linked to the concept of small-world networks, which enable rapid and scale-free interaction between distant brain regions.

The most recent [55] of many models [56–60] offers an interesting hypothesis about how cognitive functions such as decision-making emerge under the challenge of continuously changing environments and contingencies. The authors describe a multi-level model in which decisions emerge as a consensus from distributed neural activity [55] across various brain regions, some related to sensorimotor control and others to more abstract aspects of behavior. These levels are reciprocally connected (as in a small-world network) and share biases arriving from various brain regions. These biases could be influenced by external, environment-related factors that assign values to actions and abstract concepts in brain regions that in turn modulate oscillations in the brain [61]. In fact, neurons in brain areas such as the anterior cingulate cortex, orbitofrontal cortex, and lateral prefrontal cortex are known to respond differentially to the probability, magnitude, and effort associated with different options. When these oscillatory patterns do not directly produce any action, the system is said to produce cognitive behavior manifested in the form of plans or thoughts. Several prominent models propose that the brain is deciding among actions during this phase. When some of these patterns in effector-specific sensorimotor regions reach a threshold [62–66], they are acted upon and the system produces cognitive behavior in the form of actions. Furthermore, learning during this interaction strengthens links between correlated neurons in sensory and motor brain regions, enabling quicker decision-making in familiar situations and contingencies.
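The race-to-threshold idea described above can be illustrated with a generic leaky-accumulator sketch; this is not the specific model of [55], and the drift, bias, leak, inhibition, and threshold values are all assumptions made for illustration. Evidence and value-related biases are pooled by competing populations, and the first effector-specific population to cross a threshold determines the action; if none crosses, the activity remains a plan rather than an action.

```python
# Illustrative accumulator sketch of a distributed, biased race to threshold.
# Generic leaky accumulators with mutual inhibition; not the architecture of [55].
import numpy as np

def decide(drifts, bias, threshold=1.0, leak=0.1, inhibition=0.2,
           noise=0.05, dt=0.01, max_steps=10000, seed=0):
    """Return (chosen option, decision time) for competing action plans."""
    rng = np.random.default_rng(seed)
    x = np.zeros(len(drifts))                      # activity of each sensorimotor population
    for step in range(max_steps):
        others = x.sum() - x                       # mutual inhibition between options
        dx = ((drifts + bias - leak * x - inhibition * others) * dt
              + noise * np.sqrt(dt) * rng.standard_normal(len(x)))
        x = np.maximum(x + dx, 0.0)                # firing rates stay non-negative
        if x.max() >= threshold:                   # first plan to reach threshold is enacted
            return int(x.argmax()), step * dt
    return None, max_steps * dt                    # no commitment: a "plan" without action

# Same sensory evidence for both options, but a value-related bias favours option 1.
print(decide(drifts=np.array([0.5, 0.5]), bias=np.array([0.0, 0.2])))
```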

23.6 Perspectives

In order to design a computational architecture for human cognition, it is important to ground the architecture based on brain–body–environment interactions. Using a small-world network with clock-free, scale-free, symbol-free, and grid-free computational features that allow for oscillatory interactions of its computing elements and modulations of these interactions, it may be possible in the future to design a computational architecture that could learn to plan and make decisions under constantly changing environments and contingencies in a manner reminiscent of human cognition.

References

1. Turing, A.M. (1936) On computable numbers, with an application to the Entscheidungsproblem. Proceedings of the London Mathematical Society, 2(42), 230–265.
2. Shannon, C. and Weaver, W. (1949) The Mathematical Theory of Communication, University of Illinois Press, Urbana, IL.
3. Dennett, D.C. (1978) Current issues in the philosophy of mind. American Philosophical Quarterly, 15(4), 249–261.
4. Srinivasa, N. and Cruz-Albrecht, J. (2012) Neuromorphic adaptive plastic scalable electronics: analog learning systems. IEEE Pulse, 51–56.
5. Barrett, A.B. and van Rossum, M.C.W. (2008) Optimal learning rules for discrete synapses. PLoS Computational Biology, 4(11), e1000230. doi: 10.1371/journal.pcbi.1000230
6. Renart, A., de la Rocha, J., Bartho, P. et al. (2010) The asynchronous state in cortical circuits. Science, 327, 587–590.
7. Buzsaki, G. (2009) Rhythms of the Brain, Oxford University Press, Oxford.
8. Freeman, W.J. (2005) A field-theoretic approach to understanding scale-free neocortical dynamics. Biological Cybernetics, 92(6), 350–359.
9. Chialvo, D.R. (2010) Emergent complex neural dynamics. Nature Physics, 6(10), 744–750.
10. Werner, G. (2010) Fractals in the nervous system: conceptual implications for theoretical neuroscience. Frontiers in Physiology, 1. doi: 10.3389/fphys.2010.00015
11. Bak, P., Tang, C., and Wiesenfeld, K. (1987) Self-organized criticality: an explanation of 1/f noise. Physical Review Letters, 59(4), 381–384.
12. Beggs, J.M. and Plenz, D. (2003) Neuronal avalanches in neocortical circuits. Journal of Neuroscience, 23(35), 11167–11177.
13. Petermann, T., Thiagarajan, T.C., Lebedev, M.A. et al. (2009) Spontaneous cortical activity in awake monkeys composed of neuronal avalanches. Proceedings of the National Academy of Sciences, 106(37), 15921–15926.
14. Markram, H. and Tsodyks, M. (1996) Redistribution of synaptic efficacy between neocortical pyramidal neurons. Nature, 382, 807–810.
15. Markram, H., Lubke, J., Frotscher, M., and Sakmann, B. (1997) Regulation of synaptic efficacy by coincidence of postsynaptic APs and EPSPs. Science, 275, 213–215.
16. Bi, G.Q. and Poo, M.M. (1998) Activity-induced synaptic modification in hippocampal culture: dependence on spike timing, synaptic strength and cell type. Journal of Neuroscience, 18, 10464–10472.
17. Caporale, N. and Dan, Y. (2008) Spike timing-dependent plasticity: a Hebbian learning rule. Annual Review of Neuroscience, 31, 25–46.
18. Debanne, D., Gahwiler, B.H., and Thompson, S. (1998) Long-term synaptic plasticity between pairs of individual CA3 pyramidal cells in rat hippocampal slice cultures. Journal of Physiology, 507, 237–247.
19. Levy, W.B. and Steward, O. (1983) Temporal contiguity requirements for long-term associative potentiation/depression in the hippocampus. Neuroscience, 8, 791–797.
20. Magee, J.C. and Johnston, D. (1997) A synaptically controlled, associative signal for Hebbian plasticity in hippocampal neurons. Science, 275, 209–213.
21. Watts, D.J. and Strogatz, S.H. (1998) Collective dynamics of 'small-world' networks. Nature, 393, 440–442.
22. Sporns, O. (2006) Small-world connectivity, motif composition, and complexity of fractal neuronal connections. BioSystems, 56–64.
23. Yu, S., Huang, D., Singer, W. et al. (2008) A small world of neuronal synchrony. Cerebral Cortex, 18(12), 2891–2901.
24. Bassett, D.S. and Bullmore, E. (2006) Small-world brain networks. Neuroscientist, 12(6), 512–523.
25. Volman, V., Baruchi, I., and Ben-Jacob, E. (2005) Manifestation of function-follow-form in cultured neuronal networks. Physical Biology, 2(2), 98–110.
26. Roxin, A., Riecke, H., and Solla, S.A. (2004) Self-sustained activity in a small-world network of excitable neurons. Physical Review Letters, 92, 198101.
27. Edelman, G.M. (1989) The Remembered Present: A Biological Theory of Consciousness, Basic Books, New York.
28. van Schaik, C. (2006) Why are some animals so smart? Scientific American, 294(4), 64–71.
29. Andronov, A., Vitt, A.A., and Khaykin, S.E. (1966) Theory of Oscillators, Pergamon Press, New York.
30. Ermentrout, G.B. and Kopell, N. (1984) Frequency plateaus in a chain of weakly coupled oscillators. SIAM Journal on Mathematical Analysis, 15(2), 215–237.
31. Glass, L. (2001) Synchronization and rhythmic processes in physiology. Nature, 410, 277–284.
32. Pufall, M.R., Rippard, W.H., Kaka, S. et al. (2005) Frequency modulation of spin-transfer oscillators. Applied Physics Letters, 86(8). http://dx.doi.org/10.1063/1.1875762
33. Kaka, S., Pufall, M.R., Rippard, W.H. et al. (2005) Mutual phase-locking of microwave spin torque nano-oscillators. Nature, 436, 389–392.
34. Bertotti, G., Mayergoyz, I., and Serpico, C. (2009) Nonlinear Magnetization Dynamics in Nanosystems, Elsevier, Amsterdam.
35. Weinstein, D. and Bhave, S.A. (2010) The resonant body oscillator. Nano Letters, 10, 1234–1237.
36. Freeman, W.J. (2005) A field-theoretic approach to understanding scale-free neocortical dynamics. Biological Cybernetics, 92, 350–359.
37. Bloom, F.E. and Lazerson, A. (1988) Brain, Mind and Behavior, Freeman, London.
38. Cooper, J.R., Bloom, F.E., and Roth, R.H. (1986) The Biochemical Basis of Neuropharmacology, Oxford University Press, Oxford.
39. Dowling, J.E. (1992) Neurons and Networks, Harvard University Press, Cambridge, MA.
40. Siegelbaum, S.A. and Tsien, R.W. (1985) Modulation of gated ion channels as a mode of transmitter action, in Neurotransmitters in Action (ed. D. Bousfield), Elsevier, Amsterdam, pp. 81–93.
41. Fuxe, K. and Agnati, L.F. (1987) Receptor-Receptor Interactions, Plenum, New York.
42. Fuxe, K. and Agnati, L.F. (1991) Volume Transmission in the Brain: Novel Mechanisms for Neural Transmission, Raven, New York.
43. Bullock, T.H. (1981) Spikeless neurones: where do we go from here?, in Neurones without Impulses (eds A. Roberts and B.M.H. Bush), Cambridge University Press, Cambridge, pp. 269–284.
44. Krames, E.S., Peckham, P.H., and Rezai, A.R. (eds) (2012) Neuromodulation, vols 1–2, Academic Press, New York.
45. Nishioka, Y., Iida, M., and Sueyoshi, T. (2010) Small world network to reduce delay in FPGA routing structures. International Journal of Innovative Computing, Information and Control, 6(2), 551–566.
46. Cruz-Albrecht, J., Yung, M., and Srinivasa, N. (2012) Energy-efficient neuron, synapse and STDP integrated circuits. IEEE Transactions on Biomedical Circuits and Systems, 6(3), 246–256.
47. Norman, D.A. and Shallice, T. (1980) Attention to action: willed and automatic control of behavior. Center for Human Information Processing, Technical Report No. 99.
48. Baddeley, A. and Hitch, G. (1974) Working memory, in The Psychology of Learning and Motivation (ed. G.H. Bower), Academic Press, New York, pp. 47–90.
49. Baddeley, A. and Della Sala, S. (1996) Working memory and executive control. Philosophical Transactions of the Royal Society of London. Series B, Biological Sciences, 351, 1397–1404.
50. Shallice, T. (1982) Specific impairments of planning. Philosophical Transactions of the Royal Society of London. Series B, Biological Sciences, 298, 199–209.
51. Srinivasa, N. and Chelian, S.E. (2012) Executive control of cognitive agents using a biologically inspired model architecture of the prefrontal cortex. Biologically Inspired Cognitive Architectures, 13–24.
52. Fodor, J.A. (1983) The Modularity of Mind: An Essay on Faculty Psychology, MIT Press, Cambridge, MA.
53. Pylyshyn, Z.W. (1984) Computation and Cognition: Toward a Foundation for Cognitive Science, MIT Press, Cambridge, MA.
54. Cisek, P. (2007) Cortical mechanisms of action selection: the affordance competition hypothesis. Philosophical Transactions of the Royal Society of London. Series B, Biological Sciences, 362, 1585–1599.
55. Cisek, P. (2012) Making decisions through a distributed consensus. Current Opinion in Neurobiology, 22, 927–936.
56. Padoa-Schioppa, C. (2011) Neurobiology of economic choice: a good-based model. Annual Review of Neuroscience, 34, 333–359.
57. Cisek, P. (2006) Integrated neural processes for defining potential actions and deciding between them: a computational model. Journal of Neuroscience, 26, 9761–9770.
58. Pastor-Bernier, A. and Cisek, P. (2011) Neural correlates of biased competition in premotor cortex. Journal of Neuroscience, 31, 7083–7088.
59. Favilla, M. (1997) Reaching movements: concurrency of continuous and discrete programming. Neuroreport, 8, 3973–3977.
60. Badre, D., Kayser, A.S., and D'Esposito, M. (2010) Frontal cortex and the discovery of abstract action rules. Neuron, 66, 315–325.
61. Rangel, A. and Hare, T. (2010) Neural computations associated with goal-directed choice. Current Opinion in Neurobiology, 20, 262–270.
62. Roitman, J.D. and Shadlen, M.N. (2002) Response of neurons in the lateral intraparietal area during a combined visual discrimination reaction time task. Journal of Neuroscience, 22, 9475–9489.
63. Michelet, T., Duncan, G.H., and Cisek, P. (2010) Response competition in the primary motor cortex: corticospinal excitability reflects response replacement during simple decisions. Journal of Neurophysiology, 104, 119–127.
64. Pesaran, B., Nelson, M.J., and Andersen, R.A. (2008) Free choice activates a decision circuit between frontal and parietal cortex. Nature, 453, 406–409.
65. Lebedev, M.A., O'Doherty, J.E., and Nicolelis, M.A. (2008) Decoding of temporal intervals from cortical ensemble activity. Journal of Neurophysiology, 99, 166–186.
66. Bennur, S. and Gold, J.I. (2011) Distinct representations of a perceptual decision and the associated oculomotor plan in the monkey lateral intraparietal area. Journal of Neuroscience, 31, 913–921.