22
Explorations in Morphic Architectures

Tetsuya Asai1 and Ferdinand Peper2

1Graduate School of Information Science and Technology, Hokkaido University, Japan

2Center for Information and Neural Networks, National Institute of Information and Communications Technology, Japan

22.1 Introduction

Biological systems give us examples of amorphous, unstructured devices capable of noise- and fault-tolerant information processing. They excel at massively parallel spatial problems, an area in which digital processors are rather weak. The term morphic architecture was thus introduced in the Emerging Research Architecture section of ITRS 2007 to refer to biologically inspired architectures that embody a new kind of computation paradigm, one in which adaptation plays a key role in effectively addressing the particulars of a problem [1]. This chapter focuses on recent progress in two morphic architectures that offer opportunities for emerging nanoelectronic devices: neuromorphic architectures and cellular automaton architectures.

22.2 Neuromorphic Architectures

The term neuromorphic was introduced by Carver Mead in the late 1980s to describe VLSI systems containing electronic analog circuits that mimic neurobiological architectures present in the nervous system [2]. Traditional neurocomputers employ components that are biologically rather implausible, such as static threshold elements representing neurons, whereas neuromorphic architectures stay closer to biology. An example of a neuromorphic VLSI system is the silicon retina [3] (Figure 22.1), which is modeled after the neuronal structures of the vertebrate retina.


Figure 22.1 Example of neuromorphic chips (silicon retina)
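The core computation of such a retina chip can be illustrated in software. The following minimal Python sketch (an illustrative abstraction, not the circuit of [3]) computes a center–surround, difference-of-Gaussians response, the kind of spatial antagonism that silicon retinas typically realize with resistive networks; the kernel size and widths are assumed values.

```python
import numpy as np

def gaussian_kernel(size, sigma):
    """1D Gaussian kernel, normalized to unit sum."""
    x = np.arange(size) - size // 2
    k = np.exp(-x**2 / (2.0 * sigma**2))
    return k / k.sum()

def blur(img, k):
    """Separable 2D convolution: filter along rows, then columns."""
    tmp = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, tmp)

def center_surround(img, sigma_c=1.0, sigma_s=3.0, size=15):
    """Difference of Gaussians: a narrow excitatory center minus a
    wide inhibitory surround, emphasizing spatial contrast (edges)."""
    return blur(img, gaussian_kernel(size, sigma_c)) - \
           blur(img, gaussian_kernel(size, sigma_s))

# A bright square on a dark background: responses peak at the edges.
img = np.zeros((64, 64))
img[24:40, 24:40] = 1.0
resp = center_surround(img)
```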

The appeal of neuromorphic architectures lies in: (i) their potential to achieve (human-like) intelligence based on the kind of unreliable devices typically found in neuronal tissue, (ii) their strategies for dealing with anomalies, emphasizing not only tolerance to noise and faults but also the active exploitation of noise to increase the effectiveness of operations, and (iii) their potential for low-power operation. Traditional von Neumann machines are less suitable with regard to item (i), since for such tasks they require a machine complexity (the number of gates and the computational power) that tends to increase exponentially with the complexity of the environment (the size of the input). Neuromorphic systems, on the other hand, exhibit a more gradual increase of their machine complexity with respect to the environmental complexity [4] (Figure 22.2). At the level of human-like computing tasks, therefore, neuromorphic machines have the potential to be superior to von Neumann machines. Points (ii) and (iii) are strongly related to each other in von Neumann machines, since tolerance to noise runs counter to lowering supply voltages and bias currents. Neuromorphic architectures, on the other hand, suffer much less from this trade-off. Unlike von Neumann machines, which can correct bit errors only to a certain extent through the use of error control techniques, neuromorphic machines tend to keep working even under high error rates.


Figure 22.2 Machine versus Environmental complexity (extracted from DARPA's SyNAPSE program [4])

Like the areas of the human brain, neuromorphic machines (LSIs) are application-specific. Significant performance benefits can be achieved by employing them as supplemental CMOS, that is, as an addition to a von Neumann system, which provides universal computation ability. Neuromorphic systems thus have a diversified functionality, and consequently can be categorized as More than Moore candidates. Table 22.1 shows trends in the development of neuromorphic systems and their applications. The application category "Information processing" occupies only a single row in the table, which understates the potentially large benefits, in terms of machine complexity, of human-like information processing: functionalities like prediction, associative memory, and inference offer new opportunities, yet would require huge machine complexity if they were implemented on von Neumann architectures. The Emerging Research Architecture section of ITRS 2009, for example, introduced an inference engine based on Bayesian neural networks [5], and in 2010 Lyric Semiconductor introduced NAND gates whose input probabilities are combined using Bayesian logic rather than the binary logic of conventional processors. Lyric Error Correction (LEC) uses this probabilistic logic to perform error detection and correction with about 3% of the circuitry and 8% of the power that would be needed for an equivalent conventional error correction scheme [6].
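Lyric's gate design is proprietary, but the basic idea of combining input probabilities can be sketched. Assuming the two inputs are independent Bernoulli variables (a simplification that breaks down with reconvergent fanout), a probabilistic NAND propagates the probability of a logical 1 instead of a hard bit; the Python toy below illustrates the principle and is not Lyric's implementation.

```python
def nand_prob(p_a: float, p_b: float) -> float:
    """P(NAND(a, b) = 1) for independent inputs with P(a=1) = p_a and
    P(b=1) = p_b: the output is 0 only when both inputs are 1."""
    return 1.0 - p_a * p_b

# Gates can be chained, propagating beliefs instead of fixed bits.
p1 = nand_prob(0.9, 0.8)   # 0.28
p2 = nand_prob(p1, 0.7)    # 0.804
print(p1, p2)
```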

Table 22.1 Applications and development of neuromorphic systems

Application category / Proposed architectures
Information processing: Associative memory (CMOS [7], SET [8,9]), data mining and inference (CMOS) [10], winner-takes-all selection (CMOS) [11], noise-driven computing (see Table 22.2), locomotion controller (CMOS) [12]
Intelligent sensors (Vision): Edge detection (CMOS [13], SET [14]), motion detection (CMOS [15], SET [16]), stereo vision (CMOS) [17], visual tracking (CMOS) [18], adaptive gain control (CMOS) [19], orientation selection (CMOS) [20], high-speed sensors (CMOS) [21]
Intelligent sensors (Others): Band-pass filtering (CMOS) [22], echolocation (CMOS) [23], noise canceling and figure–ground separation in auditory systems (CMOS) [24], olfactory systems (CMOS) [25]
Artificial life: Reaction–diffusion computers (CMOS, SET) [26], artificial fish [27] and octopus [28] (CMOS)
Technology: CrossNets [29], address–event representation [30], CDMA neural networks (CMOS) [31], artificial neurons (CMOS [32], SET [33]), memristive synapses [34,35], 3D implementation [36], brain–machine interface [37]

ITRS 2007 (Emerging Research Architecture section) did not address neuromorphic "Intelligent sensors," since they were considered ancillary to the central focus of ITRS on information processing at the time. However, intelligent sensors have returned to the spotlight, because vast opportunities exist for high-performance architectures that combine them with emerging nanoelectronic devices. So far, CMOS implementations (in the categories "Vision" and "Others"; see Table 22.1) and blueprints of SET implementations (in category "Vision") have been proposed.

Another approach to building neuromorphic systems is inspired by biochemical reactions in living organisms. Reaction–diffusion computers, for example, are based on a biochemical reaction named after Belousov and Zhabotinsky [38], and they are able to solve combinatorial problems efficiently through the use of natural parallelism. Electronic implementations of this type of information processing require strongly nonlinear I-V characteristics to mimic the chemical reactions involved, and there are many opportunities for emerging nanoelectronic devices in this respect.
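The computational principle can be illustrated with any two-species reaction–diffusion system. The minimal Python sketch below uses the Gray–Scott model as an illustrative stand-in for the Belousov–Zhabotinsky kinetics; the diffusion and feed/kill parameters are assumed values chosen to produce patterns.

```python
import numpy as np

def laplacian(a):
    """Five-point stencil with periodic (wrap-around) boundaries."""
    return (np.roll(a, 1, 0) + np.roll(a, -1, 0) +
            np.roll(a, 1, 1) + np.roll(a, -1, 1) - 4.0 * a)

def step(u, v, Du=0.16, Dv=0.08, f=0.035, k=0.065, dt=1.0):
    """One explicit Euler step of the Gray-Scott system: every grid
    point updates simultaneously, which is the natural parallelism."""
    uvv = u * v * v
    u = u + dt * (Du * laplacian(u) - uvv + f * (1.0 - u))
    v = v + dt * (Dv * laplacian(v) + uvv - (f + k) * v)
    return u, v

n = 128
u, v = np.ones((n, n)), np.zeros((n, n))
u[60:68, 60:68], v[60:68, 60:68] = 0.50, 0.25   # seed a perturbation
for _ in range(5000):
    u, v = step(u, v)        # spots/waves self-organize over time
```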

On the technology side, one of the key issues for neuromorphic systems is how neuronal elements are implemented. An important consideration in this respect is the level of abstraction of a biological neuron, which can range from an (almost) faithful model to a very simple one, such as an integrate-and-fire neuron. Depending on the technology used [single-electron transistor (SET) neurons, resonant tunneling diodes (RTDs), memristors, etc.], this level may vary, and opportunities for emerging nanoelectronic devices exist in this respect. Another important issue is how nonvolatile analog synapses are implemented. Many attempts have been made to design analog synaptic devices based on existing flash memory technologies, but they have run into difficulties in designing appropriate controllers for electron injection and ejection, as well as in raising the limit on the number of write cycles. Memristive devices (e.g., resistive RAMs and atomic switches) offer a promising alternative for the implementation of nonvolatile analog synapses. They are applied in the CMOL architecture, which combines memristive nano-junctions with CMOS neurons and their associated controllers. In ITRS 2007, CMOL (CrossNets) was introduced in terms of nanogrids of (ideally) single molecules fabricated on top of a traditional CMOS layer, but the concept has since been expanded to use a nanowire crossbar add-on as well as memristive (two-terminal) crosspoint devices like nanowire resistive RAMs [39]. The CMOL architecture will be further expanded to include multiple stacks of CMOS layers and crossbar layers. This may result in the implementation of large-scale multi-layer neural networks, which have thus far evaded direct implementation with CMOS devices alone.
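To illustrate why two-terminal memristive devices map naturally onto analog synapses, the following toy Python sketch models a synapse whose conductance (the weight) is nudged by programming pulses and retained between them; the linear update rule and all constants are simplifying assumptions, not a model of any specific device.

```python
import numpy as np

class MemristiveSynapse:
    """Toy nonvolatile analog weight: conductance g moves with each
    programming pulse and saturates at the device limits."""
    def __init__(self, g=0.5, g_min=0.05, g_max=1.0, rate=0.02):
        self.g, self.g_min, self.g_max, self.rate = g, g_min, g_max, rate

    def program(self, pulse):
        """Positive pulses potentiate, negative pulses depress; the
        state persists with no refresh (nonvolatility)."""
        self.g = float(np.clip(self.g + self.rate * pulse,
                               self.g_min, self.g_max))

    def read(self, v):
        """Read-out is a multiply: I = g * V, i.e., weight times input."""
        return self.g * v

syn = MemristiveSynapse()
for _ in range(10):
    syn.program(+1.0)          # repeated potentiation, cf. LTP in [35]
print(syn.g, syn.read(0.1))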

A final important issue is noise tolerance and noise utilization in neural systems and their possible application in electronics. Noise and fluctuations are usually considered obstacles in the operation of both analog and digital circuits and systems, and most strategies to deal with them focus on suppression. Neural systems, on the other hand, tend to employ strategies in which the properties of noise are exploited to improve the efficiency of operations. This concept may be especially useful in the design of computing systems with noise-sensitive devices (e.g., extremely low-power devices like SET and subthreshold analog CMOS devices).

Table 22.2 shows examples of noise-driven neural processing and their possible applications in electronics. Stochastic resonance (SR) is a phenomenon in which a static or dynamic threshold system responds stochastically to a subthreshold or suprathreshold input with the help of noise. In biological systems SR is utilized to detect weak signals in a noisy environment. SR has been demonstrated on some emerging nanoelectronic devices (a SET network and GaAs nanowire FETs). SR can be observed in many bi-stable systems and could be utilized to facilitate the state transitions in emerging logic (bi-stable) memory devices. Noise-driven fast signal transmission is observed in neural networks for the vestibulo-ocular reflex, where signals are transmitted at an increased rate over a neuronal path when nonidentical neurons and dynamic noise are introduced. An implementation in terms of a SET circuit demonstrated that, when several nonidentical pulse-density modulators were used as noisy neurons, the input–output fidelity of the population increased significantly compared to that of a single neuron circuit. Phase synchronization among isolated neurons can be utilized for skew-free clock distribution, where independent oscillators implemented on a chip serve as distributed clock sources and are synchronized by a common temporal noise. Noise in synaptic depression can be used to facilitate the operation of a neuromorphic burst-signal detector, whose output range is significantly increased by noise. Noise shaping in inhibitory neural networks has been demonstrated in subthreshold CMOS, showing that static and dynamic noise can even be turned to advantage when a certain level of noise or device mismatch cannot be removed. The circuits exploit the properties of device mismatches and external (temporal) noise to perform noise-shaping one-bit AD conversion (pulse-density modulation).

Table 22.2 Noise-driven neural processing and its possible applications

Neurophysiological phenomenon (type of noise) / Applications and device examples
Stochastic resonance [40] (dynamic/static): Sensors and logic memory (CMOS [41], SET [42], GaAs nanowire FET [43])
Fast signal transmission on a slow transmission pathway [44] (dynamic/static): Fast signal transmission, pulse-density modulation (CMOS [45], SET [46])
Phase synchronization among isolated neurons [47] (dynamic): Phase synchronization among isolated circuits/PLL (CMOS) [48]
Synaptic depression [49] (static): Burst-signal detection (SET) [50]
Noise shaping in inhibitory neural networks [51] (dynamic/static): Noise-shaping AD conversion (CMOS [52], SET [53])
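The stochastic resonance entry above can be reproduced numerically: a hard threshold misses a subthreshold signal when there is no noise and drowns it when there is too much, while an intermediate noise level maximizes the input–output correlation. A minimal Python sketch, with all signal and noise parameters chosen purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def detector(signal, noise_std, threshold=1.0):
    """Static threshold element: output 1 whenever input plus noise
    exceeds the threshold, 0 otherwise."""
    return signal + rng.normal(0.0, noise_std, signal.shape) > threshold

t = np.linspace(0.0, 40.0 * np.pi, 40000)
weak = 0.8 * np.sin(t)                # peak 0.8 < threshold 1.0

for sigma in (0.0, 0.3, 3.0):
    out = detector(weak, sigma).astype(float)
    # Correlation with the input peaks at an intermediate noise level.
    corr = np.corrcoef(out, weak)[0, 1] if out.std() > 0 else 0.0
    print(f"noise std {sigma:.1f}: input-output correlation {corr:.2f}")
```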

22.3 Cellular Automata Architectures

A Cellular Automaton is an array of cells, organized in a regular grid. Each cell can be in one of a finite number of states from a predefined state set, which is usually a set of integers. The state of each cell is updated according to transition rules, which determine the cell's next state from its current state as well as from the states of the neighboring cells. The neighbors of a cell are usually the cells directly adjacent to it in the orthogonal directions, like the north, south, east, and west neighbors in the case of a two-dimensional grid (the von Neumann neighborhood), but other neighborhoods have also been used in experiments. The functionality of each cell is defined by the transition rules of the cellular automaton. The transition rules are usually the same for all cells, but heterogeneous sets of rules have also been considered, as well as programmable rules. A cell can typically be expressed in terms of a Finite Automaton, a model in computer science well known for its simple but effective structure.
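As a concrete rendering of these definitions, the following Python sketch implements a synchronous two-dimensional cellular automaton with the von Neumann neighborhood; the parity rule and the wrap-around (torus) boundary are illustrative choices, not part of the definition.

```python
import numpy as np

def step(grid, rule):
    """Synchronous update: every cell computes its next state from its
    own state and its von Neumann neighbors (N, S, E, W) at once.
    Boundaries wrap around (torus) to keep the sketch simple."""
    n = np.roll(grid,  1, axis=0)
    s = np.roll(grid, -1, axis=0)
    w = np.roll(grid,  1, axis=1)
    e = np.roll(grid, -1, axis=1)
    return rule(grid, n, s, e, w)

# Example transition rule: next state is the parity (XOR) of the cell
# and its four neighbors; the state set is {0, 1}.
parity = lambda c, n, s, e, w: (c + n + s + e + w) % 2

grid = np.zeros((32, 32), dtype=int)
grid[16, 16] = 1                      # a single seed cell
for _ in range(8):
    grid = step(grid, parity)         # fractal-like growth unfolds
```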

Cellular automata were initially proposed by von Neumann in the 1940s as a model of self-reproduction, but most of the interest they have attracted since then has been motivated by their ability to conduct computation in a distributed way. Though cellular automata have the name of their inventor in common with von Neumann architectures, they represent a radically different concept of computation.

The appeal of cellular automata as emerging research architectures lies in a number of factors. First, their regular structure has the potential for manufacturing methods that can deliver huge numbers of cells in a cost-effective way. Candidates in this respect are bottom-up manufacturing methods, such as those based on molecular self-assembly. Second, regularity also facilitates the reuse of logic designs. The design of a cell is relatively simple compared to that of a microprocessor unit, so design efforts are greatly reduced for cellular automata. Third, errors are more easily managed in the regular structures of cellular automata, since a unified approach for all cells can be followed. Fourth, wire lengths between cells are short, or wires are completely unnecessary if cells can interact with their neighboring cells through some physical mechanism. Fifth, cells can be used for multiple purposes, from logic or memory to the transfer of data. This makes cellular automata configurable in a flexible way. Sixth, cellular automata are massively parallel, offering huge computational power for applications whose "logical structure" fits the topology of the grid of cells.

Cellular automata may be less suitable for certain applications due to the following factors. First, there is a relatively large overhead in terms of hardware. Cells tend to require a certain minimal level of complexity in order to be useful for computation [54]. In practice this means that they are configurable for logic, memory, or data transfer. The density of such functionality per unit of area tends to be lower than that of more conventional architectures. A large hardware overhead may be acceptable, though, if cells are available in huge numbers at a low cost, especially if the cellular automaton can be mapped efficiently onto a certain application. Second, input and output of data to cells may be difficult. The use of the border cells of a grid for input and output is infeasible in the case of huge numbers of cells, because it fails to employ all cells in parallel. Parallel input and output to cells through optical means, or by wires addressing individual cells as in a memory, have more potential in this context. Third, it is difficult to configure cells into various patterns of states. Such configuration and reconfiguration functionality is required to give the cellular automaton its functionality for a certain computational task. Here, solutions similar to those for the input and output of data need to be employed to access cells in parallel.

There are two approaches to implementing cellular automata in hardware: fine-grained and tiny-grained. Systems that are coarse-grained are considered outside the class of cellular automata, since they are associated with multi-core architectures. Fine-grained cellular automata have cells that can each be configured as one or a few logic gates, or as a simple hub for data transfer. A cell typically contains a limited amount of memory, on the order of 10–100 bytes. Cells are usually addressed on an individual basis for input and output, or for configuration. Typically, the transition rules governing the functionality of a cell are changed during configuration. An example of a fine-grained approach is the Cell Matrix [55], which is a model capable of universal computation. Fine-grained cellular automata allow a good degree of control over configuration and computation, but this comes at the price of relatively complex cells, limiting the use of cost-effective manufacturing methods that could exploit the regularity of these architectures.

The other approach to cellular automata is tiny-grained. Cells in this model have extremely simple functionality, on the order of a few states per cell and a limited number of (fixed) transition rules. The small number of states translates into memory requirements of only a few bits per cell, whereas the nonprogrammable nature of the transition rules drastically reduces the complexity of cells. The simple nature of the rules poses no problems if the rules are designed to cover the functionality intended for the cellular automaton. An example of a tiny-grained approach is proposed in [56]; it is capable of universal computation as well as the correction of errors. Tiny-grained cellular automata have the potential for straightforward realizations of cells on nanometer scales. The challenge is to design models with as few states and transition rules per cell as possible. The theoretical minimum is two states and one transition rule. In synchronously timed models this has been approached by the Game of Life cellular automaton (two states, two rules) and in asynchronously timed models (no clock) by Brownian Cellular Automata [57] (three states, three rules). Both models are universal. The numbers of states and rules should be considered only as rough yardsticks, since ultimately the most important measure is the efficiency with which cells can be realized in a technology.
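For reference, the synchronous benchmark just mentioned, the Game of Life, fits in a few lines of Python; note that it uses the eight-cell Moore neighborhood rather than the von Neumann neighborhood. A minimal sketch:

```python
import numpy as np

def life_step(grid):
    """One synchronous Game of Life step: count the eight Moore
    neighbors, then apply birth (exactly 3) and survival (2 or 3)."""
    nbrs = sum(np.roll(np.roll(grid, di, 0), dj, 1)
               for di in (-1, 0, 1) for dj in (-1, 0, 1)
               if (di, dj) != (0, 0))
    return ((nbrs == 3) | ((grid == 1) & (nbrs == 2))).astype(int)

# A glider: a five-cell pattern that translates itself across the grid,
# the kind of mobile structure used in universality constructions.
grid = np.zeros((16, 16), dtype=int)
for i, j in [(1, 2), (2, 3), (3, 1), (3, 2), (3, 3)]:
    grid[i, j] = 1
for _ in range(4):
    grid = life_step(grid)            # after 4 steps the glider has moved
```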

Most hardware realizations of cellular automata to date are application-specific. In this context cellular automata are used as part of a larger system to conduct a specific set of operations with great efficiency. Applications typically have a structure that can be mapped efficiently onto the hardware, and the approach followed is generally tiny-grained, since cells are optimized for one or a few simple operations. Image processing applications are the most common in such hardware realizations [58–60], since they can be mapped with great efficiency onto two-dimensional cellular automata. Though the focus in the past was mostly on operations like filtering, thinning, skeletonizing, and edge detection, recent applications of cellular automata include the watermarking of images with digital image copyright [61,62]. Cellular automata have also been used for the implementation of a Dictionary Search Processor [63], memory controllers [64], and the generation of test patterns for Built-In Self-Test (BIST) of VLSI chips [65,66]. An overview of application-specific cellular automata is given in [67].

It is possible that the role of cellular automata in architectures will gradually increase with technological progress, from being merely used as a dedicated subprocessor to forming the main part of the architecture. To do that, cellular automata would need a capacity that their application-specific cousins lack: computation universality, that is, the ability to carry out the same class of computations as our current computers do. This term is mostly used in a theoretical context, to prove equivalence to a universal Turing machine or, more narrowly, to support a complete set of Boolean logic functions. The extreme inefficiency of Turing machines carries with it the misunderstanding that generality is equivalent to inefficiency, but this is often far from the truth. A general approach to carrying out operations efficiently on a cellular automaton is to configure it as a logic circuit. Cells are then used as logic gates or for transferring data between logic gates. In fine-grained cellular automata a cell is typically sufficiently complex to function as one or a few gates. In tiny-grained cellular automata, on the other hand, clusters of cells need to work together in order to obtain logic gate functionality. A cluster typically consists of up to 10 tiny-grained cells, its size depending on the functionality covered. This may seem a large overhead, but the cells tend to be much less complex than in fine-grained cellular automata, making this approach feasible. Furthermore, cells used merely to transfer data – and these form the majority of the cells – leave much less of their hardware unused when carrying out this simple task.

Cellular automata have seen only limited attempts at realization on nanometer scales. Molecule Cascades [68] use CO molecules on a Cu(111) grid to conduct simple logic operations. The CO molecules hop from grid point to grid point, triggering each other sequentially, like dominos. This process is quite slow and error-prone, though there appears to be potential for improvement. The mechanical nature of the operations, however, means that this cellular automaton is unlikely to reach competitive speeds. Another attempt uses layers of organic molecules on a gold grid [69]. Interactions between molecules take place via the tunneling of electrons between them. The rules that have been identified as governing those interactions appear to be influenced by the presence of electrons in the grid. This may limit the control over the operation of the cellular automaton, but it also carries the promise of efficient ways to configure the grid.

22.4 Taxonomy of Computational Ability of Architectures

Whereas von Neumann architectures generally refer to the use of memory resources separated from computational resources to store data and programs, there is an increasing need for a taxonomy of architectures that are based on different concepts. Figure 22.3 illustrates the world of "information processing," ranging from the present Boolean-based processing of von Neumann machines to non-Neumann machines, and introduces four possible classes of architectures: More Neumann, More than Neumann, Less than Neumann, and Beyond Neumann.


Figure 22.3 Taxonomy of information processing

The term More Neumann refers to those architectures that differ from the classical von Neumann architecture only in terms of numbers (e.g., multi- or many-core architectures). While the stored-program concept is still followed in More Neumann architectures, a certain level of parallelism is assumed, as in multi-core systems. The More Neumann architecture has grown out of the primitive Less than Neumann architectures of the past (Figure 22.4, left), and its performance is thus characterized by metrics of parallelism, computational ability, and programmability, as shown in Figure 22.4, right.


Figure 22.4 Concept of More Neumann, More than Neumann, Less than Neumann, and Beyond Neumann

More than Neumann refers to architectures that do not suffer from the von Neumann bottleneck between computation and memory resources, that is, these resources are integrated to a high degree. These architectures tend to have a highly distributed character in which small elements have extremely limited memory and computation resources, to the extent that each element individually is Less than Neumann (i.e., incapable of being used as a full-fledged von Neumann architecture, like a "logic element" in an FPGA), yet the combination of these elements lifts them to a higher level of competence for certain applications (Figure 22.4). In More than Neumann architectures, reorganization or reconfiguration usually plays the role that programmability has in von Neumann architectures. Programming a More than Neumann architecture thus involves an appropriate organization or configuration of the individual elements in order to make them perform a certain function. This reorganization may take the form of setting or adjusting the memories of the individual elements, but it may also involve a relinking of interconnections between the elements. In the context of neuromorphic architectures the elements take the form of neurons and the synaptic connections between them; synapses can be adjusted based on a learning process, while in some architectures new synaptic connections can be created and old ones destroyed. In the case of cellular automata, the elements are the cells, and their functionality is changed by setting their memory states to appropriate values. More than Neumann architectures are typically capable of high performance on certain classes of problems, but much less so on other problems (or may even be unable to handle them). Neuromorphic architectures have their strengths in problems that involve learning, classification, and recognition, but they do less well on traditional computing problems. Cellular automata are strong in applications that demand a regular structure of logic or data and a huge degree of parallelism.

Beyond Neumann refers to architectures that can solve certain computational problems fundamentally faster than would be possible on the architectures outlined above (Figure 22.4), like quantum computing architectures. Problems such as these typically require computation times that are exponential in the size of their input. The fundamental limits that restrict the computational power of architectures ranging from von Neumann to More than Neumann are exceeded in Beyond Neumann architectures through the adoption of novel operating principles. Schemes that use analog values instead of digital ones (neuromorphic architectures, dynamic systems, etc.), that use superposition of bit values (quantum computing schemes), or that use an analog timing scheme (asynchronous architectures) are prime candidates for this category. The flow of information in an architecture may also characterize it as Beyond Neumann. While Turing machines embody the traditional Input–Processing–Output flow, modern computers (even von Neumann ones) are used in a more interactive mode with humans, as in gaming, or with other computers, when connected in networks. Biological brains have a somewhat related concept of input and output, but one that differs in its implementation: their processing of information appears to be an autonomous process that may (or may not) be modulated by input signals from the environment [70]. This allows biological organisms to flexibly select important signals from the environment while ignoring irrelevant ones. Underneath all this lies an impressive neural machinery, yet to be uncovered, that can solve problems with unrivaled efficiency. Many of the above elements (analog-valued signals, asynchronous timing in combination with selective synchronization, chaotic dynamics) are thought to play an important role in neural information processing. While Beyond Neumann architectures are promising in principle, it must be emphasized that no practical implementations of them have been reported so far.

22.5 Summary

This chapter introduced the concept of the morphic architecture, which refers to architectures adapted to effectively address a particular problem set, and presented recent progress in two morphic architectures that offer opportunities for emerging nanoelectronic devices: neuromorphic architectures and cellular automaton architectures.

Morphic architectures will be employed in a broad class of mixed-signal systems that are focused on a particular application and that draw inspiration for their structure from that application. In some cases, computation is performed in an analog manner, which may offer orders-of-magnitude improvements in performance and power dissipation, albeit with reduced accuracy. As an example, biologically inspired inference networks for cognition may lend themselves to a partial analog implementation and provide substantial gains in performance relative to their digital counterparts.

Finally, a taxonomy for emerging models of computation was introduced, in response to the increasing need to classify emerging architectures that are based on non-Neumann concepts. Four possible classes of architectures were introduced: More Neumann, More than Neumann, Less than Neumann, and Beyond Neumann. The neuromorphic and cellular automaton architectures introduced in this chapter can be categorized as More than Neumann architectures, consisting of huge collections of Less than Neumann elements (i.e., elements incapable of being used as full-fledged von Neumann architectures).

References

  1. ITRS (2007 Edition) Emerging Research Devices, Chapter: Emerging Research Architectures, Section: Morphic Computational Architecture.
  2. Mead, C. (1989) Analog VLSI and Neural Systems, Addison-Wesley, Reading, MA.
  3. UZH (2010) http://siliconretina.ini.uzh.ch/wiki/index.php (accessed 16 July 2013).
  4. DARPA (2012) http://www.facepunch.com/threads/1105228-DARPA-Synapse-phase-2-targets-integrated-neuromorphic-chip (accessed 16 July 2013).
  5. ITRS (2009 Edition) Emerging Research Devices, Chapter: Emerging Research Architectures, Section: Inference Computing.
  6. ARS (2010) http://arstechnica.com/hardware/news/2010/08/probabilistic-processors-possibly-potent.ars (accessed 16 July 2013).
  7. Bui, T.T. and Shibata, T. (2008) Compact bell-shaped analog matching-cell module for digital-memory-based associative processors. Japanese Journal of Applied Physics, 47(4), 2788–2796.
  8. Saen, M., Morie, T., Nagata, M., and Iwata, A. (1998) A stochastic associative memory using single-electron tunneling devices. IEICE Transactions on Electronics, E81-C(1), 30–35.
  9. Morie, T., Matsuura, T., Nagata, M., and Iwata, A. (2003) A multinanodot floating-gate MOSFET circuit for spiking neuron models. IEEE Transactions on Nanotechnology, 2(3), 158–164.
  10. Domingos, P.O., Silva, F.M., and Neto, H.C. (2005) An efficient and scalable architecture for neural networks with backpropagation learning, in Proc. Int. Conf. Field Programmable Logic and Applications, pp. 89–94.
  11. Oster, M., Yingxue, W., Douglas, R., and Liu, S.-C. (2008) Quantification of a spike-based winner-take-all VLSI network. IEEE Transactions on Circuits and Systems I, 55(10), 3160–3169.
  12. Nakada, K., Asai, T., and Amemiya, Y. (2004) Biologically-inspired locomotion controller for a quadruped walking robot: Analog IC implementation of a CPG-based controller. Robotics and Mechatronics, 16(4), 397–403.
  13. Kameda, S. and Yagi, T. (2006) An analog silicon retina with multichip configuration. IEEE Transactions on Neural Networks, 17(1), 197–210.
  14. Kikombo, A.K., Schmid, A., Asai, T. et al. (2009) A bio-inspired image processor for edge detection with single-electron circuits. Journal of Signal Processing, 13(2), 133–144.
  15. Brinkworth, R.S.A., Shoemaker, P.A., and O'Carroll, D.C. (2009) Characterization of a neuromorphic motion detection chip based on insect visual system, in Proc. Int. Conf. Intelligent Sensors, Sensor Networks and Information Processing, pp. 289–294.
  16. Kikombo, A.K., Asai, T., and Amemiya, Y. (2009) An elementary neuromorphic circuit for visual motion detection with single-electron devices based on correlation neural networks. Journal of Computational and Theoretical Nanoscience, 6(1), 89–95.
  17. Tsang, E.K.C., Lam, S.Y.N., Yicong, M., and Shi, B.E. (2008) Neuromorphic implementation of active gaze and vergence control, in Proc. IEEE Int. Symp. Circuits and Systems, pp. 18–21.
  18. Indiveri, G. (1999) Neuromorphic analog VLSI sensor for visual tracking: circuits and application examples. IEEE Transactions on Circuits and Systems II, 46(11), 1337–1347.
  19. Meng, Y. and Shi, B.E. (2008) Adaptive gain control for spike-based map communication in a neuromorphic vision system. IEEE Transactions on Neural Networks, 19(6), 1010–1021.
  20. Choi, T.Y.W., Merolla, P.A., Arthur, J.V. et al. (2005) Neuromorphic implementation of orientation hypercolumns. IEEE Transactions on Circuits and Systems I, 52(6), 1049–1060.
  21. IEEE (2012) http://spectrum.ieee.org/computing/hardware/a-flyeye-inspired-speed-sensor (accessed 16 July 2013).
  22. Lyon, R.F. and Mead, C. (1988) An analog electronic cochlea. IEEE Transactions on Acoustics, Speech, and Signal Processing, 36(7), 1119–1134.
  23. Horiuchi, T.K. (2005) Seeing in the dark: Neuromorphic VLSI modeling of bat echolocation. IEEE Signal Processing Magazine, 22(5), 134–139.
  24. Park, H.M., Oh, S.H., and Lee, S.Y. (2003) A filter bank approach to independent component analysis and its application to adaptive noise cancelling. Neurocomputing, 55(3–4), 755–759.
  25. Koickal, T.J., Hamilton, A., Tan, S.L. et al. (2007) Analog VLSI circuit implementation of an adaptive neuromorphic olfaction chip. IEEE Transactions on Circuits and Systems I, 54(1), 60–73.
  26. Adamatzky, A., De Lacy Costello, B., and Asai, T. (2005) Reaction-Diffusion Computers, Elsevier, Amsterdam.
  27. Fujita, D., Asai, T., and Amemiya, Y. (2011) A neuromorphic MOS circuit imitating jamming avoidance response of Eigenmannia. Nonlinear Theory and Its Applications, 2(2), 205–217.
  28. Momeni, M. and Titus, A.H. (2006) An analog VLSI chip emulating polarization vision of octopus retina. IEEE Transactions on Neural Networks, 17(1), 222–232.
  29. Türel, Ö. and Likharev, K.K. (2003) CrossNets: Possible neuromorphic networks based on nanoscale components. International Journal of Circuit Theory and Applications, 31(1), 37–52.
  30. Costas-Santos, J., Serrano-Gotarredona, T., Serrano-Gotarredona, R., and Linares-Barranco, B. (2007) A spatial contrast retina with on-chip calibration for neuromorphic spike-based AER vision systems. IEEE Transactions on Circuits and Systems I, 54(7), 1444–1458.
  31. Yuminaka, Y., Sasaki, Y., Aoki, T., and Higuchi, T. (1998) Design of neural networks based on wave-parallel computing technique. Analog Integrated Circuits and Signal Processing, 15(3), 315–327.
  32. Scholarpedia (2012) http://www.scholarpedia.org/article/Silicon_neurons (accessed 16 July 2013).
  33. Oya, T., Schmid, A., Asai, T. et al. (2005) On the fault tolerance of a clustered single-electron neural network for differential enhancement. IEICE Electronics Express, 2(3), 76–80.
  34. Jo, S.H., Chang, T., Ebong, I. et al. (2010) Nanoscale memristor device as synapse in neuromorphic systems. Nano Letters, 10(4), 1297–1301.
  35. Ohno, T., Hasegawa, T., Tsuruoka, T. et al. (2011) Short-term plasticity and long-term potentiation mimicked in single inorganic synapses. Nature Materials, 10(8), 591–595.
  36. Koyanagi, M., Nakagawa, Y., Lee, K.-W. et al. (2001) Neuromorphic vision chip fabricated using three-dimensional integration technology, in ISSCC Dig. Tech. Papers, pp. 270–271.
  37. Yamaguchi, M., Shimada, A., Torimitsu, K., and Nakano, N. (2010) Multichannel biosensing and stimulation LSI chip using 0.18-um CMOS technology. Japanese Journal of Applied Physics, 49(2), 04DL14.
  38. Wikipedia (2012) http://en.wikipedia.org/wiki/Belousov–Zhabotinsky_reaction (accessed 16 July 2013).
  39. Likharev, K.K. (2008) Hybrid CMOS/nanoelectronic circuits: opportunities and challenges. Journal of Nanoelectronics and Optoelectronics, 3(3), 203–230.
  40. Gammaitoni, L., Hanggi, P., Jung, P., and Marchesoni, F. (1998) Stochastic resonance. Reviews of Modern Physics, 70(1), 223–287.
  41. Utagawa, A., Asai, T., and Amemiya, Y. (2011) Stochastic resonance in simple analog circuits with a single operational amplifier having a double-well potential. Nonlinear Theory and Its Applications, 2(4), 409–416.
  42. Oya, T., Schmid, A., Asai, T., and Utagawa, A. (2011) Stochastic resonance in a balanced pair of single-electron boxes. Fluctuation and Noise Letters, 10(3), 267–275.
  43. Kasai, S. and Asai, T. (2008) Stochastic resonance in Schottky wrap gate-controlled GaAs nanowire field effect transistors and their networks. Applied Physics Express, 1(8), 083001.
  44. Hospedales, T.M., van Rossum, M.C.W., Graham, B.P., and Dutia, M.B. (2008) Implications of noise and neural heterogeneity for vestibulo-ocular reflex fidelity. Neural Computation, 20(3), 756–778.
  45. Utagawa, A., Asai, T., and Amemiya, Y. (2011) High-fidelity pulse density modulation in neuromorphic electric circuits utilizing natural heterogeneity. Nonlinear Theory and Its Applications, 2(2), 218–225.
  46. Kikombo, A.K., Asai, T., and Amemiya, Y. (2011) Neuromorphic circuit architectures employing temporal noises and device fluctuations to improve signal-to-noise ratio in a single-electron pulse-density modulator. International Journal of Unconventional Computing, 7(1–2), 53–64.
  47. Nakao, H., Arai, K., and Nagai, K. (2005) Synchrony of limit-cycle oscillators induced by random external impulses. Physical Review E, 72(2), 026220.
  48. Utagawa, A., Asai, T., Hirose, T., and Amemiya, Y. (2008) Noise-induced synchronization among sub-RF CMOS analog oscillators for skew-free clock distribution. IEICE Transactions on Fundamentals, E91-A(9), 2475–2481.
  49. Senn, W., Segev, I., and Tsodyks, M. (1998) Reading neuronal synchrony with depressing synapses. Neural Computation, 10(4), 815–819.
  50. Oya, T., Asai, T., Kagaya, R. et al. (2006) Neuronal synchrony detection on single-electron neural network. Chaos, Solitons and Fractals, 27(4), 887–894.
  51. Mar, D.J., Chow, C.C., Gerstner, W. et al. (1999) Noise shaping in populations of coupled model neurons. Proceedings of the National Academy of Sciences, 96(18), 10450–10455.
  52. Utagawa, A., Asai, T., Hirose, T., and Amemiya, Y. (2007) An inhibitory neural-network circuit exhibiting noise shaping with subthreshold MOS neuron circuits. IEICE Transactions on Fundamentals, E90-A(10), 2108–2115.
  53. Kikombo, A.K., Asai, T., Oya, T. et al. (2009) A neuromorphic single-electron circuit for noise-shaping pulse-density modulation. International Journal of Nanotechnology and Molecular Computation, 1(2), 80–92.
  54. Zhirnov, V., Cavin, R., Leeming, G., and Galatsis, K. (2008) An assessment of integrated digital cellular automata architectures. IEEE Computer, 41(1), 38–44.
  55. Durbeck, L. and Macias, N. (2001) The Cell Matrix: An architecture for nanocomputing. Nanotechnology, 12(3), 217–230.
  56. Peper, F., Lee, J., Abo, F. et al. (2004) Fault-tolerance in nanocomputers: A cellular array approach. IEEE Transactions on Nanotechnology, 3(1), 187–201.
  57. Lee, J. and Peper, F. (2008) On Brownian cellular automata, in Proc. Automata, pp. 278–291.
  58. Preston, K., Duff, M.J.B., Levialdi, S. et al. (1979) Basics of cellular logic with some applications in medical image processing. Proceedings of the IEEE, 67(5), 826–856.
  59. Sunayama, T., Ikebe, M., Asai, T., and Amemiya, Y. (2000) Cellular vMOS circuits performing edge detection with difference-of-Gaussian filters. Japanese Journal of Applied Physics, 39(4B), 2278–2286.
  60. Asai, T., Sunayama, T., Amemiya, Y., and Ikebe, M. (2001) A vMOS vision chip based on cellular-automaton processing. Japanese Journal of Applied Physics, 40(4B), 2585–2592.
  61. Mankar, V.H., Das, T.S., and Sarkar, S.K. (2007) Cellular automata based robust watermarking architecture towards the VLSI realization, in Proc. World Acad. of Sci., Eng., and Techn., pp. 20–29.
  62. Shin, J., Yoon, S., and Park, D.S. (2010) Contents-based digital image protection using 2-D cellular automata transforms. IEICE Electronics Express, 7(11), 772–778.
  63. Motomura, M., Yamada, H., and Enomoto, T. (1992) A 2K-word dictionary search processor (DISP) LSI with an approximate word search capability. IEEE Journal of Solid-State Circuits, 27(6), 883–891.
  64. Wasaki, K. (2008) Self-stabilizing model of a memory controller based on the cellular automata. International Journal of Computer Science and Network Security, 8(3), 222–227.
  65. Hortensius, P.D., McLeod, R.D., Pries, W. et al. (1989) Cellular automata-based pseudorandom number generators for built-in self-test. IEEE Transactions on Computer-Aided Design, 8(8), 842–859.
  66. Dasgupta, P., Chattopadhyay, S., Chaudhuri, P.P., and Sengupta, I. (2001) Cellular automata-based recursive pseudoexhaustive test pattern generator. IEEE Transactions on Computers, 50(2), 177–185.
  67. Ganguly, N., Sikdar, B.K., Deutsch, A. et al. (2003) A Survey on Cellular Automata. Technical Report, Centre for High Performance Computing, Dresden University of Technology.
  68. Heinrich, A.J., Lutz, C.P., Gupta, J.A., and Eigler, D.M. (2002) Molecule cascades. Science, 298(5597), 1381–1387.
  69. Bandyopadhyay, A., Pati, R., Sahu, S. et al. (2010) Massively parallel computing on an organic molecular layer. Nature Physics, 6(5), 369–375.
  70. Destexhe, A. and Contreras, D. (2006) Neuronal computations with stochastic network states. Science, 314(5796), 85–90.