
- A Brief History of Connectionism_Medler98.pdf
  Read: No
  Title: A Brief History of Connectionism
  Authors: Medler, D.A.
  Published on: NEURAL COMPUTING SURVEYS - VOLUME 1 - 1998
  Downloaded from: http://www.soe.ucsc.edu/NCS/vol1.html
  Abstract: Connectionist research is firmly established within the scientific community, especially within the multi-disciplinary field of cognitive science. This diversity, however, has created an environment which makes it difficult for connectionist researchers to remain aware of recent advances in the field, let alone understand how the field has developed. This paper attempts to address this problem by providing a brief guide to connectionist research. The paper begins by defining the basic tenets of connectionism. Next, the development of connectionist research is traced, commencing with connectionism's philosophical predecessors, moving to early psychological and neuropsychological influences, followed by the mathematical and computing contributions to connectionist research. Current research is then reviewed, focusing specifically on the different types of network architectures and learning rules in use. The paper concludes by suggesting that neural network research - at least in cognitive science - should move towards models that incorporate the relevant functional principles inherent in neurobiological systems.
  Summary: Not available
  Keywords: To complete

- A CMOS Field-Programmable Analog Array_Lee91.pdf
  Read: No
  Title: A CMOS Field-Programmable Analog Array
  Authors: Lee, E.K.F. and Gulak, P.G.
  Published on: IEEE JOURNAL OF SOLID-STATE CIRCUITS, VOL. 26, NO. 12, DECEMBER 1991
  Downloaded from: IEEE Xplorer
  Abstract: The design details and test results of a field-programmable analog array (FPAA) prototype chip in 1.2-um CMOS are presented. The analog array is based on subthreshold circuit techniques and consists of a collection of homogeneous configurable analog blocks (CABs) and an interconnection network. Interconnections between CABs and the analog functions to be implemented in each block are defined by a set of configuration bits loaded serially into an on-board shift register by the user. Macromodels are developed for the analog functions in order to simulate various neural network applications on the field-programmable analog array.
  Summary: Not available
  Keywords: To complete

- A neuronal learning rule for sub-millisecond temporal coding_Gerstner96.ps
  Read: No
  Title: A neuronal learning rule for sub-millisecond temporal coding
  Authors: Gerstner, W. et al.
  Published on: Nature, September 1996
  Downloaded from: http://www.keck.ucsf.edu/~kempter/Publications/1996/nature96.ps.gz
  Abstract: A paradox that exists in auditory and electrosensory neural systems is that they encode behaviourally relevant signals in the range of a few microseconds with neurons that are at least one order of magnitude slower. The importance of temporal coding in neural information processing is not clear yet. A central question is whether neuronal firing can be more precise than the time constants of the neuronal processes involved. Here we address this problem using the auditory system of the barn owl as an example. We present a modelling study based on computer simulations of a neuron in the laminar nucleus. Three observations explain the paradox. First, spiking of an 'integrate-and-fire' neuron driven by excitatory postsynaptic potentials with a width at half-maximum height of 250 µs has an accuracy of 25 µs if the presynaptic signals arrive coherently. Second, the necessary degree of coherence in the signal arrival times can be attained during ontogenetic development by virtue of an unsupervised Hebbian learning rule. Learning selects connections with matching delays from a broad distribution of axons with random delays. Third, the learning rule also selects the correct delays from two independent groups of inputs, for example, from the left and right ear.
  Summary: Not available
  Keywords: To complete
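Several entries in this list (this one, Jolivet05, Vogelstein04, Indiveri_TNE05) take the leaky integrate-and-fire neuron as their starting point without restating it. A minimal sketch in Python is given below as a reference point; the parameter values are generic textbook-style choices, not the ones used in the barn-owl model.

import numpy as np

def lif_neuron(I, dt=0.1, tau_m=20.0, v_rest=-70.0, v_thresh=-50.0, v_reset=-65.0):
    """Leaky integrate-and-fire: the membrane potential v leaks towards v_rest,
    integrates the input drive I, and emits a spike (then resets) at v_thresh."""
    v, spike_times = v_rest, []
    for step, i_ext in enumerate(I):
        v += dt * (-(v - v_rest) + i_ext) / tau_m   # leaky integration (Euler step)
        if v >= v_thresh:
            spike_times.append(step * dt)           # record spike time in ms
            v = v_reset                             # reset after the spike
    return spike_times

# Example: 200 ms of constant suprathreshold drive produces regular firing
spikes = lif_neuron(I=np.full(2000, 25.0))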

- A Programmable Analog Cellular Neural Network CMOS Chip for High Speed Image Processing_Kinget95.pdf
  Read: No
  Title: A Programmable Analog Cellular Neural Network CMOS Chip for High Speed Image Processing
  Authors: Kinget, P. and Steyaert, M.S.J.
  Published on: IEEE Journal of Solid-State Circuits, vol. 30, no. 3, March 1995
  Downloaded from: IEEE Xplorer
  Abstract: A high speed analog image processor chip is presented. It is based on the cellular neural network architecture. The implementation of an analog programmable CNN chip in a standard CMOS technology is discussed. The control parameters or templates in all cells are under direct user control and are tunable over a continuous value range from 1/4 to 4. This tuning property is implemented with a compact current scaling circuit based on MOS transistors operating in the linear region. A 4 x 4 CNN prototype system has been designed in a 2.4 um CMOS technology and successfully tested. The cell density is 380 cells/cm² and the cell time constant is 10 ps. The current drain for a typical template is 40 pA/cell. The real-time image processing capabilities of the system are demonstrated. From this prototype it is estimated that a 128 x 128 fully programmable analog image processing system can be integrated on a single chip using a standard digital submicron CMOS technology. This work demonstrates that powerful high speed programmable analog processing systems can be built using standard CMOS technologies.
  Summary: Not available
  Keywords: To complete
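The cellular neural network (CNN) dynamics the chip implements are not restated in the abstract. As a reminder, a minimal discrete-time sketch of the standard Chua-Yang update is given below in Python; the 3x3 template values and the input image are illustrative placeholders of my own, not the templates programmed on the chip.

import numpy as np
from scipy.signal import convolve2d

def cnn_step(x, u, A, B, z, dt=0.1):
    """One Euler step of the standard CNN dynamics:
    dx/dt = -x + A * y + B * u + z, with y = 0.5*(|x+1| - |x-1|).
    Zero boundary conditions; templates here are symmetric, so convolution
    and correlation coincide."""
    y = 0.5 * (np.abs(x + 1) - np.abs(x - 1))     # piecewise-linear cell output
    feedback = convolve2d(y, A, mode="same")
    feedforward = convolve2d(u, B, mode="same")
    return x + dt * (-x + feedback + feedforward + z)

# Illustrative edge-detection-style templates (placeholders)
A = np.array([[0, 0, 0], [0, 2, 0], [0, 0, 0]], float)
B = np.array([[-1, -1, -1], [-1, 8, -1], [-1, -1, -1]], float)
u = np.random.rand(4, 4)          # input image (4 x 4, like the prototype array)
x = np.zeros_like(u)              # initial state
for _ in range(100):
    x = cnn_step(x, u, A, B, z=-0.5)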

- Competitive Hebbian learning through spike-time-dependent synaptic plasticity_Song00.pdf
  Read: Yes
  Title: Competitive Hebbian learning through spike-time-dependent synaptic plasticity
  Authors: Song, S., Miller, K.D. and Abbott, L.F.
  Published on: Nature America 2000
  Downloaded from: ?????
  Abstract: Hebbian models of development and learning require both activity-dependent synaptic plasticity and a mechanism that induces competition between different synapses. One form of experimentally observed long-term synaptic plasticity, which we call spike-timing-dependent plasticity (STDP), depends on the relative timing of pre- and postsynaptic action potentials. In modeling studies, we find that this form of synaptic modification can automatically balance synaptic strengths to make postsynaptic firing irregular but more sensitive to presynaptic spike timing. It has been argued that neurons in vivo operate in such a balanced regime. Synapses modifiable by STDP compete for control of the timing of postsynaptic action potentials. Inputs that fire the postsynaptic neuron with short latency or that act in correlated groups are able to compete most successfully and develop strong synapses, while synapses of longer-latency or less-effective inputs are weakened.
  Summary: This paper states that Hebbian learning relies on two critical mechanisms: activity-dependent modification (well known) and competition (not well known). Previous models require global intracellular signalling to reflect the state of many synapses. To avoid this, the spike-timing-dependent plasticity (STDP) form of synaptic modification is presented. This rule states that long-term strengthening of a synapse occurs when the time difference between pre- and postsynaptic action potentials is short (pre before post); on the other hand, long-term weakening results from a short time difference between post- and presynaptic action potentials (post before pre). STDP is a stable mechanism, reduces the latency of the neuron, is strongly affected by correlations between different inputs when those correlations decay rapidly, and is insensitive to the average rate and to rate variations. Thus STDP shows the basic feature of Hebbian learning, the strengthening of correlated groups of synapses, while displaying the desirable features of firing-rate independence and stability, and a novel dependence on the correlation decay time.
  Keywords: STDP, learning competition, latency shortening, time correlation
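The summary above describes the STDP rule only in words. A minimal pair-based sketch of the weight update follows in Python; the time constants, amplitudes and weight bound are illustrative values of my own, not the parameters used by Song et al.

import numpy as np

# Illustrative STDP parameters (not the values from the paper)
TAU_PLUS, TAU_MINUS = 20.0, 20.0   # ms, widths of the potentiation/depression windows
A_PLUS, A_MINUS = 0.005, 0.00525   # depression slightly stronger, as in typical STDP models
G_MAX = 1.0                        # hard upper bound on the synaptic weight

def stdp_update(w, t_pre, t_post):
    """Pair-based STDP: potentiate if the presynaptic spike precedes the
    postsynaptic spike (dt > 0), depress if it follows (dt < 0)."""
    dt = t_post - t_pre            # ms
    if dt > 0:
        w += A_PLUS * np.exp(-dt / TAU_PLUS)    # pre before post -> strengthen
    else:
        w -= A_MINUS * np.exp(dt / TAU_MINUS)   # post before pre -> weaken
    return float(np.clip(w, 0.0, G_MAX))        # keep the weight in [0, G_MAX]

# Example: a synapse whose input repeatedly fires 5 ms before the postsynaptic spike grows
w = 0.5
for t in range(0, 1000, 50):
    w = stdp_update(w, t_pre=t, t_post=t + 5.0)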

- Evolutionary Analog Circuit Design on a Programmable Analog Multiplexer Array_Vellasco02.pdf
  Read: No
  Title: Evolutionary Analog Circuit Design on a Programmable Analog Multiplexer Array
  Authors: Santini, C.C. et al.
  Published on: Field-Programmable Technology, 2002 (FPT). Proceedings. 2002 IEEE International Conference on
  Downloaded from: IEEE Xplorer
  Abstract: This work discusses an Evolvable Hardware (EHW) platform for the synthesis of analog electronic circuits. The EHW analog platform, named PAMA (Programmable Analog Multiplexer Array), is a reconfigurable platform that consists of integrated circuits whose internal connections can be programmed by Evolutionary Computation techniques, such as Genetic Algorithms, to synthesize circuits. The PAMA is classified as a Field Programmable Analog Array (FPAA). FPAAs have just recently appeared, and most projects are being carried out in universities and research centers. They constitute the state of the art in the technology of reconfigurable platforms. These devices will become the building block of a forthcoming class of hardware, with the important features of self-adaptation and self-repairing, through automatic reconfiguration. The PAMA platform architectural details, concepts and characteristics are discussed. Three case studies, with promising results, are described: an operational amplifier, a logarithmic amplifier and a membership function circuit of a fuzzy logic controller.
  Summary: Not available
  Keywords: FPAA, Evolvable Hardware, Genetic Algorithms, PAMA, fuzzy logic

- Evolvable Hardware for Autonomous Systems_Stoica04.pdf
  Read: No
  Title: Evolvable Hardware for Autonomous Systems
  Authors: Stoica, A.
  Published on: CEC-2004 Tutorial, Portland, Oregon
  Downloaded from: http://ehw.jpl.nasa.gov/Content/Public/TutorialCEC2004/TutorialCEC2004.pdf
  Abstract (Tutorial Overview):
    - Introduction to EHW
    - Reconfigurable and Morphable Hardware
    - Algorithms for self-configuration and evolution
    - Demonstrations of Evolvable Systems
    - Application Examples
    - System Aspects
    - Resources for EHW Engineers
  Summary: Not available
  Keywords: To Complete
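The evolvable-hardware entries above (PAMA, and later the FPTA and GRACE entries) all rest on the same loop: an evolutionary algorithm searches over configuration bitstrings that are downloaded to the reconfigurable device, and fitness is measured on the resulting circuit. A minimal sketch of that loop follows in Python; evaluate_on_hardware is a hypothetical stand-in for programming the device and measuring its response, and all genetic-algorithm parameters are illustrative.

import random

GENOME_BITS = 64          # illustrative size of the configuration bitstring
POP_SIZE, GENERATIONS, MUTATION_RATE = 30, 50, 0.02

def evaluate_on_hardware(bits):
    """Hypothetical placeholder: download 'bits' to the reconfigurable analog
    device, apply test stimuli, and return a measured fitness (higher = better).
    Faked here so the sketch runs stand-alone."""
    return sum(bits) / len(bits)

def mutate(bits):
    return [b ^ (random.random() < MUTATION_RATE) for b in bits]

def crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

population = [[random.randint(0, 1) for _ in range(GENOME_BITS)] for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    scored = sorted(population, key=evaluate_on_hardware, reverse=True)
    parents = scored[:POP_SIZE // 2]                     # truncation selection
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP_SIZE - len(parents))]
    population = parents + children
best = max(population, key=evaluate_on_hardware)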

- Extended Address Event Representation Draft.pdf
  Read: No
  Title: Extended Address Event Representation Draft Standard v0.4
  Authors: Undefined
  Published on: ???
  Downloaded from: http://www.neuroengineering.upenn.edu/boahen/pdf/methAER.pdf
  Abstract: Address Event Representation (AER) is a representation for the communication of asynchronous events between a collection of units doing computation in parallel. Although a large part of the intent of the AER standard is to allow communication between neuromorphic circuitry, we make no commitment to the details of the units involved: they may be neuromorphic integrated circuits, digital circuits, computer software emulations of neural systems, biological neurons, or standard computer programs. We shall generally use neuromorphic circuitry as an example medium when discussing issues in this document. That is not to be taken as prescriptive (or proscriptive).
  Summary: Not available
  Keywords: AER protocol standard
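The draft itself defines the actual packet format; the fragment below is only an illustration of the basic address-event idea, namely that a spike is communicated as a (timestamp, address) pair and that several event streams can be merged in time order, as an AER bus arbiter conceptually does. The field names are my own, not those of the standard (Python).

import heapq
from typing import NamedTuple

class AddressEvent(NamedTuple):
    timestamp_us: int   # when the unit spiked (microseconds, illustrative resolution)
    address: int        # which neuron/unit spiked

def merge_event_streams(*streams):
    """Merge already time-sorted event streams from several senders into one
    time-ordered stream."""
    return list(heapq.merge(*streams, key=lambda e: e.timestamp_us))

chip_a = [AddressEvent(10, 3), AddressEvent(40, 7)]
chip_b = [AddressEvent(25, 128), AddressEvent(41, 129)]
bus = merge_event_streams(chip_a, chip_b)   # events ordered by timestamp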

- GRACE_Generative Robust Analog Circuit Exploration_Terry.pdf
  Read: No
  Title: GRACE: Generative Robust Analog Circuit Exploration
  Authors: Terry, M.A. et al.
  Published on: Proceedings of Applications of Evolutionary Computing, EvoWorkshops 2006 (EvoHOT), Lecture Notes in Computer Science 3907, pp. 332-343, Springer Verlag.
  Downloaded from: http://people.csail.mit.edu/unamay/publications.html
  Abstract: We motivate and describe an analog evolvable hardware design platform named GRACE (i.e. Generative Robust Analog Circuit Exploration). GRACE combines coarse-grained, topological circuit search with intrinsic testing on a Commercial Off-The-Shelf (COTS) field programmable device, the Anadigm AN221E04. It is suited for adaptive, fault tolerant system design as well as CAD flow applications.
  Summary: Not available
  Keywords: To complete

- Implementing neural models in silicon_Smithtutorial_BICS04.pdf
  Read: No
  Title: Implementing Neural Models in Silicon
  Authors: Smith, L.
  Published on: Presented at BICS 2004
  Downloaded from: http://www.cs.stir.ac.uk/~lss/BICS2004/Tutorials/Smithtutorial.pdf
  Abstract:
    - Why silicon implementation
    - Real neurons (a very quick introduction)
    - What to implement
    - Implementation Technologies
    - Spiking systems
    - Synapses
    - Concluding thoughts
  Summary: Not available
  Keywords: To complete

- Integrate-and-Fire models with adaptation are good enough_Jolivet05.pdf
  Read: No
  Title: Integrate-and-Fire models with adaptation are good enough: predicting spike times under random current injection
  Authors: Jolivet, R. et al.
  Published on: Advances in Neural Information Processing Systems 18, Y. Weiss, B. Schölkopf and J. Platt (eds.), MIT Press, Cambridge, pp. 595-602
  Downloaded from: http://books.nips.cc/papers/files/nips18/NIPS2005_0056.pdf
  Abstract: Integrate-and-Fire-type models are usually criticized because of their simplicity. On the other hand, the Integrate-and-Fire model is the basis of most of the theoretical studies on spiking neuron models. Here, we develop a sequential procedure to quantitatively evaluate an equivalent Integrate-and-Fire-type model based on intracellular recordings of cortical pyramidal neurons. We find that the resulting effective model is sufficient to predict the spike train of the real pyramidal neuron with high accuracy. In in vivo-like regimes, predicted and recorded traces are almost indistinguishable and a significant part of the spikes can be predicted at the correct timing. Slow processes like spike-frequency adaptation are shown to be a key feature in this context since they are necessary for the model to connect between different driving regimes.
  Summary: Not available
  Keywords: To complete
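The "adaptation" in the title is, in its simplest form, a spike-triggered current that builds up with each output spike and decays between spikes, lowering the firing rate under sustained drive. A minimal sketch extending the leaky integrate-and-fire model from earlier in this list follows in Python; the parameter values are illustrative, not the ones fitted by Jolivet et al.

import numpy as np

def adaptive_lif(I, dt=0.1, tau_m=20.0, tau_w=200.0, b=2.0,
                 v_rest=-70.0, v_thresh=-50.0, v_reset=-65.0):
    """Leaky integrate-and-fire neuron with a spike-triggered adaptation
    current w; each output spike increments w by b, and w decays with tau_w."""
    v, w, spikes = v_rest, 0.0, []
    for step, i_ext in enumerate(I):
        v += dt * (-(v - v_rest) - w + i_ext) / tau_m   # leak + drive - adaptation
        w += dt * (-w / tau_w)                          # adaptation current decays
        if v >= v_thresh:                               # threshold crossing -> spike
            spikes.append(step * dt)
            v = v_reset                                 # reset membrane potential
            w += b                                      # strengthen adaptation current
    return spikes

# Constant drive: inter-spike intervals lengthen as the adaptation current accumulates
spike_times = adaptive_lif(I=np.full(20000, 30.0))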

- Large-Scale Field-Programmable Analog Arrays for Analog Signal Processing_Hall05.pdf
  Read: No
  Title: Large-Scale Field-Programmable Analog Arrays for Analog Signal Processing
  Authors: Hall, T.S. et al.
  Published on: IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS I: REGULAR PAPERS, VOL. 52, NO. 11, NOVEMBER 2005
  Downloaded from: IEEE Xplorer
  Abstract: Field-programmable analog arrays (FPAAs) provide a method for rapidly prototyping analog systems. Currently available commercial and academic FPAAs are typically based on operational amplifiers (or other similar analog primitives) with only a few computational elements per chip. While their specific architectures vary, their small sizes and often restrictive interconnect designs leave current FPAAs limited in functionality and flexibility. For FPAAs to enter the realm of large-scale reconfigurable devices such as modern field-programmable gate arrays (FPGAs), new technologies must be explored to provide area-efficient, accurately programmable analog circuitry that can be easily integrated into a larger digital/mixed-signal system. Recent advances in the area of floating-gate transistors have led to a core technology that exhibits many of these qualities, and current research promises a digitally controllable analog technology that can be directly mated to commercial FPGAs. By leveraging these advances, a new generation of FPAAs is introduced in this paper that will dramatically advance the current state of the art in terms of size, functionality, and flexibility. FPAAs have been fabricated using floating-gate transistors as the sole programmable element, and the results of characterization and system-level experiments on the most recent FPAA are shown.
  Summary: Not available
  Keywords: To Complete

- Low power VLSI implementation of pulse-based neural networks with CMOS controlled conductance_Han06.pdf
  Read: No
  Title: Low power VLSI implementation of pulse-based neural networks with CMOS controlled conductance
  Authors: Han, I.S.
  Published on: To be published
  Downloaded from: email
  Abstract: This paper describes a new pulse-based neural network VLSI based on the tunable CMOS linear conductance. The controlled conductance produces the synaptic or neuron function, which are inspired by the biological plausibility and low power supply. The synaptic computation speed is up to maximum pulse frequency. The power consumption is reduced, as active synapses only consume the power. The neuron based on the controlled conductance demonstrates the asynchronous pulse generation with a refractory period, and the behaviour of integration-and-firing with Address-Event Representation (AER). The test circuit was fabricated in AMIS 0.7 micron CMOS technology. The experimentation exhibits the behaviour of linear controlled conductance, as observed in SPICE simulation.
  Summary: Not available
  Keywords: To Complete

- Low-Power Programmable Signal Processing_Hasler05.pdf
  Read: No
  Title: Low-Power Programmable Signal Processing
  Authors: Hasler, P.
  Published on: Proceedings of the 9th International Database Engineering & Application Symposium (IDEAS05)
  Downloaded from: IEEE Xplorer
  Abstract: This paper presents the potential of using Programmable Analog Signal processing techniques for impacting low-power portable applications like imaging, audio processing, and speech recognition. The range of analog signal processing functions available results in many potential opportunities to incorporate these analog signal processing systems with digital signal processing systems for improved overall system performance. Programmable, dense analog techniques enable these approaches, based upon programmable transistor approaches. We show experimental evidence for the factor of 1000 to 10,000 power efficiency improvement for programmable analog signal processing compared to custom digital implementations in Vector Matrix Multipliers (VMM), CMOS imagers with computation on the pixel plane with high fill factors, and Large-Scale Field Programmable Analog Arrays (FPAA), among others.
  Summary: Not available
  Keywords: To Complete

- Neuron Function_The Mystery Persists_Schreiner01.pdf
  Read: No
  Title: Neuron Function: The Mystery Persists
  Authors: Schreiner, K.
  Published on: IEEE INTELLIGENT SYSTEMS, NOVEMBER/DECEMBER 2001
  Downloaded from: http://www.cs.washington.edu/homes/diorio/MURI2003/Publications/x6Intelligencer.pdf
  Abstract: Not Available
  Summary: Not available
  Keywords: To Complete

- Neural Computation_A research topic for Theoretical Computer Science_Some Thoughts and Pointers_Mass01.pdf
  Read: No
  Title: Neural Computation: A research topic for Theoretical Computer Science? Some Thoughts and Pointers.
  Authors: Maass, W.
  Published on: Current Trends in Theoretical Computer Science, Entering the 21st Century, pages 680-690. World Scientific Publishing, 2001. In Rozenberg G., Salomaa A., and Paun G., editors.
  Downloaded from: http://www.igi.tugraz.at/maass/psfiles/123a.pdf
  Abstract: We address difficulties and opportunities for research contributions from theoretical computer science in the area of neural computation. In addition some pointers to sources for further information are provided.
  Summary: Not available
  Keywords: To complete

- Overview of Field Programmable Analog Arrays as Enabling Technology for Evolvable Hardware for High Reliability Systems_Plante03.pdf
  Read: No
  Title: Overview of Field Programmable Analog Arrays as Enabling Technology for Evolvable Hardware for High Reliability Systems
  Authors: Mickens, L. et al.
  Published on: Evolvable Hardware, 2003. Proceedings. NASA/DoD Conference on
  Downloaded from: IEEE Xplorer
  Abstract: The recent commercial availability of Field Programmable Analog Arrays (FPAAs) is leading designers of high reliability space and ground support systems to consider how these devices can enable new applications. They hold promise for analog systems that require reactive evolvability such as those that correct defective mechanical deployments. They are also suited to evolving circuits, which change with temporary or degenerative electrical conditions such as those associated with power systems during specific periods of orbit in space flight and with the effects of aging on electronic components. FPAAs are seen as a key enabler to a system in development that will enable circuit development in a hardware environment that can be programmed and tailored for a given system's electrical input/output/load environment.
  Summary: Not available
  Keywords: To complete

- Palmo_Pulse-Based Signal Processing for Programmable Analog VLSI_Hamilton02.pdf
  Read: No
  Title: Palmo: Pulse-Based Signal Processing for Programmable Analog VLSI
  Authors: Papathanasiou, K., Brandtner, T. and Hamilton, A.
  Published on: IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS II: ANALOG AND DIGITAL SIGNAL PROCESSING, VOL. 49, NO. 6, JUNE 2002
  Downloaded from: IEEE Xplorer
  Abstract: This paper presents novel signaling and circuit techniques for the implementation of programmable analog and mixed signal very large scale integration (VLSI). The signaling technique uses pulsewidth modulated digital signals to convey analog signal information between programmable analog cells. A circuit for a generic programmable analog cell is introduced and is analyzed for harmonic distortion performance. The equivalence of the cell to a switched-capacitor (SC) Miller integrator is proven. Voltage- and current-mode (CM) implementations of the generic programmable analog cell are introduced; an elegant, fast current controlled comparator is presented, and results from working analog VLSI implementations provided.
  Summary: Not available
  Keywords: FPAA, palmo, programmable analog VLSI, pulse-based signal processing

- Reconfigurable VLSI Architectures for Evolvable Hardware_From Experimental Field Programmable Transistor Arrays to Evolution-Oriented Chips_Stoica01.pdf
  Read: No
  Title: Reconfigurable VLSI Architectures for Evolvable Hardware: From Experimental Field Programmable Transistor Arrays to Evolution-Oriented Chips
  Authors: Stoica, A. et al.
  Published on: IEEE TRANSACTIONS ON VERY LARGE SCALE INTEGRATION (VLSI) SYSTEMS, VOL. 9, NO. 1, FEBRUARY 2001
  Downloaded from: IEEE Xplorer
  Abstract: Evolvable hardware (EHW) addresses on-chip adaptation and self-configuration through evolutionary algorithms. Current programmable devices, in particular the analog ones, lack evolution-oriented characteristics. This paper proposes an evolution-oriented field programmable transistor array (FPTA), reconfigurable at transistor level. The FPTA allows evolutionary experiments with reconfiguration at various levels of granularity. Experiments in SPICE simulations and directly on a reconfigurable FPTA chip demonstrate how the evolutionary approach can be used to automatically synthesize a variety of analog and digital circuits.
  Summary: Not available
  Keywords: To complete

- Silicon Spike-Based Synaptic Array and Address-Event_Vogelstein04.pdf
  Read: No
  Title: Silicon Spike-Based Synaptic Array and Address-Event Transceiver
  Authors: Vogelstein, R.J., Mallik, U. and Cauwenberghs, G.
  Published on: Proc. IEEE Int. Symp. Circuits and Systems (ISCAS'2004), Vancouver, Canada, May 23-26, 2004.
  Downloaded from: http://bach.ece.jhu.edu/pub/papers/iscas04_ifat.pdf
  Abstract: An integrated array of 2,400 spiking silicon neurons, with reconfigurable synaptic connectivity and adjustable neural spike-based dynamics, is presented. At the system level, the chip serves as an address-event transceiver, with incoming and outgoing spikes communicated over an asynchronous event-driven bus. Internally, every cell implements a spiking neuron that models general principles of synaptic operation as observed in biological membranes. Synaptic conductance and synaptic reversal potential can be dynamically modulated for each event. The implementation employs mixed-signal charge-based circuits to facilitate digital control of system parameters and minimize variability due to transistor mismatch. In addition to describing the structure of the silicon neurons, we present experimental data characterizing the operation of the 3 mm x 3 mm chip fabricated in 0.5 um CMOS technology.
  Summary: Not available
  Keywords: To complete

- Silicon Synaptic Homeostasis_Bartolozzi06.pdf
  Read: No
  Title: Silicon Synaptic Homeostasis
  Authors: Bartolozzi, C. and Indiveri, G.
  Published on: To be published
  Downloaded from: email
  Abstract: Synaptic homeostasis is a mechanism present in biological neural systems used to stabilize the network's activity. It acts by scaling the synaptic weights in order to keep the neuron's firing rate within a functional range, in face of chronic changes of their activity level, while preserving the relative differences between individual synapses. In analog VLSI spike-based neural networks, homeostasis is an appealing biologically inspired means to solve technological issues such as mismatch, temperature drifts or long lasting dramatic changes in the input activity level. Here we present a new synaptic circuit, the Diff-Pair-Integrator, designed to reproduce the biological temporal evolution of post-synaptic currents, and compatible with implementation of spike-based learning and homeostasis. We describe the silicon synapse and show how it can be used in conjunction with a software control algorithm to model synaptic scaling homeostatic mechanisms.
  Summary: Not available
  Keywords: aVLSI; neuromorphic; synapse; homeostasis; spike-based

- Spike-timing-dependent synaptic plasticity_from single spikes to spike trains_Panckev04.pdf
  Read: No
  Title: Spike-timing-dependent synaptic plasticity: from single spikes to spike trains
  Authors: Panchev, C. and Wermter, S.
  Published on: Neurocomputing, Vol. 58-60, pp. 365-371, 2004.
  Downloaded from: http://www.his.sunderland.ac.uk/ps/neuro-panwer.pdf
  Abstract: We present a neurobiologically motivated model of a neuron with active dendrites and dynamic synapses, and a training algorithm which builds upon single spike-timing-dependent synaptic plasticity derived from neurophysiological evidence. We show that in the presence of a moderate level of noise, the plasticity rule can be extended from single to multiple pre-synaptic spikes and applied to effectively train a neuron in detecting temporal sequences of spike trains. The trained neuron responds reliably under different regimes and types of noise.
  Summary: Not available
  Keywords: To complete
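The Bartolozzi and Indiveri abstract describes the homeostatic mechanism only qualitatively: weights are rescaled so the firing rate stays near a target while relative synaptic strengths are preserved. A minimal multiplicative-scaling sketch of such a control loop follows in Python; the gain, target rate and example values are illustrative choices of mine, not the paper's software control algorithm.

import numpy as np

TARGET_RATE = 20.0     # Hz, desired output firing rate (illustrative)
GAIN = 0.01            # how aggressively the controller corrects rate errors

def homeostatic_scaling(weights, measured_rate):
    """Multiplicatively rescale all synaptic weights towards the target rate.
    A single common factor preserves the relative differences between synapses."""
    factor = 1.0 + GAIN * (TARGET_RATE - measured_rate) / TARGET_RATE
    return weights * factor

# Example: if the neuron fires too fast, every weight shrinks by the same ratio
w = np.array([0.2, 0.5, 0.9])
w = homeostatic_scaling(w, measured_rate=35.0)   # all weights scaled down together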

- Supervised and Unsupervised Learning with Two Sites of Synaptic Integration_Kording01.pdf
  Read: No
  Title: Supervised and Unsupervised Learning with Two Sites of Synaptic Integration
  Authors: Körding, K. and König, P.
  Published on: Journal of Computational Neuroscience 11, 207-215, 2001, Kluwer Academic Publishers.
  Downloaded from: http://www.koerding.com/pubs/393450-Kording.pdf
  Abstract: Many learning rules for neural networks derive from abstract objective functions. The weights in those networks are typically optimized utilizing gradient ascent on the objective function. In those networks each neuron needs to store two variables. One variable, called activity, contains the bottom-up sensory-fugal information involved in the core signal processing. The other variable typically describes the derivative of the objective function with respect to the cell's activity and is exclusively used for learning. This variable allows the objective function's derivative to be calculated with respect to each weight and thus the weight update. Although this approach is widely used, the mapping of such two variables onto physiology is unclear, and these learning algorithms are often considered biologically unrealistic. However, recent research on the properties of cortical pyramidal neurons shows that these cells have at least two sites of synaptic integration, the basal and the apical dendrite, and are thus appropriately described by at least two variables. Here we discuss whether these results could constitute a physiological basis for the described abstract learning rules. As examples we demonstrate an implementation of the backpropagation of error algorithm and a specific self-supervised learning algorithm using these principles. Thus, compared to standard, one-integration-site neurons, it is possible to incorporate interesting properties in neural networks that are inspired by physiology with a modest increase of complexity.
  Summary: Not available
  Keywords: To complete

- VLSI reconfigurable networks of integrate-and-fire neurons with spike-timing dependent plasticity_Indiveri_TNE05.pdf
  Read: No
  Title: VLSI reconfigurable networks of integrate-and-fire neurons with spike-timing dependent plasticity
  Authors: Indiveri, G.
  Published on: The Neuromorphic Engineer, volume 2, number 1, March 2005
  Downloaded from: http://www.ine-web.org
  Abstract: In the past few years we have seen an increasing number of projects and demonstrations at the Telluride Neuromorphic Engineering Workshops that involve multi-chip spiking systems interfaced via the address-event representation (AER). Indeed, a renewed interest in spiking neural networks is leading to the design and fabrication of an increasing number of VLSI AER devices that implement various types of networks of integrate-and-fire (I&F) neurons. Such devices can be used in conjunction with PCI-AER boards of the type described in the article by Dante et al. on page 5. In this case, they can be considered as a new generation of hardware neural-network emulators that enable researchers to carry out simulations of large networks of spiking neurons with complex dynamics in real time, possibly solving computationally demanding tasks.
  Summary: Not available
  Keywords: To complete

- What makes a dynamical system computationally powerful_Legenstein05.pdf
  Read: No
  Title: What makes a dynamical system computationally powerful?
  Authors: Legenstein, R. and Maass, W.
  Published on: New Directions in Statistical Signal Processing: From Systems to Brain. MIT Press, 2005. S. Haykin, J.C. Principe, T.J. Sejnowski, and J.G. McWhirter, editors.
  Downloaded from: http://www.igi.tugraz.at/maass/psfiles/165.pdf
  Abstract: We review methods for estimating the computational capability of a complex dynamical system. The main examples that we discuss are models for cortical neural microcircuits with varying degrees of biological accuracy, in the context of online computations on complex input streams. We address in particular the question to what extent earlier results about the relationship between the edge of chaos and the computational power of dynamical systems in discrete time for off-line computing also apply to this case.
  Summary: Not available
  Keywords: To complete

