Research before 1996
Former directions of research
Former activities include hardware accelerators, in particular the Mantra machine. The projects completed
by 1996 are reviewed below.
- Hardware for the Auditory System
Can we model the first few processing stages, from the cochlea to spiking neurons, with simple hardware modules?
What are the effects of non-linear and adaptive mechanisms on the performance of cochlear preprocessing?
What is the relevance of phase locking with spiking neurons for pitch detection?
The project is supervised by
Prof. E. Vittoz, from the Electronics Laboratory (EPFL-DE) and Swiss Centre for Electronics and Micromechanics (CSEM).
Potential applications of auditory preprocessing to speech recognition are studied in a joint project with
Prof. M. Hasler from the Circuit and Systems Laboratory.
The project ended in 1998.
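As an illustrative sketch of these questions (not the actual hardware), the following simulates one cochlear channel as a second-order bandpass filter, followed by half-wave rectification and a leaky integrate-and-fire neuron; the resulting spikes lock to the phase of a pure tone, the effect referred to above. All parameter values are assumptions chosen for the illustration.

```python
import math

def bandpass(signal, f0, q, fs):
    """Second-order resonator (RBJ bandpass, 0 dB peak gain): a crude
    stand-in for one cochlear frequency channel."""
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2 * q)
    b0, b1, b2 = alpha, 0.0, -alpha
    a0, a1, a2 = 1 + alpha, -2 * math.cos(w0), 1 - alpha
    out, x1, x2, y1, y2 = [], 0.0, 0.0, 0.0, 0.0
    for x0 in signal:
        y0 = (b0 * x0 + b1 * x1 + b2 * x2 - a1 * y1 - a2 * y2) / a0
        out.append(y0)
        x1, x2, y1, y2 = x0, x1, y0, y1
    return out

def spike_train(drive, fs, tau=1e-3, threshold=0.3, refractory=1.5e-3):
    """Leaky integrate-and-fire neuron driven by the half-wave
    rectified channel output (illustrative parameters)."""
    v, spikes, dead = 0.0, [], 0
    k = 1.0 / (tau * fs)                    # integration step per sample
    refr = int(refractory * fs)
    for n, x in enumerate(drive):
        if dead:                            # absolute refractory period
            dead -= 1
            v = 0.0
            continue
        i = max(0.0, x)                     # half-wave rectification
        v += (i - v) * k
        if v >= threshold:
            spikes.append(n)                # spike, then reset
            v, dead = 0.0, refr
    return spikes

fs, f0 = 16000, 500                         # 500 Hz tone: period = 32 samples
tone = [math.sin(2 * math.pi * f0 * n / fs) for n in range(1600)]
spikes = spike_train(bandpass(tone, f0, 4.0, fs), fs)
# inter-spike intervals cluster around one stimulus period: phase locking
isis = [b - a for a, b in zip(spikes, spikes[1:])]
```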
- Neural Network Algorithms
Two directions of research have been selected: self-organizing feature maps
and multilayer perceptrons. For the Kohonen model many interesting
theoretical questions are still open, e.g., convergence criteria and the best
distance measure. Multilayer networks with discrete outputs and/or weights
are investigated using tools from discrete mathematics. In particular, the
computational power of such models has been analyzed and new combinatorial
optimization algorithms have been proposed. New methods for constructing and
training multilayer networks with continuous and discrete weights have also
been developed. They are better suited for large-size problems because they
involve local computations; in addition, convergence is faster than with
standard techniques such as back-propagation.
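The Kohonen update rule behind the self-organizing feature maps mentioned above can be sketched as follows; the chain length, decay schedules and data are illustrative assumptions, not the parameters studied in the project.

```python
import math, random

def train_som(samples, units=10, epochs=2000):
    """1-D self-organizing chain trained on scalar inputs (minimal sketch)."""
    w = [random.random() for _ in range(units)]
    for t in range(epochs):
        x = random.choice(samples)
        frac = t / epochs
        lr = 0.5 * 0.1 ** frac                   # learning rate 0.5 -> 0.05
        sigma = 3.0 * (0.5 / 3.0) ** frac        # neighborhood width 3 -> 0.5
        c = min(range(units), key=lambda j: abs(x - w[j]))  # best-matching unit
        for j in range(units):
            # Gaussian neighborhood pulls the winner and its neighbors toward x
            h = math.exp(-((j - c) ** 2) / (2 * sigma ** 2))
            w[j] += lr * h * (x - w[j])
    return w

random.seed(0)
data = [random.random() for _ in range(500)]
weights = train_som(data)
# average quantization error: distance from each sample to its nearest unit
qerr = sum(min(abs(x - wj) for wj in weights) for x in data) / len(data)
```

As the neighborhood shrinks, the units spread over the input range (and typically become topologically ordered along the chain), so the quantization error ends up small.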
- Neural Network Accelerators
Digital neural network systems - also named neuro-accelerators - are usually
linked to a workstation, or used as a computation server on a local network.
Neuro-accelerators are essential for progress in neural network
research, especially when real-time performance is required.
A systolic architecture has been selected for building an accelerator based
on dedicated VLSI chips. Three generations of VLSI circuits, suited to the
Hopfield, Kohonen and back-propagation models, have been designed. The MANTRA I machine
constructed at the Centre Mantra features a peak performance of 400 MCPS.
The project ended in 1996.
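The principle of a systolic architecture can be illustrated with a cycle-by-cycle software model of a linear array computing a matrix-vector product; this is a sketch of the general idea, not of the actual MANTRA I design (which uses a 2D 40 x 40 array). Each processing element performs one multiply-accumulate per cycle, so peak throughput scales with the number of PEs.

```python
def systolic_matvec(W, x):
    """Cycle-accurate sketch of a linear systolic array computing y = W x.
    PE i stores row i of W; input x[j] enters the array at cycle j and
    reaches PE i at cycle i + j, where it is multiplied and accumulated."""
    P = len(W)
    acc = [0.0] * P
    cycles = 0
    for t in range(2 * P - 1):        # 2P - 1 cycles to fill and drain the pipe
        for i in range(P):            # all PEs fire one MAC per cycle, in parallel
            j = t - i
            if 0 <= j < P:
                acc[i] += W[i][j] * x[j]
        cycles += 1
    return acc, cycles

W = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
x = [1, 0, -1]
y, cycles = systolic_matvec(W, x)     # P*P connections in 2P - 1 cycles
direct = [sum(W[i][j] * x[j] for j in range(3)) for i in range(3)]
```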
- Applications of Neural Networks
Several applications are under way and are listed below. Due to the rapid
evolution of hardware, software and neural network algorithms, it is considered
important that the implementation of an application does not take more than a
limited amount of time.
- Security of electric power systems
The application of a Kohonen network to the security of electric power
systems is one possible approach to defining the limits of safe operation.
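One way such an approach can work (a hypothetical sketch: the prototypes, features and threshold below are invented for illustration) is to label trained Kohonen prototypes as secure or insecure and classify a new operating point by its nearest prototype, flagging points far from every prototype as outside the known operating region.

```python
def assess(state, prototypes, novelty_threshold=0.3):
    """Classify an operating point by its nearest prototype; states far
    from all prototypes fall outside the known operating region."""
    dist = lambda a, b: sum((u - v) ** 2 for u, v in zip(a, b)) ** 0.5
    d, label = min((dist(state, p), lab) for p, lab in prototypes)
    return "unknown" if d > novelty_threshold else label

# hypothetical prototypes: (normalized load, bus voltage) -> security label
prototypes = [
    ((0.3, 1.00), "secure"),
    ((0.5, 0.99), "secure"),
    ((0.9, 0.92), "insecure"),
    ((1.1, 0.88), "insecure"),
]
```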
- Meteorological observation
The Swiss Institute of Meteorology is concerned with recognizing meteorological
situations near airports. The AMETIS I system is partly manual; a neural
network, updated with 30 values every 30 seconds, is planned to help the expert
system make the best decisions.
The MANTRA I Machine
- Systolic array of 40 x 40 processing elements.
- 400 million connections per second (CPS) peak performance.
- 130 million connection updates per second (CUPS) peak back-propagation
learning rate.
- 10 million connection updates per second (CUPS) measured Kohonen
learning rate on a reduced-size machine (50% efficiency).
- On-chip learning: Delta, perceptron, back-propagation and Kohonen models.
- Virtual weight matrix.
- DSP TMS320C40 processor control.
- SBus interface.
Based on a 2D systolic array of processing elements, the MANTRA I machine is
a high-performance, low-cost neurocomputer capable of working on neural
networks of any size. Hooked to a SUN SPARCstation via the SBus interface, it can
directly access the computer's central memory and accelerate the neural
computation by a factor of 100.
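The virtual weight matrix feature listed above can be pictured as block tiling: a network larger than the physical 40 x 40 array is processed by sweeping the weight matrix through the array block by block. A minimal software sketch, assuming a simple row/column tiling:

```python
def tiled_matvec(W, x, T=40):
    """Compute y = W x with a virtual weight matrix: the full matrix is
    swept block by block through a fixed T x T physical array."""
    n = len(W)
    y = [0.0] * n
    passes = 0
    for bi in range(0, n, T):                # block of output rows
        for bj in range(0, n, T):            # block of input columns
            # one pass of the physical array over this T x T tile
            for i in range(bi, min(bi + T, n)):
                for j in range(bj, min(bj + T, n)):
                    y[i] += W[i][j] * x[j]
            passes += 1
    return y, passes

n = 100                                      # network larger than the array
W = [[(i + j) % 5 for j in range(n)] for i in range(n)]
x = [1.0] * n
y, passes = tiled_matvec(W, x)               # ceil(100/40)^2 = 9 passes
direct = [sum(W[i][j] * x[j] for j in range(n)) for i in range(n)]
```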
The MANTRA I machine will be programmed through a library of C routines
dedicated to the selected models. Graphic interaction will also be provided.
Designed to be applied to on-line control systems, it is particularly suited
to applications requiring fast learning of large artificial neural networks.
This is important for robotic applications, artificial vision, etc.