On the Improvement of a Neural Controller Through Soft Modelling
Artificial neural networks are known to learn from examples. Even in control problems where no “right” sequence of control actions is known in advance, the “backpropagation through time” algorithm allows supervised training to be carried out in a fully data-driven framework. Here we show how the prior knowledge supplied by a very crude model of the system to be controlled can be exploited to derive a suitable objective function for the neural controller and boost its performance. The method is exemplified on the “inverted pendulum” problem, a popular test bench for emerging control design techniques. Experimental results support the effectiveness of the proposed approach: once trained off-line and inserted in a real-time control loop, a network with 6 hidden units was able to balance a mechanical pendulum above the middle of the slide guides from initial pole angles of up to ±30° from the vertical axis. The slide guides were simply too short to let us do more. However, extensive software simulations confirmed that more striking results could be achieved: a single network with 25 hidden units routinely succeeded in swinging up and balancing an accurate model of the pendulum above any predefined position.
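The idea of deriving the controller's objective from a crude plant model can be sketched as follows. This is a toy illustration, not the paper's setup: the pendulum dynamics, the network size, and the use of finite-difference gradients (in place of an analytic backpropagation-through-time pass) are all simplifying assumptions.

```python
import numpy as np

# Assumed constants: time step, g/l for the pendulum, rollout length, hidden units.
DT, G_OVER_L, STEPS, N_H = 0.02, 19.6, 50, 4

def rollout_cost(w, theta0=0.2):
    """Simulate the crude model under the neural controller and accumulate
    a cost penalising deviation of the pole from the upright position."""
    W1 = w[:2 * N_H].reshape(N_H, 2)   # input-to-hidden weights
    W2 = w[2 * N_H:].reshape(1, N_H)   # hidden-to-output weights
    theta, omega, cost = theta0, 0.0, 0.0
    for _ in range(STEPS):
        u = float(W2 @ np.tanh(W1 @ np.array([theta, omega])))  # control action
        omega += DT * (G_OVER_L * np.sin(theta) + u)  # crude inverted-pendulum model
        theta += DT * omega
        cost += theta ** 2             # objective supplied by the model
    return cost

# Gradient descent through the rollout; finite differences stand in for
# the analytic backward pass purely for brevity.
rng = np.random.default_rng(0)
w = 0.1 * rng.standard_normal(3 * N_H)
best_w, best_c = w.copy(), rollout_cost(w)
for _ in range(150):
    g = np.array([(rollout_cost(w + 1e-4 * e) - rollout_cost(w - 1e-4 * e)) / 2e-4
                  for e in np.eye(w.size)])
    w -= 0.05 * g / (np.linalg.norm(g) + 1e-12)   # normalised descent step
    c = rollout_cost(w)
    if c < best_c:
        best_w, best_c = w.copy(), c
```

The key point is that no “right” control sequence is ever specified; the model itself turns controller weights into a differentiable cost.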
Hardware: Analog Neural Signal Processor
Several reasons make ANNs a practicable means of realizing some of the outstanding opportunities that analog computation offers: massive parallelism, fault tolerance, noise immunity, and moderate resolution requirements, but above all their nature as adaptive systems suited to directly processing analog signals from the external world. In practice, however, the need for efficient analog memories and linear multipliers has severely limited the diffusion of analog ANN devices in useful applications.
To reduce the complexity of these circuits, both biological systems and neuromorphic architectures suggest two guidelines: first, merge weight storage with synaptic computation; second, exploit the intrinsic properties of simple devices instead of attempting to impose predefined functional relationships on silicon.
The availability of flash EEPROM technology gave us the opportunity to pursue both strategies: a feed-forward ANN has been physically implemented using NMOS floating-gate transistors as the core of the synaptic elements. A weight value is stored in every cell with 5-bit resolution by modifying the threshold voltage, while the input is applied to the control gate. The current flowing through the cell is mirrored once or twice to account for both excitatory and inhibitory contributions in the synapse, and then conveyed into a summing line towards the neuron.
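The synapse behaviour can be sketched with a simple square-law model of the floating-gate cell, where the programmed threshold voltage encodes the weight and current mirroring sets the sign. The transconductance constant and the current-voltage law are illustrative assumptions, not device data from the paper.

```python
# Sketch of one synapse: an NMOS floating-gate cell whose current depends on
# the control-gate input vg and the programmed threshold vth (the stored weight).
K = 50e-6  # transconductance parameter (A/V^2), assumed

def cell_current(vg, vth):
    """Saturation current of the floating-gate cell (zero below threshold)."""
    return K * max(vg - vth, 0.0) ** 2

def synapse_current(vg, vth, excitatory=True):
    """The cell current is mirrored once or twice, which determines whether the
    synapse injects (excitatory) or drains (inhibitory) current on the summing line."""
    i = cell_current(vg, vth)
    return i if excitatory else -i

# Net current into one neuron's summing line from a few synapses:
inputs = [3.0, 2.5, 4.0]                            # control-gate voltages (V)
weights = [(1.5, True), (2.0, False), (3.0, True)]  # (vth, excitatory?)
i_sum = sum(synapse_current(v, t, e) for v, (t, e) in zip(inputs, weights))
```

Summation comes for free from Kirchhoff's current law on the shared line, which is what keeps the synaptic circuit so small.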
The neuron is a simple transresistance amplifier consisting of an inverter with resistive feedback. Writing and erasing operations are performed by means of variable-slope ramps 10 ms long.
The whole chip contains a single layer of 16 neurons with 16 synapses each. It measures 4 mm × 4 mm and consumes about 240 mW during normal operation. Since input and output levels are CMOS compatible, several chips can be cascaded to build partially connected architectures.
For simulation purposes, the floating-gate cell / current mirror system (i.e. the synapse) has been treated as a black box supplying a current to the summing line, given the input and floating-gate voltages. Several measurements were taken under different conditions for both excitatory and inhibitory synapses to build two tables describing their static I/O relationships.
The same was done for the neuron, whose output voltage depends on the total current flowing into the summing line. These tables are used directly during the training procedure, through ordinary interpolation methods, to calculate the outputs of the whole system (feed-forward phase) as well as the error signal to be applied to the weights (back-propagation phase).
In this way, departures from “ideal” behaviour are automatically taken into account.
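The table-driven procedure can be illustrated as follows. The voltage grid, the 32 weight levels, and the synthetic “measurements” are all assumptions standing in for the real characterisation data; only the mechanism (interpolate the table in the forward pass, differentiate through it for back-propagation) reflects the method described above.

```python
import numpy as np

# Hypothetical measured table for one synapse type: cell current as a function
# of input (control-gate) voltage, one row per stored weight level.
vg_grid = np.linspace(0.0, 5.0, 11)       # input voltage samples (V), assumed
vth_levels = np.linspace(1.0, 4.0, 32)    # 5-bit weight levels (V), assumed
# Toy "measurement": square-law saturation current above threshold.
i_table = 1e-6 * np.maximum(vg_grid[None, :] - vth_levels[:, None], 0.0) ** 2

def synapse_i_from_table(vg, level):
    """Feed-forward phase: interpolate the measured table for weight `level`."""
    return np.interp(vg, vg_grid, i_table[level])

def d_current_d_vg(vg, level, h=1e-3):
    """Back-propagation phase: local slope obtained through the same table,
    so non-idealities in the measured device enter the gradient as well."""
    return (synapse_i_from_table(vg + h, level)
            - synapse_i_from_table(vg - h, level)) / (2 * h)
```

Because both passes consult the measured characteristics rather than an idealised model, any device non-ideality captured by the tables is automatically reflected in the trained weights.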