Abstract
A recurrent neural network is presented which performs quadratic optimisation subject to bound constraints on each of the optimisation variables. The optimisation strategy employed by the neural network falls into the general class of gradient methods for constrained nonlinear optimisation, and is compared briefly with the strategies employed by conventional techniques for bound-constrained quadratic optimisation. Conditions on the quadratic problem and the network parameters are established under which exponential asymptotic stability is achieved. These conditions are shown to provide a tighter bound on the degree of exponential stability than that previously established for this network. Through suitable choice of the network parameters, the system of differential equations governing the network activations is preconditioned in order to reduce its sensitivity to noise and round-off errors and to accelerate convergence.
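The class of method described in the abstract can be illustrated with a minimal sketch: Euler integration of projected-gradient dynamics for a box-constrained quadratic programme. This is a generic gradient method under assumed dynamics dx/dt = -(Qx + c) with clipping onto the bounds, not the paper's exact network or its preconditioning; all names and parameter values here are illustrative.

```python
import numpy as np

def solve_box_qp(Q, c, lo, hi, eta=0.01, steps=20000):
    """Minimise 0.5 x'Qx + c'x subject to lo <= x <= hi.

    Euler discretisation of the gradient flow dx/dt = -(Qx + c),
    with each step projected back onto the box [lo, hi].
    (A generic sketch of the method class, not the paper's network.)
    """
    x = np.clip(np.zeros_like(c), lo, hi)  # start at a feasible point
    for _ in range(steps):
        # gradient step followed by projection onto the bounds
        x = np.clip(x - eta * (Q @ x + c), lo, hi)
    return x

# Tiny example: min 0.5*(x1^2 + x2^2) - x1 - 3*x2 with 0 <= x <= 2.
# The unconstrained minimiser is (1, 3); the box clips x2 to 2.
Q = np.eye(2)
c = np.array([-1.0, -3.0])
x = solve_box_qp(Q, c, lo=np.zeros(2), hi=2 * np.ones(2))
```

For a positive definite Q, the step size eta must be small enough (below 2 divided by the largest eigenvalue of Q) for the discretised dynamics to converge; the paper's preconditioning addresses exactly this kind of sensitivity in the continuous-time system.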
Original language | English
---|---
Pages | 918-923
Number of pages | 6
Publication status | Published - 1994
Externally published | Yes
Event | Proceedings of the 1994 IEEE International Conference on Neural Networks. Part 1 (of 7), Orlando, FL, USA, 27 Jun 1994 → 29 Jun 1994