Novel training algorithm based on quadratic optimisation using neural networks

Ganesh Arulampalam, Abdesselam Bouzerdoum

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

2 Citations (Scopus)

Abstract

In this paper we present a novel algorithm for training feedforward neural networks based on the use of recurrent neural networks for bound-constrained quadratic optimisation. Instead of trying to invert the Hessian matrix or its approximation, as is done in other second-order algorithms, a recurrent equation that emulates a recurrent neural network determines the optimal weight update. The development of this algorithm is presented, along with its performance under ideal conditions and results from training multilayer perceptrons. The results show that the algorithm achieves lower errors than other methods on a variety of problems.
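The abstract frames the weight update as the solution of a bound-constrained quadratic optimisation problem obtained from a recurrent equation rather than from a Hessian inversion. The paper's exact recurrence is not given in the abstract; the following Python sketch assumes a standard projected-gradient recurrent iteration for the box-constrained quadratic subproblem, with hypothetical names (qp_weight_update, eta, n_steps) chosen purely for illustration.

import numpy as np

# A minimal sketch (not necessarily the authors' recurrence): solve the
# bound-constrained quadratic subproblem for the weight update
#     min_d  0.5 * d^T H d + g^T d   subject to  lower <= d <= upper,
# using a projected-gradient iteration that plays the role of the recurrent network.
def qp_weight_update(H, g, lower, upper, eta=None, n_steps=200):
    n = g.shape[0]
    if eta is None:
        # Conservative step size based on the largest singular value of H (assumption).
        eta = 1.0 / (np.linalg.norm(H, 2) + 1e-12)
    d = np.zeros(n)
    for _ in range(n_steps):
        # One "recurrent" step: gradient of the quadratic, then projection onto the box.
        d = np.clip(d - eta * (H @ d + g), lower, upper)
    return d

# Example: a 2-D quadratic whose unconstrained minimum (2, 3) lies outside the box [-1, 1]^2.
H = np.array([[2.0, 0.0], [0.0, 1.0]])
g = np.array([-4.0, -3.0])
d = qp_weight_update(H, g, lower=-1.0, upper=1.0)
print(d)  # approximately [1.0, 1.0]

In a training loop, H would be the Hessian (or a positive-definite approximation such as a Gauss-Newton matrix) of the network error with respect to the weights, g the corresponding gradient, and the bounds would limit the size of each weight change.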

Original language: English
Title of host publication: Connectionist Models of Neurons, Learning Processes, and Artificial Intelligence - 6th International Work-Conference on Artificial and Natural Neural Networks, IWANN 2001, Proceedings
Publisher: Springer Verlag
Pages: 410-417
Number of pages: 8
Edition: PART 1
ISBN (Print): 3540422358, 9783540422358
DOIs
Publication status: Published - 2001
Externally published: Yes
Event: 6th International Work-Conference on Artificial and Natural Neural Networks, IWANN 2001 - Granada, Spain
Duration: 13 Jun 2001 - 15 Jun 2001

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Number: PART 1
Volume: 2084 LNCS
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Conference

Conference: 6th International Work-Conference on Artificial and Natural Neural Networks, IWANN 2001
Country/Territory: Spain
City: Granada
Period: 13/06/01 - 15/06/01
