Abstract
Training optimization plays a vital role in the development of convolution neural networks (CNNs). CNNs are hard to train because their optimization problem is non-convex and therefore has multiple local minima; if the chosen hyper-parameters are not appropriate, training ends up in a poor local minimum and performance suffers. Proper optimization of the training algorithm is thus key to converging to a good local minimum. In this paper, we introduce an evolutionary convolution neural network algorithm (ModPSO-CNN) that fuses a modified particle swarm optimization (ModPSO) with backpropagation (BP) to train a CNN. Training the CNN with ModPSO alongside BP improves performance by avoiding premature convergence and poor local minima. ModPSO has adaptive, dynamic and improved parameters to handle the issues in training CNNs: the adaptive and dynamic parameters strike a proper balance between global and local search ability, while the improved parameter maintains the diversity of the swarm. The proposed ModPSO algorithm is validated on three standard mathematical test functions and compared with three variants of the benchmark PSO algorithm. Furthermore, the performance of ModPSO-CNN is compared with other training algorithms in terms of computational cost, convergence and accuracy on standard classification problems, namely the CIFAR-10 dataset and a face and skin detection dataset.
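To make the hybrid PSO-plus-BP idea concrete, the sketch below shows a generic PSO loop with a linearly decreasing inertia weight (a standard PSO variant, not the paper's ModPSO) and a small gradient-refinement step after each swarm update, standing in for the backpropagation phase. It optimizes the sphere benchmark, one of the usual mathematical test functions for PSO variants. All function names (`hybrid_pso`, `sphere`, `sphere_grad`) and parameter values are illustrative assumptions; the adaptive, dynamic and improved parameters of ModPSO and the CNN training pipeline are not reproduced here.

```python
# Minimal sketch: PSO with linearly decreasing inertia + gradient refinement.
# This is NOT the paper's ModPSO-CNN; names and constants are illustrative.
import numpy as np

def sphere(x):
    """Sphere test function, a standard benchmark for PSO variants."""
    return np.sum(x ** 2)

def sphere_grad(x):
    """Analytic gradient of the sphere function (stands in for backprop)."""
    return 2.0 * x

def hybrid_pso(fitness, grad, dim=10, n_particles=20, iters=100,
               w_max=0.9, w_min=0.4, c1=2.0, c2=2.0, lr=0.01, seed=0):
    rng = np.random.default_rng(seed)
    pos = rng.uniform(-5.0, 5.0, size=(n_particles, dim))
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_val = np.array([fitness(p) for p in pos])
    gbest = pbest[np.argmin(pbest_val)].copy()

    for t in range(iters):
        # Linearly decreasing inertia: large w early (global search),
        # small w late (local search).
        w = w_max - (w_max - w_min) * t / iters
        r1, r2 = rng.random((2, n_particles, dim))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = pos + vel

        # Gradient step after the swarm update, standing in for the BP phase.
        pos = pos - lr * np.array([grad(p) for p in pos])

        vals = np.array([fitness(p) for p in pos])
        improved = vals < pbest_val
        pbest[improved] = pos[improved]
        pbest_val[improved] = vals[improved]
        gbest = pbest[np.argmin(pbest_val)].copy()

    return gbest, fitness(gbest)

if __name__ == "__main__":
    best, best_val = hybrid_pso(sphere, sphere_grad)
    print("best fitness:", best_val)
```

In this sketch the inertia schedule plays the role the abstract assigns to the adaptive and dynamic parameters (shifting from exploration to exploitation), while the gradient step after each swarm update mirrors the coupling of ModPSO with BP; the paper's actual parameter rules may differ.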
Original language | English |
---|---|
Pages (from-to) | 2165-2176 |
Number of pages | 12 |
Journal | Soft Computing |
Volume | 25 |
Issue number | 3 |
DOIs | |
Publication status | Published - Feb 2021 |
Keywords
- Backpropagation
- Convolution neural network
- Particle swarm optimization
- Visual recognition