Abstract
Training a deep neural network on a large dataset to convergence is a time-consuming task, and it often must be repeated many times, especially when developing a new deep learning algorithm or performing a neural architecture search. This cost can be mitigated if a deep neural network's performance can be estimated without actually training it. In this work, we investigate the feasibility of two tasks: (i) accurately predicting a deep neural network's performance given only its architectural descriptor, and (ii) generalizing the predictor across different datasets without re-training. To this end, we propose a dataset-agnostic regression framework that uses a novel dual-LSTM model and a new dataset difficulty feature. The experimental results show that both tasks are indeed feasible, and the proposed method outperforms existing techniques in all experimental cases. We also demonstrate several practical use cases of the proposed predictor.
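The abstract does not specify the internals of the dual-LSTM model, but a minimal PyTorch sketch can illustrate the general idea of regressing a network's performance from its architecture descriptor combined with a dataset difficulty feature. Everything below (the class name `DualLSTMPredictor`, the two-encoder structure, the scalar difficulty input, and all tensor shapes) is a hypothetical reading for illustration, not the paper's implementation:

```python
import torch
import torch.nn as nn

class DualLSTMPredictor(nn.Module):
    """Hypothetical sketch of a dual-LSTM performance predictor.

    One LSTM encodes the architecture descriptor sequence; a second
    LSTM (here fed the reversed sequence, as a stand-in for whatever
    second view the paper uses) provides a complementary encoding.
    Both encodings are concatenated with a scalar dataset-difficulty
    feature and passed to a regression head that outputs a predicted
    accuracy in [0, 1].
    """

    def __init__(self, descriptor_dim: int, hidden_dim: int = 64):
        super().__init__()
        self.lstm_a = nn.LSTM(descriptor_dim, hidden_dim, batch_first=True)
        self.lstm_b = nn.LSTM(descriptor_dim, hidden_dim, batch_first=True)
        self.head = nn.Sequential(
            nn.Linear(2 * hidden_dim + 1, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 1),
            nn.Sigmoid(),  # squash the regression output to [0, 1]
        )

    def forward(self, descriptor: torch.Tensor, difficulty: torch.Tensor) -> torch.Tensor:
        # descriptor: (batch, seq_len, descriptor_dim); difficulty: (batch, 1)
        _, (h_a, _) = self.lstm_a(descriptor)
        _, (h_b, _) = self.lstm_b(torch.flip(descriptor, dims=[1]))
        features = torch.cat([h_a[-1], h_b[-1], difficulty], dim=1)
        return self.head(features)

# Toy usage: predict accuracy for a batch of 4 architectures, each
# encoded as a sequence of 8 layer descriptors of dimension 16.
model = DualLSTMPredictor(descriptor_dim=16)
desc = torch.randn(4, 8, 16)
diff = torch.rand(4, 1)
print(model(desc, diff).shape)  # torch.Size([4, 1])
```

Such a predictor would be trained by ordinary supervised regression against the measured accuracies of previously trained architectures; the dataset difficulty feature is what would let a single trained predictor transfer across datasets without re-training.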
| Original language | English |
| --- | --- |
| Article number | 127544 |
| Number of pages | 10 |
| Journal | Neurocomputing |
| Volume | 583 |
| DOIs | |
| Publication status | Published - 28 May 2024 |
Keywords
- AutoML
- Dataset-agnostic
- Deep learning
- Neural architecture search
- Neural network performance predictor