Neuroevolutionary Ticket Search — Finding Sparse, Trainable DNN Initialisations

Research output: Chapter in Book/Report/Conference proceeding › Conference paper › peer-review


Abstract

The Lottery Ticket Hypothesis (LTH) asserts that a randomly initialised, overparameterised Deep Neural Network (DNN) contains a sparse subnetwork that, when trained for (up to) the same amount of time as the original network, performs just as well. Such winning tickets are vastly more efficient to train than dense networks; however, finding them currently relies on pre-training an overparameterised network via Iterative Magnitude Pruning (IMP). Given the increasing demand for computing resources, there are strong incentives to develop efficient search procedures for winning tickets, which can drastically reduce the training time required to solve a range of problems. In this paper, we propose a new method for evolving sparse DNN initialisations that produces results commensurate with winning ticket search procedures: we refer to the method as Neuroevolution Ticket Search (NeTS). Training an overparameterised network with gradient descent only to serve as a baseline, we show that NeTS quickly converges to near state-of-the-art performance in terms of network size and trainability. Additionally, we find that NeTS appears to be a competitive alternative to IMP in terms of wall-clock time.
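
For context, the abstract contrasts NeTS with Iterative Magnitude Pruning (IMP), the standard winning-ticket search procedure. The sketch below illustrates the generic IMP loop (train, prune the smallest-magnitude surviving weights, rewind survivors to their initial values) on a toy NumPy logistic-regression model. It is not the NeTS method from the paper; the data, model, pruning rate, and hyperparameters are hypothetical and chosen only for brevity.

```python
# Minimal sketch of Iterative Magnitude Pruning (IMP) on a toy model (NumPy only).
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(256, 20))                       # toy inputs
y = (X @ rng.normal(size=20) > 0).astype(float)      # toy binary labels

w_init = rng.normal(scale=0.1, size=20)              # the "lottery" initialisation
mask = np.ones_like(w_init)                          # 1 = weight kept, 0 = pruned

def train(w, mask, steps=500, lr=0.1):
    """Gradient-descent training of a masked logistic-regression model."""
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ (w * mask))))  # forward pass with mask applied
        grad = X.T @ (p - y) / len(y)                # logistic-loss gradient
        w = w - lr * grad * mask                      # only update surviving weights
    return w

for round_ in range(5):                               # IMP rounds
    w_trained = train(w_init.copy(), mask)
    # Prune the smallest 20% of the still-surviving weights by magnitude ...
    alive = np.flatnonzero(mask)
    k = max(1, int(0.2 * len(alive)))
    to_prune = alive[np.argsort(np.abs(w_trained[alive]))[:k]]
    mask[to_prune] = 0.0
    # ... then rewind the survivors to their original initialisation and retrain.
    w_rewound = train(w_init.copy(), mask)
    preds = 1.0 / (1.0 + np.exp(-(X @ (w_rewound * mask)))) > 0.5
    acc = (preds == y).mean()
    print(f"round {round_}: sparsity={1 - mask.mean():.0%}, accuracy={acc:.2f}")
```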
Original language: English
Title of host publication: International Conference on Learning Representations 2023
Subtitle of host publication: Workshop on Sparsity in Neural Networks
Number of pages: 11
Publication status: Published - 5 May 2023

Keywords

  • neuroevolution
  • machine learning
  • sparsity
  • evolutionary algorithms
