Feed-forward neural networks (also known as multilayer perceptrons) have been widely applied to supervised learning problems since the mid-1980s. Over this time, thousands of different datasets have been used in thousands of different experimental studies, with results reported in the literature. This research has helped to fuel tremendous progress in the field. However, in many cases documented reproduction of published experimental results has never been attempted. The current availability of computational resources, software libraries and datasets creates an opportunity to reproduce, and even expand on, experiments that previously required a large amount of time. In addition, it is possible to run experiments not just to locate a single best minimizer of the training loss function, but to collect and explore numerous convergence points on a loss landscape, in order to better understand the properties of problem instances (e.g. in relation to multimodal optimization and exploration-driven techniques such as quality-diversity search).
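As a minimal, non-prescriptive sketch of what "collecting convergence points" might look like, the snippet below performs repeated training runs of a small feed-forward network from different random initialisations and records the final training loss and weights reached in each run. The library, dataset, architecture and hyperparameters shown are illustrative assumptions only, not a required setup.

```python
# Illustrative sketch only: collect final training losses ("convergence points")
# from repeated runs of a small MLP. Dataset, architecture and hyperparameters
# are arbitrary example choices, not a prescribed configuration.
from sklearn.datasets import load_iris
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler

X, y = load_iris(return_X_y=True)
X = StandardScaler().fit_transform(X)

convergence_points = []
for seed in range(30):                      # 30 independent training runs
    net = MLPClassifier(hidden_layer_sizes=(10,),
                        activation='tanh',
                        solver='sgd',
                        learning_rate_init=0.01,
                        max_iter=1000,
                        random_state=seed)
    net.fit(X, y)
    # Record the final training loss and the weights it was reached with.
    convergence_points.append((net.loss_, [w.copy() for w in net.coefs_]))

losses = sorted(loss for loss, _ in convergence_points)
print("Best final loss:", losses[0], "Worst final loss:", losses[-1])
```

The resulting set of (loss, weights) pairs could then be analysed further, e.g. to examine the spread of final losses or the diversity of the solutions found.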
The goal of this competition is to challenge researchers to: (a) attempt to reproduce an existing experimental study and report on their findings (including successes, failures and lessons learned); and/or (b) carry out an experimental study on the loss landscape of a neural network training problem instance to reveal new insights (including finding multiple high-quality solutions and points of attraction for different training algorithms).
Reproducing an experimental study depends on many factors, including the availability of the dataset(s) used and the details provided about network size, activation functions, training algorithm, weight initialisation technique and hyperparameter settings. If some details are not available, participants may take different approaches to reproducing the experiment (e.g. using recommended values or searching for a feasible configuration that aligns with the original results). If similar results cannot be produced, participants may suggest possible reasons for the observed differences.
Studying a problem instance also requires specifying a dataset and all experimental factors. Widely used datasets and network/parameter settings from the literature are recommended, as this may provide additional evidence to support or contradict previously reported results.
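One hedged way of making all such experimental factors explicit and shareable is to record them in a single machine-readable configuration file alongside the results, as in the sketch below. The field names and values are assumptions chosen for illustration, not a required format.

```python
# Illustrative sketch: record every experimental factor needed to reproduce
# a training study in a single machine-readable file. All names and values
# here are example assumptions, not a required schema.
import json

experiment_config = {
    "dataset": "iris",                    # dataset identifier / source
    "train_test_split": 0.7,
    "architecture": {"hidden_layers": [10], "activation": "tanh"},
    "weight_initialisation": "uniform(-0.5, 0.5)",
    "training_algorithm": "sgd",
    "hyperparameters": {"learning_rate": 0.01, "batch_size": 32,
                        "max_epochs": 1000},
    "random_seeds": list(range(30)),      # seeds used for repeated runs
    "software": {"library": "scikit-learn", "version": "1.3"},
}

with open("experiment_config.json", "w") as f:
    json.dump(experiment_config, f, indent=2)
```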
Submissions to this competition will be evaluated on the extent to which they contribute to the reproducibility of experimental studies of feed-forward neural networks. This includes the results found, the code provided, results data collected over large sets of training runs, etc.
Submissions should include, at a minimum, a brief report describing the aims, method, results and discussion of the study. Providing the code used and the results data produced is also highly encouraged (e.g. via a publicly available repository). The report could be integrated into, e.g., a Jupyter or Colab notebook, in which case a brief "readme" submission via email with a link to the repository/notebook is acceptable.
Deadline: 31st May, 2023. Submit via email to marcusg@uq.edu.au
Marcus Gallagher, University of Queensland, Australia
Anna Bosman, University of Pretoria, South Africa
Saman Halgamuge, University of Melbourne, Australia
Mario Andres Munoz, University of Melbourne, Australia
Katherine Malan, University of South Africa
Roberto Santana, University of the Basque Country, Spain