Towards Errorless Training ImageNet-1k
This repository hosts MATLAB code and models for the manuscript *Towards Errorless Training ImageNet-1k*, which is available at https://arxiv.org/abs/2508.04941. We provide 6 models trained on the ImageNet-1k dataset, listed in the table below. Each featured model has architecture 17x40x2; that is, each model is made up of 17x40x2 = 1360 FNNs, all with a homogeneous architecture (900-256-25 or 900-256-77-25), working in parallel to produce 1360 predictions that are combined into a final prediction via the majority voting protocol.
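As a rough illustration of the voting step (not code from this repository), the MATLAB sketch below aggregates per-FNN class labels by taking the most frequent label for each sample; the function name, the layout of `preds`, and the tie-breaking behaviour are our assumptions, and the actual 17x40x2 aggregation described in the preprint may be more structured than a plain majority vote.

```matlab
% majorityVote.m -- illustrative majority-vote aggregation (hypothetical helper,
% not part of this repository). preds is a numFNN-by-numSamples matrix of
% integer class labels: preds(k, n) is the label predicted by the k-th FNN for
% the n-th sample. Ties are resolved by MATLAB's mode (smallest label wins).
function finalLabels = majorityVote(preds)
    finalLabels = mode(preds, 1);   % 1-by-numSamples vector of voted labels
end
```

For example, with 1360 FNNs and a batch of samples, `preds` would be a 1360-by-numSamples matrix and `majorityVote(preds)` returns one voted label per sample.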
We trained the 6 models using the following transformation of the 64x64 downsampled ImageNet-1k dataset:
- downsampled the images to 32x32, using the mean values of non-overlapping 2x2 grid cells, and
- trimmed off the top row, bottom row, left-most column, and right-most column.
This transformation results in 30x30 images, and hence 900-dimensional input vectors.
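As a concrete sketch of this preprocessing (our illustration, not the repository's code), the MATLAB function below averages non-overlapping 2x2 cells of a single-channel 64x64 image and then trims a one-pixel border; the single-channel assumption, the function name, and the column-major flattening order are ours.

```matlab
% downsampleAndTrim.m -- illustrative preprocessing (hypothetical helper, not
% taken from this repository). img64 is a 64x64 single-channel image (double).
function x = downsampleAndTrim(img64)
    % Mean over non-overlapping 2x2 grid cells: 64x64 -> 32x32
    img32 = squeeze(mean(mean(reshape(img64, 2, 32, 2, 32), 1), 3));
    % Trim top row, bottom row, left-most and right-most columns: 32x32 -> 30x30
    img30 = img32(2:31, 2:31);
    % Flatten (column-major) to a 900-dimensional input vector
    x = img30(:);
end
```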
For a thorough description of our models trained on the ImageNet-1k dataset, please read our preprint linked above.
Model | Training Method | FNN Architecture | Training Accuracy (%) |
---|---|---|---|
Model_S_h1_m1 | SGD | 900-256-25 | 98.247 |
Model_S_h1_m2 | SGD | 900-256-25 | 98.299 |
Model_S_h2_m1 | SGD | 900-256-77-25 | 96.990 |
Model_T_h1_m1 | SGD followed by GDT | 900-256-25 | 98.289 |
Model_T_h1_m2 | SGD followed by GDT | 900-256-25 | 98.300 |
Model_T_h2_m1 | SGD followed by GDT | 900-256-77-25 | 97.770 |

\*SGD = stochastic gradient descent  
\*\*GDT = gradient descent tunneling