Online importance sampling for stochastic gradient optimization

ICPRAM 2025

Visualization of the importance sampling at three different epochs and the underlying classification task. For each epoch shown, 800 data points are rendered with transparency proportional to their weight under our method. By epoch 800, our weights closely resemble those of the DLIS method; in practice some discrepancies remain between the two methods, but they are not visible in this simple example.

Abstract

Machine learning optimization often relies on stochastic gradient descent, where the precision of gradient estimation is vital for model performance. Gradients are computed from mini-batches formed by uniformly selecting data samples from the training dataset. However, not all data samples contribute equally to gradient estimation. To address this, various importance sampling strategies have been developed to prioritize more significant samples. Despite these advancements, existing importance sampling methods face challenges in computational efficiency and in integrating seamlessly into practical machine learning pipelines. In this work, we propose a practical algorithm that efficiently computes data importance on-the-fly during training, eliminating the need for dataset preprocessing. We also introduce a novel metric based on the derivative of the loss with respect to the network output, designed for mini-batch importance sampling. Our metric prioritizes influential data points, thereby improving the accuracy of gradient estimates. We demonstrate the effectiveness of our approach across various applications. We first perform classification and regression tasks to demonstrate improvements in accuracy. We then show how our approach can be used for online data pruning by identifying and discarding data samples that contribute minimally to the training loss, which significantly reduces training time with negligible loss in model accuracy.
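To make the idea concrete, below is a minimal sketch of online importance sampling for a PyTorch classifier trained with cross-entropy. It is an illustration under our own assumptions, not the authors' exact algorithm: the closed-form gradient of the cross-entropy loss with respect to the logits (softmax minus one-hot) serves as the per-sample importance, each sample's weight is refreshed lazily whenever it is visited, and per-sample losses are reweighted by 1/(N q_i) so the mini-batch gradient remains an unbiased estimate of the full gradient. The function name and hyperparameters are hypothetical.

    import torch
    import torch.nn.functional as F

    def train_with_importance_sampling(model, X, Y, epochs=10,
                                       batch_size=64, lr=1e-2):
        n = len(X)
        weights = torch.ones(n)  # per-sample importance, updated on-the-fly
        opt = torch.optim.SGD(model.parameters(), lr=lr)

        for _ in range(epochs):
            for _ in range(max(1, n // batch_size)):
                # Draw a mini-batch proportionally to the current weights.
                q = weights / weights.sum()
                idx = torch.multinomial(q, batch_size, replacement=True)
                x, y = X[idx], Y[idx]

                logits = model(x)
                per_sample = F.cross_entropy(logits, y, reduction="none")
                # Inverse-probability reweighting keeps the gradient unbiased:
                # E[grad L_i / (n q_i)] equals the full-dataset mean gradient.
                loss = (per_sample / (n * q[idx])).mean()

                opt.zero_grad()
                loss.backward()
                opt.step()

                # Refresh weights of the visited samples using the norm of
                # d(loss)/d(logits), which for cross-entropy is in closed
                # form: softmax(logits) - one_hot(y). No extra backprop needed.
                with torch.no_grad():
                    p = F.softmax(logits, dim=1)
                    p[torch.arange(len(idx)), y] -= 1.0
                    weights[idx] = p.norm(dim=1) + 1e-8

For example, calling the sketch with a toy linear classifier, `model = torch.nn.Linear(2, 3)` on random tensors `X = torch.randn(1000, 2)` and `Y = torch.randint(0, 3, (1000,))`, runs end to end. The online data-pruning variant mentioned in the abstract could plausibly be obtained by masking out samples whose weight stays below a small threshold, although the paper's exact criterion is not shown here.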

Downloads and links

BibTeX reference

@inproceedings{Salaun:2025:GradientIS,
  author = {Corentin Salaün and Xingchang Huang and Iliyan Georgiev and Niloy Mitra and Gurprit Singh},
  title = {Online importance sampling for stochastic gradient optimization},
  booktitle = {ICPRAM},
  year = {2025}
}