The Knowledge Within: Methods for Data-Free Model Compression

Matan Haroush, Itay Hubara, Elad Hoffer, Daniel Soudry; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2020, pp. 8494-8502

Abstract


Background: Recently, an extensive amount of research has focused on compressing and accelerating Deep Neural Networks (DNNs). So far, high-compression-rate algorithms have required part of the training dataset for low-precision calibration or for a fine-tuning process. However, this requirement is unacceptable when the data is unavailable or contains sensitive information, as in medical and biometric use cases. Contributions: We present three methods for generating synthetic samples from trained models. We then demonstrate how these samples can be used to calibrate and fine-tune quantized models without any real data in the process. Our best-performing method, which leverages the statistics stored in the trained model's batch normalization layers, incurs negligible accuracy degradation compared to using the original training set; these same statistics can also be used to evaluate data similarity. Our approach opens a path toward genuine data-free model compression, alleviating the need for training data during model deployment.
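To make the batch-normalization idea concrete, below is a minimal PyTorch sketch of generating synthetic samples by matching each BatchNorm layer's stored running mean and variance. This is an illustrative reconstruction of the general technique, not the authors' released code: the specific loss, optimizer, step count, and learning rate are assumptions made here for brevity.

import torch
import torch.nn as nn

def generate_samples(model, num_samples=64, image_shape=(3, 224, 224),
                     steps=500, lr=0.1):
    """Optimize random inputs so their induced batch statistics match the
    running statistics stored in the model's BatchNorm layers."""
    model.eval()
    x = torch.randn(num_samples, *image_shape, requires_grad=True)
    opt = torch.optim.Adam([x], lr=lr)

    # Capture the input activation of every BatchNorm2d layer via hooks.
    feats, hooks = {}, []
    for m in model.modules():
        if isinstance(m, nn.BatchNorm2d):
            hooks.append(m.register_forward_hook(
                lambda mod, inp, out, store=feats: store.__setitem__(mod, inp[0])))

    for _ in range(steps):
        opt.zero_grad()
        model(x)
        loss = 0.0
        for bn, act in feats.items():
            # Batch statistics of the synthetic inputs at this layer.
            mu = act.mean(dim=(0, 2, 3))
            var = act.var(dim=(0, 2, 3), unbiased=False)
            # Penalize deviation from the layer's stored running statistics.
            loss = loss + (mu - bn.running_mean).pow(2).mean() \
                        + (var - bn.running_var).pow(2).mean()
        loss.backward()
        opt.step()

    for h in hooks:
        h.remove()
    return x.detach()

The returned synthetic batch could then serve as calibration data for a quantized copy of the model, in place of real training samples.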

Related Material


[pdf] [supp] [arXiv]
[bibtex]
@InProceedings{Haroush_2020_CVPR,
author = {Haroush, Matan and Hubara, Itay and Hoffer, Elad and Soudry, Daniel},
title = {The Knowledge Within: Methods for Data-Free Model Compression},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2020}
}