G-2020-23-EIW13
Neural network sparsification using Gibbs measures
Pruning methods for deep neural networks based on weight magnitude have shown promise in recent research. We propose a new, highly flexible approach to neural network pruning based on Gibbs measures. We apply it with a Hamiltonian that is a function of weight magnitude, using the annealing capabilities of Gibbs measures to move smoothly from regularization to adaptive pruning during an ordinary neural network training schedule. Compared with several established methods, our approach outperforms those that do not use extra training steps and reaches high accuracy much faster than those that do. When pruning 90% of the weights of ResNet-32 on CIFAR-10, we lose less than 3% accuracy.
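To illustrate the idea of annealing a Gibbs measure from soft regularization toward hard magnitude pruning, here is a minimal sketch. It is not the paper's exact formulation: the Hamiltonian, the threshold `tau`, and the per-weight Boltzmann keep-probability are all illustrative assumptions.

```python
import numpy as np

def keep_probabilities(weights, beta, tau):
    """Boltzmann-style keep probability per weight.

    Illustrative assumption: the per-weight energy is H_i = -(|w_i| - tau),
    so the Gibbs factor sigma(beta * (|w_i| - tau)) favors keeping
    large-magnitude weights. As beta -> infinity, this anneals into hard
    magnitude pruning at the threshold tau.
    """
    w = np.asarray(weights, dtype=float)
    return 1.0 / (1.0 + np.exp(-beta * (np.abs(w) - tau)))

def sample_mask(weights, beta, tau, seed=None):
    """Sample a binary keep/drop mask from the Gibbs keep probabilities."""
    rng = np.random.default_rng(seed)
    p = keep_probabilities(weights, beta, tau)
    return (rng.random(p.shape) < p).astype(float)

# Annealing beta during training: low beta acts like a soft, stochastic
# regularizer; high beta approaches deterministic magnitude pruning.
w = np.array([0.01, -0.5, 0.2, -0.03, 1.2])
for beta in (1.0, 10.0, 1000.0):
    print(beta, keep_probabilities(w, beta, tau=0.1).round(3))
```

At `beta = 1000`, the keep probabilities are effectively 0 or 1, recovering hard pruning of all weights below `tau` in magnitude.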
Published April 2020, 7 pages