DS4DM Coffee Talk
A Stochastic Proximal Method for Non-smooth Regularized Finite Sum Optimization
Dounia Lakhmiri – Polytechnique Montréal, Canada
Hybrid seminar on Zoom and in the GERAD seminar room.
We consider the problem of training a deep neural network with non-smooth regularization to retrieve a sparse and efficient sub-structure. Our regularizer is only assumed to be lower semi-continuous and prox-bounded. We combine an adaptive quadratic regularization approach with proximal stochastic gradient principles to derive a new solver, called SR2. Our experiments on network instances trained on CIFAR-10 and CIFAR-100 with L1 and L0 regularization show that SR2 achieves higher sparsity than other proximal methods such as ProxGEN and ProxSGD with satisfactory accuracy.
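The abstract does not spell out SR2's update rule, but the proximal stochastic gradient principle it builds on can be sketched minimally. The example below shows one prox-gradient step for an L1-regularized objective, where the proximal operator of the L1 norm is elementwise soft-thresholding; the function names and the toy quadratic objective are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def soft_threshold(w, t):
    # Proximal operator of t * ||w||_1: elementwise soft-thresholding.
    return np.sign(w) * np.maximum(np.abs(w) - t, 0.0)

def prox_grad_step(w, grad, lr, lam):
    # One proximal (stochastic) gradient step for f(w) + lam * ||w||_1:
    # a gradient step on the smooth part f, then the prox of the
    # non-smooth regularizer, scaled by the step size.
    return soft_threshold(w - lr * grad, lr * lam)

# Toy usage: minimize 0.5 * ||w - b||^2 + lam * ||w||_1.
# The closed-form minimizer is soft_threshold(b, lam).
b = np.array([3.0, -0.2, 0.05, -1.5])
w = np.zeros_like(b)
for _ in range(200):
    grad = w - b  # gradient of the smooth quadratic part
    w = prox_grad_step(w, grad, lr=0.5, lam=0.5)
# Entries of b smaller than lam in magnitude are driven exactly to
# zero, which is the sparsity-inducing effect discussed in the talk.
```

In a neural-network setting the exact gradient above would be replaced by a minibatch stochastic gradient, and SR2 additionally adapts a quadratic regularization term at each step.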
Federico Bobbio
organizer
Gabriele Dragotto
organizer
Location
Hybrid activity at GERAD
Zoom and room 4488
Pavillon André-Aisenstadt
Campus de l'Université de Montréal
2920, chemin de la Tour
Montréal Québec H3T 1J4
Canada