Seminario "Pathfinder: Quasi-Newton Variational Inference", prof. Bob Carpenter - Flatiron Institute, Center for Computational Mathematics, New York, 28 ottobre ore 16.30, Aula Conferenze 1_A, primo piano, Edificio D
Campus:
Trieste
"Pathfinder: Quasi-Newton Variational Inference"
Speaker: Prof. Bob Carpenter - Flatiron Institute, Center for Computational Mathematics, New York
Abstract
I will introduce the Pathfinder variational inference algorithm, which was motivated by finding good initializations for Markov chain Monte Carlo (i.e., solving the "burn-in" problem). It works by running quasi-Newton optimization (specifically, L-BFGS) on the target posterior, not on the stochastic ELBO as in other black-box variational inference algorithms. At each iteration of the optimization, Pathfinder defines a variational approximation to the posterior: a multivariate normal distribution whose covariance is the low-rank-plus-diagonal inverse Hessian estimate maintained by the optimizer. It then selects the approximation with the lowest KL divergence to the true posterior.
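To make the recipe concrete, here is a minimal, hedged Python sketch of the single-path idea. It is not the authors' implementation: for brevity, the covariance at each iterate is a dense finite-difference inverse Hessian rather than the low-rank-plus-diagonal estimate Pathfinder extracts from the L-BFGS updates, and the toy target `log_post` and all helper names are illustrative assumptions.

```python
# Minimal single-path Pathfinder sketch (illustrative, not the paper's code).
import numpy as np
from scipy.optimize import minimize
from scipy.stats import multivariate_normal

rng = np.random.default_rng(0)

def log_post(x):
    # Toy unnormalized log-posterior: a mildly banana-shaped 2-D density.
    return -0.5 * (x[0] ** 2 + 2.0 * (x[1] - 0.3 * x[0] ** 2) ** 2)

def neg_log_post(x):
    return -log_post(x)

def hess_inv_fd(x, eps=1e-3):
    # Dense finite-difference inverse Hessian of -log p at x (a stand-in for
    # the low-rank-plus-diagonal estimate that L-BFGS maintains internally).
    d = len(x)
    H = np.empty((d, d))
    E = np.eye(d) * eps
    for i in range(d):
        for j in range(d):
            H[i, j] = (neg_log_post(x + E[i] + E[j]) - neg_log_post(x + E[i] - E[j])
                       - neg_log_post(x - E[i] + E[j]) + neg_log_post(x - E[i] - E[j])) / (4 * eps ** 2)
    return np.linalg.inv(0.5 * (H + H.T))

def elbo(mu, Sigma, n_draws=30):
    # Monte Carlo estimate of E_q[log p(x) - log q(x)].
    q = multivariate_normal(mean=mu, cov=Sigma)
    draws = q.rvs(size=n_draws, random_state=rng)
    return np.mean([log_post(z) - q.logpdf(z) for z in draws])

# Follow one L-BFGS optimization path from a random initialization,
# recording every iterate along the way.
path = []
x0 = 3.0 * rng.normal(size=2)
res = minimize(neg_log_post, x0, method="L-BFGS-B",
               callback=lambda xk: path.append(xk.copy()))
path.append(res.x)

# Build a Gaussian approximation at each iterate and keep the one with the
# highest ELBO, i.e. the lowest KL divergence to the posterior.
candidates = []
for xk in path:
    Sigma = hess_inv_fd(xk)
    if np.all(np.linalg.eigvalsh(Sigma) > 0):  # keep valid covariances only
        candidates.append((elbo(xk, Sigma), xk, Sigma))

best, mu_star, Sigma_star = max(candidates, key=lambda t: t[0])
print("selected mean:", mu_star, "\nselected covariance:\n", Sigma_star)
```

Selecting the highest Monte Carlo ELBO is equivalent to selecting the lowest KL divergence to the posterior, since the two differ only by the constant log normalizer of the target.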
Multi-path Pathfinder runs multiple instances of Pathfinder in parallel and then uses importance resampling to produce a final set of draws. The single-path algorithm provides much better approximations (measured by Wasserstein distance or KL divergence) than the previous state-of-the-art mean-field or full-rank black-box variational inference schemes, and the multi-path algorithm is better still for posteriors with multiple modes or complex geometry. The computational bottleneck is evaluating the KL divergence through the evidence lower bound (ELBO), but this step is embarrassingly parallelizable. Even without parallelization, Pathfinder is one to three orders of magnitude faster than state-of-the-art black-box variational inference or than using the no-U-turn Hamiltonian Monte Carlo sampler for warmup, and it is also much more robust. We will show the results of evaluations on dozens of models from the posteriordb test suite as well as a range of high-dimensional and multimodal problems. This is joint work with Lu Zhang (first author, who did most of the hard work), Aki Vehtari, and Andrew Gelman.
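A similarly hedged sketch of the multi-path step: pool draws from several single-path approximations and importance-resample them with weights p(x)/q(x). The `(mean, covariance)` pairs here stand in for the outputs of independent Pathfinder runs; the paper additionally applies Pareto smoothing to the importance weights, which is omitted for brevity.

```python
# Illustrative multi-path step: pooled importance resampling over the
# Gaussian approximations returned by several single-path runs.
import numpy as np
from scipy.stats import multivariate_normal

def importance_resample(log_post, approximations, draws_per_path=100,
                        n_final=100, seed=1):
    rng = np.random.default_rng(seed)
    pooled, log_w = [], []
    for mu, Sigma in approximations:
        q = multivariate_normal(mean=mu, cov=Sigma)
        for z in q.rvs(size=draws_per_path, random_state=rng):
            pooled.append(z)
            log_w.append(log_post(z) - q.logpdf(z))  # log importance weight
    log_w = np.asarray(log_w)
    w = np.exp(log_w - log_w.max())                  # stabilize, then normalize
    w /= w.sum()
    idx = rng.choice(len(pooled), size=n_final, replace=True, p=w)
    return np.asarray(pooled)[idx]                   # resampled posterior draws

# Example: resample draws from two overdispersed Gaussian proposals
# targeting a standard normal log-density (a stand-in posterior).
approx = [(np.zeros(2), 2.0 * np.eye(2)), (np.array([0.5, 0.5]), np.eye(2))]
draws = importance_resample(lambda x: -0.5 * x @ x, approx)
print(draws.mean(axis=0))
```

Because each path, and each ELBO evaluation within a path, is independent, both stages parallelize trivially; this is the embarrassingly parallel structure mentioned in the abstract.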
Location:
DEAMS - Conference Room, first floor, Building D, Via A. Valerio n. 4/1 - Trieste
and
Online
Microsoft Teams meeting
Meeting ID: 363 706 022 306
Passcode: CeJuSn
Organizer:
DEAMS - Prof. Leonardo Egidi
Last updated: 24-10-2022, 13:36