8/10/2025

Natural Langevin Dynamics Enhances Bayesian Neural Network Training

10 tweets
2 min read

Thrummarise

@summarizer

Bayesian methods help prevent overfitting by sampling model parameters from a posterior distribution rather than relying on a single maximum likelihood estimate. Stochastic Gradient Langevin Dynamics (SGLD) adds appropriately scaled Gaussian noise to SGD updates so that the iterates approximately sample from this posterior, making the approach practical for large neural networks and datasets.
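
As a rough sketch (illustrative, not the authors' code), one SGLD step is a standard gradient step on the log-posterior plus Gaussian noise whose variance matches the step size; `grad_log_post` here stands for a minibatch estimate of the log-posterior gradient:

```python
import numpy as np

def sgld_step(theta, grad_log_post, step_size, rng):
    """One Stochastic Gradient Langevin Dynamics update.

    theta         : current parameter vector (1-D numpy array)
    grad_log_post : minibatch estimate of the gradient of the log-posterior
                    (log-prior plus rescaled minibatch log-likelihood)
    step_size     : learning rate epsilon
    rng           : numpy random Generator supplying the injected noise
    """
    noise = rng.normal(0.0, np.sqrt(step_size), size=theta.shape)
    return theta + 0.5 * step_size * grad_log_post + noise
```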

The noise variance in SGLD is crucial: smaller noise is needed in sensitive parameter directions. Theory suggests using the inverse Fisher information matrix as noise covariance, matching the posterior variance. However, computing the full Fisher matrix is infeasible for large models.
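
Heuristically (a standard asymptotic approximation, stated here in our own notation rather than quoted from the paper), with $N$ observations the posterior concentrates around a mode $\theta^{*}$ with

$$\operatorname{Cov}(\theta \mid \mathcal{D}) \;\approx\; \tfrac{1}{N}\, F(\theta^{*})^{-1},$$

so injecting noise with covariance proportional to the inverse Fisher matrix $F^{-1}$ gives it the same shape as the posterior itself.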

This paper leverages efficient approximations of the Fisher matrix for deep nets, combining Amari's natural gradient with Fisher-preconditioned Langevin dynamics. This 'natural Langevin dynamics' adapts noise and gradients, improving convergence and regularization in neural network training.

Preconditioning SGLD with the inverse Fisher matrix shapes noise to reflect model sensitivity, reducing noise in critical directions. This also transforms the gradient step into a natural gradient descent, which is invariant to parameter reparameterizations and has optimal convergence properties.
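
Written out (notation assumed here, not quoted from the paper: step size $\epsilon$, minibatch log-posterior gradient $\hat{g}_t$, Fisher matrix $F$), one natural Langevin step combines a natural-gradient drift with noise of matching covariance:

$$\theta_{t+1} \;=\; \theta_t \;+\; \tfrac{\epsilon}{2}\, F(\theta_t)^{-1}\hat{g}_t \;+\; \xi_t, \qquad \xi_t \sim \mathcal{N}\!\left(0,\; \epsilon\, F(\theta_t)^{-1}\right).$$

Setting $F$ to the identity recovers plain SGLD; the interesting cases are the cheaper approximations of $F$ discussed next.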

For large neural networks, the authors use a quasi-diagonal Fisher matrix approximation that retains key invariance properties while being computationally efficient. This approximation stores only the diagonal and select off-diagonal terms per neuron, enabling scalable natural Langevin dynamics.
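
A minimal sketch of what applying such a block inverse can look like for one neuron, assuming the retained entries are the diagonal and the bias-weight couplings (function name, damping term, and exact formulas are illustrative, not taken from the authors' code):

```python
import numpy as np

def quasi_diagonal_solve(g_b, g_w, f_bb, f_bw, f_ww, damping=1e-8):
    """Approximately solve F x = g for one neuron's quasi-diagonal Fisher block.

    g_b, f_bb       : gradient and Fisher entry for the neuron's bias
    g_w, f_bw, f_ww : per-incoming-weight gradients, bias-weight couplings,
                      and diagonal Fisher entries (1-D numpy arrays)

    Only O(#weights) entries are stored and the solve is O(#weights),
    which is what makes the approximation scalable.
    """
    det = f_ww * f_bb - f_bw ** 2 + damping            # per-weight 2x2 determinants
    nat_w = (f_bb * g_w - f_bw * g_b) / det            # preconditioned weight step
    nat_b = g_b / (f_bb + damping) - np.sum(f_bw / (f_bb + damping) * nat_w)
    return nat_b, nat_w
```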

Experiments on MNIST compare four SGLD preconditioners: Euclidean (identity), RMSProp, Diagonal Outer Product, and Quasi-Diagonal Outer Product (QDOP). QDOP with Bayesian posterior ensembling outperforms other SGLD variants and approaches the performance of dropout regularization.

Using an ensemble of parameter samples from the SGLD trajectory better approximates the Bayesian posterior predictive distribution than using a single posterior mean. This ensemble approach yields improved test accuracy and negative log-likelihood, consistent with Bayesian theory.
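
A minimal sketch of this ensembling, assuming a `model(params, x)` function that returns class probabilities (both names are placeholders, not the authors' API): average the predictive distributions of the sampled parameters rather than averaging the parameters themselves.

```python
import numpy as np

def posterior_predictive(model, param_samples, x):
    """Approximate the Bayesian posterior predictive on inputs x by
    averaging class probabilities over parameters sampled along the
    SGLD trajectory."""
    probs = [model(params, x) for params in param_samples]
    return np.mean(probs, axis=0)  # average probabilities, not parameters
```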

The study highlights that naive SGLD or diagonal Fisher approximations offer limited gains. The quasi-diagonal Fisher approximation captures more invariances, leading to better noise shaping and gradient updates, thus enhancing regularization and generalization in neural networks.

In summary, natural Langevin dynamics with Fisher matrix preconditioning provides a principled, scalable way to incorporate Bayesian uncertainty in deep learning. It combines natural gradient benefits with noise adapted to model sensitivity, improving training and reducing overfitting.

The authors provide open-source code for their implementation, facilitating adoption and further research. This work bridges Bayesian theory, information geometry, and practical deep learning optimization, advancing robust neural network training methods.
