Name: Julius Berner
Type: User
Company: California Institute of Technology
Bio: Postdoc @Caltech | PhD @mdsunivie | former research intern @facebookresearch and @nvidia | bridging theory and practice in deep learning
Twitter: julberner
Location: Pasadena, US
Blog: https://jberner.info
Julius Berner's Projects
Tutorial on Deep Declarative Networks
Solving stochastic differential equations and Kolmogorov equations by means of deep learning and Multilevel Monte Carlo simulation
Numerically Solving Parametric Families of High-Dimensional Kolmogorov Partial Differential Equations via Deep Learning (NeurIPS 2020)
Code for "DPOT: Auto-Regressive Denoising Operator Transformer for Large-Scale PDE Pre-Training"
Contextual Emotion Detection in Text (DoubleDistilBert Model)
Ansible playbook for dockerized home-server
Using DiffEqFlux to learn underlying differential equations from data.
Efficient neural PDE solvers for elliptic PDEs based on Walk-on-Spheres methods.
A collection of tools for neural compression enthusiasts. Featuring my work on Bits-Back coding with diffusion models, see projects/bits_back_diffusion.
A short tutorial on normalizing flows using Jupyter slides
Illustrating the failure of inverse stability of the neural network realization map.
Material for 'Mathematics of Deep Learning Workshop' (Invited Talk)
Painter classification - Model deployment on Render
A short introduction to probabilistic graphical models using Jupyter slides
Towards a regularity theory for ReLU networks (construction of approximating networks, ReLU derivative at zero, theory)
Robust SDE-Based Variational Formulations for Solving Linear PDEs via Deep Learning (ICML 2022)
Ansible playbook to setup a VPN router using OpenWrt on a Raspberry Pi
GHOSTS: Mathematical Capabilities of ChatGPT
Improved sampling via learned diffusions (ICLR 2024) and an optimal control perspective on diffusion-based generative modeling (TMLR 2024)
Soft-margin SVM gradient-descent implementation in PyTorch and TensorFlow/Keras
Learning ReLU networks to high uniform accuracy is intractable (ICLR 2023)