Deep learning

Euclidean spaces – semialgebraic sets – latent, embedded manifolds – topology – function spaces – metric spaces

Generative models, inference (UQ) and topology

  • ‘Globally injective ReLU networks’, J. Mach. Learn. Res. 23 (2022) 1-55, with M. Puthawala, K. Kothari, M. Lassas and I. Dokmanić. View pdf
  • ‘Universal joint approximation of manifolds and densities by simple injective flows’, ICML, PMLR 162 (2022) 17959-17983, with M. Puthawala, M. Lassas and I. Dokmanić. View pdf
  • ‘TRUMPETS: Injective flows for inference and inverse problems’, Uncertainty in Artificial Intelligence ’21 (2021) 1269-1278, with K. Kothari, A.E. Khorashadizadeh and I. Dokmanić. View pdf
  • ‘Conditional injective flows for Bayesian imaging’, IEEE Trans. Comput. Imaging 9 (2023) 224-237, with A.E. Khorashadizadeh, K. Kothari, L. Salsi, A.A. Harandi and I. Dokmanić. View pdf
  • ‘Deep invertible approximation of topologically rich maps between manifolds’ (2025), with M. Puthawala, M. Lassas, I. Dokmanić and P. Pankka. View pdf
  • ‘Deep invertible approximation of topologically nontrivial fibrations’ (2026), with M. Puthawala, M. Lassas and I. Dokmanić, not available yet.

Hilbert spaces, approximating probability distributions, memorization

  • ‘Conditional score-based diffusion models for Bayesian inference in infinite dimensions’, NeurIPS Proceedings: Advances in Neural Information Processing Systems 36 (2023) 24262-24290, with L. Baldassari, A. Siahkoohi, J. Garnier and K. Sølna. View pdf
  • ‘Preconditioned Langevin dynamics with score-based generative models for infinite-dimensional linear Bayesian inverse problems’, NeurIPS Proceedings: Advances in Neural Information Processing Systems (2025) in press, with L. Baldassari, J. Garnier and K. Sølna. View pdf
  • ‘On the convergence of Hilbert space MCMC with score-based priors and classifier-free guidance for nonlinear inverse problems’ (2025), with L. Baldassari, J. Garnier and K. Sølna, not available yet.
  • ‘Diffeomorphism equivariant sampling methods via Bézier curves in Bayes Hilbert spaces’ (2025), with D. Mis and M. Lassas, not available yet.

Low-dimensional structure and manifolds

  • ‘Reconstructing manifolds of large reach via measure learning in Bayes Hilbert spaces’ (2026), with D. Mis and M. Lassas, not available yet.

Foundation models

  • ‘Learned conditioning operators yielding a foundation model for Bayesian inference’ (2026), not available yet.

Neural operators, surrogate models and adjoint states, SciML

  • ‘Globally injective and bijective neural operators’, NeurIPS Proceedings: Advances in Neural Information Processing Systems 36 (2023) 57713-57753, with T. Furuya, M. Puthawala and M. Lassas. View pdf
  • ‘Out-of-distributional risk bounds for neural operators with applications to the Helmholtz equation’, J. Comput. Phys. 513 (2024) 113168, with J.A. Lara Benitez, T. Furuya, F. Faucher, A. Kratsios and X. Tricoche. View pdf
  • ‘Maps between graph measures and transformers for operator learning’ (2026), with K. Alkire, T. Furuya and M. Lassas, not available yet.

Computing

  • ‘Mixture of experts softens the curse of dimensionality in operator learning’ (2024), with A. Kratsios, T. Furuya, J.A. Lara Benitez and M. Lassas. View pdf
  • ‘Can neural operators always be continuously discretized?’, NeurIPS Proceedings: Advances in Neural Information Processing Systems (2024) in press, with T. Furuya, M. Puthawala and M. Lassas. View pdf
  • ‘The algebra of neural operator approximation’ (2026), with T. Furuya, M. Puthawala and M. Lassas, not available yet.

Deep learning, interpretability and inverse problems

  • ‘Learning the geometry of wave-based imaging’, NeurIPS Proceedings: Advances in Neural Information Processing Systems 33 (2020) 8318-8329, with K. Kothari and I. Dokmanić. View pdf
  • ‘Learning double fibration transforms is data efficient’ (2025), with T.M. Roddenberry, L. Tzou, I. Dokmanić and R.G. Baraniuk, not available yet.
  • ‘Deep learning architectures for nonlinear operator functions and nonlinear inverse problems’, Mathematical Statistics and Learning 4 (2022) 1-86, doi:10.4171/MSL/28, with M. Lassas and C.A. Wong. View pdf
  • ‘Convergence rates for learning linear operators from noisy data’, SIAM/ASA Journal on Uncertainty Quantification 11 (2023) 480-513, with N.B. Kovachki, N.H. Nelson and A.M. Stuart. View pdf
  • ‘Approximating the Electrical Impedance Tomography inversion operator’ (2025), with N.B. Kovachki, M. Lassas and N.H. Nelson, not available yet.

Semialgebraic sets

  • ‘Semialgebraic Neural Networks: From roots to representations’, ICLR (2025) in press, with D. Mis and M. Lassas. View pdf
  • ‘Generative equilibrium operators’ (2026), not available yet.

Foundation models, measures and (interacting) particle systems

  • ‘An approximation theory for metric space-valued functions with a view towards deep learning’ (2023), with A. Kratsios, C. Liu, M. Lassas and I. Dokmanić. View pdf
  • ‘Transformers are universal in-context learners’, ICLR (2025) in press, with T. Furuya and G. Peyré. View pdf
  • ‘Transformers through the lens of support-preserving maps between measures’ (2025), with T. Furuya and M. Lassas. View pdf

Kinetic theory context, collisions

  • ‘Neural equilibria for long-term prediction of nonlinear conservation laws’ (2025), with J.A. Lara Benitez, J. Guo, K. Hegazy, I. Dokmanić and M.W. Mahoney. View pdf
  • ‘Operator state space models from nonlinear conservation laws’ (2026), with H. Schluter, I. Dokmanić and M. Lassas, not available yet.
  • ‘Limit of transformers, reasoning and Boltzmann equation’ (2026), with T. Furuya, not available yet.

Training dynamics and feature dynamics

  • ‘Training dynamics of infinitely deep and wide transformers’ (2025), not available yet.
  • ‘Hypernetworks inducing heavy-tailed distributions of weights while flattening the loss landscape’ (2025), not available yet.
  • ‘Training of infinitely deep residual “sequential” neural operators and optimal control’ (2026), not available yet.
  • ‘Dynamics of feature learning in injective flows through the embedding gap’ (2026), not available yet.