Deep learning

Euclidean spaces – semialgebraic sets – latent, embedded manifolds – topology – function spaces – metric spaces

Generative models, inference (UQ) and topology

  • ‘Globally injective ReLU networks’, J. Mach. Learn. Res. 23 (2022) 1-55, with M. Puthawala, K. Kothari, M. Lassas and I. Dokmanić. View pdf
  • ‘Universal joint approximation of manifolds and densities by simple injective flows’, ICML 162 (2022) 17959-17983, with M. Puthawala, M. Lassas and I. Dokmanić. View pdf
  • ‘TRUMPETS: Injective flows for inference and inverse problems’, Uncertainty in Artificial Intelligence ’21 (2021) 1269-1278, with K. Kothari, A.E. Khorashadizadeh and I. Dokmanić. View pdf
  • ‘Conditional injective flows for Bayesian imaging’, IEEE Trans. Comput. Imaging 9 (2023) 224-237, with A.E. Khorashadizadeh, K. Kothari, L. Salsi, A.A. Harandi and I. Dokmanić. View pdf
  • ‘Deep invertible approximation of topologically rich maps between manifolds’ (2025), with M. Puthawala, M. Lassas, I. Dokmanić and P. Pankka. View pdf
  • ‘Deep invertible approximation of topologically rich fibrations’ (2025), not available yet.

Hilbert spaces, sampling

  • ‘Conditional score-based diffusion models for Bayesian inference in infinite dimensions’, NeurIPS Proceedings: Advances in Neural Information Processing Systems 36 (2023) 24262-24290, with L. Baldassari, A. Siahkoohi, J. Garnier and K. Sølna. View pdf
  • ‘Convergence analysis for Hilbert space MCMC with score-based priors for nonlinear Bayesian inverse problems’ (2024), with L. Baldassari, A. Siahkoohi, J. Garnier and K. Sølna. View pdf
  • ‘Deep functional clustering and weighted quantization’ (2026), with A. Kratsios, Y. Li and P. McNicholas, not available yet.
  • ‘Measure gradient flows in Bayes Hilbert spaces, diffeomorphism-invariance and interacting particle systems’ (2025), with D. Mis and M. Lassas, not available yet.

Graphs, contextual stochastic block models

  • ‘Contextual stochastic block model: Quantum algorithm and recovery threshold’ (2026), with G. Covi and M. Lassas, not available yet.

Foundation models

  • ‘Bayesian perspectives on fine tuning’ (2026), not available yet.

Neural operators, surrogate models and adjoint states, SciML

  • ‘Globally injective and bijective neural operators’, NeurIPS Proceedings: Advances in Neural Information Processing Systems 36 (2023) 57713-57753, with T. Furuya, M. Puthawala and M. Lassas. View pdf
  • ‘Out-of-distributional risk bounds for neural operators with applications to the Helmholtz equation’, J. Comput. Phys. 513 (2024) 113168, with J.A. Lara Benitez, T. Furuya, F. Faucher, A. Kratsios and X. Tricoche. View pdf
  • ‘Vision transformers approximate dynamic operators, a measures perspective’ (2025), with K. Alkyre, not available yet.

Computing

  • ‘Mixture of experts softens the curse of dimensionality in operator learning’ (2024), with A. Kratsios, T. Furuya, J.A. Lara Benitez and M. Lassas. View pdf
  • ‘Can neural operators always be continuously discretized?’, NeurIPS (2024) in press, with T. Furuya, M. Puthawala and M. Lassas. View pdf
  • ‘Triangular neural operators and their structured, continuous discretization’ (2025), with T. Furuya, M. Puthawala and M. Lassas, not available yet.

Deep learning, interpretability and inverse problems

  • ‘Learning the geometry of wave-based imaging’, NeurIPS Proceedings: Advances in Neural Information Processing Systems 33 (2020) 8318-8329, with K. Kothari and I. Dokmanić. View pdf
  • ‘Learning integral geometric operators with cross attention is data efficient’ (2025), with T.M. Roddenberry, L. Tzou, I. Dokmanić and R.G. Baraniuk, not available yet.
  • ‘Deep learning architectures for nonlinear operator functions and nonlinear inverse problems’, Mathematical Statistics and Learning 4 (2022) 1-86, doi:10.4171/MSL/28, with M. Lassas and C.A. Wong. View pdf
  • ‘Convergence rates for learning linear operators from noisy data’, SIAM/ASA Journal on Uncertainty Quantification 11 (2023) 480-513, with N.B. Kovachki, N.H. Nelson and A.M. Stuart. View pdf
  • ‘Approximating the Electrical Impedance Tomography inversion operator’ (2025), with N.B. Kovachki, M. Lassas and N.H. Nelson, not available yet.
  • ‘Approximating the geometric inversion operator on manifolds under boundary rigidity’, not available yet.

Semialgebraic sets

  • ‘Semialgebraic Neural Networks: From roots to representations’, ICLR (2025) in press, with D. Mis and M. Lassas. View pdf
  • ‘Algebraic statistics through the lens of Semialgebraic Neural Networks’ (2025), not available yet.

Foundation models, measures and interacting particle systems

  • ‘An approximation theory for metric space-valued functions with a view towards deep learning’ (2023), with A. Kratsios, C. Liu, M. Lassas and I. Dokmanić. View pdf
  • ‘Transformers are universal in-context learners’, ICLR (2025) in press, with T. Furuya and G. Peyré. View pdf
  • ‘Support preserving maps between measures and the essence of transformers’ (2025), with T. Furuya and M. Lassas, not available yet.

Kinetic theory analogs

  • ‘Neural equilibria for long-term prediction of nonlinear conservation laws’ (2025), with J.A. Lara Benitez, J. Guo, K. Hegazy, I. Dokmanić and M.W. Mahoney. View pdf
  • ‘Commutative Banach algebras and phase space lifting for generalizing SSMs rooted in lattice Boltzmann models’ (2025), not available yet.
  • ‘Control of kinetic models and foundation models’ (2026), not available yet.

Training dynamics and feature dynamics

  • ‘Training dynamics of infinitely deep and wide transformers’ (2025), not available yet.
  • ‘Hypernetworks inducing heavy tailed distributions of weights and flattening the loss landscape’ (2025), not available yet.
  • ‘Loss landscape and training of infinitely deep residual “sequential” neural operators’ (2026), not available yet.
  • ‘Dynamics of feature learning in injective flows through an embedding gap’ (2026), not available yet.