Euclidean spaces – semialgebraic sets – latent, embedded manifolds – topology – function spaces – metric spaces
Generative models, inference (UQ) and topology
- ‘Globally injective ReLU networks’, J. Mach. Learn. Res. 23 (2022) 1-55, with M. Puthawala, K. Kothari, M. Lassas and I. Dokmanić. View
- ‘Universal joint approximation of manifolds and densities by simple injective flows’, ICML 162 (2022) 17959-17983, with M. Puthawala, M. Lassas and I. Dokmanić. View
- ‘TRUMPETS: Injective flows for inference and inverse problems’, Uncertainty in Artificial Intelligence ’21 (2021) 1269-1278, with K. Kothari, A.E. Khorashadizadeh and I. Dokmanić. View
- ‘Conditional injective flows for Bayesian imaging’, IEEE Trans. Comput. Imaging 9 (2023) 224-237, with A.E. Khorashadizadeh, K. Kothari, L. Salsi, A.A. Harandi and I. Dokmanić. View
- ‘Deep invertible approximation of topologically rich maps between manifolds’ (2025), with M. Puthawala, M. Lassas, I. Dokmanić and P. Pankka. View
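For orientation, here is a minimal numpy sketch of the mechanism that makes a ReLU layer globally injective: stacking a full-column-rank matrix B with its negation keeps the layer lossless, since the pre-activation Bx is the difference of the two output halves. This is a toy illustration of the injectivity idea behind the papers above, not their architecture; B, the layer sizes and the pseudoinverse-based inversion are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def injective_relu_layer(x, B):
    """ReLU layer x -> ReLU(Wx) with W = [B; -B] stacked.

    If B has full column rank the layer is globally injective:
    ReLU(Bx) - ReLU(-Bx) = Bx, so the pre-activation survives.
    """
    W = np.vstack([B, -B])          # expansive weight matrix, 2m x n
    return np.maximum(W @ x, 0.0)   # componentwise ReLU

def invert_layer(y, B):
    """Recover x from y = ReLU([B; -B] x) using a left inverse of B."""
    m = B.shape[0]
    Bx = y[:m] - y[m:]              # difference of the two output halves
    return np.linalg.pinv(B) @ Bx

n, m = 3, 5                          # expansion: 3 inputs -> 10 outputs
B = rng.standard_normal((m, n))      # full column rank with probability 1
x = rng.standard_normal(n)
y = injective_relu_layer(x, B)
print(np.allclose(invert_layer(y, B), x))  # True: invertible on its range
```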
Function spaces
- ‘Conditional score-based diffusion models for Bayesian inference in infinite dimensions’, NeurIPS Proceedings: Advances in Neural Information Processing Systems 36 (2023) 24262-24290, with L. Baldassari, A. Siahkoohi, J. Garnier and K. Sølna. View
- ‘Convergence analysis for Hilbert space MCMC with score-based priors for nonlinear Bayesian inverse problems’ (2024), with L. Baldassari, A. Siahkoohi, J. Garnier and K. Sølna. View
- ‘Deep functional clustering, weighted quantization and probability measures’, not available yet
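As a rough sketch of dimension-robust MCMC on a discretized function space, in the spirit of the second entry: a preconditioned Crank-Nicolson (pCN) chain whose proposal is reversible with respect to the Gaussian prior, so the acceptance ratio involves only the data misfit. The Karhunen-Loève eigenvalues, toy forward map G and noise level below are made-up stand-ins, and the score-based prior of the paper is replaced here by a plain Gaussian.

```python
import numpy as np

rng = np.random.default_rng(1)

# Gaussian prior N(0, C) discretized by a Karhunen-Loeve expansion with
# hypothetical eigenvalues lam_k = k^{-2}; a draw is u_k = sqrt(lam_k) xi_k.
K = 200
lam = np.arange(1, K + 1, dtype=float) ** -2.0

def sample_prior():
    return np.sqrt(lam) * rng.standard_normal(K)

# Toy nonlinear forward map and misfit Phi(u) = (G(u) - y)^2 / (2 sigma^2).
def G(u):
    return np.tanh(u[:5].sum())

sigma = 0.1
y_obs = G(sample_prior()) + sigma * rng.standard_normal()
Phi = lambda u: (G(u) - y_obs) ** 2 / (2 * sigma**2)

# pCN: the proposal is reversible w.r.t. the prior, so the acceptance
# ratio involves only the misfit and the chain remains well behaved as
# the discretization level K grows.
beta, n_steps = 0.2, 5000
u = sample_prior()
for _ in range(n_steps):
    v = np.sqrt(1 - beta**2) * u + beta * sample_prior()  # pCN proposal
    if np.log(rng.uniform()) < Phi(u) - Phi(v):           # accept/reject
        u = v
```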
Neural operators, surrogate models and SciML
- ‘Globally injective and bijective neural operators’, NeurIPS Proceedings: Advances in Neural Information Processing Systems 36 (2023) 57713-57753, with T. Furuya, M. Puthawala and M. Lassas. View
- ‘Out-of-distributional risk bounds for neural operators with applications to the Helmholtz equation’, J. Comp. Phys. 513 (2024) 113168, with J.A. Lara Benitez, T. Furuya, F. Faucher, A. Kratsios and X. Tricoche. View
- ‘Integration of autoregressive neural operators with diffusion models reducing regularity of function spaces’, not available yet
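To fix ideas, here is a single Fourier-space layer of the generic kind used in spectral neural operators, in numpy: the learned part is a multiplier on the lowest Fourier modes, so the same weights define a map between functions at any grid resolution. This is a standard FNO-style layer for illustration, not the specific architectures of these papers; the mode count and random weights are placeholders.

```python
import numpy as np

rng = np.random.default_rng(2)

def spectral_layer(u, weights, n_modes):
    """One FNO-style layer acting on samples u of a periodic function.

    The learned action is a complex multiplier on the lowest Fourier
    modes, so the layer is defined on functions rather than on a
    particular grid.
    """
    u_hat = np.fft.rfft(u)
    out_hat = np.zeros_like(u_hat)
    out_hat[:n_modes] = weights * u_hat[:n_modes]   # act on low modes only
    v = np.fft.irfft(out_hat, n=len(u))
    return np.maximum(v + u, 0.0)                   # skip connection + ReLU

n_grid, n_modes = 256, 16
w = rng.standard_normal(n_modes) + 1j * rng.standard_normal(n_modes)
x = np.linspace(0.0, 2.0 * np.pi, n_grid, endpoint=False)
u = np.sin(3.0 * x)
v = spectral_layer(u, w, n_modes)   # works unchanged at any n_grid
```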
Computing
- ‘Mixture of experts softens the curse of dimensionality in operator learning’ (2024), with A. Kratsios, T. Furuya, J.A. Lara Benitez and M. Lassas. View
- ‘Can neural operators always be continuously discretized?’, NeurIPS (2024) in print, with T. Furuya, M. Puthawala and M. Lassas. View
- ‘Triangular neural operators and their structured, continuous discretization’ (2025), with T. Furuya, M. Puthawala and M. Lassas, not available yet
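A schematic of the mixture-of-experts idea in the first entry above, sketched in numpy: a gating network outputs a probability vector over small expert networks and the prediction is the gated combination, so each expert only has to be accurate on its own region of input space. All sizes and the softmax gate below are illustrative assumptions, not the construction analyzed in the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

def mlp(x, W1, W2):
    """Small two-layer expert network."""
    return W2 @ np.maximum(W1 @ x, 0.0)

n_in, n_hidden, n_out, n_experts = 8, 16, 4, 5
experts = [(rng.standard_normal((n_hidden, n_in)) / np.sqrt(n_in),
            rng.standard_normal((n_out, n_hidden)) / np.sqrt(n_hidden))
           for _ in range(n_experts)]
W_gate = rng.standard_normal((n_experts, n_in))

def mixture_of_experts(x):
    # The gate turns the input into a probability vector over experts;
    # the output is the corresponding convex combination of expert outputs.
    scores = W_gate @ x
    g = np.exp(scores - scores.max())
    g /= g.sum()
    return sum(gi * mlp(x, W1, W2) for gi, (W1, W2) in zip(g, experts))

y = mixture_of_experts(rng.standard_normal(n_in))
print(y.shape)  # (4,)
```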
Deep learning, interpretability and inverse problems
- ‘Learning the geometry of wave-based imaging’, NeurIPS Proceedings: Advances in Neural Information Processing Systems 33 (2020) 8318-8329, with K. Kothari and I. Dokmanić. View
- ‘Deep learning architectures for nonlinear operator functions and nonlinear inverse problems’, Mathematical Statistics and Learning 4 (2022) 1-86, doi:10.4171/MSL/28, with M. Lassas and C.A. Wong. View
- ‘Convergence rates for learning linear operators from noisy data’, SIAM/ASA Journal on Uncertainty Quantification 11 (2023) 480-513, with N.B. Kovachki, N.H. Nelson and A.M. Stuart. View
- ‘Approximating the Electrical Impedance Tomography inversion operator’ (2025), with N.B. Kovachki, M. Lassas and N.H. Nelson, not available yet; ‘Approximating the geometric inversion operator on manifolds under boundary rigidity’, not available yet
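As a concrete instance of the linear-operator-learning problem in the third entry: estimate a discretized operator from noisy input-output pairs with a ridge estimator. The Gaussian kernel playing the unknown operator, the sample count and the regularization level are hypothetical; they only make visible the bias-variance tradeoff that the convergence-rate analysis quantifies.

```python
import numpy as np

rng = np.random.default_rng(4)

# Unknown discretized linear operator: a (hypothetical) Gaussian
# smoothing kernel on a grid of n points.
n, n_samples, noise = 50, 200, 0.05
idx = np.arange(n)
A_true = np.exp(-0.5 * (np.subtract.outer(idx, idx) / 3.0) ** 2)

X = rng.standard_normal((n, n_samples))               # random inputs
Y = A_true @ X + noise * rng.standard_normal((n, n_samples))

# Ridge estimator A_hat = Y X^T (X X^T + alpha I)^{-1}: alpha trades
# the variance coming from the noise against bias, the tradeoff that
# drives the rates as n_samples grows.
alpha = noise**2 * n_samples / n
A_hat = Y @ X.T @ np.linalg.inv(X @ X.T + alpha * np.eye(n))
print(np.linalg.norm(A_hat - A_true) / np.linalg.norm(A_true))
```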
Semialgebraic sets
- ‘Semialgebraic Neural Networks: From roots to representations’, ICLR (2025) in print, with D. Mis and M. Lassas. View
- ‘Algebraic statistics through the lens of Semialgebraic Neural Networks’ (2025), not available yet
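A cartoon of the ‘from roots to representations’ viewpoint: root functions of polynomials are prototypical semialgebraic functions, and unrolling Newton's iteration gives a fixed, network-like map from polynomial coefficients to a root. This only indicates the flavor of the objects involved, not the SANN construction itself.

```python
import numpy as np

def newton_root(coeffs, x0=1.0, n_iters=50):
    """Map polynomial coefficients to a real root via Newton's method.

    Each iteration is the same differentiable update, so the unrolled
    loop behaves like a fixed network computing a root function, the
    prototypical semialgebraic building block.
    """
    p = np.polynomial.Polynomial(coeffs)
    dp = p.deriv()
    x = x0
    for _ in range(n_iters):
        x = x - p(x) / dp(x)   # Newton step
    return x

# Root of x^2 - 2 from x0 = 1: converges to sqrt(2) = 1.41421...
print(newton_root([-2.0, 0.0, 1.0]))
```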
Foundation models, measures and particles
- ‘An approximation theory for metric space-valued functions with a view towards deep learning’ (2023), with A. Kratsios, C. Liu, M. Lassas, and I. Dokmanić. View
- ‘Transformers are universal in-context learners’, ICLR (2025) in print, with T. Furuya and G. Peyré. View
- ‘Support preserving maps between measures and the essence of transformers’, (2025), with T. Furuya and M. Lassas, not available yet
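A small numpy check of the measure-level reading of attention behind these papers: softmax self-attention sends each token to an average of values against a probability distribution over the tokens, so the layer is permutation-equivariant and factors through the empirical measure of the input set. Dimensions and weights below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(5)

def attention(X, Wq, Wk, Wv):
    """Softmax self-attention on a set of tokens X (n_tokens x d).

    Every output token is an average of the values against a
    probability distribution over tokens, so the layer acts on the
    empirical measure of the token set.
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    S = Q @ K.T / np.sqrt(K.shape[1])
    P = np.exp(S - S.max(axis=1, keepdims=True))
    P /= P.sum(axis=1, keepdims=True)   # each row is a probability measure
    return P @ V

n, d = 6, 4
X = rng.standard_normal((n, d))
Wq, Wk, Wv = (rng.standard_normal((d, d)) for _ in range(3))
perm = rng.permutation(n)
# Permuting the input tokens permutes the outputs the same way:
print(np.allclose(attention(X[perm], Wq, Wk, Wv),
                  attention(X, Wq, Wk, Wv)[perm]))  # True
```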
Kinetic theory analogies
- ‘Neural equilibria for long-term prediction of nonlinear conservation laws’ (2025), with J.A. Lara Benitez, J. Guo, K. Hegazy, I. Dokmanić and M.W. Mahoney. View
- ‘Generalized SSMs with commutative Banach algebras and phase space lifting rooted in kinetic theory’ (2025), not available yet
- A unified perspective of foundation models as mappings between measures …
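For context, the classical baseline in the conservation-law setting of the first entry: a Lax-Friedrichs finite-volume step for Burgers' equation, the kind of solver against which a learned long-horizon time-stepper is measured. The grid, initial data and CFL factor below are illustrative.

```python
import numpy as np

def lax_friedrichs_step(u, dx, dt):
    """One finite-volume step for Burgers' equation u_t + (u^2/2)_x = 0."""
    f = 0.5 * u**2                       # physical flux
    up, um = np.roll(u, -1), np.roll(u, 1)
    fp, fm = np.roll(f, -1), np.roll(f, 1)
    # Lax-Friedrichs numerical fluxes at the two cell interfaces:
    flux_right = 0.5 * (f + fp) - 0.5 * dx / dt * (up - u)
    flux_left = 0.5 * (fm + f) - 0.5 * dx / dt * (u - um)
    return u - dt / dx * (flux_right - flux_left)

n = 200
dx = 2.0 * np.pi / n
x = np.arange(n) * dx
u = np.sin(x)                            # periodic initial data
dt = 0.5 * dx / np.abs(u).max()          # CFL-limited time step
for _ in range(100):
    u = lax_friedrichs_step(u, dx, dt)   # shock forms and stays stable
```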
Training dynamics and feature dynamics
- ‘Training dynamics of infinitely deep and wide transformers’ (2025), not available yet
- ‘Hypernetworks, heavy tailed distributions of weights and flattening the loss landscape’ (2025), not available yet
- ‘Loss landscape and training of infinitely deep residual “sequential” neural operators’, not available yet
- ‘Variational-inference-based training paradigm for scalable uncertainty quantification and improved generalization in deep networks’ (2025), with A. Siahkoohi, not available yet
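A bare-bones sketch of the variational-inference idea in the last entry: a mean-field Gaussian posterior over network weights, sampled via the reparameterization trick, with predictive uncertainty read off from the spread over weight draws. The tiny tanh model and all numbers are placeholders, not the training paradigm of the paper.

```python
import numpy as np

rng = np.random.default_rng(6)

# Mean-field variational posterior over the weights of a tiny model:
# each weight has a mean and a log-std (fixed here; in practice both
# are learned by maximizing the evidence lower bound).
n_in, n_out = 3, 1
mu = 0.1 * rng.standard_normal((n_out, n_in))
log_sigma = np.full((n_out, n_in), -2.0)

def predict(x, n_samples=100):
    """Average predictions over weight samples w = mu + sigma * eps."""
    preds = []
    for _ in range(n_samples):
        eps = rng.standard_normal(mu.shape)
        w = mu + np.exp(log_sigma) * eps   # reparameterization trick
        preds.append(np.tanh(w @ x))
    preds = np.array(preds)
    # The spread across weight samples quantifies model uncertainty.
    return preds.mean(axis=0), preds.std(axis=0)

mean, std = predict(rng.standard_normal(n_in))
print(mean, std)
```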