PLENARY SPEAKERS
Arnaud Doucet
Google DeepMind
Arnaud Doucet is a Senior Staff Research Scientist at Google DeepMind. He earned his Ph.D. in Electrical Engineering from the University of Paris-XI Orsay in 1997. Over the years, he held research positions at Oxford University, Cambridge University, and the University of Melbourne before joining Google DeepMind in 2023. His research focuses on Bayesian methods, computational statistics, Monte Carlo methods, and generative modeling. Doucet has received several prestigious honors, including being named an Institute of Mathematical Statistics (IMS) Fellow in 2017, delivering the IMS Medallion Lecture in 2016, and receiving the Guy Medal in Silver from the Royal Statistical Society in 2020. His recent work explores denoising diffusion models and computational optimal transport, with numerous publications featured in top statistical and machine learning journals.
Talk title: From Diffusion Models to Schrödinger Bridges: Generative Modeling Meets Optimal Transport
Abstract: Diffusion models have revolutionized generative modeling. Conceptually, these methods define a transport mechanism from a noise distribution to a data distribution. Recent advancements have extended this framework to define transport maps between arbitrary distributions, significantly expanding the potential for unpaired data translation. However, existing methods often fail to approximate optimal transport maps, which are theoretically known to possess advantageous properties. In this talk, we will show how one can modify current methodologies to compute Schrödinger bridges—an entropy-regularized variant of dynamic optimal transport. We will demonstrate this methodology on a variety of unpaired data translation tasks.
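To make the entropy-regularized optimal transport referenced above concrete, here is a minimal sketch (not code from the talk) of Sinkhorn iterations between two discrete distributions, the static analogue of the dynamic Schrödinger bridge problem; the toy cost matrix, regularization strength eps, and iteration count are illustrative choices.

```python
import numpy as np

def sinkhorn(a, b, C, eps=0.1, n_iters=200):
    """Entropy-regularized OT between discrete distributions a and b with
    cost matrix C: the static analogue of a Schrodinger bridge."""
    K = np.exp(-C / eps)                 # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(n_iters):
        v = b / (K.T @ u)                # rescale to match marginal b
        u = a / (K @ v)                  # rescale to match marginal a
    return u[:, None] * K * v[None, :]   # coupling with marginals (a, b)

# Toy example: couple two 1D point clouds under squared-distance cost.
rng = np.random.default_rng(0)
x, y = rng.normal(0.0, 1.0, 50), rng.normal(3.0, 1.0, 50)
C = (x[:, None] - y[None, :]) ** 2
C /= C.max()                             # normalize to avoid kernel underflow
P = sinkhorn(np.full(50, 1 / 50), np.full(50, 1 / 50), C)
print(P.sum(), P.sum(axis=1)[:3])        # total mass ~1, each row ~1/50
```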
Volkan Cevher
École Polytechnique Fédérale de Lausanne (EPFL)
Volkan Cevher is an Associate Professor at the Swiss Federal Institute of Technology Lausanne (EPFL) and an Amazon Scholar. He earned his B.Sc. in Electrical Engineering as valedictorian from Bilkent University in 1999 and completed his Ph.D. at the Georgia Institute of Technology in 2005. Before joining EPFL, he held research positions at the University of Maryland and Rice University. His research focuses on machine learning, optimization, signal processing, and information theory. He has been recognized with several prestigious awards, including being named an IEEE Fellow in 2024, receiving the ICML AdvML Best Paper Award in 2023, and winning the Google Faculty Research Award in 2018. Additionally, he was awarded an ERC Consolidator Grant in 2016 and an ERC Starting Grant in 2011. His recent publications explore topics such as stochastic optimization, federated learning, and adversarial robustness, with multiple papers featured in leading AI and machine learning conferences.
Talk title: Training Neural Networks at Any Scale
Abstract: At the heart of deep learning's transformative impact lies the concept of scale, encompassing both data and computational resources, as well as their interaction with neural network architectures. Scale, however, presents critical challenges, such as increased instability during training and prohibitively expensive model-specific tuning. Given the substantial resources required to train such models, formulating high-confidence scaling hypotheses backed by rigorous theoretical research has become paramount. To bridge theory and practice, the talk explores a key mathematical ingredient of scaling in tandem with scaling theory: the numerical solution algorithms commonly employed in deep learning, spanning domains from vision to language models. We unify these algorithms under a common master template, making their foundational principles transparent. In doing so, we reveal the interplay between adaptation to smoothness structures via online learning and the exploitation of optimization geometry through non-Euclidean norms. Our exposition moves beyond simply building larger models; it emphasizes strategic scaling, offering insights that promise to advance the field while economizing on resources.
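As a rough illustration of the "steepest descent under a chosen norm" viewpoint the abstract alludes to, here is a minimal sketch; the master template itself is the speaker's, while the concrete norm choices and update rule below are illustrative assumptions, not the talk's actual algorithm. The l-infinity geometry yields sign-descent (Adam-like) updates, and the spectral geometry yields orthogonalized matrix updates.

```python
import numpy as np

def steepest_descent_step(W, G, lr=0.1, norm="l2"):
    """One step of a 'steepest descent under a chosen norm' template:
    move along the direction d maximizing <G, d> subject to ||d|| <= 1."""
    if norm == "l2":                       # Euclidean: normalized gradient step
        d = G / (np.linalg.norm(G) + 1e-12)
    elif norm == "linf":                   # l-infinity: sign descent (Adam-like)
        d = np.sign(G)
    elif norm == "spectral":               # spectral norm: orthogonalized update
        U, _, Vt = np.linalg.svd(G, full_matrices=False)
        d = U @ Vt
    else:
        raise ValueError(norm)
    return W - lr * d

# Toy objective f(W) = 0.5 * ||W||_F^2, whose gradient is simply W.
W = np.random.default_rng(1).normal(size=(4, 3))
for _ in range(50):
    W = steepest_descent_step(W, W, lr=0.1, norm="spectral")
print(np.linalg.norm(W))                   # shrinks to roughly the step size
```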
Urbashi Mitra
University of Southern California
Urbashi Mitra is the Gordon S. Marshall Chair in Engineering at the University of Southern California, with previous academic roles at Ohio State University and Bellcore. She holds B.S. and M.S. degrees from the University of California, Berkeley, and a Ph.D. from Princeton University. Dr. Mitra has made significant contributions to the IEEE, serving as the inaugural editor-in-chief of IEEE Transactions on Molecular, Biological and Multi-Scale Communications, as a Distinguished Lecturer for the IEEE Communications and Signal Processing Societies, and in leadership roles including chairing the ComSoc Communication Theory Technical Committee, the SPS Signal Processing for Communications and Networks Committee, and the Transactions on Wireless Communications Steering Committee. She is an IEEE Fellow and has received numerous prestigious awards, including the ComSoc Women in Communications Engineering Technical Achievement Award, U.S. Fulbright Scholar and UK Royal Academy of Engineering Distinguished Visiting Professorship distinctions, the USC Viterbi School of Engineering Senior Research Award, and the NSF CAREER Award.
Alexandre Gramfort
Meta
Alexandre Gramfort is a Senior Research Scientist at Meta Reality Labs in Paris, specializing in machine learning for building neuromotor interfaces using surface EMG signals. Previously, he was a Research Director at Inria, leading the MIND Team, and an Assistant Professor at Telecom Paris. His work spans machine learning, signal processing, and neuroscience applications. He is well known for his open-source contributions such as the scikit-learn software he co-created in 2010. He has received prestigious grants, including an ERC Starting Grant for SLAB in 2015 and an ANR Chaire on AI for BrAIN in 2019. He has also taught optimization, machine learning and neuroimaging courses at Institut Polytechnique de Paris and Université de Paris since 2015. His recent research focuses on generic neuromotor interfaces, domain adaptation, and electrophysiological data analysis.
Talk title: Training machines to decode electromyography signals for high-bandwidth human-computer interfaces that work across people
Abstract: Brain-computer interfaces (BCIs) have been imagined for decades as a solution to the interface problem, allowing input to computing devices at the speed of thought. However, high-bandwidth communication has only been demonstrated using invasive BCIs with interaction models designed for single individuals, an approach that cannot scale to the general public. In this talk, I will describe the recent development of a noninvasive neuromotor interface that allows for computer input using surface electromyography (sEMG). I will give examples where, by training machine learning models on data from thousands of participants, it is possible to develop generic sEMG neural network decoding models that work across many people without the need for per-person calibration, hence offering the first high-bandwidth neuromotor interface that directly leverages biosignals with performant out-of-the-box generalization across people.
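For readers curious what such a decoder might look like, below is a minimal, hypothetical sketch (not the architecture from the talk) of a 1D-convolutional classifier over multi-channel sEMG windows; the channel count, window length, and number of gesture classes are invented placeholders. The key recipe is pooling training windows from many participants into one dataset, so the model cannot rely on person-specific calibration.

```python
import torch
from torch import nn

# Hypothetical shapes: 16 sEMG channels, 400-sample windows, 10 gesture classes.
N_CHANNELS, WINDOW, N_CLASSES = 16, 400, 10

class EMGDecoder(nn.Module):
    """Minimal 1D-conv decoder: temporal convolutions over raw sEMG,
    averaged over time, then a linear classification head."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(N_CHANNELS, 64, kernel_size=9, stride=2), nn.ReLU(),
            nn.Conv1d(64, 128, kernel_size=9, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),            # average over the time axis
        )
        self.head = nn.Linear(128, N_CLASSES)

    def forward(self, x):                       # x: (batch, channels, time)
        return self.head(self.features(x).squeeze(-1))

# One training step on a fake batch standing in for pooled multi-user windows.
model = EMGDecoder()
x = torch.randn(8, N_CHANNELS, WINDOW)
loss = nn.functional.cross_entropy(model(x), torch.randint(0, N_CLASSES, (8,)))
loss.backward()
```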
David G. Stork (IAPR Invited Speaker)
Stanford University
David G. Stork, PhD, is an Adjunct Professor at Stanford University. He holds degrees in Physics from MIT and the University of Maryland and also studied Art History at Wellesley College. He has held faculty positions in Physics, Mathematics, Computer Science, Statistics, Electrical Engineering, Neuroscience, Psychology, Computational Mathematical Engineering, Symbolic Systems, and Art and Art History, variously at Wellesley and Swarthmore Colleges; Clark, Boston, and Stanford Universities; and the Technical University of Vienna. He is a Fellow of seven international societies and has published eight books, 220+ scholarly articles, and 64 US patents. His Pixels & paintings: Foundations of computer-assisted connoisseurship (Wiley) appeared this year, and he is completing Principled art authentication: A probabilistic foundation for representing and reasoning under uncertainty.
Talk title: When computers look at art: Recent triumphs and future opportunities for computer-assisted connoisseurship of fine art paintings and drawings
Abstract: Our cultural patrimony of fine art paintings and drawings comprises some of the most important, memorable, and consequential images ever created, and presents numerous problems in art history and in the interpretation of "authored," stylized images. While sophisticated imaging (by numerous methods) has long been a mainstay of museum curation and conservation, it is only in the past few years that true image analysis, powered by computer vision, machine learning, and artificial intelligence, has been applied to fine art images. Fine art paintings differ in numerous ways from the traditional photographs, videos, and medical images that have commanded the attention of most experts up to now: such paintings vary extensively in style, content, non-realistic conventions, and especially intended meaning. Rigorous computer methods have outperformed even seasoned connoisseurs on several tasks in the image understanding of art, and have provided new insights and settled deep disputes in art history. Additionally, the classes of problems in art analysis, particularly those centered on inferring meaning from images, are forcing computer experts to develop new algorithms and concepts in artificial intelligence. This talk, profusely illustrated with fine art images and computer analyses, argues for the new discipline of computer-assisted connoisseurship, a merger of humanist and scientific approaches to image understanding. Such work will continue to be embraced by art scholars and addresses new grand challenges in artificial intelligence.
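As one small, hypothetical illustration of the kind of tooling behind computer-assisted connoisseurship (not a method from the talk), the sketch below describes a painting by deep features from a network pretrained on photographs; attribution or style comparison can then be run in that feature space. The file path and choice of backbone are placeholders, and real systems for art analysis are considerably more specialized.

```python
import torch
from torchvision import models, transforms
from PIL import Image

# Standard ImageNet preprocessing for the pretrained backbone.
preprocess = transforms.Compose([
    transforms.Resize(256), transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Photo-pretrained ResNet-18 with the classifier removed: a 512-d descriptor.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()
backbone.eval()

with torch.no_grad():
    # "painting.jpg" is a placeholder for a digitized artwork.
    img = preprocess(Image.open("painting.jpg").convert("RGB")).unsqueeze(0)
    feat = backbone(img)                   # (1, 512) feature vector
print(feat.shape)
```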