CVPR 2026 · Full-Day Tutorial

Analytic Understanding of Diffusion Models

A deep dive into the mathematical foundations and analytic perspectives behind modern diffusion-based generative models.

June 3–4, 2026 · Denver Convention Center · 8:30 AM – 5:00 PM
New: We are releasing Analytic Diffusion Studio — a unified framework for training-free analytical diffusion models.
View on GitHub

Overview


Denoising diffusion models have achieved state-of-the-art results for generative modeling across images, video, and audio. Yet analysis of the training objective reveals a paradox: the objective admits a unique closed-form solution that is a function of the training data alone—and this solution can only reproduce training examples, exhibiting perfect memorization.
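To make the paradox concrete, here is a minimal NumPy sketch of that closed-form minimizer for an empirical training distribution: the optimal denoiser is the posterior mean, a softmax-weighted average of the training points. (The function name and toy data are ours, chosen for illustration.)

```python
import numpy as np

def optimal_denoiser(x, data, sigma):
    """Closed-form minimizer of the denoising objective for an
    empirical data distribution: the posterior mean E[x0 | x],
    a Gaussian-weighted average of the training points."""
    # Squared distance from the noisy input to every training point.
    d2 = np.sum((data - x) ** 2, axis=1)
    # Gaussian posterior weights, computed in log-space for stability.
    logw = -d2 / (2.0 * sigma ** 2)
    w = np.exp(logw - logw.max())
    w /= w.sum()
    return w @ data

# Toy training set of two 2-D points.
data = np.array([[0.0, 0.0], [10.0, 10.0]])

# At small noise levels the weights collapse onto the nearest training
# example, so the denoiser reproduces the training set exactly.
out = optimal_denoiser(np.array([0.3, -0.2]), data, sigma=0.1)
```

Running the sampler of a diffusion model with this denoiser can therefore only land on training examples, which is the memorization behavior described above.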

How, then, do deep diffusion models generalize? This tutorial explores the emerging family of analytical diffusion models that shed light on this question. Starting from the fundamentals, we build toward the current understanding of generalization mechanisms—score smoothing, inductive biases of neural architectures, training dynamics, and data geometry—all through the lens of optimal denoising.

The tutorial combines lectures with hands-on Jupyter notebook sessions so that attendees can run experiments, probe the theory, and build intuition firsthand.

Full day (8:30 AM – 5:00 PM)
5 speakers
Hands-on notebook sessions
For researchers with ML background

Speakers


Artem Lukoianov
MIT CSAIL
PhD student, author of “Locality in Image Diffusion Models Emerges from Data Statistics” (Spotlight, NeurIPS 2025).
Chenyang Yuan
Toyota Research Institute
Research Scientist and creator of smalldiffusion, an open-source library for controlled diffusion experiments.
Christopher Scarvelis
MIT CSAIL, Rutgers University
Incoming Assistant Professor at Rutgers University and author of “Closed-Form Diffusion Models” (TMLR, 2025).
Mason Kamb
Stanford University
PhD student and first author of “An Analytic Theory of Creativity in Convolutional Diffusion Models” (ICML 2025, Oral).
Binxu Wang
Harvard Kempner Institute
Research Fellow studying the intersection of generative models and visual neuroscience, combining theory, interpretability, and optimization to understand how these models function.

Schedule


The tutorial runs for a full day and combines lectures with interactive hands-on sessions. The schedule below is preliminary and subject to minor adjustments.

Morning Session — 8:30 AM – 12:00 PM

8:30 – 8:45
Welcome and Introduction
Tutorial overview, diffusion crash course, and notation.
All speakers
8:45 – 9:30
Hands-on: Paradoxes of Diffusion Models
Interactive experiments with closed-form score denoisers, optimal sampling, memorization, and training-set stability.
Chenyang Yuan
9:30 – 9:45 Break
9:45 – 10:45
Fundamentals of Diffusion Models
Forward and reverse processes, score matching, denoising objectives. Our analytical lens: studying the denoiser.
Chenyang Yuan
10:45 – 11:00 Break
11:00 – 12:00
Score Smoothing and Effective Linear Structures
How underfitting leads to score smoothing. Connections to linear models and closed-form analysis.
Christopher Scarvelis
12:00 – 1:30 Lunch

Afternoon Session — 1:30 PM – 5:00 PM

1:30 – 2:30
Wiener Filters and Analytical Denoisers + Hands-on
Data covariance, Wiener filters, locality, and PCA-based denoisers. Hands-on experimentation with the Analytic Diffusion Studio.
Artem Lukoianov
2:30 – 3:00
Guidance and Analytical Models
Convolutional inductive biases, equivariance, and how architecture enables creativity beyond the training set.
Mason Kamb
3:00 – 3:15 Break
3:15 – 4:15
Invited Talk
Speaker to be announced.
4:15 – 4:30 Break
4:30 – 5:00
Open Questions and Discussion
Panel discussion on open problems, future directions, and audience Q&A.
All speakers

Analytic Diffusion Studio


A Unified Framework for Training-Free Diffusion Models

Analytic Diffusion Studio is a modular codebase for training-free analytical diffusion models. Use it to reproduce the methods discussed in this tutorial, run your own experiments, or build new analytical denoisers—no training required, just uv run.

$ git clone https://github.com/analytic-diffusion/analytic-diffusion-studio.git
$ cd analytic-diffusion-studio
$ uv run generate.py --config configs/pca_locality/celeba_hq.yaml

Additional Resources