What Can('t) Transformers Do?

Workshop at NeurIPS 2025

Overview

Update 8/20: Submission deadline extended to August 29.

With most advances in large foundation models (LFMs) being empirical, our theoretical understanding of what transformers can compute, express, and learn still lags behind. This workshop will convene theorists and empiricists to chart a rigorous agenda for the next generation of LFMs, asking: what can and can't transformer-based LLMs do?

We welcome both formal analyses and empirically grounded studies that shed light on theoretical questions, aiming to close the gap between proofs and practice while fostering new, interdisciplinary collaborations.

Questions? You can reach us via transformers-workshop-2025@googlegroups.com.

Call for Papers

We invite contributions of two kinds: (a) new formal results (proofs, impossibility theorems, or constructive separations) together with an account of why they matter in practice, or (b) rigorous, carefully controlled experiments whose primary aim is to test, falsify, or refine a theoretical claim about transformers and language models.

Deadlines

Keynote Speakers

Surbhi Goel

University of Pennsylvania

Talk: TBD

Jon Kleinberg

Cornell University

Talk: TBD

William Merrill

NYU

Talk: TBD

Organizers

Tobias Schnabel

Microsoft Research

Kiran Tomlinson

Microsoft Research

Lena Strobl

Umeå University

Michael Hahn

Saarland University

Tentative Timetable

Time            Activity                                      Who
09:30 - 09:35   Welcome and Opening Remarks                   Organizers
09:35 - 10:10   Keynote Talk A                                TBD
10:10 - 10:45   Keynote Talk B                                TBD
10:45 - 10:55   Coffee Break                                  All
10:55 - 11:30   Lightning Talks (3 × 10 mins)                 Various Speakers
11:30 - 12:30   Poster Session A                              All
12:30 - 14:00   Lunch Break                                   All
14:00 - 14:35   Keynote Talk C                                TBD
14:35 - 14:45   Breakout Discussion Setup                     Organizers
14:45 - 15:30   Breakout Discussions                          All
15:30 - 15:40   Coffee Break                                  All
15:40 - 16:00   Thoughts from Discussions + Closing Remarks   All
16:00 - 17:00   Poster Session B                              All