"When and why is a simpler model better?"
This event is part of the Biophysics/Condensed Matter Seminar Series.
Science is filled with toy models: abstractions of complicated systems that ignore microscopic details even when they are known. For a special class of models in physics, the renormalization group rigorously justifies the use of effective theories containing just a small number of relevant parameters. This philosophy seems to apply more broadly, even where the renormalization group cannot be applied. But why? In this talk I will discuss an information-theoretic approach to answering, or at least quantifying, this question. I will first review the observation that typical models are sloppy, meaning their parameters form a hierarchy of importance (1). I will argue that sloppiness is both necessary and sufficient for a microscopic system to be amenable to description by a simpler effective theory. I will then show how renormalizable models become sloppy as their data are coarse-grained (2). Finally, I will discuss our recent efforts to use the structure of these models to choose simpler effective theories automatically (3).
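As a concrete illustration of sloppiness (not taken from the talk itself), one can compute the Fisher information matrix of a toy sum-of-exponentials model and observe that its eigenvalues span many orders of magnitude; the model, decay rates, and noise assumptions below are all illustrative choices:

```python
import numpy as np

# Toy "sloppy" model: y(t) = exp(-k1*t) + exp(-k2*t) with nearly
# degenerate decay rates. Sloppiness appears as Fisher information
# eigenvalues spread over many orders of magnitude.
def model(theta, t):
    return np.exp(-theta[0] * t) + np.exp(-theta[1] * t)

def jacobian(theta, t, eps=1e-6):
    # Central finite-difference sensitivities d y / d theta_i
    J = np.empty((t.size, theta.size))
    for i in range(theta.size):
        dp = np.zeros_like(theta)
        dp[i] = eps
        J[:, i] = (model(theta + dp, t) - model(theta - dp, t)) / (2 * eps)
    return J

t = np.linspace(0.1, 5.0, 50)
theta = np.array([1.0, 1.2])          # illustrative, nearly degenerate rates
J = jacobian(theta, t)
fim = J.T @ J                          # Fisher information for unit Gaussian noise
eigs = np.sort(np.linalg.eigvalsh(fim))[::-1]
print(eigs, eigs[0] / eigs[-1])        # large ratio: one stiff, one sloppy direction
```

The stiff eigendirection (large eigenvalue) is well constrained by data, while the sloppy direction is nearly invisible, which is the hierarchy of parameter importance the abstract refers to.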
(1) Transtrum, MK, et al. Perspective: Sloppiness and emergent theories in physics, biology, and beyond. The Journal of Chemical Physics 143, no. 1 (2015).
(2) Machta, BB, Chachra, R, Transtrum, MK, and Sethna, JP. Parameter space compression underlies emergent theories and predictive models. Science 342, no. 6158 (2013).
(3) Mattingly, HH, Transtrum, MK, Abbott, MC, and Machta, BB. Rational ignorance: simpler models learn more from finite data. (2017) arxiv.org/abs/1705.01166