SOLAR Lab Seminar: Mark Sellke, On Learning-Curve Monotonicity for Maximum Likelihood Estimators
Speaker
Mark Sellke (Harvard/OpenAI)
Description
This talk is based on the paper On Learning-Curve Monotonicity for Maximum Likelihood Estimators, in which all results were derived by AI models (variants of GPT-5.2 Pro), with humans providing only prompts and verification. This strikingly illustrates the seminar’s theme: it demonstrates AI’s capacity to automate the demanding work of theoretical analysis and proof generation, challenging traditional views of human-driven discovery and scientific originality.
Abstract
The property of learning-curve monotonicity, highlighted in a recent series of works by Loog, Mey, and Viering, describes algorithms whose average performance only improves with more data, for any underlying data distribution within a given family.
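Concretely (in our own notation, which the abstract does not fix): writing $A$ for the learning algorithm, $D$ for the data distribution, and $\mathrm{err}_D(\cdot)$ for expected loss under $D$, the learning curve is

$$ L(n) \;=\; \mathbb{E}_{S \sim D^n}\bigl[\mathrm{err}_D(A(S))\bigr], $$

and learning-curve monotonicity asks that $L(n+1) \le L(n)$ for every $n$ and every $D$ in the given family.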
We establish the first nontrivial monotonicity guarantees for the maximum likelihood estimator in a variety of well-specified parametric settings. For sequential prediction with log loss, we show monotonicity (in fact complete monotonicity) of the forward KL divergence for Gaussian vectors with unknown covariance and either known or unknown mean, as well as for Gamma variables with unknown scale parameter. The Gaussian setting was explicitly highlighted as open in the aforementioned works, even in dimension 1.
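As a concrete numerical illustration of the simplest instance of this Gaussian setting, here is a sketch of ours (not code from the paper; the helper names expected_kl_exact and expected_kl_mc are hypothetical): for one-dimensional data with known mean 0 and unknown variance, the expected forward KL of the MLE has a classical closed form, tabulated below alongside a Monte Carlo check.

```python
# Hedged numerical sketch (ours, not from the paper): the expected forward-KL
# learning curve of the Gaussian MLE with known mean and unknown variance.
#
# Setting: x_1..x_n ~ N(0, sigma^2); the MLE is sigma_hat^2 = mean(x_i^2).
# Forward KL: KL(N(0, sigma^2) || N(0, sigma_hat^2))
#           = 0.5 * (sigma^2/sigma_hat^2 + ln(sigma_hat^2/sigma^2) - 1).
# Since sigma_hat^2 / sigma^2 ~ chi^2_n / n, the expectation has the closed form
#   0.5 * (n/(n-2) + digamma(n/2) + ln 2 - ln n - 1)   for n >= 3
# (it is infinite for n <= 2, because E[1/chi^2_n] diverges there), and it is
# scale-invariant: it does not depend on sigma^2.

import numpy as np
from scipy.special import digamma

rng = np.random.default_rng(0)

def expected_kl_exact(n: int) -> float:
    """Closed-form E[KL] for the known-mean Gaussian variance MLE, n >= 3."""
    return 0.5 * (n / (n - 2) + digamma(n / 2) + np.log(2) - np.log(n) - 1)

def expected_kl_mc(n: int, reps: int = 200_000) -> float:
    """Monte Carlo estimate (noisy for small n: the summand is heavy-tailed)."""
    x = rng.standard_normal((reps, n))
    var_hat = np.mean(x**2, axis=1)   # MLE of the variance (true value is 1)
    kl = 0.5 * (1.0 / var_hat + np.log(var_hat) - 1.0)
    return kl.mean()

for n in range(3, 13):
    print(f"n={n:2d}  exact={expected_kl_exact(n):.4f}  mc={expected_kl_mc(n):.4f}")
# The exact column is non-increasing in n, consistent with the monotonicity
# the abstract describes for this Gaussian setting.
```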
Finally, we observe that for the reverse KL divergence, a folklore trick yields monotonicity for very general exponential families. All results in this paper were derived by variants of GPT-5.2 Pro. Humans did not provide any proof strategies or intermediate arguments; they only prompted the model to continue developing additional results, and verified and transcribed its proofs.
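For orientation (our gloss; the abstract does not spell out its conventions): with true parameter $\theta$ and MLE $\hat\theta_n$ after $n$ observations, the two directions are

$$ \text{forward: } \mathrm{KL}\bigl(P_{\theta} \,\|\, P_{\hat\theta_n}\bigr), \qquad \text{reverse: } \mathrm{KL}\bigl(P_{\hat\theta_n} \,\|\, P_{\theta}\bigr), $$

the forward direction being the one whose expectation matches the expected excess log loss in sequential prediction.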