UMass AI&Sec Fall'25 Seminar: Benjamin Laufer, AI Ecosystems: Structure, Strategy, Risk and Regulation

Speaker
Bio
Benjamin Laufer is a PhD student in the School of Computing and Information Sciences at Cornell Tech, where he is advised by Helen Nissenbaum and Jon Kleinberg and affiliated with the AI, Policy and Practice Group and the Digital Life Initiative. He studies data-driven algorithmic systems and their implications for the public interest, using tools and methods spanning statistics, game theory, network science, and ethics. Prior to joining Cornell, Ben worked as a data scientist at Lime, where he applied machine learning to urban mobility decisions. He graduated from Princeton University with a B.S.E. in Operations Research and Financial Engineering and minors in Urban Studies and Environmental Studies. Ben's research is supported by a LinkedIn Fellowship. He has also spent time at Microsoft Research with the Fairness, Accountability, Transparency, and Ethics (FATE) group, and he was named a Rising Star in Management Science and Engineering by Stanford.
Abstract
The development of artificial intelligence is increasingly shaped by interactions between general-purpose model creators, downstream fine-tuners, regulators, and open-source communities. In this talk, I present a line of recent work that develops formal and empirical approaches to understanding these dynamics. First, I introduce a game-theoretic model of adaptation and regulation. Bargaining shapes the division of surplus between upstream creators and downstream adaptors, and regulatory choices around safety investments influence the equilibrium strategies. A key insight is that weak regulation targeted only at downstream actors can backfire and reduce overall safety, whereas stronger, well-placed standards can align incentives and improve both safety and performance.
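To make the strategic structure concrete, here is a minimal toy sketch in Python of the kind of game described above: an upstream creator chooses a safety investment, a downstream adaptor chooses fine-tuning effort, bargaining splits the resulting surplus, and a regulator fines only the downstream actor. All functional forms and parameter values here are invented for illustration and are not the talk's actual model; the sketch only shows how a downstream-only penalty can feed back into upstream safety incentives.

```python
import numpy as np

# Toy two-player game (illustrative only; not the talk's actual model).
# Upstream generalist picks safety investment s in [0, 1];
# downstream adaptor picks fine-tuning effort e in [0, 1].
# Joint value V = 4*e*(0.5 + 0.5*s) is split 50/50 by bargaining.
# A regulator fines ONLY the downstream actor: tau * expected_harm,
# where expected_harm = (1 - s) * e.

ALPHA = 0.5  # upstream's bargaining share of the surplus
grid = np.linspace(0.0, 1.0, 1001)

def u_upstream(s, e):
    value = 4.0 * e * (0.5 + 0.5 * s)
    return ALPHA * value - 1.5 * s**2          # quadratic safety cost

def u_downstream(s, e, tau):
    value = 4.0 * e * (0.5 + 0.5 * s)
    harm = (1.0 - s) * e
    return (1.0 - ALPHA) * value - e**2 - tau * harm

def equilibrium(tau, iters=200):
    """Iterated best response on a grid; converges here because
    both best-response maps are contractions."""
    s, e = 0.5, 0.5
    for _ in range(iters):
        s = grid[np.argmax(u_upstream(grid, e))]
        e = grid[np.argmax(u_downstream(s, grid, tau))]
    return s, e

for tau in (0.0, 0.5):
    s, e = equilibrium(tau)
    print(f"tau={tau:.1f}: safety s={s:.2f}, effort e={e:.2f}, "
          f"harm={(1 - s) * e:.3f}")
# In this toy parameterization, taxing only the downstream actor
# (tau=0.5) shrinks downstream activity, which in turn lowers the
# upstream player's incentive to invest in safety (s: 0.20 -> ~0.11).
```

In this particular parameterization the downstream-only fine reduces equilibrium harm but also depresses both effort and upstream safety investment; other parameter choices behave differently, which is exactly the kind of sensitivity the formal model is built to analyze.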
Second, I turn to an empirical reconstruction of the machine learning ecosystem using 1.86 million models on Hugging Face. By mapping “family trees,” the sprawling lineages of fine-tuned models, we uncover evolutionary patterns with implications for safety, security, and governance. Ensuring safe and secure AI requires taking an ecosystem-level view and considering the many actors and incentives involved in AI development.
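As a flavor of the empirical side, here is a minimal sketch of lineage reconstruction. Many Hugging Face model cards declare a base_model field identifying the model a repository was fine-tuned from; chaining those pointers yields the “family trees.” The repository names below are hypothetical, and the actual study's data collection and cleaning are far more involved than this sketch.

```python
import networkx as nx

# Hypothetical model -> base_model metadata, mimicking the `base_model`
# field that many Hugging Face model cards declare. (Names invented.)
base_model_of = {
    "org/llama-base":            None,                # a root model
    "alice/llama-chat":          "org/llama-base",
    "bob/llama-chat-uncensored": "alice/llama-chat",
    "carol/llama-medical":       "org/llama-base",
    "dave/llama-medical-qa":     "carol/llama-medical",
}

# Build the "family tree": an edge from each base model to its fine-tune.
tree = nx.DiGraph()
for model, base in base_model_of.items():
    tree.add_node(model)
    if base is not None:
        tree.add_edge(base, model)

# Roots are models that are not themselves fine-tuned from anything.
roots = [n for n in tree if tree.in_degree(n) == 0]
for root in roots:
    depths = nx.shortest_path_length(tree, source=root)
    print(f"root: {root}")
    for model, depth in sorted(depths.items(), key=lambda kv: kv[1]):
        print(f"  {'  ' * depth}{model}  (generation {depth})")
```

On the real ecosystem, the same directed-graph representation supports the kinds of questions the talk raises: how deep lineages run, where fine-tunes concentrate, and how properties like safety behavior propagate from a base model to its descendants.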