Explainer
Why AI Researchers Are Worried About ‘Model Collapse’
In certain corners of the tech industry, it’s an article of faith that training artificial intelligence systems on larger amounts of online data will allow these tools to get better over time — possibly to the point of outperforming humans on certain tasks.
But a new research paper casts doubt on that approach and raises alarms about what may be a fatal flaw in how AI systems are developed. In the paper, published in Nature in July, researchers find that when AI models are trained on data that includes AI-generated content, as is likely to become increasingly common, their performance eventually deteriorates, a phenomenon dubbed "model collapse."
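The intuition behind model collapse can be seen in a toy simulation. This is a minimal sketch, not the paper's actual experiments (which involved language models): here each "model" simply estimates the mean and spread of a normal distribution, and every generation is trained only on samples drawn from the previous generation's estimate. The function name `fit_gaussian` and the sample and generation counts are illustrative choices, not anything from the paper.

```python
import random
import statistics

def fit_gaussian(samples):
    # "Training" here is just estimating the mean and standard deviation.
    return statistics.mean(samples), statistics.pstdev(samples)

random.seed(0)
N_SAMPLES = 25        # small training sets exaggerate the effect
N_GENERATIONS = 2000

# Generation 0 trains on real data: a standard normal distribution.
data = [random.gauss(0.0, 1.0) for _ in range(N_SAMPLES)]
mu, sigma = fit_gaussian(data)

for _ in range(N_GENERATIONS):
    # Every later generation trains only on the previous model's output.
    data = [random.gauss(mu, sigma) for _ in range(N_SAMPLES)]
    mu, sigma = fit_gaussian(data)

print(f"estimated spread after {N_GENERATIONS} generations: {sigma:.6f}")
# The estimated spread shrinks toward zero: each generation loses a little
# of the original distribution's variance, and the losses compound, so the
# rare "tail" events of the real data disappear from later generations.
```

The point of the sketch is that the degradation is statistical, not a bug: each generation's finite sample slightly underestimates the true variability, and feeding model output back in as training data compounds that error until the original distribution's diversity is gone.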