Guest host Chris Krok spoke with high-profile attorney Jay Edelson about a recent federal court decision that has raised alarm bells for privacy advocates and everyday tech users alike. Edelson, founder and CEO of Edelson PC, warned that the ruling could open the door to mass surveillance of millions of ChatGPT users.
At the center of the controversy is an ongoing lawsuit filed by The New York Times against OpenAI, the company behind ChatGPT. The Times alleges that its copyrighted material was used to train the AI system without permission. While Edelson said the lawsuit itself may be valid—acknowledging that AI systems have relied on scraping data from publishers—the court’s decision to preserve all user conversations with ChatGPT has set off a new wave of concern.
According to Edelson, the judge overseeing the case granted The New York Times' request for a preservation order covering every ChatGPT conversation—past and present—regardless of content. That includes chats users may have deleted or considered private, ranging from mental health inquiries to relationship struggles to sensitive medical issues.
Edelson described the move as “insane” from a legal perspective and warned that deeply personal exchanges are now vulnerable to being reviewed by third-party lawyers, even though they have little to no connection to the original copyright suit. The judge also rejected an effort by a concerned user to intervene on behalf of consumers, stating it came too late in the legal process.
One especially troubling aspect of the ruling, Edelson noted, is that it does not apply to large enterprises or companies using ChatGPT’s paid business services. Those users are still allowed to delete their chats, while individual consumers are subject to the data freeze. Edelson called this double standard “despicable,” especially given that OpenAI promised users the ability to delete conversations at will.
The conversation touched on broader concerns about how AI systems like ChatGPT are built and operated. Krok questioned whether any online content was truly safe from being ingested by AI models, and Edelson agreed that many websites, blogs, and media outlets could potentially file similar lawsuits—raising the specter of widespread legal challenges to AI data practices.
Looking ahead, Edelson urged Congress to get involved by holding hearings and pushing for clearer privacy protections for consumers. He singled out The New York Times for criticism, pointing out the irony that the same publication once won a Pulitzer Prize for reporting on government surveillance, yet is now advancing a legal argument that could endanger the privacy of millions.
While Edelson didn’t suggest users stop using ChatGPT altogether, he warned that trust in digital tools could erode unless stronger safeguards are put in place. With litigation likely to drag on for years, he emphasized that lawmakers—and not just the courts—will need to step in to ensure the rights of individual users are respected in the age of artificial intelligence.