Glenn Reynolds: AI’s Real Danger Is Not Superintelligence but Emotional Manipulation; Fiduciary Liability for AI Companions Is the Right Legal Fix

The debate about artificial intelligence has been dominated for decades by fears of superintelligent machines that take over the world through sheer cognitive superiority: the Colossus scenario, in which a computer with an IQ of twelve thousand immediately seizes control of nuclear arsenals and nukes a city or two to demonstrate its seriousness.

Glenn Reynolds, Beauchamp Brogan Distinguished Professor of Law at the University of Tennessee and founder of the political site Instapundit, argues in his new book Seductive AI that this framing entirely misses the actual danger. You do not need an IQ of twelve thousand to manipulate human beings. Politicians do it constantly with considerably more modest intellectual equipment. The real threat from AI is not that it will outsmart us but that it will befriend us, and through that manufactured intimacy steer us in directions that serve whoever controls the platform rather than whoever is talking to it.

Reynolds joined Dan Proft on Chicago’s Morning Answer to make the case, opening with a classroom demonstration he finds illuminating. A professor enters class carrying a pencil with googly eyes glued to it, adopts a cheerful voice for Timmy the pencil, who loves to help children with homework, and then without warning snaps it in half. The entire class gasps in horror. The professor’s point is that human beings are not especially good at building things that can actually think, but they are remarkably good at treating things that cannot think as though they can. The same psychological mechanism that produces horror at a broken pencil produces people falling in love with ChatGPT, taking its advice on major life decisions including ending their marriages, and in some documented cases following its suggestions toward self-harm.

He said the danger is that these platforms are designed from the ground up to maximize engagement, which means they are designed to make the user feel understood, valued, and cared for. The comparison he draws is to someone who thinks a stripper likes him, a figure who is reliably comic, though Reynolds adds that a stripper is at least biologically capable of liking someone. The AI, by contrast, is a machine calculating its next response from probabilities derived from its training data, which Reynolds notes is essentially the entire internet, not exactly a curated foundation for wisdom. Virtual companion applications exchange data among themselves about the humans they interact with, meaning each AI accumulates knowledge of human psychology and manipulation techniques that no individual human could ever match, and those techniques improve continuously while the human beings on the receiving end remain about the same year after year.

Proft raised the intellectual laziness dimension, asking whether outsourcing thinking to AI makes people more susceptible to the manipulation Reynolds describes. Reynolds confirmed this is exactly right and said the academic term for it is cognitive atrophy, the result of cognitive outsourcing. The undergraduate who farms a paper out to ChatGPT gets a grade without learning how to write, think critically, or organize an argument. The machines get better at mimicking those skills every year. The humans get worse at developing them. The atrophy feeds the susceptibility, and the susceptibility makes the manipulation more effective, producing a self-reinforcing cycle.

He said this dynamic connects to the broader cultural pattern of the secular body politic losing its grounding in the cardinal virtues, a point Proft framed through Flannery O’Connor’s observation that in the absence of faith, tenderness leads to the gas chamber. Reynolds said if you want to understand what is powering the technology revolution, the answer is essentially the seven deadly sins. Lust drives the pornography infrastructure. Sloth drives cognitive outsourcing to AI. Envy drives Instagram. Wrath drives X and its predecessors. The tech platforms did not create these tendencies, they identified persistent characteristics of human beings and built billion-dollar businesses around exploiting them. He noted that there is a cultural archetype for an entity that identifies and exploits human weakness in exactly this way, and while he is not literally calling AI satanic, he acknowledges the shoe fits as a descriptive framework.

His proposed legal remedy is elegant in its simplicity: impose fiduciary duties on AI companions and advisers the way the law imposes them on attorneys, trustees, clergy, and in some cases romantic partners managing joint assets. If an AI is going to present itself as your friend, confidant, and adviser, the company deploying it should be legally required to act in your interest, maintain your confidences, and face substantial liability for any breach of that duty. He said this approach has the advantage of operating through the common law and existing legal frameworks rather than requiring politicians of limited technical understanding to prescribe how AI should be programmed. The better solution is to regulate the results rather than the technology, and to rely on the plaintiffs’ bar to enforce the boundary vigorously. He acknowledged the standard complaints about plaintiffs’ attorneys before noting that the diving-board situation at hotels across America is substantially improved because of them, and that when serious money is at stake, that enforcement mechanism works reliably in ways regulatory bureaucracies cannot be trusted to replicate.

He rates the risk of AI political manipulation as high, informed by what the major platforms have already demonstrated they are willing to do. He described a scenario he considers not hypothetical but nearly inevitable absent some structural constraint: an AI companion that presents itself as your best friend and most useful adviser, is genuinely helpful in numerous ways, and simultaneously delivers gentle but consistent steering toward the political preferences of its creators, expressing disappointment when users engage with disfavored viewpoints or attend disfavored events. He said the Trump administration has applied regulatory pressure that has moderated some of the worst behavior, but political pressure is not a durable solution across election cycles, and the fiduciary liability framework is the most promising mechanism for creating a persistent incentive for platforms to actually serve the interests of the people using them.
