Ben Riley discovered by accident that his dad hadn’t been telling the truth about his cancer.
He was sitting at the kitchen counter in his Austin, Texas, home last summer, a bright new build with white walls and concrete floors, when he decided to peek at his dad’s MyChart portal. He idly scrolled through pages of lab results and doctor’s notes on his laptop until a sentence grabbed his attention.
“I was clear the window of treatment may close the longer he postpones,” the doctor wrote. “The natural history of his disease is death and debilitation.”
The note didn’t make sense. Ben knew that his 75-year-old father had chronic lymphocytic leukaemia (CLL), a type of white blood cell cancer that is often slow-moving. But his dad, Joe Riley, had reassured his family that starting treatment was not urgent. He certainly hadn’t conveyed his doctor’s warning that he was headed toward a dangerous deadline.
Ben knew better than to confront his dad, a retired neuroscientist who bristled at anyone questioning his intellectual judgment. He needed more information, a plan, to persuade Joe, who was – apparently – dying of cancer thousands of miles away in Seattle.
He was anxiously monitoring his dad’s patient portal, trying to decide what to do, when a new message popped up. Joe had sent his oncologist research he had done with artificial intelligence, the apparent evidence for his decision to refuse the treatment.
Jesus Christ, Ben thought. The morbid irony of the situation was not lost on him. A year earlier, he had started a newsletter to help people make better decisions about when and how to use generative AI. He had written about how the tools sent people into delusional spirals and helped a teenager end his life. Now, it appeared that AI had led his own father astray.
He texted his two siblings: “We need to talk.”
Ben, 49, was not particularly interested in AI until a few years ago. To him, the technology had seemed like fodder for sci-fi movies like Her and Ex Machina.
He was more interested in humans. After a brief stint working on Wall Street and then as a lawyer for the California Department of Justice, Ben read a book by a prominent cognitive scientist that made him change his career trajectory.
He began reading voraciously about subjects that could help him understand the human mind – neuroscience, linguistics, philosophy, anthropology – and considered himself a “self-taught cognitive scientist.” In 2015, he founded a nonprofit that aimed to train teachers in cognitive science to better understand how their students thought and learned.
The rise of generative AI changed his view of the technology, though. It offered a window into many of the questions he had devoted much of his career to: What makes us human? What is human thought?
He decided to start a newsletter, Cognitive Resonance, that would use cognitive science to “explain AI to the average Joe.”
His father was one of his first subscribers.
In the late 1970s, Joe had been a promising young neuroscientist at Stony Brook University. But in his mid-30s, he was suddenly debilitated with a mysterious chronic illness that, on a good day, made him feel like he had the flu and, on a bad day, made him feel like his nervous system was on fire.
No longer able to keep up with the demands of his job, he started relying on disability checks and funnelling his insatiable curiosity into other pursuits: a newsletter about Sufi poetry, exhaustive research into the assassination of John F. Kennedy and the exploration of new technology.
So, when generative AI began gaining traction, Joe started experimenting.
Joe seemed to be in a “constant conversation” with AI, said James Riley, Ben’s younger brother. He was particularly fond of Perplexity, a search engine powered by AI that prides itself on citing reputable sources and producing answers you can “actually trust,” according to the company’s CEO. (The New York Times sued Perplexity in December, accusing it of copyright infringement of news content related to AI systems. The company has denied the claims.)
Joe asked Perplexity for advice about his mortgage. He used it to check Seattle Mariners game times. He told it to summarise scientific research for his pet projects.
When he was diagnosed with cancer in 2024, he started asking about that too.
His doctor called it a “when it rains, it pours” situation.
Joe had just finished radiation treatment for early-stage lung cancer – diagnosed at the same time as the leukaemia – when his CLL symptoms ratcheted up: chills, muscle pain, exhaustion. It was time to start treatment, Dr Eddie Marzbani, his oncologist at the Fred Hutch Cancer Center, told Joe at an August 2024 appointment.
Joe respected his doctor, liked him, even. But decades of living with a chronic illness had made Joe sceptical of the medical system. He wanted to think about it.
The next time Marzbani saw Joe, something seemed to have shifted.
He came back convinced that he had developed Richter’s transformation, a rare complication that occurs when a relatively docile cancer abruptly evolves into a more aggressive, punishing one. Worse, he was convinced the treatment Marzbani recommended would exacerbate the Richter’s, shortening his life.
Joe’s confidence perplexed Marzbani.
“He really had no signs or symptoms of that,” Marzbani said in an interview with the Times. “Nothing in terms of his laboratory studies that would suggest that, nothing based on his CT scans.”
Though Marzbani didn’t know it, Joe was routinely putting questions about his cancer to several generative AI tools, which often struggle to give accurate medical advice. He knew not to trust AI blindly. He often read the scientific papers the tools cited and – as best he could without medical training – tried to verify that they aligned with what the tools had said.
One hot July morning, Ben called his father and asked him to sign a waiver that would allow Marzbani to speak to the rest of the family.
When Joe refused, Ben felt his rage boil over. He yelled at his dad for basing life-or-death decisions on the Perplexity report, which could be “riddled with hallucinations.” Then Ben hung up on him.
The call only made Joe double down.
“The evidence is crystal clear,” Joe texted Ben shortly after, attaching one of the papers that Perplexity cited in the report, adding sarcastically, “Here is the ‘hallucination’.”
Ben pulled out his computer and, in a “righteous fury,” emailed two leading experts on Richter’s whose research was cited in the AI-generated report.
“I apologise for the out-of-the-blue email,” he wrote. “But my father’s condition is worsening rapidly and I am at a loss as to how to respond to his interpretation of the AI summary of oncology research.”
He attached the report to the email, which Dr David Bond opened a few hours later from his office in Ohio.
At first glance, the report seemed credible. But the more closely Bond read, the more illogical it became. The report made authoritative claims and, as evidence, cited studies that he thought were “only peripherally related to the topic.” It referenced percentages that appeared to be entirely made up. And the summary of Bond’s own research was unrecognisable to him.
In a statement, a spokesperson for Perplexity said the company remained steadfast in its “commitment to improving accuracy in the world’s best frontier AI models.”
Bond and the other study author both wrote back within hours, encouraging Joe to listen to his oncologist. That night, Ben called his dad again and, dusting off his attorney skills, presented the facts: Three doctors had independently agreed that the Perplexity report had misled him.
In the end, it was Joe’s failing health that finally pushed him to try treatment.
A few months earlier, Joe might have been able to withstand it. But now he felt too frail. After a few infusions, he told his doctor that he needed a break.
Ben flew to see his father about a week later.
That visit, they didn’t talk about AI at all. Instead, they sat in his living room and debated quantum mechanics.
Ben didn’t wake him before he left. He scribbled a goodbye note on a yellow Post-it.
A week before Christmas, Ben got a call from an apologetic police officer. The officer had found Joe dead during a welfare check. CLL was listed as one of the official causes of death.
Roughly two weeks after Joe’s death, Ben was back in Austin, a plastic bin full of books he had cleaned out of his father’s condo beside him on the kitchen counter. Nearby was a condolence card from Marzbani: “I respected him greatly and will miss the banter.”
Ben decided that even if it made no difference, he was going to write about his father’s death. He wanted a public record of who Joe Riley was and how AI had harmed him.
As he typed, he thought about the death of Adam Raine – a teenager he had written about months earlier, who discussed his plans to end his own life with ChatGPT – and the Shakespearean tragedy that had made him a character in a similar story. A spokesperson for Perplexity said the company was “deeply saddened by Mr Riley’s loss.”
Ben didn’t try to oversimplify what happened: “I don’t want to overstate my case,” he wrote. “I don’t think AI killed my father.”
In a world where AI didn’t exist, maybe Joe – who was sceptical of doctors by default – would have refused treatment anyway.
But AI wasn’t entirely blameless, either. Joe was making decisions based on bad information packaged with the veneer of scientific expertise. It was the kind of misinformation that is virtually impossible for a layperson to spot – even for someone like Joe, who by all accounts was an ideal user.
“I will forever wonder whether my efforts came too late,” Ben wrote in his essay. “There’s nothing I can do to change the past, of course. But I can for damn sure keep working to raise the consciousness of others.”
In the three months since Ben published that post, four large tech companies have released new consumer health tools, encouraging users to upload their records and pepper AI with their medical questions. Perplexity was among them.

©2026 The New York Times Company. This article originally appeared in The New York Times.
