Trump's AI push raises red flags for poorest Americans


The Trump administration's use of AI sparks fears that more automated systems will weigh in on public benefit decisions. — Reuters

WASHINGTON: US President Donald Trump's unprecedented campaign to shrink the federal government is raising questions about how artificial intelligence is being used, especially in decisions about who receives public benefits.

Advocates, public assistance beneficiaries and legal aid groups all point to lessons already learned from AI's increasing use in determining public benefits in recent years.

"I'm not sure about the legal stuff, but I know about this part of it – I think a person should be there making the decisions," said Tammy Dobbs of Cherokee Village, Arkansas, who experienced the consequences of machine-based decision-making several years ago.

Dobbs, 65, has cerebral palsy and needs help with food preparation, cleaning, hygiene and other day-to-day tasks.

In 2016, she was receiving eight hours of state-funded help each day when the visits were suddenly cut to five hours – with no explanation other than that a nurse's computer made the decision.

"The computer made the decision on how many hours I was supposed to get – not the nurse, no human being," Dobbs said.

The result was months of missed baths, bad food and an unkempt home, she said.

Legal aid groups say such stories stand as a warning as officials mull what role AI systems should play in the federal government, particularly as the Department of Government Efficiency (DOGE), overseen by Elon Musk, spearheads Trump's cuts.

AI is also being used by Immigration and Customs Enforcement and other US agencies to help make decisions such as whether undocumented migrants can be exempted from detention and the conditions of their release.

Federal AI policy is shaping up to be "totally unrestrained," most directly affecting low-income families' ability to access benefits like Social Security and unemployment assistance, said Kevin De Liban, an advocate and former Legal Aid attorney.

In Arkansas, De Liban worked with Dobbs and other beneficiaries to get the state to address problems she and some 4,000 others were experiencing.

He now leads TechTonic Justice, a watchdog group that warned in a November report that almost all of the 92 million poor people in the United States have some fundamental aspect of their lives decided by AI. That could range from health care benefits to food assistance and housing screening, it said.

The report detailed a case in Michigan in which 40,000 people were falsely accused of unemployment insurance fraud between 2013 and 2015, and how a new AI system introduced in Rhode Island in 2016 resulted in 170,000 people wrongly losing food assistance.

"This is a problem that's coming for everybody," De Liban said. "But it just so happens that it's deployed first and most perniciously against the communities that have the fewest resources to fight back.”

In a series of recent letters seeking details, members of Congress cited concerns that DOGE is using AI systems to decide on programs to cut, "make critical decisions about government programs, services and benefits" and replace fired agency workers with AI-driven chatbots.

Aides say those letters have not received responses, though deadlines have passed.

The White House did not respond to a request for comment on its stance on the use of AI in the public benefits process.

'Harsher and more inconsistent'

While little has been disclosed about DOGE's AI use or plans, the federal government had logged more than 2,100 "use cases" of AI systems, according to an official inventory from January.

Trump has rolled back previous guidelines that limited the federal use of AI, and a new AI "action plan" is due by July.

During federal budget discussions this week, lawmakers also sought to bar state efforts to regulate AI.

Past patterns suggest designers could be creating purposefully harsh AI systems, perhaps biased towards false negatives that wrongly reject eligible applicants, said Ben Green, an assistant professor of information and public policy at the University of Michigan.

"There's an initial goal connected to austerity and budget cuts and viewing people getting welfare with suspicion rather than making sure people who are eligible are getting access," he said.

Not only has benefits administration with AI become "harsher and more inconsistent," Green said, but AI tools can make extra-legal decisions that their designers did not anticipate.

For a recent study, he asked computer scientists to design a hypothetical AI tool that gives advice on bankruptcy relief.

The results showed the AI systems misinterpreted the law and even invented new standards, he said.

Legal tech training

The Trump administration's AI focus has not addressed longstanding questions about accuracy, security and bias, said Elizabeth Laird, director of the Equity in Civic Technology program with the Center for Democracy & Technology in Washington, D.C.

"But the difference is we have the potential to see it play out at a scale we have never experienced before," she said.

In response, legal aid groups are moving to boost their ability to spot potential problems.

The Charlotte Center for Legal Advocacy in North Carolina is planning to conduct training on how to spot possible AI involvement in benefits decisions, said Julieanne Taylor, who oversees its public benefits work.

"You need to be aware of the tech piece of it," she said. "We're realising we need to be paying attention." – Thomson Reuters Foundation
