AI's Impact on the Trump Mass Deportation Plan Will Be Significant and Controversial | Opinion


Artificial intelligence (AI), still arguably in its infancy, has already changed daily life. From the way we choose our next binge-watch to the way we analyze stocks and draft status reports, AI has reshaped our world dramatically. Now, as the U.S. embarks on President Donald Trump's second term, AI is already integral to the organization and operation of mass deportation.

The Department of Homeland Security (DHS) announced that its fiscal year 2025 budget dedicates millions of dollars to AI, including opening an AI Office within the DHS Office of the Chief Information Officer and funding existing Immigration and Customs Enforcement (ICE) programs. These line items fall under "Responsible Deployment of AI," and DHS notes that these funds will "allow the implementation of beneficial AI tools to support mission execution while ensuring necessary safeguards are in place to protect the Department's use of AI, its infrastructure, and operations while protecting privacy, and civil rights and civil liberties." On the campaign trail, Trump promised to begin the largest deportation program in U.S. history, focusing on criminals, drug dealers, and human traffickers who are in the country illegally.

Anti-deportation supporters block the 101 freeway while protesting the Trump administration's deportations on February 2, 2025, in Los Angeles, Calif. (Mario Tama/Getty Images)

The Biden administration routinely used AI in immigration and deportation. The Machine Learning Translation Technology Initiative (DHS-197) assists in communicating with non-English speakers. ICE uses machine learning to generate a Hurricane Score (DHS-2408) that reflects the probability that a noncitizen conditionally released from detention will fail to meet program requirements, which is then factored into decisions about the noncitizen's case. Autonomous surveillance towers at the border have also been in place for years, monitoring and tracking targets (not without incident).

AI tools like facial recognition synthesize and analyze large amounts of data to identify patterns and make predictions at a scope and speed that humans are incapable of achieving. These tools mimic human intelligence by applying algorithms to large data sets. The outputs of these algorithms can range from automated decision making to the provision of contextualized information on which law enforcement can act.

There is no shortage of information for Trump officials to plug into AI tools immediately to support deportation efforts. They have inherited Biden-era data from federal agencies, biometric data collected at the U.S.-Mexico border, and law enforcement drone and body camera footage. We have already seen indications that some state and local governments will voluntarily cooperate with the mass deportation program. That cooperation could include sharing immigration-related data collected by state and local police from traffic cameras, license plate readers, body cameras, and law enforcement robots, as well as images and geolocation data from tollbooth programs. Private citizens and businesses may also voluntarily provide data for deportation-focused AI tools, such as images collected by Ring cameras, smart doorbells, and personal security systems. Emerging technologies like smart glasses, worn now by private citizens and eventually by police and Border Patrol officers, may also be put to deportation-related use.

The use of AI for deportation programs raises the concerns associated with AI generally: accuracy, bias, discrimination, and privacy. AI-generated results are not always accurate, as shown by well-publicized cases, including several revealed recently, in which police arrested innocent people based on incorrect "matches" made by facial recognition. The tendency of algorithms to incorporate and reflect unconscious (and intentional) programmer bias is equally well documented. Similarly, studies have shown that facial recognition technology is less accurate when the subjects are women and people of color. Privacy concerns center on the government using AI to conduct surveillance of U.S. citizens who are not suspected of any wrongdoing and building government-controlled databases from that surveillance.

These risks can be mitigated by having humans evaluate the capabilities and limitations of different AI tools, monitor their operation, validate their outputs, and review the intended uses of their results. This collaborative approach can help maximize AI's benefits, analyzing massive amounts of information from different sources, identifying patterns, making predictions, performing targeted searches, and extracting specific information for law enforcement use, while enabling the Trump administration to allocate resources effectively and obtain actionable insights quickly.

As with all technology, AI evolves rapidly, at times in unexpected ways. AI took center stage at the Consumer Technology Association's Consumer Electronics Show in January. One award-winning device offered an "AI solution for crime prevention": a module that uses AI-powered facial recognition and behavior analysis to "predict potential crimes and prevent illegal transactions." It is not a stretch to see how this use of AI could be powerful for deportation efforts.

While this endeavor has no shortage of critics, including former Department of Homeland Security officials, there is no doubt that the Trump administration will use AI-powered tools. Still, this is just the latest installment in a longstanding debate about the appropriate use of technology by law enforcement for national security and public safety. Whenever new technologies emerge, questions resurface about where the line between effective policing and individual rights should be drawn. The sweeping capabilities of AI, and the risks that come with them, raise the stakes for both supporters and opponents of the Trump administration's plans.

Leeza Garber is a cybersecurity and privacy attorney and expert, owns her own executive education company, and teaches at The Wharton School and Drexel's Thomas R. Kline School of Law.

Gail Gottehrer is vice president of global litigation, labor and employment, and government relations at an NYSE-listed company and an expert on cybersecurity, AI strategy, and emerging technologies.

The views expressed in this article are the writers' own.
