ICE's New York Office Uses a Rigged Algorithm to Keep Virtually All Arrestees in Detention
IN 2013, U.S. Immigration and Customs Enforcement quietly began using a software tool to recommend whether people arrested for immigration violations should be released after 48 hours or detained. The software’s algorithm supposedly pored over a variety of risk factors before outputting a decision.
A new lawsuit, however, filed by the New York Civil Liberties Union and Bronx Defenders, alleges that the algorithm doesn’t really make a decision, at least not one that can result in a detainee being released. Instead, the groups said, it’s an unconstitutional cudgel that’s been rigged to detain virtually everyone ICE’s New York Field Office brings in, even when the government itself believes they present a minimal threat to public safety.
The suit, which asks that ICE’s “Risk Classification Assessment” tool be ruled illegal and the affected detainees reassessed by humans, includes damning new data obtained by the NYCLU through a Freedom of Information Act lawsuit. The data illuminates the extent to which the so-called algorithm has been perverted. Between 2013 and 2017, the FOIA data shows, the algorithm recommended detention without bond for “low risk” individuals 53 percent of the time, according to an analysis by the NYCLU and Bronx Defenders. But from June 2017 — shortly after President Donald Trump took office — to September 2019, that number exploded to 97 percent.
“This dramatic drop in the release rate comes at a time when exponentially more people are being arrested in the New York City area and immigration officials have expanded arrests of those not convicted of criminal offenses,” says the groups’ lawsuit. “The federal government’s sweeping detention dragnet means that people who pose no flight or safety risk are being jailed as a matter of course—in an unlawful trend that is getting worse.”
Individuals detained under what the lawsuit calls a “no-release policy” will remain jailed until they can be seen by an immigration judge. People arrested by ICE had no access to information about how they were classified by the algorithm — that’s why the FOIAs were necessary — and most don’t have access to lawyers at the time of their detention, Thomas Scott-Railton, a fellow at the Bronx Defenders, told The Intercept. “The result,” he said, “is that people are detained for weeks, even months, without having been given the actual justification for their detention and without a real chance to challenge it.”
THE LAWSUIT ALLEGES that this algorithmic rubber stamp violates both the constitutional guarantee to due process and federal immigration law that calls for “individualized determinations” about release, rather than blanket denials with a computerized imprimatur. Reached by email, ICE New York spokesperson Rachael Yong Yow told The Intercept, “I am not familiar with the lawsuit you reference, but I am not inclined to comment on pending litigation.”
The risk assessment algorithm is supposed to provide a recommendation to ICE officers, who are then meant to make the final decision — but the agency’s New York Field Office has diverged from the algorithm’s ruling less than 1 percent of the time since 2017. When detainees are finally seen by a human, non-algorithmic immigration judge, the lawsuit says, “approximately 40% of people detained by ICE are granted release on bond.”
The Trump administration’s stepped-up immigration arrests of people without criminal convictions lay bare the perversity of the rigged no-release policy. “If the New York Field Office were actually conducting individualized determinations pursuant to its stated criteria,” the lawsuit says, “the percentage of people released should have actually increased since 2017 because more people arrested qualified for release.”
The technical reasons for this drastic change are clear. Algorithms are essentially problem-solving formulas that can operate at superhuman speed. ICE’s risk assessment algorithm originally functioned by automatically reviewing an immigration detainee’s personal history, weighing factors like their flight risk and threat to public safety, then spitting out one of four options: detention without bond, detention with the possibility of a release on bond, outright release, or a referral to a human ICE supervisor.
In 2018, Reuters reported that Trump’s inauguration brought a critical change to the risk assessment tool: The software was edited to simply remove the possibility of a “release” output. The NYCLU’s FOIA data also shows that the option for bond was removed in 2015. In other words, this ostensible problem-solving software was rigged to provide only one solution: detention.
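ICE’s actual code remains secret, but the mechanism the lawsuit and the Reuters report describe is simple enough to sketch. The following is a purely hypothetical illustration — the risk factors, weights, and function names are invented for the example — of how a four-outcome classifier can be “rigged” by collapsing every non-detention output into detention, leaving the underlying risk scoring intact but irrelevant:

```python
from enum import Enum

class Recommendation(Enum):
    DETAIN_WITHOUT_BOND = "detention without bond"
    DETAIN_WITH_BOND = "detention with possibility of bond"
    RELEASE = "release"
    REFER_TO_SUPERVISOR = "refer to a human supervisor"

def assess_risk(flight_risk: int, safety_risk: int) -> Recommendation:
    """Hypothetical original tool: weighs risk factors, can output
    any of the four recommendations described in public reporting."""
    score = flight_risk + safety_risk  # invented scoring, for illustration only
    if score >= 8:
        return Recommendation.DETAIN_WITHOUT_BOND
    if score >= 5:
        return Recommendation.DETAIN_WITH_BOND
    if score >= 2:
        return Recommendation.REFER_TO_SUPERVISOR
    return Recommendation.RELEASE

def rigged_assess(flight_risk: int, safety_risk: int) -> Recommendation:
    """The alleged edit: 'release' (2017) and 'bond' (2015) outputs are
    removed, so any such recommendation becomes detention without bond."""
    rec = assess_risk(flight_risk, safety_risk)
    if rec in (Recommendation.RELEASE, Recommendation.DETAIN_WITH_BOND):
        return Recommendation.DETAIN_WITHOUT_BOND
    return rec
```

Under this sketch, a person the scoring itself rates as zero-risk still comes out of `rigged_assess` as “detention without bond” — the assessment runs, but its result no longer matters.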
BASED ON THE government’s own data, the decision-making tool functionally makes decisions about as well as a stopped clock tells time. The FOIA data shows that, rather than even attempting to aid human decision-making, the “Risk Classification Assessment” tool serves as a funnel to fast-track action in line with the Trump administration’s brutal immigration agenda.
For years, exactly how the ICE algorithm reached its ultimate decisions has been kept secret. “ICE has been anything but transparent about both the RCA’s algorithm and how the tool is used by officials in the field,” explained NYCLU attorney Amy Belsher. “And yet, these determinations have profound and severe impacts on the lives of the thousands of people ICE arrests every year.”
Unlike much of the secret code used in government or business, however, this secret algorithm was exposed because ICE rigged it: We now know exactly how it doesn’t work. The risk-assessment tool has almost ceased to be an algorithm altogether, serving instead only to give the impression of algorithmic justice. “Given what we now know about the manipulations to the tool,” added Belsher, “it appears the main function of the RCA is to provide a veneer of objectivity and fairness to a process that lacks it entirely.” In this computer-enabled vacuum of accountability, ICE’s New York personnel can justify increased detentions by pointing to an oracular algorithm that, in turn, points to nothing but itself.
For New Yorkers handed an algorithm-sanctioned detention, the lawsuit says, the consequences can be immediate and crushing:
Once denied release under the new policy, people remain unnecessarily incarcerated in local jails for weeks or even months before they have a meaningful opportunity to seek release in a hearing before an Immigration Judge. While waiting for those hearings, those detained suffer under harsh conditions of confinement akin to criminal incarceration. While incarcerated, they are separated from families, friends, and communities, and they risk losing their children, their jobs, and their homes. Because of inadequate medical care and conditions in the jails, unmet medical and mental-health needs often lead to serious and at times irreversible consequences.
At no point have detainees ensnared by the ICE algorithm had any chance at recourse, explained Belsher, describing a legal process as opaque as the risk-assessment tool. “People are not able to meaningfully participate in ICE’s initial custody determination process, they are not given access to counsel and cannot submit evidence,” Belsher said. “ICE’s decision is administratively final; there is no process within the agency for people to challenge either the RCA’s recommendation or ICE’s custody determination. In fact, ICE does not even provide the person with the RCA’s determination of their flight or danger risk level or any other recommendation generated by the tool. People are given a basic form stating only that ICE has decided to detain them.”
The no-release policy is particularly tough on people with disabilities or health problems. “This practice of widespread detention is both cruel and needless,” Scott-Railton, of the Bronx Defenders, said in a press release, “and has particularly devastating consequences for people with physical or psychological disabilities who must fight their immigration cases while being held in inhumane conditions and without access to the health services they need.”
Sam Biddle is a reporter focusing on malfeasance and misused power in technology. While working at Gizmodo and Gawker, he covered stories ranging from vast corporate data breaches and celebrity hackers to trafficked webcam models and Facebook privacy. As the editor of Valleywag, he provided a critical, adversarial view of the startup economy and Silicon Valley culture. His work has also appeared in GQ, Vice, and The Awl.