
Following a string of high-profile decisions, the spotlight has been on immigration detention, particularly for people who have been in detention for a long time.

According to statistics released by the Department of Home Affairs, as at 31 December 2023 there were 872 people in immigration detention facilities across Australia, and 777 of them had a criminal history. The average period people spent in detention facilities was 625 days.

Given the number of people in immigration detention facilities, it is probably not surprising that the company responsible for operating some of Australia’s immigration detention facilities uses automated decision-making in its management of the facilities. However, what many people might not know is that individual detainees are allocated a security risk rating, which is determined by an algorithm.

The “Security Risk Assessment Tool” (SRAT) is used to ascertain whether a person is at low, medium, high or extreme risk of escape or violence. The rating affects whether the individual is accompanied to medical appointments and whether restraints are used. Despite the impact it has on their lives, the rating is not disclosed to the individual.

According to a spokesperson for the Australian Border Force, “The Department does not employ the use of automated decision making in immigration detention.”

“The Security Risk Assessment Tool (SRAT) is one risk assessment tool used to inform the management of detainees in the IDN (Immigration Detention Network) and uses quantitative and qualitative methods to assess and calculate risk based on known criteria for each detainee. Other risk assessments include (but not limited to) those for assessing health risks, including risk of self-harm, to ensure appropriate health services delivery.”

“The SRAT considers each detainee’s individual circumstances, including consideration of an individual’s capability (e.g. age, frailty, medical condition) and intent (e.g. immigration pathway, behaviour, prevalence of incidents), and is reviewed at regular intervals.”

Using algorithms in automated decision-making is not a new phenomenon. According to Dr Daniel Ghezelbash, Associate Professor and Deputy Director of the Kaldor Centre for International Refugee Law at UNSW Sydney, there “is a role and there are some benefits to automated decision-making. … [T]he starting point should be just acknowledgment of the … inconsistency and the role of social and cognitive biases in human decision-making.”

“… [W]e have a much higher tolerance for bias in human decision-making and the exercise of discretions than we do in decision-making through algorithms. …[T]his is one of the more egregious examples of what can go wrong when you don’t have the required checks and balances in place,” he says.

“[I]n the United States … ICE (Immigration and Customs Enforcement) used similar algorithms for deciding whether to release these detainees from immigration detention and that has been heavily criticised on similar grounds …,” he says.

“[T]hat took a very restrictive approach and took away the discretion from decision makers and basically used the algorithms as a kind of veneer or shield to justify detaining just about everyone.”


According to information provided by the Australian Border Force, the SRAT is designed to assess the potential risk posed by an individual within the IDN and is conducted using a risk-based approach.

The risk rating is calculated using six key indicators, based on the number of incidents recorded in each category (contraband, demonstration, self-harm, escape, aggression and violence, and criminal profile), any known history that may affect the risk profile, and the weighting assigned to each indicator in the risk matrix, coupled with an assessment.

A detainee’s security risk assessment captures every incident they are involved in, regardless of whether they were an alleged victim, an alleged offender or involved in some other capacity.
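To make the mechanism described above concrete, the following is a minimal, hypothetical sketch in Python of how a weighted risk matrix over incident counts could produce a rating band. The six indicator names are drawn from the Border Force description; the weights, thresholds and rating bands are illustrative assumptions only and do not reflect the SRAT’s actual values or structure.

    # Hypothetical sketch of a weighted risk matrix. Indicator names come from
    # the article; weights, thresholds and bands are illustrative assumptions,
    # not the SRAT's actual values.

    # Illustrative weights for the six incident categories (assumed values).
    WEIGHTS = {
        "contraband": 1.0,
        "demonstration": 1.0,
        "self_harm": 2.0,
        "escape": 3.0,
        "aggression_and_violence": 3.0,
        "criminal_profile": 2.0,
    }

    # Illustrative score thresholds for each rating band (assumed values).
    BANDS = [
        (0.0, "low"),
        (5.0, "medium"),
        (10.0, "high"),
        (20.0, "extreme"),
    ]

    def risk_rating(incident_counts: dict[str, int], history_modifier: float = 0.0) -> str:
        """Return a rating band from weighted incident counts plus a history adjustment."""
        score = history_modifier
        for category, weight in WEIGHTS.items():
            score += weight * incident_counts.get(category, 0)

        rating = "low"
        for threshold, band in BANDS:
            if score >= threshold:
                rating = band
        return rating

    # Example: two aggression incidents and one contraband incident score 7.0,
    # which falls in the assumed "medium" band.
    print(risk_rating({"aggression_and_violence": 2, "contraband": 1}))

Even in this toy form, the concerns raised in the article are visible: the output depends entirely on the chosen weights and thresholds, and on counting incidents regardless of the person’s role in them, none of which is visible to the person being rated.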

According to Ghezelbash, the “most important things are around transparency. So, knowing how the algorithm works and … when”. “[I]t’s a sliding scale as well. Like the more significant the rights are that are in question, the more the need for transparency around how the algorithm operates as well as the accountability mechanisms for appealing or reviewing decisions made by the algorithm.”

“I think those two things are certainly missing from this example. But also, it’s the worst, absolute worst environment to experiment with algorithmic decision-making … [we are] talking about individual liberty and restriction of individual liberty.”

“There are laws, policies, rules and practices that govern how people are treated in immigration detention facilities. Scrutiny, including from parliamentary committees, the Commonwealth Ombudsman and the Australian Human Rights Commission, assists to ensure that the health, safety and wellbeing of detainees are maintained,” says the Australian Border Force spokesperson.

Given the popularity of artificial intelligence and the use of technology to assist in decision-making, there are legal and ethical implications to consider, and Ghezelbash says having humans oversee the use of technology in decision-making is neither the best nor the only answer.

“[E]ven when you have a human in the loop, I think that’s actually even more dangerous because it’s often an excuse for poorly designed algorithms without transparency and the reality is that the humans will generally follow the algorithm or the ‘computer says no, computer says yes’ approach. … [R]egardless of whether the human is in the loop or not, I think it’s really important that the adequate safeguards are in place and the algorithm is [of] suitable quality.”

“[M]y biggest concern is around the complete lack of transparency and then the flow on effects of that lack of transparency around accountability and the ability for people to even know … what decision was … made in relation to them and what evidence that decision was based on. … [W]ithout knowing that, being able to seek review or to challenge that decision becomes close to impossible,” he says.

“But I think the sad reality is that … the immigration space and detention context and broader refugee context has been the testing ground for a lot of these technologies, where really that’s the last place that we should be using them.”