What happens when crime is committed in cyberspace by invisible offenders? Can we inflict punishment on elusive data thieves, and on automated processes like Robodebt? An ethical perspective.
Crimes committed in cyberspace are at an all-time high. Data theft is the order of the day. Robodebt, the Australian government’s automated debt-recovery process, is reported to have stolen the peace of mind of over 443,000 people and robbed some of them of their homes, their mental health and even their lives. More recent AI processes are being used to research, judge, diagnose and recruit – with the same potential to give life-shattering advice. Europol recently reported that AI language models can fuel fraud, cybercrime, and terrorism.
But the miscreants remain unpunished, protected by the invisibility of cyberspace.
LSJ spoke to Dr Simon Longstaff, Executive Director of The Ethics Centre, about the conundrum: When an offender is invisible or anonymous, is it still possible to impose punishment, to deter future offenders? And whom would we punish?
“The challenge of punishment is written deep into our culture,” Longstaff says. Longstaff has headed the Centre since 1991, a year after it commenced operations. He studied law, but for him this was just a step on the path to a life occupied with ethics.
“I left school at 16 and worked, and one of my jobs was as a cleaner in the safety department on Groote Eylandt in the Gulf of Carpentaria. When I came back to Sydney I wanted to study law. I started doing it, wasn’t brilliant at it, but more importantly decided that the law as I could see it being practised was somewhat remote from justice. And that was a disappointment for me. Some people never recover: lawyers who went into law thinking they were going to be doing justice and ended up just administering the law. A few of the lucky ones get into positions where they can really do justice, but a lot don’t get that chance.”
For Longstaff, the transition into his current life came when he did postgraduate studies in philosophy at the University of Cambridge. As someone who has wondered about the human condition since childhood, he says, “That really drove me to what I needed and what I do now.”
“In every case the formula for punishment is the same,” Longstaff says. “We inflict penalties on individuals and sometimes whole groups who refuse to comply with our demands. For a punishment to count as a punishment it must involve consequences that the recipient experiences as unpleasant.”
Punishing the absent offender
“But to punish those who are absent or have no form that we can grab on to,” Longstaff says, “there are a few general principles. Firstly, we tend to accord all people natural justice in our system, and that means in order for a person to be punished they must first be found guilty.
“And for them to be found guilty they must first be tried. In a manner which accords with natural justice they must be presumed innocent until shown to be guilty beyond reasonable doubt, at least in our system when it comes to criminal offences, and it’s very hard to do that in the case of an absent offender.
“There are some really rare cases where they might seek to secure a conviction against a person who has not been arrested but is known. To say we will find hackers guilty when their identity is not known will be impossible, because you can’t ascribe guilt to a general class of people; you have to have some kind of identity, and then you have to go through the various fault elements.”
The issue is different in the case of Robodebt and other automated, autonomous systems such as AI, Longstaff says. “Although the AI system itself is, as yet at least, incapable of having the kind of guilty mind – what used to be called the mens rea – that is one of the elements of a crime, it nonetheless operates in a context where others do.”
“So perhaps the closest analogy is the way we treat corporate failures around workplace health and safety, or bribery offences: we tend to transfer a greater part of the responsibility to those who are responsible for the decisions it makes – for instance, company directors. And one might also look at the general context within which the offending is taking place.”
Let loose to do its own thing
Longstaff cautions: “So this is why it becomes so important to determine that something like AI is constructed in such a way that its makers seek to limit, to the greatest extent possible, the chance that it might do something wrong. Because the machine, when let loose, will be doing its own thing.
“The other problem we have with trying to hold machines responsible, let alone punishing them, is that we don’t know how a lot of them work. Though there’s quite a bit of research going into this, it has so far achieved nowhere near what we might hope.”
Something like AI should be constructed in such a way that those who make it seek to limit, to the greatest extent possible, the chance that it might do something wrong.
How can victims’ desire for justice be achieved, LSJ asks Longstaff, when the offender is invisible?
Longstaff says: “Those who survive something like Robodebt may have a range of emotions. They might want restitution. There may also be people who want retribution, and I think the most likely figure to bear this in the case of something like Robodebt would be the minister or ministers responsible. Because that ties in with our whole democratic system. So firstly, they are actual people who are subject to punishment, and secondly, they will have exercised decision-making power that either allowed or disallowed this to take place.
“And certainly in a democratic system where this is a government action they are ultimately responsible for what happens, because that’s the whole basis on which executive power is exercised in a liberal democracy like Australia. The Minister is the one responsible for what is done in their department and they are responsible to Parliament for giving an account of that. And potentially, if what’s being done is illegal, then you’d think that they might be held responsible for that.
“If you think about the raw emotion of a person who’s gone through a Robodebt or something like that they’d be saying to themselves, if something’s to be done by way of retribution, let it be visited on the person who really made the decision, who was ultimately responsible. And that would be the person who made the machine, programmed the machine, who was in charge of that.”
Longstaff adds that it is important to acknowledge the victim’s – and indeed all society’s – need to see some form of punishment in place: “Providing that we punish the guilty and our punishments are proportionate to the crimes they commit, the public is prepared to invest in the system as a whole and resist the temptation to take matters into their own hands. It’s when no one is punished (or when the punishment is disproportionate) that I think as a society we lose trust in the system as a whole. And that opens us up to that more troubling phenomenon where the rule of law is broken and the vigilantes emerge.”
Clever beyond our control
In terms of who should be punished for crimes like data theft, Longstaff says, “there are two levels. There are the criminals who seek to gain unlawful access, and if you identify them, and arrest them, then they should be prosecuted and punished to the full extent of the law.
“Independently of that, there will be people who have responsibilities to protect your data. And if they are negligent or inept in relation to that then they should be held accountable to the full extent of the law. So, the way it works in these systems, it’s not just an obligation on the criminal not to engage in their nefarious acts, there’s also a positive obligation on those who hold your data to take all proper measures in order to protect it.
“And if they don’t, then they also share some of the responsibility. Their negligent indifference contributes to [the loss] and they would be held responsible even though they didn’t initiate the criminal act.”
Who’s accountable when a technology becomes so sophisticated that it is clever beyond our control and causes harm?
Longstaff is unequivocal about this. “It shouldn’t be released into the world if it’s incapable of any control.”
An equivalent situation, he says, would be “if we allowed drug companies just to start handing out pharmaceuticals before they’ve been proven to be safe or efficacious. They have to go through a process of assessment, and you’ve got to know what’s in them, and they’ve got to be manufactured to a certain quality. And be subject to some kind of review or control.
“And I think we have to begin thinking about some of these more advanced technologies in the same way: Is it safe? Does it do what it’s supposed to do? Is it being produced to a standard society can rely upon once it’s been released into the open? If you make and release something that goes rampant and can’t be controlled then you as the maker will bear some responsibility for that. Just as when Thalidomide was found to be damaging, the maker was ultimately responsible for the harm done.”
If you make and release something that goes rampant and can’t be controlled then you as the maker will bear some responsibility for that.
Longstaff says, “When you make a technology you build into it certain affordances: things that people can do with it. And a responsible technologist would limit the affordances that can arise for improper purposes. So, one classic example would be that ideally the makers of handguns put in technology so that the handgun can only be fired by a person whose fingerprints match the register. In other words, that’s a form of technology which limits the handgun’s affordances to those who have a legitimate purpose, and are properly licensed and responsible.
“Suppose that technology existed,” Longstaff elaborates, “and you continued to make handguns anyone could pick up and use for any purpose without any of those limitations, then you would be responsible, because you failed to limit those affordances.
“If you could anticipate how technology like this might be used in ways that are not intended, and you do nothing at all to recognise or limit those affordances to do wrong, then most people would say you should be held responsible for that failure.”
Longstaff concludes, “I can see society saying there are people who bear some responsibility beyond those of the criminal actor, and we should look to them to be held to account in a just and proper manner, even though they were not the perpetrator. The offence lies in their reckless indifference. This won’t be a complete and adequate response to the anonymous malefactor, but it may be something that people reasonably expect as part of the solution.”
A hand on your shoulder
AI’s innate inadequacy is its lack of humanity, Longstaff says: “You can use AI with a greater degree of precision than any human being can bring, to diagnose certain forms of cancer. But no machine can put its hand on your shoulder and tell you that you’re going to die knowing what it is to be mortal. That’s what a human doctor can do. And even the best simulation of that won’t count the same because it’s not something that truly knows mortality.”
Despite the doomsayers’ view that our damaged and violent world will only become more so, Longstaff is optimistic. It will be people, not machines, who determine the future. “The most powerful force on this planet is human choice. The choices that people make shape the world.”
But he is less optimistic about whether we will continue applying an ethical lens to address dilemmas like those posed by anonymous crime in cyberspace.
“The trouble with ethics is that it’s like that wonderful, somewhat chilling line from Blake in his poem The Four Zoas: ‘Wisdom is sold in the desolate market where none come to buy’. Ethics is a bit like that: something which everybody needs but very few want.”