In a 93-page judgment, his Honour Justice Bromwich handed down his decision in Tickle v Giggle for Girls Pty Ltd (No 2) [2024] FCA 960 (23 August 2024) (Tickle v Giggle). The decision has sparked interest as it is the first time that provisions in the Sex Discrimination Act 1984 (Cth) (SDA) have been tested in court since the SDA was amended in 2013.
Although the Court did not find that any direct discrimination had taken place, it found that “ignorance of Ms Tickle’s gender identity is no defence to the indirect discrimination claim. The imposed condition of needing to appear to be a cisgendered female in photos submitted to the Giggle App had the effect of disadvantaging transgender women who did not meet that condition, in particular Ms Tickle.”
Under section 22 of the SDA, it is unlawful for a person who provides goods or services to discriminate against another individual on the basis of gender identity, sex, sexual orientation and other protected attributes.
Giggle tried to argue that section 22 of the SDA is not supported by a Commonwealth head of power but “the Federal Court held that the provision is supported by the external affairs power and corporations power, and applied to Giggle,” says Dane Luo, Doctor of Philosophy candidate at Oxford University.
The Court was required to consider a range of issues, from gender identity discrimination and constitutional law to human rights.
Tickle v Giggle: Recap
Roxanne Tickle is a transgender woman. She was born male but underwent sexual reassignment surgery. As a result, she obtained an official updated Queensland birth certificate which recognised her as being of the female sex.
Around February 2021, Tickle downloaded the Giggle App and followed the registration process to gain access to the app. As part of the sign-up process, she uploaded a selfie of her face. This photo was assessed by third-party artificial intelligence software designed to differentiate between the facial appearance of men and women. The photo was reviewed by Sally Grover, Giggle's CEO, who determined that it was of a man.
Tickle was unable to continue using the app and subsequently commenced legal action against Giggle for Girls Pty Ltd and its CEO, alleging she had been blocked from the app because of her gender identity.
Tickle alleged that Giggle engaged in unlawful gender identity discrimination and that its actions were contrary to section 22 of the SDA.
Why is the judgment significant?
According to Paula Gerber, Professor of Law at Monash University and an internationally renowned scholar with expertise in international human rights law, “this is the first case that has tested the provisions of the Sex Discrimination Act since it was amended in 2013 to add gender identity and sexual orientation and intersex status into the legislation as protected categories.”
She stresses that “it is a really important and significant judgment. [Justice Bromwich] did a great job of unpacking what is sex, what is gender, what is gender identity in very clear, precise language because none of those terms are defined in the legislation.”
“[His Honour] used the term ‘cisgender’ and he recognised that just like everyone has a sexual orientation, everyone also has a gender identity … there is a spectrum of gender identities,” she says.
So far as Gerber is aware, “self-identification” is not a legal term, and it has not been defined by statute. “Self-identification is when a person says, this is my gender identity and it either aligns with the sex that was presumed and recorded at birth, or it doesn’t,” explains Gerber.
Legal recognition of a change in sex varies across the states and territories. Gerber explains that “this is one of the problems in our Federation because the laws are inconsistent in different jurisdictions. New South Wales is the only one that still requires surgery before you can change the sex on your birth certificate.”
According to Gerber, there are a couple of important aspects to the decision. The first is the recognition by the Court that “sex is not immutable. Sex can be changed, [and] it’s not frozen in time. When you are born, doctors look at the genitalia and the external expression and they make an assumption about what your sex is … but particularly for intersex people that may be wrong,” she says.
Secondly, the Court found that there had been indirect rather than direct discrimination, as the decision to remove Tickle from the app was based on a “visual assessment of a photo. Transwomen may have more difficulty satisfying that [criterion] than cisgender women,” says Gerber.
She says, “this is an example that none of us had really considered in this way before, that an assessment of someone, whether they are a woman or not, [can] be based on a very fleeting scan of a photo.”
Gerber is concerned about the use of AI in this manner, and says that using technology to exclude transwomen by failing to identify them as women is going to be problematic moving forward.
Human in the loop and the use of AI
The decision raises important questions about the role of technology, particularly AI and facial recognition.
Lauren Perry, Responsible Technology Policy Specialist at the Human Technology Institute at the University of Technology Sydney (UTS), says: “… this is right up our alley from a Human Technology Institute perspective, where we do a lot of work in this space trying to curb the harms from AI, facial recognition, looking at the kind of biases and the guardrails that need to be in place to address the biases.”
The decision “is really important being the first case alleging gender identity discrimination that’s gone to the Federal Court,” she says.
Perry points out that in this instance, “… it actually isn’t the technology or the facial recognition system that caused the discrimination. It was actually the human in the loop coming back to review the decision which caused the discrimination.”
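To illustrate Perry’s point, the sketch below models a sign-up check like the one described in the case, where an automated assessment is followed by a manual review. It is a minimal, hypothetical Python sketch: the function names, data and outcomes are invented for illustration and are not drawn from the judgment. What it shows is simply that the outcome the user experiences turns on the reviewer’s decision, whatever the model concluded.

```python
# Minimal, hypothetical sketch of a sign-up check with a human in the loop.
# Function names, data and outcomes are invented for illustration; they are
# not drawn from the judgment.
from dataclasses import dataclass


@dataclass
class SignupDecision:
    decided_by: str   # "model" or "human"
    admitted: bool


def ai_appearance_check(photo: bytes) -> bool:
    """Stand-in for a third-party facial-analysis call (assumed behaviour)."""
    return True  # suppose the automated check admits the user


def manual_review(photo: bytes) -> bool:
    """Stand-in for a person re-reviewing the same photo."""
    return False  # suppose the reviewer overrides the model and removes access


def verify_signup(photo: bytes) -> SignupDecision:
    model_admits = ai_appearance_check(photo)
    reviewer_admits = manual_review(photo)
    if model_admits != reviewer_admits:
        # The human override is what the user actually experiences.
        return SignupDecision(decided_by="human", admitted=reviewer_admits)
    return SignupDecision(decided_by="model", admitted=model_admits)


print(verify_signup(b"selfie"))
# SignupDecision(decided_by='human', admitted=False)
```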
Sarah Sacher, Policy Specialist, also at the Human Technology Institute at UTS, agrees. “It’s also worth noting that facial recognition is kind of notorious for being less accurate for certain protected groups, including women, including people with darker skin, so that kind of ups the level of risk when you are using these systems that you’re going to potentially discriminate in some way,” she says.
As the use of AI and facial recognition technology spreads, there is concern that such technology can lead to bias and discrimination without proper systems or checks and balances in place.
Perry explains that these systems, including facial recognition and analysis systems, are created by humans and learn through “machine learning” whereby they are fed training data. The machine learning algorithm “learns based on all of these labels of the images that it is seeing that women have long hair, women have softer features, and it starts to make its own inferences,” she says.
“If there aren’t people in the background checking what the machine is starting to learn, checking how these labels end up being put out, then you end up having this perpetuation of historical biases that come out the other end.
“If you’ve got a machine making decisions around what a man looks like, what a woman looks like – that’s entirely fraught because there’s no single, or even common way that a person presents. … There are huge concerns with using these sorts of technologies anyway, for this kind of decision making, particularly about those very personal protected attributes,” says Perry.
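As a rough illustration of what Perry describes, the toy sketch below trains a simple classifier on invented, hand-labelled examples in which the labels encode the annotators’ assumptions (long hair and “softer” features labelled “woman”). The features, labels and example input are assumptions made up for illustration; no real facial-analysis system is this simple. The point is that the model can only reproduce the pattern baked into its labels, so anyone who does not fit that pattern is misclassified.

```python
# Toy illustration of label-driven bias: the classifier learns whatever
# assumptions the human annotators baked into the labels.
# Features and labels below are invented for illustration only.
from sklearn.linear_model import LogisticRegression

# Each row: [hair_length_cm, "soft features" score from 0 to 1].
# The labels reflect the annotators' assumption that long hair and softer
# features mean "woman" (1) and the opposite means "man" (0).
X_train = [
    [40, 0.90], [35, 0.80], [45, 0.85],   # labelled "woman"
    [5, 0.20], [8, 0.30], [3, 0.25],      # labelled "man"
]
y_train = [1, 1, 1, 0, 0, 0]

model = LogisticRegression().fit(X_train, y_train)

# A woman with short hair and stronger features is scored against the
# pattern encoded in the labels, not against who she actually is.
print(model.predict([[6, 0.30]]))  # -> [0]: the learned assumption misclassifies her
```

At scale, as Perry puts it, if no one is checking what those labels encode, the historical biases simply come out the other end.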
Sacher points out that “it’s worth noting that this decision could just as easily have been made by the app. … it doesn’t really matter what tool is used to make the decision, ultimately discrimination law applies the same way, it’s about the decision.”
She acknowledges that AI is still “… a novel technology. There’s lots of novel issues that arise but ultimately the law still applies.”
Perry and Sacher both agree that “just because the technology is accurate, doesn’t mean that you should be using it. You need to have those broader protections to place limitations on how and when it’s appropriate to use the technology.”
Sacher stresses that it’s crucial to have a human in the background checking the data and the decisions. “There needs to be that responsibility, there needs to be that accountability. There needs to be that check on these kinds of decision making. But having said that, as this case shows, humans are fallible,” she says.