The Law Society of NSW hosted its third in-house corporate lawyers' forum for 2025, diving into neurotechnology's complex and rapidly evolving landscape and its potential impact on the workplace.
The discussion, facilitated by Dr Rob Nicholls, a senior research associate at the University of Sydney specialising in technology and regulation, brought together a panel of experts to explore the exciting possibilities and significant ethical challenges posed by these advancements.
The panel comprised Lorraine Finlay, Human Rights Commissioner at the Australian Human Rights Commission; Dr Allan McCay, Co-Director of The Sydney Institute of Criminology; Kate Peterson, Partner at McCullough Robertson; and Paul Monaghan, Senior Ethics Solicitor at The Law Society of NSW. Their diverse perspectives sparked a lively debate around the central question: Is neurotechnology in the workplace a promise or a peril?
Allan McCay provided a comprehensive overview of neurotechnologies, defining them as devices that “interact or monitor the nervous system,” capable of reading from and intervening within the brain and nervous system. He highlighted the transformative therapeutic potential, citing the Australian innovation of the cochlear implant and advancements in deep brain stimulation for Parkinson’s disease. He also touched upon emerging technologies such as brain-computer interfaces for individuals with paralysis, referencing companies like Neuralink and the Australian firm Synchron, as well as wearable EEG headsets now available to consumers.
“There’s enormous upside to neurotechnologies,” McCay emphasised, pointing to their potential for accessibility and improved quality of life. However, he also noted the increasing use of external EEG devices in workplaces for monitoring alertness in high-risk industries and even in police interrogations, raising immediate questions about privacy and consent.
Lorraine Finlay echoed the potential for positive impact, stating, “This technology… can be a really powerful tool for strengthening human rights” through enhanced accessibility and workplace safety monitoring. However, she quickly shifted to the significant human rights concerns, particularly the right to privacy. “We never thought that the mind was something that could be pierced in that way,” she cautioned, highlighting the unprecedented level of surveillance that neurotechnology could enable, going beyond identifying individuals to understanding their thoughts and feelings.
A significant point of contention during the discussion was the issue of consent. Finlay stressed the importance of free and informed consent in the workplace, questioning whether employees could genuinely consent to wearing such devices if it were a condition of employment. She also raised concerns about potential discrimination against neurodivergent individuals or older workers if workplace monitoring systems were used in hiring, firing, or promotion decisions. While acknowledging existing legal frameworks around privacy and discrimination, Finlay argued that these laws were not designed with neurotechnology in mind, creating significant gaps that must be addressed.
“Innovation and human rights aren’t mutually exclusive. They go hand in hand, but we need innovation to be matched with responsible and ethical usage, with strong human rights discussions being put at the heart of it,” Finlay said.
Based on her expertise in workplace health and safety law, Kate Peterson acknowledged the intuitive appeal of technology that could prevent accidents. However, she immediately pointed to the inherent power imbalance in the employer-employee relationship. Using workplace surveillance legislation as an analogy, she questioned the true extent of employee awareness and consent regarding existing monitoring practices. Peterson emphasised the need for clear “guard rails” and a consensual framework that balances innovation with human rights and public interest. She also raised the legal concept of “reasonable instruction,” questioning whether it would be considered lawful and reasonable to mandate the use of neurotechnology and the sharing of neurological data as a condition of employment.
Finlay further elaborated on the complexities of consent within the power dynamics of employment, warning against the embedding of existing societal divides where vulnerable individuals may feel compelled to consent due to economic pressures.
Paul Monaghan approached the topic from an ethical standpoint, emphasising the fundamental right to freedom of thought and the potential for neurotechnology to infringe upon this. He drew parallels to historical struggles for basic freedoms, cautioning against a rush to adopt technologies that could erode these hard-won rights. Monaghan raised concerns about the “harvesting” of an individual’s knowledge, skills, and experiences through neuro-interfaces, particularly within professions like law, where expertise is highly valued. He suggested that professional conduct rules might need to be updated to explicitly protect the confidentiality of thoughts and knowledge in the face of neurotechnological advancements.
The panel also addressed whether amending the Privacy Act would sufficiently regulate neurotechnology. While acknowledging that existing technology-neutral legislation provides a starting point, Finlay argued for clear guidance on how current laws apply to these new technologies and for bespoke frameworks to address specific gaps. Monaghan, however, questioned the fundamental right of any entity to impinge upon an individual’s thoughts, emphasising the ethical principles of informed consent and the right to withdraw from such experimentation.
Nicholls presented a compelling scenario to Peterson in which an employer could detect an employee’s agitation through a neuro-device. This raised complex questions about how such information should be managed and the potential for misinterpretation or bias. Peterson cautioned against creating a rigid definition of “normal” brain activity and the risk of unfairly penalising individuals who fall outside that definition.
Ultimately, the forum highlighted the urgent need for proactive and thoughtful consideration of neurotechnology’s ethical, legal, and societal implications in the workplace. While the potential benefits for accessibility, safety, and productivity are clear, the panellists stressed the importance of establishing robust safeguards to protect fundamental human rights, ensuring genuine consent, and preventing discriminatory practices. This discussion was a crucial first step in navigating this complex terrain and shaping a future where neurotechnology can be harnessed responsibly and ethically.