By Dr Allan McCay

Since ChatGPT burst onto the world stage at the end of 2022, there has been much discussion of the implications of AI for legal work.

However, in considering this, it is also useful to think about some of the value that humans might bring to this work, and about how AI might have a role in driving a trend towards interdisciplinary teams in law firms. I would like to draw on some of my own work as a legal scholar, and on my collaborative experience, to elaborate on these points.

In doing this I will consider two areas that I have worked on. One relates to neurotechnologies – technologies such as those produced by Elon Musk’s brain-computer interface company Neuralink, which do things like enabling people with paralysis to control external devices by thought alone. These technologies are striking and may be the beginning of a human journey in which we merge with AI.

But I will start with some work that is of a more philosophical nature and inspired by the publications of a NSW judge who was interested in consciousness, free will and decision-making.

Apples, oranges and legal work

Some of my research has considered the possible limitations of AI systems in the workplace. Whilst I am convinced that AI can contribute to efficiency and access to justice, and might be very disruptive for the legal profession, I have tried to think about the legal tasks that might be more difficult for non-conscious machines.

One thing that lawyers sometimes do is to respond to issues of incommensurability. That is, they weigh values that have no common metric. This might be difficult for AI systems.

I lecture in criminal law, and to illustrate this point I will take an example from an area of that law that I have researched: sentencing. In sentencing, judges (and also the prosecution and defence) have to consider the moral culpability of the offender and concerns about community protection. Judges have to weigh these factors when they impose sentences. But how many units of moral culpability are worth one unit of community protection? The problem is that this is an instance of apples and oranges – there is no common metric from which to algorithmically answer this question within AI’s quantitative paradigm.

Influenced by and responding to the thinking of the late David Hodgson, a philosopher and judge of appeal of the NSW Supreme Court, I have argued that tasks involving incommensurability (of which sentencing is an example) might be difficult for forms of AI that operate within a quantitative paradigm focussed on maximising some value. However, if Hodgson is right, incommensurability is something to which conscious humans, and thus lawyers, can respond rationally.

Even though technology companies might try to address incommensurables by having an AI system assign numerical values to these factors, perhaps by attempting to copy the ways humans (for example, sentencing judges) have responded in the past, human judgement might be needed to do this kind of task properly, especially in situations that are novel and quite dissimilar to older cases.

Given that the issue of incommensurability is hardly unique to sentencing (more examples are coming later), this point has wider significance for the automation of law and legal work.

I don’t want to suggest that AI won’t have a very significant transformative effect on law and society more generally – indeed, I think it will. But I also want to think about the things that might be more difficult for AI systems, and the value which humans bring to legal decision-making. I don’t say that this is the only advantage human lawyers might have – just that it could be one, and perhaps the issue of incommensurability is not sufficiently addressed in considerations of the future of legal work, and of the future of work more generally (think of an architect balancing incommensurable factors such as beauty, longevity, and cost in the design of a building).

But moving on from this philosophical take on apples, oranges, and AI, some of my work has considered more practical problems for law, such as how the law should respond to neurotechnology. If we think of law firms responding to neurotechnology (a field that increasingly incorporates AI), and to AI’s interactions with other disciplines such as synthetic biology, more teamwork might be needed.

A need for more interdisciplinary teams?

Law firms might need more interdisciplinary teams. There might need to be more collaboration between people from diverse educational backgrounds within the practice of law.

In my own work on neurotechnology and law I have increasingly had to collaborate with others working in a variety of fields. I need to do this to get a proper grasp on what the neurotech issues are for law. For example, I have published with people with backgrounds in AI, biomedical engineering, neuroscience, neurology, psychiatry, philosophy, venture capital and other disciplines. These collaborators range from the world of silicon-based machines to biology, minds and markets.

Even if I am not collaborating and am writing as a sole author, I often have to speak to one or more of these people to ensure I haven’t misunderstood the science, technology, or commercial environment. I often find I need to change my views as a result of what I have learned. I also find these collaborations help me to understand what the important issues are.

This interdisciplinarity is also to be found in the newly formed Institute of Neurotechnology and Law (an organisation of which I am President). One of the aims of the institute is to understand, and make others aware of, neurotechnology’s challenges for law. To do this, the range of skill sets on the organisation’s expert panel is even wider than that of the abovementioned list of co-authors.

Moving to the practice of law: when one thinks of something like the brain-computer interface being developed by Elon Musk’s company Neuralink, the company’s endeavours span a similar range. So, in a law firm, is neurotech work for the life sciences people or the technology people? It seems to be for both, and it will require interdisciplinary collaboration (which is of course possible and currently takes place, at least within large firms).

But it’s not just neurotech that will require this. Mustafa Suleyman, a cofounder of DeepMind who is currently head of Microsoft’s new AI division, has emphasised the link between AI and advances in synthetic biology. Some lawyers will have to advise clients who work at this interface, and thus firms will need interdisciplinary expertise there too – and perhaps in other AI-adjacent fields.

If this line of thinking is right, then it seems to favour larger firms, which can assemble interdisciplinary teams. Large firms can also offer multi-jurisdictional advice, which may of course be attractive to those developing an emerging tech product.

AI might well disrupt some legal business models (and perhaps society more generally), but it might also converge with and fuel other scientific and technological developments, nudging the profession towards increasing interdisciplinarity and thereby favouring larger firms.

However, one question for such firms might be this: as teams become more complex and diverse, who will assemble, monitor, and coordinate the interdisciplinary groups? The task seems to require project management skills, but also a bird’s-eye view of, and interest in, the interdisciplinary issues as a whole as they relate to a particular piece of work. A capacity to understand, work with, and motivate people from somewhat varied backgrounds may also be needed. This task is likely to be infused with both incommensurability and human factors, and so be especially hard to automate. The necessary skills will not necessarily be found in a lawyer, even a senior lawyer with specialist expertise in a particular area of law. This is something firms may need to think about.

A somewhat analogous challenge appears to have been recognised by the UK’s Regulatory Horizons Council, a body that advises the government on emerging regulatory issues arising from technological advancement. In its report on neurotechnology, the Council notes that, given the range of government departments involved in the UK’s regulatory response to neurotechnology, someone senior needs to be accountable for coordinating that response.

This of course is another task that would be tricky to automate. If one thinks of the UK’s Ministry of Defence, its Medicines and Healthcare products Regulatory Agency, and its Department of Health and Social Care, it would be surprising, to say the least, if the coordinating person (or people) never faced issues of incommensurability, or if the weighing of such factors were not a regular part of the job.

So, what to do?

One challenge for firms will be to understand what is happening in technological development, in order to keep up with or even anticipate changes in client needs; another will be to access sufficient interdisciplinary expertise.

In relation to the first challenge, firms might want to think about how to keep abreast of trends, perhaps by monitoring sources of information about diverse technological developments, and by thinking about who should consider those developments and ultimately drive change. Some thought may also need to be given to how to address a growing coordination problem in large firms in respect of interdisciplinary work.

Some of those discussing the implications of AI for law have worried that junior lawyers might have their training grounds eaten up by generative AI. However, it may be that young lawyers who have recently studied relevant technical and scientific areas will be useful in developing interdisciplinary teams, bringing current knowledge of other disciplines. It may even be that PhDs in the various specialist areas would be desirable. Or perhaps, just as some firms already need people with a background outside the law who bring a deep understanding of silicon-based tech, law firms might also need senior staff with a contemporary understanding of other domains, such as biology.

AI may well disrupt legal workplaces, but it is interesting, and perhaps profitable, to consider the kinds of legal task that require human judgement. Since ChatGPT burst into public consciousness there has been a lot of discussion of the limits of AI. Perhaps one kind of task that is somewhat resistant to automation (or at least more difficult to automate) is the kind that involves the weighing of incommensurables.

But it does seem likely that AI will have a significant impact, and perhaps interdisciplinary teams will be needed to adapt to the changed environment. This illustrates a more general point: AI is not the only area of scientific and engineering endeavour that is currently advancing, but it will increasingly have a role in those other areas. AI-adjacent work does not seem to be prominent in debates about AI and law, and this may need more consideration.

Whether it is apples and oranges, or engineers and biologists, AI brings quite a lot for lawyers, law firms, and governments to weigh up.


Formerly a commercial litigator with Baker and McKenzie in Hong Kong, Dr Allan McCay is Co-director of The Sydney Institute of Criminology and an Academic Fellow at the University of Sydney Law School. He is President of the Institute of Neurotechnology and Law and a member of the Law Society of NSW’s AI Taskforce. Allan will be speaking at the Law Society’s Annual Conference in sessions on AI across both days. 

