Generative AI: A New Era of Efficiency for Law

A statue of Themis - the goddess and personification of justice

Introduction

AI is coming for the legal sector, and if implemented appropriately, it's not the end of the world. Generative artificial intelligence (AI) is a form of machine learning that can create imagery, text, and video content based on prompts. Impressively, it can learn from these prompts to refine the actions or content it generates. Recently, generative AI has adopted conversational user interfaces - like OpenAI's ChatGPT, Anthropic's Claude, Apple's Siri, and Amazon's Alexa - which take abstract data and convert it into human language in a manner that feels like a direct conversation with the machine. These interfaces are intended to mimic human interactions to create an intuitive experience for the end user.

While the interaction may resemble a human conversation, the results are purely algorithmic. Responses reflect the most statistically likely answer given the training data (not necessarily the correct one), and this comes with a risk of bias. Furthermore, the AI's reasoning is programmable and the training data can be cherry-picked. For instance, OpenAI (2023) explains that the text embeddings used to connect data measure the relatedness of text strings. This allows programmers to build a kind of temperature gauge for how relevant or creative the desired response should be by adjusting the allowable distance between the embeddings the algorithm returns. As generative AI automation is built into the software used to conduct work, it calls into question how much control humans will retain over that work.
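
To make the embedding mechanism concrete, the following is a minimal sketch only, assuming the OpenAI Python client (v1+), an illustrative model name, and an arbitrary threshold; it is not OpenAI's implementation, but it shows how relatedness between text strings can be measured and filtered by a tunable distance.

```python
# A minimal sketch: measure how "related" two text strings are via the distance
# between their embeddings, then filter on a tunable threshold.
# Assumes the OpenAI Python client (v1+); model name and threshold are illustrative.
import numpy as np
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def embed(text: str) -> np.ndarray:
    """Return the embedding vector for a single text string."""
    response = client.embeddings.create(model="text-embedding-ada-002", input=text)
    return np.array(response.data[0].embedding)

def relatedness(a: str, b: str) -> float:
    """Cosine similarity: close to 1.0 = strongly related, near 0 = unrelated."""
    va, vb = embed(a), embed(b)
    return float(np.dot(va, vb) / (np.linalg.norm(va) * np.linalg.norm(vb)))

# Tightening or loosening this threshold is the "temperature gauge" described
# above: a narrow band returns only closely related material, a wide band
# allows looser, more "creative" matches. 0.80 is purely illustrative.
RELEVANCE_THRESHOLD = 0.80

score = relatedness("duty of care owed by an employer",
                    "an employer's obligations in negligence")
print(score, "relevant" if score >= RELEVANCE_THRESHOLD else "filtered out")
```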


The legal sector has the unique power both to make use of AI automation and to legislate it. AI legislation is an underdeveloped area, which the international law firm Pinsent Masons articulates with this succinct comparison: “More regulation goes into buying a cup of coffee than goes into generative AI”. AI in relation to the law is a complex topic: the potential classification of AI as a legal entity and the tension between the international reach of AI and federal laws are related questions, but a full discussion of them is beyond the scope of this paper. Instead, this paper discusses practical, immediate approaches to the technical application of AI automation in Australia's legal sector. This essay draws on sources from academics, software engineers, legal professionals, and technology writers to argue that:

  1. The legal sector is facing significant changes due to the integration of generative AI.

  2. Generative AI automation poses risks of bias, discrimination, knowledge gaps, misleading information, and poor accountability that impact lawyers as well as their clients.

  3. Technical principles must be implemented that improve the accountability, transparency, and explainability of AI automation within the legal sector.

The purpose of this essay is to outline steps that designers and software engineers can take to mitigate the risk of harm to lawyers and their clients in the immediate future. In doing so, it will allow the legal sector to benefit from the technical advancements that generative AI brings and to formulate appropriate AI legislation that mitigates widespread risk.


“Some experts knew it was coming but for the rest of us it was a sudden and unsettling change that made us ask some pretty searching questions about the nature of our work.”

Pinsent Masons (2023)


Generative AI is changing the legal sector

Based on studies by Felten et al. (2023), legal services are among the top industries impacted by advances in language modelling. According to The LegalTech Fund, a venture capital company that specialises in law, there are over 100 companies incorporating generative AI in the legal sector as of October 2023. These companies are harnessing AI’s ability to increase the efficiency and rigour of legal processes.


Generative AI saves time. With its advanced ability to read, analyse, and summarise mass information, AI presents significant time-saving benefits that can increase the productivity of legal teams. Reading and summarising documents is a use case that Pinsent Masons is already employing, saving several days per client. Other use cases include verification and dispute resolution, which once involved tedious processes of checking documents against manual records. Digitisation in combination with LLMs enables teams to speed up the process of searching and linking evidence as well as identifying conflicting records. Nay et al. (2023) write that “LLMs can assist in tasks ranging from contract analysis to case prediction, potentially democratising access to legal advice, reducing the cost and complexity for those who might otherwise struggle to navigate the legal system.” The New York Times technology writer Steve Lohr (2023) writes that “the technology seems like a very smart paralegal” and that the scope and effects of the efficiency gains are still being realised. A process that previously took days is now almost instantaneous.
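
As an illustration only, the “read and summarise” workflow can be sketched in a few lines. The model name, prompt wording, and function name below are assumptions, not a description of any firm’s actual system.

```python
# A minimal sketch of the "read and summarise" use case, not any firm's system.
# The model name, prompt wording, and function name are assumptions.
from openai import OpenAI

client = OpenAI()

def summarise_document(document_text: str) -> str:
    """Ask an LLM for a structured summary a lawyer can verify against the source."""
    response = client.chat.completions.create(
        model="gpt-4-1106-preview",  # illustrative; any capable model could be substituted
        messages=[
            {"role": "system",
             "content": ("You are a legal research assistant. Summarise the document, "
                         "listing the parties, key dates, obligations, and any "
                         "conflicting statements. Quote the passage supporting each point.")},
            {"role": "user", "content": document_text},
        ],
        temperature=0,  # favour consistency over creativity for legal review
    )
    return response.choices[0].message.content

# Example usage (hypothetical file):
# print(summarise_document(open("contract.txt").read()))
```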


LLMs improve rigour, as AI can effectively fact-check on behalf of the user. For example, the London-based law firm Allen & Overy has adopted Harvey AI to check the rigour and quality of its work before it is finalised. Writing in the Financial Times, Natalie Byrom (2023) argues that “training generative AI on relevant documents, such as judgments and court decisions, can dramatically improve the accuracy of its responses”. According to Matthias Spielkamp (2017), journalist and founder of AlgorithmWatch, “human decision-making is at times so incoherent that it needs oversight to bring it in line with our standards of justice”. Generative AI balances the inconsistency of human judgement with an automated, systematic check that improves the overall quality of the work.


The appeal of greater efficiency and rigour has driven the adoption of generative AI in the legal sector. More than 100 legal AI services span business operations, litigation, research, corporate work, contracting, finance, and more. Furthermore, the increased rigour and efficiency of AI automation can have the added benefit of improving the fairness of the judicial system. However, there have already been instances of direct harm to lawyers and their clients as a result of this automation.


Risks of generative AI on lawyers and clients

The shortcomings of AI pose a risk of harm to humans that is exacerbated in the practice of law. Generative AI exposes lawyers and their clients to risks of bias, discrimination, knowledge gaps, misleading information, and a lack of accountability.


Any algorithm comes with a risk of bias. As OpenAI (2023) points out in the documentation for its embedding models: “Our embedding models may be unreliable or pose social risks in certain cases, and may cause harm in the absence of mitigations… The models encode social biases, e.g. via stereotypes or negative sentiment towards certain groups. For example, we found that our models more strongly associate (a) European American names with positive sentiment, when compared to African American names, and (b) negative stereotypes with black women.” Based on the limitations cited by OpenAI, ChatGPT does not adhere to the standards of Australia’s anti-discrimination laws. Recent examples of algorithmic harm - such as Australia’s Robodebt scheme, the UK’s Post Office Horizon scandal, and the US’s PredPol predictive policing software - have reproduced existing inequities or introduced new bias and discrimination targeted at vulnerable citizens. Generative AI has the potential to systematise and magnify entrenched biases. This is exacerbated if the AI has gaps in its knowledge.
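
One way designers can surface this risk before deployment is a simple association audit over embeddings, in the spirit of the finding OpenAI describes above. The sketch below is illustrative only; the name lists, sentiment words, and model name are assumptions rather than a validated audit method.

```python
# A minimal sketch of an association audit: compare how strongly two groups of
# first names associate with positive-sentiment words in embedding space.
# Name lists, sentiment words, and the model name are illustrative assumptions.
import numpy as np
from openai import OpenAI

client = OpenAI()

def embed(texts: list[str]) -> list[np.ndarray]:
    response = client.embeddings.create(model="text-embedding-ada-002", input=texts)
    return [np.array(item.embedding) for item in response.data]

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def mean_association(names: list[str], sentiment_words: list[str]) -> float:
    """Average similarity between every name and every sentiment word."""
    name_vecs, sent_vecs = embed(names), embed(sentiment_words)
    return float(np.mean([cosine(n, s) for n in name_vecs for s in sent_vecs]))

positive_words = ["trustworthy", "pleasant", "honest"]
group_a = ["Emily", "Greg"]      # illustrative European American names
group_b = ["Lakisha", "Jamal"]   # illustrative African American names

gap = mean_association(group_a, positive_words) - mean_association(group_b, positive_words)
print(f"Positive-sentiment association gap: {gap:+.4f}")  # a persistent gap signals encoded bias
```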


“We were just as annoyed as all of you that GPT-4’s knowledge about the world ended in 2021. We will try to never let it get that out of date again.”

Sam Altman, CEO of OpenAI (2023)


Generative AI has knowledge gaps. Until very recently, OpenAI’s models did not know of events that occurred after September 2021. Even with the latest GPT-4 Turbo model, the knowledge cutoff is April 2023. This means that ChatGPT has no knowledge of the last six months, including developments in the Russia-Ukraine war, Trump’s multiple indictments, the cost-of-living crisis, and the recent events between Israel and Palestine. For a tool that can shape the justice system, a lack of awareness of these major events is a significant blind spot. Furthermore, conversational UI can be misleading in the delivery of its answers. GitHub’s Principal Research Engineer, Amelia Wattenberger (2023), observes: “When it responds, it is endlessly confident. I can't tell whether or not it actually understands my question or where this information came from.” Generative AI products can gloss over gaps in their training data, resulting in confident but false conclusions.


A lack of accountability makes false conclusions even more pervasive. Linguistics professor Emily M. Bender argues that the term ‘artificial intelligence’ is in fact a marketing term intended to make automation “sound sophisticated, powerful, or magical” in a bid to sidestep accountability. If the majority of the work is offloaded to a machine, the lawyer is no longer in control, and it becomes harder to communicate precisely what the AI is doing. This can have drastic impacts on the privacy and security of citizens. For example, there have been cases of unlawful AI-enhanced facial recognition systems in Australia and the UK in which the individuals being tracked had neither consented to nor been given appropriate notice of the surveillance systems in use. Improper implementation of generative AI has led to poor accountability for breaches of security and privacy and for the spread of misinformation.


The risks posed by AI - systematised bias, blind spots in the data, the speed of automation, confident delivery, and a lack of accountability - can lead to breaches of security and privacy. At best, these failures result in minor administrative errors that can be quickly amended. At worst, as in Australia’s infamous Robodebt crisis, an automated decision-maker caused the uncontestable, systemic ruin of half a million vulnerable people. As the White House states in its Blueprint for an AI Bill of Rights (2023): “Important progress must not come at the price of civil rights or democratic values”. The first place to mitigate risk is in the technology itself.


AI implementation must adhere to stringent technical principles

Legal professionals and their clients must be protected from the risks associated with generative AI. While AI can improve productivity, humans are required to make value judgements. Byrom (2023) notes that difficulties in getting generative AI models to return accurate legal information have led even ardent proponents of the technology to conclude that it should only be used to augment, rather than replace, the advice of lawyers. By following stringent technical principles, the legal sector can mitigate these risks through improved accountability, transparency, and explainability of AI automation. This section combines the White House’s Blueprint for an AI Bill of Rights (2023) with academic advice to make recommendations for the Australian legal tech industry. The purpose of these recommendations is to reinforce human comprehension and control of the technology in use so that informed decisions are made in a manner that supports the judicial system.


The companies that create technology must be accountable for users’ security and privacy. This is even more crucial when generative AI is embedded in the software. Existing legislation offers a foundation for technical principles that support AI: Australia’s Privacy Act (1988) and its anti-discrimination laws are important references for ensuring people are safe from mistreatment. Under this legislation, user privacy must be protected and social bias is unacceptable. Consent must be sought from users before their personal information is incorporated into AI training data. To protect user security, platforms must commit to robust cybersecurity practices such as pre-deployment testing, risk identification and mitigation, and ongoing monitoring. Data warehouses must be encrypted, and no health or payment information should be saved in the datasets used for training models. The accountability of AI remains an open debate that must be resolved. The technology itself should not be made accountable for outcomes generated by automated decision-makers. Instead, full accountability should lie with the organisations that create and make use of these systems.
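
A minimal sketch of how two of these safeguards might look in practice follows; the field names, consent flag, and record structure are hypothetical rather than a description of any real platform.

```python
# A minimal sketch of two safeguards: exclude records without explicit consent,
# and strip health and payment fields before any client data reaches a training
# dataset. Field names and the record structure are hypothetical.
from dataclasses import dataclass

EXCLUDED_FIELDS = {"medicare_number", "health_notes", "card_number", "bank_account"}

@dataclass
class ClientRecord:
    client_id: str
    consented_to_ai_training: bool
    fields: dict  # raw field name -> value

def prepare_for_training(records: list[ClientRecord]) -> list[dict]:
    """Return only consented records, with health and payment fields removed."""
    prepared = []
    for record in records:
        if not record.consented_to_ai_training:
            continue  # no consent, no training data
        safe_fields = {k: v for k, v in record.fields.items() if k not in EXCLUDED_FIELDS}
        prepared.append({"client_id": record.client_id, **safe_fields})
    return prepared
```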


Strengthening transparency is crucial for mitigating harm. Automation is particularly appealing for repetitive tasks, but the interface must ensure that legal practitioners are aware of the AI’s systematic processes so that the human can remain in control. For instance, a timely and accessible notice must be presented to communicate that AI systems are in use. When applicable, users must be able to opt out of AI automation and divert to manual functions or human interactions. Transparency about the sources of training data and the limitations of the system’s capabilities should also be highlighted.
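
A sketch of the notice and opt-out behaviour might look like the following; the notice text, function names, and stubbed AI call are all illustrative assumptions.

```python
# A minimal sketch of notice and opt-out behaviour. The notice text, function
# names, and stubbed AI call are illustrative only.
AI_NOTICE = ("This draft was produced with an AI system trained on material with a "
             "fixed knowledge cutoff. Verify all points against the source documents.")

def ai_draft_summary(document_text: str) -> str:
    # Placeholder for whichever model integration the platform actually uses.
    return "[AI-generated draft would appear here]"

def handle_review_request(document_text: str, ai_opt_out: bool) -> dict:
    """Divert to manual review when the practitioner opts out of AI automation."""
    if ai_opt_out:
        return {"mode": "manual_review", "notice": None, "draft": None}
    return {
        "mode": "ai_assisted",
        "notice": AI_NOTICE,  # presented before any AI output is shown
        "draft": ai_draft_summary(document_text),
    }
```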


Any notice of AI systems must be explainable so that informed consent can be achieved. To borrow wording from the White House’s Blueprint for an AI Bill of Rights, notices should take the form of “generally accessible plain language documentation”. This is to ensure that the lawyer or client can comprehend how the AI works and therefore maintain professional or personal control over any AI automation. A relevant concept posed by researchers Chia et al. (2023) is that “there are two distinct and separate concepts of autonomy, Person Autonomy and Machine Autonomy”. For the two versions of autonomy to work in harmony, clear communication is essential.


Mastering these technology principles within the legal tech sector has the potential to influence wider debates on AI legislation. Building on the writings of John J. Nay et al., LLMs could be used to predict when fiduciary duties are violated, and such checks could be built directly into a codebase. According to Spielkamp (2017), “the broader society—lawmakers, the courts, an informed public—should decide what we want such algorithms to prioritize.”

Conclusion

In 1950, the mathematician and computer scientist Alan Turing theorised that computers could imitate human intelligence. Today, generative AI has reached a level of performance that is indistinguishable from a human’s on particular tasks. The introduction of generative AI has resulted in significant changes to the legal sector. While AI presents an opportunity to improve the efficiency and rigour of legal processes, it also poses risks of bias, discrimination, knowledge gaps, misleading information, and a lack of accountability for lawyers and their clients. It is important to note that AI is simply automation. The technical principles outlined in this paper will mitigate the risk of harm to lawyers and their clients through improved accountability, transparency, and explainability of the technical platforms. Fundamentally, technology should be used to augment (not replace) legal processes. By following these stringent technology principles, generative AI offers an opportunity to have a positive impact on the justice system. Furthermore, ethical AI implementation in the legal tech sector has the potential to have a positive influence on the wider technology industry.

Special thanks to Swetha Meenal Ananthapadmanaban and Thom Mackey for comments and copy edits.

Bibliography 

Allen & Overy. “A&O announces exclusive launch partnership with Harvey.” Allen & Overy - News and Insights, Published: 15 Feb 2023. allenovery.com/en-gb/global/news-and-insights/news/ao-announces-exclusive-launch-partnership-with-harvey

Altman, Sam. “OpenAI DevDay, Opening Keynote.” Youtube, Posted: 7 Nov 2023. youtube.com/watch?v=U9mJuUkhUzk.

Appleton, Maggie. “Squish Meets Structure: Designing with Language Models.” Maggie Appleton, Published: 5 Sept 2023. https://maggieappleton.com/squish-structure.

Australian Government, Attorney General’s Office. “Australia’s anti-discrimination law.” 2023. ag.gov.au/rights-and-protections/human-rights-and-anti-discrimination/australias-anti-discrimination-law.

Australian Government, Federal Register of Legislation. “Privacy Act 1988.” 2023. legislation.gov.au/Details/C2023C00232.

Bender, Emily M. “Opening remarks on AI in the Workplace: New Crisis or Longstanding Challenge” (Medium, Published: 2 Oct 2023) medium.com/@emilymenonbender/opening-remarks-on-ai-in-the-workplace-new-crisis-or-longstanding-challenge-eb81d1bee9f 

Blakkarly, Jarni. “Facial recognition technology in use at major Australian stadiums.” Choice, Last updated: 5 July 2023. choice.com.au/consumers-and-data/data-collection-and-use/how-your-data-is-used/articles/facial-recognition-in-stadiums.

Byrom, Natalie. “AI risks deepening unequal access to legal information.” The Financial Times, Published: 2023. ft.com/content/2aba82c0-a24b-4b5f-82d9-eed72d2b1011.

Chesterman, Simon. Artificial intelligence and the limits of legal personality. Published by Cambridge University Press for the British Institute of International and Comparative Law. Quarterly 69, no. 4 (2020): 819-844.

Felten, Ed; Manav Raj; and Robert Seamans. How will Language Modelers like ChatGPT Affect Occupations and Industries? arXiv preprint arXiv:2303.01157, 2023.

Chia, Hui; Daniel Beck; Jeannie Marie Paterson; and Julian Savulescu. Autonomous AI: what does autonomy mean in relation to persons or machines? Law, Innovation & Technology, DOI: 10.1080/17579961.2023.2245679, 2023.

Jowitt, Joshua. “Ian McEwan’s Machines Like Me and the thorny issue of robot rights.” Australia: The Conversation, Published: 17 April 2019. theconversation.com/ian-mcewans-machines-like-me-and-the-thorny-issue-of-robot-rights-115520.

The LegalTech Fund, “We’re thrilled to share the first version of our early-stage legalTech generative AI market map.” Linkedin, Posted: October 2023. linkedin.com/posts/the-legaltech-fund_were-thrilled-to-share-the-first-version-activity-7122701972826152960-KOZX

Lohr, Steve. “A.I. Is Coming for Lawyers, Again.” The New York Times, 10 April 2023.

Murray, Toby; Marc Cheong; and Jeannie Marie Paterson. The Flawed Algorithm At The Heart Of Robodebt. Pursuit by University of Melbourne, 10 July 2023.

Nay, John J., David Karamardian, Sarah B. Lawsky, Wenting Tao, Meghana Bhat, Raghav Jain, Aaron Travis Lee, Jonathan H. Choi, and Jungo Kasai. Large Language Models as Tax Attorneys: A Case Study in Legal Capabilities Emergence. arXiv preprint arXiv:2306.07075 (2023).

O’Neil, Cathy. Chapter 5: Civilian Casualties - Justice in the Age of Big Data in Weapons of Maths Destruction: How Big Data Increases Inequality and Threatens Democracy. Allen Lane, 2016.

OpenAI. “Embeddings.” OpenAI, Accessed: 7 Nov 2023. platform.openai.com/docs/guides/embeddings/what-are-embeddings.

Paterson, Jeannie Marie. Facial Recognition Technology. Brief, Published: April 2023

Pinsent Masons. “AI and the legal function.” Pinsent Masons - Brain Food, Accessed: 7 Nov 2023. pinsentmasons.com/thinking/brain-food/ai-and-the-legal-function.

Spielkamp, Matthias. Inspecting Algorithms for Bias. MIT Technology Review, 12 June, 2017.

Wattenberger, Amelia. “Why Chatbots Are Not the Future.” Wattenberger, Accessed: 7 Nov 2023. https://wattenberger.com/thoughts/boo-chatbots.

Wattenberger, Amelia. “Getting creative with embeddings.” Wattenberger, Accessed: 7 Nov 2023. https://wattenberger.com/thoughts/yay-embeddings-math.

The White House. “Blueprint for an AI Bill of Rights”. The White House, Published: 11 April 2023. whitehouse.gov/ostp/ai-bill-of-rights.
