AI in the Legal Field Is Inevitable. These Are the Ways We Should Implement It.

In the past few months, the use of ChatGPT has become widespread thanks to its uniquely open access. Suddenly, artificial intelligence (AI) is available for everyone to use. This has sharpened a question that has been brewing for a while: what role will AI play in our society moving forward, and which roles are appropriate for intelligent machines? In many cases, AI may take on parts of a job while ultimately leaving a human at the wheel. It will therefore become important to apply the right kind of AI model in each context. White box models are explainable in human terms, which allows biases to be identified; black box models rely on algorithms that are too complicated or hidden for humans to understand.(1) In the legal profession, AI will become an important tool for lawyers in tasks that can be automated, such as document review, contract drafting, legal research, and assessing the likelihood of success in particular cases.

To begin, some of the simpler tasks performed by lawyers should be automated to increase efficiency and precision. In many cases, AI can be used to spot errors and avoid costly mistakes.(2) Big law firms already use AI for tasks like document review, the process of poring over documents to assess their sensitivity and relevance to a case, and contract review, the process of reading through a contract to make sure everything is written clearly and on acceptable terms.(3) These processes are tedious and subject to human error; AI can make the review more thorough and shorten the process. AI is also valuable for conducting legal research through natural language processing, which can surface information such as relevant precedent and case law, streamlining another once-laborious task.(4) Corporations once paid high hourly rates to firms for work that can now be automated; one study of lawyers at big firms found that 13 percent of their work hours could be automated with existing technology.(5) The share of work that machines can take on will only grow as the technology advances. Adopting AI may impose high fixed costs, but machines do not need wages, healthcare, or benefits, and they can complete many tasks faster than any human, meaning that in the long run machines will cut costs for firms.(6) Ultimately, the use of machine learning will become inevitable: firms that implement it will start to offer lower prices for their services, forcing others to do the same in order to compete.
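As a rough illustration of the legal-research use case, the sketch below (with an entirely fabricated, three-case "database" and a hypothetical query, not any firm's actual system) shows how natural language processing can rank precedent by relevance to a plain-language question, here using simple TF-IDF similarity.

```python
# Minimal sketch of NLP-assisted legal research on fabricated data (illustration only).
# A plain-language query is matched against case summaries by TF-IDF cosine similarity.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Toy corpus of fictional case summaries standing in for a precedent database.
cases = {
    "Doe v. Acme Corp.": "Employer liability for negligent supervision of an employee.",
    "Smith v. Jones": "Breach of contract over late delivery of manufactured goods.",
    "State v. Roe": "Admissibility of digital evidence obtained without a warrant.",
}

query = "Is an employer responsible when it fails to supervise a worker?"

vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform(list(cases.values()) + [query])

# Compare the query (last row) against every case summary and rank the results.
scores = cosine_similarity(matrix[-1], matrix[:-1]).ravel()
for (name, _), score in sorted(zip(cases.items(), scores), key=lambda x: -x[1]):
    print(f"{name}: relevance {score:.2f}")
```

Production research tools rely on far richer language models and vastly larger corpora, but the core step, ranking candidate authorities against a query, follows the same logic.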

Machine learning can also be extremely valuable for its predictive power. Prediction matters a great deal in the legal field, where lawyers must decide whether to take a case and whether to advise their clients to settle.(7) As AI becomes more accurate at predicting outcomes, the number of cases that actually go to trial will likely decrease. AI is also valuable in litigation finance, in which a third party covers some of the costs of a case in exchange for a return if the client's case is successful. Models are already being built specifically to predict the outcomes of cases, using a case's fact pattern along with relevant precedent.(8) As law firms adopt technology with greater predictive power, others will be forced to do the same or lose their competitive edge.
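To make the idea of outcome prediction concrete, here is a minimal sketch on a tiny, fabricated dataset. The feature names and numbers are invented for illustration; real systems encode fact patterns and precedent far more richly than this.

```python
# Minimal sketch of case-outcome prediction on fabricated data (illustration only).
# Each row is a hypothetical past case: [damages_in_$100k, supporting_precedents, written_evidence].
import numpy as np
from sklearn.linear_model import LogisticRegression

X = np.array([
    [1.0, 4, 1],
    [5.0, 1, 0],
    [2.5, 3, 1],
    [8.0, 0, 0],
    [0.5, 5, 1],
    [6.0, 2, 0],
])
y = np.array([1, 0, 1, 0, 1, 0])  # 1 = favorable outcome, 0 = unfavorable

model = LogisticRegression().fit(X, y)

# Estimate the chance of success for a prospective case, which could inform
# whether to take it on or recommend settlement.
new_case = np.array([[3.0, 2, 1]])
print(f"Estimated probability of success: {model.predict_proba(new_case)[0, 1]:.2f}")
```

The output is a probability rather than a verdict, which is exactly the form of answer a lawyer weighing settlement or case selection would want.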

Of course, several ethical questions arise from the use of AI in the legal field. The importance of equality and fairness in the legal system cannot be overstated, and machines trained on biased data will inevitably produce inequitable outcomes. For example, criminal judges in some states use a tool called Correctional Offender Management Profiling for Alternative Sanctions (COMPAS), which assesses a defendant's risk of recidivism and helps inform officials' decisions about bail, early release, and sentencing. A ProPublica study, however, found this tool to be biased against Black people in its determination of the likelihood of future criminal conduct.(9) Bias in AI outputs can arise from two sources: biased training data or the internal workings of the model.(10) Biased training data can be modified or replaced, and bias arising from the internal workings of a model can be fixed as long as the model is explainable and transparent.
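The kind of disparity ProPublica reported can, in principle, be checked for with a simple audit. The sketch below uses fabricated predictions and group labels (not COMPAS data) to compare false positive rates across two groups, one of the metrics at the center of that debate.

```python
# Minimal fairness-audit sketch on fabricated data: compare false positive rates by group.
# A false positive here means the tool flagged someone as high risk who did not reoffend.
import numpy as np

predicted_high_risk = np.array([1, 0, 1, 1, 0, 1, 0, 1])   # model output
reoffended          = np.array([0, 0, 1, 0, 0, 1, 0, 0])   # observed outcome
group               = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])

def false_positive_rate(pred, actual):
    # Share of people who did NOT reoffend but were still flagged as high risk.
    negatives = actual == 0
    return (pred[negatives] == 1).mean() if negatives.any() else float("nan")

for g in np.unique(group):
    mask = group == g
    fpr = false_positive_rate(predicted_high_risk[mask], reoffended[mask])
    print(f"Group {g}: false positive rate = {fpr:.2f}")
```

A large gap between groups, as in this toy output, is the kind of signal that should prompt scrutiny of the training data or of the model itself.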

To address these ethical questions, it is important to examine the technology behind artificial intelligence. Machine learning can take the form of black box or white box models. In black box models, the internal mechanisms are hidden and cannot be analyzed.(11) Such models are useful when accurate prediction is valuable but knowing how an output is reached is not. In white box models, the internal workings are transparent and can be inspected. Both types of models have valuable applications in the legal industry. In every judicial application of AI, white box models should be used and tested thoroughly. The judicial system is built on justice and equality, and allowing black box models to determine outcomes threatens these principles: their outputs often rest on the analysis of massive amounts of data through complicated or hidden mechanisms, which can allow bias or incomprehensibility to seep in.(12) Ensuring fairness in such high-stakes decisions requires transparency in the models behind outcomes, both to guard against bias and to allow judges to make informed decisions. White box models belong wherever transparency and equity are paramount. Although these models can also be subject to bias, that alone is no reason to discard them; after all, humans are extremely susceptible to bias in ways that are often not transparent. Bias in white box models can often be discovered and eliminated; bias in humans often cannot be.

In the work of lawyers, however, black box models do have value, contrary to what some might think. In many contexts, black box models can perform more complicated processing, which allows them to produce more accurate outputs.(13) Creating explainable AI takes extra work on the programmer's part; developers of black box models are free of that burden and can focus solely on producing the best possible product. In predicting the outcomes of cases to determine settlement amounts or whether to take on a case, black box models can be valuable: since accuracy rather than transparency is the primary goal, they may be more appropriate in such situations.
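The contrast between the two kinds of models can be seen in a small sketch. The example below, on the same sort of fabricated case data as before, trains a shallow decision tree whose rules can be printed and audited line by line, and a random forest whose prediction emerges from hundreds of trees voting and cannot be read off in the same way. The forest is used here only as a stand-in for a comparatively opaque model.

```python
# Minimal sketch contrasting a white box and a (comparatively) black box model
# on fabricated data. The tree's logic is printable; the ensemble's is not.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.ensemble import RandomForestClassifier

X = np.array([[1.0, 4], [5.0, 1], [2.5, 3], [8.0, 0], [0.5, 5], [6.0, 2]])
y = np.array([1, 0, 1, 0, 1, 0])
feature_names = ["damages_in_$100k", "supporting_precedents"]

# White box: every split is visible, so a reviewer can check the rules for bias.
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
print(export_text(tree, feature_names=feature_names))

# Black box (by comparison): hundreds of trees vote; there is no single readable rule set.
forest = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, y)
print("Forest prediction for a new case:", forest.predict([[3.0, 2]])[0])
```

The trade-off is exactly the one described above: the tree can be audited, while the ensemble gives up that legibility in exchange for, in many settings, greater accuracy.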

Those who adopt and adapt to AI will become more efficient, and those who do not will be left behind. Automation has already replaced many blue-collar jobs, and now it is entering the world of white-collar work as well. In many ways, however, machine learning will streamline these jobs rather than replace them. AI must always be used carefully: improper use can raise ethical and legal problems and leave room for fallacies and biases. That is why it is important to apply the right AI model in each context. It is safe to say we will not see AI showing up to court or advising clients on its own anytime soon; the human touch these activities require cannot be replaced just yet. It will take human skill and knowledge to implement AI appropriately across professions. We may as well start figuring it out now.

References

(1) Octavio Loyola-Gonzalez, “Black-Box vs. White-Box: Understanding Their Advantages and Weaknesses from a Practical Point of View,” IEEE Access 7 (2019): pp. 154096-154113, https://doi.org/10.1109/access.2019.2949286.

(2) Ibid.

(3) William J Connell, “Artificial Intelligence in the Legal Profession-What You Might Want to Know,” Computer and Internet Lawyer 35, no. 9 (September 2018), https://www.proquest.com/docview/2108812393?parentSessionId=NyEYddMdty%2Bn6y9Lv4V1VKeF0xAtQutpgKYiha44RHk%3D&pq-origsite=primo.

(4) Rob Toews, “AI Will Transform the Field of Law,” Forbes (Forbes Magazine, October 12, 2022), https://www.forbes.com/sites/robtoews/2019/12/19/ai-will-transform-the-field-of-law/?sh=45fa526c7f01.

(5) Steve Lohr, “A.I. Is Doing Legal Work. But It Won't Replace Lawyers, Yet.,” The New York Times (The New York Times, March 19, 2017), https://www.nytimes.com/2017/03/19/technology/lawyers-artificial-intelligence.html.

(6) William J Connell, “Artificial Intelligence in the Legal Profession-What You Might Want to Know,” Computer and Internet Lawyer 35, no. 9 (September 2018), https://www.proquest.com/docview/2108812393?parentSessionId=NyEYddMdty%2Bn6y9Lv4V1VKeF0xAtQutpgKYiha44RHk%3D&pq-origsite=primo.

(7) Matthew Stepka, “Law Bots: How AI Is Reshaping the Legal Profession,” Business Law Today from ABA (Business Law Today, February 21, 2022), https://businesslawtoday.org/2022/02/how-ai-is-reshaping-legal-profession/.

(8) Rob Toews, “AI Will Transform the Field of Law,” Forbes (Forbes Magazine, October 12, 2022), https://www.forbes.com/sites/robtoews/2019/12/19/ai-will-transform-the-field-of-law/?sh=45fa526c7f01.

(9) Julia Angwin and Jeff Larson, “Bias in Criminal Risk Scores Is Mathematically Inevitable, Researchers Say,” ProPublica, December 30, 2016, https://www.propublica.org/article/bias-in-criminal-risk-scores-is-mathematically-inevitable-researchers-say.

(10) Matthew Stepka, “Law Bots: How AI Is Reshaping the Legal Profession,” Business Law Today from ABA (Business Law Today, February 21, 2022), https://businesslawtoday.org/2022/02/how-ai-is-reshaping-legal-profession/.

(11) Michael Affenzeller et al., “White Box vs. Black Box Modeling: On the Performance of Deep Learning, Random Forests, and Symbolic Regression in Solving Regression Problems,” SpringerLink (Springer International Publishing, April 15, 2020), https://link.springer.com/chapter/10.1007/978-3-030-45093-9_35.

(12) Octavio Loyola-Gonzalez, “Black-Box vs. White-Box: Understanding Their Advantages and Weaknesses from a Practical Point of View,” IEEE Access 7 (2019): pp. 154096-154113, https://doi.org/10.1109/access.2019.2949286.

(13) Ibid.

Isabel Skomro

Isabel Skomro is a member of the Harvard Class of 2024 and a HULR Staff Writer.
