Recent media coverage has highlighted remarkable achievements by AI programs, including the ability to write speeches, pass exams, and outperform Jeopardy! champions. These advancements are also making an impact on the legal industry. However, risk management professionals emphasize that it is crucial to approach these developments with caution.
IBM defines Artificial Intelligence (AI) as a “field, which combines computer science and robust datasets, to enable problem-solving.” Utilizing algorithms, AI systems draw on available data to output predictions. A subfield of AI known as the “Large Language Model,” or LLM, is a key application of “Natural Language Processing,” which focuses on enabling computers to comprehend and generate human language. For more on Large Language Models, see OECD Digital Economy Papers (2023), “AI Language Models: Technological, Socio-Economic and Policy Considerations.” LLMs are trained on vast amounts of data (e.g., human speech, written text, research databases, and product inventories) and learn to predict an appropriate response to a prompt. Voice assistants such as Siri and Alexa are familiar examples of natural language processing already in use in daily life.
One rapidly evolving Natural Language Processing tool called “ChatGPT” is garnering attention in the legal industry and has been featured in many recent news stories. ChatGPT and similar applications, such as Google’s Bard AI and Meta’s LLaMA, represent a more sophisticated form of LLM. Experts estimate that ChatGPT utilizes over a trillion parameters (i.e., the learned values that shape the model’s responses) to generate detailed written answers to prompts. This advanced system is being tested for answering exam essay questions, producing marketing content, and facilitating language translation, among many other potential uses.
As it relates to the legal sector, however, LLMs are currently incapable of conducting accurate or reliable legal research, appropriately analyzing case law, recommending sound legal strategies, or investigating facts. They are not substitutes for the legal analysis and research performed by trained, licensed lawyers or the diligent work of paralegals and legal assistants. LLMs do not consistently provide sources for the information they present, and their responses to legal prompts often generate fake caselaw citations and quotes that convincingly mimic authenticity – creating a significant risk in their current use within the legal services sector.
Beyond the qualitative issues addressed above, attorneys must also avoid potential ethical violations of the duty of confidentiality, as governed by Ohio Professional Conduct Rule 1.6, when providing prompts to LLMs. ChatGPT and other AI tools are data-driven systems that collect information for a wide range of uses well outside the bounds of the duty of confidentiality. How the data users enter as prompts will be used is not transparent. ChatGPT’s current version includes notifications cautioning against the inclusion of confidential content; users should refrain from entering any sensitive data into queries or prompts for all AI tools.
AI technology has made significant strides, with LLMs and tools like ChatGPT already having clear utility and impressive functionality. However, it is vital to recognize the limitations of these systems, exercise skepticism regarding their output, and ensure adherence to all rules of professional conduct when utilizing technology.
OBLIC takes pride in delivering relevant and timely information to our insured attorneys. Please remember our complimentary loss prevention hotline – a resource for policyholders to access helpful recommendations, ethics consults and sample forms. We’re here to help resolve your questions and be of service to you!
Gretchen K. Mote, Esq.
Director of Loss Prevention
Ohio Bar Liability Insurance Co.
Merisa K. Bowers, Esq.
Loss Prevention Counsel
Ohio Bar Liability Insurance Co.