Best Practices for Use of Generative AI
Posted on January 24, 2024

OBLIC continues to monitor the developing use of AI by lawyers. A recent advisory opinion from Florida provides guidance.

The Florida Bar Board of Governors’ Review Committee on Professional Ethics issued Advisory Opinion 24-1 Regarding Lawyers’ Use of Generative Artificial Intelligence in January 2024. Advisory Opinion 24-1 concludes that ethical use of generative AI is permissible, “but only to the extent that the lawyer can reasonably guarantee compliance with the lawyer’s ethical obligations.” Referencing a widely reported case from New York, the Advisory Opinion quotes U.S. District Court Judge P. Kevin Castel: “technological advances are commonplace and there is nothing inherently improper about using a reliable artificial intelligence tool for assistance.”

However, the Advisory Opinion identifies critical professional responsibility aspects that must be considered when employing generative AI and, more generally, new technologies in law practice:

  • Duties of confidentiality
  • Obligations of candor to the tribunal and truthfulness in statements to others
  • The requirement to avoid frivolous contentions and claims
  • Duty to ensure reasonableness in fees and costs
  • Compliance with advertising rules
  • Responsibilities of reasonable oversight of subordinate attorneys, staff, and vendors

Florida’s Advisory Opinion organizes these aspects into four sections:

Confidentiality. The Advisory Opinion identifies the confidentiality of client information as a top concern. Because “machine learning” continues to develop responses to inquiries based on additional input, generative AI “raises the possibility that a client’s information [or sensitive work product] may be stored within the program and revealed in response to future inquiries by third parties.” Referencing prior technology ethics opinions, the Florida Bar encourages attorneys to be attentive to the practices and reputation of the provider (or developer). This section concludes with guidance that a proprietary or in-house generative AI tool (one that is trained on the limited universe of that firm’s work product and controlled inputs and is inaccessible to non-firm attorneys or third parties) may allay confidentiality concerns.

Oversight of Generative AI. The Florida Advisory Opinion draws guidance from Florida Rule 4-5.3 (which is substantially similar to Ohio RPC 5.3) for the oversight of an AI assistant. In this section, the Board of Governors’ Review Committee on Professional Ethics advises as follows:

  • A supervisory attorney should ensure that internal firm policies are in place to “reasonably assure” that use of generative AI is compatible with a lawyer’s professional obligations.
  • Work product generated by any AI tool, as with a non-lawyer assistant, must be reviewed by the lawyer. This means that any cases, statutes, or other propositions of law purported to be cited by the generative AI tool must be independently verified.
  • Reasonable oversight and awareness of practices, policies, and conduct also applies to third-party vendors, such as outside technology companies that the lawyer uses in their law practice.
  • While it might seem a bit outside of current capabilities, the Florida Advisory Opinion also reminds attorneys to avoid the delegation of tasks that must be done by an attorney. Generative AI tools should not be negotiating claims, rendering legal advice, or entering into a lawyer-client relationship.

Legal Fees and Costs. The Advisory Opinion broadly reminds attorneys of obligations and restrictions on ethical billing practices. Time should not be inflated to account for any efficiency gained by AI use, and attorneys should not bill time to a client to gain general competence, whether in an area of law or in the use of technology. The Advisory Opinion specifically advises:

If a lawyer is unable to determine the actual cost [for using a generative AI tool] associated with a particular client’s matter, the lawyer may not ethically prorate the periodic charges of generative AI and instead should account for those charges as overhead. (emphasis added)

Lawyer Advertising. Finally, the Advisory Opinion addresses attorneys’ responsibility to comply with advertising rules. In the context of developing generative AI, attorneys are reminded that Florida prohibits the use of the voice or image of a person who is not actually a lawyer or employee of the firm unless appropriate disclaimers are made. Lawyers are urged to be cautious with the use of intake chatbots (often deployed on lawyers’ websites), as these systems are often designed to be accommodating, which could result in the delivery of inaccurate information. The Florida Advisory Opinion further advises that attorneys disclose to prospective clients interacting with such chatbots that they are not in fact talking with a lawyer or employee of the firm. If a prospective client interacting with a lawyer’s chatbot is already represented by counsel, does the attorney have a duty to program the chatbot to discontinue further communication?

Further questions and guidance will emerge as the technology continues to develop and its use becomes more refined. While much of the discussion now concerns generative AI, advanced analytic AI is simultaneously being developed. How will “decision intelligence” be used in our field to analyze legal strategy, likelihood of success, or risk? Will attorneys be responsible for relying on predicted outcomes rendered by this next generation of AI? Those questions will be addressed in due time. Here are current best practices for the use of generative AI from professional responsibility practitioners:

  • Due to the novelty of the technology, the fact that an AI system is trained on input data, and the duties of confidentiality, the best guidance currently is to obtain client consent before using (especially) a third-party generative AI program. Cloud-based technology, now commonplace and generally accepted as secure, was treated similarly at its outset: professional responsibility experts recommended obtaining client consent for its use.
  • Ensure truthfulness and accuracy of any work product generated by a generative AI tool.
  • Reasonably investigate privacy and security standards for any technology products used in your firm.
  • Establish internal firm policies as to the use of generative AI and vetting of third-party vendors.
  • Finally, prioritize confidentiality of client information through cybersecurity measures, appropriate data collection and destruction policies, and cautious use of third-party generative AI tools.

As we continue to monitor ethics decisions, advisory opinions, and grievances related to the use of novel technology, these recommendations will be refined and may change as the technology matures.

OBLIC’s Loss Prevention team is interested in how you’re utilizing AI. Do you anticipate the regular use of generative AI in your practice? Are you considering developing or implementing an AI tool to streamline your practice? We’d like to hear from you!

Gretchen K. Mote, Esq.
Director of Loss Prevention
Ohio Bar Liability Insurance Co.
Direct:  614.572.0620
[email protected]
Merisa K. Bowers, Esq.
Loss Prevention Counsel
Ohio Bar Liability Insurance Co.
Direct:  614.859.2978
[email protected]


This information is made available solely for loss prevention purposes, which may include claim prevention techniques designed to minimize the likelihood of incurring a claim for legal malpractice. This information does not establish, report, or create the standard of care for attorneys. The material is not a complete analysis of the topic and should not be construed as providing legal advice. Please conduct your own appropriate legal research in this area. If you have questions about this email’s content and are an OBLIC policyholder, please contact us using the information above.