Proposed Evidence Rules Tackle AI Evidence
Posted on January 21, 2025

In late 2024, the U.S. Judicial Conference's Advisory Committee on Evidence Rules proposed amendments that would govern evidence generated by AI technology.

The Committee proposed three principal amendments:

  • New Rule 707, which would explicitly subject AI-generated outputs to the same reliability standards that Rule 702(a)–(d) imposes on human expert witnesses.
  • A new subdivision (c) to Rule 901, which would standardize a test for evidence challenged as having been fabricated or altered by AI (e.g., deepfakes).
  • An amendment to Rule 901(b)(9) to include, among the examples that satisfy Rule 901(a), evidence generated by artificial intelligence.

U.S. District Judge Edmund Sargus (S.D. Ohio, Columbus) serves on the Advisory Committee on Evidence Rules and teaches a seminar on evidence at the Ohio State University Moritz College of Law. Judge Sargus explains the proposal as requiring that “AI-generated opinions meet the Daubert standards” that have traditionally applied to expert witness testimony. He expects that only one of the proposed rule changes will ultimately be adopted: either the amendments to Rule 901 or the addition of Rule 707. In the interim, Judge Sargus suggests that litigants be prepared for judicial review of evidence created with AI technology under a Daubert test.

In a recent ABA Litigation Section CLE titled “Cybersecurity, Data Privacy, and Artificial Intelligence: Ethics and Best Practices for Lawyers and Law Firms,” the panelists reflected on the comment to Rule of Professional Conduct 1.1 that requires attorneys to stay abreast of the implications of technological changes in the practice of law. (Comment [8] to Ohio Rule of Professional Conduct 1.1.) Being able to identify the capabilities of AI and to recognize AI-generated products such as images or even recordings may implicate a new standard of care in the future, particularly in light of the duty of competence and the obligations imposed by other Rules of Professional Conduct.

While new evidence rules may not be finalized for a couple of years, the proposed changes to the Federal Rules of Evidence highlight the need for lawyers to be AI-aware and AI-conversant. As the use of generative AI and AI-driven analytics grows, this technology will permeate virtually every practice area and industry. Lawyers must be able to recognize, and where appropriate challenge, purported “deepfakes” and other unreliable evidence created by AI.

Stay tuned for further developments. In the meantime, take a look at these related OBLIC resources as you develop your technological competency:
  • New Developments on AI for Ohio Attorneys (October 23, 2024)
  • Cybersecurity and Your Firm (October 2, 2024)
  • Green-lighting Use of AI (July 11, 2024)

Questions? We’re here to help you navigate the intersection of ethics and technology in your practice. Call the Loss Prevention Hotline or email us.

Gretchen K. Mote, Esq.
Director of Loss Prevention
Ohio Bar Liability Insurance Co.
Direct:  614.572.0620
[email protected]
Merisa K. Bowers, Esq.
Loss Prevention & Outreach Counsel
Ohio Bar Liability Insurance Co.
Direct:  614.859.2978
[email protected]

This information is made available solely for loss prevention purposes, which may include claim prevention techniques designed to minimize the likelihood of incurring a claim for legal malpractice. This information does not establish, report, or create the standard of care for attorneys. The material is not a complete analysis of the topic and should not be construed as providing legal advice. Please conduct your own appropriate legal research in this area. If you have questions about this email’s content and are an OBLIC policyholder, please contact us using the information above.