LLMs in Law Practice: Safeguarding Client Confidentiality

This blog post is for informational purposes only and does not constitute legal advice. For specific legal concerns, please consult a qualified attorney.

Introduction

The legal industry is experiencing a major shift driven by large language models (LLMs). These AI tools are transforming how we draft contracts, conduct due diligence, and perform legal research, promising efficiency gains that were unimaginable a decade ago. Alongside this promise come well-founded concerns about confidentiality, particularly after high-profile incidents like the early-2025 DeepSeek breach, in which over a million records, including user chat histories, were left publicly exposed.

This post outlines some principles-based, practical safeguards intended to help legal professionals navigate this evolving landscape responsibly.

Important Disclaimer: This post concerns US-based ethical practices and references American Bar Association (ABA) Model Rules. State bar rules may vary, so check your specific state's ethical guidelines.

Also worth noting: AI technology is evolving rapidly, and some of these suggestions (data anonymization techniques in particular) may become outdated quickly. Consider this post a snapshot as of early 2025.

Ethical Obligations

The ABA’s Formal Opinion 512 (July 2024) reinforces lawyers’ obligations to protect client information when using AI, requiring informed client consent before inputting sensitive data. Some state bodies, such as California’s Committee on Professional Responsibility and Conduct (COPRAC), echo this, explicitly cautioning lawyers against inputting confidential information into AI tools that retain data or use it for training.

The stakes are high. Breaches can lead to severe consequences, such as:

  • Ethical violations under ABA Model Rule 1.6, or equivalent local rules

  • Potential waiver of attorney-client privilege

  • Damage to client relationships and firm reputation

  • Penalties under state privacy laws

Understanding the Risk: Public vs. Private LLMs

Not all LLM implementations provide the same level of confidentiality:

  • Public LLMs (e.g., free versions of ChatGPT)

    • Typically store user inputs by default

    • Some providers train their models on this data on an “opt-out” basis.

    • It’s like holding a client conversation in a crowded coffee shop: the owners can record everything, use your words to train their staff, and other patrons might benefit from overhearing your discussion.

  • Private/Enterprise LLMs (e.g., ChatGPT Enterprise, Claude for Enterprise)

    • Typically provide technical and contractual safeguards similar to those of cloud software and data-management platforms already in widespread use, significantly reducing confidentiality risk relative to public LLMs

    • Generally don’t use client inputs for training

    • May offer zero-retention modes

    • May offer enhanced data security (e.g., AES-256 encryption at rest and TLS in transit)

    • May offer more customer-favorable contractual protections

    • Overall, more like holding a client conversation in a private conference room with a signed NDA

This distinction matters: with equivalent contractual and technical safeguards in place, private enterprise or internally deployed LLMs arguably pose no more confidentiality risk than the document management systems already in widespread use today.

Five Practical Strategies for Applying Ethical Confidentiality Standards to LLM Use

1. Clearly Communicate and Obtain Explicit Client Consent

  • Don’t bury AI consent in fine print. Clearly communicate the risks, benefits, and limitations of LLMs, how they’re used, and the implications for the client’s matter.

  • Provide an opportunity for Q&A, mutually agree on whether AI is appropriate for the matter, and document clear consent.

  • Consider offering the client a screen share demonstration of how LLMs can be used effectively with anonymized examples. This can be particularly helpful for clients who are anxious about or unfamiliar with AI technology.

  • Distinguish between client-facing AI use (which requires specific consent) and back-office applications. You likely don't need explicit consent when using AI for administrative tasks, research, or business operations unrelated to client matters, but transparency about your general AI practices builds trust.

  • The line has gotten blurry with word processing applications that now include built-in AI features. Since these are becoming native (i.e., not optional) to office software platforms like Google Workspace, it's worth acknowledging this reality with clients while committing to anonymize their data whenever possible.

2. Carefully Select Your LLM Provider

Prioritize vendors with:

  • No data retention or zero-retention modes.

  • SOC 2 compliance and strong encryption standards (AES-256, TLS).

  • Access controls and “no training” terms in the contract.

Providers like OpenAI and Anthropic offer B2B services with enterprise-grade confidentiality safeguards (a minimal API-level sketch appears at the end of this section).

Conversely, some consumer-grade AI services have experienced security incidents and generally offer fewer technical and contractual protections. Exercise extreme caution before using them for client work.
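To make this concrete, here is a minimal sketch of a retention-conscious API call, assuming the early-2025 version of OpenAI's Python SDK. The `store` flag and model name are illustrative; confirm against current documentation, and remember that your contract terms, not an API flag, ultimately govern retention.

```python
# Illustrative sketch only: retention and training rights are governed by
# your contract (DPA, "no training" terms), not by an API flag alone.
from openai import OpenAI  # pip install openai

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model choice
    messages=[{
        "role": "user",
        "content": (
            "Summarize the general enforceability of liquidated damages "
            "clauses under US contract law. Use no client-specific facts."
        ),
    }],
    store=False,  # ask the API not to retain this completion for later tooling
)
print(response.choices[0].message.content)
```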

3. Use Anonymized Data

Regardless of your provider, consider adopting a simple rule of thumb: approach LLM interactions the way you would approach discussing a client matter with a colleague in a public elevator. Use anonymized, generic, or hypothetical scenarios. Experienced attorneys already know how to do this by necessity; apply the same skill to LLM prompts (a small redaction sketch follows the bullets below).

  • Challenge yourself: “Can I analyze this legal issue effectively without any identifying information?” In my experience, the answer is often yes.

  • When crafting prompts, ask “Would I be comfortable discussing this matter exactly this way if I knew a third party could overhear me?”
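Here is a minimal, illustrative Python sketch of a pre-submission redaction pass. The function name, placeholder tokens, and patterns are all hypothetical, and regexes alone will miss plenty; production workflows warrant purpose-built redaction tooling (NER-based detection, human review).

```python
import re

# Hypothetical redaction pass: swap known identifiers for neutral
# placeholders before a prompt leaves the firm. Illustrative only.
PATTERNS = {
    "[EMAIL]": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "[PHONE]": re.compile(r"\b(?:\+?1[\s.-]?)?\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}\b"),
    "[SSN]":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def scrub(prompt: str, client_terms: list[str]) -> str:
    """Replace emails, phone numbers, SSNs, and matter-specific names."""
    for placeholder, pattern in PATTERNS.items():
        prompt = pattern.sub(placeholder, prompt)
    for i, term in enumerate(client_terms, start=1):
        prompt = re.sub(re.escape(term), f"[PARTY_{i}]", prompt, flags=re.IGNORECASE)
    return prompt

raw = "Acme Corp (contact: jane@acmecorp.com, 555-867-5309) wants out early."
print(scrub(raw, client_terms=["Acme Corp"]))
# -> "[PARTY_1] (contact: [EMAIL], [PHONE]) wants out early."
```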

4. Implement Clear Policies

  • Develop written guidelines for LLM use in client matters, building on familiar confidentiality heuristics that lawyers already use (elevator conversations, how you'd brief a new hire, etc.)

  • Require onboarding training covering AI confidentiality standards, risks, and best practices

5. Implement Strong Technical Controls for Internal Deployments

  • Encrypt data both at rest and in transit

  • Employ secure virtual private cloud (VPC) deployments (e.g., Azure OpenAI within a firm-controlled environment)

  • Consider “prompt firewalls” to intercept and remove sensitive identifiers before they reach the LLM (a sketch follows below)
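A prompt firewall can be as simple as a gate function wrapped around every outbound call. The sketch below (names and patterns hypothetical) blocks rather than redacts, forcing the author to anonymize and resubmit:

```python
import re

# Hypothetical "prompt firewall": inspect outbound prompts and refuse to
# forward anything matching a deny-list pattern to the model endpoint.
DENY_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),          # SSN-shaped strings
    re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),   # email addresses
    re.compile(r"\bprivileged\b", re.IGNORECASE),  # privilege markers, for review
]

class BlockedPromptError(Exception):
    """Raised when a prompt trips the firewall instead of being sent."""

def firewall(prompt: str) -> str:
    for pattern in DENY_PATTERNS:
        if pattern.search(prompt):
            raise BlockedPromptError(
                f"Blocked: matched {pattern.pattern!r}. Anonymize and resubmit."
            )
    return prompt  # safe to forward to the LLM client

# Usage: wrap every outbound call, e.g. send_to_llm(firewall(user_prompt))
```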

Looking Ahead

Emerging technologies may support even stronger privacy protections.

Homomorphic encryption and confidential computing, which could allow LLMs to process data without ever “seeing” the underlying information, may soon become viable options. These may seem obscure, but if proven reliable and cost-effective at scale, they could pave the way to much broader AI adoption in the practice of law. The toy example below shows the core idea.
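For the curious, here is a toy demonstration using TenSEAL, an open-source homomorphic encryption library (pip install tenseal). The numbers stay encrypted throughout the computation; applying the same idea to full LLM inference remains research-stage and computationally expensive.

```python
import tenseal as ts  # open-source homomorphic encryption library

# Standard CKKS setup for approximate arithmetic on encrypted real numbers.
context = ts.context(
    ts.SCHEME_TYPE.CKKS,
    poly_modulus_degree=8192,
    coeff_mod_bit_sizes=[60, 40, 40, 60],
)
context.global_scale = 2**40

# Encrypt some (hypothetical) client data, then compute on the ciphertext.
billed_hours = ts.ckks_vector(context, [3.5, 7.25, 1.0])
encrypted_fees = billed_hours * 450.0  # computed without ever decrypting

print([round(x, 2) for x in encrypted_fees.decrypt()])  # ~[1575.0, 3262.5, 450.0]
```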

Conclusion

Transactional lawyers can ethically integrate AI by adopting a proactive approach to confidentiality.

Clear client consent, anonymized data, careful provider selection, strong internal policies, and robust technical safeguards form the foundation of responsible AI use.

Thanks for reading, and may you be well.

Jace
