On Being Irreplaceable
This blog post is for informational purposes only and does not constitute legal advice. For specific legal concerns, please consult a qualified attorney.
Introduction
In my recent “AI’s Legal Blind Spot” post, I looked at some of the ways large language models (LLMs) can struggle with the precision and context-specific demands of legal work, and at strategies that can help make them more reliable. Today, I want to flip the perspective and talk about what human lawyers bring to the table that makes us—at least in my opinion—irreplaceable.
To be clear: I’m not here to scare anyone away from AI. I’m convinced that lawyers who learn to use AI effectively are set up to thrive, maybe even more than before. I think it’s important to home in on what we do best relative to AI, and to focus on strengthening those skills.
In my view, lawyers who delay or choose not to use AI might struggle in the long term, not because AI replaces everything they do, but because it reshapes what clients value most.
Preference Prediction, Empathy, and Intuition
Decision-makers in business, government, and society are humans, and ultimately productive work revolves around what humans want and what they don't. LLMs can digest data and spot patterns faster than any human lawyer, but they don't inherently know why humans prefer one option over another.
[Screenshot caption: An LLM getting it right in the wrong way.]
I recently asked a reasoning LLM to help with the first draft of my firm’s privacy statement. Technically, it mostly nailed it. But one detail made me cringe: “we may share data with AI tool providers.” This wasn’t technically wrong (we had consent covered elsewhere), but emotionally?
Yikes.
Even a novice attorney will probably recognize that phrasing as alarming precisely because of what’s not there. The LLM completely missed the emotional resonance, and the occasional need to get things technically “wrong” in order to serve a different (but equally important) interest.
Most humans have an innate ability to gauge the emotional impact something is likely to have on other humans, and it’s not terribly difficult to see how this helps us navigate competing interests.
An experienced lawyer might humor the other side (or the client, for that matter) with an “agreement to agree” clause in a contract if it carries no practical risk and it’ll get the deal done. At that point it’s not about enforceability; it’s about signaling intent and making everyone feel better.
Human preference is deeply emotional, intuitive, and to some degree inexpressible even when mutually understood.
I can’t explain why I prefer chocolate ice cream over strawberry, but other people intuitively know that I’m likely to have a preference, and beyond that they’ll know to ask me or anyone else before ice cream is served. Preferences can evolve over time, and even reverse depending on circumstances.
The “why” of all this can get very esoteric very fast, and touches on everything from early childhood development to free will to mirror neurons to the collective unconscious. These topics (and I’m sure I’m missing some) are all well beyond my expertise.
I have no idea if AI will ultimately be able to predict and model human preference persuasively, but as of this writing I haven’t seen anything that leads me to believe it’ll be “solved” in the near future.
Knowing When Context Changes
Humans also excel at recognizing context shifts. Going back to the privacy statement, at one point I asked the LLM to update a draft and saw something equally amusing and alarming in its thought process:
[Screenshot of the model’s reasoning. Caption: “Interestingly” is certainly a way to look at it.]
It had confused my process instructions (“don’t use placeholders”) with the content requirements themselves, a mix-up it’s hard to imagine even a middle school student making. This could easily have been the result of my own sloppy prompting at that point, but in any case it’s the kind of ambiguity a human would probably just ask the teacher about.
This meta-awareness of subtle shifts in the type of cognitive work required is something humans are naturally pretty good at. AI sometimes trips over the gear shifts.
Orchestration and Robot Symphonies
Beyond intuition and meta-awareness, physics and cost constraints currently make the “giant robot brain” scenario seem implausible.
Compute and energy consumption scale drastically as LLMs get larger and smarter. The AI industry has shifted toward increasing efficiency as the compute required to train new frontier models has pushed against the limits of existing data centers, demanding massive capital investment to build bigger ones.
Until costs fall substantially, businesses are likely to gravitate toward specialized, task-specific LLMs rather than one extremely expensive super-intelligent AI.
If (and when) agentic workflows become production-ready, there could be a real opportunity for human lawyers to become orchestrators, directing teams of specialized LLM agents rather than ceding that role to yet another LLM.
Maybe one reports on regulatory concerns, another reviews clauses against a playbook and business input, and a third checks for consistent cross-references and defined terms. The lawyer quality-checks and integrates their work, then chooses the next assignment.
This model could be cost-effective, and it could also help address ethical questions around the unauthorized practice of law. Just as importantly, it would put humans squarely in the middle of the decision-making process.
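To make that concrete, here’s a minimal sketch of what such an orchestration loop might look like, assuming a generic chat-style model API. Everything in it is hypothetical: call_llm is a stub standing in for whatever provider you’d actually wire in, and the three prompts are simplified placeholders, not real playbook language.

```python
# A minimal sketch of the lawyer-as-orchestrator loop described above.
# Hypothetical throughout: call_llm() is a stub for a real model API,
# and the agent prompts are simplified placeholders.

def call_llm(task_prompt: str, document: str) -> str:
    # Stub: replace with a real call to your model provider of choice.
    return f"[model response for task: {task_prompt[:45]}...]"

# Each "agent" is just a narrow, task-specific prompt.
AGENTS = {
    "regulatory": "Report any provisions that raise regulatory concerns.",
    "playbook": "Review each clause against our playbook and business input.",
    "consistency": "Check cross-references and defined terms for consistency.",
}

def orchestrate(document: str) -> dict[str, str]:
    """Fan the document out to each specialist, keeping the human in
    the loop: no report is accepted until the lawyer signs off."""
    accepted: dict[str, str] = {}
    for name, prompt in AGENTS.items():
        report = call_llm(prompt, document)
        print(f"--- {name} agent ---\n{report}")
        # The lawyer quality-checks each report and can send it back.
        while input(f"Accept the {name} report? [y/redo] ").strip() == "redo":
            report = call_llm(prompt, document)
            print(report)
        accepted[name] = report
    return accepted

if __name__ == "__main__":
    orchestrate("…contract text would go here…")
```

The detail that matters is the review gate: the models propose, but the lawyer disposes, deciding what gets accepted and what gets reassigned.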
Are We Really Irreplaceable?
Until robots can persuasively simulate human intuition, emotional intelligence, meta-awareness, and nuanced judgment, my answer for anyone whose role requires these skills is an emphatic “yes.”
I can easily imagine a future where human lawyers become more essential and effective than ever through thoughtful use of AI.
Thanks for reading, and may you be well.
Jace