AI tools like ChatGPT are everywhere. They’re fast, accessible, and can draft anything from cover letters to recipes.
But what happens when people start using them to make legal decisions?
That’s where things go dangerously wrong.
In this episode of True Law Stories, Iowa truck accident attorney Tim Semelroth breaks down the growing trend of people relying on AI to build or, worse, argue their legal cases. With one wrong prompt, a person can damage their credibility, weaken their case, or make statements that insurance companies use against them.
And it’s happening more often than you think.
The Two Biggest Risks of Using ChatGPT for Legal Advice
Many people assume AI tools are neutral, helpful assistants. But when it comes to personal injury or other serious legal matters, that assumption can backfire.
Risk #1: Loss of confidentiality.
When someone types personal details about their injury, criminal charge, or legal dispute into a public-facing AI tool, that information isn’t protected. Unlike conversations with an attorney, those prompts aren’t covered by attorney-client privilege and could end up being used against the person who typed them.
Risk #2: Hallucinations.
AI tools don’t reason; they predict. They generate convincing-sounding answers even when those answers are completely wrong. Tim Semelroth explains how ChatGPT has invented fake cases, cited laws from the wrong state, and misrepresented how traffic offenses affect civil lawsuits.
The biggest danger? It all looks accurate until it’s too late.
One Client Ignored Legal Advice After a ChatGPT Prompt
In the episode, Tim shares the story of a client who had been involved in a serious crash and was charged with a traffic offense. Tim advised him to hire a criminal defense lawyer to avoid making statements that could hurt his civil case.
But hours later, that client emailed back, confident Tim was wrong, because ChatGPT had told him the opposite.
He insisted the charge “couldn’t be used against him” in a civil case. What ChatGPT didn’t explain was how insurance companies operate, how settlements work, and how recorded statements can still come into play regardless of conviction.
The client nearly damaged his chance of recovery. All because of one AI-generated paragraph.
How Attorneys Actually Use AI Without Risking Their Clients
While Tim warns against using ChatGPT for legal decision-making, he isn’t anti-AI. In fact, his team uses generative AI regularly, for the right tasks.
He outlines how attorneys can safely use AI as:
- A brainstorming assistant
- A grammar and flow editor
- A tool for drafting non-confidential correspondence
- A way to speed up first drafts—not final filings
The key difference? Nothing leaves the office without a human lawyer reviewing it.
He also recommends tools like Perplexity over ChatGPT for research, since they cite real sources and let attorneys verify an answer before trusting it.
If you’re a personal injury client, a lawyer, or someone considering legal action, this episode could protect you from making a costly mistake. The line between help and harm is thinner than ever when it comes to AI tools.