As artificial intelligence becomes part of everyday business operations, it’s tempting to see it as a shortcut to productivity, especially when it comes to writing and managing policies. But when you hand over your compliance documentation to generative AI, you might be creating more problems than you solve.
According to Andrew Lawrence, CEO of de.iterate, the risks aren’t always obvious. “AI can make your writing sound cleaner and more professional,” he explains, “but sometimes those improvements come with a hidden cost.”
Generative AI tools are trained to enhance readability. They simplify, restructure and expand text to make it flow more naturally. But in doing so, they can inadvertently change meaning. And in compliance, language precision is everything.
Lawrence gives a simple example: “You start with one clear requirement, and AI rephrases it so it reads better. But now, instead of one obligation, you’ve accidentally committed to three.”
Those subtle linguistic shifts may seem harmless in everyday writing, but in a compliance context, they carry legal, operational, and reputational consequences. Once a policy says 'you will', it becomes an expectation, and potentially an audit point.
Left unchecked, AI has a tendency to keep generating. It rephrases its own content, layering on synonyms and restatements until the document becomes bloated. The result? Policies that are longer, wordier, and harder for people to read, remember, and apply.
“When policies get too long, they stop doing their job,” says Lawrence. “We’re supposed to be managing risk. But if nobody can get through the document, or understand what’s actually required of them, the policy itself becomes a risk.”
Relying on AI to ‘fix’ your compliance documents can feel like a time-saver, but setting proper guardrails often takes longer than writing the content yourself.
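What might such a guardrail look like in practice? As a rough illustration only (the keyword list and the check itself are assumptions for this sketch, not part of any compliance standard or of de.iterate's tooling), a simple script can flag when an AI rewrite introduces more binding language than the original contained:

```python
import re

# Illustrative guardrail: flag new binding language an AI rewrite may have
# introduced. The keyword list is an assumption, not an authoritative set.
OBLIGATION_TERMS = re.compile(r"\b(will|shall|must|ensure|guarantee)\b", re.IGNORECASE)

def count_obligations(text: str) -> int:
    """Count words that typically create binding commitments in a policy."""
    return len(OBLIGATION_TERMS.findall(text))

def check_rewrite(original: str, revised: str) -> None:
    """Warn if the revised text commits to more than the original did."""
    before, after = count_obligations(original), count_obligations(revised)
    if after > before:
        print(f"Review needed: obligations rose from {before} to {after}.")
    else:
        print("No new obligation language detected.")

original = "Staff must complete security training annually."
revised = ("Staff must complete security training annually, and the company "
           "will verify completion and shall retain records indefinitely.")
check_rewrite(original, revised)  # flags two new commitments
```

Even a crude check like this forces a human to review what changed before a policy ships, which is exactly the gatekeeping Lawrence describes.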
“The benefit of doing it manually,” Lawrence notes, “is that you know exactly what you’re committing to. You can gatekeep what’s included. You can stop and ask: are we really going to do this, or are we just saying we do?”
That intentionality—deciding what belongs and what doesn’t—is something AI can’t replicate. It’s also essential for embedding compliance in day-to-day operations, rather than treating it as paperwork.
Perhaps the biggest risk of all is what happens to us. When we let AI take over the cognitive heavy lifting, we stop exercising our own critical thinking. If we're not engaging with the content, we're not retaining the information; it just sits at the periphery of memory.
AI can be a powerful assistant, but not a substitute for human judgment. Especially in compliance, where every word carries weight, the smartest thing you can do is keep people in the loop.
In short: letting AI loose on compliance documentation might make it sound better, but it could also make it riskier. Precision, context, and human oversight still matter most.