Observation: Saying “no” to AI doesn’t stop AI use. It drives it underground.
Banning AI use doesn’t remove risk; it pushes it out of sight.
Observations · 1 min
This is a pattern I keep seeing across all types of organisations, and one that leaders often avoid confronting.
When organisations say “no” to AI, usage doesn’t stop. It goes quiet.
The work still has to get done. Deadlines don’t move. So people adapt. They use personal accounts, paste documents into public tools, and rely on systems they don’t fully understand.
This creates a false sense of safety. On paper, the business appears conservative or compliant. In practice, risk accumulates invisibly: unmanaged data exposure, misunderstood outputs, and decisions shaped by unsanctioned tools.
The issue is rarely bad intent. When work is heavy and time is scarce, people will use whatever helps them get through the day.
This failure mode compounds when organisations delay change, as described in “Delay creates hidden failure modes”.