Why Copilot can't fix what's actually broken
The reason AI keeps underwhelming in your organisation isn't the model; it's that knowledge work was never designed to be operated on. No smart layer fixes that.

There's a question that keeps surfacing in every business I work with: why does AI seem so capable in some contexts and so underwhelming in others?
An AI agent can write, test, and fix code with remarkable precision. The same technology, pointed at a folder of Word documents, Excel files, and email threads, struggles to do anything useful. Not because the model is less intelligent. Because the environment is completely different.
Code has structure. It has rules, tests, and clear feedback. When something breaks, the system tells you. When you make a change, you can verify whether it worked. The AI proposes, the system checks, and the loop continues until the work is right.
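That propose-check loop is worth making concrete. Here is a minimal sketch in Python; the function and test are hypothetical, but the shape is the point: the system itself verifies the work, and a wrong answer fails loudly.

```python
# A tiny illustration of code's built-in feedback loop.
# The AI (or a human) proposes an implementation; the test checks it.

def apply_discount(price: float, rate: float) -> float:
    """Return the price after applying a fractional discount."""
    return price * (1 - rate)

def test_apply_discount():
    # If the implementation is wrong, this raises AssertionError
    # immediately. Nothing silently "looks fine".
    assert apply_discount(100.0, 0.2) == 80.0

test_apply_discount()
print("check passed")
```

Swap the `-` for a `+` inside `apply_discount` and the test fails on the next run. That cheap, automatic verification is exactly what most knowledge work lacks.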
Knowledge work has none of this. The information that matters is scattered across documents, inboxes, calendar invites, and conversations that were never recorded. Dependencies live in people's heads. The reasons behind decisions are buried in threads nobody can find.
Consider the spreadsheet. People say the world runs on Excel, and they're right. Finance uses it because you can change a cell and see results immediately. It's reactive, visual, and fast. But underneath, an Excel file is a tangle of formatting, hidden links, volatile formulas, and dependencies that nobody fully maps. Change a number in one cell and something breaks three sheets away, silently. No error message. Just another number that looks fine but isn't.
In code, a mistake causes an error. In a spreadsheet, a mistake just looks like another number.
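The contrast can be shown in a few lines. This is a hedged sketch, with a dictionary standing in for a sheet of cells: in the code-like case a misspelled reference raises an error; in the spreadsheet-like case it silently becomes a default value, and the result is just another plausible number.

```python
# A toy "sheet" of named cells.
cells = {"revenue": 1200, "costs": 900}

# Code-like behaviour: a typo in the reference fails loudly.
try:
    profit = cells["revenu"] - cells["costs"]  # misspelled key
except KeyError as e:
    print(f"error: missing cell {e}")

# Spreadsheet-like behaviour: the same typo silently falls back
# to a default, and the wrong answer looks like a real figure.
profit = cells.get("revenu", 0) - cells["costs"]
print(profit)  # -900: wrong, and nothing tells you so
```

The second `profit` is confidently wrong in exactly the way the broken sheet three tabs away is wrong: no error message, just a number.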
This matters because it's not just a spreadsheet problem. It's how all knowledge work operates. Documents mix data with formatting. Email threads carry decisions that nobody can trace afterwards. Meeting context disappears the moment everyone leaves the room. The information exists. The structure to hold it doesn't.
Microsoft's bet and why it keeps stalling
This is where the Copilot problem gets interesting.
Microsoft's entire productivity thesis is that a smarter layer on top of Word, Excel, and Teams will fix knowledge work. Copilot sits across the Office suite, reads your documents, attends your meetings, and offers to help. The pitch is compelling: AI that already knows your business because it lives where your work lives.
But here's what I see in practice: if the documents underneath are a tangle of mixed data, hidden dependencies, and decisions that live in someone's inbox, the smart layer is working blind. It can summarise. It can draft. It can speed things up inside the existing environment. What it can't do is tell you whether the work is actually right.
Copilot can propose a change to your pitch deck. But it has no way to check whether that change broke the logic three slides later. It can pull data from your spreadsheet into a summary. But it can't verify whether the spreadsheet itself is sound. It can draft a response based on an email thread. But if the real decision was made in a corridor conversation that was never captured, the draft is confidently wrong.
This isn't a criticism of the tech. The models are genuinely good. The problem is the environment they're operating in. You can't make a leaky system intelligent by putting a smart layer on top. You just get faster leaking.
The real constraint isn't the model; it's the work
The pattern I see in organisations that are stuck isn't a tool problem. It's a substrate problem. The work itself has no structure.
Most leaders I work with spend the majority of their time in the first phase of any knowledge task: capturing information. Gathering, chasing, pulling things together from six different places. The thinking they're uniquely qualified for (the analysis, the judgement calls, the decisions that actually move the business) doesn't get airtime. Not because they're not capable. Because the environment they're working in doesn't support it.
This is why throwing a smarter model at the problem doesn't help. A more powerful AI pointed at the same scattered, unstructured environment will just be faster at getting confused. Copilot is smart. But smart on top of mess is still mess.
What would actually need to change
The organisations I see making real progress have stopped asking "which AI tool should we use?" and started asking a harder question: is our work structured well enough for any tool to be useful?
That means looking at how information actually flows through the business. Where does it get captured? Where does it leak? Where are the dependencies that live in people's heads instead of in a system? Where are leaders doing work that doesn't require their judgement, and why?
Most of the time, the honest answer is that the work isn't ready for AI. Not because the people aren't capable or the tools aren't good enough. Because the environment was never designed to be operated on. It was designed to be muddled through by humans who could hold the context in their heads. That worked when humans were the only option. It doesn't work when you're trying to hand part of the job to a machine that needs structure to function.
The real work isn't choosing between Copilot and Claude and Gemini. It's redesigning how information moves through your organisation so that whichever tool you choose has something solid to work with.
The question worth asking
If your AI investments keep underwhelming, the model probably isn't the problem. The environment is.
Before you buy another licence, run another pilot, or sit through another vendor demo, ask this: can you describe how a single piece of work moves through your organisation, from the moment information arrives to the moment a decision gets made? If you can't map it clearly, neither can the AI.
That's where it starts. Not with the tool. With the work.