The "gråsone": On AI, the opportunities and the limits
I use AI every day, both professionally and personally. But what I want to write about is not the tools, it’s what we do with the space they create. My most honest summary of using AI is that it is like working with the ultimate junior on your team. One who has read everything, experienced nothing, and is profoundly unbothered by the prospect of being confidently wrong.
This framing matters because the most useful question is no longer whether to use AI. That debate is about as productive as choosing between email and fax. The real conversation is how we choose to work with it, and what that choice makes possible for everyone in an organisation.
McKinsey found that 88% of organisations use AI in at least one business function, yet only 6% report meaningful financial impact. Deloitte puts it in starker terms: only 25% of organisations have successfully moved AI from pilot to production. These numbers are not a verdict on AI; they are a reminder that tools become transformative when people use them with purpose, skill, and courage. The gap will close when the people using them understand what they are doing and why. That, if you ask me, is where the most exciting opportunity now lies.
Three layers
When I think about where AI belongs in daily work, I fit it into three layers. Seeing it this way clarifies not only what AI is good at, but also where we become even more valuable.
The production layer is where things are drafted, formatted, summarised, and documented. Status reports, meeting notes, research synthesis, and action lists sit here. If you are still allocating senior attention to this, you are misallocating your most expensive resources. AI belongs here almost entirely, and it handles it well. This is a genuine gift because it clears the plate for work that actually requires human thinking, taste, and experience.
The structural layer is where frameworks, options, and scenarios take shape. Competitive analysis, scenario planning, risk mapping, and go-to-market structures all live here. AI brings the options to the table so you can see the game more clearly. Used well, it becomes an expansive lens rather than a cage, widening the space in which decision-making can operate.
The judgement layer is where the real decisions live. Which direction to pursue. What to walk away from. Whether the strategy is genuinely coherent or only internally consistent. Whether a brand idea is actually tangible or just sounds good. Whether the aesthetics, design, and message follow the crowd or reflect something genuinely original. This layer is non-negotiable and entirely human, and it becomes more important as AI absorbs the layers beneath it.
Many organisations have automated the production layer. A smaller number use the structural layer with genuine intent, and that is exactly where investment should go. The structural layer feeds the judgement layer. The better your frameworks, options, and scenario thinking, the clearer and more intentional your decisions become. Organisations that build this capability now will stand out in a landscape where raw productivity is increasingly commoditised. The goal is not to make people busier at the top, but to make the time they spend at the judgement layer worth more.
Seen optimistically, AI gives us a real chance to redirect our energy from volume to value, if we are willing to understand the structural capabilities that get us there.
The gråsone
The most consequential work usually lives in the gråsone, as we call it in Norwegian. The grey zone. It is the space where data points one way and instinct points another. It is where the opportunity lies not in what the market currently looks like, but in what it could become.
Generative AI is, at its core, a pattern engine. Give it a structured problem, and it will synthesise and organise faster than anyone you have ever worked with. What it cannot do is tell you which frame to set in the first place. The grey zone is not a weakness in the system. It is where value is created or destroyed, and the arena where leaders and creators matter most.
Consider product pricing. A price is not just a number that drives revenue. It is a signal about how you see your product’s value, who you believe will pay for it, and how you intend it to be perceived by the market. Get that signal wrong and the data will eventually tell you, usually through margin pressure or eroding brand loyalty. Positioning works the same way. It is a deliberate bet on which part of the market is underserved, what those customers actually care about, and why you are the right one to serve them. Distribution choices, partnership decisions, and the question of when to move and when to hold all require someone with both the clarity to decide and the courage to commit.
The danger becomes very real when the judgement layer gets handed over. In 2018 and 2019, ASOS overhauled their warehouse infrastructure across the US and Europe with highly automated, algorithm-driven logistics systems. On a spreadsheet, it looked like a masterclass in efficiency. Any operations executive would get starry-eyed. But the rollout created logistical knots the business could not untangle: automation that optimised for volume while the actual customer experience deteriorated. Leadership trusted the optimisation reports while the gråsone was falling apart. ASOS issued three profit warnings in eight months, and their share price fell nearly 40% after the first warning alone. They had built a perfect machine that forgot it was in the business of fashion, not just fulfilment.
AI can map the landscape around decisions with extraordinary speed. It can surface price corridors, competitor moves, and white spaces you might never have considered. But choosing the position, setting the price, and committing to a direction still belongs to someone prepared to stand behind it. That is where human judgement shines.
Strategy, at its core, is the accumulation of well-chosen noes (more on this in a later article). AI will optimise brilliantly within the frame you give it. Defining that frame remains a human job, and a deeply creative one. This is a responsibility to embrace.
What this looks like in practice
The Estée Lauder Companies understood this early. They did not simply deploy an AI tool. They built ConsumerIQ, a generative AI platform that compresses weeks of research into minutes. The leadership mandate was not to let the AI decide; it was to free product developers and marketers to do what no model can do: sense the next trend before the data even exists. They treated AI as a high-speed research assistant so their people could return to being high-stakes creators. Most organisations are still doing it the other way around.
That same discipline, deciding what only you can do and letting AI handle the rest, shaped how I approached building Dobson Advisory. The infrastructure, the technical optimisation, the tasks that did not require direct judgement; those went to AI. What remained was understanding what the Nordic market is and what it has the potential to become, building the relationships that matter, deepening my thinking, and investing time in client work.
My method, and one I recommend adopting, is simple. I don’t ask AI for answers to the harder questions. I ask it to argue against my position, to find the weak spots, and to surface the assumptions I have not yet examined. The direction remains mine. The instinct remains mine. But the pressure AI applies to my thinking, the friction it creates around untested assumptions, makes the work more defensible. Partly because it has no ego to protect and no career to consider.
The more I push back against it, the sharper my own thinking becomes. That friction cuts both ways, and it strengthens the human side of the partnership.
The last mile
AI raises the floor. It makes it easy to be adequate. But adequate is rarely enough to win markets or inspire people, whether customers or teams. The ceiling is still held up by those who are willing to make a choice and live with the consequences. When things go wrong, you cannot point to the prompt. That accountability, the willingness to own the consequence of a decision made under genuine uncertainty, is the last mile of leadership. It is also, on the days when it comes together, the most fulfilling work there is.
This is not a story of humans against machines. It’s a story of humans with machines, and of leaders choosing to step into that last mile rather than handing it off. The ones who get this right will build organisations where AI takes the grind, and humans make the decisions that truly matter. More fun for us.
The gap between those two outcomes is judgement. And judgement, unlike software, does not update overnight; it develops over time. Every leader willing to stay in the hard decisions, use AI with intent, and keep ownership of the outcome is building an advantage that no model can copy.
Cecilie Dobson
Founder, Dobson Advisory

Sources:
McKinsey & Company, "The State of AI in 2025: Agents, innovation, and transformation", November 2025
mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai
Deloitte, "State of AI in the Enterprise 2026", April 2026
deloitte.com/us/en/what-we-do/capabilities/applied-artificial-intelligence/content/state-of-ai-in-the-enterprise.html
Drapers, "Asos profit drop blamed on warehouse troubles", July 2019
drapersonline.com/news/asos-profit-drop-blamed-on-warehouse-troubles
Retail Gazette, retailgazette.co.uk/blog/2019/08/asos-seeks-3-discount-orders-bid-cut-costs
WWD, "Inside Estée Lauder CEO Stéphane de La Faverie's Fast-paced Reset of the Prestige Beauty Giant", May 2025
wwd.com/beauty-industry-news/beauty-features/inside-estee-lauder-ceo-stephane-de-la-faveries-reset-prestige-beauty-giant-1237109135