Beautiful Tools, Broken Systems: Rethinking work, not just retooling it, is the real challenge for agencies now.
I’ve been quiet here lately. Not absent, just immersed in conversations, coffees, and a string of meetings with agency teams across Southeast Asia. And beneath the energy and ambition, something’s been gnawing at me.
Agencies are trying to run before they can walk.
It’s surprising how fast the conversation is jumping ahead of where most organizations actually are. I walk in expecting to talk about capability-building, about rethinking workflows, about how teams can use AI to approach problems differently. But instead, the discussion skips straight to “agentic solutions.” Automation. Orchestration. Complex multi-step systems designed to run semi-autonomously on top of language models.
In theory, these are exciting ideas. But in practice, they’re premature. There’s a fundamental literacy gap that has yet to be closed.
From what I’m seeing, many of these agency teams haven’t built even a baseline of fluency with generative AI. They don’t really know what the tools can do, or where their limitations are. They haven’t had the time, or the permission, to experiment. They don’t yet think with AI. But they’re being asked to build with it. And that’s a problem.
Because when you skip the foundational layer - the hands-on, habit-forming, mindset-shifting layer - you end up building systems that amplify misunderstanding. Not innovation.
This is the real risk of the “run before you can walk” moment we’re in. The ambition is outpacing the comprehension. And without comprehension, complexity is dangerous.
The Architecture of Yesterday, The Tools of Tomorrow
Agentic systems, for all their promise, require more than good intentions and API access. They require a shift in how we define problems, how we deconstruct workflows, and how we think about thinking. They demand clarity, not just about what AI can do, but about what we are trying to do in the first place.
But that clarity can’t be borrowed. It has to be earned. Through experimentation. Through friction. Through trying things and getting them wrong. Through developing a kind of muscle memory that only comes from actual use.
What I see instead are agencies trying to layer AI on top of processes that were never built to accommodate it. Teams are still working from briefs that assume linear timelines, fixed deliverables, and a narrow definition of craft. Creative departments are still structured for sequential handoffs. Strategy teams are still expected to arrive with answers in slide form. Legal still wants to read every word. And don’t even get me started on what procurement thinks, but it goes a bit like this: “You have AI tools, so you should be cheaper and faster.”
New Terrain Demands New Training
Somehow, AI is supposed to slot neatly into all this: make it faster, smarter, and cheaper without disturbing the underlying assumptions, as if transformation could be achieved through augmentation alone.
But AI doesn’t just accelerate work. It reshapes it. The steps are different. The cadence is different. The source of value is different. You don’t just get new tools. You get a new terrain. And if your team isn’t equipped to navigate that terrain, then building agentic systems is like handing them a spaceship when they’re still learning to drive.
What’s needed now isn’t complexity. It’s depth. Agencies don’t need to “scale AI.” They need to absorb it. Slowly. Deliberately. Systematically.
That means training, not just in how to prompt, but in how to frame. In how to deconstruct problems. In how to work with models as collaborators, not vending machines. It means carving out time to learn, not just demanding that people catch up. It means building internal confidence before external capability. And above all, it means recognizing that speed is not the same as progress.
I’m not against agents. In fact, I think the agentic future will be transformative. But only if it’s built on a fundamental understanding. Otherwise, we’re just automating old mistakes at scale. Garbage in, garbage out, faster, with a more elegant interface.
The Performance of Innovation
What I’m seeing now, almost everywhere I look, is a performance of innovation.
Agencies are adopting the aesthetics of transformation, throwing AI terms into pitch decks, showcasing prototypes, and circulating toolkits, but the core remains unchanged. There’s a kind of ritual happening: prompt a headline, generate a moodboard, ask ChatGPT to summarize something, and then, having touched the tools, the team retreats quietly back into the old way of working.
No one’s being malicious. No one’s lying. But you can feel the choreography of it. It’s not real integration, it’s symbolic adoption. It’s surface-level, made to signal relevance rather than rewire the machine.
The deeper habits haven’t shifted. The same timelines still govern the work. The same briefing rituals, the same static deliverables, the same slide decks with the same number of pages. Teams are still rewarded for polish over possibility. The definition of “done” hasn’t moved, even as the possibilities have exploded. Time to explore has not been protected. Risk-taking, iteration, and rethinking remain unpaid labor.
And so the very thing that AI could enable, new ways of thinking, faster prototyping, more generative cycles of creativity, is quietly suppressed by the gravity of old expectations.
The result is theater. AI is there, in the room. But only as a guest. It is invited to speak, briefly, and then ushered out before it disrupts the process too much. No one’s really asking whether the process itself is still fit for purpose.
Because that’s a more complicated question. It takes courage and time. And time is the one resource agencies never feel they can afford to spend.
But when transformation becomes a performance, innovation turns ornamental. And ornamentation, no matter how futuristic it looks, won’t save you from obsolescence.
You Don’t Need a Prompt Library. You Need a Permission Structure.
It’s not prompting technique that most teams are lacking. It’s permission.
Permission to rethink the brief. To discard the linear process. To show work that’s unfinished, unpolished, and exploratory. To compress five steps into one or to spend an afternoon chasing something strange because it might lead somewhere better.
To deliver a prototype instead of a deck. To slow down, not speed up, if that’s what deeper thinking requires.
Generative AI doesn’t reward perfection. It rewards curiosity. It rewards iteration. It rewards the willingness to ask, “What else could this be?”
But most agencies weren’t built for that kind of exploration. They were built for hierarchy, efficiency, and predictability. The systems are designed to minimize risk, not generate possibility. The culture, in many places, still values polish over process.
So even when the tools are available, the behaviors that would make them useful are quietly discouraged. Experimentation becomes un-billable. Divergence becomes a reputational risk.
This is why so many AI initiatives stall, not because the models aren’t powerful, but because the people using them aren’t given the structural space to think differently. Until that changes, it won’t matter how advanced the technology becomes. You can’t train a team to innovate inside a system that punishes deviation.
This Isn’t a Technology Problem. It’s a Leadership One.
The most significant misunderstanding I see among agency leadership right now is the belief that AI adoption is a tooling problem. If you install the right platforms, roll out the right software, and set up the right dashboards, transformation will follow. The training module is a PDF that explains how to log in and advises not to share company or client information.
But this isn’t about infrastructure. It’s about mindset.
And mindset is shaped by leadership, not licensing agreements.
The real work isn’t getting the tools into people’s hands. It’s creating the conditions in which those tools can actually change how people work. That means rethinking incentives. Rethinking what good looks like. Rethinking how creative teams are briefed, how strategy is developed, and how value is measured.
Most agency systems are still built on inherited assumptions: That good creative takes time. That strategy is best delivered in slide decks. That thinking happens first, and making comes after. That talent is scarce, fragile, and hierarchical. That process is more important than possibility.
Generative AI dismantles those assumptions. It introduces abundance. It levels the playing field. It moves faster than the old process allows and reshapes what it means to collaborate, to explore, to decide.
But abundance is only useful if your organization knows what to do with it. And most don’t because the systems weren’t built for it. They were built to ration value, not to generate it.
This is why leadership matters now more than ever. The goal is not to approve tech budgets, but to sponsor behavioral change. To model a different relationship with creativity, with experimentation, with risk. To make space for new kinds of thinking. And to protect that space until it becomes the new normal.
The hardest part of this transformation isn’t teaching people how to use AI. It’s helping them let go of how they used to work.
And no tool can do that for you.
So what actually needs to happen?
Agencies need to pause, not in fear, but in clarity. They need to ask what parts of their process still serve them, and what parts belong to a world before generative intelligence. They need to look hard at where human effort is being wasted, where value is being defined by volume or polish, and where experimentation is being punished by process. Most of all, they need to stop asking “What can AI do?” and start asking “What do we want to do and how might AI help us do it differently?”
Because until you redesign the underlying system, no tool will make a difference. You’ll just be duct-taping rocket engines to a bicycle and wondering why it doesn’t fly.
So let’s spill the tea: before you build agents, build fluency. Before you orchestrate, orient. Before you sprint, study the terrain.
Because this isn’t a race to ship, it’s a race to understand. And if you skip that part, if you move straight to the complex without first mastering the essential, you’re not innovating. You’re guessing.
And you won’t like what you build.
Fluency First: The Training Agencies Actually Need
If any of this feels familiar, if your team is eager to move forward but unsure where to start, this is the work we do at RockPaperScissors.
We don’t drop in with decks and toolkits. We build fluency. We create space for experimentation. And we help teams develop the foundational habits, mental models, and workflows that make AI actually usable, inside real agency systems, not ideal ones.
If your agency is ready to move past the performance and into practice, let’s talk.
Read More Books. Or at least read mine.
If you’re looking to build not just AI skills but new ways of thinking, I’ve written a few things that might be a good place to start.
To Question Is to Answer is my first book on how to think critically and creatively in the age of AI.
And a companion eBook, Frameworks Reframed: Thinking Models for the AI Age, offers practical models and mental tools to help teams navigate ambiguity, rethink value, and solve problems with intelligence—human and machine.
Both are designed to help you shift from reacting to AI… to reasoning with it.