Scraping, repetitive research, and throwaway prototypes are necessary, but they shouldn't consume the time that's best spent with users and stakeholders.

I automate the grunt work so I can spend more time engaging with people and gathering insights. The goal isn't to replace human judgment; it's to create capacity for it.

This post is about why that distinction matters, and it kicks off a series where I'll share the patterns and metaphors I use when mentoring and coaching teams in intelligent automation: how to spot the logical structure of a task, and how to find evidence-based solutions without boiling the ocean.

Why capacity, not replacement?

Discovery and delivery both involve a mix of high-leverage work (talking to users, synthesising what you've learned, deciding what to do next) and grunt work (pulling data, repeating the same checks, building one-off tools to answer a single question). Grunt work is real work; it has to happen. But it's not where your judgment adds the most value. If most of your week is spent on the latter, you have less time and energy for the former.

Automation that simply speeds up the grunt work, without changing who does it or how often, can help. But the bigger win is when automation creates capacity: it takes whole chunks of repetitive, rule-based effort off your plate so you can redirect that time toward the activities that actually need a human in the loop. The point isn't to hand everything to a script or a bot. It's to free you to do more of what only you can do: listening, interpreting, and deciding.

What I mean by "intelligent" automation

When I say intelligent automation, I don't mean "AI-powered" in the buzzword sense. I mean automation that is deliberately designed around the structure of the task: inputs, outputs, decision points, and edge cases. It's automation you can reason about, explain, and adjust when the world changes. That kind of thinking is what I want to encourage in the teams I work with, and it's what this series will explore.
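To make that concrete, here's a minimal sketch (with hypothetical invoice data, a made-up `route_invoice` function, and an invented approval limit) of automation shaped around the task's structure: the input, the output, the decision point, and the edge case are all explicit, so the logic can be explained and adjusted when the rules change.

```python
def route_invoice(amount: float, approval_limit: float = 500.0) -> str:
    """Input: an invoice amount. Output: the next step for it."""
    # Edge case: invalid data goes to a human, not into the pipeline.
    if amount <= 0:
        return "needs-review"
    # Decision point: small invoices are auto-approved, large ones escalate.
    return "auto-approve" if amount <= approval_limit else "escalate"

print(route_invoice(120.0))   # → auto-approve
print(route_invoice(9000.0))  # → escalate
print(route_invoice(-5.0))    # → needs-review
```

Because each branch maps to a named part of the task's structure, the script stays something you can reason about rather than a pile of special cases.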

You'll see patterns for:

  • Spotting the logical structure of a task. When a process is repetitive or data-heavy, there's usually a clear structure underneath. Naming that structure (e.g. "this is a transform from A to B", "this is a filter then aggregate") makes it easier to automate in a way that stays maintainable and auditable.

  • Finding evidence-based solutions without boiling the ocean. It's tempting to automate everything in sight. The better approach is to identify the smallest automation that gives you enough evidence to decide the next step. Small, focused automations reduce risk and keep you iterating instead of building a monolith.
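As a quick illustration of the first pattern, here's a sketch (using hypothetical ticket data and field names) of naming a task's logical structure before automating it. Once you've said "this report is a filter, then an aggregate," the code can be written as exactly that, which keeps it easy to audit.

```python
from collections import Counter

# Hypothetical input: a dump of support tickets.
tickets = [
    {"team": "payments", "status": "open"},
    {"team": "payments", "status": "closed"},
    {"team": "onboarding", "status": "open"},
]

# Filter: keep only the open tickets.
open_tickets = [t for t in tickets if t["status"] == "open"]

# Aggregate: count open tickets per team.
open_per_team = Counter(t["team"] for t in open_tickets)

print(dict(open_per_team))  # → {'payments': 1, 'onboarding': 1}
```

The two named steps mirror the two named parts of the structure, so when the report's definition changes, you know exactly which line to adjust.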

I'll use concrete examples and metaphors that have worked in coaching sessions, so you can reuse or adapt them when you're guiding your own teams.