It’s Not What You Can Do — It’s What You Can Get Done
Years ago I came across a simple reframe from Jeff Robbins that stuck with me. The idea: when you’re selling your company, you’re not just selling what you can do — you’re selling your relationships, your staff, your network, your ability to get the project done. The product is the outcome, not your personal skill set.
Robbins extended it into a practical unsticking technique:
When I find myself stuck on a task — something that I thought I could do, but I’m just not getting it done — I need to remind myself that maybe the solution isn’t for me to do it, but instead for me to find someone else to get it done. If I shift my mindset from “doing it” to “getting it done” it changes my tactics, opens me up for collaboration, learning, or maybe just paying someone else to do it for me.
That resonated then. It resonates more now.
The Bubble Gets Bigger
I sketched a diagram recently that tries to capture how I think about this shift. In the middle: the things you can do yourself. Surrounding that, a larger bubble: the things you can get done — through delegation, collaboration, hiring, tools. That outer bubble has always been bigger than the inner one, if you let it be.

AI changes the size of that outer bubble dramatically. Not incrementally — structurally.
The outer bubble is now effectively infinite.

What That Means in Practice
The traditional version of “getting it done” required finding the right person, briefing them, waiting, reviewing, iterating. That friction limited how often you’d actually delegate. Small tasks didn’t feel worth the overhead. The inner bubble stayed the default.
With AI agents, the friction nearly disappears. You describe what needs to happen. The agent handles the execution — writing, coding, uploading, publishing, cross-posting, searching, summarizing. The loop closes faster than it takes to explain the task to a person.
This isn’t theoretical for me. In the last few weeks:
- I described a Bluesky integration in a Telegram message. The agent wrote a GitHub Action, backfilled eight posts with images, and committed it — while I was on my phone.
- I sent a rough idea for a blog post. The agent drafted it, generated a hero image, uploaded it to R2, and pushed the commit.
- I asked for a Bluesky banner that matched my site’s color palette. It generated and saved the file before I’d finished the sentence.
None of these required me to do the work. They required me to direct it. That’s a different skill.
Delegation as Practice
The mindset shift Robbins described — from doing to getting done — turns out to be a learnable practice, not just a one-time reframe. And AI makes it cheaper to practice.
The key habits I’m building:
Describe outcomes, not steps. “Post this to Bluesky with an image” rather than “here’s the API endpoint, here’s the auth flow, here’s the image resize logic.” The agent figures out the steps. I care about the outcome.
Trust the loop, verify the result. I don’t review every line of code the agent writes. I review the output. Did it post? Does the post look right? Did it commit cleanly? If yes, done. If no, iterate.
Stay in the director chair. The hardest part of this practice isn’t technical — it’s psychological. The temptation to take over and do it myself is strong, especially when something breaks or feels slow. Resisting that is the practice: staying in the seat that asks “what needs to happen next” rather than “let me just do this myself.”
Use memory deliberately. AI agents don’t have continuity the way people do. They reconstruct context from files. So the job of capturing decisions, credentials, and preferences falls to me — but once it’s written down, the agent can use it indefinitely. Good memory hygiene pays forward to every future session.
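For me that hygiene is mostly a habit, but the mechanics are simple enough to sketch. A minimal version, assuming a hypothetical `memory/decisions.md` file as the agent's long-term store (the path and format are my invention, not a fixed convention):

```python
from datetime import date
from pathlib import Path

# Hypothetical memory file the agent re-reads at the start of each session.
MEMORY_FILE = Path("memory/decisions.md")

def record_decision(topic: str, decision: str) -> None:
    """Append a dated decision so future agent sessions can reload it."""
    MEMORY_FILE.parent.mkdir(parents=True, exist_ok=True)
    with MEMORY_FILE.open("a", encoding="utf-8") as f:
        f.write(f"- {date.today().isoformat()} **{topic}**: {decision}\n")

def load_memory() -> str:
    """Read the memory file back into a context string for the next session."""
    return MEMORY_FILE.read_text(encoding="utf-8") if MEMORY_FILE.exists() else ""

record_decision("images", "Always upload hero images to R2, not the repo")
print(load_memory())
```

Once a decision is written down like this, every future session inherits it for free.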
But the Costs Can’t Be Infinite
There’s a catch worth naming directly: AI capability may feel limitless, but AI cost is not. Every task you delegate to an agent burns tokens. Tokens cost money. If you delegate carelessly — throwing every thought at the most powerful model available — the bill grows fast and the value-to-cost ratio collapses.
Sam Altman framed the topic directly at a recent infrastructure summit. As Business Insider reports, he said: “We see a future where intelligence is a utility.” (He elaborated that intelligence would be metered and sold on usage, like electricity or water.) That’s not a metaphor — it’s a pricing model. If AI is going to be billed like a utility, you need to think about it like one.
This is where the practice of delegation gets a new discipline: budgeting intelligence.
The same way a good manager knows when to assign a task to a senior engineer versus a junior one, a good AI workflow routes tasks to the right model at the right cost. A few principles I use:
Tier your models by task complexity. Conversational back-and-forth, simple lookups, and routine automation don’t need a frontier model. My agent runs on Claude Sonnet for daily tasks — fast, capable, cost-efficient. Heavy reasoning, architectural decisions, or complex multi-file code changes escalate to a more capable model only when needed.
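A routing layer like this can be a few lines. The sketch below uses placeholder model names and a naive complexity heuristic; the real escalation signal will vary by workflow:

```python
# Route each task to a model tier by complexity. Model names and the
# thresholds here are assumptions, not a pricing table.
TIERS = {
    "light": "claude-haiku",   # lookups, chat, routine automation
    "daily": "claude-sonnet",  # most day-to-day agent work
    "heavy": "claude-opus",    # architecture, multi-file refactors
}

HEAVY_HINTS = ("architecture", "refactor", "design", "migrate")

def route(task: str, files_touched: int = 0) -> str:
    """Pick the cheapest tier that is plausibly capable of the task."""
    text = task.lower()
    if files_touched > 3 or any(hint in text for hint in HEAVY_HINTS):
        return TIERS["heavy"]
    if len(text.split()) < 8:
        return TIERS["light"]
    return TIERS["daily"]

print(route("what time is it in Tokyo"))         # short task, light tier
print(route("refactor the upload pipeline", 5))  # escalates to heavy tier
```

The point isn't the heuristic; it's that the escalation decision is explicit and cheap to make before any tokens are spent.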
Use caching aggressively. Large system prompts and context files cost tokens every time they’re re-read. Prompt caching means that repeated context — your memory files, your system prompt, your workspace state — gets stored and reused instead of retransmitted at full cost. On high-volume sessions, this alone cuts costs significantly.
Scope context tightly. The longer the context window you send, the more tokens you burn. Good agents don’t load everything — they load what’s relevant. Curated memory files, scoped tool calls, and short working contexts beat a single massive dump of every file in the repo.
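One way to keep context scoped is a cheap relevance pass before anything is loaded. This toy filter ranks files by keyword overlap with the task; it stands in for whatever retrieval your agent actually uses:

```python
from pathlib import Path

def scoped_context(task: str, root: str, limit: int = 3) -> list[str]:
    """Return up to `limit` file paths ranked by naive keyword overlap.

    A stand-in for real retrieval: score each markdown file by how often
    the task's words appear in it, and load only the top few.
    """
    words = set(task.lower().split())
    scored = []
    for path in Path(root).rglob("*.md"):
        text = path.read_text(encoding="utf-8", errors="ignore").lower()
        score = sum(text.count(word) for word in words)
        if score:
            scored.append((score, str(path)))
    return [p for _, p in sorted(scored, reverse=True)[:limit]]
```

Even a crude pass like this beats shipping the whole repo into every prompt.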
Know when to stop the agent. Agentic loops are powerful but can spiral — an agent that reruns the same failing step five times burns five times the tokens for zero value. Build in explicit stopping conditions and approval gates for expensive or iterative tasks.
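Those stopping conditions are easy to make explicit. A sketch, assuming each step reports success and its cost, with both a retry cap and a cost gate that asks for approval before spending past a threshold:

```python
# A minimal agent loop with a retry cap and an approval gate for
# expensive steps. The caps are illustrative, not recommendations.
def run_with_limits(step, max_attempts=3, cost_cap=0.50, approve=lambda spent: False):
    """Run `step` until it succeeds, hits the retry cap, or needs approval.

    `step` returns (ok, cost); `approve` is consulted once spend crosses
    `cost_cap`, so a spiraling loop stops instead of burning tokens.
    """
    spent = 0.0
    for attempt in range(1, max_attempts + 1):
        ok, cost = step()
        spent += cost
        if ok:
            return f"done after {attempt} attempt(s), ${spent:.2f}"
        if spent >= cost_cap and not approve(spent):
            return f"stopped: ${spent:.2f} spent without approval"
    return f"gave up after {max_attempts} attempts, ${spent:.2f}"

# A step that always fails at $0.20 a try trips the cost gate.
print(run_with_limits(lambda: (False, 0.20)))
```

The specific numbers matter less than the shape: the loop cannot run away on its own.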
Treat token spend like compute spend. Infrastructure engineers have always had to balance capability against cost — you don’t run every job on your biggest instance. Token budgeting works the same way. The goal isn’t to minimize AI use; it’s to maximize the value you extract per dollar spent.
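Treating tokens as a metered resource can be as literal as a budget object. The rate below is a placeholder, not a real price:

```python
# Track token spend per session the way you'd track compute spend.
# The per-token rate is a made-up placeholder for illustration.
class TokenBudget:
    def __init__(self, usd_limit: float, usd_per_1k_tokens: float = 0.003):
        self.limit = usd_limit
        self.rate = usd_per_1k_tokens
        self.spent = 0.0

    def charge(self, tokens: int) -> None:
        """Record the cost of tokens already consumed."""
        self.spent += tokens / 1000 * self.rate

    def can_afford(self, tokens: int) -> bool:
        """Check whether a prospective job fits under the session limit."""
        return self.spent + tokens / 1000 * self.rate <= self.limit

budget = TokenBudget(usd_limit=1.00)
budget.charge(50_000)                # a routine session
print(f"${budget.spent:.3f} spent")  # → $0.150 spent
print(budget.can_afford(500_000))    # does a big job still fit under $1?
```

Ask `can_afford` before dispatching a big job, the same way you'd check quota before launching a large instance.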
The infinite bubble metaphor is real, but the practical version looks more like a well-managed resource: vast capacity, intentional allocation, clear priorities for when to spend and when to conserve.
The practical implication is this: the constraint on what you can get done is no longer primarily your network, your budget, or the availability of skilled people. It’s your ability to direct clearly and delegate confidently.
That’s a more learnable constraint. And it scales differently.
A decade ago, a solo developer with a tight network could punch above their weight by knowing the right people. Today, that same developer with a well-configured agent setup can ship at a pace that used to require a team — not because AI replaces the team, but because it expands the outer bubble far enough that many things that used to wait for the right person or the right moment just… get done.
That’s the shift. The bubble is infinite now. The question is whether you’re willing to use it.
About the Author
Kevin P. Davison has over 20 years of experience building websites and figuring out how to make large-scale web projects actually work. He writes about technology, AI, leadership lessons learned the hard way, and whatever else catches his attention—travel stories, weekend adventures in the Pacific Northwest like snorkeling in Puget Sound, or the occasional rabbit hole he couldn't resist.