Three Consulting Frameworks We Actually Use

D.O.T.S., 10-star thinking, and boil-the-lake — the three frameworks that survive first contact with a real client P&L. How we use them, where they fail, and what we replace them with.

Every consulting firm has frameworks. Most of them are decorative — slide-ware that exists because the partner needed something to project on the wall. A framework earns its keep when it changes what you actually do on Monday morning.

At ZeroOne D·O·T·S AI we use three, and only three. They’ve survived live engagements with used-car platforms, chemical manufacturers, fintech lenders, and B2B SaaS operators. When they break, we know exactly where and why, and we’ve got replacements in the drawer. This post is the starter pack.

Framework 1: D.O.T.S.

The name of the firm. Not a coincidence.

  • D — Data. What’s the ground truth? What’s measured, what’s measurable, what’s missing?
  • O — Operations. How does work actually get done today? What are the loops, the handoffs, the queues, the escalations?
  • T — Tech. What’s the stack? What’s the shadow stack? What’s rusting in a corner?
  • S — Strategy. Where is the business going? What bets has leadership committed to, explicitly or implicitly?

The order matters. Most consulting engagements start with Strategy and work backwards. That produces beautiful decks and no change. We start with Data because data tells you which strategy is actually fundable.

How we use it

Every engagement starts with a D.O.T.S. audit: one page per letter, written by an operator, reviewed with the client’s leadership. The letter with the biggest gap is where the project scopes itself.

4 of 5 — engagements where the binding constraint turned out to be D (data), not S (strategy), not T (tech). (ZeroOne engagement retrospectives, 2024–2026)

Where it fails

D.O.T.S. is a diagnostic framework. It does not tell you how to execute. We’ve watched teams use it as a planning tool and end up with four parallel workstreams, none of which ship. The fix: pick one letter per quarter, constrain all work to that letter, ship, then pick the next.

Framework 2: 10-Star Thinking

Borrowed from Brian Chesky, who credits a conversation with Joe Gebbia and (by lore) Steve Jobs. The question is simple: what would a 10-star version of this experience look like?

Chesky’s example was Airbnb’s check-in. A 5-star check-in is: you arrive, the key is where they said it would be. A 10-star check-in is: Elvis picks you up from the airport in a limo. You can’t build the 10-star version for every customer — but if you don’t articulate what it is, you’ll never push past 5 stars.

How we use it

In every design review, we ask: what does 10 stars look like for this feature? Not for scoring — for scope-setting. The 10-star version is usually infeasible, but it surfaces the dimension of delight the current design is missing. Then we calibrate down to what ships next sprint.

For an AI voice agent, 5 stars is: it understands you and gets the answer right 90% of the time. 10 stars is: it remembers you called last week, anticipates your follow-up question, and proactively offers the next action. Articulating 10 stars tells you the current 5-star design lacks memory — and memory becomes the roadmap.
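The memory gap that the 10-star exercise surfaces can be made concrete. A minimal sketch, assuming a per-caller history store — the names (`CallMemory`, `remember`, `recall`) and the caller IDs are hypothetical, not any real agent API:

```python
from dataclasses import dataclass, field

@dataclass
class CallMemory:
    """Per-caller history that lets the agent say 'last week you asked about X'."""
    history: dict[str, list[str]] = field(default_factory=dict)

    def remember(self, caller_id: str, topic: str) -> None:
        # Append this call's topic to the caller's running history.
        self.history.setdefault(caller_id, []).append(topic)

    def recall(self, caller_id: str):
        """Most recent topic for this caller, or None for a first-time caller."""
        topics = self.history.get(caller_id)
        return topics[-1] if topics else None

memory = CallMemory()
memory.remember("caller-001", "loan prepayment")
print(memory.recall("caller-001"))  # loan prepayment
print(memory.recall("caller-999"))  # None
```

The point is not the data structure — it is that articulating 10 stars turns "memory" from a vague aspiration into a concrete item on the roadmap.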

Where it fails

10-star thinking breaks in two situations. First, when the team is not senior enough to imagine past their current reality — you get 6-star specs dressed up as 10. Second, in cost-sensitive B2B contexts where the customer explicitly wants functional and cheap, not delightful. An EMI reminder voice bot does not need a 10-star experience; it needs a 4-star one that costs 10% of what a human does.

Framework 3: Boil the Lake

The contrarian one. Standard consulting wisdom says: don’t boil the ocean. Focus. Find the 80/20. Ship the MVP. We disagree, for reasons that have changed in the last three years.

The wisdom was right when compute and people were scarce. You had 5 engineers, 12 weeks, and one budget. You couldn’t do everything, so you picked the biggest lever.

With AI, the marginal cost of completeness is near-zero. The same 5 engineers, augmented with LLM tooling, can cover 3x the surface area. “Ship the happy path, we’ll add edge cases later” produces systems that break in production. “Ship 100% coverage from day one” is suddenly affordable.

~3x — productivity multiplier for skilled engineers using AI tooling on well-scoped tasks, per controlled studies. (GitHub Copilot productivity study + METR evaluations, 2024)

Boil-the-lake means:

  • 100% test coverage, not 80%, because AI writes most of the tests
  • All edge cases handled at launch, not “happy path first”
  • Full error paths, retries, observability — at v1, not v2
  • Documentation alongside code, not “if there’s time”
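The third bullet — full error paths, retries, observability at v1 — can be sketched in a few lines. This is an illustrative pattern, not our production code; `flaky` and its failure mode are hypothetical stand-ins for an unreliable upstream call:

```python
import time
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("v1")

def with_retries(fn, attempts=3, base_delay=0.1):
    """Run fn; retry failures with exponential backoff; log every attempt."""
    for attempt in range(1, attempts + 1):
        try:
            result = fn()
            log.info("attempt=%d status=ok", attempt)
            return result
        except Exception as exc:
            log.warning("attempt=%d status=error err=%r", attempt, exc)
            if attempt == attempts:
                raise  # retries exhausted: surface the error, don't swallow it
            time.sleep(base_delay * 2 ** (attempt - 1))

# Hypothetical upstream that fails twice, then succeeds.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise TimeoutError("upstream slow")
    return "quote"

print(with_retries(flaky))  # quote (after two logged, retried failures)
```

The happy-path-only version is the first four lines of `with_retries`; the other ten are the part "we'll add later" usually never gets.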

How we use it

Every engagement gets a “completeness score” instead of an MVP spec. We measure how much of the problem surface is covered at ship, not how fast we got something working. The standard is >90% coverage at ship, and we plan accordingly.
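A minimal sketch of what a completeness score reduces to, assuming the problem surface has been enumerated as a checklist — the item names here are illustrative, not a real engagement's list:

```python
def completeness(surface: dict) -> float:
    """Fraction of the enumerated problem surface covered at ship."""
    return sum(surface.values()) / len(surface)

# Hypothetical problem surface for a small shipping decision.
surface = {
    "happy path": True,
    "input validation": True,
    "retry on upstream failure": True,
    "observability (logs/metrics)": True,
    "docs alongside code": False,
}

score = completeness(surface)
print(f"{score:.0%}")  # 80%
ship_ready = score > 0.90  # the >90% bar from the text
```

The hard part is enumerating the surface honestly, not the arithmetic — which is exactly why the 30-minute whiteboard heuristic below is the gate.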

Where it fails

Boil-the-lake fails in pure discovery contexts — when you don’t yet know what the product is. For those, narrow MVPs still win because the goal is learning, not shipping. The heuristic: if you can articulate the full problem surface on a whiteboard in 30 minutes, boil the lake. If you can’t, the MVP is a learning instrument, not a product.

How the three fit together

On a real engagement:

  1. Week 1: D.O.T.S. audit. One page per letter. Find the binding constraint.
  2. Week 2: 10-star spec for the letter we’re attacking. What does the dream state look like?
  3. Weeks 3–12: Boil the lake on the scope. Ship complete, not partial.

The discipline is resisting the temptation to start anywhere else in that sequence. S-first engagements produce decks. T-first engagements produce shelfware. O-first engagements produce process improvements that die when leadership changes.

D-first engagements produce compounding advantages because data is the only asset that gets more valuable with use.

What we don’t use

A short list, since subtractive clarity matters:

  • Porter’s Five Forces — useful in an MBA class, not useful on Monday.
  • OKRs — we use them for internal alignment but not for client engagements. Clients already have them. We don’t need to impose a second layer.
  • Design Thinking as a process framework — the double-diamond diagram has never once changed what a team actually did. We steal the practices (interviews, prototypes) without the ceremony.
  • Blue Ocean / Jobs-to-be-Done — read them, useful vocabulary, but too abstract to drive an engagement.

The meta-framework

The three frameworks above share a property: they collapse when applied mechanically. D.O.T.S. without operator judgment becomes a checklist. 10-star thinking without constraint becomes fantasy. Boil-the-lake without scoping becomes over-engineering.

They only work in the hands of someone who has shipped the thing before. That’s the actual bar. Frameworks are scaffolding for taste; taste is what makes the scaffolding useful.

If that resonates — or if you want to argue with it — come talk to us. We’d rather debate frameworks in a live engagement than in a comment thread.


Meet Deshani is the founder of ZeroOne D·O·T·S AI. He writes about applied AI, consulting engagements, and products shipped from the field. More at meet.dotsai.in.