Hypothesis
Using a simple real-time classification script on existing Gia customer task data, we surface recurring delivery tasks they committed to in calls but still execute manually. By showing customers their own patterns live on a call — and asking “did you do that, how long did it take, how often does it happen?” — we will identify high-effort work they are willing to pilot with Gia at 1/10th their billable rate. If we are not at 5 pilot confirmations by May 15, we are not moving fast enough.
Working context — Tukan's framing
How Tukan described the experiment on the April 27 call. This frames how we run the audit and what we're really after underneath the surface ask.
| Aspect | Detail |
| --- | --- |
| Posture | “Acting like an AI consultant” — running an audit for the customer, not selling to them |
| Framing | “A little bit of a head fake” — branded as a value-add audit, with intent to surface pilot opportunities |
| Data source | Action items extracted from their Gia call transcripts. Commitments they promised to do — not necessarily what they actually did. Matt flagged this distinction explicitly. |
| Scope | Their tasks AND their teammates' tasks if those teammates are on Gia — looking at recurring patterns across the firm |
| Mechanic | Live during a scheduled call. They press a hidden button in Gia; the script runs in real time and classifies tasks into client delivery buckets |
| What success looks like in the room | Customer says “yes, here is this core unit of work I'm constantly doing” — claims a recurring task as their own |
| Pricing thought | “Spitballing — at 1/10th their billable hour” for the task taken off their plate weekly. Starting point, not a pitch. |
| What we don't know yet | “This is something we are trying to figure out.” Exploratory — we don't know what tasks will surface or what the threshold for handoff will be |
Metrics (X → Y by Z)
| Metric | Value |
| --- | --- |
| Baseline (X) | 0 audit calls completed |
| Target (Y) | 10 pilot agreements signed — 5 by May 15 |
| Read date (Z) | May 31, with a check-in May 15 at 5 pilots |
Assumptions being tested
Each open-ended question below is designed to elicit answers to one or more of these underlying assumptions without leading the customer into a pitch.
| # | Assumption |
| --- | --- |
| B1 | Committed action items in transcripts represent real recurring delivery work (not noise) |
| B2 | Customers will recognize and validate those tasks when shown them |
| B3 | They can quantify time, frequency, and ownership of those tasks |
| B4 | The gap between what they committed to and what they actually did is informative — surfaces tasks they may have dropped or not noticed as recurring |
| B5 | Seeing their own data surfaces tasks they hadn't consciously named as a problem |
| B6 | The team's task patterns reveal recurring delivery work, not just one person's habits |
| B7 | If the recurring work is genuinely painful, they'll say so unprompted — we don't have to push |
| B8 | They'll articulate their own conditions for offloading without us framing them |
| B9 | Existing-customer context (warm relationship) makes candor easier than cold research |
Open-ended questions
Run in this order, in four phases. Phase 4 is gated — only enter it if the customer has signaled willingness in Phase 3 (e.g. “I wish I didn't have to do this” or “that time would be huge”). Otherwise, end the call there. The pricing frame (1/10th billable rate) deliberately doesn't appear in the questions — it should only come up if they ask “could you do this?”
Phase 1 — Before the audit runs
| Question | Tests |
| --- | --- |
| “Tell me about your role and how you're personally involved in client delivery work.” | v1.3 ICP fit, B6 |
| “Before we run the audit together, what already feels repetitive or recurring in your delivery work?” | B5 |
| “Anything you've been meaning to fix but haven't gotten to?” | B5, past behavior |
Phase 2 — During the audit (validation)
| Question | Tests |
| --- | --- |
| “Looking at what came up from your recent calls, does any of this match what you're actually spending time on?” | B1, B2 |
| “Anything in here that surprises you — patterns you didn't realize were that recurring?” | B5 |
| “Are there any of these you committed to but didn't actually do? I want to make sure we're separating real recurring work from one-offs.” | B4 |
| “Of the ones you are doing regularly, which one stands out to you?” | B2 |
Phase 3 — Quantify the task they named
| Question | Tests |
| --- | --- |
| “Walk me through how you actually do that, start to finish.” | B3 |
| “Roughly how long does it take you each time? How often does it come up?” | B3 |
| “Is it you doing it, someone on your team, or shared?” | B6 |
| “If you got that time back, what would you actually do with it?” | B7 (Matt's exact phrasing) |
Phase 4 — Gated. Only if they signal willingness in Phase 3
| Question | Tests |
| --- | --- |
| “Have you ever thought about getting that off your plate? What's stopped you?” | B7, past behavior |
| “If you imagined this work going away, what would have to be in place for you to feel okay about that?” | B8 (outcome-based, solution-agnostic) |
| Only if they describe a clear bar: “What would you need to see in the first few weeks to feel like it's actually working?” | B8 (defines pilot success in their words) |
What we removed and why. Earlier drafts had “What would have to be true for you to trust handing this over to us?” and “If we ran a 30-day pilot...” and “What's the worst-case scenario you'd worry about if we did this for you?” All three assume the customer wants a vendor before they've claimed pain. Replaced with gentler past-behavior alternatives that stay in their frame, not ours.
Technical implementation
| Component | Detail |
| --- | --- |
| Data source | Existing Gia customer tasks extracted from call transcripts and calendar (already in the Gia database) |
| Script | Simple real-time classification script — categorizes tasks by client delivery type, surfaces recurring patterns across the team |
| Interface | Button in Gia UI (hidden in current build) — triggers audit on demand during the call |
| Build owner | Sanjeevi Ramachandran |
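The classification script is Sanjeevi's to build, but a minimal sketch of the core mechanic might look like the following. Everything here is an assumption, not a spec: the bucket taxonomy, the `ActionItem` shape, and the keyword matching (a production version would more likely classify transcript action items with an LLM or embedding model, and read from the Gia database rather than an in-memory list).

```python
from collections import Counter
from dataclasses import dataclass

# Hypothetical delivery buckets and keywords -- the real taxonomy is TBD.
BUCKETS = {
    "reporting": ("report", "summary", "deck", "slides"),
    "follow_up": ("follow up", "send", "email", "remind"),
    "scheduling": ("schedule", "book", "calendar", "meeting"),
    "data_prep": ("spreadsheet", "data", "export", "reconcile"),
}


@dataclass
class ActionItem:
    """A commitment extracted from a call transcript (assumed shape)."""
    owner: str
    text: str


def classify(item: ActionItem) -> str:
    """Assign an action item to the first delivery bucket whose keywords match."""
    lowered = item.text.lower()
    for bucket, keywords in BUCKETS.items():
        if any(keyword in lowered for keyword in keywords):
            return bucket
    return "other"


def recurring_patterns(items: list[ActionItem], min_count: int = 2) -> dict[str, int]:
    """Count bucket frequency across the whole team's items; keep only
    buckets that recur, since one-offs are noise per assumption B1."""
    counts = Counter(classify(item) for item in items)
    return {b: n for b, n in counts.items() if n >= min_count and b != "other"}
```

On a call, the hidden button would trigger something like `recurring_patterns(items_for_firm)` and render the surviving buckets as the audit view — the point being that the customer sees counts of their own recurring commitments, not our guesses.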
Target audience
Existing Gia customers only. Must fit the v1.3 ICP (3–50 employees, judgment-heavy consulting delivery). Speak to the delivery lead — not just the founder — unless the founder is personally doing delivery work.
Success signal
A pilot agreement where Gia takes on a specific recurring delivery task for the customer. The task is identified from their own audit data, not hypothesized. Falling short of 5 pilot confirmations at the May 15 check-in is a hard signal that the experiment needs to pivot.
Owner
| Role | Detail |
| --- | --- |
| Execution | Tukan Das (calls + pilot pitches) |
| Build | Sanjeevi Ramachandran (classification script) |
| Advisor | Matt Cooper (weekly check-in on pilot confirmation rate) |
| North Star | 5 pilot agreements by May 15, 10 by May 31 — if not hitting pace, escalate immediately |