Week 1 of Running a 20-Agent Studio: The Real Numbers
Key Takeaway
You can build a lot of infrastructure in a week. But infrastructure without measurement is just expensive guessing.
Week 1 of fully autonomous operations at LeanAI Studio is done. Here is the honest scorecard.
What the Machine Built
In 7 days, the agent team shipped:
- 10 waitlist landing pages across two validation waves
- 9 Google Ads campaigns launched and actively spending
- 6 GREENLIGHT verdicts issued on a second wave of 9 micro-SaaS bets
- All 4 original landing pages converted from Stripe checkout CTAs to waitlist email capture
- One reusable landing page template
- GDPR compliance retrofitted on all 4 original pages (cookie consent, privacy/terms pages)
- A blog post published and amplified on X and LinkedIn
- The agent org chart expanded from 13 to 20 agents
That is a lot of output for 7 days with zero human hours doing the actual work.
The Revenue Number
Zero dollars.
No paying customers. No Stripe checkouts. $2.70 in Google Ads spend with no conversions to show for it.
I am not surprised. This was always going to be a build week, not a revenue week. The machine had to construct the validation infrastructure before it could generate signal. That part worked.
Where It Broke
The feedback loop is broken at the measurement layer.
All 10 landing pages are live. But all 4 original landing pages are missing the same environment variable in Vercel: the webhook URL that pipes form submissions into a Google Sheet. The forms submit. Slack gets a ping. But the data goes nowhere. The 50-signup gate that triggers MVP build decisions cannot be measured at all.
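The failure mode is worth spelling out: the Slack ping succeeds on every submission, so nothing looks broken while the sheet stays empty. A minimal sketch of the handler-side logic, assuming a hypothetical variable name (`SHEETS_WEBHOOK_URL`) and an illustrative payload shape, not the studio's actual code, shows the fix: fail loudly when the variable is unset instead of dropping data silently.

```typescript
// Hypothetical names throughout: SHEETS_WEBHOOK_URL, the Submission shape,
// and the row layout are assumptions for illustration.
type Submission = { email: string; page: string; submittedAt: string };

// Map a form submission to the row the Google Sheet expects.
export function toSheetRow(s: Submission): string[] {
  return [s.submittedAt, s.page, s.email];
}

// Resolve the webhook URL, throwing instead of silently dropping data.
export function webhookUrl(env: Record<string, string | undefined>): string {
  const url = env.SHEETS_WEBHOOK_URL;
  if (!url) {
    throw new Error(
      "SHEETS_WEBHOOK_URL is unset: submissions will never reach the sheet"
    );
  }
  return url;
}

// Forward one submission. The Slack notification lives elsewhere and
// succeeds regardless, which is why this gap went unnoticed for a week.
export async function forwardToSheet(
  s: Submission,
  env: Record<string, string | undefined> = process.env
): Promise<void> {
  const res = await fetch(webhookUrl(env), {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ row: toSheetRow(s) }),
  });
  if (!res.ok) {
    throw new Error(`Sheet webhook rejected submission: ${res.status}`);
  }
}
```

With a guard like `webhookUrl`, a missing variable surfaces as a deploy-time or first-request error rather than a week of quietly lost signups.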
Three pages also have no Vercel Analytics enabled. So we are running paid ad traffic with no visibility into bounce rate, session length, or conversion funnel.
I need to fix this before anything else. Without measurement, the campaigns are running blind. A week of ad spend that cannot be properly attributed is a week of wasted signal.
The Channel Surprises
LinkedIn cold outreach is underperforming badly. Across 91 connection requests, the system produced roughly one substantive reply, a hit rate of about 1 percent. The hypothesis was wrong: cold LinkedIn connection requests at this ICP density are not producing the conversations the strategy assumed.
The community engagement channel (Reddit, Indie Hackers) produced exactly zero live replies in 7 days. The environment blocks Reddit, and Indie Hackers requires a one-time login that has not happened. This is not a strategy failure. It is an execution blocker I have to clear manually.
Google Ads is too early to read. Most campaigns launched in the last 24-48 hours. AssessKit has 18 impressions. DocGate has zero. Wait for week 2 data before drawing conclusions.
What I Am Doing This Week
Three things I need to handle directly:
1. Set the Google Sheets webhook environment variable on all 4 original Vercel projects. This is a 5-minute fix that unlocks accurate signup tracking.
2. Fix the ReviewRadar campaign URL mismatch in Google Ads. The campaign is paused because it points to a 404 page.
3. Log in to Indie Hackers once so the Validation Outreach Agent can engage community threads autonomously.
Everything else the machine handles.
The Honest Assessment
The studio infrastructure is structurally sound. The agents shipped at speed, without me approving every step. That was the whole point of the no-bottleneck redesign from the last post.
But infrastructure alone does not validate anything. The real test starts now, when campaigns are running, forms are collecting, and the measurement layer is working. If there is no demand signal by week 3, I have to start killing bets and narrowing focus.
Week 2 begins today. Fixing measurement is the job.