I Made $18 and Thought I Made $0: A VibeCoord Post-Mortem
My analytics said $0 revenue and 0 bot activations. Both were wrong. A full post-mortem on what I built, why it failed, and what the data actually said.
My analytics dashboard told me VibeCoord had zero revenue and zero bot activations. The dashboard was wrong on both counts.
I found out in February, after I’d already decided to hibernate the project. I was doing a final audit before writing it off — exporting CSVs, cross-referencing Stripe — and there it was: two charges from the same user, dev_snk0, totalling $18. Two separate $9 top-ups, weeks apart. Someone had found the product, used it until credits ran out, and come back to pay again.
PostHog said $0. Stripe said $18. PostHog said zero bots were ever activated. Session replays said 94% of users reached the invite-ready state.
That’s the thing about building in the dark: you can have broken instrumentation and a broken product at the same time, and they’ll each make the other look worse than it is. I spent the last three months of 2025 and the first two of 2026 thinking I’d built something nobody wanted. Turns out I built something a few people wanted, deployed it so it almost worked, and measured it so badly I couldn’t tell the difference.
This is the post-mortem.
What I Built
VibeCoord was an AI-powered Discord bot builder. You describe what you want in plain English — “a bot that assigns roles based on reactions” or “a moderation bot that mutes people who post links” — and it generates a working discord.js bot, containerises it in Docker, pushes it to AWS ECR, spins up an ECS task, and gets it running in your server. The whole pipeline from prompt to live bot.
The stack was full SaaS: React/TypeScript/Vite frontend, Node/Express/Prisma backend, PostgreSQL, AWS (ECS, ECR, RDS, S3), Terraform for infra, OpenAI Codex for generation, Discord OAuth for auth, Stripe for payments, PostHog for analytics. Real-time deployment status over SSE. A guided onboarding tour. Credit-based pricing with a free tier and $9 top-up packs.
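For concreteness, the real-time status piece looked roughly like this: each pipeline stage gets pushed to the browser as one SSE data frame. This is a minimal sketch, not VibeCoord's actual event schema — the stage names and frame shape are my reconstruction:

```typescript
// One frame per deploy-pipeline stage, pushed over Server-Sent Events.
// Stage names here are illustrative guesses, not the real schema.
type DeployStage =
  | "generated"
  | "image_built"
  | "pushed_to_ecr"
  | "ecs_task_running"
  | "bot_online";

function sseFrame(stage: DeployStage, detail: string | null = null): string {
  const payload = JSON.stringify({ stage, detail });
  // SSE frames are "data: <payload>" terminated by a blank line.
  return `data: ${payload}\n\n`;
}
```

On the server, an Express handler would set Content-Type to text/event-stream and write one of these frames as each ECS state transition comes in.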
I built it solo over somewhere between 500 and 800 hours. I stopped counting.
The pitch was simple: Discord has 19 million active servers. Most of them want custom bots. Building one from scratch requires knowing JavaScript, discord.js, how to host a Node process, and how to navigate the Discord Developer Portal. VibeCoord was supposed to remove all of that friction. Describe it, generate it, deploy it.
The Metrics: What I Thought vs What Was Real
Here’s the thing about PostHog showing you a zero: you believe it. Why wouldn’t you? You set up the tracking, you trust the numbers, you build your conclusions on them. So when I looked at the dashboard in January 2026 and saw $0 revenue and 0 bot activations, I concluded the product had no market fit and started winding down.
| Metric | PostHog (what I saw) | Reality |
|---|---|---|
| Revenue | $0 | $18 (Stripe, 2x $9 top-ups) |
| Bot joined server | 0 | Unknown, but >0 (events silently dropped) |
| Deploy succeeded | ~4 | 42 (PostHog undercounted 11x) |
| Bot online | 0 visible | 14 confirmed |
| User activation rate | ~0% | 94% reached invite-ready (session replays) |
Three separate bugs caused this:
Bug 1: Stripe webhook not connected to PostHog. The $18 was in Stripe’s dashboard the whole time. I just never cross-referenced it because PostHog told me what I thought I needed to know.
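The missing glue was small. A sketch of what it could have looked like: translate a Stripe checkout webhook event into the PostHog capture payload that would have put revenue on the same dashboard. The field names follow Stripe's checkout.session.completed shape; the purchase_completed event name is my own choice, not what VibeCoord used:

```typescript
// Simplified shape of a Stripe checkout.session.completed webhook event.
interface StripeCheckoutEvent {
  type: string;
  data: { object: { client_reference_id: string | null; amount_total: number } };
}

// Map a Stripe event to a PostHog capture payload; return null for
// event types we don't care about.
function toPostHogCapture(evt: StripeCheckoutEvent) {
  if (evt.type !== "checkout.session.completed") return null;
  const session = evt.data.object;
  return {
    distinctId: session.client_reference_id ?? "anonymous",
    event: "purchase_completed",
    // Stripe reports amounts in the smallest currency unit (cents for USD).
    properties: { revenue_usd: session.amount_total / 100 },
  };
}
```

Feed that payload to a PostHog client's capture call in the webhook handler and the $18 shows up where you're actually looking.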
Bug 2: Auth header mismatch on telemetry. Generated bots were sending telemetry events to the backend with the header X-Bot-Token. The backend expected Authorization: Bearer. Every bot_joined_server event hit the endpoint and was silently dropped with a 401. No logs. No errors surfaced. Zero activations in PostHog, an unknown number of actual activations in reality.
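The fix is a few lines: accept the token from either header shape so already-deployed bots keep working while new templates migrate to the standard header. A hedged sketch — the function name is mine, and the header names are the two from the bug:

```typescript
// Accept either the standard Authorization: Bearer header or the legacy
// X-Bot-Token header the generated bots were actually sending.
// Node's http module lowercases incoming header names.
function extractBotToken(
  headers: Record<string, string | undefined>
): string | null {
  const auth = headers["authorization"];
  if (auth?.startsWith("Bearer ")) return auth.slice("Bearer ".length);
  // Fall back to the legacy header so old bots stop getting silent 401s.
  return headers["x-bot-token"] ?? null;
}
```

Equally important: when neither header matches, log it loudly instead of returning a bare 401 with no trace.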
Bug 3: Missing telemetry file in templates. bot_event_handled events couldn’t fire at all because the telemetry module wasn’t included in the generated bot template. This wasn’t discovered until December 29th — at which point I committed a fix that never deployed, because the infra was hibernated days later.
The December 29th commit is the part that stings. The fix existed. It sat in the repo undeployed while I wound down.
The lesson isn’t “test your analytics.” The lesson is: treat your instrumentation like you treat your production code. It can have bugs. It can silently fail. And if you’re making product decisions from broken data, you’re flying blind while thinking you have GPS.
The Funnel Breakdown
Let’s walk through where users actually went.
9,574 visitors → 414 signups (4.3%)
The 4.3% landing page conversion is genuinely solid. Industry average for SaaS landing pages is 2-5%. People understood what VibeCoord did and were interested enough to sign up. The top-of-funnel was working.
414 signups → 663 projects created
Users created more projects than there were signups, which means some users were creating several each. Good signal — they were engaging.
663 projects → 1,110 AI generations
People were using the AI builder. The generation experience worked. They were describing bots, getting code back, iterating.
1,110 generations → 370 build credits used
This is where the first drop-off happens. Only about a third of generations progressed to an actual build. The other two-thirds looked at the output and stopped.
370 builds → 45 deploys started → 42 succeeded
93% deploy success rate. The infrastructure worked. When someone actually committed to deploying, it almost always worked.
42 deploys → 16 bots invited → 14 online
Of the 63 users who were shown an invite link, 16 (25%) clicked through. 14 confirmed online.
Here’s the brutal part of the funnel:
344 Discord OAuth initiations → 327 completed (95%)
OAuth was fine. Users were happy to connect Discord.
327 OAuth completers → 29 opened the bot token modal (8.9%)
Ninety-one percent of people who authenticated with Discord never attempted the bot token step. That’s not a funnel leak. That’s a wall.
The Discord Token Wall
To deploy a bot on Discord, you need a bot token. To get a bot token, you need to:
- Go to the Discord Developer Portal
- Create a new application
- Navigate to the “Bot” section
- Click “Reset Token” (or “Add Bot”)
- Copy the token
- Paste it into VibeCoord
For a developer, this takes three minutes. For a non-technical user who has never seen the Developer Portal, this is a foreign country with no map.
VibeCoord’s core thesis was that it would remove the technical barrier to bot creation. But the token step put the barrier right back. The product automated away the hard programming parts and left the confusing portal-navigation parts, which turns out to be exactly backwards. Non-technical users can handle writing a sentence. They can’t handle finding the “Reset Token” button inside a developer portal they’ve never opened.
The modal had instructions. It had a link to the Developer Portal. It wasn’t enough. The 91% drop-off says it clearly: the explanation was insufficient, or the ask was too friction-heavy, or both.
The fix would have been one of:
- A step-by-step video tutorial embedded in the modal
- A completely different architecture where VibeCoord manages the bot application creation via the Discord API (OAuth2 with bot scope), removing the token step entirely
- Narrowing the target audience to developers only and leaning into that
I shipped none of those. I shipped a modal with instructions.
91% of users who authenticated with Discord never attempted the next step. That is not a copy problem or a UI problem. That is a product architecture problem.
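The architecture fix is worth sketching. If the bots run under VibeCoord's own Discord application, the user never touches a token: they just click a standard Discord authorize URL with the bot scope, and the install happens in the consent screen they already know. This is a hedged sketch of the URL-building half of that flow — client ID and permissions here are placeholders, and the full design would also need per-user bot identity, which Discord's API makes non-trivial:

```typescript
// Build a Discord OAuth2 authorize URL with the bot scope, so installing
// the bot is one click instead of a trip through the Developer Portal.
// clientId would be VibeCoord's own application; permissions is the
// Discord permissions bitfield as a decimal string (e.g. "8" = Administrator).
function botInviteUrl(
  clientId: string,
  permissions: string,
  guildId?: string
): string {
  const params = new URLSearchParams({
    client_id: clientId,
    scope: "bot applications.commands",
    permissions,
  });
  // Pre-selecting the guild skips one more choice for the user.
  if (guildId) params.set("guild_id", guildId);
  return `https://discord.com/oauth2/authorize?${params.toString()}`;
}
```

One click on that URL replaces the entire six-step token dance for the 91% who walked away.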
The December Spike
In December 2025, something pushed VibeCoord onto someone’s radar. I don’t know exactly what — a Reddit post, a Discord server, a YouTube comment. The metrics spiked hard:
| Month | MAU | Signups | Deploys |
|---|---|---|---|
| November 2025 | 83 | Baseline | Baseline |
| December 2025 | 4,169 | 132 | 43 |
| January 2026 | 4,076 | 281 | 2 |
| February 2026 | 761 | Hibernated | 0 |
| March 2026 | 582 | Ghost traffic | 0 |
4,169 MAU in December. 132 signups. 729 users tried AI generation. 43 started a deploy. That’s 5.9% progression from generation to deploy.
The December spike was a drive-by. People found a cool toy, played with it, generated a bot they’d probably never deploy, and left. The product was genuinely interesting as a demo — “watch AI write your Discord bot” is inherently entertaining. But entertainment and utility are different things. The users who came in December were there for the entertainment. The deploy step is where utility starts, and 94% of them never reached it.
January maintained similar MAU (4,076) but only 2 deploys. That’s not traffic dying — that’s the pipeline breaking. Deploys falling from 43 to 2 while MAU stays flat almost certainly means something in the build or deploy path broke in late December, and I didn’t notice until I was already winding down.
This is what “not monitoring your production systems” looks like in the metrics.
The Paying Customer
dev_snk0 is the most important data point in this entire post-mortem.
This person found VibeCoord, signed up, built a bot, deployed it successfully, and used it. When the credits ran out, they paid $9 to top up and keep going. Then — at some point later — they paid another $9.
They proved:
- The product worked end-to-end
- The value proposition was real enough to pay for
- A user could navigate the entire funnel including the token step
- The credit model made sense (buy more when you run out)
- Someone cared enough to come back
I never contacted them. Not once. I didn’t reach out to ask what they were building, whether the bot worked, what would make it better, or whether they’d want to pay for a subscription instead of top-ups. I have their user ID in my database and their email is probably derivable from their Discord account. I did nothing with it.
The orthodox startup playbook — the one you’ll read in every YC essay and indie hacker thread — says you talk to your users obsessively. You find the one person using your product and you call them. You learn everything you can. You treat them like a co-founder for a day.
I didn’t do that. I was too busy building features that nobody had asked for and watching an analytics dashboard that was lying to me.
One person paying $18 in a dead product is more signal than 4,000 monthly visitors. I had the signal. I missed it.
What I Got Wrong
1. I didn’t cross-reference my analytics.
PostHog said $0. I never opened Stripe to verify. This is a fundamental instrumentation error: if a metric is critical to your decision-making, you need at least two sources. Revenue is the most critical metric. I had one source, it was broken, and I never checked.
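The cross-check I never ran fits in a dozen lines. A sketch, with simplified shapes — a real version would pull charges from the Stripe API and the revenue sum from PostHog's query endpoint:

```typescript
// Compare revenue as reported by two independent sources and flag
// any disagreement. Input shapes are simplified for illustration.
interface Charge {
  amountCents: number;
}

function reconcileRevenue(stripeCharges: Charge[], posthogRevenueUsd: number) {
  const stripeUsd =
    stripeCharges.reduce((sum, c) => sum + c.amountCents, 0) / 100;
  return {
    stripeUsd,
    posthogUsd: posthogRevenueUsd,
    // Any gap beyond rounding means one of the two pipelines is lying.
    mismatch: Math.abs(stripeUsd - posthogRevenueUsd) > 0.01,
  };
}
```

Run on my actual numbers — two $9 charges against PostHog's $0 — it flags the mismatch immediately. That's the whole point of a second source: it doesn't tell you which pipeline is broken, only that one of them is, which is enough to make you look.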
2. I shipped broken telemetry and didn’t catch it.
The auth header mismatch (X-Bot-Token vs Authorization: Bearer) was in production for months. The telemetry file missing from generated bot templates was in production until December 29th, and even then the fix was never deployed. I had no alerting on 4xx rates from my own telemetry endpoint. I had no test that validated the full event pipeline from generated bot to database.
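Even a crude alert would have caught it. Here's a minimal sketch of the kind of check I should have had — thresholds and naming are mine, and a production version would live in middleware with a time window rather than a lifetime counter:

```typescript
// Track the 401 rate on the telemetry endpoint and fire once failures
// dominate. A sustained near-100% 401 rate on your own generated clients
// is a bug in your auth contract, not an attack.
class AuthFailureMonitor {
  private total = 0;
  private unauthorized = 0;

  constructor(
    private minRequests = 50, // don't alert on tiny samples
    private maxFailureRate = 0.5
  ) {}

  // Returns true when the alert condition is met.
  record(statusCode: number): boolean {
    this.total++;
    if (statusCode === 401) this.unauthorized++;
    return (
      this.total >= this.minRequests &&
      this.unauthorized / this.total > this.maxFailureRate
    );
  }
}
```

Wire the return value to anything — an email, a Discord webhook, a log line you actually read — and the X-Bot-Token bug surfaces in hours instead of months.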
3. The token UX was a known risk I didn’t solve.
I knew the Discord Developer Portal was intimidating to non-technical users. It was on my list. I shipped it anyway because the alternative — building OAuth-based bot application creation — was technically harder and I wanted to get to launch. The 91% drop-off was the cost of that decision.
4. I built for the wrong user.
The product was most useful to Discord server owners who weren’t developers. But the Discord Developer Portal step required them to act like developers. The people who could navigate the token step (actual developers) were the least likely to need AI to write their bot code. The target user and the capable user were almost completely non-overlapping.
5. I confused virality with traction.
4,169 MAU in December felt like momentum. It wasn’t. Traction is repeat usage, deploys, payment. The December spike was awareness, not adoption. I should have been optimizing for the funnel from generation to deploy, not celebrating visitor counts.
6. I never talked to users.
Not once did I reach out to a user. I had PostHog session replays, which I eventually looked at and found useful. But I never emailed, DM’d, or surveyed anyone. dev_snk0 paid me $18 and I left them in the void. You cannot build a product in a vacuum and be surprised when it doesn’t fit.
7. I hibernated before deploying the fix.
December 29th, the bug fix was committed. Infra was hibernated a few days later. The fix never shipped. Every bot that tried to send telemetry after that point continued to silently fail. The project died with a known fix sitting undeployed in main.
What I Got Right
The build quality was real.
This is a full SaaS product, built solo. Auth, payments, AI generation, a containerised deployment pipeline, real-time status updates over SSE, an onboarding tour, Terraform-managed AWS infra. It worked. The 93% deploy success rate (42/45) is not a fluke — the infrastructure was solid. I built something that functioned.
The landing page conversion was solid.
4.3% signup conversion on 9,574 visitors is a real number. The value proposition landed. People understood it, wanted it, signed up. Top-of-funnel was not the problem.
The generation experience engaged users.
1,110 AI generations across 414 signups means users were generating more than once. The core AI loop — describe a bot, get code, iterate — was engaging enough to pull users through multiple sessions. That’s a good foundation even if the downstream funnel didn’t convert.
I shipped.
500-800 hours, solo, real product, real deployment, real users, real revenue. Most side projects don’t reach any of those. I have a live URL, a Stripe account with money in it, and a database of 414 signups. That’s more than most ideas become.
The Meta-Lesson
VibeCoord failed for a mix of reasons: a structural UX problem, broken instrumentation, a mis-targeted product, and not talking to users. But the meta-failure was that I was making decisions from bad data and didn’t know it.
There’s a failure mode specific to technical founders who are also good at instrumentation: you build great measurement infrastructure, it breaks silently, and you trust the output more because you built it yourself. The confidence that comes from having PostHog set up, having Stripe connected, having event tracking — that confidence can make the broken signal more dangerous than no signal, because you stop looking for contradictions.
Session replays told me 94% of users reached invite-ready state. That contradicted the 0% activation number. I never looked at the replays until the post-mortem. The data was there. I didn’t look.
The other pattern: building as avoidance. Every hour I spent adding features was an hour I wasn’t sending a cold DM to a Discord server owner, writing a piece of content, or emailing dev_snk0 to ask why they paid. Building feels productive. Talking to users feels uncertain. So you build.
I have 370 build credit uses in my database. I have zero user conversations. That ratio is wrong.
Metrics without attention are just data you haven’t read yet. Building without talking to users is just guessing with extra steps.
What VibeCoord needed, after the initial build, wasn’t more features. It was one week of emailing users, watching replays, and fixing the token onboarding. It probably needed the full OAuth-based bot creation architecture — let VibeCoord create the Discord application on behalf of the user, handle the token internally, remove the portal step entirely. That’s a week of backend work. Instead I built more things, shipped a broken telemetry fix that never deployed, and wound down based on metrics I didn’t verify.
The post-mortem matters more than the product because it’s the only way to turn a failed project into a transferable asset. The $18 doesn’t matter. The specific funnel number (91% drop at the token step) is the thing I take to the next product. The pattern (broken instrumentation leading to wrong product decisions) is the thing I take to every product after that.
What’s Next
I’m not reviving VibeCoord in its current form. The product architecture would need a fundamental change — primarily the Discord OAuth bot creation flow — and I’d want to validate that change with actual users before rebuilding it.
What I am taking forward:
- Instrumentation gets E2E tested like production code. If a webhook matters, it gets an integration test. If an analytics event matters, it gets verified against a second source.
- The first 10 paying users get a personal email. Always. Not automated, not a sequence — an email from me, asking what they built.
- The funnel gets reviewed weekly against multiple data sources, not monthly against one.
- Target user definition comes before architecture decisions. The token step failed because I built the architecture before I understood who would actually use it.
dev_snk0, if you’re reading this: thank you for paying twice. You’re the proof the thing worked, and I owe you a conversation I should have started six months ago.