Shipping a GitHub AI Tool as a Solo Dev: The 7-Day Build Log
I had the idea for Git Gazette sitting in my head for a while, but I hadn't actually mapped out what it would look like. When I finally sat down to build it, something hit me almost immediately: with AI-assisted development, you can spin up an MVP in a week that looks like a polished product. Not a hacky prototype — something that actually feels real.
Here's how the week went.
Day 1–2: Proof of Concept
The first thing I built was the core loop — read a GitHub repo via API, feed the data into Claude, and generate articles from it. That was the moment of truth. If the output was garbage, the whole idea was dead.
It wasn't garbage. Honestly, one of the things that struck me most during the entire build was how fast and how well everything worked on the LLM side. I didn't run into major errors with the generation. The articles came out readable, the personalities held up, and the cost per gazette wasn't too bad either. That early signal gave me the confidence to keep pushing.
Day 3–4: Characters and Prompts
With the core working, I shifted to the part that makes Git Gazette different — the columnist personalities. I wrote out how each feature would work and how I'd loosely implement it based on what I already knew. Then I asked Claude (not Claude Code — the conversational interface) to write a detailed .md implementation plan covering everything.
I added those spec files to the project, opened a few Claude Code sessions, fed the files into each session, and watched it implement them. I had to do little to no actual coding — mostly corrections and configurations like environment variables. At one point I had to step in and refactor a navbar that Claude Code was duplicating across every page instead of componentizing it properly, but that was about the extent of my manual intervention.
This workflow — spec in conversation, implement with Claude Code — ended up being the core pattern for the whole build.
Day 5–6: The GitHub API
The GitHub API is surprisingly good. Response times are great, the filtering options are solid, and you can pull issues, PRs, commits, releases, and security advisories without much friction.
The one limitation I hit: there's no API endpoint for trending repositories. If you want trending data, you have to scrape it manually. It's a known gap and there are community workarounds, but it's worth knowing if you're building anything that depends on discovery.
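The community workaround is to fetch the github.com/trending page and parse the repo links out of the HTML. A minimal sketch of that approach follows — note that GitHub's trending markup is undocumented and changes over time, so the pattern below is a guess that will need maintenance, which is exactly why relying on it feels fragile:

```python
import re

def extract_trending_repos(html: str) -> list[str]:
    """Pull 'owner/repo' slugs out of trending-page repo links.

    Assumes repo headings link to the repo root (href="/owner/repo"
    inside an <h2>), which has historically been the case but is not
    a stable contract -- there is no API for this data.
    """
    slugs = re.findall(r'<h2[^>]*>\s*<a[^>]*href="/([^/"]+/[^/"]+)"', html, re.S)
    # De-duplicate while preserving page order.
    seen, ordered = set(), []
    for slug in slugs:
        if slug not in seen:
            seen.add(slug)
            ordered.append(slug)
    return ordered
```

If discovery matters to your product, budget for this scraper breaking occasionally; everything else about the GitHub API is far sturdier.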
Beyond that, the API gave me everything I needed to build the full data pipeline — from raw repo activity to structured input for each columnist's prompt.
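The pipeline shape can be sketched like this. The REST endpoint paths are real GitHub API routes; everything else — the function names, the "condense for the columnist" step, and the field choices — is a simplified stand-in for how the real pipeline is structured:

```python
# REST endpoints the pipeline reads from (real GitHub API paths).
ENDPOINTS = {
    "issues":   "/repos/{owner}/{repo}/issues",
    "pulls":    "/repos/{owner}/{repo}/pulls",
    "commits":  "/repos/{owner}/{repo}/commits",
    "releases": "/repos/{owner}/{repo}/releases",
}

def activity_urls(owner: str, repo: str, base: str = "https://api.github.com") -> dict[str, str]:
    """Expand the endpoint templates for one repository."""
    return {kind: base + path.format(owner=owner, repo=repo)
            for kind, path in ENDPOINTS.items()}

def structure_for_columnist(activity: dict[str, list[dict]]) -> dict:
    """Condense raw API payloads into the compact shape a columnist
    prompt receives (the field fallbacks here are illustrative)."""
    return {
        kind: [item.get("title") or item.get("name") or item.get("sha", "")[:7]
               for item in items]
        for kind, items in activity.items()
    }
```

Condensing before prompting matters for cost, too: raw API payloads are enormous, and the columnist only needs titles and headlines, not every JSON field.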
Day 7: Making It Feel Like a SaaS
The last piece that made it feel shippable wasn't a feature — it was the pricing page and the Stripe integration. There's something about wiring up payments that makes a project cross the line from "side project" to "actual product." Once I could see the pricing tiers live and the checkout flow working, it felt real.
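For the curious, the core of a Stripe subscription checkout is small. This sketch separates the parameter-building from the API call; the tier names and price IDs are placeholders (real ones come from the Stripe dashboard), and the surrounding web-framework glue is omitted:

```python
# Placeholder price IDs -- in a real integration these come from the
# Stripe dashboard after you create the products and prices.
PRICE_IDS = {"pro": "price_pro_placeholder", "team": "price_team_placeholder"}

def checkout_params(tier: str, success_url: str, cancel_url: str) -> dict:
    """Build the argument dict for stripe.checkout.Session.create
    (subscription mode, one seat)."""
    if tier not in PRICE_IDS:
        raise ValueError(f"unknown tier: {tier}")
    return {
        "mode": "subscription",
        "line_items": [{"price": PRICE_IDS[tier], "quantity": 1}],
        "success_url": success_url,
        "cancel_url": cancel_url,
    }

# With the `stripe` package installed and an API key configured,
# the actual call is roughly:
#   session = stripe.checkout.Session.create(**checkout_params("pro", ok_url, back_url))
#   # ...then redirect the user to session.url
```

That's the whole "cross the line into actual product" step: one dict, one API call, and a redirect.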
On Pricing
The tiers landed at free (2 gazettes/month), Pro at $9/month, and Team at $29/month. How did I get there? Honestly, a mix of loose research with AI on comparable dev tool pricing, some math to make sure LLM token costs were covered at each tier, and a good amount of gut-feel.
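The "some math" part is just a back-of-the-envelope margin check. Every number in this sketch is a made-up placeholder — not Git Gazette's real token usage or model pricing — but it shows the shape of the calculation:

```python
# Hypothetical per-token rates, in dollars per million tokens.
PRICE_PER_1M_INPUT = 3.00
PRICE_PER_1M_OUTPUT = 15.00

def gazette_cost(input_tokens: int, output_tokens: int) -> float:
    """LLM spend for one generated gazette, in dollars."""
    return (input_tokens / 1_000_000 * PRICE_PER_1M_INPUT
            + output_tokens / 1_000_000 * PRICE_PER_1M_OUTPUT)

def tier_margin(monthly_price: float, gazettes: int,
                input_tokens: int = 20_000, output_tokens: int = 5_000) -> float:
    """Dollars left over after token costs, assuming every subscriber
    uses their full gazette quota (usage numbers are hypothetical)."""
    return monthly_price - gazettes * gazette_cost(input_tokens, output_tokens)
```

Under these placeholder numbers, a gazette costs about $0.14 to generate, so even a heavy Pro user leaves room on a $9/month price. The point isn't the specific figures — it's that a five-line function tells you whether a tier can lose money.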
I'm not even sure it's the right price yet. But that's the thing about shipping fast — you put a number out there and let the market tell you if you're wrong. Time will tell, and pricing is one of the easiest things to change compared to the product itself.
The Takeaway
The whole build reinforced something I keep coming back to: the bottleneck isn't coding anymore. It's knowing what to build and how to spec it clearly. The actual implementation, with the right AI tooling, is the fast part. The thinking is the hard part.
If you're a solo dev sitting on an idea, the barrier to shipping has never been lower. A week, a clear spec, and the right tools — that's all it takes to find out if your idea has legs.