How one person uses AI to create, iterate, and ship — across enterprise software, law, security, and content — with zero ad spend.
Scott Willis · Enterprise Distribution Software Specialist · Pro Se Litigant · Builder
~45 minutes · Q&A at the end
These aren't tips from a blog post. They're patterns I learned by shipping real things and failing first.
The same problems you have at work — just with an AI collaborator
I fed Claude real production queries one at a time — not theoretical examples. It identified performance patterns I'd been living with for years. Things that looked correct but were silently slow.
We worked in batches: 7 scripts first, then 62 more. The second batch revealed a critical lesson — Claude generated placeholder syntax in several scripts. My "nothing ships until it's ready" rule caught every one before production.
The AI accelerated the work. My standards were the quality gate.
The warehouse ran on 9 fragile Excel macros that nobody fully understood. Instead of documenting them, I described the end-to-end workflow to Claude — scan, look up, validate, ship — and we rebuilt it from the intent, not the legacy code.
Tested against legacy output until we hit 99%+ match. The team validated daily before we cut over.
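That daily validation can be sketched as a simple parity check: key each legacy row, look it up in the rebuilt output, and compute the fraction that match exactly. This is a minimal illustration in TypeScript; the field names and row shape are assumptions, not the real warehouse schema.

```typescript
// One output row from the scan -> look up -> validate -> ship workflow.
// Field names are illustrative.
interface ShipmentRow {
  orderId: string;
  sku: string;
  quantity: number;
}

// Key rows by order + SKU so row ordering doesn't count as a mismatch.
function keyOf(row: ShipmentRow): string {
  return `${row.orderId}|${row.sku}`;
}

// Fraction of legacy rows reproduced exactly by the rebuild.
function matchRate(legacy: ShipmentRow[], rebuilt: ShipmentRow[]): number {
  const rebuiltByKey = new Map(rebuilt.map((r) => [keyOf(r), r]));
  let matched = 0;
  for (const row of legacy) {
    const candidate = rebuiltByKey.get(keyOf(row));
    if (candidate && candidate.quantity === row.quantity) matched++;
  }
  return legacy.length === 0 ? 1 : matched / legacy.length;
}
```

The cutover rule is then a one-liner: don't switch until `matchRate` stays at or above 0.99 across daily runs.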
Multiple locations needed to move between enterprise distribution platforms. Instead of hiring a consultant, I walked Claude through the data domains one by one and we built a phased migration plan together.
The mapping strategy came out of conversation. The domain knowledge came from me. The structure came from the collaboration.
What happens when you can't afford a lawyer but you can think clearly
An opposing attorney filed a false address with the court — an address I hadn't lived at in over a decade — just 103 days after her own team served me in person at my real location. Based on that false filing, a judge entered a default custody order I never knew about. I lost access to my children for over 490 days.
I filed suit pro se. Every legal document — the petition, the response to their motion to dismiss, the bill of review — was co-authored with Claude. I brought the facts and the anger. Claude helped me structure them into something a court would take seriously.
There's now an active criminal investigation. Complaints have been filed with the relevant professional licensing bodies.
Case is set for trial.
Claude didn't write my case. I lived my case. Here's how the collaboration actually worked:
I brought the lived experience and the evidence; Claude brought structure, citation format, and a second set of eyes on every draft.
Building a threat intelligence honeypot from scratch
I wanted to understand how attackers actually behave — not from reading about it, but from watching it happen. So I built a site that looks like a vulnerable WordPress install, deployed it for free, and started logging every probe.
The domain name does double duty. It pulls in people researching the legal topic from my own case, and every bot scanning for vulnerable installs finds it automatically.
Netlify hosts the site for free and runs edge functions that act as a serverless firewall.
Supabase stores every probe in a database — IP, path, user agent, headers — structured and queryable.
Claude helps me analyze the logs, cluster threat actors by behavior, and build dashboards.
Total monthly cost: $0.
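A minimal sketch of what that logging layer can look like. Everything here is illustrative: the handler name, the `probes` table, and the placeholder IP are assumptions, not the production code. On Netlify the handler would be the default export, the client IP would come from the platform context, and the record would be inserted via the `@supabase/supabase-js` client.

```typescript
// Shape of one logged probe row (illustrative).
interface ProbeRecord {
  ip: string;
  path: string;
  userAgent: string;
  headers: Record<string, string>;
  loggedAt: string;
}

// Pure helper: turn an incoming Request into a structured record.
function buildProbeRecord(request: Request, ip: string): ProbeRecord {
  const url = new URL(request.url);
  const headers: Record<string, string> = {};
  request.headers.forEach((value, key) => {
    headers[key] = value;
  });
  return {
    ip,
    path: url.pathname,
    userAgent: request.headers.get("user-agent") ?? "unknown",
    headers,
    loggedAt: new Date().toISOString(),
  };
}

// Edge handler: log the probe, then serve the decoy page as usual.
async function handler(request: Request): Promise<Response> {
  // Placeholder IP; on Netlify the real IP comes from the edge context.
  const record = buildProbeRecord(request, "0.0.0.0");
  // In production this would be a Supabase insert, e.g.:
  //   await supabase.from("probes").insert(record);
  console.log(JSON.stringify(record));
  return new Response("OK");
}
```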
Real probes logged in the last 48 hours — every one hit my site automatically:
| What They Tried | Why |
|---|---|
| /wp-login.php | Trying to brute-force WordPress login credentials |
| /xmlrpc.php (POST) | Exploiting WordPress's remote API for amplification attacks |
| /wp-admin/setup-config.php | Checking if WordPress was installed but never configured |
| /admin/.env | Hunting for accidentally exposed passwords and API keys |
| /shell.php | Looking for web shells — evidence of a previous compromise |
| /backup.sql | Hoping someone left a database dump in a public folder |
| /phpmyadmin/ | Trying to access database admin panels directly |
Every probe is logged in real time. No servers to manage. No infrastructure to maintain.
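The clustering step Claude helps with can be sketched as: tag each path with a threat category mirroring the table above, then group probes by source IP so each actor gets a behavioral profile. The signature strings and category names here are illustrative, not the actual analysis code.

```typescript
interface Probe {
  ip: string;
  path: string;
}

// Ordered path signatures, mirroring the probe table above.
const SIGNATURES: Array<[string, string]> = [
  ["/wp-login.php", "credential-brute-force"],
  ["/xmlrpc.php", "api-abuse"],
  ["setup-config.php", "unconfigured-install"],
  ["/.env", "secret-hunting"],
  ["shell.php", "webshell-scan"],
  [".sql", "dump-hunting"],
  ["phpmyadmin", "db-panel-scan"],
];

// First matching signature wins; unknown paths fall through to "other".
function classify(path: string): string {
  for (const [needle, category] of SIGNATURES) {
    if (path.includes(needle)) return category;
  }
  return "other";
}

// Group by IP: each actor is profiled by the set of categories it probed.
function profileActors(probes: Probe[]): Map<string, Set<string>> {
  const actors = new Map<string, Set<string>>();
  for (const probe of probes) {
    const tags = actors.get(probe.ip) ?? new Set<string>();
    tags.add(classify(probe.path));
    actors.set(probe.ip, tags);
  }
  return actors;
}
```

An IP that hits both `/wp-login.php` and `/admin/.env` profiles as a credential hunter; one that only sweeps `/backup.sql` across many hosts looks like a dump scraper. That distinction is the "free threat intelligence."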
What happens when you build something people actually want to read
Niche legal terms surfaced through Search Console — long-tail queries with low competition that mainstream content ignores. The honeypot site ranks for them because nothing else does. Free traffic, real intent.
Countries: US (35), Egypt, China, Ghana, Taiwan
Channels: Direct (64), Organic Search (5), Social (1), Referral (1)
Return rate: 3.5–8.3% of visitors come back weekly
$0 personal ad spend
$0/month hosting
80+ bot probes / day = free threat intelligence
People I have never met or spoken to noticed my work and independently paid for advertising campaigns promoting safesapcrtx.org.
I didn't ask them to. I didn't know they were doing it. They just thought the content mattered enough to spend their own money on.
The content spoke for itself.
Claude is the primary builder. But I use other AIs to stress-test, analyze, and create feedback loops:
SEO audits, content gap analysis — "what's missing?"
Real-time social signal analysis — what's getting traction?
Bing indexing behavior, alternative perspectives
Google Search patterns, traffic analysis
Not every problem needs a pipeline or a multi-step workflow. Sometimes a clean prompt in a fresh session is faster and better than an elaborate system. I built complexity that didn't earn its keep.
Claude's Batch 2 placeholder bug taught me this permanently. The output looked right. It was structured correctly. But several scripts had syntax that would have broken in production. You lose nuance and edge-case details if you don't verify.
AI gives you speed, but speed costs something. You lose experiential texture. You lose the thinking-through-it process that sometimes is the point. I had to learn when to use Claude and when to just sit with the problem myself.
This is the actual cycle. It ran dozens of times across every project in this presentation.
Keep a context file you paste at the start of every new session: definition of done, previous failures, system constraints. Externalized memory beats hoping the model remembers.
After every successful ship, save the final output plus the exact feedback that got it there. That library is more valuable than any prompt template you'll find online.
15–20 minutes per loop, hard cap. If it isn't converging, the outcome wasn't defined clearly enough. Go back to step 1. Stop prompting.
Everything in this deck is real. So is everything below. Anyone selling you Claude as a magic co-founder is lying.
Stage 1 thinking: "How do I write a better prompt?" Stage 2 thinking: "How do I build a system around the model?" That's the actual leap.
Stage 1 habits:
• Giant kitchen-sink prompts
• Long, messy chat threads
• "Just keep going" sessions until compaction breaks them
• Trusting the model to remember last week
• Skipping verification because the answer "sounds right"
Stage 2 habits:
• Break work into discrete phases with checkpoints
• Externalize memory: docs, files, system prompts, MCP tools
• Restart sessions with clean, curated context
• Use real tools (file I/O, APIs, databases) — not just chat
• Verify every claim against reality before shipping
Scott Willis · iamnotcheckingit@gmail.com