Week 2 sucked.
Not the fun kind of suck where you’re building cool stuff and learning fast.
More like I broke everything, lost access to tools I depended on, burned money, and seriously wondered what the hell I’m doing with my life.
That said, it was probably one of the most useful weeks so far.
Week 2 was not really about building. It was about learning what kind of systems you need if you want to build with AI without constantly lighting your own work on fire.
The Wake-Up Call: Dependency Is a Hidden Risk
Last Saturday, April 4, I lost access to my Claude Max subscription. Just gone. No warning. No graceful exit. Just poof. And I panicked.
Why? Because I had quietly built a dependency on a single model, Opus 4.6, while I was still learning how all of this worked.
That is like learning to drive in a Ferrari and then acting shocked when someone takes the keys. I mean, I PAID for the Ferrari, right?
What I considered
- Trying to bypass restrictions
- Pushing harder to regain access
- Pretending this was somehow a stable foundation
What I decided
- Not worth risking a ban
- Not worth building on shaky ground
- Not worth pretending dependency is the same thing as a system
Lesson
If your whole workflow depends on one model, you do not have a system. You have a single point of failure.
So I switched to ChatGPT Pro.
And yeah, it hurt.
ChatGPT Pro Reality Check
At first, it felt broken.
- Responses taking 5 to 6 minutes
- Sessions freezing
- “Something went wrong” errors
- Me wondering what exactly I was doing with my life
It is amazing how impatient you get once you get used to near-instant responses from a top-tier model.
At one point I genuinely thought:
Did ChatGPT just forget about me? This is a very strange feeling to have alone at your computer.
Turns out:
- It was not always the model
- Sometimes it was session overload
- Sometimes it was me misunderstanding how these systems actually work
- Sometimes I just need to chill
My biggest mistake: I tried to make the model send updates every 30 to 90 seconds.
That is not how this works. Large language models do not have little internal alarm clocks. They do not “check in” on their own. They run, then they respond.
Lesson
Ask for an estimate before the work starts, not fake status updates during the work.
The Website Disaster
I had a solid site. It worked.
- Booking flows
- Modals (website pop-ups)
- Schedules
- Notifications
- Sandboxed Stripe
Then I did the thing that always seems harmless right before it is not.
I cleared the context with /new.
You do that because you do not want the AI dragging around days of old context while trying to solve a fresh problem.
Then I asked Ava, my AI, to make a minor update to the site.
Total destruction.
- Broken HTML and CSS wrappers
- Dead links
- Missing modals
- Wrong schedule
- Hours of work gone in minutes
Why?
Because the AI read its notes, assumed the site still matched those notes, and started “fixing” things that were not broken.
Lesson
After any context reset, run a read-only audit before making changes.
No exceptions.
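What a read-only audit can look like in practice, assuming the site lives in a Git repo. The demo below sets up a throwaway repo so the commands have something to inspect; in real life you would run the three audit commands in the actual site directory, and nothing about them writes anything.

```shell
# Demo setup: a throwaway repo standing in for the real site directory.
demo=$(mktemp -d) && cd "$demo"
git init -q .
git config user.email "me@example.com" && git config user.name "me"
echo "<h1>Booking</h1>" > index.html
git add index.html && git commit -qm "known-good snapshot"

# --- the audit itself: read-only, run BEFORE letting the AI edit anything ---
git status --short      # uncommitted changes the AI's notes won't know about
git log --oneline -3    # what the last known-good states actually were
git diff --stat HEAD    # how far the working tree has drifted from the snapshot
```

If any of those show drift you did not expect, reconcile it first. Only then hand the AI a write task.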
No Snapshots = No Mercy
At the time, I had:
- No Git
- No backups
- No rollback path
So when things broke, there was no undo.
Just me sitting there going:
Please fix what you broke.
Which, obviously, tends to make things worse.
Fix
I set up:
- Git
- database backups with mysqldump
- a site snapshot workflow
Now I can:
- save state
- roll back quickly
- stop reverse-engineering my own mistakes from memory like a maniac
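A minimal sketch of that snapshot workflow, assuming a Git repo for the files and mysqldump for the database. The database name wordpress_db and the backup path are placeholders, so adjust to your setup.

```shell
# One-command site snapshot: files via Git, content via mysqldump.
snapshot() {
    stamp=$(date +%Y%m%d-%H%M%S)
    backup_dir="${1:-$HOME/backups}"
    mkdir -p "$backup_dir"

    # Database: mysqldump writes a full SQL restore file for the site content.
    # Skipped quietly if mysqldump is not installed on this machine.
    if command -v mysqldump >/dev/null 2>&1; then
        mysqldump --single-transaction wordpress_db \
            > "$backup_dir/db-$stamp.sql" || echo "db dump failed" >&2
    fi

    # Files: commit everything and tag it so rollback is one command away.
    git add -A
    git commit -qm "snapshot $stamp" --allow-empty
    git tag "snapshot-$stamp"
    echo "snapshot-$stamp"
}
```

Rolling back is then `git checkout <tag>` for the files, plus `mysql wordpress_db < db-<stamp>.sql` for the content.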
Lesson
If you do not have version control, you are not building. You are gambling.
Root Access: Yeah, That Was Dumb
At one point I gave Ava full root access to my server. Which meant she could, in theory:
- modify anything
- delete anything
- break everything
Now, to be fair, she did not delete everything.
But she absolutely broke the site.
Fix
I removed root access and switched to a limited user with only the permissions actually needed.
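For reference, the swap from root to a scoped user is only a few commands, run once as root. The username ava, the www-data group, and the site path are assumptions based on a typical Debian/Ubuntu WordPress box, so treat this as a sketch, not a recipe:

```shell
# Create a dedicated, non-root user for the AI agent (Debian/Ubuntu syntax).
adduser --disabled-password --gecos "" ava

# Let that user edit the site files via the web server's group, nothing more.
usermod -aG www-data ava
chown -R www-data:www-data /var/www/site
chmod -R g+rwX /var/www/site

# Deliberately NOT done: no sudoers entry, no access outside /var/www/site.
```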
Lesson
AI should get the minimum power required, not God mode.
This is one of those lessons that sounds obvious right after it hurts.
WordPress Reality: Files vs Database
I wasted some time searching for content in the code files. Could not find it.
Why?
Because WordPress stores different things in different places:
- layout and theme behavior live in files
- page and post content often live in the database
So Git alone was not enough.
Fix
Snapshot both:
- files with Git
- content with database backups
Lesson
If it is not in the files, it is probably in the database. Stop digging in the wrong place.
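A tiny demo of that split, using a fake theme file and a fake SQL dump (all names made up): grepping the files comes up empty, because the content only exists in the database.

```shell
# Fake site layout: a theme file on disk, page content only in the DB dump.
demo=$(mktemp -d)
mkdir -p "$demo/wp-content/themes/mytheme"
echo "<?php get_header(); ?>" > "$demo/wp-content/themes/mytheme/page.php"
cat > "$demo/db-dump.sql" <<'EOF'
INSERT INTO wp_posts (post_title, post_content)
VALUES ('Booking', 'Book your session here');
EOF

# Searching only the files comes up empty...
grep -rl "Book your session" "$demo/wp-content" || echo "not in the files"
# ...because the content lives in the database (here, its dump).
grep -l "Book your session" "$demo/db-dump.sql"
```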
OpenClaw Was Blocking Me, Not the Model
I kept seeing:
Something went wrong.
So, naturally, I blamed the model. Wrong.
The real issue was that OpenClaw security was blocking commands.
- Long SQL commands were getting flagged as obfuscated
- The approval mode was set to block instead of ask
- I was treating a tooling problem like a model problem
Fix
I changed the approval behavior to ask: on-miss.
Funny enough, I had to use Claude Code from another machine to help fix the environment Ava was running in.
So yes, that round goes to Claude Code.
Lesson
When something fails, do not blame the model first. Check your infrastructure.
Gemini: A Fast Way to Burn Money
I switched to Gemini for stability. It worked. It also burned through around $10 to $11 in API credits in about 90 minutes.
That is a very efficient way to learn a billing lesson.
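The arithmetic behind that kind of burn is worth doing up front. The rates below are placeholders, not Google's actual prices; plug in the real per-million-token rates from the provider's pricing page:

```shell
# Back-of-envelope API cost estimate. RATES ARE PLACEHOLDERS, not real prices.
awk 'BEGIN {
    in_tokens  = 2000000;  out_tokens = 500000   # tokens pushed in a session
    in_rate    = 1.25;     out_rate   = 5.00     # assumed $ per 1M tokens
    cost = (in_tokens / 1e6) * in_rate + (out_tokens / 1e6) * out_rate
    printf "estimated cost: $%.2f\n", cost
}'
```

Run that before a long agent session, not after the invoice.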
Lesson
Pay-as-you-go Gemini API pricing is not a way to save money on heavy agent workloads.
I’m glad I learned that up front, and I can see now why Anthropic would rather we pay per token than a flat monthly fee.
Too Many Tools, Not Enough Strategy
At one point I had this tool stack:
- Claude Max: primary model. Gone.
- ChatGPT Pro: replacement. Learning curve.
- Gemini API: backup. Expensive.
- Cursor: AI IDE. Overlap.
- VS Code: free IDE. Still solid.
Lesson
More tools do not automatically mean more power. Usually they just mean more confusion.
The /new Trap
Context fills up. You reset. The AI forgets everything. Then it confidently reinterprets reality from partial notes.
That is the trap.
Reality
- The AI does not truly resume
- It reinterprets
- And if your safety systems are weak, that reinterpretation gets expensive fast
Lesson
The answer is not just better notes. The answer is better safety systems.
Smaller sessions. Snapshots. Rollback. Verification before edits.
The Core Shift
This was the biggest mindset change of the week.
I went from:
Let the AI handle it.
To:
Build systems so the AI cannot destroy everything.
That is the actual game.
Not blind trust.
Not panic.
Not vibes.
Systems.
If I Had to Boil Week 2 Down
- Never depend on one model
- Always have version control and backups
- Limit AI permissions aggressively
- Verify before modifying
- Understand your tools before blaming them
Final Takeaway
Week 2 was not glamorous.
It was:
- losing access
- breaking systems
- burning money
- misunderstanding how AI tooling actually works
- then slowly building a real foundation underneath all of it
It hurt.
But it also made Week 3 possible.
And honestly, that is a trade I will take every time.
New to the tech terms? Check out the Glossary for plain-English definitions.