
Claude Code Changed How I Build Software

5 min read

AI · Claude Code · Productivity · Developer Tools

I stopped using AI as autocomplete and started using it as an implementation partner. Here's how Claude Code reshaped my entire development workflow.

Six months ago I was using Copilot like everyone else — tab-completing boilerplate, occasionally impressed, mostly indifferent. Then I started using Claude Code as my primary implementation partner and everything shifted.

This isn't a product review. It's a workflow breakdown from someone who ships production code with AI every single day.

The Mental Model Shift

Most developers treat AI coding tools as fancy autocomplete. You write a function signature, it fills in the body. That's useful but small.

The shift: think in tasks, not lines of code.

Instead of writing code and asking AI to finish it, I describe what I want at a higher level and let Claude handle the implementation. My job becomes architecture, review, and direction.

# What I used to do:
1. Write function skeleton
2. Let Copilot fill it in
3. Fix the parts it got wrong

# What I do now:
1. Describe the feature or fix
2. Claude Code implements it across multiple files
3. I review, adjust direction, iterate

The difference isn't speed — it's scope. I can tackle tasks that would've taken hours of boilerplate in minutes.

My Daily Workflow

Here's what a typical session looks like:

1. Start with context

I keep a CLAUDE.md at the project root with architecture decisions, conventions, and constraints. Claude reads it automatically. This is the single highest-leverage thing you can do — it turns generic AI assistance into project-aware collaboration.

# CLAUDE.md example
- Use server components by default, "use client" only for interactivity
- All data lives in content/data/*.json with thin TS wrappers
- Glass-card design system with violet/blue/cyan gradient accents
- Never add dependencies without asking

2. Work in conversation

I don't ask Claude to write isolated functions. I describe what I'm building and let the conversation evolve:

"Add a collapsible toggle for the client projects section. Keep ExperienceItem as a server component — extract just the collapsible part as a client component."

Claude creates the component, updates the imports, handles the state management. One prompt, multiple coordinated file changes.
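A framework-free sketch of what that prompt asks for. The file and component names below are my assumptions, not the project's real identifiers; the point is that only the toggle state crosses into client territory:

```typescript
// Hypothetical split this kind of prompt produces:
//
//   ExperienceItem.tsx      — stays a server component, renders static markup
//   CollapsibleSection.tsx  — new client component ("use client"), owns the toggle
//
// The only state the extracted client component has to manage, modeled
// without React so the logic is visible on its own:

type CollapsibleState = { open: boolean };

function toggle(state: CollapsibleState): CollapsibleState {
  return { open: !state.open };
}

// The server component keeps rendering everything else; only this state
// (and the click handler that calls toggle) lives on the client.
let section: CollapsibleState = { open: false };
section = toggle(section); // user clicks the section header
console.log(section.open); // true
```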

3. Review like a senior engineer

This is where most people go wrong — they accept or reject wholesale. Instead, I review AI output the same way I'd review a junior engineer's PR:

  • Does the approach make sense architecturally?
  • Are there edge cases it missed?
  • Does the code match the project's conventions?
  • Is it over-engineered?

The last point is important. AI tends to add error handling, type guards, and abstractions you didn't ask for. I trim aggressively.
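A hypothetical before/after illustrating the kind of trim I mean. Neither function is from a real project; they stand in for a common pattern:

```typescript
// What AI tends to generate unprompted: defensive checks for states that
// can't occur at this call site.
function getTitleDefensive(item: { title?: string | null } | null): string {
  if (item == null) return "";
  if (typeof item.title !== "string") return "";
  return item.title.trim();
}

// What survives review when the type system already guarantees a string:
const getTitle = (item: { title: string }): string => item.title.trim();

console.log(getTitle({ title: "  Hello  " })); // "Hello"
```

Both are correct; only one matches the actual contract the caller upholds.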

Patterns That Work

Parallel exploration

When I'm unsure about an approach, I'll ask Claude to explore multiple options:

"Show me three ways to handle dark mode persistence. Pros and cons of each."

This replaces 30 minutes of Stack Overflow and blog posts with a focused comparison tailored to my exact stack.
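To make that concrete, here is a hedged sketch of what one of those three options might look like: persisting the theme in a key-value store (localStorage in the browser). The storage is injected so the logic runs anywhere; all names here are assumptions for illustration:

```typescript
type Theme = "light" | "dark";

// Minimal interface that both localStorage and a test double satisfy.
interface KVStore {
  getItem(key: string): string | null;
  setItem(key: string, value: string): void;
}

function loadTheme(store: KVStore, fallback: Theme = "light"): Theme {
  const saved = store.getItem("theme");
  // Validate rather than trust: stored values can be stale or hand-edited.
  return saved === "light" || saved === "dark" ? saved : fallback;
}

function saveTheme(store: KVStore, theme: Theme): void {
  store.setItem("theme", theme);
}

// In-memory stand-in for localStorage, for demonstration.
const memory = new Map<string, string>();
const kv: KVStore = {
  getItem: (k) => memory.get(k) ?? null,
  setItem: (k, v) => { memory.set(k, v); },
};

saveTheme(kv, "dark");
console.log(loadTheme(kv)); // "dark"
```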

Data migration

Moving hardcoded TypeScript arrays to JSON data files? Tedious, error-prone work that AI handles perfectly:

"Migrate all static data from src/data/*.ts to content/data/*.json. Keep thin TS wrapper files for type-casting. Update all imports."

Seven files changed, zero bugs. This is the kind of task where AI earns its keep — mechanical transformations across many files with consistent patterns.
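The shape of one such thin wrapper, sketched hypothetically. In the real project the raw data would come from a JSON import (`import raw from "../../content/data/projects.json"`); it's inlined here so the example is self-contained, and the type and field names are my assumptions:

```typescript
type Project = {
  name: string;
  year: number;
  tags: string[];
};

// Stand-in for the JSON file's contents — no logic lives alongside the data.
const raw: unknown = [
  { name: "portfolio", year: 2024, tags: ["nextjs", "tailwind"] },
  { name: "cli-tools", year: 2023, tags: ["node"] },
];

// The wrapper's entire job: give the untyped JSON a type at the boundary.
const projects = raw as Project[];

console.log(projects.length); // 2
```

Consumers import `projects` instead of the JSON file, so every call site keeps full type checking while the data stays in plain JSON.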

CSS debugging

Describing visual bugs in text is surprisingly effective:

"In dark mode, the glass cards appear whitish instead of the dark translucent look. The content is hard to read against the background."

Claude identified that :not(.dark) CSS selectors were matching intermediate DOM elements, not just <html>. A subtle specificity bug I might have spent an hour on.

What AI Is Bad At

Let's be honest about the failure modes:

Taste. AI doesn't know when something looks off. It can implement any design you describe, but it won't tell you that your color palette feels muddy or that a section needs more whitespace. You still need design sensibility.

System boundaries. When the bug is at the intersection of multiple systems — say, a CSS specificity issue caused by how a React framework applies class names at build time — AI can struggle to connect the dots without explicit guidance.

Knowing when to stop. Ask Claude to "improve" something and it'll keep going. Add error handling, add types, add tests, add documentation. You have to be specific about scope and firm about "that's enough."

The Productivity Math

I tracked my output over three months. Rough numbers:

  • Features shipped per week: ~40% more than pre-AI baseline
  • Time per feature: down, but not dramatically — maybe 25%
  • Scope per feature: this is the real win — I'm building more ambitious things because the implementation cost dropped

The time savings alone don't justify the workflow change. The scope expansion does. Projects I would've simplified or deferred are now shipping.

Getting Started

If you want to try this workflow:

  1. Set up CLAUDE.md in your project root. Spend 20 minutes documenting your conventions, stack decisions, and constraints. This pays for itself immediately.

  2. Start with refactoring tasks. Migration, restructuring, moving files — low risk, high volume, perfect for building trust in the workflow.

  3. Graduate to feature work. Once you're comfortable reviewing AI output, start describing features at a higher level and letting Claude handle implementation.

  4. Keep your architectural brain engaged. The moment you stop thinking critically about the code being written, quality drops. AI is your implementation partner, not your replacement.

The Bigger Picture

We're at an inflection point. The developers who figure out how to work with AI — directing it, reviewing its output, maintaining architectural ownership — are going to ship at a fundamentally different pace than those who don't.

It's not about AI writing your code. It's about you becoming the architect of a system where AI handles the implementation. The skills that matter shift from "can you write this function" to "can you design this system, evaluate tradeoffs, and maintain quality at higher velocity."

That shift is already happening. The question is whether you're adapting your workflow to match.