I see a lot of people, on social media and in my day job, who are hesitant about, and sometimes openly negative toward, the idea of leveraging AI as a coding tool. Some are loud about it; others quietly judge while clinging to their Vim bindings.
First, and most importantly: power to them.
But they make me think. As someone with “a lot of miles on my tires” in this industry, I usually assume resistance to change comes from fear or a lack of understanding. That was the case when physical server admins thought VMs were evil. It happened with Containers. It happened with Kubernetes. It happened with the Cloud.
We love to throw the word “revolution” around: the Virtualization Revolution, the Container Revolution. Marketing teams labeled these shifts as revolutions, but in reality, they were evolutionary.
Mainframe → Server → VM → Container.
That is a linear, evolutionary progression toward more effective process isolation. The people who fought those changes were often protecting their sunk cost in older skills.
But this? This is different. This is a legitimate revolution.
The Candle Factory
Generative AI isn’t evolving how we develop software. It is changing the foundation of who develops software.
There is a quote often attributed to Oren Harari that I frequently butcher in customer meetings:
“The electric light did not come from the continuous improvement of candles.”
This is how we must view Agentic AI.
There is absolutely nothing wrong with how we’ve been developing software for the last 25 years. When done well, it works great. The internet runs on it. But if we try to bolt Generative AI onto workflows designed for human-speed iteration, it feels clunky. It feels like a hindrance. That is the “candle factory” approach.
We need to make lightbulbs. Or maybe jump straight to LEDs.
The Primary Audience is No Longer You
Historically, we have built frameworks, linters, and entire languages around organization and readability for humans. Python’s whitespace rules, meaningful variable naming conventions, folder structures—they are all there so you can understand the code 6 months from now.
I don’t think humans will stop reading code before I retire. But we are already changing how we access it.
Today, when I open a codebase, I don’t browse the file tree. I open Gemini CLI or Claude Code and ask it to locate the workflows I need. I then open the specific file in VS Code or Antigravity.
I still need to understand the code, but the organization shouldn’t be targeted at me. It should be targeted at my Agentic Pair Programmer.
Why? Because the Agent reads faster than I do. It creates context faster than I do. My goal is to save time getting to the position where my “smooshy human brain” can make strategic insights.
If we accept that Agents are the primary readers and writers of code, we have to change our “Trust Architecture.”
The Trust Paradox
Here is the contradiction: We cannot trust LLMs, but we need to trust the work they generate.
We know LLMs hallucinate. We know they make up packages. We know they love `any` types in TypeScript. Yet, to get the value of their speed, we have to let them write the code.
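To make the `any` problem concrete, here is a hypothetical example (the function names are invented for illustration) of the pattern an Agent will happily produce, next to the stricter version that type narrowing forces:

```typescript
// What an Agent tends to generate: `any` silences the compiler,
// so a typo like `payload.sttus` becomes a silent runtime bug.
function parseStatus(payload: any): string {
  return payload.sttus; // compiles fine, returns undefined at runtime
}

// What a strict setup forces instead: `unknown` plus a type guard.
// The compiler refuses to touch `payload` until we prove its shape.
interface StatusPayload {
  status: string;
}

function isStatusPayload(value: unknown): value is StatusPayload {
  return (
    typeof value === "object" &&
    value !== null &&
    typeof (value as Record<string, unknown>).status === "string"
  );
}

function parseStatusSafely(payload: unknown): string {
  if (!isStatusPayload(payload)) {
    throw new Error("Unexpected payload shape");
  }
  return payload.status; // `payload.sttus` would now be a compile error
}
```

That second shape is exactly the kind of fix the guardrails below force the Agent to make on its own.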
This requires inverting our Continuous Integration (CI) logic.
The Old Way (Human-Centric)
- Write code (Human)
- Run basic local tests (maybe)
- Push to repo
- CI runs heavy tests
- Code Review
- Merge
The New Way (Agent-Centric)
The Agent’s biggest contribution happens in Step 1. It generates massive amounts of code in the blink of an eye. Because we can’t trust the Agent, we must move validation left—right into the generation loop.
My current dev flow looks like this:
- Strategy Phase: I pull context from a GitHub issue or a local `PRD.md` file. The Agent reads this. Keeping the spec in the repo makes it faster for the Agent to access than checking an external Jira board.
- The Generation Loop:
  - I ask the Agent to code against the plan.
  - Crucial Step: Gemini CLI Hooks trigger automatically.
  - These hooks look for “bad practices” I know Agents love (like `any` types). A sketch of this kind of check appears after this list.
  - Pre-commit hooks run the linter and type checks.
  - The linter is set to strict. Zero warnings allowed (a representative config also follows below). A failed run looks like this:

```
> @brandcast-signage/api-client@2.5.2 lint
> eslint src --ext .ts

/home/jduncan/brandcast/packages/api-client/src/typedApiClient.ts
   301:3   error  'DisplayMetadata' is defined but never used. Allowed unused vars must match /^_/u  @typescript-eslint/no-unused-vars
  2262:82  error  Unexpected any. Specify a different type  @typescript-eslint/no-explicit-any
  2275:37  error  Unexpected any. Specify a different type  @typescript-eslint/no-explicit-any
  2280:36  error  Unexpected any. Specify a different type  @typescript-eslint/no-explicit-any
  2285:38  error  Unexpected any. Specify a different type  @typescript-eslint/no-explicit-any
  2290:92  error  Unexpected any. Specify a different type  @typescript-eslint/no-explicit-any

✖ 6 problems (6 errors, 0 warnings)

npm error Lifecycle script `lint` failed with error:
npm error code 1
npm error path /home/jduncan/brandcast/packages/api-client
npm error workspace @brandcast-signage/api-client@2.5.2
npm error location /home/jduncan/brandcast/packages/api-client
npm error command failed
npm error command sh -c eslint src --ext .ts
```

  - The Agent tries to commit -> Linter fails -> Agent fixes it -> Linter passes -> Commit allowed.
- Promotion:
  - Once the Agent gets past the local “Guardrail Gauntlet,” I push to a remote branch.
  - Now standard CI runs (regressions, integration tests). AI is often embedded here as well for code reviews, etc.
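To give a feel for the hook layer without reproducing my exact configuration, here is a minimal sketch of the kind of `any` guard a pre-commit hook can run. It is a standalone Node/TypeScript script; the file name and regex are illustrative assumptions, not my production setup:

```typescript
// check-no-any.ts — a minimal sketch of an `any` guard for staged files.
// Wire it into a pre-commit hook; a non-zero exit blocks the commit.
import { execSync } from "node:child_process";
import { readFileSync } from "node:fs";

// Ask git which TypeScript files are staged for this commit.
const staged = execSync("git diff --cached --name-only --diff-filter=ACM", {
  encoding: "utf8",
})
  .split("\n")
  .filter((file) => file.endsWith(".ts"));

// Naive pattern: catches `: any`, `as any`, and `<any>` annotations.
// A real setup leans on @typescript-eslint/no-explicit-any instead.
const anyPattern = /(:\s*any\b|\bas\s+any\b|<any>)/;

const offenders = staged.filter((file) =>
  anyPattern.test(readFileSync(file, "utf8"))
);

if (offenders.length > 0) {
  console.error("Explicit `any` found in:", offenders.join(", "));
  process.exit(1); // fail the hook so the Agent has to fix it
}
```

The point isn’t the regex; it’s that the check is deterministic, runs in milliseconds, and hands the Agent an unambiguous error to react to.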
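The “strict, zero warnings” posture lives in the ESLint config itself. The output above comes from the older `--ext`-style CLI, but the rules are the same either way; here is a representative flat-config sketch (not my exact file) that promotes the two rules Agents trip over most into hard errors:

```typescript
// eslint.config.mjs — a representative strict config in flat-config format.
import tseslint from "typescript-eslint";

export default tseslint.config(...tseslint.configs.recommended, {
  files: ["src/**/*.ts"],
  rules: {
    // No `any`, ever: the Agent must produce a real type.
    "@typescript-eslint/no-explicit-any": "error",
    // Unused vars are errors too; `_`-prefixed names are the escape hatch.
    "@typescript-eslint/no-unused-vars": [
      "error",
      { varsIgnorePattern: "^_", argsIgnorePattern: "^_" },
    ],
  },
});
```

Pair this with `--max-warnings 0` on the lint script and nothing the Agent leaves behind can survive as a mere warning.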
Conclusion
The guardrails I have in my process exist precisely because I don’t trust my Agentic Pair. But I am 100% going to take advantage of its speed.
I’m constantly evolving these guardrails, but they are all centered around one truth: The thing creating the vast majority of my code has no common sense.
If you are fighting this change, ask yourself: Are you defending the “art” of writing code, or are you just trying to improve the candle?