There is a lot of fear right now about “AI slop”: the flood of generic, commodity content threatening to drown out actual human insight. As someone building a publishing pipeline that heavily integrates AI agents, I think about this constantly. I also run into it constantly. There’s no doubt that it’s out there.

I recently read an analysis of Tony Stubblebine’s thoughts on how AI is changing writing, and it crystallized a shift in my own thinking that has been happening over the last few months.

For a long time, the gold standard for AI writing tools was mimicry. We judged models by how well they could ape our style. “Does this sound like Jamie?” was the acceptance criterion.

But Stubblebine points out a critical distinction: “Commodity content” vs. “High-value content.” Commodity content is just information. High-value content is thinking. It’s the result of a human mind grappling with a problem and sharing the scars and lessons from that battle.

This triggered a change in my own process. I realized my goal shouldn’t be to make an AI sound like me. My goal is to make the AI reveal what I know.

The “Second Brain” vs. The Ghostwriter

I’ve stopped trying to build a digital ghostwriter and started building a digital research assistant.

In my current workflow, I don’t just say “Write a post about Kubernetes.” That generates slop. Instead, I use what I call the “Deep Draft Protocol.”

  1. The Analysis: I have agents analyze my previous writing samples—not just for tone, but for structure. How do I deconstruct a problem? (Usually: Concept -> Concrete Example -> Implication).
  2. The Context: I feed the agent specific background files: my bio, my project history, the specific architectural decisions I’ve made in the codebase.
  3. The Constraint: I force the agent to act as a “Knowledgeable Guide.” Its job isn’t to invent ideas; it’s to extract my ideas from the context I provide and organize them logically.
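The three steps above amount to prompt assembly. Here is a minimal sketch in Python of what that could look like; the function name, the context structure, and the exact “Knowledgeable Guide” wording are all placeholders, not the actual pipeline:

```python
# Hypothetical sketch of a "Deep Draft Protocol" prompt builder.
# The constraint text and section labels are illustrative assumptions.

GUIDE_CONSTRAINT = (
    "Act as a Knowledgeable Guide. Do not invent ideas; "
    "extract the author's ideas from the context below and organize them logically."
)

def build_deep_draft_prompt(style_notes: str, context: dict[str, str], thesis: str) -> str:
    """Assemble the three ingredients: analysis, context, constraint."""
    # 2. The Context: background files keyed by name (bio, project history, etc.)
    context_block = "\n\n".join(
        f"--- {name} ---\n{text}" for name, text in context.items()
    )
    return "\n\n".join([
        GUIDE_CONSTRAINT,                                      # 3. The Constraint
        f"Structural notes from my past writing:\n{style_notes}",  # 1. The Analysis
        f"Background context:\n{context_block}",
        f"Thesis to develop:\n{thesis}",
    ])
```

The point of the sketch is that every section is supplied by the human; the model only receives material to organize, never an invitation to invent.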

This aligns perfectly with what I wrote about in The Boring Middle of AI Agents. AI isn’t going to have its own experiences or synthesize truly new ideas anytime soon. It lives in the middle—processing, sorting, formatting. The start (the idea) and the finish (the judgment) must be human.

Agentic Pair Programming for Words

I treat most writing like I treat coding.

When I code with an AI agent, I don’t just say “build an app” and walk away. I say, “Create a function that does X, using Y library, adhering to Z pattern.” Then I review the code. I catch the edge cases it missed. I refactor the clumsy logic. Over time I build up the guardrails, processes, and automations that turn my inputs into better code, faster. I never want to be a rubber stamp. But for the first time, we’re getting close to the holy trinity: good, fast, AND cheap.

Writing is the same.

I start every post. I define the thesis. I provide the raw data I want to dig deeper on. I define the “Deconstruction” pattern I want to use. The AI helps me flesh out the arguments, find the citations, and structure the flow. Then I edit, and the feedback loops kick in: a source doesn’t make sense in context, a paragraph is wonky, so we iterate. My AI frameworks are designed to make those loops smoother, smaller, and faster. Then I press the “publish” button. Except the publish button is a custom slash command that reviews my final draft with an eye to SEO (yup, it’s all marketing) and then moves the content to my blog repo with my final approval.
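The SEO pass in that publish command could be as simple as a checklist over the draft. This is a toy sketch, not the real slash command; the specific checks (H1 present, title length, word count) are assumptions about what such a pass might verify:

```python
# Hypothetical sketch of the SEO checklist step in a "publish" command.
# The thresholds below are illustrative guesses, not the author's rules.

def seo_check(draft: str) -> list[str]:
    """Return a list of SEO issues found in a markdown draft."""
    issues = []
    lines = draft.lstrip().splitlines()
    if not lines or not lines[0].startswith("# "):
        issues.append("missing H1 title")
    elif len(lines[0][2:]) > 60:
        issues.append("title longer than a typical search snippet")
    if len(draft.split()) < 300:
        issues.append("thin content: under 300 words")
    return issues
```

An empty list means the draft clears the checklist and can move on to the final human approval step.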

Speed of Expression

Why bother with the AI at all? Why not just type it out?

Velocity.

I can express the points and ideas I want to make much faster than I can type polished prose. I can dump a chaotic stream of consciousness into my agentic workflow and say, “Organize this into the ‘Lab Notebook’ style.”

The AI handles the friction of formatting and structure, allowing me to focus purely on the thinking.

This is where I think the world is going for idea sharing. We aren’t replacing writers; we are giving thinkers a power suit. The value isn’t in the typing; it’s in the perspective.

Does this apply to creative writing, or to any medium other than short- and medium-form technical blogging? I don’t know. It will certainly never write my memoir. But maybe my kid’s biography will have AI input. I would certainly feel differently if I were generating this content for monetization rather than just to share ideas from the industry I work in. Copyright of AI-generated material is legal and ethical territory we’re still exploring as a society.

Feedback loops are critical here. If the output feels generic, it’s usually because my input was generic. If I don’t have a strong opinion or a unique experience to share, the AI can’t invent one for me. It can only reveal what I already know.

The Takeaway

If you’re using AI to write, stop asking, “Does this sound like me?”

Start asking: “Does this reveal how I think?”

If the answer is no, you just might be generating commodity content. And the world has too much of that already.