Working with LLMs
🧠 The hardest part about building with AI isn't getting it to work; it's getting it to improve without destroying what already works.
When you ask an LLM to "refactor this," "improve that," or "make it cleaner," you expect small, surgical edits. What you often get is a rewrite that loses nuance, breaks structure, or wipes out intentional quirks.
Whether it's rewriting emotional text into LinkedIn boilerplate, or flattening a custom React component into generic nonsense, the same problem shows up: LLMs don't understand the difference between editing and replacing.
Why This Happens
Because LLMs are trained to generate the best next thing, not the smallest valid change. They don't see authorial intent. They don't know which parts are sacred and which are expendable. Unless you build in that context, "help" turns into harm.
What I've Tried
I've tried everything:
- Prompt engineering ("only fix grammar, keep tone")
- System messages
- Custom DSLs
- Diff-based prompting
- Guardrails
They help, but they're brittle. The truth is, most LLMs don't edit — they overwrite.
What Works Better
Building Structured Awareness
ASTs for code, segments and labels for writing
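A minimal sketch of what that awareness can look like for code, using Python's standard `ast` module: parse both versions of a file and flag which top-level functions actually changed, so an "edit this one function" request can be rejected if anything else moved. The sample functions and names here are invented for illustration.

```python
import ast

def changed_functions(before_src: str, after_src: str) -> set[str]:
    """Return names of top-level functions whose AST changed between versions."""
    def fn_dumps(src: str) -> dict[str, str]:
        tree = ast.parse(src)
        # ast.dump without attributes compares structure, not line numbers.
        return {node.name: ast.dump(node) for node in tree.body
                if isinstance(node, ast.FunctionDef)}
    before, after = fn_dumps(before_src), fn_dumps(after_src)
    # A function counts as changed if its dump differs or it was deleted.
    return {name for name in before if before[name] != after.get(name)}

before = "def greet(n):\n    return 'hi ' + n\n\ndef total(xs):\n    return sum(xs)\n"
after = "def greet(n):\n    return f'hi {n}'\n\ndef total(xs):\n    return sum(xs)\n"
print(changed_functions(before, after))  # → {'greet'}
```

If the model was only asked to touch `greet` but the set comes back larger, you reject the edit before it ever reaches the file.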
Freezing Parts of a Document
Hard constraints on what can't be changed
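One way to enforce that constraint, sketched in Python: wrap protected spans in sentinel markers (the `<<FROZEN>>` markers below are made up for illustration; any unambiguous sentinel works) and reject any edit in which a frozen span no longer appears verbatim.

```python
import re

# Hypothetical markers; pick anything that can't occur in real content.
FROZEN = re.compile(r"<<FROZEN>>(.*?)<</FROZEN>>", re.S)

def frozen_spans(text: str) -> list[str]:
    """Collect every span the model is forbidden to alter."""
    return FROZEN.findall(text)

def violates_freeze(original: str, edited: str) -> bool:
    """True if any frozen span is missing verbatim from the edited text."""
    return any(span not in edited for span in frozen_spans(original))

doc = "Intro. <<FROZEN>>Our SLA is 99.9%.<</FROZEN>> Outro."
print(violates_freeze(doc, "New intro. Our SLA is 99.9%. New outro."))   # → False
print(violates_freeze(doc, "New intro. Our SLA is roughly 99%. Outro.")) # → True
```

The check is deliberately dumb: verbatim or rejected, no fuzzy matching, because "almost preserved" is exactly the failure mode being guarded against.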
Using Embedding Similarity
To block destructive changes automatically
Reviewing AI Suggestions as Staged Diffs
Not in chat bubbles
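Python's standard `difflib` is enough to turn a model suggestion into a reviewable patch instead of a chat reply; the file name and sample text here are invented for illustration.

```python
import difflib

def staged_diff(original: str, suggestion: str, path: str = "draft.md") -> str:
    """Render an LLM suggestion as a unified diff for accept/reject review."""
    return "".join(difflib.unified_diff(
        original.splitlines(keepends=True),
        suggestion.splitlines(keepends=True),
        fromfile=f"a/{path}",
        tofile=f"b/{path}",
    ))

before = "Ship it fast.\nKeep the weird intro.\n"
after = "Ship it fast.\nHere is a polished intro.\n"
print(staged_diff(before, after))
```

Each hunk can then be accepted or discarded individually, much like `git add -p`, which restores the author's control over scope.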
Replacing Chat-Based UX
With tooling that gives real control over scope and intention
The Core Principle
If you want AI to feel like a collaborator and not a bull in a china shop, the core idea is this: be explicit about what should never change—and give it architectural boundaries to work within.
AI won't magically respect your style, your constraints, or your edge cases—unless you teach it how, and where to stop.
Practical Application
This principle is now baked into how I design tools and workflows involving LLMs—from content pipelines to internal dev tooling. And honestly, it applies far beyond code.
Curious how others are dealing with this: have you found patterns that help AI improve things without erasing what matters?