AI as an Amplifier, Not a Utility
AI turns language into an interface for implementation.
1. The Builder’s World
There is a version of AI people talk about.
And then there is the version you discover when you actually use it consistently.
They are not the same thing.
AI’s real value is not doing old work faster.
It is this:
AI collapses the distance between an idea and a working system.
That is the part I think people are still missing.
If you already have a structured system, and you understand the problem you are trying to solve, AI can turn an idea into working implementation astonishingly fast.
Not perfectly.
Not without review.
Not without direction.
But fast enough that the nature of building changes.
In the old world, a new idea meant:
- specification
- planning
- implementation
- testing
- integration
- iteration
That could mean days, weeks, or months.
With AI, if the system is modular, testable, and well described, that loop can collapse into hours.
You describe the problem.
The AI proposes a change.
You guide it.
It implements.
You test.
You refine.
The result is not magic.
It is structured amplification.
A personal example makes this clearer.
Suppose you are writing a book and notice a recurring problem: the prose has visible AI-like artifacts.
So you build an artifact detection system.
The system finds obvious patterns. You rewrite the book. The first layer improves.
Then you notice a second layer: deeper artifacts, not obvious sentence-level issues but structural patterns.
So you build another detector.
Then you realise detection alone is not enough. You need visualisation. You need to see where artifacts cluster, how they move across chapters, and how one fix creates another pattern.
So you build that too.
In the old world, each of those steps could be a project.
With AI, if your system is ready for extension, each step becomes a conversation plus implementation loop.
That is the shift.
The future is not AI replacing developers.
The future is developers, writers, researchers, and operators turning ideas into systems faster than old organisations can package them.
2. Thinking Something Into Existence
This is the part that still feels strange, even after using AI every day.
You can now think something into existence.
Not by magic.
Not by typing one prompt and walking away.
But by holding a problem in your mind, explaining it clearly, and then using AI to move through the build loop faster than was previously possible.
The process looks something like this:
- You describe the problem to ChatGPT.
- You ask it to help shape the idea into a concrete design.
- You turn that design into small implementation steps.
- You use Copilot, Codex, or another coding assistant to implement each step.
- You generate tests.
- You run the tests.
- You bring the failures back into the loop.
- You refine until the thing works.
That is the real workflow.
Not prompt → answer.
More like:
idea → design → implementation → test → correction → working system
This is where AI becomes different from ordinary tooling.
A normal tool waits for you to know what to do.
AI helps you discover what to do next.
A Small Example
Suppose you are working on a writing system and notice a problem:
“My chapters have repeated AI-like phrasing. I want to detect those patterns automatically.”
In the old workflow, that could easily become a project:
- define the artifact types
- design the detector
- write parsing logic
- create result models
- build reports
- test against real chapters
- refine the rules
With AI, the loop changes.
You start by explaining the problem:
“I need a detector that scans Markdown chapters and flags repeated sentence structures, overused phrases, and suspiciously uniform paragraph rhythms.”
ChatGPT helps break that into components:
- input parser
- artifact rules
- issue model
- report format
- tests
Then you move into implementation:
“Create the first detector for repeated sentence openings. Keep it rule-based. Return structured results. Include tests.”
Copilot writes the first version.
You run it.
It fails on dialogue.
So you go back:
“Ignore quoted dialogue and headings. Add tests for both.”
The system improves.
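A first version of that detector can be surprisingly small. Here is a rule-based sketch that skips headings and quoted dialogue and returns structured results; the function name, the two-word opening rule, and the threshold are illustrative choices, not a prescribed design:

```python
import re
from collections import Counter
from dataclasses import dataclass

@dataclass
class Issue:
    opening: str   # the repeated sentence opening
    count: int     # how many sentences start with it

def detect_repeated_openings(markdown: str, min_repeats: int = 3) -> list[Issue]:
    """Flag sentence openings (first two words) that repeat suspiciously often.

    Headings and quoted dialogue are ignored, since those repeat legitimately.
    """
    openings: Counter[str] = Counter()
    for line in markdown.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):          # skip headings
            continue
        line = re.sub(r'["“][^"”]*["”]', "", line)    # strip quoted dialogue
        for sentence in re.split(r"(?<=[.!?])\s+", line):
            words = sentence.split()
            if len(words) >= 2:
                openings[" ".join(words[:2]).lower()] += 1
    return [Issue(o, n) for o, n in openings.items() if n >= min_repeats]
```

Nothing here is clever. The point is that the output is structured, so the next loop iteration (ignore dialogue, tune the threshold, add a new rule) has something concrete to test against.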
Then you notice the next problem:
“This catches obvious artifacts, but not deeper structural repetition.”
So you build the next detector.
Then you realise lists of issues are not enough.
You need visualisation.
So you build a report.
Then you need chapter-level summaries.
Then you need trend analysis.
At each step, the system grows because the problem keeps revealing the next useful thing to build.
That is the builder’s world.
The Important Point
The AI did not replace the builder.
It accelerated the builder’s loop.
You still had to:
- notice the problem
- define what mattered
- reject bad outputs
- test the implementation
- decide what came next
But the distance between noticing and building collapsed.
That is what feels new.
Not that AI writes code.
Not that AI writes prose.
But that it lets you move from:
“I think I need this”
to:
“I have a working version of this”
in a single focused session.
Why Tests Matter
This only works if the loop has friction.
That friction is usually tests.
Tests turn AI from a guessing machine into a usable contributor.
Without tests, you are just accepting plausible output.
With tests, you create a boundary:
this works, or it does not.
That boundary lets you move quickly without losing control.
The same applies outside programming.
For writing, the “tests” might be:
- artifact reports
- editorial checklists
- continuity checks
- style rules
For research, they might be:
- citation checks
- extraction schemas
- scoring rubrics
- comparison tables
For operations, they might be:
- validation rules
- dashboards
- alerts
- acceptance criteria
The principle is the same:
AI generates. The system verifies. The human directs.
The Builder’s Loop
The real AI workflow is not a single prompt.
It is a loop:
Intent → Structure → Generation → Verification → Refinement
   ↑                                                  │
   └──────────────────── Repeat ──────────────────────┘
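In code form, the loop is just a driver: generate, verify, feed the failures back, repeat until the checks pass or the budget runs out. A sketch, where `generate` and `verify` stand in for whatever model call and checks you actually use:

```python
from typing import Callable

def builder_loop(
    intent: str,
    generate: Callable[[str], str],       # model call: prompt -> candidate output
    verify: Callable[[str], list[str]],   # checks: output -> list of problems
    max_attempts: int = 5,
) -> str:
    """Intent -> Generation -> Verification -> Refinement, repeated."""
    prompt = intent
    for _ in range(max_attempts):
        candidate = generate(prompt)
        problems = verify(candidate)
        if not problems:
            return candidate              # verification passed
        # Feed the failures back in as refinement instructions.
        prompt = f"{intent}\nFix these problems: {'; '.join(problems)}"
    raise RuntimeError("budget exhausted without passing verification")
```

The attempt budget matters: without it, a loop like this can burn time and tokens on a problem the model cannot solve, and a hard failure is more useful than a silent one.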
That loop is the engine.
The better the structure, the faster the loop.
The faster the loop, the more ideas you can try.
And once you start building this way, the question changes.
It is no longer:
“Can AI do this?”
It becomes:
“Can I define this clearly enough, constrain it tightly enough, and verify it quickly enough?”
That is the new skill.
That is how you think something into existence.
The Moment It Changed For Me
I remember the moment this became real for me.
I was walking my dog, Noddy, talking into ChatGPT through the text box. I had been reading the MR.Q paper and trying to understand how the method worked. I described what I understood, what I wanted to build, and how I thought the implementation should fit together. Then ChatGPT generated a full working MR.Q implementation.
I wrote about that MR.Q implementation here.
At the time, this felt absurd. I had not sat down at a keyboard and manually built the system line by line. I had talked through the idea, guided the direction, and watched the implementation appear.
That was the moment something shifted for me.
“Programming was suddenly different.”
Because in that moment, the hard part was not typing the code.
The hard part was:
- understanding the paper
- knowing what I wanted to build
- explaining the goal clearly
- judging whether the implementation made sense
- testing and refining the result
The chatbot was faster than me at producing the code. In that specific context, it was probably better than me at producing the first working version.
For me, that was the break.
Programming stopped being mainly about manual implementation and became something closer to directing a build process.
I was not just using AI to write code faster. I was thinking a system into existence.
3. The Important Condition
This works better when the human has:
- a real problem
- a clear direction
- enough domain knowledge to judge the result
- a system structured so changes can be safely added
Without that, AI produces noise.
With that, AI becomes a builder.
This distinction matters because a lot of the public discussion treats AI as if the model itself is the whole story.
It is not.
The model is only part of the system.
The real power appears when the human, the model, the codebase, the tests, the data, and the workflow are all arranged so that ideas can be tried quickly and safely.
That is where AI becomes more than autocomplete.
That is where it starts changing how building works.
4. The “100 Interns” Model
Here is a mental model that works.
AI behaves like a large number of fast, cheap, slightly chaotic contributors.
Imagine you suddenly have 100 interns.
They work 24/7. They respond instantly.
But:
- outputs vary in quality
- instructions must be precise
- every result needs review
If you had that capability, your role would change.
You would move from:
- doing the work
to:
- structuring and directing the work
That is the shift.
AI expands capacity, but it also expands coordination overhead.
Most workflows have not adapted to this model.
We still ask AI one question, get one answer, and move on.
That is like hiring 100 interns and never speaking to them.
If your workflow has not changed yet, that may be the sign that you are still using AI as a tool, not as a medium.
The point of the 100 interns model is not that AI gives you free labour.
It gives you coordination pressure.
A bad manager with 100 interns gets chaos.
A good system with 100 interns gets leverage.
That is why structure matters.
5. What Changes When You Work This Way
Once you adopt this model, the change is immediate.
Your role becomes less about direct execution and more about direction.
You define the problem.
You break it into pieces.
You describe the next useful step.
You review what comes back.
You decide whether to keep, reject, refine, or extend.
That is a different kind of work.
The bottleneck moves.
It is no longer simply:
Can I write this code?
It becomes:
Can I describe the change clearly enough, constrain it tightly enough, and verify it quickly enough?
That applies to programming.
It also applies to writing, research, operations, design, testing, and analysis.
The advantage does not come from AI being perfect.
It comes from the loop being fast.
When the loop is fast, you try more ideas.
When you try more ideas, you learn faster.
When you learn faster, the system evolves faster.
That is the real value.
6. A More Personal Solution
There is another part of this that is easy to miss.
Working with AI is not like learning a new framework or a new programming language.
It is more personal than that.
When you work through speech, conversation, correction, frustration, examples, and half-formed ideas, more of you enters the build process.
Your taste enters it.
Your judgment enters it.
Your way of explaining enters it.
Your impatience, your priorities, your shortcuts, your sense of what matters — all of that starts shaping the result.
That makes AI different from earlier programming tools.
Traditional programming languages force you to compress intent into rigid syntax.
AI lets you express intent more naturally, then gradually shape it into implementation.
That does not make the process less technical.
It may make it more artistic.
Closer to sketching.
Closer to directing.
Closer to painting over a canvas until the thing starts to resemble what you meant.
This is why I do not think there is one correct AI workflow.
There are principles:
- structure helps
- tests matter
- small loops work better than giant leaps
- verification keeps you honest
But the actual practice will be personal.
Some people will talk through systems.
Some will write precise specs.
Some will use diagrams.
Some will build test-first.
Some will explore messily and clean up later.
That is fine.
The point is not to copy someone else’s AI workflow.
The point is to find the loop that lets your intent become implementation without losing the parts that make it yours.
7. A Note of Caution
One assumption quietly sits underneath almost every AI discussion:
major technological progress automatically improves life for most people.
The historical record is less clear than that.
Since the 1970s, Western economies have experienced enormous technological advancement:
- personal computing
- the internet
- smartphones
- cloud infrastructure
- software automation
Yet many traditional markers of material security became harder, not easier, for the average household:
- home ownership
- raising children
- living on stable wages
- long-term financial security
This does not prove technology caused those problems.
It probably did not.
Policy, housing, globalisation, finance, labour markets, and many other forces matter.
But it does suggest something important:
technological advancement alone does not guarantee a better material life for the average person.
Technology clearly improved products, communication, medicine, and access to information.
What it did not automatically do was ensure that the gains were distributed evenly.
That matters for AI.
Because AI is an amplifier.
And amplification magnifies the structure of the system it enters.
If the surrounding system distributes gains broadly, AI may improve life broadly.
If it concentrates gains, AI may accelerate concentration.
History suggests neither outcome is automatic.
8. Why AI Didn’t Behave Like Electricity
For the last few years, AI has been framed as the next foundational technology.
The comparison is always the same:
AI will be like electricity.
A general-purpose capability that reshapes everything:
- industries reorganise
- productivity explodes
- costs collapse
- entirely new systems emerge
That expectation sets a high bar.
And so far, AI has not behaved like that.
The comparison breaks down at a basic level.
Electricity is:
- deterministic
- reliable
- infrastructure-level
- usable without retraining how people work
AI is none of those things.
AI is:
- probabilistic
- variable in output
- dependent on context
- sensitive to how problems are structured
The difference is fundamental:
Electricity replaces effort.
AI reshapes how effort is applied.
Electricity let you run a factory without water wheels.
AI lets you generate a function without typing every line, but you still need to design the module, validate the edge cases, and integrate it into a larger system.
The boilerplate disappears.
The architecture does not.
That is why AI feels simultaneously powerful and underwhelming.
It is powerful when used inside a structured loop.
It is underwhelming when treated like a utility you simply switch on.
9. The Marketing Layer
The industry did not present AI this way.
It presented AI as:
- a replacement for workers
- a general solution
- a near-term leap to full automation
That narrative was reinforced through:
- layoff announcements attributed to AI
- claims of “10x productivity” without context
- aggressive timelines toward general intelligence
The problem is not that these claims are entirely false.
The problem is that they describe a future capability as if it were a present one.
In real usage:
- AI often increases output
- but also increases the need for verification
- reduces some work
- but introduces new coordination work
The result is a credibility gap:
AI is clearly useful, but the way it is described does not match how it behaves in practice.
10. The Replacement Fallacy
A common version of the AI story says:
if one programmer becomes twice as productive, the company needs half as many programmers.
That is a simple story.
It is also usually the wrong model.
Programming is not a uniform task where ten identical people produce ten identical units of code.
Real software work includes:
- understanding the problem
- designing the system
- reading existing code
- handling edge cases
- integrating with other teams
- testing
- debugging
- deciding what should not be built
AI helps with parts of that.
But it does not make the surrounding work disappear.
A word processor made writing faster.
It made editing easier.
It removed friction from drafting, formatting, copying, and revision.
But a trillion word processors would not replace one serious writer.
They would not decide what the book is about.
They would not understand the audience.
They would not carry the argument.
They would not know what should be cut.
AI is much more powerful than a word processor, obviously.
But the principle is similar:
better tools reduce friction; they do not automatically replace judgment.
The same applies to programming.
AI can generate code.
AI can accelerate implementation.
AI can make a strong programmer much faster.
But it does not automatically replace the programmer, because the programmer’s real value was never just typing code.
The hype fails when it treats programming as typing.
Typing was never the whole job.
11. The AGI Problem
A large part of the discussion has shifted toward AGI: Artificial General Intelligence.
AGI is typically framed as:
- human-level reasoning
- autonomous decision-making
- system-wide capability
But AGI has three practical problems:
1. It is not well defined. Ask five experts, get five answers.
2. It is not measurable in a practical engineering sense. There is no agreed “AGI benchmark” that settles the question.
3. It shifts focus away from current constraints. We debate the end state instead of solving the messy middle.
These issues do not make AGI impossible.
They make it unhelpful as a practical engineering target.
A more useful framing is this:
current systems are highly capable in specific domains, but unstable when extended across larger, interconnected problems.
That is the real problem worth solving.
12. Capability vs. Composition
AI performs extremely well in bounded tasks:
- code generation: a single function, a SQL query
- structured writing: release notes, docstrings
- image synthesis: icons, mockups
- refactoring a well-defined module
However, when those same capabilities are extended into larger systems, performance degrades.
You see this as:
- loss of coherence
- inconsistent reasoning
- fragile long-term structure
- integration failures
The key limitation is not generation.
It is composition.
AI can generate components reliably.
It cannot yet maintain complex systems reliably.
This explains why small outputs feel like magic and large outputs feel like a draft you have to rewrite.
Asking Copilot to write a sortArray function is one thing.
Asking it to refactor a 10,000-line service with six interlocking modules is another.
A model can write a function.
It cannot yet maintain a codebase.
It can draft a chapter.
It cannot yet architect a book.
Generation is no longer the scarce resource.
Composition is.
A model can generate a function.
But a product is not a pile of functions.
A book is not a pile of paragraphs.
A company is not a pile of tasks.
The hard part is keeping the parts coherent as they grow.
This is where you come in.
13. The Adoption Gap
Even where AI is effective, adoption is uneven.
This is often blamed on:
- skepticism
- fear of job loss
- lack of understanding
But a more practical constraint is time.
Using AI effectively requires:
- restructuring tasks from linear to parallel
- defining inputs clearly
- iterating deliberately
- validating outputs
This is a skill.
And it is not trivial.
The people who could benefit most, such as experienced engineers, domain experts, and senior operators, are also the least able to:
- experiment extensively
- tolerate failure in production
- redesign workflows from scratch
So the default behaviour is predictable:
existing workflows persist, even when better ones are available.
That is not stupidity.
It is inertia.
The old workflow works.
The new workflow requires learning, risk, and time.
Most organisations are optimised for stability, not experimentation.
That is why AI adoption is slower and messier than the demos suggest.
14. The Ferrari Problem
At small scale, AI is easy to apply.
At large scale, it becomes difficult.
Consider a complex system:
- a large codebase with 500 modules
- an operating system kernel
- a production CI/CD pipeline
- a book with characters, themes, structure, and continuity
Allowing unconstrained AI changes introduces:
- inconsistency
- integration failures
- loss of predictability
- hidden maintenance cost
This is why large organisations do not simply:
“let AI rewrite everything”
The issue is not capability.
It is control.
More output does not mean better systems.
Without coordination, more output means more chaos.
15. Two Types of Companies
This leads to a split in how organisations approach AI.
AI-Adapted Companies
These are the companies we mostly have today.
They already have:
- products
- customers
- systems
- revenue models
- compliance requirements
- legacy code
- teams
- processes
They cannot simply rebuild themselves around AI.
So they integrate AI into existing systems.
They constrain its use.
They create bounded contexts.
They prioritise reliability over exploration.
For these companies, AI must operate inside defined swim lanes.
That is not failure.
That is reality.
AI-Native Companies
The second category is still emerging.
An AI-native company is not simply a normal company with AI features added.
It is a company where the workflow itself is built around:
- dynamic task execution
- continuous iteration
- human-AI collaboration
- rapid experimentation
- systems that can propose and execute next steps under constraints
Most companies are not there yet.
Nearly all of the economy still operates in the adapted model.
The near-term opportunity, then, is not to let AI run free. It is to create bounded places where AI can safely contribute.
16. How AI Actually Scales: Structure
To use AI effectively at scale, you need structure.
This is not a new idea.
Large software systems solved similar problems decades ago through:
- modular components
- well-defined interfaces
- controlled communication
- versioning
The same pattern applies to AI.
Instead of one large, unconstrained system, you build many small, bounded processes.
Each process has:
- defined inputs
- defined outputs
- validation
- logging
- failure handling
That is how AI becomes useful.
Not by making it more mystical.
By making it more constrained.
AI becomes reliable when it has boundaries.
A practical developer example:
Do not ask AI to:
refactor the whole microservice.
Ask it to:
rewrite this one function to use async/await, keep the same signature, and pass these three tests.
Now the output is checkable, composable, and safe.
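Concretely, that bounded request might come back as something like this. The `fetch_all` helper is hypothetical; what matters is the unchanged signature and the three tests that pin the behaviour down:

```python
import asyncio

# Rewritten to use async/await internally; the signature fetch_all(urls)
# is unchanged, so existing callers are unaffected.
def fetch_all(urls: list[str]) -> list[str]:
    async def fetch(url: str) -> str:
        await asyncio.sleep(0)          # stands in for a real non-blocking request
        return f"response from {url}"

    async def gather_all() -> list[str]:
        return await asyncio.gather(*(fetch(u) for u in urls))

    return asyncio.run(gather_all())

# The three tests that make the change checkable:
assert fetch_all([]) == []
assert fetch_all(["a"]) == ["response from a"]
assert fetch_all(["a", "b"]) == ["response from a", "response from b"]
```

If the AI's rewrite breaks the signature or the ordering, a test fails and the loop catches it.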
This is where AI starts to scale.
Not as a giant brain.
As a bounded component inside a larger system.
17. The Real Constraint: Trust
This leads to the core issue.
AI is not widely deployed in critical systems because:
it is not consistently trustworthy at scale.
This is not a moral problem.
It is not a philosophical problem.
It is an engineering problem.
AI systems require:
- verification layers
- monitoring
- fallback mechanisms
- structured integration
- human review where risk is high
Until those exist as standard infrastructure, AI will remain uneven.
Useful, but risky.
Fast, but fragile.
Impressive, but hard to trust.
Trust is not created by model size.
Trust is created by systems around the model.
18. What Actually Needs to Be Built
The next phase of AI is not only about larger models.
It is about better systems.
For programmers and teams building real products, that means:
1. Verification Layers
Systems that evaluate AI outputs before they reach production.
Think of type checkers, but for correctness, coherence, policy, and intent.
2. Memory and Context Systems
Persistent, structured understanding across tasks.
Not just chat history.
A system that knows what has been built, why it exists, what constraints apply, and what changed last time.
3. Reasoning Frameworks
Controlled multi-step execution, not free-form generation.
Think of a workflow engine where each step is an AI call with clear inputs, outputs, and validation.
4. Composable Architectures
AI operating inside defined boundaries.
Each AI component has an API, a version, a responsibility, and a test suite.
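One way to read that last point in code: each AI-backed component declares its interface and version, and ships with its own test suite that any implementation must pass. A sketch; the `Summariser` name, the truncating stand-in, and the version field are invented for illustration:

```python
from dataclasses import dataclass
from typing import Protocol

class Summariser(Protocol):
    """The API: one responsibility, one method, a declared version."""
    version: str
    def run(self, text: str) -> str: ...

@dataclass
class TruncatingSummariser:
    """A trivial stand-in; a real implementation would wrap a model call."""
    version: str = "1.0.0"
    def run(self, text: str) -> str:
        return text if len(text) <= 60 else text[:57] + "..."

def check_summariser(s: Summariser) -> None:
    """The component's test suite: run it against any implementation."""
    assert s.run("short") == "short"
    assert len(s.run("x" * 200)) == 60
    assert s.version                    # every component must declare a version
```

Swap in a model-backed implementation later and the same `check_summariser` still applies; that is what makes the component composable rather than mystical.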
This is where real progress will come from.
Not from waiting for GPT-6.
From building better scaffolding around the models we already have.
19. How to Start Using AI This Way
A practical starting point is smaller than most people expect.
Not a whole workflow.
Not a full product.
One real problem.
Something annoying, repeated, and bounded:
- a test you keep writing manually
- a review checklist
- a report format
- a parser
- a detector
- a small refactor
- a validation step
The loop can be simple:
- Describe the problem.
- Ask AI for a small implementation.
- Add tests or checks.
- Run it.
- Fix what breaks.
- Save the working pattern.
- Apply it again.
That is often enough.
Not “use AI more.”
Build one small system that makes the next task easier.
Then notice what changed.
Then build the next one.
That is how the builder’s world begins.
The useful pattern is not replacing the whole system at once.
It is creating one place where AI can safely amplify you.
Then another.
Then another.
20. Final Thought
AI is not electricity.
Electricity replaced effort.
AI reorganises effort.
That is why the public conversation keeps missing the point.
The value is not in asking better questions.
The value is in building better loops:
Intent → Structure → Generation → Verification → Refinement
That loop collapses the distance between an idea and a working system.
But it only works when structure exists.
Without structure, AI gives you output.
With structure, AI gives you leverage.
That is the difference.
The people waiting for perfect models will keep waiting.
The people building systems around today’s models will keep compounding.
AI is not a utility.
It is an amplifier.
And amplifiers reward structure.
But structure does not mean rigidity.
The strange thing about AI is that it can make building feel more personal, not less.
Because the interface is language, your way of thinking enters the system more directly than it did through syntax alone.
That is why this will not look the same for everyone.
The next stage of programming may be less like typing instructions into a machine and more like shaping a living sketch: describe, generate, test, refine, and keep going until the system starts to match what you meant.