When a developer writes a 200-word Claude Code prompt and 500 lines of working code emerge, the question of who “wrote” the code stops being rhetorical. It is a real question with real consequences for code review, intellectual property, and how teams attribute work. The clean answer is that code review becomes the new authoring, and that answer surfaces a much older problem the field has spent decades pretending was solved. This post is part 3 of a four-part series on whether Claude Code is a 5th-generation programming language. The pillar post argued that the 5GL frame fits; this one is about what shifts when you take that frame seriously.

Quick summary

  • Authorship in the legal sense is unsettled. Current U.S. Copyright Office guidance treats works produced solely by AI, without meaningful human authorship, as not copyrightable. Litigation around training data and output reuse is still working its way through the courts.
  • Authorship in the engineering sense is converging on a sensible answer: the human who specified the requirements, reviewed the diff, and accepted the change into the codebase is the author of record. The review step becomes the gate where authorial judgment is actually exercised.
  • The specification crisis was always there. Software engineering’s hard problem has never been typing the code. It was figuring out what the code should do. 3GL workflows hid this because typing felt like the work. 5GL workflows make it visible because the agent produces plausible code from any specification, including a wrong one.
  • The skill that compounds is judgment, not throughput. Writing a 5GL prompt clearly is a senior-engineering skill. Reading a generated diff with informed skepticism is a senior-engineering skill. Both were always valuable; both get more leverage when the typing is no longer the work.

Legally, code generated solely by AI sits in unsettled territory in the United States and many other jurisdictions. The baseline is clearer than the edges: pure-AI output is not copyrightable, while human-plus-AI work usually is. The current U.S. Copyright Office guidance, first published in early 2023 and refined in subsequent reports through 2025, draws the line at “meaningful human creative authorship.” A prompt alone is generally not enough; substantial human arrangement, selection, and modification of the AI’s output usually is.

There is also active litigation. The class action filed against GitHub, OpenAI, and Microsoft in late 2022 over Copilot has worked its way through the courts with mixed results — some claims dismissed, others continuing. The dispute is mostly about training data and reproduction of license-bearing snippets, not authorship of net-new code, but it is the closest thing the field has to a courtroom test of these questions, and the eventual rulings will shape practice.

I am not a lawyer, and nothing in this post is legal advice. The point is just that a developer reading a Claude Code diff in 2026 cannot rely on a settled answer to “who owns this code?” The pragmatic working assumption most engineering organizations have landed on is the same one that already governs human-written code: whoever accepts it into the repository is responsible for it.

Code review as the new authoring

The pragmatic shift that has actually landed in working teams is that code review becomes the new authoring. That is not a new idea. Senior engineers and tech leads have been “authoring” code they did not personally type for as long as code review has been a practice. What changes with 5GL workflows is the proportion of code where this is true, and the speed at which the authoring decision has to be made.

In the old workflow, the typer and the first reviewer were often the same person, separated by hours or days. The reviewer was thinking about a thing they had been thinking about all morning. With Claude Code, a single afternoon can produce diffs across half a dozen unrelated areas. The reviewer no longer has the luxury of having lived inside the code before reading it. Each diff is its own context-switch.

That makes the review skill more important, not less. Specifically:

  • Reading code with informed skepticism — looking for the wrong abstraction, the unhandled edge case, the unintended API change — was always the heart of senior review. It still is.
  • Knowing when to push back on a “working” diff because the architecture is wrong was always senior judgment. It still is, and the agent will not push back for you.
  • Understanding the system well enough to ask the right questions of the diff was always the moat. The agent does not have that understanding; the human reviewer does.

The teams I see handling this well treat the reviewer-as-author shift explicitly. They run shorter review queues, smaller diffs, more frequent merges. They keep human eyes on every change before it lands. They invest more in tests because tests are how the reviewer’s intent gets pinned down so the agent cannot drift away from it.
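
To make “tests pin the reviewer’s intent” concrete, here is a minimal sketch in Python. The `normalize_username` helper is hypothetical, a stand-in for whatever behavior the reviewer actually cares about; the point is that each assertion turns a review decision into something a later agent run cannot silently reverse.

```python
def normalize_username(raw: str) -> str:
    # Stand-in implementation; in a real repo this is the code under review.
    return raw.strip().lower()


def test_normalize_username_pins_reviewer_intent():
    # Review decision: usernames are case-insensitive and whitespace-trimmed.
    assert normalize_username("  Alice ") == "alice"
    assert normalize_username("BOB") == "bob"
    # Review decision: Unicode is preserved, not ASCII-folded.
    assert normalize_username("Zoë") == "zoë"


if __name__ == "__main__":
    test_normalize_username_pins_reviewer_intent()
    print("intent pinned: all assertions hold")
```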

The teams I see handling this badly treat the agent’s output as if a junior wrote it and skim the review. That works until it does not, which usually means a security or correctness bug landing because the review missed something the agent confidently got wrong.

The specification crisis was always there

The hard part of programming has never been typing the code. It has been figuring out, exactly and unambiguously, what the code should do. Margaret Hamilton coined the term “software engineering” during the Apollo program partly to push back on the idea that programming was just typing: the engineering work happened upstream, in the design and specification of what the typing should produce. The field has spent the sixty years since rediscovering and forgetting and rediscovering this.

3GL workflows hid the problem. When a developer was simultaneously specifying, designing, and typing — usually in the same hour — it was easy to mistake the typing for the work. The specification got written in the developer’s head as the code came out of their fingers. If something was unclear, the developer noticed because the code did not write itself.

5GL workflows expose the problem. The agent will write any code, including code that does the wrong thing, from any specification, including a wrong one. The specification step does not get to hide inside the typing step anymore, because the typing step is no longer where the human is. If the spec is wrong, the result is plausible code that does the wrong thing — and that result looks identical, at first glance, to plausible code that does the right thing. The bugs trace back upstream.
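
A tiny illustration of how an underspecified request yields plausible-but-divergent code. Suppose the instruction was just “remove duplicates from the list.” Both functions below are hypothetical sketches; both satisfy that sentence, both look correct in review, and whether the first one is a bug depends entirely on an ordering requirement the prompt never stated.

```python
def dedupe_unordered(items: list[str]) -> list[str]:
    # Plausible reading 1: duplicates removed, order not promised.
    # Output order varies run to run because of hash randomization.
    return list(set(items))


def dedupe_ordered(items: list[str]) -> list[str]:
    # Plausible reading 2: duplicates removed, first-seen order kept.
    return list(dict.fromkeys(items))


events = ["login", "click", "login", "purchase"]
print(dedupe_unordered(events))  # e.g. ['purchase', 'click', 'login']
print(dedupe_ordered(events))    # always ['login', 'click', 'purchase']
```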

This is the part I think is genuinely new. The specification crisis has always been there, but for most of computing history, it was masked by the difficulty of the typing. Now the typing is easy, the specification is hard, and the cost of getting it wrong is no longer offset by the time the wrongness took to type. The question “did I actually specify what I wanted, or did I just hope it would be obvious?” is one developers now have to answer in the open.

What this means for engineering as a craft

The skills that compound in this environment are the ones senior engineers have always been paid for: specification, decomposition, code review, architecture. The skills that compound less are the ones a junior could pick up with practice: syntax recall, boilerplate writing, mechanical refactoring.

That is uncomfortable to say in a field that has spent the last decade democratizing access to programming. The democratization argument was that learning to code was learning a craft anyone could pick up with effort. That is still true. What is also true, and harder to talk about, is that the specific subset of programming that 5GL tools handle well is the subset that was always the easiest to teach: the typing. The harder subset — the part where you decide what is worth building, decompose it, specify it, review it, and own its operation in production — is the subset that did not get democratized, because it never could be. It is too dependent on the specific system, the specific team, and the specific business context.

5GL tools do not change that. They sharpen it. The senior engineer was always the one who figured out what to build and why, then directed the work. With Claude Code, more of the directing is explicit, and more of the doing is delegated. That is the shape of the change.

There is also a craft question underneath the practical one. Many of us did this work because we liked typing the code. We liked the small satisfaction of a function that compiled the first time, a refactor that made the diff smaller, a test that finally went green. What happens to that craft when the agent does the typing? I do not have a clean answer. I notice that I still get the same satisfaction when the design is right, when the architecture turns out to be the architecture I would have picked, when a tricky problem yields to a clear specification. The satisfaction shifts upstream. It does not vanish.

Practical guidance for working in this world

A few things I have started doing differently as the 5GL workflow has taken hold:

  • Treat the prompt like a design document. If the prompt is not clear enough that I would be comfortable handing it to a new engineer, it is not clear enough for the agent. A sketch of what I mean follows this list.
  • Read every diff. Even when the tests pass and the agent reports success. The review is the only place a human actually sees the code before it lands.
  • Strengthen the test suite. A repo with strong tests is a much better collaborator with an agent than a repo without, because the tests are the agent’s stopping condition.
  • Note attribution honestly. When AI tooling produces code, say so in the commit trailer. A common convention is a Co-Authored-By: line naming the model, and Claude Code adds one by default. That is the right minimum; it lets future-you and future reviewers know which parts of the history involved an agent. An example follows this list.
  • Push back on architecture by hand. The agent will not tell you the architecture is wrong. It will implement whatever architecture you specified, plausibly, even if the architecture should not exist. That decision still belongs to you.
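
Here is a sketch of a prompt written as a design document. Every name in it is invented (the endpoint, the module, the test file); the shape is what matters: a goal, explicit constraints, explicit non-goals, and a checkable definition of done.

```
Goal: add rate limiting to POST /api/invites.

Constraints:
- Reuse the existing TokenBucket in lib/ratelimit.py; no new dependencies.
- Limit: 20 invites per user per hour. Over the limit, return HTTP 429
  with a Retry-After header.
- Do not change the response shape of the success path.

Out of scope: per-organization limits, admin bypass.

Done when: tests in tests/test_invites.py cover the 429 path and the
existing suite passes.
```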
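
And for attribution, a sketch of a commit message with the trailer. The summary line and body are invented; the Co-Authored-By form is the one Claude Code writes by default.

```
Add rate limiting to the invites endpoint

Spec and review by a human; implementation generated with Claude Code
and edited by hand before merge.

Co-Authored-By: Claude <noreply@anthropic.com>
```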

My take

The legal questions about AI-generated code are going to take years to settle, and the answers will vary by jurisdiction. The engineering question, who is responsible for the code that lands in the repo, has a much simpler answer: whoever specified it, reviewed it, and accepted it. That has always been the answer. 5GL tools just make the typing less of the job.

The specification crisis was the part of software engineering nobody wanted to talk about because, for most of the field’s history, the typing was hard enough that it kept the spec problem hidden. With Claude Code and its peers, the spec problem is the visible problem. That is uncomfortable, but I think it is good. It puts the actual hard work of building software where it belongs: upstream of the keyboard.

The rest of this series builds on one idea: if you take the 5GL frame seriously, code review becomes the new authoring. That is the shift worth getting comfortable with first.