
Coding Agents Are Turning Developers Into QA Engineers

The new AI-assisted development workflow has developers spending less time writing code and more time testing. Plan the feature, let AI implement it, then manually verify every flow works. Welcome to your new job as a QA engineer.


Long Horizon Team

Engineering

Something strange is happening to software developers. The job that used to be primarily about writing code is quietly transforming into something else entirely. With AI coding agents handling implementation, developers are spending an increasing amount of their time doing what QA engineers have always done: testing.

The New Development Workflow

The modern AI-assisted development workflow looks nothing like what we learned in school or practiced five years ago. Here's what a typical feature development cycle looks like now:

  • Plan the feature. You think through the requirements, edge cases, and how it should integrate with existing systems. This is still deeply human work.
  • Describe it to the AI. You write a prompt or spec explaining what you want. The better your description, the better the output.
  • AI implements it. The coding agent writes the components, API calls, state management, error handling—sometimes hundreds of lines in minutes.
  • You test everything. And this is where you spend most of your time now.

The Testing Explosion

When you write code yourself, you naturally test as you go. You run the app, see if your change works, fix the obvious issues, and move on. You have an intuitive sense of what might break because you wrote every line.

With AI-generated code, that intuition disappears. The code looks reasonable, follows patterns, and probably works for the happy path. But what about everything else? You find yourself creating elaborate testing scenarios:

  • New user vs. existing user. Does the onboarding flow work? What about users who've been around for years?
  • Free tier vs. paid tier. Are the feature gates working? Does upgrading unlock the right things?
  • Empty state vs. populated state. What does the dashboard look like with no data? With thousands of items?
  • Admin vs. regular user. Are permissions being enforced correctly?
  • Mobile vs. desktop. Does the responsive design actually work?
  • Error states. What happens when the API fails? When the network is slow?

Each scenario requires logging into different accounts, setting up specific conditions, and manually clicking through flows. It's tedious, time-consuming, and absolutely necessary.
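The combinatorics behind that tedium are easy to make concrete. A minimal sketch of the resulting test matrix, using `itertools.product` — the dimension names and values are illustrative, not taken from any real test suite:

```python
# Enumerate every combination of the testing dimensions listed above.
# Dimension names and values are illustrative assumptions.
from itertools import product

DIMENSIONS = {
    "user": ["new", "existing"],
    "tier": ["free", "paid"],
    "data": ["empty", "populated"],
    "role": ["admin", "regular"],
    "viewport": ["mobile", "desktop"],
    "network": ["ok", "api_failure", "slow"],
}

def test_matrix(dimensions):
    """Yield one scenario dict per combination of dimension values."""
    keys = list(dimensions)
    for values in product(*dimensions.values()):
        yield dict(zip(keys, values))

scenarios = list(test_matrix(DIMENSIONS))
print(len(scenarios))  # 2*2*2*2*2*3 = 96 combinations to click through by hand
```

Even with only two or three values per dimension, the matrix runs to dozens of scenarios — far more than anyone actually clicks through manually, which is exactly where coverage quietly erodes.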

The Regression Anxiety

Here's the part that keeps developers up at night: it's not just about testing the new feature. Every change could potentially break something else. AI agents don't have perfect context about your entire codebase. They might refactor something that has subtle dependencies elsewhere.

So now you're not just testing the new feature—you're regression testing everything that might be affected. Does the checkout flow still work? What about that edge case in the settings page? The notification system?

This is exactly what QA engineers have always dealt with. The difference is that developers used to be somewhat insulated from this burden. Now it's landing squarely on their shoulders.

The Account Juggling Act

One of the most absurd parts of this new workflow is the account management. To properly test a feature, you might need:

  • A brand new account to test first-time user experience
  • An account on the free tier with usage limits nearly reached
  • A paid account with full access
  • An admin account to verify permissions
  • An account with specific data conditions (empty, full, corrupted)

Developers are maintaining spreadsheets of test accounts, constantly logging in and out, clearing cookies, using incognito windows. It's a far cry from the "write code, ship it" workflow we imagined.
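That spreadsheet is really a small database begging to be code. A minimal sketch of a test-account registry — the account names, fields, and `select()` helper are all illustrative assumptions, not any particular team's setup:

```python
# Hypothetical test-account registry: the structured version of the
# spreadsheet of logins developers end up maintaining by hand.
from dataclasses import dataclass

@dataclass(frozen=True)
class TestAccount:
    email: str
    tier: str        # "free" or "paid"
    role: str        # "admin" or "regular"
    data_state: str  # "empty", "full", or "corrupted"
    notes: str = ""

ACCOUNTS = [
    TestAccount("fresh@example.test", "free", "regular", "empty", "first-time UX"),
    TestAccount("nearly-capped@example.test", "free", "regular", "full", "usage limits ~95%"),
    TestAccount("pro@example.test", "paid", "regular", "full"),
    TestAccount("admin@example.test", "paid", "admin", "full", "permission checks"),
    TestAccount("broken@example.test", "free", "regular", "corrupted", "recovery paths"),
]

def select(tier=None, role=None, data_state=None):
    """Return every account matching all of the filters that were given."""
    return [
        a for a in ACCOUNTS
        if (tier is None or a.tier == tier)
        and (role is None or a.role == role)
        and (data_state is None or a.data_state == data_state)
    ]

print(select(tier="free", data_state="empty")[0].email)  # fresh@example.test
```

Once the accounts are data rather than tribal knowledge, picking the right login for a scenario becomes a lookup instead of a memory exercise — and the same registry can later be handed to an automated test runner.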

Why This Isn't Sustainable

The irony is thick: AI was supposed to make developers more productive, and in some ways it has. Features get implemented faster than ever. But the testing burden has grown to match—or exceed—the time saved on implementation.

This creates a few problems:

  • Developer burnout. Most developers didn't sign up to spend their days clicking through test scenarios. The creative, problem-solving aspects of the job are being squeezed out.
  • Inconsistent coverage. Manual testing is inherently inconsistent. Some scenarios get tested thoroughly, others get skipped when you're tired or rushed.
  • Velocity ceiling. You can only click so fast. The testing bottleneck caps how quickly features can actually ship.

The Path Forward: Agentic Testing

If AI agents can write the code, why can't they also do the testing? This is the logical next step, and it's where the industry is heading.

Imagine describing your test scenarios in natural language—"test the checkout flow as a new free user, then as an existing paid user"—and having an AI agent execute those tests, handle the account switching, and report back with evidence of what worked and what didn't.

This is what agentic testing enables. The same AI capabilities that revolutionized code writing can revolutionize code testing. Instead of developers manually clicking through scenarios, they describe what needs to be tested and review the results.
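To make that concrete, here is one way such a spec *could* look — a hypothetical sketch, not the API of any real platform (ours included): the `Scenario` type and `plan()` expansion are assumptions for illustration.

```python
# Hypothetical sketch: expanding a short declarative request
# ("test these flows as these users") into runnable agent scenarios.
from dataclasses import dataclass

@dataclass
class Scenario:
    description: str  # natural-language intent handed to the agent
    account: str      # which test account to run as
    flow: str         # the user flow under test

def plan(flows, accounts):
    """Cross each flow with each account profile into one scenario apiece."""
    return [
        Scenario(f"{flow} flow, {account} user", account, flow)
        for flow in flows
        for account in accounts
    ]

scenarios = plan(["checkout"], ["new free", "existing paid"])
print(scenarios[0].description)  # checkout flow, new free user
```

The point of the sketch is the shape of the interaction: the human supplies the flows and user profiles once, and the expansion, account switching, and execution become the agent's problem rather than the developer's afternoon.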

Reclaiming the Developer Role

The goal isn't to eliminate testing—it's to automate the tedious parts so developers can focus on what they're actually good at: designing systems, solving complex problems, and making architectural decisions.

With proper agentic testing in place, the workflow becomes:

  • Plan the feature (human creativity)
  • AI implements it (automated)
  • AI tests it across all scenarios (automated)
  • Review the evidence and ship (human judgment)

This is the workflow that actually delivers on the promise of AI-assisted development. Not developers becoming QA engineers, but AI handling both implementation and verification while humans focus on direction and decisions.

At Long Horizon, we're building the tools to make this possible. Our agentic testing platform handles the scenario creation, account management, and execution that's currently eating up developer time. The result is comprehensive test reports that give you confidence to ship without the manual grind.

Developers shouldn't have to become QA engineers. They should have AI QA engineers working alongside their AI coding agents.
