Debugging Test Runs

When tests fail or behave unexpectedly, Long Horizon gives you three powerful tools: slow down execution to watch what happens, step through one action at a time, or jump directly to a specific point in a previous run.

Run profiles

Choose how fast Long Horizon executes tests:

Normal

Full-speed execution. Use it for regular test runs when you expect things to pass.

Slow mode

Adds a delay between each action so you can visually follow what's happening. Great for spotting timing issues, animation problems, or race conditions that are invisible at full speed.

Step mode

Pauses after each action until you click continue. Inspect the DOM, check network requests, or examine console output between steps.
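Conceptually, the three profiles differ only in what happens between actions: normal runs straight through, slow mode sleeps, and step mode blocks until you continue. Here is a minimal sketch of that idea; all names are illustrative, not Long Horizon's actual API:

```python
import time
from typing import Callable, Iterable

def run_actions(actions: Iterable[Callable[[], None]],
                profile: str = "normal",
                delay: float = 0.5,
                wait_for_continue: Callable[[], None] = lambda: None) -> int:
    """Execute test actions under a run profile (illustrative sketch).

    profile: "normal" (full speed), "slow" (pause `delay` seconds between
    actions), or "step" (block on `wait_for_continue`, e.g. a UI button
    press, before moving on). Returns the number of actions executed.
    """
    count = 0
    for action in actions:
        action()
        count += 1
        if profile == "slow":
            time.sleep(delay)
        elif profile == "step":
            wait_for_continue()
    return count
```

For example, `run_actions(actions, profile="slow", delay=0.1)` would give you a tenth of a second between actions to watch the page update.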

Take me here

Found a failure deep in a test run? Instead of replaying the entire scenario manually, Take me here fast-forwards to that exact point.

  1. Open a previous test execution in the viewer
  2. Find the step where things went wrong
  3. Click Take me here in the Actions menu
  4. Long Horizon replays up to that point, then pauses in step mode

From there, you can inspect the browser state, continue step-by-step, or run through the rest of the test.

Debugging workflow

Here's a typical approach to debugging a failing test:

  1. Review the failure — Check the error message and which step failed
  2. Try slow mode — Rerun the test in slow mode to see if timing is the issue
  3. Use Take me here — Jump to just before the failure to inspect the state
  4. Step through — Advance one action at a time, checking the DOM and console after each
  5. Fix and verify — Make your fix, then run in normal mode to confirm

Common issues

  • Test passes in slow mode but fails normally: You have a timing issue. The app needs more time to render or an API call hasn't completed. Add explicit waits or check for loading states.
  • Element not found: The selector might be wrong, or the element hasn't rendered yet. Use step mode to see what's actually on the page.
  • Test is flaky: Run it multiple times in slow mode. Flakiness often comes from race conditions, network timing, or animations.
  • Take me here doesn't work: Make sure your dev server is running and the app URL is correct. The browser needs to load your app to replay the test.
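For the timing case above, the usual fix is an explicit wait that polls for a condition instead of assuming the app is ready. A generic polling helper looks like this — a sketch of the technique, not a Long Horizon built-in:

```python
import time
from typing import Callable

def wait_for(condition: Callable[[], bool],
             timeout: float = 5.0,
             interval: float = 0.1) -> bool:
    """Poll `condition` until it returns truthy or `timeout` elapses.

    Returns True if the condition passed, False on timeout. Prefer this
    over a fixed sleep: the test waits exactly as long as it needs to.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(interval)
    return False
```

You would call it with your own readiness check, e.g. `wait_for(lambda: spinner_is_gone())`, where `spinner_is_gone` is whatever "loaded" means in your app.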

Manual test session control

While AI agents typically create test sessions and plan which tests to run, you can take manual control when needed.

Create a test session manually

Sometimes you want to run specific tests without involving the AI agent. Click New Session to create an empty test session, then pick which tests to include from your test library. This is useful when you know exactly what needs testing and want to skip the planning step.

Add tests to an existing session

After a session runs, you might realize a test was missing from the plan. Instead of creating a new session, click Add Test within the current session to include additional tests. This keeps related test runs grouped together and helps catch regressions the agent didn't anticipate.

Manual session control is particularly helpful when:

  • You're debugging a specific feature and want to run targeted tests
  • The agent's test plan missed an edge case you want to verify
  • You're doing exploratory testing and want to track results in one place
  • You need to re-run a subset of tests without regenerating the entire plan

Tips

  • Keep your dev server running during debugging sessions—Long Horizon needs it to load your app
  • Use browser DevTools alongside Long Horizon for deeper inspection (Network tab, Console, Elements)
  • If a test consistently fails at the same step, the issue is likely in your app code, not the test
  • Screenshot comparisons in reports can reveal visual differences you might miss watching live
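Since Take me here and debugging in general both depend on a reachable dev server, a quick programmatic check can save a confusing session. A small sketch using only the Python standard library — the URL is an example, substitute your own:

```python
from urllib.request import urlopen
from urllib.error import HTTPError, URLError

def dev_server_up(url: str = "http://localhost:3000", timeout: float = 2.0) -> bool:
    """Return True if something answers an HTTP request at `url`."""
    try:
        with urlopen(url, timeout=timeout):
            return True   # got a 2xx/3xx response
    except HTTPError:
        return True       # server answered, even if with an error page
    except (URLError, OSError):
        return False      # connection refused, DNS failure, or timeout
```

If this returns False, start your dev server and confirm the app URL before rerunning the test.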