The Developer's Guide to Working Better with QA

Written by the Crosscheck Content Team

June 16, 2025 · 11 minute read


The tension between developers and QA is one of the oldest dynamics in software teams. Developers ship code; QA finds problems with it. From a certain angle, that makes QA the adversary — the team whose job is to surface your mistakes. A lot of dev-QA friction comes directly from this framing, and a lot of it is unnecessary.

The teams that ship high-quality software reliably don't treat QA as a gate at the end of the pipeline. They treat it as a collaborative function that runs alongside development — something both roles contribute to, not something one role hands off to the other. Developers who work well with QA understand what QA is actually trying to accomplish, make QA's job easier through the way they write and deliver code, and engage with bug reports as diagnostic information rather than criticism.

This guide covers the practical shifts in mindset and habit that make that collaboration real.


Reframe Quality as Shared Ownership

The first and most important shift is conceptual. Quality is not QA's responsibility. QA is one function in a quality system — but so are developers, product managers, designers, and anyone else who contributes to what ships.

When quality is treated as exclusively QA's domain, a predictable failure mode emerges: developers ship code with known rough edges because "QA will catch it," and QA becomes the single point of failure for an entire product's reliability. This creates a bottleneck, burns out QA engineers, and produces a culture where bugs are someone else's problem until they're formally filed.

Shared ownership means something more concrete:

  • Developers write and maintain automated tests, not because QA can't, but because the people closest to the implementation are best positioned to define its expected behavior.
  • QA engineers think about the system holistically — edge cases, user journeys, cross-browser behavior, accessibility — not just the happy path the developer tested against.
  • Bug reports are treated as team information, not verdicts about individual performance.
  • The goal of the whole team is fewer defects in production, which is different from the goal of any individual role.

This framing changes how every subsequent interaction works. When a bug is filed, it's not an indictment — it's useful data. When QA raises concerns about a design or a specification, it's not obstruction — it's the kind of forward-looking analysis that prevents rework.


Involve QA Before You Write a Single Line of Code

The single highest-leverage habit a developer can build is bringing QA into conversations before implementation begins. Most teams involve QA at the end of a sprint or release cycle — after the feature is built, tested by the developer, and handed over for "QA sign-off." By that point, the cost of changing anything significant is high. Architecture decisions have been made, the API is fixed, the UI behavior is established.

When QA is part of the conversation at the specification or planning stage, things go differently:

Edge cases get identified before they become bugs. A QA engineer reading a spec will immediately start thinking about what happens when the input is empty, when the API returns an error, when two users perform the same action simultaneously, or when a user navigates away mid-flow. These questions are cheap to answer during planning and expensive to answer during a release cycle.

Acceptance criteria get sharper. Vague acceptance criteria — "the form should submit correctly" — produce ambiguous implementations and contested bug reports. When QA reviews criteria before development starts, they tend to surface the ambiguities: correctly according to whom? What constitutes a failure state? What should happen if the user is offline? The resulting criteria are more specific, which means there's less room for interpretive disagreement later.

Test plans start earlier. QA engineers who know what's coming can begin preparing test cases, setting up test data, and identifying dependencies while development is still in progress. This compresses the testing phase and reduces the crunch at the end of the sprint.

In practice, this can be as lightweight as including a QA engineer in the planning meeting, sending them the spec for a five-minute review before tickets are created, or doing a brief kickoff call at the start of a feature. The format matters less than the timing — the earlier QA sees what's being built, the more useful their input becomes.


Write Code That's Testable by Design

Some code is inherently easy to test; some code is a nightmare. The difference isn't usually about the complexity of the feature — it's about how the code is structured.

Keep side effects predictable and isolated

Functions that reach out and touch the world — making network requests, reading from localStorage, writing to a database, modifying global state — are harder to test than functions that take inputs and return outputs. This doesn't mean avoiding side effects (every useful application has them), but it means isolating them so the logic around them can be tested independently.

A function that mixes a business rule calculation with a database write is testing two things at once. Split them: one pure function that computes the result, one function that persists it. The logic can now be unit-tested without a database connection.
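To make the split concrete, here is a minimal sketch. The names (`calculateOrderTotal`, `saveOrderTotal`) and the `db` interface are illustrative, not from any particular codebase:

```javascript
// Pure: takes inputs, returns an output, touches nothing else.
// This can be unit-tested with no database, network, or mocks.
function calculateOrderTotal(items, discountRate = 0) {
  const subtotal = items.reduce(
    (sum, item) => sum + item.price * item.quantity,
    0
  );
  return subtotal * (1 - discountRate);
}

// Impure: persists the result. Kept deliberately thin so there is
// almost no logic left here to test. (db is a hypothetical interface.)
async function saveOrderTotal(orderId, total, db) {
  await db.orders.update(orderId, { total });
}
```

The calculation can now be exercised with plain assertions, while the persistence function is simple enough that an integration test (or a stubbed `db`) covers it.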

Use dependency injection

Hardcoding dependencies inside a function or class makes it impossible to swap them out in tests. When a function instantiates its own HTTP client, its own logger, or its own time source, you can't control those in a test environment. Pass dependencies in instead:

// Hard to test — createOrder calls fetch directly
async function createOrder(cart) {
  const response = await fetch('/api/orders', { method: 'POST', body: JSON.stringify(cart) });
  return response.json();
}

// Testable — the HTTP client is injected
async function createOrder(cart, httpClient) {
  const response = await httpClient.post('/api/orders', cart);
  return response.data;
}

In tests, you pass a mock that returns controlled responses. In production, you pass the real client. The logic is the same; the testability is dramatically different.
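As a sketch of what that test might look like: the stub below is hand-rolled and hypothetical; it only needs to match the `post(url, body)` shape the injected client is expected to have, so no mocking library is required.

```javascript
// The injected-client version from above.
async function createOrder(cart, httpClient) {
  const response = await httpClient.post('/api/orders', cart);
  return response.data;
}

// A test double that records each call and returns a canned response.
async function testCreateOrder() {
  const calls = [];
  const stubClient = {
    post: async (url, body) => {
      calls.push({ url, body });
      return { data: { id: 'order-123', status: 'created' } };
    },
  };
  const order = await createOrder({ items: [{ sku: 'ABC' }] }, stubClient);
  return { order, calls };
}
```

The test can now assert both on the returned order and on how the client was called, without any network involved.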

Avoid hidden global state

Functions that read from or write to global variables — whether that's window, module-level variables, or a singleton service — create implicit dependencies between tests. Test A leaves some state that affects Test B. The test passes in isolation and fails in a suite, or passes in one order and fails in another.

If state needs to be shared across components, make it explicit — pass it as arguments, use a context system, or manage it in a store that can be reset between tests.
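One common pattern is to create shared state through a factory rather than a module-level singleton, so each test builds a fresh instance. A minimal sketch, with a hypothetical cart store:

```javascript
// State lives inside the closure; nothing is module-global.
function createCartStore() {
  let items = [];
  return {
    add(item) { items.push(item); },
    count() { return items.length; },
    reset() { items = []; },
  };
}

// Two independent instances: mutations to one never affect the other,
// so tests that each create their own store cannot interfere.
const storeA = createCartStore();
storeA.add({ sku: 'ABC' });
const storeB = createCartStore();
```

In a test suite, each test calls `createCartStore()` (or `reset()`) in its setup, which eliminates the pass-in-isolation, fail-in-a-suite failure mode described above.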

Write descriptive error messages

This is often overlooked but makes a significant difference for QA: when something goes wrong, say what went wrong. A generic Error: request failed is almost useless for reproducing a bug. An error message like Error: createOrder failed — cart is empty, expected at least one item tells the QA engineer exactly what condition was hit, which dramatically accelerates triaging.
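A sketch of what this looks like in validation code; `validateCart` is a hypothetical helper, but the error strings follow the pattern above:

```javascript
// Fail with a message that names the function and the exact condition hit,
// so a QA engineer reading a console log knows what to reproduce.
function validateCart(cart) {
  if (!cart || !Array.isArray(cart.items)) {
    throw new Error('createOrder failed — cart.items is missing or not an array');
  }
  if (cart.items.length === 0) {
    throw new Error('createOrder failed — cart is empty, expected at least one item');
  }
  return true;
}
```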


Make Bug Reports Easy to File — and Respond to Them Well

A lot of developer frustration with bug reports comes from reports that are too vague to act on. "The checkout page is broken" is not a bug report. "When I add three items to the cart, navigate to checkout, and click 'Place Order' on Safari 17, the spinner never resolves and the order is not created" is a bug report.

The solution isn't to criticize QA for unclear reports. The solution is to make filing high-quality reports easy, and to build a culture where both sides treat the back-and-forth as a diagnostic conversation rather than a blame game.

From the developer side

Read bug reports charitably. A bug report is a description of observed behavior that doesn't match expected behavior. It's not an evaluation of your skill or care. Treat it as diagnostic information and respond with questions that help you understand the problem, not defensiveness about the implementation.

Ask for what you need to reproduce it, not to challenge it. If a report is missing a step or doesn't specify the environment, ask for it in a way that makes it easy to provide: "Could you share the browser version and OS? And were you logged in as a regular user or an admin when this happened?" Compare this to "Are you sure this is a bug? I can't reproduce it" — both might be looking for the same information, but one treats the reporter as a collaborator and one treats them as a suspect.

Close the loop. When you fix a bug, comment on it with what the root cause was and what the fix does. This gives QA context for retesting (they know what changed and can verify it specifically), and it builds shared understanding of the codebase over time. QA engineers who understand why bugs happen are better at finding them.

Don't mark bugs as "by design" to avoid fixing them. If the behavior in question is genuinely intended, explain why and what the intended behavior is. If it's not documented anywhere, the reporter had no way to know it was intentional. "By design" without explanation is a conversation stopper, not a resolution.

From the QA side

This article is addressed to developers, but worth acknowledging: better bug reports make everything easier. The gold standard of a bug report includes exact reproduction steps, expected vs. actual behavior, environment details (browser, OS, device, user role), and — ideally — a recording of the failure.

This is exactly the kind of detail that takes minutes to assemble after the fact from memory, but is captured automatically when the right tooling is in place.


Pair Testing: The Fastest Way to Close the Knowledge Gap

One of the most effective and underused practices in dev-QA collaboration is pair testing — where a developer and a QA engineer test a feature together, usually shortly before or after it's ready for formal QA.

The session typically works like this: the developer walks through the feature explaining what it does, how the edge cases are handled, and what assumptions the implementation makes. The QA engineer asks questions, tries things the developer didn't try, and flags anything that looks wrong. It's a live handoff rather than an async one.

Pair testing works because it closes a knowledge gap responsible for a large share of unnecessary bug reports. The developer knows why the code does what it does. The QA engineer knows how users approach the interface. Neither has the full picture, and in a typical handoff, neither ever does — QA files bugs about behavior the developer considers intentional, and the developer fixes the wrong thing because the reproduction case is missing context only they have.

In a pair testing session, those gaps surface and close in real time. Misalignments get resolved in minutes instead of through a multi-day back-and-forth on a ticket. The QA engineer learns enough about the implementation to write better test cases; the developer learns enough about QA's mental model to anticipate the kinds of issues that get filed.

You don't need to pair-test every feature. Reserve it for complex flows, features with significant new behavior, or anything that's historically produced a high volume of confusing bug reports. Even one session every couple of weeks builds understanding that pays dividends across every interaction.


Share the Context That QA Needs to Test Effectively

Developers have information that QA needs and often doesn't have: what changed, why it changed, what the risky areas are, and what was tested during development. When QA goes into a testing session without this context, they're exploring blind — which is valuable for exploratory testing but inefficient for structured validation of a specific change.

A few habits that make a real difference:

Write better PR and release notes. A pull request description that says "fixes checkout bug" tells QA nothing useful. One that says "fixes a race condition in the order submission handler — when two requests fired within 200ms, the second would fail silently; now queued correctly" tells QA exactly what to test, under what conditions, and what the expected behavior is.

Flag the blast radius. If a change touches something beyond the obvious area — a shared utility that's used in three other places, a change to an API response shape that affects multiple consumers — say so. QA can't test what they don't know is affected.

Share your test cases. If you wrote automated tests for a feature, share them with QA. Not as a substitute for their testing, but as a map of what you verified. They can build on it and focus their manual testing on the areas the automated tests don't cover.

Be available during the testing cycle. When QA is testing a feature you built, be reachable for quick questions. A two-minute Slack exchange — "is this the expected behavior when the field is empty?" — saves a bug ticket, a back-and-forth, a fix, and a retest cycle. Set the expectation that QA can interrupt you with quick questions during the testing window.


Build Feedback Loops That Close Quickly

The longer the gap between introducing a bug and catching it, the more expensive the fix. A bug caught in code review is a comment. A bug caught in development testing is an edit. A bug caught in QA is a ticket, a fix, and a retest. A bug caught in production is all of that plus incident response, user impact, and the reputational cost.

This is why the most effective dev-QA collaborations don't wait until the end of a sprint to run testing. They build testing into the development flow at multiple stages:

  • Unit and integration tests run on every commit, catching regressions immediately.
  • Developers test their own work against acceptance criteria before handing it to QA — not to do QA's job, but to filter out the obvious issues that slow down the QA cycle.
  • QA gets access to features in progress, not just after a formal handoff — a staging environment that reflects current development lets QA start testing early and provide feedback while changes are still cheap.
  • Bug triage happens frequently, not in a batch at the end of the release — issues surface and get prioritized quickly, rather than piling up into a pre-release crunch.

The goal of each of these is the same: move the detection point earlier. Every hour you spend moving bug detection upstream saves multiples of that time downstream.


What Good Dev-QA Collaboration Looks Like in Practice

Teams that do this well don't have a secret framework or a special process. They have a few shared habits that make the collaboration work:

They talk to each other early. QA is in planning. Developers and QA align on acceptance criteria before work starts. Nobody discovers what "done" means after the fact.

They share information proactively. Developers document what changed and what's risky. QA shares test plans so developers know what's being verified and can flag gaps.

They treat bug reports as shared data. A bug is a fact about the system, not a judgment about an individual. Both sides engage with it constructively.

They build tooling that makes the collaboration easier. The right tools reduce friction: environments that are easy to access, test data that's easy to set up, bug reporting that captures context automatically.

That last point is worth dwelling on. A significant source of dev-QA friction is low-quality bug reports — not because QA doesn't care about quality, but because assembling all the relevant context manually is tedious and incomplete. Screenshots don't capture what happened just before the bug. Written reproduction steps miss the environmental detail that turns out to matter. Console errors and network requests get lost unless someone knows to open DevTools before trying to reproduce.


How Crosscheck Supports Better Dev-QA Collaboration

Crosscheck is a browser extension built to eliminate the information gap that slows down dev-QA handoffs. When a QA engineer finds a bug, a single click captures everything: a screenshot, a full screen recording of the session, every console log and error, and a complete network request log — all bundled into one shareable report.

For developers receiving bug reports, this changes the experience entirely. Instead of "the checkout page is broken" with a screenshot of a spinner, they get the full session recording showing exactly what was clicked and in what order, the console errors that fired during the failure, and the network requests that show whether the API call was made, what it returned, and how long it took. Everything needed to reproduce and diagnose the issue is already there.

For QA engineers, it removes the most tedious part of filing a detailed bug report. They don't have to remember to open DevTools before testing. They don't have to manually copy console output or note which network calls failed. They don't have to write a precise sequence of twelve reproduction steps from memory. Crosscheck captures the session automatically, and the bug report documents itself.

The instant replay feature is particularly valuable for intermittent bugs — the kind that appear once during an exploratory session and never cooperate again. Because Crosscheck continuously buffers the session, clicking capture after a bug appears retroactively captures everything that led up to it, even if you weren't actively recording.

Better tooling doesn't replace the collaboration habits covered in this article — early involvement, shared ownership, pair testing, clear communication. But it removes the friction that makes good habits hard to maintain under deadline pressure. When filing a complete bug report takes one click instead of twenty minutes, more bugs get complete reports. When developers receive the full technical context with every bug, fewer bugs get stuck in back-and-forth over missing details.

If your team is looking to improve the quality of the dev-QA handoff without adding process overhead, try Crosscheck for free. It works in any browser, requires no configuration, and fits into the workflow your team already has.
