How to Use Solidroad's Manual QA

Last updated: March 28, 2026

Manual QA lets QA managers distribute real customer conversations to human reviewers for scoring. While Auto QA relies entirely on AI, Manual QA puts human judgment at the center — with optional AI assistance to speed up the process.


1. Creating a Manual QA Evaluation

Step 1: Choose the evaluation type

Go to Quality in the left-hand nav and click Create Evaluation. When prompted, select Manual QA as the evaluation type.


Step 2: Configure your evaluation

Everything is set up on a single page, covering three areas:


Setup & Filters

  • Give your evaluation a name and choose a data source (e.g. Intercom)

  • Select the scorecard you want reviewers to use

  • Add conversation filters to narrow down which conversations get pulled in (e.g. only conversations with more than 3 parts)
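
If it helps to picture what a filter does, here is a minimal sketch of how a rule like "more than 3 parts" narrows the pull. The field names and filter shape are illustrative assumptions, not Solidroad's actual schema or API:

    # Illustrative only: the "parts" field and the filter shape are assumptions,
    # not Solidroad's actual data model.
    conversations = [
        {"id": "c1", "parts": 2},
        {"id": "c2", "parts": 5},
        {"id": "c3", "parts": 8},
    ]

    def matches(convo, min_parts=3):
        """Keep conversations with more than min_parts message parts."""
        return convo["parts"] > min_parts

    pulled = [c for c in conversations if matches(c)]
    print([c["id"] for c in pulled])  # ['c2', 'c3']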


Assignment & Scheduling

  • Choose a frequency: one-off (runs immediately) or recurring (weekly, biweekly, or monthly)

  • For recurring evaluations, set the day, time, timezone, and how far back to look for conversations
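
As a rough mental model, a recurring schedule boils down to a handful of settings. The field names below are illustrative, not Solidroad's actual configuration format:

    # Hypothetical representation of a recurring schedule; field names are
    # illustrative, not Solidroad's configuration format.
    schedule = {
        "frequency": "weekly",        # one-off | weekly | biweekly | monthly
        "day": "Monday",              # day the run kicks off
        "time": "09:00",              # local run time
        "timezone": "Europe/Dublin",
        "lookback_days": 7,           # how far back to pull conversations
    }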

Distribution Mode

This controls how conversations are divided across reviewers:

  • Per Reviewer: you set how many conversations each reviewer gets. Example: 10 per reviewer × 2 reviewers = 20 conversations pulled in total.

  • All Matching: you set the total number of conversations, which is split evenly across reviewers. Example: 10 total ÷ 2 reviewers = 5 each.
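
The arithmetic behind the two modes is straightforward; here is a minimal sketch (plain Python, purely illustrative):

    # Purely illustrative arithmetic for the two distribution modes.
    reviewers = 2

    # Per Reviewer: you set the count each reviewer gets.
    per_reviewer = 10
    total_pulled = per_reviewer * reviewers    # 10 x 2 = 20 conversations pulled

    # All Matching: you set the total, split evenly across reviewers.
    total = 10
    per_person = total // reviewers            # 10 / 2 = 5 each

    print(total_pulled, per_person)            # 20 5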

Once configured, select which team members will act as QA reviewers and save.


2. The Reviewer Inbox

Reviewers find and complete their assigned work in the Inbox.

Accessing it: Click Inbox in the left-hand nav. The number shown next to it is the count of incomplete reviews waiting for that reviewer.


Inside the inbox:

  • See all assigned conversations across every evaluation

  • Filter by evaluation name, status, or agent

  • Sort by date assigned

  • The default view shows Assigned (not yet submitted) reviews, oldest first — so reviewers always work through the backlog in order
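
Conceptually, the default view is just a status filter plus a sort. A small sketch, using assumed field names rather than Solidroad's actual data model:

    from datetime import date

    # Assumed inbox fields ("status", "assigned_on"); purely illustrative.
    inbox = [
        {"id": "r1", "status": "Assigned",  "assigned_on": date(2026, 3, 20)},
        {"id": "r2", "status": "Completed", "assigned_on": date(2026, 3, 18)},
        {"id": "r3", "status": "Assigned",  "assigned_on": date(2026, 3, 15)},
    ]

    # Default view: only not-yet-submitted reviews, oldest first.
    default_view = sorted(
        (r for r in inbox if r["status"] == "Assigned"),
        key=lambda r: r["assigned_on"],
    )
    print([r["id"] for r in default_view])  # ['r3', 'r1']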


3. Scoring a Conversation

Clicking into any conversation opens the scoring view, which is split into two panels with a top bar above them:

  • Left panel — the full conversation transcript

  • Right panel — the scorecard, broken into sections

  • Top bar — shows which agent is being reviewed


Scoring each section:

  • Each section has a numeric score range (0 to the section maximum)

  • Special options: N/A (excludes the section from the total score) or Scorecard Fail (marks a critical failure)

  • Scores are saved automatically as drafts as you go — no risk of losing progress
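
To make the N/A option concrete, here is a minimal sketch of how excluding a section might affect the total. The section names and percentage math are illustrative assumptions, not Solidroad's exact scoring logic; a Scorecard Fail would additionally flag the review as a critical failure:

    # Illustrative scoring sketch; section names and layout are assumptions,
    # not Solidroad's exact implementation.
    sections = [
        {"name": "Greeting",   "score": 4,     "max": 5},
        {"name": "Resolution", "score": "N/A", "max": 10},  # excluded from the total
        {"name": "Tone",       "score": 3,     "max": 5},
    ]

    # N/A sections drop out of both the achieved and the possible points.
    scored = [s for s in sections if s["score"] != "N/A"]
    achieved = sum(s["score"] for s in scored)
    possible = sum(s["max"] for s in scored)
    print(f"{achieved}/{possible} = {100 * achieved / possible:.0f}%")  # 7/10 = 70%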


Using AI Assist:

Two AI tools are available to help reviewers work faster:

  • AI Score — generates suggested scores for all sections automatically

  • AI Reasoning — shows the AI's rationale for each score

Reviewers can accept the AI's suggestions, adjust individual scores, or override entirely. The final call is always theirs.
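
One way to think about AI Assist: the suggested scores are a starting point, and anything the reviewer enters wins. A toy sketch, not Solidroad's implementation:

    # Toy sketch: AI-suggested scores are a starting point; reviewer entries win.
    ai_suggested = {"Greeting": 5, "Resolution": 8, "Tone": 4}
    reviewer_overrides = {"Resolution": 6}   # the reviewer disagreed on one section

    final_scores = {**ai_suggested, **reviewer_overrides}
    print(final_scores)  # {'Greeting': 5, 'Resolution': 6, 'Tone': 4}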


4. Submitting a Review

When all required sections have been scored, click Submit. Solidroad will:

  1. Calculate the final score for that conversation

  2. Mark the review as complete

  3. Automatically advance to the next pending review in the queue


After submission, the review becomes read-only and displays the final scores along with any auto-fail or scorecard-fail flags that were triggered.
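
Put together, submission is a small state change plus a hop to the next item in the queue. A hypothetical sketch, leaving out the score calculation already sketched in section 3; statuses and field names are illustrative, and the queue is assumed to advance oldest-first, matching the inbox default:

    # Hypothetical sketch of the submit flow; not Solidroad's actual implementation.
    def submit_review(review, queue):
        review["status"] = "Completed"                 # the review becomes read-only
        pending = [r for r in queue if r["status"] == "Assigned"]
        pending.sort(key=lambda r: r["assigned_on"])   # assumed oldest-first ordering
        return pending[0] if pending else None         # what the reviewer sees next

    queue = [
        {"id": "r1", "status": "Assigned", "assigned_on": 2},
        {"id": "r2", "status": "Assigned", "assigned_on": 1},
    ]
    print(submit_review(queue[0], queue)["id"])  # 'r2'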


5. Monitoring Progress (For Managers)

Open any Manual QA evaluation to see a full progress breakdown:

  • Assignment status — how many reviews are Assigned, In Progress, or Completed

  • Per-reviewer progress — individual completion rates and conversation-level scores for each reviewer

  • Configuration summary — a right-panel recap of the evaluation's type, schedule, data source, scorecard, and filters
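
The per-reviewer numbers are essentially counts over assignment statuses. An illustrative sketch with assumed field names, not Solidroad's reporting API:

    from collections import Counter

    # Illustrative reviews; statuses mirror the ones named above.
    reviews = [
        {"reviewer": "Alice", "status": "Completed"},
        {"reviewer": "Alice", "status": "Assigned"},
        {"reviewer": "Bob",   "status": "Completed"},
        {"reviewer": "Bob",   "status": "Completed"},
    ]

    # Assignment status breakdown for the whole evaluation.
    print(dict(Counter(r["status"] for r in reviews)))  # {'Completed': 3, 'Assigned': 1}

    # Per-reviewer completion rate.
    for name in ["Alice", "Bob"]:
        mine = [r for r in reviews if r["reviewer"] == name]
        done = sum(r["status"] == "Completed" for r in mine)
        print(name, f"{done}/{len(mine)} completed")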


💡 All Manual QA data flows into Solidroad's reporting dashboard — the same place you'd find your Auto QA results.