Overview
A request for greater reasoning transparency in Auto Assist (Co-pilot) for procedure designers and QA testers.
Currently, the "Why this suggestion was generated" tooltip explains what the AI decided, but not the full reasoning chain behind it. This affects anyone building, testing, or maintaining Auto Assist procedures.
Problem Statement
When the AI deviates from expected procedure steps, there is no way to query which specific element — a tag, a phrase, a piece of ticket context — caused the deviation. Diagnosis currently requires slow, repeated trial-and-error testing.
Business Impact
During a recent beta procedure rollout, diagnosing an issue where the AI was conflating a shipping tag with warranty eligibility required 6+ hours of testing across 30+ test tickets. Each test required creating a new ticket, reproducing the exact conditions, and reverse-engineering the AI's decision from the tooltip summary. A transparency tool would have surfaced the root cause in minutes.
Current Workarounds
Relying on the "Why this suggestion was generated" tooltip, which provides a useful but high-level summary. It does not break down how individual tags, procedure conditions, or cross-procedure context contributed to a decision.
Ideal Solution
An interactive reasoning interface where admins can ask natural-language questions about a suggestion — e.g., "Why did you skip Step 4?" or "How did this tag affect your decision?" — and receive a step-by-step breakdown of the AI's decision logic for that specific suggestion.