[N.B. I posted this same request a week ago but the post never showed up, so I'm reposting it…]
Please give a quick overview of your product feature request or feedback and note who in your org is affected by this issue [ex. agents, admins, customers, etc.]. (2-3 sentences)
The internal note posted to the ticket thread when the end-user clicks “No, I still need help” in response to an autoreply from AI Agent Essentials is inaccurate and should be changed. Our Agents are quite perturbed by the way this message shows up.
What problem do you see this solving? (1-2 sentences)
Recently, AI Agent Essentials autoreplies were updated to prompt the end-user “Did this resolve your issue?”, with “Yes, solve my request” and “No, I still need help” options. If the user clicks either button, an internal note is added to the ticket stream (and the ticket is Solved for a “Yes” response). The internal note used for the “No” case is inaccurate and frustrating to our Agents.
When was the last time you were affected by this lack of functionality, or specific tool? What happened? How often does this problem occur and how does this impact your business? (3-4 sentences)
This occurs any time the end-user clicks the “No” option. When a user clicks “No, I still need help”, what winds up in the ticket stream is “The requester marked the AI agent's reply as not helpful.” But that isn't what the requester did at all. The requester said that they still need help, not that the AI Agent's reply was unhelpful.
A significant portion of our tickets, perhaps 50%, involve issues that a live Agent must address on our end (e.g., issues with the user's configuration in our system). Our Help Center clearly documents self-help options for our end-users, and tells them plainly when an issue requires us to fix it for them. The AI Agent's response likewise explains when the end-user needs to wait for one of our Agents to help them.
It is entirely appropriate for the user to click “No, I still need help” in response to the autoreply, and it is wrong to presume that clicking this option means the AI Agent was unhelpful. This discrepancy is upsetting my Agents.
Are you currently using a workaround to solve this problem? (If yes, please explain) (1-2 sentences)
No workaround exists, as we have no control over these internal notes.
What would be your ideal solution to this problem? How would it work or function? (1-2 sentences)
The internal note should simply restate what the user communicated by clicking the button, e.g., “The requester stated that they still require assistance” or something to that effect. It should not imply things about the AI Agent's behavior that cannot be presumed.
I will reiterate that my Agents are extremely perturbed by these notes. This may seem like a small, trivial thing, but it should be easy to fix to make the message accurate.