
Omnichannel API - Inquiry Regarding Engagement and Ticket Behavior

  • March 27, 2025
  • 1 reply
  • 0 views

We are experiencing inconsistencies with the Omnichannel API and require clarification on the expected behavior concerning engagement and ticket data. Specifically, we have observed the following issues:

  1. Missing Closed Messaging Tickets in Engagement Lists:
    • Certain closed messaging tickets, despite confirmed agent engagement, are not appearing in the engagement lists.
    • We need to understand why these tickets are excluded and how to ensure accurate reporting of all agent-engaged conversations.
  2. Incomplete Engagement Data Due to Trigger-Based Unassignment:
    • Engagements are missing from the list when a chat is initially auto-assigned to an agent, subsequently unassigned by a trigger, and then re-offered and accepted by the same agent.
    • The conversation segments occurring between the initial auto-assignment and the re-offering are not being captured.
    • We request information on how to retrieve the full engagement history, including these interim segments.
  3. Incorrect engagement_start_reason for Offered Engagements:
    • The engagement_start_reason field consistently reports "assigned" even when the engagement initially had an "offered" status.
    • We require accurate reporting of the initial engagement status to differentiate between direct assignments and offered engagements.
  4. Explore and Omnichannel API Mismatch:
    • Some metrics differ significantly. For example, total requester wait time shows 420 seconds in the API but only 319 seconds in Explore.

We kindly request your assistance in understanding and resolving these discrepancies. Please provide detailed explanations of the expected behavior for these scenarios and any potential workarounds or fixes.

 

Thank you!

1 reply

Elaine14
  • October 31, 2025
Hi there,
 
Thank you for detailing these observations — this is really valuable information, and you’ve outlined the issues clearly. I understand how critical consistent and accurate engagement data is when using the Omnichannel API alongside Explore. Let’s address this step by step.
 
  1. Missing Closed Messaging Tickets – Closed tickets may be excluded from the engagement list if the engagement wasn’t actively assigned at the time of closure. However, this can vary depending on how the conversation was ended (for example, due to triggers or reassignment rules). I’d recommend reviewing your trigger logic to ensure the ticket remains in an engaged state before closure.


  2. Incomplete Engagement Data Due to Trigger-Based Unassignment – You’re correct that if a chat is unassigned and re-offered to the same agent, the engagement is sometimes split. This is due to how the system treats new offers as separate engagement instances. We’re aware of ongoing discussions around this scenario, and I’d suggest keeping an eye on any announcements from the Omnichannel product team for improvements or schema updates.


  3. Incorrect engagement_start_reason Values – That’s an insightful observation. The field may currently display “assigned” across multiple cases regardless of the initial offer state. I’m confirming whether there’s a mapping limitation or expected behavior here and will share back any documentation or clarification once verified.


  4. Explore vs. Omnichannel API Mismatch – Metric variations, such as differences in requester wait time, are often due to the way Explore aggregates time data versus the raw Omnichannel API values (which are calculated per engagement event). Explore tends to use derived metrics that exclude certain idle or transition states.


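For the trigger-unassignment scenario in point 2, one interim workaround is to stitch the split engagement records back together client-side once you have exported them. The sketch below is illustrative only: the field names (`ticket_id`, `agent_id`, `start`, `end`) and the 300-second gap tolerance are assumptions for the example, not the Omnichannel API's actual schema.

```python
from dataclasses import dataclass

@dataclass
class Segment:
    ticket_id: int
    agent_id: int
    start: float  # epoch seconds
    end: float


def merge_segments(segments, gap_tolerance=300):
    """Merge consecutive segments for the same (ticket, agent) pair.

    When a trigger unassigns a chat and the same agent later accepts a
    re-offer, the data may contain two separate segments; merging them
    (within a gap tolerance) approximates the full engagement history.
    """
    merged = []
    for seg in sorted(segments, key=lambda s: (s.ticket_id, s.agent_id, s.start)):
        last = merged[-1] if merged else None
        if (last is not None
                and last.ticket_id == seg.ticket_id
                and last.agent_id == seg.agent_id
                and seg.start - last.end <= gap_tolerance):
            # Same agent re-engaged on the same ticket shortly after
            # the previous segment ended: treat it as one engagement.
            last.end = max(last.end, seg.end)
        else:
            # Copy so the input list is never mutated.
            merged.append(Segment(seg.ticket_id, seg.agent_id, seg.start, seg.end))
    return merged
```

Tuning `gap_tolerance` lets you decide how long an unassigned interval may be before the re-offer counts as a genuinely new engagement rather than a continuation.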
To help investigate further, could you share which dataset you’re using in Explore (for example, Agent Activity, Engagements, or Messaging dataset) and the exact API endpoint where you’re seeing the discrepancy? This will help narrow down whether it’s a metric definition or a data timing issue.
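While you gather those details, here is a toy illustration (not Zendesk's actual metric definitions) of how a derived metric that drops transition or idle intervals can diverge from a raw per-event sum. The state names and interval lengths below are invented, chosen only so the totals reproduce the 420-second vs. 319-second gap described above.

```python
# Hypothetical wait intervals for one ticket, each tagged with a state.
# The state names ("queued", "transfer", "idle") are illustrative only.
intervals = [
    {"state": "queued",   "seconds": 200},
    {"state": "transfer", "seconds": 60},   # e.g. re-offer after a trigger unassignment
    {"state": "queued",   "seconds": 119},
    {"state": "idle",     "seconds": 41},
]

# Raw API-style total: every interval counts.
api_total = sum(i["seconds"] for i in intervals)

# Derived Explore-style metric: only genuine queued time counts.
explore_total = sum(i["seconds"] for i in intervals if i["state"] == "queued")

print(api_total, explore_total)  # 420 319
```

If the discrepancy you see follows a pattern like this, comparing the per-event records from the API against the metric definition in the Explore dataset should reveal which interval types Explore is excluding.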
 
Once I have those details, I can provide more targeted guidance or confirm known behaviors with the product team.
 
Thank you again for raising this — your examples are extremely helpful for understanding how these edge cases impact data accuracy.