Product ideas | Community

Regarding the AI translations for ticket conversations (Idea Submitted)

Hello Team,

I was part of the EAP testing of the AI translations and feel that I provided sufficient, detailed feedback during testing in my sandbox. I just saw the announcement that GA began yesterday and is expected to finish next month, which is very exciting and prompted me to revisit the feature to see how polished it has become. Unfortunately, in the 5 months since I shared that feedback, not much seems to have improved, and my excitement turned into wondering why it is being rolled out in its current state. Previously I had flagged the following issues:

- If the Translator tool was turned off, there was no indication at all that a message was translated. This was fixed, and the “Show original” and “Show translation” messages are now displayed at all times, even if the Translator is turned off. The remaining issue is that there is still no indication in the “Events” tab that a translator was used. Since the “Events” tab is essentially an audit log for the main tab, translations should be recorded there as well.
- No option to translate proactive messages: the new translations only work if you have the ability to turn on the Translator tool. This is not available on proactive tickets, meaning the tool cannot be used until we receive a response from the customer. The button mentioned in the article ("To preview a translated message before sending, click the Translate button in the ticket composer.") is also not available.
- Text formatting and spacing breaks: in my original email, each paragraph has single row spacing, but the translated version has double row spacing. The double row spacing also appears in the end user's mail.
- Translations in Zendesk QA: I had requested a way to check in Zendesk QA whether the email was translated, so that the QA agent knows whether a translated version was sent (as one should in foreign-language tickets) or not. Without this function, QA agents have to take extra steps by opening the ticket in Support.

Thank you for your time. I hope these requests are fulfilled as soon as possible, as they should be part of the feature once it is GA.

Feature Request: Preserve HTML formatting for comment placeholders in webhooks (Idea Submitted)

Hi Zendesk team,

Zendesk already provides HTML comment placeholders like {{ticket.latest_public_comment_html}} and {{ticket.latest_comment_html}} that return properly rendered HTML when used in email notifications. However, as documented, when these same placeholders are sent to a URL target or webhook, unformatted text is used instead of HTML. This is a significant limitation for third-party integrations.

In our case, we sync Zendesk ticket comments to Freshservice. Agent signatures contain rich content (bold text, images, links, horizontal rules) that renders correctly in HTML. But when sent through a webhook, the HTML placeholders are stripped down to plain text, leaving raw Markdown syntax like ![Logo](https://...), **, and --- visible in the receiving system.

The irony is that the data already exists in the right format: the Ticket Comments API returns a perfectly rendered html_body field. But since webhooks don't preserve the HTML formatting of these placeholders, we're forced to set up an external middleware (Power Automate, Lambda, etc.) that does the following (a minimal sketch follows below):

1. Receives the webhook with just the ticket ID
2. Makes an extra API call back to Zendesk to fetch the comment's html_body
3. Forwards the HTML to the third-party system

This adds unnecessary complexity, latency, and an additional API call per comment, just to retrieve data that Zendesk already has and already renders correctly for email notifications.

Proposed solution: Allow HTML comment placeholders ({{ticket.latest_public_comment_html}}, {{ticket.latest_comment_html}}, etc.) to retain their HTML formatting when used in webhook payloads, rather than converting them to plain text. This could be an opt-in setting per webhook, or a new set of dedicated placeholders for webhook use.

This would greatly simplify integrations with third-party ITSM tools (Freshservice, ServiceNow, Jira Service Management) and reduce unnecessary API overhead. Thank you for considering this!
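For illustration, here is a minimal sketch of the middleware hop described above, assuming a small Python service built with Flask and the requests library. The Ticket Comments API endpoint and its html_body field are the ones mentioned in the post; the webhook payload shape ("ticket_id"), the environment variable names, and FORWARD_URL are hypothetical placeholders for your own setup, not anything Zendesk provides.

```python
# Minimal middleware sketch (assumptions: Flask and requests installed,
# the webhook payload contains a self-configured "ticket_id" field,
# and FORWARD_URL / credentials are placeholders for your own systems).
import os
import requests
from flask import Flask, request, jsonify

app = Flask(__name__)

ZENDESK_SUBDOMAIN = os.environ["ZENDESK_SUBDOMAIN"]  # e.g. "mycompany"
ZENDESK_AUTH = (os.environ["ZENDESK_EMAIL"] + "/token", os.environ["ZENDESK_API_TOKEN"])
FORWARD_URL = os.environ["FORWARD_URL"]  # receiving system, e.g. a Freshservice endpoint

@app.route("/zendesk-webhook", methods=["POST"])
def handle_webhook():
    # 1. Receive the webhook carrying only the ticket ID.
    ticket_id = request.get_json(force=True)["ticket_id"]

    # 2. Extra API call back to Zendesk to fetch the comment's html_body.
    resp = requests.get(
        f"https://{ZENDESK_SUBDOMAIN}.zendesk.com/api/v2/tickets/{ticket_id}/comments.json",
        auth=ZENDESK_AUTH,
        timeout=10,
    )
    resp.raise_for_status()
    # Comments are returned oldest first, so the last entry is the latest one.
    latest_comment = resp.json()["comments"][-1]
    html_body = latest_comment["html_body"]

    # 3. Forward the rendered HTML to the third-party system.
    requests.post(FORWARD_URL, json={"ticket_id": ticket_id, "html_body": html_body}, timeout=10)
    return jsonify({"status": "forwarded"}), 200
```

The ask in this idea is simply to make this extra hop unnecessary by delivering the rendered HTML in the webhook payload itself.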

Need for Categorization and Search in Service Catalog (Idea Submitted)

We use Zendesk to manage a large internal service catalog with dozens of request types across multiple departments and have implemented a customized Copenhagen theme for our Help Center. As the number of forms grows (~80+), it becomes increasingly difficult for both agents and end users to navigate and find the correct request form. Currently, there is no sufficient categorization or search functionality in the service catalog, even with theme customization.

This leads to users selecting incorrect forms, which results in misrouted tickets, additional manual work for agents, and delays in request processing. It also reduces the effectiveness of structured workflows and automation.

This issue occurs daily, especially for new users or employees who are not familiar with the system. They often struggle to locate the correct request type and either submit incorrect requests or contact support directly, increasing the workload on the service desk. Over time, this negatively impacts user experience and operational efficiency.

As a workaround, we provide internal instructions and training, and agents manually redirect or correct tickets. While the customized Copenhagen theme allows some UI improvements, it does not fully solve the lack of native categorization and search capabilities. This approach is not scalable and consumes significant time.

Ideally, the service catalog should support clear categorization (e.g., by department, service type, or user role) and include a robust search function that allows users to quickly find the correct request form. This would significantly improve usability, reduce errors, and enhance overall service delivery.

Limited Access to Custom Ticket Forms in Agent Workspace (Idea Submitted)

We use Zendesk as an internal ITSM platform and have configured a large number of custom ticket forms (~80) to support different service request types across IT departments. However, when agents create tickets from the “support” workspace, they are only able to select default forms such as “Generic Request” and “Incident,” which affects service desk agents who frequently create tickets on behalf of users.

This creates a major issue where agents cannot select the correct form, leading to incorrect classification, missing required fields, and broken workflows. As a result, data quality decreases and automation, reporting, and SLA tracking become unreliable.

This problem occurs on a daily basis. Many requests come via phone calls or in-person interactions, and agents are required to log tickets manually. Since the correct forms are not available, they either choose incorrect ones or bypass structured data entirely. Over time, this creates inconsistencies in reporting and makes it harder to manage service categories effectively.

As a workaround, agents sometimes use the end-user interface to access the correct forms or manually adjust fields after creating the ticket. Both approaches are inefficient and increase the risk of human error.

Ideally, agents should have full access to all configured ticket forms directly within the agent workspace, with the ability to search for and easily select the appropriate form even when a large number of forms is in use.

Allow Editing of Custom CSAT Link Expiration Period (Idea Submitted)

Hello Zendesk Community,

We are writing to request a crucial feature for better management of our CSAT metrics and internal reporting: the ability to customize the expiration date of the custom CSAT survey link.

We appreciate that the expiration period was recently extended to 28 days. However, having this long, fixed period without the option to set our own timeframe is significantly disrupting our internal controls and reporting cycles.

🎯 The Request: We urgently request that Zendesk allow customers to edit and configure the number of days the custom CSAT link remains active after the ticket is solved/closed.

📉 Why the 28-Day Period is Problematic: A 28-day validity period drastically alters our feedback collection and reporting alignment:

- Reporting Distortion: Our reporting cycles (e.g., weekly, monthly) are much shorter. Responses coming in up to 28 days later skew the metrics for the period when the service interaction actually occurred, making it harder to link performance directly to agent action.
- Relevance: Feedback is most valuable immediately after service. Extended periods dilute the relevance and accuracy of the customer’s memory of the interaction.
- Operational Alignment: We need the flexibility to align the CSAT eligibility window with our internal definition of a closed cycle, which is often much shorter than four weeks.

In summary, we need to choose the expiration period (e.g., 7 days, 14 days) to ensure our CSAT data is timely, accurate, and aligned with our internal business metrics. Thank you for considering this critical functionality.

Allow both AI Agent and fallback messages to be customized in "Notify by>Autoreply using generative AI" trigger action (Idea Submitted)

Please give a quick overview of your product feature request or feedback and note who in your org is affected by this issue [ex. agents, admins, customers, etc.]. (2-3 sentences)
The new “Notify by>Autoreply using generative AI” trigger action (added with AI Agents for email/webform) only allows the fallback response to be customized. The normal response should be customizable as well as the fallback response.

What problem do you see this solving? (1-2 sentences)
With the existing “autoreplies with answers” trigger action, I can customize the response to include appropriate boilerplate above the suggested articles, which are inserted via the “autoreply.article_list” substitution. I want to be able to include the same boilerplate for GenAI replies with the new trigger action.

When was the last time you were affected by this lack of functionality, or specific tool? What happened? How often does this problem occur and how does this impact your business? (3-4 sentences)
This is the default behavior of the new “AI Agents for email/webform” capability, and not being able to add the necessary boilerplate is blocking our implementation of this new capability.

Are you currently using a workaround to solve this problem? (If yes, please explain) (1-2 sentences)
I am trying to implement Instructions for the AI Agent to include the necessary boilerplate, which currently isn't working (I have a ticket open for that). But frankly, doing this via Instructions is a hack.

What would be your ideal solution to this problem? How would it work or function? (1-2 sentences)
The “Notify by>Autoreply using generative AI” trigger action should allow both the normal and fallback responses to be customized, inserting the GenAI content into the template via an appropriate substitution, as is done for “autoreplies with articles” (see the illustrative sketch below).
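For illustration only, here is a rough sketch of what such a customizable template might look like, modeled on the existing “autoreplies with articles” pattern described above. The {{ticket.requester.first_name}} and {{autoreply.article_list}} placeholders are existing ones; {{autoreply.generative_answer}} is a hypothetical placeholder name invented here to show the idea, not an existing Zendesk substitution.

```
Hi {{ticket.requester.first_name}},

Thanks for contacting us. Here is an automated answer that may resolve your question:

{{autoreply.generative_answer}}   [hypothetical placeholder for the GenAI reply body]

If this doesn't solve your issue, just reply to this email and an agent will follow up.

[Required boilerplate / legal footer goes here]
```

With the current trigger action, only the fallback branch accepts this kind of boilerplate customization; the request is for the normal GenAI response to get an equivalent template slot.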

Auto Assist: "Why did you do that?" — Reasoning Transparency for Procedure TestingIdea Submitted

Overview: A request for greater reasoning transparency in Auto Assist (Co-pilot) for procedure designers and QA testers. Currently, the "Why this suggestion was generated" tooltip explains what the AI decided, but not the full reasoning chain behind it. This affects anyone building, testing, or maintaining Auto Assist procedures.

Problem Statement: When the AI deviates from expected procedure steps, there is no way to query which specific element (a tag, a phrase, a piece of ticket context) caused the deviation. Diagnosis requires repeated (and often brutal) trial-and-error testing.

Business Impact: During a recent beta procedure rollout, diagnosing an issue where the AI was conflating a shipping tag with warranty eligibility required 6+ hours of testing across 30+ test tickets. Each diagnosis required creating a new ticket, reproducing the exact conditions, and reverse-engineering the AI's decision from the tooltip summary. A transparency tool would have surfaced the root cause in minutes.

Current Workarounds: Relying on the "Why this suggestion was generated" tooltip, which provides a useful but high-level summary. It does not break down how individual tags, procedure conditions, or cross-procedure context contributed to the decision.

Ideal Solution: An interactive reasoning interface where admins can ask natural language questions about a suggestion, e.g. "Why did you skip Step 4?" or "How did this tag affect your decision?", returning a step-by-step breakdown of the AI's decision logic for that specific suggestion.