Hopefully everyone can take something from this. It's easy to recognize that each company's approach to support, tickets, and customers is unique. Out of fairness, here's the approach I've taken for the last two years at a rapidly growing Marketplace/IT company, and it's turned out to be far more progressive than I expected.
Document Everything!
The easiest way to anticipate scaling, and to have at least some objective standards for Customer Support and training, is simply to document everything. Depending on your resources, shared documents (Google Docs/Sheets) and Zendesk (Knowledge Base/Explore) can work together like PB&J.
Create a Customer Support manual. It should document your Support/Company culture, team member directory, agent workflows, training programs, escalation pathways, and other resources. A manual can be a centralized document on Agent conduct. Through the lens of QA, this creates objective standards that are clearly visible to the Agent, the QA team member, and the manager.
I work with a 24/7 global team who mostly work together closely on Skype/HipChat. We get a high volume of tickets every day, and often have issues that remain unsolved. As you can imagine, there's plenty of stuff to talk about with Backlog management, Ticket Prioritization, and more to the point, QA.
QA Philosophy
QA at a small startup all the way up to the enterprise level should follow a few core philosophies:
- Don't micromanage an agent's handling of the ticket.
- Evaluate company/support culture against the ticket handling type.
- Demonstrate best practices.
- Make learning and training a team-building exercise.
Micromanaging can reduce productivity; you're feeding into the pre-existing anxiety an agent has knowing that their tickets are evaluated for QA. Of course, that depends on how you QA and how you empower your agents. In the right scenario, closer oversight can be useful for a developing agent, as long as it's meant only to correct subtle nuances.
Getting back to the manual, you have to make clear what you expect from an agent - QA standards shouldn't come out of left field! If an agent knows what to expect and what they'll be evaluated on, they're more willing to follow that course of action. QA standards are really up to you, but some of the basics are:
- answering the call of the question (did you answer what this person asked in the ticket?),
- productive/effective handling (did you resolve the issue with fewer iterations?),
- clarity for the customer (what they need from you, what was done, how they should do something).
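One way to make those standards concrete is to score each QA'd ticket against a simple rubric. A minimal sketch in Python, where the field names and the good/ok/bad point values are my own assumptions, not a prescribed standard:

```python
# A minimal sketch of the three basic standards as a QA rubric.
# Field names and point values are illustrative assumptions.
RUBRIC_FIELDS = ("cotq_answered", "productive_handling", "clear_to_customer")
POINTS = {"good": 2, "ok": 1, "bad": 0}

def score_review(review):
    """Sum the good/ok/bad ratings for one ticket review into a point total."""
    return sum(POINTS[review[field]] for field in RUBRIC_FIELDS)

print(score_review({"cotq_answered": "good",
                    "productive_handling": "ok",
                    "clear_to_customer": "good"}))  # 5
```

The point of scoring isn't precision; it just makes "good/ok/bad" comparable across agents and across weeks.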
How to do that comes later on. Demonstrating best practices and making training a team-building exercise is simple: if you have veteran team members who know your standards (the support manual) and know what to expect and the correct path (QA), empower your front-line staff to help train newer, less experienced agents. Because I document everything and make QA part of the support lifestyle, our newest Customer Support team members are more informed about the customer experience and best practices than most team members in the rest of the organization. Remember, they're at the pulse of the customer and build solidarity through customer tickets.
There is one question I ask every single customer support agent whenever a "bad" QA ticket is pulled or an escalation is required, and it has truly helped build a better team, better QA results, and a stronger support mentality: if you were the CEO of this company, how would you handle this ticket?
QA Workflow
Because of our growing ticket volume and team, our department structure has evolved hand in hand with our QA structure. As our veteran agents turned into Team Leads, and our Team Leads became managers, more opportunities for QA became possible. Our Team Leads are in charge of QAing a few of their team members' tickets at the start of their shift (we have views, triggers, and spreadsheets designated for each department/team structure).
We have one QA specialist who reviews each Team Lead's QA per agent and assesses their overall body of work to help provide a clearer picture to management and to that individual agent. This can include customer satisfaction score, the types of tickets they're handling, response time, agent efficiency, etc. (basically the stuff you see in Explore).
How-to Views
Create QA-specific views that pull tickets handled by (or assigned to) that agent/group for the last week (again, depending on your volume). These can be sorted by satisfaction score, by ticket tendency (we create a ticket field that summarizes the call of the question into a drop-down field, record that information every quarter, and produce User Experience reports based on agent handling), or however you organize your tickets. Allow Team Leads to pull 5-10 tickets at random (or based on a corrective course of action from the QA specialist) and record them in your QA resources (a spreadsheet or another program).
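The "pull 5-10 tickets at random from the last week" step can be sketched in plain Python. This stands in for whatever your view returns; the ticket field names and agent names are illustrative assumptions, not a Zendesk API:

```python
import random
from datetime import datetime, timedelta, timezone

def sample_for_qa(tickets, assignee, k=5, days=7):
    """Pick up to k random tickets the agent handled in the last `days` days."""
    cutoff = datetime.now(timezone.utc) - timedelta(days=days)
    recent = [t for t in tickets
              if t["assignee"] == assignee and t["solved_at"] >= cutoff]
    return random.sample(recent, min(k, len(recent)))

# Hypothetical records standing in for what a QA view would return.
now = datetime.now(timezone.utc)
tickets = [
    {"id": 101, "assignee": "dana", "solved_at": now - timedelta(days=2)},
    {"id": 102, "assignee": "dana", "solved_at": now - timedelta(days=9)},  # too old
    {"id": 103, "assignee": "lee",  "solved_at": now - timedelta(days=1)},  # other agent
]
print([t["id"] for t in sample_for_qa(tickets, "dana")])  # [101]
```

Random sampling matters here: if Team Leads hand-pick tickets, the QA results drift toward the tickets that were easy (or interesting) to review.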
Resources
We use Google Apps because it's free and helpful for global teams. We have QA spreadsheets set up quarterly. The Team Lead/Manager has one for each department (not each team member), and they add the 5-10 tickets to this spreadsheet. It should include: the ticket URL, the QA evaluation (the call of the question based on what the customer asked), whether the COTQ was answered, clear communication, productive handling, customer satisfaction, QA comments, and agent retrospect (to confirm they reviewed it, plus their comments). The QA specialist and Manager should also keep an individual shared document for each team member. This gives a progress report on that team member and addresses how their tickets went during the week/month/quarter. The same standards should be applied (and consider a point system on good/ok/bad standards).
If there are any immediate corrective courses of action, they can be addressed at the weekly level; noticeable ticket trends can be seen at the monthly level; and serious regression (or progression!) can be displayed at the quarterly level. This document should contain research from Insights; the built-in tools are helpful, but you can also create your own Insights-like dashboards.

The QA lifecycle: the agent handles the ticket, the Team Lead does initial QA, the QA specialist evaluates the previous week of QA and creates a corrective course of action, the Team Lead/Manager passes that information down to the agent, and the agent learns best practices and handles the next ticket. Rinse and repeat, and over a monthly/quarterly analysis you can see the progression trends of that agent from an individual QA document for management. Hope that helps!
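The regression-vs-progression check over weekly/monthly/quarterly levels can also be sketched simply: compare an agent's recent average score against their earlier average. The agent names and weekly point totals below are made-up numbers for illustration:

```python
from statistics import mean

def trend(scores, window=3):
    """Average of the last `window` weeks minus the average of the weeks
    before them: positive = progression, negative = regression."""
    return mean(scores[-window:]) - mean(scores[:-window])

# Hypothetical weekly QA point totals per agent (made-up numbers).
weekly_scores = {
    "dana": [4, 5, 5, 6, 6, 6],  # improving
    "lee":  [6, 5, 4, 4, 3, 3],  # regressing
}
for agent, scores in weekly_scores.items():
    print(agent, round(trend(scores), 2))  # dana 1.33, lee -1.67
```

A spreadsheet formula does the same job; the point is that the trend, not any single week's score, is what should drive the corrective course of action.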