
Avoiding Common Pitfalls of Call Center Quality Assessment Programs

More and more companies are investing significant money in one of the plethora of Quality Assessment (QA) software packages available in today's marketplace. These programs offer powerful tools for capturing and analyzing calls between front-line agents and customers. Their marketing promises to lead companies to the mountaintops of customer satisfaction. However, many companies don't stop to think that a tool is only as good as the abilities of the one who uses it. Just as you wouldn't expect accounting software to make your business financially solvent, you likewise can't expect QA software alone to improve the quality of service your call center provides or to increase customer satisfaction. Many companies begin their ascent of the mountaintops of quality service, software in hand, only to find themselves mired in the foothills by the amount of time, energy and money required to implement a successful QA program. There is often significant internal conflict over how best to analyze calls and provide feedback to agents. In addition, companies very rarely consider the validity of their scoring methodology and the resulting assessment, yet the results are often used in performance management and/or incentive pay. Unfortunately, most QA forms we have audited over the past seventeen years would not stand up to scrutiny if an agent chose to make an issue of them. Whether you are considering climbing into Quality Assessment for the first time or find yourself on the way up but still struggling to reach the summit, we urge you to consider some of the most common pitfalls of call center QA programs.

Process Pitfalls

1. Poorly defined goals. It may seem like a simple issue, but many programs suffer because they don't clearly define what they are trying to accomplish and build their scoring tool accordingly. Some companies say they want to improve customer service and customer satisfaction, but then use the QA process to enforce up-selling and cross-selling approaches in ways that can ultimately damage customer satisfaction. Other call centers are committed to improving customer satisfaction, but their scoring tool does not give any weight or consideration to what their customers actually expect. It is very common for a scoring tool to become a potpourri of management expectations, with little or no regard for what the customer desires or expects.

2. Conflicting expectations. It's a classic set-up found in many call centers. The QA team puts together a scoring tool that holds agents accountable for providing quality service through heavily weighted soft skills. Call center management then places expectations on call metrics such as average handle time (AHT), average call time (ACT), hold time, number of calls answered or number of dropped calls. Agents frequently find themselves stuck between opposing expectations. If they are going to serve the customer well and meet the QA criteria, it may necessitate putting the customer on hold or taking time to work through an issue with the customer (thus missing their call metrics). To be successful, call centers must understand and balance both call metrics and service quality.

3. Scoring differences. Successful QA requires consistency in call analysis and coaching feedback. It is quite common to find people on the same QA team analyzing the same call with the same scoring tool and arriving at very different results. Even the most objective of QA forms can be interpreted or applied differently. One recent calibration we audited included a team that was relatively close in their results, with the exception of one analyst who chose to interpret and apply the scoring tool completely differently. This analyst's results and subsequent feedback to the agent were out of step with the rest of the team. To ensure consistent call analysis, QA teams must engage in regular, rigorous calibration, and each QA analyst should be periodically audited.

4. The supervisor-as-QA-analyst dilemma. On the surface, it looks like a good idea: an agent's supervisor should be the QA analyst and coach for his/her team. While it has worked in certain circumstances, more often than not this is a recipe for QA disaster. First, front-line call center supervisors have a demanding job that pulls them in many directions each day. When call volumes are high, agents need help, management expects projects to be done on time and the to-do list is longer than can humanly be accomplished, the QA process quickly gets pushed to the back burner. It is common for us to watch management teams place the QA responsibility on supervisors, only to be disappointed months later when it never gets done. In addition, relational issues or other job performance problems can make it difficult or impossible for the supervisor to analyze a call objectively. If possible, use a dedicated QA team that can focus on quality assessment and coaching without other supervisory entanglements. Considering the financial investment required by most QA programs, it may actually be less expensive to outsource the process to a company that specializes in third-party quality assessment.

5. Ill-prepared analysts. If you want to reach the summit of a mountain, you'll have to prepare yourself for climbing at high altitude. If you want to have a successful QA program, then you have to train your QA team for the job. We're often amazed when call center managers throw veteran agents into QA positions with little or no training. If they can take a call, the theory goes, they can analyze a call. While anyone may be able to analyze a call, quality analysis and the desired results require discipline and an understanding of QA principles. Any QA program should include a period of training, supervision and calibration to make sure that each analyst is performing in line with the goals of the program and is prepared to elicit the desired results.


6. Lack of statistical discipline. Many companies implement QA programs and use the data as part of a front-line agent's performance review, or reward agents for achieving goals based on QA scores. Yet the sample sizes used are often inadequate to provide reliable results at the individual agent level. In addition, poorly crafted scoring tools can render QA data statistically invalid. If an unhappy employee (or former employee) chose to take issue with your process, would it stand the test? It is imperative for any QA program to give consideration to its scoring methodology and sampling approach. Only then can you have confidence in the data gleaned from your analyst team.
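To make the sample-size problem concrete, here is a minimal sketch, not drawn from any particular QA package, using the standard normal-approximation margin of error for a proportion; the pass rate and sample sizes are illustrative assumptions only.

```python
# Minimal sketch (our illustration, not part of the original article): how wide
# the uncertainty on a per-agent QA "pass rate" is when only a few calls are scored.
import math

def margin_of_error(pass_rate: float, calls_scored: int, z: float = 1.96) -> float:
    """Approximate 95% margin of error for a proportion (normal approximation;
    rough for very small samples, but enough to show the scale of the problem)."""
    return z * math.sqrt(pass_rate * (1.0 - pass_rate) / calls_scored)

observed = 0.80  # hypothetical agent: passed 80% of scored elements this month
for n in (5, 10, 30, 100):
    print(f"{n:>3} calls scored: 80% +/- {margin_of_error(observed, n):.0%}")

# With 5 scored calls the result is roughly 80% +/- 35%: the agent's "true" score
# could plausibly sit anywhere from the mid-40s to 100%, which is far too imprecise
# to drive performance reviews or incentive pay.
```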

Methodology Pitfalls

The process of analyzing a phone call can be complex and, as mentioned, it's easy to find yourself on a slippery slope when it comes to constructing a valid scoring methodology. Most software packages require you to develop your own scoring tool and then input it into the program. There is an art and a discipline to developing an objective QA tool. Please consider a few of the common problems we have found with internal scoring methods:

1. Too few elements. One scoring tool we saw contained three vague elements such as "enthusiasm" and "efficiency." While it may make for a quick analysis by the QA team, the results are far from useful or valid. The elements were not defined, the analysts had broad latitude to apply their own opinions, and there were no valid statistical results to help agents improve their performance. It is common for QA teams to want a short and simple scoring tool. Problems arise, however, when a desired behavior is not adequately defined or addressed in the abbreviated tool. QA analysts must either squeeze the issue into a part of the tool where it doesn't fit or coach the agent to change his/her behavior while having no recourse for statistically measuring the problem or the resulting behaviors. Make sure that your QA tool is exhaustive enough to cover all major issues you want to address.

2. Double-barreled elements. Another way QA teams try to be more efficient is to lump multiple elements into one line item on the scoring tool. "Be courteous; use the customer's name" is a good example. This creates a natural analytical dilemma. If agents are courteous, saying "please" and "thank you," but don't use the customer's name, do you ding them? If you reason that the agents were courteous, so you'll give them credit and coach them on using the name, then you really aren't holding them accountable for the missed element. Agents will quickly (and correctly) surmise that as long as they use courtesy words they don't have to modify their behavior to use the name, because they will never get dinged for it. When developing your scoring tool, care should be given to include simple, singular behavioral elements.

3. Multiple scoring options. Many scoring tools are based on giving the analyst multiple options for scoring a single behavior. For example, one scoring tool we have audited utilizes a scale that gives the analyst four options: Fully Achieved, Majority Achieved, Coaching Opportunity and Development Required. The problem with this approach is that each additional option increases statistical variability. The more options you give your analysts to choose from, the greater the probability that you will have differing answers across analysts. Calibration becomes difficult, if not impossible, because you could have a team of people arguing four or five different positions. In addition, the results have limited statistical validity simply because of the number of variables. It is usually preferable to use simple binary options (e.g., Yes or No) along with more precise behavioral definitions to drive clarity in the calibration process.

4. Equally weighted attributes. Many scoring tools do a nice job of listing and defining all of the required behavioral elements, but they give each of the elements the same statistical weight in the overall service score. This can lead to QA analysts focusing their coaching efforts on one element while an element that has greater impact on customer satisfaction is disregarded. Because certain elements drive customer satisfaction (which can be discovered through customer satisfaction research), those elements should be given correspondingly greater weight in calculating the overall service score.
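To illustrate the last two points together, here is a minimal sketch of a binary, weighted scorecard; the element names and weights are hypothetical assumptions for illustration, not a recommended form.

```python
# Minimal sketch (hypothetical element names and weights, not a recommended form):
# each behavior is scored as a simple Yes/No, and elements that research shows
# drive customer satisfaction carry more weight in the overall service score.
ELEMENT_WEIGHTS = {
    "resolved the customer's issue":    0.40,
    "offered further assistance":       0.25,
    "verified account/contact details": 0.15,
    "used the customer's name":         0.10,
    "used courtesy words":              0.10,
}  # weights sum to 1.0

def service_score(results: dict) -> float:
    """Overall service score: the sum of weights for elements scored Yes."""
    return sum(w for element, w in ELEMENT_WEIGHTS.items() if results.get(element, False))

# Example call: courteous and thorough, but the core issue was not resolved.
call_results = {
    "resolved the customer's issue":    False,
    "offered further assistance":       True,
    "verified account/contact details": True,
    "used the customer's name":         True,
    "used courtesy words":              True,
}
print(f"Overall service score: {service_score(call_results):.0%}")  # 60%
```

Keeping each element binary and singular avoids the double-barreled dilemma, while the unequal weights keep coaching focused on the behaviors that matter most to customers.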

Companies that apply proven principles and discipline to their QA program are much more likely to gain measurable value from the process than those who simply throw money and FTEs at it and assume they will make a profitable difference. The issues we've outlined are just a handful of the pitfalls that can hinder a QA program from having maximum impact. Applying this knowledge can help you avoid these pitfalls and speed you on your way to the heights of service quality and customer satisfaction.

c wenger group, inc. is an independent consulting firm that has served clients in customer satisfaction research, Service Quality Assessment and customer service training for 18 years. c wenger group focuses on helping client companies develop customer-centered service competencies that enable them to use world-class service to enhance competitiveness and grow market share. Clients include local, regional and nationally known companies operating contact centers ranging from just a few seats to over 1,000 seats at multiple sites. c wenger group seeks to help each client to become the service quality leader in their industry and chosen market. Learn more at www.cwengergroup.com.

Avoiding Common Pitfalls of Call Center Quality Assessment Programs © 2005 c wenger group, inc. All rights reserved.
