Why QA Teams Should Be Abolished
I started my career in QA. Truth be told, I’ve always wanted to abolish QA teams once they reach a certain milestone of maturity. In this age of AI (and as organizations have evolved through the years), we need to ask ourselves seriously whether we still need dedicated QA teams at all.
Let me tell you why.
QA is neither a department nor a job title. It’s a shared function.
There are responsibilities that need to be team-based and shared across different roles and teams. Take revenue building: generating or attracting revenue is not just a sales, marketing, or business development function. It can also be a function of your customer success teams, your after-sales support teams, your partnership teams, and many others.
Shouldn’t the same be true of QA?
Some of you are cringing. You’ll say shared functions mean no one is ultimately responsible. No one department. No one person. And you think that’s a bad thing. I don’t. It means we are all 100% responsible in our own locus of control, and we are still responsible even if it’s not within our lane by virtue of our span of influence.
Who is ultimately responsible?
People would sigh and say: “But Teki, who really should be responsible for customer churn?” And I never thought it was a bad thing that no one was ultimately responsible. It means everyone was. Within our own functions, we each have our own gates for retaining customers and winning them back. If they are still in a sales relationship, it’s the salesperson’s responsibility to ensure we’re honest about the promises we make to customers. In customer service, we have a proactive and pre-emptive responsibility to ensure customers don’t walk out the door because of poor support.
There is this video from The Conscious Leadership Group that I have always liked. They call it Radical Responsibility. It’s not about sharing a piece of the great pie of responsibility. It’s about taking 100% within your respective teams and roles.
So, looking for a single neck to wring says something about how you value teams.
Are we over-engineering agent performance?
In BPOs, QA teams are distinct. The problem arises when dedicated QAs dilute the Team Leader’s responsibility to keep a holistic view of their agents’ performance. QA then becomes a policing function, a controlling function, a scoring function, an auditing-for-mistakes function. A place where agents’ hopes go to die. It’s where rigidity, analytical narrow-mindedness, and over-metricized obsession with KPIs abound. And unnecessary procedures such as dispute mechanisms, scoring adjustments, and calibrations pop up. When you really think about it, they are all non-value-adding activities anyway.
QA is not a score, nor a KPI
QA, even as an internal measure, will never rank high on the totem pole of “metrics that matter” in any company. CSAT will be there. NPS too. QA? Not really.
What QA Scores Really Mean
I know this breaks the heart of every QA practitioner reading it. A QA score is not meant to be a performance metric. It’s a signal of:
- skill progression and mastery
- state of the current process design
- potential indicator of customer feedback
- the limits of what your customer service can do (because of a flawed product or solution)
- potential reputational risks to the business when a problem keeps recurring
Is it fully controllable by your agent?
QA, to me, is similar to occupancy in workforce management. There are only a few things an agent can fully influence, much less fully control. The agent is a work in progress, just as the process is a work in progress, and your training is never perfect and always evolving. The agent is also a product of:
- how much s/he was trained
- how well s/he was trained
- how effective the coaching has been
- how much empowerment is given to your agent
QA scores tend to be interpreted as the nail that cements an agent’s performance in place. The very fact that we call it QA Monitoring should tell us that it is meant to track progress. It’s not about lofty scales of performance (especially when, all the while, the product sucks).
Should we use QA to confirm people to the role?
I’ve always been iffy about using QA scores as a reason not to regularize or confirm an agent in their role.
Erin, a team leader, was committed to Christopher’s success. Although Chris did not meet the QA standard expected of others, he had a great attitude and embodied the company’s values, resulting in a high CSAT score. However, the operations leader at the time was reluctant to regularize him due to his low QA score. When TL Erin approached me for a decision, I recommended regularizing him, expecting Erin to support his success. That year, Chris became our most improved Champ, and the following year, he received the Steadfast Employee award for his consistent and valuable contributions. If I had judged Chris solely based on his performance at that moment, I would’ve missed out on his impressive impact on our customers.
QA is not binary. It’s not Yes or No.
Most times, it’s in between.
I’ve always disliked binary scoring in anything, because humans are never either-or, and neither are skills. They are not binaries of 0s and 1s. People have argued with me on this for years. Apply scores in context, not in black and white.
QAs often immerse themselves in the analytics and the reporting, turning it into a numbers game, which makes them miss the opportunity to truly understand root causes, patterns, and skill elevation.
QA conversations descended into mechanics, procedures, or tweaks to the rubric rather than studying common best practices or inflection points.
I often asked myself: why was our QA score so high when the CSAT score was low? Were we measuring the same things the customers were looking for?
Someone told me to change the rubric. But that wasn’t the point. It’s not about the form, the score, or the attributes. It’s about whether we’re seeing what the customers are seeing. Are we solving their pain points? Are we preventing customer pain from recurring?
What Do We Do Now?
Looking back at my QA journey, I’m struck by how much of our professional lives we spend perfecting systems that might actually be limiting our potential. The future of quality isn’t about better scorecards or more sophisticated monitoring tools – it’s about creating environments where quality becomes as natural as breathing.
The transformation begins with small steps:
- Encouraging self-reflection and peer feedback over formal quality audits
- Celebrating creative solutions that prioritize customer outcomes over procedural compliance
- Investing in technologies that democratize quality insights
- Building networks of shared accountability rather than hierarchies of control
The most profound quality transformations I’ve witnessed didn’t come from tighter controls or better metrics. They came from moments of trust – when we dared to believe in the inherent desire of our teams to deliver excellence and then got out of their way.