Decentralisation of the QA function and its reduction

My journey in QA started about 17 years ago, and even then I had the idea that the ultimate goal of my work was the reduction of QA itself. That idea hasn't changed; back then I simply didn't think it was possible. It is.

This article is about Miro's shift towards decentralising its Quality Assurance (QA) function. Let's start by understanding what our structure looked like before: QA Engineers were embedded in product teams and followed established processes (as detailed previously), but their direct managers were QA Leads, not the leaders of the product teams they worked with daily. The responsibilities of the QA structure and QA Leads in that model have also been described before.

Why Decentralisation Was Essential

The primary goal driving this restructuring was to make the QA function more scalable as Miro grows. At the very beginning, we were a small engineering team of ~20 engineers, with a ratio of 4 developers to 1 QA engineer. We decided to invest more in automation and engineering practices, because scaling the QA function at the same ratio would have been too expensive. At a scale of 700 engineers, the ratio exceeded 10:1 across different setups, and in practice a QA engineer rarely worked with just one team. Decentralisation was supposed to help with two main concerns:

  • High stress and conflicting priorities for QA Engineers. Decentralisation lets them focus on their dedicated Scrum team. Any work outside this primary team (e.g., supporting secondary teams) requires a clear, limited scope agreed with their direct EM, which reduces ambiguity and keeps the workload stable.
  • Lack of direct control for Engineering Managers (EMs). EMs had no direct authority over the QA Engineer responsible for raising quality in their team, so frequent escalation requests caused tension and slowed down processes. Decentralisation gives EMs both the responsibility and the control.

Prerequisites for the Change

Initially, the idea of QA Engineers reporting directly to team leaders wasn't achievable due to two factors:

  • Immature QA Process. In the early stages, QA processes weren't fully defined, and the QA Engineer often solely carried the burden of testing. Moving them under a team leader who might de-prioritise quality could have had negative effects. We first needed to establish a clear QA process based on the shift-left paradigm, where everyone shares responsibility for quality, and testing is integrated throughout development, not just a final step.
  • Tooling and Infrastructure. Teams needed effective tools and infrastructure to enable quality practices. Miro's specific technologies often meant that standard frameworks couldn't be used directly; we had to adapt them or build our own.

Once these foundational aspects were settled, the final prerequisites for decentralisation were:

  • Full QA Lead Coverage: A QA Lead needed to be present in every stream (major product area) to support EMs, team leaders, and QA engineers across the entire engineering department.
  • Component Decoupling: Distributing quality ownership is most effective when teams are truly independent. Significant technical interconnections between components would interfere with this autonomy. While perfect decoupling wasn't achieved instantly, solutions like platform teams and defined ownership for core components were established to manage dependencies.
  • Appropriate Engineering Structure: Miro organises its teams around product areas, meaning a single customer feature can depend on several technical systems. To avoid confusion and ensure reliability, it was important to define clear ownership for the shared platform components that many teams rely on.

Addressing these points, especially hiring and transforming the QA Lead role to meet new demands, was challenging but crucial to start the decentralisation process.

The Change

Reporting Lines:

  • Engineering Managers (EMs) / Team Leaders now serve as direct people managers for QA Engineers within their teams. This covers both daily engagement (ensuring QA focus is aligned with team goals) and professional growth.
  • Heads of Engineering now manage the QA Leads.

Ownership: Engineering Managers are now explicitly responsible for the quality outcomes of their teams.

QA Lead Role Transformation

The decentralisation significantly altered the QA Lead role:

  • From Manager to Individual Contributor (IC): QA Leads no longer have direct reports. The role shifted towards a Staff Engineer profile: defining strategy for large-scale initiatives and establishing and leading cross-team programs and projects (including technical design and hands-on contribution).
  • Broader Scope: With EMs taking direct responsibility for team quality, fewer QA Leads are needed per stream. Instead of one lead per 6-8 engineers within a dedicated QA structure, one QA Lead now supports an entire stream (potentially over 20 product teams), focusing on strategic guidance rather than direct management. The core idea is that teams and EMs are responsible for quality, not a parallel QA hierarchy.

While my core role as Head of QA hasn't drastically changed, the lack of direct reports led to the running joke, "Head of Who?" Driving quality is definitely easier with direct control. But without the ability to dictate, you're pushed to build stronger cases for your ideas, frame them in business language, and focus on initiatives that balance logic with innovation to gain traction. I think this dynamic is actually accelerating my own development and maturity.

Redefined Roles and Responsibilities

Here are the guidelines which outline how each role contributes to quality, allowing flexibility based on team maturity and emphasising close collaboration:

Quality Assurance Engineer (QA): The general expectations are:

  • The team's quality ambassador and coach.
  • Testing expert leading QA strategy and test execution within the team.
  • Drives a shift-left approach where the entire team is accountable for quality.

Responsibilities: Contribute to agile rituals, create test strategies, design/execute test cases, manage defects, produce/expose quality KPIs, maintain documentation, coach the team on QA best practices and tool usage.

One of the biggest changes during the transformation was the increased technical expectations for QAs. In a team with limited resources, reporting directly to an Engineering Manager meant they couldn’t focus only on manual testing. They also had to add value by building automated tests and pipelines, troubleshooting problems, and sometimes fixing code themselves.
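As a rough illustration of that expectation, here is a minimal sketch of the kind of end-to-end check a QA engineer might now contribute as code rather than as a manual test case. It assumes Playwright, and the URL and the asserted element are placeholders, not taken from the article:

```typescript
// Minimal sketch of an automated end-to-end check a QA engineer might own.
// Assumes Playwright; the URL and the asserted element are placeholders.
import { test, expect } from '@playwright/test';

test('board toolbar is visible after opening a board', async ({ page }) => {
  // Placeholder URL; a real suite would target a dedicated test environment.
  await page.goto('https://example.test/boards/demo');

  // Assert a core UI element is rendered before attempting deeper interactions.
  await expect(page.getByRole('toolbar')).toBeVisible();
});
```

A check like this runs in the team's regular pipeline, so the QA engineer's work compounds instead of being repeated manually for every release.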

Engineering Manager (EM) / Team Leader (Direct Line Manager): The general expectations are:

  • Leader of the cross-functional squad, accountable for the quality of delivered products.
  • Focuses on delivery, outcomes, and people management.

Responsibilities: Manage attendance, set goals, conduct performance reviews, lead promotions/HR consultations, review/sign off on test strategies, monitor quality KPIs, manage team capacity (balancing delivery with community/growth time), own hiring (supported by QA/QA Lead), coach/mentor team members (including QA).

The main challenge was making Engineering Managers truly accountable for quality. We had to define clear metrics, trust them, track progress, and consistently act on the results. Once the quality status is automatically visible, the processes for addressing issues are in place, and EMs fully understand what’s expected of them, the QA lead naturally steps back and becomes less involved.

QA Lead (Dotted Line Manager / Quality Consultant):

  • Mentors QA engineers and supports their work delivery.
  • Acts as a quality consultant, coaching all team members and influencing consistency/alignment in QA practices across teams.

Responsibilities: Contribute to QA goals/objectives/performance reviews/promotions/HR escalations, manage standards for test plans/KPIs/tooling, mentor the engineering team on tool usage, manage QA Community contributions, conduct competency screening for hiring, mentor/coach QA engineers, and coach EMs on quality topics.

The QA Lead role kept evolving alongside decentralisation, acting as the link between high-level strategic initiatives from the Head of QA or Head of Engineering and their implementation within teams. QA Leads are able to empower teams while also driving large cross-stream programs, prototyping technical solutions, and stepping in wherever needed. Over time, the role becomes increasingly interim: helping to establish strong foundations and then stepping back.

Metric System for Success Measurement

When direct QA oversight is reduced, having a strong metrics system becomes essential. We use metrics to monitor quality levels across teams. If a team is performing well, we don’t interfere. But if they fall short of targets, we step in to understand the challenges and find ways to support their success.

The system must connect product quality with technical quality, so the expectations were stated as follows (a minimal sketch of how the manager-facing alerts might work appears after this list):

  • Overall Goal: Operationalise quality measurement by providing managers with high-level overviews and engineers with actionable, code-level insights.
  • For Managers: Offer clear quality dashboards filterable by stream/team, anomaly alerts, and recommended actions.
  • For Engineers: Provide early feedback on potential anti-patterns (e.g., architectural complexity, low test coverage) directly related to code changes.
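To make the manager-facing part more concrete, here is a minimal sketch of threshold-based anomaly detection over per-team quality metrics. The metric names, target values, and data shape are assumptions for illustration, not Miro's actual system:

```typescript
// Minimal sketch of manager-facing anomaly detection over per-team quality metrics.
// Metric names, targets, and the data shape are illustrative assumptions.
interface TeamQualitySnapshot {
  team: string;
  stream: string;
  testCoverage: number;         // 0..1
  bugsFixedOnTimeRate: number;  // 0..1 (OLA-style measure)
  meanTimeToResolveDays: number;
}

interface QualityAlert {
  team: string;
  metric: string;
  value: number;
  threshold: number;
}

// Assumed targets; in practice these would be agreed per stream or team.
const TARGETS = {
  testCoverage: 0.6,
  bugsFixedOnTimeRate: 0.8,
  meanTimeToResolveDays: 14,
};

function detectAnomalies(snapshots: TeamQualitySnapshot[]): QualityAlert[] {
  const alerts: QualityAlert[] = [];
  for (const s of snapshots) {
    if (s.testCoverage < TARGETS.testCoverage) {
      alerts.push({ team: s.team, metric: 'testCoverage', value: s.testCoverage, threshold: TARGETS.testCoverage });
    }
    if (s.bugsFixedOnTimeRate < TARGETS.bugsFixedOnTimeRate) {
      alerts.push({ team: s.team, metric: 'bugsFixedOnTimeRate', value: s.bugsFixedOnTimeRate, threshold: TARGETS.bugsFixedOnTimeRate });
    }
    if (s.meanTimeToResolveDays > TARGETS.meanTimeToResolveDays) {
      alerts.push({ team: s.team, metric: 'meanTimeToResolveDays', value: s.meanTimeToResolveDays, threshold: TARGETS.meanTimeToResolveDays });
    }
  }
  return alerts;
}
```

The important property is that the dashboard only escalates when a team drifts below its agreed targets; teams that are doing well are left alone.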

The QA Community

To counteract the potential silos created by decentralisation and to encourage continued collaboration and growth, the QA Community (formerly the QA Guild) was created. The new name reflects the intention to attract a broader group, not just QA engineers. The idea is that it becomes the driving force for quality and continuous improvement, where every member contributes to user experience and software reliability.

The QA Community had to address the challenges of decentralisation (communication, collaboration, knowledge sharing, consistency, and career development) by providing a unifying platform and fostering a self-sustaining community focused on quality. It allows QAs to contribute beyond their teams, with transparency towards their EMs regarding capacity, and it provides a place to define and align QA metrics (Quality KPIs) across teams.

Although facilitated by QA Leads, it is open to everyone (engineering, product, analytics, marketing, design, etc.) and encourages all members to be Directly Responsible Individuals (DRIs) for initiatives. There are weekly meetups, Miro Techtalks (internal and external), and invited external speakers to share knowledge. Each member can work with other teams on strategy and implementation, so the community is a great basis for exploring and sharing new tools, methodologies, and automation; it also offers guides, workshops, internships, and a mentorship program.

Quality Enablement Team

Reflecting the increased technical focus, a dedicated Quality Enablement team was established, consisting of Software Engineers (not QAs or SDETs). Their mission was to help product teams build and deliver high-quality software by equipping them with actionable insights as well as scalable, automated, efficient, and reliable testing solutions. They did this by assessing the quality state, identifying gaps and blockers, enhancing tooling, and providing data that keeps engineering leaders and teams accountable for quality.

Ultimately, once the fundamental quality components were established, the next step was to enhance the testability of the application, allowing for the development of more efficient testing tools. At this stage, Quality Enablement should also be decentralised across the other platform teams responsible for architectural and platform enablement.

Career Matrix Refinement

Decentralisation demanded a complete revision of the QA Career Matrix. Since EMs (non-QA experts) now manage QAs, the matrix must be clear, concise, and precise, avoiding ambiguity. Competencies must be clearly defined for each level (e.g., Junior, Middle, Senior) rather than listed as generic responsibilities. For instance, "setting up QA process in a team" is a senior-level competency, not a basic responsibility for all levels. This refactoring involved careful wording.

(Figure: Roles and Responsibilities)

Consequences

Implementing such a significant change involves risks:

  • Reduced QA Community Capacity: Risk: Engineers prioritise team tasks over community initiatives. Mitigation: Negotiate a ~20% capacity buffer with EMs for community contributions and personal development.
  • Limited Career Progression Perception: Risk: A flatter structure might seem like a dead end. Mitigation: Clearly define roles/responsibilities, demonstrating impact via dotted-line influence and cross-organisational contributions.
  • Lack of Consistency/Alignment: Risk: Team autonomy leads to fragmented approaches. Mitigation: Evolve policies into flexible guidelines, maintain a consistent quality bar via the QA Community, and encourage cross-pollination of best practices.
  • Reduced Team Health: Risk: Poor implementation or lack of manager maturity impacts morale. Mitigation: Maintain open communication, use Role/Responsibility guidelines as guardrails (not rigid rules), provide continuous coaching/mentoring, and support leadership development.

The Results

Our quality metrics have reached levels we've never seen before—and they’ve become significantly more stable. This, in turn, allowed us to tighten the quality thresholds while still maintaining the same high standards. At the same time, we’ve developed a solid, systematic approach to measure the process of addressing bugs—specifically, fixing bugs on time.

The approach consisted of several essential elements:

  • An OLA (operational level agreement) tracking the percentage of bugs fixed on time, currently around 80%, calculated with a stricter formula that prevents bugs from quietly accumulating in the backlog, which was previously possible.
  • MTTR (mean time to resolve) for bugs, with an expected timeframe defined across the entire engineering organisation.
  • We now record roughly twice as many bugs. Only a small share of them are genuinely new; the increase mainly reflects more accurate tracking, since reports from diverse sources are automatically classified as bugs. We track Defect Density ((Bug Fixes / Lines Added) * 1000), which occasionally exceeds 4 in 2-3 teams but is very positive overall. Recurring bugs in the same files are exceptional and become part of Operational Reviews. We also track Potential Bug Mismatches: PRs without a linked Jira ticket, Jira items that are not classified as bugs but contain fixes or errors, or cases where the Churn Ratio (Lines Deleted / Lines Added) indicates that existing code was changed rather than new code added. (A small calculation sketch follows this list.)
  • Test coverage is stable for monoliths and growing in isolated components within monoliths, as well as in microservices, where quality tools are implemented. We have now rolled them out to all the microservices and are starting to enforce the static code analysis checks.
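To make the formulas above concrete, here is a small calculation sketch for Defect Density, Churn Ratio, and the "potential bug mismatch" heuristic. The input shape and the churn cut-off are assumptions for illustration, not Miro's internal tooling:

```typescript
// Sketch of the Defect Density and Churn Ratio calculations described above.
// The input shape and the churn cut-off are illustrative assumptions.
interface ChangeStats {
  bugFixes: number;    // merged changes classified as bug fixes
  linesAdded: number;
  linesDeleted: number;
}

// Defect Density = (Bug Fixes / Lines Added) * 1000
function defectDensity(stats: ChangeStats): number {
  return stats.linesAdded === 0 ? 0 : (stats.bugFixes / stats.linesAdded) * 1000;
}

// Churn Ratio = Lines Deleted / Lines Added; a high ratio suggests existing
// code was changed rather than new code added.
function churnRatio(stats: ChangeStats): number {
  return stats.linesAdded === 0 ? Infinity : stats.linesDeleted / stats.linesAdded;
}

// Heuristic for a "potential bug mismatch": a PR without a linked ticket, or a
// non-bug ticket whose change looks like a fix (high churn). The 0.8 cut-off is assumed.
function isPotentialBugMismatch(hasLinkedTicket: boolean, ticketIsBug: boolean, stats: ChangeStats): boolean {
  if (!hasLinkedTicket) return true;
  return !ticketIsBug && churnRatio(stats) >= 0.8;
}

// Example: 12 bug fixes over 2,400 added lines gives a Defect Density of 5.
console.log(defectDensity({ bugFixes: 12, linesAdded: 2400, linesDeleted: 900 }));
```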

Previously, one QA could cover 2-3 teams. Post-decentralisation, many engineering teams operate without a dedicated QA engineer. The trend is towards having QA engineers only in exceptional cases. With limited resources within teams—and as quality stopped being solely dependent on QA engineers—teams began phasing out the dedicated QA role. Senior Engineering leaders often supported this direction, even in cases where teams preferred to keep a QA role in place.

During the transformation, when we still had a significant number of QA Engineers, keeping the QA Community engaged required continuous effort. But as the proportion of QA Engineers decreased, other roles showed less interest in driving quality initiatives collectively, and the QA Community effectively stopped working once the number of QA Engineers became small. I'm still exploring various opportunities to enhance QA practices. Quality gradually became just another part of day-to-day work, something teams handled as they went, rather than a dedicated focus area or strategic goal. When issues do arise, and of course they do, they're often seen as something for the Quality Enablement team to tackle, or for other DevEx teams once Quality Enablement is decentralised.

The QA Lead position was held on an interim basis. Two years into the transformation, Quality Enablement no longer exists as a separate function, and we no longer have dedicated QA Leads. The QA Engineer role is no longer a title either; it has become a highly technical SWE role with a QA specialisation, present only in exceptional teams.

All in all, decentralising the QA function at Miro was a challenging but necessary step to foster a scalable culture where quality is a shared responsibility, deeply embedded in product development. While it reduced the number of dedicated QA personnel, it pushed for greater technical depth, stronger tooling, refined metrics, and a collaborative QA Community, ultimately aiming to enhance Miro's passion for delivering high-quality products.

The most significant consequence has been a reduction in the dedicated QA function, without which shift-left testing could not be implemented fully. No QA engineer title, no QA leads, and no need for a Head of QA. 

There are two paths for me: grow upward into broader engineering leadership roles, or crash downward. The first option looks good, and I’m certainly not opposed to it. But oddly enough, it doesn’t always bring stability. It’s easy to end up floating in your own clouds, disconnected from what you’re supposed to be steering.

The second might sound bleak, but if you reframe it as growing deeper instead of falling down, it becomes more positive. My experience is a strong foundation, and as long as my head stays pointed toward the light, toward learning, digging into the depths of innovation in a fast-moving world might just lift me even higher. This is my choice.

What’s Next?

With fewer dedicated QAs, reliance on robust quality metrics, dashboards, quality gates, linters, and other automated tools becomes paramount, spanning product, process, and technical quality.
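For instance, a quality gate can be as simple as a CI step that fails the build when line coverage drops below an agreed threshold. This is a generic sketch assuming an Istanbul-style coverage-summary.json and an arbitrary threshold, not Miro's actual gate:

```typescript
// Generic CI quality-gate sketch: fail the pipeline when line coverage drops
// below an agreed threshold. The report path and threshold are assumptions.
import { readFileSync } from 'node:fs';

const COVERAGE_THRESHOLD = 0.7; // assumed value agreed per repository

// Expects an Istanbul-style coverage summary (coverage/coverage-summary.json).
const summary = JSON.parse(readFileSync('coverage/coverage-summary.json', 'utf8'));
const lineCoverage = summary.total.lines.pct / 100;

if (lineCoverage < COVERAGE_THRESHOLD) {
  console.error(`Quality gate failed: line coverage ${(lineCoverage * 100).toFixed(1)}% is below ${COVERAGE_THRESHOLD * 100}%`);
  process.exit(1);
}

console.log(`Quality gate passed: line coverage ${(lineCoverage * 100).toFixed(1)}%`);
```

Gates like this encode the agreed quality bar in the pipeline itself, so no dedicated person needs to police it.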

Furthermore, explicit testing remains essential. We need efficient ways to test broadly. This involves exploring AI-driven automation for test generation and execution, code analysis, bug fixing and much more, alongside structured bug-bashing sessions, potentially involving external participants and AI as well. Importantly, findings from these methods are treated as signals of process/tooling gaps to be addressed and improved, not as a safety net that excuses developers from owning quality.

