The goal of Quality Assurance is to ensure that your product meets requirements, is well tested, and leaves customers satisfied. To get there, your testing and QA processes must provide meaningful information throughout the development lifecycle and beyond, so that engineers and decision-makers can make informed choices efficiently.
Dashboards as a Tool
The challenge for many is making sense of the data. Dashboards serve as a functional way to present information in a simple overview. Unfortunately, they can also drown the signal in too much data, hide essential details, or produce aggregates that are easy to ignore.
Let’s look at a TFS dashboard from Visual Studio Reporting Tools:
This particular dashboard looks great and provides meaningful readings: things are either good or bad (green or red), and it is easy to see at a glance that all is well. In reality, dashboards rarely look this clean. Many teams see prominent displays of yellow and orange, with little clear green or red anywhere.
Where Dashboards Go Wrong
Flaky tests: When a suite contains many tests that fail intermittently, because of concurrency issues for instance, there will inevitably be an occasional failing test on any given run. The result is a dashboard that delivers a steady stream of unhelpful orange.
False Sense of Security: Dashboards can also provide a false sense of security. If you set a goal to keep failures under 1%, but you only ever look at the percentage and never check which tests are failing, you will likely miss that the same key defects have been failing for quite some time.
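To make the trap concrete, here is a minimal sketch in Python. The test names, histories, and the 1% threshold are illustrative assumptions, not data from any real project: the aggregate failure rate stays comfortably under the goal every single day, while one defect fails continuously.

```python
# Hypothetical pass/fail history: test name -> one boolean per day (True = pass).
history = {f"test_ok_{i}": [True] * 5 for i in range(500)}  # healthy tests
history["test_login"] = [False] * 5                          # same defect, every day
history["test_search"] = [True, True, False, True, True]     # one-off flake

def daily_failure_rate(history, day):
    """Fraction of tests that failed on the given day -- the number on the dashboard."""
    results = [runs[day] for runs in history.values()]
    return results.count(False) / len(results)

def persistent_failures(history):
    """Tests that failed on every recorded day -- the signal the percentage hides."""
    return [name for name, runs in history.items() if not any(runs)]

# Every day stays under the 1% goal, yet test_login never passed once.
assert all(daily_failure_rate(history, day) < 0.01 for day in range(5))
assert persistent_failures(history) == ["test_login"]
```

Checking only the first number, the dashboard reports green all week; checking the second reveals the defect that has been there the whole time.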
A Call to Action
In the end, the goal of a dashboard is not merely to convey information, but to get people to act when the data indicate something bad. A thoughtfully constructed dashboard is a warning flag and a call to action, but only if what it says actually causes people to act.
If the dashboard is not actionable, people will ignore what it displays.
It is like the threat-level announcements at the airport or the nightly news: you know all is not well, but you learn to tune it out or find another way to numb yourself to the information. Developers who are told every day that there are some failures will do the same. Sure, they will spend some time chasing unreproducible failures, but after a while they will stop paying attention.
A Good Dashboard Generates an Emotional Response
The genius of a thoughtfully constructed dashboard is that it generates an emotional response.
Oh no, not again!
So how do we get people back to the emotional reaction of a simple red/green dashboard in a world where things are noisy?
This question led to the creation of a new dashboard that focuses on the history of defects and failures. In this scheme, new failures are orange, failures that persist over consecutive days turn red, and red failures that remain for several more days go black.
Screenshot © Possum Labs 2018
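The color scheme boils down to a simple rule on how many consecutive days a test has been failing. Here is a minimal sketch in Python; the thresholds of 2 and 5 days are illustrative assumptions, not values from the article or the Possum Labs dashboard.

```python
def consecutive_failures(runs):
    """Count the trailing run of failures in a pass/fail history (True = pass)."""
    count = 0
    for passed in reversed(runs):
        if passed:
            break
        count += 1
    return count

def status_color(runs, red_after=2, black_after=5):
    """Map a test's history to the dashboard color described above."""
    days = consecutive_failures(runs)
    if days == 0:
        return "green"   # currently passing
    if days >= black_after:
        return "black"   # a red failure that has lingered for days
    if days >= red_after:
        return "red"     # failing on consecutive days
    return "orange"      # new failure

# A one-day failure is orange; a week-long failure has gone black.
assert status_color([True, True, False]) == "orange"
assert status_color([False] * 7) == "black"
```

Because the color depends on history rather than on today's snapshot, a one-off flake and a week-old defect no longer look the same.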
It is easy to see from the chart that even though there are some flaky tests, there are also a few consistent failures that have recurred over the last few weeks.
This visibility creates an opportunity to start drawing lines in the sand, for example, setting the goal of eliminating any black from the charts. Even when a 100% pass rate seems infeasible for a project, it should be manageable to at least fix the tests that fail consistently.
This type of goal is useful because it can provoke an emotional response. People can visually filter the signal from the noise and get to the heart of the information: there are indeed failures that matter. And when we make it evident how things change over time, people will notice when things get worse.
Dashboards are a Tool — not a Solution
Each organization is different, and each has its own challenges. Getting to a 100% pass rate is much easier when it is an expectation from the beginning of a project, but often systems were designed years before the first tests crept in. In those scenarios, the best plan is to create a simple chart and listen to people when they tell you it is meaningless. These are the people who give you your actual requirements; they will spell out where and what to highlight to get an emotional response to the dashboard data.
Dashboards do not always convey meaningful data about your tests. Without thoughtful construction, they are likely to fail at getting people to act when the news is bad. Bad results on your dashboard should be a warning flag and a call to action, but the results need to mean something, or your people will simply ignore them and move on.
At Possum Labs, meaningful dashboards are not a solution, but just one of many tools available to ensure that your QA delivers effective and actionable data to your teams.