Quantitative Testing vs. Expert Reviews
This is the third post of a series of four. You can find the previous posts here.
While many see user testing as a risk-minimizing technique, testing with a poor setup carries just as much risk. You might get feedback that seems solid but is merely an artifact of a faulty test setup, of asking the wrong questions. We shouldn't be asking too many questions in the first place; rather, we should observe our testers using the product and strive for objective, quantifiable metrics. The usability of specific routes through a UI can be quantified by measuring effectiveness and efficiency through A/B testing. This works well for choosing between alternatives, modifications and iterations.
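As a rough illustration (not from the original article), conversion comparisons of this kind are commonly evaluated with a two-proportion z-test. The sketch below uses only the Python standard library; the variant names and the sample numbers are hypothetical.

```python
import math

def ab_conversion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test comparing conversion rates of variants A and B.

    conv_a/conv_b: number of conversions; n_a/n_b: number of visitors.
    Returns (z statistic, two-sided p-value).
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled conversion rate under the null hypothesis (no difference)
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value via the normal CDF, expressed with math.erf
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical data: variant B converts 120 of 1000 visitors, A 100 of 1000.
z, p = ab_conversion_z_test(100, 1000, 120, 1000)
print(f"z = {z:.2f}, p = {p:.3f}")
```

A p-value above the chosen threshold simply means the sample doesn't support a conclusion either way - which is exactly the "the data says what the data says, and nothing else" caveat.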
"strive for objective, quantifiable metrics."
Keep in mind that the data says what the data says, and nothing else. Be aware, too, of the limitations of a test setup - for example, while A/B testing for conversion, we can't reliably measure how brand values are perceived. Weighing the findings will always remain up to the team.
"disruptive, structural decisions require a bigger picture."
Major, disruptive, structural decisions, in any case, require a bigger picture and would simply overwhelm our standard tester and test setup - and typically it is exactly these decisions that present themselves first when developing a disruptive product. Prototypes at this stage don't provide full functionality, be they paper prototypes, click dummies, physical mock-ups or partially programmed versions. Quantitative metrics alone won't suffice, and we'll often need test users with previous knowledge. Typically, some sort of informed but subjective opinion will be the only possible result. In other words: specialists providing an expert review that goes beyond usability metrics for webshop conversions and the like.
These experts rely on us to provide a test setup with sufficient information for them to reach a qualified opinion. And again, it will be up to the development team to decide what to make of the feedback received - no one can take this responsibility off their shoulders.
The sample defines the result
Without meaningful metrics, there is no way of obtaining reliable test results - unless you are after hollow numbers to outsource responsibility, which I assume you're not.
The same applies to expert reviews: however qualified the feedback, it will be biased in one way or another. The important part here is to a) select testers you can actually learn from about your product, and b) strive for a representative cross-section of your market. As far as the bigger picture is concerned, not all veteran users will be able to provide insights about a disruptive product.
"some might prefer what they know - simply because they are used to it"
Henry Ford famously stated, "If I had asked my customers what they wanted, they would have asked for a faster horse". Now imagine how carriage owners would have reacted to being offered a test drive in a Model T. Some would have rejected the very idea of a horseless carriage, while others might have readily taken part, assessing the potential, challenges and shortcomings of the innovation from their vantage point. While insight may be gained from both groups, we should be careful to find a balance when choosing testers. Familiar with previous product generations, some might prefer what they know over what strives to innovate - simply because they are used to it. Their insight is valuable, but possibly limited.
Another factor to take into account is including users who represent all involved personas: for a sports product, these could be an athlete, a trainer, and a reseller, whose complementary expert opinions provide a bigger picture than focusing exclusively on the principal user ever could. Setup and sample influence test results - and again, interpretation is up to the team.
For conclusions on how to structure user testing for your product, and how to integrate it into your product development, see the final post of this series here.
About the author: Heinrich Lentz is the founder and design director of Antimatter, a physical/digital product design agency in Vienna, Austria. Previously, he worked in product and UX/UI design for agencies in Austria and Spain and lectured at IED Barcelona.
This 4-part series was originally published by Heinrich Lentz as a single article on LinkedIn in October 2018. It has been adapted to fit the new format.