Automation falls short for user-centric testing

Since the dawn of the industrial revolution, efficiency gains have been attributed, in large part, to automation. Less effort means less time, and less time means lower costs. This holds true for many activities, but some are less suited to a “the more the better” approach to automation.

Among these activities is software testing. Automation in software testing provides many efficiencies, but it has its limits. It also introduces hurdles of its own, particularly the programming skills required to set up and maintain test scripts and the difficulty of simulating the end users’ environment and behavior. These are the areas where manual testing shines!

Manual testing requires less programming overhead

To write and maintain automated test scripts, either a tester must have a background in software development or a developer must take time away from product development. In either case, maintaining automated scripts is a tedious, full-time task, as scripts need updating whenever a feature changes. The effort is compounded if the automated scripts go beyond basic functional testing to cover a wide range of user scenarios. For this reason, many development teams stick to functional testing and overlook user-centric testing altogether, which can result in enough user frustration to prevent the product’s success.

Of course, manual tests require maintenance as well, but they do not require an engineering background. While maintaining an automated test script can mean updating many lines of code, updating a manual test might only take a couple of sentences or a few bullet points.
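To make the contrast concrete, here is a minimal sketch of the kind of automated UI check this maintenance burden applies to. It uses Playwright, and the URL, selectors, and credentials are illustrative assumptions rather than a real application under test:

```ts
import { test, expect } from '@playwright/test';

// Hypothetical login flow; the URL, selectors, and credentials below are
// illustrative assumptions, not a real application.
test('user can log in and reach the dashboard', async ({ page }) => {
  await page.goto('https://example.com/login');
  await page.fill('#username', 'demo-user');
  await page.fill('#password', 'demo-password');
  await page.click('button[type="submit"]');

  // Any change to these selectors, the route, or the post-login layout breaks
  // this assertion and sends someone back into the script to update it.
  await expect(page.locator('.welcome-banner')).toBeVisible();
});
```

The equivalent manual test case might simply read, “Log in with a valid account and confirm the welcome message appears,” and a UI change rarely requires more than rewording that sentence.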

Dedicated testers minimize bias

Many organizations are streamlining their development teams by hiring full-stack developers and having those developers also wear the hat of tester. This dual role can create a conflict of interest: the developers who wrote the code, often working under time constraints, are more likely to limit testing to functionality alone so the feature can pass and the team can stay on schedule. A dedicated tester, focused solely on testing, has more bandwidth and less of this bias, and can therefore counterbalance the developers and provide effective quality control.

Manual testing can more closely mimic end-user environment

A developer’s workstation is often something to behold, with an abundance of processing power, plenty of memory, and high-resolution graphics. This is not the case for the typical end user. Running tests on a powerful developer workstation, whether by a developer or by an automated suite, can yield results that differ from what real-world users experience, overlooking issues those users will encounter. If you want to ensure your application performs well for your end users, conduct manual testing in an environment that closely resembles theirs.
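Automation can only approximate such an environment. As a rough sketch (assuming a Chromium-based Playwright setup; the URL and the throttling values are illustrative assumptions), CPU and network throttling through the Chrome DevTools Protocol is about as close as a scripted test gets to a low-end machine on a slow connection:

```ts
import { test } from '@playwright/test';

test('page loads acceptably on a throttled profile', async ({ page, browserName }) => {
  test.skip(browserName !== 'chromium', 'CDP throttling is Chromium-only');

  // Approximate a low-end laptop on a slow connection; the values are illustrative assumptions.
  const cdp = await page.context().newCDPSession(page);
  await cdp.send('Emulation.setCPUThrottlingRate', { rate: 4 });   // roughly 4x slower CPU
  await cdp.send('Network.enable');
  await cdp.send('Network.emulateNetworkConditions', {
    offline: false,
    latency: 300,                                 // added round-trip latency in ms
    downloadThroughput: (1.5 * 1024 * 1024) / 8,  // about 1.5 Mbps down, in bytes/sec
    uploadThroughput: (750 * 1024) / 8,           // about 750 Kbps up, in bytes/sec
  });

  const start = Date.now();
  await page.goto('https://example.com');         // illustrative URL
  console.log(`Load took ${Date.now() - start} ms under throttling`);
});
```

Even then, the script still runs on powerful hardware with none of the browser extensions, background applications, or aging peripherals a real user’s machine carries, which is why testing on representative hardware remains valuable.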

Manual testing can more closely simulate end-user behavior

Automation is often ideal for testing functionality and performance, but it tends to fall short when it comes to ease of use. Testers armed with a list of user scenarios, however, can efficiently work through a variety of user-centric perspectives, edge cases, and visual bugs, shedding light on two key aspects of the user experience.

Usability
Manual tests are ideal for identifying usability issues that would likely go undetected by automated tests alone. For example, wording that may be unfamiliar to the average user, or the same word being used for two different actions, are design issues that can cause significant user frustration and are easily caught through manual testing.

Accessibility
An application’s accessibility can be measured only in part by automated tools, as many accessibility guidelines require manual test procedures. For example, one important accessibility guideline requires that all functionality be operable from the keyboard alone. For this criterion, the tester steps through each user task flow and confirms that every task can be completed using only the keyboard.
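For the automated portion, a scan with a rules engine such as axe-core catches machine-checkable issues like missing labels, low contrast, and ARIA misuse, but it cannot tell you whether a task flow can actually be completed by keyboard; that judgment still requires a human stepping through the flow. A minimal sketch, assuming the @axe-core/playwright integration and an illustrative URL:

```ts
import { test, expect } from '@playwright/test';
import AxeBuilder from '@axe-core/playwright';

test('automated accessibility scan (partial coverage only)', async ({ page }) => {
  // Illustrative URL; substitute the page under test.
  await page.goto('https://example.com/checkout');

  // axe-core flags machine-checkable issues: missing labels, contrast, ARIA misuse, etc.
  const results = await new AxeBuilder({ page }).analyze();
  expect(results.violations).toEqual([]);

  // What this scan cannot verify: whether the whole checkout flow can be
  // completed with the keyboard alone. That judgment still takes a manual pass.
});
```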

For identifying accessibility and usability issues, the advantages of manual testing over automated testing cannot be overstated.

Conclusion

Automated testing is imperative for jobs such as build verification, stress testing, and batch testing, while manual testing is necessary for accessibility and usability testing. If an optimal user experience is important to your organization’s business goals, user-centric manual testing should be an integral part of your testing regimen.

About the Authors

Wanda Hill, Software Tester
Wanda joined Flexwind Inc in 2019 and has experience in systems engineering, software testing, and system integration. She has supported both small applications and large, critical enterprise systems for various government agencies. Wanda holds a Bachelor of Science in Networking from Strayer University.

Petrina Moore Pervall, UX Engineer
Petrina joined Flexwind in 2018 and has over 20 years of experience in UI development, user research, and UX design. She helps development teams discover user behavior patterns and make informed design decisions using qualitative research methods and cognitive design principles. Petrina holds UX certifications from Human Factors Int’l and NN/g as well as a Master’s in Human Factors and Applied Cognition.