
It's Never Too Early to Think About Performance

Business users specify their needs primarily through functional requirements. The non-functional aspects of a system, like performance, responsiveness, uptime, support needs, and so on, are left up to the development team.

Testing of these non-functional requirements is left until very late in the development cycle, and is sometimes delegated completely to the operations team. This is a big mistake that is made far too often. Having separate development and operations teams is already a mistake in itself, but I will leave that discussion for another article.

I was recently part of two large software development projects where performance was addressed too late, and the cost and time needed to fix it were an order of magnitude larger than they would have been had performance been addressed early in the project. Not to mention the bad reputation the teams and systems earned after going live with performance so poor that users could hardly do their daily work with the system.

Besides knowing before you go live that users are not going to be happy (and that you therefore should NOT go live), there is another big advantage to early performance testing. If you aren't looking at performance until late in the project cycle, you have lost an incredible amount of information about when performance changed. If performance is going to be an important architectural and design criterion, then performance testing should begin as soon as possible. If you are using an Agile methodology based on two-week iterations, I'd say performance testing should be included in the process no later than the third iteration.

Why is this so important? The biggest reason is that at the very least you know the kinds of changes that made performance fall off a cliff. Instead of having to think about the entire architecture when you encounter performance problems, you can focus on the most recent changes. 

Doing performance testing early and often provides you with a narrow range of changes on which to focus. In early testing, you may not even try to diagnose performance, but you do have a baseline of performance figures to work from. This trend data provides vital information in diagnosing the source of performance issues and resolving them.
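
If you automate these runs (more on that below), each run can write its summary statistics to a file that your build server archives, which gives you that trend line almost for free. As a minimal sketch using Taurus's final-stats reporting module (the file name is just a placeholder):

    reporting:
    - module: final-stats
      summary: true              # print the overall summary at the end of the run
      percentiles: true          # include response-time percentiles
      dump-csv: build-stats.csv  # archive this file per build to get trend data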

This approach also allows for the architectural and design choices to be validated against the actual performance requirements. Particularly for systems with hard performance requirements, early validation is crucial to delivering the system in a timely fashion.

“Fast” Is Not a Requirement 

"Fast" is not a requirement. Neither is "responsive". Nor "extensible". The main reason why not is that you have no objective way to tell if they're met. 

Some simple questions to ask: How many? In what period? How often? How soon? Increasing or decreasing? At what rate? If these questions cannot be answered, then the need is not understood. The answers should be in the business case for the system, and if they are not, some hard thinking needs to be done. If you work as an architect and the business hasn't told you these numbers (or won't), ask yourself why not. Then go get them. The next time someone tells you that a system needs to be "scalable", ask them where the new users are going to come from and why. Ask how many, and by when. Reject "lots" and "soon" as answers.

Uncertain quantitative criteria must be given as a range: the least, the nominal, and the most. If this range cannot be given, then the required behavior is not understood. As an architecture unfolds it can be checked against these criteria to see if it is (still) in tolerance. As the performance against some criteria drifts over time, valuable feedback is obtained. Finding these ranges and checking against them is a time-consuming and expensive business. 
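
To make this concrete, here is a hedged sketch of what such a criterion can look like once it is written down in machine-checkable form, using the Taurus pass/fail module described below; the label, thresholds, and timings are invented for illustration:

    reporting:
    - module: passfail
      criteria:
      # nominal: average response time of the (hypothetical) checkout label stays under 800 ms
      - avg-rt of checkout>800ms for 30s, stop as failed
      # most: the 90th percentile must never exceed 2 seconds
      - p90 of checkout>2s for 30s, stop as failed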

If no one cares enough about the system being "performant" (neither a requirement nor a word) to pay for performance tests, then more than likely performance doesn't matter.

You are then free to focus your efforts on aspects of the system that are worth paying for.

Automated Performance Testing

To keep the cost and time spent on performance testing in check, I advise you to automate as much of it as possible. A tool like Taurus simplifies the automation of performance testing: it is built for developers and DevOps, and it relies on JMeter, Selenium, Gatling and Grinder as underlying engines. It also enables parallel testing, its configuration format is readable and diffs cleanly in version control, it is tool friendly, and tests can be expressed in YAML or JSON.
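
As a minimal sketch, a Taurus test is a small YAML file run with the bzt command line tool; the endpoint, concurrency, and durations below are placeholders:

    execution:
    - executor: jmeter     # JMeter generates the load under the hood
      concurrency: 50      # 50 concurrent virtual users
      ramp-up: 1m          # reach full load within one minute
      hold-for: 5m         # then hold it for five minutes
      scenario: homepage

    scenarios:
      homepage:
        requests:
        - http://example.com/   # placeholder URL

Running bzt load.yml executes the test and prints live statistics, and the same file can live in version control right next to the application code.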

Here are some types of tests you can run in an automated fashion (a configuration sketch for two of them follows the list):

> Load Tests are conducted to understand the behavior of the system under a specific expected load.

> Stress Tests are used to understand the upper limits of capacity within the system.

> Soak Tests determine if the system can sustain the continuous expected load.

> Spike Tests determine if the system can sustain a suddenly increasing load generated by a large number of users.

> Isolation Tests determine if a previously detected system issue has been fixed by repeating a test execution that resulted in a system problem.
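
In Taurus terms, several of these differ only in the load profile of the execution block. A hedged sketch of a soak test (all numbers invented), with a note on how a spike test would differ:

    execution:
    - executor: jmeter
      concurrency: 50     # moderate, realistic load...
      ramp-up: 5m
      hold-for: 4h        # ...held for hours to expose leaks and degradation
      scenario: homepage

    # A spike test would instead use a short ramp-up (say 30s) to a much
    # higher concurrency (say 500) to see how the system absorbs the surge.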

Closing Thoughts

Technical testing is notoriously difficult to get going. Setting up the appropriate environments, generating the proper data sets, and defining the necessary test cases all take a lot of time. By addressing performance testing early, you can establish your test environment incrementally, avoiding much more expensive efforts after you discover performance issues late in the project.

In a nutshell: It's never too early to think about performance.
Posted on Monday, April 09, 2018 by Henrico Dolfing