Performance Testing Issues and Trends

And now, after my posts about agile performance testing and performance requirements in agile projects, we get to another fundamental issue behind performance testing in agile projects.

There are two very important assumptions behind what I am saying below: stakeholders understand the need for performance engineering, and the team knows the basics. How to get there is a very interesting topic, but it is outside the scope of this post. For those who are not there yet, the issues described below may sound abstract, and they probably have more urgent problems to fight – but it is something you may want to be aware of while you are getting there.

The fundamental issue, as I see it, is that performance engineering teams don’t scale well, even assuming that they are competent and effective. At least not in their traditional form. They work well in traditional corporate environments, where they check products for performance before release, but they face challenges as soon as we start to expand the scope of performance engineering (early involvement, more products/configurations/scenarios, etc.). And agile projects, where we need to test the product every iteration or build, expose the problem through the increased volume of work to do.

Just to avoid misunderstandings, I am a strong supporter of having performance teams, and I believe that it is the best approach to building a performance culture. Performance is a special area, and performance specialists should have an opportunity to work together to grow professionally. The details of organization may vary (Scott Barber, for example, specified three models: “on demand”, “on retainer”, “full immersion”), but a center of performance expertise should exist. The only thing I am saying here is that while the approach works fine in a traditional environment, it needs major changes in organization, tools, and skills when the scope of performance engineering is extended (as in the case of agile projects).

Actually, the remedies are well known: automation, making performance everyone’s job (full immersion), etc. However, they are not widespread yet.

Historically, performance testing automation was almost non-existent (at least in traditional environments). Performance testing automation is much more difficult than, for example, functional testing automation. (I use “automation” here to mean what is needed for “continuous testing” – a process in which a test runs and a report is produced automatically for each new build, without human intervention – not in its old sense of simply using a tool; in performance testing we almost always use a tool.) Setups are much more complicated. The list of possible issues is long. Results are complex (not just pass/fail), and it is not easy to compare two result sets. So it is definitely much more difficult and would probably require more human intervention, but it isn’t impossible.
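To make the “not just pass/fail” point more concrete, here is a minimal sketch of what such a comparison step could look like: reducing two result sets (a stored baseline and the latest run) to a pass/fail verdict on response time percentiles and error counts. The file names, JSON layout, and the 20% tolerance are my assumptions for illustration, not the format of any particular tool:

```python
# Minimal sketch: compare a new build's load test results against a baseline
# and fail the build on regressions. File names, metric names, and the
# tolerance are illustrative assumptions, not a real tool's output format.
import json
import sys

TOLERANCE = 0.20  # allow up to 20% response time regression before failing

def load_results(path):
    """Results assumed to be JSON: transaction name -> {"p90_ms": ..., "errors": ...}."""
    with open(path) as f:
        return json.load(f)

def compare(baseline, current):
    failures = []
    for name, base in baseline.items():
        cur = current.get(name)
        if cur is None:
            failures.append(f"{name}: missing from current results")
            continue
        if cur["errors"] > base["errors"]:
            failures.append(f"{name}: errors rose from {base['errors']} to {cur['errors']}")
        if cur["p90_ms"] > base["p90_ms"] * (1 + TOLERANCE):
            failures.append(f"{name}: 90th percentile {cur['p90_ms']} ms vs baseline {base['p90_ms']} ms")
    return failures

if __name__ == "__main__":
    failures = compare(load_results("baseline.json"), load_results("latest_run.json"))
    for line in failures:
        print("REGRESSION:", line)
    sys.exit(1 if failures else 0)  # non-zero exit fails the build in CI
```

Even a crude check like this hides a lot of judgment (which metrics, which thresholds, how to handle variability between runs), which is exactly why the human intervention rarely disappears completely.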

However, the cost of performance testing automation is high. You need to know the system well enough to make automation meaningful. Automation for a new system doesn’t make much sense – the overhead is too high. So there was almost no automation in the traditional environment [with testing at the end, using a record/playback tool]. When you test the system once in a while, before the next major release, the chances of re-using your artifacts are low.

It is the opposite when the same system is tested again and again (as it should be in agile projects): it makes sense to invest in setting up automation. That rarely happened in traditional environments – even if you test each build, the builds are far apart and the differences between them prevent re-using the artifacts (especially with recorded scripts; an API, for example, is usually more stable). So demand for automation was rather low, and tool vendors didn’t pay much attention to it. Well, the situation is changing – we may see more automation-related features in load testing tools soon.

There are some vendors claiming that their load testing tools fit agile processes better, but it looks like, in the best case, this means that the tool is a little easier to handle (and, unfortunately, often just because there is not much functionality in it). Even if there is something that may be used for automation, like starting the tool from a command line with parameters, it is difficult to find out.
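As an illustration of the kind of hook I mean, here is a hypothetical sketch of a CI step launching a load test from the command line. The tool name and its flags are invented – which is precisely the problem: whether your tool exposes anything like this is often hard to find out from its documentation:

```python
# Hypothetical sketch of driving a load testing tool from a CI job.
# "loadtool" and its flags are invented for illustration; real tools differ.
import subprocess
import sys

result = subprocess.run(
    [
        "loadtool",                      # hypothetical command-line runner
        "--scenario", "checkout.scn",    # pre-built test scenario
        "--users", "50",
        "--duration", "15m",
        "--report", "results/latest_run.json",
    ],
    check=False,
)
sys.exit(result.returncode)  # propagate failure to the CI pipeline
```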

At the moment, I have read about only a few implementations of continuous performance testing – for example, OpTier or Betfair (and a few about measuring response times during functional testing in single-user mode – like this – which is a good step toward full-scale performance testing automation). Some prototypes, like this, are described – but without many details.
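The single-user approach is the easiest to picture: the functional tests that already run on every build are instrumented to record how long each step takes, and those timings are tracked from build to build. A minimal sketch, with an assumed helper name and log format:

```python
# Minimal sketch: collect single-user response times from existing functional
# tests. The helper name, log file, and step functions are assumptions.
import time

TIMINGS_LOG = "single_user_timings.csv"

def timed(name, func, *args, **kwargs):
    """Run one functional-test step and record how long it took."""
    start = time.perf_counter()
    result = func(*args, **kwargs)
    elapsed_ms = (time.perf_counter() - start) * 1000
    with open(TIMINGS_LOG, "a") as log:
        log.write(f"{name},{elapsed_ms:.1f}\n")
    return result

# Usage inside a functional test (the step functions are placeholders):
# timed("login", login_as, "test_user")
# timed("search", search_catalog, "laptops")
```

It only measures single-user response times, not behavior under load, but it reuses artifacts that already exist and catches many regressions early.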

We probably don’t see it more often because we don’t have much infrastructure for that kind of automation, and performance testers may not be the best people to create complex integrated solutions from dozens of unrelated pieces (as those who implemented it had to). When we get more automation support in tools, we will probably see this change.

By the way, I am not saying that automation would replace performance testing as we know it. Performance testing of new systems is agile and exploratory in nature and can’t be replaced by automation (well, at least not in the foreseeable future). Automation would complement it – together with additional input from development – offloading performance engineers from routine tasks that do not require sophisticated research and analysis.
