I am looking forward to sharing my thoughts on ‘Reinventing Performance Testing’ at the imPACt performance and capacity conference by CMG, held on November 7-10, 2016 in La Jolla, CA. I decided to publish a few parts here to see if anything triggers a discussion.
Cloud and cloud services have significantly increased the number of options to configure for both the system under test and load generators. The cloud has practically eliminated the lack of appropriate hardware as a reason for not doing load testing, and has significantly decreased the cost of large-scale load tests, as it can provide a large amount of resources for a relatively short period of time.
We still face the challenge of making the system under test as close to the production environment as possible (in all aspects – hardware, software, data, configuration). One interesting new trend is testing in production. Testing in production is not new by itself; what is new is that it is advocated as a preferable and safe (if done properly) way of doing performance testing. As systems become so huge and complex that it is extremely difficult to reproduce them in a test setup, people are more willing to accept the issues and risks of using the production site for testing.
If we create a test system, the main challenge is to make it as close to the production system as possible. If we can’t replicate it, the challenge becomes projecting results onto the production environment. While that is still an option – there are mathematical models that allow making such a projection in many cases – the further the test system is from the production system, the riskier and less reliable such projections become.
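As one simple illustration of such a projection (the model choice here is my own; the post doesn’t prescribe any specific one), a basic M/M/1 queueing model can extrapolate response times from a measured service time as load approaches saturation:

```python
def mm1_response_time(service_time_s, arrival_rate_per_s):
    """Project mean response time with an M/M/1 queueing model.

    R = S / (1 - U), where utilization U = arrival_rate * service_time.
    """
    utilization = arrival_rate_per_s * service_time_s
    if utilization >= 1.0:
        raise ValueError("system saturated: utilization >= 100%")
    return service_time_s / (1.0 - utilization)

# Hypothetical numbers: 0.2 s service time measured on a small test
# system; project response time as the arrival rate grows.
for rate in (1, 2, 3, 4, 4.5):
    print(f"{rate} req/s -> {mm1_response_time(0.2, rate):.2f} s")
```

Even this toy model shows why projections get riskier far from the measured range: response time is nearly flat at low utilization and explodes near saturation, so small errors in the inputs produce large errors in the projection.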
There were many discussions about different deployment models. Options include traditional internal (and external) labs; cloud as ‘Infrastructure as a Service’ (IaaS), when some parts of the system, or all of it, are deployed there; and cloud as ‘Software as a Service’ (SaaS), when vendors provide load testing as a service. Each model has its advantages and disadvantages. Depending on the specific goals and the systems to test, one deployment model may be preferred over another.
For example, to see the effect of a performance improvement (performance optimization), an isolated lab environment may be the better option, as it exposes even small variations introduced by a change. To load test the whole production environment end-to-end, just to make sure that the system will handle the load without any major issue, testing from the cloud or via a service may be more appropriate. To create a production-like test environment without going bankrupt, moving everything to the cloud for periodic performance testing may be a solution.
For comprehensive performance testing, you probably need to use several approaches – for example, lab testing (for performance optimization, to get reproducible results) and distributed, realistic outside testing (to check real-life issues you can’t simulate in the lab). Limiting yourself to one approach limits the risks you will mitigate. This is important to consider when selecting tools: if a tool doesn’t support all the approaches you need, you may end up using multiple tools, probably introducing noticeable overhead.
Scale may also be a serious consideration. When you have only a few users to simulate, it is usually not a problem. The more users you need to simulate, the more important the right tool becomes. Tools differ drastically in how many resources they need per simulated user and in how well they handle large volumes of information. This may vary significantly even for the same tool, depending on the protocol used and the specifics of your script. As soon as you need to simulate thousands of users, it may become a major problem. For very large numbers of users, some automation – such as automatic creation of a specified number of load generators across several clouds – may be very handy. Cloud services may be another option here.
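The per-user resource cost is easy to see in a minimal sketch. The example below (my own illustration; the request is stubbed out with a sleep rather than a real protocol) simulates each virtual user as a coroutine, which is why a single generator can host thousands of users where a thread- or process-per-user design would exhaust resources far sooner:

```python
import asyncio
import time

async def virtual_user(user_id, think_time_s, iterations):
    """One simulated user: a stubbed 'request' (a sleep) per iteration."""
    for _ in range(iterations):
        await asyncio.sleep(think_time_s)  # stand-in for a real request

async def run_load(num_users, think_time_s=0.01, iterations=3):
    """Run num_users concurrent virtual users on a single event loop."""
    start = time.monotonic()
    await asyncio.gather(*(virtual_user(i, think_time_s, iterations)
                           for i in range(num_users)))
    return time.monotonic() - start

# 1000 coroutine-based users fit comfortably in one process; real tools
# differ mainly in how expensive each virtual user is to run.
elapsed = asyncio.run(run_load(1000))
print(f"1000 users finished in {elapsed:.2f} s")
```

With a real protocol in place of the sleep, the per-user cost (memory, sockets, CPU for parsing responses) is exactly what varies so drastically between tools and protocols, and what drives the need for many load generators at large scale.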