While many companies promote performance testing in the cloud (or from the cloud), it makes sense only for certain types of performance testing. For example, it works fine if we want to know how many users the system supports, whether it would crash under a load of X users, or how many servers we need to support Y users, and are not too concerned with exact numbers or variability of results (or even want to see some real-life variability).
Even in this case, it assumes that we don’t introduce any bottlenecks by using the cloud (for example, saturating the network bandwidth between the load generators and the system under test) and that we leave it to the cloud provider to ensure our test doesn’t impact other cloud tenants (which may not be trivial in the case of PaaS or SaaS).
However, this approach doesn’t work for performance optimization, where we make a change in the system and want to see how it impacts performance. Testing in a cloud shared with other tenants has intrinsic variability of results, since we don’t control other activities in the cloud and in most cases don’t know the exact hardware configuration. For example, if the system scales out by automatically creating an additional application instance, the new instance may end up outside the network segment where the other servers are. The effects may be even more subtle in the case of PaaS and SaaS.
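One simple way to see whether an environment is stable enough for optimization work is to repeat the same test several times and compare the run-to-run spread. The sketch below (hypothetical numbers, only for illustration) computes the coefficient of variation – standard deviation relative to the mean – for a set of response-time measurements; a change smaller than the environment’s own variability cannot be attributed to your optimization:

```python
import statistics

def coefficient_of_variation(samples):
    """Relative run-to-run spread: sample stdev divided by the mean."""
    return statistics.stdev(samples) / statistics.mean(samples)

# Hypothetical median response times (ms) from five identical test runs.
lab_runs = [212, 208, 215, 210, 209]     # isolated lab: tight spread
cloud_runs = [205, 260, 198, 240, 310]   # shared cloud: noisy neighbors

for name, runs in (("lab", lab_runs), ("cloud", cloud_runs)):
    print(f"{name}: CV = {coefficient_of_variation(runs):.1%}")
```

With numbers like these, the lab environment shows a spread of a couple of percent, while the shared cloud easily shows 15–20% – meaning a 10% optimization would be invisible in the cloud but clearly measurable in the lab.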
So when we talk about performance optimization, we still need an isolated lab. And if the target environment for the system is a cloud, it should be an isolated private cloud with all the hardware and software infrastructure of the target cloud. We also need monitoring access to the underlying hardware to see how the system maps onto hardware resources and whether it works as expected (for example, when testing scaling out or evaluating impacts to/from other tenants – which should probably be one more kind of performance testing to do). Real-world network emulators should be used to make sure that performance testing is representative of how the system would be used in production – otherwise we fail to take into account factors such as network latency, bandwidth, and jitter. This means we need a way to plug a network emulation appliance in properly.
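If a dedicated hardware appliance isn’t available, a software approximation of such conditions can be made with the Linux netem queueing discipline. The sketch below only builds and prints the `tc` command (the interface name `eth0` and the delay/loss figures are assumptions for illustration); actually applying it requires root on a Linux load-generator host:

```python
import subprocess

# Hypothetical interface on the load-generator host; adjust as needed.
IFACE = "eth0"

# Emulate a WAN link: ~100 ms latency with 20 ms jitter and 1% packet loss,
# via the Linux netem queueing discipline (requires root to apply).
netem_cmd = [
    "tc", "qdisc", "add", "dev", IFACE, "root", "netem",
    "delay", "100ms", "20ms", "loss", "1%",
]
print(" ".join(netem_cmd))  # inspect the command before running it
# subprocess.run(netem_cmd, check=True)  # uncomment on a Linux host with root
```

Note that netem shapes traffic on one host’s interface; a proper appliance sits between the load generators and the system under test, which is exactly why the lab needs a place to plug it in.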
So if we need to optimize cloud software, we still need a lab – but the lab must be more sophisticated, emulating both the cloud environment and real-world network conditions. The ultimate example of such a lab is probably the one Microsoft created for testing IE.
So, factoring the cloud into performance testing, we have two alternatives: coarse performance testing in/from the cloud, with inherent variability (and perhaps some savings on hardware and configuration costs), or granular performance testing and optimization in a sophisticated isolated lab emulating the cloud, avoiding that variability at probably higher hardware and configuration costs.