The Intersection of Performance and Functional Testing

The theme for WOPR 16 was announced: The Intersection of Performance and Functional Testing.

One of my colleagues commented on it in the following way:

The very fact of establishing that WOPR16 session, from my point of view, answers the main question about the correlation between those, IMHO, absolutely different areas: functional and performance testing. If it weren't so obvious that those areas have very limited overlap, we would see a LOT of evidence of it on the real-life internet. But it doesn't happen.

While I'm not so firm about the areas in general – both of them are testing, functional and performance – I'm rather solid that people working in those areas must have different sets of skills. I haven't seen many examples (if any at all) of people coming into performance analysis from functional testing. At the same time, all the really experienced performance analysts I've met came either from development or from admin roles (Unix admin, network admin, DBA, …). People need to be willing to go deeper into the analysis of performance issues, working with a wide set of profiling tools and very often in cooperation with development, not just report problems by making a script and running a test. That last part – preparing a script and executing a test – is just 20% of the responsibilities of a really good performance engineer.

On one side, I definitely agree with that. Once upon a time I even wrote an article highlighting the differences between functional and performance testing (it exists now in a slightly adjusted form, since the original topic didn't get much interest).

On the other side, I see some areas where synergy can be achieved. Synergy, not a merge – I still believe these are different activities handled by [usually] different people. For example:

1) Functional testers should probably measure transaction performance during their functional tests. They cover much more functionality, do it end-to-end, and are much more systematic. They could easily track one-user performance numbers and raise an alarm early in case of degradation. Yet I haven't seen it done much. Some reasons: automated functional testing isn't as widespread as it looks; functional testing tools don't have good functionality for measuring and tracking performance; the two areas are often quite separated; many functional testers still need a little bit of performance education; and multiple people often use the same environment, which makes timings noisy. (See the first sketch after this list.)

2) Adding functional / GUI / end-user scripts to load testing. This can be useful in many ways, including validating protocol-based scripts, seeing real end-user performance, covering more functionality, and catching more subtle issues related to the timing and order of downloads. The script needn't be the same one (as it was in old Empirix – although that would be nice when technically possible), but it should run smoothly inside the load test to be practical. I did it a few times with LoadRunner/WinRunner a long time ago. If somebody gives me such scripts, I don't see why not to use them (assuming they can be integrated into the load testing tool). (See the second sketch after this list.)

3) When you get to the point of testing familiar systems again and again (new builds of the same products, in different configurations and with different sets of data) – a kind of routine performance regression testing – the process becomes somewhat more similar to functional testing. And functional testing has developed some interesting methods for choosing which tests to run; some of them may be usable in routine performance testing too. (See the third sketch after this list.)
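To make the first point concrete, here is a minimal sketch of tracking one-user transaction timings inside automated functional tests. It assumes a Python-driven test suite; the baseline numbers, the tolerance, and the sample login_transaction step are hypothetical placeholders, not any particular tool's API.

```python
# A minimal sketch, assuming a Python-driven functional test suite.
# BASELINES, TOLERANCE, and login_transaction are hypothetical.
import functools
import time

# Hypothetical one-user baseline timings (seconds) from earlier runs.
BASELINES = {"login": 1.2, "search": 0.8}
TOLERANCE = 1.5  # alert if a transaction runs 50% slower than baseline

def timed_transaction(name):
    """Decorator: time a functional-test step and compare to baseline."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            result = func(*args, **kwargs)
            elapsed = time.perf_counter() - start
            baseline = BASELINES.get(name)
            if baseline is not None and elapsed > baseline * TOLERANCE:
                # A real suite would log to a dashboard or fail the build.
                print(f"PERF ALERT: {name} took {elapsed:.2f}s "
                      f"(baseline {baseline:.2f}s)")
            return result
        return wrapper
    return decorator

@timed_transaction("login")
def login_transaction():
    time.sleep(0.5)  # stands in for the real GUI- or API-driven step

if __name__ == "__main__":
    login_transaction()
```

The point of the decorator is that the performance check rides along for free: the functional test itself doesn't change, it just gets timed.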
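For the second point, here is a rough sketch of running one measured end-user probe while protocol-level virtual users generate load, using only Python's standard library. The target URL and the virtual-user count are hypothetical; a real setup (GUI virtual users in LoadRunner, or a browser script driven inside a load scenario) does this with far more fidelity.

```python
# A rough sketch, standard library only. TARGET and the vuser count
# are hypothetical; it assumes something is listening at TARGET.
import threading
import time
import urllib.request

TARGET = "http://localhost:8080/"  # hypothetical system under test

def protocol_vuser(stop_event):
    """Protocol-level virtual user: request the page in a loop."""
    while not stop_event.is_set():
        try:
            urllib.request.urlopen(TARGET, timeout=10).read()
        except OSError:
            pass  # a real harness would count and report errors

def end_user_probe():
    """One 'real user' transaction, timed while the load runs."""
    start = time.perf_counter()
    urllib.request.urlopen(TARGET, timeout=30).read()
    return time.perf_counter() - start

if __name__ == "__main__":
    stop = threading.Event()
    vusers = [threading.Thread(target=protocol_vuser, args=(stop,))
              for _ in range(10)]
    for t in vusers:
        t.start()
    time.sleep(5)  # let the load ramp up
    try:
        print(f"end-user response under load: {end_user_probe():.2f}s")
    except OSError as exc:
        print(f"probe failed (is anything listening at {TARGET}?): {exc}")
    stop.set()
    for t in vusers:
        t.join()
```

Even this crude version shows the shape of the idea: the bulk of the load is cheap protocol traffic, and a single richer script reports what an actual user would see.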
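And for the third point, a small sketch of change-based test selection borrowed from functional regression testing. The mapping of performance tests to the components they exercise is hypothetical; in practice it might come from tagging tests by hand or from coverage data.

```python
# A small sketch of change-based selection; TEST_COVERAGE is a
# hypothetical, hand-maintained mapping.
TEST_COVERAGE = {
    "checkout_load": {"cart", "payment", "db"},
    "search_load":   {"search", "db"},
    "login_soak":    {"auth", "session"},
}

def select_tests(changed_components):
    """Pick only the performance tests that touch a changed component."""
    return [test for test, covered in TEST_COVERAGE.items()
            if covered & changed_components]

if __name__ == "__main__":
    # Suppose this build only changed the payment service and the DB schema.
    print(select_tests({"payment", "db"}))
    # -> ['checkout_load', 'search_load']
```

It is the same impact-analysis reasoning functional testers use to trim a regression suite; here it decides which of the routine load tests are worth rerunning against a particular build.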
