User Concurrency

Performance testing terminology is not well defined, and one of the most ambiguous terms is user concurrency. Re-reading Load Testing: Concurrent Users versus Simultaneous Users by Scott Moore (@loadtester) and the LoadRunner Concurrency video by Mark Tomlinson (@mtomlins) inspired me to post this comment. Here is what I wrote (and still believe) in my old CMG paper about performance requirements in 2007 (the latest version of this paper was presented again at CMG’12):

Concurrency is the number of simultaneous users or threads. It is important too: connected but inactive users still hold some resources. For example, the requirement may be to support up to 300 active users.

When we speak about the number of users, the terminology is somewhat vague. Usually three metrics are used:

• Total or named users: all registered or potential users. That is a metric of the data the system works with. It also indicates the upper potential limit of concurrency.

• Active or concurrent users: users logged in at a specific moment of time. That one is the real measure of concurrency in the sense it is used here.

• Really concurrent: users actually running requests at the same time. While that metric looks appealing and is used quite often, it is almost impossible to measure and rather confusing: the number of “really concurrent” requests depends on the processing time of those requests. For example, let’s assume that we got a requirement to support up to 20 “concurrent” users. If one request takes 10 seconds, 20 “concurrent” requests mean a throughput of 120 requests per minute. But here we get an absurd situation: if we improve processing time from 10 seconds to 1 second while keeping the same throughput, we miss our requirement, because we now have only 2 “concurrent” users. To support 20 “concurrent” users with a 1-second response time, we really need to increase throughput tenfold, to 1,200 requests per minute.
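The arithmetic in the last bullet is an instance of Little's Law (number in system = throughput × response time). A minimal sketch, assuming nothing beyond the numbers quoted above; the function name is illustrative, not from any tool:

```python
def concurrent_users(throughput_per_min: float, response_time_sec: float) -> float:
    """Average number of requests in the system, by Little's Law:
    L = lambda * W, with throughput converted to requests per second."""
    throughput_per_sec = throughput_per_min / 60.0
    return throughput_per_sec * response_time_sec

# 120 requests/min at 10 s per request -> 20 "really concurrent" users
print(concurrent_users(120, 10))   # 20.0

# Same 120 requests/min at 1 s per request -> only 2 "really concurrent" users
print(concurrent_users(120, 1))    # 2.0

# To keep 20 "really concurrent" users at 1 s, throughput must grow tenfold
print(concurrent_users(1200, 1))   # 20.0
```

This makes the absurdity concrete: with “really concurrent” users as the target, a tenfold improvement in response time reads as a failure to meet the requirement unless load also grows tenfold.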

It is important to understand what users you are speaking about: the difference between each of these three metrics for some systems may be drastic. Of course, it heavily depends on the nature of the system.

The number of online users (the number of parallel sessions) looks like the best metric for concurrency (complementing throughput and response time requirements).

To summarize my comment, I believe that the number of “really concurrent” users is not an appropriate input metric for performance engineering and performance testing. It perhaps may be an output metric characterizing the system’s load if we find a way to measure it (in queuing theory terminology, it is the number of users in the system).
