Front End vs. Back End
Steve Souders, in his recent interview, said: "For years when developers started focusing on the performance of their websites, they would start on the back end, optimizing C++ code or database queries. Then we discovered that about 10% or 20% of the overall page load time was spent on the back end. So if you cut that in half, you only improve things 5%, maybe 10%. In many cases, you can reduce the back end time to zero and most users won’t notice. So really, improvement comes from the time spent on the front end, on the network transferring resources and in the browser pulling in those resources."
Well, assuming that "about 10% or 20% of the overall page load time was spent on the back end" holds under maximal load, this statement is a great example of applying performance engineering to problem analysis. Finding where time is spent is definitely the first thing to do when investigating any performance issue. And, considering the popularity of WPO (Web Performance Optimization), it is probably the case for most modern websites with rich web interfaces and no transactional processing behind the scenes. But it is usually not the case for sophisticated business applications working with transactional data (and I even doubt that it holds at the moment you click the "confirm order" button when you buy something on the Internet).
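Souders's arithmetic is easy to verify. A minimal sketch (the 10–20% back-end share comes from the quote above; the function name and the rest are illustrative):

```python
# Illustrative check of Souders's arithmetic: if the back end accounts
# for only a small share of total page load time, even a large back-end
# speedup yields a small overall improvement.
def overall_improvement(backend_share, backend_speedup):
    """Fraction of total page load time saved when the back-end portion
    (backend_share of the total) is reduced by backend_speedup (0..1)."""
    return backend_share * backend_speedup

# Back end is 20% of load time; cut it in half -> only 10% overall gain.
print(overall_improvement(0.20, 0.5))   # 0.1
# Even eliminating the back end entirely caps the gain at 20%.
print(overall_improvement(0.20, 1.0))   # 0.2
```

This is just Amdahl's-law-style reasoning: the overall gain is bounded by the share of the component you optimize.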
On the back end you have multi-user load, and back-end performance problems observable with a single user are relatively trivial (see above about profilers, etc.). Many performance issues can be observed only under [heavy] load, so you get one more level of sophistication on top: you need to simulate load, and you need a way to debug and profile under load (most tools bring too much overhead to be used in this situation; moreover, issues may be timing-related, and attempts to look inside may change the behavior of the system). On top of multi-user software problems (such as synchronization issues, running out of software objects, etc.), you also hit system resource limitations, which introduce non-linear effects (this is where capacity management, queuing theory, etc. come in). All of this makes it highly probable that back-end performance will degrade drastically under load (if not properly tested and configured), while the time spent on the client side for rendering and client-side processing remains about the same in most cases (although it is affected by server response timing). This was the primary reason why almost all attention used to be paid to the back end.
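The non-linear effects mentioned above can be illustrated with the simplest textbook queuing model, M/M/1, where response time is S / (1 − U) for service time S and utilization U. This sketch is not from the original text; the numbers are purely illustrative:

```python
# Non-linear back-end degradation under load, sketched with the
# textbook M/M/1 queuing formula: response_time = S / (1 - U),
# where S is the service time and U is utilization (0 <= U < 1).
def mm1_response_time(service_time, utilization):
    if not 0 <= utilization < 1:
        raise ValueError("utilization must be in [0, 1)")
    return service_time / (1.0 - utilization)

# A 100 ms back-end call stays close to 100 ms at low load, but
# response time grows without bound as utilization nears saturation.
for u in (0.1, 0.5, 0.9, 0.99):
    print(f"utilization {u:.0%}: {mm1_response_time(0.1, u) * 1000:.0f} ms")
```

Client-side rendering time has no such queue behind it, which is why it stays roughly flat while the back end can blow up under load.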