When I created my site as a collection of performance-related links and documents in 2004, I grouped the links somewhat arbitrarily, just to avoid “analysis paralysis”, hoping to get back soon and polish as needed. Interestingly, I haven’t changed the grouping much in these seven years (many things have definitely changed, and many changes are long overdue, but I wasn’t able to improve much on the main grouping of information). Whatever links I added mainly fit into one (or a few) of the existing categories. And just now I realized that we have a new information category – Application Performance Management – which doesn’t fit into any existing one. I had a category for APM tools from the beginning – they have been around for a while – but not for general APM information (something beyond discussing tool features). So I finally put together a list of great information sources into a new group, Application Performance Management:
Application Performance, Scalability, and Architecture blog from Dynatrace
Seriti Consulting Blog, the Web Operations and Management Specialists, by Stephen Thair
Many of them have existed for a while, but it looks like quantity has finally turned into quality and we see a new discipline emerging (rather than just a marketing term to promote a particular kind of tool). It is definitely related to the fact that with new technologies, such as virtualization and cloud computing, traditional resource monitoring is not enough anymore and there is a need to monitor at the application and service levels. Some of the blogs mentioned above are from tool vendors, but they provide great content far beyond discussing the tools.
See my Performance Requirements – Do we need a better word? post on Application Performance Engineering Hub
New Approaches to Performance Testing: An Open Discussion on Plans, Experiments and Points In-Between
Join your peers to discuss the dynamics of performance testing, planning, and experimentation in today’s fast-paced, impatient world. At the Computer Measurement Group conference (CMG’11) at the Gaylord National in the Washington DC area, on Wednesday, December 7th, at 6:30pm in the Annapolis 1/2 room, we will have an open Birds of a Feather session, “New Approaches to Performance Testing: An Open Discussion on Plans, Experiments and Points In-Between”.
Everybody is welcome!
The idea to discuss the topic was suggested by James Pulley during a LinkedIn discussion in the LoadRunner group. It turned out that there are quite different views on the role of experimentation in performance testing.
We hope to see a lot of CMG attendees as well as many performance testers from the area. Please e-mail me at apodelko at yahoo dot com if you have any questions or are going to attend – just so we have an idea of how many people will be there.
Bernd Harzog’s post Why is Application Performance Management so Screwed Up? started a lot of discussion on the Internet. The post is a very good list of the existing issues you may face when you try to use APM tools. I’d add one more – overhead. At least for the first generation, the claim that you could use APM in production held only if you did very selective monitoring.
My view is that the first generation of APM tools, so well described by Bernd, was very immature. Not that something was inherently wrong with APM in general – what was really wrong was the drastic contrast between what the tools could actually do and the marketing promises of the tool vendors. The vendors talked more about the APM vision and how APM tools were supposed to work – but not about what exactly these tools could do. Which, in the best case, you figured out after spending a few days evaluating the product.
If you check the Gartner Magic Quadrant for Application Performance Monitoring or my list of tools, it is clear that the market is very crowded and not well defined. There are no good criteria by which to compare tools, and different tools may actually do quite different things, although that may be difficult to understand from reading about them on vendors’ sites.
However, I’d say that we are now getting a second generation of APM tools, which are much closer to the APM promise for some technologies. I don’t want to list names here and separate the “first” and “second” generations. I’d guess that some “first” generation tools might advance to the “second” generation if they keep progressing – but, as I said, it is difficult to say without actually evaluating the tools. In any case, I am hearing a lot of stories of people successfully implementing APM for system X using tool Y without many problems.
Still, there is no product that will do APM across all platforms and systems if you have a full zoo of different technologies, some of which are older than most of your IT employees (as many large corporations do). And don’t believe anybody who tells you they can do that. Still, it looks like you can now do it for more systems with fewer problems – and start reaping the benefits of APM. Actually, I don’t see any alternative to APM in the long run – although that is a topic for a separate post. But be aware of all the points mentioned in Bernd’s post – and check that the product you are going to use does what you need in the way you want.
P.S. Just before posting I noticed another Bernd Harzog post, where he shares his view of next-generation APM products.
Oracle Application Testing Suite 9.3 was released some time ago. It is available for download (subject to the OTN License Agreement). Some new features and updates in this release are described in the press release.
At last I have updated my site a little. It is, of course, long overdue for a redesign – but, with everything going on, it doesn’t look like I’ll get to that soon. Meanwhile, I decided to get more mileage out of the update and list the links I added to the site here separately. These links may not be new; they are just links I stumbled upon relatively recently and decided to add to my collection of performance links.
Performance By Design by Mark Friedman
Book of Speed by Stoyan Stefanov
Thinking Clearly about Performance by Cary Millsap
Performance Engineering Parables by Chris Papineau
Seriti Consulting Blog, the Web Operations and Management Specialists, by Stephen Thair
High performance Web Sites site and blog by Steve Souders
Performance Testing portal
Testing Experience Magazine – its publisher, Díaz &amp; Hilterscheid, organizes multiple conferences across Europe
De-Mystifying Software Performance Optimization by Paul Del Vecchio
1202 Performance – Performance by Design, Andrew Lee’s site
New York Web Performance Group and links to other Web Performance Groups
WPO Links (thanks to Alon’s write-up):
YSlow analyzes web pages and suggests ways to improve their performance
Smush.it “smushes” images to make them smaller without losing any noticeable quality.
Show Slow is an open source tool that helps monitor various website performance metrics over time. It captures the results of YSlow, Page Speed and dynaTrace AJAX Edition rankings and graphs them, to help you understand how various changes to your site affect its performance
Mobile Perf Bookmarklet – According to Steve Souders, “Web development on mobile devices is especially challenging. The debuggers and profilers we use on the desktop aren’t available. Bookmarklets are a good alternative.”
Page Speed is an open-source project started at Google to help developers optimize their web pages by applying web performance best practices
HTTParchive.org – a permanent repository of web performance information, such as the size of pages, failed requests, and technologies utilized
Whichloadsfaster.com – pages compete head-to-head in your browser to see which loads fastest
Webpagetest.org tests website performance
Mod_PageSpeed is an open-source Apache module that automatically optimizes web pages and resources on them
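Tools like YSlow and Page Speed apply rules such as “minimize HTTP requests”, since every external resource a page references costs a round trip. As a toy illustration of that idea (real tools inspect actual network traffic; the page here is made up), a minimal sketch using only Python’s standard library:

```python
from html.parser import HTMLParser

class ResourceCounter(HTMLParser):
    """Collects URLs of external resources referenced by a page."""
    def __init__(self):
        super().__init__()
        self.resources = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        # Tags that trigger an extra HTTP request for their src/href
        if tag in ("img", "script", "iframe") and attrs.get("src"):
            self.resources.append(attrs["src"])
        elif tag == "link" and attrs.get("href"):
            self.resources.append(attrs["href"])

# Illustrative page; a real check would fetch live HTML
page = """
<html><head>
  <link href="site.css" rel="stylesheet">
  <script src="app.js"></script>
</head><body>
  <img src="logo.png"><img src="hero.jpg">
</body></html>
"""

counter = ResourceCounter()
counter.feed(page)
print(f"{len(counter.resources)} extra requests: {counter.resources}")
```

Each of the four resources found here would be a candidate for concatenation, spriting, or caching – the kind of advice the tools above automate.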
SOASTA launched CloudTest Lite – a free edition of their performance testing solution. Basically, they are giving it away free for up to 100 users. A serious move for sure; it should heat up the load testing tool market. It may indeed work – I guess they don’t have many paying customers in that range anyway; CloudTest’s sweet spot seems to be when you need a very large number of users. I am very interested to see how it turns out.
Several rosy reviews were posted, for example, CloudTest Lite – A Game Changer in the Performance Tool Market by Scott Barber and SOASTA CloudTest Lite Hands-On by Bernard Golden.
As I already mentioned, it is indeed pretty interesting. However, I’d say that we need to add some skepticism to be more realistic.
First, it is not the first or an absolutely unique move among load testing tools. I recall a few somewhat similar moves before that then quietly disappeared. Granted, I don’t remember what the limitations were (maybe a little more restrictive), and the companies were not market leaders. Moreover, there is a list of 50 open source load testing tools on opensourcetesting.com, and some, like JMeter and OpenSTA, are pretty mature. Yes, open source in the load testing area has not been as successful as in other areas – analysis, in particular, is weak in most of these tools (if present at all).
Second, releasing is just the first step. The challenge for SOASTA will be supporting a large number of non-paying users (although, of course, for a promising start-up the number of customers may be important in itself). The community can perhaps help with “how-to” questions, but implementing, say, enhancement requests is up to the SOASTA team. And the number of such requests may be pretty high as people start to use it with different applications.
For example, it looks like you can’t specify transactions during recording in CloudTest at the moment. So what am I supposed to do with a script containing a few hundred identical requests (AJAX-style, differing only in incomprehensible HTTP body content)? Track delays in the script and try to correlate them with the recording steps? Not exactly my idea of quick and easy.
Scott writes in his review that “it is free from now until the sun explodes”. Hmm… I’d rather hear that from the SOASTA team. And even if the SOASTA team is completely devoted to this edition, nobody can guarantee that SOASTA won’t be acquired – and who knows what an acquirer would decide to do with the freemium edition…
Granted, Scott does not get excited easily. The last time, as far as I remember, that Scott got excited about a load testing tool was when Microsoft released their tool as part of Visual Studio back in 2005. See, for example, the discussions around my old posts VisualStudio 2005 and Load Testing and Scripting Language in Performance Tools. Well, Microsoft didn’t live up to its promises, and I haven’t heard about their load testing tools for a while (my understanding is that it is not dead, but it doesn’t play any noticeable role). But who knew then that Microsoft was losing its grip?
Anyway, it looks like we have several interesting developments in the load testing tool market that may benefit the community. CloudTest and its Lite version are definitely on the list. LoadRunner Ajax TruClient may be introducing a new paradigm in load testing (or promoting it, if you follow the e-Valid blog). Oracle Application Testing Suite (formerly Empirix) is practically a new product and is getting traction [at least in the Oracle universe].
In a recent interview, Steve Souders said: “For years when developers started focusing on the performance of their websites, they would start on the back end, optimizing C++ code or database queries. Then we discovered that about 10% or 20% of the overall page load time was spent on the back end. So if you cut that in half, you only improve things 5%, maybe 10%. In many cases, you can reduce the back end time to zero and most users won’t notice. So really, improvement comes from the time spent on the front end, on the network transferring resources and in the browser pulling in those resources.”
Well, if we saw that “about 10% or 20% of the overall page load time was spent on the back end” under maximal load, this statement would be a great example of applying performance engineering to problem analysis. Finding where the time is spent is definitely the first thing to do when investigating any performance issue. And, considering the popularity of WPO, it is probably the case for most modern websites with rich web interfaces and no transactional processing behind the scenes. But it is usually not the case for sophisticated business applications working with transactional data (and I even doubt that it holds for the moment you click the “confirm order” button when you buy something on the Internet).
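Souders’ arithmetic is essentially Amdahl’s law applied to page load time: if the back end accounts for only a fraction f of the total, even an infinite back-end speedup caps the overall improvement at f. A quick sketch of the calculation (the numbers are illustrative):

```python
def overall_improvement(back_end_fraction, back_end_speedup):
    """Relative reduction in total page load time when only the
    back-end portion is sped up (Amdahl's-law-style reasoning)."""
    remaining = (1 - back_end_fraction) + back_end_fraction / back_end_speedup
    return 1 - remaining

# Back end is 20% of load time and we make it twice as fast:
print(round(overall_improvement(0.20, 2), 2))     # only a 10% overall win
# Back end reduced "to zero" (huge speedup) still caps at 20%:
print(round(overall_improvement(0.20, 1e9), 2))
```

Which is exactly why, if the 10–20% figure holds for your system, front-end work pays off much faster than back-end tuning – and why the figure itself is worth verifying under load first.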
For the back end you have multi-user load, and back-end performance problems observable with a single user are somewhat trivial (see above about profilers, etc.). Many performance issues can be observed only under [heavy] load. So you get one more level of sophistication on top: you need to simulate load, and you need to find a way to debug / profile under load (most tools bring too much overhead to be used in this situation; moreover, issues may be timing-related, and attempts to look inside may change the behavior of the system). Plus you get system resource limitations on top of multi-user software problems (such as synchronization issues, running out of software objects, etc.), introducing non-linear effects (this is where you get to capacity management, queuing theory, etc.). All this means there is a high probability that back-end performance will degrade drastically with load (if not properly tested and configured), while the time spent on the client side for rendering and client-side processing remains roughly the same in most cases (although it is affected by server response timing). This was the primary reason why almost all attention used to be paid to the back end.
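The non-linear degradation mentioned above can be illustrated with the classic M/M/1 queuing formula R = S / (1 − U), where S is service time and U is utilization. This is a deliberately simplified single-queue model (real systems have many interacting queues), but it shows why response time explodes as load approaches capacity:

```python
def response_time(service_time, utilization):
    """M/M/1 response time: R = S / (1 - U).
    Valid only while utilization stays below 100%."""
    if not 0 <= utilization < 1:
        raise ValueError("utilization must be in [0, 1)")
    return service_time / (1 - utilization)

service = 0.1  # seconds of service time per request (illustrative)
for u in (0.5, 0.8, 0.9, 0.95):
    print(f"utilization {u:.0%}: response time {response_time(service, u):.2f}s")
```

At 50% utilization the response time merely doubles, but at 90% it is ten times the service time – the kind of behavior a single-user test will never reveal.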
It looks like we have a completely new area of performance engineering – Web Performance Optimization (WPO) – with its own terminology, approaches, experts, Web Performance meetup groups, the Velocity conference, and, perhaps, even new load testing tools like CloudTest (my impression is that it is most beneficial for WPO projects). WPO has actually been around for a while (the first Velocity conference appears to have been in 2008), but only recently, after attending a couple of New York Web Perf events, did I realize that it has become a separate discipline. I guess the appearance of this new movement concentrated on web performance means that we have a pretty mature industry of very scalable web sites delivering sophisticated content.
Well, the history of performance engineering looks to me like a series of waves (although my knowledge of its history is limited, especially for the period before I got involved). The Computer Measurement Group (CMG) was organized in 1975 as an organization of performance analysts and capacity planners. Dr. Connie Smith’s book “Performance Engineering of Software Systems”, published in 1990, created the Software Performance Engineering movement.
Distributed systems brought a new wave of performance engineering based around load testing – perhaps because there was not much instrumentation available, and the only way to make sure the system performed was to apply load. It looks like the first version of LoadRunner shipped in 1989. But when I first got involved in load testing in 1997, with SQL Bench (SilkPerformer’s ancestor), it was still far from what we expect from load testing tools now. The latest wave was probably Application Performance Management, with a large array of tools promising application instrumentation (visibility into what is going on inside applications).
It is interesting that all these overlapping areas never completely merged. This is probably why we have such discrepancies in performance terminology: each group often started its terminology from scratch (while others still used the old terminology).
And now we have Web Performance Optimization (the term appears to have been coined by Steve Souders). While WPO looks like a separate discipline, I’d rather place it as a part of overall performance engineering. You still have a back end in most cases – and while the back end is mentioned in WPO presentations, it sometimes sounds as if the authors consider it trivial. Well, it is not, even for most web sites, not to mention large banks and insurance companies with many tiers of sophisticated systems behind them – and for end-user performance you need to consider it all together. Downplaying the “back end” is probably as wrong as downplaying the “front end” (of which, working mostly with business applications, I am definitely guilty – historically, load testing concentrated on server performance). The importance of each component depends on the system. In my opinion, performance principles are much more generic than the details of specific technologies. Most performance engineering experience can be applied to any technology (although, of course, you still need to learn something about each new technology too).
So, while it is very promising and exciting that we have a new wave of people dedicated to performance, it is a little sad that each wave often seems to start from scratch, inventing new terminology and ignoring what existed before. It would be better if we brought all these waves together, so that each could enrich the others with the area of performance engineering it specializes in. Of course, there is some interaction – you do need to work together in some way to ensure systems’ performance – but it still looks like every wave tends to stay somewhat separate, cultivating its own terminology, approaches, and events.