The Application Performance Index, commonly known as Apdex, is an open standard of the Apdex Alliance. The members of the alliance see a need to have a clear and consistent way to report on application performance, something that is sorely lacking in the industry today. Think about it for a minute. Do you know of a standard method to measure and display application performance? Application performance monitoring is certainly widely available today, and is considered an entire market segment by industry analysts. However, if you’ve done any evaluations at all of application performance monitoring solutions you know there’s no standard way of reporting the data. The reporting methods are highly disparate, leaving you with the responsibility of determining how you want to have application performance reported, and then searching for a compatible solution. This is, of course, tedious and time consuming, and it’s what the Apdex Alliance hopes to change by promoting a simple, standard method for reporting on overall application performance.
Before digging into what the Apdex Alliance is proposing, let's take a quick tour of the two basic schools of thought in application performance monitoring. In one camp, we have the application-centric view, which focuses on how the application itself is behaving. This view considers factors local to the application, like resource usage and number of connections, as being most important to monitor. The thinking here is that if application resource usage stays within the given specifications for that application, the application is performing acceptably. From an application designer's point of view this is an excellent approach, but what about the users? Where are they located, and how do they access the application? Is the application single tier, or can a single user transaction spawn connections to multiple applications, making the end-user experience of the "application" much fuzzier?
This of course takes us to the second school of thought, where the perspective shifts to the end user. Many would argue that this is a more appropriate approach, but again, the real question is exactly what to measure to give a clear indication of the user's experience. Application performance, from a user's perspective, involves a number of complex factors, all of which mean nothing to the user except for how long it takes to fulfill the transaction. The primary difference between taking a user-centric vs. an application-centric view is that the user-centric view factors in not only the application's performance, but also the performance of the network connecting the users to the application(s). This is where the increased complexity enters.
We recently covered the topic of application response time (the user-centric view of application performance) in detail in another LMT blog post, so you may want to jump there for more detail. Suffice it to say that making measurements from the user's perspective significantly increases the range of metrics to be monitored and reported on, while increasing the overall complexity and cost of obtaining meaningful and accurate data.
Enter Apdex. Given the wide range of approaches to measuring and reporting on application response time, the Apdex Alliance set out to establish a simple reporting scheme that remains comparable across all applications, all users, all enterprises, and all monitoring solutions. The goals of this metric are as follows (taken from the Apdex specification):
- To provide a useful summary of an application’s responsiveness
- To make it easy to understand the significance of values produced by the index
- To work for all transactional applications
- To operate within a fixed range (0 to 1) and be unit-neutral
- To indicate application performance directly so that 0 is the worst performance and 1 is the best performance
- To operate in such a way that specific values of the index (e.g., 0.5) report the same user experience across any application, user group, or enterprise
- To operate in such a way that equivalent user experiences observed by different measurement and reporting tools will report the same value
The idea behind Apdex is simple: The user has a certain level of tolerance for the response of any given application. Apdex defines three levels of user satisfaction: satisfied, tolerating and frustrated, and then specifies the methodology for determining the overall user satisfaction level for each application being monitored. The index is sample-based and therefore highly statistical; this will become evident as the methodology unfolds.
To determine the overall user satisfaction level using Apdex, the notion of a “task” has a specific connotation. Apdex defines a task to be “the time measured from the moment the user enters an application query, command, function, etc. that requires a server response, to the moment the user receives the response so they can proceed with the application. This is often called the ‘user wait time’ or ‘application response time’.”
In Apdex, each application is viewed as a series of tasks, with each individual task measured as described. This measured time (remember, this is actual network data recorded from the user's perspective) is compared against a threshold value "T", which the specification sets to 4 seconds by default but which can be adjusted by the user (and sometimes per individual application, depending on the actual implementation of Apdex), and a second value "F", which is fixed at 4T. Apdex then defines three possible outcomes per task:
- Satisfied: response time from 0 up to T
- Tolerating: response time greater than T, up to F
- Frustrated: response time greater than F
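As an illustrative sketch (not part of the specification or any vendor tool), the per-task classification can be expressed in a few lines of Python. The function name and the default T of 4 seconds follow the description above; F is derived as 4T per the spec.

```python
def classify_task(response_time_s: float, t: float = 4.0) -> str:
    """Return the Apdex outcome for one measured task time, in seconds.

    t is the satisfaction threshold T (spec default: 4 seconds);
    the frustration threshold F is fixed at 4T by the specification.
    """
    f = 4 * t
    if response_time_s <= t:
        return "satisfied"
    elif response_time_s <= f:
        return "tolerating"
    return "frustrated"

# With the default T = 4s, F = 16s: a 10-second response lands between
# T and F, so the user is counted as tolerating.
print(classify_task(10.0))  # -> tolerating
```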
Each measured task receives one of these ratings, and the overall Apdex value for the application is calculated as follows:

Apdex_T = (Satisfied count + (Tolerating count / 2)) / Total samples
As you can tell from this equation, the resulting Apdex_T will be a value between 0 and 1. The only way to get a "perfect score" of 1 is for all tasks to have an outcome of Satisfied.
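To make the calculation concrete, here is a minimal Python sketch of the Apdex formula, assuming a list of measured task times in seconds (the function name and sample data are illustrative, not from the spec):

```python
def apdex(sample_times, t: float = 4.0) -> float:
    """Compute Apdex_T = (satisfied + tolerating/2) / total samples.

    sample_times: measured task response times in seconds.
    t: the satisfaction threshold T; F is fixed at 4T per the spec.
    """
    f = 4 * t
    satisfied = sum(1 for s in sample_times if s <= t)
    tolerating = sum(1 for s in sample_times if t < s <= f)
    return (satisfied + tolerating / 2) / len(sample_times)

# Four samples with T = 4s: two satisfied, one tolerating, one frustrated.
# Apdex_T = (2 + 1/2) / 4 = 0.625
times = [1.2, 3.5, 6.0, 20.0]
print(apdex(times))  # -> 0.625
```

Note how a frustrated sample contributes nothing to the numerator but still counts in the denominator, which is why even a small fraction of frustrated users pulls the score down quickly.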
Lastly, to ensure consistency in reporting, Apdex defines a rating system for Apdex_T values that all vendors are required to use. The ratings are as follows:

- Excellent: 0.94 – 1.00
- Good: 0.85 – 0.93
- Fair: 0.70 – 0.84
- Poor: 0.50 – 0.69
- Unacceptable: 0.00 – 0.49
With an Excellent range of only 0.94 – 1.00, it becomes clear how such a simple specification can reveal some very interesting results about your overall application performance.
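A simple sketch of that mapping, assuming the rating bands published in the Apdex specification (Excellent down to 0.94, Good to 0.85, Fair to 0.70, Poor to 0.50, and Unacceptable below that); the function name is illustrative:

```python
def apdex_rating(score: float) -> str:
    """Map an Apdex_T value (0 to 1) to the spec's rating bands."""
    if score >= 0.94:
        return "Excellent"
    if score >= 0.85:
        return "Good"
    if score >= 0.70:
        return "Fair"
    if score >= 0.50:
        return "Poor"
    return "Unacceptable"

# A score of 0.625 (two satisfied, one tolerating, one frustrated out of
# four samples) rates only "Poor" -- the bands are deliberately strict.
print(apdex_rating(0.625))  # -> Poor
```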
Let’s be clear that Apdex is merely a specification. As such it’s only as good as the products that implement it. These products need to do a proper job measuring the task response times, which can require relatively complex algorithms and equipment. Although the specification does an excellent job normalizing a complex measurement in an elegant fashion, it is in some ways only the tip of the iceberg. If user satisfaction levels fall into the poor or even unacceptable ranges, you will typically need to turn to more in-depth network analysis solutions with packet-based capabilities. Packet-based network analysis solutions can break down the application response time into its individual elements so you can pinpoint the root cause of the problem.
Author Profile - Jay Botelho is the Director of Product Management at WildPackets, Inc., a leading network analysis solutions provider for networks of all sizes and topologies. Jay holds an MSEE, and is an industry veteran with over 25 years of experience in product management, product marketing, program management and complex analysis. From the first mobile computers developed by GRiD Systems to modern day network infrastructure systems, Jay has been instrumental in setting corporate direction, specifying requirements for industry-leading hardware and software products, and growing product sales through targeted product marketing.