Performance analysis (aka Performance Management) aims to continuously:
Just use this simple heuristic:
if it's slow and the computer goes WOOOOOOOOSH, the bottleneck is the CPU; if it's slow and the mouse stops moving, the bottleneck is the memory.
For one method, see Performance - Rapid Bottleneck Identification (RBI)
A computer system is a series of bottlenecks in motion. In order to achieve high levels of performance, you have to be able to identify and resolve bottlenecks—which are not isolated, but are inter-related and constantly shifting.
There are numerous elements to consider when dealing with performance; they can be divided into the following categories:
The goal of performance tuning is to optimize performance by eliminating bottlenecks. The first step is to identify a bottleneck; the strategy is then to eliminate it and move on to the next one, repeating until you are satisfied with the performance.
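The identify-eliminate-repeat loop starts with measurement. As a minimal sketch (the workload and its deliberate bottleneck are hypothetical), Python's built-in cProfile can rank functions by cumulative time so the current bottleneck stands out:

```python
import cProfile
import io
import pstats

def slow_join(n):
    # Deliberate bottleneck: repeated string concatenation is O(n^2).
    s = ""
    for i in range(n):
        s += str(i)
    return s

def workload():
    return slow_join(5000)

# Profile the workload to find where the time actually goes.
profiler = cProfile.Profile()
profiler.enable()
workload()
profiler.disable()

buf = io.StringIO()
stats = pstats.Stats(profiler, stream=buf).sort_stats("cumulative")
stats.print_stats(5)  # the top entries point at the current bottleneck
print(buf.getvalue())
```

Once the top entry is fixed (here, by building the string with str.join), re-profiling reveals the next bottleneck, and the cycle repeats.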
Because there are so many variables, monitoring and tuning performance isn't as easy as defining it.
The two primary issues that you will meet are throughput and concurrency:
A majority of all system and application performance issues result from limitations in throughput.
If testers check only throughput, concurrency issues cannot be discovered until the application is in production.
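A throughput test that drives a system single-threaded can pass while a concurrency defect stays hidden. The following sketch (the counter class is hypothetical) shows a shared read-modify-write that is correct in isolation but racy under threads, alongside the lock-protected version:

```python
import threading

class Counter:
    """Shared state updated by many threads (hypothetical example)."""
    def __init__(self):
        self.value = 0
        self.lock = threading.Lock()

    def unsafe_increment(self):
        # Read-modify-write without synchronization: a latent race
        # that only concurrent load can expose.
        current = self.value
        self.value = current + 1

    def safe_increment(self):
        with self.lock:
            self.value += 1

def run(increment, threads=8, per_thread=10_000):
    c = Counter()
    workers = [
        threading.Thread(target=lambda: [increment(c) for _ in range(per_thread)])
        for _ in range(threads)
    ]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    return c.value

expected = 8 * 10_000
print("unsafe:", run(Counter.unsafe_increment), "expected:", expected)
print("safe:  ", run(Counter.safe_increment), "expected:", expected)
```

The unsafe run may or may not come up short on any given execution, which is exactly why such defects slip past throughput-only testing: the locked version is deterministic, the unlocked one fails intermittently and only under concurrency.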
Performance management is closely linked to capacity planning. The difference is that:
While confronting the myriad of elements that make up a production system, you also have to balance the goals and priorities of two viewpoints—that of the user and that of the administrator. Although it often seems like users and administrators have conflicting views, both want the same things from a system.
They want:
It's really a matter of perspective; users and administrators simply have slightly different ways of viewing system goals and interpreting performance.
For most users, performance equates to speed—the perceived response time of the system they're using. When they activate a hyperlink and the requested page is retrieved and displayed quickly—typically in less than 10 seconds—their perception of performance is favorable. (It's interesting to note that it's not uncommon for a user to think that a page takes longer to retrieve and display than it actually does.)
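Because users judge performance by perceived response time, it is worth measuring response time directly and looking at percentiles rather than the average: the slowest requests shape perception far more than the mean. A minimal sketch, with a stand-in function in place of a real page request:

```python
import time

def render_page():
    # Stand-in for retrieving and displaying a page (hypothetical work).
    time.sleep(0.001)

# Collect response-time samples the way a user would experience them.
samples = []
for _ in range(50):
    start = time.perf_counter()
    render_page()
    samples.append(time.perf_counter() - start)

samples.sort()
p50 = samples[len(samples) // 2]        # typical experience
p95 = samples[int(len(samples) * 0.95)] # near-worst experience
print(f"median {p50 * 1000:.1f} ms, p95 {p95 * 1000:.1f} ms")
```

A system whose median is fast but whose 95th percentile exceeds the roughly 10-second threshold mentioned above will still feel slow to a noticeable fraction of users.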
From a user's perspective, the definition of performance and the primary goal of performance tuning is the same—make it fast. This speed-based viewpoint encompasses the following:
From an administrator's viewpoint, performance is a measure of how system resources are utilized by all the running programs. The scope of resource usage ranges from the lowest level program (drivers, for example) up to and including the applications that are hosted on a server.
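Unlike the user's perception, this resource-usage view can be read straight from operating-system counters. A minimal stdlib sketch (Unix-only, via Python's resource module; the busy-work loop is just there to make the counters non-trivial):

```python
import os
import resource

# Burn a little CPU so the counters show something.
total = sum(i * i for i in range(200_000))

# Per-process resource accounting, as the OS sees it.
usage = resource.getrusage(resource.RUSAGE_SELF)
print(f"user CPU:   {usage.ru_utime:.3f} s")
print(f"system CPU: {usage.ru_stime:.3f} s")
print(f"max RSS:    {usage.ru_maxrss}")  # kilobytes on Linux, bytes on macOS
```

In practice an administrator would collect such figures system-wide and over time (with tools like sar, vmstat, or a monitoring agent) rather than per process, but the principle is the same: utilization is quantified, not perceived.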
In terms of performance tuning:
An indirect goal is eliminating or reducing Help Desk calls, a goal that is usually achieved by meeting the direct goals.
Unlike the user, who deals primarily with perception, the administrator can quantify resource utilization through:
You can use performance data to:
Regardless of the perspective you take, you have to approach tuning systematically and employ a methodology for implementing and testing system configuration changes.
The business perspective also plays a significant role in performance management. In this context, someone has to determine how much hardware is required, how to make provisions for peak loads, how to balance out spikes with low overall load, and how to determine or satisfy service-level agreements.
It's often necessary to make price and performance trade-offs—it may be too expensive to have enough servers for maintaining low processor utilization at all times, so low average utilization with spikes becomes acceptable.
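The cost of that trade-off can be made concrete with simple arithmetic. In this sketch (the load samples and the 60% utilization target are hypothetical), sizing for the peak requires twice the servers that sizing for the average does:

```python
import math

# Hypothetical hourly load samples, expressed as multiples of one
# server's capacity; note the single short spike.
load = [0.3, 0.4, 0.35, 0.5, 2.4, 0.45, 0.3, 0.6]

target_util = 0.6  # keep servers at or below 60% utilization

# Sizing for the peak keeps utilization low at all times, but is expensive.
servers_for_peak = math.ceil(max(load) / target_util)

# Sizing for the average is cheaper, at the cost of brief overload spikes.
servers_for_avg = math.ceil((sum(load) / len(load)) / target_util)

print("servers to stay below target even at peak:", servers_for_peak)
print("servers sized for average load (spikes tolerated):", servers_for_avg)
```

Whether the extra servers are worth buying is exactly the price/performance decision described above, and it depends on how costly the spike-induced slowdowns are to the business.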
Ensure there are no errors in the application logs