Performance - Perf (Analysis|Management|Tuning) - bottleneck identification


About

Performance analysis (aka Performance Management) aims to continuously:

  • evaluate a server to determine whether or not it can deliver the level of performance that's required (the ability to handle a certain load of concurrent users),
  • find performance bottlenecks,
  • and eliminate them (tune the system).

Method

Simple

Just use this simple heuristic:

  • if it's slow and the computer goes WOOOOOOOOSH, the bottleneck is the CPU;
  • if it's slow and the mouse stops moving, the bottleneck is the memory (a more measurable version of this heuristic is sketched below).
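
The same guess can be made from measured utilization rather than fan noise and mouse lag. The sketch below is a minimal illustration, assuming the third-party psutil package is installed; the thresholds are arbitrary starting points, not standard values:

import psutil  # third-party package: pip install psutil

def guess_bottleneck(cpu_threshold=90.0, swap_threshold=25.0):
    """Rough guess at the dominant local bottleneck from utilization counters."""
    cpu = psutil.cpu_percent(interval=1)    # average CPU utilization over one second
    mem = psutil.virtual_memory().percent   # physical RAM in use (%)
    swap = psutil.swap_memory().percent     # swap in use (%); heavy swapping hints at memory pressure

    if cpu >= cpu_threshold:
        return f"likely CPU-bound (cpu={cpu}%)"
    if swap >= swap_threshold or mem >= 95.0:
        return f"likely memory-bound (ram={mem}%, swap={swap}%)"
    return f"no obvious CPU/memory bottleneck (cpu={cpu}%, ram={mem}%, swap={swap}%)"

if __name__ == "__main__":
    print(guess_bottleneck())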

Others

For one method, see Performance - Rapid Bottleneck Identification (RBI)

Bottlenecks

A computer system is a series of bottlenecks in motion. In order to achieve high levels of performance, you have to be able to identify and resolve bottlenecks—which are not isolated, but are inter-related and constantly shifting.

There are numerous elements to consider when dealing with performance. You can divide these elements into the following categories:

  • The performance of the resource: the hardware and the network
  • The consumers of this resource: the applications, the database, … and the operating system (as manifested through the processor, memory, and disks)

The goal of performance tuning is to optimize performance by eliminating bottlenecks. The strategy is iterative: identify a performance bottleneck, eliminate it, then identify the next one, and repeat until you are satisfied with the performance.

Because there are so many variables, monitoring and tuning performance isn't as easy as defining it.

Performance

Issues

The two primary issues that you will meet are throughput and concurrency (see the load-test sketch after this list):

  • A majority of all system and application performance issues result from limitations in throughput.
  • If the testers check only for throughput, concurrency issues cannot be discovered until the application is in production.
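
A minimal load-test sketch along these lines, assuming a hypothetical http://localhost:8080/health endpoint and illustrative request/concurrency numbers. It reports throughput, error count, and tail latency, so that a concurrency problem (rising errors or p95 latency) stays visible even when raw throughput still looks acceptable:

import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "http://localhost:8080/health"  # hypothetical endpoint: point this at your own application
REQUESTS = 200                        # total requests to send
CONCURRENCY = 20                      # simulated concurrent users

def hit(url):
    """Issue one request and return (elapsed_seconds, success)."""
    start = time.perf_counter()
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            resp.read()
            ok = resp.status == 200
    except Exception:
        ok = False
    return time.perf_counter() - start, ok

if __name__ == "__main__":
    wall_start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
        results = list(pool.map(hit, [URL] * REQUESTS))
    wall = time.perf_counter() - wall_start

    errors = sum(1 for _, ok in results if not ok)
    latencies = sorted(t for t, _ in results)
    print(f"throughput : {REQUESTS / wall:.1f} req/s")
    print(f"errors     : {errors}/{REQUESTS}")
    print(f"p95 latency: {latencies[int(len(latencies) * 0.95) - 1]:.3f} s")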

Analysis and Capacity Planning

Performance management is closely linked to capacity planning. The difference is that:

  • Performance management involves tuning the current system so that it can perform better, thereby enabling it to support more users.
  • Capacity planning, on the other hand, focuses on how many users a site can support and how to scale the site so it can support more users (see the back-of-the-envelope sketch after this list).
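
To make the link concrete: performance management produces the per-server capacity figure, and capacity planning turns it into a hardware count. A back-of-the-envelope sketch with made-up numbers:

import math

# Illustrative numbers, not measurements: replace them with values from your own load tests.
peak_concurrent_users = 5000   # expected peak load (the capacity-planning input)
users_per_server      = 400    # measured capacity of one tuned server (the performance-management output)
headroom              = 0.30   # keep 30% spare capacity for spikes

servers_needed = math.ceil(peak_concurrent_users / (users_per_server * (1 - headroom)))
print(f"servers needed at peak: {servers_needed}")  # -> 18 with the numbers above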

Perspective

While confronting the myriad of elements that make up a production system, you also have to balance the goals and priorities of two viewpoints—that of the user and that of the administrator. Although it often seems like users and administrators have conflicting views, both want the same things from a system.

They want:

  • the system to provide good performance,
  • the applications to work,
  • the site to be up all the time.

It's really a matter of perspective; users and administrators simply have slightly different ways of viewing system goals and interpreting performance.

User

For most users, performance equates to speed—the perceived response time of the system they're using. When they activate a hyperlink and the requested page is retrieved and displayed quickly—typically in less than 10 seconds—their perception of performance is favorable. (It's interesting to note that it's not uncommon for a user to think that a page takes longer to retrieve and display than it actually does.)

From a user's perspective, the definition of performance and the primary goal of performance tuning is the same—make it fast. This speed-based viewpoint encompasses the following:

  • Initialization
  • Shut down
  • Page retrieval and rendering
  • Reasonable time-outs

Administrator

From an administrator's viewpoint, performance is a measure of how system resources are utilized by all the running programs. The scope of resource usage ranges from the lowest level program (drivers, for example) up to and including the applications that are hosted on a server.

In terms of performance tuning:

  • the administrator's primary goal is to make the system satisfy client requests quickly and without errors or interruptions.
  • The secondary tuning goals are:
    • Conserving bandwidth
    • Conserving CPU resources and RAM utilization

An indirect goal is eliminating or reducing Help Desk calls, a goal that is usually achieved by meeting the direct goals.

Unlike the user, who deals primarily with perception, the administrator can quantify resource utilization through:

  • the collection,
  • the observation (monitoring),
  • and analysis of performance data.

You can use performance data to:

  • Observe changes and trends in resource usage and workload distribution.
  • Quantify the relationship between the workload and its effect on system resources.
  • Test configuration changes or tuning efforts by monitoring the results.
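
As a minimal example of the collection step, the sketch below samples CPU, memory, and disk utilization at a fixed interval and writes the samples to a CSV file for later trend analysis. It assumes the third-party psutil package; the file name, interval, and sample count are arbitrary:

import csv
import time
from datetime import datetime

import psutil  # third-party package: pip install psutil

INTERVAL_SECONDS = 5   # sampling interval (arbitrary)
SAMPLES = 12           # number of samples taken in this sketch

with open("perf_samples.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["timestamp", "cpu_percent", "ram_percent", "disk_percent"])
    for _ in range(SAMPLES):
        writer.writerow([
            datetime.now().isoformat(timespec="seconds"),
            psutil.cpu_percent(interval=1),    # CPU utilization averaged over one second
            psutil.virtual_memory().percent,   # physical RAM in use
            psutil.disk_usage("/").percent,    # disk space in use on the root filesystem
        ])
        f.flush()                              # keep the file readable while sampling runs
        time.sleep(INTERVAL_SECONDS)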

Regardless of the perspective you take, you have to approach tuning systematically and employ a methodology for implementing and testing system configuration changes.

Business perspective

The business perspective also plays a significant role in performance management. In this context, someone has to determine how much hardware is required, how to make provisions for peak loads, how to balance out spikes with low overall load, and how to determine or satisfy service-level agreements.

It's often necessary to make price and performance trade-offs—it may be too expensive to have enough servers for maintaining low processor utilization at all times, so low average utilization with spikes becomes acceptable.

During testing

Responses

  • Check the size of the response data after processing queries. If only a few hundred bytes are returned for a large result set, it is a sign that an error message was returned instead of data.
  • For each transaction, ensure there are no error messages in the response data.
  • Look for exceptions and stack traces in the data returned from the server.
  • Receiving a lot of binary data from the server is usually a good sign; error messages come back as clear text (see the validation sketch after this list).
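
A minimal validation sketch for these checks (the byte threshold and marker strings are arbitrary choices, and how the responses are obtained is left to your load-test tool):

ERROR_MARKERS = ("error", "exception", "traceback", "stack trace")
MIN_EXPECTED_BYTES = 1024  # illustrative threshold: a large result set should be far bigger than this

def validate_response(body, expect_large_result=True):
    """Return a list of problems found in a single response body (bytes)."""
    problems = []
    if expect_large_result and len(body) < MIN_EXPECTED_BYTES:
        problems.append(f"suspiciously small response ({len(body)} bytes)")
    try:
        text = body.decode("utf-8").lower()
    except UnicodeDecodeError:
        return problems  # binary payload: nothing more to scan for clear-text error messages
    problems.extend(f"error marker found: {marker!r}" for marker in ERROR_MARKERS if marker in text)
    return problems

if __name__ == "__main__":
    # Prints the small-size warning plus the 'error' marker hit.
    print(validate_response(b"<html>500 Internal Server Error</html>"))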

Application logs

Ensure that there are no errors in the application logs.
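
One way to automate that check, as a minimal sketch (the log directory and the ERROR/FATAL/Exception patterns are assumptions; adjust them to your application's logging format):

import re
from pathlib import Path

LOG_DIR = Path("/var/log/myapp")  # hypothetical location: point this at your application's log directory
PATTERN = re.compile(r"\b(ERROR|FATAL|Exception)\b")

def find_log_errors(log_dir):
    """Return every matching line from the *.log files under log_dir."""
    hits = []
    for log_file in sorted(log_dir.glob("*.log")):
        for lineno, line in enumerate(log_file.read_text(errors="replace").splitlines(), start=1):
            if PATTERN.search(line):
                hits.append(f"{log_file.name}:{lineno}: {line.strip()}")
    return hits

if __name__ == "__main__":
    for hit in find_log_errors(LOG_DIR):
        print(hit)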
