CPU - (System) Load Average


About

Load average is a performance metric for the system as a whole. It is often called the CPU load average, but it is really a system load average: Unix defines it as the run-queue length, i.e. the number of processes that are either running or waiting to run.

CPU load can be compared to car traffic over a bridge. The “cars” are processes, either:

  • using a slice of CPU time (“crossing the bridge”)
  • or queued up waiting to use the CPU.


What the value means

Single Processor

For one core:

  • 0.00 means there's no traffic. Between 0.00 and 1.00 there's no wait: an arriving process runs right away.
  • 1.00 means the CPU is exactly at capacity. All is still good, but if traffic gets a little heavier, things will slow down.
  • Over 1.00 means there's a backlog. How much? A load of 2.00 means it would take two CPUs to handle the traffic: one process running, and one waiting.

To avoid waiting, the CPU load should ideally stay below 1.00.

Rule of thumb: if your load average stays above 0.70, it's time to investigate before things get worse.
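The 0.70 rule of thumb can be checked from a script. A minimal sketch, assuming a Linux box where the 1-minute load average is the first field of /proc/loadavg (the threshold and messages are illustrative):

```shell
#!/bin/sh
# Warn when the 1-minute load average is above the 0.70 rule of thumb
check_load() {
  # first field of /proc/loadavg = 1-minute load average (Linux-specific)
  load=$(cut -d ' ' -f 1 /proc/loadavg)
  # the shell has no floating-point arithmetic, so awk does the comparison:
  # awk exits 0 (success) only when the load is above the threshold
  if awk -v l="$load" 'BEGIN { exit !(l > 0.70) }'; then
    echo "load $load is above 0.70: time to investigate"
  else
    echo "load $load: ok"
  fi
}
check_load
```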

Multi-processor

On a multi-processor system, the load is relative to the number of processor cores available. A load of 1.00 means 100% CPU utilization on a single-core box; on a dual-core box, a load of 2.00 means 100% CPU utilization.

The total number of cores is what matters, regardless of how many physical processors those cores are spread across.
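To apply the same thresholds on any machine, the load can be normalized by the core count. A sketch assuming Linux (`nproc` and `/proc/loadavg`):

```shell
#!/bin/sh
# Express the 1-minute load average as a percentage of the core count,
# so 100% always means "exactly at capacity" (Linux-specific)
cores=$(nproc)
load=$(cut -d ' ' -f 1 /proc/loadavg)
awk -v l="$load" -v c="$cores" \
  'BEGIN { printf "load %s on %d cores = %.0f%% of capacity\n", l, c, 100 * l / c }'
```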

Example on Linux

grep 'model name' /proc/cpuinfo | wc -l
40
# The max load is then 40

On Linux, the load averages are reported by uptime and top:

# uptime or top output
12:54:11 up 326 days, 10:56,  1 user,  load average: 0.11, 0.08, 0.09

where

  • 0.11 is the average over the last minute,
  • 0.08 is the average over the last 5 minutes,
  • and 0.09 is the average over the last 15 minutes.
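The same three figures can also be read programmatically; on Linux they are the first three fields of /proc/loadavg (a minimal sketch):

```shell
#!/bin/sh
# /proc/loadavg starts with the 1, 5 and 15 minute load averages (Linux-specific)
read one five fifteen rest < /proc/loadavg
echo "1min=$one  5min=$five  15min=$fifteen"
```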

How to decrease the load

The most common technique to decrease the load of a server is to cache the response.
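As an illustration of that idea, a minimal file-based cache wrapper; the function name, /tmp keys, and absence of expiry are assumptions for the sketch, not a real API:

```shell
#!/bin/sh
# Run a command once and serve every later call from a file on disk.
# Illustrative sketch: no expiry, keys hashed into /tmp.
cached() {
  key="/tmp/cache_$(echo "$*" | md5sum | cut -d ' ' -f 1)"
  if [ ! -f "$key" ]; then
    "$@" > "$key"    # first call: run the expensive command, store the response
  fi
  cat "$key"         # later calls: read the stored response, no CPU work
}

cached echo "expensive response"   # computes and stores
cached echo "expensive response"   # served from the cache
```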




