About

In a computer, integers are stored in words of 8 to 64 bits.

Because the CPU manipulates integer data types natively, they are also sometimes called binary data types.

Bit Length | Two's complement signed | Unsigned  | Float / Double
8          | Int8                    | Uint8     |
16         | Int16                   | Uint16    |
32         | Int32                   | Uint32    | Float32
64         | BigInt64                | BigUint64 | Float64

Example

The integer 42 in several bit representations (the minimum and maximum values that each bit size can hold are given in the Size section below):

Representation                    | Bits
two's complement 8-bit            | 0010 1010
two's complement 32-bit           | 0000 0000 0000 0000 0000 0000 0010 1010
packed binary-coded decimal (BCD) | 0100 0010
32-bit IEEE-754 floating-point    | 0100 0010 0010 1000 0000 0000 0000 0000
64-bit IEEE-754 floating-point    | 0100 0000 0100 0101 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000
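These bit patterns can be checked programmatically. The following is a minimal sketch in Java (the class name is illustrative); it prints the two's complement, 32-bit float and 64-bit double encodings of 42 using the standard library conversion methods:

<code java>
public class IntegerBits {

    public static void main(String[] args) {
        int n = 42;

        // 42 as a two's complement integer (leading zeros are omitted)
        System.out.println(Integer.toBinaryString(n));   // 101010

        // Raw bits of the IEEE-754 32-bit (float) encoding of 42.0
        System.out.println(Integer.toBinaryString(Float.floatToIntBits(42.0f)));
        // 1000010001010000000000000000000  (0100 0010 0010 1000 ... without the leading zero)

        // Raw bits of the IEEE-754 64-bit (double) encoding of 42.0
        System.out.println(Long.toBinaryString(Double.doubleToLongBits(42.0)));
        // 0100 0000 0100 0101 0000 ... printed without the leading zero
    }
}
</code>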

Data Type

In a programming language, integers are made available as primitive types, generally under the names:

  • integer: the short version (Java stores it in 32 bits)
  • long: the longer version (Java stores it in 64 bits). long allows a higher range of numbers than integer at the cost of more storage.

but you may also store an integer in other representations, such as the BCD or floating-point encodings shown in the example above.
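A short sketch of these declarations in Java (the class and variable names are illustrative):

<code java>
public class IntegerTypes {

    public static void main(String[] args) {
        int  count    = 42;     // 32-bit signed integer
        long bigCount = 42L;    // 64-bit signed integer (note the L suffix)

        // Bit sizes of the primitive types
        System.out.println(Integer.SIZE);  // 32
        System.out.println(Long.SIZE);     // 64

        // An integer value may also be stored in a floating-point type,
        // as in the 42 example above
        double asDouble = 42.0; // 64-bit IEEE-754
        System.out.println(count + " " + bigCount + " " + asDouble);
    }
}
</code>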

Size

The size of an integer is defined by the number of bits used to store it.

It is generally bound to the size of the CPU word (allowing fast bit operations) at the time the language was defined.

The integer range is:

  • from <math>-2^{n-1}</math>
  • to <math>2^{n-1}-1</math>

where:

  • n is the storage size in bits
  • the exponent is n − 1 because one bit is used to store the sign (not the byte order); the extra − 1 in the maximum comes from zero taking one of the non-negative bit patterns
Size (bit) | Minimum                                           | Maximum
8          | <math>-2^{7} = -128</math>                        | <math>2^{7}-1 = 127</math>
16         | <math>-2^{15} = -32,768</math>                    | <math>2^{15}-1 = 32,767</math>
32         | <math>-2^{31} = -2,147,483,648</math>             | <math>2^{31}-1 = 2,147,483,647</math>
64         | <math>-2^{63} = -9,223,372,036,854,775,808</math> | <math>2^{63}-1 = 9,223,372,036,854,775,807</math>
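The formula can be checked with a small sketch (in Java; the class and method names are illustrative) that computes the range for a given bit size and compares it with the built-in constants:

<code java>
public class IntegerRange {

    // Minimum of an n-bit two's complement integer: -2^(n-1)
    static long min(int bits) {
        return -(1L << (bits - 1));
    }

    // Maximum of an n-bit two's complement integer: 2^(n-1) - 1
    static long max(int bits) {
        return (1L << (bits - 1)) - 1;
    }

    public static void main(String[] args) {
        System.out.println(min(8)  + " .. " + max(8));    // -128 .. 127
        System.out.println(min(16) + " .. " + max(16));   // -32768 .. 32767
        System.out.println(min(32) + " .. " + max(32));   // -2147483648 .. 2147483647

        // The 32-bit range matches the built-in int constants
        System.out.println(Integer.MIN_VALUE + " .. " + Integer.MAX_VALUE);
    }
}
</code>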

Overflow - The storage limitation may cause programs to crash

For example, if a programmer using the C language incorrectly declares as int a variable that will be used to store values greater than <math>2^{15}-1</math>, the program will fail on computers with 16-bit integers, creating an overflow failure. That variable should have been declared in C as long, which has at least 32 bits on any computer.
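The C scenario above cannot be reproduced directly in Java (where int is always 32 bits), but the same kind of silent overflow can be shown; a minimal sketch (the class name is illustrative):

<code java>
public class OverflowDemo {

    public static void main(String[] args) {
        // In Java, int is always 32 bits; exceeding its maximum silently wraps around
        int value = Integer.MAX_VALUE;   //  2147483647
        value = value + 1;               // overflow: no error is raised
        System.out.println(value);       // -2147483648

        // Widening to long (at least 64 bits) before the addition keeps the result correct
        long safe = (long) Integer.MAX_VALUE + 1;
        System.out.println(safe);        // 2147483648
    }
}
</code>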

Documentation / Reference