Units of information

From Wikibooks, open books for an open world

PAPER 2 - Fundamentals of data representation


Specification

Specification coverage
  • 3.5.3.1 - Bits and bytes
  • 3.5.3.2 - Units

Bits and bytes


The bit


Computers process data in digital form. Essentially this means that they use microprocessors, also referred to as chips or silicon chips, to control them. A chip is a small piece of silicon implanted with millions of electronic circuits. The chip receives pulses of electricity that are passed around these microscopic circuits in a way that allows computers to represent text, numbers, sounds and graphics.

A bit is a binary digit. The processor can only handle electricity in a relatively simple way - either electricity is flowing or it is not. This is often referred to as two states. The processor can recognise whether it is receiving an off signal or an on signal. This is handled as a zero (0) for off and a one (1) for on. Each binary digit is therefore either a 0 (no signal) or a 1 (a signal).

The processor now needs to convert these 0s and 1s into something useful for the user. Everything you can use your computer for is represented internally by a series of 0s and 1s. Computers string zeros and ones together to represent text, numbers, sound, video and everything else we use our computers for.
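As an illustration of this idea (a short Python sketch, not part of the original text), each character of text can be shown as the eight binary digits that represent it internally:

```python
# A minimal sketch of how text is held as patterns of bits.
# ord() gives a character's numeric code; format(..., '08b')
# shows that code as eight binary digits (one byte).
for ch in "Hi":
    print(ch, format(ord(ch), '08b'))
# prints:
# H 01001000
# i 01101001
```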

The clock speed of your computer indicates the speed at which signals are sent around the processor. In simple terms, a clock speed of 2 GHz means that the processor handles 2 billion of these on/off pulses per second.

The byte


A single byte is a string of eight bits. Eight is a useful number of bits as it creates enough permutations (or combinations) of zeros and ones to represent every character on your keyboard:

  • With one bit we have two permutations: 0 and 1.
  • With two bits we have four permutations: 00, 01, 10 and 11. This can be written as 2², or 2 x 2. Each extra bit doubles the number of permutations.
  • Three bits give us 2³, which is 2 x 2 x 2 = 8 permutations.
  • Four bits give us 2⁴ permutations, which is 2 x 2 x 2 x 2 = 16 permutations.
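The doubling pattern above can be checked with a short script (a minimal sketch in Python, not part of the original text) that lists every permutation of n bits:

```python
from itertools import product

# Enumerate every permutation of n bits: there are 2**n of them.
for n in range(1, 5):
    patterns = [''.join(bits) for bits in product('01', repeat=n)]
    print(n, 'bits:', len(patterns), 'permutations:', ', '.join(patterns))
# e.g. 3 bits: 8 permutations: 000, 001, 010, 011, 100, 101, 110, 111
```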

The basic point here is that the more bits you use, the greater the range of numbers, characters, sounds or colours that can be created. Taking numbers as an example, as we have seen, 8 bits would be enough to represent 256 different numbers (0-255). As the number of bits increases, the range of numbers increases rapidly. For example 2¹⁶ would give 65,536 permutations, 2²⁴ would give approximately 16.8 million and 2³² would give over 4 billion permutations.
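These figures are easy to verify (a quick Python check, not part of the original text):

```python
# The range of values grows as a power of two with the bit count.
for n in (8, 16, 24, 32):
    print(f"{n} bits -> {2**n:,} permutations (values 0 to {2**n - 1:,})")
# prints:
# 8 bits -> 256 permutations (values 0 to 255)
# 16 bits -> 65,536 permutations (values 0 to 65,535)
# 24 bits -> 16,777,216 permutations (values 0 to 16,777,215)
# 32 bits -> 4,294,967,296 permutations (values 0 to 4,294,967,295)
```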

Units


Larger combinations of bytes are used to measure the capacity of memory and storage devices. The size of the units can be referred to using either binary or decimal prefixes. For example, in decimal, the prefix kilo indicates a unit that is 1,000 times larger than a single unit, so the correct term would be kilobyte (KB). In binary, the correct term is actually kibibyte (KiB), which is 1,024 (2¹⁰) bytes, the nearest power of two to 1,000.

The tables below show both binary and decimal prefixes.

Binary

Unit name Unit symbol Unit value
kibibyte KiB 2¹⁰ bytes
mebibyte MiB 2²⁰ bytes
gibibyte GiB 2³⁰ bytes
tebibyte TiB 2⁴⁰ bytes

Decimal

Unit name Unit symbol Unit value
kilobyte KB 10³ bytes
megabyte MB 10⁶ bytes
gigabyte GB 10⁹ bytes
terabyte TB 10¹² bytes
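The practical difference between the two systems can be seen with a quick calculation (a Python sketch, not part of the original text); storage manufacturers typically advertise in decimal units, while operating systems often report in binary units, so the same capacity appears as a smaller number:

```python
# Comparing decimal (SI) and binary (IEC) units for the same capacity.
size_bytes = 5_000_000_000  # an advertised "5 GB" drive

gb = size_bytes / 10**9     # gigabytes (decimal, 10^9 bytes each)
gib = size_bytes / 2**30    # gibibytes (binary, 2^30 bytes each)

print(f"{gb} GB is about {gib:.2f} GiB")
# prints: 5.0 GB is about 4.66 GiB
```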
Exercises

Which is biggest, a MiB, a KiB or a GiB?

Answer:

GiB

Explain the difference between a kibibyte and a kilobyte.

Answer:

A kibibyte is 2¹⁰ = 1,024 bytes, whilst a kilobyte is 10³ = 1,000 bytes.