


"Begin at the beginning," the King said, very gravely, "and go on till you come to the end: then stop." - Lewis Carroll, Alice in Wonderland

This is the first in a series of posts that will provide some basic Computer Science (CS) information that is useful for most languages. Any examples will usually be Python based, but the concepts are general, and all programmers should be aware of them.

We will start from the basics, and work up to more advanced concepts as we go.

What is a numeric?

If you've been coding a while, this may seem like a fairly basic question, but for those that are new to the game, it bears some explanation.

A numeric is - as the name suggests - a number. Python, like most other programming languages, distinguishes between different types of numbers. Some of these reflect categories of numbers found in mathematics, others are simply different bases (we will discuss bases later).

Common number types


Integers

An Integer is a whole number. That is, it does not have a fractional or decimal component. 3, 7, -2, 1 million, -273 are all integers. 1.5, pi, -2 1/2 are not.
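As a quick illustration, here is how integers look in Python (our example language). The variable names are just for demonstration:

```python
# Integers in Python: whole numbers, positive or negative
a = 3
b = -273
big = 1_000_000   # underscores can group digits for readability

print(type(a))    # <class 'int'>
print(big + b)    # 999727 - arithmetic on integers stays exact
```

Note that Python integers can be arbitrarily large; there is no fixed maximum value.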

Binary (Base 2)

Binary numbers are the basis for pretty much all of computing. They are base 2, meaning they can have exactly one of 2 values: 0 or 1. This mimics the values that internal electronic switches can have (on/off), meaning that if we can translate data to a binary number (which we always can!), then the computer can be set to some internal state that reflects this number, and we can manipulate it.

Binary   Decimal
1        1
10       2
11       3
100      4
1000     8
10000    16
100000   32

As you can see, each column we add to the left is worth twice the previous one. If this is confusing, think about regular decimal numbers, where each column is worth ten times the previous one (units, 10s, 100s, 1000s etc.).

This also explains why you will see all the powers of 2 a lot when learning to code, and you will soon know the following sequence without having to think too much about it:

1 2 4 8 16 32 64 128 256 512 1,024 2,048 4,096 8,192 16,384 32,768 65,536

This corresponds to the binary sequence 1 10 100 ... 10,000,000,000,000,000
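You can experiment with binary numbers directly in Python, which has a 0b prefix for binary literals and built-in conversion functions:

```python
# Binary literals use the 0b prefix
n = 0b100000
print(n)                    # 32 - binary 100000 is decimal 32

# bin() converts a decimal number to a binary string
print(bin(255))             # '0b11111111'

# int() with base 2 parses a binary string back to a number
print(int("11111111", 2))   # 255
```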

Floating point (floats)

Floating point numbers are those with some fractional or decimal part. Note that any integer can also be stored as a float: while 2.5 is a float and can never be an integer, 2 is an integer but could also be stored as the float 2.0.

In mathematical terms, floating point numbers correspond roughly to Rational numbers, though there are technical limits on how many decimal places it is possible to store accurately. Irrational numbers can never be fully represented as a float (or as a fraction, by definition), though in practice it is sometimes necessary to use an irrational number (for example, pi) in a calculation. In that case it is usually enough to specify the number to a suitable number of decimal places (e.g. pi = 3.14, or 3.14159265359) and store it as a float.

There are some complexities around how floating point numbers work in a computer, which means that you may occasionally get a strange result that you weren't expecting. We will cover this another time, but for the moment, just go with the rule of using integers as much as possible. This includes things like currency calculations, where you would intuitively have a decimal part. Simply store your values as cents or pennies, rather than dollars, euros or pounds, and you will avoid falling into strange rounding error problems that can arise with floats.
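To see the kind of surprise floats can produce, and the integer-cents approach in action, try this in Python (the cent values are made up for illustration):

```python
# Floats are stored in binary, so some decimal values can't be
# represented exactly - this leads to small rounding surprises
print(0.1 + 0.2)          # 0.30000000000000004, not 0.3!
print(0.1 + 0.2 == 0.3)   # False

# Storing currency as integer cents avoids the problem entirely
price_cents = 1999        # $19.99
tax_cents = 160           # $1.60
total_cents = price_cents + tax_cents
print(total_cents)        # 2159, i.e. $21.59 - exact, no rounding error
```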


Complex numbers

Complex numbers are a mathematical concept, where a number has two parts: a Real part and an Imaginary part.

The Real part is any number on the number line: this includes integers, fractions, and decimals, even decimals that go on forever without repeating (the Irrational numbers, the most famous of which is pi).

The Imaginary part is a little more tricky. An imaginary number is usually denoted as a multiple of i, where i = sqrt(-1).

What on earth is the square root of -1?

This is certainly a tricky concept to understand, and I would suggest - for the moment - that you don't worry about it too much. We will come back to this at a later date when it makes sense to delve more deeply into the ideas behind it.

For the moment, just know that this is the definition, and that we use i to denote it for the simple reason that there is no number that, multiplied by itself, gives the result -1.

So how do we write a complex number?

This bit is a little easier. We write the 2 parts - Real and Imaginary - as a sum.

a + bi

So the following are examples of complex numbers:

1 + 2i
-7 + 2.6i
pi - 0.0002763i

Note also that the set of all Complex numbers includes the set of all Real numbers, simply by setting b = 0 in the equation above. So the number 2 is an Integer, but also a Real number, and in turn a Complex number, which would be written as 2 + 0i.

There are mathematical rules for how to manipulate complex numbers (addition, subtraction, multiplication etc.). Often, they are represented on a graph with real numbers on the x axis and imaginary numbers on the y axis.
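Python supports complex numbers out of the box, though it uses j (the engineering convention) rather than i for the imaginary unit:

```python
# Complex numbers in Python use j instead of i
z = 1 + 2j
w = -7 + 2.6j

print(z.real, z.imag)    # 1.0 2.0 - the two parts are stored as floats
print(z + w)             # addition combines real and imaginary parts

# A real number as a complex number, with imaginary part 0
print(complex(2, 0))     # (2+0j)
```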

Hexadecimal (Base 16)

Like Binary numbers, Hexadecimal numbers are written in a different base, in this case 16. So instead of each column being worth 2 times the previous one (Binary) or 10 times the previous one (Decimal), each column is worth 16 times the previous one.

Because this means we need to count up to 15 before we get to 10, we need some extra notation to indicate the hexadecimal values corresponding to the decimal values 10 through 15 (we run out of actual digits otherwise, as our entire counting system is based around decimal numbers, which are represented by the symbols 0 through 9). For these "extra" symbols, we use A through F.

Hexadecimal   Decimal   Binary
1             1         1
9             9         1001
A             10        1010
F             15        1111
10            16        10000
1F            31        11111
FF            255       11111111
100           256       100000000

Look carefully at the patterns here, particularly the relationship between Hexadecimal and Binary. There's a mathematical reason for this: 16 can also be written as 2^4 (2 to the power of 4, or 2 x 2 x 2 x 2), so each hexadecimal digit corresponds to exactly four binary digits. We've already covered the reason for Binary numbers being so important in computing (i.e. that they correspond to the on/off positions of internal electronic components of your computer). This relationship is a big part of why Hexadecimal numbers are so common in computing: they are a compact way of writing binary values.
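Python handles hexadecimal with the 0x prefix, and you can check the four-binary-digits-per-hex-digit relationship yourself:

```python
# Hexadecimal literals use the 0x prefix; hex() converts the other way
print(0xFF)       # 255
print(hex(256))   # '0x100'

# Each hex digit maps to exactly four binary digits
print(bin(0xF))   # '0b1111'     - one hex digit, four binary digits
print(bin(0xFF))  # '0b11111111' - two hex digits, eight binary digits
```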

Octal (Base 8)

The use of Octal numbers is less common than either Binary or Hexadecimal, but perhaps you understand why they are used now: 8 can also be written as 2^3 (2 to the power of 3, or 2 x 2 x 2).

Base 8 numbers mean that Octal 10 is the same as Decimal 8, so the digits 8 and 9 are not used in this Base (in the same way that digits 2 through 9 are not used in base 2 - Binary)

Here is the same table as above, but with Octal versions of the numbers added.

Hexadecimal   Octal   Decimal   Binary
1             1       1         1
8             10      8         1000
9             11      9         1001
A             12      10        1010
F             17      15        1111
10            20      16        10000
1F            37      31        11111
FF            377     255       11111111
100           400     256       100000000

It's not likely you will make a conscious choice to use Octal numbers in your programming, though you may come across them from time to time. One common example that uses Octal numbers that you will almost certainly see (though may not give much thought to), is file permissions in Unix/Linux (we will likely cover these at a later date too).
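Python writes octal with the 0o prefix. The file name in the chmod comment below is just a made-up example:

```python
# Octal literals use the 0o prefix; oct() converts back
print(0o377)    # 255
print(oct(8))   # '0o10'

# Unix file permissions are traditionally written in octal,
# e.g. 755 meaning rwxr-xr-x. os.chmod accepts an octal mode directly:
#   os.chmod("script.sh", 0o755)   # shown for illustration, not run
```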

Boolean (True / False)

Boolean values are a little special, and can be represented differently in different languages. Essentially they are a single Binary unit value, that is 0 or 1, where 0 corresponds to False and 1 corresponds to True.

Including them here is perhaps a little bit of a cheat, as not all languages will allow the comparison between 0/1 and False/True (Python does allow this comparison). It is an important concept though, and one that you should know about early on, so we think its inclusion here is warranted.
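In Python specifically, bool is actually a subclass of int, so the 0/1 correspondence is built right into the language:

```python
# In Python, True behaves as 1 and False as 0
print(True == 1)              # True
print(False == 0)             # True
print(True + True)            # 2 - booleans take part in arithmetic
print(isinstance(True, int))  # True - bool is a subclass of int
```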
