“To see a world in a grain of sand and a heaven in a wild flower, hold infinity in the palm of your hand and eternity in an hour.” - William Blake
Can you imagine a time when humans did not know how to count? Not only did they not know how to count, they did not have the concept of numbers, except perhaps the most rudimentary kind, barely enough for survival.
Numbers and counting must have begun with the number one, even though in the beginning people likely didn’t have a name for it. The first solid evidence of the number one, and of someone using it to count, appears about 20,000 years ago: a series of uniform lines cut into a bone, known as the Ishango Bone. The Ishango Bone (the fibula of a baboon) was found in the Congo region of Africa in 1960. The lines cut into the bone are too uniform to be accidental, and archaeologists believe they were tally marks used to keep track of something.
But numbers, and counting, didn’t truly come into being until the rise of cities; indeed, they weren’t really needed until then. It began around 4000 BC in Sumer, one of the earliest civilizations. With so many people, livestock, crops and artisan goods gathered in one place, cities needed a way to organize and keep track of it all as it was used up, added to or traded.
Their method of counting began as a series of tokens. Each token a man held represented something tangible, say chickens. If a man had five chickens, he was given five tokens; when he traded or killed one of his chickens, one of his tokens was removed. This was a big step in the history of numbers and counting, because with it subtraction was invented and the concept of arithmetic was born.
In the beginning, the Sumerians kept groups of clay cones inside clay pouches. The pouches were then sealed and secured, and the number of cones inside was stamped on the outside, one stamp for each cone. Someone soon hit upon the idea that the cones weren’t needed at all: instead of filling a pouch with five cones and writing five marks on the outside, why not just write those five marks on a clay tablet and do away with the cones altogether? That is exactly what happened. This development of keeping records on clay tablets had ramifications beyond arithmetic, for with it the idea of writing was also born.
The Egyptians were the first civilization to invent different symbols for different numbers. They had a symbol for one, which was just a line. The symbol for ten was a rope, and the symbol for a hundred was a coil of rope. They also had symbols for a thousand and ten thousand. The Egyptians were the first to dream up the number one million, and its symbol was a prisoner begging for forgiveness: a figure on its knees, hands raised in the air, in a posture of humility.
[Figure: Egyptian number symbols]
Greece made further contributions to the world of numbers and counting, much of this work under the guidance of Pythagoras. He
studied in Egypt and upon returning to Greece established a school of
mathematics, introducing Greece to mathematical concepts already prevalent in
Egypt. Pythagoras was the first man to come up with the idea of odd and even
numbers. To him, the odd numbers were male; the evens were female. He is most
famous for his Pythagorean theorem, but perhaps his greatest contribution was
laying the groundwork for Greek mathematicians who would follow him.
Pythagoras was one of the world’s first
theoretical mathematicians, but it was another famous Greek mathematician,
Archimedes, who took theoretical mathematics to a level no one had reached before. Archimedes is considered the greatest mathematician of antiquity and one of the greatest of all time. He enjoyed experimenting and playing games with numbers, and he is famous for inventing
a method of determining the volume of an object with an irregular shape. The
answer came to him while he was bathing. He was so excited he leapt from his
tub and ran naked through the streets screaming “Eureka!” which is Greek for “I
have found it.” Archimedes made many, many other mathematical contributions,
but they are too numerous to mention here.
The Greeks’ role in mathematics ended,
quite literally, with Archimedes. He was killed by a Roman soldier during the
Siege of Syracuse in 212 BC. And thus ended the golden age of mathematics in
the classical world.
Under the rule of Rome, mathematics entered a dark age, for a couple of reasons. The main reason was that the Romans simply weren’t interested in mathematics (they were more concerned with world domination); the second was that Roman numerals were so unwieldy they couldn’t be used for anything more complicated than recording the results of calculations. The Romans did all their calculating on a counting board, an early version of the abacus, and because of that Roman mathematics couldn’t, and didn’t, go far beyond adding and subtracting. Their use of numbers amounted to a simple counting system, no more advanced than the notches on the Ishango Bone. There’s a good reason there are no famous Roman mathematicians.
The next big advance (and it was a huge
advance) in the world of numbers and mathematics came around 500 AD. It would
be the most revolutionary advance in numbers since the Sumerians invented
mathematics. The Indians invented an entirely new number: zero.
Though humans have always understood the concept of nothing, or of having nothing, the concept of zero was only fully developed in India in the fifth century AD. Before then, mathematicians struggled to perform even the simplest arithmetic calculations. Today zero, both as a symbol (or numeral) and as a concept meaning the absence of any quantity, allows us to perform calculus, solve complicated equations, and build computers.
Under Hinduism, the Indians possessed concepts such as Nirvana and eternity, very abstract ideas that call for abstract mathematics to describe them. The Indians needed a way to express very large numbers, and so they created a method of counting that could handle them. It was they who created a different symbol for
every number from one to nine. They are known today as Arabic numerals, but
they would more properly be called Indian numerals, since it was the Indians
who invented them.
Once zero was invented, it transformed counting and mathematics in a way that would change the world. Zero is still
considered India’s greatest contribution to the world. For the first time in
human history the concept of nothing had a number.
Zero, by itself, wasn’t necessarily all that special. The magic happened when you paired it with other numbers. With the invention of zero, the Indians gained the ability to make numbers infinitely large or infinitely small, and that enabled Indian scientists to advance far ahead of other civilizations that didn’t have zero, thanks to the extraordinary calculations it made possible. For example, Indian astronomers were centuries ahead of the Christian world. With the help of the very fluid Arabic numerals, Indian scientists worked out that the Earth spins on its axis and that it moves around the Sun, something Copernicus wouldn’t figure out for another thousand years.
The next big advance in numbers was the invention of fractions in 762 AD in what is now Baghdad and what was then Persia. This does not mean that earlier civilizations had no concept of fractions; they did. But their symbols and representations were so cumbersome that even simple calculations were very difficult. It was the Persians’ adherence to the Koran and the teachings of Islam that led to the invention of fractions in the form we use now. The Koran teaches that the possessions of the deceased have to be divided among the descendants, but not equally: the women descendants receive a lesser share than the men. Working all of that out required fractions, and prior to 762 AD there was no system of mathematics sophisticated enough to do the job properly.
The number of symbols, or numerals, used to represent numbers is called the base of that number system. The most common is the base-10, or decimal, system, which uses the numerals 0, 1, 2, 3, 4, 5, 6, 7, 8 and 9. With these ten numerals, any number, big or small, can be represented. As an analogy, the English alphabet has 26 letters, and with those 26 letters any English word you can think of can be written down.
But there are other systems besides decimal that have been used by different civilizations in different time periods. The base-12 system, called duodecimal or dozenal, has been used at one time or another, which is why to this day we buy some things by the dozen. The base-60, or sexagesimal, system was first used by the Sumerians, passed down to the ancient Babylonians, and is still in use today, in modified form, for measuring time, angles, and geographic coordinates.
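As a small illustration of how base-60 survives in timekeeping, here is a minimal Python sketch (the helper name to_hms is just for illustration) that breaks a count of seconds into hours, minutes and seconds, which is nothing more than grouping by powers of 60:

```python
# A minimal sketch of base-60 in everyday timekeeping: a raw count of
# seconds is regrouped into hours, minutes and seconds, i.e. by powers of 60.
def to_hms(total_seconds: int) -> tuple[int, int, int]:
    hours, remainder = divmod(total_seconds, 60 * 60)
    minutes, seconds = divmod(remainder, 60)
    return hours, minutes, seconds

print(to_hms(7265))  # (2, 1, 5): 7265 seconds is 2 hours, 1 minute, 5 seconds
```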
The base-5, or quinary, system was never very popular, but some civilizations used it in combination with the decimal system; the combination is called a biquinary system. A good example is the Roman system, where the numbers 1, 5, 10, 50, 100, 500 and so on were each assigned a different symbol.
The advent of computers brought newer number systems into use. The binary system (base-2), which uses only two numeral symbols, zero (0) and one (1), is considered the computer’s natural language because it corresponds to the dual states of the computer’s electrical components: ON or OFF, negative or positive. The octal (base-8) and hexadecimal (base-16) systems are widely used by computer designers and programmers. If the binary system is the computer’s natural language, humans have the most affinity for the decimal system, because we have ten fingers to count with. The octal and hexadecimal systems serve as a convenient bridge between binary and decimal during man-machine interaction.
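To make the different bases concrete, here is a short Python sketch that writes one and the same quantity in binary, octal and hexadecimal; bin, oct, hex and int are standard Python built-ins:

```python
# One quantity, four notations: the value does not change, only the base
# in which its digits are written.
n = 2635

print(bin(n))            # 0b101001001011  (binary, base-2)
print(oct(n))            # 0o5113          (octal, base-8)
print(hex(n))            # 0xa4b           (hexadecimal, base-16)
print(int("0xa4b", 16))  # 2635            (back to decimal, base-10)
```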
With the invention of the number zero came the idea of positional, or place-value, notation, where the value of a numeral or digit depends on its position among the group of digits representing a number. In the decimal system, the rightmost position is the ones place; next to it, to the left, is the tens place, followed by the hundreds, then the thousands, and so on. For example, the number 2635 can be read as “two thousands, six hundreds, three tens and five ones.”
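A small Python sketch spells out this place-value reading of 2635: each digit is weighted by the power of ten that belongs to its position, and the weighted parts add back up to the original number.

```python
# Place-value decomposition of 2635: digit x (power of ten for its position).
number = 2635
digits = [int(d) for d in str(number)]                       # [2, 6, 3, 5]
weights = [10 ** i for i in range(len(digits) - 1, -1, -1)]  # [1000, 100, 10, 1]

parts = [d * w for d, w in zip(digits, weights)]
print(parts)       # [2000, 600, 30, 5]
print(sum(parts))  # 2635
```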
We can extend this idea by placing a decimal point right after the ones position. Every position after the decimal point represents a fractional part of a whole: the first position represents tenths, the next hundredths, then thousandths, and so on. For example, the number 1.5 means one and five tenths, or 15/10. But since 5 is half of ten, 1.5 also means “one and a half,” or 1½. Here is another example: 0.465 is equal to 465/1000 and should be read as “465 thousandths,” since three positions after the decimal point are used. With this notation, any number, however large or small, can be represented simply by adding more positions to the left or to the right of the decimal point.
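Python’s Fraction type, from the standard library, confirms these fractional readings exactly, with no rounding involved:

```python
# Exact fractional readings of decimal notation.
from fractions import Fraction

print(Fraction("1.5"))                           # 3/2, i.e. one and a half
print(Fraction("0.465"))                         # 93/200, which is 465/1000 in lowest terms
print(Fraction(465, 1000) == Fraction("0.465"))  # True
```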
Convenient as it is, positional notation reaches the limit of its usefulness as the numbers we deal with become increasingly large. If we are dealing with thousands, we need only 4 digits to represent each number. As we move up to millions, we need 7 digits, still easy to remember, like a phone number without the area code. Billions require 10 digits, which ordinary 8-digit calculators can no longer handle. Yet nowadays government accountants manage national budgets in the billions and trillions, while physicists and astronomers deal with numbers much, much greater than that. It is clear, then, that we need new notation to represent large numbers.
Let us start by looking at repeated multiplication of a number by itself. For example, if we multiply 6 by 6, we can easily calculate mentally that the result is 36. Numerically, we write 6 × 6 = 36. At this point let me introduce a new notation to represent this kind of mathematical operation. It is called exponential notation. In this notation, 6 × 6 is written as 6². The number 6 here is called the base, and the number 2, which is written a little higher than the base, is called the exponent. The symbol 6² should be read as “six raised to the power of 2.” In general, the symbol xⁿ should be read as “x raised to the power of n,” where x and n represent any numbers.
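As a quick check on this notation, the short Python sketch below evaluates a few powers of 6; in Python, ** is the exponentiation operator.

```python
# A few powers of 6 written in exponential notation and evaluated.
for exponent in range(4):
    print(f"6^{exponent} = {6 ** exponent}")
# 6^0 = 1
# 6^1 = 6
# 6^2 = 36
# 6^3 = 216
```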
[Figure: Examples of exponential notation with 6 as the base]
The figure above shows examples of exponential notation when the base number is 6. The exponents 0 and 1 are extensions that can easily be justified mathematically (6¹ = 6 and 6⁰ = 1) and are included here for completeness. Next, let’s take a look at exponential notation when the base is 10.
[Figure: Examples of exponential notation with 10 as the base]
Interestingly, exponential notation becomes highly intuitive when the base number is 10. From the figure above we can easily see that the exponent is equal to the number of 0’s after the 1 when the number is written out in full. Exponential notation with base 10 is so widely used by mathematicians and scientists that it is called scientific notation.
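A short Python sketch makes the zero-counting rule concrete: for powers of ten, the exponent is exactly the number of zeros after the leading 1.

```python
# For powers of ten, the exponent equals the number of trailing zeros.
for exponent in (2, 6, 9):
    value = 10 ** exponent
    zeros = len(str(value)) - 1   # every digit after the leading 1 is a zero
    print(f"10^{exponent} = {value:,} ({zeros} zeros)")
# 10^2 = 100 (2 zeros)
# 10^6 = 1,000,000 (6 zeros)
# 10^9 = 1,000,000,000 (9 zeros)
```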
In 1938, mathematician Edward Kasner asked his 9-year-old nephew, Milton Sirotta, what would be an appropriate name for the number 1 followed by 100 zeros (10¹⁰⁰). After a moment’s thought, Milton replied that such a number could only be called something as silly as a “googol.” The name stuck, and the 9-year-old Milton earned his place in the annals of mathematics. The googol is so large that it is much greater than the total number of elementary particles in the entire observable universe, which is only about 10⁸⁰.
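Even a number this large can be written out exactly in Python, whose integers have arbitrary precision; a quick sketch:

```python
# A googol written out exactly: a 1 followed by 100 zeros.
googol = 10 ** 100
print(len(str(googol)))   # 101 digits: the leading 1 plus 100 zeros
print(googol > 10 ** 80)  # True: far larger than the particle-count estimate
```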
Later, Kasner coined the term googolplex as the name for the much larger number consisting of a 1 followed by a googol of zeros. To many people, this is the largest number with a name. The noted astronomer Carl Sagan, in episode 9 of his TV series Cosmos, pointed out that a googolplex has so many zeros that there is not enough room to write them all out, even in the entire volume of the known universe.
This chapter would not be complete without the topic of infinity. Infinity is not a real number; it is a concept of something that is unlimited, endless, without bound. Its common symbol, “∞”, called the lemniscate, was invented by the English mathematician John Wallis in 1657.
Early mathematicians had some vague notions of infinity, although they did not know how to deal with it. The ancient Greeks, particularly Zeno of Elea (c. 490-430 BCE), hinted at it by constructing paradoxes that led to contradictions when finite reasoning was applied to essentially infinite processes. In general, the Greeks had such immense difficulties with infinity that they never could quite accept it as we do today. Their inability to deal with infinity and infinite processes may be considered one of the greatest failures of Greek mathematical thought.
Following the Greeks, the Arabs and then the European mathematicians continued to dabble with infinity and infinite processes. After Wallis invented its symbol, the concept of infinity caught on with other mathematicians and, in a way, made its entrance into the world of mathematics, although it was only in the 19th century that Georg Cantor (1845-1918) formally defined it. The acceptance of infinity as a mathematical object resulted in great advances in different branches of mathematics: calculus, complex analysis, Fourier series, and set theory, among others.
Today, our mathematics is so advanced and so powerful that we can predict the weather or pinpoint the location of any person or object anywhere in the world with amazing accuracy. Astronomers train their telescopes on faraway stars and galaxies, calculate their distances and densities, and determine their chemical composition. We have developed mathematical models that predict the existence and behavior of subatomic particles long before we obtain empirical evidence of their presence. Finally, we now have mathematical models that describe how the universe came into existence and how it will end.