Thursday, December 31, 2020

A Significant Life

 


Emmanuel S. Bacleon, an ardent Filipino nationalist and a passionate student activist in his time, died peacefully in his home in Arleta, California on April 7, 2010. He was 58.

Manny's unassuming personality, humility and sincerity endeared him to his friends. He was a dutiful son to his parents and a caring brother to his siblings.

He was born on August 20, 1951 in Cabadbaran, Agusan del Norte, Philippines, the fifth of the nine children of Melencio Ala-an Bacleon and Irene Josefa Antiga Sarita. His father was a tailor and his mother a full-time housewife. After finishing his elementary education at Candelaria Institute in Cabadbaran, he entered the Sacred Heart Seminary in Lawaan, Talisay, Cebu.

His high school classmates Celestino Mausisa and Mercurio Montenegro fondly remember him as small in stature but with a big heart, and very approachable, especially when it came to helping them in their Latin subject, where Manny excelled. He was the valedictorian of the class of 1967.

After his high school graduation, he stayed in the seminary for two more years as a college student majoring in Philosophy. Upon the encouragement and a scholarship offer of Fr. Matthew van Santvoord, MSC, at the time the parish priest of Cabadbaran and director of Candelaria Institute, Manny left the seminary and took up BS Physics at the University of San Carlos in Cebu City in the summer of 1969.

He was just a few months into the university when he discovered his debating skills, which he polished by joining debating contests, garnering awards as Best Debater and Best Speaker in the USC debates that same summer. For some time he was president of the USC Debating Team.

During these times, the Philippines was simmering in political turmoil. Student protests filled the air and the streets. Manny found himself in the midst of heightened student activism and, together with students across the whole country, took up the cudgels to topple an oppressive regime that was becoming more unpopular by the day.

He joined several student organizations. Owing to his oratorical mastery, he rose through the ranks and became the Secretary General of Samahang Demokratikong Kabataan (Society of Democratic Youth) in the province of Cebu. Later he joined the hard-core revolutionary group Kabataang Makabayan (Patriotic Youth) and became one of its top leaders.

He was about to finish his studies at San Carlos when martial law was declared by then President Ferdinand Marcos. As one of the top student leaders, Manny was among those wanted by the military, and he went into hiding to elude arrest.

A few days before Christmas, Manny was with his sister Gilda attending the Misa de Gallo at the Santo Rosario Church in P. del Rosario Street in Cebu City when military-looking men approached them and whisked Manny away. Gilda remembers that day very well: December 16, 1972.

Manny never returned to their boarding house, which was located near the church. That afternoon at about 6 PM, Gilda received a call from the military camp on Jones Avenue informing her that Manny was in detention and that she had to bring him clothing, a mat, a mosquito net and other personal belongings. He stayed in the stockade on Jones Avenue for a few months. He was later transferred to Camp Lapu-lapu in Lahug, Cebu City.

While in detention, he joined chess tournaments with his fellow political prisoners and often won. His sister Gilda visited him every week, bringing him food and news of what was happening in the country; the prisoners had no access to newspapers, TV or radio.

In the summer of 1974, Manny was transferred to the Fort Bonifacio Rehabilitation Center in Makati, where he suffered torture and deprivation that drove him and other detainees to launch a hunger strike. His sister Gilda experienced first-hand the shame and humiliation that relatives of detainees felt as they underwent strict body searches and inspections before they could see their imprisoned kin.

By this time, concerned sectors of Philippine society had started organizing to help detainees and their families through legal channels. Foremost among these organizations was the Task Force Detainees, which worked for the release of the political prisoners. Prominent nationalist lawyers like the late Senators Jose Diokno and Lorenzo Tañada extended their legal expertise and their resources through these organizations.

Manny was a beneficiary of these concerted efforts. He was released from prison weak and emaciated. His body bore scars from physical torture. Later on, his sisters realized that he bore psychological scars too. He was no longer the energetic and enthusiastic person they used to know.

After his release, he came to the United States to join his parents and some of his siblings who were already residing here. His years in prison left him incapable of finding work in his new country, but his sisters and brother were supportive of him. He was on constant medication to keep those mental demons restrained and to keep him from having a nervous breakdown. Freed from the rigors of employment, his typical day included going to the library to read any book that interested him. He also tutored his nieces in their math and science lessons.

Manny had one dream that persisted up to the day he died: he wanted to go back to the Philippines to help his struggling countrymen in any way he could. But his sisters would not let him. Manny is gone, but his indomitable spirit lives on.

To many of his colleagues who were infected by his enthusiasm and learned from him, he is a hero. In the words of Dr. Raul Monton, a colleague who considered him his mentor during the student activism days, "He was such a likable and a fiercely unselfish nationalist who dedicated his life to serving the Filipino. He was unwavering in his conviction. There are only a few Filipinos like him. I will not forget the days we were together fighting the Dictatorship while others were just enjoying in their comfort zone."

Finally, Manny's favorite quotation is worth contemplating: "Every person dies; but each death varies in significance." Indeed, Manny's death was significant. But that was because he led a significant life, touching the lives of countless countrymen. A life that was willing to sacrifice for what he believed in.

Epilogue

I wrote this piece as a eulogy during Manny's wake. Like Dr. Monton, I also considered Manny my mentor. I first met him in the summer of 1971 when I attended a teach-in seminar conducted by him and other vacationing student leaders from Cebu and Manila. Manny and his group converted me and my friend Misach overnight and opened our understanding of the relevant political issues of the country at the time. I lost track of him after martial law was declared.

Thirty years later, I would meet him again in Los Angeles, California. He was sharing an apartment with his mother in the San Fernando Valley while I was teaching in the nearby city of Oxnard. I was a frequent visitor at their apartment on weekends, when we enjoyed reminiscing about our student-activism days. Manny and I were actively involved in the organizational formation of the Cabadbaranons of Southern California, where we were both elected as Public Relations officers. The following year, I transferred to New York and then to Alabama.

I returned to California in late February of 2010. I informed Manny that I was back and promised that I would visit him in a few weeks, as soon as I got myself settled and my schedule allowed. That promise did not materialize. The week before my planned visit, his sister Emma informed me that Manny had already passed away.

Déjà Vu


It is a French term for "already seen." It is that strange feeling you get when you are in a situation and feel like you have been in the exact same place before, but really haven't; or when you meet a person for the first time but it seems you have already met that person before, somewhere. Buddhists point to déjà vu as proof that reincarnation is real, but our present crop of scientists and researchers admit that they still don't know what actually causes it.

I have had two déjà vu experiences and, thankfully, I found down-to-earth explanations for both of them. My first experience was resolved almost immediately, but it took more than a year for my second experience to make sense to me.

Scene 1. In June 2001, I was part of an entourage that drove to northern Michigan. I described the details of that trip in my other write-up titled 'At The Great Lakes On The First Day Of Summer 2001.' One of our destinations was the idyllic Mackinac Island in Lake Huron. Since the island has no public transportation (motorized vehicles are not allowed there), we just walked around. When we reached the entrance to the Grand Hotel, I was mesmerized looking at the long stairway, as if I had been to this place before!

Scene 2. In the summer of 2002, I was driving solo, going northwest along US Highway 101. At that time, this was the farthest I had driven northward from Los Angeles. I was in the vicinity of Santa Barbara when, suddenly, I had a weird feeling that I had passed through this part of the highway before; the mountain formation ahead of me looked so familiar. I could not explain it, so I just filed it in my mental database under the category UNSOLVED MYSTERIES.

Going back to the first scene: I was at the foot of a long stairway that led to the entrance of the Grand Hotel, and the place looked so familiar. Shirley, aka Shinar, must have noticed my bewildered look and asked me, "Kuya Shem, have you seen the movie 'Somewhere in Time'? That movie was shot on this island, specifically here at the Grand Hotel." Oh, I see. Every tidbit of my mental processes seemed to fall into its right place. My favorite movie, which I had seen three or four times, was filmed here. The added information heightened my interest in exploring the island some more.

In 1983, we had a day of gallivanting around Metro Manila with Roger Saldia and Sandra Querol, and we ended up at the Manila Film Center watching the movie Somewhere in Time. That movie was unforgettable to me for a number of reasons:

§  I liked the tune of the theme song, which has been one of my favorites ever since;

§  I have an intriguing curiosity about the idea of time travel as a theoretical possibility; and

§  Richard Collier (Christopher "Superman" Reeve) met the girl of his dreams, Elise McKenna (Jane Seymour), on the grounds of the Grand Hotel on June 27, 1912, and 43 years later to the day, I was born.

One case closed.

I was staying in New Jersey in the autumn of 2003. One lazy Sunday, I decided to watch a movie from my DVD collection. I picked an old movie, The Graduate. I first saw it in 1968, when I was still a high school freshman. I always remember that movie because it was Dustin Hoffman's debut, and every time I see Hoffman in later movies, I am reminded of The Graduate. It was also in that movie that I first heard Simon and Garfunkel's song The Sound of Silence. As the story moved on to the scene where Hoffman's character was traveling to San Francisco in his convertible, the same mountain formation near Santa Barbara that gave me a weird sense of déjà vu the year before flashed on the screen. It was an 'aha' moment for me. No wonder that mountain formation looked so familiar: I had first seen it in this movie 35 years earlier!

As I look back on my two déjà vu experiences, I cannot help but be amazed at the indelibility of the human mind. A momentary scene, a wisp of perfume, an innocent laugh, a casual touch: all this sensory information is processed and meticulously filed in the inner recesses of your subconscious, only to surface when similar information is encountered at another time or under a different set of circumstances.

 

It All Started With 1

 


“To see a world in a grain of sand / And a heaven in a wild flower, / Hold infinity in the palm of your hand / And eternity in an hour.” - William Blake

Can you imagine a time when humans did not know how to count? Not only did they not know how to count, they did not have the concept of numbers, except, perhaps, the most rudimentary kind, barely enough for survival.

Numbers and counting must have begun with the number one. (Even though in the beginning, they likely didn’t have a name for it.) The first solid evidence of the existence of the number one, and that someone was using it to count, appears about 20,000 years ago. It was just a series of uniform lines cut into a bone, called the Ishango Bone. The Ishango Bone (a fibula of a baboon) was found in the Congo region of Africa in 1960. The lines cut into the bone are too uniform to be accidental; archaeologists believe they were tally marks used to keep track of something.

But numbers and counting didn’t truly come into being until the rise of cities; indeed, they weren’t really needed until then. It began about 4000 BC in Sumeria, one of the earliest civilizations. With so many people, livestock, crops and artisan goods located in the same place, cities needed a way to organize and keep track of it all as it was used up, added to or traded.

Their method of counting began as a series of tokens. Each token a man held represented something tangible, say, chickens. If a man had five chickens, he was given five tokens. When he traded or killed one of his chickens, one of his tokens was removed. This was a big step in the history of numbers and counting, because with it subtraction was invented, and thus the concept of arithmetic was born.

In the beginning, the Sumerians kept groups of clay cones inside clay pouches. The pouches were then sealed up and secured, and the number of cones inside each pouch was stamped on the outside, one stamp for each cone. Someone soon hit upon the idea that the cones weren’t needed at all. Instead of having a pouch filled with five cones with five marks written on the outside, why not just write those five marks on a clay tablet and do away with the cones altogether? This is exactly what happened. This development of keeping track on clay tablets had ramifications beyond arithmetic, for with it the idea of writing was also born.

The Egyptians were the first civilization to invent different symbols for different numbers. They had a symbol for one, which was just a line. The symbol for ten was a rope; the symbol for a hundred, a coil of rope. They also had symbols for a thousand and ten thousand. The Egyptians were the first to dream up the number one million, and its symbol was a prisoner begging for forgiveness: a person on his knees, hands upraised in the air, in a posture of humility.

Egyptian number symbols

Greece made further contributions to the world of numbers and counting, much of it under the guidance of Pythagoras. He studied in Egypt and upon returning to Greece established a school of mathematics, introducing Greece to mathematical concepts already prevalent in Egypt. Pythagoras was the first man to come up with the idea of odd and even numbers. To him, the odd numbers were male; the evens were female. He is most famous for his Pythagorean theorem, but perhaps his greatest contribution was laying the groundwork for Greek mathematicians who would follow him.

Pythagoras was one of the world’s first theoretical mathematicians, but it was another famous Greek mathematician, Archimedes, who took theoretical mathematics to a level no one had ever taken it to before. Archimedes is considered to be the greatest mathematician of antiquity and one of the greatest of all time. Archimedes enjoyed doing experiments with numbers and playing games with numbers. He is famous for inventing a method of determining the volume of an object with an irregular shape. The answer came to him while he was bathing. He was so excited he leapt from his tub and ran naked through the streets screaming “Eureka!” which is Greek for “I have found it.” Archimedes made many, many other mathematical contributions, but they are too numerous to mention here.

The Greeks’ role in mathematics ended, quite literally, with Archimedes. He was killed by a Roman soldier during the Siege of Syracuse in 212 BC, and thus ended the golden age of mathematics in the classical world.

Under the rule of Rome, mathematics entered a dark age, for a couple of reasons. The main reason was that the Romans simply weren’t interested in mathematics (they were more concerned with world domination); secondly, Roman numerals were so unwieldy that they couldn’t be used for anything more complicated than recording the results of calculations. The Romans did all their calculating on a counting board, an early version of the abacus, and because of that, Roman mathematics couldn’t, and didn’t, go far beyond adding and subtracting. Their use of numbers was good for nothing more than simple counting, no more advanced than the notches on the Ishango Bone. There’s a good reason there are no famous Roman mathematicians.

The next big advance (and it was a huge advance) in the world of numbers and mathematics came around 500 AD. It would be the most revolutionary advance in numbers since the Sumerians invented mathematics. The Indians invented an entirely new number: zero. 

Though humans have always understood the concept of nothing, or of having nothing, the concept of zero was only fully developed in India in the fifth century AD. Before then, mathematicians struggled to perform the simplest arithmetic calculations. Today zero, both as a numeral and as a concept meaning the absence of any quantity, allows us to perform calculus, solve complicated equations, and build computers.

Under Hinduism, the Indians possessed concepts such as Nirvana and eternity, very abstract ideas that need abstract math to help describe them. The Indians needed a way to express very large numbers, and so they created a method of counting that could deal with them. It was they who created a different symbol for every number from one to nine. These are known today as Arabic numerals, but they would more properly be called Indian numerals, since it was the Indians who invented them.

Once zero was invented it transformed counting and mathematics, in a way that would change the world. Zero is still considered India’s greatest contribution to the world. For the first time in human history the concept of nothing had a number.

Zero, by itself, wasn’t necessarily all that special. The magic happened when you paired it with other numbers. With the invention of zero, the Indians gained the ability to make numbers infinitely large or infinitely small, and that enabled Indian scientists to advance far ahead of other civilizations that didn’t have zero, thanks to the extraordinary calculations that could be made with it. For example, Indian astronomers were centuries ahead of the Christian world. With the help of the very fluid Arabic numerals, Indian scientists worked out that the Earth spins on its axis and that it moves around the sun, something Copernicus wouldn’t figure out for another thousand years.

The next big advance in numbers was the invention of fractions, around 762 AD, in what is now Baghdad. This does not mean that earlier civilizations had no concept of fractions; they did. But their symbols and representations were so cumbersome that it was very difficult to do even simple calculations. It was adherence to the Koran and the teachings of Islam that led to the invention of fractions in the form we use now. The Koran teaches that the possessions of the deceased have to be divided among the descendants, but not equally: the women descendants receive a lesser share than the men. Working all of that out required fractions, and prior to 762 AD there was no system of mathematics sophisticated enough to do a proper job of it.

The number of symbols, or numerals, used to represent numbers is the base of that particular number system. The most common is base-10, the decimal system, with the numerals 0, 1, 2, 3, 4, 5, 6, 7, 8 and 9. With these 10 numerals, any number, big or small, can be represented. As an analogy, the English alphabet has 26 letters; with these 26 letters, any English word you can think of can be written down.

But other systems aside from decimal have been used by different civilizations in different time periods. Base-12, called the duodecimal or dozenal system, has been used at one time or another, which is why we still buy some things by the dozen. Base-60, or sexagesimal, was first used by the Sumerians, passed down to the ancient Babylonians, and is still in use today, in a modified form, for measuring time, angles and geographic coordinates.

The base-5, or quinary, system was not very popular, but some civilizations used it in combination with the decimal system, producing what is called a biquinary system. A good example of this is the Roman system, where the numbers 1, 5, 10, 50, 100, 500 and so on were assigned different symbols.

The advent of computers brought newer number systems into use. The binary system (base-2), which uses only two numeral symbols, zero (0) and one (1), is considered the computer’s natural language because it corresponds to the dual states of the computer’s electrical components: ON or OFF, negative or positive. The octal (base-8) and hexadecimal (base-16) systems are widely used by computer designers and programmers. If the binary system is the computer’s natural language, humans have the most affinity for the decimal system, owing to the ten fingers we use for counting. The octal and hexadecimal systems serve as a transition for binary-to-decimal conversion, and vice versa, during man-machine interaction.
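As a small aside (my illustration, not part of the original essay), these number systems are easy to see in action. The short Python sketch below writes one quantity in the binary, octal and hexadecimal systems described above and converts each form back to decimal.

```python
# The same quantity written in the bases mentioned above.
# Python's bin(), oct() and hex() return strings with the prefixes
# 0b, 0o and 0x marking the base.

n = 2635  # a decimal (base-10) number

print(bin(n))  # 0b101001001011  (binary, base-2)
print(oct(n))  # 0o5113          (octal, base-8)
print(hex(n))  # 0xa4b           (hexadecimal, base-16)

# int() converts back to decimal, given the digits and the base.
assert int("101001001011", 2) == n
assert int("5113", 8) == n
assert int("a4b", 16) == n
```

Note how each octal digit stands for three binary digits and each hexadecimal digit for four, which is exactly why these two systems serve as a convenient bridge between binary and decimal.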

With the invention of the number zero came the idea of positional, or place-value, notation, where the value of a numeral or digit depends on its position among the group of digits representing a number. In the decimal system, the rightmost position is called the ones, next to it on the left is the tens, followed by the hundreds, then the thousands, and so on. For example, the number 2635 can be read as “two thousands, six hundreds, three tens and five ones.”

We can extend this idea by placing a decimal point right after the ones position. Every digit after the decimal point represents a fractional part of a whole: the first position represents tenths, the next hundredths, then thousandths, and so on. For example, the number 1.5 means one and five tenths, or 1 5/10. But since 5 is half of ten, 1.5 also means “one and a half,” or 1½. Here’s another example: 0.465 is equal to 465/1000 and should be read as 465 thousandths, since three positions after the decimal point are used. With this notation, any number, however large or small, can be represented by simply adding more positions to the left or to the right of the decimal point.
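Place-value notation, including the fractional positions, can be checked mechanically. Here is a minimal Python sketch (my addition, not the author's) that rebuilds the number 2635.465 from its digits and their powers of ten, using the standard-library Decimal type to avoid floating-point rounding.

```python
from decimal import Decimal

# Each digit paired with its place: 10**3 is thousands, 10**0 is ones,
# 10**-1 is tenths, 10**-3 is thousandths.
digits_and_places = [
    (2, 3), (6, 2), (3, 1), (5, 0),   # 2000 + 600 + 30 + 5
    (4, -1), (6, -2), (5, -3),        # 0.4 + 0.06 + 0.005
]

total = sum(Decimal(d) * Decimal(10) ** p for d, p in digits_and_places)
assert total == Decimal("2635.465")
```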

Convenient as it is, positional notation reaches its limit of usefulness as the numbers we deal with become increasingly large. If we are dealing with thousands, we need only 4 digits to represent each number. As we increase our numbers to millions, we need 7 digits, still easy to remember, like our phone numbers without the area code. Billions require 10 digits, which our ordinary 8-digit calculators can no longer handle. But nowadays, government accountants manage national budgets in the billions and trillions, while physicists and astronomers deal with numbers much, much greater than that. It is therefore clear that we need new notations to represent large numbers.

Let us start by looking at the repeated multiplication of a number by itself. For example, if we multiply 6 by 6, we can easily calculate it mentally to be 36. Numerically, we say: 6 x 6 = 36. At this early point, let me introduce a new notation to represent this kind of mathematical operation. It is called exponential notation. In this notation, 6 x 6 is represented as 6². The number 6 here is called the base, and the number 2, which is written a little higher than the base, is called the exponent. The symbol 6² should be read as “six raised to the power of 2.” In general, the symbol xⁿ should be read as “x raised to the power of n,” where x and n represent any numbers.


 Example of exponential notation with 6 as the base.

The figure above is an example of exponential notation when the base number is 6. The exponents 0 and 1 are extensions that can easily be proved mathematically and are included here for completeness. Next, let’s take a look at exponential notation when the base is 10.


 Example of exponential notation with 10 as the base.

Interestingly, exponential notation becomes highly intuitive when the base number is 10. From the figure above, we can easily see that the exponent is equal to the number of 0’s after the 1 when the number is written out in full. Exponential notation with base 10 is so widely used by mathematicians and scientists that it is called scientific notation.
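Scientific notation is built into most programming languages. A brief Python illustration (mine; the physical constants are just familiar examples, not from the essay):

```python
# "e" notation is scientific notation: the number after the e is the
# power-of-ten exponent.
avogadro = 6.022e23            # 6.022 times 10 to the 23rd power
electron_mass_kg = 9.109e-31   # a very small number: negative exponent

assert 1e3 == 1000             # exponent 3: a 1 followed by three zeros

# Formatting a large number in scientific notation:
print(format(123456789, ".3e"))  # prints 1.235e+08
```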

In 1938, the mathematician Edward Kasner asked his 9-year-old nephew, Milton Sirotta, what would be an appropriate name for the number written as 1 followed by 100 zeros (10¹⁰⁰). After a short thought, Milton replied that such a number could only be called something as silly as a “googol.” The name stuck, and the 9-year-old Milton earned his place in the annals of mathematics. The googol is much greater than the total number of elementary particles in the entire universe, which is only about 10⁸⁰.
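Because Python integers have arbitrary precision, a googol can be computed and inspected exactly. A small sketch (my addition):

```python
googol = 10 ** 100

# Its decimal expansion is the digit 1 followed by 100 zeros,
# 101 digits in all.
assert str(googol) == "1" + "0" * 100
assert len(str(googol)) == 101

# Far larger than the roughly 10**80 elementary particles
# in the universe mentioned above.
assert googol > 10 ** 80
```

A googolplex (10 raised to a googol), by contrast, cannot even have its digits written out: the string alone would need a googol characters.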

Later, Kasner coined the term googolplex as the name for the much larger number written as 1 followed by a googol zeros. To many people, this is the largest number with a name. The noted astronomer Carl Sagan, in episode 9 of his TV series Cosmos, pointed out that a googolplex has so many zeros that there is not enough room to write them all out even in the entire volume of the known universe.

This chapter would not be complete without the topic of infinity. Infinity is not a real number; it is a concept of something unlimited, endless, without bound. Its common symbol, “∞”, called the lemniscate, was invented by the English mathematician John Wallis in 1657.

Early mathematicians had some vague notions of infinity, although they did not know how to deal with it. The ancient Greeks, particularly Zeno of Elea (c. 490 - 430 BCE), hinted at it by constructing paradoxes that resulted in contradictions when finite reasoning was applied to essentially infinite processes. In general, the Greeks had such immense difficulties with infinity that they never could quite accept it as we do today. Their inability to deal with infinity and infinite processes may be considered one of the greatest failures of Greek mathematical thought.

Following the Greeks, the Arabs and then the European mathematicians continued to dabble with infinity and infinite processes. After Wallis invented its symbol, the concept of infinity caught on with other mathematicians and, in a way, made its entrance into the world of mathematics, although it was only in the 19th century that Georg Cantor (1845-1918) formally defined it. The acceptance of infinity as a mathematical object resulted in great advances in different branches of mathematics: calculus, complex algebra, Fourier series and set theory, among others.

Today, our mathematics is so advanced and so powerful that we can predict the weather, or pinpoint the location of any person or object anywhere in the world with amazing accuracy. Astronomers train their telescopes on faraway stars and galaxies, calculate their distances and densities, and determine their chemical composition. We have developed mathematical models that predict the existence and behavior of subatomic particles long before we obtain empirical evidence of their presence. Finally, we now have mathematical models that describe how the universe came into existence and how it will end.


Leaves And Quarks


When I was a kid, I loved to hang out at the second-floor veranda of my grandparents’ house. There, different varieties of potted plants (bougainvilleas, ferns, cacti, orchids, roses and many others) were neatly arranged, regularly watered and meticulously taken care of by my three unmarried aunts. I would pick a large leaf from among the plants and start doing what had become a ritual for me: I would divide the leaf into two, toss away one of the halves, and keep the other to be divided again into two. The cycle would go on for some time until the portion of the leaf left in my hands was too small to be divided further.

Then questions and possibilities would begin flooding my mind: if there were a way of breaking this leaf fragment down into smaller and still smaller pieces, would it come to a point where the remaining piece is no longer a leaf? What are leaves made of? What are the flowers and the trees made of? And the mountains? Then I would realize that my quest had come to a dead end, and my young mind would begin to wander elsewhere.

When I was in grade school, our science teacher told us that everything, including us, is composed of matter. Anything that has weight and occupies space is matter: leaves, trees, rocks, water... Even the air, which we cannot see, is matter because it occupies space. But what is matter made of?

Two thousand four hundred years ago, the Greek philosopher Democritus was asking a similar question: could matter be divided into smaller and smaller pieces forever, or was there a limit to the number of times a piece of matter could be divided? After spending countless hours and days (or perhaps years) pondering this fundamental question, he came up with a theory: matter could not be divided into smaller and smaller pieces forever; eventually the smallest possible piece would be obtained, and this piece would be indivisible. He named the smallest piece of matter atomos, meaning “cannot be cut.”

However, only a few learned Greeks of the time accepted Democritus’ position. Another school of thought, championed by Aristotle, supported the more intuitive and commonly held belief that every substance found in our world can be derived from, or is composed of, four elements: earth (meaning soil), water, fire and air. He contended that these four elements were not made of atoms but were continuous. Because Aristotle was more influential, his idea was widely accepted, while Democritus’ unpopular treatise on “atomism” was forgotten and his scholarly works were consigned to the back shelves of the great libraries of the ancient world for centuries.

As the Renaissance transformed the cultural landscape of Europe starting in the 14th century, the scientific world also underwent its own renaissance, especially after the invention of the microscope. There was a renewed interest in the study of atomism as preserved in the works of Democritus and of his follower Epicurus. Results of repeated and replicated experiments in the leading laboratories of the time consistently showed that matter is indeed made up of smaller components. Thinking that they had finally discovered the elemental component proposed by Democritus, scientists called that component the “atom.” Thus Democritus was vindicated, and the Aristotelian physics that had reigned supreme in the centers of learning since the glorious days of the Greeks was unceremoniously dethroned. But Aristotle’s idea was not totally rejected: his four-element concept is now reinterpreted as the four states of matter: solid (earth), liquid (water), gas (air) and plasma (fire).

The discovery of the atom paved the way for fast advances in the fields of physics, chemistry and materials science, and led to a greater understanding of electricity and magnetism. Intense experimentation and observation resulted in the discovery of different types of atoms. Scientists found that atoms of different substances have different weights, while atoms of one substance have uniform weights. The atomic weight therefore became an identifying property of a substance. A substance composed of only one kind of atom is called a “pure substance” or element, while a substance composed of two or more different kinds of atoms is called a “composite” or compound.

Several elements were identified among the common substances that man had been familiar with since the dawn of civilization. Iron, silver, copper and gold are among those found to be pure, and therefore they are elements. Some gases, too, were identified as elements: hydrogen, oxygen and nitrogen. Chemists and physicists began constructing a table called the periodic table, where elements were placed in logical order according to atomic weight and other characteristics. Today, you can see this periodic table prominently displayed on the walls of chemistry classrooms and laboratories. You can also find it printed on the inside back cover of most chemistry textbooks.

In 1807, the English chemist John Dalton laid down five propositions describing the atom, which later became known as Dalton’s atomic theory:

  1. All matter is made of atoms.
  2. All atoms of a given element are identical in mass (or weight) and properties.
  3. Atoms are indivisible and indestructible.
  4. Compounds are formed by a combination of two or more different kinds of atoms.
  5. A chemical reaction is a rearrangement of atoms.

Long before this, there was a field of study called alchemy. Its adherents, who counted among them the notable Sir Isaac Newton, believed that there could be a formula to transform a base metal like lead or iron into gold. They tried and experimented with different procedures. Some even resorted to magic. There were many false claims of iron turning into gold, and believers were fooled over and over again.

The proper understanding of the atom marked the end of alchemy. But not all of the alchemists’ research and investigation came to naught. Many chemists started as alchemists, and many of their discoveries were reinterpreted in the light of the atomic theory. We can therefore safely say that the pseudoscience of alchemy was the precursor to the modern science of chemistry.



By the late 1800s, Physics was now considered a mature science. There were those who believed there wasn’t much more to do than smooth out some rough edges in nature’s plan. There was a sensible order to things, a clockwork universe governed by Newtonian forces, with atoms as the foundation of matter.

 But then strange things started popping up in laboratories: x-rays, gamma rays, a mysterious phenomenon called radioactivity. Physicist J. J. Thomson discovered the electron. Atoms were not indivisible after all, but had constituents. Was the atom, as Thomson believed, a pudding, with electrons embedded like raisins? No. In 1911 physicist Ernest Rutherford announced that atoms are mostly empty space, their mass concentrated in a tiny nucleus orbited by electrons.

There was a growing interest in the study of hydrogen, the lightest and smallest of the elements. Scientists discovered that the hydrogen atom has only one electron orbiting the nucleus. The electron is a very small particle with negligible mass and a negative electric charge. They reasoned that since the electron orbits the nucleus, there must be a force of attraction between the electron and whatever resides in the nucleus. They cited as an analogy the earth-moon system, and the solar system in general, which had been well understood since Newton’s time.

The moon orbits the earth because the moon is moving in a direction perpendicular to the line between the moon and the earth. The attractive force between the two bodies bends the moon’s otherwise straight-line motion into a curve, resulting in a circular orbit around the earth. In the case of the earth-moon system, the attraction is due to the gravitational force; in the case of the hydrogen atom, it is due to the electric force.
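The force balance behind this analogy can be put in numbers. As a rough sketch (my own illustration, not from the essay; the constants are standard textbook values), setting the gravitational pull equal to the centripetal force required for circular motion gives the moon’s orbital speed:

```python
import math

# Standard textbook values (assumed for this illustration)
G = 6.674e-11        # gravitational constant, N*m^2/kg^2
M_EARTH = 5.972e24   # mass of the earth, kg
R_MOON = 3.844e8     # mean earth-moon distance, m

# For a circular orbit, gravity supplies the centripetal force:
#   G*M*m / r^2 = m*v^2 / r   =>   v = sqrt(G*M / r)
v = math.sqrt(G * M_EARTH / R_MOON)
print(f"Moon's orbital speed: about {v:.0f} m/s")  # roughly 1 km/s
```

The same force-balance reasoning, with the electric (Coulomb) attraction in place of gravity, is what the early atomic models applied to the electron and the nucleus.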

Since, like magnets, opposite electric charges attract while like charges repel, they reasoned correctly that the particle in the nucleus of hydrogen carries a positive charge equal in magnitude to the electron’s negative charge, making the atom balanced and stable; otherwise the hydrogen atom would have disintegrated long ago. And since the mass of the electron is almost zero, the mass of the particle in the hydrogen nucleus accounts for practically the total mass of the hydrogen atom. They called that particle the proton. The proton, therefore, is a particle with a positive electric charge and a mass nearly equal to that of the hydrogen atom. They assigned it a value of one atomic mass unit (amu), which became the standard for measuring all other atoms.

Next, researchers discovered that helium, the second lightest element in the periodic table, has two electrons orbiting its nucleus. But when they measured the weight of helium, it was found to be 4 amu. That means the helium nucleus contains four proton-like particles, but only two of them carry a positive charge to balance the negative charges of the two orbiting electrons.

In 1920, Ernest Rutherford proposed the existence of a proton-like particle with no electric charge. He called it the neutron. After years of experimentation, James Chadwick succeeded in isolating and detecting the neutron; the year was 1932. The discovery of the neutron solved the seeming discrepancies between theoretical calculations and experimental measurements. A new number was then assigned to each element, called the atomic number, which is equal to the number of protons in the nucleus (and also to the number of orbiting electrons). The amu continued to designate the mass of the atom, which is the sum of its protons and neutrons. For hydrogen, both the atomic number and the amu are equal to 1. Helium has atomic number 2 and an amu of 4. The heaviest naturally occurring element, uranium, has atomic number 92 and an atomic mass of 238 because its nucleus has 92 protons and 146 neutrons.
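The bookkeeping described above can be sketched in a few lines of code (a hypothetical illustration, not part of the essay): the atomic number is simply the proton count, and the mass in amu is the proton count plus the neutron count, since the electrons weigh almost nothing.

```python
# Nucleon counts for the three elements mentioned above
NUCLEI = {
    "hydrogen": {"protons": 1,  "neutrons": 0},
    "helium":   {"protons": 2,  "neutrons": 2},
    "uranium":  {"protons": 92, "neutrons": 146},
}

for name, n in NUCLEI.items():
    atomic_number = n["protons"]             # also the number of electrons
    mass_amu = n["protons"] + n["neutrons"]  # electron mass is negligible
    print(f"{name}: atomic number {atomic_number}, mass {mass_amu} amu")
# hydrogen: atomic number 1, mass 1 amu
# helium: atomic number 2, mass 4 amu
# uranium: atomic number 92, mass 238 amu
```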

 

The figure shows an atom of the second lightest element, helium, which has two protons (colored red) and two neutrons (colored green) in its nucleus and two electrons orbiting around it.

The discovery of the neutron led to the realization that the nucleus, after all, is composite, and that breaking it down into its component parts is a possibility. But before scientists could take steps to break up the atom, they first had to understand what holds the nucleus together. They had already learned from electricity and magnetism that like charges repel and opposite charges attract. Under normal conditions, the positively charged protons within the nucleus should repel each other, and the nucleus should have disintegrated long ago. The only possible explanation for a group of protons sticking together in the nucleus is that they are held there by a very strong force, which was named the strong nuclear force. It is logical, therefore, that to break up the atom one must bombard the nucleus with energy greater than the nuclear force that holds it together.

In 1938, the German chemist Otto Hahn, a former student of Rutherford, bombarded a uranium atom with neutrons and successfully split the heavy uranium nucleus into two lighter nuclei of approximately equal size. The process was called nuclear fission, by analogy with biological fission in living cells. In the breakup of the atom, part of the energy binding the protons and neutrons together is released as an unimaginable amount of energy. That is how the atomic bomb acquires its destructive power. Today, nuclear fission occurs every day, under very controlled conditions, inside the reactors of nuclear power plants in many parts of the world for the purpose of generating electricity.

The successful division of the atom into smaller components violated the third of Dalton’s 1807 propositions, and scientists realized that they had concluded too soon. The thing they called the atom is not the same atomos that Democritus had in mind. Nevertheless, they continued to call it the atom, since the name had already been universally accepted, but the search for the fundamental, indivisible component of matter went on.

After the atom was successfully split, the scientific community thought that perhaps the fundamental components of matter had been found in protons and neutrons. Surely these particles were already too small to be broken further. But again, they spoke too soon. Not long after, researchers discovered that protons and neutrons are made up of still smaller particles called quarks.

How did the scientists discover all these? By using techniques similar to those of Thomson, Rutherford, Hahn and other physicists since the last century: smashing atoms and sub-atomic particles into other particles inside devices called accelerators, then taking inventory of all the debris, in the form of smaller particles and released energies, detected by their instruments. The early experiments consisted of trying to break up a heavy atom like uranium by bombarding it with protons and, later, neutrons. A breakup is successful only when the energy of the oncoming particle is greater than the strong nuclear force that holds the nucleus together.

Once they succeeded in breaking up the atomic nucleus into its component protons and neutrons, the next step was to determine whether the proton and neutron could be cracked, too. But breaking up a proton or neutron is a much more daunting task than breaking up an atom. Aside from the fact that the proton or neutron is much smaller and thus a more elusive target, the force that holds the quarks together inside the proton is much stronger. To accomplish the breakup, the colliding particles must possess much greater energy. It has long been established that speed is convertible to energy: the higher the speed, the greater the energy. So, to raise the energy of the colliding particles, they must be moving at very high velocities when they smash into each other.
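As a rough illustration of the speed-energy relation (a classical sketch only, and my own addition; particles in real accelerators move near the speed of light, where relativistic formulas take over), the classical kinetic energy grows as the square of the speed:

```python
def kinetic_energy(mass_kg, speed_m_s):
    # Classical (non-relativistic) kinetic energy: KE = (1/2) * m * v^2
    return 0.5 * mass_kg * speed_m_s ** 2

e_slow = kinetic_energy(1.0, 10.0)  # 50 J
e_fast = kinetic_energy(1.0, 20.0)  # 200 J
print(e_fast / e_slow)  # 4.0: doubling the speed quadruples the energy
```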

To attain these extremely high velocities, the particles have to start far from each other to give them sufficient time to accelerate. It is similar to a motorist driving on a city street and wanting to merge onto a high-speed interstate highway. The motorist has to increase his speed to match the highway traffic before merging; otherwise he will be in danger of being hit. To make this possible, highway builders construct ramps that connect a city street to the highway. The longer the ramp, the higher the speed attained by the merging vehicle.

In the case of the colliding particles, they travel inside specially designed tubes called accelerators, surrounded by powerful magnets. Whereas a car’s acceleration is powered by its engine, the particles accelerate due to the action of the electromagnetic force. The second function of the electromagnetic field is to keep the particles in their designated trajectories to ensure a hit.

The first accelerator, called the cyclotron, was invented and patented by Ernest Lawrence of the University of California, Berkeley in 1932. It was so compact it could fit in one’s pocket. But as experiments required longer and longer distances for the particles to attain higher and higher velocities, larger and larger accelerators were constructed. Today, there are more than 30,000 accelerators of varying sizes and shapes in operation around the world. The second most powerful accelerator in the world, 3.9 miles in circumference, is situated underground in Batavia, Illinois, 27 miles west of Chicago, and is managed by Fermilab.

With this worldwide effort to find the fundamental stuff that makes up matter, researchers soon discovered that sub-atomic space is inhabited by so many weird and exotic families of particles and anti-particles that they coined the term “particle zoo.” Whether one of these particles is the fundamental stuff they have been looking for, they are not yet sure, and many more experiments have to be conducted. At this level of smallness, scientists also discovered that the particles and the forces interacting on them start to become interchangeable; that is, they begin to merge and thereby lose their distinctions. One of the problems particle physicists were tackling at the time was why, in one family of particles with the same characteristics, some have mass and others have none.

In 1964, the physicist Peter Higgs of the University of Edinburgh proposed a mechanism that purportedly explains this phenomenon. The Higgs mechanism predicted the existence of a particle which gives mass to everything. The scientific world embraced Higgs’ proposal and named the yet-to-be-found particle the Higgs boson. The mainstream media dubbed it the “God particle.” Higgs explained that this particle permeates all space, which gives rise to the idea of a Higgs field, or, in layman’s terms, a Higgs ocean. If the existence of the Higgs particle is proven true, then “empty space” is not so empty after all. We are all immersed in the Higgs ocean, which gives mass to our bodies, just as the air in the earth’s atmosphere causes drag on moving objects like cars and golf balls.

To prove the existence (or non-existence) of the Higgs particle, the European Organization for Nuclear Research, known by its acronym CERN, constructed the world’s largest and most powerful particle accelerator, 27 kilometers in circumference, in a tunnel as deep as 574 feet beneath the Franco-Swiss border near Geneva, Switzerland. They called it the Large Hadron Collider (LHC). The LHC, a collaborative project of CERN and hundreds of universities and laboratories around the world, is the most complex machine man has ever built. It took thousands of scientists, engineers and technicians decades to plan and build, at a cost of ten billion dollars.

On July 4, 2012, after a series of carefully planned experiments using the LHC, CERN announced that it had sufficient evidence to conclude that the Higgs boson had been found. In the years to follow, those experiments would be repeated and replicated to confirm the particle’s existence with certainty. In addition, scientists would continue to conduct related experiments to determine the particle’s other properties. Those properties have already been predicted by the standard model of particle physics, but experimental data are needed to confirm the results of the mathematical calculations.

Have we finally found the fundamental building block of the universe? Probably we have. But then, it might be too early to say. One thing is sure: we have come a long way since Democritus’ time. Personally, I have come a long way from my innocent ponderings about leaves and flowers to what lies beyond quarks, Higgs bosons and the meaning of reality. And as long as we do not lose our imagination, the world around us will continue to be a source of awe and wonder.

  

My Father: Some Poignant Recollections

After I completed elementary grades, my father left farming and worked at a timber company in Bayugan, some 60 kilometers south of Cabadbara...