This is part 4 of 6.
In the 1720s the German professor Johann Heinrich Schulze (1687–1744) showed that certain silver salts, most notably silver chloride and silver nitrate, darken in the presence of light, not heat, as some scholars had previously believed (the process had been known for some time, but not its cause). The pioneering Swedish chemist Carl Wilhelm Scheele (1742–1786) demonstrated in 1777 that the violet rays of the prismatic spectrum were the most effective in decomposing silver chloride.
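In modern terms (a gloss of mine, not something Schulze or Scheele could have written), the darkening is a photochemical decomposition: light reduces the silver halide to finely divided metallic silver, which appears dark. Schematically, for silver chloride:

\[ 2\,\mathrm{AgCl} \;\xrightarrow{\;h\nu\;}\; 2\,\mathrm{Ag} + \mathrm{Cl_2} \]

The same reaction underlies all the silver-based processes described below; silver bromide, the later halide of choice, behaves analogously.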
William Herschel discovered infrared radiation because thermometers, which had recently been developed in Europe, registered a higher temperature just beyond the red end of the visible spectrum of sunlight. The German chemist Johann Wilhelm Ritter (1776–1810), after hearing of Herschel’s discovery of 1800, identified in 1801 another “invisible” radiation, which we now know as ultraviolet (UV). He experimented with silver chloride, since blue light was known to cause a stronger reaction in it than red light did, and he found that the area just beyond the violet end of the visible spectrum produced the most intense reaction of all.
During the 1790s Thomas Wedgwood (1771–1805), an early experimenter in photography together with the leading English chemist Humphry Davy (1778–1829), sun-printed “profiles” of objects onto paper and leather moistened with silver nitrate, but he could not “fix” these images. According to Davy’s 1802 report, they were initially successful in producing a negative image (a white silhouette on a dark background), but unless the picture was kept in the dark, the image eventually vanished. There are those who claim that Wedgwood should be credited as the inventor of photography, but they remain a minority.
The first universally accepted permanent images were recorded by the Frenchman Joseph Nicéphore Niépce (1765–1833) in the 1820s. I have found conflicting information in the literature as to exactly when Niépce recorded his first permanent image. Some say that his heliograph “Boy Leading His Horse” from 1825 is the world’s oldest photograph. In 1827 he successfully produced a camera obscura view of the courtyard of his country estate at Le Gras in Saint-Loup-de-Varennes on a bitumen-coated pewter plate, an exposure which took eight hours to complete. Photography was still hampered by very long exposure times. Only with later technical advances came the ability to expand the repertoire of views from architecture to cityscapes, street scenes, aerial photography etc. Niépce eventually teamed up with Louis Jacques Mandé Daguerre (1787–1851), a successful Parisian theater designer and painter of the popular spectacle known as the diorama, the closest thing to a modern movie theater in those days. Together they tried to create easier ways to do photography. Scholar Eva Weber in Pioneers of Photography, page 6:
“After Niépce’s death in 1833, Daguerre found a way to sensitize a silver-coated copper plate with iodine fumes and to produce a direct positive image without the use of Niépce’s bitumen coating. A crucial success came in 1835 when he discovered the phenomenon of the latent image: the camera image does not appear during the exposure of the plate, but is revealed later only during the chemical development process. At the same time, he found a way to bring out this latent image by using mercury vapor, considerably shortening the required exposure time. The fixing process — making the image permanent — was the final hurdle Daguerre surmounted in 1837 by washing the exposed and developed plate with a solution of salt water. In March 1839 he changed the fixing solution to hyposulphite of soda, a method discovered in 1819 by English scientist Sir John Herschel (1792-1871).”
The astronomer and chemist Sir John Herschel, son of Sir William Herschel, coined the term “photography” and made contributions to its development. In 1839 in France, a crowded meeting of scientists and others observed Daguerre’s demonstration of the daguerreotype process, the first form of photography to enjoy some commercial success. However, Daguerre was not the only person working with the possibilities of photography, which clearly was an invention whose time had come. Weber again, page 9:
“In 1834, William Henry Fox Talbot (1800-1877), an English country gentleman scholar and scientist, began trying to fix a camera obscura image on paper. By 1835 he was making exquisite ‘photogenic drawings,’ as he called them, or contact prints, by placing botanical specimens and pieces of lace on sheets of good quality writing paper sensitized with silver chloride and silver nitrate, exposing them to sunlight, and then fixing them with a rinse of hot salt water. (Like Daguerre, he also changed his fixative to hyposulphite of soda in 1839 on Herschel’s recommendation). He also made a small negative image of his home, Lacock Abbey, on sensitized paper in 1835. Temporarily losing interest in photography he turned his attention to other studies. When news of Daguerre’s discovery reached him, he went back to experimenting, independently discovering the latent image and its development in 1840, as well as the process of making multiple positive paper prints from a single paper negative. He worked hard to perfect his paper process and patented it in February 1841 as the calotype (from the Greek, meaning beautiful image), also known as the talbotype.”
Talbot became the inventor of the negative/positive photographic process, the precursor to most photographic processes used in the nineteenth and twentieth centuries. He had independently devised photogenic drawing paper by 1835. In 1839 Talbot noted the greater sensitivity of silver bromide — later the chief constituent of all modern photographic materials — made possible by the isolation of the chemical element bromine by the French chemist Antoine Jerome Balard (1802-1876) and the German chemist Carl Jacob Löwig (1803–1890) independently of each other in 1825-26. Talbot made another discovery in 1840: an invisibly weak latent picture in silver iodide could be brought out by gallic acid, thus greatly increasing the speed of his camera photography, from hours to minutes. From then on, a quest was mounted for shorter camera exposures and higher resolution.
The daguerreotype was much more popular than the calotype in the early years, but Talbot, in contrast to Daguerre, remained active and continued to experiment. His most significant discovery, the reproducible negative, came to be applied universally only with the development of the wet-plate collodion process in 1851. There were other early pioneers, too. Eva Weber, page 10:
“In 1833 Antoine Hercules Florence, a French artist in Brazil, started to experiment with producing direct positive paper prints of drawings. Most importantly, Hippolyte Bayard (1801-1887), a French civil servant in the Ministry of Finance, began experimenting in 1837 and by 1839 had created a method for making direct positive prints on paper. Official support for the daguerreotype overshadowed Bayard’s achievement. Discouraged but persistent, he went on to work with the calotype and other photographic processes. As a photographer he produced a large body of high quality work, covering a wide range of subject matter from still lifes, portraits, cityscapes, and architectural views to a record of the barricades of the 1848 revolution. Other pioneers include Joseph Bancroft Reade, an English clergyman, and Hans Thøger Winther, a Norwegian publisher and attorney.”
Further technical improvements were made by the French artist Gustave Le Gray (1820-1884) and the English sculptor Frederick Scott Archer (1813-1857), among others. Weber, page 14:
“Throughout the nineteenth century, each refinement of the photographic process led to a new flourishing of talented photographers, sometimes in a single region or nation, and at other times globally. It is generally agreed that during the daguerreotype era an exceptionally fine body of work came from the United States. In March 1839 Daguerre personally demonstrated his process to inventor and painter Samuel Morse (1791-1872) who enthusiastically returned to New York to open a studio with John Draper (1811-1882), a British-born professor and doctor. Draper took the first photograph of the moon in March 1840 (a feat to be repeated by Boston’s John Adams Whipple in 1852), as well as the earliest surviving portrait, of his sister Dorothy Catherine Draper. Morse taught the daguerreotype process to Edward Anthony, Albert Southworth and possibly Mathew Brady, all of whom became leading daguerreotypists.”
A daguerreotype by George Barnard (1819-1902) of the 1853 fire at the Ames Mill in New York is the earliest known work of photojournalism. Mathew Brady (1823-1896) became one of the most important photographers during the American Civil War (1861–1865). The English photographer Roger Fenton’s (1819-1869) views of the Crimean War (1853–1856) battlefields are widely regarded as the first systematic photographic war coverage. Much impressive work, from elegant landscapes and street scenes to portraiture, still came from France. In 1858 the French journalist Gaspard-Félix Tournachon (1820–1910), known as Nadar, made the first aerial photographs, of the village of Petit-Becetre, taken from a hot-air balloon 80 meters above the ground. The oldest aerial photograph still in existence is James Wallace Black’s (1825-1896) image of Boston from a hot-air balloon in 1860.
This was also an age of travel photography, facilitated by steamships, railways and cheaper transport, with French photographers taking pictures in Mexico, Central America and Indochina, British in the Middle East, India, China, Japan, etc. For Easterners in the USA, Western views from the frontier were popular and exotic. Edward S. Curtis (1868–1952) recorded the lives of the Native Americans. Photographs of the remarkable Yellowstone area influenced the authorities to preserve it as the country’s first national park in 1872.
The American George Eastman (1854–1932) pioneered the use of celluloid-based roll film, which greatly sped up the process of recording multiple images and opened up photography to amateurs on a wide scale, since cameras were no longer so large, heavy and complicated. He registered the trademark Kodak in 1888. Glass plates remained in use among astronomers and other scientists into the second half of the twentieth century due to their superiority for research-quality imaging. Pluto, for instance, was discovered in 1930 with photographic plates.
There were numerous experiments with moving pictures or “movies” in Europe and in North America, with the French inventor Louis Le Prince (1842-1890) being one of the pioneers, but the brothers Auguste (1862-1954) and Louis Lumière (1864-1948) are usually credited with the birth of cinema for their public screening, with an admission charge, in Paris in December 1895.
The brilliant American Thomas Alva Edison (1847–1931), one of the most prolific inventors in history, played a key role as well. Through his years of working as a telegraph operator he had learned much about electricity, and he developed new techniques for recording sound. However, “records,” the analog sound storage medium we know as gramophone or vinyl records, which remained the most common storage medium for music until Compact Discs (CDs) and the digital revolution of the 1980s and 90s, were patented by the German-born American inventor Emile Berliner (1851–1929) in 1887. James E. McClellan and Harold Dorn in Science and Technology in World History, second edition, page 354:
“In 1895, with their Cinématographe…Auguste and Louis Lumière first successfully brought together the requisite camera and projection technologies for mass viewing, and so launched the motion-picture era. With paying customers watching in theaters — sometimes stupefied at the illusion of trains surely about to hurtle off the screen and into the room — movies immediately became a highly successful popular entertainment and industry. Not to be outdone, the Edison Manufacturing Company quickly adopted the new technology and produced 371 films, including The Great Train Robbery (1903), until the company ceased production in 1918. Sound movies — the talkies — arrived in 1927 with Al Jolson starring in The Jazz Singer; by that time Hollywood was already the center of a vigorous film industry with its ‘stars’ and an associated publicity industry supplying newsstands everywhere with movie magazines. The use of color in movies is virtually as old as cinema itself, but with technical improvements made by the Kodak Company in the film, truly vibrant color movies made it to the screen in the 1930s in such famous examples as The Wizard of Oz (1939) and Gone with the Wind (1939). (Color did not become an industry standard, however, until the 1960s.)”
Photography in natural colors was first achieved by the Scottish scientist James Clerk Maxwell (1831–1879) as early as 1861, but the autochrome process of the Lumière brothers from 1907 was the first to enjoy moderate commercial success. The Russian photographer Sergey Prokudin-Gorsky (1863–1944) developed some early techniques for taking color photographs and documented the Russian Empire between 1909 and 1915. Color photography progressed with research in the synthetic organic chemistry of dyestuffs, and the Eastman Kodak Company produced Kodachrome in 1935, yet it did not become cheap and accessible enough to be the standard until the second half of the twentieth century. Black and white photography remains in use to this day for certain artistic purposes, for instance portraits.
While photography was of great use in arts and entertainment, it became an invaluable tool in numerous scientific disciplines, from medicine via geology and botany to archaeology and astronomy, since it can detect and record things that the human eye cannot see. The Austrian physicist and philosopher Ernst Mach (1838–1916) used it for his investigations in the field of supersonic velocity, and from the 1870s developed photographic techniques for the measurement of shock waves. The Englishman Eadweard J. Muybridge (1830–1904) and the Frenchman Étienne-Jules Marey (1830–1904) invented new ways of recording movement.
In the late twentieth and early twenty-first centuries, traditional photography was gradually replaced by digital techniques. Asian and especially Japanese companies such as Sony played a major role, alongside Western ones, in the digitalization of music, movies and photography. For the creation of photography in the early nineteenth century, however, advances in chemistry had been crucial.
Chemistry developed out of medieval alchemy. In India, alchemy was used in serious metallurgy, medicine, leather tanning, cosmetics, dyes etc. The work of Chinese alchemists facilitated inventions such as gunpowder, which was to revolutionize warfare throughout the world. Although their views differed considerably in the details, scholars in Japan, China, Korea, India, the Middle East and Europe as late as the year 1750 would have agreed that “water” is an element, not a compound of hydrogen and oxygen as we know today. Likewise, the fact that “air” consists of a mixture of several substances was only fully grasped in the second half of the eighteenth century. The easiest way to date when chemistry was born, as distinct from alchemy, is when scholars started talking about “oxygen” instead of “water” as an element. This transition happened in Europe in the late eighteenth century, and only there.
The first seeds of this can be found in Europe during the Scientific Revolution in the seventeenth century, with a new emphasis on experimentation and a more critical assessment of the knowledge of the ancients. The French philosopher Pierre Gassendi (1592–1655) attempted to reconcile atomism with Christianity and thus helped revive the Greek concept of atoms, which, though not totally forgotten, had not been much discussed during the European Middle Ages. Some of the earliest known atomic theories were developed in ancient India in the sixth century BC by Kanada, a Hindu philosopher, and later Jainic philosophy linked the behavior of matter to the nature of the atoms.
The concept of atomism was introduced among the ancient Greeks by Democritus (ca. 460 BC-ca. 370 BC), who believed that all matter is made up of various imperishable, indivisible elements which he called atoma, or “indivisible units.” This view was supported by Epicurus (341 BC–270 BC). The philosopher Empedocles (ca. 490–430 BC) believed that all substances are composed of four elements: air, earth, fire and water, a view which was supported by Aristotle and became known as the Greek Classical Elements. The Chinese had their “five phases,” namely fire, earth, water, metal and wood, and similar, though not identical, ideas were shared by the major Eurasian civilizations. Democritus, however, believed that these elements were themselves composed of atoms, a view that Aristotle did not share. Atomism always remained a minority view among the ancient Greeks.
During the Middle Ages, alchemists became somewhat more sophisticated and could question these Classical Elements, but the vast majority of chemical elements, i.e. substances that cannot be decomposed into simpler substances by ordinary chemical processes, were not identified until the 1800s. There are currently 117 known chemical elements. Several of them are highly radioactive and unstable and about 20 percent of them do not exist in nature.
Ibn Warraq in his books is critical of Islam but gives due credit to scholars within the Islamic world who deserve it, a sentiment I happen to share. One of them is the Persian physician al-Razi (865–925), known in the West as Rhazes, the first to describe the differences between smallpox and measles. Here is the book Why I Am Not a Muslim, page 266:
“Al-Razi was equally empirical in his approach to chemistry. He shunned all the occultist mumbo jumbo attached to this subject and instead confined himself to ‘the classification of the substances and processes as well as to the exact description of his experiments.’ He was perhaps the first true chemist as opposed to an alchemist.”
He considered the Koran to be an assorted mixture of “absurd and inconsistent fables” and was certainly a freethinker, but unlike Ibn Warraq, I still view Rhazes as a committed alchemist who believed in transmutation and the possibility of turning base metal into gold. Another well-known Persian scholar, Ibn Sina or Avicenna (ca. 980-1037) was more skeptical of the possibility of transmutation. After the gifted alchemist Geber in the eighth century, a number of scholars in the Middle East, among them Rhazes, made some advances in alchemy, for instance regarding the distillation of ethanol (alcohol) as a pure compound. Some of Geber’s work was later translated into Latin. Belief in the possibility of transmutation was not necessarily stupid according to the understanding of elements of the time. Here is David C. Lindberg in The Beginnings of Western Science, second edition, page 291:
“Aristotle had declared the fundamental unity of all corporeal substance, portraying the four elements as products of prime matter endowed with pairs of the four elemental qualities: hot, cold, wet, dry. Alter the qualities, and you transmute one element into another….It is widely agreed by historians that alchemy had Greek origins, perhaps in Hellenistic Egypt. Greek texts were subsequently translated into Arabic and gave rise to a flourishing and varied Islamic alchemical tradition. Most of the Arabic alchemical writings are by unknown authors, many of them attributed pseudonymously to Jabir ibn Hayyan (fl. 9th-10th c., known in the West as Geber). Important, along with this Geberian (or Jabirian) corpus, was the Book of the Secret of Secrets by Muhammad ibn Zakariyya al-Razi (d. ca. 925). Beginning about the middle of the twelfth century, this mixed body of alchemical writings was translated into Latin, initiating (by the middle of the thirteenth century) a vigorous Latin alchemical tradition. Belief in the ability of alchemists to produce precious metals out of base metals was widespread but not universal; from Avicenna onward, a strong critical tradition had developed, and much ink was devoted to polemics about the possibility of transmutation.”
The most influential of all medieval alchemical writings in the West was the Summa perfectionis, written in the early fourteenth century by the Franciscan friar Paul of Taranto, who was strongly influenced by Geber. Crucially, he believed that the four Classical Elements exist in the form of tiny corpuscles. This tradition was continued by Daniel Sennert (1572-1637), a German professor of medicine who was an outspoken proponent of atomism. His example in turn influenced the Englishman Robert Boyle (1627-1691).
Boyle’s publication of The Sceptical Chymist in 1661 was an important milestone in alchemy’s evolution towards modern chemistry, but progress was slow and gradual. The German Hennig Brand (ca. 1630–ca. 1710) discovered phosphorus a few years later, but he was still an alchemist, working with transmutation and searching for the “philosopher’s stone.” He recognized the usefulness of the new substance, and although he became the first known discoverer of an element (the discoverers of gold, silver, copper etc. are lost in prehistory), he did not recognize it as a chemical element as we understand the term. Here is Bill Bryson in his highly entertaining book A Short History of Nearly Everything, page 130:
“Brand became convinced that gold could somehow be distilled from human urine. (The similarity of colour seems to have been a factor in his conclusion.) He assembled fifty buckets of human urine, which he kept for months in his cellar. By various recondite processes, he converted the urine first into a noxious paste and then into a translucent waxy substance. None of it yielded gold, of course, but a strange and interesting thing did happen. After a time, the substance began to glow. Moreover, when exposed to air, it often spontaneously burst into flame.”
The substance was named phosphorus, from Greek meaning “light-bearer.” This discovery was utilized further by the Swedish chemist Carl Scheele. Bryson again, page 131:
“In the 1750s a Swedish chemist named Karl (or Carl) Scheele devised a way to manufacture phosphorus in bulk without the slop or smell of urine. It was largely because of this mastery of phosphorus that Sweden became, and remains, a leading producer of matches. Scheele was both an extraordinary and an extraordinarily luckless fellow. A humble pharmacist with little in the way of advanced apparatus, he discovered eight elements — chlorine, fluorine, manganese, barium, molybdenum, tungsten, nitrogen and oxygen — and got credit for none of them. In every case, his finds either were overlooked or made it into publication after someone else had made the same discovery independently. He also discovered many useful compounds, among them ammonia, glycerin and tannic acid, and was the first to see the commercial potential of chlorine as a bleach — all breakthroughs that made other people extremely wealthy. Scheele’s one notable shortcoming was a curious insistence on tasting a little of everything he worked with….In 1786, aged just forty-three, he was found dead at his workbench surrounded by an array of toxic chemicals, any one of which could have accounted for the stunned and terminal look on his face.”
The most famous Swedish chemist is undoubtedly Alfred Nobel (1833-1896). Gunpowder remained the principal explosive from the time the Mongol conquests brought it from China until the chemical revolution in Europe. Nitroglycerin was discovered by the Italian chemist Ascanio Sobrero (1812-1888) in 1847, but it was highly unstable, and its use was banned by several governments following serious accidents. Nobel succeeded in stabilizing it and named the new explosive “dynamite” in reference to its dynamic force. The invention made him a very rich man. The foundation of the various Nobel Prizes, awarded annually since 1901, was laid in 1895 when the childless Nobel wrote his last will, establishing the foundation which carries his name.
The most scientifically important Swedish chemist, however, was Jöns Jacob Berzelius (1779–1848). He and his students discovered several chemical elements, but above all he created a simple and logical system of symbols: H for hydrogen, C for carbon, O for oxygen etc., a system of chemical formula notation which essentially remains in use to this day and lets any compound be written compactly (water, for instance, as H2O). Yet this happened in the nineteenth century. Even an otherwise brilliant man such as Newton, who devoted more time to alchemy than to optics, was clearly an alchemist. He was a deeply religious man, but in a theologically unorthodox way, and looked for hidden information in the Bible. McClellan and Dorn, page 253:
“In the quest after secret knowledge, alchemy occupied the major portion of Newton’s time and attention from the mid-1670s through the mid-1680s. His alchemical investigations represent a continuation and extension of his natural philosophical researches into mechanics, optics, and mathematics. Newton was a serious, practicing alchemist — not some sort of protochemist. He kept his alchemical furnaces burning for weeks at a time, and he mastered the difficult occult literature. He did not try to transmute lead into gold; instead, using alchemical science, he pried as hard as he could into forces and powers at work in nature. He stayed in touch with an alchemical underground, and he exchanged alchemical secrets with Robert Boyle and John Locke. The largest part of Newton’s manuscripts and papers concern alchemy, and the influence of alchemy reverberates throughout Newton’s published opus. This was not the Enlightenment’s Newton.”
The earliest known use of the word “gas,” as opposed to just “air,” has been attributed to the Flemish scholar Jan Baptist van Helmont (1580–1644), who was a proponent of the experimental method. He did not have access to adequate laboratory apparatus to collect the gas. One of the obstacles he faced was the issue of containment. Here are Cathy Cobb and Harold Goldwhite in their book Creations of Fire, page 114:
“Helmont did not always appreciate the volume of gas that would be released in his reactions, so he routinely burst the crude and delicate glassware of the day….However, a Benedictine monk, Dom Perignon, showed that effervescence in his newly invented beverage, champagne, could be trapped in glass bottles with bits of the bark of a special oak tree. The resultant cork was a triumph for celebrants and chemists alike. Another worker, Jean Bernoulli, used a burning lens (a lens used to focus the sun — soon to be standard equipment in the chemist’s repertoire) to ignite gunpowder in a flask. To avoid repeating the shattering experience of Helmont, Bernoulli did his work in an open, rather than a sealed, system, running a tube from the ignition flask to a vat of water. He was able to show in this manner that gases from the reaction occupied a much larger volume than the gunpowder (and became wet in the process). Otto von Guericke designed a practical air pump in the mid-1600s, and armed with this and new techniques for containment — corks and Bernoulli’s vat — a group of young scientists took on the task of determining the qualities of Helmont’s gases.”
The French monk Dom Perignon (1638-1715) did not technically speaking invent champagne, but he was indeed a pioneer in the use of corks to keep the new creation in place. Among the scientists who continued Helmont’s lead were the Englishmen Robert Boyle and Robert Hooke (1635–1703). Hooke compared cork cells he saw through his microscope to the small rooms monks lived in. These were not cells in the modern biological meaning of the term, but when cells were later identified, biologists took over that name from Hooke.
The great contribution of the seventeenth century to the story of wine was bottles and corks. Wine was traditionally shipped and consumed rather quickly, not stored. Wine bottles made of glass were rare as they were expensive and fragile. The Englishman Kenelm Digby (1603–1665) is often credited with creating the modern wine bottle in the 1630s and 1640s. Hugh Johnson explains in The Story of Wine, pages 105-106:
“It now remained to equip them with the perfect stopper. How to plug bottles of whatever sort was a very old problem. The Romans had used corks, but their use had been forgotten. Looking at medieval paintings one sees twists of cloth being used, or cloth being tied over the top. Leather was also used, and sometimes covered with sealing wax. Corks begin to be mentioned in the middle of the sixteenth century. It has often been suggested, and may well be true, that cork became known to the thousands of pilgrims who tramped across northern Spain to Santiago de Compostella. It seems that the marriage of cork and bottle, at least in England, took place by degrees over the first half of the seventeenth century; stoppers of ground glass made to fit the bottle neck snugly held their own for a remarkably long time….Eventually, glass stoppers were abandoned because they were usually impossible to extract without breaking the bottle. Cider, beer, and homemade wines were what the seventeenth-century householder chiefly bottled. Bottling by wine merchants only began at the very end of the century.”
Santiago de Compostela in Galicia in the northwest of Spain was a major center of pilgrimage as Saint James the Great, one of the disciples of Jesus, is said to be buried there. Cork is the thick outer bark of the cork oak, Quercus suber, which grows in the Western Mediterranean and especially in the Iberian Peninsula. Portugal and Spain are still the most important exporters of cork. Cork is light, elastic, clean, largely unaffected by temperature and does not let air in or out of the bottle, which is what makes it so useful. Corkscrews were invented soon after the introduction of corked glass bottles.
Following advances made by Boyle, Daniel Bernoulli (1700–1782), one of the most prominent of the Swiss Bernoulli family of mathematicians, published his Hydrodynamica in 1738, which laid the basis for the kinetic theory of gases.
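Bernoulli’s insight, expressed in modern notation (my formulation; he did not write it this way), was that the pressure of a gas arises from countless molecular impacts on the container walls: for N molecules of mass m with mean square speed \( \langle v^2 \rangle \) enclosed in a volume V,

\[ pV \;=\; \tfrac{1}{3}\, N m \langle v^2 \rangle \]

from which Boyle’s observation that pV stays constant at a fixed temperature follows at once.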
The Scottish scientist Joseph Black (1728–1799), a friend of the engineer and inventor James Watt (1736–1819), discovered carbon dioxide. It was by now quite clear that “air” consisted of several different substances, which led to further experiments in pneumatic chemistry. The great English experimental scientist Henry Cavendish (1731-1810) identified hydrogen, or what he called “inflammable air.” Another Englishman, Joseph Priestley (1733–1804), a contemporary of Cavendish who corresponded with him, is usually credited with discovering oxygen, although Scheele had in fact done so before him. The Frenchman Antoine-Laurent de Lavoisier (1743–1794) noted its tendency to form acids by combining with different substances and named the element oxygen (oxygène) from the Greek words for “acid former.” He worked closely with the mathematical astronomer Pierre-Simon Laplace (1749–1827) in developing new chemical equipment.
It is worth noting here that Watt, a practical man of steam engine fame, and the brilliant theoretical scientist Laplace both made contributions to the advancement of chemical science. This illustrates that theoretical science and applied technology were now gradually growing closer, a development of tremendous future importance which in my view had begun in Europe already in the eighteenth century, if not before, but whose effects would only become apparent some generations later.
Several observers noticed that water formed when a mixture of hydrogen with oxygen (or common air) was sparked, but they were cautious in their conclusions. Cobb and Goldwhite, pages 159-160:
“Lavoisier did not hesitate. He made the pronouncement that water was not an element as previously thought but the combination of oxygen with an inflammable principle, which he named hydrogen, from the Greek for the begetter of water. He claimed priority for this discovery, making only slight reference to the work of others. There was perhaps understandably a furor. Watt felt that Cavendish and Lavoisier had used some of his ideas, but of course all three owed some debt to Priestley. Again it may be asserted that the significance of Lavoisier’s work lies not in the timing of his experimental work but in his interpretation of the results….Lavoisier however saw it as the combination of two elements to form a compound….Laplace favored a mechanical explanation of heat as the motion of particles of matter (as it is currently understood), but Lavoisier described heat as a substance. This material he called caloric, the matter of fire….His true accomplishments however were that he broke the Aristotelian barrier of four elements, established the conservation of mass as an inviolate law, and confirmed the need for verifiable experimental results as the basis for valid chemical theory.”
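In modern terms (my gloss, not Cobb and Goldwhite’s), the sparked reaction and the mass bookkeeping that Lavoisier’s conservation law demands look like this:

\[ 2\,\mathrm{H_2} + \mathrm{O_2} \longrightarrow 2\,\mathrm{H_2O} \]

so that, for example, 4 g of hydrogen and 32 g of oxygen yield exactly 36 g of water, with nothing gained or lost.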
Lavoisier is generally considered the “father of modern chemistry.” He had not yet fully arrived at the modern definition of a chemical element, but he was a great deal closer to it than past scholars and had given chemists a logical language for naming compounds and elements. Lavoisier and Laplace now conducted a number of studies of respiration and concluded that oxygen was the element in air necessary for life. Although himself an honest man, Lavoisier represented the hated tax collectors and found himself on the wrong side of the French Revolution which began in 1789. He was guillotined during the Reign of Terror after the revolutionary judge remarked, “The Republic has no need of scientists.”
In his last years he had become involved in the construction of the metric system, an idea which had been suggested before but was only successfully implemented with the Revolution. The great French mathematician Adrien-Marie Legendre (1752–1833), among others, continued this work afterwards. The metric system has since conquered the world, despite stubborn resistance from the English-speaking countries. In my view, it is one of the most positive outcomes of the French Revolution, an otherwise predominantly destructive event which championed and spread many damaging political ideas.
During the nineteenth century, dozens of new chemical elements were discovered and described, not only because of better electrochemical equipment but also because scientists now knew what to look for. This eventually enabled the Russian scholar Dmitri Mendeleyev (1834-1907) to construct his famous periodic table of the chemical elements in 1869. He was not the first person to construct such a table, but he took a bold step by leaving open spaces for elements not yet discovered. After these were identified, with roughly the characteristics predicted by Mendeleyev, his periodic table won general acceptance.
Another scientific discipline which was destined to influence the progress of optics was the study of electricity and magnetism. Electrical phenomena, especially static electricity, had been known in a number of cultures since ancient times. The English physician and natural philosopher William Gilbert (1544-1603) published his work De Magnete in 1600. Although his investigation of static electricity was less complete than his study of magnetism, Gilbert was one of the originators of the term “electricity” and a pioneering advocate of the scientific method. His example influenced Galileo.
Progress in the study of electrical phenomena was especially rapid during the Enlightenment era. In the 1730s the French physicist Charles du Fay (1698-1739) discovered that there are two kinds of electric charge, later dubbed “positive” and “negative,” with like charges repelling one another and opposite charges attracting one another. The French physicist Charles de Coulomb (1736–1806) discovered that the force between two electrical charges is proportional to the product of the charges and inversely proportional to the square of the distance between them (the law is written out after the quotation below). The Dutch scientist Pieter van Musschenbroek (1692–1761) invented the first capacitor in the 1740s, a device for storing static electricity known as the Leyden jar. Cathy Cobb and Harold Goldwhite in Creations of Fire, pages 179-180:
“Electricity itself, like atomic theory, was nothing new. The Greeks knew how to generate static electricity by rubbing amber with wool (the word electricity is derived from elektron, the Greek word for amber), and Otto von Guericke of air-pump fame made a machine for generating a high-potential electric charge in the 1600s. The Leiden jar for storing static charge was invented in 1745 by Pieter van Musschenbroek of Leiden, who stumbled across the method while trying to preserve electrical charge in an empty glass bottle. Unknowingly he built up considerable static charge on the surface of the bottle, which he discovered when he touched the bottle, ‘the arm and the body was affected in a terrible manner which I cannot express; in a word, I thought it was all up with me.’ In the 1750s Benjamin Franklin carried out his famous kite experiment in which he collected a charge from a thunder cloud in a Leiden jar. He was fortunate in surviving this experiment; others who attempted to duplicate it did not. Franklin performed many revealing experiments with the Leiden jar, but these experiments were limited because the Leiden jar provided only one jolt of electricity at a time.”
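Coulomb’s inverse-square relation, mentioned above, can be written compactly in modern notation (the symbolic form is my addition, not the sources’):

\[ F \;=\; k_e\,\frac{q_1 q_2}{r^2} \]

where \( q_1 \) and \( q_2 \) are the two charges, r is the distance between them, and \( k_e \) is the electrostatic constant; the force is repulsive for like charges and attractive for opposite ones.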
The discovery of current electricity opened the door to a whole new area of research in the early 1800s. James E. McClellan and Harold Dorn, pages 302-303:
“In experiments conducted with frogs’ legs in the 1780s, the Italian scientist Luigi Galvani (1737-98)…sought to investigate the ethereal ‘animal electricity’ that seemed to ‘flow’ in an animal’s body. His compatriot Alessandro Volta (1745-1827) built on Galvani’s work and in 1800 announced the invention of the pile, or battery, which could produce flowing electricity. Volta’s battery and the ever-larger ones that soon followed manifested profound new connections between electricity and chemistry. The battery — layers of metals and cardboard in salt (later acid) baths — was itself a chemically based instrument, and so the generation of current electricity was self-evidently associated in fundamental ways with chemistry. More than that, through electrolysis or using a battery to run electricity through chemical solutions, scientists….discovered new chemical elements….Lavoisier had been content to describe chemical elements as merely the last products of chemical analysis without saying anything about the constitution — atomic or otherwise — of these elements….John Dalton (1766-1844) noticed that the proportions of elements entering into reactions were often ratios of small integers, suggesting that chemical elements are in fact discrete particles. He thus became the first modern scientist to propose chemical atoms — or true indivisibles — in place of the more vague concept of chemical elements.”
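A concrete instance of Dalton’s small-integer ratios (my illustration, not McClellan and Dorn’s): carbon forms two oxides, and for a fixed 12 g of carbon, one oxide (carbon monoxide) contains 16 g of oxygen while the other (carbon dioxide) contains 32 g, so the oxygen masses stand in the ratio

\[ \frac{32\ \mathrm{g}}{16\ \mathrm{g}} \;=\; 2, \]

a whole-number relation that is natural if elements combine atom by atom, but mysterious otherwise.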
The rise of “modern” atomism is often taken to start with the work of the English Quaker John Dalton in the early 1800s. The atomism of the ancients was not supported by experimental verification. It was “a deduction from certain mental postulates, not from experience, and therefore should be considered as literature rather than as science,” as one writer put it. During the nineteenth century, this “philosophical” atomism acquired a much firmer experimental basis. The invention of the battery, which, like photography, followed rapid advances in chemistry, made possible for the first time in human history the study of electromagnetism. One of the greatest revolutions in optics was the understanding that visible light is in fact just one of several forms of electromagnetic radiation.