A History of Medicine, Part 5
The microscope may have been invented by the Dutch spectacle maker Zacharias Jansen (ca. 1580–ca. 1638) shortly before the year 1600, or by the Dutch lensmaker Hans Lippershey (1570–1619), who is often credited with having made the first practical telescope in 1608 (a design Galileo Galilei soon improved).
Other candidates have been suggested as possible inventors, too. What we can say with certainty is that the microscope had been introduced before 1610. Among the first to achieve good results with the new invention, at remarkable levels of magnification for his time, was a seventeenth-century Dutchman called Antoni van Leeuwenhoek. Bill Bryson explains in his entertaining A Short History of Nearly Everything:
Though he had little formal education and no background in science, he was a perceptive and dedicated observer and a technical genius. To this day it is not known how he got such magnificent magnifications from such simple handheld devices….
Over a period of fifty years – beginning, remarkably enough, when he was already past forty – Leeuwenhoek made almost two hundred reports to the Royal Society, all written in Low Dutch, the only tongue of which he was master….
In 1683 Leeuwenhoek discovered bacteria – but that was about as far as progress could get in the next century and a half, because of the limitations of microscope technology.
Not until 1831 would anyone first see the nucleus of a cell – it was found by the Scottish botanist Robert Brown, that frequent but always shadowy visitor to the history of science. Brown, who lived from 1773 to 1858, called it nucleus from the Latin nucula, meaning little nut or kernel.
Only in 1839, however, did anyone realize that all living matter is cellular. It was Theodor Schwann, a German, who had this insight, and it was not only comparatively late, as scientific insights go, but not widely embraced at first.
Robert Brown (1773–1858) spent years doing botanical research in Australia during the early 1800s. The so-called Brownian motion, the random movement of small particles in a liquid or gas, is named after him. Theodor Schwann (1810–1882) was a co-founder of the cell theory, along with two other Germans, Matthias Jakob Schleiden (1804–1881) and Rudolf Virchow (1821–1902).
Both the telescope and the microscope were invented during the first decade of the 1600s, and both instruments were the result of improvements in lens grinding among Dutch eyeglass makers. However, while the telescope brought big and lasting changes the very first time Galileo Galilei used it for astronomical observation, it took much longer for the microscope to produce major scientific advances. Tellingly, most people know that Galileo had great skill in making and using telescopes; far fewer know that he was equally skilled at making microscopes.
Good work was nevertheless done in microscopy during the instrument's first century. The Italian physician Marcello Malpighi (1628–1694) did interesting studies related to medicine, and the English polymath Robert Hooke (1635–1703) published Micrographia, the first substantial book on microscopy, in 1665. As John Gribbin says in The Scientists:
Hooke was not the first microscopist. Several people had followed up Galileo's lead by the 1660s and, as we have seen, Malpighi in particular had already made important discoveries, especially those concerning the circulation of the blood, with the new instrument. But Malpighi's observations had been reported piece by piece to the scientific community more or less as they had been made.
The same is largely true of Hooke's contemporary Antoni van Leeuwenhoek (1632-1723), a Dutch draper who had no formal academic training but made a series of astounding discoveries (mostly communicated through the Royal Society) using microscopes that he made himself. These instruments consisted of very small, convex lenses (some the size of pinheads) mounted in strips of metal and held close to the eye – they were really just incredibly powerful magnifying glasses, but some could magnify 200 or 300 times.
Van Leeuwenhoek's most important discovery was the existence of moving creatures (which he recognized as forms of life) in droplets of water – microorganisms including varieties now known as protozoa, rotifera and bacteria.
In Gribbin's view, though Leeuwenhoek's studies were significant and impressive considering that he was an amateur, he was a one-off, using unconventional techniques and instruments. Hooke represented the mainstream path along which microscopy developed, and he packaged his discoveries in a single, accessible volume with scientifically accurate drawings.
Hooke described the structure of feathers and of a butterfly's wing, and he identified fossils as the remains of once-living creatures and plants, which was still far from self-evident in the seventeenth century. Nevertheless, microscopy did not produce real changes in medicine until after the mid-nineteenth century, when significantly improved instruments, constructed on the basis of sound optical theory and essentially matching the quality of the light microscopes in use today, became available.
A leading force behind this was the brilliant German mathematician and physicist Ernst Abbe (1840–1905), in cooperation with Carl Zeiss (1816–1888). During this period the Germans played a leading role in laboratory medicine. Michael Kennedy explains:
In 1846, Carl Zeiss opened his workshop in Jena and German lenses quickly became the best in the world. Jacob Henle (1809-85) produced the first textbook of combined gross and microscopic anatomy in 1866 and encouraged the use of microscopes by students. The Royal College of Surgeons in England instituted courses in gross and microscopic anatomy in 1848.
German universities invested in academic science, with the support of rulers concerned about national prestige, and Germany quickly adopted research-based medical science, which would pay great dividends by the end of the century. Chemistry at last was to play the role once emphasized by Paracelsus four centuries earlier. The Germans began to study what we now call organic chemistry.
As Joel Mokyr writes in The Gifts of Athena: Historical Origins of the Knowledge Economy:
The invention of the modern compound microscope by Joseph J. Lister (father of the famous surgeon) in 1830 serves as another good example. Lister was an amateur optician, whose revolutionary method of grinding lenses greatly improved image resolution by eliminating spherical aberrations.
His invention changed microscopy from an amusing diversion to a serious scientific endeavor and eventually allowed Pasteur, Koch, and their disciples to refute spontaneous generation and to establish the germ theory, a topic I return to below.
The germ theory was one of the most revolutionary changes in useful knowledge in human history and mapped into a large number of new techniques in medicine, both preventive and clinical. The speed and intensity with which this interaction took place were still low, but it was accelerating, and by the close of the nineteenth century it had become self-sustaining.
The improvements rested on a mathematical optimization of lens combinations that minimized spherical aberration, and they reduced average image distortion by a huge proportion, from 19 to 3 percent. In plain words, the average microscope was now much better and more accurate than it had been a few generations before.
Although Antoni van Leeuwenhoek had probably spotted bacteria through his unusually good microscopes as early as the seventeenth century, the concept that infectious diseases were caused by living organisms too small to be seen by the naked human eye met with stubborn resistance. Kennedy again:
Girolamo Fracastoro, in 1546, had proposed the cause of infectious diseases as seminaria contagiosa, 'disease seeds' that were carried by the wind or communicated by contact with infected objects.
Francesco Redi, in 1668, boiled broth and sealed it in containers, proving that maggots did not develop in meat protected from flies and that putrefaction did not occur without contamination. This should have disproved spontaneous generation, but John Needham, in 1748, repeated the experiment and saw 'animalcules' in the broth, which he concluded must have appeared spontaneously.
The debate about spontaneous generation continued for another century. In 1835, Agostino Bassi, manager of a silkworm estate, conducted an experiment with a silkworm disease, muscardine. A fungus on the dead silkworms could produce the disease when healthy silkworms were incubated with it.
Jacob Henle, influenced by Bassi's observations, concluded in 1840 that a living agent that acted as a parasite caused infectious diseases. The theory had been proposed repeatedly since the sixteenth century but remained on the fringes of medical science until after 1870 and the work of the great Frenchman Louis Pasteur (1822-1895). One famous victim of this resistance to the germ theory was the Hungarian physician Ignaz Semmelweis (1818–1865).
In some history books I have seen, it is said that Semmelweis was born in Budapest in the Austro-Hungarian Empire. This is slightly inaccurate, since the beautiful city of Budapest, today the capital of Hungary, was initially two different cities, Buda and Pest, occupying opposite banks of the river Danube, and they were not merged until 1873, after Semmelweis had died. Likewise, the Austrian Empire did not become the dual monarchy known as the Austro-Hungarian Empire until 1867 (it was formally dissolved after World War I).
In any case, while working at the Vienna General Hospital in the Imperial capital in 1847, Semmelweis discovered that the incidence of puerperal fever, or childbed fever, could be drastically reduced by simple hand washing with chlorinated lime solutions.
His insight that puerperal fever was transmitted to patients by doctors led to his dismissal from his position and to the neglect of a discovery that could have saved the lives of tens of thousands of women. He is now regarded as a pioneer of antiseptic procedures, but his ideas did not gain acceptance until after his death. That Semmelweis could suffer this rejection even in the middle of the nineteenth century shows how late the germ theory of disease became established.
As Mokyr says:
Even after the discovery was made, American physicians fiercely resisted it. On the European continent, which was more receptive to techniques based on the body of useful knowledge we call bacteriology, resistance was weaker. Indeed, the idea went back to a much earlier age. The idea of germ-caused infection was first proposed by Girolamo Fracastoro in his De Contagione (1546).
In 1687, Giovanni Bonomo explicitly proposed that diseases were transmitted because minute living creatures he had been able to see through a microscope passed from one person to another (Reiser, 1978, p. 72). Bonomo's observations, along with the microscopy of pioneers like Leeuwenhoek, ran into skepticism because they were irreconcilable with accepted humoral doctrine.
Pasteur and Koch's demonstrations of the culpability of bacteria took many years to be accepted, and the opposition of some of the great figures of public medicine at the time, such as the sanitary reformer Max von Pettenkofer and Rudolf Virchow, the founder of cell pathology, is legendary. In New York, well-known doctors walked out of scientific meetings in protest as soon as the issue of bacteriology was raised (Rothstein, 1972, p. 265).
The canning of food was invented in the early 1800s by a French confectioner named Nicolas Appert (1749–1841). He placed food in champagne bottles, corked them loosely, immersed them in boiling water and hammered the corks tight. This practice preserved the food for extended periods, but neither he nor the emulators who later perfected the preservation of food in tin-plated canisters knew why the technique worked; it is a textbook case of an applied technology without any theoretical basis. Louis Pasteur knew of Appert's work, but unlike Appert he could bring scientific methods and careful experiments to bear, and these succeeded in convincing many skeptics.
The optimal temperatures for preserving various foods with minimal damage to flavor were worked out in 1895–96 by two scientists at the Massachusetts Institute of Technology (MIT) in the USA, Samuel Prescott (1872–1962) and William Lyman Underwood (1864–1929).
Their work represented a milestone in the development of food technology and food science. Appert's method of heating food to temperatures far in excess of those used in pasteurization can easily destroy some of the flavor.
Pasteurization is not intended to kill all microorganisms, only to reduce their number enough to prevent them from causing disease. Complete sterilization has negative effects on the taste of the food. Pasteur had developed an interest in chemistry and biology and focused on the souring of milk and the fermentation of sugar into alcohol.
He was convinced that the latter was a biological phenomenon. In France, wine is a source of both revenue and national pride. Kennedy again:
Pasteur worked on problems of the wine industry and proved that Mycoderma aceti was the microorganism responsible for souring wine. Furthermore, he demonstrated that heating wine to fifty-five degrees centigrade, which did not damage the wine, killed the organism and prevented the souring.
Eventually, the principle was applied to beer and milk and the term 'Pasteurization' became a common one. The Pasteurizing process has virtually eliminated the risk of tuberculosis from milk without affecting its quality. Henle had argued that fermentation, putrefaction, and disease were related and Pasteur had demonstrated microorganisms, which produced these phenomena, in the air.
The connection was there to be explored. The next step was the study of another silkworm disease, pebrine, which was producing serious problems for the industry. Pasteur demonstrated that the cause was a living organism, a protozoan, and discovered its life cycle from moth to egg to chrysalis. On February 19, 1878, he appeared before the French Academy of Medicine to present the germ theory of disease.
According to Mokyr,
In terms of its direct impact on human physical well-being, the victory of the germ theory must be counted as one of the most significant technological breakthroughs in history. The bacteriological revolution heralded a concentrated and focused scientific campaign to once and for all identify pathogenic agents responsible for infectious diseases.
Between 1880 and 1900 researchers discovered pathogenic organisms at about the rate of one a year and gradually identified many of the transmission mechanisms, although many mistaken notions survived and a few new ones were created.
The age-old debates between contagionists and anti-contagionists and between miasma and anti-miasma theories slowly evaporated, although the belief that 'bad air' was somehow responsible for diseases such as diarrhea was still prevalent in the 1890s.
Pasteur's refutation of the Aristotelian notion of 'spontaneous generation' of life from lifeless matter demonstrated that bacterial infection was contracted exclusively from a source outside the body. It provided a much wider epistemic base for a large number of household techniques that were thought to prevent disease, thus making them both more effective and more persuasive.
The triumph of the germ theory after 1865 was above all a victory of scientific persuasion by forceful personalities. In 1879, Pasteur turned his attention to chicken cholera and anthrax. He injected chickens with an old, "stale" culture of cholera organisms and later found that they were now immune to "strong" cultures.
Anthrax was a common disease in cattle that occasionally affected humans. Under the prevailing medical paradigm it had been attributed to "rural miasma." The disease was studied by several scholars, among them the German physician Robert Koch (1843–1910).
After serving as a surgeon in the Franco-Prussian War of 1870–71, which facilitated the unification of Germany under the leadership of Otto von Bismarck (1815–1898), Koch was appointed district health officer in the province of Posen, in what is now Poland. Anthrax was endemic there, and this gave Koch an opportunity to examine the disease. He learned that the anthrax bacillus formed spores that were resistant to heat.
According to Michael Kennedy,
Pasteur used samples of Koch's Bacillus anthracis to conduct experiments on attenuation of the virulence of the organism. Finally, he was able to produce a 'weak' form that could be used to produce a vaccine.
On May 5, 1881, he injected twenty-four sheep, a goat and six cattle with the attenuated strain of anthrax at a public demonstration. A similar group was left unexposed. On May 17, a second injection, using a stronger culture, was given to the test animals.
On May 31, all animals, inoculated and naïve (the control group), were given an injection of virulent anthrax. By June 2, all the sheep and the goat in the control group were dead and the cattle were sick. The inoculated group was all healthy.
The era of vaccines had begun and medicine was finally able to prevent, if not yet treat, disease. It had taken nearly 100 years since Jenner to develop a second vaccine. Pasteur had been able to produce artificially the attenuated strain of an organism that nature had provided in smallpox/cowpox.
In 1880 Pasteur, aided by his assistant Charles Chamberland (1851–1908) and the doctor Pierre Roux (1853–1933), began to study the feared disease rabies, which was (and, when untreated, remains) almost 100% lethal. He never saw the organism causing rabies, since the virus is too small to be seen in optical microscopes. He injected spinal cord tissue from infected individuals into the brains of rabbits, which caused infection with an incubation period of six days.
In 1884 he proceeded to test a weakened form of the disease on dogs, which later turned out to be immune to virulent rabies. Because of the long incubation period, immunization could be effective even after exposure to rabies, if done quickly.
This vaccine was first used on the 9-year-old Joseph Meister (1876–1940) in July 1885 after the boy was badly mauled by a rabid dog. Trying the vaccine on humans even with a weakened version of the virus was obviously risky, but since the boy had already been infected he faced almost certain death without treatment. The vaccination was a success, and a new vaccine had been introduced.
There are still debates as to whether a virus should be considered a living organism, since it can multiply only in the living cells of other organisms, whether those of animals, plants or bacteria.
The Russian biologist Dmitry Ivanovsky (1864–1920) in 1892 and the Dutch microbiologist Martinus Beijerinck (1851–1931) in 1898 both found that a disease of tobacco plants was transmitted by an agent, later called tobacco mosaic virus, small enough to pass through a filter that would not allow the passage of bacteria. Beijerinck apparently understood that he was dealing with a new kind of infectious agent, which he dubbed a virus. He is considered the founder of virology.
Viruses are so small, even compared to bacteria, that they cannot be seen in traditional light microscopes. The development of the electron microscope in the 1930s, which allows far greater resolution and magnification than light microscopes because the wavelength of electrons is much smaller than that of visible light, permitted individual virus particles to be seen for the first time around 1940. Advances in the second half of the twentieth century and the early years of the twenty-first have revolutionized the study of viruses.
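To get a rough sense of why wavelength matters (a standard textbook calculation, not drawn from the sources quoted here), one can use the diffraction limit named after the same Ernst Abbe mentioned above: a microscope cannot resolve details much smaller than

$$
d \approx \frac{\lambda}{2\,\mathrm{NA}} \approx \frac{550\ \text{nm}}{2 \times 1.4} \approx 200\ \text{nm}
$$

for green light (λ ≈ 550 nm) and a high-quality objective with numerical aperture NA ≈ 1.4. Most viruses measure roughly 20–300 nm and thus sit at or below this limit. Electrons accelerated through 100 kV, by contrast, have a de Broglie wavelength of only

$$
\lambda_e = \frac{h}{\sqrt{2 m_e e V}} \approx 0.004\ \text{nm}
$$

(ignoring relativistic corrections), so diffraction no longer stands in the way of imaging individual virus particles.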
The prevailing miasma theory of disease held that diseases such as cholera were caused by a miasma (Greek: "pollution"), a form of "bad air." During the Victorian era, especially between 1820 and 1870, a great sanitary and hygienic movement waged a widespread but unfocused war against dirt, based on a vague correlation between filth and disease.
It was believed that filth was a source of disease and that disease "spores" traveled through odors, which led to a great emphasis on ventilation and refuse removal. This strategy did have some positive effects, although the reasons for them were not properly understood. Mokyr explains:
The war against filth, which had eighteenth-century roots, drew new strength and focus from the statistical revolution that grew out of the Enlightenment and led to the development of nineteenth-century epidemiology. It provided data to support the close relation, long suspected, among consumption patterns, personal habits, and disease….
The roots of this movement went back more than a century, especially to the debates around the efficacy of smallpox inoculation procedure, the beneficial effects of breast-feeding, and the bad effects of miasmas (putative disease-causing elements in the atmosphere). The empirical regularities discovered by the statisticians reinforced earlier middle-class notions that cleanliness enhances health.
By the middle of the nineteenth century, those notions were filtering down vertically through the social layers of society. But their persuasiveness was vastly extended by the growing interest in statistics and the analysis of what we today would call 'data' dating to the decades after 1815.
The founding of the Statistical Society of London in 1834 led to an enormous upsurge in statistical work on public health. In Britain, William Farr, William Guy, and Edwin Chadwick were the leaders of this sanitary movement, but it encompassed many others (Flinn, 1965).
Lister, Jenner and others used practical measures to deal with diseases caused by microorganisms they could not yet identify. The pioneering nurse and statistician Florence Nightingale (1820–1910), a firm believer in the miasmatic theory of disease, placed much emphasis on hospital sanitation.
Between 1853 and 1862, a quarter of the papers read before the Statistical Society of London dealt with public health. Similar establishments existed in other European countries. The sanitarian movement looked for empirical regularities, which often led down blind alleys but sometimes to real advances in epidemiology.
Many social reformers and activists were enthusiastic members of the Statistical Society. Among the great triumphs of this methodology were the discoveries of John Snow and William Budd in the 1850s that water was the transmission mechanism of cholera and typhoid, and in 1878 that milk was a carrier of diphtheria.
In Germany, the founder of cellular pathology, Rudolf Virchow, called for more medical statistics: "We will weigh death and life and see where death lies thicker," he insisted. The influential German physician Max von Pettenkofer (1818–1901) fought against the germ theory of disease, yet still advocated public health measures to prevent the spread of infectious disease in the city of Munich.
The English civil engineer Sir Joseph Bazalgette (1819–1891) played a leading role in improving public health in London during this period. A cholera epidemic in the late 1840s killed thousands of Londoners, and another epidemic struck in 1853, killing thousands more. The medical opinion at the time still held that cholera was caused by foul air, miasma.
The River Thames resembled an open sewer. In 1858, Parliament passed an enabling act to channel London's sewerage system into underground brick sewers, built to such generous scale that they are still in use to this day.
In Paris, Napoléon III (1808–1873) commissioned major works designed to modernize the city. Led by Georges-Eugène Haussmann (1809–1891), the project in the 1850s and 60s transformed Paris into a city of wide boulevards, large public parks and a new sewer system. Indirectly, the changes did benefit public hygiene. The Eiffel Tower, built in 1889, came to symbolize the new age.
Paris and London remained two of Europe's leading cities, but the urban hierarchies of the continent did change somewhat between 1750 and 1950. Some cities such as Liverpool, Manchester and Birmingham experienced spectacular growth related to the Industrial Revolution.
The Russian cities of Moscow and St. Petersburg were special cases, disproportionately large compared to other cities in the Russian Empire. Sofia in Bulgaria grew rapidly during the late nineteenth and early twentieth centuries, as did Bucharest in Romania, and Budapest.
Berlin was arguably the most spectacular case of all, as it grew during the second half of the nineteenth century into one of the most dynamic cities not just in Germany but in all of Europe. London, Paris and Berlin built underground railroads, as did Budapest, New York City and eventually other cities such as Madrid. In beautiful Barcelona, straight streets were blasted through the central slums. The Catalonian city experienced a cultural spring, visually represented by the unique buildings of the architect Antonio Gaudí (1852–1926).
The dirty chaos of early industrial urbanism was not always healthy, but better nutrition and education, especially in the cities, facilitated the spread of new knowledge which improved public health. Technological advances had positive effects on urban areas in the nineteenth and twentieth centuries.
The development of lifts, piped water and gas, electricity, sewer lines, water closets and central heating meant that it was now possible to manage urbanization without decay, and indeed improve the quality of life as well as health. Big cities and national capitals led the way in this reduction in mortality, which took place earlier in Northern and Western Europe than in the South or East.
In the end, the effects of industrialization were felt in every city across the European continent, and eventually across the world. Paul Hohenberg and Lynn Lees explain in The Making of Urban Europe, 1000-1994:
National governments cared more about their capitals, and money was more easily forthcoming there for the massive investments that proper sanitation required. At a time when Paris had already built new water and sewerage systems, the Marseille population still drank polluted water from the Durance River.
In consequence, the Mediterranean port was the site of the last big cholera epidemic in France in 1884, at a time when death rates in Paris had already fallen (W. Lee 1979; Pinkney 1958). Improvement in urban death rates began in central Europe before 1890. Indeed, in Austria and in Bavaria urbanites had a higher life expectancy than did their country cousins by the later 1880s (A. Weber 1899).
Even in southern and eastern Europe, where demographic change set in more slowly, the major cities were far less deadly in 1900 than they had been a century before. In the long run improved life expectancy more than equalized risks between the urban and the rural environment.
By the later nineteenth century towns shifted from being killers to being net producers of people.
Yellow fever devastated much of the American South and the Caribbean region in the nineteenth century, but it took some time before the mechanisms of its transmission were understood. As Joel Mokyr writes:
During the cleanliness campaigns of the mid-nineteenth century standing water and open sewage in cities were reduced, and with them the mosquitoes. The decline of the disease was attributed to the disappearance of the stench. Memphis, for example, was free of yellow fever after the sanitation campaign, but since the epistemic base was essentially empty, this experience could not be put to good use elsewhere (Spielman and d'Antonio, 2001, pp. 72-73).
The suspicion that mosquitoes might be involved in the transmission of some diseases had already been raised in 1717 by an Italian physician named Giovanni Lancisi (for the case of malaria), and in 1848 a physician from Mobile, Alabama, Dr. Josiah Nott, extended the idea to yellow fever.
A more detailed hypothesis that the disease was spread by the mosquito Aedes aegypti was put forward by a Cuban doctor, Carlos Finlay, in 1878, but his experiments failed to carry conviction, in part because the notion that insects carried disease was too novel and revolutionary for many physicians to accept (Humphreys, 1992, pp. 35-36). Only in 1900 did Walter Reed demonstrate the infection mechanism by persuasive experimental methods (costing the lives of three volunteers).
Another revolution in medicine at this time was the realization that small traces of certain substances are vital to human health, and that some crucial substances cannot be manufactured by the body from other nutrients and must be supplied through the diet. This was coupled with the understanding that some diseases are caused not by bacteria or germs but by deficiencies of such trace substances.
The Japanese naval physician Takaki Kanehiro (1849–1920), who had received education in traditional Chinese medicine as well as modern Western medical science, discovered that the disease beriberi, which represented a serious problem for the Japanese navy at the time, was caused by nutritional deficiency, not by infectious germs.
This was confirmed by the Dutch physician Christiaan Eijkman (1858–1930), who in 1897 demonstrated a link between beriberi and diet. Eijkman received a Nobel Prize in 1929 together with the English biochemist Frederick Hopkins (1861–1947) for their discoveries relating to vitamins.
The name "vitamin" was introduced by the Polish biochemist Casimir Funk (1884-1967) in 1912. After reading an article by Eijkman he tried to isolate the substance in question, which we now know as vitamin B1. These substances that are vital to human health he called vital amines or vitamines. Most vitamins are obtained with food but a few by other means.
The concept that eating certain types of food can be beneficial to your health had been known since ancient times but remained unspecific, as did most medical knowledge prior to modern times.
The Scottish physician James Lind (1716–1794) conducted what is often described as the first controlled clinical trial, in the British Royal Navy in 1747, to prove that citrus fruits cure scurvy, and in 1753 he published his Treatise of the Scurvy, which recommended lemons and limes as a means of avoiding the disease.
The Dutch East India Company kept citrus trees on the Cape of Good Hope in the seventeenth century, but the knowledge kept being rediscovered and lost for centuries. The active ingredient, which we know as vitamin C, was only detected in the twentieth century by the Hungarian physiologist Albert Szent-Györgyi (1893–1986).
The medical advances of the twentieth century, from the discovery of insulin by the Canadian scientist Frederick Banting (1891–1941) in the early 1920s via the development of the modern intensive care unit to the introduction of the artificial birth-control pill in the 1960s (which had major social and demographic consequences) are simply too numerous to list. I will briefly mention only a few of them here.
Experiments with blood transfusion, the transfer of blood into a person's bloodstream, had been carried out for hundreds of years at the cost of many lives, since mixing blood from incompatible individuals can have lethal consequences. The American surgeon William Stewart Halsted (1852–1922) performed one of the first blood transfusions in the United States in 1881, giving some of his own blood to his sister to save her life.
The discovery of human blood groups was made in 1901 by the Austrian physician Karl Landsteiner (1868–1943). He developed the ABO blood group system, the most important (but by no means the only) blood type system in use today. Other early pioneers in the field include the American Alexander S. Wiener (1907-1976) and the Czech serologist Jan Janský (1873–1921).
The new scientific understanding of blood groups made blood transfusions far safer and enabled the development of blood banks. Along with other advances in surgery and antiseptics, this gradually made possible the transplantation of vital organs such as kidneys and livers.
The first successful human-to-human heart transplant was achieved in December 1967 in Cape Town by Christiaan Barnard (1922–2001), the son of a minister in the Dutch Reformed Church in South Africa. The American physician Norman Shumway (1923–2006) contributed some of the research leading to the first human heart transplants.
Another major advance in the twentieth century was the discovery of antibiotics, natural substances produced by some microorganisms that kill or inhibit others. One of the first recorded observations of the antibacterial action of the Penicillium mould was made by the Irish physicist John Tyndall (1820–1893), who reported to the Royal Society in London in 1875 that it killed bacteria, but he then passed on to other matters.
In 1896 the French medical student Ernest Duchesne (1874–1912) commented on the antagonism between Penicillium moulds and bacteria, but he did not follow the insight up. The Costa Rican scientist Clodomiro Picado Twight (1887–1944), who also did research on snakes and anti-venom serums, worked on penicillin during the First World War, and in 1925 the bacteriologist André Gratia (1893–1950) of the University of Liège in Belgium reported that a substance produced by Penicillium could dissolve anthrax bacilli, but again there was no follow-up.
The general credit for discovering penicillin usually goes to the Scottish bacteriologist Alexander Fleming (1881–1955), who came upon the substance by accident in 1928. He published articles on the subject in 1929 and 1932, but then abandoned the topic.
A bacteriologist named Cecil Paine obtained a sample of Penicillium notatum from Fleming, made broth cultures and applied them to several patients with infections, who responded to the treatment. He reported these results to the Australian scholar Howard Florey (1898–1968).
An important breakthrough came with the German bacteriologist Gerhard Domagk (1895–1964), who in 1935 found the first effective drug against bacterial infections, the sulfonamide Prontosil.
Ernst Boris Chain (1906–1979), a Jewish refugee from Nazi Germany, went to Florey with a suggestion that they investigate the antibacterial properties of Fleming's discovery. Working with one of Fleming's cultures, they led a team of researchers at the University of Oxford in England from 1939, tested penicillin on mice and eventually obtained a stable form suitable for practical use by freeze-drying it. Though mass production remained a challenge, penicillin was available to the Allied forces during the final phases of the Second World War. Fleming, Chain and Florey shared a Nobel Prize in 1945 for the discovery.
Charles Darwin's On the Origin of Species, published in 1859, triggered a massive debate about evolutionary biology, but the principles behind inheritance were not worked out by Darwin himself.
This was done by Gregor Mendel (1822–1884), a German-speaking Augustinian friar and priest who spent most of his life in Brünn, or Brno, the second-largest city of what is today the Czech Republic but was then part of the Austrian Empire. He was also a trained scientist who had studied at the University of Vienna. Although he is considered the "father of genetics", he did not coin the term "gene".
That term goes back to the Dutch botanist Hugo de Vries (1848–1935), who introduced "pangen", later abbreviated to "gene" ("gen" in Danish) by the Danish scholar Wilhelm Johannsen (1857–1927). Mendel studied the laws of inheritance by cultivating and testing tens of thousands of pea plants between 1856 and 1863. He demonstrated that the inheritance of traits follows specific laws, which we now call Mendelian inheritance, yet his work at first gained little attention when it was published in the 1860s.
According to John Gribbin,
Mendel had shown conclusively that inheritance works not by blending characteristics from the two parents, but by taking individual characteristics from each of them.
By the early 1900s, it was clear (from the work of people such as Walter Sutton at Columbia University) that the genes are carried on the chromosomes, and that chromosomes come in pairs, one inherited from each parent. In the kind of cell division that makes sex cells, these pairs are separated, but only (we now know) after chunks of material have been cut out of the paired chromosomes and swapped between them, making new combinations of genes to pass on to the next generation.
Mendel's discoveries were presented to a largely uncomprehending Natural Science Society in Brünn (few biologists had any understanding of statistics in those days) in 1865, when he was 42 years old. The papers were sent out to other biologists, with whom Mendel corresponded, but their importance was not appreciated at the time.
In 1868 Mendel became abbot of his monastery and no longer had time to continue his scientific work. The rediscovery of the Mendelian laws of inheritance in the early twentieth century, combined with the identification of chromosomes, provided the keys to understanding how evolution works at the molecular level. This has given us new insights into hereditary diseases, or genetic disorders, among other things.
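To make those laws concrete (a standard textbook illustration, not drawn from Gribbin or the other sources quoted here): crossing two hybrid pea plants Aa × Aa, where A stands for a dominant trait such as Mendel's round seeds and a for the recessive wrinkled variant, gives the combinations

$$
\begin{array}{c|cc}
 & A & a \\
\hline
A & AA & Aa \\
a & Aa & aa
\end{array}
$$

that is, genotypes in a 1 AA : 2 Aa : 1 aa ratio, and hence roughly three round-seeded plants for every wrinkled one. This 3:1 ratio is what Mendel counted in his pea experiments, and it is incompatible with the older idea that parental characteristics simply blend.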
In the twenty-first century, Western hospitals contain technologies our ancestors could scarcely have imagined, such as laser eye surgery and CAT scanners. The evolution of nanotechnology carries much potential for applications in future medicine.
However, perhaps the greatest of all revolutions has not been the development of new machines but of insights into the world of proteins, chromosomes, cells and eventually deoxyribonucleic acid, or DNA, which contains the genetic instructions that make us who we are.
In 1953, the double-helix structure of DNA was established by the American molecular biologist James D. Watson (b. 1928), the English scientist Francis Crick (1916–2004) and the New Zealand-born Maurice Wilkins (1916–2004), who shared the 1962 Nobel Prize for this achievement. The creation of molecular biology may well be viewed by future generations as the most significant medical event of the twentieth century, even though its full effects have not yet been seen.
A History of Medicine, Part 4
One of the oldest medical works in China, the Shen Nung Pen Tsao, contains lists of useful herbs. It is referred to as early as the Han Dynasty (206 BC–220 AD) but probably contains older material. Another work, The Yellow Emperor’s Inner Canon of Medicine, was associated with the legendary Yellow Emperor from the mid-third millennium BC. His wife is said to have introduced silk to the Chinese. The book has been of just as great importance to the development of Chinese medicine as the Hippocratic corpus has been to European medicine, but modern historians believe it was actually compiled later than claimed, perhaps during Han times. In oracle bones from the historically attested Shang Dynasty of the second millennium BC, disease was often attributed to the gods, and relief was frequently sought through prayer, as was the case in most civilizations at that time.
According to James E. McClellan and Harold Dorn in their book Science and Technology in World History, second edition, “Hospitals, or at least hospice-like organizations, arose in China out of Buddhist and Taoist philanthropic initiative, but these became state institutions after the suppression of religious foundations in 845 CE. To guide physicians, the central government issued many official textbooks dealing with general medicine, pharmacy, pediatrics, legal medicine, gynecology, and like subjects. One Song pharmaceutical document dating from about 990 CE contained 16,835 different medical recipes. The numerous botanical and zoological encyclopedias also deserve note, in part for their medicinal advice; a government official, Li Shih-Chen, compiled the Pen Tsao Kang Mu, or Classification of Roots and Herbs, which listed 1,892 medicaments in fifty-two volumes. Illustrations graced many of these books.”
There is every reason to believe that some herbs had real effects, in China as elsewhere. For instance, a drug derived from the Chinese joint fir, Ephedra sinica, was recommended for cough and lung ailments. In the 1880s the Japanese scientist Nagai Nagayoshi (1844–1929) extracted ephedrine from this herbal remedy. Ephedrine is used against respiratory diseases even today. Other drugs had beneficial effects but were sometimes overused in traditional Chinese medicine, for instance ginseng. Examination of the pulse was given great weight in diagnosing illness, as in Roman practice, but this is a technique that, while clearly useful, can also be relied on too much. Published Chinese works on natural history took a special interest in insects, especially the silkworm. Silk is one of the oldest inventions associated with Chinese civilization, possibly dating back to prehistoric times.
According to Michael Kennedy, “The earliest hospitals were established by Buddhist monasteries and, in the ninth century, the Tang Dynasty nationalized them and thereafter assumed the responsibility of maintaining them. The Song and Yuan Dynasties continued state interest in medical matters and the compilation of a materia medica, and government pharmacies and clinics were developed. Chinese medicine continued virtually unchanged from the Han Dynasty until the nineteenth century.” He believes that Chinese medicine was “marked by a profound conservatism” which lasted until it was confronted with Western science. This does not mean that the Chinese never accepted innovations from other cultures. The concepts of “hot” and “cold” in Chinese medicine may represent transplants from Indian, Ayurvedic theories of disease, and drugs could be imported from other countries. India had a promising start in surgery. Although Indian surgery did not later reach its full potential, India maintained an edge over China in the discipline. Kennedy again:
“No similar development of surgery occurred and the sophisticated understanding of surgical procedures present in India seems not to have crossed the borders with the Buddhists. The only surgeon found in early Chinese history was Hua To, who performed an operation to remove an arrow from the arm of General Kuan Yun. The same surgeon is later described in another incident. A prince named Tsao Tsao was suffering from severe headaches and called upon Hua To for relief. The surgeon recommended trephination of the skull to relieve the headaches. Just as he was about to proceed, his patient, Tsao Tsao, was seized with suspicion and accused the surgeon of conspiring to murder him in league with enemies. The luckless surgeon was arrested and executed at the prince’s order. This surgeon, who is unique in Chinese history, had authored many works on medicine and surgery. He requested that they be destroyed before his execution and this was done. The Chinese opposition to the ‘mutilation’ of the body seems to have prevented any development of surgery similar to that in India.”
Possibly, this particular surgeon was not of Chinese origin; he may have been an Indian who came with Buddhist scholars. Chinese accounts of anatomy were often marred by stylized and inaccurate descriptions that did not correspond to reality. Dissection of human bodies is described only very sporadically, and anatomy was largely ignored until recent times. It is possible that ancestor worship prohibited violation of the corpses of dead patients. The absence of dissection or any experimental analysis allowed theoretical speculations to become more and more convoluted “until the original grains of real knowledge in Chinese medicine were submerged by traditions that had no basis in science.”
The growing influence of European medicine in late imperial and early republican China (the 1800s and early 1900s) posed considerable challenges to traditional Chinese doctors. Some of them rejected it entirely; others wanted to adopt certain aspects of it, or abandoned traditional Chinese medicine altogether. Some scholars tried to argue that the differences between the two traditions were minor, or that Western learning had Chinese origins and that traditional Chinese medicine had also stressed the brain. The sinicization of Western pharmacy was made easier by the rich tradition of pharmacopoeia in China. During the 1800s, Chinese doctors gradually integrated the Western anatomy of blood vessels and the nervous system, and they took note of the advances of modern chemistry over ancient and medieval alchemy. However, there was some skepticism towards the invasive surgical techniques employed by Western physicians.
- - - - - - - - -
Acupuncture, the practice of inserting needles into the body, predates imperial times but was developed further in imperial China. The practice was linked to the Taoist doctrine of qi and its circulation. The insertion points are located on invisible lines, or meridians, running the length of the human body, which were held to control certain physical conditions. Acupuncture, too, was updated following the encounter with Western medicine. As the scholar Benjamin A. Elman puts it in A Cultural History of Modern Science in China:
“In this cultural encounter, Chinese practitioners such as Cheng Dan’an (1899-1957) modernized techniques such as acupuncture. Cheng’s research enabled him to follow Japanese reforms by using Western anatomy to redefine the location of the needle entry points. His redefinitions of acupuncture thus revived what had become from his perspective a moribund field that was rarely practiced in China and, when used, also served as a procedure for bloodletting. Indeed, some have argued that acupuncture may have originally evolved from bloodletting. This Western reform of acupuncture, which included replacing traditional coarse needles with the filiform metal needles in use today, ensured that the body points for inserting needles were no longer placed near major blood vessels. Instead, Cheng Dan’an associated the points with the Western mapping of the nervous system. A new scientific acupuncture influenced by Japan and sponsored by Chinese research societies thus emerged alongside traditional acupuncture, providing with its better map of the human body an enhanced diagnosis of its vital and dynamic aspects. Similarly, Chinese doctors assimilated the discourse of nerves and the theory of germ contamination from Western medicine.”
Daniel Jerome Macgowan (1814–1893) and Benjamin Hobson (1816–1873), both physicians, were key pioneers in introducing Western medical and other sciences to China in the late 1840s and early 1850s. Macgowan was an American, initially serving as a medical missionary, who later became a freelance lecturer and writer. Translations were made of Western scientific works on electricity and the nervous system, and of Chinese classics into European languages. The English medical missionary Hobson, working in Hong Kong, prepared a series of scientific translations co-authored with Chinese scholars. The introduction of Western medicine began in the treaty ports, particularly Guangzhou, Ningbo and Shanghai.
However, as Elman says, “Meanwhile, outside the missionary hospitals and clinics in the treaty ports, Hobson’s translations were not popular due to the Chinese distaste for surgery. Minor surgical procedures such as cutting warts, lancing boils, cauterizing wounds, removing cataracts, and castration for eunuchs were relegated to the nonliterati majority of physicians. Hobson’s Treatise on Physiology and his Treatise on Midwifery introduced invasive surgery for childbirth, drawn from the anatomical sciences that had evolved in Europe since the sixteenth century. But although anatomy could pinpoint childbirth dysfunctions as happening in the uterus, interventions were dangerous even by Western standards until modern surgery integrated sterilization techniques with anesthetization procedures. Rather than invasive surgery for childbirth problems, Chinese physicians preferred practical therapies for women based on their holistic, interactive model of the human body.”
This does not mean that all Chinese resisted the new advances: “On the basis of his own examination of human corpses, Wang Qingren (1768-1831), one of the few Chinese physicians to take anatomy seriously, contended that all of the bodily depictions in the Chinese medical classics were inaccurate. His Corrections of Errors in the Forest of Medicine (1830, 1853) also maintained that the brain was the central organ of the body, a view that became more prominent after Protestant medical texts such as Hobson’s were translated into Chinese. Hobson’s work represented the first sustained introduction of the modern European sciences and medicine in the first half of the nineteenth century.”
Hobson introduced new knowledge in physics, chemistry, astronomy, geography and other disciplines, but it was always presented as God’s creation, and this did not always go down well with Chinese scholars. In the early seventeenth century, the European Christian missionaries were Catholics, chiefly Jesuits. They brought new knowledge in mathematics and astronomy to China, and a renewed interest in China back to Europe, but eventually they lagged behind in the sciences, were suppressed by the Catholic Church and lost out in competition with other Westerners. In the nineteenth century most of the Christian missionaries were Protestants, who were soon supplemented by other groups. As Benjamin Elman writes:
“Patrick Manson (1844-1922), a port surgeon and medical officer in the Imperial Chinese Customs Office since 1866, helped establish the London School of Tropical Medicine in 1898. Assigned for over two decades to Chinese treaty ports, Manson studied tinea, Calabar swelling, and blackwater fever before he developed a focus on tropical hygiene. He distinguished himself with his research on filariasis, a disease endemic in South China for which neither Chinese nor European medicine had a remedy. In particular, he observed in 1878 that the filariae worms causing elephantiasis passed part of their natural life cycle in the Culex mosquito, thus demonstrating transmission by parasites and explaining their natural history. Until the idea was unseated by the germ-parasite theory of disease in the late 1890s, Europeans regarded malaria as a miasma defined by human fever; Hobson himself associated malaria with putrid air. In the latter half of the nineteenth century, Western physicians tried to explain such extreme fevers by using a chill theory that described tropical illnesses according to the degree of change in an individual’s physiology. Hot days and cold nights produced such fevers, most thought. Such views overlapped with Chinese notions of cold- and heat-factor illnesses.”
Unlike Chinese astronomy, which was completely reworked in the seventeenth and eighteenth centuries by the introduction of European techniques, traditional Chinese medicine did not face a serious challenge until the mid-nineteenth century. Except for quinine therapy for malaria and a number of herbal medicines unknown in China, the medicine brought by European physicians did not achieve superior therapeutic results until a relatively safe procedure for surgery combining anesthesia and asepsis was developed towards the end of the nineteenth century. But as we have seen before, Europeans did have a superior understanding of human anatomy, based on centuries of systematic dissection since the Late Middle Ages.
According to the eminent British historian of medicine Roy Porter in The Greatest Benefit to Mankind: A Medical History of Humanity, “The idea of probing into bodies, living and dead (and especially human bodies) with a view to improving medicine is more or less distinctive to the European medical tradition. For reasons technical, cultural, religious and personal, it was not done in China or India, Mesopotamia or pharaonic Egypt.”
It is wrong to assume that human dissection was never practiced in traditional civilizations. I have seen evidence of isolated cases of dissection in Hellenistic Egypt, India and elsewhere. Dissection was apparently used in a limited way in forensic medicine in the Chinese justice system. The medical expert Song Tz’u, or Song Ci (1186–1249), combined historical cases of forensic science with his own experiences in the influential book Collected Cases of Injustice Rectified, written to avoid miscarriages of justice. However, in China there was no medical profession as we know it, and religious healers remained prominent until modern times. The state exploited useful knowledge across a wide range of applications, but the centralized bureaucracy could and sometimes did hamper advances in science. According to Toby E. Huff in The Rise of Early Modern Science: Islam, China and the West, second edition:
“It stands in striking contrast to the local and community-based inquest held before an elected or appointed jury in the English and continental traditions. That is, in both of those European cases, citizens from local communities were elected or appointed to serve as a jury, with the coroner acting as much as an agent of the community as the national or federation officials. Moreover, unlike the Chinese case, physicians were often brought in to examine the body. Examples of Italian physicians performing an autopsy in cases of suspicious deaths go back to the thirteenth century. Furthermore, physicians and surgeons in Europe already at this time — the thirteenth century when the Chinese manual of instruction to the magistrate was being written — belonged to legally autonomous guilds as well as to university faculties. Hence, they were already launched on a path to specialization in medical inquests (and especially the performance of autopsies and dissections) as well as the autonomous teaching of medicine, when Chinese authorities were centralizing medical examinations in the hands of non-specialists, namely magistrates and Judicial Commissioners who were not trained in medicine.”
Although sporadic cases of human dissection can be found in other civilizations, the sustained practice of human dissection by a trained body of medical practitioners, with the stated objective of understanding the workings of the human body, was an achievement of late medieval and Renaissance Europe.
According to Roy Porter, “In the short run, the anatomically based scientific medicine which emerged from Renaissance universities and the Scientific Revolution contributed more to knowledge than to health. Drugs from both the Old and New Worlds, notably opium and Peruvian bark (quinine) became more widely available, and mineral and metal-based pharmaceutical preparations enjoyed a great if dubious vogue (e.g., mercury for syphilis). But the true pharmacological revolution began with the introduction of sulfa drugs and antibiotics in the twentieth century, and surgical success was limited before the introduction of anaesthetics and antiseptic operating-room conditions in the mid nineteenth century. Biomedical understanding long outstripped breakthroughs in curative medicine, and the retreat of the great lethal diseases (diphtheria, typhoid, tuberculosis and so forth) was due, in the first instance, more to urban improvements, superior nutrition and public health than to curative medicine. The one early striking instance of the conquest of disease — the introduction first of smallpox inoculation and then of vaccination — came not through ‘science’ but through embracing popular medical folklore.”
Although the Chinese had a flawed theoretical understanding of the human body, they still managed to develop effective therapies for some conditions. Following a series of epidemics in the seventeenth century, new theories of disease gained adherents, postulating that some diseases entered through the nose or mouth and that some, such as smallpox and tuberculosis, could be communicated by contact. Though rudimentary, this hypothesis was at least as close to a realistic understanding of infectious diseases as the European medicine of the day, still wedded to the idea that diseases were caused by bad air (“malaria” means “bad air”). However, as we know, the later breakthroughs in understanding did take place in Europe, and they could not have happened without the European invention of the microscope.
Smallpox was present in China from an early date, and the disease almost certainly originated somewhere in Asia. Several Asian countries practiced some form of induced immunity through limited exposure to smallpox, since it had been recognized for centuries that some diseases never reinfect a person after recovery. However, there was no proper theoretical understanding of why the procedure worked, and it was apparently not applied to other infectious diseases. This crucial step was taken in Europe after the concept of inoculation had been imported to the continent from Asia via the Middle East.
The alchemist/chemist and physician Philippus Theophrastus Aureolus Bombastus von Hohenheim is better known as Paracelsus (1493-1541). The name means “equal to or greater than Celsus,” a Roman encyclopedist from the first century AD known for his medical work De Medicina. His mother was Swiss and his father was a physician who taught metallurgy and chemistry (alchemy) at a mining college in Austria. Paracelsus travelled widely across the European continent. He started his medical studies at the University of Basel, Switzerland, later moved to Vienna and eventually got his medical degree from the University of Ferrara in Italy. He was highly unorthodox and gifted, but also a practicing astrologer and “had something of the charlatan in him.” According to Michael Kennedy, Paracelsus on his travels learned of a peasant remedy known among the subjects of the Ottoman Empire to prevent smallpox. He thus became the first in Europe to recommend inoculation, two centuries before Jenner and Lady Montagu:
“He was a contemporary of Martin Luther and met him, but remained a Catholic. His revolutionary interest was to place chemistry at the center of medicine. He insisted on mixing chemical compounds from pure ingredients with standard formulas, a truism now, but unknown at that time of patent remedies with exotic ingredients. In 1527, he accomplished a spectacular cure of a prominent citizen of Basel — no one seems to know how — and this act brought him into contact with the famous scholar Erasmus. Paracelsus succeeded in curing Erasmus of gout and, through the influence of both his parents, was awarded the position of town medical officer of Basel. Paracelsus characteristically created problems for himself, when he declared that his lectures in Basel would be in German, not in Latin, and that barber-surgeons and midwives were welcome to attend. Luther had adopted German for religious writing and now Paracelsus followed the example. He rejected the four humours theory of disease and added that fermentation and putrefaction were at the center of biological functions. He advocated the use of chemistry in treatment of disease although he continued to hold some primitive beliefs similar to those of other cultures.”
Paracelsus burned the works of Galen and Avicenna, then still the authorities in medical education in Europe, sprinkling sulphur and nitre on the flames with spectacular results and proclaiming that “All the universities and all the ancient writers put together have less talent than my arse.” Needless to say, his lectures became public events, but he made enemies with his unorthodox views and behavior. He continued to be a difficult figure, but his work remained outstanding. In one case, he is alleged to have cured several cases of syphilis, then a new disease in Europe, probably introduced from the Americas by Columbus’s returning sailors and, for reasons no one fully understands, much more virulent than the modern form. Nevertheless, despite the colorful history of Paracelsus, the real breakthrough for the concept of inoculation in the Western world came in the eighteenth century.
In 1718 Lady Mary Wortley Montagu reported that the subjects of the Ottoman Empire deliberately inoculated themselves with fluid taken from mild cases of smallpox. Since the European medical profession was relatively well organized, new methods of variolation could be made known quickly. Several people in Europe pursued this idea in the late eighteenth century to combat the greatly feared and often lethal disease, but credit for popularizing the concept goes to the Englishman Edward Jenner (1749-1823).
According to scholar Stefan Riedel, “During the great epidemic of 1721, approximately half of Boston’s 12,000 citizens contracted smallpox. The fatality rate for the naturally contracted disease was 14%, whereas Boylston and Mather reported a mortality rate of only 2% among variolated individuals. This may have been the first time that comparative analysis was used to evaluate a medical procedure. During the decades following the 1721 epidemic in Boston, variolation became more widespread in the colonies of New England. In 1776, American soldiers under George Washington were unable to take Quebec from the British troops, apparently because of a smallpox epidemic that significantly reduced the number of healthy troops. The British soldiers were all variolated. By 1777, Washington had learned his lesson: all his soldiers were variolated before beginning new military operations.”
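The force of this early comparative analysis is easy to see if one works out the arithmetic implied by the figures Riedel quotes. The following minimal Python sketch is purely illustrative and uses only the numbers given in the quotation above; since the size of the variolated group is not stated, only the deaths among the naturally infected and the relative risk can be computed:

# Boston, 1721: figures as quoted from Riedel above.
population = 12_000                 # Boston's citizens
natural_cases = population // 2     # "approximately half ... contracted smallpox"
natural_fatality = 0.14             # 14% fatality, naturally contracted disease
variolated_fatality = 0.02          # 2% mortality among variolated individuals

natural_deaths = natural_cases * natural_fatality
risk_ratio = natural_fatality / variolated_fatality

print(f"Natural cases: {natural_cases}, deaths: {natural_deaths:.0f}")
print(f"Risk of dying, natural vs. variolated: {risk_ratio:.0f} to 1")
# -> about 6,000 cases and roughly 840 deaths, with a sevenfold difference
#    in risk between natural infection and variolation.

A sevenfold difference in mortality was large enough to be visible even without modern statistics, which is why the Boston comparison proved so persuasive.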
Jenner had heard tales that dairymaids were protected from smallpox naturally after having suffered from cowpox, a related but less dangerous disease. He tested this and found that the tales were true. The Latin word for cow is vacca, cowpox is vaccinia; Jenner therefore called the procedure vaccination and reported his findings to the Royal Society of London. The use of vaccination against smallpox spread rapidly in Europe during the early 1800s.
According to Riedel, “Jenner’s work represented the first scientific attempt to control an infectious disease by the deliberate use of vaccination. Strictly speaking, he did not discover vaccination but was the first person to confer scientific status on the procedure and to pursue its scientific investigation. During the past years, there has been a growing recognition of Benjamin Jesty (1737—1816) as the first to vaccinate against smallpox.” It was nevertheless Jenner’s relentless promotion and devoted research that changed the way medicine was practiced.
By the early 1800s, surgery in the Western world was still just as painful and dangerous as everywhere else, and the causes of disease were no better understood by Europeans than by others. This situation would change dramatically in the course of the nineteenth century, ushering in the greatest medical revolution in human history. As Roy Porter says:
“I devote most attention to what is called ‘western’ medicine, because western medicine has developed in ways which have made it uniquely powerful and led it to become uniquely global. Its ceaseless spread throughout the world owes much, doubtless, to western political and economic domination. But its dominance has increased because it is perceived, by societies and the sick, to ‘work’ uniquely well, at least for many major classes of disorders. (Parenthetically, it can be argued that western political and economic domination owes something to the path-breaking powers of quinine, antibiotics and the like.) To the world historian, western medicine is special. It is conceivable that in a hundred years time traditional Chinese medicine, shamanistic medicine or Ayurvedic medicine will have swept the globe; if that happens, my analysis will look peculiarly dated and daft. But there is no real indication of that happening, while there is every reason to expect the medicine of the future to be an outgrowth of present western medicine — or at least a reaction against it. What began as the medicine of Europe is becoming the medicine of humanity. For that reason its history deserves particular attention.”
René Théophile Hyacinthe Laennec (1781-1826), a French physician working at the Necker Hospital in Paris, invented the stethoscope in 1816. It gave physicians access to the sounds of the internal organs and was one of the most important advances in diagnosis prior to the discovery of X-rays by the German physicist Wilhelm Conrad Röntgen (1845-1923) in 1895. In 1819 Laennec published a treatise describing a wooden instrument that was applied to one ear, with the other end placed on the patient’s chest. In 1852 the American George Cammann invented the familiar instrument with rubber tubing and two earpieces.
Some medical improvements came from more rigorous application of the experimental method, for instance by the French physiologist Claude Bernard (1813-1878), widely regarded as one of the founders of experimental medicine. Many advances, however, depended upon advances in other scientific disciplines, for instance chemistry and microscopy. Countless instruments which we take for granted today followed on the heels of studies of electricity and electromagnetism during the nineteenth century.
The work of the Italian physicist Luigi Galvani (1737-1798) on bioelectricity in the late eighteenth century paved the way. Static electricity had been known since ancient times, but no instruments for generating an electric current existed before nineteenth-century Europe, and no civilization had ever made the connection between electricity and physiology. Alessandro Volta (1745-1827) invented the battery in 1800. The French physicist André-Marie Ampère (1775-1836) was one of the contributors to the development of the galvanometer, used for detecting and measuring electric current, as was the German physicist Georg Ohm (1789-1854), among others. In 1843 the Italian Carlo Matteucci (1811-1868) was able to measure the electrical current of muscle contraction using a galvanometer. In 1856 the German anatomist Heinrich Müller (1820-1864) and the Swiss anatomist Rudolph Albert von Kölliker (1817-1905) identified an electrical current generated by the frog heartbeat.
The Scottish electrical engineer Alexander Muirhead (1848-1920), a specialist in wireless telegraphy, recorded the first human electrocardiogram while working at St Bartholomew’s Hospital in London between 1869 and 1872. Gabriel Lippmann (1845-1921), born in Luxemburg but raised in Paris, invented the capillary electrometer in 1872, an instrument used to measure the electrical changes of the heart; in 1891 he also developed a method for reproducing colors photographically. The French scientist Étienne-Jules Marey (1830-1904), a pioneer of photography and cinema, devised a photographic technique in 1881 to record these measurements. The British physiologist Walter H. Gaskell (1847-1914) demonstrated the sinus node and the atrio-ventricular node in the turtle heart, and in 1887 the British scientist Augustus Desiré Waller (1856-1922) created the first real ECG (electrocardiogram) machine and succeeded in measuring cardiac electrical activity from the surface of the body.
The greatest breakthrough came with the Dutch doctor Willem Einthoven (1860-1927). He began to improve the capillary electrometer in 1893 while professor of physiology at Leiden, described the waves of the electrical recording and discovered that people with heart disease had different electrocardiogram tracings. He then began to develop a new machine which he called a “string galvanometer.” The first one was large and heavy and occupied two rooms, but it was also very accurate. The reports he published, starting in 1901, changed cardiology forever, recording many cardiac diseases and their effects on the ECG. Visitors came from all over Europe and North America to see the innovation. Einthoven was astonished during a visit to the USA in 1924 to see that an ECG technician could diagnose heart conditions just by looking at the ECG diagram. In 1928 the Cambridge Scientific Instrument Company of London built the first portable string electrocardiograph.
Other improvements were related to advances in chemistry. Surgery before modern anesthesia was extremely painful and was therefore conducted quickly and only when absolutely necessary. For hundreds of years people suffered unspeakably during operations. Physicians and healers did have some forms of pain relief prior to modern times, sometimes employing opium, cannabis incense, coca, tobacco or whatever other herbal anesthetics were locally available. Alternatively, the patient might drink vast amounts of alcohol prior to surgery. Yet they did not practice general anesthesia as we think of it today.
According to Michael Kennedy, “In 1800, surgery was conducted exactly as [French surgeon Ambroise] Paré had practiced in 1537. Operations were limited to amputations and drainage of abscesses with anal fistula surgery the most sophisticated and closest to modern physiological concepts. Fractures were set, but open fractures were fraught with danger from infection and amputation was often the safest course.” Moreover, “Speed was essential when pain could not be relieved. Baron Larrey, Napoleon’s chief surgeon, reported performing 200 amputations at the battle of Borodino in twenty-four hours: one amputation every seven minutes. No mention was made of how many survived, but in good conditions, the mortality rate was about forty percent. Great advances were being made in spite of the limitations early in the century.”
Scholar Joel Mokyr wonders why the discovery of general anesthesia happened so late in Europe, and not at all in China:
“Could anesthesia have been invented in China? Unlike optics, in this case there was no need here for some breakthrough in the underlying knowledge base, since little of that existed in the West either. Nobody in the mid nineteenth century had any idea how precisely ether, chloroform, or other substances knocked out the patient. The Chinese embarked on another route toward pain relief: instead of chemical intervention, their path led to physical means through acupuncture. Yet much of Chinese medicine was based on the use of herbal medicine and the prevalence of opium in the nineteenth century indicates that chemical intervention in sensatory bodily processes was by no means alien to them. Perhaps more plausible is the explanation that surgery itself was rare in China. Conditional on that premise, perhaps the Chinese should not have been interested in anesthesia. But this argument does not seem wholly satisfactory. Childbirth suffering presumably was not wholly culturally-determined. We need to ask what it was, if anything, in Chinese culture that made surgery unacceptable….There was not one but many types of Chinese medicine…Yet none of them resulted in the adoption of surgery as a widely practiced form of medicine outside cataract surgery.”
Mokyr concludes that Western science itself was not “inevitable.” One early development of anesthesia has been claimed in East Asia, but in Japan, not in China. The Japanese physician Seishu Hanaoka (1760-1835) has been credited with performing the first known surgery using general anesthesia in the form of an oral compound composed of a number of traditional plant-based drugs, among them mandragora (mandrake root), on a breast cancer patient in 1804. He had learnt traditional Chinese medicine as well as Dutch-imported European surgery, which inspired him to conduct experiments not previously performed in East Asia. However, due to the isolationist policies of the Tokugawa Shogunate his achievements were not known abroad. The development of general anesthesia in the Western world, which was later exported to other continents, happened along very different lines.
The discovery that a number of substances can render a patient unconscious without long-term damage happened surprisingly late. In the late eighteenth century, great advances were being made in chemistry, especially regarding the nature of various gases. Nitrous oxide was discovered by the Englishman Joseph Priestley (1733-1804), but he did not understand its anesthetic properties. The investigation of the different gases in the atmosphere led to a faddish enthusiasm for “pneumatic medicine,” the inhalation of various gases. The young English scientist Humphry Davy (1778-1829) tried inhaling nitrous oxide and stumbled upon the idea of using it for anesthesia, but still failed to see its full potential. Previous attempts at pain relief had used opium, mandrake root or mandragora (which produced Juliet’s death-like coma in Shakespeare’s play Romeo and Juliet) or atropine, all with inadequate effect. Hyoscyamine, derived from henbane (sometimes called the poor man’s opium), had been known since ancient Egypt and may have been used by the Greeks at the Oracle at Delphi to induce hallucinations.
In the United States, nitrous oxide (often called laughing gas) was popular at parties and fairs, but no medical application was considered until 1844, when the dentist Horace Wells (1815-1848) attended a fair and watched a demonstration. It occurred to him that the gas could be used for tooth extraction, and he offered himself as a candidate: while he was under its effect, a molar was extracted by his fellow dentist John Riggs. Although nitrous oxide could be useful for dentistry, Wells’s apparatus was not capable of producing enough depth of anesthesia for major surgical operations.
Morphine, a purified alkaloid named after the Greek god of dreams, Morpheus, was discovered by the German apothecary Friedrich Sertürner (1783-1841) in 1803-1805. It was enthusiastically received and aided the development of the modern pharmaceutical industry. The first practical hypodermic syringe, invented independently in 1853 by the French surgeon Charles Gabriel Pravaz (1791-1853) and the Scottish physician Alexander Wood (1817-1884), led to increased use of morphine as a painkiller. Morphine remains very useful in many cases, but it was gradually understood to be highly addictive as well.
Ether had been manufactured since the eighteenth century for use as a solvent, but it was not applied to surgery until 1842. In fact, ether had been synthesized centuries earlier and was known as “sweet vitriol.” As Kennedy states: “Raymundus Lullius, a Spanish alchemist, first produced ether in 1275. He found that, if vitriol (sulphuric acid) was mixed with alcohol and distilled, a sweet white fluid resulted. Valerius Cordus rediscovered ether in 1540 and named it ‘sweet oil of vitriol.’ Paracelsus used the same chemical to relieve pain about the same time, but the concept of surgical anesthesia did not occur to him. It was renamed ether (or sulphuric ether) in 1730 and was used as an expectorant to bring up phlegm in respiratory illnesses. In 1815, Michael Faraday, Davy’s assistant, noted that ether could produce an effect similar to laughing gas and ‘ether frolics’ soon became popular.”
The American physician Crawford Long (1815-1878) became the first person known to have performed a surgical operation under general anesthesia induced by ether. Long, a doctor in Jefferson, Georgia, had attended ether parties (ether was popular entertainment before its surgical use) and noticed that pain was absent under its effects. In March 1842 he used it to remove a cyst from the neck of a boy, James Venable, who remained unconscious and felt no pain. Long’s practice became successful, but he did not publish his results until 1849 and has therefore often been ignored in historical accounts. William Clarke, a physician from New York, suggested the use of ether for extracting teeth to his dentist Elijah Pope, who performed the first successful use of ether in dentistry in January 1842. Like Long, however, Pope did not publish his results at the time.
The so-called ether controversy over who should be credited with introducing ether as a general anesthetic involved several Americans, among them William T. Morton (1819-1868) and Charles Jackson (1805-1880). Long may not have been personally responsible for the worldwide dissemination of the idea, and he did not become involved in the controversy. Morton, however, had visited Georgia in 1842, when Long performed his first anesthesia, and was probably aware of it, since it caused a sensation in that state at the time. Morton arranged a famous public demonstration of diethyl ether (then called sulfuric ether) as an anesthetic agent in October 1846 at the Massachusetts General Hospital. After this, the use of ether spread rapidly throughout the Western world.
According to Kennedy, “Ether, stronger and more effective than nitrous oxide, was tried in Europe with equal success and a new age dawned. Robert Liston, an English surgeon known for speed, who held his knife in his teeth when not using it for cutting, performed an amputation of the thigh under ether anesthesia in December 1846, only two months after Morton’s demonstration. Present in the audience was medical student Joseph Lister who would conquer the next hurdle in surgery. After completing the pain-free operation Liston declared, ‘This Yankee dodge, gentlemen, beats mesmerism (hypnotism) hollow.’ World acceptance was rapid and ether was used in the Crimean War on battle casualties.”
Chloroform was discovered by the American physician Samuel Guthrie (1782-1848) in 1831, and independently at almost the same time by the French pharmacist Eugène Soubeiran (1797-1859) and the great German chemist Justus von Liebig (1803-1873). Clearly, it was a discovery whose time had come. Chloroform was named and classified in 1834 by the French chemist Jean-Baptiste Dumas (1800-1884), but he did not understand its medical usefulness. Its anesthetic properties were noted in 1847 by another Frenchman, Marie Jean Pierre Flourens (1794-1867). Because it caused less lung irritation and vomiting, chloroform tended to replace ether once its potential was grasped. In 1847, the Scottish obstetrician James Young Simpson (1811-1870) became the first to use chloroform for general anesthesia during childbirth. After this, its use expanded rapidly in Europe. In 1853 Britain’s Queen Victoria (1819-1901) took chloroform during the birth of Prince Leopold, administered by the English physician John Snow (1813-1858), who later published a book on chloroform describing its use in anesthesia.
There were a few protests against these developments, some on religious grounds (was not pain ordained by God?) and some on medical grounds. There were, and still are, risks associated with general anesthesia. Ether would eventually prove to be the safer agent, ironically partly because of the lung irritation it causes, which stimulates breathing. Chloroform carries a risk of liver damage, and the mortality rate of surgery under chloroform was eventually shown to be higher than that under ether.
The West and Global Mathematics
I’ve been reading Greek Thought, Arabic Culture by Dimitri Gutas, which is a surprisingly boring book. Gutas treats the Arabic translation movement in Baghdad as a major achievement. It was an achievement in some ways, but he admits that the translators benefited greatly from the pre-established Zoroastrian Persian ideology of translation and libraries. He also admits that Muslims translated only scientific works, not literary ones such as the Homeric epics. He briefly mentions that they translated some Sanskrit and Persian works, but says almost nothing about the Indian ones. The Indian numeral system was significant and deserved mention, although the Greek texts were clearly the most important.
Muslims had access to Greek, Persian and Sanskrit works. Theoretically speaking, they could have explored the Indo-European linguistic tree. But they didn’t; Europeans did. It is true that Muslims produced some worthwhile works in mathematics, but we should remember that the three most important mathematical traditions in the ancient world were the Greek, the Mesopotamian (which the Greeks had learned from, and which the Persians continued) and the Indian. This means that Middle Eastern Muslims had direct access to all of the most important mathematical traditions on Earth simultaneously. They did make some progress in algebra, but given this inheritance it would almost have been surprising if they had not produced any significant mathematical works.
One book on my reading list which I haven’t yet read is The Shape of Ancient Thought: Comparative Studies in Greek and Indian Philosophies by Thomas McEvilley. My impression from what I have read about it is that he places too much emphasis on the Indian influence on Greek culture. Everybody says nowadays that Greek culture was “really” invented somewhere else (think Black Athena). We do know that the Egyptians, Mesopotamians and Phoenicians influenced the Greeks, but the Greeks openly admitted this, and these cultures all belonged to the Eastern Mediterranean world, whereas India was far away.
There is a school of thought which claims that Plato’s political system in The Republic mirrors the Hindu caste system, and that Greek atomism was imported from Indian atomism. I haven’t seen convincing evidence of this so far, and it is difficult to see how such influence could have been transmitted to Greece prior to Hellenistic times, but the question is worth exploring. We can find traces of a commonly shared Proto-Indo-European mythological heritage with India, however faint.
- - - - - - - - -
The Chinese mathematical tradition was significant, but less influential than the Indian one. I would be tempted to say that China was a hardware civilization whereas India was a software civilization. The truth is that, given the size of their economy and their population, the Chinese were surprisingly weak in mathematics and in the abstract sciences in general. This shows that although some minimum level of wealth is certainly a necessary condition for the growth of modern science (extremely poor people concentrate on surviving, not on inventing calculus or comparative linguistics), it is by no means a sufficient one. The Chinese believed the Earth was flat until the seventeenth century AD, and they only corrected this error after their astronomy had been virtually displaced by European astronomy. This was after European (Greek) astronomers had known for more than two thousand years that the Earth is spherical, knowledge which, despite popular myths to the contrary, was never lost during medieval times.
Asian rockets weighed a couple of kilograms at most and were powered by gunpowder. None of them could have overcome the Earth’s gravity, left the atmosphere and explored the Solar System. In fact, Asians never coined the concept of “gravity” in the first place. Space travel is the invention of only one civilization, the Western one. None of the Asian nations ever came remotely close to achieving something similar on their own, not even the Japanese. In fact, without Europeans mankind might not have been able to explore the Solar System for many centuries to come.
From the fourteenth century AD, which is to say the Italian Renaissance, until the twentieth century, almost all important global advances in mathematics were European. I would be tempted to say that European leadership was stronger in mathematics than in any other scholarly discipline. Perhaps the simplest explanation for why the Scientific Revolution happened in Europe is because the language of nature is written in mathematics, and Europeans did more than any other civilization to develop — or discover — the vocabulary of this language.
Stained Glass: A European History
I will publish a multipart essay on the history of optics at the website Jihad Watch later this month. One of the parts will be about the history of glass, a fascinating subject which most of us rarely think about. We often talk about how much we owe to the ancient Greeks, but when it comes to the use of glass, we owe much more to the Romans than to the Greeks.
Today we see huge glass windows in every major city in the world, but many people don't know that the Romans were the first to use glass for architectural purposes, and the first to make glass windows. The Roman legacy of glassmaking survived the fall of the Roman Empire and was carried in different directions. Under the influence of Christianity, glazed windows were introduced, particularly in churches, and the further development of painted and stained glass became one of the most decorative uses of the craft. Here is a quote from the book Glass: A World History by Alan Macfarlane and Gerry Martin, page 20:
"There are references to such windows from fifth century France at Tours, and a little later from north-east England, in Sunderland, followed by developments at Monkwearmouth, and in the far north at Jarrow dating to the period between 682 and c.870. By AD 1000 painted glass is mentioned quite frequently in church records, for example in those of the first Benedictine Monastery at Monte Cassino in 1066. It was the Benedictine order in particular that gave the impetus for window glass. It was they who saw the use of glass as a way of glorifying God through their involvement in its actual production in their monasteries, injecting huge amounts of skill and money into its development. The Benedictines were, in many ways, the transmitters of the great Roman legacy. The particular emphasis on window glass would lead into one of the most powerful forces behind the extraordinary explosion of glass manufacture from the twelfth century."
This story is explored further in the book The History of Stained Glass by Virginia Chieffo Raguin.
Page 10: "Stained glass, considered a precious object, was linked in the twelfth and thirteenth centuries to the aesthetics of precious stones and metalwork; it therefore received a place of honour in the building that housed it […]. The importance of stained glass and gems may be explained by a prevailing attitude toward light as a metaphor in premodern Europe. In the Old Testament light is associated with good, and darkness with God's displeasure. The very first verses of Genesis announce to the reader that 'the earth was void and empty, and darkness was upon the face of the deep', then God created light and 'saw the light, that it was good' (Genesis 1:2-3). Light was associated with knowledge and power, 'the brightness of eternal light, and the unspotted mirror of God's majesty' (Wisdom 7:26). Light also functioned as a symbol of God's protection."
Page 32: "Traditionally, stained glass is used as an architectural medium and, as such, it is integral to the fabric of a building; not only, or always, a work of art, but also a screen letting in and modifying the light and keeping out the elements. Its development as a major art form in the Middle Ages was dependent on the needs of a powerful client, the Christian Church, and the evolution of architectural forms that allowed for ever larger openings in the walls of both humble churches and great cathedrals, producing awe-inspiring walls of coloured light. Its exact origins are uncertain. Sheets of glass, both blown and cast, had been used architecturally since Roman times. Writers as early as the fifth century mention coloured glass in windows. Ancient glass was set in patterns into wooden frames or moulded and carved stucco or plaster, but each network had to be self-supporting, which limited the kinds of shapes that could be used. When or where strips of lead were first employed to hold glass pieces together is not recorded, but lead's malleability and strength greatly increased the variety of shapes available to artists, giving them greater creative freedom."
Excavations at Jarrow in northern England have yielded strips of lead and unpainted glass cut to specific shapes from the seventh to the ninth centuries. Benedictine monks played an important role in the spread of stained glass, as in many other things. Often cited as the first Gothic construction, the choir of the Abbey of Saint-Denis, 1140-44, gives an important place to stained glass. We should remember that in the twelfth century, monks were still in some ways the elite class of society.
Raguin, page 63: "The windows they commissioned reflected not only their erudition but also their method of prayer: gathering several times a day in the choir area of the church to pray communally, primarily by singing psalms. The monks remained in the presence of the works of art they set in these spaces. With the construction of his abbey's new choir, Abbot Suger (1081-1155) of Saint-Denis installed a series of windows exemplary of monastic spirituality and twelfth-century visual thinking. Suger, a man of unusual determination and management skills, was a trusted advisor of Louis VII, who reigned from 1137 to 1180. Responding to the call of Bernard of Clairvaux, Louis embarked on the unsuccessful Second Crusade, 1147-49, leaving Suger to act as regent of France in his absence. The abbot's influence with the monarchy consolidated Saint-Denis's place as the site of burial for French kings and the repository of the regalia – crown, sceptre, spurs, and other ceremonial objects – of coronation (coronations themselves, however, were held in the cathedral of Reims). Suger rebuilt the eastern and western ends of the church around 1141-44, using revolutionary vaulting and construction techniques that proclaimed the new Gothic style."
Stained glass was also used in many great medieval cathedrals, for instance Chartres Cathedral and Reims Cathedral in France, Cologne Cathedral in Germany, York Minster in England and Florence Cathedral in Italy. Nothing similar could be found in any other civilization at the time.
A History of the Indo-European Languages
Whenever I get tired of Islam, which happens increasingly often, I read about some other topic which I find fascinating. Lately, I've concentrated on the history of the Indo-European language family and why it spread in the first place. I will tell some of the tale here, with emphasis on Persian, Sanskrit and especially Greek. Here is what Nicholas Ostler says in Empires of the Word: A Language History of the World, page 239:
"The Greek language was spread from its historic home, the southern Balkan peninsula and Aegean islands, through two processes, one piecemeal, long lasting and diffuse in its direction, the other organised, sudden and breathtakingly coherent. One is usually known as the Greek colonisation movement; the other is Alexander's conquest of the Persian empire. The first process, the colonisation of the Mediterranean and Black Sea coasts by Greek cities, lasted from the middle of the eighth to the early fifth century BC. The question why, of all the inhabitants of these shores, only the Greeks and the Phoenicians set up independent centres in this way has never been answered. The foundations clearly served a variety of purposes, as political safety valves, as trading posts for raw materials, and as opportunities to apply Greek agriculture to more abundant and less heavily populated soil, but it is noteworthy that they are exclusively coastal, never moving inland except on the island of Sicily. The Greek expansion came after the period of Phoenician settlements (eleventh to eighth centuries), so it may be that the most important factor was who had effective control of the sea."
The Greeks in the second millennium BC used a system of writing called Linear B, but this was a cumbersome script with a limited number of users, and the knowledge of its use was lost after 1200 BC. This "Dark Age" was a troubled period in the entire Eastern Mediterranean region. Exactly when writing returned to the Greek world is a matter of some controversy. Early graffiti on pottery has been found which is believed to date to around 800 BC or perhaps the mid-eighth century, though an earlier date is conceivable. Despite a claim that it was borrowed from a Canaanite alphabet ca. 1150 BC, it is generally agreed that the origin of the Greek alphabet is to be found in the Phoenician alphabet in the early first millennium BC. Here is Jonathan M. Hall in A History of the Archaic Greek World: ca. 1200-479 BCE, page 57:
"[M]ost scholars are agreed that it is an adaptation of the later Phoenician, or Northwest Semitic, script. Greek scripts display notable local characteristics – principally with regard to the shape of letters but also in the matter of the phonetic values attributed to signs such as san, sigma, khi, and psi. All local Greek scripts, however, share important divergences from the Phoenician prototype, notably in the reutilization of certain Semitic consonantal symbols to represent vowels and perhaps – though the evidence for the Southern Aegean scripts is ambiguous – in the creation of three new symbols to represent aspirated plosives (phi, khi, and psi). These shared divergences would suggest that the Greek alphabet was born in one place only, in a single moment and perhaps as a result of the initiative of a single creator. Local differences would have arisen only subsequently. What is less clear is where such a transmission took place and whether our earliest extant graffiti are really the first examples of writing or whether writing was actually practised earlier but on more perishable items such as skins or wood that have not survived in the archaeological record."
The place of transmission of the alphabet is not known but is usually assumed to have been somewhere in the Eastern Mediterranean where Greeks and Phoenicians came into regular contact with one another, for instance Cyprus or Crete. The most complete poetic works from the Greek archaic period are the Iliad and the Odyssey, traditionally ascribed to Homer, and the Theogony and Works and Days, assigned to Hesiod. There is still debate regarding these works, but the internal literary unity of these four poems points to a single author for each of them. Ancient authors provided varying estimates for when Homer and Hesiod lived: Herodotus dates them to the late ninth century BC, Strabo to the mid-seventh. The ancients assumed that Homer was an historical person, whereas modern scholars are more skeptical of this view. The Homeric epics purport to portray the distant world of a Heroic Age. Earlier assumptions that this world matched the Mycenaean palatial civilization of the sixteenth to thirteenth centuries BC were dispelled after the decipherment of the Linear B tablets, which revealed a society structured very differently from that depicted by Homer. Jonathan M. Hall again, page 25:
"For some, Odysseus' wanderings reflect the great age of colonization in the last third of the eighth century, but others regard them as more indicative of a 'protocolonial' phase dating to the late ninth century. Hesiod's reference (Th. 490-500) to the sanctuary at Delphi could belong to any time after ca. 800 – the date from which cultic activity is first attested at the shrine. Descriptions in the Homeric epics of weaponry and battle tactics seem to presuppose the advent of hoplite warfare, which is normally dated to the first half of the seventh century. Finally, it has been suggested that the Homeric description of Achilles' shield (Il. 18.468-608) parallels early seventh-century Cypro-Phoenician metal vessels and that the premonition of the sack of Troy in the Iliad (12.17-32) consciously echoes accounts of the sack of Babylon at the hands of the Assyrian king Sennacherib in 689. For these reasons, there is a growing view among scholars that the Homeric and Hesiod poems date to the first half of the seventh century but no universal agreement has been reached and detailed chronological arguments based exclusively on the supposed dates of the poems are untenable."
Greek, the Indo-European language of the palace-centered Bronze Age warrior kings who ruled at Mycenae and other strongholds, is definitely attested in the mid-second millennium BC. The breakthrough in the decipherment of the Linear B tablets was made by the Englishmen Michael Ventris (1922–1956) and John Chadwick (1920–1998) in the early 1950s. Ventris was himself surprised to discover that the language was an early form of Greek. Here is David W. Anthony in The Horse, the Wheel, and Language: How Bronze-Age Riders from the Eurasian Steppes Shaped the Modern World, page 48-49:
"The Mycenaean civilization appeared rather suddenly with the construction of the spectacular royal Shaft Graves at Mycenae, dated about 1650 BCE, about the same time as the rise of the Hittite empire in Anatolia. The Shaft Graves, with their golden death masks, swords, spears, and images of men in chariots, signified the elevation of a new Greek-speaking dynasty of unprecedented wealth whose economic power depended on long-distance sea trade. The Mycenaean kingdoms were destroyed during the same period of unrest and pillage that brought down the Hittite Empire about 1150 BCE. Mycenaean Greek, the language of palace administration as recorded in the Linear B tablets, was clearly Greek, not Proto-Greek, by 1450 BCE, the date of the oldest preserved inscriptions. The people who spoke it were the models for Nestor and Agamemnon, whose deeds, dimly remembered and elevated to epic, were celebrated centuries later by Homer in the Iliad and the Odyssey. We do not know when Greek speakers appeared in Greece, but it happened no later than 1650 BCE. As with Anatolian, there are numerous indications that Mycenaean Greek was an intrusive language in a land where non-Greek languages had been spoken before the Mycenaean age."
David W. Anthony believes that the "Proto-Indo-European homeland was located in the steppes north of the Black and Caspian Seas in what is today southern Ukraine and Russia," which is the most commonly cited alternative (and the one that I happen to favor, too), but by no means the only one. The homeland, or Urheimat, in which Proto-Indo-European (PIE) originated and from which it spread has been sought for more than 200 years. It is in fact easier to establish when PIE was spoken than where, although there is dissent on this point as well.
The Proto-Indo-European language is not historically recorded, which obviously makes our task much harder, but we can use its daughter languages and through comparative linguistics reconstruct with some degree of accuracy much of the vocabulary which existed in the mother language before it separated into different branches. We know that the people who spoke PIE were familiar with wheeled vehicles. The earliest archaeological evidence we currently have for wheeled vehicles anywhere on Earth dates from about 3500 BC and is found in Eastern and Central Europe. PIE contains words for silver, which was not known much before 4000 BC. Wool, the product of selectively bred sheep, also appears largely to be a development of the fourth millennium BC, although the dating here is less precise than with wheels.
All things considered, the Proto-Indo-European language which has been reconstructed by leading linguists over the past two centuries contains words for a technological package which probably did not exist before 4000 BC, possibly not even before 3500 BC. PIE must thus have been a living language during the fourth millennium BC (a small sketch of this dating logic follows the quotation below). It is likely that a very early form of PIE existed before 4000 BC and a very late form slightly after 3000 BC. Before and around 3000 BC, Proto-Indo-European was rapidly expanding geographically and gradually breaking apart into what would later emerge as the different Indo-European branches. Scholars J. P. Mallory and D. Q. Adams tell the tale in The Oxford Introduction to Proto-Indo-European and the Proto-Indo-European World, page 103:
"[I]ndividual Indo-European groups are attested by c. 2000 BC. One might then place a notional date of c. 4500-2500 BC on Proto-Indo-European. The linguist will note that the presumed dates for the existence of Proto-Indo-European arrived at by this method are congruent with those established by linguists' 'informed estimation'. The two dating techniques, linguistic and archaeological, are at least independent and congruent with one another. If one reviews discussions of the dates by which the various Indo-European groups first emerged, we find an interesting and somewhat disturbing phenomenon. By c. 2000 BC we have traces of Anatolian, and hence linguists are willing to place the emergence of Proto-Anatolian to c. 2500 BC or considerably earlier. We have already differentiated Indo-Aryan in the Mitanni treaty by c. 1500 BC so undifferentiated Proto-Indo-Iranian must be earlier, and dates on the order of 2500-2000 BC are often suggested. Mycenaean Greek, the language of the Linear B tablets, is known by c. 1300 BC if not somewhat earlier and is different enough from its Bronze Age contemporaries (Indo-Iranian or Anatolian) and from reconstructed PIE to predispose a linguist to place a date of c. 2000 BC or earlier for Proto-Greek itself."
How was the Indo-European language family discovered? Similarities between European languages had been known for a long time, but a systematic study of them appeared gradually in early modern Europe. For instance, scholar Joseph Scaliger constructed language groups based on their word for "god," i.e. the Deus group (from Latin deus, with variations in the Romance languages), the Gott group (from Germanic god or Gott) and the Bog group (from Slavic bog). Suggestions of similarities between Indian and European languages began to be made by European visitors to India in the sixteenth century. Mallory and Adams, page 4:
"Joseph Scaliger (1540-1609), French (later Dutch) Renaissance scholar and one of the founders of literary historical criticism, who incidentally also gave astronomers their Julian Day Count, could employ the way the various languages of Europe expressed the concept of 'god' to divide them into separate groups; in these we can see the seeds of the Romance, Germanic, and Slavic language groups. The problem was explaining the relationship between these different but transparently similar groups. The initial catalyst for this came at the end of the sixteenth century and not from a European language. By the late sixteenth century Jesuit missionaries had begun working in India – St Francis Xavier (1506-52) is credited with supplying Europe with its first example of Sanskrit, the classical language of ancient India, in a letter written in 1544 (he cited the invocation Om Srii naraina nama). Classically trained, the Jesuits wrote home that there was an uncanny resemblance between Sanskrit and the classical languages of Europe. By 1768 Gaston Cœurdoux (1691-1777) was presenting evidence to the French Academy that Sanskrit, Latin, and Greek were extraordinarily similar to one another and probably shared a common origin."
The correspondences between the language of ancient India and those of ancient Greece and Rome were too close to be dismissed as chance. The date usually seen as the birth of Indo-European studies is 1786, when the Englishman Sir William Jones (1746-94) gave a speech to the Asiatic Society in Calcutta, India. Jones was a gifted classical scholar and is said to have known thirteen languages well, and twenty-eight fairly well, at the time of his death, among them Arabic and Persian. In 1783 Jones was knighted and appointed judge at the high court in Calcutta; he arrived in India that year and remained until his death in 1794. He transformed the intellectual and cultural life of India when he founded the Asiatic Society of Bengal and the associated journal, Asiatick Researches, dedicated to the scientific study of the languages, literature, science, history, and philosophy of India. In 1786, Jones elaborated a theory of the common origins of most European languages and those of much of India, an intuition that marks the beginning of comparative-historical linguistics:
"The Sanskrit language, whatever be its antiquity, is of a wonderful structure; more perfect than the Greek, more copious than the Latin, and more exquisitely refined than either, yet bearing to both of them a stronger affinity, both in the roots of verbs and in the forms of grammar, than could possibly have been produced by accident; so strong indeed, that no philologer could examine them all three, without believing them to have sprung from some common source, which, perhaps, no longer exists: there is a similar reason, though not quite so forcible, for supposing that both the Gothic [Germanic] and the Celtic, though blended with a very different idiom, had the same origin with the Sanskrit; and the Old Persian might be added to the same family, if this were the place for discussing any question concerning the antiquities of Persia."
He was deeply interested in the cultures of the Middle East, India and Asia. As Ibn Warraq says in his book Defending the West: A Critique of Edward Said's Orientalism, page 190-191:
"Jones was forever emphasizing the similarities between India and Greece, or pointing out Europe's debt to Indian philosophy, or hinting at a common source for the two great civilizations, writing, for instance, in the third anniversary discourse that it was impossible 'to read the Vedanta, or the many fine compositions in illustration of it, without believing that Pythagoras and Plato derived their sublime theories from the same fountain with the sages of India.'…With his work on Indian chronology, and having created a solid framework for the understanding of India's past, Jones, in effect, can be considered the father of Indian history. Jones's translation of Sacontala (Shakuntala) had an enormous influence in Europe, inspiring Schiller, Novalis, Schlegel, and Goethe, who used its introductory scene as a model for the 'Vorspiel auf dem Theater' of Faust (1797). But even more remarkably, the collection, printing, and translations of Sanskrit texts by Jones and other Orientalists made available for the first time to Indians themselves aspects of their own civilization, changing forever their own self-image. Until now, these texts had only been accessible to a narrow coterie of Brahmins."
By 1800 a preliminary model for this language family had been constructed. The English polymath Thomas Young (1773-1829) first used the term Indo-European in 1813. Young is remembered for, among other things, his studies of the properties of light and his contributions to the development of Egyptology. The French philologist Jean-François Champollion (1790-1832) is rightly credited with having deciphered Egyptian hieroglyphs from the trilingual Rosetta Stone in 1822, but contributions had been made by Young and others, such as the Swedish orientalist Johan David Åkerblad (1763-1819). In the early nineteenth century, progress in comparative grammar was made by the German linguist Franz Bopp and the Danish philologist Rasmus Rask, and by 1868 Indo-European studies had advanced far enough for the German linguist August Schleicher (1821-1868) to publish the first artificial text composed in the reconstructed language, Proto-Indo-European (PIE). Mallory and Adams, page 6:
"The language family came to be known as Indo-Germanic (so named by Conrad Malte-Brun in 1810 as it extended from India in the east to Europe whose westernmost language, Icelandic, belonged to the Germanic group of languages) or Indo-European (Thomas Young in 1813). Where the relationship among language groups were relatively transparent, progress was rapid in the expansion of the numbers of languages assigned to the Indo-European family. Between the dates of the two early great comparative linguists, Rasmus Rask (1787-1832) and Franz Bopp (1791-1867), comparative grammars appeared that solidified the positions of Sanskrit, Iranian, Greek, Latin, Germanic, Baltic, Slavic, Albanian, and Celtic within the Indo-European family. Some entered easily while others initially proved more difficult. The Iranian languages, for example, were added when comparison between Iran's ancient liturgical texts, the Avesta, was made with those in Sanskrit. The similarities between the two languages were so great that some thought that the Avestan language was merely a dialect of Sanskrit, but by 1826 Rask demonstrated conclusively that Avestan was co-ordinate with Sanskrit and not derived from it. He also showed that it was an earlier relative of the modern Persian language."
The closest relative of English is Frisian, with Dutch next in line among the Germanic languages. The Old Persian language was spoken in the sixth and fifth centuries BC by those who founded the Achaemenid Persian Empire, Cyrus the Great and Darius I the Great. The history of Sanskrit is equally fascinating. J. P. Mallory and D. Q. Adams, page 32-33:
"The ancient Indo-European language of India is variously termed Indic, Sanskrit, or Indo-Aryan. While the first name is geographically transparent (the people of the Indus river region), Sanskrit refers to the artificial codification of the Indic language about 400 BC, i.e. the language was literally 'put together' or 'perfected', samskrta, a term contrasting with the popular or natural language of the people, Prakrit. Indo-Aryan acknowledges that the Indo-Europeans of India designated themselves as Aryans; as the Iranians also termed themselves Aryans, the distinction here is then one of Indo-Aryans in contrast to Iranians (whose name already incorporates the word for 'Aryan'). The earliest certainly dated evidence for Indo-Aryan does not derive from India but rather north Syria where a list of Indo-Aryan deities is appended to a treaty between the Mitanni and the Hittites. This treaty dates to c. 1400-1330 BC and there is also other evidence of Indo-Aryan loanwords in Hittite documents. These remains are meagre compared with the vast religious and originally oral traditions of the Indo-Aryans. The oldest such texts are the Vedas (Skt. veda 'knowledge'), the sacred writings of the Hindu religion. The Rgveda alone is about the size of the Iliad and Odyssey combined."
Dating for the Rgveda or Rig Veda is usually estimated at around 1200 BC, give or take a couple of centuries. Great attention was given to the spoken word in traditional Indian culture; these important texts therefore probably haven't changed much over the centuries. A distinction must be made between Vedic Sanskrit and the later Classical Sanskrit of the first millennium BC onwards. The literary output in Sanskrit was enormous and included not only religious texts but also drama and scientific works.
In the late second and first millennia BC the distribution of the Iranian languages was far greater than it is today, stretching from Central Asia to China and the Black Sea. There are two groups, Eastern and Western Iranian. The Eastern branch is earliest attested in the form of Avestan, the liturgical language of the religion founded by Zarathustra, or Zoroaster as he was known to the Greeks. Zoroastrianism achieved prominence in the Achaemenid Persian Empire. Pockets of followers exist in India and Iran to this day, but in greatly diminished numbers due to later Islamic persecution. Mallory and Adams, page 33-34:
"The Avesta is a series of hymns and related material that was recited orally and not written down prior to the fourth century AD. Unlike the Rgveda, the integrity of its oral transmission was not nearly so secure and there are many difficulties in interpreting the earlier passages of the document. These belong to the Gathas, the hymns reputedly composed by Zarathustra himself; there is also much later material in the Avesta. The dates of its earliest elements are hotly disputed but generally fall c. 1000 BC and are presumed to be roughly contemporary with the Rgveda. Eastern Iranian offers many other more recently attested languages that belong to the Middle Iranian period….The European steppelands were occupied by the nomadic Scythians in the west and the Saka in the east, and what little evidence survives indicates that these all spoke an East Iranian language as well. The Saka penetrated what is now western China and settled along the southern route of the Silk Road in the oasis town of Khotan….The most important modern East Iranian language is Pashto, the state language of modern Afghanistan. The West Iranian languages were carried into north-west Iran by the Persians and Medes."
As mentioned before, we can see from the reconstructed PIE lexicon that the people who spoke Proto-Indo-European were familiar with wheeled vehicles and had terminology for wheels, axles, shafts and yokes, but there are indications that these words and objects were recent adoptions at the time. The earliest attested wheels are solid, tripartite disc wheels. The invention of the spoke, which made wheels much lighter and transportation therefore swifter, happened later, with spoked wheels appearing around 2500-2000 BC. The speakers of PIE had some knowledge of water transport, but the terminology relating to boats suggests little more than canoes or similar small craft suitable for crossing rivers or lakes.
In The Making of Bronze Age Eurasia, Philip L. Kohl writes about early clay models of disc wheels and about remains of wooden wheels and wagons with solid wooden wheels. Page 85:
"Such vehicles are among the earliest known examples of wheeled transport found on the Eurasian steppes. They may be roughly contemporaneous with or perhaps a few hundred years later than the now earliest well-documented carts from moors in northwestern Germany and Denmark (Hayden 1989; 1991: ptc. 7; and Häusler 1981; 1994). On current evidence, the diffusion of the technology of wheeled transport may have just as plausibly spread north to south from northwestern Europe with its forests of useable hard woods to the more open steppes to the southeast and then farther south into Mesopotamia as the reverse (cf. Bakker et al. 1999). The important point is not where this revolutionary technology first originated but rather how quickly it diffused across western Asia, Eurasia, and Europe during the Early Bronze period, underscoring the interconnections among disparate cultures throughout this vast area."
It is true that the technology spread quickly, but the earliest evidence of wheels we have today comes from Europe. It is thus possible that wheeled vehicles were invented by prehistoric Europeans and aided the first waves of the Indo-European expansion. The PIE word for "wheel" is related to words for "to turn, spin", while the word for wheel in Sumerian appears to be a loanword from Indo-European. It is not uncommon to borrow words along with a borrowed technology. The fact that wheels have a "native" terminology in Proto-Indo-European but a borrowed one in Sumerian strengthens the argument that the knowledge of wheels was spread across Eurasia by speakers of Indo-European languages. Kohl again, pages 110-111:
"[I]t is roughly around the middle of the fourth millennium BC that wheeled transport fist appears, stretching across a vast interconnected region from northern Europe to southern Mesopotamia (Bakker et al. 1999). The precise determination of which area or which archaeological culture first developed wheeled vehicles may prove impossible to document archaeologically simply because the technology diffused as rapidly as it did across this vast contiguous area. The question of origins, however, is much less significant than this phenomenon of convergence, this almost simultaneous evidence for the early use of wheeled vehicles stretching from northern Germany and southern Poland south across Anatolia to southern Mesopotamia, beginning ca. 3500 BC or immediately after the collapse of the gigantic Tripol'ye settlements….It is shortly after the introduction of wheeled transport that evidence for its massive utilization on the western Eurasian steppes is documented in the excavation of scores of kurgans containing wheeled carts with tripartite wooden wheels. These were not the chariots of a military aristocracy but the heavy, ponderous carts and wagons of cowboys who were developing a form of mobile Bronze Age pastoral economy that fundamentally differed from the classic Eurasian nomadism that is later attested historically and ethnographically."
The images on the "Royal Standard of Ur" show that the Sumerians in southern Mesopotamia were familiar with wheeled vehicles before 2500 BC, but still in the form of slow-moving carts pulled by oxen or tamed asses. This was contemporary with the Old Kingdom period, when the Egyptians built their most famous pyramids, yet we have no indications that they used wheels at this time. The Egyptians certainly knew wheels by the New Kingdom period (1570–1070 BC), when horse-drawn chariots were displayed in the tomb of Pharaoh Tutankhamun (r. 1333–1323 BC). The Battle of Kadesh (1274 BC?) between the forces of the influential Pharaoh Ramesses II and the Indo-European-speaking Hittites is often cited as the largest chariot battle ever fought, involving several thousand chariots.
It is likely that peoples of the Eurasian steppes were the first to tame the horse, perhaps initially as a meat animal before they discovered that horses could be ridden or used in warfare. The faster horse-drawn chariot was developed before 2000 BC in the western steppes and contributed to another phase of the Indo-European expansion. The first practical horse-drawn chariots with spoked wheels are attested in burials of the so-called Andronovo culture in modern Russia, which practiced sophisticated bronze metallurgy and spread eastwards across the steppes. It is often assumed that its people spoke an Indo-Iranian language. The first Chinese words for horses and chariots (and a few other terms) were Indo-European loanwords. Andronovo-type pottery has been found in Xinjiang in far western China. The first known chariot burial site in Shang Dynasty China dates to about 1200 BC, although earlier ones may have existed. At the other end of Eurasia, a stone at Bredarör in southern Sweden, dated to about 1300 BC, is carved with an image of a chariot with four-spoked wheels drawn by two horses.
The eastward diffusion across Eurasia of metallurgy and of metal weapons and tools during the second millennium BC is certain and is acknowledged by Chinese specialists. This external stimulus to the emerging Chinese civilization spread via the western Xinjiang region, which physically belongs to the steppes, to the Yellow River valley. Philip L. Kohl, page 240:
"[T]he diffusion west to east of metallurgy and horse rearing in no way constitutes a tale of civilization itself spreading from west to east, enlightening ultimately the indigenous inhabitants of China. Technologies and influences always spread in both directions, and there are many other tales to be told, including, probably, an early diffusion of sericulture and silks east to west. The early Chinese State may have received its metal technology, wheeled vehicles, and horses from the west, but they quickly adapted and improved on them for their own culturally defined purposes. The intricate, elaborately cast and figured bronze vessels for which the Shang Dynasty is so justly renowned have no direct parallels either in the way they were made or the uses to which they were put in western Asia. The 'world' of West Asia was not united with the 'world' of East Asia in a single interconnected 'world system' during the Bronze Age (contra Frank 1993), despite the undeniable fact that both areas were in indirect contact with one another and that both borrowed and benefited from such contact."
Silk fabric was developed very early in China, probably in prehistoric times. There is a claim that traces of Chinese silk have been found on an Egyptian mummy from the end of the New Kingdom period, ca. 1070 BC. There were thus contacts across Asia, albeit sporadic ones, more than a thousand years before what is usually seen as the beginning of the Silk Road.
The details of which culture spread where and exactly what language its people spoke are still disputed by scholars, but the effects are clear: between 1600 and 1200 BC, horse-drawn chariots were in use throughout the entire landmass of Eurasia, from the border regions of Shang Dynasty China via Egypt and Anatolia to Northern Europe. This corresponds to the period of the ancient Vedas and the emergence of Vedic Sanskrit in India. Peoples speaking Indo-European languages played a vital role in the diffusion of wheeled vehicles.
As we have seen above, Greek shares a common history with Persian and Sanskrit, but there are other connections as well. It is well known that the Greeks got the alphabet from the Phoenicians and medical knowledge from the Egyptians. They also learnt Babylonian mathematical astronomy and may have been influenced by other ideas from Mesopotamia. According to the scholar Walter Burkert, affinities and similarities between oriental epics such as the Epic of Gilgamesh from ancient Mesopotamia and Homeric poetry can no longer be ignored when interpreting Homer. Ibn Warraq writes in Defending the West, page 71:
"It should come as no surprise if we detect a possible influence of Mesopotamian literature on Homer, particularly of the epic Gilgamesh. Burkert summarizes the similarities, which were also noted by Sir Maurice Bowra in his Heroic Poetry, between the two, 'In both cases, in Greek as in Akkadian, 'epic' means narrative poetry which employs a long verse repeated indefinitely, without strophic division; the tale is about gods, sons of gods, and great men from the past, all of whom may interact with each other. Main characteristics of style are the standard epithets, the formulaic verses, the repetition of verses, and typical scenes such as the 'assembly of the gods.' Many are also struck by the similarity between the openings of Gilgamesh and the Odyssey – we are told of a hero who wandered wide and saw many things – while his name is intentionally withheld. Since the publication in 1969 of the Akkadian epic Athrahasis, scholars have also remarked on correspondences between it and the Iliad."
Burkert is careful to point out that philosophy, in the modern sense, was nonetheless a Greek invention, as was deductive proof in mathematics. As Ibn Warraq puts it, "what emerges is something entirely distinctive: what we call Greek civilization. The very strength of this civilization lay in its ability to learn from and improve upon the ideas, art, and literature of the Near East, Persia, India, and Egypt."
The Eastern Mediterranean was a culturally mixed zone in the late second and early first millennia BC. The influence of Near Eastern culture on archaic Greece may have been underrated previously, but it is possible to go too far in the other direction as well. People sought goods in distant places, with the Phoenicians playing a primary role in travel and commercial activity. It is no surprise that the cultural contact between the Near East and the emerging Greek world was extensive. Many works of art, especially metalwork and ivories, entered Greece from the Near East and influenced Greek art. The scholar Marc Van De Mieroop explains in A History of the Ancient Near East ca. 3000 - 323 BC, second edition, page 227:
"Other, less tangible, influences on Greek culture are clear, yet it is often difficult to demonstrate that they were directly borrowed, and, if so, when. Greek material may also have contained survivals from the second millennium, when the Aegean was clearly integrated in the regional system of the Near East. The elements where Near Eastern influence on Greek culture has been suggested include loan words, literary motifs, ideals of kingship, diplomacy, astronomy, divination, cultic procedures, mathematics, measures and weights, economic practices such as interest, and so on. The enthusiasm of scholars for finding connections depends largely on whether they see Greece as the beginning of western civilization or as located in a cultural evolution that dates much further back in time. It is usually difficult, it not impossible, to prove that Greeks were aware of a particular Near Eastern practice and consciously copied it. For example, Hesiod's Theogony, written around the year 700, has close parallels with second-millennium Hittite mythology. Did he personally know those texts, which would then have been preserved in Anatolia into the first millennium, or was he influenced by traditions that at some distant moment in time had inspired Hittite tradition as well?"
Two new branches were added to the Indo-European linguistic tree in the early twentieth century. The first was Tocharian, which the German scholars Emil Sieg and Wilhelm Siegling announced to be an Indo-European language in 1908. It was once spoken in Central Asia and the western border regions of China. The other was Anatolian, a branch which includes Hittite and Luvian. The Hittites created a state in central Anatolia (present-day Turkey) in the second millennium BC. The Hittite language occupies a special place in Indo-European studies because of what are perceived to be the extremely archaic features of its grammar. Hittite is extensively documented through tablets from the mid-second millennium BC. It was first suggested to be an Indo-European language by the Norwegian linguist Jørgen Alexander Knudtzon (1854-1917), and the Czech linguist Bedřich Hrozný (1879-1952) deciphered it some years later.
Finally, there are those who argue that through comparative studies, traces of a Proto-Indo-European mythological universe can be found distributed throughout the area of early Indo-European speech. This is obviously overlaid with many later changes as well as influences from earlier cultures, but it can perhaps still be deduced through careful study. Although the various Indo-European groups have different creation myths, there could be elements of a PIE creation myth preserved in the traditions of the Celts, Germans, Slavs, Iranians, and Indo-Aryans. These traditions all point to a proto-myth whereby the universe is created from a primeval being – a giant such as the Norse Ymir or a "man" such as the Vedic Purusa – who is sacrificed and dismembered. Each part of his anatomy provides a different element of nature: his flesh becomes the earth, his hair grass, his bone stone, his blood water, his eyes the sun, his mind the moon, his brain the clouds, his breath the wind, and his head the heavens. Here is a passage from The Oxford Introduction to Proto-Indo-European and the Proto-Indo-European World by J. P. Mallory and D. Q. Adams, page 435:
"As to the identity of the sacrificer we have hints in a related sacrifice that serves as the foundation myth for the Indo-Iranians, Germans, and Romans (with a possible resonance in Celtic). Here we find two beings, twins, one known as 'Man' (with a lexical cognate between Germanic Mannus and Skt Manu) and his 'Twin' (Germanic Twisto, Skt Yama with a possible Latin cognate if Remus, the brother of Romulus, is derived from *Yemonos 'twin'). In this myth 'Man', the ancestor of humankind, sacrifices his 'Twin'. The two myths, creation and foundation of a people, find a lexical overlap in the Norse myth where the giant Ymir is cognate with Skt. Yama and also means 'Twin'. The dismemberment of the primeval giant of the creation myth can be reversed to explain the origins of humans and we find various traditions that derive the various aspects of the human anatomy from the results of the original dismemberment, e.g. grass becomes hair, wind becomes breath. The creation myth is then essentially a sacrifice that brought about the different elements of the world. Conversely, as Bruce Lincoln has suggested, the act of sacrifice itself is a re-enactment of the original creation."
Mallory and Adams again, page 440:
"Current trends in Indo-European comparative mythology are taking several directions. The evidence for trifunctional (or quadri-functional) patterns is continually being augmented by further examples both from well-researched sources, e.g. Indic, Roman, Norse, and from other traditions such as Greek and Armenian that have seen far less attention. Moreover, an increasing number of scholars have been examining the narrative structure of the earliest literary traditions of the various Indo-European groups to reveal striking parallels between different traditions. For example, N. B. Allen has shown how much of the career of the Greek Odysseus is paralleled by distinct incidents in the lives of Arjuna in the Mahabharata, the Buddha in the earliest Buddhist texts, and CúChulainn in early Irish heroic literature. Other scholars such as Claude Sterckx, Stepan Ahyan, and Armen Petrosyan have uncovered detailed correspondences in other early Indo-European traditions. According to Allen, the close coincidences go beyond both the type of random generic parallels that one might expect between different literary traditions and beyond what we might ascribe to some form of distant diffusion. He argues that such comparisons provide us with at least some of the detritus of the Proto-Indo-European narrative tradition."