Despite the success of the “spontaneous prose” technique Kerouac used in On the Road, he sought further refinements to his narrative style. Following a suggestion by Ed White, a friend from his Columbia University days, that he sketch “like a painter, but with words,” Kerouac sought visual possibilities in language by combining spontaneous prose with sketching. Visions of Cody (written in 1951–52 and published posthumously in 1972), an in-depth, more poetic variation of On the Road describing a buddy trip and including transcripts of his conversation with Cassady (now fictionalized as Cody), was the most successful realization of the sketching technique.
As he continued to experiment with his prose style, Kerouac also bolstered his standing among the Beat writers as a poet supreme. With his sonnets and odes, he ranged across Western poetic traditions. He also experimented with the idioms of blues and jazz in such works as Mexico City Blues (1959), a sequential poem comprising 242 choruses. After he met the poet Gary Snyder in 1955, Kerouac’s poetry, as well as that of Ginsberg and fellow Beats Philip Whalen and Lew Welch, began to show the influence of the haiku, a genre mostly unknown to Americans at that time. (The haiku of Bashō, Buson, Masaoka Shiki, and Issa had not been translated into English until the pioneering work of R.H. Blyth in the late 1940s.) While Ezra Pound had modeled his poem “In a Station of the Metro” (1913) after Japanese haiku, Kerouac, departing from the 17-syllable, 3-line strictures, redefined the form and created an American haiku tradition. In the posthumously published collection Scattered Poems (1971), he proposed that the “Western haiku” simply say a lot in three short lines.
In his pocket notebooks, Kerouac wrote and rewrote haiku, revising and perfecting them. He also incorporated his haiku into his prose. His mastery of the form is demonstrated in his novel The Dharma Bums (1958).
Kerouac turned to Buddhist study and practice from 1953 to 1956, after his “road” period and in the lull between composing On the Road in 1951 and its publication in 1957. In the fall of 1953, he finished The Subterraneans (it would be published in 1958). Fed up with the world after the failed love affair upon which the book was based, he read Henry David Thoreau and fantasized a life outside civilization. He immersed himself in the study of Zen, and he became acquainted with the writings of American Buddhist popularizer Dwight Goddard, particularly the second edition (1938) of his A Buddhist Bible. Kerouac began his genre-defying Some of the Dharma in 1953 as reader’s notes on A Buddhist Bible, and the work grew into a massive compilation of spiritual material, meditations, prayers, haiku, and musings on the teaching of Buddha. In an attempt to replicate the experience of Han Shan, a reclusive Chinese poet of the Tang dynasty (618–907), Kerouac spent 63 days atop Desolation Peak in Washington state. Kerouac recounted this experience in Desolation Angels (1965) using haiku as bridges (connectives in jazz) between sections of spontaneous prose. In 1956 he wrote a sutra, The Scripture of the Golden Eternity. He also began to think of his entire oeuvre as a “Divine Comedy of the Buddha,” thereby combining Eastern and Western traditions.
By the 1960s Kerouac had finished most of the writing for which he is best known. In 1961 he wrote Big Sur in 10 days while living in the cabin of Lawrence Ferlinghetti, a fellow Beat poet, in California’s Big Sur region. Two years later Kerouac’s account of his brother’s death was published as the spiritual Visions of Gerard. Another important autobiographical book, Vanity of Duluoz (1968), recounts stories of his childhood, his schooling, and the dramatic scandals that defined early Beat legend.
In 1969 Kerouac was broke, and many of his books were out of print. An alcoholic, he was living with his third wife and his mother in St. Petersburg, Florida. He spent his time at the Beaux-Arts coffeehouse in nearby Pinellas Park and in local bars, such as the Wild Boar in Tampa. A week after he was beaten by fellow drinkers whom he had antagonized at the Cactus Bar in St. Petersburg, he died of internal hemorrhaging in front of his television while watching The Galloping Gourmet—the ultimate ending for a writer who came to be known as the “martyred king of the Beats.”
Jack Kerouac: Collected Poems (2012) gathered all of his published poetry collections along with poems that appeared in his fiction and elsewhere. The volume also contained six previously unpublished poems.
Kerouac’s insistence upon “First thought, best thought” and his refusal to revise were controversial. He felt that revision was a form of literary lying, imposing a form further from the truth of the moment and counter to his intentions for his “true-life” novels. For the composition of haiku, however, Kerouac was more exacting, and he accomplished the task of revision by rewriting. Hence, there exist several variations of On the Road, the final one being the 1957 version that was a culmination of Kerouac’s own revisions as well as the editing of his publisher. Significantly, Kerouac never saw the final manuscript before publication. Still, many critics found the long sweeping sentences of On the Road ragged and grammatically derelict.
Kerouac explained his quest for pure, unadulterated language—the truth of the heart unobstructed by the lying of revision—in two essays published in the Evergreen Review: “Essentials of Spontaneous Prose” (1958) and “Belief and Technique for Modern Prose” (1959). As for the grammatically irreverent sentences, Kerouac extolled a “method” eschewing conventional punctuation in favour of dashes. In “Essentials of Spontaneous Prose” he recommended the “vigorous space dash separating rhetorical breathing (as jazz musician drawing breath between outblown phrases)”; the dash allowed Kerouac to handle shifts in time, making his prose less prosaic and linear and more poetic. He also described his manner of developing an image, which began with the “jewel center,” from which he wrote in a “semi-trance,” “without consciousness,” his language governed by sound, by the poetic effect of alliteration and assonance, until he reached a plateau. A new “jewel center” would be initiated, stronger than the first, and would spiral out as he riffed (in an analogy with a jazz musician). He saw himself as a horn player blowing one long note, as he told interviewers for The Paris Review. His technique explains the unusual organization of his writing, which is not haphazard or sloppy but systematic in the most-individualized sense. In fact, Kerouac revised On the Road numerous times by recasting his story in book after book of The Legend of Duluoz. His “spontaneity” allowed him to develop his distinct voice.
Jack Kerouac, American novelist, poet, and leader of the Beat movement whose most famous book, On the Road (1957), had broad cultural influence before it was recognized for its literary merits.
Lowell, Massachusetts, a mill town, had a large French Canadian population. While Kerouac’s mother worked in a shoe factory and his father worked as a printer, Kerouac attended a French Canadian school in the morning and continued his studies in English in the afternoon. He spoke joual, a Canadian dialect of French, and so, though he was an American, he viewed his country as if he were a foreigner. Kerouac subsequently went to the Horace Mann School, a preparatory school in New York City, on a gridiron football scholarship. There he met Henri Cru, who helped Kerouac find jobs as a merchant seaman, and Seymour Wyse, who introduced Kerouac to jazz.
In 1940 Kerouac enrolled at Columbia University, where he met two writers who would become lifelong friends: Allen Ginsberg and William S. Burroughs. Together with Kerouac, they are the seminal figures of the literary movement known as Beat, a term introduced to Kerouac by Herbert Huncke, a Times Square junkie, petty thief, hustler, and writer. It meant “down-and-out” as well as “beatific” and therefore signified the bottom of existence (from a financial and an emotional point of view) as well as the highest, most spiritual high.
Kerouac’s childhood and early adulthood were marked by loss: his brother Gerard died in 1926, at age nine. Kerouac’s boyhood friend Sebastian Sampas died in 1944 and his father, Leo, in 1946. In a deathbed promise to Leo, Kerouac pledged to care for his mother, Gabrielle, affectionately known as Memere. Kerouac was married three times: to Edie Parker (1944); to Joan Haverty (1951), with whom he had a daughter, Jan Michelle; and to Stella Sampas (1966), the sister of Sebastian, who had died at Anzio, Italy, during World War II.
By the time Kerouac and Burroughs met in 1944, Kerouac had already written a million words. More words came in the wake of Kerouac’s brief detainment in August 1944, when friend and fellow Beat Lucien Carr—who had introduced him to Burroughs and Ginsberg—confessed to having killed David Kammerer, a longtime admirer whose advances had gotten aggressive, in Manhattan’s Riverside Park. Kerouac assisted Carr in disposing of Kammerer’s glasses and the knife used in the killing. When Carr eventually confessed to the police, Kerouac was arrested as a material witness. He was bailed out by Parker’s parents; at that time she was his girlfriend, and her parents insisted that the couple marry before he was released. Kerouac and Burroughs collaborated on a novelization of the events, And the Hippos Were Boiled in Their Tanks, soon after. It went unpublished until 2008.
In 1944 Kerouac also wrote a novella, a roman à clef about his childhood in Massachusetts. He left it unfinished, however, and then lost the manuscript, which was eventually sold at auction for nearly $100,000 in 2002, having been discovered years earlier in a Columbia University dorm. It was published, along with some of Kerouac’s notes on the book and some letters to his father, as The Haunted Life, and Other Writings in 2014. That novella was just one expression of Kerouac’s boyhood ambition to write “the great American novel.” His first published novel, The Town & the City (1950), received favourable reviews but was considered derivative of the novels of Thomas Wolfe, whose Time and the River (1935) and You Can’t Go Home Again (1940) were then popular. In his novel Kerouac articulated the “New Vision,” that “everything was collapsing,” a theme that would dominate his grand design to have all his work taken together as “one vast book”—The Legend of Duluoz.
Yet Kerouac was unhappy with the pace of his prose. The music of bebop jazz artists Thelonious Monk and Charlie Parker began to drive Kerouac toward his “spontaneous bop prosody,” as Ginsberg later called it, which took shape in the late 1940s through various drafts of his second novel, On the Road. The original manuscript, a scroll written in a three-week blast in 1951, is legendary: composed of approximately 120 feet (37 metres) of paper taped together and fed into a manual typewriter, the scroll allowed Kerouac the fast pace he was hoping to achieve. He also hoped to publish the novel as a scroll so that the reader would not be encumbered by having to turn the pages of a book. Rejected for publication at first, it finally was printed as a book in 1957. In the interim, Kerouac wrote several more “true-life” novels, Doctor Sax (1959), Maggie Cassidy (1959), and Tristessa (1960) among them.
Kerouac found himself a national sensation after On the Road received a rave review from The New York Times critic Gilbert Millstein. While Millstein extolled the literary merits of the book, to the American public the novel represented a departure from tradition. Kerouac, though, was disappointed with having achieved fame for what he considered the wrong reason: attention went less to the excellence of his writing than to the novel’s radically different characters and its characterization of hipsters and their nonconformist celebration of sex, jazz, and endless movement. The character Dean Moriarty (based on Neal Cassady, another important influence on Kerouac’s style) was an American archetype, embodying “IT,” an intense moment of heightened experience achieved through fast driving, talking, or “blowing” (as a horn player might), or in writing. In On the Road Sal Paradise explains his fascination with others who have “IT,” such as Dean Moriarty and Rollo Greb as well as jazz performers: “The only ones for me are the mad ones, the ones who are mad to live, mad to talk, mad to be saved.” These are characters for whom the perpetual now is all.
Readers often confused Kerouac with Sal Paradise, the amoral hipster at the centre of his novel. The critic Norman Podhoretz famously wrote that Beat writing was an assault against the intellect and against decency. This misreading dominated negative reactions to On the Road. Kerouac’s rebellion, however, is better understood as a quest for the solidity of home and family, what he considered “the hearthside ideal.” He wanted to achieve in his writing that which he could find neither in the promise of America nor in the empty spirituality of Roman Catholicism; he strove instead for the serenity that he had discovered in his adopted Buddhism. Kerouac felt that the Beat label marginalized him and prevented him from being treated as he wanted to be treated, as a man of letters in the American tradition of Herman Melville and Walt Whitman.
Arctic Ocean, smallest of the world’s oceans, centring approximately on the North Pole. The Arctic Ocean and its marginal seas—the Chukchi, East Siberian, Laptev, Kara, Barents, White, Greenland, and Beaufort and, according to some oceanographers, also the Bering and Norwegian—are the least-known basins and bodies of water in the world ocean as a result of their remoteness, hostile weather, and perennial or seasonal ice cover. This is changing, however, because the Arctic may exhibit a strong response to global change and may be capable of initiating dramatic climatic changes through alterations induced in the oceanic thermohaline circulation by its cold, southward-moving currents or through its effects on the global albedo resulting from changes in its total ice cover.
Although the Arctic Ocean is by far the smallest of Earth’s oceans, having only a little more than one-sixth the area of the next largest, the Indian Ocean, its area of 5,440,000 square miles (14,090,000 square km) is five times larger than that of the largest sea, the Mediterranean. The deepest sounding obtained in Arctic waters is 18,050 feet (5,502 metres), but the average depth is only 3,240 feet (987 metres).
Distinguished by several unique features, including a cover of perennial ice and almost complete encirclement by the landmasses of North America, Eurasia, and Greenland, the north polar region has been a subject of speculation since the earliest concepts of a spherical Earth. From astronomical observations, the Greeks theorized that north of the Arctic Circle there must be a midnight sun at midsummer and continual darkness at midwinter. The enlightened view was that both the northern and southern polar regions were uninhabitable frozen wastes, whereas the more popular belief was that there was a halcyon land beyond the north wind where the sun always shone and people called Hyperboreans led a peaceful life. Such speculations provided incentives for adventurous men to risk the hazards of severe climate and fear of the unknown to further geographic knowledge and national and personal prosperity.
The tectonic history of the Arctic Basin in the Cenozoic Era (i.e., about the past 65 million years) is largely known from available geophysical data. It is clear from aeromagnetic and seismic data that the Eurasia Basin was formed by seafloor spreading along the axis of the Nansen-Gakkel Ridge. The focus of spreading began under the edge of the Asian continent, from which a narrow splinter of its northern continental margin was separated and translated northward to form the present Lomonosov Ridge. The origin of the Amerasia Basin is far less clear. Most researchers favour a hypothesis of opening by rotation of the Arctic-Alaska lithospheric plate away from the North American Plate during the Cretaceous Period (about 145 to 65 million years ago). Better understanding of the origin of the Arctic Ocean’s basins and ridges is critical for reconstructing the paleoclimatic evolution of the ocean and for understanding its relevance to global environmental changes.
The sediments of the Arctic Ocean floor record the nature of the physical environment, climate, and ecosystems on time scales determined by the ability to sample them through coring and at resolutions determined by the rates of deposition. Of the hundreds of sediment corings taken, only four penetrate deeply enough to predate the onset of cold climatic conditions. The oldest (approximately 80-million-year-old black muds and 67-million-year-old siliceous oozes) document that at least part of the Arctic Ocean was relatively warm and biologically productive prior to 40 million years ago. Unfortunately, none of the available seafloor cores have sampled sediments from the time interval between 35 and 3 million years ago. Thus there is no direct evidence of the onset of cooling that produced the present perennial ice cover. All the other cores collected contain younger sediments that were deposited in an ocean dominated by ice cover. They contain evidence of terrigenous (land-derived) sediments formed by bordering glaciers and transported by sea ice.
From the late 19th century, when the Norwegian explorer Fridtjof Nansen first discovered an ocean in the central Arctic, until the middle of the 20th century, it was believed that the Arctic Ocean was a single large basin. Explorations after 1950 revealed the true complex nature of the ocean floor. Rather than being a single basin, the Arctic Ocean consists of two principal deep basins that are subdivided into four smaller basins by three transoceanic submarine ridges. The central of these ridges extends from the continental shelf off Ellesmere Island to the New Siberian Islands, a distance of 1,100 miles (1,770 km). This enormous submarine mountain range was discovered by Soviet scientists in 1948–49 and reported in 1954. It is named the Lomonosov Ridge after the scientist, poet, and grammarian Mikhail Vasilyevich Lomonosov.
The Lomonosov Ridge has an average relief of about 10,000 feet and divides the Arctic Ocean into two physiographically complex basins. These are referred to as the Eurasia Basin on the European side of the ridge and the Amerasia Basin on the American side. The Lomonosov Ridge varies in width from 40 to 120 miles, and its crest ranges in depth between 3,100 and 5,400 feet.
The Eurasia Basin is divided into two smaller basins by a trans-Arctic Ocean extension of the Mid-Atlantic Ridge. This Arctic segment of the global ridge system is called the Nansen Cordillera, or Nansen-Gakkel Ridge, named for Fridtjof Nansen after its discovery in the early 1960s. It is a locus of active ocean-floor spreading, with a well-developed rift valley and flanking rift mountains. The Fram Basin lies between the Nansen-Gakkel Ridge and the Lomonosov Ridge at a depth of 14,070 feet. The geographic north pole is located over the floor of the Fram Basin near its juncture with the Lomonosov Ridge. The smallest of the Arctic Ocean subbasins, called the Nansen Basin, lies between the Nansen-Gakkel Ridge and the Eurasian continental margin and has a floor depth of 13,800 feet.
The Amerasia Basin is divided into two unequal basins by the Alpha Cordillera (Alpha Ridge), a broad, rugged submarine mountain chain that extends to within 4,600 feet of the ocean surface. The origin of this seismically inactive ridge, which was discovered in the late 1950s, is undetermined and holds the key to understanding the origin of the Amerasia Basin. The Makarov Basin lies between the Alpha Cordillera and the Lomonosov Ridge, and its floor is at a depth of 13,200 feet. The largest subbasin of the Arctic Ocean is the Canada Basin, which extends approximately 700 miles from the Beaufort Shelf to the Alpha Cordillera. The smooth basin floor slopes gently from east to west, where it is interrupted by regions of sea knolls. The average depth of the Canada Basin is 12,500 feet.
The Arctic Ocean is unique in that nearly one-third of its total area is underlain by continental shelf, which is asymmetrically distributed around its circumference. North of Alaska and Greenland the shelf is 60 to 120 miles wide, which is the normal width of continental shelves. In contrast, the Siberian and Chukchi shelves off Eurasia range from 300 to 1,100 miles in width. The edge of the continental margin is dissected by numerous submarine valleys. The largest of these, the Svyataya Anna Trough, is 110 miles wide and 300 miles long.
Several factors in the Arctic Ocean make its physical, chemical, and biological processes significantly different from those in the adjoining North Atlantic and Pacific Oceans. Most notable is the covering ice pack, which reduces the exchange of energy between ocean and atmosphere by about 100 times. In addition, sea ice greatly reduces the penetration of sunlight needed for the photosynthetic processes of marine life and impedes the mixing effect of the winds. A further significant distinguishing feature is the high ratio of freely connected shallow seas to deep basins. Whereas the continental shelf on the North American side of the Arctic Ocean is of a normal width (approximately 40 miles), the Eurasian sector is hundreds of miles broad, with peninsulas and islands dividing it into five main marginal seas: the Chukchi, East Siberian, Laptev, Kara, and Barents. These marginal seas occupy 36 percent of the area of the Arctic Ocean, yet they contain only 2 percent of its water volume. With the exception of the Mackenzie River of Canada and the Colville River of Alaska, all major rivers discharge into these marginal shallow seas. The combination of large marginal seas, with a high ratio of exposed surface to total volume, plus large summer inputs of fresh water, greatly influences surface-water conditions in the Arctic Ocean.
As an approximation, the Arctic Ocean may be regarded as an estuary of the Atlantic Ocean. The major circulation into and from the Arctic Basin is through a single deep channel, the Fram Strait, which lies between the island of Spitsbergen and Greenland. A substantially smaller quantity (approximately one-quarter of the volume) of water is transported southward through the Barents and Kara seas and the Canadian Archipelago. The combined outflow to the Atlantic appears to be of major significance to the large-scale thermohaline circulation and mean temperature of the world ocean, with a potentially profound impact on global climate variability. Warm waters entering the Greenland-Iceland-Norwegian (GIN) Sea plunge downward when they meet colder, fresher waters produced farther north, southward-drifting ice, and a colder atmosphere. This produces North Atlantic Deep Water (NADW), which circulates in the world ocean. An increase in this freshwater and ice export could shut down the thermohaline convection in the GIN Sea; alternatively, a decrease in ice export might allow for convection and ventilation in the Arctic Ocean itself.
Low-salinity waters enter the Arctic Ocean from the Pacific through the shallow Bering Strait. Although the mean inflow seems to be driven by a slight difference in sea level between the North Pacific and Arctic oceans, a large source of variability is induced by the wind field, primarily large-scale atmospheric circulation over the North Pacific. The amount of fresh water entering the Arctic Ocean is about 2 percent of the total input. Precipitation is believed to be about 10 times greater than loss by evaporation, although both figures can be only roughly estimated. Through all these various routes and mechanisms, the exchange rate of the Arctic Ocean is estimated to be approximately 210 million cubic feet (5.9 million cubic metres) per second.
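These transport figures are easier to compare in sverdrups, the unit oceanographers normally use for such flows (1 Sv = 1 million cubic metres per second). A minimal Python sketch, using only the numbers quoted above:

```python
# Restate the Arctic Ocean's estimated exchange rate in sverdrups
# and double-check the quoted imperial figure.
M3_PER_SV = 1e6          # 1 sverdrup (Sv) = 1 million cubic metres per second
FT3_PER_M3 = 35.3147     # cubic feet per cubic metre

exchange_m3_per_s = 5.9e6    # figure quoted in the text

print(f"{exchange_m3_per_s / M3_PER_SV:.1f} Sv")
# -> 5.9 Sv
print(f"{exchange_m3_per_s * FT3_PER_M3 / 1e6:.0f} million cubic feet per second")
# -> 208, consistent with the 'approximately 210 million' quoted above
```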
All waters of the Arctic Ocean are cold. Variations in density are thus mainly determined by changes in salinity. Arctic waters have a two-layer system: a thin and less dense surface layer is separated by a strong density gradient, referred to as a pycnocline, from the main body of water, which is of quite uniform density. This pycnocline restricts convective motion and the vertical transfer of heat and salt, and hence the surface layer acts as a cap over the larger masses of warmer water below.
Despite this overall similarity in gross oceanographic structure, the waters of the Arctic Ocean can be classified into three major masses and one lesser mass, enumerated below and summarized in the sketch that follows the list.
1. The water extending from the surface to a depth of about 650 feet (about 200 metres) is the most variable and heterogeneous of all Arctic waters. This is because of the latent heat of freezing and thawing; brine addition from the process of ice freezing; freshwater addition by rivers, ice melting, and precipitation; and great variations in insolation (rate of delivery of solar energy) and energy flux as a result of sea ice cover. Water temperature may vary over a range of 7 °F (4 °C) and salinity from 28 to 34 grams of salt per kilogram of seawater (28 to 34 parts per thousand [‰]).
2. Warmer Atlantic water everywhere underlies Arctic surface water from a depth of about 650 to 3,000 feet. As it cools it becomes so dense that it slips below the surface layer on entering the Arctic Basin. The temperature of this water is about 34 to 37 °F (1 to 3 °C) as it enters the basin, but it is gradually cooled so that by the time it spreads to the Beaufort Sea it has a maximum temperature of 32.9 to 33.1 °F (0.5 to 0.6 °C). The salinity of the Atlantic layer varies between 34.5 and 35 ‰.
3. Bottom water extends beneath the Atlantic layer to the ocean floor. This is colder than the Atlantic water (below 32 °F, or 0 °C) but has the same salinity.
4. An inflow of Pacific water can be observed in the Amerasia Basin but not in the Eurasia Basin. This warmer and fresher water mixes with colder and more saline water in the Chukchi Sea, where its density enables it to flow as a wedge between the Arctic and Atlantic waters. The Pacific water, by the time it reaches the Canada Basin, has a temperature range of 31.1 to 30.8 °F (−0.5 to −0.7 °C) and salinities between 31.5 and 33 ‰.
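For reference, the four water masses just described can be collected into a compact data structure; a minimal Python sketch, with every figure taken from the list above (all ranges are approximate):

```python
# Approximate Arctic Ocean water-mass properties, compiled from the list above.
# depth_ft: depth range in feet; temp_F: temperature in °F; sal: salinity in ‰.
water_masses = [
    {"name": "Surface water",  "depth_ft": (0, 650),     "temp_F": "varies over ~7 °F", "sal": (28.0, 34.0)},
    {"name": "Atlantic water", "depth_ft": (650, 3000),  "temp_F": (32.9, 37.0),        "sal": (34.5, 35.0)},
    {"name": "Bottom water",   "depth_ft": (3000, None), "temp_F": "below 32 °F",       "sal": (34.5, 35.0)},
    # Pacific inflow water occurs only in the Amerasia Basin, as a wedge
    # between the Arctic surface layer and the Atlantic layer.
    {"name": "Pacific water",  "depth_ft": None,         "temp_F": (30.8, 31.1),        "sal": (31.5, 33.0)},
]

for mass in water_masses:
    print(f'{mass["name"]:15s} salinity {mass["sal"][0]}–{mass["sal"][1]} ‰')
```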
Arctic waters are driven by the wind and by density differences. The net effect of tides is unknown but could have some modifying effect on gross circulation. The motion of surface waters is best known from observations of ice drift. The most striking feature of the surface circulation pattern is the large clockwise gyre (circular motion) that covers almost the entire Amerasia Basin. Fletcher’s Ice Island (T-3) made two orbits in this gyre over a 20-year period, which is some indication of the current speed. The northern extremity of the gyre bifurcates and jets out of the Greenland-Spitsbergen passage as the East Greenland Current, attaining speeds of 6 to 16 inches per second. Circulation of the shallow Eurasian shelf seas seems to be a complex series of counterclockwise gyres, complicated by islands and other topographic relief.
Circulation of the deeper Atlantic water is less well known. On entering the Eurasia Basin, the plunging Greenland Sea water appears to flow eastward along the edge of the continental margin until it fans out and enters the Amerasia Basin along a broad front over the crest of the Lomonosov Ridge. There seems to be a general counterclockwise circulation in the Eurasia Basin and a smaller clockwise gyre in the Beaufort Sea. Speeds are slow—probably less than two inches per second.
The circulation of the bottom water is unknown but can be inferred to be similar to that of the Atlantic layer. Measured values of dissolved oxygen show that the bottom water is well ventilated, dissolved oxygen everywhere exceeding 70 percent of saturation.
The cover of sea ice suppresses wind stress and wind mixing, reflects a large proportion of incoming solar radiation, imposes an upper limit on the surface temperature, and impedes evaporation. Wind and water stresses keep the ice pack in almost continuous motion, causing the formation of cracks (leads), open ponds (polynyas), and pressure ridges. Along these ridges the pack ice may be locally stacked high and project downward 33–80 feet (about 10–25 metres) into the ocean. Besides impeding the exchange of energy between the ocean and the atmosphere, the formation of sea ice generates vast quantities of cold water that help drive the circulation of the world ocean system.
Sea ice rarely forms in the open ocean below a latitude of 60° N but does occur in more southerly enclosed bays, rivers, and seas. Between about 60° and 75° N the occurrence of sea ice is seasonal, and there is usually a period of the year when the water is ice-free. Above a latitude of 75° N there is a more or less permanent ice cover. Even there, however, as much as 10 percent of the area consists of open water owing to the continual opening of leads and polynyas.
In the process of freezing, the salt in seawater is expelled as brine. The degree to which this rejection takes place increases as the rate of freezing decreases. Typically, newly formed sea ice has a salinity of 4 to 6 ‰. Even after freezing, the process of purification continues, but at a much slower rate. By the time the ice is one year old, it is sufficiently salt-free to be melted for drinking. This year-old, or older, salt-free sea ice is referred to as multiyear sea ice or polar pack. It can be distinguished by its smoother, rounded surface and pale blue colour. Younger ice is more jagged and grayer in colour. Because the hardness and strength of ice increase as the salts are expelled, polar pack is a special threat to shipping. First-year ice has a characteristic thickness of up to 6 feet (2 metres), whereas multiyear ice averages about 12 feet (about 4 metres) in thickness.
There is no direct evidence as to the onset of the Arctic Ocean ice cover. The origin of the ice pack was influenced by a number of factors, such as the formation of terrestrial ice caps and the interaction of the Arctic and North Atlantic waters—with their different temperature and salinity structures—with atmospheric climate variables. What can be inferred from available data is that there was not a continuous ice cover throughout the Pleistocene Epoch (i.e., about 2,600,000 to 11,700 years ago). Rather, there was a continually warm ocean until approximately 2,000,000 years ago, followed by a permanent ice pack about 850,000 years ago.
Sea ice cover has fluctuated throughout the Holocene Epoch (11,700 years ago to the present) in response to changes in sea levels and the increases in coastline elevation following the retreat of the Pleistocene ice sheets (see isostasy). These fluctuations were largely centred in the eastern Arctic Ocean, where, during multiyear warm episodes, ice cover would advance and retreat with the seasons. In the late 20th century, however, Arctic sea ice coverage declined persistently (about 3 percent per decade since 1978) in response to the effects of global warming, which lifts regional near-surface air temperatures above freezing for long portions of the year.
Ancient Egyptian architecture, the architectural monuments produced mainly during the dynastic periods of the first three millennia BCE in the Nile valley regions of Egypt and Nubia. The architecture, similar to representational art, aimed to preserve forms and conventions that were held to reflect the perfection of the world at the primordial moment of creation and to embody the correct relationship between humankind, the king, and the pantheon of the gods. For this reason, both Egyptian art and architecture appear outwardly resistant to development and the exercise of individual artistic judgment, but Egyptian artisans of every historical period found different solutions for the conceptual challenges posed to them.
Any survey of Egyptian architecture is weighted in favour of funerary and religious buildings, partly because of their location. Many temples and tombs survived because they were built on ground unaffected by the Nile flood, whereas most ancient Egyptian towns were lost because they were situated in the cultivated and flooded area of the Nile Valley. Yet the dry, hot climate of Egypt allowed some mud brick structures to survive where they have escaped the destructive effects of water or humans.
The two principal building materials used in ancient Egypt were unbaked mud brick and stone. From the Old Kingdom (c. 2575–2130 BCE) onward, stone was generally used for tombs—the eternal dwellings of the dead—and for temples—the eternal houses of the gods. Mud brick remained the domestic material, used even for royal palaces; it was also used for fortresses, the great walls of temple precincts and towns, and for subsidiary buildings in temple complexes.
Mortuary architecture in Egypt was highly developed and often grandiose. Most tombs comprised two principal parts, the burial chamber (the tomb proper) and the chapel, in which offerings for the deceased could be made. In royal burials the chapel rapidly developed into a mortuary temple, which, beginning in the New Kingdom (c. 1539–1075 BCE), was usually built separately and at some distance from the tomb. In the following discussion, funerary temples built separately will be covered with temples in general and not as part of the funerary complex.
Mastabas were the standard type of tomb in the earliest dynasties. These flat-roofed, rectangular superstructures had sides constructed at first from mud brick and later of stone, in the form of paneled niches painted white and decorated with elaborate “matting” designs. They were built over many storage chambers stocked with food and equipment for the deceased, who lay in a rectangular burial chamber below ground.
In the great cemeteries of the Old Kingdom, changes in size, internal arrangements, and groupings of the burials of nobles indicate the vicissitudes of nonroyal posthumous expectations. In the 3rd dynasty at Ṣaqqārah the most important private burials were at some distance from the step pyramids of Djoser and Sekhemkhet. Their large mastabas incorporated offering niches as well as corridors that could accommodate paintings of equipment for the afterlife and recesses to hold sculptures of the deceased owner. By the later Old Kingdom, internal space in mastabas became more complex as they accommodated more burials. In the mastaba of Mereruka, a vizier of Teti, first king of the 6th dynasty, there were 21 rooms for his own funerary purposes, with six for his wife and five for his son.
The tomb for Djoser, second king of the 3rd dynasty, began as a mastaba and was gradually expanded to become a step pyramid. It was built within a vast enclosure on a commanding site at Ṣaqqārah, the necropolis overlooking the city of Memphis. The high royal official Imhotep was credited with the design and with the decision to use quarried stone. This first essay in stone is remarkable for its design of six superposed stages of diminishing size. It also has a huge enclosure (1,784 by 909 feet [544 by 277 metres]) that is surrounded by a paneled wall faced with fine limestone and contains a series of “mock” buildings (stone walls filled with rubble, gravel, or sand) that probably represent structures associated with the heraldic shrines of predynastic Egypt. At Djoser’s precinct the Egyptian stonemasons made their earliest architectural innovations, using stone to reproduce the forms of predynastic wood and brick buildings. The columns in the entrance corridor resemble bundled reeds, while engaged columns in other areas of the precinct have capitals resembling papyrus blossoms. In parts of the subterranean complexes, fine reliefs of the king and elaborate wall panels in glazed tiles are among the innovations found in this remarkable monument.
For the Old Kingdom the most characteristic form of tomb building was the true pyramid, the finest examples of which are the pyramids at Al-Jīzah (Giza), notably the Great Pyramid of King Khufu (Cheops) of the 4th dynasty. The form itself reached its maturity in the reign of Snefru, father of Khufu, who constructed three pyramids, one of which is known as the Bent Pyramid due to its double slope. Subsequently only the pyramid of Khafre (Chephren), Khufu’s successor, approached the size and perfection of the Great Pyramid. The simple measurements of the Great Pyramid indicate very adequately its scale, monumentality, and precision: its sides are 755.43 feet (230.26 metres; north), 756.08 feet (230.45 metres; south), 755.88 feet (230.39 metres; east), 755.77 feet (230.36 metres; west); its orientation on the cardinal points is almost exact; its height upon completion was 481.4 feet (146.7 metres); and its area at the base is just over 13 acres (5.3 hectares). The core is formed of huge limestone blocks, once covered by a casing of dressed limestone. Other features in its construction contribute substantially to its remarkable character: the lofty, corbeled Grand Gallery and the King’s Chamber—built entirely of granite—with five relieving compartments (empty rooms for reducing pressure) above.
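The quoted base area follows directly from the side measurements; a short arithmetic check in Python:

```python
# Verify the Great Pyramid's quoted base area from its four side lengths.
sides_ft = {"north": 755.43, "south": 756.08, "east": 755.88, "west": 755.77}

mean_side_ft = sum(sides_ft.values()) / len(sides_ft)
area_ft2 = mean_side_ft ** 2                # treat the base as a square
area_acres = area_ft2 / 43_560              # 1 acre = 43,560 square feet
area_ha = area_ft2 * 0.09290304 / 10_000    # 1 ft² = 0.09290304 m²; 1 ha = 10,000 m²

print(f"mean side: {mean_side_ft:.2f} ft")
print(f"base area: {area_acres:.2f} acres ({area_ha:.2f} hectares)")
# -> roughly 13.1 acres (5.3 hectares), matching the 'just over 13 acres' above
```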
The pyramids built for the later kings of the Old Kingdom and most kings of the Middle Kingdom (c. 1938–1630 BCE) were comparatively small and not as well constructed. The tomb of King Mentuhotep II of the 11th dynasty is, however, of exceptional interest. Its essential components were a rectangular structure, terraced porticoes, a series of pillared ambulatories, an open court, and a hypostyle hall tucked into the cliffs. (See discussion below.)
The monumentality of the pyramid made it not only a potent symbol of royal power but also an obvious target for tomb robbers. During the New Kingdom the wish to halt the looting and desecration of royal tombs led to their being sited together in a remote valley at Thebes, dominated by a peak that itself resembled a pyramid. There, in the Valley of the Kings, tombs were carved deep into the limestone with no outward structure. These rock-cut tombs had been constructed for private citizens as early as the 4th dynasty. Most were fairly simple single chambers serving all the functions of the multiplicity of rooms in a mastaba. Some, however, were excavated with considerable architectural pretensions. At Aswān huge halls, often connecting to form labyrinthine complexes, were partly formal, with columns carefully cut from the rock, and partly rough-hewn. Chapels with false doors were carved out within the halls. In some cases the facades were monumental, with porticoes and inscriptions.
At Beni Hasan the local nobles during the Middle Kingdom cut large and precise tomb chambers in the limestone cliffs. Architectural features—columns, barrel roofs, and porticoes, all carved from the rock—provided fine settings for painted mural decorations. The tomb of Khnumhotep is an outstanding example of fine design impeccably executed.
The earliest royal tombs in the Valley of the Kings were entirely hidden from view; those of the Ramessid period (19th and 20th dynasties) are marked only by a doorway carved in the rock face. No two followed an identical plan, but most consisted of a series of corridors opening out at intervals to form rooms and ending in a large burial chamber deep in the mountain, where the massive granite sarcophagus rested on the floor. Religious and funerary hieroglyphic texts and pictures covered the walls of the tomb from end to end. The finest of the tombs is that of Seti I, second king of the 19th dynasty; it extends 328 feet (100 metres) into the mountain and contains a spectacular burial chamber, the barrel-shaped roof of which represents the vault of heaven.
After the abandonment of the valley at the end of the 20th dynasty, kings of the subsequent two dynasties were buried in very simple tombs within the temple enclosure of the delta city of Tanis. No later royal tombs have been identified in Egypt proper.
Two principal kinds of temple can be distinguished—cult temples and funerary or mortuary temples. The former accommodated the images of deities, the recipients of the daily cult; the latter were the shrines for the funerary cults of dead kings.
It is generally thought that the Egyptian cult temple of the Old Kingdom owed most to the cult of the sun god Re at Heliopolis, which was probably open in plan and lacking a shrine. Sun temples were unique among cult temples; worship was centred on a cult object, the benben, a squat obelisk placed in full sunlight. Among the few temples surviving from the Old Kingdom are sun temples built by the 5th-dynasty kings at Abū Jirāb (Abu Gurab). That of Neuserre reveals the essential layout: a reception pavilion at the desert edge connected by a covered corridor on a causeway to the open court of the temple high on the desert, within which stood the benben of limestone and a huge alabaster altar. Fine reliefs embellished the covered corridor and also corridors on two sides of the court.
The cult temple achieved its most highly developed form in the great sanctuaries erected over many centuries during the New Kingdom at Thebes. Architecturally the most satisfying is the Luxor Temple, started by Amenhotep III of the 18th dynasty. Dedicated to Amon, king of the gods, his consort Mut, and their son Khons, the temple was built close to the Nile River and parallel with the bank. The original design consists of an imposing open court with colonnades of graceful lotus columns, a smaller offering hall, a shrine for the ceremonial boat of the god, an inner sanctuary for the cult image, and a room in which the divine birth of the king was celebrated. The approach to the temple was made by a colonnade of huge columns with open papyrus-flower capitals, planned by Amenhotep III but decorated with fascinating processional reliefs under Tutankhamun and Horemheb. Later Ramses II built a wide court before the colonnade and two great pylons to form a new entrance. In front of the pylon were colossal statues of the pharaoh (some of which remain) and a pair of obelisks, one of which still stands; the other was removed in 1831 and reerected in the Place de la Concorde in Paris.
Sphinxes lining a path to the entrance of the Luxor temple complex in Luxor, Egypt.
The necessary elements of an Egyptian temple, most of which can be seen at Luxor, are the following: an approach avenue of sphinxes leading to the great double-towered pylon entrance fitted with flagpoles and pennants; before the pylon a pair of obelisks and colossal statues of the king; within the pylon a court leading to a pillared hall (the hypostyle), beyond which might come a further, smaller hall where offerings could be prepared; and, at the heart of the temple, the shrine for the cult image. In addition, there were storage chambers for temple equipment and, in later periods, sometimes a crypt. Outside the main temple building was a lake, or at least a well, for the water needed in the rituals; in later times there might also be a birth house (mammisi) to celebrate the king’s divine birth. The whole, with service buildings, was contained by a massive mud brick wall.
Successive kings would often add to temples so that some complexes became enormous. The great precinct of the Temple of Karnak (the longest side 1,837 feet [560 metres]) contains whole buildings, or parts of buildings, dating from the early 18th dynasty down to the Roman period. Of the structures on the main Karnak axis, the most remarkable are the hypostyle hall and the so-called Festival Hall of Thutmose III. The former contained 134 mighty papyrus columns, 12 of which formed the higher central aisle (76 feet [23 metres] high). Grill windows allowed some light to enter, but it must be supposed that even on the brightest day most of the hall was in deep gloom. The Festival Hall is better described as a memorial hall. Its principal room is distinguished by a series of unusual columns with bell-shaped capitals, inspired by the wooden tent poles used in predynastic buildings. Their lightness contrasts strikingly with the massive supports of the hypostyle.
The most remarkable monument of Ramses II, the great builder, is undoubtedly the temple dedicated to the sun gods Amon-Re and Re-Horakhte at Abu Simbel. Although excavated from the living rock, the structure follows generally the plan of the usual Egyptian temple. Four colossal seated statues emerge from the cliff face: two on either side of the entrance to the main temple. Carved around their feet are small figures representing Ramses’s children, his queen, Nefertari, and his mother, Muttuy (Mut-tuy, or Queen Ti). Three consecutive halls extend 185 feet (56 metres) into the cliff, decorated with more colossal statues of the king—here, disguised as Osiris, god of the underworld—and with painted scenes of Ramses’s purported victory at the Battle of Kadesh. On two days of the year (about February 22 and October 22), the first rays of the morning sun penetrate the whole length of the temple and illuminate the shrine in its innermost sanctuary.
Great Temple of Ramses II, the larger of the two temples at Abu Simbel, now located in Aswān muḥāfaẓah (governorate), southern Egypt.
The other type of temple, the funerary temple, belongs to the mortuary and valley temples included in the pyramid complex of the Old and Middle Kingdoms. The so-called valley temple stood at the lower end of the causeway that led up to the pyramid. It had a columnar hall and storerooms; basins and drainage channels have occasionally been found in the floors. Its exact function is uncertain. It may have been for the purification of the dead king’s body by ritualistic washing and even for the embalming ceremony. It might have served, moreover, as a landing stage during the inundation.
The valley temple of Khafre’s pyramid, known as the Granite temple, had a monumental T-shaped hall along the walls of which were ranged seated statues of the king; the floor was of alabaster, and the roof supported by monolithic pillars of red granite. The mortuary temple adjoined the pyramid and had, usually, a central open court surrounded by pillars or columns, a varying number of storerooms, five elongated chambers or shrines supposedly connected in some way with five official names of the monarch, and a chapel containing a false door and an offering table. It was in this chapel that the priests performed the daily funerary rites and presented offerings to the dead king’s soul.
The 5th dynasty mortuary temples of Sahure’s and Neuserre’s pyramids at Abusir are noteworthy; they appear to have been edifices of great magnificence, with fine palm and papyrus columns of granite and walls embellished with excellent reliefs. The only known instance of a mortuary temple not adjacent to the pyramid but enclosing it is the 11th dynasty funerary monument of King Mentuhotep (c. 2050 BCE) at Dayr al-Baḥrī, now badly ruined. A ramp led up to the terrace, in the centre of which was the pyramid resting on a podium and surrounded on all sides by a covered hall with 140 polygonal columns. Beyond the hall was an open courtyard flanked by columns, and then a hypostyle hall containing a chapel at the far end.
Although no longer buried in pyramids, the New Kingdom sovereigns still had funerary temples built in the vicinity of their rock-cut tombs. By far the most original and beautiful was the female king Hatshepsut’s temple, designed and built by her steward Senenmut near the tomb of Mentuhotep II at Dayr al-Baḥrī. Three terraces lead up to the recess in the cliffs where the shrine was cut into the rock. Each terrace is fronted by colonnades of square pillars protecting reliefs of unusual subjects, including an expedition to Punt and the divine birth of Hatshepsut. Ramps lead from terrace to terrace, and the uppermost level opens into a large court with colonnades. Chapels of Hathor (the principal deity of the temple) and Anubis occupy the south and north ends of the colonnade of the second terrace.
The largest conventionally planned funerary temple complex was probably that of Amenhotep III, now to be judged principally from the two huge quartzite statues, the Colossi of Memnon. These and other royal sculptures found in the ruins of the temple’s courts and halls testify to the magnificence now lost. Its design, as well as much of its stone, was used by Ramses II for his own funerary temple, the Ramesseum. The huge enclosure of the latter included not only the temple but also a royal palace (only traces of which can now be seen). The temple itself contained two huge open courts, entered through towering pylons, which led to a lofty hypostyle hall and a smaller hall with astronomical carvings on the ceiling. Statues of vast size stood before the second pylon, one of which, now toppled and ruined, has been estimated to weigh more than 1,000 tons. Mud brick storerooms in the enclosure preserve ample evidence of the use of the vault in the late 2nd millennium BCE.
Ramses III’s funerary temple at Madīnat Habu contains the best-preserved of Theban mortuary chapels and shrines, as well as the main temple components. The most private parts of the temple, to which few had access apart from the king and his priestly representatives, begin at the sides of the first hypostyle hall, with the temple treasury and a room for the processional boat of Ramses II (a much-honoured ancestor) on the south and shrines for various deities, including Ramses III, on the north. A second pillared hall is flanked by a solar chapel and a small Osiris complex, where the king took on the personae of Re, the sun god, and of Osiris, a transfiguration considered necessary for his divine afterlife. Beyond the Osiris complex, along the temple axis, is a third small hall and the main shrine for the Theban god Amon; two lateral shrines were reserved for Amon’s consort Mut and their divine child Khons.
As with most New Kingdom temples, the mural decorations on the outer walls of funerary temples, including that at Madīnat Habu, dealt mainly with the military campaigns of the king, while the inner scenes were mostly of ritual significance. Within the temple precinct lived and worked a whole community of priests and state officials. A small palace lay to the south of the main building, and a further suite of rooms for the king was installed in the castellated gate building on the east side of the precinct. The reliefs in this “high gate” suggest that the suite was used for recreational purposes by the king together with his women.
The Colossi of Memnon at Madīnat Habu in Thebes, Egypt.
Mud brick and wood were the standard materials for houses and palaces throughout the Dynastic period; stone was used occasionally for such architectural elements as doorjambs, lintels, column bases, and windows.
The best-preserved private houses are those of modest size in the workmen’s village of Dayr al-Madīnah. Exceptional in that they were built of stone, they typically had three or four rooms, comprising a master bedroom, a reception room, a cellar for storage, and a kitchen open to the sky; accommodation on the roof, reached by a stair, completed the plan. Similar domestic arrangements are known from the workmen’s village at Kahun.
Villas for important officials in Akhenaton’s city of Tell el-Amarna were large and finely decorated with brightly painted murals. The house of the vizier Nakht had at least 30 rooms, including separate apartments for the master, his family, and his guests. Such houses had bathrooms and lavatories. The ceilings of large rooms were supported by painted wooden pillars, and there may have been further rooms above. Where space was restricted (as in Thebes), houses of several stories were built. Tomb scenes that show such houses also demonstrate that windows were placed high to reduce sunlight and that hooded vents on roofs were used to catch the breeze.
Palaces, as far as can be judged from remains at Thebes and Tell el-Amarna, were vast, rambling magnified versions of Nakht’s villa, with broad halls, harem suites, kitchen areas, and wide courts. At Tell el-Amarna some monumental formality was introduced in the form of porticoes, colonnades, and statuary. Lavish use was made of mural and floor decoration in which floral and animal themes predominated.
After the conquest of Egypt by Alexander the Great, the independent rule of pharaohs in the strict sense came to an end. Under the Ptolemies, whose rule followed Alexander’s, profound changes took place in art and architecture.
The most lasting impression of the new period is made by its architectural legacy. Although very little survives of important funerary architecture, there is a group of tombs at Tunah al-Jabal of unusual form and great importance. Most interesting is the tomb of Petosiris, high priest of Thoth in nearby Hermopolis Magna in the late 4th century BCE. It is in the form of a small temple with a pillared portico, elaborate column capitals, and a large forecourt. In its mural decorations a strong Greek influence merges with the traditional Egyptian modes of expression.
A boom in temple building followed the establishment of the Ptolemaic regime. At Dandarah, Esna, Idfū, Kawm Umbū (Kôm Ombo), and Philae the Egyptian cult temple can be studied better than at almost any earlier temple. Though erected by the Macedonian rulers of Egypt, these late temples employ purely Egyptian architectural conventions but include flourishes that appear only in the Ptolemaic period, such as pillars in the shape of colossal sistra (percussion instruments), composite capitals with elaborate floral forms, monumental screen walls, and subterranean crypts. The temple of Horus at Idfū is the most complete, displaying all the essential elements of the classical Egyptian temple, but for exploitation of setting and richness of detail it is difficult to fault the temples of Philae and Kawm Umbū, in particular.
At the end of December 2017, Maker released a system for a decentralized stablecoin called Dai. Dai is an ERC20 token pegged to the US dollar: each Dai is worth $1 and is designed to stay at $1 no matter how many Dai exist. Unlike Tether, there is no central authority backing the value and no traditional bank holding a real US dollar for every Dai. There is nothing to shut down and no centralized party to trust. Dai exists entirely on the Ethereum blockchain and is managed by smart contracts.
This article explains why Dai can be trusted and how it is a game changer for cryptocurrencies. You will learn about its pricing mechanisms, its risks, and how the system responds to possible failure scenarios. The Maker organization is one of the oldest Ethereum-focused companies, having worked on this project since before Ethereum launched; its team is highly respected and has the support of Vitalik Buterin.
To see what a stable token makes possible, suppose the peg holds and one Dai is always worth exactly $1:
Merchants get all the benefits of blockchain technology from Dai without the enormous volatility risk. For example, a merchant no longer needs to worry that the price of bitcoin will swing 15% between receiving a payment and converting it to fiat currency. If a seller charges $19.99 for a T-shirt and receives 19.99 Dai, that value holds whether the tokens are cashed out the same day or two months later.
Likewise, customers no longer need to worry about spending an asset that is constantly appreciating. A buyer is unlikely to pay with ETH while believing its value is rising: why spend $19.99 of ETH today if it will be worth $24.99 tomorrow? With Dai, customers do not have to think about price fluctuations at all.
In 2020, merchants who accept cryptocurrencies as payment typically use an intermediary like BitPay, which carries all the disadvantages of traditional payment processors: processing fees, limits, and rules about which industries they will do business with. Routed through an intermediary, the new payment method brings sellers little benefit beyond a few additional sales, and if BitPay decides a merchant no longer suits it, it can cut that merchant off for any reason, without warning.
With Dai, a merchant can accept payments directly, as if receiving cash. No third party is needed to process payments or temporarily hold funds; the blockchain handles everything, and no one can cut the seller off from receiving payments.
Tether, like any other centralized stablecoin, can be hacked or shut down, and funds can be stolen from holders' wallets; everything depends on politics and the human factor. With Dai, you only need to trust the blockchain.
When Dai trades above $1, mechanisms kick in to bring the price down; when it trades below $1, mechanisms work to push the price up. Rational participants take part in these mechanisms because they make money only when the token is not worth exactly $1. As a result the price oscillates in a narrow band around $1, always slightly above or below the peg, and the further it drifts from $1, the stronger the incentive to pull it back.
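To make the direction of these incentives concrete, here is a minimal Python sketch (illustrative only; `arbitrage_action` is an invented helper, not part of Maker's contracts) of which trade a rational participant makes on each side of the peg:

```python
def arbitrage_action(market_price: float, peg: float = 1.0) -> str:
    """Return the profitable action for a rational trader at a given Dai price.

    Above the peg it pays to mint new Dai and sell it; below the peg it pays
    to buy cheap Dai and use it to retire CDP debt. Both trades push the
    price back toward the peg.
    """
    if market_price > peg:
        return "mint Dai and sell it (more supply, price falls toward the peg)"
    if market_price < peg:
        return "buy Dai and repay CDP debt (supply burned, price rises toward the peg)"
    return "no arbitrage: the price sits exactly at the peg"

print(arbitrage_action(1.02))  # mint and sell
print(arbitrage_action(0.99))  # buy and repay
```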
Dai itself is a loan secured by Ethereum. Anyone can create Dai; all it takes is ETH and the technical know-how to use a decentralized application (dApp). Most users, however, will never need to create Dai or understand how it is created. Even most cryptocurrency enthusiasts will simply buy tokens on exchanges, including decentralized ones, which makes Dai an essential component of any decentralized exchange. The longer Dai holds at $1, the more it will be trusted, and users will spend, accept, and convert it as needed.
Dai is quite complex, and some argue that this complexity will prevent it from ever gaining popularity. This misses the point: the main use case for Dai is a stable token pegged at $1, and 99.999% of users will never need to understand how it works. If you do want to understand why Dai can be trusted, however, you must understand all aspects of the system and the economic incentives built into it.
Advanced users can use the MakerDAO dApp to borrow Dai against their ETH holdings.
MakerDAO is a system consisting of a decentralized stablecoin, collateralized loans, and community governance.
First, ETH is converted into "wrapped ETH" (WETH), an ERC20 wrapper that lets ETH be used like any other ERC20 token. WETH is then converted into "pooled ETH" (PETH), joining a large pool of Ethereum that serves as the collateral for all Dai ever created. With PETH you can open a Collateralized Debt Position (CDP), which locks up your PETH and lets you draw Dai against it. As you withdraw Dai, the CDP's debt ratio rises, and a debt ceiling caps the maximum amount of Dai you can draw against the position. Once you hold Dai, you can spend or exchange it freely like any other ERC20 token.
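As a rough illustration of this flow, the following toy Python model tracks only the accounting of a CDP: collateral locked (valued in dollars) and Dai drawn against it, capped by a debt ceiling. The class is invented for illustration and is not Maker's actual contract code; the 60% ceiling anticipates the figure given later in this article.

```python
class CDP:
    """Toy Collateralized Debt Position: bookkeeping only, no blockchain."""

    MAX_DEBT_RATIO = 0.60  # debt ceiling as a fraction of collateral value

    def __init__(self, eth_amount: float, eth_price_usd: float):
        self.collateral_usd = eth_amount * eth_price_usd  # locked PETH, in USD
        self.debt_dai = 0.0

    def draw(self, amount: float) -> None:
        """Mint new Dai against the locked collateral, up to the ceiling."""
        if self.debt_dai + amount > self.MAX_DEBT_RATIO * self.collateral_usd:
            raise ValueError("draw would exceed the maximum debt ratio")
        self.debt_dai += amount

    def repay(self, amount: float) -> None:
        """Return Dai to the system; repaid Dai is destroyed (burned)."""
        self.debt_dai = max(0.0, self.debt_dai - amount)

cdp = CDP(eth_amount=1.0, eth_price_usd=1000.0)
cdp.draw(500.0)      # borrow 500 Dai against $1,000 of ETH
print(cdp.debt_dai)  # 500.0
```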
One reason why 99.999% of people will never open a CDP is that creating Dai is difficult. Still, there are several important reasons to do it despite the difficulty:
You need a loan and you hold an asset (ETH) with which to secure it.
You believe Ethereum will grow in value. A collateralized debt position lets you buy ETH on margin: you lock ETH in a CDP, use the Dai to buy more ETH on an exchange, then lock the newly bought ETH to enlarge the CDP and repeat (see the sketch following this list). All of this happens without any third-party authority; margin trading can be done entirely on the blockchain.
Demand for Dai pushes its price above one dollar. When that happens, you can create Dai and immediately sell it for more than $1. This is essentially free money, and it is one of the mechanisms the Maker system uses to hold the token at one dollar: the system encourages the creation of more Dai whenever the price is above $1.
Margin is collateral locked in an account to support leveraged trading; leverage is borrowed funds that multiply your exposure several times beyond your own capital.
These three reasons are enough to ensure that Dai keeps being created.
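The margin loop in the second reason converges to a geometric limit: each pass redeposits only a fraction of the previous deposit. A small Python sketch (purely illustrative; the 60% loan-to-value figure is the one this article cites) makes the arithmetic explicit:

```python
def leveraged_exposure(initial_usd: float, ltv: float, rounds: int) -> float:
    """Total ETH exposure (in USD) after repeatedly re-collateralizing.

    Each round draws Dai at the loan-to-value ratio `ltv`, buys ETH with it,
    and locks that ETH again. The exposure approaches the geometric-series
    limit initial_usd / (1 - ltv).
    """
    exposure, stake = 0.0, initial_usd
    for _ in range(rounds):
        exposure += stake
        stake *= ltv  # each new draw is ltv times the previous deposit
    return exposure

print(leveraged_exposure(1000, ltv=0.6, rounds=10))  # ~2484.9
print(1000 / (1 - 0.6))                              # limit: 2500.0
```

With a 60% draw each round, $1,000 of ETH can therefore never be leveraged beyond $2,500 of total exposure.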
Economic incentives guarantee value retention.
Example
Let's say Vladimir opens a CDP with $1,000 in ETH and withdraws 500 Dai. To close the position, he must pay back 500 Dai; paying off the debt destroys those tokens.
If Dai is worth less than $1, say $0.99, he can buy it at a 1% discount and use it to pay off his debt. This is essentially free money: having borrowed 500 Dai (worth $500), he buys 500 Dai back for $495 (500 × 0.99), repays the loan, and comes out $5 ahead, keeping the difference in ETH.
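The arithmetic of this trade can be written out directly (a sketch of the numbers above, not real trading code):

```python
# Reproducing the example: debt repayment when Dai trades below the peg.
debt_dai = 500                       # Dai owed on the CDP
market_price = 0.99                  # Dai trading 1% below the peg
cost_usd = debt_dai * market_price   # buying 500 Dai costs $495
profit_usd = debt_dai * 1.00 - cost_usd
print(cost_usd)    # 495.0
print(profit_usd)  # 5.0, kept in ETH once the loan is repaid
```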
This demand for Dai pushes its price back up toward one dollar, and as long as it stays below $1, CDP holders keep repaying debt and removing Dai from the system. When Dai trades above $1, new Dai is created to meet demand. Creation and destruction, supply and demand, keep Dai pegged to the US dollar.
What happens if the value of ETH falls? The short answer: as long as ETH is worth something and its value is not extremely volatile, the system continues to work.
CDPs carry varying degrees of debt. When you open a CDP, you can draw Dai worth up to 60% of the collateral's value, so $1,000 of ETH lets you draw up to 600 Dai. Not every CDP draws the full amount; the more you draw, the riskier the position, and some holders draw only 10%, 25%, or 30%.
As Ethereum's dollar value changes, so does the debt ratio of each CDP. When ETH rises, every CDP becomes safer, since its debt is a smaller share of its collateral; when ETH falls, every CDP becomes riskier.
Because each CDP has a different debt ratio, they can be ranked in order of riskiness. Riskier ones have higher debt ratios.
As ETH drops in value, each CDP edges closer to the 60% debt threshold. Once a CDP crosses it, any holder of Dai can pay off that CDP's debt at a profit; doing so destroys the Dai, closes the CDP, and penalizes its owner.
In summary, rational participants have an interest in removing risky debt from the Dai system by paying it off. CDP holders who allow debt to become so risky are penalized for doing so, which encourages them to pay off the debt on time.
CDP holders monitor their risk level so that, in an emergency, they can repay Dai early and avoid the penalty. Those who neglect their CDP are penalized by the system once it crosses the 60% threshold.
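A minimal sketch of the risk check just described (the 60% threshold is this article's figure; the real liquidation logic lives in Maker's smart contracts):

```python
def debt_ratio(debt_dai: float, eth_amount: float, eth_price_usd: float) -> float:
    """Current debt as a fraction of the collateral's dollar value."""
    return debt_dai / (eth_amount * eth_price_usd)

def can_be_liquidated(debt_dai: float, eth_amount: float,
                      eth_price_usd: float, threshold: float = 0.60) -> bool:
    """True once the CDP crosses the liquidation threshold."""
    return debt_ratio(debt_dai, eth_amount, eth_price_usd) > threshold

# 500 Dai drawn against 1 ETH at $1,000 is safe (ratio 0.50)...
print(can_be_liquidated(500, 1.0, 1000.0))  # False
# ...but if ETH falls to $800, the ratio climbs to 0.625 and anyone may act.
print(can_be_liquidated(500, 1.0, 800.0))   # True
```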
Ethereum's value dropped from $1,400 in January 2018 to $400 in April 2018, yet Dai's incentive system successfully held its value at $1, an impressive achievement and proof that Dai holds up even while ETH is falling.
The Maker governance token (MKR) has two uses.
First, owning MKR gives you the right to vote on the system itself. This is why Maker is a "decentralized autonomous organization" (DAO): MKR holders vote on parameters such as maximum debt ratios, and they can also shut down the entire system in the event of a failure, an important fail-safe mechanism.
Second, repaying CDP debt requires the owner to pay interest of 1% per annum, payable only in MKR. For example, a loan of 500 Dai secured by $1,000 in ETH and held for a year costs 500 Dai plus $5 worth of MKR to repay. The MKR used to pay this fee is then "burned" (destroyed).
Burning tends to push the value of the governance token up over time. Since MKR is divisible to 18 decimal places, a single MKR token could, in principle, support a quadrillion US dollars of trade without new issuance, and the number of decimal places can be increased later if necessary. There is thus no risk of MKR running out: as the token's value rises, ever smaller amounts need to be destroyed.
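The fee in the example above reduces to simple interest (a sketch matching the article's numbers; Maker's actual fee accounting may differ in detail):

```python
def stability_fee_usd(debt_dai: float, annual_rate: float, years: float) -> float:
    """Dollar value of MKR owed for keeping a CDP open, simple-interest style."""
    return debt_dai * annual_rate * years

# 500 Dai held for one year at 1% costs $5 worth of MKR, burned on repayment.
print(stability_fee_usd(500, 0.01, 1.0))  # 5.0
```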
ETH is the most important asset on the Ethereum blockchain, so it makes sense to use it as the first asset backing CDPs. The Maker system, however, can work with any asset, provided it exists on Ethereum and its price in US dollars can be reliably reported.
Maker plans to use a physical-gold-backed token from the Digix system as its next collateral asset. Ultimately, a single CDP could be backed by multiple assets, including other ERC20 tokens.
Why is Dai pegged to the dollar?
The Maker system can work with a peg to any other currency or asset. In the future, such stablecoins could be pegged to major currencies (the euro, the pound sterling, the Swiss franc), to assets such as gold, or even to stocks. All it takes is a reliable source of price data.
DAI tokens are traded on many exchanges, such as Binance, Coinbase, and Kraken, as well as on DEXs like Uniswap, SushiSwap, and Curve Finance. The stablecoin's rate can shift by several cents during the day, with its price ranging from roughly $0.9 to $1.1 at the extremes.
How to buy the DAI stablecoin with a card on the Binance exchange:
register on the platform with an email address;
pass passport verification;
on the "Buy Cryptocurrency" page, enter the required number of DAI tokens;
click the "Continue" button and select a bank card for payment.
Traders can find the coin paired with other stablecoins, fiat currencies, and cryptocurrencies such as ETH, BTC, and BNB. An extensive list of available platforms and currency pairs can be found on the DAI page on CoinMarketCap.
MakerDAO runs on the Ethereum blockchain, and its developers issued DAI and MKR as ERC20 tokens. Accordingly, DAI can be stored in Ethereum wallets such as MyEtherWallet and MetaMask.
An alternative is to store DAI in the built-in wallets of cryptocurrency exchanges. In essence, DAI is a digital US dollar, so it is convenient for trading in markets that offer cryptocurrency pairs against it.
The safest storage option is a hardware wallet from brands such as Trezor and Ledger. These wallets are dedicated devices, similar to flash drives, without which your coins cannot be accessed.
The proof that Dai works is its one-dollar value. As Dai continues to hold its peg to the dollar, faith in it grows. More than 99.999% of users do not need to understand how Dai works; they only need to trust it, and its track record provides that trust. If you need to understand Dai in depth to feel comfortable using it, read more about the system and its incentives.
Dai is a game changer because it lets you transfer US dollars in any amount, instantly, across borders, without commissions and without intermediaries. This opens a new era of commerce that exists entirely on the blockchain and cannot be shut down. Maker demonstrates the unique capabilities of Ethereum and provides a solution that was impossible before the advent of blockchain technology.
Émile Durkheim (born April 15, 1858, Épinal, France—died November 15, 1917, Paris), French social scientist who developed a vigorous methodology combining empirical research with sociological theory. He is widely regarded as the founder of the French school of sociology.
Durkheim was born into a Jewish family of very modest means, and it was taken for granted that he would become a rabbi, like his father. The death of his father before Durkheim was 20, however, burdened him with heavy responsibilities. As early as his late teens Durkheim became convinced that effort and even sorrow are more conducive to the spiritual progress of the individual than pleasure or joy. He became a gravely disciplined young man.
As an excellent student at the Lycée Louis le Grand, Durkheim was a strong candidate to enter the renowned and highly competitive École Normale Supérieure in Paris. While taking his board examination at the Institut Jauffret in the Latin Quarter, he met another gifted young man from the provinces, Jean Jaurès, later to lead the French Socialist Party and at that time interested, like Durkheim, in philosophy and in the moral and social reform of his country. Jaurès won entrance to the École Normale in 1878; one year later Durkheim did the same.
Durkheim’s religious faith had vanished by then, and his thought had become altogether secular but with a strong bent toward moral reform. Like a number of French philosophers during the Third Republic, Durkheim looked to science and in particular to social science and to profound educational reform as the means to avoid the perils of social disconnectedness, or “anomie,” as he was to call that condition in which norms for conduct were either absent, weak, or conflicting.
He enjoyed the intellectual atmosphere of the École Normale—the discussion of metaphysical and political issues pursued with eagerness and animated by the utopian dreams of young men destined to be among the leaders of their country. Durkheim was respected by his peers and teachers, but he was impatient with the excessive stress on elegant rhetoric and surface polish then prevalent in French higher education. His teachers of philosophy struck him as too fond of generalities and too worshipful of the past.
Fretting at the conventionality of formal examinations, Durkheim passed the last competitive examination in 1882 but without the brilliance that his friends had predicted for him. He then accepted a series of provincial assignments as a teacher of philosophy at the state secondary schools of Sens, Saint-Quentin, and Troyes between 1882 and 1887. In 1885–86 he took a year’s leave of absence to pursue research in Germany, where he was impressed by Wilhelm Wundt, a pioneering experimental psychologist. In 1887 he was appointed lecturer at the University of Bordeaux, where he subsequently became a professor and taught social philosophy until 1902. He then moved to the University of Paris, where he wrote some of his most important works and influenced a generation of scholars.
Durkheim was familiar with several foreign languages and reviewed academic papers in German, English, and Italian at length for L’Année sociologique, the journal he founded in 1896. It has been noted, however, at times with disapproval and amazement by non-French social scientists, that Durkheim traveled little and that, like many French scholars and the notable British anthropologist Sir James Frazer, he never undertook any fieldwork. The vast information Durkheim studied on the tribes of Australia and New Guinea and on the Eskimos was all collected by other anthropologists, travelers, or missionaries.
This was not due to provincialism or lack of attention to the concrete. Durkheim did not resemble the French philosopher Auguste Comte in making venturesome and dogmatic generalizations while disregarding empirical observation. He did, however, maintain that concrete observation in remote parts of the world does not always lead to illuminating views on the past or even on the present. For him, facts had no intellectual meaning unless they were grouped into types and laws. He claimed repeatedly that it is from a construction erected on the inner nature of the real that knowledge of concrete reality is obtained, a knowledge not perceived by observation of the facts from the outside. He thus constructed concepts such as the sacred and totemism exactly in the same way that Karl Marx developed the concept of class. In truth, Durkheim’s vital interest did not lie in the study for its own sake of so-called primitive tribes but rather in the light such a study might throw on the present.
The outward events of his life as an intellectual and as a scholar may appear undramatic. Still, much of what he thought and wrote stemmed from the events that he witnessed in his formative years, in the 1870s and ’80s, and in the earnest concern he took in them.
The Second Empire, which collapsed in the 1870 defeat of the French at the hands of Germany, had signified an era of levity and dissipation to the young scholar. France, with the support of many of its liberal and intellectual elements, had plunged headlong into a war for which it was unprepared; its leaders proved incapable. The left-wing Commune of Paris, which took over the French capital in 1871, led to senseless destruction, which appeared to Durkheim’s generation, in retrospect, as evidence of the alienation of the working classes from capitalist society.
The bloody repression that followed the Commune was taken as further evidence of the ruthlessness of capitalism and of the selfishness of the frightened bourgeoisie. Later, the crisis of 1886 over Georges Boulanger, the minister of war who demanded a centralist government to execute a policy of revenge against Germany, was one of several events that testified to the resurgence of nationalism, soon to be accompanied by anti-Semitism. Such major French thinkers of the older generation as Ernest Renan and Hippolyte Taine interrupted their historical and philosophical works after 1871 to analyze those evils and to offer remedies.
Durkheim was one of several young philosophers and scholars, fresh from their École Normale training, who became convinced that progress was not the necessary consequence of science and technology, that it could not be represented by an ascending curve, and that complacent optimism could not be justified. He perceived around him the prevalence of anomie, a personal sense of rootlessness fostered by the absence of social norms. Material prosperity set free greed and passions that threatened the equilibrium of society.
These sources of Durkheim’s sociological reflections, never remote from moral philosophy, were first expressed in his very important doctoral thesis, De la division du travail social (1893; The Division of Labour in Society), and in Le Suicide (1897; Suicide). In Durkheim’s view, ethical and social structures were being endangered by the advent of technology and mechanization. He believed that societies with undifferentiated labour (i.e., primitive societies) exhibited mechanical solidarity, while societies with a high division of labour, or increased specialization (i.e., modern societies), exhibited organic solidarity. The division of labour rendered workers more alien to one another and yet more dependent upon one another; specialization meant that no individual labourer would build a product on his or her own.
Durkheim’s 1897 study of suicide was based on his observation that suicide appeared to be less frequent where the individual was closely integrated into a society; in other words, those lacking a strong social identification would be more susceptible to suicide. Thus, the apparently purely individual decision to renounce life could be explained through social forces.
These early volumes, and the one in which he formulated with scientific rigour the rules of his sociological method, Les Règles de la méthode sociologique (1895; The Rules of Sociological Method), brought Durkheim fame and influence. But the new science of sociology frightened timid souls and conservative philosophers, and he had to endure many attacks. In addition, the Dreyfus affair—resulting from the false charge against a Jewish officer, Alfred Dreyfus, of spying for the Germans—erupted in the last years of the century, and the slurs and outright insults aimed at Jews that accompanied it opened Durkheim’s eyes to the latent hatred and passionate feuds hitherto concealed under the varnish of civilization. He took an active part in the campaign to exonerate Dreyfus. Perhaps as a result, Durkheim was not elected to the Institut de France, although his stature as a thinker suggests that he should have been named to that prestigious learned society. He was, however, appointed to the University of Paris in 1902 and was made a full professor there in 1906.
More and more, Durkheim’s thought became concerned with education and religion as the two most potent means of reforming humanity or of molding the new institutions required by the deep structural changes in society. His colleagues admired Durkheim’s zeal on behalf of educational reform. His efforts included participating in numerous committees to prepare new curriculums and methods; working to enliven the teaching of philosophy, which too long had dwelt on generalities; and attempting to teach teachers how to teach. A series of courses that he had given at Bordeaux on the subject of L’Évolution pédagogique en France (“Pedagogical Evolution in France”) was published posthumously in 1938; it remains one of the best informed and most impartial books on French education. The other important work of Durkheim’s later years, Les Formes élémentaires de la vie religieuse (1912; The Elementary Forms of Religious Life), dealt with the totemic system in Australia. The author, despite his own agnosticism, evinced a sympathetic understanding of religion in all its stages yet ultimately subordinated religion to the service of society by concluding that religion’s primary function was to maintain the social order. French conservatives—who in the years preceding World War I turned against the Sorbonne, which they charged was unduly swayed by the prestige of German scholarship—railed at Durkheim, who, they thought, was influenced by the German urge to systematize, thereby making a fetish of society and a religion of sociology.
In fact, Durkheim did not make an idol of sociology as did the positivists schooled by Comte, nor was he a “functionalist” who explained every social phenomenon by its usefulness in maintaining the existence and equilibrium of a social organism. He did, however, endeavour to formulate a positive social science that might direct people’s behaviour toward greater solidarity.
The outbreak of World War I came as a cruel blow to him. For many years he had expended too much energy on teaching, on writing, on outlining plans for reform, and on ceaselessly feeding the enthusiasm of his disciples, and eventually his heart had been affected. His gaunt and nervous appearance filled his colleagues with foreboding. The whole of French sociology, then in full bloom thanks to him, seemed to be his responsibility.
The breaking point came when his only son was killed in 1916, while fighting on the Balkan front. Durkheim stoically attempted to hide his sorrow, but the loss, coming on top of insults by nationalists who denounced him as a professor of “apparently German extraction” who taught a “foreign” discipline at the Sorbonne, was too much to bear. He died in November 1917.
Durkheim left behind him a brilliant school of researchers. He had never been a tyrannical master; he had encouraged his disciples to go farther than himself and to contradict him if need be. His nephew, Marcel Mauss, who held the chair of sociology at the Collège de France, was less systematic than Durkheim and paid greater attention to symbolism as an unconscious activity of the mind. Social anthropologist Claude Lévi-Strauss also occupied the same chair of sociology and resembled Durkheim in the way he combined reasoning with intensity of feeling, yet, unlike Durkheim, he went on to become a leading proponent of structuralism.
Durkheim’s influence extended beyond the social sciences. Through him, sociology became a seminal discipline in France that broadened and transformed the study of law, economics, Chinese institutions, linguistics, ethnology, art history, and history.
division of labour, the separation of a work process into a number of tasks, with each task performed by a separate person or group of persons. It is most often applied to systems of mass production and is one of the basic organizing principles of the assembly line. Breaking down work into simple repetitive tasks eliminates unnecessary motion and limits the handling of different tools and parts. The consequent reduction in production time and the ability to replace craftsmen with lower-paid unskilled workers result in lower production costs and a less expensive final product. Contrary to popular belief, however, division of labour does not necessarily lead to a decrease in skills—known as proletarianization—among the working population. The Scottish economist Adam Smith saw this splitting of tasks as a key to economic progress by providing a cheaper and more efficient means of producing goods.
The French scholar Émile Durkheim first used the phrase division of labour in a sociological sense in his discussion of social evolution. Rather than viewing division of labour as a consequence of a desire for material abundance, Durkheim stated that specialization arose from changes in social structure caused by an assumed natural increase in the size and density of population and a corresponding increase in competition for survival. Division of labour functioned to keep societies from breaking apart under these conditions.
The intensive specialization in industrial societies—the refinement and simplification of tasks (especially associated with a machine technology) so that a worker often produces only a small part of a particular commodity—is not usually found in nonindustrialized societies. There is rarely a division of labour within an industry in nonliterate communities, except perhaps for the production of larger goods (such as houses or canoes); in these cases the division is often a temporary one, and each worker is competent to perform other phases of the task. There may be some specialization in types of products (e.g., one worker may produce pottery for religious uses; another, pottery for ordinary uses), but each worker usually performs all steps of the process.
A division of labour based on sex appears to be universal, but the form that this takes varies widely across cultures. Divisions on the basis of age, clan affiliation, hereditary position, or guild membership, as well as regional and craft specialization, are also found.
mechanical and organic solidarity, in the theory of the French social scientist Émile Durkheim (1858–1917), the social cohesiveness of small, undifferentiated societies (mechanical) and of societies differentiated by a relatively complex division of labour (organic).
Mechanical solidarity is the social integration of members of a society who have common values and beliefs. These common values and beliefs constitute a “collective conscience” that works internally in individual members to cause them to cooperate. Because, in Durkheim’s view, the forces causing members of society to cooperate were much like the internal energies causing the molecules to cohere in a solid, he drew upon the terminology of physical science in coining the term mechanical solidarity.
In contrast to mechanical solidarity, organic solidarity is social integration that arises out of the need of individuals for one another’s services. In a society characterized by organic solidarity, there is relatively greater division of labour, with individuals functioning much like the interdependent but differentiated organs of a living body. Society relies less on imposing uniform rules on everyone and more on regulating the relations between different groups and persons, often through the greater use of contracts and laws.
cultural globalization, the phenomenon by which the experience of everyday life, as influenced by the diffusion of commodities and ideas, reflects a standardization of cultural expressions around the world. Propelled by the efficiency or appeal of wireless communications, electronic commerce, popular culture, and international travel, globalization has been seen as a trend toward homogeneity that will eventually make human experience everywhere essentially the same. This appears, however, to be an overstatement of the phenomenon. Although homogenizing influences do indeed exist, they are far from creating anything akin to a single world culture.
Some observers argue that a rudimentary version of world culture is taking shape among certain individuals who share similar values, aspirations, or lifestyles. The result is a collection of elite groups whose unifying ideals transcend geographical limitations.
“Davos” culture
One such cadre, according to political scientist Samuel Huntington in The Clash of Civilizations (1998), comprises an elite group of highly educated people who operate in the rarefied domains of international finance, media, and diplomacy. Named after the Swiss town that began hosting annual meetings of the World Economic Forum in 1971, these “Davos” insiders share common beliefs about individualism, democracy, and market economics. They are said to follow a recognizable lifestyle, are instantly identifiable anywhere in the world, and feel more comfortable in each other’s presence than they do among their less-sophisticated compatriots.
The international “faculty club”
The globalization of cultural subgroups is not limited to the upper classes. Expanding on the concept of Davos culture, sociologist Peter L. Berger observed that the globalization of Euro-American academic agendas and lifestyles has created a worldwide “faculty club”—an international network of people who share similar values, attitudes, and research goals. While not as wealthy or privileged as their Davos counterparts, members of this international faculty club wield tremendous influence through their association with educational institutions worldwide and have been instrumental in promoting feminism, environmentalism, and human rights as global issues. Berger cited the antismoking movement as a case in point: the movement began as a singular North American preoccupation in the 1970s and subsequently spread to other parts of the world, traveling along the contours of academe’s global network.
Nongovernmental organizations
Another global subgroup comprises “cosmopolitans” who nurture an intellectual appreciation for local cultures. As pointed out by Swedish anthropologist Ulf Hannerz, this group advocates a view of global culture based not on the “replication of uniformity” but on the “organization of diversity.” Often promoting this view are nongovernmental organizations (NGOs) that lead efforts to preserve cultural traditions in the developing world. By the beginning of the 21st century, institutions such as Cultural Survival were operating on a world scale, drawing attention to indigenous groups who are encouraged to perceive themselves as “first peoples”—a new global designation emphasizing common experiences of exploitation among indigenous inhabitants of all lands. By sharpening such identities, these NGOs have globalized the movement to preserve indigenous world cultures.
Transnational workers
Another group stems from the rise of a transnational workforce. Indian-born anthropologist Arjun Appadurai has studied English-speaking professionals who trace their origins to South Asia but who live and work elsewhere. They circulate in a social world that has multiple home bases, and they have gained access to a unique network of individuals and opportunities. For example, many software engineers and Internet entrepreneurs who live and work in Silicon Valley, California, maintain homes in—and strong social ties to—Indian states such as Maharashtra and Punjab.
The persistence of local culture
Underlying these various visions of globalization is a reluctance to define exactly what is meant by the term culture. During most of the 20th century, anthropologists defined culture as a shared set of beliefs, customs, and ideas that held people together in recognizable, self-identified groups. Scholars in many disciplines challenged this notion of cultural coherence, especially as it became evident that members of close-knit groups held radically different visions of their social worlds. Culture is no longer perceived as a knowledge system inherited from ancestors. As a result, many social scientists now treat culture as a set of ideas, attributes, and expectations that change as people react to changing circumstances. Indeed, by the turn of the 21st century, the collapse of barriers enforced by Soviet communism and the rise of electronic commerce had increased the perceived speed of social change everywhere.
The term local culture is commonly used to characterize the experience of everyday life in specific, identifiable localities. It reflects ordinary people’s feelings of appropriateness, comfort, and correctness—attributes that define personal preferences and changing tastes. Given the strength of local cultures, it is difficult to argue that an overarching global culture actually exists. Jet-setting sophisticates may feel comfortable operating in a global network disengaged from specific localities, but these people constitute a very small minority; their numbers are insufficient to sustain a coherent cultural system. It is more important to ask where these global operators maintain their families, what kind of kinship networks they rely upon, if any, and whether theirs is a transitory lifestyle or a permanent condition. For most people, place and locality still matter. Even the transnational workers discussed by Appadurai are rooted in local communities bound by common perceptions of what represents an appropriate and fulfilling lifestyle.
Research on globalization has shown that it is not an omnipotent, unidirectional force leveling everything in its path. Because a global culture does not exist, any search for it would be futile. It is more fruitful to instead focus on particular aspects of life that are indeed affected by the globalizing process.
The compression of time and space
The breakdown of time and space is best illustrated by the influential “global village” thesis posed by communications scholar Marshall McLuhan in The Gutenberg Galaxy (1962). Instantaneous communication, predicted McLuhan, would soon destroy geographically based power imbalances and create a global village. Later, geographer David Harvey argued that the postmodern condition is characterized by a “time-space compression” that arises from inexpensive air travel and the ever-present use of telephones, fax, e-mail, and social media.
There can be little doubt that people perceive the world today as a smaller place than it appeared to their grandparents. In the 1960s and ’70s immigrant workers in London relied on postal systems and personally delivered letters to send news back to their home villages in India, China, and elsewhere; it could take two months to receive a reply. The telephone was not an option, even in dire emergencies. By the late 1990s, the grandchildren of these first-generation migrants were carrying cellular phones that linked them to cousins in cities such as Kolkata (Calcutta), Singapore, or Shanghai. Awareness of time zones (when people will be awake; what time offices open) is now second nature to people whose work or family ties connect them to far-reaching parts of the world.
McLuhan’s notion of the global village presupposed the worldwide spread of television, which brings distant events into the homes of viewers everywhere. Building on this concept, McLuhan claimed that accelerated communications produce an “implosion” of personal experience—that is, distant events are brought to the immediate attention of people halfway around the world. The spectacular growth of Cable News Network (CNN) is a case in point. CNN became an icon of globalization by broadcasting its U.S.-style news programming around the world, 24 hours a day. Live coverage of the fall of the Berlin Wall in 1989, the Persian Gulf War in 1991, and extended coverage of events surrounding the terrorist attacks in New York City and Washington, D.C., on September 11, 2001, illustrated television’s powerful global reach. Some governments have responded to such advances by attempting to restrict international broadcasting, but satellite communication makes these restrictions increasingly unenforceable.
Travel
Since the mid-1960s, the cost of international flights has declined, and foreign travel has become a routine experience for millions of middle- and working-class people. Diplomats, businesspeople, and ordinary tourists can feel “at home” in any city, anywhere in the world. Foreign travel no longer involves the challenge of adapting to unfamiliar food and living arrangements. CNN has been an essential feature of the standardized hotel experience since at least the 1990s. More significantly, Western-style beds, toilets, showers, fitness centres, and restaurants now constitute the global standard. A Japanese variant on the Westernized hotel experience, featuring Japanese-style food and accommodations, can also be found in most major cities. These developments are linked to the technology of climate control. In fact, the very idea of routine global travel was inconceivable prior to the universalization of air-conditioning. An experience of this nature would have been nearly impossible in the 1960s, when the weather, aroma, and noise of the local society pervaded one’s hotel room.
Clothing
Modes of dress can disguise an array of cultural diversity behind a facade of uniformity. The man’s business suit, with coloured tie and buttoned shirt, has become “universal” in the sense that it is worn just about everywhere, although variations have appeared in countries that are cautious about adopting global popular culture. Iranian parliamentarians, for example, wear the “Western” suit but forgo the tie, while Saudi diplomats alternate “traditional” Bedouin robes with tailored business suits, depending upon the occasion. In the early years of the 21st century, North Korea and Afghanistan were among the few societies holding out against these globalizing trends.
The emergence of women’s “power suits” in the 1980s signified another form of global conformity. Stylized trouser-suits, with silk scarves and colourful blouses (analogues of the male business suit), are now worldwide symbols of modernity, independence, and competence. Moreover, the export of used clothing from Western countries to developing nations has accelerated the adoption of Western-style dress by people of all socioeconomic levels around the world.
Some military fashions reflect a similar sense of convergence. Rebel fighters, such as those in Central Africa, South America, or the Balkans, seemed to take their style cue from the guerrilla garb worn by movie star Sylvester Stallone in his trilogy of Rambo films. In the 1990s the United States military introduced battle helmets that resembled those worn by the German infantry during World War II. Many older Americans were offended by the association with Nazism, but younger Americans and Europeans made no such connections. In 2001, a similar helmet style was worn by elite Chinese troops marching in a parade in Beijing’s Tiananmen Square.
Chinese fashion underwent sweeping change after the death in 1976 of Communist Party Chairman Mao Zedong and the resultant economic liberalization. Western suits or casual wear became the norm. The androgynous gray or blue Mao suit essentially disappeared in the 1980s, worn only by communist patriarch Deng Xiaoping and a handful of aging leaders who dressed in the uniform of the Cultural Revolution until their deaths in the 1990s—by which time Mao suits were being sold in Hong Kong and Shanghai boutiques as high-priced nostalgia wear, saturated with postmodern irony.
Entertainment
The power of media conglomerates and the ubiquity of entertainment programming have globalized television’s impact and made it a logical target for accusations of cultural imperialism. Critics cite a 1999 anthropological study that linked the appearance of anorexia in Fiji to the popularity of American television programs, notably Melrose Place and Beverly Hills 90210. Both series featured slender young actresses who, it was claimed, led Fijian women (who are typically fuller-figured) to question indigenous notions of the ideal body.
Anti-globalism activists contend that American television shows have corrosive effects on local cultures by highlighting Western notions of beauty, individualism, and sexuality. Although many of the titles exported are considered second-tier shows in the United States, there is no dispute that these programs are part of the daily fare for viewers around the world. Television access is widespread, even if receivers are not present in every household. In the small towns of Guatemala, the villages of Jiangxi province in China, or the hill settlements of Borneo, for instance, one television set—often a satellite system powered by a gasoline generator—may serve two or three dozen viewers, each paying a small fee. Collective viewing in bars, restaurants, and teahouses was common during the early stages of television broadcasting in Indonesia, Japan, Kenya, and many other countries. By the 1980s video-viewing parlours had become ubiquitous in many regions of the globe.
Live sports programs continue to draw some of the largest global audiences. The 1998 World Cup men’s football (soccer) final between Brazil and France was watched by an estimated two billion people. After the 1992 Olympic Games, when the American “Dream Team” of National Basketball Association (NBA) stars electrified viewers who had never seen the sport played to U.S. professional standards, NBA games were broadcast in Australia, Israel, Japan, China, Germany, and Britain. In the late 1990s Michael Jordan, renowned for leading the Chicago Bulls to six championships with his stunning basketball skills, became one of the world’s most recognized personalities.
Hollywood movies have had a similar influence, much to the chagrin of some countries. In early 2000 Canadian government regulators ordered the Canadian Broadcasting Corporation (CBC) to reduce the showing of Hollywood films during prime time and to instead feature more Canadian-made programming. CBC executives protested that their viewers would stop watching Canadian television stations and turn to satellite reception for international entertainment. Such objections were well grounded, given that, in 1998, 79 percent of English-speaking Canadians named a U.S. program when asked to identify their favourite television show.
Hollywood, however, does not hold a monopoly on entertainment programming. The world’s most prolific film industry is in Bombay (Mumbai), India (“Bollywood”), where as many as 2,000 feature films are produced annually in all of India’s major languages. Primarily love stories with heavy doses of singing and dancing, Bollywood movies are popular throughout Southeast Asia and the Middle East. State censors in Islamic countries often find the modest dress and subdued sexuality of Indian film stars acceptable for their audiences. Although the local appeal of Bollywood movies remains strong, exposure to Hollywood films such as Jurassic Park (1993) and Speed (1994) caused young Indian moviegoers to develop an appreciation for the special effects and computer graphics that had become the hallmarks of many American films.
Food
Food is the oldest global carrier of culture. In fact, food has always been a driving force for globalization, especially during earlier phases of European trade and colonial expansion. The hot red pepper was introduced to the Spanish court by Christopher Columbus in 1493. It spread rapidly throughout the colonial world, transforming cuisines and farming practices in Africa, Asia, and the Middle East. It might be difficult to imagine Korean cuisine without red pepper paste or Szechuan food without its fiery hot sauce, but both are relatively recent innovations—probably from the 17th century. Other New World crops, such as corn (maize), cassava, sweet potatoes, and peanuts (groundnuts), were responsible for agricultural revolutions in Asia and Africa, opening up terrain that had previously been unproductive.
One century after the sweet potato was introduced into south China (in the mid-1600s), it had become a dominant crop and was largely responsible for a population explosion that created what today is called Cantonese culture. It is the sweet potato, not the more celebrated white rice, which sustained generations of southern Chinese farmers. These are the experiences that cause cultural meaning to be attached to particular foods. Today the descendants of Cantonese, Hokkien, and Hakka pioneers disdain the sweet potato as a “poverty food” that conjures images of past hardships. In Taiwan, by contrast, independence activists (affluent members of the rising Taiwanese middle class) have embraced the sweet potato as an emblem of identity, reviving old recipes and celebrating their cultural distinctions from “rice-eating mainlanders.”
While the global distribution of foods originated with the pursuit of exotic spices (such as black pepper, cinnamon, and cloves), contemporary food trading features more prosaic commodities, such as soybeans and apples. African bananas, Chilean grapes, and California oranges have helped to transform expectations about the availability and affordability of fresh produce everywhere in the world. Green beans are now grown in Burkina Faso in West Africa and shipped by express air cargo to Paris, where they end up on the plates of diners in the city’s top restaurants. This particular exchange system is based on a “nontraditional” crop that was not grown in Burkina Faso until the mid-1990s, when the World Bank encouraged its cultivation as a means of promoting economic development. The country soon became Africa’s second largest exporter of green beans. West African farmers consequently found themselves in direct competition with other “counter-season” growers of green beans from Brazil and Florida.
The average daily diet has also undergone tremendous change, with all nations converging on a diet high in meat, dairy products, and processed sugars. Correlating closely to a worldwide rise in affluence, the new “global diet” is not necessarily a beneficial trend, as it can increase the risk of obesity and diabetes. Now viewed as a global health threat, obesity has been dubbed “globesity” by the World Health Organization. To many observers, the homogenization of human diet appears to be unstoppable. Vegetarians, environmental activists, and organic food enthusiasts have organized rearguard actions to reintroduce “traditional” and more wholesome dietary practices, but these efforts have been concentrated among educated elites in industrial nations.
Western food corporations are often blamed for these dietary trends. McDonald’s, KFC (Kentucky Fried Chicken), and Coca-Cola are primary targets of anti-globalism demonstrators (who are themselves organized into global networks, via the Internet). McDonald’s has become a symbol of globalism for obvious reasons: on an average day in 2001, the company served nearly 45 million customers at more than 25,000 restaurants in 120 countries. It succeeds in part by adjusting its menu to local needs. In India, for example, no beef products are sold.
McDonald’s also succeeds in countries that might be expected to disdain fast food. In France, for example, food, especially haute cuisine, is commonly regarded as the core element of French culture. Nevertheless, McDonald’s continues to expand in the very heartland of opposition: by the turn of the 21st century there were more than 850 McDonald’s restaurants in France, employing over 30,000 people. Not surprisingly, many European protest movements have targeted McDonald’s as an agent of cultural imperialism. French intellectuals may revile the Big Mac sandwich for all that it symbolizes, but the steady growth of fast-food chains demonstrates that anti-globalist attitudes do not always affect economic behaviour, even in societies (such as France) where these sentiments are nearly universal. Like their counterparts in the United States, French workers are increasingly pressed for time. The two-hour lunch is largely a thing of the past.
Food and beverage companies attract attention because they cater to the most elemental form of human consumption. We are what we eat, and when diet changes, notions of national and ethnic identity are affected. Critics claim that the spread of fast food undermines indigenous cuisines by forcing a homogenization of world dietary preferences, but anthropological research in Russia, Japan, and Hong Kong does not support this view.
Close study of cultural trends at the local level, however, shows that the globalization of fast food can influence public conduct. Fast-food chains have introduced practices that changed some consumer behaviours and preferences. For example, in Japan, where using one’s hands to eat prepared foods was considered a gross breach of etiquette, the popularization of McDonald’s hamburgers has had such a dramatic impact on popular etiquette that it is now common to see Tokyo commuters eating in public without chopsticks or spoons.
In late-Soviet Russia, rudeness had become a high art form among service personnel. Today customers expect polite, friendly service when they visit Moscow restaurants—a social revolution initiated by McDonald’s and its employee training programs. Since its opening in 1990, Moscow’s Pushkin Square restaurant has been one of the busiest McDonald’s in the world.
The social atmosphere in colonial Hong Kong of the 1960s was anything but genteel. Cashing a check, boarding a bus, or buying a train ticket required brute force. When McDonald’s opened in 1975, customers crowded around the cash registers, shouting orders and waving money over the heads of people in front of them. McDonald’s responded by introducing queue monitors—young women who channeled customers into orderly lines. Queuing subsequently became a hallmark of Hong Kong’s cosmopolitan, middle-class culture. Older residents credit McDonald’s for introducing the queue, a critical element in this social transition.
Yet another innovation, in some areas of Asia, Latin America, and Europe, was McDonald’s provision of clean toilets and washrooms. In this way the company was instrumental in setting new cleanliness standards (and thereby raising consumer expectations) in cities that had never offered public facilities. Wherever McDonald’s has set up business, it rapidly has become a haven for an emerging class of middle-income urbanites.
The introduction of fast food has been particularly influential on children, especially since so many advertisements are designed to appeal to them. Largely as a consequence of such advertising, American-style birthday parties have spread to many parts of the world where individual birth dates previously had never been celebrated. McDonald’s and KFC have become the leading venues for birthday parties throughout East Asia, with special rooms and services provided for the events. These and other symbolic effects make fast food a powerful force for dietary and social change, because a meal at these restaurants will introduce practices that younger consumers may not experience at home—most notably, the chance to choose one’s own food. The concept of personal choice is symbolic of Western consumer culture. Visits to McDonald’s and KFC have become signal events for children who approach fast-food restaurants with a heady sense of empowerment.
Central to Huntington’s thesis in The Clash of Civilizations is the assumption that the post-Cold War world would regroup into regional alliances based on religious beliefs and historical attachments to various “civilizations.” Identifying three prominent groupings—Western Christianity (Roman Catholicism and Protestantism), Orthodox Christianity (Russian and Greek), and Islam, with additional influences from Hinduism and Confucianism—he predicted that the progress of globalization would be severely constrained by religio-political barriers. The result would be a “multipolar world.” Huntington’s view differed markedly from those who prophesied a standardized, homogenized global culture.
There is, however, considerable ethnographic evidence, gathered by anthropologists and sociologists, that refutes this model of civilizational clash and suggests instead a rapid diffusion of religious and cultural systems throughout the world. Islam is one case in point, given that it constitutes one of the fastest-growing religions in the United States, France, and Germany—supposed bastions of Western Christianity. Before the end of the 20th century, entire arrondissements (districts) of Paris were dominated by Muslims, the majority of them French citizens born and reared in France. Thirty-five percent of students in the suburban Dearborn, Michigan, public school system were Muslim in 2001, making the provision of ḥalāl (“lawful” under Islam) meals at lunchtime a hot issue in local politics. By the start of the 21st century, Muslims of Turkish origin constituted the fastest-growing sector of Berlin’s population, and, in northern England, the old industrial cities of Bradford and Newcastle had been revitalized by descendants of Pakistani and Indian Muslims who immigrated during the 1950s and ’60s.
From its inception, Christianity has been an aggressively proselytizing religion with a globalizing agenda. Indeed, the Roman Catholic Church was arguably the first global institution, having spread rapidly throughout the European colonial world and beyond. Today, perhaps the fastest-growing religion is evangelical Christianity. Stressing the individual’s personal experience of divinity (as opposed to priestly intercession), evangelicalism has gained wide appeal in regions such as Latin America and sub-Saharan Africa, presenting serious challenges to established Catholic churches. Following the collapse of Soviet power in 1991, the Russian Orthodox church began the process of rebuilding after more than seven decades of repression. At the same time, evangelical missionaries from the United States and Europe shifted much of their attention from Latin America and Africa to Russia, alarming Russian Orthodox leaders. By 1997, under pressure from Orthodox clergy, the Russian government promoted legislation to restrict the activities of religious organizations that had operated in Russia for less than 15 years, effectively banning Western evangelical missionaries. The debate over Russian religious unity continues, however, and, if China is any guide, such legislation could have little long-term effect.
In China, unauthorized “house churches” became a major concern for Communist Party officials who attempted to control Muslim, Christian, and Buddhist religious activity through state-sponsored organizations. Many of the unrecognized churches are syncretic in the sense that they combine aspects of local religion with Christian ideas. As a result they have been almost impossible to organize, let alone control.
Social scientists confirm the worldwide resurgence, since the late 20th century, of conservative religion among faiths such as Islam, Hinduism, Buddhism, and even Shinto in Japan and Sikhism in India. The social and political connotations of these conservative upsurges are unique to each culture and religion. For example, some sociologists have identified Christian evangelicalism as a leading carrier of modernization: its emphasis on the Bible is thought to encourage literacy, while involvement in church activities can teach administrative skills that are applicable to work environments. The sociologist of religion Peter Berger argues that “there may be other globalizing popular movements [today], but evangelicalism is clearly the most dynamic.”
Huntington’s “clash of civilizations” thesis assumes that the major East Asian societies constitute an alliance of “Confucian” cultures that share a common heritage in the teachings of Confucius, the ancient Chinese sage. Early 21st-century lifestyles in Tokyo, Seoul, Beijing, Taipei, and Hong Kong, however, show far more evidence of globalization than Confucianization. The reputed hallmarks of Confucianism—respect for parental authority and ancestral traditions—are no more salient in these cities than in Boston, London, or Berlin. This is a consequence of (among other things) a steady reduction in family size that has swept through East Asian societies since the 1980s. State-imposed restrictions on family size, late childbearing, and resistance to marriage among highly educated, working women have undermined the basic tenets of the Confucian family in Asia.
Birth rates in Singapore and Japan, in fact, have fallen below replacement levels and are at record low levels in Hong Kong; birth rates in Beijing, Shanghai, and other major Chinese cities are also declining rapidly. These developments mean that East Asia—like Europe—could face a fiscal crisis as decreasing numbers of workers are expected to support an ever-growing cohort of retirees. By 2025, China is projected to have 274 million people over age 60—more than the entire 1998 population of the United States. The prospects for other East Asian countries are far worse: 17.2 percent of Japan’s 127 million people were over age 65 in 2000; by 2020 that percentage could rise to 27.
Meanwhile, Asia’s “Confucian” societies face a concurrent revolution in family values: the conjugal family (centring on the emotional bond between wife and husband) is rapidly replacing the patriarchal joint family (focused on support of aged parents and grandparents). This transformation is occurring even in remote, rural regions of northwest China where married couples now expect to reside in their own home (“neolocal” residence) as opposed to the house or compound of the groom’s parents (“patrilocal” residence). The children produced by these conjugal units are very different from their older kin who were reared in joint families: today’s offspring are likely to be pampered only children known as “Little Emperors” or “Little Empresses.” Contemporary East Asian families are characterized by an ideology of consumerism that is diametrically opposed to the neo-authoritarian Confucian rhetoric promoted by political leaders such as Singapore’s Lee Kuan Yew and Hong Kong’s Tung Chee-hwa at the turn of the 21st century.
Chinese philosopher Confucius in conversation with a little boy; woodblock by Yashima Gakutei, 1829.
Italy, Mexico, and Sweden (among other countries) also experienced dramatic reductions in family size and birth rates during the late 20th century. Furthermore, new family formations are taking root, such as those of the transnational workers who maintain homes in more than one country. Multi-domiciled families were certainly evident before the advent of cheap air travel and cellular phones, but new technologies have changed the quality of life (much for the better) in diaspora communities. Thus, the globalization of family life is no longer confined to migrant workers from developing economies who take low-paying jobs in advanced capitalist societies. The transnational family is increasingly a mark of high social status and affluence.
Challenges to national sovereignty and identity
Anti-globalism activists often depict the McDonald’s, Disney, and Coca-Cola corporations as agents of globalism or cultural imperialism—a new form of economic and political domination. Critics of globalism argue that any business enterprise capable of manipulating personal tastes will thrive, whereas state authorities everywhere will lose control over the distribution of goods and services. According to this view of world power, military force is perceived as hopelessly out of step or even powerless; the control of culture (and its production) is seen as far more important than the control of political and geographic borders. Certainly, it is true that national boundaries are increasingly permeable and any effort by nations to exclude global pop culture usually makes the banned objects all the more irresistible.
The commodities involved in the exchange of popular culture are related to lifestyle, especially as experienced by young people: pop music, film, video, comics, fashion, fast foods, beverages, home decorations, entertainment systems, and exercise equipment. Millions of people obtain the unobtainable by using the Internet to breach computer security systems and import barriers. “Information wants to be free” was the clarion call of software designers and aficionados of the World Wide Web in the 1990s. This code of ethics takes its most creative form in societies where governments try hardest to control the flow of information (e.g., China and Iran). In 1999, when Serbian officials shut down the operations of Radio B92, the independent station continued its coverage of events in the former Republic of Yugoslavia by moving its broadcasts to the Internet.
The idea of a borderless world is reflected in theories of the “virtual state,” a new system of world politics that is said to reflect the essential chaos of 21st-century capitalism. In Out of Control (1994), author Kevin Kelly predicted that the Internet would gradually erode the power of governments to control citizens; advances in digital technology would instead allow people to follow their own interests and form trans-state coalitions. Similarly, Richard Rosecrance, in The Rise of the Virtual State (1999), wrote that military conflicts and territorial disputes would be superseded by the flow of information, capital, technology, and manpower between states. Many scholars disagreed, insisting that the state was unlikely to disappear and could continue to be an essential and effective basis of governance.
Arguments regarding the erosion of state sovereignty are particularly unsettling for nations that have become consumers rather than producers of digital technology. Post-Soviet Russia, post-Mao China, and post-Gaullist France are but three examples of Cold War giants facing uncertain futures in the emerging global system. French intellectuals and politicians have seized upon anti-globalism as an organizing ideology in the absence of other unifying themes. In Les cartes de la France à l’heure de la mondialisation (2000; “France’s Assets in the Era of Globalization”), French Foreign Minister Hubert Vedrine denounced the United States as a “hyperpower” that promotes “uniformity” and “unilateralism.” Speaking for the French intelligentsia, he argued that France should take the lead in building a “multipolar world.” Ordinary French citizens also were concerned about losing their national identity, particularly as the regulatory power of the European Union began to affect everyday life. Sixty percent of respondents in a 1999 L’Expansion poll agreed that globalization represented the greatest threat to the French way of life.
Anti-globalism organizers are found throughout the world, not least in many nongovernmental organizations (NGOs). They are often among the world’s most creative and sophisticated users of Internet technology. This is doubly ironic, because even as NGOs contest the effects of globalization, they exhibit many of the characteristics of a global, transnational subculture; the Internet, moreover, is one of the principal tools that makes globalization feasible and organized protests against it possible. For example, Greenpeace, an environmentalist NGO, has orchestrated worldwide protests against genetically modified (GM) foods. Highly organized demonstrations appeared, seemingly overnight, in many parts of the world, denouncing GM products as “Frankenfoods” that pose unknown (and undocumented) dangers to people and to the environment. The bioengineering industry, supported by various scientific organizations, launched its own Internet-based counterattack, but the response was too late and too disorganized to outflank Greenpeace and its NGO allies. Sensational media coverage had already turned consumer sentiment against GM foods before the scientific community even entered the debate.
The anti-GM food movement demonstrates the immense power of the Internet to mobilize political protests. This power derives from the ability of a few determined activists to communicate with thousands (indeed millions) of potential allies in an instant. The Internet’s power as an organizing tool became evident during the World Trade Organization (WTO) protests in Seattle, Washington, in 1999, in which thousands of activists converged on the city, disrupting the WTO meetings and drawing the world’s attention to criticisms of global trade practices. The Seattle protests set the stage for similar types of activism in succeeding years.
For hundreds of millions of urban people, the experience of everyday life has become increasingly standardized since the 1960s. Household appliances, utilities, and transportation facilities are increasingly universal. Technological “marvels” that North Americans and Europeans take for granted have had even more profound effects on the quality of life for billions of people in the less-developed world. Everyday life is changed by the availability of cold beverages, hot water, frozen fish, screened windows, bottled cooking-gas, or the refrigerator. It would be a mistake, however, to assume that these innovations have an identical, homogenizing effect wherever they appear. For most rural Chinese, the refrigerator has continued to be seen as a status symbol. They use it to chill beer, soft drinks, and fruit, but they dismiss the refrigeration of vegetables, meat, and fish as unhealthy. Furthermore, certain foods (notably bean curd dishes) are thought to taste better when cooked with more traditional fuels such as coal or wood, as opposed to bottled gas.
It remains difficult to argue that the globalization of technologies is making the world everywhere the same. The “sameness” hypothesis is only sustainable if one ignores the internal meanings that people assign to cultural innovations.
The domain of popular music illustrates how difficult it is to unravel cultural systems in the contemporary world: Is rock music a universal language? Do reggae and ska have the same meaning to young people everywhere? American-inspired hip-hop (rap) swept through Brazil, Britain, France, China, and Japan in the 1990s. Yet Japanese rappers developed their own, localized versions of this art form. Much of the music of hip-hop, grounded in urban African American experience, is defiantly antiestablishment, but the Japanese lyric content is decidedly mild, celebrating youthful solidarity and exuberance. Similar “translations” between form and content have occurred in the pop music of Indonesia, Mexico, and Korea. Even a casual listener of U.S. radio can hear the profound effects that Brazilian, South African, Indian, and Cuban forms have had on the contemporary American pop scene. An earlier example of splashback—when a cultural innovation returns, somewhat transformed, to the place of its origin—was the British Invasion of the American popular music market in the mid-1960s. Forged in the United States from blues and country music, rock and roll crossed the Atlantic in the 1950s to captivate a generation of young Britons who, forming bands such as the Beatles and the Rolling Stones, made the music their own, then reintroduced it to American audiences with tremendous success. The flow of popular culture is rarely, if ever, unidirectional.
A cultural phenomenon does not convey the same meaning everywhere. In 1998, the drama and special effects of the American movie Titanic created a sensation among Chinese fans. Scores of middle-aged Chinese returned to the theatres over and over—crying their way through the film. Enterprising hawkers began selling packages of facial tissue outside Shanghai theatres. The theme song of Titanic became a best-selling CD in China, as did posters of the young film stars. Chinese consumers purchased more than 25 million pirated (and 300,000 legitimate) video copies of the film.
One might ask why middle-aged Chinese moviegoers became so emotionally involved with the story told in Titanic. Interviews among older residents of Shanghai revealed that many people had projected their own, long-suppressed experiences of lost youth onto the film. From 1966 to 1976 the Cultural Revolution convulsed China, destroying any possibility of educational or career advancement for millions of people. At that time, communist authorities had also discouraged romantic love and promoted politically correct marriages based on class background and revolutionary commitment. Improbable as it might seem to Western observers, the story of lost love on a sinking cruise ship hit a responsive chord among the veterans of the Cultural Revolution. Their passionate, emotional response had virtually nothing to do with the Western cultural system that framed the film. Instead, Titanic served as a socially acceptable vehicle for the public expression of regret by a generation of aging Chinese revolutionaries who had devoted their lives to building a form of socialism that had long since disappeared.
Chinese President Jiang Zemin invited the entire Politburo of the Chinese Communist Party to a private screening of Titanic so that they would understand the challenge the film represented. He cautioned that Titanic could be seen as a Trojan horse, carrying within it the seeds of American cultural imperialism.
Chinese authorities were not alone in their mistrust of Hollywood. There are those who suggest, as did China’s Jiang, that exposure to Hollywood films will cause people everywhere to become more like Americans. Yet anthropologists who study television and film are wary of such suggestions. They emphasize the need to study the particular ways in which consumers make use of popular entertainment. The process of globalization looks far from hegemonic when one focuses on ordinary viewers and their efforts to make sense of what they see.
Another case in point is anthropologist Daniel Miller’s study of television viewing in Trinidad, which demonstrated that viewers are not passive observers. In 1988, 70 percent of Trinidadians who had access to a television watched daily episodes of The Young and the Restless, a series that emphasized family problems, sexual intrigue, and gossip. Miller discovered that Trinidadians had no trouble relating to the personal dramas portrayed in American soap operas, even though the lifestyles and material circumstances differed radically from life in Trinidad. Local people actively reinterpreted the episodes to fit their own experience, seeing the televised dramas as commentaries on contemporary life in Trinidad. The portrayal of American material culture, notably women’s fashions, was a secondary attraction. In other words, it is a mistake to treat television viewers as passive.
Local culture remains a powerful influence in daily life. People are tied to places, and those places continue to shape particular norms and values. The fact that residents of Moscow, Beijing, and New Delhi occasionally eat at McDonald’s, watch Hollywood films, and wear Nike athletic shoes (or copies thereof) does not make them “global.” The appearance of homogeneity is the most salient, and ultimately the most deceptive, feature of globalization. Outward appearances do not reveal the internal meanings that people assign to a cultural innovation. True, the standardization of everyday life will likely accelerate as digital technology comes to approximate the toaster in “user-friendliness.” But technological breakthroughs are not enough to create a world culture. People everywhere show a desire to partake of the fruits of globalization, but they just as earnestly want to celebrate the distinctiveness of their own cultures.
What Happens in the Brain When We Feel Fear, and Why Some of Us Just Can’t Get Enough of It
Fear may be as old as life on Earth. It is a fundamental, deeply wired reaction, evolved over the history of biology, to protect organisms against perceived threat to their integrity or existence. Fear may be as simple as a cringe of an antenna in a snail that is touched, or as complex as existential anxiety in a human.
Whether we love or hate to experience fear, it’s hard to deny that we revere it, devoting an entire holiday to its celebration. Considering the circuitry of the brain and human psychology, some of the main chemicals that contribute to the “fight or flight” response are also involved in other positive emotional states, such as happiness and excitement. So it makes sense that the high arousal state we experience during a scare may also be experienced in a more positive light. But what makes the difference between getting a “rush” and feeling completely terrorized?
We are psychiatrists who treat fear and study its neurobiology. Our studies and clinical interactions, as well as those of others, suggest that a major factor in how we experience fear has to do with the context. When our “thinking” brain gives feedback to our “emotional” brain and we perceive ourselves as being in a safe space, we can then quickly shift the way we experience that high arousal state, going from one of fear to one of enjoyment or excitement.
When you enter a haunted house during Halloween season, for example, anticipating a ghoul jumping out at you and knowing it isn’t really a threat, you are able to quickly relabel the experience. In contrast, if you were walking in a dark alley at night and a stranger began chasing you, both your emotional and thinking areas of the brain would be in agreement that the situation is dangerous, and it’s time to flee!
The fear reaction starts in the brain and spreads through the body to make adjustments for the best defense, or flight reaction. The response begins in a region of the brain called the amygdala. This almond-shaped set of nuclei in the temporal lobe of the brain is dedicated to detecting the emotional salience of stimuli: how much something stands out to us.
For example, the amygdala activates whenever we see a human face with an emotion. This reaction is more pronounced with anger and fear. A threat stimulus, such as the sight of a predator, triggers a fear response in the amygdala, which activates areas involved in preparation for the motor functions of fight or flight. It also triggers the release of stress hormones and activates the sympathetic nervous system.
This leads to bodily changes that prepare us to be more efficient in danger: The brain becomes hyperalert, pupils dilate, the bronchi dilate and breathing accelerates. Heart rate and blood pressure rise. Blood flow and the stream of glucose to the skeletal muscles increase. Organs not vital in survival, such as the gastrointestinal system, slow down.
A part of the brain called the hippocampus is closely connected with the amygdala. The hippocampus and prefrontal cortex help the brain interpret the perceived threat. They are involved in a higher-level processing of context, which helps a person know whether a perceived threat is real.
For instance, seeing a lion in the wild can trigger a strong fear reaction, but the response to the same lion at a zoo is more one of curiosity and thinking that the lion is cute. This is because the hippocampus and the frontal cortex process contextual information, and inhibitory pathways dampen the amygdala fear response and its downstream results. Basically, the “thinking” circuitry of the brain reassures the “emotional” areas that we are, in fact, OK.
Like other animals, we very often learn fear through personal experience, such as being attacked by an aggressive dog, or by observing other humans being attacked by an aggressive dog.
However, an evolutionarily unique and fascinating way of learning in humans is through instruction: we learn from spoken words or written notes. If a sign says the dog is dangerous, proximity to the dog will trigger a fear response. We learn safety in a similar fashion: by experiencing a domesticated dog, observing other people safely interact with that dog, or reading a sign that the dog is friendly.
Fear creates distraction, which can be a positive experience. When something scary happens, in that moment, we are on high alert and not preoccupied with other things that might be on our mind (getting in trouble at work, worrying about a big test the next day), which brings us to the here and now.
Furthermore, when we experience these frightening things with the people in our lives, we often find that emotions can be contagious in a positive way. We are social creatures, able to learn from one another. So, when you look over to your friend at the haunted house and she’s quickly gone from screaming to laughing, socially you’re able to pick up on her emotional state, which can positively influence your own.
While each of these factors (context, distraction, social learning) has the potential to influence the way we experience fear, a common theme that connects all of them is our sense of control. When we are able to recognize what is and isn’t a real threat, relabel an experience and enjoy the thrill of that moment, we are ultimately at a place where we feel in control. That perception of control is vital to how we experience and respond to fear. When we overcome the initial “fight or flight” rush, we are often left feeling satisfied, reassured of our safety and more confident in our ability to confront the things that initially scared us.
It is important to keep in mind that everyone is different, with a unique sense of what they find scary or enjoyable. This raises yet another question: While many can enjoy a good fright, why might others downright hate it? Any imbalance between excitement caused by fear in the animal brain and the sense of control in the contextual human brain may cause too much, or not enough, excitement. If the individual perceives the experience as “too real,” an extreme fear response can overcome the sense of control over the situation.
This may happen even in those who do love scary experiences: they may enjoy Freddy Krueger movies but be too terrified by “The Exorcist,” because it feels too real and the fear response is not modulated by the cortical brain.
On the other hand, if the experience is not triggering enough for the emotional brain, or if it is too unreal to the thinking cognitive brain, the experience can end up feeling boring. A biologist who cannot tune down her cognitive brain from analyzing all the bodily things that are realistically impossible in a zombie movie may not be able to enjoy “The Walking Dead” as much as another person.
So if the emotional brain is too terrified and the cognitive brain helpless, or if the emotional brain is bored and the cognitive brain too suppressive, scary movies and experiences may not be as fun.
All fun aside, abnormal levels of fear and anxiety can lead to significant distress and dysfunction and limit a person’s capacity for success and the enjoyment of life. Nearly one in four people experiences a form of anxiety disorder during their lives, and nearly 8 percent experience post-traumatic stress disorder (PTSD).
Disorders of anxiety and fear include phobias, social phobia, generalized anxiety disorder, separation anxiety, PTSD and obsessive compulsive disorder. These conditions usually begin at a young age, and without appropriate treatment can become chronic and debilitating and affect a person’s life trajectory. The good news is that we have effective treatments that work in a relatively short time period, in the form of psychotherapy and medications.
Fear Is Physical
Fear is experienced in your mind, but it triggers a strong physical reaction in your body. As soon as you recognize fear, your amygdala (a small, almond-shaped structure in the middle of your brain) goes to work. It alerts your nervous system, which sets your body’s fear response into motion. Stress hormones like cortisol and adrenaline are released. Your blood pressure and heart rate increase. You start breathing faster. Even your blood flow changes — blood actually flows away from your heart and into your limbs, making it easier for you to start throwing punches, or run for your life. Your body is preparing for fight-or-flight.
Fear Can Make You Foggy
As some parts of your brain are revving up, others are shutting down. When the amygdala senses fear, the cerebral cortex (the area of the brain that governs reasoning and judgment) becomes impaired, so it’s now difficult to make good decisions or think clearly. As a result, you might scream and throw your hands up when approached by an actor in a haunted house, unable to rationalize that the threat is not real.
Fear Can Become Pleasure
But why do people who love roller-coasters, haunted houses and horror movies enjoy getting caught up in those fearful, stressful moments? Because the thrill doesn’t necessarily end when the ride or movie ends. Through the excitation transfer process, your body and brain remain aroused even after your scary experience is over.
“During a staged fear experience, your brain will produce more of a chemical called dopamine, which elicits pleasure,” says Dr. Sikora.
Fear Is Not Phobia
If you’re slightly uneasy about swimming in the ocean after watching “Jaws,” the movie did what it set out to do. But if you find yourself terrorized, traumatized and unable to function at the mere thought of basking on the beach, you might be experiencing more than just fear.
The difference between fear and phobia is simple. Fears are common reactions to events or objects. But a fear becomes a phobia when it interferes with your ability to function and maintain a consistent quality of life. If you start taking extreme measures to avoid water, spiders or people, you may have a phobia.
Fear Keeps You Safe
“Fear is a natural and biological condition that we all experience,” says Dr. Sikora. “It’s important that we experience fear because it keeps us safe.” Fear is a complex human emotion that can be positive and healthy, but it can also have negative consequences. If a fear or phobia affects your life in negative and inconvenient ways, speak to your primary care provider, who can help determine the kind of treatment you might need.
Ralph Adolphs (RA): Fear can only be defined based on observation of behavior in a natural environment, not neuroscience. In my view, fear is a psychological state with specific functional properties, conceptually distinct from conscious experience; it is a latent variable that provides a causal explanation of observed fear-related behaviors. Fear refers to a rough category of states with similar functions; science will likely revise this picture and show us that there are different kinds of fear (perhaps a dozen or so) that depend on different neural systems.
The functional properties that define the state of fear are those that, in the light of evolution, have made this state adaptive for coping with a particular class of threats to survival, such as predators. Fear has several functional properties—such as persistence, learning, scalability and generalizability—that distinguish emotion states from reflexes and fixed-action patterns, although the latter can of course also contribute to behavior.
Findings that the neural circuits regulating an animal’s fear-related behavior exhibit many of these same functional properties, including in the mouse hypothalamus [2], are initial evidence that this brain structure is not merely involved in translating emotion states into behaviors but plays a role in the central emotion state itself. Neuropsychological dissociations of fear from other emotions show that fear is a distinct category.
Michael Fanselow (MF): Fear is a neural-behavior system that evolved to protect animals against environmental threats to what John Garcia called the external milieu (for example, a predator), as opposed to the internal milieu (for example, a toxin), with predation being the principal driving force behind that evolution. This is the organizing idea behind my definition of fear. The complete definition must also include the signals giving rise to fear (antecedents) and objectively observable behaviors (consequents). The neuroscientific support for this definition is that many signals of external threat, such as cues signaling possible pain, the presence of natural predators and odors of conspecifics that have recently experienced external threats, all activate overlapping circuits and induce a common set of behaviors (for example, freezing and analgesia in rodents). Equally important as neuroscientific support is support from fieldwork, which has repeatedly shown that behaviors such as freezing enhance survival in the face of predators.
Lisa Feldman Barrett (LFB): I hypothesize that every mental event, fear or otherwise, is constructed in an animal’s brain as a plan for assembling motor actions and the visceromotor actions that support them, as well as the expected sensory consequences of those actions. The latter constitute an animal’s experience of its surrounding niche (sights, sounds, smells, etc.), including the affective value of objects. Here ‘value’ is a way of describing a brain’s estimation of its body’s state (i.e., interoceptive and skeletomotor predictions) and how that state will change as the animal moves or encodes something new. The plan is an inference (or a set of inferences) that is constructed from learned or innate priors that are similar to the present conditions; they represent the brain’s best guess as to the causes of expected sensory inputs and what to do about them.
The function most frequently associated with fear is protection from threat. The corresponding definition of fear is an instance in which an animal’s brain constructs defensive actions for survival. A human brain might construct inferences that are similar to present conditions in terms of sensory or perceptual features, but the inferences can also be functional and therefore abstract, and thus they may or may not be initiated by events that are typically defined as fear stimuli and may or may not result in the behaviors that are typically defined as fear behaviors. For example, sometimes humans may laugh or fall asleep in the face of a threat. In this view, fear is not defined by the sensory specifics of an eliciting stimulus or by a specific physical action generated by the animal; rather, it is characterized in terms of a situated function or goal: a particular set of action and sensory consequences that are inferred, based on priors, to serve a particular function in a similar situation (for example, protection).
In cognitive science, a set of objects or events that are similar in some way to one another constitute a category, so constructing inferences can also be described as constructing categories. Another way to phrase my hypothesis, then, is that a brain is dynamically constructing categories as guesses about which motor actions to take, what their sensory consequences will be, and the causes of those actions and expected sensory inputs. A representation of a category is a concept, and so the hypothesis can also be phrased this way: a brain is dynamically constructing concepts as hypotheses about the causes of upcoming motor actions and their expected sensory consequences. The concepts or categories are constructed in a situation-by-situation manner, so they are called ad hoc concepts or categories. In this way, biological categories can be considered ad hoc conceptual categories.
Kay Tye (KT): Fear is an intensely negative internal state. It orchestrates coordinated functions that rouse us to peak performance for avoidance, escape or confrontation. Fear resembles a dictator that makes all other brain processes (from cognition to breathing) its slave. Fear can be innate or learned. Innate fear can be expressed in response to environmental stimuli without prior experience, such as that of snakes and spiders in humans and of predator odor in rodents. Fear associations—primarily studied in the context of Pavlovian fear conditioning—are the most rapidly learned (one trial), robustly encoded and retrieved, and prone to activate multiple memory systems. Given its critical importance in survival and its authoritarian command over the rest of the brain, fear should be one of the most extensively studied topics in neuroscience, though it trails behind investigation of sensory and motor processes due to its subjective nature. Watching others exhibit the behavioral expressions and responses of fear may invoke emotional contagion or support learning about the environment. The usage of the term ‘fear’ in the field of behavioral neuroscience has taken on a related—but distinct—meaning through the extensive use and study of a very stereotyped behavioral paradigm originally termed ‘fear conditioning’. Fear conditioning is arguably the most commonly used behavioral paradigm in neuroscience and has been most comprehensively mined in terms of neural circuit dissection with rodent models but has also been used in humans, primates and even invertebrates. Fear conditioning refers to the Pavlovian pairing of a conditioned stimulus (most often an auditory pure tone) with a foot shock that is most often presented upon the termination of the conditioned stimulus.
Dread requires only a tenth of a second to take root
A copse can beckon, with its dappled leaves and songbird trills. But linger past twilight, and tree, bush, and animal assume different dimensions. Trunks thicken and loom, bushes snatch at clothing, and the rustlings and skitters of feather and claw magnify. You become unsettled, unnerved. You run.
You do this because you’re afraid. Even without direct evidence of danger, you’re compelled to flee, to protect yourself. Why this compulsion? It’s the work of your amygdala, a tiny almond-shaped structure in your brain. Sensory signals alert it; in turn, it triggers a cascade of activity, deluging your body with messages that widen your eyes, prick your ears, accelerate your heart, quicken your breathing, wrench your stomach, moisten your palms, and launch a full-body, organ-clenching, corpuscle-filling chill. You run quite simply because fear grips you.
“You could call the amygdala a relevance detector,” says Nouchine Hadjikhani, an HMS associate professor of radiology who specializes in capturing the activity of the brain as it reacts to fear-provoking stimuli. “In less than 100 milliseconds, just one-tenth of a second, sensory information reaches the amygdala, which signals your brain to be aware. All your systems become more receptive. You’re now ready to fight, freeze, or flee.”
The good news is that, should the terror prove benign, you’ll not long be in fear’s thrall. For while your amygdala is providing survival insurance by spurring action, sensory clues are also traveling to your prefrontal cortex. The amygdala’s action buys you additional milliseconds, during which you might glimpse a light, stumble upon a traveled road, or receive other sensory stimuli that your prefrontal cortex will use to temper the initial response. You will calm, completing an arc of reaction that has been key to mammalian survival through eons.
Investigating what drives that arc of reaction spurs much of today’s research into the molecular mechanisms of the fear response. HMS scientists are providing tantalizing insights by explaining how we decipher danger in the gazes or body movements of others, by informing treatments for conditions such as post-traumatic stress disorder, and even by providing clues to the gender-based underpinnings of human response to fear.
A 2005 poll of U.S. teenagers revealed the power that emotion can have in searing fear-filled memories deeply; despite the teens’ limited direct experience, terrorist attacks, war, and nuclear war held top-ten berths in a list of fears. This finding hints at a phenomenon that Hadjikhani and her colleagues study: the contagion of fear. In her research, Hadjikhani has found that humans, like other animals, can experience fear indirectly, the result of another’s glance or muscle tensing, or, on a larger scale, that electric connection that turns a milling crowd into a stampeding throng.
“We’re born into this world with a system to read other people’s expressions,” says Hadjikhani. “Ten minutes after we’re born, we’re already oriented more to faces than to objects.” In 2008, Hadjikhani and colleagues reported on their investigation of one aspect of facial expression—the gaze—and its role in communicating danger. They found that while a direct gaze from a fear-filled face triggers activity in fear-response regions of the brain, the response is not as complex as that elicited by a fear-filled face in which the eyes are averted. A direct gaze signals an interaction between participants who know themselves to be non-threatening. But an averted gaze, “pointing with the eyes,” as the researchers call it, flags a possible environmental danger and sparks activity in brain regions skilled at reading faces, interpreting gazes, processing fear, and detecting motion.
In other research, Hadjikhani found that the brain can recognize happy and fearful expressions in body movements. A fearful posture—hands held open and in front of the body like shields, for example—activates brain regions that oversee emotion, vision, and action, while postures of happiness—arms loosely held from the body as if opened to embrace—spur activity only in vision-processing regions. These physical communications of actual or perceived danger offer one avenue to developing a conditioned fear, a learned response founded upon emotion and impressed so firmly within memory that it remains active for a lifetime.
According to the National Institute of Mental Health, roughly 19 million people in the United States have mental illnesses that involve persistent, outsized fear responses to seemingly ordinary stimuli. A door slam becomes a gun’s report to a shattered combat veteran, for example, while smoke from burning leaves might trigger smell-based memories of pyres for a genocide survivor. Among the anxiety disorders linked to conditioned fear responses is one that’s much in the news: post-traumatic stress disorder.
For more than a decade, Vadim Bolshakov, an HMS associate professor of psychiatry and director of McLean Hospital’s Cellular Neurobiology Laboratory, has explored fear-driven disorders by investigating their molecular bases in the brains of rats. One early finding from his laboratory showed that learned fear changes the way the animals’ brains operate, offering a mechanism for conditioned fear’s persistence.
Bolshakov and colleagues taught rats to associate a harmless stimulus, a tone, with a painful event, a shock to their feet. The researchers found that neurons in the rodent amygdala exhibited remarkable sensitivity to the tone, so much so that the neurons continued to fire after the stimulus was removed. This sensitivity, known as long-term potentiation, is important to memory acquisition. It is normally modulated by glutamate, a chemical that is released into the synaptic spaces between neurons when a message is being passed, but then is deactivated to prevent message over-expression. Bolshakov’s team showed that the amygdala’s heightened sensitivity was the result of too much glutamate, either because the clean-up process failed or, as the researchers postulated, because production of the chemical went into overdrive.
Other studies by Bolshakov and colleagues identified two proteins essential to the innate and learned fear responses. When the researchers blocked production of one of the proteins, stathmin, fear-conditioned mice were less able to recall the learned fear—and lost the ability to recognize dangers that normally would have kicked their innate fear response into high gear. Blocking the gene that produced a protein known as transient receptor potential channel 5, normally found in high concentrations in the amygdala, decreased the sensitivity of the rodents’ neurons to cholecystokinin, a neuropeptide released when the innate fear response is triggered or a learned fear is recalled.
These insights are welcomed by Roger Pitman, an HMS professor of psychiatry, and Mohammed Milad, an HMS assistant professor of psychiatry. Based at Massachusetts General Hospital, these researchers seek to tease out treatments for people with anxiety disorders such as post-traumatic stress disorder.
As Pitman and colleagues discovered several years ago, people might also be helped to stave off a fear-filled memory by preventing it from consolidating in the first place. In a controlled study of patients entering Mass General’s emergency department after traumatic experiences—assaults or car accidents, for example—Pitman provided some participants with a placebo and others with propranolol, a drug that blocks the effects of the hormone adrenaline. At follow-up interviews participants listened to audiotapes of their own accounts of their trauma the day it occurred. Propranolol recipients had weaker physical responses to the tapes than placebo users, who showed physical signs of the stirring of their fearful memory despite time’s passage.
Replicating these results has proven difficult, however, so Pitman and colleagues have shifted their focus to reactivating traumatic memories in people with post-traumatic stress disorder and then administering an anti-stress drug to try to weaken the memory’s reconsolidation.
Reliving a fear, even a trauma-induced one, is not necessarily pathologic, Milad points out. Recalling the source of high emotion or injury can serve as a safeguard, a warning that our brains can tap as needed. In addition, time often softens the intensity of response.
“Say you’re in a car accident,” Milad adds. “It occurs at a particular intersection at the same time a certain song is playing on the radio. For a period following that accident, whenever you go through that intersection or hear that song, you will re-experience at some level your initial fear. If over time nothing horrible happens to rekindle your memory, your conditioned response to either stimulus will lessen until the fear is extinguished. This extinction doesn’t erase the initial learned fear; instead, it leads to forming a new memory, a ‘safety memory.’ The learned fear—the neuronal connections that the experience formed within your amygdala and between your amygdala and certain cortical structures—remains.”
For some, the trauma never lessens. In people with post-traumatic stress disorder, Milad and Pitman have found that two brain regions involved in extinction, the hippocampus and a region of the prefrontal cortex, function at a lesser capacity, while activity in the amygdala and the dorsal anterior cingulate, a region involved in cognition and motor control, ratchets up. These findings may explain the unending rawness that trauma-induced fears bring to people with the disorder.
Gravity, also called gravitation, in mechanics, the universal force of attraction acting between all matter. It is by far the weakest known force in nature and thus plays no role in determining the internal properties of everyday matter. On the other hand, through its long reach and universal action, it controls the trajectories of bodies in the solar system and elsewhere in the universe and the structures and evolution of stars, galaxies, and the whole cosmos. On Earth all bodies have a weight, or downward force of gravity, proportional to their mass, which Earth’s mass exerts on them. Gravity is measured by the acceleration that it gives to freely falling objects. At Earth’s surface the acceleration of gravity is about 9.8 metres (32 feet) per second per second. Thus, for every second an object is in free fall, its speed increases by about 9.8 metres per second. At the surface of the Moon the acceleration of a freely falling body is about 1.6 metres per second per second.
The works of Isaac Newton and Albert Einstein dominate the development of gravitational theory. Newton’s classical theory of gravitational force held sway from his Principia, published in 1687, until Einstein’s work in the early 20th century. Newton’s theory is sufficient even today for all but the most precise applications. Einstein’s theory of general relativity predicts only minute quantitative differences from the Newtonian theory except in a few special cases. The major significance of Einstein’s theory is its radical conceptual departure from classical theory and its implications for further growth in physical thought.
The launch of space vehicles and developments of research from them have led to great improvements in measurements of gravity around Earth, other planets, and the Moon and in experiments on the nature of gravitation.
Early concepts
Newton argued that the movements of celestial bodies and the free fall of objects on Earth are determined by the same force. The classical Greek philosophers, on the other hand, did not consider the celestial bodies to be affected by gravity, because the bodies were observed to follow perpetually repeating nondescending trajectories in the sky. Thus, Aristotle considered that each heavenly body followed a particular “natural” motion, unaffected by external causes or agents. Aristotle also believed that massive earthly objects possess a natural tendency to move toward Earth’s centre. Those Aristotelian concepts prevailed for centuries along with two others: that a body moving at constant speed requires a continuous force acting on it and that force must be applied by contact rather than interaction at a distance. These ideas were generally held until the 16th and early 17th centuries, thereby impeding an understanding of the true principles of motion and precluding the development of ideas about universal gravitation. This impasse began to change with several scientific contributions to the problem of earthly and celestial motion, which in turn set the stage for Newton’s later gravitational theory.
The 17th-century German astronomer Johannes Kepler accepted the argument of Nicolaus Copernicus (which goes back to Aristarchus of Samos) that the planets orbit the Sun, not Earth. Using the improved measurements of planetary movements made by the Danish astronomer Tycho Brahe during the 16th century, Kepler described the planetary orbits with simple geometric and arithmetic relations. Kepler’s three quantitative laws of planetary motion are: (1) the planets describe elliptical orbits of which the Sun occupies one focus; (2) the line joining a planet to the Sun sweeps out equal areas in equal times; and (3) the square of the period of revolution of a planet is proportional to the cube of its mean distance from the Sun.
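The third law is easy to check numerically. Below is a minimal sketch in Python, using rounded textbook values for a few planets’ semimajor axes (in astronomical units) and periods (in years); these figures are assumptions supplied for the example, not data from this article.

```python
# Kepler's third law: T^2 / a^3 should come out (nearly) the same for every planet.
# Orbital elements are rounded textbook values, assumed here for illustration.
planets = {
    "Mercury": (0.387, 0.241),   # (semimajor axis in AU, period in years)
    "Earth":   (1.000, 1.000),
    "Mars":    (1.524, 1.881),
    "Jupiter": (5.203, 11.862),
}

for name, (a, T) in planets.items():
    print(f"{name:8s} T^2/a^3 = {T**2 / a**3:.3f}")   # all close to 1.000
```

The near-constant ratio is exactly what the third law asserts; Newton later showed that an inverse-square attractive force reproduces it.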
During this same period the Italian astronomer and natural philosopher Galileo Galilei made progress in understanding “natural” motion and simple accelerated motion for earthly objects. He realized that bodies that are uninfluenced by forces continue indefinitely to move and that force is necessary to change motion, not to maintain constant motion. In studying how objects fall toward Earth, Galileo discovered that the motion is one of constant acceleration. He demonstrated that the distance a falling body travels from rest in this way varies as the square of the time. As noted above, the acceleration due to gravity at the surface of Earth is about 9.8 metres per second per second. Galileo was also the first to show by experiment that bodies fall with the same acceleration whatever their composition (the weak principle of equivalence).
Newton's law of gravity
Newton discovered the relationship between the motion of the Moon and the motion of a body falling freely on Earth. By his dynamical and gravitational theories, he explained Kepler’s laws and established the modern quantitative science of gravitation. Newton assumed the existence of an attractive force between all massive bodies, one that does not require bodily contact and that acts at a distance. By invoking his law of inertia (bodies not acted upon by a force move at constant speed in a straight line), Newton concluded that a force exerted by Earth on the Moon is needed to keep it in a circular motion about Earth rather than moving in a straight line. He realized that this force could be, at long range, the same as the force with which Earth pulls objects on its surface downward. When Newton discovered that the acceleration of the Moon is only 1/3,600 of the acceleration at the surface of Earth, he related the number 3,600 to the square of 60, the ratio of the radius of the Moon’s orbit to the radius of Earth. He calculated that circular orbital motion of radius R and period T requires a constant inward acceleration A equal to the product of 4π² and the ratio of the radius to the square of the time:
A = 4π²R/T²
Effects of gravity on Earth and the Moon
The Moon’s orbit has a radius of about 384,000 km (239,000 miles; approximately 60 Earth radii), and its period is 27.3 days (its synodic period, or period measured in terms of lunar phases, is about 29.5 days). Newton found the Moon’s inward acceleration in its orbit to be 0.0027 metre per second per second, the same as (1/60)² of the acceleration of a falling object at the surface of Earth.
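Newton’s comparison can be reproduced in a few lines. The sketch below is a minimal illustration using the orbital radius and period quoted above, together with the surface value of 9.8 metres per second per second given earlier in this article.

```python
import math

R = 3.84e8              # radius of the Moon's orbit, metres (about 384,000 km)
T = 27.3 * 24 * 3600    # orbital period, seconds (27.3 days)

# Constant inward acceleration required for circular motion: A = 4*pi^2*R / T^2
A = 4 * math.pi**2 * R / T**2
print(f"Moon's inward acceleration: {A:.4f} m/s^2")   # ~0.0027

g = 9.8                 # acceleration of gravity at Earth's surface, m/s^2
print(f"g / A = {g / A:.0f}")                         # ~3,600, i.e., 60 squared
```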
Earth's gravitational force weakens with increasing distance
In Newton’s theory every least particle of matter attracts every other particle gravitationally, and on that basis he showed that the attraction of a finite body with spherical symmetry is the same as that of the whole mass at the centre of the body. More generally, the attraction of any body at a sufficiently great distance is equal to that of the whole mass at the centre of mass. He could thus relate the two accelerations, that of the Moon and that of a body falling freely on Earth, to a common interaction, a gravitational force between bodies that diminishes as the inverse square of the distance between them. Thus, if the distance between the bodies is doubled, the force on them is reduced to a fourth of the original.
Newton saw that the gravitational force between bodies must depend on the masses of the bodies. Since a body of mass M experiencing a force F accelerates at a rate F/M, a force of gravity proportional to M would be consistent with Galileo’s observation that all bodies accelerate under gravity toward Earth at the same rate, a fact that Newton also tested experimentally.
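As an illustration, the inverse-square law gives Earth’s surface gravity directly from Earth’s mass and radius. The sketch below uses standard modern values of the gravitational constant G and of Earth’s mass and radius; these constants are assumptions added for the example (they postdate Newton and do not appear in the text above).

```python
G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2 (assumed modern value)
M = 5.972e24    # mass of Earth, kg (assumed modern value)
R = 6.371e6     # mean radius of Earth, m (assumed modern value)

# The force on a mass m at distance r from Earth's centre is F = G*M*m / r^2,
# so the acceleration F/m = G*M / r^2 is independent of m, as Galileo observed.
g = G * M / R**2
print(f"g at Earth's surface: {g:.2f} m/s^2")   # ~9.82

# Doubling the distance reduces the attraction to a fourth:
print(G * M / (2 * R)**2 / g)                   # 0.25
```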
Observations of the orbital motions of double stars, of the dynamic motions of stars collectively moving within their galaxies, and of the motions of the galaxies themselves verify that Newton’s law of gravity is valid to a high degree of accuracy throughout the visible universe.
Ocean tides, phenomena that mystified thinkers for centuries, were also shown by Newton to be a consequence of the universal law of gravitation, although the details of the complicated phenomena were not understood until comparatively recently. They are caused specifically by the gravitational pull of the Moon and, to a lesser extent, of the Sun.
Newton showed that the equatorial bulge of Earth was a consequence of the balance between the centrifugal forces of the rotation of Earth and the attractions of each particle of Earth on all others. The value of gravity at the surface of Earth increases in a corresponding way from the Equator to the poles. Among the data that Newton used to estimate the size of the equatorial bulge were the adjustments to his pendulum clock that the English astronomer Edmond Halley had to make in the course of his astronomical observations on the southern island of Saint Helena. Jupiter, which rotates faster than Earth, has a proportionally larger equatorial bulge, the difference between its polar and equatorial radii being about 10 percent. Another success of Newton’s theory was his demonstration that comets move in parabolic orbits under the gravitational attraction of the Sun. In a thorough analysis in the Principia, he showed that the great comet of 1680–81 did indeed follow a parabolic path.
It was already known in Newton’s day that the Moon does not move in a simple Keplerian orbit. Later, more-accurate observations of the planets also showed discrepancies from Kepler’s laws. The motion of the Moon is particularly complex; however, apart from a long-term acceleration due to tides on Earth, the complexities can be accounted for by the gravitational attraction of the Sun and the planets. The gravitational attractions of the planets for each other explain almost all the features of their motions. The exceptions are nonetheless important. Uranus, the seventh planet from the Sun, was observed to undergo variations in its motion that could not be explained by perturbations from Saturn, Jupiter, and the other planets. Two 19th-century astronomers, John Couch Adams of Britain and Urbain-Jean-Joseph Le Verrier of France, independently assumed the presence of an unseen eighth planet that could produce the observed discrepancies. They calculated its position within a degree of where the planet Neptune was discovered in 1846. Measurements of the motion of the innermost planet, Mercury, over an extended period led astronomers to conclude that the major axis of this planet’s elliptical orbit precesses in space at a rate of 43 arc seconds per century faster than could be accounted for from perturbations of the other planets. In this case, however, no other bodies could be found that could produce this discrepancy, and a very slight modification of Newton’s law of gravitation seemed to be needed. Einstein’s theory of relativity precisely predicts this observed behaviour of Mercury’s orbit.
Effects of local mass differences
Spherical harmonics are the natural way of expressing the large-scale variations of potential that arise from the deep structure of Earth. However, spherical harmonics are not suitable for local variations due to more-superficial structures. Not long after Newton’s time, it was found that the gravity on top of large mountains is less than expected on the basis of their visible mass. The idea of isostasy was developed, according to which the unexpectedly low acceleration of gravity on a mountain is caused by low-density rock 30 to 100 km underground, which buoys up the mountain. Correspondingly, the unexpectedly high force of gravity on ocean surfaces is explained by dense rock 10 to 30 km beneath the ocean bottom.
Portable gravimeters, which can detect variations of one part in 10⁹ in the gravitational force, are in wide use today for mineral and oil prospecting. Unusual underground deposits reveal their presence by producing local gravitational variations.
Acceleration around Earth, the Moon, and other planets
The value of the attraction of gravity or of the potential is determined by the distribution of matter within Earth or some other celestial body. In turn, as seen above, the distribution of matter determines the shape of the surface on which the potential is constant. Measurements of gravity and the potential are thus essential both to geodesy, which is the study of the shape of Earth, and to geophysics, the study of its internal structure. For geodesy and global geophysics, it is best to measure the potential from the orbits of artificial satellites. Surface measurements of gravity are best for local geophysics, which deals with the structure of mountains and oceans and the search for minerals.
Variations in g
Changes due to location
The acceleration g varies by about 1/2 of 1 percent with position on Earth’s surface, from about 9.78 metres per second per second at the Equator to approximately 9.83 metres per second per second at the poles. In addition to this broad-scale variation, local variations of a few parts in 10⁶ or smaller are caused by variations in the density of Earth’s crust as well as height above sea level.
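This broad-scale variation is often summarized by a closed-form expression in latitude. The sketch below uses the 1980 International Gravity Formula as an assumed standard approximation; the formula itself is not part of this article, but it reproduces the equatorial and polar values quoted above.

```python
import math

def g_at_latitude(phi_degrees):
    """Sea-level gravity from the 1980 International Gravity Formula (assumed)."""
    phi = math.radians(phi_degrees)
    return 9.780327 * (1 + 0.0053024 * math.sin(phi)**2
                         - 0.0000058 * math.sin(2 * phi)**2)

print(f"Equator: {g_at_latitude(0):.5f} m/s^2")    # ~9.78, as quoted above
print(f"Poles:   {g_at_latitude(90):.5f} m/s^2")   # ~9.83, as quoted above
```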
Changes with time
The gravitational potential at the surface of Earth is due mainly to the mass and rotation of Earth, but there are also small contributions from the distant Sun and Moon. As Earth rotates, those small contributions at any one place vary with time, and so the local value of g varies slightly. Those are the diurnal and semidiurnal tidal variations. For most purposes it is necessary to know only the variation of gravity with time at a fixed place or the changes of gravity from place to place; then the tidal variation can be removed. Accordingly, almost all gravity measurements are relative measurements of the differences from place to place or from time to time.
Measurements of g
Unit of gravity
Because gravity changes are far less than 1 metre per second per second, it is convenient to have a smaller unit for relative measurements. The gal (named after Galileo) has been adopted for this purpose; a gal is one-hundredth metre per second per second. The unit most commonly used is the milligal, which equals 10^-5 metre per second per second—i.e., about one-millionth of the average value of g.
Absolute measurements
Two basic ways of making absolute measurements of gravity have been devised: timing the free fall of an object and timing the motion under gravity of a body constrained in some way, almost always as a pendulum. In 1817 the English physicist Henry Kater, building on the work of the German astronomer Friedrich Wilhelm Bessel, was the first to use a reversible pendulum to make absolute measurements of g. If the periods of swing of a rigid pendulum about two alternative points of support are the same, then the separation of those two points is equal to the length of the equivalent simple pendulum of the same period. By careful construction, Kater was able to measure the separation very accurately. The so-called reversible pendulum was used for absolute measurements of gravity from Kater’s day until the 1950s. Since that time, electronic instruments have enabled investigators to measure with high precision the half-second time of free fall of a body (from rest) through one metre. It is also possible to make extremely accurate measurements of position by using interference of light. Consequently, direct measurements of free fall have replaced the pendulum for absolute measurements of gravity.
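Both absolute methods come down to elementary kinematics; the sketch below evaluates the two textbook formulas with illustrative numbers (the lengths, periods, and times are invented, not measured values from the text).

```python
import math

# Reversible (Kater) pendulum: if the periods about the two pivots are
# equal, the pivot separation L acts as the length of the equivalent
# simple pendulum, and g = 4*pi^2*L / T^2.
L = 1.0      # pivot separation, m -- illustrative
T = 2.006    # common period, s -- illustrative
g_pendulum = 4 * math.pi ** 2 * L / T ** 2

# Free fall from rest through distance d in time t: d = g*t^2/2,
# so g = 2*d / t^2.  Falling one metre takes roughly half a second.
d = 1.0      # drop distance, m
t = 0.4515   # measured fall time, s -- illustrative
g_freefall = 2 * d / t ** 2

print(f"pendulum: {g_pendulum:.3f} m/s^2, free fall: {g_freefall:.3f} m/s^2")
```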
Nowadays, lasers are the sources of light for interferometers, while the falling object is a retroreflector that returns a beam of light back upon itself. The falling object can be timed in simple downward motion, or it can be projected upward and timed over the upward and downward path. Transportable versions of such apparatuses have been used in different locations to establish a basis for measuring differences of gravity over the entire Earth. The accuracy attainable is about one part in 10^8.
More recently, interferometers using beams of atoms instead of light have given absolute determinations of gravity. Interference takes place between atoms that have been subject to different gravitational potentials and so have different energies and wavelengths. The results are comparable to those from bodies in free fall.
Relative measurements
From the time of Newton, measurements of differences of gravity (strictly, the ratios of values of gravity) were made by timing the same pendulum at different places. During the 1930s, however, static gravimeters replaced pendulums for local measurements over small ranges of gravity. Today, free-fall measurements have rendered the pendulum obsolete for all purposes.
Spring gravimeters balance the force of gravity on a mass in the gravity field to be measured against the elastic force of the spring. Either the extension of the spring is measured, or a servo system restores it to a constant amount. High sensitivity is achieved through electronic or mechanical means. If a thin wire is stretched by a mass hung from it, the tension in the wire, and therefore the frequency of transverse oscillations, will vary with the force of gravity upon the mass. Such vibrating string gravimeters were originally developed for use in submarines and were later employed by the Apollo 17 astronauts on the Moon to conduct a gravity survey of their landing site. Another relatively recent development is the superconducting gravimeter, an instrument in which the position of a magnetically levitated superconducting sphere is sensed to provide a measure of g. Modern gravimeters may have sensitivities better than 0.005 milligal, the standard deviation of observations in exploration surveys being of the order of 0.01–0.02 milligal.
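For a vibrating string gravimeter, the link between frequency and gravity is the ideal-string formula; the following is a sketch under that idealization, with invented instrument parameters.

```python
import math

# A mass m hung from a thin string of length L and linear density mu
# gives tension T = m*g, so the fundamental transverse frequency is
#   f = (1/(2*L)) * sqrt(m*g/mu)   =>   g = (2*L*f)^2 * mu / m.
L = 0.05       # string length, m -- illustrative
mu = 1.0e-4    # linear density of the string, kg/m -- illustrative
m = 0.1        # suspended mass, kg -- illustrative

def g_from_frequency(f: float) -> float:
    return (2 * L * f) ** 2 * mu / m

f = 990.0  # measured fundamental frequency, Hz -- illustrative
print(f"g = {g_from_frequency(f):.3f} m/s^2")
# Since g is proportional to f^2, a relative frequency shift df/f
# corresponds to a gravity change dg/g = 2*df/f.
```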
Differences in gravity measured with gravimeters are obtained in quite arbitrary units—divisions on a graduated dial, for example. The relation between these units and milligals can be determined only by reading the instrument at a number of points where g is known as a result of absolute or relative pendulum measurements. Further, because an instrument will not have a completely linear response, known points must cover the entire range of gravity over which the gravimeter is to be used.
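Converting dial divisions to milligals is in effect a curve-fitting problem; here is a minimal sketch with invented readings, using a quadratic term to represent the instrument's slightly nonlinear response.

```python
import numpy as np

# Dial readings taken at stations where g is already known (in milligals
# relative to a base station).  All values are invented for illustration;
# note the known points span the instrument's whole working range.
dial = np.array([0.0, 105.2, 212.1, 320.5, 431.3])
g_known = np.array([0.0, 100.0, 200.0, 300.0, 400.0])  # milligals

coeffs = np.polyfit(dial, g_known, deg=2)  # quadratic calibration curve
calibrate = np.poly1d(coeffs)

print(calibrate(250.0))  # dial reading -> milligals
```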
Since g is an acceleration, measuring it from a vehicle that is moving, and therefore accelerating relative to Earth, raises a number of fundamental difficulties. Pendulum, vibrating-string, and spring-gravimeter observations have been made from submarines; using gyrostabilized platforms, relative gravity measurements with accuracies approaching a few milligals have been and are being made from surface ships. Experimental measurements with various gravity sensors on fixed-wing aircraft as well as on helicopters have also been carried out.
As a result of combining all available absolute and relative measurements, it is now possible to obtain the most probable gravity values at a large number of sites to high accuracy. The culmination of gravimetric work begun in the 1960s has been a worldwide gravity reference system having an accuracy of at least one part in 10^7 (0.1 milligal or better).
The value of gravity measured at the terrestrial surface is the result of a combination of factors: the gravitational attraction of Earth as a whole, the centrifugal effect of its rotation, the elevation of the point of observation, the attraction of the surrounding topography, tidal variations, and unbalanced attractions caused by irregularities in the density of the rocks below the surface.
Most geophysical surveys are aimed at separating out the last of these in order to interpret the geologic structure. It is therefore necessary to make proper allowance for the other factors. The first two factors imply a variation of gravity with latitude that can be calculated for an assumed shape for Earth. The third factor, which is the decrease in gravity with elevation, due to increased distance from the centre of Earth, amounts to −0.3086 milligal per metre. This value, however, assumes that material of zero density occupies the whole space between the point of observation and sea level, and it is therefore termed the free-air correction factor. In practice the mass of rock material that occupies part or all of this space must be considered. In an area where the topography is reasonably flat, this is usually calculated by assuming the presence of an infinite slab of thickness equal to the height of the station h and having an appropriate density σ; its value is +0.04185σh milligals, with σ expressed in grams per cubic centimetre and h in metres (that is, +0.04185σ milligal per metre). This is commonly called the Bouguer correction factor. Terrain or topographical corrections also can be applied to allow for the attractions due to surface relief if the densities of surface rocks are known. Tidal effects (the amplitudes are less than 0.3 milligal) can be calculated and allowed for.
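The standard reduction combines these corrections; the sketch below uses the two coefficients quoted above together with invented station values.

```python
def free_air_correction(h: float) -> float:
    """Correction, in milligals, for a station h metres above the datum;
    added back because gravity falls off with height."""
    return 0.3086 * h

def bouguer_correction(h: float, density: float) -> float:
    """Attraction, in milligals, of an infinite slab of thickness h metres
    and density in g/cm^3; subtracted from the observed value."""
    return 0.04185 * density * h

h = 250.0        # station height above datum, m -- illustrative
rho = 2.67       # typical crustal density, g/cm^3
g_obs = -35.0    # observed gravity anomaly, milligals -- illustrative

bouguer_anomaly = g_obs + free_air_correction(h) - bouguer_correction(h, rho)
print(f"Bouguer anomaly: {bouguer_anomaly:.2f} mGal")
```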
Although the Apollo astronauts used a gravimeter at their lunar landing site, most scientific knowledge about the gravitational attractions of the Moon and the planets has been derived from observations of their effects upon the accelerations of spacecraft in orbit around or passing close to them. Radio tracking makes it possible to determine the accelerations of spacecraft very accurately, and the results can be expressed either as terms in a series of spherical harmonics or as the variation of gravity over the surface. As in the case of Earth, spherical harmonics are more effective for studying gross structure, while the variation of gravity is more useful for local features. Spacecraft must descend close to the surface or remain in orbit for extended periods in order to detect local gravity variations; such data had been obtained for the Moon, Venus, Mars, and Jupiter by the end of the 20th century.
The Moon’s polar flattening is much less than that of Earth, while its equator is far more elliptical. There are also large, more-local irregularities from visible and concealed structures. Mars also exhibits some large local variations, while the equatorial bulges of Mercury and Venus are very slight.
By contrast, the major planets, all of which rotate quite fast, have large equatorial bulges, and their gravity is dominated by a large increase from equator to pole. The polar flattening of Jupiter is about 10 percent and was first estimated from telescopic observation by Gian Domenico Cassini about 1664. As mentioned above, Edmond Halley subsequently realized that the corresponding effect on gravity would perturb the orbits of the satellites of Jupiter (those discovered by Galileo). The results of gravity measurements are crucial to understanding the internal properties of the planets.
The Newtonian theory of gravity is based on an assumed force acting between all pairs of bodies—i.e., an action at a distance. When a mass moves, the force acting on other masses was considered to adjust instantaneously to the new location of the displaced mass. That, however, is inconsistent with special relativity, which is based on the axiom that all knowledge of distant events comes from electromagnetic signals. Physical quantities have to be defined in such a way that certain combinations of them—in particular, distance, time, mass, and momentum—are independent of choice of space-time coordinates. Special relativity, together with the field theory of electrical and magnetic phenomena, has met such empirical success that most modern gravitational theories are constructed as field theories consistent with its principles. In a field theory the gravitational force between bodies is formed by a two-step process: (1) one body produces a gravitational field that permeates all surrounding space but has weaker strength farther from its source; (2) a second body in that space is then acted upon by this field and experiences a force. The Newtonian force of reaction is then viewed as the response of the first body to the gravitational field produced by the second body, there being at all points in space a superposition of gravitational fields due to all the bodies in it.
In the 1970s the physicists Abdus Salam of Pakistan and Steven Weinberg and Sheldon L. Glashow of the United States were able to show that the electromagnetic forces and the weak force responsible for beta decay were different manifestations of the same basic interaction. That was the first successful unified field theory. Physicists are actively seeking other possible unified combinations. The possibility that gravitation might be linked with the other forces of nature in a unified theory of forces greatly increased interest in gravitational field theories during the 1970s and ’80s. Because the gravitational force is exceedingly weak compared with all others and because it seems to be independent of all physical properties except mass, the unification of gravitation with the other forces remains the most difficult to achieve. That challenge has provided a tremendous impetus to experimental investigations to determine whether there may be some failure of the apparent independence.
The prime example of a field theory is Einstein’s general relativity, according to which the acceleration due to gravity is a purely geometric consequence of the properties of space-time in the neighbourhood of attracting masses. (As will be seen below, general relativity makes certain specific predictions that are borne out well by observation.) In a whole class of more-general theories, these and other effects not predicted by simple Newtonian theory are characterized by free parameters; such formulations are called parameterized post-Newtonian (PPN) theories. There is now considerable experimental and observational evidence for limits to the parameters. So far, no deviation from general relativity has been demonstrated convincingly.
Field theories of gravity predict specific corrections to the Newtonian force law, the corrections taking two basic forms: (1) When matter is in motion, additional gravitational fields (analogous to the magnetic fields produced by moving electric charges) are produced; also, moving bodies interact with gravitational fields in a motion-dependent way. (2) Unlike electromagnetic field theory, in which two or more electric or magnetic fields superimpose by simple addition to give the total fields, in gravitational field theory nonlinear fields proportional to the second and higher powers of the source masses are generated, and gravitational fields proportional to the products of different masses are created. Gravitational fields themselves become sources for additional gravitational fields. Examples of some of these effects are shown below. The acceleration A of a moving particle of negligible mass that interacts with a mass M, which is at rest, is given in the following formula, derived from Einstein’s gravitational theory.
The expression for A now has, as well as the Newtonian expression from equation (1), further terms in higher powers of GM/R^2—that is, in G^2M^2/R^4. As elsewhere, V is the particle’s velocity vector, A is its acceleration vector, R is the vector from the mass M, and c is the speed of light. When written out, the sum is as shown below.
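The display equation itself does not survive in this text; a standard equivalent form (assuming a test particle moving about a static mass M, in harmonic coordinates) is

$$\mathbf{A} = -\frac{GM}{R^{3}}\,\mathbf{R} + \frac{GM}{c^{2}R^{3}}\left[\left(\frac{4GM}{R} - V^{2}\right)\mathbf{R} + 4\,(\mathbf{R}\cdot\mathbf{V})\,\mathbf{V}\right],$$

where R = |R| and V = |V|. The term in G^2M^2/R^4 appears in the coefficient of R, exactly as described above.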
This expression gives only the first post-Newtonian corrections; terms of higher power in 1/c are neglected. For planetary motion in the solar system, the 1/c^2 terms are smaller than Newton’s acceleration term by at least a factor of 10^-8, but some of the consequences of these correction terms are measurable and important tests of Einstein’s theory. It should be pointed out that prediction of new observable gravitational effects requires particular care; Einstein’s pioneering work in gravity showed that gravitational fields affect the basic measuring instruments of experimental physics—clocks, rulers, light rays—with which any experimental result in physics is established. Among these effects are the precession of the perihelia of planetary orbits, the bending and time delay of light passing near a massive body, and the slowing of clocks (the gravitational redshift) in a gravitational field.
According to general relativity, the curvature of space-time is determined by the distribution of masses, while the motion of masses is determined by the curvature. In consequence, variations of the gravitational field should be transmitted from place to place as waves, just as variations of an electromagnetic field travel as waves. If the masses that are the source of a field change with time, they should radiate energy as waves of curvature of the field. There are strong grounds for believing that such radiation exists. One particular double-star system has a pulsar as one of its components, and, from measurements of the shift of the pulsar frequency due to the Doppler effect, precise estimates of the period of the orbit show that the period is changing, corresponding to a decrease in the energy of the orbital motion. Gravitational radiation is the only known means by which that could happen.
Double stars in their regular motions (such as that for which a change in period has been detected) and massive stars collapsing as supernovas have been suggested as sources of gravitational radiation, and considerable theoretical effort has gone into calculating the signals to be expected from those and other sources.
Gravitational radiation is very weak. The changes of curvature would correspond to a dilation in one direction and a contraction at right angles to that direction. One scheme, first tried out about 1960, employed a massive cylinder that might be set in mechanical oscillation by a gravitational signal. The authors of this apparatus argued that signals had been detected, but their claim was not substantiated. In a more fruitful scheme an optical interferometer is set up with freely suspended reflectors at the ends of long paths that are at right angles to each other. Shifts of interference fringes corresponding to an increase in length of one arm and a decrease in the other would indicate the passage of gravitational waves. Such an interferometer, the Laser Interferometer Gravitational-Wave Observatory (LIGO), made the first detection of gravitational radiation in 2015, when two black holes about 1.3 billion light-years away spiralled into each other. The black holes were 36 and 29 times the mass of the Sun and formed a new black hole 62 times the mass of the Sun. In the merger, three solar masses were converted into the energy of gravitational waves; at that moment the power radiated was 50 times greater than that of all the stars shining in the universe. By 2020 LIGO had made 47 detections of gravitational radiation: 44 were mergers of black hole binaries, two were mergers of neutron star binaries, and one was a possible merger of a black hole with a neutron star.
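The energy figure can be checked by direct arithmetic from E = mc^2 (standard constants assumed; the merger duration is not given in the text).

```python
# Energy radiated as gravitational waves in the 2015 detection:
# about three solar masses converted via E = m*c^2.
M_sun = 1.989e30   # mass of the Sun, kg
c = 2.998e8        # speed of light, m/s

E = 3 * M_sun * c ** 2
print(f"E = {E:.2e} J")   # ~5.4e47 J

# Released over a fraction of a second, this corresponds to a power that
# briefly exceeded the combined output of all the stars shining in the
# universe, as stated above.
```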
Both Michell and Laplace pointed out that the attraction of a very dense object upon light might be so great that the light could never escape from the object, rendering it invisible. Such a phenomenon is a black hole. The relativistic theory of black holes has been thoroughly developed in recent years, and astronomers have conducted extensive observations of them. One possible class of black holes comprises very large stars that have used up all of their nuclear energy so that they are no longer held up by radiation pressure and have collapsed into black holes (less-massive stars may collapse into neutron stars). Supermassive black holes with masses millions to billions of times that of the Sun are thought to exist at the centres of most galaxies.
Black holes, from which no radiation is able to escape, cannot be seen by their own light, but there are observable secondary effects. If a black hole were one component of a double star, the orbital motion of the pair and the mass of the invisible member might be derived from the oscillatory motion of a visible companion. Because black holes attract matter, any gas in the vicinity of an object of this kind would fall into it and acquire, before vanishing into the hole, a high velocity and consequently a high temperature. The gas may become hot enough to produce X-rays and gamma rays from around the hole. Such a mechanism is the origin of at least some powerful X-ray and radio astronomical sources, including those at the centres of galaxies and quasars. In the case of the massive galaxy M87, the supermassive black hole at its centre, which has a mass 6.5 billion times that of the Sun, has been directly observed.
Black hole in M87
Black hole at the centre of the massive galaxy M87, about 55 million light-years from Earth, as imaged by the Event Horizon Telescope (EHT). The black hole is 6.5 billion times more massive than the Sun. This image was the first direct visual evidence of a supermassive black hole and its shadow. The ring is brighter on one side because the black hole is rotating, and thus material on the side of the black hole turning toward Earth has its emission boosted by the Doppler effect. The shadow of the black hole is about five and a half times larger than the event horizon, the boundary marking the black hole's limits, where the escape velocity is equal to the speed of light. This image was released in 2019 and created from data collected in 2017.
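As a rough check on the scales in this caption, the event-horizon radius follows from the Schwarzschild formula; a minimal sketch with standard constants (the factor of 5.5 for the shadow is taken from the caption above).

```python
# Schwarzschild radius r_s = 2*G*M/c^2 for the black hole in M87.
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s
M_sun = 1.989e30   # mass of the Sun, kg

M = 6.5e9 * M_sun
r_s = 2 * G * M / c ** 2
print(f"r_s = {r_s:.2e} m")  # ~1.9e13 m (about 130 astronomical units)

# The caption puts the shadow at about 5.5 times the event horizon:
print(f"shadow scale ~ {5.5 * r_s:.2e} m")
```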
The essence of Newton’s theory of gravitation is that the force between two bodies is proportional to the product of their masses and the inverse square of their separation and that the force depends on nothing else. With a small modification, the same is true in general relativity. Newton himself tested his assumptions by experiment and observation. He made pendulum experiments to confirm the principle of equivalence and checked the inverse square law as applied to the periods and diameters of the orbits of the satellites of Jupiter and Saturn.
During the latter part of the 19th century, many experiments showed the force of gravity to be independent of temperature, electromagnetic fields, shielding by other matter, orientation of crystal axes, and other factors. The revival of such experiments during the 1970s was the result of theoretical attempts to relate gravitation to other forces of nature by showing that general relativity was an incomplete description of gravity. New experiments on the equivalence principle were performed, and experimental tests of the inverse square law were made both in the laboratory and in the field.
There also has been a continuing interest in the determination of the constant of gravitation, although it must be pointed out that G occupies a rather anomalous position among the other constants of physics. In the first place, the mass M of any celestial object cannot be determined independently of the gravitational attraction that it exerts. Thus, the combination GM, not the separate value of M, is the only meaningful property of a star, planet, or galaxy. Second, according to general relativity and the principle of equivalence, G does not depend on material properties but is in a sense a geometric factor. Hence, the determination of the constant of gravitation does not seem as essential as the measurement of quantities like the electronic charge or Planck’s constant. It is also much less well determined experimentally than any of the other constants of physics.
Experiments on gravitation are in fact very difficult, as a comparison of experiments on the inverse square law of electrostatics with those on gravitation will show. The electrostatic law has been established to within one part in 10^16 by using the fact that the field inside a closed conductor is zero when the inverse square law holds. Experiments with very sensitive electronic devices have failed to detect any residual fields in such a closed cavity. Gravitational forces have to be detected by mechanical means, most often the torsion balance, and, although the sensitivities of mechanical devices have been greatly improved, they are still far below those of electronic detectors. Mechanical arrangements also preclude the use of a complete gravitational enclosure. Last, extraneous disturbances are relatively large because gravitational forces are very small (something that Newton first pointed out). Thus, the inverse square law is established over laboratory distances to no better than one part in 10^4.
Recent interest in the inverse square law arose from two suggestions. First, the gravitational field itself might have a mass, in which case the constant of gravitation would change in an exponential manner from one value for small distances to a different one for large distances over a characteristic distance related to the mass of the field. Second, the observed field might be the superposition of two or more fields of different origin and different strengths, one of which might depend on chemical or nuclear constitution. Deviations from the inverse square law have been sought in three ways: in laboratory measurements of the attraction between masses, in geophysical measurements in mines and bore holes, and in comparisons of gravity at the surface of Earth with that acting on artificial satellites.
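Both suggestions are commonly summarized (this parameterization is an addition here, not drawn from the text) by a Yukawa-type modification of the Newtonian potential, in which α measures the strength of the extra component and λ its range:

$$V(r) = -\frac{GMm}{r}\left(1 + \alpha\, e^{-r/\lambda}\right).$$

A field with mass corresponds to a finite range λ, while a second field of different origin appears as a nonzero α that may depend on composition; α = 0 recovers the pure inverse square law.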
Early in the 1970s an experiment by the American physicist Daniel R. Long seemed to show a deviation from the inverse square law at a range of about 0.1 metre. Long compared the maximum attractions of two rings upon a test mass hung from the arm of a torsion balance. The maximum attraction of a ring occurs at a particular point on the axis and is determined by the mass and dimensions of the ring. If the ring is moved until the force on the test mass is greatest, the distance between the test mass and the ring is not needed. Two later experiments over the same range showed no deviation from the inverse square law. In one, conducted by the American physicist Riley Newman and his colleagues, a test mass hung on a torsion balance was moved around in a long hollow cylinder. The cylinder approximates a complete gravitational enclosure and, allowing for a small correction because it is open at the ends, the force on the test mass should not depend on its location within the cylinder. No deviation from the inverse square law was found. In the other experiment, performed in Cambridge, Eng., by Y.T. Chen and associates, the attractions of two solid cylinders of different mass were balanced against a third cylinder so that only the separations of the cylinders had to be known; it was not necessary to know the distances of any from a test mass. Again no deviation of more than one part in 10^4 from the inverse square law was found. Other, somewhat less-sensitive experiments at ranges up to one metre or so also have failed to establish any greater deviation.
The geophysical tests go back to a method for the determination of the constant of gravitation that had been used in the 19th century, especially by the British astronomer Sir George Airy. Suppose the value of gravity g is measured at the top and bottom of a horizontal slab of rock of thickness t and density d. The values for the top and bottom will be different for two reasons. First, the top of the slab is t farther from the centre of Earth, and so the measured value of gravity will be less by 2(t/R)g, where R is the radius of Earth. Second, the slab itself attracts objects above and below it toward its centre; the difference between the downward and upward attractions of the slab is 4πGtd. Thus, a value of G may be estimated. Frank D. Stacey and his colleagues in Australia made such measurements at the top and bottom of deep mine shafts and claimed that there may be a real difference between their value of G and the best value from laboratory experiments. The difficulties lie in obtaining reliable samples of the density and in taking account of varying densities at greater depths. Similar uncertainties appear to have afflicted measurements in a deep bore hole in the Greenland ice sheet.
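Airy's relation can be turned around to give a numerical estimate of G; the following sketch uses invented but plausible values for the depth, density, and gravity difference (none of them measured values from the text).

```python
import math

# Airy's mine-shaft method: estimate G from gravity measured at the
# top and bottom of a horizontal rock slab of thickness t and density d.
g = 9.81          # surface gravity, m/s^2
R = 6.371e6       # radius of Earth, m
t = 1000.0        # slab thickness (mine depth), m -- illustrative
d = 2700.0        # rock density, kg/m^3 -- illustrative

# The difference g(bottom) - g(top) combines two effects:
#   +2*(t/R)*g    from being t closer to Earth's centre at the bottom
#   -4*pi*G*t*d   from the slab pulling down at the top and up at the bottom
G_true = 6.674e-11  # used only to simulate a "measured" difference
delta_g = 2 * (t / R) * g - 4 * math.pi * G_true * t * d

# Inverting the same relation recovers G from the observed difference:
G_est = (2 * (t / R) * g - delta_g) / (4 * math.pi * t * d)
print(f"delta_g = {delta_g * 1e5:.1f} mGal, G = {G_est:.3e} m^3 kg^-1 s^-2")
```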
New measurements have failed to detect any deviation from the inverse square law. The most thorough investigation was carried out from a high tower in Colorado. Measurements were made with a gravimeter at different heights and coupled with an extensive survey of gravity around the base of the tower. Any variations of gravity over the surface that would give rise to variations up the height of the tower were estimated with great care. Allowance was also made for deflections of the tower and for the accelerations of its motions. The final result was that no deviation from the inverse square law could be found.
A further test of the inverse square law depends on the theorem that the divergence of the gravity vector should vanish in a space that is free of additional gravitational sources. An experiment to test this was performed by M.V. Moody and H.J. Paik in California with a three-axis superconducting gravity gradiometer that measured the gradients of gravity in three perpendicular directions. The sum of the three gradients was zero within the accuracy of the measurements, about one part in 10^4.
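The theorem invoked is the source-free form of the Newtonian field equation: since $\nabla\cdot\mathbf{g} = -4\pi G\rho$, in a region free of matter ($\rho = 0$)

$$\frac{\partial g_x}{\partial x} + \frac{\partial g_y}{\partial y} + \frac{\partial g_z}{\partial z} = 0,$$

which is the sum the three-axis gradiometer measured.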
The absolute measurements of gravity described earlier, together with the comprehensive gravity surveys made over the surface of Earth, allow the mean value of gravity over Earth to be estimated to about one part in 10^6. The techniques of space research also have given the mean value of the radius of Earth and the distances of artificial satellites to the same precision. Thus, it has been possible to compare the value of gravity on Earth with that acting on an artificial satellite. Agreement to about one part in 10^6 shows that, over distances from the surface of Earth to close satellite orbits, the inverse square law is followed.
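The comparison amounts to checking the inverse-square scaling of g with distance from Earth's centre; here is a toy version with an assumed orbital altitude.

```python
# Compare surface gravity with the acceleration acting on a satellite,
# assuming the inverse square law g(r) = g_surface * (R/r)^2.
g_surface = 9.81      # mean surface value, m/s^2
R = 6.371e6           # mean radius of Earth, m
h = 400e3             # satellite altitude, m -- illustrative
r = R + h

g_satellite = g_surface * (R / r) ** 2
print(f"g at {h / 1e3:.0f} km altitude: {g_satellite:.3f} m/s^2")  # ~8.68
# Matching tracked satellite accelerations to about one part in 10^6
# confirms the inverse square law over these distances.
```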
Thus far, all of the most reliable experiments and observations reveal no deviation from the inverse square law.
The constant of gravitation has been measured in three ways: by comparing the pull of a large natural mass with that of Earth as a whole, by measuring with a laboratory balance the attraction of a heavy body upon a small test mass, and by measuring directly the force between two masses in the laboratory.
The first approach was suggested by Newton; the earliest observations were made in 1774 by the British astronomer Nevil Maskelyne on the mountain of Schiehallion in Scotland. The subsequent work of Airy and more-recent developments are noted above. The laboratory balance method was developed in large part by the British physicist John Henry Poynting during the late 1800s, but all the most recent work has involved the use of the torsion balance in some form or other for the direct laboratory measurement of the force between two bodies. The torsion balance was devised by Michell, who died before he could use it to measure G. Cavendish adapted Michell’s design to make the first reliable measurement of G in 1798; only in comparatively recent times have clearly better results been obtained. Cavendish measured the change in deflection of the balance when attracting masses were moved from one side to the other of the torsion beam. The method of deflection was analyzed most thoroughly in the late 1800s by Sir Charles Vernon Boys, an English physicist, who carried it to its highest development, using a delicate suspension fibre of fused silica for the pendulum. In a variant of that method, the deflection of the balance is maintained constant by a servo control.
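In outline, a Cavendish-type determination combines the fibre's torsion constant (obtained from the free oscillation period) with the measured deflection; the sketch below uses an idealized geometry and invented values, not Cavendish's actual data.

```python
import math

# Torsion balance: two small masses m on a beam of half-length d; large
# masses M each placed a distance r from a small mass.
m = 0.73        # small mass, kg -- illustrative
M = 158.0       # large mass, kg -- illustrative
d = 0.915       # half-length of beam, m -- illustrative
r = 0.225       # centre-to-centre separation, m -- illustrative
T = 420.0       # free oscillation period, s -- illustrative
theta = 1.02e-3 # equilibrium deflection, rad -- illustrative

# Torsion constant from the period, ignoring the light beam itself:
I = 2 * m * d ** 2               # moment of inertia of the two masses
kappa = 4 * math.pi ** 2 * I / T ** 2

# At equilibrium the gravitational torque 2*(G*m*M/r^2)*d balances the
# fibre torque kappa*theta, so:
G = kappa * theta * r ** 2 / (2 * d * m * M)
print(f"G = {G:.2e} m^3 kg^-1 s^-2")  # ~6.7e-11
```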
The second scheme involves the changes in the period of oscillation of a torsion balance when attracting masses are placed close to it such that the period is shortened in one position and lengthened in another. Measurements of period can be made much more precisely than those of deflection, and the method, introduced by Carl Braun of Austria in 1897, has been used in many subsequent determinations. In a third scheme the acceleration of the suspended masses is measured as they are moved relative to the large attracting masses.
In another arrangement a balance with heavy attracting masses is set up near a free test balance and adjusted so that it oscillates with the same period as the test balance. The latter is then driven into resonant oscillations with an amplitude that is a measure of the constant of gravitation. The technique was first employed by J. Zahradnicek of Czechoslovakia during the 1930s and was effectively used again by C. Pontikis of France some 40 years later.
Suspensions for two-arm balances for the comparison of masses and for torsion balances have been studied intensively by T.J. Quinn and his colleagues at the International Bureau of Weights and Measures, near Paris, and they have found that suspensions with thin ribbons of metal rather than wires provide the most stable systems. They have used balances with such suspensions to look for deviations from the predictions of general relativity and have most recently used a torsion balance with ribbon suspension in two new determinations of the constant of gravitation.
The 20th-century English physicist P.A.M. Dirac, among others, suggested that the value of the constant of gravitation might be proportional to the age of the universe; other rates of change over time also have been proposed. The rates of change would be extremely small, one part in 10^11 per year if the age of the universe is taken to be 10^11 years; such a rate is entirely beyond experimental capabilities at present. There is, however, the possibility of looking for the effects of any variation upon the orbit of a celestial body, in particular the Moon. It has been claimed from time to time that such effects may have been detected. As yet, there is no certainty.
The constant of gravitation is plainly a fundamental quantity, since it appears to determine the large-scale structure of the entire universe. That is so whether gravitation is an essentially geometric parameter, as in general relativity, or the strength of a field, as in one aspect of a more-general field of unified forces. The fact that, so far as is known, gravitation depends on no other physical factors makes it likely that the value of G reflects a basic restriction on the possibilities of physical measurement, just as special relativity is a consequence of the fact that, beyond the shortest distances, it is impossible to make separate measurements of length and time.