How do we perceive acceleration?

Today my friend and I were riding on his motorbike, and I was sitting facing backwards because I was holding something in my hands (and it was fun :P). When he started the bike I felt a very strong acceleration. I asked him to slow down, but he said he was going at his regular speed. Out of curiosity, we then tried different velocities and accelerations while I sat either facing the direction of acceleration or facing away from it. To my surprise, I 'perceived' more acceleration when I was sitting opposite to the direction of acceleration. While searching for how we perceive acceleration, I was surprised to find that it is not very well understood. I found the following explanations:

  • We overestimate arrival time (ref: this)
  • Endolymph in the vestibular system (ref: Wiki) [not exactly acceleration, but rather balance]
  • Interpolated motion segments (ref: this)

My question: is this just visual perception, or is there a special mechanism for detecting acceleration? In either case, why did I feel more acceleration while sitting opposite to the direction of acceleration than while facing it? [My initial guess is that I didn't have any support and my visual cues were messed up.]

Update:

I found this post, but I didn't get a satisfactory answer from the book referred to in its answer.


My question: is this just visual perception, or is there a special mechanism for detecting acceleration?

Our perception of acceleration is a conclusion synthesized from a multitude of systems.

Most prominent is the Endolymph system you already mentioned:

As you accelerate, the endolymph and the otoliths within it (small, calcified deposits) move over the hair cells and produce action potentials, which travel to the brain. The higher the magnitude of the acceleration, the more action potentials are sent.

This is also where dizziness comes from, as the endolymph and otoliths do not come to rest at the same rate, momentarily giving the brain two different interpretations of what acceleration you're experiencing.
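
To make the rate-coding idea concrete, here is a minimal toy sketch in Python; the baseline, gain, and saturation numbers are invented for illustration and are not physiological measurements:

```python
# Toy rate-coding model of otolith afferents: firing rate grows with the
# magnitude of acceleration and saturates. All parameters are made up for
# illustration only; they are not physiological values.

def otolith_firing_rate(acceleration_ms2, baseline_hz=40.0,
                        gain_hz_per_ms2=15.0, max_hz=300.0):
    """Return a hypothetical afferent firing rate (Hz) for an acceleration in m/s^2."""
    rate = baseline_hz + gain_hz_per_ms2 * abs(acceleration_ms2)
    return min(rate, max_hz)

for a in (0.0, 1.0, 3.0, 9.8):
    print(f"{a:4.1f} m/s^2 -> {otolith_firing_rate(a):5.1f} Hz")
```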

There is also the Doppler Effect:

Based in the cochlear system of the inner ear this time, rather than the vestibular system, the Doppler effect is interpreted by the brain and gives a very rough estimate of both velocity and acceleration (if the velocity of the object happens to be changing).

Then there is the entire visual side of things. Motion blur and the rate at which objects appear to change size are large contributors to our sense of visual acceleration, but almost all of this is done via complex processing of visual data in the occipital and frontal lobes.

There are also countless minor contributors: skin and joint tension (the sensation of weight due to acceleration), ease of breathing, sensory cells in your hair if it is exposed to the open air while accelerating, blood pressure sensors (and numbness when the body can't compensate), etc.

In either case, why did I feel more acceleration while sitting opposite to the direction of acceleration than while facing it?

My guess would be that your visual system is more attuned to facing the same direction as the acceleration. When you were facing the opposite direction, your mind overcompensated in an effort to protect you, since you probably don't get many opportunities to rapidly accelerate backwards over distances of more than a few feet.

Your brain might also have been panicking a bit because you couldn't see where you were going (usually very bad), and the heightened sensory state made the magnitude of the acceleration feel greater.

These are my best guesses, however. If someone has an academic source, please feel free to edit this answer!


SAT / ACT Prep Online Guides and Tips

"Whoa, you really went from zero to sixty there!"

Have you ever heard someone use the idiom "zero to sixty" like I did in the above example? When someone says something went from "zero to sixty," they’re really saying that it accelerated very quickly. Acceleration is the rate at which the velocity of something changes over a set period of time.

In this article, we’ll be talking all about acceleration: what it is and how to calculate it. Buckle up!
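
As a quick worked example of that definition, the sketch below (a minimal Python snippet with made-up numbers) computes average acceleration as the change in velocity divided by the elapsed time, for a car going from zero to sixty miles per hour in six seconds:

```python
# Average acceleration = change in velocity / elapsed time.
# Illustrative numbers: 0 to 60 mph in 6 seconds.

MPH_TO_MS = 0.44704  # metres per second per mile per hour

def average_acceleration(v_initial, v_final, elapsed_seconds):
    """Return average acceleration in (velocity units) per second."""
    return (v_final - v_initial) / elapsed_seconds

a_mph_per_s = average_acceleration(0.0, 60.0, 6.0)            # 10 mph per second
a_ms2 = average_acceleration(0.0, 60.0 * MPH_TO_MS, 6.0)      # about 4.47 m/s^2

print(f"{a_mph_per_s:.1f} mph/s = {a_ms2:.2f} m/s^2")
```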


Imagination can change what we hear and see

A study from Karolinska Institutet in Sweden shows that our imagination may affect how we experience the world more than we perhaps think. What we imagine hearing or seeing "in our head" can change our actual perception. The study, which is published in the scientific journal Current Biology, sheds new light on a classic question in psychology and neuroscience -- how our brains combine information from the different senses.

"We often think about the things we imagine and the things we perceive as being clearly dissociable," says Christopher Berger, doctoral student at the Department of Neuroscience and lead author of the study. "However, what this study shows is that our imagination of a sound or a shape changes how we perceive the world around us in the same way actually hearing that sound or seeing that shape does. Specifically, we found that what we imagine hearing can change what we actually see, and what we imagine seeing can change what we actually hear."

The study consists of a series of experiments that make use of illusions in which sensory information from one sense changes or distorts one's perception of another sense. Ninety-six healthy volunteers participated in total.

In the first experiment, participants experienced the illusion that two passing objects collided rather than passed by one another when they imagined a sound at the moment the two objects met. In a second experiment, the participants' spatial perception of a sound was biased towards a location where they imagined seeing the brief appearance of a white circle. In the third experiment, the participants' perception of what a person was saying was changed by their imagination of a particular sound.

According to the scientists, the results of the current study may be useful in understanding the mechanisms by which the brain fails to distinguish between thought and reality in certain psychiatric disorders such as schizophrenia. Another area of use could be research on brain computer interfaces, where paralyzed individuals' imagination is used to control virtual and artificial devices.

"This is the first set of experiments to definitively establish that the sensory signals generated by one's imagination are strong enough to change one's real-world perception of a different sensory modality" says Professor Henrik Ehrsson, the principle investigator behind the study.


How Smell Works

Smell is often our first response to stimuli. It alerts us to fire before we see flames. It makes us recoil before we taste rotten food. But although smell is a basic sense, it's also at the forefront of neurological research. Scientists are still exploring how, precisely, we pick up odorants, process them and interpret them as smells. Why are researchers, perfumers, developers and even government agencies so curious about smell? What makes a seemingly rudimentary sense so tantalizing?

Smell, like taste, is a chemical sense detected by sensory cells called chemoreceptors. When an odorant stimulates the chemoreceptors in the nose that detect smell, they pass on electrical impulses to the brain. The brain then interprets patterns in electrical activity as specific odors and olfactory sensation becomes perception -- something we can recognize as smell. The only other chemical system that can quickly identify, make sense of and memorize new molecules is the immune system.

But smell, more so than any other sense, is also intimately linked to the parts of the brain that process emotion and associative learning. The olfactory bulb in the brain, which sorts sensation into perception, is part of the limbic system -- a system that includes the amygdala and hippocampus, structures vital to our behavior, mood and memory. This link to the brain's emotional center makes smell a fascinating frontier in neuroscience, behavioral science and advertising.

In this article, we'll explore how humans perceive smell, how it triggers memory and the interesting (and sometimes unusual) ways to manipulate odor and olfactory perception.




How do we perceive acceleration?

Do you ever wonder why we're surrounded with things that help us do everything faster and faster and faster? Communicate faster, but also work faster, bank faster, travel faster, find a date faster, cook faster, clean faster and do all of it all at the same time? How do you feel about cramming even more into every waking hour?

Well, to my generation of Americans, speed feels like a birthright. Sometimes I think our minimum speed is Mach 3. Anything less, and we fear losing our competitive edge. But even my generation is starting to question whether we're the masters of speed or if speed is mastering us.

I'm an anthropologist at the Rand Corporation, and while many anthropologists study ancient cultures, I focus on modern day cultures and how we're adapting to all of this change happening in the world. Recently, I teamed up with an engineer, Seifu Chonde, to study speed. We were interested both in how people are adapting to this age of acceleration and its security and policy implications. What could our world look like in 25 years if the current pace of change keeps accelerating? What would it mean for transportation, or learning, communication, manufacturing, weaponry or even natural selection? Will a faster future make us more secure and productive? Or will it make us more vulnerable?

In our research, people accepted acceleration as inevitable, both the thrills and the lack of control. They fear that if they were to slow down, they might run the risk of becoming obsolete. They say they'd rather burn out than rust out. Yet at the same time, they worry that speed could erode their cultural traditions and their sense of home. But even people who are winning at the speed game admit to feeling a little uneasy. They see acceleration as widening the gap between the haves, the jet-setters who are buzzing around, and the have-nots, who are left in the digital dust.

Yes, we have good reason to forecast that the future will be faster, but what I've come to realize is that speed is paradoxical, and like all good paradoxes, it teaches us about the human experience, as absurd and complex as it is.

The first paradox is that we love speed, and we're thrilled by its intensity. But our prehistoric brains aren't really built for it, so we invent roller coasters and race cars and supersonic planes, but we get whiplash, carsick, jet-lagged. We didn't evolve to multitask. Rather, we evolved to do one thing with incredible focus, like hunt — not necessarily with great speed but with endurance for great distance. But now there's a widening gap between our biology and our lifestyles, a mismatch between what our bodies are built for and what we're making them do. It's a phenomenon my mentors have called "Stone Agers in the fast lane."

A second paradox of speed is that it can be measured objectively. Right? Miles per hour, gigabytes per second. But how speed feels, and whether we like it, is highly subjective. So we can document that the pace at which we are adopting new technologies is increasing. For example, it took 85 years from the introduction of the telephone to when the majority of Americans had phones at home. In contrast, it only took 13 years for most of us to have smartphones. And how people act and react to speed varies by culture and among different people within the same culture. Interactions that could be seen as pleasantly brisk and convenient in some cultures could be seen as horribly rude in others. I mean, you wouldn't go asking for a to-go cup at a Japanese tea ceremony so you could jet off to your next tourist stop. Would you?

A third paradox is that speed begets speed. The faster I respond, the more responses I get, the faster I have to respond again. Having more communication and information at our fingertips at any given moment was supposed to make decision-making easier and more rational. But that doesn't really seem to be happening.

Here's just one more paradox: If all of these faster technologies were supposed to free us from drudgery, why do we all feel so pressed for time? Why are we crashing our cars in record numbers, because we think we have to answer that text right away? Shouldn't life in the fast lane feel a little more fun and a little less anxious? German speakers even have a word for this: "Eilkrankheit." In English, that's "hurry sickness." When we have to make fast decisions, autopilot brain kicks in, and we rely on our learned behaviors, our reflexes, our cognitive biases, to help us perceive and respond quickly. Sometimes that saves our lives, right? Fight or flight. But sometimes, it leads us astray in the long run.

Oftentimes, when our society has major failures, they're not technological failures. They're failures that happen when we made decisions too quickly on autopilot. We didn't do the creative or critical thinking required to connect the dots or weed out false information or make sense of complexity. That kind of thinking can't be done fast. That's slow thinking. Two psychologists, Daniel Kahneman and Amos Tversky, started pointing this out back in 1974, and we're still struggling to do something with their insights.

All of modern history can be thought of as one spurt of acceleration after another. It's as if we think if we just speed up enough, we can outrun our problems. But we never do. We know this in our own lives, and policymakers know it, too. So now we're turning to artificial intelligence to help us make faster and smarter decisions to process this ever-expanding universe of data. But machines crunching data are no substitute for critical and sustained thinking by humans, whose Stone Age brains need a little time to let their impulses subside, to slow the mind and let the thoughts flow.

If you're starting to think that we should just hit the brakes, that won't always be the right solution. We all know that a train that's going too fast around a bend can derail, but Seifu, the engineer, taught me that a train that's going too slowly around a bend can also derail.

So managing this spurt of acceleration starts with the understanding that we have more control over speed than we think we do, individually and as a society. Sometimes, we'll need to engineer ourselves to go faster. We'll want to solve gridlock, speed up disaster relief for hurricane victims or use 3-D printing to produce what we need on the spot, just when we need it. Sometimes, though, we'll want to make our surroundings feel slower to engineer the crash out of the speedy experience. And it's OK not to be stimulated all the time. It's good for adults and for kids. Maybe it's boring, but it gives us time to reflect. Slow time is not wasted time.

And we need to reconsider what it means to save time. Culture and rituals around the world build in slowness, because slowness helps us reinforce our shared values and connect. And connection is a critical part of being human. We need to master speed, and that means thinking carefully about the trade-offs of any given technology. Will it help you reclaim time that you can use to express your humanity? Will it give you hurry sickness? Will it give other people hurry sickness? If you're lucky enough to decide the pace that you want to travel through life, it's a privilege. Use it. You might decide that you need both to speed up and to create slow time: time to reflect, to percolate at your own pace; time to listen, to empathize, to rest your mind, to linger at the dinner table.

So as we zoom into the future, let's consider setting the technologies of speed, the purpose of speed and our expectations of speed to a more human pace.



Although technological progress has been accelerating in most areas (though slowing in some), it has been limited by the basic intelligence of the human brain, which has not, according to Paul R. Ehrlich, changed significantly for millennia. [12] However, with the increasing power of computers and other technologies, it might eventually be possible to build a machine that is significantly more intelligent than humans. [13]

If a superhuman intelligence were to be invented—either through the amplification of human intelligence or through artificial intelligence—it would bring to bear greater problem-solving and inventive skills than current humans are capable of. Such an AI is referred to as Seed AI [14] [15] because if an AI were created with engineering capabilities that matched or surpassed those of its human creators, it would have the potential to autonomously improve its own software and hardware or design an even more capable machine. This more capable machine could then go on to design a machine of yet greater capability. These iterations of recursive self-improvement could accelerate, potentially allowing enormous qualitative change before any upper limits imposed by the laws of physics or theoretical computation set in. It is speculated that over many iterations, such an AI would far surpass human cognitive abilities.

Intelligence explosion is a possible outcome of humanity building artificial general intelligence (AGI). AGI may be capable of recursive self-improvement, leading to the rapid emergence of artificial superintelligence (ASI), the limits of which are unknown, shortly after technological singularity is achieved.

I. J. Good speculated in 1965 that artificial general intelligence might bring about an intelligence explosion. He speculated on the effects of superhuman machines, should they ever be invented: [16]

Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control.

Emergence of superintelligence

A superintelligence, hyperintelligence, or superhuman intelligence is a hypothetical agent that possesses intelligence far surpassing that of the brightest and most gifted human minds. "Superintelligence" may also refer to the form or degree of intelligence possessed by such an agent. John von Neumann, Vernor Vinge and Ray Kurzweil define the concept in terms of the technological creation of super intelligence, arguing that it is difficult or impossible for present-day humans to predict what human beings' lives would be like in a post-singularity world. [7] [17]

Technology forecasters and researchers disagree regarding when, or whether, human intelligence will likely be surpassed. Some argue that advances in artificial intelligence (AI) will probably result in general reasoning systems that lack human cognitive limitations. Others believe that humans will evolve or directly modify their biology so as to achieve radically greater intelligence. A number of futures studies scenarios combine elements from both of these possibilities, suggesting that humans are likely to interface with computers, or upload their minds to computers, in a way that enables substantial intelligence amplification.

Non-AI singularity

Some writers use "the singularity" in a broader way to refer to any radical changes in our society brought about by new technologies such as molecular nanotechnology, [18] [19] [20] although Vinge and other writers specifically state that without superintelligence, such changes would not qualify as a true singularity. [7]

Speed superintelligence

A speed superintelligence describes an AI that can do everything that a human can do, where the only difference is that the machine runs faster. [21] For example, with a million-fold increase in the speed of information processing relative to that of humans, a subjective year would pass in 30 physical seconds. [22] Such a difference in information processing speed could drive the singularity. [23]
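
As a back-of-the-envelope check of that figure (my arithmetic, not taken from the cited source), a million-fold speed-up does compress one subjective year into roughly 30 wall-clock seconds:

```python
# Back-of-the-envelope check: one subjective year for a mind running a
# million times faster than a human takes about 30 physical seconds.

SECONDS_PER_YEAR = 365.25 * 24 * 3600   # roughly 31.6 million seconds
SPEEDUP = 1_000_000

physical_seconds = SECONDS_PER_YEAR / SPEEDUP
print(f"One subjective year passes in about {physical_seconds:.1f} physical seconds")
```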

Many prominent technologists and academics dispute the plausibility of a technological singularity, including Paul Allen, Jeff Hawkins, John Holland, Jaron Lanier, and Gordon Moore, whose law is often cited in support of the concept. [24] [25] [26]

Most proposed methods for creating superhuman or transhuman minds fall into one of two categories: intelligence amplification of human brains and artificial intelligence. The many speculated ways to augment human intelligence include bioengineering, genetic engineering, nootropic drugs, AI assistants, direct brain–computer interfaces and mind uploading. These multiple paths to an intelligence explosion make a singularity more likely, as they would all have to fail for a singularity not to occur. [22]

Robin Hanson expressed skepticism of human intelligence augmentation, writing that once the "low-hanging fruit" of easy methods for increasing human intelligence have been exhausted, further improvements will become increasingly difficult to find. [27] Despite all of the speculated ways for amplifying human intelligence, non-human artificial intelligence (specifically seed AI) is the most popular option among the hypotheses that would advance the singularity. [ citation needed ]

Whether or not an intelligence explosion occurs depends on three factors. [28] The first accelerating factor is the new intelligence enhancements made possible by each previous improvement. Contrariwise, as the intelligences become more advanced, further advances will become more and more complicated, possibly overcoming the advantage of increased intelligence. Each improvement should beget at least one more improvement, on average, for movement towards singularity to continue. Finally, the laws of physics will eventually prevent any further improvements.

There are two logically independent, but mutually reinforcing, causes of intelligence improvements: increases in the speed of computation, and improvements to the algorithms used. [29] The former is predicted by Moore's Law and the forecasted improvements in hardware, [30] and is comparatively similar to previous technological advances. But some AI researchers [who?] believe software is more important than hardware. [31]

A 2017 email survey of authors with publications at the 2015 NeurIPS and ICML machine learning conferences asked about the chance of an intelligence explosion. Of the respondents, 12% said it was "quite likely", 17% said it was "likely", 21% said it was "about even", 24% said it was "unlikely" and 26% said it was "quite unlikely". [32]

Speed improvements

Both for human and artificial intelligence, hardware improvements increase the rate of future hardware improvements. Simply put, [33] Moore's Law suggests that if the first doubling of speed took 18 months, the second would take 18 subjective months, or 9 external months, then four months, two months, and so on towards a speed singularity. [34] An upper limit on speed may eventually be reached, although it is unclear how high this would be. Jeff Hawkins has stated that a self-improving computer system would inevitably run into upper limits on computing power: "in the end there are limits to how big and fast computers can run. We would end up in the same place; we'd just get there a bit faster. There would be no singularity." [35]
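
To make the arithmetic behind that argument explicit (a toy restatement, not a forecast), the external time between doublings halves each step, so the total external time converges to a finite limit of 36 months:

```python
# Toy model of the "speed singularity" arithmetic above: if each doubling of
# speed takes half as much external (wall-clock) time as the previous one,
# the total external time converges rather than growing without bound.

first_doubling_months = 18.0
total_months = 0.0
interval = first_doubling_months
for _ in range(30):              # 30 doublings is plenty to see convergence
    total_months += interval
    interval /= 2.0

print(f"External time after 30 doublings: {total_months:.4f} months")
print(f"Geometric-series limit: {2 * first_doubling_months:.0f} months")
```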

It is difficult to directly compare silicon-based hardware with neurons. But Berglas (2008) notes that computer speech recognition is approaching human capabilities, and that this capability seems to require 0.01% of the volume of the brain. This analogy suggests that modern computer hardware is within a few orders of magnitude of being as powerful as the human brain.

Exponential growth

The exponential growth in computing technology suggested by Moore's law is commonly cited as a reason to expect a singularity in the relatively near future, and a number of authors have proposed generalizations of Moore's law. Computer scientist and futurist Hans Moravec proposed in a 1998 book [36] that the exponential growth curve could be extended back through earlier computing technologies prior to the integrated circuit.

Ray Kurzweil postulates a law of accelerating returns in which the speed of technological change (and more generally, all evolutionary processes [37] ) increases exponentially, generalizing Moore's law in the same manner as Moravec's proposal, and also including material technology (especially as applied to nanotechnology), medical technology and others. [38] Between 1986 and 2007, machines' application-specific capacity to compute information per capita roughly doubled every 14 months; the per capita capacity of the world's general-purpose computers doubled every 18 months; the global telecommunication capacity per capita doubled every 34 months; and the world's storage capacity per capita doubled every 40 months. [39] On the other hand, it has been argued that the global acceleration pattern having the 21st century singularity as its parameter should be characterized as hyperbolic rather than exponential. [40]
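
For readers who prefer those figures as annual growth factors (a simple unit conversion of the quoted doubling times, not additional data from the cited study), a doubling time of d months corresponds to a factor of 2^(12/d) per year:

```python
# Convert the doubling times quoted above into approximate annual growth factors:
# growth factor per year = 2 ** (12 / doubling_time_in_months)

doubling_times_months = {
    "application-specific computation per capita": 14,
    "general-purpose computation per capita": 18,
    "telecommunication capacity per capita": 34,
    "storage capacity per capita": 40,
}

for name, months in doubling_times_months.items():
    factor = 2 ** (12 / months)
    print(f"{name}: x{factor:.2f} per year (doubles every {months} months)")
```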

Kurzweil reserves the term "singularity" for a rapid increase in artificial intelligence (as opposed to other technologies), writing for example that "The Singularity will allow us to transcend these limitations of our biological bodies and brains ... There will be no distinction, post-Singularity, between human and machine". [41] He also defines his predicted date of the singularity (2045) in terms of when he expects computer-based intelligences to significantly exceed the sum total of human brainpower, writing that advances in computing before that date "will not represent the Singularity" because they do "not yet correspond to a profound expansion of our intelligence." [42]

Accelerating change

Some singularity proponents argue its inevitability through extrapolation of past trends, especially those pertaining to shortening gaps between improvements to technology. In one of the first uses of the term "singularity" in the context of technological progress, Stanislaw Ulam tells of a conversation with John von Neumann about accelerating change:

One conversation centered on the ever accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue. [5]

Kurzweil claims that technological progress follows a pattern of exponential growth, following what he calls the "law of accelerating returns". Whenever technology approaches a barrier, Kurzweil writes, new technologies will surmount it. He predicts paradigm shifts will become increasingly common, leading to "technological change so rapid and profound it represents a rupture in the fabric of human history". [43] Kurzweil believes that the singularity will occur by approximately 2045. [38] His predictions differ from Vinge's in that he predicts a gradual ascent to the singularity, rather than Vinge's rapidly self-improving superhuman intelligence.

Oft-cited dangers include those commonly associated with molecular nanotechnology and genetic engineering. These threats are major issues for both singularity advocates and critics, and were the subject of Bill Joy's Wired magazine article "Why the future doesn't need us". [6] [44]

Algorithm improvements

Some intelligence technologies, like "seed AI", [14] [15] may also have the potential to not just make themselves faster, but also more efficient, by modifying their source code. These improvements would make further improvements possible, which would make further improvements possible, and so on.

The mechanism for a recursively self-improving set of algorithms differs from an increase in raw computation speed in two ways. First, it does not require external influence: machines designing faster hardware would still require humans to create the improved hardware, or to program factories appropriately. [ citation needed ] An AI rewriting its own source code could do so while contained in an AI box.

Second, as with Vernor Vinge’s conception of the singularity, it is much harder to predict the outcome. While speed increases seem to be only a quantitative difference from human intelligence, actual algorithm improvements would be qualitatively different. Eliezer Yudkowsky compares it to the changes that human intelligence brought: humans changed the world thousands of times more rapidly than evolution had done, and in totally different ways. Similarly, the evolution of life was a massive departure and acceleration from the previous geological rates of change, and improved intelligence could cause change to be as different again. [45]

There are substantial dangers associated with an intelligence explosion singularity originating from a recursively self-improving set of algorithms. First, the goal structure of the AI might not be invariant under self-improvement, potentially causing the AI to optimise for something other than what was originally intended. [46] [47] Secondly, AIs could compete for the same scarce resources humankind uses to survive. [48] [49]

While not actively malicious, there is no reason to think that AIs would actively promote human goals unless they could be programmed as such, and if not, might use the resources currently used to support humankind to promote its own goals, causing human extinction. [50] [51] [52]

Carl Shulman and Anders Sandberg suggest that algorithm improvements may be the limiting factor for a singularity; while hardware efficiency tends to improve at a steady pace, software innovations are more unpredictable and may be bottlenecked by serial, cumulative research. They suggest that in the case of a software-limited singularity, intelligence explosion would actually become more likely than with a hardware-limited singularity, because in the software-limited case, once human-level AI is developed, it could run serially on very fast hardware, and the abundance of cheap hardware would make AI research less constrained. [53] An abundance of accumulated hardware that can be unleashed once the software figures out how to use it has been called "computing overhang." [54]

Criticisms

Some critics, like philosopher Hubert Dreyfus, assert that computers or machines cannot achieve human intelligence, while others, like physicist Stephen Hawking, hold that the definition of intelligence is irrelevant if the net result is the same. [55]

Psychologist Steven Pinker stated in 2008:

... There is not the slightest reason to believe in a coming singularity. The fact that you can visualize a future in your imagination is not evidence that it is likely or even possible. Look at domed cities, jet-pack commuting, underwater cities, mile-high buildings, and nuclear-powered automobiles—all staples of futuristic fantasies when I was a child that have never arrived. Sheer processing power is not a pixie dust that magically solves all your problems. ... [24]

[Computers] have, literally ... no intelligence, no motivation, no autonomy, and no agency. We design them to behave as if they had certain sorts of psychology, but there is no psychological reality to the corresponding processes or behavior. ... [T]he machinery has no beliefs, desires, [or] motivations. [56]

Martin Ford in The Lights in the Tunnel: Automation, Accelerating Technology and the Economy of the Future [57] postulates a "technology paradox" in that before the singularity could occur most routine jobs in the economy would be automated, since this would require a level of technology inferior to that of the singularity. This would cause massive unemployment and plummeting consumer demand, which in turn would destroy the incentive to invest in the technologies that would be required to bring about the Singularity. Job displacement is increasingly no longer limited to work traditionally considered to be "routine." [58]

Theodore Modis [59] and Jonathan Huebner [60] argue that the rate of technological innovation has not only ceased to rise, but is actually now declining. Evidence for this decline is that the rise in computer clock rates is slowing, even while Moore's prediction of exponentially increasing circuit density continues to hold. This is due to excessive heat build-up from the chip, which cannot be dissipated quickly enough to prevent the chip from melting when operating at higher speeds. Advances in speed may be possible in the future by virtue of more power-efficient CPU designs and multi-cell processors. [61] While Kurzweil used Modis' resources, and Modis' work was around accelerating change, Modis distanced himself from Kurzweil's thesis of a "technological singularity", claiming that it lacks scientific rigor. [62]

In a detailed empirical accounting, The Progress of Computing, William Nordhaus argued that, prior to 1940, computers followed the much slower growth of a traditional industrial economy, thus rejecting extrapolations of Moore's law to 19th-century computers. [63]

In a 2007 paper, Schmidhuber stated that the frequency of subjectively "notable events" appears to be approaching a 21st-century singularity, but cautioned readers to take such plots of subjective events with a grain of salt: perhaps differences in memory of recent and distant events could create an illusion of accelerating change where none exists. [64]

Paul Allen argued the opposite of accelerating returns, the complexity brake: [26] the more progress science makes towards understanding intelligence, the more difficult it becomes to make additional progress. A study of the number of patents shows that human creativity does not show accelerating returns, but in fact, as suggested by Joseph Tainter in his The Collapse of Complex Societies, [65] a law of diminishing returns. The number of patents per thousand peaked in the period from 1850 to 1900, and has been declining since. [60] The growth of complexity eventually becomes self-limiting, and leads to a widespread "general systems collapse".

Jaron Lanier refutes the idea that the Singularity is inevitable. He states: "I do not think the technology is creating itself. It's not an autonomous process." [66] He goes on to assert: "The reason to believe in human agency over technological determinism is that you can then have an economy where people earn their own way and invent their own lives. If you structure a society on not emphasizing individual human agency, it's the same thing operationally as denying people clout, dignity, and self-determination ... to embrace [the idea of the Singularity] would be a celebration of bad data and bad politics." [66]

Economist Robert J. Gordon, in The Rise and Fall of American Growth: The U.S. Standard of Living Since the Civil War (2016), points out that measured economic growth has slowed around 1970 and slowed even further since the financial crisis of 2007–2008, and argues that the economic data show no trace of a coming Singularity as imagined by mathematician I.J. Good. [67]

In addition to general criticisms of the singularity concept, several critics have raised issues with Kurzweil's iconic chart. One line of criticism is that a log-log chart of this nature is inherently biased toward a straight-line result. Others identify selection bias in the points that Kurzweil chooses to use. For example, biologist PZ Myers points out that many of the early evolutionary "events" were picked arbitrarily. [68] Kurzweil has rebutted this by charting evolutionary events from 15 neutral sources, and showing that they fit a straight line on a log-log chart. The Economist mocked the concept with a graph extrapolating that the number of blades on a razor, which has increased over the years from one to as many as five, will increase ever-faster to infinity. [69]

Dramatic changes in the rate of economic growth have occurred in the past because of technological advancement. Based on population growth, the economy doubled every 250,000 years from the Paleolithic era until the Neolithic Revolution. The new agricultural economy doubled every 900 years, a remarkable increase. In the current era, beginning with the Industrial Revolution, the world's economic output doubles every fifteen years, sixty times faster than during the agricultural era. If the rise of superhuman intelligence causes a similar revolution, argues Robin Hanson, one would expect the economy to double at least quarterly and possibly on a weekly basis. [70]
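
To see how large those jumps are in annual terms (a simple conversion of the quoted doubling times, not additional data), a doubling time of T years implies a compound growth rate of 2^(1/T) - 1 per year:

```python
# Convert the quoted economic doubling times into implied annual growth rates:
# annual growth rate = 2 ** (1 / doubling_time_in_years) - 1

doubling_times_years = {
    "forager (Paleolithic) economy": 250_000,
    "agricultural economy": 900,
    "industrial-era economy": 15,
    "Hanson's hypothetical post-AI economy (quarterly doubling)": 0.25,
}

for era, years in doubling_times_years.items():
    rate = 2 ** (1 / years) - 1
    print(f"{era}: doubles every {years} years -> {rate:.4%} per year")
```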

Uncertainty and risk

The term "technological singularity" reflects the idea that such change may happen suddenly, and that it is difficult to predict how the resulting new world would operate. [71] [72] It is unclear whether an intelligence explosion resulting in a singularity would be beneficial or harmful, or even an existential threat. [73] [74] Because AI is a major factor in singularity risk, a number of organizations pursue a technical theory of aligning AI goal-systems with human values, including the Future of Humanity Institute, the Machine Intelligence Research Institute, [71] the Center for Human-Compatible Artificial Intelligence, and the Future of Life Institute.

Physicist Stephen Hawking said in 2014 that "Success in creating AI would be the biggest event in human history. Unfortunately, it might also be the last, unless we learn how to avoid the risks." [75] Hawking believed that in the coming decades, AI could offer "incalculable benefits and risks" such as "technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand." [75] Hawking suggested that artificial intelligence should be taken more seriously and that more should be done to prepare for the singularity: [75]

So, facing possible futures of incalculable benefits and risks, the experts are surely doing everything possible to ensure the best outcome, right? Wrong. If a superior alien civilisation sent us a message saying, "We'll arrive in a few decades," would we just reply, "OK, call us when you get here – we'll leave the lights on"? Probably not – but this is more or less what is happening with AI.

Berglas (2008) claims that there is no direct evolutionary motivation for an AI to be friendly to humans. Evolution has no inherent tendency to produce outcomes valued by humans, and there is little reason to expect an arbitrary optimisation process to promote an outcome desired by humankind, rather than inadvertently leading to an AI behaving in a way not intended by its creators. [76] [77] [78] Anders Sandberg has also elaborated on this scenario, addressing various common counter-arguments. [79] AI researcher Hugo de Garis suggests that artificial intelligences may simply eliminate the human race for access to scarce resources, [48] [80] and humans would be powerless to stop them. [81] Alternatively, AIs developed under evolutionary pressure to promote their own survival could outcompete humanity. [52]

Bostrom (2002) discusses human extinction scenarios, and lists superintelligence as a possible cause:

When we create the first superintelligent entity, we might make a mistake and give it goals that lead it to annihilate humankind, assuming its enormous intellectual advantage gives it the power to do so. For example, we could mistakenly elevate a subgoal to the status of a supergoal. We tell it to solve a mathematical problem, and it complies by turning all the matter in the solar system into a giant calculating device, in the process killing the person who asked the question.

According to Eliezer Yudkowsky, a significant problem in AI safety is that unfriendly artificial intelligence is likely to be much easier to create than friendly AI. While both require large advances in recursive optimisation process design, friendly AI also requires the ability to make goal structures invariant under self-improvement (or the AI could transform itself into something unfriendly) and a goal structure that aligns with human values and does not automatically destroy the human race. An unfriendly AI, on the other hand, can optimize for an arbitrary goal structure, which does not need to be invariant under self-modification. [82] Bill Hibbard (2014) proposes an AI design that avoids several dangers including self-delusion, [83] unintended instrumental actions, [46] [84] and corruption of the reward generator. [84] He also discusses social impacts of AI [85] and testing AI. [86] His 2001 book Super-Intelligent Machines advocates the need for public education about AI and public control over AI. It also proposed a simple design that was vulnerable to corruption of the reward generator.

Next step of sociobiological evolution

While the technological singularity is usually seen as a sudden event, some scholars argue the current speed of change already fits this description. [ citation needed ]

In addition, some argue that we are already in the midst of a major evolutionary transition that merges technology, biology, and society. Digital technology has infiltrated the fabric of human society to a degree of indisputable and often life-sustaining dependence.

A 2016 article in Trends in Ecology & Evolution argues that "humans already embrace fusions of biology and technology. We spend most of our waking time communicating through digitally mediated channels... we trust artificial intelligence with our lives through antilock braking in cars and autopilots in planes. With one in three marriages in America beginning online, digital algorithms are also taking a role in human pair bonding and reproduction".

The article further argues that from the perspective of evolution, several previous Major Transitions in Evolution have transformed life through innovations in information storage and replication (RNA, DNA, multicellularity, and culture and language). In the current stage of life's evolution, the carbon-based biosphere has generated a cognitive system (humans) capable of creating technology that will result in a comparable evolutionary transition.

The digital information created by humans has reached a similar magnitude to biological information in the biosphere. Since the 1980s, the quantity of digital information stored has doubled about every 2.5 years, reaching about 5 zettabytes in 2014 (5 × 10^21 bytes). [88]

In biological terms, there are 7.2 billion humans on the planet, each having a genome of 6.2 billion nucleotides. Since one byte can encode four nucleotide pairs, the individual genomes of every human on the planet could be encoded by approximately 1 × 10^19 bytes. The digital realm stored 500 times more information than this in 2014. The total amount of DNA contained in all of the cells on Earth is estimated to be about 5.3 × 10^37 base pairs, equivalent to 1.325 × 10^37 bytes of information.

If growth in digital storage continues at its current rate of 30–38% compound annual growth per year, [39] it will rival the total information content contained in all of the DNA in all of the cells on Earth in about 110 years. This would represent a doubling of the amount of information stored in the biosphere across a total time period of just 150 years. [87]
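
A rough re-derivation of that 110-year figure, using only the numbers quoted above (the exact answer depends on which growth rate in the 30–38% range is assumed):

```python
import math

# Rough check of the "about 110 years" claim using the figures quoted above.
digital_2014_bytes = 5e21        # about 5 zettabytes of stored digital information in 2014
dna_on_earth_bytes = 1.325e37    # about 5.3e37 base pairs at 4 base pairs per byte

for annual_growth in (0.30, 0.34, 0.38):
    years = math.log(dna_on_earth_bytes / digital_2014_bytes) / math.log(1 + annual_growth)
    print(f"At {annual_growth:.0%} growth per year: about {years:.0f} years to match Earth's DNA")
```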

Implications for human society

In February 2009, under the auspices of the Association for the Advancement of Artificial Intelligence (AAAI), Eric Horvitz chaired a meeting of leading computer scientists, artificial intelligence researchers and roboticists at Asilomar in Pacific Grove, California. The goal was to discuss the potential impact of the hypothetical possibility that robots could become self-sufficient and able to make their own decisions. They discussed the extent to which computers and robots might be able to acquire autonomy, and to what degree they could use such abilities to pose threats or hazards. [89]

Some machines are programmed with various forms of semi-autonomy, including the ability to locate their own power sources and choose targets to attack with weapons. Also, some computer viruses can evade elimination and, according to scientists in attendance, could therefore be said to have reached a "cockroach" stage of machine intelligence. The conference attendees noted that self-awareness as depicted in science-fiction is probably unlikely, but that other potential hazards and pitfalls exist. [89]

Frank S. Robinson predicts that once humans achieve a machine with the intelligence of a human, scientific and technological problems will be tackled and solved with brainpower far superior to that of humans. He notes that artificial systems are able to share data more directly than humans, and predicts that this would result in a global network of super-intelligence that would dwarf human capability. [90] Robinson also discusses how vastly different the future would potentially look after such an intelligence explosion. One example is solar energy: the Earth receives vastly more solar energy than humanity currently captures, so capturing more of that energy would hold vast promise for civilizational growth.

In a hard takeoff scenario, an AGI rapidly self-improves, "taking control" of the world (perhaps in a matter of hours), too quickly for significant human-initiated error correction or for a gradual tuning of the AGI's goals. In a soft takeoff scenario, AGI still becomes far more powerful than humanity, but at a human-like pace (perhaps on the order of decades), on a timescale where ongoing human interaction and correction can effectively steer the AGI's development. [92] [93]

Ramez Naam argues against a hard takeoff. He has pointed out that we already see recursive self-improvement by superintelligences, such as corporations. Intel, for example, has "the collective brainpower of tens of thousands of humans and probably millions of CPU cores to... design better CPUs!" However, this has not led to a hard takeoff; rather, it has led to a soft takeoff in the form of Moore's law. [94] Naam further points out that the computational complexity of higher intelligence may be much greater than linear, such that "creating a mind of intelligence 2 is probably more than twice as hard as creating a mind of intelligence 1." [95]

J. Storrs Hall believes that "many of the more commonly seen scenarios for overnight hard takeoff are circular – they seem to assume hyperhuman capabilities at the starting point of the self-improvement process" in order for an AI to be able to make the dramatic, domain-general improvements required for takeoff. Hall suggests that rather than recursively self-improving its hardware, software, and infrastructure all on its own, a fledgling AI would be better off specializing in one area where it was most effective and then buying the remaining components on the marketplace, because the quality of products on the marketplace continually improves, and the AI would have a hard time keeping up with the cutting-edge technology used by the rest of the world. [96]

Ben Goertzel agrees with Hall's suggestion that a new human-level AI would do well to use its intelligence to accumulate wealth. The AI's talents might inspire companies and governments to disperse its software throughout society. Goertzel is skeptical of a hard five-minute takeoff but speculates that a takeoff from human to superhuman level on the order of five years is reasonable. Goertzel refers to this scenario as a "semihard takeoff". [97]

Max More disagrees, arguing that if there were only a few superfast human-level AIs, they would not radically change the world, as they would still depend on other people to get things done and would still have human cognitive constraints. Even if all superfast AIs worked on intelligence augmentation, it is unclear why they would do better in a discontinuous way than existing human cognitive scientists at producing super-human intelligence, although the rate of progress would increase. More further argues that a superintelligence would not transform the world overnight: a superintelligence would need to engage with existing, slow human systems to accomplish physical impacts on the world. "The need for collaboration, for organization, and for putting ideas into physical changes will ensure that all the old rules are not thrown out overnight or even within years." [98]

In his 2005 book, The Singularity is Near, Kurzweil suggests that medical advances would allow people to protect their bodies from the effects of aging, making life expectancy limitless. Kurzweil argues that the technological advances in medicine would allow us to continuously repair and replace defective components in our bodies, prolonging life to an undetermined age. [99] Kurzweil further buttresses his argument by discussing current bio-engineering advances. Kurzweil suggests somatic gene therapy; after synthetic viruses with specific genetic information, the next step would be to apply this technology to gene therapy, replacing human DNA with synthesized genes. [100]

K. Eric Drexler, one of the founders of nanotechnology, postulated cell repair devices, including ones operating within cells and utilizing as yet hypothetical biological machines, in his 1986 book Engines of Creation.

According to Richard Feynman, it was his former graduate student and collaborator Albert Hibbs who originally suggested to him (circa 1959) the idea of a medical use for Feynman's theoretical micromachines. Hibbs suggested that certain repair machines might one day be reduced in size to the point that it would, in theory, be possible to (as Feynman put it) "swallow the doctor". The idea was incorporated into Feynman's 1959 essay There's Plenty of Room at the Bottom. [101]

Beyond merely extending the operational life of the physical body, Jaron Lanier argues for a form of immortality called "Digital Ascension" that involves "people dying in the flesh and being uploaded into a computer and remaining conscious." [102]

A paper by Mahendra Prasad, published in AI Magazine, asserts that the 18th-century mathematician Marquis de Condorcet was the first person to hypothesize and mathematically model an intelligence explosion and its effects on humanity. [103]

An early description of the idea was made in John Wood Campbell Jr.'s 1932 short story "The Last Evolution".

In his 1958 obituary for John von Neumann, Ulam recalled a conversation with von Neumann about the "ever accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue." [5]

In 1965, Good wrote his essay postulating an "intelligence explosion" of recursive self-improvement of a machine intelligence.

In 1981, Stanisław Lem published his science fiction novel Golem XIV. It describes a military AI computer (Golem XIV) who obtains consciousness and starts to increase his own intelligence, moving towards personal technological singularity. Golem XIV was originally created to aid its builders in fighting wars, but as its intelligence advances to a much higher level than that of humans, it stops being interested in the military requirements because it finds them lacking internal logical consistency.

In 1983, Vernor Vinge greatly popularized Good's intelligence explosion in a number of writings, first addressing the topic in print in the January 1983 issue of Omni magazine. In this op-ed piece, Vinge seems to have been the first to use the term "singularity" in a way that was specifically tied to the creation of intelligent machines: [104] [105]

We will soon create intelligences greater than our own. When this happens, human history will have reached a kind of singularity, an intellectual transition as impenetrable as the knotted space-time at the center of a black hole, and the world will pass far beyond our understanding. This singularity, I believe, already haunts a number of science-fiction writers. It makes realistic extrapolation to an interstellar future impossible. To write a story set more than a century hence, one needs a nuclear war in between ... so that the world remains intelligible.

In 1985, in "The Time Scale of Artificial Intelligence", artificial intelligence researcher Ray Solomonoff articulated mathematically the related notion of what he called an "infinity point": if a research community of human-level self-improving AIs takes four years to double its own speed, then two years, then one year and so on, its capabilities increase infinitely in finite time. [6] [106]
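A quick back-of-the-envelope restatement (my own illustration, not Solomonoff's notation) shows why the capability growth escapes any finite bound in finite time: the doubling intervals form a geometric series,

$$ T_{\text{total}} = 4 + 2 + 1 + \tfrac{1}{2} + \dots = \sum_{k=0}^{\infty} \frac{4}{2^{k}} = 8 \text{ years}, $$

so arbitrarily many speed doublings are packed into roughly eight years.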

Vinge's 1993 article "The Coming Technological Singularity: How to Survive in the Post-Human Era", [7] spread widely on the internet and helped to popularize the idea. [107] This article contains the statement, "Within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended." Vinge argues that science-fiction authors cannot write realistic post-singularity characters who surpass the human intellect, as the thoughts of such an intellect would be beyond the ability of humans to express. [7]

In 2000, Bill Joy, a prominent technologist and a co-founder of Sun Microsystems, voiced concern over the potential dangers of the singularity. [44]

In 2005, Kurzweil published The Singularity is Near. Kurzweil's publicity campaign included an appearance on The Daily Show with Jon Stewart. [108]

In 2007, Eliezer Yudkowsky suggested that many of the varied definitions that have been assigned to "singularity" are mutually incompatible rather than mutually supporting. [19] [109] For example, Kurzweil extrapolates current technological trajectories past the arrival of self-improving AI or superhuman intelligence, which Yudkowsky argues represents a tension with both I. J. Good's proposed discontinuous upswing in intelligence and Vinge's thesis on unpredictability. [19]

In 2009, Kurzweil and X-Prize founder Peter Diamandis announced the establishment of Singularity University, a nonaccredited private institute whose stated mission is "to educate, inspire and empower leaders to apply exponential technologies to address humanity's grand challenges." [110] Funded by Google, Autodesk, ePlanet Ventures, and a group of technology industry leaders, Singularity University is based at NASA's Ames Research Center in Mountain View, California. The not-for-profit organization runs an annual ten-week graduate program during summer that covers ten different technology and allied tracks, and a series of executive programs throughout the year.

In 2007, the Joint Economic Committee of the United States Congress released a report about the future of nanotechnology. It predicts significant technological and political changes in the mid-term future, including possible technological singularity. [111] [112] [113]

Former President of the United States Barack Obama spoke about singularity in his interview to Wired in 2016: [114]

One thing that we haven't talked about too much, and I just want to go back to, is we really have to think through the economic implications. Because most people aren't spending a lot of time right now worrying about singularity—they are worrying about "Well, is my job going to be replaced by a machine?"


3 The Relation of Physics to Other Sciences

(There was no summary for this lecture.)

3–1 Introduction

Physics is the most fundamental and all-inclusive of the sciences, and has had a profound effect on all scientific development. In fact, physics is the present-day equivalent of what used to be called natural philosophy, from which most of our modern sciences arose. Students of many fields find themselves studying physics because of the basic role it plays in all phenomena. In this chapter we shall try to explain what the fundamental problems in the other sciences are, but of course it is impossible in so small a space really to deal with the complex, subtle, beautiful matters in these other fields. Lack of space also prevents our discussing the relation of physics to engineering, industry, society, and war, or even the most remarkable relationship between mathematics and physics. (Mathematics is not a science from our point of view, in the sense that it is not a natural science. The test of its validity is not experiment.) We must, incidentally, make it clear from the beginning that if a thing is not a science, it is not necessarily bad. For example, love is not a science. So, if something is said not to be a science, it does not mean that there is something wrong with it; it just means that it is not a science.

3–2 Chemistry

The science which is perhaps the most deeply affected by physics is chemistry. Historically, the early days of chemistry dealt almost entirely with what we now call inorganic chemistry, the chemistry of substances which are not associated with living things. Considerable analysis was required to discover the existence of the many elements and their relationships—how they make the various relatively simple compounds found in rocks, earth, etc. This early chemistry was very important for physics. The interaction between the two sciences was very great because the theory of atoms was substantiated to a large extent by experiments in chemistry. The theory of chemistry, i.e., of the reactions themselves, was summarized to a large extent in the periodic chart of Mendeleev, which brings out many strange relationships among the various elements, and it was the collection of rules as to which substance is combined with which, and how, that constituted inorganic chemistry. All these rules were ultimately explained in principle by quantum mechanics, so that theoretical chemistry is in fact physics. On the other hand, it must be emphasized that this explanation is in principle. We have already discussed the difference between knowing the rules of the game of chess, and being able to play. So it is that we may know the rules, but we cannot play very well. It turns out to be very difficult to predict precisely what will happen in a given chemical reaction; nevertheless, the deepest part of theoretical chemistry must end up in quantum mechanics.

There is also a branch of physics and chemistry which was developed by both sciences together, and which is extremely important. This is the method of statistics applied in a situation in which there are mechanical laws, which is aptly called statistical mechanics. In any chemical situation a large number of atoms are involved, and we have seen that the atoms are all jiggling around in a very random and complicated way. If we could analyze each collision, and be able to follow in detail the motion of each molecule, we might hope to figure out what would happen, but the many numbers needed to keep track of all these molecules exceeds so enormously the capacity of any computer, and certainly the capacity of the mind, that it was important to develop a method for dealing with such complicated situations. Statistical mechanics, then, is the science of the phenomena of heat, or thermodynamics. Inorganic chemistry is, as a science, now reduced essentially to what are called physical chemistry and quantum chemistry: physical chemistry to study the rates at which reactions occur and what is happening in detail (How do the molecules hit? Which pieces fly off first?, etc.), and quantum chemistry to help us understand what happens in terms of the physical laws.

The other branch of chemistry is organic chemistry, the chemistry of the substances which are associated with living things. For a time it was believed that the substances which are associated with living things were so marvelous that they could not be made by hand, from inorganic materials. This is not at all true—they are just the same as the substances made in inorganic chemistry, but more complicated arrangements of atoms are involved. Organic chemistry obviously has a very close relationship to the biology which supplies its substances, and to industry, and furthermore, much physical chemistry and quantum mechanics can be applied to organic as well as to inorganic compounds. However, the main problems of organic chemistry are not in these aspects, but rather in the analysis and synthesis of the substances which are formed in biological systems, in living things. This leads imperceptibly, in steps, toward biochemistry, and then into biology itself, or molecular biology.

3–3 Biology

Thus we come to the science of biology, which is the study of living things. In the early days of biology, the biologists had to deal with the purely descriptive problem of finding out what living things there were, and so they just had to count such things as the hairs of the limbs of fleas. After these matters were worked out with a great deal of interest, the biologists went into the machinery inside the living bodies, first from a gross standpoint, naturally, because it takes some effort to get into the finer details.

There was an interesting early relationship between physics and biology in which biology helped physics in the discovery of the conservation of energy, which was first demonstrated by Mayer in connection with the amount of heat taken in and given out by a living creature.

If we look at the processes of biology of living animals more closely, we see many physical phenomena: the circulation of blood, pumps, pressure, etc. There are nerves: we know what is happening when we step on a sharp stone, and that somehow or other the information goes from the leg up. It is interesting how that happens. In their study of nerves, the biologists have come to the conclusion that nerves are very fine tubes with a complex wall which is very thin; through this wall the cell pumps ions, so that there are positive ions on the outside and negative ions on the inside, like a capacitor. Now this membrane has an interesting property; if it “discharges” in one place, i.e., if some of the ions were able to move through one place, so that the electric voltage is reduced there, that electrical influence makes itself felt on the ions in the neighborhood, and it affects the membrane in such a way that it lets the ions through at neighboring points also. This in turn affects it farther along, etc., and so there is a wave of “penetrability” of the membrane which runs down the fiber when it is “excited” at one end by stepping on the sharp stone. This wave is somewhat analogous to a long sequence of vertical dominoes; if the end one is pushed over, that one pushes the next, etc. Of course this will transmit only one message unless the dominoes are set up again; and similarly in the nerve cell, there are processes which pump the ions slowly out again, to get the nerve ready for the next impulse. So it is that we know what we are doing (or at least where we are). Of course the electrical effects associated with this nerve impulse can be picked up with electrical instruments, and because there are electrical effects, obviously the physics of electrical effects has had a great deal of influence on understanding the phenomenon.

The opposite effect is that, from somewhere in the brain, a message is sent out along a nerve. What happens at the end of the nerve? There the nerve branches out into fine little things, connected to a structure near a muscle, called an endplate. For reasons which are not exactly understood, when the impulse reaches the end of the nerve, little packets of a chemical called acetylcholine are shot off (five or ten molecules at a time) and they affect the muscle fiber and make it contract—how simple! What makes a muscle contract? A muscle is a very large number of fibers close together, containing two different substances, myosin and actomyosin, but the machinery by which the chemical reaction induced by acetylcholine can modify the dimensions of the muscle is not yet known. Thus the fundamental processes in the muscle that make mechanical motions are not known.

Biology is such an enormously wide field that there are hosts of other problems that we cannot mention at all—problems on how vision works (what the light does in the eye), how hearing works, etc. (The way in which thinking works we shall discuss later under psychology.) Now, these things concerning biology which we have just discussed are, from a biological standpoint, really not fundamental, at the bottom of life, in the sense that even if we understood them we still would not understand life itself. To illustrate: the men who study nerves feel their work is very important, because after all you cannot have animals without nerves. But you can have life without nerves. Plants have neither nerves nor muscles, but they are working, they are alive, just the same. So for the fundamental problems of biology we must look deeper; when we do, we discover that all living things have a great many characteristics in common. The most common feature is that they are made of cells, within each of which is complex machinery for doing things chemically. In plant cells, for example, there is machinery for picking up light and generating glucose, which is consumed in the dark to keep the plant alive. When the plant is eaten the glucose itself generates in the animal a series of chemical reactions very closely related to photosynthesis (and its opposite effect in the dark) in plants.

In the cells of living systems there are many elaborate chemical reactions, in which one compound is changed into another and another. To give some impression of the enormous efforts that have gone into the study of biochemistry, the chart in Fig. 3–1 summarizes our knowledge to date on just one small part of the many series of reactions which occur in cells, perhaps a percent or so of it.

Here we see a whole series of molecules which change from one to another in a sequence or cycle of rather small steps. It is called the Krebs cycle, the respiratory cycle. Each of the chemicals and each of the steps is fairly simple, in terms of what change is made in the molecule, but—and this is a centrally important discovery in biochemistry—these changes are relatively difficult to accomplish in a laboratory. If we have one substance and another very similar substance, the one does not just turn into the other, because the two forms are usually separated by an energy barrier or “hill.” Consider this analogy: If we wanted to take an object from one place to another, at the same level but on the other side of a hill, we could push it over the top, but to do so requires the addition of some energy. Thus most chemical reactions do not occur, because there is what is called an activation energy in the way. In order to add an extra atom to our chemical requires that we get it close enough that some rearrangement can occur; then it will stick. But if we cannot give it enough energy to get it close enough, it will not go to completion, it will just go part way up the “hill” and back down again. However, if we could literally take the molecules in our hands and push and pull the atoms around in such a way as to open a hole to let the new atom in, and then let it snap back, we would have found another way, around the hill, which would not require extra energy, and the reaction would go easily. Now there actually are, in the cells, very large molecules, much larger than the ones whose changes we have been describing, which in some complicated way hold the smaller molecules just right, so that the reaction can occur easily. These very large and complicated things are called enzymes. (They were first called ferments, because they were originally discovered in the fermentation of sugar. In fact, some of the first reactions in the cycle were discovered there.) In the presence of an enzyme the reaction will go.

An enzyme is made of another substance called protein. Enzymes are very big and complicated, and each one is different, each being built to control a certain special reaction. The names of the enzymes are written in Fig. 3–1 at each reaction. (Sometimes the same enzyme may control two reactions.) We emphasize that the enzymes themselves are not involved in the reaction directly. They do not change; they merely let an atom go from one place to another. Having done so, the enzyme is ready to do it to the next molecule, like a machine in a factory. Of course, there must be a supply of certain atoms and a way of disposing of other atoms. Take hydrogen, for example: there are enzymes which have special units on them which carry the hydrogen for all chemical reactions. For example, there are three or four hydrogen-reducing enzymes which are used all over our cycle in different places. It is interesting that the machinery which liberates some hydrogen at one place will take that hydrogen and use it somewhere else.

The most important feature of the cycle of Fig. 3–1 is the transformation from GDP to GTP (guanosine-di-phosphate to guanosine-tri-phosphate) because the one substance has much more energy in it than the other. Just as there is a “box” in certain enzymes for carrying hydrogen atoms around, there are special energy-carrying “boxes” which involve the triphosphate group. So, GTP has more energy than GDP and if the cycle is going one way, we are producing molecules which have extra energy and which can go drive some other cycle which requires energy, for example the contraction of muscle. The muscle will not contract unless there is GTP. We can take muscle fiber, put it in water, and add GTP, and the fibers contract, changing GTP to GDP if the right enzymes are present. So the real system is in the GDP-GTP transformation; in the dark the GTP which has been stored up during the day is used to run the whole cycle around the other way. An enzyme, you see, does not care in which direction the reaction goes, for if it did it would violate one of the laws of physics.

Physics is of great importance in biology and other sciences for still another reason, that has to do with experimental techniques. In fact, if it were not for the great development of experimental physics, these biochemistry charts would not be known today. The reason is that the most useful tool of all for analyzing this fantastically complex system is to label the atoms which are used in the reactions. Thus, if we could introduce into the cycle some carbon dioxide which has a “green mark” on it, and then measure after three seconds where the green mark is, and again measure after ten seconds, etc., we could trace out the course of the reactions. What are the “green marks”? They are different isotopes. We recall that the chemical properties of atoms are determined by the number of electrons, not by the mass of the nucleus. But there can be, for example in carbon, six neutrons or seven neutrons, together with the six protons which all carbon nuclei have. Chemically, the two atoms C$^{12}$ and C$^{13}$ are the same, but they differ in weight and they have different nuclear properties, and so they are distinguishable. By using these isotopes of different weights, or even radioactive isotopes like C$^{14}$, which provide a more sensitive means for tracing very small quantities, it is possible to trace the reactions.

Now, we return to the description of enzymes and proteins. Not all proteins are enzymes, but all enzymes are proteins. There are many proteins, such as the proteins in muscle, the structural proteins which are, for example, in cartilage and hair, skin, etc., that are not themselves enzymes. However, proteins are a very characteristic substance of life: first of all they make up all the enzymes, and second, they make up much of the rest of living material. Proteins have a very interesting and simple structure. They are a series, or chain, of different amino acids. There are twenty different amino acids, and they all can combine with each other to form chains in which the backbone is CO-NH, etc. Proteins are nothing but chains of various ones of these twenty amino acids. Each of the amino acids probably serves some special purpose. Some, for example, have a sulfur atom at a certain place; when two sulfur atoms are in the same protein, they form a bond, that is, they tie the chain together at two points and form a loop. Another has extra oxygen atoms which make it an acidic substance, another has a basic characteristic. Some of them have big groups hanging out to one side, so that they take up a lot of space. One of the amino acids, called proline, is not really an amino acid, but imino acid. There is a slight difference, with the result that when proline is in the chain, there is a kink in the chain. If we wished to manufacture a particular protein, we would give these instructions: put one of those sulfur hooks here; next, add something to take up space; then attach something to put a kink in the chain. In this way, we will get a complicated-looking chain, hooked together and having some complex structure; this is presumably just the manner in which all the various enzymes are made. One of the great triumphs in recent times (since 1960), was at last to discover the exact spatial atomic arrangement of certain proteins, which involve some fifty-six or sixty amino acids in a row. Over a thousand atoms (more nearly two thousand, if we count the hydrogen atoms) have been located in a complex pattern in two proteins. The first was hemoglobin. One of the sad aspects of this discovery is that we cannot see anything from the pattern; we do not understand why it works the way it does. Of course, that is the next problem to be attacked.

Another problem is how do the enzymes know what to be? A red-eyed fly makes a red-eyed fly baby, and so the information for the whole pattern of enzymes to make red pigment must be passed from one fly to the next. This is done by a substance in the nucleus of the cell, not a protein, called DNA (short for desoxyribose nucleic acid). This is the key substance which is passed from one cell to another (for instance sperm cells consist mostly of DNA) and carries the information as to how to make the enzymes. DNA is the “blueprint.” What does the blueprint look like and how does it work? First, the blueprint must be able to reproduce itself. Secondly, it must be able to instruct the protein. Concerning the reproduction, we might think that this proceeds like cell reproduction. Cells simply grow bigger and then divide in half. Must it be thus with DNA molecules, then, that they too grow bigger and divide in half? Every atom certainly does not grow bigger and divide in half! No, it is impossible to reproduce a molecule except by some more clever way.

The structure of the substance DNA was studied for a long time, first chemically to find the composition, and then with x-rays to find the pattern in space. The result was the following remarkable discovery: The DNA molecule is a pair of chains, twisted upon each other. The backbone of each of these chains, which are analogous to the chains of proteins but chemically quite different, is a series of sugar and phosphate groups, as shown in Fig. 3–2. Now we see how the chain can contain instructions, for if we could split this chain down the middle, we would have a series $BAADC\ldots$ and every living thing could have a different series. Thus perhaps, in some way, the specific instructions for the manufacture of proteins are contained in the specific series of the DNA.

Attached to each sugar along the line, and linking the two chains together, are certain pairs of cross-links. However, they are not all of the same kind; there are four kinds, called adenine, thymine, cytosine, and guanine, but let us call them $A$, $B$, $C$, and $D$. The interesting thing is that only certain pairs can sit opposite each other, for example $A$ with $B$ and $C$ with $D$. These pairs are put on the two chains in such a way that they “fit together,” and have a strong energy of interaction. However, $C$ will not fit with $A$, and $B$ will not fit with $C$; they will only fit in pairs, $A$ against $B$ and $C$ against $D$. Therefore if one is $C$, the other must be $D$, etc. Whatever the letters may be in one chain, each one must have its specific complementary letter on the other chain.

What then about reproduction? Suppose we split this chain in two. How can we make another one just like it? If, in the substances of the cells, there is a manufacturing department which brings up phosphate, sugar, and $A$, $B$, $C$, $D$ units not connected in a chain, the only ones which will attach to our split chain will be the correct ones, the complements of $BAADC\ldots$, namely, $ABBCD\ldots$ Thus what happens is that the chain splits down the middle during cell division, one half ultimately to go with one cell, the other half to end up in the other cell; when separated, a new complementary chain is made by each half-chain.
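To make the pairing rule concrete, here is a toy sketch (my own illustration, keeping the lecture's $A$, $B$, $C$, $D$ labels rather than the real base names) of how a split half-chain dictates its complement:

```python
# Toy illustration of complementary copying with the lecture's A/B/C/D labels:
# A pairs only with B, and C pairs only with D.
PAIR = {"A": "B", "B": "A", "C": "D", "D": "C"}

def complement(half_chain: str) -> str:
    """Return the only chain that can assemble against the given half-chain."""
    return "".join(PAIR[unit] for unit in half_chain)

assert complement("BAADC") == "ABBCD"              # the example series from the text
assert complement(complement("BAADC")) == "BAADC"  # copying the copy restores the original
```

Running the complement twice restores the original series, which is the sense in which each half-chain carries the full instructions.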

Next comes the question, precisely how does the order of the $A$, $B$, $C$, $D$ units determine the arrangement of the amino acids in the protein? This is the central unsolved problem in biology today. The first clues, or pieces of information, however, are these: There are in the cell tiny particles called ribosomes, and it is now known that that is the place where proteins are made. But the ribosomes are not in the nucleus, where the DNA and its instructions are. Something seems to be the matter. However, it is also known that little molecule pieces come off the DNA—not as long as the big DNA molecule that carries all the information itself, but like a small section of it. This is called RNA, but that is not essential. It is a kind of copy of the DNA, a short copy. The RNA, which somehow carries a message as to what kind of protein to make, goes over to the ribosome; that is known. When it gets there, protein is synthesized at the ribosome. That is also known. However, the details of how the amino acids come in and are arranged in accordance with a code that is on the RNA are, as yet, still unknown. We do not know how to read it. If we knew, for example, the “lineup” $A$, $B$, $C$, $C$, $A$, we could not tell you what protein is to be made.

Certainly no subject or field is making more progress on so many fronts at the present moment, than biology, and if we were to name the most powerful assumption of all, which leads one on and on in an attempt to understand life, it is that all things are made of atoms, and that everything that living things do can be understood in terms of the jigglings and wigglings of atoms.

3–4 Astronomy

In this rapid-fire explanation of the whole world, we must now turn to astronomy. Astronomy is older than physics. In fact, it got physics started by showing the beautiful simplicity of the motion of the stars and planets, the understanding of which was the beginning of physics. But the most remarkable discovery in all of astronomy is that the stars are made of atoms of the same kind as those on the earth. How was this done? Atoms liberate light which has definite frequencies, something like the timbre of a musical instrument, which has definite pitches or frequencies of sound. When we are listening to several different tones we can tell them apart, but when we look with our eyes at a mixture of colors we cannot tell the parts from which it was made, because the eye is nowhere near as discerning as the ear in this connection. However, with a spectroscope we can analyze the frequencies of the light waves and in this way we can see the very tunes of the atoms that are in the different stars. As a matter of fact, two of the chemical elements were discovered on a star before they were discovered on the earth. Helium was discovered on the sun, whence its name, and technetium was discovered in certain cool stars. This, of course, permits us to make headway in understanding the stars, because they are made of the same kinds of atoms which are on the earth. Now we know a great deal about the atoms, especially concerning their behavior under conditions of high temperature but not very great density, so that we can analyze by statistical mechanics the behavior of the stellar substance. Even though we cannot reproduce the conditions on the earth, using the basic physical laws we often can tell precisely, or very closely, what will happen. So it is that physics aids astronomy. Strange as it may seem, we understand the distribution of matter in the interior of the sun far better than we understand the interior of the earth. What goes on inside a star is better understood than one might guess from the difficulty of having to look at a little dot of light through a telescope, because we can calculate what the atoms in the stars should do in most circumstances.

One of the most impressive discoveries was the origin of the energy of the stars, that makes them continue to burn. One of the men who discovered this was out with his girlfriend the night after he realized that nuclear reactions must be going on in the stars in order to make them shine. She said “Look at how pretty the stars shine!” He said “Yes, and right now I am the only man in the world who knows why they shine.” She merely laughed at him. She was not impressed with being out with the only man who, at that moment, knew why stars shine. Well, it is sad to be alone, but that is the way it is in this world.

It is the nuclear “burning” of hydrogen which supplies the energy of the sun; the hydrogen is converted into helium. Furthermore, ultimately, the manufacture of various chemical elements proceeds in the centers of the stars, from hydrogen. The stuff of which we are made, was “cooked” once, in a star, and spit out. How do we know? Because there is a clue. The proportion of the different isotopes—how much C$^{12}$, how much C$^{13}$, etc., is something which is never changed by chemical reactions, because the chemical reactions are so much the same for the two. The proportions are purely the result of nuclear reactions. By looking at the proportions of the isotopes in the cold, dead ember which we are, we can discover what the furnace was like in which the stuff of which we are made was formed. That furnace was like the stars, and so it is very likely that our elements were “made” in the stars and spit out in the explosions which we call novae and supernovae. Astronomy is so close to physics that we shall study many astronomical things as we go along.

3–5 Geology

We turn now to what are called earth sciences, or geology. First, meteorology and the weather. Of course the instruments of meteorology are physical instruments, and the development of experimental physics made these instruments possible, as was explained before. However, the theory of meteorology has never been satisfactorily worked out by the physicist. “Well,” you say, “there is nothing but air, and we know the equations of the motions of air.” Yes we do. “So if we know the condition of air today, why can’t we figure out the condition of the air tomorrow?” First, we do not really know what the condition is today, because the air is swirling and twisting everywhere. It turns out to be very sensitive, and even unstable. If you have ever seen water run smoothly over a dam, and then turn into a large number of blobs and drops as it falls, you will understand what I mean by unstable. You know the condition of the water before it goes over the spillway; it is perfectly smooth; but the moment it begins to fall, where do the drops begin? What determines how big the lumps are going to be and where they will be? That is not known, because the water is unstable. Even a smooth moving mass of air, in going over a mountain turns into complex whirlpools and eddies. In many fields we find this situation of turbulent flow that we cannot analyze today. Quickly we leave the subject of weather, and discuss geology!

The question basic to geology is, what makes the earth the way it is? The most obvious processes are in front of your very eyes, the erosion processes of the rivers, the winds, etc. It is easy enough to understand these, but for every bit of erosion there is an equal amount of something else going on. Mountains are no lower today, on the average, than they were in the past. There must be mountain-forming processes. You will find, if you study geology, that there are mountain-forming processes and volcanism, which nobody understands but which is half of geology. The phenomenon of volcanoes is really not understood. What makes an earthquake is, ultimately, not understood. It is understood that if something is pushing something else, it snaps and will slide—that is all right. But what pushes, and why? The theory is that there are currents inside the earth—circulating currents, due to the difference in temperature inside and outside—which, in their motion, push the surface slightly. Thus if there are two opposite circulations next to each other, the matter will collect in the region where they meet and make belts of mountains which are in unhappy stressed conditions, and so produce volcanoes and earthquakes.

What about the inside of the earth? A great deal is known about the speed of earthquake waves through the earth and the density distribution of the earth. However, physicists have been unable to get a good theory as to how dense a substance should be at the pressures that would be expected at the center of the earth. In other words, we cannot figure out the properties of matter very well in these circumstances. We do much less well with the earth than we do with the conditions of matter in the stars. The mathematics involved seems a little too difficult, so far, but perhaps it will not be too long before someone realizes that it is an important problem, and really works it out. The other aspect, of course, is that even if we did know the density, we cannot figure out the circulating currents. Nor can we really work out the properties of rocks at high pressure. We cannot tell how fast the rocks should “give”; that must all be worked out by experiment.

3–6 Psychology

Next, we consider the science of psychology. Incidentally, psychoanalysis is not a science: it is at best a medical process, and perhaps even more like witch-doctoring. It has a theory as to what causes disease—lots of different “spirits,” etc. The witch doctor has a theory that a disease like malaria is caused by a spirit which comes into the air; it is not cured by shaking a snake over it, but quinine does help malaria. So, if you are sick, I would advise that you go to the witch doctor because he is the man in the tribe who knows the most about the disease; on the other hand, his knowledge is not science. Psychoanalysis has not been checked carefully by experiment, and there is no way to find a list of the number of cases in which it works, the number of cases in which it does not work, etc.

The other branches of psychology, which involve things like the physiology of sensation—what happens in the eye, and what happens in the brain—are, if you wish, less interesting. But some small but real progress has been made in studying them. One of the most interesting technical problems may or may not be called psychology. The central problem of the mind, if you will, or the nervous system, is this: when an animal learns something, it can do something different than it could before, and its brain cells must have changed too, if it is made out of atoms. In what way is it different? We do not know where to look, or what to look for, when something is memorized. We do not know what it means, or what change there is in the nervous system, when a fact is learned. This is a very important problem which has not been solved at all. Assuming, however, that there is some kind of memory thing, the brain is such an enormous mass of interconnecting wires and nerves that it probably cannot be analyzed in a straightforward manner. There is an analog of this to computing machines and computing elements, in that they also have a lot of lines, and they have some kind of element, analogous, perhaps, to the synapse, or connection of one nerve to another. This is a very interesting subject which we have not the time to discuss further—the relationship between thinking and computing machines. It must be appreciated, of course, that this subject will tell us very little about the real complexities of ordinary human behavior. All human beings are so different. It will be a long time before we get there. We must start much further back. If we could even figure out how a dog works, we would have gone pretty far. Dogs are easier to understand, but nobody yet knows how dogs work.

3–7 How did it get that way?

In order for physics to be useful to other sciences in a theoretical way, other than in the invention of instruments, the science in question must supply to the physicist a description of the object in a physicist’s language. They can say “why does a frog jump?,” and the physicist cannot answer. If they tell him what a frog is, that there are so many molecules, there is a nerve here, etc., that is different. If they will tell us, more or less, what the earth or the stars are like, then we can figure it out. In order for physical theory to be of any use, we must know where the atoms are located. In order to understand the chemistry, we must know exactly what atoms are present, for otherwise we cannot analyze it. That is but one limitation, of course.

There is another kind of problem in the sister sciences which does not exist in physics; we might call it, for lack of a better term, the historical question. How did it get that way? If we understand all about biology, we will want to know how all the things which are on the earth got there. There is the theory of evolution, an important part of biology. In geology, we not only want to know how the mountains are forming, but how the entire earth was formed in the beginning, the origin of the solar system, etc. That, of course, leads us to want to know what kind of matter there was in the world. How did the stars evolve? What were the initial conditions? That is the problem of astronomical history. A great deal has been found out about the formation of stars, the formation of elements from which we were made, and even a little about the origin of the universe.

There is no historical question being studied in physics at the present time. We do not have a question, “Here are the laws of physics, how did they get that way?” We do not imagine, at the moment, that the laws of physics are somehow changing with time, that they were different in the past than they are at present. Of course they may be, and the moment we find they are, the historical question of physics will be wrapped up with the rest of the history of the universe, and then the physicist will be talking about the same problems as astronomers, geologists, and biologists.

Finally, there is a physical problem that is common to many fields, that is very old, and that has not been solved. It is not the problem of finding new fundamental particles, but something left over from a long time ago—over a hundred years. Nobody in physics has really been able to analyze it mathematically satisfactorily in spite of its importance to the sister sciences. It is the analysis of circulating or turbulent fluids. If we watch the evolution of a star, there comes a point where we can deduce that it is going to start convection, and thereafter we can no longer deduce what should happen. A few million years later the star explodes, but we cannot figure out the reason. We cannot analyze the weather. We do not know the patterns of motions that there should be inside the earth. The simplest form of the problem is to take a pipe that is very long and push water through it at high speed. We ask: to push a given amount of water through that pipe, how much pressure is needed? No one can analyze it from first principles and the properties of water. If the water flows very slowly, or if we use a thick goo like honey, then we can do it nicely. You will find that in your textbook. What we really cannot do is deal with actual, wet water running through a pipe. That is the central problem which we ought to solve some day, and we have not.

A poet once said, “The whole universe is in a glass of wine.” We will probably never know in what sense he meant that, for poets do not write to be understood. But it is true that if we look at a glass of wine closely enough we see the entire universe. There are the things of physics: the twisting liquid which evaporates depending on the wind and weather, the reflections in the glass, and our imagination adds the atoms. The glass is a distillation of the earth’s rocks, and in its composition we see the secrets of the universe’s age, and the evolution of stars. What strange array of chemicals are in the wine? How did they come to be? There are the ferments, the enzymes, the substrates, and the products. There in wine is found the great generalization: all life is fermentation. Nobody can discover the chemistry of wine without discovering, as did Louis Pasteur, the cause of much disease. How vivid is the claret, pressing its existence into the consciousness that watches it! If our small minds, for some convenience, divide this glass of wine, this universe, into parts—physics, biology, geology, astronomy, psychology, and so on—remember that nature does not know it! So let us put it all back together, not forgetting ultimately what it is for. Let it give us one more final pleasure: drink it and forget it all!


How do we know gravitational acceleration is the same as other forms of acceleration?

You often hear that being in a box sitting stationary in a gravitational field is equivalent to being in a box that is accelerating, and there is no way for an observer inside the box to know the difference.

How do we know this? Is there any possibility there is some quality to being in a gravitational field that would be different from just being in an accelerating box? What experiments have been done to confirm they are equivalent?

Basically, if the Einstein equivalence principle were violated, the values of certain dimensionless fundamental constants could drift over time or from place to place. No such drift has been observed.

The Einstein equivalence principle can be tested by searching for variation of dimensionless constants and mass ratios. The present best limits on the variation of the fundamental constants have mainly been set by studying the naturally occurring Oklo natural nuclear fission reactor, where nuclear reactions similar to ones we observe today have been shown to have occurred underground approximately two billion years ago. These reactions are extremely sensitive to the values of the fundamental constants.

For electrostatic attraction you have two quantities, the mass of an object and its charge. The charge q determines how strong a force F is exerted on the charge by an electric field E, namely F = qE, whereas the mass determines how easily a force F will accelerate the mass, a = F/m (for a given force F, the mass m determines the resulting acceleration).

Transitioning to the very similar theory of Newtonian gravity near the Earth's surface (assuming a constant g = 9.81 m/s²), we find that the gravitational "charge" and the inertial mass are the same thing and cancel out of the equation, so something is different about gravity. (In an intermediate step you can put some scrutiny into the question of whether the mass that determines the gravitational force and the mass that determines how easily an object is accelerated are really the same, but after checking this to high precision they seem to be.)
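Written out schematically (my own restatement, distinguishing the gravitational "charge" $m_g$ from the inertial mass $m_i$):

$$ F = m_g\, g, \qquad a = \frac{F}{m_i} = \frac{m_g}{m_i}\, g = g \quad \text{when } m_g = m_i, $$

so the acceleration of a freely falling body is independent of its mass, which is what lets a uniform gravitational field mimic a uniformly accelerating box.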

What you mention about the box, though, is only true for a constant gravitational field or "locally" (i.e. if you look at a small neighbourhood of the box); on larger scales there are differences in the gravitational attraction, the so-called tidal effects, which mean you can distinguish linear acceleration from being in a gravitational field.

Nevertheless the insight leads us to general relativity where we make sense of the fact that the mass cancels from the Newtonian equation and the trajectories of bodies under gravity don't depend on their mass (even massless objects are affected), by describing gravity as a geometric effect.


Earthquake Hazards 201 - Technical Q&A

A list of technical questions & answers about earthquake hazards.

What is %g?

What is acceleration? peak acceleration? peak ground acceleration (PGA)?

What is spectral acceleration (SA)?

PGA (peak acceleration) is what is experienced by a particle on the ground, and SA is approximately what is experienced by a building, as modeled by a particle mass on a massless vertical rod having the same natural period of vibration as the building.

The mass on the rod behaves about like a simple harmonic oscillator (SHO). If one "drives" the mass-rod system at its base, using the seismic record, and assuming a certain damping to the mass-rod system, one will get a record of the particle motion which basically "feels" only the components of ground motion with periods near the natural period of this SHO. If we look at this particle seismic record we can identify the maximum displacement. If we take the derivative (rate of change) of the displacement record with respect to time we can get the velocity record. The maximum velocity can likewise be determined. Similarly for response acceleration (rate of change of velocity) also called response spectral acceleration, or simply spectral acceleration, SA (or Sa).
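To make the recipe concrete, here is a minimal sketch (not the USGS production code; the record, damping value, and integration scheme are placeholder assumptions) of driving a damped single-degree-of-freedom oscillator with a ground-acceleration record and reading off the peak response:

```python
import numpy as np

def spectral_acceleration(ground_accel, dt, period, damping=0.05):
    """Peak pseudo-spectral acceleration of a damped SDOF oscillator
    whose base is driven by the ground-acceleration record (m/s^2)."""
    omega = 2.0 * np.pi / period       # natural circular frequency
    u, v = 0.0, 0.0                    # relative displacement and velocity
    u_max = 0.0
    for ag in ground_accel:            # simple explicit time stepping
        a = -ag - 2.0 * damping * omega * v - omega**2 * u
        v += a * dt
        u += v * dt
        u_max = max(u_max, abs(u))
    return omega**2 * u_max            # pseudo-acceleration = omega^2 * max|u|

# Example with a made-up record: a decaying burst of 2 Hz shaking.
dt = 0.005
t = np.arange(0, 10, dt)
record = 1.5 * np.sin(2 * np.pi * 2.0 * t) * np.exp(-0.3 * t)
print(spectral_acceleration(record, dt, period=0.5))  # natural period matches the shaking: amplified
print(spectral_acceleration(record, dt, period=3.0))  # long-period oscillator: much smaller response
```

Sweeping the natural period over a range of values and plotting the result traces out the full response spectrum.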

PGA is a good index to hazard for short buildings, up to about 7 stories. To be a good index means that if you plot some measure of demand placed on a building, like interstory displacement or base shear, against PGA, for a number of different buildings and a number of different earthquakes, you will get a strong correlation.

PGA is a natural, simple design parameter since it can be related to a force; for simple design one can design a building to resist a certain horizontal force. PGV, peak ground velocity, is a good index to hazard for taller buildings. However, it is not clear how to relate velocity to force in order to design a taller building.

SA would also be a good index to hazard for buildings, and it ought to be more closely related to building behavior than peak ground motion parameters are. Design might also be easier, but the relation to design force is likely to be more complicated than with PGA, because the value of the period comes into the picture.

PGA, PGV, or SA are only approximately related to building demand/design because the building is not a simple oscillator, but has overtones of vibration, each of which imparts maximum demand to different parts of the structure, each part of which may have its own weaknesses. Duration also plays a role in damage, and some argue that duration-related damage is not well-represented by response parameters.

On the other hand, some authors have shown that non-linear response of a certain structure is only weakly dependent on the magnitude and distance of the causative earthquake, so that non-linear response is related to linear response (SA) by a simple scalar (multiplying factor). This is not so for peak ground parameters, and this fact argues that SA ought to be significantly better as an index to demand/design than peak ground motion parameters.

There is no particular significance to the relative size of PGA, SA(0.2), and SA(1.0). On average, these roughly correlate, with a factor that depends on period. While PGA may reflect what a person might feel standing on the ground in an earthquake, I don't believe it is correct to state that SA reflects what one might "feel" if one is in a building. In taller buildings, short-period ground motions are felt only weakly, and long-period motions tend not to be felt as forces, but rather as disorientation and dizziness.

What is probability of exceedence (PE)?

For any given site on the map, the computer calculates the ground motion effect (peak acceleration) at the site for all the earthquake locations and magnitudes believed possible in the vicinity of the site. Each of these magnitude-location pairs is believed to happen at some average probability per year. Small ground motions are relatively likely; large ground motions are very unlikely. Beginning with the largest ground motions and proceeding to smaller ones, we add up probabilities until we arrive at a total probability corresponding to a given probability, P, in a particular period of time, T.

The probability P comes from ground motions larger than the ground motion at which we stopped adding. The corresponding ground motion (peak acceleration) is said to have a P probability of exceedance (PE) in T years. The map contours the ground motions corresponding to this probability at all the sites in a grid covering the U.S. Thus the maps are not actually probability maps, but rather ground motion hazard maps at a given level of probability. In the future we are likely to post maps which are probability maps. They will show the probability of exceedance for some constant ground motion. For instance, one such map may show the probability of a ground motion exceeding 0.20 g in 50 years.
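A stripped-down sketch of that bookkeeping (the hazard-curve numbers below are invented; only the Poisson arithmetic is the point):

```python
import numpy as np

# Hypothetical hazard curve for one site: annual rate at which each
# peak-acceleration level (in g) is exceeded, from largest motions to smallest.
pga_levels  = np.array([0.80, 0.60, 0.40, 0.20, 0.10, 0.05])   # g
annual_rate = np.array([1e-4, 4e-4, 2e-3, 1e-2, 4e-2, 1e-1])   # exceedances per year

def prob_of_exceedance(rate_per_year, years):
    """Poisson probability of one or more exceedances in the given time window."""
    return 1.0 - np.exp(-rate_per_year * years)

T = 50  # years
pe = prob_of_exceedance(annual_rate, T)

# Ground motion with (about) a 10% chance of being exceeded in 50 years:
target = 0.10
idx = np.argmin(np.abs(pe - target))
print(f"{pga_levels[idx]:.2f} g has roughly {pe[idx]:.0%} PE in {T} years")
```

A probability map of the kind described above would instead fix one ground-motion level and contour the computed PE from site to site.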

What is the relationship between peak ground acceleration (PGA) and "effective peak acceleration" (Aa), or between peak ground velocity (PGV) and "effective peak velocity" (Av) as these parameters appear on building code maps?

Aa and Av have no clear physical definition, as such. Rather, they are building code constructs, adopted by the staff that produced the Applied Technology Council (1978) (ATC-3) seismic provisions. Maps for Aa and Av were derived by ATC project staff from a draft of the Algermissen and Perkins (1976) probabilistic peak acceleration map (and other maps) in order to provide for design ground motions for use in model building codes. Many aspects of that ATC-3 report have been adopted by the current (in use in 1997) national model building codes, except for the new NEHRP provisions.

This process is explained in the ATC-3 document referenced below, (p 297-302). Here are some excerpts from that document:

  • p. 297. "At the present time, the best workable tool for describing the design ground shaking is a smoothed elastic response spectrum for single degree-of-freedom systems…
  • p. 298. "In developing the design provisions, two parameters were used to characterize the intensity of design ground shaking. These parameters are called the Effective Peak Acceleration (EPA), Aa, and the Effective Peak Velocity (EPV), Av. These parameters do not at present have precise definitions in physical terms but their significance may be understood from the following paragraphs.
  • "To best understand the meaning of EPA and EPV, they should be considered as normalizing factors for construction of smoothed elastic response spectra for ground motions of normal duration. The EPA is proportional to spectral ordinates for periods in the range of 0.1 to 0.5 seconds, while the EPV is proportional to spectral ordinates at a period of about 1 second . . . The constant of proportionality (for a 5 percent damping spectrum) is set at a standard value of 2.5 in both cases.
  • "…The EPA and EPV thus obtained are related to peak ground acceleration and peak ground velocity but are not necessarily the same as or even proportional to peak acceleration and velocity. When very high frequencies are present in the ground motion, the EPA may be significantly less than the peak acceleration. This is consistent with the observation that chopping off the spectrum computed from that motion, except at periods much shorter than those of interest in ordinary building practice has very little effect upon the response spectrum computed from that motion, except at periods much shorter than those of interest in ordinary building practice. . . On the other hand, the EPV will generally be greater than the peak velocity at large distances from a major earthquake. "
  • p. 299. "Thus the EPA and EPV for a motion may be either greater or smaller than the peak acceleration and velocity, although generally the EPA will be smaller than peak acceleration while the EPV will be larger than the peak velocity.
  • ". . .For purposes of computing the lateral force coefficient in Sec. 4.2, EPA and EPV are replaced by dimensionless coefficients Aa and Av respectively. Aa is numerically equal to EPA when EPA is expressed as a decimal fraction of the acceleration of gravity. "

Now, examination of the tripartite diagram of the response spectrum for the 1940 El Centro earthquake (p. 274, Newmark and Rosenblueth, Fundamentals of Earthquake Engineering) verifies that taking response acceleration at 5 percent damping, at periods between 0.1 and 0.5 sec, and dividing by a number between 2 and 3 would approximate peak acceleration for that earthquake. Thus, in this case, effective peak acceleration in this period range is nearly numerically equal to actual peak acceleration.

However, since the response acceleration spectrum is asymptotic to peak acceleration for very short periods, some people have assumed that effective peak acceleration is 2.5 times less than true peak acceleration. This would only be true if one continued to divide response accelerations by 2.5 for periods much shorter than 0.1 sec. But EPA is only defined for periods longer than 0.1 sec.
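Purely to make the normalization concrete, here is a rough sketch of the EPA/EPV arithmetic described in the excerpts above (the spectrum values are invented, and this paraphrases the idea rather than reproducing the ATC project's actual procedure):

```python
import numpy as np

# Invented 5%-damped response spectrum: periods (s) and spectral ordinates.
periods = np.array([0.05, 0.1, 0.2, 0.3, 0.5, 1.0, 2.0])
sa_g    = np.array([0.55, 0.80, 0.95, 0.90, 0.75, 0.35, 0.15])   # spectral acceleration, g
sv_ips  = np.array([1.5,  4.9,  11.6, 16.5, 22.9, 21.4, 18.3])   # spectral velocity, in/s

# EPA: average spectral acceleration over 0.1-0.5 s, divided by 2.5.
short_period = (periods >= 0.1) & (periods <= 0.5)
epa = sa_g[short_period].mean() / 2.5

# EPV: spectral velocity near a period of 1 s, divided by 2.5.
epv = sv_ips[np.argmin(np.abs(periods - 1.0))] / 2.5

print(f"EPA ~ {epa:.2f} g, EPV ~ {epv:.1f} in/s")
```

Note that, as stated further below, the published Aa and Av maps were actually derived from a PGA map rather than by applying the 2.5 factor to response spectra site by site.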

Effective peak acceleration could be some factor lower than peak acceleration for those earthquakes for which the peak accelerations occur as short-period spikes. This is precisely what effective peak acceleration is designed to do.

On the other hand, the ATC-3 report map limits EPA to 0.4 g even where probabilistic peak accelerations may go to 1.0 g, or larger. Thus, EPA in the ATC-3 report map may be a factor of 2.5 less than probabilistic peak acceleration for locations where the probabilistic peak acceleration is around 1.0 g.

The following paragraphs describe how the Aa, and Av maps in the ATC code were constructed.

The USGS 1976 probabilistic ground motion map was considered. Thirteen seismologists were invited to smooth the probabilistic peak acceleration map, taking into account other regional maps and their own regional knowledge. A final map was drawn based upon those smoothings. Ground motions were truncated at 40% g in areas where probabilistic values could run from 40 to greater than 80% g. This resulted in an Aa map, representing a design basis for buildings having short natural periods. Aa was called "Effective Peak Acceleration."

An attenuation function for peak velocity was "draped" over the Aa map in order to produce a spatial broadening of the lower values of Aa. The broadened areas were denominated Av for "Effective Peak Velocity-Related Acceleration" for design for longer-period buildings, and a separate map drawn for this parameter.

Note that, in practice, the Aa and Av maps were obtained from a PGA map and NOT by applying the 2.5 factors to response spectra.

Note also, that if one examines the ratio of the SA(0.2) value to the PGA value at individual locations in the new USGS national probabilistic hazard maps, the value of the ratio is generally less than 2.5.

Sources of Information:

  • Algermissen, S.T., and Perkins, David M., 1976, A probabilistic estimate of maximum acceleration in rock in the contiguous United States, U.S. Geological Survey Open-File Report OF 76-416, 45 p.
  • Applied Technology Council, 1978, Tentative provisions for the development of seismic regulations for buildings, ATC-3-06 (NBS SP-510) U.S Government Printing Office, Washington, 505 p.

What is percent damping?

In our question about response acceleration, we used a simple physical model (a particle mass on a massless vertical rod) to explain natural period. For this ideal model, if the mass is very briefly set into motion, the system will remain in oscillation indefinitely. In a real system, the rod has stiffness which not only contributes to the natural period (the stiffer the rod, the shorter the period of oscillation), but also dissipates energy as it bends. As a result, the oscillation steadily decreases in size, until the mass-rod system is at rest again. This decrease in size of oscillation we call damping. We say the oscillation has damped out.

When the damping is small, the oscillation takes a long time to damp out. When the damping is large enough, there is no oscillation and the mass-rod system takes a long time to return to vertical. Critical damping is the least value of damping for which the damping prevents oscillation. Any particular damping value we can express as a percentage of the critical damping value. Because spectral accelerations are used to represent the effect of earthquake ground motions on buildings, the damping used in the calculation of spectral acceleration should correspond to the damping typically experienced in buildings for which earthquake design is used. The building codes assume that 5 percent of critical damping is a reasonable value to approximate the damping of buildings for which earthquake-resistant design is intended. Hence, the spectral accelerations given in the seismic hazard maps are also computed for 5 percent of critical damping.
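For the mass-on-a-rod idealization, percent of critical damping can be computed directly; here is a schematic sketch with made-up numbers:

```python
import math

# Mass-rod idealization: mass m on a rod of lateral stiffness k (made-up values).
m = 2.0e5    # kg
k = 8.0e6    # N/m

c_critical = 2.0 * math.sqrt(k * m)   # least damping coefficient that prevents oscillation
c = 0.05 * c_critical                 # "5 percent of critical damping"

omega_n = math.sqrt(k / m)            # undamped natural frequency (rad/s)
T_n = 2.0 * math.pi / omega_n         # natural period (s)

# With 5% damping, each successive swing shrinks by a fixed factor:
zeta = c / c_critical
decay_per_cycle = math.exp(-2.0 * math.pi * zeta / math.sqrt(1.0 - zeta**2))

print(f"natural period {T_n:.2f} s, amplitude ratio per cycle {decay_per_cycle:.2f}")
```

With 5 percent of critical damping, each swing is roughly 70-75 percent the size of the one before, which is why the oscillation dies out over a handful of cycles rather than instantly or never.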

Why do you decluster the earthquake catalog to develop the Seismic Hazard maps?

The primary reason for declustering is to get the best possible estimate for the rate of mainshocks. Also, the methodology requires a catalog of independent events (Poisson model), and declustering helps to achieve independence.

Damage from the earthquake has to be repaired, regardless of how the earthquake is labeled. Some argue that these aftershocks should be counted. This observation suggests that a better way to handle earthquake sequences than declustering would be to explicitly model the clustered events in the probability model. This step could represent a future refinement. The other side of the coin is that these secondary events arent going to occur without the mainshock. Any potential inclusion of foreshocks and aftershocks into the earthquake probability forecast ought to make clear that they occur in a brief time window near the mainshock, and do not affect the earthquake-free periods except trivially. That is, the probability of no earthquakes with M>5 in a few-year period is or should be virtually unaffected by the declustering process. Also, in the USA experience, aftershock damage has tended to be a small proportion of mainshock damage.
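
For illustration only, here is a toy window-based declustering sketch in the spirit of Gardner-Knopoff-type methods. The window sizes, the crude distance calculation, and the example catalog are placeholders of my own and are not the procedure or parameters used for the national maps.

```python
import math
from dataclasses import dataclass
from typing import List

@dataclass
class Event:
    time_days: float   # origin time, in days from some reference
    lat: float
    lon: float
    mag: float

# Placeholder space-time windows (illustrative only):
def window_days(mag: float) -> float:
    return 10.0 ** (0.5 * mag - 1.5)    # e.g. ~10 days for M5

def window_km(mag: float) -> float:
    return 10.0 ** (0.4 * mag - 0.5)    # e.g. ~30 km for M5

def km_between(a: Event, b: Event) -> float:
    # crude flat-earth distance, adequate for a sketch
    dlat = (a.lat - b.lat) * 111.0
    dlon = (a.lon - b.lon) * 111.0 * math.cos(math.radians(0.5 * (a.lat + b.lat)))
    return math.hypot(dlat, dlon)

def decluster(catalog: List[Event]) -> List[Event]:
    """Keep an event only if no larger event lies within its space-time window."""
    mainshocks = []
    for e in catalog:
        dependent = any(
            other.mag > e.mag
            and abs(e.time_days - other.time_days) <= window_days(other.mag)
            and km_between(e, other) <= window_km(other.mag)
            for other in catalog
            if other is not e
        )
        if not dependent:
            mainshocks.append(e)
    return mainshocks

# Example: a mainshock followed two days later by a nearby smaller shock
catalog = [Event(0.0, 35.00, -90.00, 6.0), Event(2.0, 35.05, -90.05, 4.5)]
print([ev.mag for ev in decluster(catalog)])   # -> [6.0]; the M4.5 is treated as dependent
```

The surviving events are the "independent" mainshocks whose rate feeds the Poisson hazard model.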

How do I use the seismic hazard maps?

The maps come in three different probability levels and four different ground motion parameters: peak acceleration and spectral acceleration at 0.2, 0.3, and 1.0 sec. (These values are mapped for a given geologic site condition. Other site conditions may increase or decrease the hazard. Also, other things being equal, older buildings are more vulnerable than new ones.)

The maps can be used to determine (a) the relative probability of a given critical level of earthquake ground motion from one part of the country to another, and (b) the relative demand on structures from one part of the country to another, at a given probability level. In addition, (c) building codes use one or more of these maps to determine the resistance required by buildings to resist damaging levels of ground motion.

The different levels of probability are those of interest in the protection of buildings against earthquake ground motion. The ground motion parameters are proportional to the hazard faced by a particular kind of building.

Peak acceleration is a measure of the maximum force experienced by a small mass located at the surface of the ground during an earthquake. It is an index to hazard for short stiff structures.

Spectral acceleration is a measure of the maximum force experienced by a mass on top of a rod having a particular natural vibration period. Short buildings, say, less than 7 stories, have short natural periods, say, 0.2-0.6 sec. Tall buildings have long natural periods, say 0.7 sec or longer. An earthquake strong-motion record is made up of varying amounts of energy at different periods. A building's natural period indicates what spectral part of an earthquake ground-motion time history has the capacity to put energy into the building. Periods much shorter than the natural period of the building, or much longer, do not have much capability of damaging the building. Thus, a map of a probabilistic spectral value at a particular period becomes an index to the relative damage hazard to buildings of that period as a function of geographic location.

Choose a ground motion parameter according to the above principles. For many purposes, peak acceleration is a suitable and understandable parameter. Choose a probability value according to the chance you want to take. One can now select a map and look at the relative hazard from one part of the country to another.

If one wants to estimate the probability of exceedance for a particular level of ground motion, one can plot the ground motion values for the three given probabilities on log-log graph paper and interpolate, or, to a limited extent, extrapolate to the desired probability level. Conversely, one can make the same plot to estimate the level of ground motion corresponding to a given level of probability different from those mapped.
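
As a sketch of that plot-and-interpolate step, the snippet below fits a straight line in log-log space between the three mapped probability levels. The mapped peak-acceleration values for the hypothetical site are made-up placeholders.

```python
import numpy as np

def annual_prob(r, years=50.0):
    """FAQ approximation: annual exceedance probability ~ r(1 + 0.5r) / T."""
    return r * (1.0 + 0.5 * r) / years

# Annual probabilities for the three mapped levels (10%, 5%, 2% in 50 years)
probs = np.array([annual_prob(r) for r in (0.10, 0.05, 0.02)])   # ~1/476, 1/976, 1/2475
pga_g = np.array([0.18, 0.26, 0.40])    # placeholder mapped PGA values for one site (g)

# Straight-line interpolation in log-log space for an intermediate ground motion level
target_pga = 0.30
annual_p = np.exp(np.interp(np.log(target_pga), np.log(pga_g), np.log(probs)))
print(f"Estimated annual exceedance probability at {target_pga} g: {annual_p:.2e}")
print(f"Estimated return period: {1.0 / annual_p:.0f} years")
```

Swapping the roles of the two axes gives the converse estimate: the ground motion corresponding to a probability level that is not one of the three mapped values.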

If one wants to estimate the probabilistic value of spectral acceleration for a period between the periods listed, one could use the method reported in the Open File Report 95-596, USGS Spectral Response Maps and Their Use in Seismic Design Forces in Building Codes. (This report can be downloaded from the web-site.) The report explains how to construct a design spectrum in a manner similar to that done in building codes, using a long-period and a short-period probabilistic spectral ordinate of the sort found in the maps. Given the spectrum, a design value at a given spectral period other than the map periods can be obtained.
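
The report describes its own construction procedure. Purely as a rough illustration of the general idea, the sketch below builds a generic code-style spectral shape from a short-period ordinate and a 1.0-s ordinate; the corner-period rules and the example values are assumptions of mine and are not taken from OFR 95-596.

```python
def design_spectrum(short_period_sa, one_second_sa, period):
    """
    Generic code-style spectral shape built from two probabilistic ordinates:
    a short-period value (e.g. SA at 0.2 s) and a 1.0-s value. Illustrative only.
    """
    ts = one_second_sa / short_period_sa   # corner period between plateau and 1/T branch
    t0 = 0.2 * ts                          # start of the plateau (assumed rule)
    if period <= t0:                       # linear ramp up from 40% of the plateau
        return short_period_sa * (0.4 + 0.6 * period / t0)
    if period <= ts:                       # constant-acceleration plateau
        return short_period_sa
    return one_second_sa / period          # constant-velocity (1/T) branch

# Example with placeholder map values: SA(0.2 s) = 1.0 g, SA(1.0 s) = 0.4 g
for t in (0.05, 0.1, 0.2, 0.4, 1.0, 2.0):
    print(f"T = {t:>4.2f} s  ->  Sa = {design_spectrum(1.0, 0.4, t):.2f} g")
```

Given such a spectrum, a design value at a spectral period other than the mapped periods can be read directly from the curve.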

What if we need to know about total rates of earthquakes with M>5 including aftershocks?

Aftershocks and other dependent-event issues are not really addressable at this web site given our modeling assumptions, with one exception. The current National Seismic Hazard model (and this web site) explicitly deals with clustered events in the New Madrid Seismic Zone and gives this clustered-model branch 50% weight in the logic-tree. Even in the NMSZ case, however, only mainshocks are clustered, whereas NMSZ aftershocks are omitted. We are performing research on aftershock-related damage, but how aftershocks should influence the hazard model is currently unresolved.

The seismic hazard map values show ground motions that have a probability of being exceeded in 50 years of 10, 5 and 2 percent. What is the probability of their being exceeded in one year (the annual probability of exceedance)?

Let r = 0.10, 0.05, or 0.02, respectively. The approximate annual probability of exceedance is the ratio, r*/50, where r* = r(1+0.5r). (To get the annual probability in percent, multiply by 100.) The inverse of the annual probability of exceedance is known as the "return period," which is the average number of years it takes to get an exceedance.

Example: What is the annual probability of exceedance of the ground motion that has a 10 percent probability of exceedance in 50 years?

Answer: Let r = 0.10. The approximate annual probability of exceedance is about 0.10(1.05)/50 = 0.0021. The calculated return period is 476 years, with the true answer less than half a percent smaller.

The same approximation can be used for r = 0.20, with the true answer about one percent smaller. When r is 0.50, the true answer is about 10 percent smaller.
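
A minimal sketch of this rule of thumb, compared with the value -loge(1 - r)/T that r*/T approximates (see the return-period discussion later in this FAQ):

```python
import math

def annual_prob_approx(r, years):
    """Rule of thumb: annual exceedance probability ~ r(1 + 0.5r) / T."""
    return r * (1.0 + 0.5 * r) / years

def annual_prob_exact(r, years):
    """Value based on -ln(1 - r) / T, which r*/T approximates."""
    return -math.log(1.0 - r) / years

for r in (0.10, 0.05, 0.02):
    a, e = annual_prob_approx(r, 50.0), annual_prob_exact(r, 50.0)
    print(f"{r:.0%} in 50 yr: approx {a:.5f} (RP {1/a:.0f} yr), "
          f"exact {e:.5f} (RP {1/e:.0f} yr)")
```

For the 10 percent case this reproduces the 476-year figure quoted above, with the exact value a few years shorter.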

Example: Suppose a particular ground motion has a 10 percent probability of being exceeded in 50 years. What is the probability it will be exceeded in 500 years? Is it (500/50) × 10 percent = 100 percent?

Answer: No. We are going to solve this by equating two approximations:

r1*/T1 = r2*/T2. Solving for r2*, and letting T1 = 50 and T2 = 500,
r2* = r1* × (T2/T1) = 0.105 × (500/50) = 0.0021 × 500 = 1.05.
Take half this value, 0.525, and form r2 = r2*/(1 + 0.5 r2*) = 1.05/1.525 = 0.69.
Stop now. Don't try to refine this result.

The true answer is about ten percent smaller, 0.63. For r2* less than 1.0, the approximation improves quickly:

For r2* = 0.50, the error is less than 1 percent.
For r2* = 0.70, the error is about 4 percent.
For r2* = 1.00, the error is about 10 percent.

Caution is urged for values of r2* larger than 1.0, but it is interesting to note that for r2* = 2.44, the estimate is only about 17 percent too large. This suggests that, keeping the error in mind, useful numbers can be calculated.
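
A short sketch of the extrapolation recipe used in the worked example above: scale r1* by T2/T1, then convert back with the approximation, keeping in mind the caution for r2* larger than about 1.0.

```python
def exceedance_prob_for_new_exposure(r1, t1_years, t2_years):
    """
    Scale the expected number of exceedances from exposure T1 to T2, then
    convert back to a probability with the approximation r = r*/(1 + 0.5 r*).
    Use with caution when the scaled r2* exceeds about 1.0.
    """
    r1_star = r1 * (1.0 + 0.5 * r1)            # expected exceedances in T1
    r2_star = r1_star * (t2_years / t1_years)  # expected exceedances in T2
    return r2_star / (1.0 + 0.5 * r2_star)     # approximate probability in T2

# The worked example above: 10% in 50 years extrapolated to 500 years
print(f"{exceedance_prob_for_new_exposure(0.10, 50.0, 500.0):.2f}")   # -> 0.69
```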

Here is an unusual, but useful example. Evidently, r2* is the number of times the reference ground motion is expected to be exceeded in T2 years. Suppose someone tells you that a particular event has a 95 percent probability of occurring in time T. For r2 = 0.95, one would expect the calculated r2 to be about 20% too high. Therefore, let calculated r2 = 1.15.

The previous calculations suggest the equation
r2calc = r2*/(1 + 0.5 r2*).
Solving for r2*:
r2* = r2calc/(1 - 0.5 r2calc) = 1.15/(1 - 0.5 × 1.15) = 1.15/0.425 = 2.7

This implies that, for the probability statement to be true, the event ought to happen on average 2.5 to 3.0 times over a time duration T. If history does not support this conclusion, the probability statement may not be credible.
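
A sketch of this credibility check: inflate the stated probability by the roughly 20 percent bias noted above, invert the approximation to recover r2*, and compare the implied number of occurrences with the historical record. The 20 percent factor follows the text and applies near a stated probability of 0.95.

```python
def implied_occurrences(stated_prob, bias_factor=1.2):
    """
    Invert the approximation r = r*/(1 + 0.5 r*) to recover r*, the expected
    number of occurrences in time T implied by a stated probability.
    bias_factor reflects the ~20% overestimate noted for probabilities near 0.95.
    """
    r_calc = stated_prob * bias_factor
    return r_calc / (1.0 - 0.5 * r_calc)

# The example above: a claimed 95 percent probability of occurrence in time T
print(f"Implied occurrences in T: {implied_occurrences(0.95):.1f}")   # -> about 2.7
```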

The seismic hazard map is for ground motions having a 2% probability of exceedance in 50 years. Are those values the same as those for 10% in 250 years?

Yes, basically. This conclusion will be illustrated by using an approximate rule-of-thumb for calculating Return Period (RP).

A typical seismic hazard map may have the title, "Ground motions having 90 percent probability of not being exceeded in 50 years." The 90 percent is a "non-exceedance probability"; the 50 years is an "exposure time." An equivalent alternative title for the same map would be, "Ground motions having 10 percent probability of being exceeded in 50 years." A typical shorthand to describe these ground motions is to say that they are 475-year return-period ground motions. This means the same as saying that these ground motions have an annual probability of occurrence of about 1/475. "Return period" is thus just the inverse of the annual probability of occurrence (of getting an exceedance of that ground motion).

To get an approximate value of the return period, RP, given the exposure time, T, and the exceedance probability, r = 1 - NEP, where NEP is the non-exceedance probability (both expressed as decimals rather than percents), calculate:

RP = T / r*, where r* = r(1 + 0.5r). Here r* is an approximation to the value -loge(NEP).
In the above case, where r = 0.10, r* = 0.105, which is approximately -loge(0.90) = 0.10536.
Thus, approximately, when r = 0.10, RP = T / 0.105

Consider the following table:

NEP     T (yr)   r      r*       Rule-of-Thumb Calculation   Rule-of-Thumb RP   Exact RP
0.90      50     0.10   0.105    50/0.105                     476.2              474.6
0.90     100     0.10   0.105    100/0.105                    952.4              949.1
0.90     250     0.10   0.105    250/0.105                   2381.0             2372.8

In this table, the exceedance probability is constant for different exposure times. Compare the results of the above table with those shown below, all for the same exposure time, with differing exceedance probabilities.

NEP     T (yr)   r      r*        Rule-of-Thumb Calculation   Rule-of-Thumb RP   Exact RP
0.90      50     0.10   0.105     50/0.105                     476.2              474.6
0.95      50     0.05   0.05125   50/0.05125                   975.6              974.8
0.98      50     0.02   0.0202    50/0.0202                   2475.2             2474.9

Comparison of the last entry in each table shows that ground motion values having a 2% probability of exceedance in 50 years should be approximately the same as those having a 10% probability of being exceeded in 250 years: the annual exceedance probabilities differ by about 4%. Corresponding ground motions should differ by 2% or less in the eastern U.S. (EUS) and 1% or less in the western U.S. (WUS), based upon typical relations between ground motion and return period.
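
The following sketch reproduces both tables, computing the rule-of-thumb return period T/r* alongside the exact value T/(-loge NEP):

```python
import math

def return_period_approx(nep, t_years):
    """Rule of thumb: RP = T / r*, with r* = r(1 + 0.5r) and r = 1 - NEP."""
    r = 1.0 - nep
    return t_years / (r * (1.0 + 0.5 * r))

def return_period_exact(nep, t_years):
    """Exact value: RP = T / (-ln NEP)."""
    return t_years / -math.log(nep)

cases = [(0.90, 50), (0.90, 100), (0.90, 250),   # constant r, varying exposure time
         (0.95, 50), (0.98, 50)]                 # constant exposure time, varying r
for nep, t in cases:
    print(f"NEP={nep:.2f}, T={t:>3d} yr: rule of thumb "
          f"{return_period_approx(nep, t):7.1f}, exact {return_period_exact(nep, t):7.1f}")
```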

I am trying to calculate the ground motion effect for a certain location in California. I obtained the design spectrum acceleration from your site, but I would like to identify the soil type of this location - how can I get that?

You can't find that information at our site.

We don't know of any site that has a map of site conditions by National Earthquake Hazard Reduction Program (NEHRP) Building Code category. There is a map of a generalized site condition created by the California Division of Mines and Geology (CDMG). The map is statewide, largely based on surface geology, and can be seen at the web site of the CDMG. It does not have latitude and longitude lines, but if you click on it, it will zoom in to give you more detail, in case you can make correlations with geographic features. There is no advice on how to convert its map units into particular NEHRP site categories.

For sites in the Los Angeles area, there are at least three papers in the following publication that will give you either generalized geologic site condition or estimated shear wave velocity for sites in the San Fernando Valley, and other areas in Los Angeles. Look for papers with author/coauthor J.C. Tinsley. This is older work and may not necessarily be more accurate than the CDMG state map for estimating geologic site response.

  • Ziony, J.I., ed., 1985, Evaluating earthquake hazards in the Los Angeles region--an earth-science perspective, U.S. Geological Survey Professional Paper 1360, U.S. Government Printing Office, Washington, 505 p.
  • Wills, C.J., et al., 2000, A site-conditions map for California based on geology and shear-wave velocity, Bulletin of the Seismological Society of America, Vol. 90, No. 6, Part B Supplement, pp. S187-S208.

In general, someone using the code is expected either to get the geologic site condition from the local county officials or to have a geotechnical engineer visit the site.

What is a distance metric? Why is the choice of distance metric important in probability assessments? What distance should I use?

For an earthquake, there are several ways to measure how far away it is. The one we use here is the epicentral distance, or the distance of the nearest point of the projection of the fault to the Earth's surface, technically called Rjb. Even if the earthquake source is very deep, more than 50 km deep, it could still have a small epicentral distance, like 5 km. Frequencies of such sources are included in the map if they are within 50 km epicentral distance.

Several cities in the western U.S. have experienced significant damage from earthquakes with hypocentral depth greater than 50 km. These earthquakes represent a major part of the seismic hazard in the Puget Sound region of Washington. If the probability assessment used a cutoff distance of 50 km, for example, and used hypocentral distance rather than epicentral, these deep Puget Sound earthquakes would be omitted, thereby yielding a much lower value for the probability forecast. Another example where the distance metric can be important is at sites over dipping faults. The distance reported at this web site is Rjb = 0, whereas another analysis might use another distance metric which produces a value of R = 10 km, for example, for the same site and fault. Thus, if you want to know the probability that a nearby dipping fault may rupture in the next few years, you could input a very small value of Maximum distance, like 1 or 2 km, to get a report of this probability.
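
As a small numerical illustration (with made-up numbers), compare the epicentral and hypocentral distances for a deep source like those beneath Puget Sound: a 50-km cutoff applied to hypocentral distance would drop the event, while the epicentral measure keeps it.

```python
import math

def hypocentral_distance_km(epicentral_km, depth_km):
    """Straight-line distance from the site to the hypocenter."""
    return math.hypot(epicentral_km, depth_km)

epicentral_km, depth_km, cutoff_km = 5.0, 55.0, 50.0   # illustrative deep event
hypo_km = hypocentral_distance_km(epicentral_km, depth_km)

print(f"Epicentral distance:  {epicentral_km:.1f} km -> inside the {cutoff_km:.0f} km cutoff")
print(f"Hypocentral distance: {hypo_km:.1f} km -> outside the {cutoff_km:.0f} km cutoff")
```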

This distance (in km, not miles) is something you can control. If you are interested only in very close earthquakes, you could make this a small number like 10 or 20 km. If you are interested in big events that might be far away, you could make this number large, like 200 or 500 km. The report will tell you rates of small events as well as large, so you should expect a high rate of M5 earthquakes within 200 km or 500 km of your favorite site, for example. Most of these small events would not be felt. If an M8 event is possible within 200 km of your site, it would probably be felt even at this large a distance.

Where can I find information on seismic zones 0,1,2,3,4?

A seismic zone could be one of three things:

  1. A region on a map in which a common level of seismic design is required. This concept is obsolete.
  2. An area of seismicity probably sharing a common cause. Example: "The New Madrid Seismic Zone."
  3. A region on a map for which a common areal rate of seismicity is assumed for the purpose of calculating probabilistic ground motions.

Building code maps using numbered zones, 0, 1, 2, 3, 4, are practically obsolete. 1969 was the last year such a map was put out by this staff. The 1997 Uniform Building Code (UBC) (published in California) is the only building code that still uses such zones. Generally, over the past two decades, building codes have replaced maps having numbered zones with maps showing contours of design ground motion. These maps in turn have been derived from probabilistic ground motion maps. Probabilistic ground motion maps have been included in the seismic provisions of the most recent U.S. model building codes, such as the new "International Building Code," and in national standards such as "Minimum Design Loads for Buildings and Other Structures," prepared by the American Society of Civil Engineers.

Zone maps numbered 0, 1, 2, 3, etc., are no longer used for several reasons:

  • A single map cannot properly display hazard for all probabilities or for all types of buildings. Probabilities: For very small probabilities of exceedance, probabilistic ground motion hazard maps show less contrast from one part of the country to another than do maps for large probabilities of exceedance. Buildings: Short stiff buildings are more vulnerable to close moderate-magnitude events than are tall, flexible buildings. The latter, in turn, are more vulnerable to distant large-magnitude events than are short, stiff buildings. Thus, the contrast in hazard for short buildings from one part of the country to another will be different from the contrast in hazard for tall buildings.
  • Building codes adapt zone boundaries in order to accommodate the desire for individual states to provide greater safety, less contrast from one part of the state to another, or to tailor zones more closely to natural tectonic features. Because of these zone boundary changes, the zones do not have a deeper seismological meaning and render the maps meaningless for applications other than building codes. An example of such tailoring is given by the evolution of the UBC since its adaptation of a pair of 1976 contour maps. First, the UBC took one of those two maps and converted it into zones. Then, through the years, the UBC has allowed revision of zone boundaries by petition from various western states, e.g., elimination of zone 2 in central California, removal of zone 1 in eastern Washington and Oregon, addition of a zone 3 in western Washington and Oregon, addition of a zone 2 in southern Arizona, and trimming of a zone in central Idaho.

Older (1994, 1997) versions of the UBC code may be available at a local or university library. A redrafted version of the UBC 1994 map can be found as one of the illustrations in a paper on the relationship between USGS maps and building code maps.

