Can there be significant new changes in physical features of Humans due to evolution in 10000 years of span?


Humans migrated out of Africa about 60,000 years ago, and in that time human physical features have changed significantly in terms of skin color, hair, eye color, and facial features.

So, given a span of 10,000 years, can we expect significant, noticeable new changes in the physical features of humans? For example, some humans with a new skin color (apart from today's white, black, and brown), new eye colors, bigger heads, etc.?


Yes & perhaps (or probably?) no, depending on what you define as significant changes.

Less than 10,000 years ago, everyone in the British Isles and the rest of Europe was dark skinned, so if (unlike me) you consider the change in skin color a significant change, the answer is obviously a resounding yes.

Here's what English people looked like 10,000 years ago

Darker skinned than you were expecting perhaps.

If, as suggested in this article, white skin arrived in Europe around 5,000 years ago, that only leaves about 2,000 years before the early Greek and Roman art we have available, which shows it as ubiquitous. So it perhaps took only 2,000 years or so (maybe less) to become dominant in Europe. That's fast.

Using 20 years as the measure of a generation, that's only 100 generations. So, very fast.

Timeline of human prehistory

The first reconstruction in the link below is a reconstruction of a Neanderthal woman found in a cave in Gibraltar. She died at least 30,000 years ago.

Here she is. The skin tone may not be accurate, but we do know from genes recovered from Neanderthal remains that they were relatively light skinned.

Personally I don't consider her appearance to be significantly different from modern humans.

29 Reconstructed Faces Of Ancient People

So my answer based on what I consider significant changes would be no.

But for you or others the answer may well be yes.

And of course a mutation for a new eye colour could appear at any time in one individual and spread like wildfire practically overnight just because we think it's unusual and 'cool' (aka sexual selection), so if eye colour ticks your boxes it's a very definite yes.
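To put rough numbers on "spread like wildfire", here is a minimal sketch of a Wright-Fisher-style model of a single new allele with a constant mating advantage. The population size, selection strength, and 95 percent cutoff are all illustrative assumptions, not data from any study:

```python
# Minimal Wright-Fisher-style sketch of one advantageous allele spreading
# through a diploid population. All parameters are illustrative assumptions.
import random

def generations_to_spread(pop_size=10_000, s=0.05, start_copies=1,
                          threshold=0.95, seed=1):
    """Generations until the allele reaches `threshold` frequency, or None if lost."""
    random.seed(seed)
    p = start_copies / (2 * pop_size)   # allele frequency among 2N gene copies
    gen = 0
    while 0.0 < p < threshold:
        # Selection (e.g., a mating advantage) shifts the expected frequency;
        # binomial resampling of 2N copies adds genetic drift.
        p_sel = p * (1 + s) / (p * (1 + s) + (1 - p))
        copies = sum(random.random() < p_sel for _ in range(2 * pop_size))
        p = copies / (2 * pop_size)
        gen += 1
    return gen if p >= threshold else None

print(generations_to_spread())
# Most runs return None: brand-new mutations are usually lost to drift.
# When the allele does establish, it typically sweeps within a few hundred
# generations at s = 0.05 -- fast on an evolutionary timescale, if hardly
# "overnight".
```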


Neolithic Revolution

The Neolithic Revolution, also called the Agricultural Revolution, marked the transition in human history from small, nomadic bands of hunter-gatherers to larger, agricultural settlements and early civilization. The Neolithic Revolution started around 10,000 B.C. in the Fertile Crescent, a boomerang-shaped region of the Middle East where humans first took up farming. Shortly after, Stone Age humans in other parts of the world also began to practice agriculture. Civilizations and cities grew out of the innovations of the Neolithic Revolution.


CAREER CONNECTION

Geneticists Use Karyograms to Identify Chromosomal Aberrations

The karyotype is a method by which traits characterized by chromosomal abnormalities can be identified from a single cell. To observe an individual’s karyotype, a person’s cells (like white blood cells) are first collected from a blood sample or other tissue. In the laboratory, the isolated cells are stimulated to begin actively dividing. A chemical is then applied to the cells to arrest mitosis during metaphase. The cells are then fixed to a slide.

The geneticist then stains chromosomes with one of several dyes to better visualize the distinct and reproducible banding patterns of each chromosome pair. Following staining, chromosomes are viewed using bright-field microscopy. An experienced cytogeneticist can identify each band. In addition to the banding patterns, chromosomes are further identified on the basis of size and centromere location. To obtain the classic depiction of the karyotype in which homologous pairs of chromosomes are aligned in numerical order from longest to shortest, the geneticist obtains a digital image, identifies each chromosome, and manually arranges the chromosomes into this pattern (Figure 1).
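Computationally, the arrangement step just described is a group-and-sort. Here is a toy sketch, where the labels and lengths are illustrative assumptions rather than a real cytogenetics data format, of pairing identified homologs and ordering the pairs from longest to shortest:

```python
# Toy sketch of arranging identified chromosomes into karyogram order.
# Labels and lengths are made up for illustration.
from collections import defaultdict

measured = [
    ("1", 248), ("1", 247),   # two homologs of chromosome 1 (arbitrary units)
    ("21", 47), ("21", 46),
    ("2", 242), ("2", 243),
]

pairs = defaultdict(list)
for label, length in measured:
    pairs[label].append(length)

# Homologous pairs ordered by mean length, longest first.
for label, lengths in sorted(pairs.items(),
                             key=lambda kv: -sum(kv[1]) / len(kv[1])):
    print(f"chromosome {label}: homolog lengths {lengths}")
```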

At its most basic, the karyogram may reveal genetic abnormalities in which an individual has too many or too few chromosomes per cell. Examples of this are Down syndrome, which is identified by a third copy of chromosome 21, and Turner syndrome, which is characterized by the presence of only one X chromosome in women instead of two. Geneticists can also identify large deletions or insertions of DNA. For instance, Jacobsen syndrome, which involves distinctive facial features as well as heart and bleeding defects, is identified by a deletion on chromosome 11. Finally, the karyotype can pinpoint translocations, which occur when a segment of genetic material breaks from one chromosome and reattaches to another chromosome or to a different part of the same chromosome. Translocations are implicated in certain cancers, including chronic myelogenous leukemia.

By observing a karyogram, geneticists can actually visualize the chromosomal composition of an individual to confirm or predict genetic abnormalities in offspring even before birth.

Nondisjunctions, Duplications, and Deletions

Of all the chromosomal disorders, abnormalities in chromosome number are the most easily identifiable from a karyogram. Disorders of chromosome number include the duplication or loss of entire chromosomes, as well as changes in the number of complete sets of chromosomes. They are caused by nondisjunction, which occurs when pairs of homologous chromosomes or sister chromatids fail to separate during meiosis. The risk of nondisjunction increases with the age of the parents.

Nondisjunction can occur during either meiosis I or II, with different results (Figure 2). If homologous chromosomes fail to separate during meiosis I, the result is two gametes that lack that chromosome and two gametes with two copies of the chromosome. If sister chromatids fail to separate during meiosis II, the result is one gamete that lacks that chromosome, two normal gametes with one copy of the chromosome, and one gamete with two copies of the chromosome.

Figure 2: Following meiosis, each gamete has one copy of each chromosome. Nondisjunction occurs when homologous chromosomes (meiosis I) or sister chromatids (meiosis II) fail to separate during meiosis.
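Because the two failure modes differ only in which division goes wrong, the gamete outcomes just described can be tabulated in a few lines. This is a purely illustrative sketch that tracks copy number of a single chromosome, not a biological simulator:

```python
# Tally the four gametes' copy numbers for one chromosome pair under normal
# meiosis, meiosis I nondisjunction, and meiosis II nondisjunction.

def gamete_copy_numbers(nondisjunction=None):
    """Return the number of copies of one chromosome in each of the 4 gametes."""
    if nondisjunction is None:
        return [1, 1, 1, 1]          # normal meiosis: every gamete gets one copy
    if nondisjunction == "meiosis I":
        # Homologs fail to separate: one cell gets both homologs, the other none,
        # so after meiosis II two gametes carry 2 copies and two carry 0.
        return [2, 2, 0, 0]
    if nondisjunction == "meiosis II":
        # Sister chromatids fail to separate in one of the two cells: one gamete
        # with 2 copies, one with 0, plus two normal gametes.
        return [2, 0, 1, 1]
    raise ValueError(nondisjunction)

for mode in (None, "meiosis I", "meiosis II"):
    print(mode or "normal", "->", gamete_copy_numbers(mode))
```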

An individual with the appropriate number of chromosomes for their species is called euploid; in humans, euploidy corresponds to 22 pairs of autosomes and one pair of sex chromosomes. An individual with an error in chromosome number is described as aneuploid, a term that includes monosomy (loss of one chromosome) or trisomy (gain of an extraneous chromosome). Monosomic human zygotes missing any one copy of an autosome invariably fail to develop to birth because they have only one copy of essential genes. Most autosomal trisomies also fail to develop to birth; however, duplications of some of the smaller chromosomes (13, 15, 18, 21, or 22) can result in offspring that survive for several weeks to many years. Trisomic individuals suffer from a different type of genetic imbalance: an excess in gene dose. Cell functions are calibrated to the amount of gene product produced by two copies (doses) of each gene; adding a third copy (dose) disrupts this balance. The most common trisomy is that of chromosome 21, which leads to Down syndrome. Individuals with this inherited disorder have characteristic physical features and developmental delays in growth and cognition. The incidence of Down syndrome is correlated with maternal age, such that older women are more likely to give birth to children with Down syndrome (Figure 3).

Figure 3: The incidence of having a fetus with trisomy 21 increases dramatically with maternal age.
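As a toy illustration of these terms, chromosome counts could be classified as euploid, monosomic, or trisomic as below. The mapping of chromosome names to copy counts is an assumed, simplified input; a real analysis works from the karyogram itself and also covers the sex chromosomes:

```python
# Toy classifier for euploidy vs. aneuploidy; only autosomes are checked here.

def classify_karyotype(copy_counts):
    """copy_counts: mapping of chromosome name -> number of copies observed."""
    findings = []
    for chrom in (str(n) for n in range(1, 23)):
        copies = copy_counts.get(chrom, 0)
        if copies == 1:
            findings.append(f"monosomy {chrom}")
        elif copies == 3:
            findings.append(f"trisomy {chrom}")
        elif copies != 2:
            findings.append(f"{copies} copies of chromosome {chrom}")
    return findings or ["euploid"]

normal_counts = {str(n): 2 for n in range(1, 23)}
down_counts = dict(normal_counts, **{"21": 3})
print(classify_karyotype(normal_counts))  # ['euploid']
print(classify_karyotype(down_counts))    # ['trisomy 21']
```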


Ten Astounding Cases of Modern Evolution and Adaptation

An evolved bedbug. Volker Steger/Photo Researchers

When we look at how evolution has taken us from eyeless blobs to moderately capable bloggers, it can seem like a vast, unknowable force. But when we look at individual traits and how they appear and disappear in clever ways, the functioning of cause and effect is clear, and fascinating, to see. People keep poisoning your lake? Well, Mr. Fish, why don’t you develop a resistance to that poison, and pass it down to your kids? Bats keep ignoring your flower and pollinating others? Well, tropical vine, how about evolving an echolocation-reflecting satellite-dish-shaped leaf? We gathered a list of ten evolutions and adaptations that are either new or newly discovered, ranging from plants to animals to, yes, people. We’re not perfect, either.

Click to launch a list of ten amazing evolutions.

A note: these examples span a few different types of changes, including individual mutations (as with the humans), learned behaviors (as with the Muscovite dogs), new adaptations (as with the cave fish) and newly discovered evolutions (as with the satellite-dish-shaped leaf). Think of this as more of an overview of how things can change rather than any particular argument.

The Perfect Bird Perch University of Toronto

Babiana ringens, a South African flowering plant locally known as the Rat’s Tail, shows a very particular evolution to invite pollinating birds to dip their beaks into its flowers: a specialized bird perch. B. ringens‘s flowers grow on the ground, which could mean it garners less attention from birds that don’t wish to hang around in that dangerous spot for too long. To entice the Malachite sunbird, the plant has evolved to grow a firm stalk in a perfect perching position for feeding. This one is interesting because the very same plant shows a distinct difference depending on where it is, according to University of Toronto researchers–when it relies on the sunbird for pollination, it grows a long and appealing stalk (quiet, guys), while in areas with lots of potential pollinators, that stalk has shrunk over many generations of less use. But the stalk is still a major advantage for the plant–plants without the stalk, whether it was broken off or whatever, produce only half as many seeds as those with an intact stalk.

A Mouse Immune to Mouse Poison Rama via Wikimedia Commons

Earlier this summer, we stumbled upon this newly-poison-resistant house mouse, which can now survive some of mankind’s deadliest rodenticide thanks to some very recent hybridization-as-evolution. Warfarin, a common mouse poison, works on most species of mouse, including the common house mouse, but it doesn’t work on the Algerian mouse, a separate though closely related species found on the Mediterranean coast. The two mouse species would never normally have met, but human travel introduced them, and the inevitable hybrid mouse began popping up in Germany, safe and sound–due to this new beneficial trait.

An Echo-Acoustic Flower to Attract Bats Courtesy Korinna M. Koch

We’re not the only ones who love bats–turns out the Cuban rainforest vine Marcgravia evenia works pretty hard to get their attention, too. In a recently discovered (though not recently developed) evolution, M. evenia‘s leaves have a distinct concave shape that works like a little satellite dish. Why? To send back a strong signal when bombarded with echolocation from bats. That makes the flower uniquely recognizable to our flying mammal friends, who often rely on echolocation to make up for their poor eyesight. The design isn’t great for photosynthesis, but apparently the benefits outweigh the negatives.

An Evolved Bedbug, Every New Yorker’s Worst Enemy Volker Steger/Photo Researchers

The most feared and panic-causing insect in New York isn’t the cockroach–it’s the bedbug. In the late 1990s, after a half-century of “relative inactivity,” as we noted back in May, the bedbug suddenly reappeared, stronger than ever. Turns out the bedbug had evolved in ways that make it much harder to eradicate, including a thick, waxlike exoskeleton that repels pesticides, a faster metabolism to create more of the bedbug’s natural chemical defenses, and dominant mutations to block search-and-destroy pyrethroids. You almost have to admire the little monsters.

Adapting to Radiation Wikimedia Commons

A few weeks ago, we found an example of evolution in action: evolution at the cellular level, and within humans to boot. A small study of cardiologists, who use x-rays very frequently in their work, found that the doctors did have higher-than-normal levels of hydrogen peroxide in the blood, a development that could serve as a warning signal for potential carcinogens down the road. But they also found that that raised level of hydrogen peroxide triggered production of an antioxidant called glutathione, a protector of cells. Essentially, these doctors are developing protections against the hazards of their jobs from the inside out, starting deep down inside the cells. It’s an amazing story–read more about it here.

Moscow’s Dogs Adapt to Ride the Subway Maxim Marmur, via The Financial Times

Moscow has a serious stray dog problem. For every 300 Muscovites (we might have guessed “Moswegian” for the demonym, but nope), there’s one stray dog, enough that a researcher at the A.N. Severtsov Institute of Ecology and Evolution, Andrei Poyarkov, has been analyzing them from an evolutionary perspective. Poyarkov has separated the dogs into four personality types, ranging from a reversion to wolf-like qualities to a specialized “beggar” type. That latter type is particularly interesting, as it’s a totally new set of behaviors: beggar dogs understand which humans are most likely to give them food, and have even evolved the ability to ride the subway, incorporating multiple stops into their territory. You can read more about Moscow’s dogs here.

Cane Toads Show Surprisingly Unhealthy Evolution Wikimedia Commons

The tale of Australia’s cane toads is tragic and mysterious in equal measure. Introduced in 1935 to control the native cane beetle, which was gnawing up the country’s crops, the toads pretty much immediately began breeding like Mogwai and eating up everything in sight, dooming many indigenous species. While the toads spread over most of northeast Australia, researchers began noting something very strange: the toads were mutating to have a very particular set of characteristics: longer legs, greater endurance, more speed. Those mutations allowed the newly evolved cane toads to move faster and spread further, but here’s the thing: it actually made them less healthy. The faster toads had the highest mortality rates, and often developed spinal problems. So what was the point of that evolution? After analyzing the environment, researchers came up with a new term for this kind of natural selection: spatial sorting. The idea is that the faster a toad could move, thus expanding the cane toad’s territory, the easier time it would have attracting a mate–even though the toads were less healthy, and even though there was no real need to keep expanding (certainly not a lack of food). The researchers describe it as “not as important as Darwinian processes but nonetheless capable of shaping biological diversity by a process so-far largely neglected.” [Wired]

A Rootworm That’s Immune to Rootworm Poison USDA

It’s a pretty bad sign when you develop a genetically modified type of corn, weathering all the usual complaints about safety and playing God and all that, all to avoid having your crop gnawed on by a certain kind of pesky bug, only to find that, well, the bug has mutated. That happened to Monsanto (the GM corn maker) and the western corn rootworm (the bug, pictured above in its adult stage). Rootworms developed a natural resistance to the pesticide inherent in Monsanto’s genetically modified corn very quickly. We wrote: “The corn seed also contains a gene that produces a crystalline protein called Cry3Bb1, which delivers an unpleasant demise to the rootworm (via digestive tract destruction) but otherwise is harmless to other creatures (we think).” But an Iowa State University research paper described an instance of the rootworm developing an effective resistance to that protein, raising concerns that the rootworm is flexible enough to respond to all kinds of genetic protection.

Why Do Dogs Bark? Wikimedia Commons

We tend to take it for granted that a dog barks–but in the wild, canines hardly ever do, instead whining or yipping or howling. A few studies have examined why this is, and the current conclusion is that dogs bark, well, for us. That conclusion comes in a bit of a roundabout way: Csaba Molnar’s studies show both that a dog’s bark contains information, and that humans can understand that information. Despite dog owners’ insistence otherwise, they typically cannot tell their own dog’s bark apart from the bark of a different dog of the same breed. But humans can quite easily distinguish “alarm” barks from “play” barks, and spectrum analysis shows that alarm barks tend to be very similar to each other, and very distinct from other types of barks. Evolutionarily speaking, dogs are not very far removed from their wild cousins, perhaps 50,000 years, so Molnar’s theory (and the generally accepted theory, to be fair; check out this great New Yorker piece for more) is that wild dogs and wolves were selectively bred for particular traits, one of which may have been the willingness to bark.

Surviving Religion Mona Lisa Productions

Every year, the Zoque people of southern Mexico dump a toxic paste made from the root of the barbasco plant into their local sulfur cave as part of a religious ceremony, praying for rain. The paste is highly toxic to Poecilia mexicana, a small cave fish closely related to the guppy, which is the point of the ceremony. The fish die, the Zoque eat the fish, and hopefully southern Mexico gets some rain. The Mexican government has actually banned this practice, due to that whole massive slaughter of fish thing, but if they had waited a little while longer, they may not have needed to. P. mexicana has actually begun evolving to resist the toxin, according to a paper published last year in the journal Biology Letters. A team of researchers found that some fish somehow managed to survive the wholesale attack, and that even the ones that succumbed seemed to be surviving longer than that species normally would. They tested the fish found in this cave against fish of the same type found elsewhere, and discovered that the cave fish have evolved a resistance to the toxin, surviving around 50 percent longer than the non-cave fish. As a side note, this Livescience article on the subject notes that the fish taste pretty awful.


Can there be significant new changes in physical features of Humans due to evolution in 10000 years of span? - Biology

Humans are “artificial apes,” as one modern anthropologist put it, highlighting the role of technology in the development of human society. From the earliest beginnings of humanity, technological innovation and biological evolution have been dialectically linked in an intricate web of reciprocal determination.

Selective pressures triggered by the development of tools and other aspects of culture have prompted biological changes, not only in obvious features such as the hand and the brain, but in many other human physical characteristics. At the same time, biological changes, such as the elaboration of brain architecture (permitting increasingly sophisticated abstract thought) and increased manual dexterity (e.g., the fully opposable thumb and other changes in wrist and hand bones) have facilitated and promoted cultural innovation. Several recently published scientific articles elucidate this complex process.

The development of agriculture (the domestication of select plants and animals) was the most profound cultural innovation that humans have accomplished since the initial development of tools. Evidence of domestication begins to appear in the archeological record following the end of the last Ice Age (the end of the Pleistocene, roughly 10,000-12,000 years ago), though the process may have begun earlier, at least in some areas.

During the great majority of human existence, even if we only count the span of modern humans (dating back 200,000 years at most), people lived off naturally occurring resources, by hunting animals and gathering plant foods. This economic system is generally known as hunting and gathering or foraging. The independent development of agriculture more or less simultaneously (compared to the time frame of human existence) in a number of regions of the world, was a truly revolutionary change, and strongly suggests that some global process was at work. While much remains to be learned about the mechanisms that accomplished this change, the consequences were many and varied.

The most significant among these was the ability to produce a surplus of food beyond the immediate needs of daily subsistence. Some hunter-gatherer groups, such as those harvesting large annual fish runs on the northwest coast of North America, could amass and store food surpluses, but the quantities were limited by the natural abundance, timing, and geographic location of the resource, which could not be manipulated. By contrast, plant and animal husbandry, the care and controlled breeding of selected species, led to genetic changes that allowed greater yields and increased the geographic ranges across which the domesticated species could be grown, among other changes, thus greatly expanding the potential food resources available to humans.

These developments produced a revolution in human life. Most notably, the increase in abundance and reliability of food allowed human groups increased sedentism. Communities that had hitherto been relatively small in size and forced to make seasonal moves across the landscape to follow the shifting availability of naturally occurring resources could now stay in one place for long periods and grow in population size. In turn, this permitted the elaboration of the division of labor. Specialization further promoted technological innovation. And, while some limited social stratification existed among certain hunter-gatherers (like the native peoples of the northwest coast referenced above), the development of agriculture greatly amplified such tendencies and, ultimately, led to the formation of full-fledged class divisions.

These changes also had consequences for human biology. Alterations in diet led to nutritional deficiencies, increases in tooth decay, and other problems; shifts in the patterns of labor, increased exposure to diseases (due to living in larger settlements), and the effects of living in new climates, among others, resulted in evolutionary changes, in reaction to, but also in some ways enhancing, humans' ability to live under the new conditions brought about by agriculture.

Several studies published over the past year highlight increases in the understanding of this complex process.

Research published in the Proceedings of the National Academy of Sciences (Timothy M. Ryan and Colin N. Shaw, “Gracility of the modern Homo sapiens skeleton is the result of decreased biomechanical loading,” PNAS vol. 112 no. 2, 13 Jan 2015) examined the relative massiveness (gracility vs. robusticity) of the human skeleton before and after the advent of agriculture and contrasted these with a variety of living primates. The study compared bone density in the hip joints of specimens from 31 extant primate taxa with human remains from four separate archaeological populations including both hunter-gatherers and sedentary agriculturalists. All the human populations whose remains were examined were from Native American sites in eastern North America.

The study showed that hunter-gatherers, living about 7,000 years ago, had bone strength (the ability to withstand breakage) proportionally similar to that seen in the sample of modern primates. By contrast, agriculturalists, living 6,000 years later, had significantly lighter and weaker bones, more susceptible to breakage. Their bone mass was 20 percent less than that of their predecessors. These findings suggest that the decreased skeletal robusticity in recent humans is not the result of bipedality (walking on two limbs rather than four, which occurred millions of years ago), but rather has to do with the development of agriculture.

The researchers reviewed data to examine whether changes in diet, like reduced calcium intake, between hunter-gatherers and agriculturalists may be the primary reason for differences in bone density. They conclude, however, that it is principally changes in the pattern of physical activity, from highly mobile foragers to relatively sedentary agriculturalists, that explain these differences.

These findings do not imply that farmers work less than foragers. Indeed, anthropological research has shown that at least some foragers have more free time than agriculturalists. One distinction may be the necessity for frequent movement from one settlement to another by the former. This is supported by the results of another study, published in the same issue of PNAS (Habiba Chirchir, et al., “Recent origin of low trabecular bone density in modern humans,” PNAS vol. 112 no. 2, 13 Jan 2015), which demonstrates that changes in bone density were more marked in the lower limbs than in the upper. It reviewed hominin fossils from a number of extinct species, stretching back to Australopithecus africanus, demonstrating that high bone densities were maintained throughout the span of human evolution until the development of agriculture. This raises the question of whether changes in anatomy aside from bone density may be identifiable as resulting from activities characteristic of an agricultural existence.

The results are important in understanding the evolutionary context of such diseases as osteoporosis and geriatric bone loss in contemporary populations.

Another study, this one published in the journal Nature (Iain Mathieson et al., “Genome-wide patterns of selection in 230 ancient Eurasians,” Nature, 16152, 23 November 2015), uses ancient DNA to trace the arrival of the first farmers from the Near East into Europe and examine a number of genetic changes experienced by the immigrants. The adaptations include changes in height, digestion, the immune system, and skin color.

DNA recovered from samples of ancient human bone provides a new source of data, supplementing archaeological artifacts, anatomical studies of human skeletons, and studies of DNA from contemporary human populations, to examine the introduction of agriculture into Europe. In particular, ancient DNA provides a more direct view of the evolutionary changes that humans underwent as they and their recently developed agricultural technology adapted to a new environment.

Modern humans moved into Europe from the Near East sometime between 40,000 and 50,000 years ago, absorbing and/or displacing the existing Neanderthal inhabitants. Both populations had hunting and gathering economies. Then, about 8,500 years ago, new immigrants, also from the Near East, began spreading into Europe. This time, however, they brought with them a revolutionary new economic system—agriculture. Another wave of agriculturalists moved into Europe from the Russian steppes about 4,500 years ago.

The study reported in Nature compared ancient DNA from Europe, Turkey, and Russia with that from modern populations.

Foragers, who rely on naturally occurring foods, tend to have a varied diet in order to cover their nutritional needs. Agriculturalists, on the other hand, focus on a relatively narrow range of plant and/or animal species, perhaps supplemented by some wild food resources. This more limited diet may not meet all dietary requirements or may predominantly rely on foods that, while conducive to domestication, may not be easily digested. Dairy products and wheat are examples.

The consumption of milk and milk products is not natural for adult mammals. The capacity to digest lactose, a milk sugar, exists in infant mammals, but is usually lost once they are weaned. The domestication of a number of larger mammals, including sheep, goats, and cattle, presented the possibility of using their milk as a food source, converting grass, an abundant resource, but indigestible to humans, into a new food source. However, since hunter-gatherers do not typically consume milk, the widespread lactose-intolerance in adult humans was a major problem for early farmers who sought to employ this food source.

One of the results of the Nature study indicates that a gene that allows lactose digestion to continue into adulthood appears to have taken thousands of years to become widespread in European populations, despite its apparent selective advantage, only beginning to appear about 4,000 years ago. This raises the question of whether technological adaptations, such as the production of aged cheese, which has less lactose, may have allowed for the use of milk products in earlier times.
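For a back-of-the-envelope sense of why such a spread can take thousands of years even for an advantageous gene, here is a minimal deterministic sketch. The selection coefficient, dominance assumption, starting frequency, and 25-year generation time are illustrative assumptions, not estimates from the study:

```python
# Deterministic one-locus selection sketch: how long for a dominant allele A
# (genotype fitnesses AA = Aa = 1 + s, aa = 1) to go from rare to common?
# All parameters are illustrative assumptions.

def years_to_common(s=0.02, p=0.001, target=0.5, years_per_generation=25):
    generations = 0
    while p < target:
        # Standard single-generation update of allele frequency under selection.
        w_bar = (p**2 + 2 * p * (1 - p)) * (1 + s) + (1 - p) ** 2
        p = (p**2 + p * (1 - p)) * (1 + s) / w_bar
        generations += 1
    return generations * years_per_generation

print(years_to_common())  # ~400 generations, i.e. on the order of 10,000 years
```

Even a steadily favored allele can take millennia to become common under modest selection, so a long lag between a gene's appearance and its ubiquity is not in itself surprising.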

Another gene was identified that enhances the ability to absorb an important amino acid, ergothioneine, which exists in low amounts in wheat and other domesticated grains. The spread of such a gene would represent a distinct advantage for diets that focused on grains as a food source. However, the effects of genes are often complex, and sometimes have unexpected consequences. This same gene appears to raise the risk of digestive disorders, such as irritable bowel syndrome. Evolutionary adaptations often represent a dynamic balance between positive and negative effects.

The researchers also found evidence regarding an evolutionary change in skin color. The predominance of lightly colored skin among Europeans appears to be a relatively recent phenomenon, possibly related to the need to produce more Vitamin D, which can occur in a reaction caused by sunlight absorbed in the skin. Lighter colored skin is thought to facilitate this process.

The study concludes that modern Europeans have significant genetic differences with early Neolithic populations of the region, despite having a largely common ancestry. The authors propose that these differences reflect evolutionary adaptations to the adoption of an agricultural lifestyle in a new environment as well as successive waves of immigration.

These findings are valuable in that they reinforce our understanding that human physical evolution is a complex and dynamic process of dialectical interaction with the natural and cultural environment. In a very real sense, the development of agriculture involved not only the domestication of a range of plants and animals by humans, but, as part of that process, the transformation of the humans themselves.


Human Evolution and the Anthropocene

Changing climate is not a unique feature of the Anthropocene. Earth’s environments have been in a constant state of creation, destruction, and change for the planet’s entire history. The last six million years (when hominins began to appear in the fossil record) were particularly volatile and saw many different shifts in environments. The key to human survival in these settings was an extraordinary ability of our ancestors to alter their behavior and the world around them. Our success in these times was largely due to the evolution over time of a number of traits that allowed us to be more adaptable to a large variety of environmental conditions.

© Copyright Smithsonian Institution, Human Origins Program The layers of sediment visible on this hillside in the Rift Valley of southern Kenya illustrate change in the environmental conditions faced by human ancestors around 1 million years ago.

The first bipedal hominins were able to live both on the ground and in trees, which gave them an advantage as the habitat oscillated between forests and grasslands. The ability of early humans to make and use tools, including the control of fire, allowed them to more easily access food by scraping meat off of bones more efficiently, crushing bones for the marrow inside, and obtaining new plant foods such as nutritious tubers and roots from underground. Tool use also enabled early hominins to diversify their diet, so they had plenty of options when certain plants and animals went extinct. And with a larger and more complex brain, early humans gained the capacity for everything from language to creative problem-solving. When humans began to expand out of Africa and into the rest of the world, they moved everywhere from mountains thousands of feet above sea level to blazing hot and extremely arid deserts, displaying an astonishing ability to adapt to the wide diversity of Earth’s environments.

© Copyright Smithsonian Institution, Human Origins Program These objects found in Africa illustrate the many thousands of clues discovered about human origins, including use of tools and symbols, increasing brain size, and footprints indicating walking upright.

© Copyright Smithsonian Institution, Human Origins Program Reconstruction of close evolutionary cousins, Neanderthals (Homo neanderthalensis), based on the skull from Shanidar 1, Iraq. (Artwork by John Gurche)

Other species in our evolutionary tree had features that were more specialized to one particular environment, and they were very successful for long periods of time in those environments. Yet these localized features restricted their ability to live in new conditions, limiting how effectively they could inhabit new geographic zones or adjust to unusual climatic shifts. If they were unable to adapt to new conditions or change their location significantly, they died out. A good example of this is the Neanderthals, or Homo neanderthalensis. Members of this species had bodies that were well suited for cold climates: their short, stocky bodies, large noses, and ability to make clothing were all specialized features for successful living in the cold. In contrast, Homo sapiens had an extremely enhanced ability to adapt their behavior to new surroundings, despite having physical features more suited for an African climate. It became particularly difficult for Neanderthals to compete with the innovative Homo sapiens, and with a geographic range limited by their specialization to cold, they eventually went extinct. While Neanderthals and all other early human species exhibited some of the human characteristics of adaptability, Homo sapiens distinguish themselves with an extreme reliance on altering their landscapes and themselves for survival.

© Copyright Smithsonian Institution, Human Origins Program A chart describing the relationship between early human lineages, technological innovation, and periods of strong climate variability in East Africa.

The volatility of past climates does not diminish the effects of human activity in the Anthropocene. The types of changes that we have seen in the last two hundred years are far outside the range of variability we see in the past. Examining the Anthropocene through the lens of our evolutionary history shows us that the themes of resilience and adaptability are critical to the history of our species in the past and in the Anthropocene. These distinctive traits of our lineage have created a human species that is defined by its ability to alter its behavior and environment as a mode of survival. These themes are critical to understanding how the Anthropocene has come to be, and how we will survive into the future.


  • Images show what some dog breeds looked like before selective breeding
  • Many breeds have been made more disease-prone than before
  • Humans have designed at least 167 different breeds with unique physical and mental traits
  • Humans began a relationship with dogs some 18,000 to 30,000 years ago

Published: 22:02 BST, 7 March 2016 | Updated: 10:00 BST, 8 March 2016

They may be man's best friend, but man has also changed them beyond all recognition, these incredible pictures of dog breeds reveal.

But just as we have modified food to taste better, we have also bred dogs to have unique physical and mental traits.

A new series of pictures shows how humans' obsession with creating the perfect canine has shaped certain breeds into being almost unrecognizable from hundreds of years ago - and introduced painful diseases in the process.

Humans were domesticating dogs before they learned how to farm. But with our obsession with creating a perfect breed, today's dogs are almost unrecognizable from their early ancestors. Here, the English bulldog is said to be the most changed dog from its ancestors, as it has endured so much breeding that it suffers from almost every disease possible.

COMMON PEDIGREE PROBLEMS

Some breeds are particularly susceptible to certain hereditary defects and illnesses.

For example, retrievers are prone to inherited eye disorders such as juvenile cataracts.

Chronic eczema is common among German Shepherds, Healthline reported.

Dogs including the Shar Pei and Basset Hound can be bred for folded or droopy skin that can interfere with their vision.

Jack Russells are prone to glaucoma, which may result in a gradual loss of vision.

Irish Setters can have a serious hereditary neurological disorder known as quadriplegia.

The leading causes of death among English Bulldogs are cardiac arrest, cancer, and old age.

The Pug's curled tail is actually a genetic defect that leads to paralysis.

Dachshunds are prone to achondroplasia-related pathologies, PRA (progressive retinal atrophy), and problems with their legs.

Dogs with the condition sometimes have trouble standing up and suffer seizures.

By identifying which traits are the strongest and best looking, such as size, coat, and demeanor, we have designed at least 167 different breeds with unique physical and mental characteristics, according to the Science of Dogs.

This breeding is slowly mutating and disfiguring dogs and some of these changes have caused these animals unbearable pain.

The pressure to create the perfect canine derives from American Kennel Club standards, which are the official guidelines for show dogs.

These standards can cover everything from the colour of a dog's eyes and the size of its paws to the curve of its tail.

'Nowadays, many breeds are highly inbred and express an extraordinary variety of genetic defects as a consequence: defects ranging from anatomical problems, like hip dysplasia, that cause chronic suffering, to impaired immune function and loss of resistance to fatal diseases like cancer,' James A. Serpell, a professor of Animal Ethics and Welfare at the University of Pennsylvania's School of Veterinary Medicine, told WhoWhatWhy.

'The only sensible way out of this genetic dead-end is through selective out-crossing with dogs from other breeds, but this is considered anathema by most breeders since it would inevitably affect the genetic 'purity' of their breeds.'

Most of the present day dog breeds can only be traced back about 150 years when the breeds were first registered and codified during the Victorian Era in England, reports Tech Insider.


MENTAL HEALTH

Mental illness is a serious public health issue. It has been estimated that by 2010 mental illness would account for 15 percent of the global burden of disease (Biddle and Mutrie, 2008; Biddle and Asare, 2011). Young people are disproportionately affected by depression, anxiety, and other mental health disorders (Viner and Booy, 2005; Biddle and Asare, 2011). Approximately 20 percent of school-age children have a diagnosable mental health disorder (U.S. Public Health Service, 2000), and overweight children are at particular risk (Ahn and Fedewa, 2011). Mental health naturally affects academic performance on many levels (Charvat, 2012). Students suffering from depression, anxiety, mood disorders, and emotional disturbances perform more poorly in school, exhibit more behavioral and disciplinary problems, and have poorer attendance relative to mentally healthy children. Thus it is in schools' interest to take measures to support mental health among the student population. In addition to other benefits, providing adequate amounts of physical activity in a way that is inviting and safe for children of all ability levels is one simple way in which schools can contribute to students' mental health.

Impact of Physical Activity on Mental Health

Several recent reviews have concluded that physical activity has a positive effect on mental health and emotional well-being for both adults and children (Peluso and Guerra de Andrade, 2005; Penedo and Dahn, 2005; Strong et al., 2005; Hallal et al., 2006; Ahn and Fedewa, 2011; Biddle and Asare, 2011). Numerous observational studies have established the association between physical activity and mental health but are inadequate to clarify the direction of that association (Strong et al., 2005). It may be that physical activity improves mental health, or it may be that people are more physically active when they are mentally healthy. Most likely the relationship is bidirectional.

Several longitudinal and intervention studies have clarified that physical activity positively impacts mental health (Penedo and Dahn, 2005; Strong et al., 2005). Physical activity has most often been shown to reduce symptoms of depression and anxiety and improve mood (Penedo and Dahn, 2005; Dishman et al., 2006; Biddle and Asare, 2011). In addition to reducing symptoms of depression and anxiety, studies indicate that regular physical activity may help prevent the onset of these conditions (Penedo and Dahn, 2005). Reductions in depression and anxiety are the commonly measured outcomes (Strong et al., 2005; Ahn and Fedewa, 2011). However, reductions in states of confusion, anger, tension, stress, anxiety sensitivity (a precursor to panic attacks and panic disorders), posttraumatic stress disorder/psychological distress, emotional disturbance, and negative affect have been observed, as well as increases in positive expectations; fewer emotional barriers; general well-being; satisfaction with personal appearance; and improved life satisfaction, self-worth, and quality of life (Heller et al., 2004; Peluso and Guerra de Andrade, 2005; Penedo and Dahn, 2005; Dishman et al., 2006; Hallal et al., 2006; Ahn and Fedewa, 2011; Biddle and Asare, 2011). Among adolescents and young adult females, exercise has been found to be more effective than cognitive-behavioral therapy in reducing the pursuit of thinness and the frequency of bingeing, purging, and laxative abuse (Sundgot-Borgen et al., 2002; Hallal et al., 2006). The favorable effects of physical activity on sleep may also contribute to mental health (Dishman et al., 2006).

The impact of physical activity on these measures of mental health is moderate, with effect sizes generally ranging from 0.4 to 0.7 (Biddle and Asare, 2011). In one meta-analysis of intervention trials, the RCTs had an effect size of 0.3, whereas other trials had an effect size of 0.57.
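For readers unfamiliar with the metric, these effect sizes are standardized mean differences (Cohen's d): the gap between group means divided by the pooled standard deviation. Here is a minimal sketch with made-up symptom scores; the numbers are illustrative, not data from any cited trial:

```python
# Cohen's d from two independent samples, using the pooled standard deviation.
from statistics import mean, stdev

def cohens_d(group_a, group_b):
    n_a, n_b = len(group_a), len(group_b)
    pooled_var = ((n_a - 1) * stdev(group_a) ** 2 +
                  (n_b - 1) * stdev(group_b) ** 2) / (n_a + n_b - 2)
    return (mean(group_a) - mean(group_b)) / pooled_var ** 0.5

# Hypothetical depression-symptom scores (lower is better) after an intervention.
control  = [13, 11, 15, 10, 12, 14]
exercise = [12, 10, 14, 9, 11, 13]
print(round(cohens_d(control, exercise), 2))  # 0.53 -- a moderate effect
```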

Ideal Type, Length, and Duration of Physical Activity

Intervention trials that examine the relationship between physical activity and mental health often fail to specify the exact nature of the intervention, making it difficult to determine the ideal frequency, intensity, duration, and type of physical activity involved (Penedo and Dahn, 2005; Ahn and Fedewa, 2011; Biddle and Asare, 2011).

Many different types of physical activity—including aerobic activity, resistance training, yoga, dance, flexibility training, walking programs, and body building—have been shown to improve mood and other mental health indicators. The evidence is strongest for aerobic physical activity, particularly for reduction of anxiety symptoms and stress (Peluso and Guerra de Andrade, 2005; Dishman et al., 2006; Martikainen et al., 2013), because more of these studies have been conducted (Peluso and Guerra de Andrade, 2005). One meta-analysis of RCTs concluded that physical activity interventions focused exclusively on circuit training had the greatest effect on mental health indicators, followed closely by interventions that included various types of physical activity (Ahn and Fedewa, 2011). Among studies other than RCTs, only participation in sports had a significant impact on mental health (Ahn and Fedewa, 2011). The few studies that investigated the impact of vigorous- versus lower-intensity physical activity (Larun et al., 2006; Biddle and Asare, 2011) found no difference, suggesting that perhaps all levels of physical activity may be helpful. Among adults, studies have consistently shown beneficial effects of both aerobic exercise and resistance training. Ahn and Fedewa (2011) concluded that both moderate and intense physical activity have a significant impact on mental health, although when just RCTs were considered, only intense physical activity was significant (Ahn and Fedewa, 2011). While physical activity carries few risks for mental health, it is important to note that excessive physical activity or specialization too early in certain types of competitive physical activity has been associated with negative mental health outcomes and therefore should be avoided (Peluso and Guerra de Andrade, 2005; Hallal et al., 2006). Furthermore, to reach all children, including those who may be at highest risk for inactivity, obesity, and mental health problems, physical activity programming needs to be nonthreatening and geared toward creating a positive experience for children of all skill and fitness levels (Amis et al., 2012).

Various types of physical activity programming have been shown to have a positive influence on mental health outcomes. Higher levels of attendance and participation in physical education are inversely associated with feelings of sadness and risk of considering suicide (Brosnahan et al., 2004). Classroom physical activity is associated with reduced use of medication for attention deficit hyperactivity disorder (Katz et al., 2010). And participation in recess is associated with better student classroom behavior, better focus, and less fidgeting (Pellegrini et al., 1995; Jarrett et al., 1998; Barros et al., 2009).

Strong evidence supports the short-term benefits of physical activity for mental health. Acute effects can be observed after just one episode and can last from a few hours up to a day afterward. Body building may have a similar effect, which begins a few hours after the end of the exercise. The ideal length and duration of physical activity for improving mental health remain unclear, however. Regular exercise is associated with improved mood, but results are inconsistent for the association between mood and medium- or long-term exercise (Dua and Hargreaves, 1992; Slaven and Lee, 1997; Dimeo et al., 2001; Dunn et al., 2001; Kritz-Silverstein et al., 2001; Sexton et al., 2001; Leppamaki et al., 2002; Peluso and Guerra de Andrade, 2005). Studies often do not specify the frequency and duration of physical activity episodes; among those that do, interventions ranged from 6 weeks to 2 years in duration. In their meta-analysis, Ahn and Fedewa (2011) found that, comparing interventions entailing a total of more than 33 hours, 20-33 hours, and less than 20 hours, the longer programs were more effective. Overall, the lack of reporting and the variable length and duration of reported interventions make it difficult to draw conclusions regarding dose (Ahn and Fedewa, 2011).

In addition to more structured opportunities, naturally occurring physical activity outside of school time is associated with fewer depressive symptoms among adolescents (Penedo and Dahn, 2005). RCTs have demonstrated that physical activity involving entire classrooms of students is effective in alleviating negative mental health outcomes (Ahn and Fedewa, 2011). Non-RCT studies have shown individualized approaches to be most effective and small-group approaches to be effective to a more limited extent (Ahn and Fedewa, 2011). Interventions have been shown to be effective in improving mental health when delivered by classroom teachers, physical education specialists, or researchers but may be most effective when conducted with a physical education specialist (Ahn and Fedewa, 2011). Many physical activity interventions include elements of social interaction and support; however, studies to date have been unable to distinguish whether the physical activity itself or these other factors account for the observed effects on mental health (Hasselstrom et al., 2002; Hallal et al., 2006). Finally, a few trials (Larun et al., 2006; Biddle and Asare, 2011) have compared the effects of physical activity and psychosocial interventions, finding that physical activity may be equally effective but may not provide any added benefit.

Subgroup Effects

Although studies frequently fail to report the age of participants, data on the effects of physical activity on mental health are strongest for adults participating in high-intensity physical activity (Ahn and Fedewa, 2011). However, evidence relating physical activity to various measures of mental health has shown consistent, significant effects on individuals aged 11-20. A large prospective study found that physical activity was inversely associated with depression in early adolescence (Hasselstrom et al., 2002; Hallal et al., 2006); fewer studies have been conducted among younger children. Correlation studies have shown that the association of physical activity with depression is not affected by age (Ahn and Fedewa, 2011).

Few studies have examined the influence of other sociodemographic characteristics of participants on the relationship between physical activity and mental health (Ahn and Fedewa, 2011), but studies have been conducted in populations with diverse characteristics. One study of low-income Hispanic children randomized to an aerobic intensity program found that the intervention group was less likely to present with depression but did not report reduced anxiety (Crews et al., 2004; Hallal et al., 2006). A study that included black and white children (aged 7-11) found that a 40-minute daily dose of aerobic exercise significantly reduced depressive symptoms and increased physical appearance self-worth in both black and white children and increased global self-worth in white children compared with controls (Petty et al., 2009). Physical activity also has been positively associated with mental health regardless of weight status (normal versus overweight) or gender (male versus female) (Petty et al., 2009; Ahn and Fedewa, 2011); however, results are stronger for males (Ahn and Fedewa, 2011).

Improvements in mental health as a result of physical activity may be more pronounced among clinically diagnosed populations, especially those with cognitive impairment or posttraumatic stress disorder (Craft and Landers, 1998; Ahn and Fedewa, 2011; Biddle and Asare, 2011). Evidence is less clear for youth with clinical depression (Craft and Landers, 1998; Larun et al., 2006; Biddle and Asare, 2011). Individuals diagnosed with major depression undergoing an intervention entailing aerobic exercise have shown significant improvement in depression and lower relapse rates, comparable to results seen in participants receiving psychotropic treatment (Babyak et al., 2000; Penedo and Dahn, 2005). One program for adults with Down syndrome providing three sessions of exercise and health education per week for 12 weeks resulted in more positive expectations, fewer emotional barriers, and improved life satisfaction (Heller et al., 2004; Penedo and Dahn, 2005). Ahn and Fedewa (2011) found that, compared with nondiagnosed individuals, physical activity had a fivefold greater impact on those diagnosed with cognitive impairment and a twofold greater effect on those diagnosed with emotional disturbance, suggesting that physical activity has the potential to improve the mental health of those most in need.

In sum, although more studies are needed, and there may be some differences in the magnitude and nature of the mental health benefits derived, it appears that physical activity is effective in improving mental health regardless of age, ethnicity, gender, or mental health status.

Sedentary Behavior

Sedentary behavior also influences mental health. Screen viewing in particular and sitting in general are consistently associated with poorer mental health (Biddle and Asare, 2011). Children who watch more television have higher rates of anxiety, depression, and posttraumatic stress and are at higher risk for sleep disturbances and attention problems (Kappos, 2007). Given the cross-sectional nature of these studies, however, the direction of these associations cannot be determined. A single longitudinal study found that television viewing, but not playing computer games, increased the odds of depression after 7-year follow-up (Primack et al., 2009; Biddle and Asare, 2011), suggesting that television viewing may contribute to depression. Because of design limitations of the available studies, it is unclear whether this effect is mediated by physical activity.

Television viewing also is associated with violence, aggressive behaviors, early sexual activity, and substance abuse (Kappos, 2007). These relationships are likely due to the content of the programming and advertising as opposed to the sedentary nature of the activity. Television viewing may affect creativity and involvement in community activities as well; however, the evidence here is very limited (Kappos, 2007). Studies with experimental designs are needed to establish a causal relationship between sedentary behavior and mental health outcomes (Kappos, 2007).

Although the available evidence is not definitive, it does suggest that sedentary activity and television viewing in particular can increase the risk for depression, anxiety, aggression, and other risky behaviors and may also affect cognition and creativity (Kappos, 2007), all of which can affect academic performance. It would therefore appear prudent for schools to reduce these sedentary behaviors during school hours and provide programming that has been shown to be effective in reducing television viewing outside of school (Robinson, 1999 Robinson and Borzekowski, 2006).

Mechanisms

It is not surprising that physical activity improves mental health. Both physiological and psychological mechanisms explain the observed associations. Physiologically, physical activity is known to increase the synaptic transmission of monoamines, an effect similar to that of anti-depressive drugs. Physical activity also stimulates the release of endorphins (endogenous opioids) (Peluso and Guerra de Andrade, 2005), which have an inhibitory effect on the central nervous system, creating a sense of calm and improved mood (Peluso and Guerra de Andrade, 2005; Ahn and Fedewa, 2011). Withdrawal of physical activity may result in irritability, restlessness, nervousness, and frustration as a result of a drop in endorphin levels. Although more studies are needed to specify the exact neurological pathways that mediate this relationship, it appears that the favorable impact of physical activity on the prevention and treatment of depression may be the result of adaptations in the central nervous system mediated in part by neurotrophic factors that facilitate neurogenerative, neuroadaptive, and neuroprotective processes (Dishman et al., 2006). It has been observed, for example, that chronic wheel running in rats results in immunological, neural, and cellular responses that mitigate several harmful consequences of acute exposure to stress (Dishman et al., 2006). A recent study found that children who were more physically active produced less cortisol in response to stress, suggesting that physical activity promotes mental health by regulating the hormonal responses to stress (Martikainen et al., 2013).

Psychological mechanisms that may explain why physical activity improves mental health include (1) distraction from unfavorable stimuli, (2) increase in self-efficacy, and (3) positive social interactions that can result from quality physical activity programming (Peluso and Guerra de Andrade, 2005) (see also the discussion of psychosocial health above). The relative contribution of physiological and psychological mechanisms is unknown, but they likely interact. Poor physical health also can impair mood and mental function. Health-related quality of life improves with physical activity that increases physical functioning, thereby enhancing the sense of well-being (McAuley and Rudolph, 1995; HHS, 2008).

Physical activity during childhood and adolescence may not only be important for its immediate benefits for mental health but also have implications for long-term mental health. Studies have shown a consistent effect of physical activity during adolescence on adult physical activity (Hallal et al., 2006). Physical activity habits established in children may persist into adulthood, thereby continuing to confer mental health benefits throughout the life cycle. Furthermore, physical activity in childhood may impact adult mental health regardless of the activity's persistence (Hallal et al., 2006).

Summary

Physical activity can improve mental health by decreasing and preventing conditions such as anxiety and depression, as well as improving mood and other aspects of well-being. Evidence suggests that the mental health benefits of physical activity can be experienced by all age groups, genders, and ethnicities. Moderate effect sizes have been observed among both youth and adults. Youth with the highest risk of mental illness may experience the most benefit. Although evidence is not adequate to determine the ideal regimen, aerobic and high-intensity physical activity are likely to confer the most benefit. It appears, moreover, that a variety of types of physical activity are effective in improving different aspects of mental health; therefore, a varied regimen including both aerobic activities and strength training may be the most effective. Frequent episodes of physical activity are optimal given the well-substantiated short-term effects of physical activity on mental health status. Although there are well-substantiated physiological bases for the impact of physical activity on mental health, physical activity programming that effectively enhances social interactions and self-efficacy also may improve mental health through these mechanisms. Quality physical activity programming also is critical to attract and engage youth of all skill levels and to reach those at highest risk effectively.

Sedentary activity may increase the risk of poor mental health status independently of, or in addition to, its effect on physical activity. Television viewing in particular may lead to a higher risk of such conditions as depression and anxiety and may also increase violence, aggression, and other high-risk behaviors. These impacts are likely the result of programming and advertising content in addition to the physiological effects of inactivity and electronic stimuli.

In conclusion, frequently scheduled and well-designed opportunities for varied physical activity during the school day and a reduction in sedentary activity have the potential to improve students' mental health in ways that could improve their academic performance and behaviors in school.


Homo sapiens – modern humans

All people living today belong to the species Homo sapiens. We evolved only relatively recently, but with complex culture and technology we have been able to spread throughout the world and occupy a range of different environments.

Homo sapiens background

Homo sapiens age

300,000 years ago to present:

  • archaic Homo sapiens from 300,000 years ago
  • modern Homo sapiens from about 160,000 years ago

What the name Homo sapiens means

The name we selected for ourselves means ‘wise human’. Homo is the Latin word for ‘human’ or ‘man’ and sapiens is derived from a Latin word that means ‘wise’ or ‘astute’.

Other Homo sapiens names

Various names have been used for our species including:

  • ‘Cro-Magnon Man’ is commonly used for the modern humans that inhabited Europe from about 40,000 to 10,000 years ago.
  • The term ‘archaic’ Homo sapiens has sometimes been used for African fossils dated between 300,000 and 150,000 years of age that are difficult to classify due to a mixture of modern and archaic features. Some scientists prefer to place these fossils in a separate species, Homo helmei.
  • Homo sapiens sapiens is the name given to our species if we are considered a sub-species of a larger group. This name is used by those who describe the specimen from Herto, Ethiopia as Homo sapiens idaltu, or by those who believe that modern humans and the Neanderthals were members of the same species. (In this scheme the Neanderthals were called Homo sapiens neanderthalensis.)

Major fossil sites of early Homo sapiens

Fossils of the earliest members of our species, archaic Homo sapiens, have all been found in Africa. Fossils of modern Homo sapiens have been found in Africa and in many other sites across much of the world. Sites older than 150,000 years include Florisbad, Omo-Kibish, Ngaloba and Herto. Sites dating to about 100,000 years include Klasies River Mouth, Border Cave, Skhul and Qafzeh. Sites younger than 40,000 years include Dolni Vestonice, Cro-Magnon, Aurignac and Lake Mungo.

Homo sapiens Relationships with other species

Homo sapiens evolved in Africa from Homo heidelbergensis. They co-existed for a long time in Europe and the Middle East with the Neanderthals, and possibly with Homo erectus in Asia and Homo floresiensis in Indonesia, but are now the only surviving human species.

The transition to modern humans

African fossils provide the best evidence for the evolutionary transition from Homo heidelbergensis to archaic Homo sapiens and then to early modern Homo sapiens. There is, however, some difficulty in placing many of the transitional specimens into a particular species because they have a mixture of intermediate features, especially apparent in the sizes and shapes of the forehead, brow ridge and face. Some suggest the name Homo helmei for these intermediate specimens, which represent populations on the brink of becoming modern. Late-surviving populations of archaic Homo sapiens and Homo heidelbergensis lived alongside early modern Homo sapiens before disappearing from the fossil record by about 100,000 years ago. Key specimens that reveal the transition from archaic to modern Homo sapiens include the Florisbad cranium, LH 18 from Laetoli, Omo 1 and 2 from Omo-Kibish, the Herto skull from Ethiopia and Skhul 5 from Israel.

Important specimens: Late early modern Homo sapiens

  • Liujiang – a skull discovered in 1958 in Guangxi province, southern China. Its age is uncertain, but it is at least 15,000 years old. This skull lacks the typically northern Asian features found in modern populations from those regions, lending support to popular theories that such features only arose in the last 8,000 years.
  • Aurignac – a skull discovered in Aurignac, France. The first Aurignac fossils were found accidentally in 1852, when a workman digging a trench in a hillside uncovered a cave that had been blocked by rock; after clearing away the debris he found 17 skeletons. The skeletons were taken to a local cemetery for burial, but later investigations indicated that they were actually up to 10,000 years old.
  • Cro-Magnon 1 – a 32,000-year-old skull discovered in 1868 in Cro-Magnon rockshelter, Les Eyzies, France. This adult male represents the oldest known skull of a modern human from western Europe. Cro-Magnon skeletons have proportions similar to those of modern Africans rather than modern Europeans. This suggests that the Cro-Magnons had migrated from a warmer climate and had a relatively recent African ancestry.

Important specimens: Early modern Homo sapiens

  • Herto – a 160,000-year-old partial skull discovered in 1997 in Herto, Ethiopia. This skull from an adult male, along with those of another adult and a child, was found in 1997 and publicly announced in 2003. They are some of the oldest fossils of modern Homo sapiens yet discovered. Some scientists regard these fossils as a sub-species of modern humans (named Homo sapiens idaltu) because of some slight differences in their skull features. They show a suite of modern human traits mixed with archaic and early modern features. Also of significance are cut marks on the child's skull. These were made while the bone was still fresh, in a manner indicating ritual practice. The skull also appeared 'polished' from repeated handling before it was laid in the ground.
  • Omo 1 – a partial skull discovered in 1967 in Omo-Kibish, Ethiopia. A recently published date of about 195,000 years for this skull is disputed, but it remains one of the oldest known fossils of early modern Homo sapiens. Features showing the transition from archaic to early modern Homo sapiens include a more rounded and expanded braincase and a high forehead. Now dated to the same age as Omo 2, it raises interesting questions about why it appears to have slightly more advanced features than Omo 2. Were they from the same population?
  • Skhul 5 – a 90,000-year-old skull discovered in 1932 in Skhul Cave, Mount Carmel, Israel. This skull of an adult male shows relatively modern features, including a higher forehead, although it retains some archaic features such as a brow ridge and a slightly projecting face. This specimen and others from the Middle East are the oldest known traces of modern humans outside of Africa. They prove that Homo sapiens had started to spread out of Africa by about 100,000 years ago, although these remains may represent a population that did not expand beyond this region, with migrations to the rest of the world occurring later, about 60,000–70,000 years ago.

Important specimens: Archaic Homo sapiens

  • LH 18 – skull discovered in 1976 in Ngaloba, Laetoli, Tanzania. Age is about 120,000 years old (but debated). This skull is transitional between Homo heidelbergensis and early modern Homo sapiens. It has a number of primitive features but also has some modern characteristics such as a reduced brow ridge and smaller facial features. The late date of this specimen indicates that archaic humans lived alongside modern populations for some time.
  • Florisbad – a 260,000-year-old partial cranium discovered in 1932 in Florisbad, South Africa. This skull shows features intermediate between Homo heidelbergensis and early modern Homo sapiens. The face is broad and massive but still relatively flat and the forehead is approaching the modern form.
  • Omo 2 – a 195,000-year-old braincase discovered in 1967 in Omo-Kibish, Ethiopia. Like LH 18, this braincase shows a blend of primitive and modern features that places it as a member of a population transitional between Homo heidelbergensis and early modern Homo sapiens. Its primitive features include a heavier, more robust construction; an angled rather than rounded rear section; and a lower, sloping forehead. Refer to the Omo 1 specimen for interesting comparisons.

Homo sapiens key physical features

Homo sapiens skulls have a distinctive shape that differentiates them from earlier human species. Their body shape tends to vary, however, due to adaptation to a wide range of environments.

Homo sapiens Body size and shape

  • the earliest Homo sapiens had bodies with short, slender trunks and long limbs. These proportions are an adaptation for tropical regions: a greater proportion of skin surface is available for cooling the body (see the sketch after this list). Stockier builds gradually evolved as populations spread to cooler regions, an adaptation that helped the body retain heat.
  • Modern humans now have an average height of about 160 centimetres in females and 175 centimetres in males.
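The cooling argument in the first point is geometric: for the same body volume, a long, narrow trunk exposes more skin per unit of volume than a short, wide one. A minimal sketch in Python, modelling the trunk as a cylinder with made-up (not anatomical) dimensions chosen so that both builds enclose roughly the same volume:

    import math

    def sa_to_vol(radius_m: float, height_m: float) -> float:
        """Surface-area-to-volume ratio of a cylinder (sides plus ends), in 1/m."""
        surface = 2 * math.pi * radius_m * (radius_m + height_m)
        volume = math.pi * radius_m ** 2 * height_m
        return surface / volume

    # Two trunks of roughly equal volume (~64 litres); the numbers are illustrative.
    slender = sa_to_vol(radius_m=0.13, height_m=1.20)  # long, narrow trunk
    stocky = sa_to_vol(radius_m=0.16, height_m=0.79)   # short, wide trunk

    print(f"slender: {slender:.1f} per metre, stocky: {stocky:.1f} per metre")
    # The slender build has the higher ratio (more skin per unit of volume),
    # which favours heat loss in the tropics; the stocky build retains heat.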

Brain

  • Homo sapiens living today have an average brain size of about 1350 cubic centimetres, which makes up about 2.2% of body weight (a quick arithmetic check follows this list). Early Homo sapiens, however, had slightly larger brains at nearly 1500 cubic centimetres.
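As a rough plausibility check on that 2.2% figure: brain tissue is close to water in density, so 1350 cubic centimetres is about 1.35 kilograms, and the percentage then follows from an assumed body mass (the 60 kg used here is illustrative, not a figure from the text above):

    # Brain tissue has a density close to 1 g/cc, so 1350 cc is roughly 1.35 kg.
    brain_kg = 1.35   # from the average brain volume quoted above
    body_kg = 60.0    # illustrative adult body mass (assumption)

    print(f"brain share of body weight: {brain_kg / body_kg:.1%}")  # ~2.2%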

Skull

  • modern Homo sapiens skulls have a short base and a high braincase. Unlike other species of Homo, the skull is broadest at the top. The fuller braincase also results in almost no post-orbital constriction or narrowing behind the eye sockets
  • back of the skull is rounded and indicates a reduction in neck muscles
  • face is reasonably small with a projecting nose bone
  • brow ridge is limited and the forehead is tall
  • orbits (eye sockets) are square rather than round

Jaws and teeth

  • jaws are short, which results in an almost vertical face
  • usually no gap (retromolar space) between the last molar teeth and the jaw bone
  • jaws are lightly built and have a protruding bony chin for added strength. Homo sapiens is the only species to have a protruding chin.
  • shortened jaw has affected the arrangement of the teeth within the jaw. They are now arranged in a parabolic shape in which the side rows of teeth splay outwards, rather than remaining parallel as in our earliest long-jawed ancestors.
  • teeth are relatively small compared with earlier species. This is especially noticeable in the front incisor and canine teeth.
  • front premolar teeth in the lower jaw have two equal-sized cusps (bumps on the chewing surface)

Limbs and pelvis

  • limb bones are thinner and less robust than earlier human species and indicate a reduction in muscle size from earlier humans.
  • legs are relatively long compared with the arms.
  • finger and toe bones are straight and without the curvature typical of our earliest australopithecine ancestors.
  • pelvis is narrower from side-to-side and has a deeper bowl-shape from front-to-back than previous human species.

Homo sapiens Lifestyle

Homo sapiens Culture and technology

The earliest Homo sapiens had a relatively simple culture, although it was more advanced than that of any previous species. Rare evidence for symbolic behaviour appears at a number of African sites about 100,000 years ago, but these artistic expressions appear to be more a flicker of creativity than a sustained practice. It is not until about 40,000 years ago that complex and highly innovative cultures appear, including behaviour that we would recognise as typical of modern humans today.

Many researchers believe this explosion of artistic material in the archaeological record about 40,000 years ago is due to a change in human cognition - perhaps humans developed a greater ability to think and communicate symbolically or memorise better. However, as there are obvious attempts at art before this, perhaps there are other reasons. One theory is that population size and structure play a key role as social learning is considered more beneficial to developing complex culture than individual innovations are. Bigger populations often accumulate more cultural attributes than isolated groups.

Homo sapiens Tools

Initially, Homo sapiens made stone tools such as flakes, scrapers and points that were similar in design to those made by the Neanderthals (Homo neanderthalensis). This technology appeared about 250,000 years ago, coinciding with the probable first appearance of early Homo sapiens. It required an ability for abstract thought: mentally planning a series of steps that could then be executed. Only a small number of tools were produced from each core (the original stone selected for shaping), but the tools produced by this prepared-core method maximised the cutting edge available. Historically, archaeologists used different terminologies for these Middle Palaeolithic cultures in different parts of the world. Many of these terms are now consolidated under the label 'Mode 3 technology' to emphasise the similarities between them.

As more sophisticated techniques developed in some parts of the world, this early Mode 3 technology was replaced by either Mode 4 or Mode 5 technology and the use of a wider range of materials including bone, ivory and antler. Mode 4 technology first appeared in Africa about 100,000 years ago. It is characterised by the production of long, thin stone flakes that were shaped into long blade knives, spearheads and other tools. Mode 5 technology specialised in the production of very small blades (microliths) that were often used in composite tools having several parts. These tools included small-headed arrows, barbed spears and sickles. Regional variation in these tool cultures developed with an influx of new styles and techniques especially within the last 40,000 years, including the Magdalenian and Aurignacian.

Homo sapiens use of Fire

Sophisticated control of fire, including complex hearths, pits and kilns, allowed Homo sapiens to survive in regions that even the cold-adapted Neanderthals had been unable to inhabit.

The Cro-Magnon site at Dolni Vestonice in the Czech Republic produced the earliest evidence for high temperature kilns and ceramic technology. The kilns, dated at 26,000 years old, were capable of firing clay figurines at temperatures over 400 degrees Celsius. About 2000 fired lumps of clay were found scattered around the kiln.

Homo sapiens Clothing and personal adornment

Animal hide clothing may have been worn in cooler areas, although direct evidence of clothing only exists for the last 30,000 years. This evidence includes specialised tools such as needles; adornments such as buttons and beads sewn onto clothing; and the remains of animals, such as arctic foxes and wolves, that indicate they were trapped for their fur. Clothes that were sewn provided better protection from the cold than clothes that were merely tied together.

Fibres from flax plants were discovered in a cave in Georgia in 2009, dating to about 36,000 years old. The flax was most likely used to make clothes and woven baskets, and a small number of the fibres appear to have been dyed. They are the oldest example of their kind ever found. Textile impressions have been discovered at other European sites, but no actual remains.

Items of personal adornment not sewn onto clothing include ivory, shell, amber, bone and tooth beads and pendants. Ostrich eggshell beads that date from about 45,000 years ago have been found in Africa, as well as pierced shell beads in Morocco dating to 80,000 years ago and marine shell beads from Israel dating to 90,000 years old, but body adornment only become prolific from about 35,000 years ago.

One of the earliest known pendants is a horse carved in mammoth ivory from Vogelherd, Germany. It is dated at 32,000 years old. Body adornments like this are evidence that humans had progressed from merely trying to survive and were now concerned with their appearance.

Homo sapiens Art

Cave art began to be produced about 40,000 years ago in Europe and Australia. Most of the art depicts animals or probable spiritual beings, but smaller marks in many caves in France, and possibly others in Europe, are now being analysed as they may be a written 'code' familiar to many prehistoric tribes. In particular, 26 symbols appear over and over again across thousands of years, some of them in pairs and groups, in what could be a rudimentary 'language'. This suggests that early Europeans were attempting to represent ideas symbolically rather than realistically and to share information across generations. The oldest of these symbols date to about 30,000 years ago.

Evidence of musical instruments first appeared about 32,000 years ago in Europe. Palaeolithic bone flutes and whistles from various sites in France range in age from 30,000 to 10,000 years old.

Portable artwork, such as carved statuettes, first appeared about 35,000–40,000 years ago in Europe. Venus figurines were widespread in Europe by 28,000 years ago. Fragments found in Germany in 2009 suggest the tradition began at least 35,000 years ago. An ivory female head with a bun from Dolni Vestonice, Czech Republic, is one of only two human head carvings from this period that show eye sockets, eyelids and eyeballs. It is dated at 26,000 years old.

Red ochre pieces from Blombos Cave in South Africa, dating to about 100-80,000 years ago, show evidence of engraving that may be an expression of art or simply incidental marking made during other activities. However, other signs of possible symbolic behaviour, including shell beads and sophisticated tools (known as Still Bay points) have also come from this site, strengthening the case for early artistic expression.

Homo sapiens Settlement

Early Homo sapiens often inhabited caves or rock shelters if these were available. More recently, especially within the last 20,000 years, natural shelters were enhanced with walls or other simple modifications. In open areas, shelters were constructed using a range of framework materials including wooden poles and the bones of large animals, such as mammoths. These structures were probably covered with animal hides and the living areas included fire hearths.

Living sites were much larger than those occupied by earlier humans and a comparison with modern traditional peoples suggests that clans consisted of between 25 and 100 members.


The Evolution of Diet

Some experts say modern humans should eat from a Stone Age menu. What's on it may surprise you.

Fundamental Feasts: For some cultures, eating off the land is—and always has been—a way of life.

It’s suppertime in the Amazon of lowland Bolivia, and Ana Cuata Maito is stirring a porridge of plantains and sweet manioc over a fire smoldering on the dirt floor of her thatched hut, listening for the voice of her husband as he returns from the forest with his scrawny hunting dog.

With an infant girl nursing at her breast and a seven-year-old boy tugging at her sleeve, she looks spent when she tells me that she hopes her husband, Deonicio Nate, will bring home meat tonight. “The children are sad when there is no meat,” Maito says through an interpreter, as she swats away mosquitoes.

Nate left before dawn on this day in January with his rifle and machete to get an early start on the two-hour trek to the old-growth forest. There he silently scanned the canopy for brown capuchin monkeys and raccoonlike coatis, while his dog sniffed the ground for the scent of piglike peccaries or reddish brown capybaras. If he was lucky, Nate would spot one of the biggest packets of meat in the forest—tapirs, with long, prehensile snouts that rummage for buds and shoots among the damp ferns.

This evening, however, Nate emerges from the forest with no meat. At 39, he’s an energetic guy who doesn’t seem easily defeated—when he isn’t hunting or fishing or weaving palm fronds into roof panels, he’s in the woods carving a new canoe from a log. But when he finally sits down to eat his porridge from a metal bowl, he complains that it’s hard to get enough meat for his family: two wives (not uncommon in the tribe) and 12 children. Loggers are scaring away the animals. He can’t fish on the river because a storm washed away his canoe.

The story is similar for each of the families I visit in Anachere, a community of about 90 members of the ancient Tsimane Indian tribe. It’s the rainy season, when it’s hardest to hunt or fish. More than 15,000 Tsimane live in about a hundred villages along two rivers in the Amazon Basin near the main market town of San Borja, 225 miles from La Paz. But Anachere is a two-day trip from San Borja by motorized dugout canoe, so the Tsimane living there still get most of their food from the forest, the river, or their gardens.

I’m traveling with Asher Rosinger, a doctoral candidate who’s part of a team, co-led by biological anthropologist William Leonard of Northwestern University, studying the Tsimane to document what a rain forest diet looks like. They’re particularly interested in how the Indians’ health changes as they move away from their traditional diet and active lifestyle and begin trading forest goods for sugar, salt, rice, oil, and increasingly, dried meat and canned sardines. This is not a purely academic inquiry. What anthropologists are learning about the diets of indigenous peoples like the Tsimane could inform what the rest of us should eat.

Rosinger introduces me to a villager named José Mayer Cunay, 78, who, with his son Felipe Mayer Lero, 39, has planted a lush garden by the river over the past 30 years. José leads us down a trail past trees laden with golden papayas and mangoes, clusters of green plantains, and orbs of grapefruit that dangle from branches like earrings. Vibrant red “lobster claw” heliconia flowers and wild ginger grow like weeds among stalks of corn and sugarcane. “José’s family has more fruit than anyone,” says Rosinger.

Yet in the family’s open-air shelter Felipe’s wife, Catalina, is preparing the same bland porridge as other households. When I ask if the food in the garden can tide them over when there’s little meat, Felipe shakes his head. “It’s not enough to live on,” he says. “I need to hunt and fish. My body doesn’t want to eat just these plants.”

The Tsimane of Bolivia get most of their food from the river, the forest, or fields and gardens carved out of the forest.

As we look to 2050, when we’ll need to feed two billion more people, the question of which diet is best has taken on new urgency. The foods we choose to eat in the coming decades will have dramatic ramifications for the planet. Simply put, a diet that revolves around meat and dairy, a way of eating that’s on the rise throughout the developing world, will take a greater toll on the world’s resources than one that revolves around unrefined grains, nuts, fruits, and vegetables.

Until agriculture was developed around 10,000 years ago, all humans got their food by hunting, gathering, and fishing. As farming emerged, nomadic hunter-gatherers gradually were pushed off prime farmland, and eventually they became limited to the forests of the Amazon, the arid grasslands of Africa, the remote islands of Southeast Asia, and the tundra of the Arctic. Today only a few scattered tribes of hunter-gatherers remain on the planet.

That’s why scientists are intensifying efforts to learn what they can about an ancient diet and way of life before they disappear. “Hunter-gatherers are not living fossils,” says Alyssa Crittenden, a nutritional anthropologist at the University of Nevada, Las Vegas, who studies the diet of Tanzania’s Hadza people, some of the last true hunter-gatherers. “That being said, we have a small handful of foraging populations that remain on the planet. We are running out of time. If we want to glean any information on what a nomadic, foraging lifestyle looks like, we need to capture their diet now.”

So far studies of foragers like the Tsimane, Arctic Inuit, and Hadza have found that these peoples traditionally didn’t develop high blood pressure, atherosclerosis, or cardiovascular disease. “A lot of people believe there is a discordance between what we eat today and what our ancestors evolved to eat,” says paleoanthropologist Peter Ungar of the University of Arkansas. The notion that we’re trapped in Stone Age bodies in a fast-food world is driving the current craze for Paleolithic diets. The popularity of these so-called caveman or Stone Age diets is based on the idea that modern humans evolved to eat the way hunter-gatherers did during the Paleolithic—the period from about 2.6 million years ago to the start of the agricultural revolution—and that our genes haven’t had enough time to adapt to farmed foods.

A Stone Age diet “is the one and only diet that ideally fits our genetic makeup,” writes Loren Cordain, an evolutionary nutritionist at Colorado State University, in his book The Paleo Diet: Lose Weight and Get Healthy by Eating the Foods You Were Designed to Eat. After studying the diets of living hunter-gatherers and concluding that 73 percent of these societies derived more than half their calories from meat, Cordain came up with his own Paleo prescription: Eat plenty of lean meat and fish but not dairy products, beans, or cereal grains—foods introduced into our diet after the invention of cooking and agriculture. Paleo-diet advocates like Cordain say that if we stick to the foods our hunter-gatherer ancestors once ate, we can avoid the diseases of civilization, such as heart disease, high blood pressure, diabetes, cancer, even acne.

That sounds appealing. But is it true that we all evolved to eat a meat-centric diet? Both paleontologists studying the fossils of our ancestors and anthropologists documenting the diets of indigenous people today say the picture is a bit more complicated. The popular embrace of a Paleo diet, Ungar and others point out, is based on a stew of misconceptions.

The Hadza of Tanzania are the world’s last full-time hunter-gatherers. They live on what they find: game, honey, and plants, including tubers, berries, and baobab fruit.

Meat has played a starring role in the evolution of the human diet. Raymond Dart, who in 1924 discovered the first fossil of a human ancestor in Africa, popularized the image of our early ancestors hunting meat to survive on the African savanna. Writing in the 1950s, he described those humans as “carnivorous creatures, that seized living quarries by violence, battered them to death … slaking their ravenous thirst with the hot blood of victims and greedily devouring livid writhing flesh.”

Eating meat is thought by some scientists to have been crucial to the evolution of our ancestors’ larger brains about two million years ago. By starting to eat calorie-dense meat and marrow instead of the low-quality plant diet of apes, our direct ancestor, Homo erectus, took in enough extra energy at each meal to help fuel a bigger brain. Digesting a higher quality diet and less bulky plant fiber would have allowed these humans to have much smaller guts. The energy freed up as a result of smaller guts could be used by the greedy brain, according to Leslie Aiello, who first proposed the idea with paleoanthropologist Peter Wheeler. The brain requires 20 percent of a human’s energy when resting; by comparison, an ape’s brain requires only 8 percent. This means that from the time of H. erectus, the human body has depended on a diet of energy-dense food—especially meat.
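To make the 20 percent figure concrete, here is a minimal back-of-the-envelope sketch; the resting energy budgets assumed below (1,500 kcal/day for a human, 1,200 for an ape) are illustrative assumptions, not figures from the article:

    # Assumed resting (basal) energy budgets in kcal/day -- illustrative only.
    human_rmr_kcal = 1500
    ape_rmr_kcal = 1200

    human_brain_kcal = 0.20 * human_rmr_kcal  # brain takes ~20% of resting energy
    ape_brain_kcal = 0.08 * ape_rmr_kcal      # an ape's brain takes ~8%

    print(f"human brain: ~{human_brain_kcal:.0f} kcal/day")  # ~300
    print(f"ape brain:   ~{ape_brain_kcal:.0f} kcal/day")    # ~96
    # A calorie-dense diet and a smaller gut are one way to cover this roughly
    # threefold gap without raising total energy intake.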

Fast-forward a couple of million years to when the human diet took another major turn with the invention of agriculture. The domestication of grains such as sorghum, barley, wheat, corn, and rice created a plentiful and predictable food supply, allowing farmers’ wives to bear babies in rapid succession—one every 2.5 years instead of one every 3.5 years for hunter-gatherers. A population explosion followed; before long, farmers outnumbered foragers.
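The compounding effect of that one-year difference in birth spacing is easy to see with a toy calculation. A minimal sketch, assuming a childbearing span of about 20 years per woman and ignoring mortality (both simplifications):

    reproductive_span_years = 20  # assumed childbearing span (illustrative)

    farmer_births = reproductive_span_years / 2.5   # one birth every 2.5 years
    forager_births = reproductive_span_years / 3.5  # one birth every 3.5 years

    print(f"births per woman: farmers ~{farmer_births:.0f}, "
          f"foragers ~{forager_births:.1f}")
    # Roughly 8 versus about 5.7 children per woman; compounded over
    # generations, that gap alone lets farmers outnumber foragers.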

Over the past decade anthropologists have struggled to answer key questions about this transition. Was agriculture a clear step forward for human health? Or in leaving behind our hunter-gatherer ways to grow crops and raise livestock, did we give up a healthier diet and stronger bodies in exchange for food security?

When biological anthropologist Clark Spencer Larsen of Ohio State University describes the dawn of agriculture, it’s a grim picture. As the earliest farmers became dependent on crops, their diets became far less nutritionally diverse than hunter-gatherers’ diets. Eating the same domesticated grain every day gave early farmers cavities and periodontal disease rarely found in hunter-gatherers, says Larsen. When farmers began domesticating animals, those cattle, sheep, and goats became sources of milk and meat but also of parasites and new infectious diseases. Farmers suffered from iron deficiency and developmental delays, and they shrank in stature.

Despite boosting population numbers, the lifestyle and diet of farmers were clearly not as healthy as the lifestyle and diet of hunter-gatherers. That farmers produced more babies, Larsen says, is simply evidence that “you don’t have to be disease free to have children.”

The Inuit of Greenland survived for generations eating almost nothing but meat in a landscape too harsh for most plants. Today markets offer more variety, but a taste for meat persists.

The real Paleolithic diet, though, wasn’t all meat and marrow. It’s true that hunter-gatherers around the world crave meat more than any other food and usually get around 30 percent of their annual calories from animals. But most also endure lean times when they eat less than a handful of meat each week. New studies suggest that it took more than a reliance on meat in ancient human diets to fuel the brain’s expansion.

Year-round observations confirm that hunter-gatherers often have dismal success as hunters. The Hadza and Kung bushmen of Africa, for example, fail to get meat more than half the time when they venture forth with bows and arrows. This suggests it was even harder for our ancestors who didn’t have these weapons. “Everybody thinks you wander out into the savanna and there are antelopes everywhere, just waiting for you to bonk them on the head,” says paleoanthropologist Alison Brooks of George Washington University, an expert on the Dobe Kung of Botswana. No one eats meat all that often, except in the Arctic, where Inuit and other groups traditionally got as much as 99 percent of their calories from seals, narwhals, and fish.

So how do hunter-gatherers get energy when there’s no meat? It turns out that “man the hunter” is backed up by “woman the forager,” who, with some help from children, provides more calories during difficult times. When meat, fruit, or honey is scarce, foragers depend on “fallback foods,” says Brooks. The Hadza get almost 70 percent of their calories from plants. The Kung traditionally rely on tubers and mongongo nuts, the Aka and Baka Pygmies of the Congo River Basin on yams, the Tsimane and Yanomami Indians of the Amazon on plantains and manioc, the Australian Aboriginals on nut grass and water chestnuts.

“There’s been a consistent story about hunting defining us and that meat made us human,” says Amanda Henry, a paleobiologist at the Max Planck Institute for Evolutionary Anthropology in Leipzig. “Frankly, I think that misses half of the story. They want meat, sure. But what they actually live on is plant foods.” What’s more, she found starch granules from plants on fossil teeth and stone tools, which suggests humans may have been eating grains, as well as tubers, for at least 100,000 years—long enough to have evolved the ability to tolerate them.

The notion that we stopped evolving in the Paleolithic period simply isn’t true. Our teeth, jaws, and faces have gotten smaller, and our DNA has changed since the invention of agriculture. “Are humans still evolving? Yes!” says geneticist Sarah Tishkoff of the University of Pennsylvania.

One striking piece of evidence is lactose tolerance. All humans digest mother’s milk as infants, but until cattle began being domesticated 10,000 years ago, weaned children no longer needed to digest milk. As a result, they stopped making the enzyme lactase, which breaks down the lactose into simple sugars. After humans began herding cattle, it became tremendously advantageous to digest milk, and lactose tolerance evolved independently among cattle herders in Europe, the Middle East, and Africa. Groups not dependent on cattle, such as the Chinese and Thai, the Pima Indians of the American Southwest, and the Bantu of West Africa, remain lactose intolerant.
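The speed of that spread is consistent with simple population genetics. A minimal sketch of one-locus selection on a dominant advantageous allele; the starting frequency, selection advantage, and 25-year generation time are all assumptions chosen for illustration, not estimates from the article:

    def next_freq(p: float, s: float) -> float:
        """One generation of selection favouring a dominant allele.
        p: current allele frequency; s: fitness advantage of carriers."""
        q = 1.0 - p
        w_bar = (p * p + 2 * p * q) * (1 + s) + q * q  # mean fitness
        return p * (1 + s) / w_bar

    p, s = 0.01, 0.05  # 1% starting frequency, 5% advantage (assumptions)
    generations = 0
    while p < 0.5:
        p = next_freq(p, s)
        generations += 1

    print(f"~{generations} generations (~{generations * 25} years) to reach 50%")
    # Even a modest advantage carries the allele to high frequency within a few
    # thousand years -- well inside the 10,000 years since cattle domestication.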

Humans also vary in their ability to extract sugars from starchy foods as they chew them, depending on how many copies of a certain gene they inherit. Populations that traditionally ate more starchy foods, such as the Hadza, have more copies of the gene than the Yakut meat-eaters of Siberia, and their saliva helps break down starches before the food reaches their stomachs.

These examples suggest a twist on “You are what you eat.” More accurately, you are what your ancestors ate. There is tremendous variation in what foods humans can thrive on, depending on genetic inheritance. Traditional diets today include the vegetarian regimen of India’s Jains, the meat-intensive fare of Inuit, and the fish-heavy diet of Malaysia’s Bajau people. The Nochmani of the Nicobar Islands off the coast of India get by on protein from insects. “What makes us human is our ability to find a meal in virtually any environment,” says the Tsimane study co-leader Leonard.

Studies suggest that indigenous groups get into trouble when they abandon their traditional diets and active lifestyles for Western living. Diabetes was virtually unknown, for instance, among the Maya of Central America until the 1950s. As they’ve switched to a Western diet high in sugars, the rate of diabetes has skyrocketed. Siberian nomads such as the Evenk reindeer herders and the Yakut ate diets heavy in meat, yet they had almost no heart disease until after the fall of the Soviet Union, when many settled in towns and began eating market foods. Today about half the Yakut living in villages are overweight, and almost a third have hypertension, says Leonard. And Tsimane people who eat market foods are more prone to diabetes than those who still rely on hunting and gathering.

For those of us whose ancestors were adapted to plant-based diets—and who have desk jobs—it might be best not to eat as much meat as the Yakut. Recent studies confirm older findings that although humans have eaten red meat for two million years, heavy consumption increases atherosclerosis and cancer in most populations—and the culprit isn’t just saturated fat or cholesterol. Our gut bacteria digest a nutrient in meat called L-carnitine. In one mouse study, digestion of L-carnitine boosted artery-clogging plaque. Research also has shown that the human immune system attacks a sugar in red meat that’s called Neu5Gc, causing inflammation that’s low level in the young but that eventually could cause cancer. “Red meat is great, if you want to live to 45,” says Ajit Varki of the University of California, San Diego, lead author of the Neu5Gc study.

Many paleoanthropologists say that although advocates of the modern Paleolithic diet urge us to stay away from unhealthy processed foods, the diet’s heavy focus on meat doesn’t replicate the diversity of foods that our ancestors ate—or take into account the active lifestyles that protected them from heart disease and diabetes. “What bothers a lot of paleoanthropologists is that we actually didn’t have just one caveman diet,” says Leslie Aiello, president of the Wenner-Gren Foundation for Anthropological Research in New York City. “The human diet goes back at least two million years. We had a lot of cavemen out there.”

In other words, there is no one ideal human diet. Aiello and Leonard say the real hallmark of being human isn’t our taste for meat but our ability to adapt to many habitats—and to be able to combine many different foods to create many healthy diets. Unfortunately the modern Western diet does not appear to be one of them.

The Bajau of Malaysia fish and dive for almost everything they eat. Some live in houses on the beach or on stilts; others have no homes but their boats.

The latest clue as to why our modern diet may be making us sick comes from Harvard primatologist Richard Wrangham, who argues that the biggest revolution in the human diet came not when we started to eat meat but when we learned to cook. Our human ancestors who began cooking sometime between 1.8 million and 400,000 years ago probably had more children who thrived, Wrangham says. Pounding and heating food “predigests” it, so our guts spend less energy breaking it down, absorb more than if the food were raw, and thus extract more fuel for our brains. “Cooking produces soft, energy-rich foods,” says Wrangham. Today we can’t survive on raw, unprocessed food alone, he says. We have evolved to depend upon cooked food.

To test his ideas, Wrangham and his students fed raw and cooked food to rats and mice. When I visited Wrangham’s lab at Harvard, his then graduate student, Rachel Carmody, opened the door of a small refrigerator to show me plastic bags filled with meat and sweet potatoes, some raw and some cooked. Mice raised on cooked foods gained 15 to 40 percent more weight than mice raised only on raw food.
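A minimal sketch of the energy accounting behind this result; the digestibility and digestion-cost fractions below are invented purely for illustration, not measurements from Carmody’s experiments:

    def net_energy(gross_kcal: float, absorbed: float, digestion_cost: float) -> float:
        """Energy absorbed from a meal minus the energy spent digesting it."""
        return gross_kcal * absorbed - gross_kcal * digestion_cost

    meal_kcal = 500  # gross energy of the meal (illustrative)

    raw = net_energy(meal_kcal, absorbed=0.70, digestion_cost=0.15)
    cooked = net_energy(meal_kcal, absorbed=0.85, digestion_cost=0.10)

    print(f"raw: {raw:.0f} kcal, cooked: {cooked:.0f} kcal "
          f"({(cooked - raw) / raw:.0%} more)")
    # Cooking 'predigests' the meal: more of it is absorbed and less energy is
    # spent breaking it down, in line with the 15-40% extra weight gain above.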

If Wrangham is right, cooking not only gave early humans the energy they needed to build bigger brains but also helped them get more calories from food so that they could gain weight. In the modern context the flip side of his hypothesis is that we may be victims of our own success. We have gotten so good at processing foods that for the first time in human evolution, many humans are getting more calories than they burn in a day. “Rough breads have given way to Twinkies, apples to apple juice,” he writes. “We need to become more aware of the calorie-raising consequences of a highly processed diet.”

It’s this shift to processed foods, taking place all over the world, that’s contributing to a rising epidemic of obesity and related diseases. If most of the world ate more local fruits and vegetables, a little meat, fish, and some whole grains (as in the highly touted Mediterranean diet), and exercised an hour a day, that would be good news for our health—and for the planet.

The Kyrgyz of the Pamir Mountains in northern Afghanistan live at a high altitude where no crops grow. Survival depends on the animals that they milk, butcher, and barter.

On my last afternoon visiting the Tsimane in Anachere, one of Deonicio Nate’s daughters, Albania, 13, tells us that her father and half-brother Alberto, 16, are back from hunting and that they’ve got something. We follow her to the cooking hut and smell the animals before we see them—three raccoonlike coatis have been laid across the fire, fur and all. As the fire singes the coatis’ striped pelts, Albania and her sister, Emiliana, 12, scrape off fur until the animals’ flesh is bare. Then they take the carcasses to a stream to clean and prepare them for roasting.

Nate’s wives are cleaning two armadillos as well, preparing to cook them in a stew with shredded plantains. Nate sits by the fire, describing a good day’s hunt. First he shot the armadillos as they napped by a stream. Then his dog spotted a pack of coatis and chased them, killing two as the rest darted up a tree. Alberto fired his shotgun but missed. He fired again and hit a coati. Three coatis and two armadillos were enough, so father and son packed up and headed home.

As family members enjoy the feast, I watch their little boy, Alfonso, who had been sick all week. He is dancing around the fire, happily chewing on a cooked piece of coati tail. Nate looks pleased. Tonight in Anachere, far from the diet debates, there is meat, and that is good.

The people of Crete, the largest of the Greek islands, eat a rich variety of foods drawn from their groves and farms and the sea. They lived on a so-called Mediterranean diet long before it became a fad.

Ann Gibbons is the author of The First Human: The Race to Discover Our Earliest Ancestors. Matthieu Paley photographed Afghanistan’s Kyrgyz for our February 2013 issue.

The magazine thanks The Rockefeller Foundation and members of the National Geographic Society for their generous support of this series of articles.


