Chapter 16 Genomics, Nutrigenomics, Nutrigenetics, and the Path of Personalized Medicine
Genes and Environment—Nature and Nurture
Nutrigenetics and Nutrigenomics
The Clinical Utility of Nutrigenomics and Nutrigenetics
Clinical Application of Nutrigenetics
About 3 billion years ago, the experiment known as “life” began on Earth. All living creatures from the five kingdoms are descended from a single common ancestor. We know that all creatures on this planet are related to one another because we all share the same digital “language of life,” known as DNA and RNA, built from five simple nucleotide bases connected along a sugar-phosphate backbone. If we could meet our primordial progenitor, we might not recognize it as living at all. We suspect it was nothing more than a self-replicating RNA catalyst that somehow copied itself by consuming chemicals in its immediate environment. These simple “ribo-organisms” were inherently unstable, forming and falling apart quite easily. Only over hundreds of millions of years did these organisms gain more stability, first when single-stranded RNA evolved into double-stranded DNA, and again when the DNA began coding for proteins that would eventually build cellular structure.1
Inherent in this narrative is a central fact that is easy to overlook: life began and continues to evolve within specific environments. We all know the classic riddle, “Which came first, the chicken or the egg?” From the perspective of evolutionary biology, this is not much of a riddle, because the chicken’s ancestors and their eggs preceded the arrival of the chicken by at least 100 million years. On closer reflection, however, the riddle asks a far deeper and more perplexing question: “What is the relationship between the individual and its environment, between the chicken and its egg?” A hospitable environment had to precede the development of life, and no new life can evolve unless there is an environment to support it. Yet environments change over time, and a species must adapt to those changes or invariably become extinct, as the vast majority of species that have lived on this planet have done. The central premise of Charles Darwin’s grand theory of the origin of species speaks of survival of the fittest, but over time the fittest species is the one that can adapt best to a changing environment. Adaptation is not exclusive to the development of new species; it also plays a critical role in the survival of any individual within a species.
The twenty-first century may well be remembered as the century in which science first truly began to understand the complex interaction between the genetic information inherent in every individual and the environment to which that individual is exposed.
Genes and Environment—Nature and Nurture
“Environment” is broadly understood in genetics to include everything that is not the genetic information, or genome, itself. Environment includes climate, physical surroundings, exogenous chemicals or toxins, and infectious agents, but it also includes diet, lifestyle, and behavioral factors. Scientists once believed that genes were immutable archives of digital information that simply coded for proteins, which in turn determined the structure and function of an individual’s (analogue) body. However, it is now irrefutable that gene expression is substantially affected by and sensitive to environmental change. Genes quite literally respond to the environment to which they are exposed.2 Because genes respond to the environment we subject them to, the environment itself can be proactively manipulated to alter gene expression, and therefore, to change the health state of the individual. This is the central premise of preventive or functional genomics.
In the interactive symphony between genes and environment, the balance between the two may be altered whenever either changes. Because the environment is inherently unpredictable, one effective strategy for increasing the chances of survival has been to promote genetic variation within a species: slightly altered individuals may survive an environmental change that others do not. It is rather like a strategy for winning a lottery: the odds of winning rise with the number of different combinations one plays. Variety may truly be the spice of life. If populations from a single species diverge into different environments, selective pressure over time may eventually lead to the creation of separate species.
The human genome is composed of approximately 3 billion nucleotides of genetic code. If you compare your DNA with that of the next person you happen to meet, you would find that about 1 in every 1000 nucleotides is different. That means there are 3 million reasons why the two of you are not the same. These subtle variations of the genetic code are known as polymorphisms (literally, “many shapes,” because if the change in the genetic code results in an amino acid substitution, the resulting proteins will have different shapes and slightly altered functions as well). It is these polymorphisms that are largely responsible for our biochemical individuality. Our polymorphisms help make us unique individuals. There are many types of polymorphisms, but by far the most common is the single-nucleotide polymorphism (abbreviated SNP and pronounced “snip”), in which a single nucleotide of the DNA is altered. The sum total of all of an individual’s polymorphisms significantly affects protein synthesis and physiologic function, rendering each individual biologically and biochemically unique.
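The back-of-the-envelope arithmetic behind that figure can be sketched in a few lines; the quantities are the chapter’s round numbers, not precise population-genetics estimates.

```python
# Rough estimate of how many nucleotides differ between two unrelated people.
# Both figures are the chapter's round numbers, used for illustration only.
GENOME_SIZE = 3_000_000_000   # ~3 billion nucleotides of genetic code
VARIANTS_PER_1000 = 1         # ~1 difference per 1000 nucleotides

expected_differences = GENOME_SIZE * VARIANTS_PER_1000 // 1000
print(f"{expected_differences:,} expected differences")  # 3,000,000 expected differences
```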
Why Do Polymorphisms Exist?
The theory of natural selection has two central tenets: (1) all organisms compete for limited resources, and (2) organisms with some advantage in acquiring those resources are more likely to survive, to thrive, and to pass on that advantage to their offspring. Polymorphic variations are the ultimate source of these advantages. All polymorphisms that are found to be common in a species must afford some adaptive advantage to survival in some specific environment. Genetic polymorphisms are preserved and become more prevalent in a species when they endow a better chance of survival or of reproduction. The cumulative weight of slight genetic variations over time is the means by which variability within a species arises and by which new species also emerge. However, what may be good for a species and its evolution may not be good for a specific individual, because the environmental pressures exerted on a species over millions of years may be very different from the environment encountered in the present.
In individuals, altered polymorphic genes (the legacy of evolution) produce altered proteins, and altered proteins exhibit altered functions. For any given individual, altered protein function may be beneficial, neutral, or harmful, depending on the environment to which he or she is exposed. Prevalent polymorphisms are likely to be beneficial in certain environments but harmful in others. Because we cannot change our genes, the goal of preventive genomics is to alter an individual’s environment based on his or her specific genetic variations to optimize his or her genetic potential. Genes themselves cannot be modified, but gene expression can be. One of the common misconceptions of polymorphisms is that they reveal only limitations, but in reality, they reveal an individual’s potential.
Nutrigenetics and Nutrigenomics
From a clinical perspective, changes in diet are one major way of altering the gene–environment balance and improving an individual’s health. Two distinct but interrelated fields of inquiry and knowledge are emerging. Nutrigenetics, also referred to as personalized nutrition, is based on the understanding that our genetic polymorphisms change the way we respond physiologically to specific nutrients. By studying specific polymorphisms and their physiologic response to specific nutrients, we can determine a more optimal nutritional regimen for a specific individual based on his or her specific polymorphisms. Nutrigenomics, by contrast, focuses on the effects of specific nutrients (both macro and micro) on the genome as a whole, and subsequently on the body’s resulting total pool of proteins (the proteome) and on all its subsequent metabolic activity (the metabolome) as well.2,3
Food is, by definition, derived from other living organisms. Inherent in any food is an information code that we read and interpret by eating that food. There is a literal exchange of information that passes between the eaten and the eater. Not surprisingly, central to life is our ability to coordinate our metabolic activities with nutrient availability. Signals from the exterior world (food, toxins, weather, stress, etc.) turn on and turn off specific metabolic processes to improve our chances of survival in an ever-changing world.4
The most extreme example of the interaction of food availability and metabolic change is the situation, common in nature, when no food at all is available, i.e., famine. Clive McCay at Cornell University in the 1930s found that by restricting calorie intake in rats by at least 25% from the free-feeding level, he could substantially increase their average and extreme life expectancies, delay the age of tumor onset, delay cessation of reproductive function, and preserve functional homeostasis or stress response capacity.5 Since that time, no major investigation has failed to demonstrate significant benefits to health and longevity from calorie restriction. Calorie restriction lowers the level of insulin exposure, which in turn lowers the overall growth factor exposure, improves age-declining maintenance of mitochondrial function, and helps to maintain a long-term favorable balance of the insulin-to-growth hormone antagonism.6 From an evolutionary perspective, this makes intuitive sense. Only when calories are abundant does an individual have the metabolic resources for growth and reproduction. Survival during times of famine necessitates metabolic shifts that conserve resources and promote repair, because growth is not an option.7
Reducing calorie intake not only lowers overall insulin exposure but is now known to activate the transcription of a family of genes known as sirtuins (SIRT). The SIRT enzymes appear to have first arisen in primordial eukaryotes, possibly to help them cope with adverse conditions, like famine, and today are found in all plants, yeast, and animals. In response to calorie restriction, SIRT-1 stimulates the production of new mitochondria in skeletal muscle and liver cells, thereby increasing the capacity for metabolic repair and energy production. New mitochondria produce fewer free radicals than the old mitochondria, which leads to less free-radical damage and to delayed onset of metabolic aging. SIRT-1 also has a cascading effect on multiple genes, leading to increased catalase activity, increased free fatty acid oxidation for energy, and reduced inflammation via suppression of nuclear factor-κB (NF-κB).8
The obvious problem with calorie restriction, however, is that we like food. Recent studies of the effects of the polyphenol compound resveratrol (initially isolated from grape skins) on obese, sedentary mice suggested that it could mimic the beneficial health effects of calorie restriction even while the mice continued to eat a high-calorie, high-fat diet and did not exercise. Dietary supplementation with resveratrol was found to oppose the effects of the high-calorie diet in 144 of 153 altered biochemical pathways, most of which could be attributed to its activation of the transcription of the enzyme SIRT-1. Resveratrol increased insulin sensitivity, reduced insulin-like growth factor-1 production, increased adenosine monophosphate-activated protein kinase activity, increased peroxisome proliferator-activated receptor-γ coactivator-1α (PGC-1α) activity, increased the number of mitochondria, and improved overall motor function.9 Such research suggests that at some future time we just may be able to have our cake, eat it too, and suffer few of the metabolic consequences of overeating. Nevertheless, the best current nutritional advice for longevity and health remains simply, “eat less.” Not only does it lower insulin load, but it activates the transcription of numerous enzymes that promote mitochondrial regeneration, health, and longevity.
Not only is nutrigenomics helping to elucidate the myriad effects of specific nutrients on our genome, but it is also helping us rethink and understand the mechanisms by which other nutrients act on our physiology. Ginkgo biloba is a commonly prescribed medicinal herb that is known to increase peripheral microcirculation and to contain rather potent antioxidants. The vasodilation allows delivery of these antioxidant compounds to poorly vascularized areas, like the brain. Not surprisingly, ginkgo is commonly used for impaired memory and mental function as we age. Using gene chip assays, researchers examined cellular extracts to look for altered levels of messenger RNA, an accurate measure of gene activity. In vitro studies with human bladder cancer cells incubated with ginkgo showed suppression of transcription by more than 50% in 16 genes and induction by more than 100% in 139 genes. The overall effect of adding ginkgo was to activate genes that code for improved mitochondrial function and antioxidant protection. Subsequent in vivo mouse studies examined the activity of more than 12,000 genes and found a preferential activation of genes within the brain, with induction by more than 200% of 43 genes in the cortex and 13 genes in the hippocampus, including those genes that promote nerve cell growth, differentiation, regulation, and function, as well as increased mitochondrial activity and antioxidant protection.10
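To make the reporting conventions in these studies concrete, the sketch below classifies gene-chip measurements by the thresholds quoted above: suppression by more than 50% (expression cut by more than half) and induction by more than 100% (expression more than doubled). The gene names and fold-change values are invented for illustration, not data from the cited studies.

```python
# Classify mRNA fold changes (treated level / control level) the way the
# ginkgo gene-chip studies report them:
#   induced    = increase of more than 100%  -> fold change > 2.0
#   suppressed = decrease of more than 50%   -> fold change < 0.5
def classify(fold_change: float) -> str:
    if fold_change > 2.0:
        return "induced"
    if fold_change < 0.5:
        return "suppressed"
    return "unchanged"

# Hypothetical measurements for illustration only.
measurements = {"gene_A": 3.1, "gene_B": 0.4, "gene_C": 1.2}
for gene, fc in measurements.items():
    print(gene, classify(fc))
```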
This elegant research points to a new understanding of why and how herbal medicines or other specific nutrients can act to change our physiology, but also helps to explain why specific herbs or nutrients act preferentially on specific tissues or organ systems. It is not just Ginkgo biloba that acts to alter gene transcription. Every medicinal herb, every nutrient, every food, and every pharmaceutical medicine is likely to act in a similar fashion. These compounds do not just have a gross chemical effect on our physiology, but they actually alter gene expression. Nutrigenomics is forcing us to rethink the ways in which our bodies respond to environmental stimuli in the form of food, herbs, or medicine.
In some areas, nutrigenomics and nutrigenetics overlap. We propose the term “preventive genomics” to encompass both nutrigenomics and nutrigenetics because they cannot always be easily separated. To illustrate, PPAR-γ is a nuclear hormone receptor that regulates many cellular functions, such as nutrient metabolism, cell proliferation, and cell differentiation, in response to dietary macronutrients, specifically to carbohydrate and fat intake. PPAR-γ integrates the cellular control of energy, lipid, and glucose homeostasis. Its activation by increased dietary fat or sugar intake is clearly an example of nutrigenomic interaction. However, there is a common polymorphism in the gene that codes for PPAR-γ in which an alanine is substituted for a proline at the twelfth amino acid of the protein (the polymorphism is referred to as PPAR-γ P12A). Individuals with the 12A variant display a greater metabolic tolerance of a high-fat diet, leading to a significantly reduced risk of developing insulin resistance, type 2 diabetes, coronary artery disease, and central obesity when consuming a typical Western diet.11 Individuals with the 12P variant are more sensitive to the ill effects of excessive dietary saturated fat intake, suggesting a clear therapeutic dietary strategy in P allele carriers to prevent obesity, diabetes, and heart disease.12
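A nutrigenetic report built on a finding like this is, at its core, a genotype-to-recommendation lookup. The toy sketch below shows the shape of that logic; the genotype codes and advice strings are invented for illustration and are not clinical guidance.

```python
# Toy genotype-to-diet lookup for the PPAR-gamma P12A polymorphism.
# Advice strings paraphrase the text's reasoning; illustrative only.
ADVICE = {
    "P/P": "More sensitive to saturated fat; restrict dietary saturated fat.",
    "P/A": "Carries one P allele; moderate saturated fat intake.",
    "A/A": "Greater metabolic tolerance of a high-fat Western diet.",
}

def dietary_emphasis(genotype: str) -> str:
    # Fall back to population-level guidelines for unrecognized genotypes.
    return ADVICE.get(genotype, "Genotype not on file; use population guidelines.")

print(dietary_emphasis("P/P"))
```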
There are many polymorphisms like the PPAR-γ proline variation that exist in a large percentage of the population and appear to increase the risk of certain serious diseases. They beg the question: why do these seemingly harmful polymorphisms exist? It is important to remember that the PPAR-γ proline variation is harmful only in individuals who eat a high-calorie, high-saturated fat typical Western diet. It behooves us to remember that in most of nature and for most of human history, too much food to eat was rarely a major risk factor. Quite the opposite was true when winter and famine were common occurrences. In those environments, the ability to extract more nutrition from the same caloric intake would be a distinct advantage for survival. It is only in the last 50 to 100 years, in our culture of affluence, that these variations have begun to pose significant risks to our health. We call these “thrifty genes,” a term first coined by D.L. Coleman to explain why the Pima Indians from the desert of the American southwest were prone to develop obesity and diabetes. Thousands of years of survival in that harsh environment selected for genes that made this group incredibly efficient at retaining calories from food—a distinct adaptive advantage when the food supply was scarce. However, with the 24-hour grocery only a car ride away, those same genes are now significantly less well adapted to their environment.13 We see similar gene variations that increase inflammation throughout the body. This seems counterproductive until we realize that infectious disease has been a major environmental risk throughout evolution, and inflammation and immune activation are essentially the same biological process. It is imperative to remember that every polymorphism that exists in humans with significant prevalence confers protection and advantages to survival in some environment.
Our task as clinicians is to identify that environment and to recommend it to those patients with that particular genetic variation.
Nature Versus Nurture
The reality is that the prevention and cure of complex diseases and syndromes are not to be found exclusively in our genes or our environment, but in the interactive symphony between the two. Nature (our genes) provides a plastic template that is largely adaptable to a wide range of environments (“survival of the most adaptable”), and slight variations in those genes can cause altered responses to specific environments (nutrigenetics). In contrast, nurture (our environment) switches genes on and off, largely controlling gene expression (nutrigenomics).
To illustrate this idea of gene–environment interaction, consider the research of Caspi et al.14 They studied variations in the promoter sequence of the gene coding for monoamine oxidase-A (MAO-A) and found that a promoter polymorphism caused some people to have high-activity MAO-A genes and others to have low-activity genes. Those with high-activity MAO-A would deactivate catecholamine neurotransmitters, like dopamine and noradrenaline, more rapidly. They then examined whether these genes played a role in antisocial and violent behavior in men who had been abused as children. Remarkably, they found that men with the high-activity MAO-A gene were virtually immune to the effects of maltreatment as children, seldom if ever becoming violent offenders, whereas men with the low-activity MAO-A gene were much more antisocial and violent, but only if they themselves were abused as children. In other words, for violent behavior to manifest in adulthood, both the low-activity gene (nature) and childhood maltreatment (nurture) needed to be present. If either was missing from the equation, the adult was very likely to be well socialized and nonviolent.
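The interaction Caspi et al. describe behaves like a logical AND: the adverse adult outcome was common only when the low-activity variant (nature) and childhood maltreatment (nurture) were both present. A deliberately simplified toy model of that claim, with hypothetical boolean inputs:

```python
# Gene-environment interaction as a logical AND, per the MAO-A finding
# described in the text: elevated risk manifests only when BOTH the
# low-activity variant AND childhood maltreatment are present.
def elevated_risk(low_activity_maoa: bool, maltreated_as_child: bool) -> bool:
    return low_activity_maoa and maltreated_as_child

assert elevated_risk(True, True)        # both factors: risk manifests
assert not elevated_risk(True, False)   # gene alone: well socialized
assert not elevated_risk(False, True)   # high-activity gene buffers abuse
assert not elevated_risk(False, False)
```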
The Centers for Disease Control and Prevention (CDC) published a Gene-Environment Interaction Fact Sheet in August 200015 that outlined the basic principles of a broad understanding of the causal interaction of genes and environment in human disease. In it, the CDC makes the following three main points:
1. Virtually all human diseases result from the interaction of genetic susceptibility and modifiable environmental factors.
2. Variations in genetic makeup are associated with almost all disease.
3. Genetic variations do not cause disease but rather influence a person’s susceptibility to environmental factors.
In this brief paper, the CDC essentially outlined the chief tenets of nutrigenetics and of preventive genomics.
Ironically, these ideas are hardly new. In 1909, Archibald Garrod16 published Inborn Errors of Metabolism, in which, after identifying the first human disease that behaved as a true Mendelian recessive trait (alkaptonuria), he went further to construct a sweeping hypothesis that altered heredity was “the seat of chemical individuality.” “Inborn errors of metabolism,” he wrote, “are due to a failure of a step in the metabolic sequence due to loss or malfunction of an enzyme.” By examining the subtle end products of metabolism, he continued, we should be able to identify the differences that altered heredity produces in each individual. This is a remarkable insight, given that the words gene and genetic did not exist in 1909, and it would be roughly 50 years before the structure and true function of DNA were confirmed. Moreover, Garrod’s book was published 3 years before the first vitamin was discovered, so he would have had no notion of vitamins as cofactors in enzymatic reactions. He concluded his prophetic work by envisioning the complex interaction between our unique genetic constitution and environmental factors in the exquisitely simple statement “These idiosyncrasies may be summed up in the proverbial saying that one man’s meat is another man’s poison.”
The Clinical Utility of Nutrigenomics and Nutrigenetics
The point of using genetic and genomic information in a clinical setting is to personalize the therapeutic regimen and to develop an effective strategy for true disease prevention. It is a common mistake, however, to think that the new preventive genomic information we can access will somehow make all previous therapies obsolete. This mistake is evident in the mindset that attributes disease risk to polymorphisms without reference to environment (as almost any newspaper headline about genetics demonstrates). Simply put, genetic information is no more and no less valuable than environmental information. We need them both to make an optimal difference. Furthermore, at this stage in preventive genomic research, there are at best only 100 or so polymorphisms about which we have sufficient clinical information to make personalized nutritional recommendations.17 This is useful information, to be sure, but it is insufficient for comprehensive nutritional and therapeutic recommendations.
Carl Sagan once said, “If you want to make an apple pie from scratch, you must first create the universe.” Fortunately for making an apple pie, the universe has already been created, and fortunately for making comprehensive nutritional recommendations, natural selection has been active since the beginning of life on this planet. We and our ancestors have been adapting to environments for the last 3 billion years, if you want to count all life, or for a mere 600 million years, if you want to count eukaryotic cells with aerobic respiration. Either way, a great deal of experimental trial and error brought us to the present. The proper starting point of nutrigenomics and nutrigenetics is not genetics per se, but good epidemiology. Good epidemiology can tell us what the best nutrition and lifestyle are for the average person. Specific genetic polymorphisms can then help us modify those recommendations to meet the specific genetic needs of an individual patient.
Discovering the optimal nutrition and lifestyle for a specific individual will depend on developing a functional matrix that is minimally composed of:
1. Good epidemiologic dietary and lifestyle data
2. Specific genomic polymorphisms that alter specific macronutrient and micronutrient requirements
3. Functional laboratory assessment of individual function, physiology, and micronutrient status