Chapter 16 Genomics, Nutrigenomics, Nutrigenetics, and the Path of Personalized Medicine
Introduction
About 3 billion years ago, the experiment known as “life” began on Earth. All living creatures from the five kingdoms are descended from a single common ancestor. We know that all creatures on this planet are related to one another because we all share the same digital “language of life,” known as DNA and RNA, built from five simple nucleotide bases arranged along a sugar-phosphate backbone. If we could meet our primordial progenitor, we might not recognize it as living at all. We suspect it was nothing more than an RNA replicator-catalyst that somehow copied itself by consuming chemicals in its immediate environment. These simple “ribo-organisms” were inherently unstable, forming and falling apart quite easily. Only over hundreds of millions of years did these organisms gain more stability, first when single-stranded RNA evolved into double-stranded DNA, and then further when the DNA began coding for proteins that would eventually evolve into cellular structure.1
Genes and Environment—Nature and Nurture
“Environment” is broadly understood in genetics to include everything that is not the genetic information, or genome, itself. Environment includes climate, physical surroundings, exogenous chemicals or toxins, and infectious agents, but it also includes diet, lifestyle, and behavioral factors. Scientists once believed that genes were immutable archives of digital information that simply coded for proteins, which in turn determined the structure and function of an individual’s (analogue) body. However, it is now irrefutable that gene expression is substantially affected by and sensitive to environmental change. Genes quite literally respond to the environment to which they are exposed.2 Because genes respond to the environment we subject them to, the environment itself can be proactively manipulated to alter gene expression, and therefore, to change the health state of the individual. This is the central premise of preventive or functional genomics.
Nutrigenetics and Nutrigenomics
From a clinical perspective, changes in diet are one major way of altering the gene–environment balance and improving an individual’s health. Two distinct but interrelated fields of inquiry and knowledge are emerging. Nutrigenetics, also referred to as personalized nutrition, is based on the understanding that our genetic polymorphisms change the way we respond physiologically to specific nutrients. By studying particular polymorphisms and the physiologic responses they produce to specific nutrients, we can design an optimized nutritional regimen for a given individual based on his or her particular polymorphisms. Nutrigenomics, by contrast, focuses on the effects of specific nutrients (both macro and micro) on the genome as a whole, and subsequently on the body’s resulting total pool of proteins (the proteome) and on all its subsequent metabolic activity (the metabolome) as well.2,3
Food is, by definition, derived from other living organisms. Inherent in any food is an information code that we read and interpret by eating that food. There is a literal exchange of information that passes between the eaten and the eater. Not surprisingly, central to life is our ability to coordinate our metabolic activities with nutrient availability. Signals from the exterior world (food, toxins, weather, stress, etc.) turn on and turn off specific metabolic processes to improve our chances of survival in an ever-changing world.4
The most extreme example of the interaction of food availability and metabolic change is the situation, common in nature, when no food at all is available, i.e., famine. Clive McCay at Cornell University in the 1930s found that by restricting calorie intake in rats by at least 25% from the free-feeding level, he could substantially increase their average and maximum life expectancies, delay the age of tumor onset, delay cessation of reproductive function, and preserve functional homeostasis or stress response capacity.5 Since that time, no major investigation has failed to demonstrate significant benefits to health and longevity from calorie restriction. Calorie restriction lowers the level of insulin exposure, which in turn lowers the overall growth factor exposure, improves age-declining maintenance of mitochondrial function, and helps to maintain a long-term favorable balance of the insulin-to-growth hormone antagonism.6 From an evolutionary perspective, this makes intuitive sense. Only when calories are abundant does an individual have the metabolic resources for growth and reproduction. Survival during times of famine necessitates metabolic shifts that conserve resources and promote repair, because growth is not an option.7
Reducing calorie intake not only lowers overall insulin exposure but is now known to activate the transcription of a family of genes known as sirtuins (SIRT). The SIRT enzymes appear to have first arisen in primordial eukaryotes, possibly to help them cope with adverse conditions, like famine, and today are found in all plants, yeast, and animals. In response to calorie restriction, SIRT-1 stimulates the production of new mitochondria in skeletal muscle and liver cells, thereby increasing the capacity for metabolic repair and energy production. New mitochondria produce fewer free radicals than old mitochondria, which leads to less free-radical damage and to delayed onset of metabolic aging. SIRT-1 also has a cascading effect on multiple genes, leading to increased catalase activity, increased free fatty acid oxidation for energy, and reduced inflammation via suppression of the transcription factor nuclear factor-κB (NF-κB).8
The obvious problem with calorie restriction, however, is that we like food. Recent studies on the effects of the polyphenol compound resveratrol (initially isolated from grape skins) on obese, sedentary mice suggested that it could mimic the beneficial health effects of calorie restriction even while the mice continued to eat a high-calorie, high-fat diet and did not exercise. Dietary supplementation with resveratrol was found to oppose the effects of the high-calorie diet in 144 of 153 altered biochemical pathways, most of which could be attributed to its activation of the transcription of the enzyme SIRT-1. Resveratrol increased insulin sensitivity, reduced insulin-like growth factor-1 production, increased adenosine monophosphate-activated protein kinase (AMPK) activity, increased peroxisome proliferator-activated receptor-γ coactivator-1α (PGC-1α) activity, increased the number of mitochondria, and improved overall motor function.9 Such research suggests that at some future time we just may be able to have our cake, eat it too, and suffer few of the metabolic consequences for overeating. Nevertheless, the best current nutritional advice for longevity and health remains simply, “eat less.” Not only does it lower insulin load, but it activates the transcription of numerous enzymes that promote mitochondrial regeneration, health, and longevity.
Not only is nutrigenomics helping to elucidate the myriad effects of specific nutrients on our genome, but it is also helping us rethink and understand the mechanisms by which other nutrients act on our physiology. Ginkgo biloba is a commonly prescribed medicinal herb that is known to increase peripheral microcirculation and to contain rather potent antioxidants. The vasodilation allows delivery of these antioxidant compounds to poorly vascularized areas, like the brain. Not surprisingly, ginkgo is commonly used for impaired memory and mental function as we age. Using gene chip assays, researchers examined cellular extracts to look for altered levels of messenger RNA, an accurate measure of gene activity. In vitro studies in which human bladder cancer cells were incubated with ginkgo showed suppression of transcription by more than 50% in 16 genes and induction by more than 100% in 139 genes. The overall effect of adding ginkgo was to activate genes that code for improved mitochondrial function and antioxidant protection. Subsequent in vivo mouse studies assayed more than 12,000 genes and showed preferential activation of genes within the brain, with induction by more than 200% of 43 genes in the cortex and 13 genes in the hippocampus, including those genes that promote nerve cell growth, differentiation, regulation, and function, as well as increased mitochondrial activity and antioxidant protection.10
In some areas, nutrigenomics and nutrigenetics can overlap. We propose the use of the term “preventive genomics” to include both nutrigenomics and nutrigenetics because they cannot always be easily separated. To illustrate, PPAR-γ is a nuclear hormone receptor that regulates many cellular functions, such as nutrient metabolism, cell proliferation, and cell differentiation in response to dietary macronutrients, specifically to carbohydrate and fat intake. PPAR-γ integrates the cellular control of energy, lipid, and glucose homeostasis. Its activation by increased dietary fat or sugar intake is clearly an example of nutrigenomic interaction. However, there is a common polymorphism in the gene that codes for PPAR-γ in which at the twelfth amino acid in the protein, an alanine is substituted for a proline (the polymorphism is referred to as PPAR-γ P12A). Individuals with the 12A variant display a greater metabolic tolerance to a high-fat diet, leading to a significantly reduced risk of developing insulin resistance, type 2 diabetes, coronary artery disease, and central obesity when consuming a typical Western diet.11 Individuals with the 12P variant are more sensitive to the ill effects of excessive dietary saturated fat intake, suggesting a clear therapeutic dietary strategy in P allele carriers to prevent obesity, diabetes, and heart disease.12
There are many polymorphisms like the PPAR-γ proline variation that exist in a large percentage of the population and appear to increase risk of certain serious diseases. They raise the question, why do these seemingly harmful polymorphisms exist? It is important to remember that the PPAR-γ proline variation is harmful only in individuals who eat a high-calorie, high-saturated fat typical Western diet. It behooves us to remember that in most of nature and for most of human history, too much food to eat was rarely a major risk factor. Quite the opposite was true when winter and famine were common occurrences. In these environments, the ability to extract more nutrition from the same caloric intake would be a distinct advantage for survival. It is only in the last 50 to 100 years in our culture of affluence that these variations have begun to pose significant risks to our health. We call these genes “thrifty genes,” a term coined by D.L. Coleman to explain why the Pima Indians from the desert of the American Southwest were prone to develop obesity and diabetes. Thousands of years of survival in that harsh environment selected for genes that made this group incredibly efficient at retaining calories from food—a distinct adaptive advantage when food supply was scarce. However, with the 24-hour grocery only a car ride away, their genes are significantly less well adapted to survive.13 We see similar gene variations that increase inflammation throughout the body. This seems counterproductive until we realize that infectious disease has been a major environmental risk throughout evolution, and inflammation and immune activation are essentially the same biological process. It is imperative to remember that every polymorphism that exists in humans with significant prevalence confers protection and advantages to survival in some environment.
Our task as clinicians is to identify that environment and to recommend it to those patients with that particular genetic variation.
Nature Versus Nurture
To illustrate this idea of gene–environment interaction, consider the research of Caspi et al.14 They studied variations in the promoter sequence for the gene coding for monoamine oxidase-A (MAO-A) and found that a promoter polymorphism caused some people to have high-activity MAO-A genes and others to have low-activity genes. Those with high-activity MAO-A would deactivate catecholamine neurotransmitters, like dopamine and noradrenaline, more rapidly. They then examined whether these genes played a role in antisocial and violent behavior in men who had been abused as children. Remarkably, they found that men with the high-activity MAO-A gene were virtually immune to the effects of maltreatment as children, seldom if ever becoming violent offenders, whereas men with the low-activity MAO-A gene were much more antisocial and violent, but only if they themselves were abused as children. In other words, for violent behavior to manifest in adulthood, both the low-activity gene (nature) and childhood maltreatment (nurture) needed to be present. If either was missing from the equation, the adult was very likely to be well socialized and nonviolent.
The Centers for Disease Control and Prevention (CDC) published a Gene-Environment Interaction Fact Sheet in August 200015 that outlined the basic principles of a broad understanding of the causal interaction of genes and environment in human disease. In it, the CDC makes the following main points:
1. Virtually all human diseases result from the interaction of genetic susceptibility and modifiable environmental factors.
2. Variations in genetic makeup are associated with almost all disease.
3. Genetic variations do not cause disease but rather influence a person’s susceptibility to environmental factors.
Ironically, these ideas are hardly new. In 1909, Archibald Garrod16 published Inborn Errors of Metabolism, in which, after identifying the first human disease that behaved as a true Mendelian recessive trait (alkaptonuria), he went further to construct a sweeping hypothesis that altered heredity was “the seat of chemical individuality.” “Inborn errors of metabolism,” he wrote, “are due to a failure of a step in the metabolic sequence due to loss or malfunction of an enzyme.” By examining the subtle end products of metabolism, he continued, we should be able to identify the differences that altered heredity produces in each individual. This is a remarkable insight, given that the words gene and genetic did not exist in 1909, and it would be roughly 50 years before the structure and true function of DNA were confirmed. Moreover, Garrod’s book was published 3 years before the first vitamin was discovered, so he would have had no notion of vitamins as cofactors in enzymatic reactions. He concluded his prophetic work by envisioning the complex interaction between our unique genetic constitution and environmental factors in the exquisitely simple statement “These idiosyncrasies may be summed up in the proverbial saying that one man’s meat is another man’s poison.”
The Clinical Utility of Nutrigenomics and Nutrigenetics
The point of using genetic and genomic information in a clinical setting is to personalize the therapeutic regimen and to develop an effective strategy toward true disease prevention. It is a common mistake, however, to think that somehow the new preventive genomic information we can access will make all previous therapies obsolete. This mistake is evident in the mindset that attributes disease risk to polymorphisms without reference to environment (witness almost any genetics headline in the popular press). Simply put, genetic information is no more and no less valuable than environmental information. We need them both to make an optimal difference. Furthermore, at this stage in preventive genomic research there are at best only 100 or so polymorphisms about which we have sufficient clinical information to make personalized nutritional recommendations.17 It is useful information, to be sure, but it is insufficient for comprehensive nutritional and therapeutic recommendations.