Studies of drug disposition in critically ill children are limited.
Effective pharmacologic therapeutic interventions should focus on choosing the right drug, right time, right dose, right duration, and right route.
Age-dependent and pathophysiologic changes affect the pharmacokinetics and pharmacodynamics of drugs.
Drug disposition is controlled by pharmacokinetics, which describes the changes in drug or metabolite concentration in the body over time.
Drug effect is governed by pharmacodynamics, which describes the response elicited by the drug in the body.
A comprehensive understanding of both pharmacokinetics and pharmacodynamics allows for rational therapeutic choices based on targeting the correct drug exposure to achieve the targeted response.
Pharmacokinetic processes that influence drug disposition include absorption, distribution, metabolism, and elimination.
Pharmacotherapeutic strategies in the critically ill must incorporate developmental and disease-dependent changes for effective therapy.
Approximately 25% of the worldwide population is younger than 15 years, meaning that a quarter of the world's population is classified as pediatric. Pediatric studies were not required by the US Food and Drug Administration or other regulatory agencies before the late 1990s, leaving a lack of dosing guidance for children. Throughout the 19th and early 20th centuries, federal regulations for medications did not exist; many medications lacked efficacy, while others were blatantly toxic. Some of the high-profile toxicity cases involved the disfigurement, harm, and death of children, including the tragedies of Mrs. Winslow's Soothing Syrup, sulfanilamide elixir containing diethylene glycol, and thalidomide. Today, regulatory agencies require pediatric-specific clinical trials before approval of new drug entities. Even with these requirements, pediatric patients, especially those who are critically ill, remain a vulnerable population who exhibit significant physiologic differences that affect drug disposition compared with healthy children.
The basics of drug disposition are governed by pharmacokinetics and pharmacodynamics. Many factors affect the pharmacokinetics and pharmacodynamics of drugs, including, but not limited to, age and organ dysfunction. Maturation and development affect every system in the body—changes occur from gestation through the newborn and infant period, throughout childhood, and into adulthood. Some of these developmental changes have been well characterized, while others have limited data. Critical illness can impact all of the major organ systems, such as the immune, cardiac, hepatic, renal, and circulatory systems, which can lead to multiorgan dysfunction. Changes in blood pH, hypoproteinemia, increased hydrostatic pressure, concomitant medications, and increased capillary permeability are all aspects of critical illness that can have profound effects on drug disposition, as described in greater detail later in this chapter.
Pharmacokinetics—the movement of compounds into, within, and out of the body—can be described using mathematical equations. In simple terms, it is what the body does to the drug over time. The most basic and important pharmacokinetic (PK) parameters are clearance (CL), volume of distribution (Vd), and half-life (t1/2). For drugs not administered through the intravenous route (e.g., oral, transdermal, inhaled), bioavailability (F) must also be taken into account. All of these parameters influence the total concentration of drug in the body over time, also known as the area under the concentration-time curve (AUC).
Clearance is defined as the volume of blood from which the drug is removed per unit of time (e.g., mL/min or L/h). CL is often weight normalized and represented per kilogram of body weight (e.g., mL/min per kg or L/h per kg). Clearance can be calculated by multiplying the volume of distribution of the compartment that contains the drug by the elimination rate constant (ke) from that compartment, that is, the rate at which drug is eliminated from the body, using the equation CL = Vd × ke. However, in the clinical setting, estimating the value of ke is not always easy or practical. Therefore, clearance can also be estimated after the first dose of drug is administered by the following equation:

CL = (F × Dose) / AUC
where dose is in units of mass, such as grams or milligrams, F is the bioavailability, and AUC is in units of (mass × time)/volume. Deriving clearance from these equations does not necessitate an understanding of where and how the drug is metabolized and/or cleared from the body (i.e., metabolized in the liver or cleared through the kidneys). It does, however, provide an overall understanding of the time it takes to clear the body of the drug.
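This calculation can be sketched in a few lines of code; the dose, bioavailability, and AUC values below are hypothetical and chosen only for illustration.

```python
def clearance(dose, f, auc):
    """Estimate total body clearance as CL = (F x Dose) / AUC.

    dose: administered dose (mg); f: bioavailability (1.0 for IV);
    auc: area under the concentration-time curve (mg*h/L).
    Returns clearance in L/h.
    """
    return f * dose / auc

# Hypothetical example: a 100 mg IV dose (F = 1) with an observed
# AUC of 50 mg*h/L gives a clearance of 2 L/h.
cl = clearance(dose=100, f=1.0, auc=50)
```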
The two main organs of drug clearance are the liver and kidneys. Drugs may be bound or unbound to plasma or tissue proteins and establish an equilibrium within the body. Since only unbound drug can distribute throughout the body, be metabolized or cleared, and, usually, effect a response at the target, the fraction of unbound drug (fu) is important. Overall, total body clearance is equal to the sum of the clearances from all clearance mechanisms. In other words, CLTotal = CLH + CLR + CLO, where CLH is hepatic clearance, CLR is renal clearance, and CLO is the total clearance from elimination pathways other than the liver and kidney. Renal clearance is the sum of glomerular filtration and tubular secretion minus tubular reabsorption, represented by the equation:

CLR = CLfiltration + CLsecretion − CLreabsorption
Hepatic clearance relies on drug metabolism by liver enzymes and is influenced by the drug's intrinsic clearance (CLint), which is the ability of the drug-metabolizing enzymes to clear the drug. Hepatic CL also relies on hepatic blood flow (QH) to deliver drug to the sites of metabolism. Hepatic clearance can be calculated using the following (well-stirred model) equation:

CLH = QH × (fu × CLint) / (QH + fu × CLint)
Drugs can be further characterized by the extraction ratio (E), the fraction of drug removed during one pass through the liver. High extraction ratio drugs are those with E ranging from 0.7 to 1.0. In this case, fu × CLint is much larger than QH; thus, the equation for hepatic clearance simplifies to CLH = QH. In summary, the CL of drugs with high extraction ratios depends primarily on hepatic blood flow. For low extraction ratio drugs, those with E ranging from 0.01 to 0.30, fu × CLint is much smaller than QH, and the equation simplifies to CLH = fu × CLint.
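The two limiting cases can be verified numerically. The sketch below uses the well-stirred model with hypothetical values (a hepatic blood flow of 90 L/h and arbitrary intrinsic clearances) purely to show the flow-limited and capacity-limited extremes.

```python
def hepatic_clearance(q_h, fu, cl_int):
    """Well-stirred model of hepatic clearance:
    CL_H = Q_H * (fu * CL_int) / (Q_H + fu * CL_int)."""
    return q_h * fu * cl_int / (q_h + fu * cl_int)

# High extraction ratio: fu * CL_int >> Q_H, so CL_H approaches Q_H
# (flow-limited). Hypothetical hepatic blood flow Q_H = 90 L/h.
high_e = hepatic_clearance(q_h=90, fu=1.0, cl_int=9000)   # ~89 L/h

# Low extraction ratio: fu * CL_int << Q_H, so CL_H approaches
# fu * CL_int (capacity-limited).
low_e = hepatic_clearance(q_h=90, fu=0.5, cl_int=10)      # ~4.7 L/h
```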
The metabolic transformation of drugs that drives the intrinsic clearance is catalyzed by enzymes; most reactions follow Michaelis-Menten kinetics. In this case, the rate of drug metabolism is defined as

v = (Vmax × [S]) / (Km + [S])
where Vmax is the maximal rate of drug metabolism, [S] is the concentration of the drug or substrate, and Km is the substrate concentration at which the rate of metabolism is half of Vmax. When the drug concentration is much less than the Km (Km >> [S]), the rate of drug metabolism is proportional to the concentration of the free drug, known as first-order elimination kinetics. Most drugs are eliminated via linear first-order kinetics. However, if the drug concentration is much higher than the Km ([S] >> Km), then the enzyme system is saturated, and the amount of drug metabolized is constant and at its maximum. In this event, the drug follows zero-order, or saturation, elimination kinetics. In children, a number of important compounds demonstrate zero-order kinetics at clinically useful doses ( Box 122.1 ). Drugs can follow zero-order kinetics and then, once the concentration falls below the Km, transition to first-order kinetics.
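The two regimes fall out of the same equation. The short sketch below uses hypothetical parameters (Vmax = 100 mg/h, Km = 10 mg/L) to show that the rate is nearly proportional to concentration well below Km and nearly constant well above it.

```python
def mm_rate(s, vmax, km):
    """Michaelis-Menten rate of metabolism: v = Vmax*[S] / (Km + [S])."""
    return vmax * s / (km + s)

# First-order regime ([S] << Km): rate ~ (Vmax/Km) * [S],
# i.e., proportional to concentration.
v_low = mm_rate(0.1, vmax=100, km=10)     # ~1 mg/h

# Zero-order regime ([S] >> Km): rate saturates near Vmax.
v_high = mm_rate(1000, vmax=100, km=10)   # ~99 mg/h
```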
Understanding the mechanisms of elimination can clarify how this parameter can change with age and disease since renal function, hepatic blood flow, and enzyme abundance change over time and are susceptible to pathophysiologic changes.
Volume of distribution
After administration, the drug disseminates throughout the body, where it can remain solely in the blood or plasma (central compartment) or can distribute extensively into tissue compartments (peripheral compartments). The extent of distribution is represented by the volume of distribution, the theoretical volume of fluid that contains the compound. The units are volume, such as mL or L, and are often normalized to weight (L/kg). The volume of distribution does not explicitly correspond to a physiologic space, which is why it can exceed the total volume of body water (0.5–0.6 L/kg). However, the Vd correlates with how extensively the drug partitions into the body. For instance, a drug with a low volume of distribution distributes minimally, if at all, into tissues or compartments other than blood or plasma; sulfamethoxazole and aspirin both have an apparent volume of distribution of approximately 0.2 to 0.5 L/kg. Conversely, a large volume of distribution suggests extensive distribution into tissues or compartments other than the central compartment; chloroquine and labetalol both have an apparent volume of distribution greater than 2 to 5 L/kg.
The apparent volume of distribution can be calculated after the administration of an intravenous bolus using the following equation:

Vd = Dose / C0
where C 0 is the initial concentration at time zero, or its peak immediately after administration. This equation can also be used to calculate the volume of distribution in the central compartment (V c ), which represents an instantaneous, rapid equilibration in the blood, plasma, and potentially any other fast-equilibrating tissue compartments.
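As a brief numerical illustration (all values hypothetical), the same relationship can be applied directly:

```python
def volume_of_distribution(dose, c0):
    """Apparent volume of distribution after an IV bolus: Vd = Dose / C0.

    dose in mg, c0 (initial concentration) in mg/L; returns liters.
    """
    return dose / c0

# Hypothetical example: a 100 mg IV bolus producing an initial
# concentration of 5 mg/L implies Vd = 20 L. In a 10 kg child this
# is 2 L/kg, suggesting extensive distribution beyond body water.
vd = volume_of_distribution(dose=100, c0=5)
vd_per_kg = vd / 10
```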
Compounds that distribute into other tissue compartments that equilibrate more slowly than the central compartment have both a central (Vc) and peripheral (Vp) volume of distribution. When the rates of distribution into and out of the peripheral compartment are at equilibrium, the steady-state volume of distribution is reached and can be defined using the following equation:

Vss = Vc + Vp × (fup / fut)
where fup is the unbound fraction in the plasma and fut is the unbound fraction in the tissue. This ratio of the unbound fraction in the plasma to that in the tissue is also referred to as the partition coefficient, or Kp.
The elimination half-life of a drug represents the amount of time it takes for half of the drug concentration to be cleared from the body. The amount of time for half of the drug to distribute into the tissues represents the distribution half-life. While it may seem that the drug is being cleared during distribution, it is still in the body, and the distribution half-life should not be confused with the elimination half-life. When the percent of drug remaining in the body is plotted on a semilogarithmic plot against the number of half-lives, the resulting line has a slope of ln(0.5), or −0.693 ( Fig. 122.1 ). The absolute value of ln(0.5) equals ln(2). Therefore, the half-life of a drug can be estimated by dividing the natural log of 2 by the elimination rate constant, ke, or by substituting CL and Vd in for ke, as shown:

t1/2 = 0.693 / ke = (0.693 × Vd) / CL
Therefore, changes in volume of distribution and/or clearance alter the elimination half-life.
The elimination half-life guides the dosing interval and the schedule of peak and trough sampling for therapeutic drug monitoring (TDM). For multiple dosing or continuous infusions, steady state is reached when the rate of drug entering the body equals the rate of drug exiting the body. It takes approximately 3 to 5 half-lives for roughly 88% to 97% of the drug to be eliminated from the body; likewise, a drug dosed at a regular interval based on the half-life achieves steady state after 3 to 5 doses. To achieve a target steady-state concentration faster than 3 to 5 half-lives, a loading dose can be administered. Steady state cannot be achieved faster by increasing the dose or rate of infusion, as these methods only achieve a higher steady state, still in 3 to 5 half-lives, and the initial target steady-state concentration will be exceeded.
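Accumulation toward steady state can be demonstrated by superposing exponential decays from repeated intravenous boluses. The sketch below assumes a one-compartment model with hypothetical parameters (t1/2 = 6 h, Vd = 20 L, 100 mg dosed every half-life).

```python
import math

def peak_after_n_doses(dose, vd, ke, tau, n):
    """Peak concentration immediately after the n-th repeated IV bolus
    (one-compartment model), by superposition of exponential decays."""
    return sum((dose / vd) * math.exp(-ke * tau * i) for i in range(n))

# Hypothetical drug: t1/2 = 6 h (ke = ln2/6), dosed every half-life.
ke = math.log(2) / 6
css_peak = (100 / 20) / (1 - math.exp(-ke * 6))          # steady-state peak, 10 mg/L
c4 = peak_after_n_doses(dose=100, vd=20, ke=ke, tau=6, n=4)
# After 4 doses the peak is ~94% of the steady-state value; a loading
# dose of Cp * Vd would reach the target concentration immediately.
```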
Pharmacokinetic parameters are based on intravenous administration, which is considered the reference standard for absorption. After intravenous administration, 100% (F = 1) of the drug is available in the body and can reach the target site. For compounds administered by other, nonintravenous routes, the drug must first be absorbed, and the amount available to reach the target site can range anywhere from 0% to 100% (0 < F ≤ 1). Barriers to complete absorption include factors such as low permeability, low solubility, first-pass metabolism, and changes in transporters. First-pass metabolism refers to metabolism of the drug before it enters the systemic circulation, resulting in loss of drug before it can reach the target site. In order to calculate absolute bioavailability, the dose-normalized AUC after nonintravenous administration must be compared with that after intravenous administration. For instance, the absolute bioavailability of an orally administered drug can be calculated using the following equation:

F = (AUCoral / Doseoral) / (AUCIV / DoseIV)
However, if a direct comparison with an intravenous formulation is not possible, relative bioavailability can be calculated by comparing the exposure with that of another nonintravenous formulation. Relative bioavailability is important when comparing a newer formulation with an existing one to determine the need for a dosage adjustment. It is also important to note that if an intravenous formulation is unavailable and F is unknown or incalculable, then only apparent clearance (CL/F) and apparent volume of distribution (Vd/F) can be calculated. When clearance or volume parameters are shown divided by F, these are not absolute values, and they are reliable only for that route of administration.
For compounds with a bioavailability of 1, no dosage adjustment is required when switching between intravenous and nonintravenous administration. However, doses must be increased when the bioavailability of the nonintravenous route is less than 1. Lowered bioavailability has been demonstrated in critically ill patients for oral and subcutaneous medications. The bioavailability of moxifloxacin and lansoprazole, administered as an oral tablet and an orally disintegrating tablet, respectively, is reduced by approximately 25% compared with intravenous administration. A comparison of enoxaparin bioavailability after intravenous and subcutaneous administration indicates that critical illness reduces the bioavailability of the subcutaneous route by almost 40%. Morphine, a commonly administered medication in critically ill patients, has a low oral bioavailability of 20% to 30% and is metabolized into two metabolites, morphine-3-glucuronide (M3G) and morphine-6-glucuronide (M6G). The M6G metabolite is pharmacologically active; thus, differences in metabolism can change the ratio of parent drug to active metabolite. For diazepam, oral and rectal bioavailability is approximately 80% to 90% of intravenous bioavailability. These examples demonstrate that when a nonintravenous route of administration is considered, clinicians must recognize that the route of administration affects the total amount of drug delivered to the systemic circulation, and dosage adjustments will likely be needed.
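The required adjustment follows directly from F. The sketch below assumes linear kinetics and uses the oral morphine bioavailability range cited above (roughly 20% to 30%); it is an illustration of the arithmetic, not a dosing recommendation.

```python
def equivalent_noniv_dose(iv_dose, f):
    """Dose required by a non-IV route to match IV exposure,
    assuming linear kinetics: Dose_nonIV = Dose_IV / F."""
    return iv_dose / f

# Illustration with the oral morphine bioavailability range cited
# above (F ~ 0.20-0.30): matching a 2 mg IV dose would require
# roughly 7 to 10 mg orally.
oral_low = equivalent_noniv_dose(2, 0.30)   # ~6.7 mg
oral_high = equivalent_noniv_dose(2, 0.20)  # 10 mg
```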
Each pharmacokinetic parameter can be derived from a plot of drug concentration versus time. The total exposure to drug over time, also known as the area under the concentration-time curve (AUC), can be calculated by dividing the curve into trapezoids and summing the area of each. Additionally, these plots can indicate the number of compartments into which the drug distributes when the concentration on the y-axis is converted to a logarithmic scale ( Fig. 122.2 ). The drug is said to distribute into one compartment if, after converting the y-axis, the drug concentration decreases in a linear manner ( Fig. 122.2 A). The slope of this line is the negative of the elimination rate constant, −ke. In contrast, the drug distributes into two compartments if the log-concentration profile declines biexponentially ( Fig. 122.2 B). The decay of a two-compartment drug is divided into the alpha (α) phase (green line) and the beta (β) phase (purple line), which correspond to the distribution and elimination phases, respectively. If these lines are extrapolated as shown in the figure, the intercepts (A and B) and slopes (α and β) of both lines can be identified. Using the slopes and intercepts, the plasma concentration can be determined at any time after administration using the following equation:

C(t) = A × e^(−αt) + B × e^(−βt)
where t is the time after administration. For two-compartment drugs, β refers to the elimination rate constant, ke; thus, the elimination half-life can be determined by substituting the β value for ke. The distribution half-life can be calculated by substituting the α value for ke and represents the time it takes for half of the drug to distribute throughout the body.
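Both ideas, the biexponential profile and the trapezoidal AUC described above, can be combined in a short sketch. The intercepts and slopes here (A = 8 mg/L, α = 2/h, B = 2 mg/L, β = 0.1/h) are hypothetical; the trapezoidal sum over a dense time grid should approach the analytic value A/α + B/β.

```python
import math

def biexp_conc(t, A, alpha, B, beta):
    """Two-compartment concentration: C(t) = A*e^(-alpha*t) + B*e^(-beta*t)."""
    return A * math.exp(-alpha * t) + B * math.exp(-beta * t)

def trapezoid_auc(times, concs):
    """Linear trapezoidal AUC over observed time points."""
    return sum((times[i + 1] - times[i]) * (concs[i] + concs[i + 1]) / 2
               for i in range(len(times) - 1))

# Hypothetical parameters: distribution t1/2 = ln2/alpha ~ 0.35 h,
# elimination t1/2 = ln2/beta ~ 6.9 h.
times = [i * 0.01 for i in range(6001)]             # 0 to 60 h
concs = [biexp_conc(t, 8, 2.0, 2, 0.1) for t in times]
auc = trapezoid_auc(times, concs)                   # ~ A/alpha + B/beta = 24
```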
Applying these pharmacokinetic principles can facilitate therapeutic choices. To target a specific concentration, clinicians can use loading and maintenance doses. Loading doses allow faster attainment of the target concentration when that concentration must be achieved immediately. Loading doses can be calculated using the following equation:

Loading dose = (Cp × Vd) / F
where Cp is the target concentration and F is the bioavailability (F = 1 for intravenous administration). Maintenance doses sustain the target concentration at steady state and can be calculated using the following equation:

Maintenance dose = (Cp × CL × τ) / F
where τ is the dosing interval. Ultimately, these pharmacokinetic parameters can be used to determine dose, dosing interval, time to steady state, and other important factors influencing therapeutic choices.
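Putting the dosing equations together, the sketch below computes a loading dose and a maintenance dose for a hypothetical intravenous drug (target Cp = 10 mg/L, Vd = 20 L, CL = 2 L/h, dosed every 12 hours); the numbers are illustrative only.

```python
def loading_dose(cp, vd, f=1.0):
    """Loading dose = (Cp * Vd) / F."""
    return cp * vd / f

def maintenance_dose(cp, cl, tau, f=1.0):
    """Maintenance dose per interval = (Cp * CL * tau) / F."""
    return cp * cl * tau / f

# Hypothetical IV drug (F = 1): target Cp = 10 mg/L, Vd = 20 L,
# CL = 2 L/h, dosing interval tau = 12 h.
ld = loading_dose(cp=10, vd=20)            # 200 mg loading dose
md = maintenance_dose(cp=10, cl=2, tau=12)  # 240 mg every 12 h
```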
Pharmacodynamics, in simple terms, is the effect that the drug has on the body, encompassing biochemical, physiologic, and molecular effects. Overall, it includes mechanism of action, safety profiles, drug-receptor interactions, and receptor-effector coupling. Drugs can be characterized as agonists or antagonists depending on the response they produce after administration ( Fig. 122.3 ). Drugs that can produce a 100% response at the highest doses are labeled full agonists, whereas those that produce only a fraction of the full response at the highest doses are partial agonists. Drugs that occupy the receptor and do not produce an effect, or block an effect, are labeled antagonists. Examples of each can be seen with opioids: morphine is a full agonist, buprenorphine is a partial agonist, and naloxone is an antagonist. The concentration-response relationship can be characterized as linear, hyperbolic (Emax), or sigmoid (as shown in Fig. 122.3 ). Linear relationships have a direct relationship between concentration and effect; thus, doubling the concentration doubles the response. Conversely, hyperbolic and sigmoid responses plateau at a certain point, beyond which additional drug will not exert additional effect.
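The plateau behavior of a full versus a partial agonist can be shown with a sigmoid Emax model. The Emax and EC50 values below are hypothetical and serve only to illustrate the shapes described above.

```python
def sigmoid_emax(c, emax, ec50, n=1.0):
    """Sigmoid Emax model: E = Emax * C^n / (EC50^n + C^n)."""
    return emax * c ** n / (ec50 ** n + c ** n)

# Hypothetical EC50 = 10 for both drugs. A full agonist plateaus at
# 100% response; a partial agonist plateaus at a fraction of it.
full_high = sigmoid_emax(1e6, emax=100, ec50=10)     # approaches 100
partial_high = sigmoid_emax(1e6, emax=40, ec50=10)   # approaches 40
half_maximal = sigmoid_emax(10, emax=100, ec50=10)   # 50% at C = EC50
```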
Pharmacodynamic response can be linked to pharmacokinetics, as the exposure determines the amount of drug available to produce an effect. After understanding the concentration versus time profile, it is important to understand the concentration versus effect profile. Responses may be direct or indirect ( Fig. 122.4 ). Drugs exhibiting a direct response demonstrate concentration and effect that peak simultaneously, with an effect that is proportional to the drug concentration. Examples include antihypertensive agents and muscle relaxants. Direct effects signify that the drug rapidly equilibrates with its site of action. Drugs exhibiting an indirect response demonstrate a temporal delay between the maximum drug concentration and the maximum effect. Indirect responses are characterized by the need to transport the drug to the site of action or by a response that requires downstream synthesis or degradation of a factor controlling the response. Examples include warfarin, interferon-α2a, and cimetidine.
Aside from understanding the mechanism of action for producing a response, clinicians should also recognize the methods for assessing clinical outcomes. For practical reasons, the intended clinical end point may occur only far in the future, rendering direct measurement of the pharmacodynamic response infeasible. In these cases, alternative methods are required to measure the efficacy of therapeutic interventions intended to prevent long-term sequelae. Biomarkers are surrogate end points used as indicators of normal physiologic processes, pathogenic processes, or pharmacologic responses to therapeutic interventions. Optimally, biomarkers should be easily identifiable and quantifiable physiologic measures validated as reliable predictors of a clinical end point. Examples of biomarkers are blood pressure, cholesterol, HbA1c, tumor shrinkage, and human immunodeficiency virus (HIV) viral load, which help clinicians predict the clinical end points of stroke, coronary artery disease, diabetes-related morbidity, overall or progression-free survival in cancer, and HIV/AIDS (acquired immunodeficiency syndrome) progression, respectively. Measuring the timing and percent change in these biomarkers helps clinicians choose appropriate therapies.
If there is a clear relationship between concentration and response, this information can be used to target a desired effect. Therapeutic drug monitoring (TDM) is the measurement of drug concentrations in the blood or plasma at specific intervals. TDM can be used to assess the concentration of drug over time and to estimate the individual patient's pharmacokinetic parameters in order to alter or continue dosing to reach the target concentration. TDM has been used to confirm that specific drug concentrations known to be associated with safety and efficacy are achieved, especially for drugs with a narrow therapeutic index. The therapeutic index is the range of concentrations considered to be efficacious with an acceptable toxicity and safety profile. A narrow therapeutic index refers to drugs for which this range is small, which increases the difficulty of dosing to achieve efficacy while avoiding toxicity. It is important to understand the targeted pharmacodynamic response and its temporal relationship with pharmacokinetics to optimize dosing, especially given the dynamic age- and disease-dependent changes that impact pharmacokinetics.
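One common TDM calculation is a proportional dose adjustment, which is valid only for drugs that follow linear (first-order) kinetics; it fails for drugs with saturable, zero-order elimination. The concentrations and doses below are hypothetical.

```python
def proportional_dose_adjustment(current_dose, measured, target):
    """Proportional TDM dose adjustment, valid only for drugs with
    linear (first-order) kinetics:
    new dose = current dose * (target / measured concentration)."""
    return current_dose * target / measured

# Hypothetical example: a measured trough of 5 mg/L on a 100 mg dose,
# with a target trough of 7.5 mg/L, suggests increasing to 150 mg.
new_dose = proportional_dose_adjustment(100, measured=5, target=7.5)
```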
Determinants of effective therapy
Therapeutic choices are based on the five Rs: the right drug, right time, right dose, right duration, and right route. Age-dependent changes in physiology can affect the pharmacokinetic parameters, such as clearance, volume of distribution, bioavailability, and half-life. Additionally, in the setting of critical illness, pathophysiologic changes can complicate these choices as they, too, alter pharmacokinetic parameters. Consequently, choosing a safe and efficacious dose in children entails recognizing and applying these developmental changes to therapeutic decision-making. Understanding the factors that affect pharmacokinetics and pharmacodynamics—and therefore drug disposition and effective therapy ( Fig. 122.5 )—can facilitate optimal drug choice and dosing recommendations.
Pharmacokinetics can be explained by four main processes: absorption, distribution, metabolism, and elimination (ADME). A drug's pathway through each of these processes can be linear or circular, depending on its physicochemical properties ( Fig. 122.6 ). For instance, metabolism can occur immediately following absorption, or metabolites can be distributed back into the systemic circulation before elimination. Each of these processes is affected by developmental and physiologic changes. These age- and disease-dependent changes in the ADME processes can explain changes in the clinical pharmacokinetic parameters of clearance, volume of distribution, half-life, and exposure.
One of the most common forms of drug administration is the intravenous route, which ensures complete delivery of the drug into the bloodstream. All other routes of administration do not guarantee complete delivery into the blood and rely on the process of absorption, the movement of drug from the site of administration to the bloodstream. Depending on the site of administration, absorption can vary greatly across different formulations of the same drug. Absorption is regulated by the site of administration, the composition of the tissues involved, the drug's physicochemical properties ( Table 122.1 ), and disease states ( Box 122.2 ).
| Physicochemical Factors | Patient Factors |
| --- | --- |
| Disintegration of tablets or solid phase | Surface area for absorption |
| Dissolution in gastric or intestinal fluids | pH of gastrointestinal tract |
| Lipophilicity/hydrophilicity | Gastric emptying and intestinal transit times |
| Molecular weight | Stomach and duodenal volume |
| Drug ionization | Bile salt concentration |
| Particle size | Bacterial colonization |
- Gastric acid secretion
- Proximal small bowel resection
- Delayed gastric emptying
- Congestive heart failure
- Protein calorie malnutrition
- Intestinal transit time
- Bile salt excretion
- Cholestatic liver disease
- Extrahepatic biliary obstruction
- Decreased surface area
- Short bowel syndrome