Chapter 15 Interpreting Laboratory Tests




The use of the clinical laboratory to evaluate patients for the presence or absence of disease transcends medical and surgical specialties. Physicians in all areas of medical practice are dependent on laboratory testing to arrive at a correct diagnosis. Because many factors increase the uncertainty associated with a test result, physicians need to understand the limitations of interpreting test results.


Clinical decision making using diagnostic laboratory testing is based on the assumption that a given test is accurate and precise. Diagnostic test accuracy is the ability of a test to distinguish patients with a disease from those who are disease free (Leeflang et al., 2008). Test accuracy is not necessarily fixed; accuracy may vary among patient populations and with different clinical conditions. Precision is a measure of the reproducibility of a test measurement when the same specimen is rechecked under the same circumstances. Sources of imprecision include biologic variability and analytic variability. Biologic variability is the variation in a test result in the same person at different times because of physiologic processes, constitutional factors, and extrinsic factors (McClatchey, 2002) (Table 15-1). Analytic variation refers to the variation in repeated tests on the same specimen and relates to analytic technique and specimen processing. With current technology, biologic variation plays a larger role than analytic variation in most laboratory tests.


Table 15-1 Biologic Variables that Affect Test Results

Biologic Rhythms
Circadian
Ultradian
Infradian
Constitutional Factors
Age
Gender
Genotype
Extrinsic Factors
Posture
Exercise
Diet: caffeine
Drugs and pharmaceuticals: oral contraceptives
Alcohol use
Pregnancy
Intercurrent illness

From Holmes EA. The interpretation of laboratory tests. In McClatchey KD (ed). Clinical Laboratory Medicine, 2nd ed. Philadelphia, Lippincott–Williams & Wilkins, 2002, p 98.



The Concept of “Normal”


The result of a laboratory test is compared with a reference standard, which traditionally has indicated values that are seen in healthy persons. Using the terms “normal results” or “normal range” implies that there is a clear distinction between healthy and diseased persons, when in reality there is considerable overlap.


The current standard of comparison for laboratory results is the reference range, frequently defined as the results falling between chosen percentiles (typically the 2.5th to 97.5th percentiles) in a healthy reference population. Several problems are encountered when deriving a reference range. Often, the reference population is not representative of the persons being tested; it may differ in gender, age distribution, race, ethnicity, or setting (hospitalized vs. ambulatory patients) from the person receiving the test. The person being tested should also be tested under physiologic conditions (e.g., fasting, sitting, resting) similar to those of the reference population. Finally, the reference population may be too small to capture a representative range of values.


Two statistical methods, parametric and nonparametric, are generally used to define reference intervals. The parametric method applies when the results of the sample population fit a normal (Gaussian) distribution, with a bell-shaped curve around the mean; in this case, the 2.5th and 97.5th percentiles can be calculated using statistical formulas. When the reference values do not follow a normal distribution, nonparametric methods are used: the results from the reference subjects are arranged in ascending order, and the values between the 2.5th and 97.5th percentiles are identified as the reference range.
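As a rough illustration only (not from the chapter), the following Python sketch derives a 95% reference interval from a hypothetical healthy reference sample by each method; the data and values are invented for the example.

```python
import numpy as np

# Hypothetical results from 200 healthy reference subjects (arbitrary analyte, arbitrary units)
rng = np.random.default_rng(0)
reference_values = rng.normal(loc=92.0, scale=8.0, size=200)

# Parametric method: assumes a Gaussian distribution; central 95% = mean +/- 1.96 SD
mean, sd = reference_values.mean(), reference_values.std(ddof=1)
parametric_interval = (mean - 1.96 * sd, mean + 1.96 * sd)

# Nonparametric method: rank the values and take the 2.5th and 97.5th percentiles
nonparametric_interval = tuple(np.percentile(reference_values, [2.5, 97.5]))

print("Parametric 95%% reference interval:    %.1f - %.1f" % parametric_interval)
print("Nonparametric 95%% reference interval: %.1f - %.1f" % nonparametric_interval)
```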


Reference ranges for a particular test can be the manufacturer’s suggested reference range or may be modified because of differences in the population using the laboratory. The Clinical Laboratory Improvement Amendments of 1988 (CLIA) define three requirements for reference values: the normal or reference ranges must be made available to the ordering physician; the normal or reference ranges must be included in the laboratory procedure manual; and the laboratory must establish specifications for performance characteristics, including the reference range, for each test before reporting patient results. Using the manufacturer’s reference range is valid when the analytic processing of the test is the same as that done by the manufacturer and when the population being tested is similar to the reference population used to define the reference range. When a reference range is chosen to include 95% of test results, 5% of the healthy population will fall outside the reference range on any single test. When more than one test is ordered, the probability increases that at least one result will be outside the reference range. Table 15-2 compares the number of independent tests ordered with the probability of an abnormal result being present in healthy persons.


Table 15-2 Probability that a Healthy Person Will Have an Abnormal Result with Multiple Tests

Number of Independent Tests    Probability of Abnormal Result (%)
1                              5
2                              10
5                              23
10                             40
20                             64
50                             92
90                             99
Infinity                       100

From Burke MD. Laboratory tests: basic concepts and realistic expectations. Postgrad Med 1978;63:55.
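The probabilities in Table 15-2 follow from treating each test as an independent event with a 5% chance of an "abnormal" result in a healthy person, so that the chance of at least one abnormal result is 1 − 0.95^n. A short Python check (illustrative only) reproduces the table:

```python
# Probability that a healthy person has at least one "abnormal" result when each
# independent test uses a 95% reference range: P = 1 - 0.95**n (reproduces Table 15-2)
for n in (1, 2, 5, 10, 20, 50, 90):
    probability = 1 - 0.95 ** n
    print(f"{n:3d} independent tests -> {probability:.0%} chance of at least one abnormal result")
```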



Evaluating a Test’s Performance Characteristics


Given that tests are not totally accurate or precise, one must have a way to quantify these shortcomings. A test’s ability to discriminate diseased from nondiseased persons is defined by its sensitivity, specificity, and positive and negative predictive values. Table 15-3 shows how each is calculated. Sensitivity and specificity are inherent technical aspects of a test and are independent of the prevalence of disease in the population tested. However, given that diseases have a spectrum of manifestations, sensitivity and specificity are improved if the population is heavily weighted with patients who have advanced (vs. early) illness.


Table 15-3 Diagnostic Test Performance Characteristics

Finding          Disease Present        Disease Absent
Test positive    True positive (TP)     False positive (FP)
Test negative    False negative (FN)    True negative (TN)

Sensitivity = TP/(TP + FN); Specificity = TN/(TN + FP).


Positive predictive value = TP/(TP + FP); Negative predictive value = TN/(TN + FN).


Sensitivity is defined as the percentage of persons with the disease who are correctly identified by the test. Specificity is the percentage of persons who are disease-free and correctly excluded by the test. The positive predictive value is defined as the percentage of persons with a positive test who actually have the disease, whereas the negative predictive value is the percentage of persons with a negative test who do not have the disease. Predictive value is influenced by the sensitivity and specificity of the test and the prevalence (the percentage of people in a population who at a given time have the disease).
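For concreteness, a small Python helper (not part of the chapter; the counts are hypothetical) computes these characteristics from the 2x2 table of Table 15-3:

```python
def test_performance(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Diagnostic test characteristics from a 2x2 table (see Table 15-3)."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "positive_predictive_value": tp / (tp + fp),
        "negative_predictive_value": tn / (tn + fn),
        "prevalence": (tp + fn) / (tp + fp + fn + tn),
    }

# Hypothetical counts for illustration only
print(test_performance(tp=90, fp=40, fn=10, tn=860))
```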



Separating Diseased from Disease-Free Persons


Under ideal circumstances, sensitivity and specificity approach 100%; in reality, they are lower. Even the best available test for deciding who is diseased or disease free may be imperfect, with sensitivity and specificity only in the 80% range. Moreover, discrepancies between a test’s efficacy and its effectiveness are common. Efficacy is a test’s performance under ideal conditions, whereas effectiveness is its performance under usual circumstances. Tests under development are evaluated under highly rigorous criteria, but in clinical practice, inadvertent error can be introduced into the technical performance or interpretation of the test results. In addition, test values for the diseased and disease-free populations overlap.


A cutoff value may be chosen to separate “normal” from abnormal (Figure 15-1). This decision is arbitrary and involves selecting a balance between sensitivity and specificity. The receiver operating characteristic (ROC) curve is a graphic analysis used to identify a cutoff that minimizes false-positive and false-negative results (Figure 15-2). The sensitivity and specificity are calculated for a number of cutoff values, with the variables 1-Specificity plotted on the x axis and Sensitivity plotted on the y axis. Each point on the curve represents a cutoff for the test. A perfect test would have a cutoff that allowed both 100% sensitivity and 100% specificity. This would be a point at the upper-left corner of the graph. The most efficient cutoff for a single test is the one that gives the most correct results, represented by the value that plots nearest to the upper-left corner of the graph.
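As an illustrative sketch only (not from the chapter), the following Python code applies a set of candidate cutoffs to simulated, overlapping test values and selects the cutoff whose ROC point lies closest to the upper-left corner of the plot:

```python
import numpy as np

def best_cutoff(diseased: np.ndarray, healthy: np.ndarray, cutoffs: np.ndarray) -> float:
    """Pick the cutoff whose (1 - specificity, sensitivity) point lies closest to the
    upper-left corner (0, 1) of the ROC plot; higher values are treated as 'positive'."""
    best, best_distance = None, np.inf
    for c in cutoffs:
        sensitivity = np.mean(diseased >= c)   # true-positive rate at this cutoff
        specificity = np.mean(healthy < c)     # true-negative rate at this cutoff
        distance = np.hypot(1 - specificity, 1 - sensitivity)
        if distance < best_distance:
            best, best_distance = c, distance
    return best

# Hypothetical overlapping test values in diseased and disease-free groups
rng = np.random.default_rng(1)
diseased = rng.normal(70, 10, 500)
healthy = rng.normal(50, 10, 500)
print("Most efficient cutoff:", round(best_cutoff(diseased, healthy, np.linspace(30, 90, 121)), 1))
```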




The optimal cutoff depends on the purpose of the test and essentially is a risk/benefit analysis. In situations where disease detection is most important, the cutoff may be chosen that maximizes sensitivity at the expense of decreasing specificity. If disease exclusion is the goal, sensitivity and negative predictive value need to be maximized. It is important that negative results be true negatives as opposed to false negatives, so that a negative test has correctly excluded the individual as having disease. Similarly, if disease confirmation is the goal, specificity and positive predictive value are critical. It is important that positive results are true positives and not false positives, so that healthy persons are not misidentified, especially when treatments (e.g., surgery) have serious risks.


The predictive value of a test is directly related to the pretest probability of disease. When the prevalence of disease in the population is high, a positive result is expected simply because the disease is common; similarly, when prevalence is low, a negative result is anticipated because few people have the disease. These characteristics of predictive value become clinically useful when the outcome of a positive or negative test result is compared with the pretest probability of disease (Figure 15-3), in which prevalence (pretest probability of disease) is plotted against the predictive value of a positive and a negative test. Note that a test result loses its ability to discriminate those who have disease from those who do not at the extremes of prevalence. If disease probability is low, a positive or negative result does not change the post-test probability much; it remains low. Conversely, if disease probability is high, the post-test result, whether positive or negative, does not substantially alter an already high probability of disease. Predictive value has the greatest power to discriminate those with disease from those who are disease free in the mid-range of pretest probability, near 50%, where a positive result yields a substantially higher post-test probability of disease than a negative result.
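A minimal Python sketch (illustrative only; the sensitivity, specificity, and prevalences are invented) shows how post-test probabilities behave across the range of pretest probability:

```python
def predictive_values(sensitivity: float, specificity: float, prevalence: float):
    """Post-test probabilities for a positive and a negative result (Bayes' theorem)."""
    tp = sensitivity * prevalence
    fp = (1 - specificity) * (1 - prevalence)
    fn = (1 - sensitivity) * prevalence
    tn = specificity * (1 - prevalence)
    return tp / (tp + fp), tn / (tn + fn)   # (PPV, NPV)

# A test with 90% sensitivity and 90% specificity at different pretest probabilities
for prevalence in (0.01, 0.10, 0.50, 0.90, 0.99):
    ppv, npv = predictive_values(0.90, 0.90, prevalence)
    print(f"prevalence {prevalence:4.0%}: PPV {ppv:5.1%}  NPV {npv:5.1%}")
```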




Multiple Test Ordering


For many diseases, more than one test is available for diagnostic or screening purposes. The dilemma then becomes whether a positive result on several tests must be present before the diagnosis is confirmed, or whether a single positive test is sufficient to label the person as diseased. The various possibilities will have an impact on sensitivity and specificity if the tests are viewed separately. Consider the example in which two tests are available for the diagnosis of a disease. Three combinations can lead to an affirmative diagnosis:

1. Either test is positive.
2. Both tests are positive.
3. The second test is performed only when the first test is positive, and both are positive (sequential testing).

The first combination will increase sensitivity and decrease specificity in comparison with each test alone, and the second combination will decrease sensitivity and increase specificity. These effects on sensitivity and specificity for multiple test ordering are similar to shifting the cutoff point for a single test.


The value of performing a second test only when the first is positive generally comes into play when the first test is significantly less expensive and easier to administer than the second but is less specific, although highly sensitive. The second test is highly sensitive and specific but more costly to perform on large populations, especially for screening purposes. An example is the enzyme-linked immunosorbent assay (ELISA) and Western blot test for human immunodeficiency virus (HIV) testing. The ELISA has a high sensitivity and is relatively inexpensive and easy to perform, but it is less specific. The Western blot test has high sensitivity and specificity, but it is more expensive and more difficult to perform. Using the ELISA first identifies almost everyone with the disease, whereas the Western blot excludes the fraction of persons incorrectly labeled as having disease (false positives) by the ELISA test. This testing sequence has improved sensitivity and specificity over each test alone and is more cost-effective than initially performing both tests.
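Under the simplifying assumption that the two tests are conditionally independent, the combined characteristics of the "either positive" and "both positive" rules can be sketched as follows; the numbers are illustrative only and are not actual ELISA or Western blot assay characteristics:

```python
def combine_parallel(se1, sp1, se2, sp2):
    """'Either test positive' rule (assumes conditional independence of the tests)."""
    sensitivity = 1 - (1 - se1) * (1 - se2)   # missed only if both tests miss
    specificity = sp1 * sp2                   # called negative only if both tests are negative
    return sensitivity, specificity

def combine_serial(se1, sp1, se2, sp2):
    """'Both tests positive' rule, e.g., second test performed only when the first is positive."""
    sensitivity = se1 * se2                   # detected only if both tests detect
    specificity = 1 - (1 - sp1) * (1 - sp2)   # falsely positive only if both are falsely positive
    return sensitivity, specificity

# Hypothetical screening/confirmatory pair, loosely modeled on a cheap sensitive first test
# followed by a more specific second test
print("Either positive:", combine_parallel(0.997, 0.985, 0.98, 0.998))
print("Both positive:  ", combine_serial(0.997, 0.985, 0.98, 0.998))
```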




Albumin


Albumin is a transport protein that is produced mainly in the liver and maintains plasma oncotic (colloid osmotic) pressure. Albumin has a long half-life (20 days) and a small (~5%) daily turnover. In humans, albumin levels rise from birth up to age 1 year, thereafter remaining stable at approximately 3.5 to 5.5 grams per deciliter (g/dL) throughout adult life. Albumin levels are reduced with advancing liver disease, nephrotic syndrome, protein-losing enteropathy, malnutrition, and some inflammatory diseases (Table 15-4). Elevations of serum albumin are unusual except in dehydration.


Table 15-4 Causes of Decreased Albumin Levels

Reduced Absorption
Malabsorption
Malnutrition
Decreased Synthesis
Chronic liver disease
Protein Catabolism
Infection
Hypothyroidism
Burns
Malignancy
Chronic inflammation
Increased Losses
Nephrotic syndrome
Cirrhosis
Protein-losing enteropathies
Hemorrhage
Dilutional
Syndrome of inappropriate antidiuretic hormone secretion (SIADH)
Intravenous hydration

In severe acute infection, reduced albumin production combined with increased catabolism causes a reduction in serum albumin levels beginning within 12 to 36 hours and reaching a nadir in about 5 days. As a marker for malnutrition, however, albumin levels decline relatively late. Albumin levels are most helpful in the evaluation of edema, liver disease, and proteinuria.


The difference between the serum albumin level and the albumin in ascites fluid, the serum-ascites albumin gradient (SAAG), can help differentiate portal hypertension from other causes of ascites. SAAG greater than 1.1 g/dL is seen with portal hypertension; SAAG less than 1.1 g/dL suggests another cause of the ascites, such as peritoneal inflammation or malignancy.
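A minimal sketch (hypothetical values, not part of the text) of the SAAG calculation and the 1.1 g/dL decision point described above:

```python
def serum_ascites_albumin_gradient(serum_albumin_g_dl: float, ascites_albumin_g_dl: float) -> str:
    """SAAG = serum albumin minus ascitic fluid albumin, both in g/dL."""
    saag = serum_albumin_g_dl - ascites_albumin_g_dl
    if saag >= 1.1:
        return f"SAAG {saag:.1f} g/dL: consistent with portal hypertension"
    return f"SAAG {saag:.1f} g/dL: suggests another cause (e.g., peritoneal inflammation or malignancy)"

print(serum_ascites_albumin_gradient(3.2, 1.4))   # hypothetical serum and ascites values
```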


Most of the albumin filtered through the kidneys is reabsorbed, so significant urinary albumin is a sign of abnormal renal function. Large amounts (>300 mg/dL) of albumin can be detected on standard urine dipsticks. Microalbuminuria is defined as a persistent increase of urinary albumin that is below the detectable range of the standard dipstick test. Microalbuminuria is a marker for early diabetic nephropathy and also predicts macrovascular disease. Urinary albumin can be assayed from a spot urine specimen, corrected for the urine creatinine concentration, or from a 24-hour urine collection. A 24-hour urinary albumin excretion in mg/day equates to the same numeric value as the spot urine albumin (mg)/creatinine (g) ratio. Therefore the reference ranges for each test are: normal, less than 30; microalbuminuria, 30 to 300; and clinical albuminuria, greater than 300 (mg/day for the 24-hour collection, mg/g creatinine for the spot ratio). Factors that may interfere with the test accuracy include strenuous or prolonged exercise, upright posture, hematuria, menses, genital or urinary infections, congestive heart failure, uncontrolled hypertension or uncontrolled hyperglycemia, and high protein or high salt intake.
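For illustration only, a small Python helper applying those cutoffs to a hypothetical spot urine specimen:

```python
def classify_albuminuria(albumin_mg: float, creatinine_g: float) -> str:
    """Spot urine albumin (mg) / creatinine (g) ratio; numerically equivalent to mg/day
    on a 24-hour collection, using the cutoffs quoted in the text."""
    ratio = albumin_mg / creatinine_g
    if ratio < 30:
        category = "normal"
    elif ratio <= 300:
        category = "microalbuminuria"
    else:
        category = "clinical albuminuria"
    return f"{ratio:.0f} mg/g creatinine: {category}"

print(classify_albuminuria(albumin_mg=4.5, creatinine_g=0.09))   # hypothetical spot sample
```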



Alkaline Phosphatase


Alkaline phosphatase (ALP) is found in a wide variety of tissues, including the liver, bone, intestine, and placenta. The reference value for ALP depends on age and gender, with higher levels in childhood, adolescence, and pregnancy. A typical reference range in an adult is 25 to 100 U/L. In adults, the source of an elevated ALP is the liver, bone, or medication (Table 15-5). Typically, hepatic elevations of ALP are suggestive of cholestatic liver disease or biliary tract dysfunction. Mild ALP elevations (one to two times above reference range) can occur with parenchymal liver disease, such as hepatitis or cirrhosis. Marked ALP elevations occur with infiltrative liver disease or biliary obstruction, intrahepatic or extrahepatic. A persistently elevated ALP level can be an early sign of primary biliary cirrhosis. In cholestatic liver disease, bilirubin and gamma-glutamyltransferase (GGT) levels are increased as well, with less prominent elevations in aminotransferase levels. To confirm a hepatic source of an elevated ALP level, one can simultaneously measure GGT, which is elevated in obstructive liver disease but not with bone disease. Imaging studies of the liver, by sonography or computed tomography (CT), can define an anatomic basis for obstruction in the setting of an elevated ALP level of hepatic origin.


Table 15-5 Causes of Increased Alkaline Phosphatase Levels

Bone Origin
Paget’s disease
Osteomalacia
Rickets
Hyperparathyroidism
Metastatic disease
Liver Origin
Extrahepatic biliary obstruction
Pancreatic cancer
Biliary cancer
Common bile duct stone
Intrahepatic obstruction
Metastatic liver disease
Infiltrative diseases
Hepatitis
Primary biliary cirrhosis
Sclerosing cholangitis
Cirrhosis
Passive hepatic congestion
Other Causes
Drugs
Phenobarbital
Phenytoin
Chlorpropamide
Hyperthyroidism
Temporal arteritis


Aminotransferases


Liver chemistry tests are widely used to assess hepatic function. Common markers of hepatocellular damage are the aminotransferases, aspartate aminotransferase (AST) and alanine aminotransferase (ALT). While AST is also found in other tissues, such as the heart, blood, and skeletal muscle, ALT is more specific for the liver. The aminotransferases are released by hepatocytes with cell injury or death. The reference range is approximately 10 to 40 U/L for AST and 15 to 40 U/L for ALT. The magnitude of the elevation of aminotransferases and the ratio of AST to ALT can help suggest the cause of liver disease. Mild elevation (<5 times the upper limit of normal) of the ALT or AST, with ALT > AST, is frequently found with chronic liver disease, including chronic viral hepatitis, fatty liver, and medications. Probably the most common cause of persistently elevated unexplained aminotransferases is fatty infiltration of the liver. Less common causes of mildly elevated aminotransferases with ALT > AST include autoimmune hepatitis, hemochromatosis, alpha-1 antitrypsin deficiency, Wilson’s disease, metastatic disease, and cholestatic liver disease. Mild aminotransferase elevations with AST > ALT are more suggestive of alcohol-related liver disease, but can also occur with cirrhosis and fatty liver. With alcoholic hepatitis, AST levels typically are approximately twice ALT levels, but the AST levels rarely are greater than 300 U/L. Marked elevations (greater than 15 times the upper limit of normal) of AST and ALT suggest significant necrosis, as seen in acute viral or drug-induced hepatitis, in ischemic hepatitis, or with acute biliary obstruction (Green and Flamm, 2002). However, the magnitude of elevation of the aminotransferases does not necessarily correlate with the severity of underlying liver disease or the prognosis. In fact, normal or minimally elevated aminotransferases may be seen in patients with end-stage liver disease. When AST is elevated without elevation of ALT, one should consider extrahepatic causes, particularly myocardial or skeletal muscle sources. When AST and ALT are elevated to approximately the same degree, a hepatic origin is most likely. Table 15-6 compares the differences in liver function tests between hepatocellular and obstructive disorders.


Table 15-6 Pattern of Liver Function Elevation

Test                     Hepatocellular Disorders    Obstructive Disorders
Bilirubin                +                           ++
Aminotransferases        +++                         +
Alkaline phosphatase     +                           ++
γ-Glutamyltransferase    +                           ++
Albumin                  Decreased                   Normal

Lactate dehydrogenase (LDH) is elevated in liver disease but is nonspecific; it is also found in skeletal muscle, cardiac muscle, and blood cells, and may be elevated in some pulmonary disorders. Measurement of LDH rarely adds useful information to the evaluation of liver disease. GGT is a microsomal enzyme that is inducible by alcohol and certain drugs, including warfarin and some anticonvulsants. Although not specific for alcohol abuse, GGT is the most sensitive liver enzyme for detecting alcohol abuse.



Amylase and Lipase


Pancreatic disease, particularly acute pancreatitis, is often associated with elevations in amylase and lipase. Table 15-7 lists common causes of elevated amylase and lipase. Lipase levels have greater sensitivity and specificity for pancreatic disease than amylase levels. Because there are many different assays for amylase and lipase, with different reference ranges, physicians should consult their laboratory’s reference range to determine their upper limits of normal. Amylase and lipase values increase 3 to 6 hours after the onset of acute pancreatitis, both peaking at approximately 24 hours. Amylase levels fall to normal in 3 to 5 days; lipase levels return to normal in 8 to 14 days. Because of exocrine insufficiency caused by recurrent pancreatitis, amylase levels tend to be lower when alcohol is the cause of pancreatitis, as opposed to gallstone or drug-induced pancreatitis. Pancreatitis is likely when the amylase is elevated to three times the upper limit of normal. When lipase levels are more than five times normal, pancreatitis is virtually always present. A normal amylase value, however, does not exclude pancreatitis, especially when induced by hypertriglyceridemia.


Table 15-7 Causes of Elevated Amylase and Lipase Levels (causes grouped as pancreatic and nonpancreatic diseases, with separate columns indicating amylase and lipase elevation)

Antinuclear Antibodies


Antinuclear antibodies (ANAs) are autoantibodies against components of the cell nucleus. Combined with clinical features, ANA testing can help diagnose certain collagen vascular disorders (Table 15-8). The likelihood that an ANA test will help with diagnosis depends on the pretest probability of disease. ANA tests are reported as negative (no staining) or positive at the highest dilution (titer) of the serum that still shows immunofluorescent nuclear staining. If positive, the staining pattern is also described. When the ANA test is positive, testing for specific nuclear antigens should be guided by the clinical findings.


Table 15-8 Conditions Associated with Positive Antinuclear Antibody (ANA) Test

ANA very useful for diagnosis
Systemic lupus erythematosus
Systemic sclerosis
ANA somewhat useful for diagnosis
Sjögren’s syndrome
Polymyositis-dermatomyositis
ANA very useful for monitoring or prognosis
Drug-associated lupus
Mixed connective tissue disease
Autoimmune hepatitis
ANA not useful or has no proven value for diagnosis, monitoring, or prognosis
Rheumatoid arthritis
Multiple sclerosis
Thyroid disease
Infectious diseases
Idiopathic thrombocytopenia purpura
Fibromyalgia

From Solomon DH, Kavanaugh AJ, Schur PH, et al. Evidence-based guidelines for the use of immunologic tests: antinuclear antibody testing. Arthritis Rheum 2002;47:434-444.


Although the ANA is 95% sensitive for systemic lupus erythematosus (SLE), it is not specific and is seen in other diseases. Higher titers are more specific for SLE but may be seen in the other autoimmune diseases. About 20% of normal people have an ANA titer of 1:40 or higher, and 5% have a titer of 1:160 or higher. Less than 5% of patients with definite SLE have a negative ANA titer. Because of the high prevalence of positive ANAs in normal people, physicians need to reserve the diagnosis of SLE for patients who have clinical findings compatible with SLE. ANA titers correlate poorly with relapses, remission, and severity of disease and are not helpful in monitoring the course or response to therapy. ANA testing should be ordered when a connective tissue disease is considered, but it is not generally helpful in the evaluation of nonspecific complaints, such as fatigue or back pain (Solomon et al., 2002).


For patients with a positive ANA titer, further testing for specific nuclear antibodies can be obtained, guided by the pattern of ANA staining and the clinical findings. The interpretation of testing for specific nuclear antigens can also be difficult; most of the “specific” antigens are not 100% specific for a particular disease and need to be interpreted in the clinical context. The anti-DNA test is highly specific for SLE, with about 95% specificity but only 50% to 60% sensitivity, and it can be used as a confirmatory test in patients with a positive ANA. Similarly, the anti-Sm (Smith) test is also highly specific for SLE, but with only 30% sensitivity. Anti-SSA/Ro and anti-SSB/La are often used to diagnose Sjögren’s syndrome but can also be found in SLE. Anti-Scl-70 is found in scleroderma but is not a requirement for diagnosis.



Bilirubin


Bilirubin is produced by catabolism of hemoglobin in extrahepatic tissues. Hepatocytes conjugate the bilirubin, and it is then excreted into bile. Blood bilirubin levels are a function of production rate and biliary excretion. Total bilirubin is a combination of lipid-soluble unconjugated bilirubin and water-soluble conjugated bilirubin. Total bilirubin is normally less than 1.5 mg/dL and consists primarily of unconjugated bilirubin. The initial step in the evaluation of an elevated bilirubin level is to distinguish conjugated (direct) from unconjugated (indirect) hyperbilirubinemia.


Probably the most common cause of unconjugated hyperbilirubinemia is Gilbert’s syndrome, a benign condition that affects up to 5% of the population. In Gilbert’s syndrome, only the unconjugated bilirubin is elevated; the rest of the liver enzymes are normal. Other causes of unconjugated hyperbilirubinemia include hemolysis, ineffective erythropoiesis (as in megaloblastic anemias), or a recent hematoma. With normal hepatic function, hemolysis is not associated with bilirubin levels greater than 5 mg/dL. In an asymptomatic person with mildly elevated unconjugated hyperbilirubinemia (<4 mg/dL), a presumptive diagnosis of Gilbert’s syndrome can be made if there are no medications that cause elevated bilirubin, there is no evidence of hemolysis, and the liver enzymes are normal (Green and Flamm, 2002). Conjugated hyperbilirubinemia generally occurs with defects of hepatic excretion, including extrahepatic obstruction, intrahepatic cholestasis, cirrhosis, hepatitis, and toxins. Bilirubinuria is a fairly sensitive marker for biliary obstruction and may occasionally be found before jaundice is evident.



Blood Urea Nitrogen and Creatinine


Blood urea nitrogen (BUN) is a byproduct of protein metabolism and is produced by the liver. The reference range for BUN level is 7 to 18 mg/dL. A rise in BUN can be seen with worsening renal function. However, an elevated BUN level is not specific for intrinsic renal disease and can be seen with prerenal causes of azotemia such as hypovolemia and congestive heart failure, postrenal causes of obstructive nephropathy, and gastrointestinal bleeding. At low flow rates, the renal tubules will increase reabsorption of urea, thereby elevating BUN proportionately more than creatinine. BUN can also be reduced in severe liver disease, malnutrition, the syndrome of inappropriate antidiuretic hormone secretion (SIADH), or occasionally the third trimester of pregnancy.


Creatinine is a product of muscle metabolism, and production is related to muscle mass, age, gender, race, and dietary meat intake. Creatinine is filtered by the glomerulus and secreted by the proximal tubule. Creatinine levels increase as renal function is reduced. At normal renal function, most of the urinary creatinine excretion is from glomerular filtration, with about 5% to 10% from tubular secretion. As the glomerular filtration rate (GFR) declines, a larger proportion of creatinine excretion comes from secretion; therefore, direct measurements of creatinine clearance overestimate GFR with progressive reductions in renal function. Some drugs, including cimetidine, trimethoprim, fenofibrate, salicylates, and pyrimethamine, can block the secretion of creatinine and falsely elevate creatinine levels, particularly in the setting of a low GFR. Although serum creatinine has long been used to estimate renal function, current guidelines from the National Kidney Foundation recommend using estimated GFR (eGFR) from serum creatinine to report kidney function. Many clinical laboratories now automatically report the eGFR using the Modification of Diet in Renal Disease (MDRD) equation. This equation uses serum creatinine, age, gender, and race to estimate the GFR, expressed in mL/min/1.73 m2. Limitations of the eGFR include lack of standardization of creatinine assays in different laboratories and underestimation of GFR in healthy persons. In addition, the equations were developed in persons with chronic kidney disease and may not accurately calculate GFR in elderly, nonwhite, or healthy persons (Stevens and Levey, 2005).
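As an illustration only, one commonly published form of the 4-variable MDRD equation is sketched below in Python; the 175 coefficient assumes an IDMS-standardized creatinine assay, and local laboratories may use a different version, so the coefficients here should be treated as an example rather than a reporting standard.

```python
def egfr_mdrd(creatinine_mg_dl: float, age_years: float, female: bool, black: bool) -> float:
    """4-variable MDRD estimate of GFR in mL/min/1.73 m^2, as commonly published;
    coefficients shown for illustration only."""
    egfr = 175.0 * creatinine_mg_dl ** -1.154 * age_years ** -0.203
    if female:
        egfr *= 0.742
    if black:
        egfr *= 1.212
    return egfr

# Hypothetical patient: creatinine 1.4 mg/dL, 65-year-old nonblack woman -> eGFR ~38
print(round(egfr_mdrd(creatinine_mg_dl=1.4, age_years=65, female=True, black=False)))
```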


The BUN/creatinine ratio can help differentiate prerenal and postrenal causes of renal insufficiency from intrinsic renal disease. Ratios of approximately 10:1 suggest intrinsic renal disease; ratios greater than 20:1 suggest prerenal or postrenal causes.
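A minimal sketch of that rule of thumb (hypothetical values; the cutoffs are the ones quoted above, not a validated decision rule):

```python
def bun_creatinine_ratio(bun_mg_dl: float, creatinine_mg_dl: float) -> str:
    """Rough screen described in the text: ~10:1 favors intrinsic renal disease,
    >20:1 favors prerenal or postrenal causes."""
    ratio = bun_mg_dl / creatinine_mg_dl
    if ratio > 20:
        interpretation = "suggests prerenal or postrenal azotemia"
    else:
        interpretation = "more consistent with intrinsic renal disease"
    return f"BUN/Cr {ratio:.0f}:1 {interpretation}"

print(bun_creatinine_ratio(bun_mg_dl=42, creatinine_mg_dl=1.5))   # hypothetical values
```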



Calcium


The total calcium level is a measurement of free (also called ionized) calcium, protein-bound calcium, and a chelated fraction. Approximately 50% of total calcium is ionized, 40% to 50% is bound to albumin, and 5% to 20% is bound to other ions. Only the free or ionized portion of calcium is physiologically active. Because of the binding of calcium with albumin, simultaneous measurements of calcium and albumin need to be performed to interpret calcium abnormalities. For every 1 g/dL that serum albumin is decreased below 4 g/dL, the estimated serum calcium is corrected by adding 0.8 mg/dL to the measured calcium level. An alternative is to measure ionized calcium levels in patients with abnormalities of serum albumin. The reference range for serum calcium is 8.5 to 10.5 mg/dL and for ionized calcium, 4.65 to 5.28 mg/dL. Serum calcium measurements are not precise enough to differentiate normal levels from mildly elevated calcium levels reliably; therefore a number of measurements are needed to confirm true mild hypercalcemia.
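A minimal sketch (hypothetical values) of the albumin correction described above:

```python
def corrected_calcium(total_calcium_mg_dl: float, albumin_g_dl: float) -> float:
    """Add 0.8 mg/dL to the measured calcium for every 1 g/dL the albumin is below 4 g/dL."""
    return total_calcium_mg_dl + 0.8 * max(0.0, 4.0 - albumin_g_dl)

# Example: measured calcium 7.8 mg/dL with albumin 2.5 g/dL -> corrected ~9.0 mg/dL
print(round(corrected_calcium(7.8, 2.5), 1))
```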


The etiology of hypercalcemia is either hyperparathyroidism or malignancy in more than 90% of hypercalcemic patients. In the ambulatory setting, most patients with hypercalcemia have hyperparathyroidism. Typically the hypercalcemia of hyperparathyroidism is modest, with calcium levels less than 11 mg/dL and minimal symptoms. Hospitalized patients are more likely to have malignancy as a cause of hypercalcemia. Calcium levels greater than 13 mg/dL are usually associated with malignancy. Intact parathyroid hormone (PTH, parathormone) levels can differentiate hyperparathyroidism from other causes of hypercalcemia. Nonhyperparathyroid causes of hypercalcemia will give low or “normal” intact PTH levels in a setting of hypercalcemia, whereas the PTH level will be increased in hyperparathyroidism. Occasionally, patients with a family history of hypercalcemia show a reduction in calcium excretion and have familial hypocalciuric hypercalcemia. Other causes of hypercalcemia are related to increased gastrointestinal (GI) absorption, increased bone resorption, and decreased renal excretion (Table 15-9).


Table 15-9 Causes of Calcium Abnormalities

Hypercalcemia
Hyperparathyroidism (primary and secondary)
Malignancies: breast, lung, prostate, renal, myeloma, T-cell leukemia, lymphoma
Drugs
Thiazide diuretics
Milk-alkali syndrome
Vitamin D intoxication
Granulomatous diseases
Sarcoidosis
Tuberculosis
Chronic renal failure
Immobilization
Hyperthyroidism
Hypocalcemia
Hypomagnesemia
Hypoparathyroidism
Malabsorption of calcium or vitamin D
Acute pancreatitis
Rhabdomyolysis
Hyperphosphatemia
Chronic renal failure
Transfusion of multiple units of citrated blood
Drugs
Loop diuretics
Phenytoin
Phenobarbital
Cisplatin
Gentamicin
Pentamidine
Ketoconazole
Calcitonin

Perhaps the most common cause of a low total calcium level is a low albumin level. When hypocalcemia is found, one should establish that the serum albumin is normal. If serum albumin is also reduced, one should perform the above correction to confirm true hypocalcemia. Another important cause of hypocalcemia is hypomagnesemia, which can lead to PTH resistance or reduced PTH secretion. Correction of the magnesium deficiency usually results in correction of the hypocalcemia. Other causes of hypocalcemia include chronic kidney disease, vitamin D deficiency, malabsorption, acute pancreatitis, transfusion with citrated blood, rhabdomyolysis, hypoparathyroidism, pseudohypoparathyroidism, and occasionally bisphosphonate therapy.



Carcinoembryonic Antigen


Carcinoembryonic antigen (CEA), an oncofetal glycoprotein antigen, has been mainly used in the evaluation of patients with adenocarcinomas of the GI tract, especially colorectal cancer. CEA may be elevated in benign as well as malignant diseases (Table 15-10). CEA is not recommended as a screening test for occult cancer (including colorectal) because of its low sensitivity and specificity, but it may be used as supportive evidence in a patient undergoing diagnostic evaluation because of signs and symptoms of colon cancer. Its main value is in monitoring for persistent, metastatic or recurrent colon cancer after surgery. A preoperative elevation should return to normal in 6 to 12 weeks (CEA half-life, 2 weeks), if all disease has been resected. The liver metabolizes CEA, and therefore hepatic diseases can result in delayed clearance. Treatment (surgery, radiation, chemotherapy) may produce transient artifactual elevations. CEA has a 97% sensitivity for detecting recurrence in the patient whose postoperative CEA value has returned to normal, and 66% sensitivity for recurrence in the patient with normal preoperative levels.


Table 15-10 Conditions Associated with Elevated Carcinoembryonic Antigen (CEA) Level

Disease Patients with Elevated CEA (%)
Carcinoma of entodermal origin (colon, stomach, pancreas, lung) 60-75
Colon cancer  
Overall 63
Dukes Stage A 20
Dukes Stage B 58
Dukes Stage C 68
Lung cancer  
Small cell carcinoma About 33
Non–small cell carcinoma About 67
Carcinoma of nonendodermal origin (e.g., head and neck, ovary, thyroid) 50
Breast cancer  
Metastatic disease ≥50
Localized disease About 25
Acute nonmalignant inflammatory disease, especially gastrointestinal tract (e.g., ulcerative colitis, regional enteritis, diverticulitis, peptic ulcers, chronic pancreatitis) Variable
Liver disease (alcoholic cirrhosis, chronic active hepatitis, obstructive jaundice) Variable
Renal failure, fibrocystic breast disease, hypothyroidism Variable
Healthy persons  
Nonsmokers 3
Smokers 19
Former smokers 7

The adult reference range for CEA is 2.5 ng/mL or less for nonsmokers and 5.0 ng/mL or less for smokers. The degree of CEA elevation correlates with tumor bulk at diagnosis and therefore with prognosis. Values less than 5 ng/mL before therapy suggest localized disease and favorable prognosis, whereas levels greater than 10 ng/mL suggest extensive disease and a worse prognosis. About 30% of patients with metastatic colon cancer have normal CEA levels. Benign diseases do not usually produce CEA levels greater than 5 to 10 ng/mL. For an individual patient, repeat testing or longitudinal monitoring should be conducted at the same laboratory with the same methods because of variability among assays. A 20% to 25% increase in plasma concentration is considered a significant change. A rising CEA level may detect recurrent disease 2 to 6 months before it is clinically apparent.
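Purely as an illustration of the "significant change" threshold quoted above, a small helper with hypothetical values (the 25% threshold is one choice within the 20% to 25% range cited):

```python
def cea_rise_is_significant(previous_ng_ml: float, current_ng_ml: float, threshold: float = 0.25) -> bool:
    """Flag a rise of more than ~20%-25% between serial CEA values measured by the same
    laboratory and method; threshold of 25% used here."""
    return (current_ng_ml - previous_ng_ml) / previous_ng_ml >= threshold

print(cea_rise_is_significant(4.0, 5.2))   # 30% rise -> True (hypothetical values)
```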




Coagulation Studies


The most common coagulation studies, prothrombin time (PT) and partial thromboplastin time (PTT), are used to evaluate patients with clotting disorders or to monitor patients taking heparin or oral anticoagulants. It is helpful for the laboratory to know whether a patient is taking an anticoagulant at the time of testing. Hospitalized patients with nonsurgical diagnoses, who do not have liver disease or a history of anticoagulant use, do not benefit from routine PT and PTT testing. These are poor screening tests for postoperative bleeding in patients without historical risk factors, physical findings, or a medication history that suggests an increased bleeding risk (Eckman et al., 2003). Preoperative PT and PTT should be reserved for patients with known or suspected coagulation disorders and those receiving anticoagulation therapy.


Prothrombin time, a simple and inexpensive test for evaluating the extrinsic coagulation pathway, is the time in seconds for citrated plasma to clot after the addition of calcium and thromboplastin. Test accuracy depends on proper collection and instrument technique. Common uses include monitoring anticoagulant therapy with warfarin, evaluating liver function (because the liver synthesizes most of the clotting factors), and screening for coagulation disorders of the extrinsic system. PT is prolonged by defects in factors I (fibrinogen), II (prothrombin), V, VII, and X. The normal range for PT is 11 to 13 seconds.


Previously, PT measurements exhibited variability across laboratories because of differences in thromboplastin sensitivity. To correct for the type of thromboplastin used, the World Health Organization (WHO) recommends using the international normalized ratio (INR) to report PT results for patients taking oral anticoagulants. Now widely accepted, the INR is calculated as follows:



INR = (patient PT / mean normal PT)^ISI



The ISI is the international sensitivity index of the thromboplastin used at the local laboratory. Provided by the test’s manufacturer, the ISI reflects the responsiveness of the thromboplastin used in the PT test. The reference range for the INR in the non-anticoagulated patient is 0.9 to 1.1. The PT is prolonged in persons with vitamin K deficiency, including those with fat malabsorption syndromes, recent broad-spectrum antibiotic use, and premature infants. In addition, the use of warfarin, many drugs and herbs, severe liver disease, alcoholism, deficiencies of clotting factors, and circulating anticoagulants can prolong the PT. The PT is not affected by platelet disorders or platelet count. The target INR varies with the specific indication. An INR goal of 2.5 (range, 2.0-3.0) is generally accepted for the treatment of venous thromboembolic disease and atrial fibrillation, and 3.0 (range, 2.5-3.5) for patients at risk for arterial thromboembolism, including those with mechanical heart valves.
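For illustration (the PT values and ISI below are hypothetical), the INR calculation in Python:

```python
def inr(patient_pt_seconds: float, mean_normal_pt_seconds: float, isi: float) -> float:
    """INR = (patient PT / mean normal PT) ** ISI, using the local thromboplastin's ISI."""
    return (patient_pt_seconds / mean_normal_pt_seconds) ** isi

# Example: PT 24 s, laboratory mean normal PT 12 s, ISI 1.2 -> INR ~2.3
print(round(inr(24.0, 12.0, 1.2), 1))
```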


The activated partial thromboplastin time (aPTT, or simply PTT) is a simple, inexpensive test for evaluating the intrinsic coagulation pathway, monitoring heparin therapy, screening for hemophilia A and B, and detecting clotting inhibitors. PTT is the time in seconds for citrated plasma to clot after a contact activator is added to plasma and incubated at 37° C for 5 minutes. Thromboplastin and calcium are added and the time to clot formation is recorded, which should be within 10 seconds of the control. PTT is abnormally prolonged in most patients with coagulation disorders (~90%) and is therefore the best screening test in persons suspected of having a clotting disorder. PTT screens for all coagulation factors that lead to thrombin formation except VII and XIII. These factors include factors I, II, V, VIII (antihemophiliac), IX (Christmas), X, and XII (Hageman). PTT is useful to evaluate patients with a known, suspected, or active bleeding disorder; consumptive coagulopathy (e.g., disseminated intravascular coagulation); disorder of fibrin clot formation; or fibrinogen deficiency. In addition, PTT is prolonged with deficiency of the Fletcher (prekallikrein) and Fitzgerald factors, warfarin or heparin therapy, lupus anticoagulant, and vitamin K deficiency. PTT is significantly shortened by hemolysis, is affected by high or low hematocrit, but is not affected by platelet dysfunction or count. A prolonged PT or PTT can be caused by either a factor inhibitor or a deficiency of a clotting factor. To differentiate the two, a mixing study can be performed. When the abnormality is corrected after mixing with normal blood, a factor deficiency is likely. Failure to correct after mixing suggests the presence of a factor inhibitor.


When monitoring heparin therapy, the most widely used target for anticoagulation is a PTT 1.5 to 2.5 times the upper limit of normal. Now, however, because of the great variation in thromboplastins used in different PTT assays, PTT results vary widely among laboratories. Therapeutic heparin levels, as measured by anti–factor Xa activity, are approximately 0.3 to 0.7 IU/mL. With plasma heparin concentrations of 0.3 IU/mL, investigators have found that mean PTT values ranged from 48 to 108 seconds, depending on the laboratory methods used. The American College of Chest Physicians recommends against the use of a fixed PTT therapeutic range for the treatment of venous thrombosis; instead, it recommends that each laboratory determine the PTT range that corresponds to a therapeutic heparin level of 0.3 to 0.7 IU/mL by anti–factor Xa assay. Anti–factor Xa levels may also be used to monitor anticoagulation in patients with obesity or renal failure, because these groups are more likely to be over-anticoagulated with weight-based heparin dosing (Hirsh et al., 2008).
