21 Validation of Finite Element Models
While empiricism is predominant in the natural sciences, mathematical modeling is traditionally limited to inductive models that extrapolate from repeated experimental observations. The extreme specialization of research has slowly separated mathematical modeling skills from experimental skills in most research groups, and it is not rare to find groups where only one of these skills is truly developed. This is a pity: The complexity involved in understanding the biomechanical behavior of the musculoskeletal system is overwhelming, and to advance comprehension one should be ready to use every technique available.
The loading scenarios applied in vitro generally follow two different philosophies that reflect the complexity of the human musculoskeletal system:
Individual load components are applied to the bone, with no direct connection to any specific in vivo loading scenario, for several reasons. First of all, many bone segments undergo in vivo a number of quite different loading conditions during a variety of motor tasks. 1 Rather than replicating a large number of loading conditions, in some cases it is preferable to apply the main load components to the bone separately. Furthermore, often no details are available about the magnitude and direction of the loads applied in vivo to the bones. When information is scarce or inaccurate, it is preferable to bypass the problem by focusing on a simplified (and better controlled) loading scenario.
When adequate knowledge is available and it is necessary to include the complexity of in vivo loading in the in vitro simulation, experimental studies aim at replicating the load components applied during selected motor tasks. 2 In order to represent the physiological range of loading configurations, different motor tasks need to be simulated. 3, 4 In general, such in vitro simulations involve a more complex loading system, often including the action of relevant muscle groups. 5, 6
In the past decades, the mechanical behavior of bone structures has been intensively investigated with mathematical models. The most commonly used numerical models in biomechanics are finite element (FE) models. An FE model is a numerical model that enables calculation of selected physical quantities (e.g., stress, strain, risk of fracture) based on discretization of the structure into elements of simple geometry (see Chapter 20). FE models have also been extensively used for the determination of the mechanical stresses that physiological activities, pathological conditions, or surgical treatments induce in bones. FE models can also be used to investigate the mechanobiological phenomena underlying bone adaptation. 7 Compared with most experimental techniques, FE models offer the advantage of estimating the stress/strain distribution over the whole structure rather than in a few selected points/regions, and they enable a time-effective exploration of the effect of relevant study parameters. Subject-specific modeling procedures enable the creation of an FE model of a bone segment from computed tomography (CT) images. This makes it possible to estimate mechanical quantities in bones that cannot be measured in vivo without invasive or unethical procedures.
Both numerical models and in vitro experiments are models of the physical event under investigation. Therefore, neither their relevance nor their reliability can be taken for granted. A synergistic use of numerical models and in vitro experiments can simultaneously provide corroboration for both types of models and a deeper insight into the physical event being investigated. There are a few studies where numerical modeling and controlled in vitro experiments are combined in a single study, mostly using the experiments to corroborate/falsify the numerical models (this process is usually called validation). Validation is a crucial aspect, as it is the only procedure that enables quantifying the reliability of a model for a clinical application. Unfortunately, the combination of numerical and experimental approaches is most often restricted to validation purposes, with no contribution in the opposite direction (from FE models to in vitro experiments). However, the great potential of experimental-numerical integration is the cross-fertilization between the two approaches.
21.1 Weak Points and Needs of Finite Element Models
21.1.1 Limitations of Numerical Models
One should never forget that a model cannot account for something that is totally unknown and unexpected. Mathematical models are fabrications of the human mind and can only know what is already to some extent known. Therefore, numerical models can only be used to investigate known (or at least suspected) scenarios. Besides this primary limitation, there are others related to the specific nature of numerical models. The biggest limitation comes from the process at the root of each model: idealization. We observe the physical reality, and from this observation we develop an idealized representation of the phenomenon of interest, which we describe in mathematical terms. Such an idealization can either be achieved by neglecting certain aspects (Aristotelian idealization) or by assuming true something we know to be false (Galilean idealization; e.g., a massless object). In both cases, this process is associated with some limitations of the validity of the model, which cannot be overcome. For instance, a model where contact is assumed to be frictionless will never be able to elucidate anything useful about frictional abrasion.
A second limitation of numerical models derives from the numerical tools used. Any numerical solution is approximated: Such an approximation defines a “resolution” for our model. Details and events that are finer than this resolution cannot be investigated with that model. Some numerical approximations, such as those caused by finite-precision computing, are usually negligible. However, in the FE method, there are other problems, such as the discretization of the integration domain, that might induce critical errors (see Chapter 20). For instance, an FE estimate of the contact stresses at the tip of a sharp object (e.g., the thread of a bone screw) may not be reliable within a distance of two or three elements.
The third problem with the FE method is the so-called identification of the model. This consists of determining the values to be assigned to the model parameters (e.g., Young's modulus, friction coefficient). In general, these values derive from experimental measurements or estimates and are available with limited precision. Such uncertainty in the model parameters propagates to the model predictions. A typical problem is the identification of the boundary conditions in validation experiments.
21.1.2 Requirements of Finite Element Models: Definition of the Modeling Scope
As said previously, a model of a given bone segment or anatomical region is not universal and is not capable of addressing every possible biomechanical question. Therefore, the scope of the model should be defined as clearly and unambiguously as possible. Models can be used simply to represent a phenomenon (e.g., for teaching purposes, for memorization). The modeling scope must be defined in great detail and should include which portion of reality we want to capture, which biomechanical quantities we need to estimate, and under which conditions.
21.1.3 Requirements of Finite Element Models: Idealization and Deployment
A model consists of an idealization of a portion of reality, obtained by observing how mechanical/biological quantities are organized in space and time, and how they interact with each other. In scientific modeling, this cognitive artifact should be expressed in logical terms: Models can be divided into inductive models (e.g., regression models, data models), deductive models (e.g., models based on the laws of physics), or abductive models (e.g., Bayesian models).
In the deployment phase, the model is converted into a tool that can be practically used to address the modeling scope. Typically, the idealization is captured in mathematical form, which is then solved either analytically or numerically. Due to the complexity of the models involved in biomedicine, most models are solved numerically (i.e., in an approximated form).
21.1.4 Requirements of Finite Element Models: Verification
When a mathematical model is solved numerically, it is important to quantify the accuracy of such an approximate solution. For linear models, it is generally possible to estimate the errors associated with the numerical solution. Post hoc indicators such as the stress error indicator or convergence tests (Fig. 21.1) on parameters, such as potential energy of the entire bone, displacements, and strains at the points of interest, can estimate the error due to the spatial discretization of the domain, which is one of the most delicate aspects of FE modeling (see Chapter 20).
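A convergence test of the kind described above can be sketched as follows. This is a minimal illustration with hypothetical numbers: in a real study, each value would come from solving the same bone model on a progressively refined mesh, and the quantity of interest (here, peak principal strain at a point of interest) and the 1% tolerance are assumptions for the example.

```python
# Illustrative mesh convergence check. The strain values are hypothetical;
# a real study would obtain them from successive FE solutions of the same
# model on increasingly refined meshes.

def has_converged(values, tol=0.01):
    """Return True when the last refinement changed the quantity of
    interest by less than `tol` (relative change)."""
    if len(values) < 2:
        return False
    prev, last = values[-2], values[-1]
    return abs(last - prev) / abs(prev) < tol

# Peak principal strain (microstrain) at the point of interest for
# meshes of increasing density (hypothetical numbers):
peak_strain = [2150.0, 2310.0, 2365.0, 2381.0]

print(has_converged(peak_strain, tol=0.01))  # last change: 16/2365 < 1%
```

The same check could equally be run on the total potential energy or on displacements at the points of interest, as mentioned above.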
21.1.5 Requirements of Finite Element Models: Sensitivity Analysis
FE models estimate the state of a system (e.g., the stress/strain distribution) based on a set of initial values. It is important to verify how uncertainty in these input parameters affects the estimates provided by the model. First of all, because the initial values used for the identification of the model are always associated with an error, we need to ensure that this uncertainty does not excessively affect the conclusions we aim to draw from the model. Secondly, if we notice that the model is hugely sensitive to small variations of some initial values, this can suggest that the idealization or its mathematical or numerical deployment is critical. Assuming one has a reliable estimate of the uncertainty associated with each parameter to be used in the model, it is recommended to run a sensitivity analysis to estimate how these uncertainties propagate through the model and affect the model outcome. This can be done with a simpler exploratory analysis, such as the design of experiments and related simplified Taguchi strategies, or using a more complex Monte Carlo–based statistical FE modeling approach. Sensitivity analysis is the best way to discover truly unforeseeable errors in the model.
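The Monte Carlo approach mentioned above can be sketched in a few lines. In this toy example an analytical cantilever-beam deflection formula stands in for the FE model (in practice, every sample would require a full FE solution), and all numbers (a Young's modulus of 17 GPa with a 10% coefficient of variation, the load, length, and section properties) are hypothetical assumptions chosen only to illustrate how input uncertainty propagates to the output.

```python
# Minimal Monte Carlo sensitivity sketch. A cantilever-beam deflection
# formula stands in for the FE model; all parameter values are hypothetical.
import random
import statistics

def deflection(E, F=100.0, L=0.1, I=1e-9):
    """Tip deflection of an end-loaded cantilever: d = F*L^3 / (3*E*I)."""
    return F * L**3 / (3 * E * I)

random.seed(0)  # reproducible sampling
E_mean = 17e9   # assumed mean Young's modulus (Pa)
E_cv = 0.10     # assumed 10% coefficient of variation on E

# Sample the uncertain parameter and propagate it through the model:
samples = [deflection(random.gauss(E_mean, E_cv * E_mean))
           for _ in range(5000)]

mean_d = statistics.mean(samples)
cv_out = statistics.stdev(samples) / mean_d
print(f"output coefficient of variation: {cv_out:.2f}")
```

Because deflection here varies as 1/E, a 10% input uncertainty yields an output uncertainty of roughly the same order; a model whose output scattered far more than its inputs would warrant scrutiny of the idealization, as noted above.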
21.1.6 Why Do Finite Element Models Need Validation?
The predictive accuracy of a model can be measured by comparing its outcome against matching quantities measured in a controlled physical experiment. Every model is reasonably accurate only within certain limits, and the modeling scope must be compatible with such limits (i.e., no material behaves linear-elastically indefinitely). It must be noted that validation in an absolute sense (determining that a computational model represents the actual physical event under every possible condition) is not possible; a model can only be shown to be sufficiently accurate with respect to its defined scope.
There is no single approach to validation that applies to all problems. When it is not obvious which mathematical model is best suited for the scope, a strong inference approach is advisable. Strong inference consists of having two or more candidate models compete with respect to the results of one or more controlled experiments. One should always remember Ockham's razor: If two models show similar predictive accuracy, the one with fewer assumptions should be chosen.
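A strong inference comparison of this kind reduces, in its simplest form, to scoring each candidate model against the same experimental measurements. The sketch below uses root-mean-square error as the accuracy metric; the measured strains and the two candidate predictions are entirely hypothetical numbers invented for illustration.

```python
# Sketch of strong inference: two candidate models compete against the
# same controlled experiment. All values are hypothetical (microstrain).
import math

def rmse(predicted, measured):
    """Root-mean-square error between model predictions and measurements."""
    return math.sqrt(sum((p - m) ** 2 for p, m in zip(predicted, measured))
                     / len(measured))

measured  = [510.0, 820.0, 1240.0, 1660.0]  # experimental readouts
model_lin = [495.0, 835.0, 1220.0, 1690.0]  # linear-elastic candidate
model_nl  = [500.0, 828.0, 1231.0, 1672.0]  # nonlinear candidate (more assumptions)

err_lin = rmse(model_lin, measured)  # ≈ 20.9
err_nl = rmse(model_nl, measured)    # ≈ 9.9
```

If the two errors turned out to be comparable (unlike in these invented numbers), Ockham's razor would favor the candidate with fewer assumptions; a lower error must also be weighed against the experimental uncertainty before declaring a winner.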
21.2 Limitations of in Vitro Experiments
While in vitro experiments can help in addressing some problems of FE models, one should also be aware of the limitations of experimental measurements. First of all, in vitro experiments are time-consuming and require costly strain/displacement/force transducers, including dedicated data loggers. Moreover, experimental measurements are affected by both random and systematic error (Fig. 21.2).
21.2.1 Error Affecting Experimental Measurement: Bias
Systematic error can be induced by a number of factors including:
Defective preparation/use of the transducers: This can result in largely biased readouts.
Perturbation induced by the measurement system (e.g., when a strain sensor is bonded to a bone, it reinforces the surface and contributes to load bearing, so the actual strain distribution is systematically underestimated).
Ambiguous or ill-defined anatomical reference frames: Such reference frames often rely upon subjective identification of bone landmarks, and different operators will achieve different alignments of the specimens.
Poor information about in vivo loads.
Ill-designed loading setup: In some cases, the loading system results in overconstrained conditions, where additional load components (other than the intended one[s]) are generated within the loading system.
21.2.2 Error Affecting Experimental Measurement: Noise
The second type of error is different in nature. Random error (noise) can be induced by:
Measurement noise: All measurement systems, including mechanical ones, are affected by “noise,” including mechanical vibration, electromagnetic interference, etc.
Uncertainty in the pose of the test specimens: Holding and applying loads to bone segments can be difficult because of the irregular geometry. This results in variability between test repetitions or between specimens.
Uncertainty in the positioning and alignment of the transducers: If a transducer is randomly misplaced or misaligned, the readout will suffer from an unpredictable error.
Scarce repeatability of the applied loads: In most cases, material testing machines or dedicated simulators are used. In all such cases, actuators, loading fixtures, and control systems are used that unavoidably introduce some random error.
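The practical difference between the two error types listed in this section can be made concrete with a small simulation: averaging repeated measurements reduces random noise, but leaves systematic bias untouched. The true value, the bias (standing in for, e.g., the reinforcement effect of a bonded strain gauge), and the noise level below are all hypothetical.

```python
# Sketch contrasting systematic error (bias) and random error (noise).
# All numbers are hypothetical.
import random
import statistics

random.seed(1)        # reproducible noise
true_value = 1000.0   # "true" strain at the measurement point (microstrain)
bias = -50.0          # systematic underestimation (e.g., gauge reinforcement)
noise_sd = 20.0       # standard deviation of the random measurement noise

# Repeated measurements: each reading carries the same bias plus fresh noise.
readings = [true_value + bias + random.gauss(0.0, noise_sd)
            for _ in range(200)]

mean_reading = statistics.mean(readings)
# Averaging drives the result toward true_value + bias (about 950),
# not toward true_value: only the random component averages out.
print(round(mean_reading))
```

This is why the two error types call for different countermeasures: repetitions and averaging against noise, and careful calibration, setup design, and validation against bias.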