
53 Nonparametric versus Parametric Tests

Nicholas Clement

The terms "nonparametric" and "parametric" are broad classifications of the statistical procedures used to analyze data and determine the significance of differences between groups.1 A basic knowledge of statistical concepts is needed to understand these two terms. These fundamentals include random variables, probability distributions, parameters, populations, samples, sampling distributions, and the central limit theorem. They are covered in this chapter at an introductory level to provide a basic understanding of statistics.


The field of statistics exists because it is generally impossible to gather data from all individuals of interest (i.e., the whole population at risk). Hence, the only alternative is to study a smaller subset (sample) of the population at risk, with the ultimate aim of knowing the outcome for the whole population.1 "Parameters" describe the population; they include quantities such as means, standard deviations, and proportions. Logistically, it is normally not possible to obtain data for the whole population, and hence we cannot determine the parameters of the population. Using a subset (sample) of the population, we can calculate estimates of the parameters for the whole population. When parameters are calculated from the sample data, they are called "statistics"; hence, a statistic estimates a parameter.


There is no exact definition of what constitutes parametric or nonparametric data. However, for practical purposes, parametric statistical procedures rely on assumptions about the shape of the distribution in the underlying population (assumed to be normal) and about the parameters (means and standard deviations) of that assumed distribution.1 Nonparametric statistical procedures rely on few or no assumptions about the shape or parameters of the population distribution from which the sample was drawn. Hence, before either a parametric or a nonparametric test is used to analyze the sample data, descriptive statistics must be performed. This chapter covers how to perform statistical analysis of quantitative and qualitative data, including data cleaning, descriptive statistics, and parametric and nonparametric statistical tests.



53.1 Data Types


To enable descriptive statistics to be performed, a knowledge of the data type is essential. There are two data types: quantitative and qualitative data (Fig. 53.1).2

Fig. 53.1 A flow diagram demonstrating the different types of data.


53.1.1 Cleaning the Data


Data cleaning is an important step that must always be performed; do not skip it in an attempt to analyze the data quickly, because doing so will lead to erroneous conclusions. The process entails assessing the sample data to identify "out of range" values that need to be amended before statistical analysis takes place.3 Qualitative variables are best recoded into numerical codes (e.g., male = 1 and female = 2). Relying on a string/text variable may result in three groups for gender (e.g., male, MALE, and female) because capital letters were used for some entries and not for others. It is also important to record each variable in a separate column (e.g., a blood pressure measurement of 140/80 should be recorded as separate variables in different columns); this allows the systolic and diastolic pressures to be analyzed independently. Frequency tables are useful for identifying out-of-range figures and will highlight values that are simply not possible (e.g., an age of 205 years, which probably has a missing decimal point and is more likely to be 20.5 years).
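As a minimal illustration of these steps (assuming the raw data have been exported to a pandas DataFrame; the column names sex, age, and bp are hypothetical), the recoding, splitting, and range checks might look like this:

```python
import pandas as pd

# Hypothetical raw export: note the inconsistent text coding and the impossible age.
df = pd.DataFrame({
    "sex": ["male", "MALE", "female", "Female", "male"],
    "age": [67, 72, 20.5, 205, 59],          # 205 probably has a missing decimal point
    "bp":  ["140/80", "130/85", "120/70", "150/90", "135/80"],
})

# Recode the string variable into a single numerical code (1 = male, 2 = female).
df["sex_code"] = df["sex"].str.strip().str.lower().map({"male": 1, "female": 2})

# Split blood pressure into separate systolic and diastolic columns.
df[["systolic", "diastolic"]] = df["bp"].str.split("/", expand=True).astype(int)

# Frequency table and a simple range check to flag impossible values.
print(df["sex_code"].value_counts())
print(df.loc[~df["age"].between(0, 110), ["sex", "age"]])   # flags the age of 205
```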



53.1.2 Descriptive Statistics


Basic descriptive statistical analysis enables the sample to be described and its distribution compared with the normal distribution.2 Quantitative and qualitative variables are described differently. Quantitative variables are described according to their central tendency, normality, spread, and confidence intervals. Qualitative variables do not have these properties and are generally described as frequencies or percentages.



53.1.3 Quantitative Variables



Central Tendency

There are three measures of central tendency: the mean, median, and mode (Fig. 53.2).2 The sample mean is the average of the data collected (the sum of all values divided by the number of observations). An alternative measure is the sample median, which is the ranked value that lies in the middle of the dataset (e.g., ages 24, 28, 34, 45, and 58 give a median age of 34); it is the value that divides the distribution of scores into two equal halves. The mode is the most frequently occurring value in the sample data, normally quoted with the percentage of the sample that takes that value. The mean and the median are the most commonly used measures of central tendency in medical research, and the distribution of the data dictates which should be used.4 If the data is normally distributed, all three measures (mean, median, and mode) should be equal, and the mean is generally quoted. However, if the data is skewed, which may be to the right or the left, the median is the more appropriate measure (Fig. 53.2).

Fig. 53.2 Two quantitative data distributions (a normal and a left-skewed population).


Normality

It is essential to check the normality of the sample to allow the appropriate descriptive statistics and the correct statistical tests to be performed. There are three ways of doing this: graphs, descriptive statistics (skewness and kurtosis), or formal statistical tests.


A histogram is the simplest way to observe whether the sample data is normally distributed or skewed (Fig. 53.3).3 Alternatively, Q-Q plots can be used to help judge the normality of the data. This plot compares the quantiles (Q) of the data distribution with the quantiles (Q) of a standardized theoretical distribution (i.e., the normal distribution). If the distributions match, the points form a straight line (Fig. 53.3a), signifying a normally distributed population. In contrast, if the points form a curve (Fig. 53.3b), this indicates a different pattern of distribution, representative of a skewed population.3 Deviations at the ends of the plot indicate outliers within the population.

Fig. 53.3 Histogram and Q-Q plots for a normally distributed population (a) and a skewed population (b).

The skewness value indicates whether the data is skewed to the right (> 0), normal (~0), or skewed to the left (< 0).2 Skewness ranges from −3 to 3, with an acceptable range for normally distributed data lying between −1 and 1. Skewness should not be used in isolation; kurtosis measures the relative peakedness of the bell-shaped curve (Fig. 53.4). A value between −1 and 1 indicates a normal distribution of the population (Fig. 53.3).

Fig. 53.4 Three normally distributed populations with differing kurtosis, where the solid line has a kurtosis > 0, the dashed line a kurtosis ~0, and the dotted line a kurtosis < 0. The standard deviation also varies: the solid line has a standard deviation of 1, whereas the dotted line has a standard deviation of 2.

The Kolmogorov–Smirnov and Shapiro–Wilk tests are formal statistical tests of normality; a significant result indicates that the data is skewed.3 Reliance upon these tests depends upon the sample size: the likelihood of obtaining a significant (skewed) result from a small sample (n < 30) is low, whereas for larger samples (n > 100) a minor deviation from normal may produce a significant result even though the distribution is relatively normal. So how do we know what is normal and what is not? A general rule is that for small samples (n < 30), a skewed distribution should be assumed. For moderate samples (n = 30 to 100), a statistically significant (skewed) result is accepted, but a nonsignificant result should be double-checked using graphs, skewness, and kurtosis to affirm normality. In contrast, a nonsignificant result (normal distribution) for a large sample (n > 100) is accepted, but a significant result should be checked using graphs, skewness, and kurtosis to affirm a skewed population.2
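These checks are straightforward to reproduce in any statistical package; a minimal Python/scipy sketch on a hypothetical sample of outcome scores is shown below. Note that scipy reports excess kurtosis, so values near 0 correspond to the normal range described above.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
scores = rng.normal(loc=40, scale=8, size=80)   # hypothetical, roughly normal sample

print("skewness:", stats.skew(scores))          # ~0 for normal data; accept -1 to 1
print("kurtosis:", stats.kurtosis(scores))      # excess kurtosis; ~0 for normal data

# Formal tests: a significant p value (< 0.05) suggests departure from normality.
w, p_sw = stats.shapiro(scores)
print("Shapiro-Wilk p =", p_sw)

# Kolmogorov-Smirnov against a normal distribution with the sample's own mean/SD.
d, p_ks = stats.kstest(scores, "norm", args=(scores.mean(), scores.std(ddof=1)))
print("Kolmogorov-Smirnov p =", p_ks)
```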



Measures of Spread

An important descriptor to present with quantitative data is the variability or spread of the data points.4 A simple way of presenting this is the range of scores, that is, the minimum and maximum values within the dataset. However, this gives no indication of how the data is distributed between these points (Fig. 53.4). The standard deviation is used to demonstrate how the data is distributed around the mean. It is calculated as the square root of the variance, which is itself a measure of spread (the corrected sum of squares about the mean). One standard deviation includes 68%, two standard deviations include 95%, and three standard deviations include 99% of the sample population.2 The smaller the standard deviation, the more the data is centered around the mean, whereas the greater the standard deviation, the greater the spread of the data around the mean (Fig. 53.4). When quoting a mean, it is conventional to accompany it with the standard deviation, often in parentheses after the mean. A small standard deviation is not always a good thing; for example, if the standard deviation of age in the sample population was small, this would suggest that the intervention may only be applicable to a specific age group. Alternatively, a small standard deviation in an outcome score demonstrates consistency of the intervention.


For data with a skewed distribution, the interquartile range is used instead of the standard deviation. This range excludes values at the extremes of the sample, giving an indication of how the data is centered around the median. Using 3, 4, 4, 6, 8, 8, 10, 10, 11, 12, 31 as a sample, the median is 8, the lower quartile 4, and the upper quartile 11. The interquartile range is the difference between the upper and lower quartiles (in this example, 11 − 4 = 7).
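Using the small sample quoted above, a short sketch (Python is assumed here purely for illustration) reproduces the same summary measures; the quartiles are taken as the medians of the lower and upper halves of the ranked data, matching the worked example, although software packages may use slightly different quantile conventions.

```python
import numpy as np

data = np.array([3, 4, 4, 6, 8, 8, 10, 10, 11, 12, 31])   # already ranked

mean = data.mean()                       # pulled upward by the outlier (31)
median = np.median(data)                 # 8
sd = data.std(ddof=1)                    # sample standard deviation

# Quartiles as medians of the lower and upper halves (excluding the middle value).
lower_half, upper_half = data[: len(data) // 2], data[len(data) // 2 + 1 :]
q1, q3 = np.median(lower_half), np.median(upper_half)     # 4 and 11
iqr = q3 - q1                                              # 7

print(f"mean = {mean:.1f}, median = {median:.0f}, SD = {sd:.1f}, "
      f"IQR = {iqr:.0f} (quartiles {q1:.0f} and {q3:.0f})")
```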



Confidence Intervals

Because a sample is used, the mean and standard deviation would vary if a study was repeated multiple times. Using the theory of the "sampling distribution of the mean," the mean of all the means obtained from repeated samples would give a more precise estimate for the population. However, due to finite budgets and ethical constraints, repeated studies are not feasible. Using the central limit theorem with a large enough sample, an interval estimate can be calculated between which we are confident the true mean lies.2 This is normally quoted as the 95% confidence interval, whereby we can be 95% sure that the true mean lies between the two values. The confidence limits are calculated using the standard error of the mean, which is the standard deviation divided by the square root of the sample size. The standard error of the mean is multiplied by 1.96 to give the 95% confidence interval (i.e., sample mean ± [standard error of the mean × 1.96]). The confidence interval indicates the quality of the result; for example, if the difference in an outcome score was 6 with a 95% confidence interval of ± 5 (2.55 × 1.96), this may be due to a small sample size or a wide variation in the difference between the groups. Confidence intervals are generally used when illustrating a difference after an intervention, for example, the change in an outcome score before and after total knee replacement.
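A minimal sketch of this calculation, using a hypothetical sample of outcome-score differences:

```python
import numpy as np

rng = np.random.default_rng(7)
diff = rng.normal(loc=6, scale=9, size=50)     # hypothetical outcome-score differences

mean = diff.mean()
sem = diff.std(ddof=1) / np.sqrt(len(diff))    # standard error of the mean
lower, upper = mean - 1.96 * sem, mean + 1.96 * sem

print(f"mean difference = {mean:.1f}, 95% CI {lower:.1f} to {upper:.1f}")
```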



53.1.4 Qualitative Variables


Descriptive statistics for categorical variables are limited; frequency tables are usually adequate.2 Ordinal variables (e.g., very satisfied, satisfied, neutral, unsatisfied, and very unsatisfied) may benefit from visual presentation using a bar or pie chart.



53.2 Data Interpretation


The null hypothesis is the key concept in statistical analysis: the assumption is made that any observed difference is a chance occurrence. The collected data is then used to try to disprove the null hypothesis, and if a statistically significant result is obtained, the null hypothesis is rejected; the observed difference is then taken to be real rather than having occurred by chance. Most researchers are willing to accept a 5% probability that the difference occurred by chance; hence the commonly quoted p value of 0.05 equates to that 5% probability that the result is due to chance.4


Errors can arise when accepting or rejecting the null hypothesis.5 A type I error occurs when a significant difference is found but in reality no difference exists; the null hypothesis is wrongly rejected (i.e., one of the 5% of results that occur by chance). A type II error occurs when no significant difference is identified but in reality a difference does exist. This may be due to a small sample size, hence the importance of a power calculation before undertaking a study (see Chapter 52). These two errors are inversely related: decreasing the accepted p value reduces the type I error but increases the type II error, and vice versa.5



53.2.1 Statistical Tests


There are multiple statistical tests available, but the specific test used depends upon the type of analysis being performed and whether the data demonstrates parametric or nonparametric characteristics.3 Table 53.1 illustrates which statistical test is most appropriate according to these criteria.



Table 53.1 Statistical Tests That Are Appropriate for a Given Sample Distribution or Data Type According to the Required Analysis

| Type of analysis | Parametric: normal distribution | Nonparametric: skewed distribution | Nonparametric: nominal data |
|---|---|---|---|
| Two groups | Unpaired t-test | Mann–Whitney U test | Chi-square test or Fisher's exact test (n < 5) |
| Intervention in the same group | Paired t-test | Wilcoxon signed rank test | McNemar's test |
| More than two groups | One-way analysis of variance | Kruskal–Wallis test | Chi-square test |
| Association between two variables | Pearson correlation | Spearman correlation | Contingency coefficients |


The first part of the following section deals with which statistical test should be used according to the data analysis required and data parameters; both quantitative and qualitative data will be discussed.



Quantitative Data

Parametric tests will be applied when assumptions of normality are satisfied; if this is not the case, the equivalent nonparametric test will then be discussed.




  • Unpaired data: For example, comparison of two different groups after an intervention.


An example of this would be to compare the improvement in the Oxford knee score after a total knee replacement between males and females. A power analysis should be performed before such a comparative study is undertaken (see Chapter 52). There are three assumptions5 that must be fulfilled before an unpaired t-test can be performed:




  1. Normal distribution for both populations



  2. Both groups are random samples



  3. Homogeneity of variance (population variances are the same)


The first two assumptions can be checked as described previously for normality and by the study design (see Chapter 51). The third assumption is normally given as part of the output of the statistical package used: Levene's test for equality of variances checks that the population variances are the same.3 Table 53.2 demonstrates a typical output from a statistical package for an unpaired t-test, in this case the Statistical Package for the Social Sciences version 17.0 (SPSS Inc., Chicago, IL). The nonsignificant Levene's test (p = 0.172, third column) means that the values on the first row ("Equal variances assumed") can be used. These show a 1.4-point difference, with a 95% confidence interval of 0.20 to 2.68 and a p value of 0.023. However, if the p value for Levene's test was significant (< 0.05), the second row ("Equal variances not assumed") would be used because the variances of the groups are significantly different; SPSS adjusts for this difference and returns different 95% confidence intervals and p values.
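The same comparison can be reproduced outside SPSS; the following is a minimal Python/scipy sketch in which the two arrays of improvement scores are hypothetical:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
improvement_m = rng.normal(16.5, 10, size=420)   # hypothetical male improvements
improvement_f = rng.normal(15.1, 10, size=545)   # hypothetical female improvements

# Levene's test for equality of variances decides which t-test output to use.
lev_stat, lev_p = stats.levene(improvement_m, improvement_f)

# equal_var=True corresponds to "equal variances assumed" (Student's t-test);
# equal_var=False is the Welch correction ("equal variances not assumed").
t, p = stats.ttest_ind(improvement_m, improvement_f, equal_var=lev_p >= 0.05)
print(f"Levene p = {lev_p:.3f}, t-test p = {p:.3f}, "
      f"mean difference = {improvement_m.mean() - improvement_f.mean():.2f}")
```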



Table 53.2 Unpaired t-test for the Improvement in Oxford Knee Score before and after a Total Knee Replacement According to Gender (independent samples test)

| | Levene's test: F | Levene's test: significance | t-test: significance (two-tailed) | Mean difference | Standard error difference | 95% CI of the difference: lower | 95% CI of the difference: upper |
|---|---|---|---|---|---|---|---|
| Equal variances assumed | 1.865 | 0.172 | 0.023 | 1.44127 | 0.63368 | 0.19772 | 2.68482 |
| Equal variances not assumed | | | 0.024 | 1.44127 | 0.63811 | 0.18887 | 2.69368 |


When the normality assumptions are not satisfied, a Mann–Whitney U test is performed instead. The statistical output from this test differs, however, returning only a z score and a p value.3
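A minimal scipy sketch of the nonparametric alternative, again with hypothetical (skewed) scores:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
group_a = rng.exponential(scale=10, size=60)   # hypothetical skewed scores
group_b = rng.exponential(scale=14, size=60)

u, p = stats.mannwhitneyu(group_a, group_b, alternative="two-sided")
print(f"Mann-Whitney U = {u:.0f}, p = {p:.3f}")
```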




  • Paired data: For example, before and after an intervention within the same sample population.


An example of this would be the improvement in the Oxford knee score before and after a total knee replacement. Simple descriptive statistics should be performed first, such as the mean or median for the pre- and postintervention scores with their respective standard deviation or interquartile range. The difference between the two sets of scores should then be computed as a new variable and checked for normality.5 Table 53.3 demonstrates a typical output from a statistical package for a paired t-test (SPSS Inc.).3 This illustrates that there was a 15.7-point increase in the Oxford knee score after a total knee replacement, and we can be 95% sure that the true value lies between 15.1 and 16.3 points, which is statistically significant. This is also likely to be clinically significant. The minimal clinically important difference is defined as the smallest change in a score that is of importance, and is generally taken as half the standard deviation of the variable.6,7 The standard deviation for this sample was 9.8; hence the minimal clinically important difference is 4.9, which is surpassed by both the mean difference and the lower 95% confidence interval limit.
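A sketch of the paired analysis and the half-standard-deviation rule for the minimal clinically important difference, using hypothetical pre- and postoperative scores:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)
pre = rng.normal(20, 7, size=960)               # hypothetical preoperative scores
post = pre + rng.normal(15.7, 9.8, size=960)    # hypothetical postoperative scores

diff = post - pre                               # new variable; check this for normality
t, p = stats.ttest_rel(post, pre)

sem = diff.std(ddof=1) / np.sqrt(len(diff))
ci = (diff.mean() - 1.96 * sem, diff.mean() + 1.96 * sem)
mcid = diff.std(ddof=1) / 2                     # half the standard deviation of the change

print(f"mean change = {diff.mean():.1f}, 95% CI {ci[0]:.1f} to {ci[1]:.1f}, "
      f"p = {p:.3g}, MCID = {mcid:.1f}")
```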



Table 53.3 Paired t-test for the Oxford Knee Score before and after a Total Knee Replacement

| Paired differences | Mean | Standard deviation | Standard error mean | 95% CI of the difference: lower | 95% CI of the difference: upper | Significance (two-tailed) |
|---|---|---|---|---|---|---|
| Pair 1 | 15.680 | 9.787 | 0.315 | 15.062 | 16.298 | 0.000 |


If the normality assumptions are not satisfied, a Wilcoxon signed rank test is performed. In this case, the medians of the variables are analyzed rather than the means. The Wilcoxon signed rank test converts the magnitudes of the positive and negative paired differences into ranks to calculate significance.3
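The equivalent nonparametric call, sketched with hypothetical skewed paired scores:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
pre = rng.exponential(scale=15, size=80)            # hypothetical skewed scores
post = pre + rng.exponential(scale=10, size=80)

w, p = stats.wilcoxon(post, pre)   # ranks the signed paired differences
print(f"Wilcoxon W = {w:.0f}, p = {p:.3g}")
```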




  • Multiple groups: For example, comparison of three or more different groups after an intervention.


An example of this would be to compare the improvement in the Oxford knee score after a total knee replacement between social quintiles (five ordinal groups, where 1 is the most deprived and 5 is the least deprived). One-way analysis of variance is an extension of the t-test. The three assumptions listed above for the unpaired t-test also apply to the sample populations for a one-way analysis of variance, and the same principles apply to satisfying them. The homogeneity of variance test in SPSS for the difference in Oxford knee scores according to social quintile produces Table 53.4, which indicates no significant difference (p = 0.377), so we can assume equal variance for the population samples.3 Table 53.5 demonstrates that there is a significant difference between groups, with a p value of < 0.0001; post hoc testing is needed to determine between which groups these differences occur. Bonferroni-corrected post hoc analysis is a commonly used technique, which multiplies each p value by the number of comparisons performed (in this case 10) to adjust for the type I error.5 Table 53.6 demonstrates that, on post hoc analysis with Bonferroni correction, there were statistically significant differences only between social quintile 1 and quintiles 4 (p = 0.047) and 5 (p < 0.001), and between social quintiles 2 and 5 (p < 0.001). To illustrate the effect of the Bonferroni correction, if an uncorrected t-test is performed comparing social quintiles 1 and 4, the p value is 0.0047; the corrected value (0.047) is 10 times greater, reflecting the number of comparisons made.
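The same sequence (homogeneity of variance, one-way analysis of variance, and Bonferroni-corrected pairwise comparisons) can be sketched as follows; the five quintile groups are hypothetical, and the Bonferroni correction is applied by multiplying each pairwise p value by the number of comparisons:

```python
import numpy as np
from itertools import combinations
from scipy import stats

rng = np.random.default_rng(4)
# Hypothetical improvement scores for the five social quintiles.
groups = {q: rng.normal(loc=14 + q, scale=7.3, size=190) for q in range(1, 6)}

# Homogeneity of variance, then one-way analysis of variance across all groups.
print("Levene p =", stats.levene(*groups.values())[1])
print("ANOVA p  =", stats.f_oneway(*groups.values())[1])

# Post hoc pairwise t-tests with a Bonferroni correction (10 comparisons here).
pairs = list(combinations(groups, 2))
for a, b in pairs:
    p = stats.ttest_ind(groups[a], groups[b])[1]
    print(f"quintile {a} vs {b}: corrected p = {min(p * len(pairs), 1.0):.3f}")
```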
























Table 53.4 Homogeneity of Variance Test between Multiple Groups (difference in Oxford knee score)

| Levene statistic | df1 | df2 | Significance |
|---|---|---|---|
| 1.057 | 4 | 957 | 0.377 |



Table 53.5 One-Way Analysis of Variance Results for Difference in Oxford Knee Score According to Social Quintile

| | Sum of squares | df | Mean square | F | Significance |
|---|---|---|---|---|---|
| Between groups | 1639.822 | 4 | 409.955 | 7.664 | 0.000 |
| Within groups | 51192.952 | 957 | 53.493 | | |
| Total | 52832.773 | 961 | | | |



Table 53.6 One-Way Analysis of Variance with Bonferroni Correction for Post Hoc Analysis

| (I) Quintile | (J) Quintile | Mean difference (I−J) | Standard error | Significance | 95% CI lower bound | 95% CI upper bound |
|---|---|---|---|---|---|---|
| 1 | 2 | 0.670 | 0.861 | 1.000 | −1.75 | 3.09 |
| 1 | 3 | 1.900 | 0.880 | 0.311 | −0.58 | 4.38 |
| 1 | 4 | 2.499a | 0.882 | 0.047 | 0.02 | 4.98 |
| 1 | 5 | 3.673a | 0.820 | 0.000 | 1.37 | 5.98 |
| 2 | 1 | −0.670 | 0.861 | 1.000 | −3.09 | 1.75 |
| 2 | 3 | 1.230 | 0.743 | 0.982 | −0.86 | 3.32 |
| 2 | 4 | 1.829 | 0.745 | 0.143 | −0.27 | 3.93 |
| 2 | 5 | 3.003a | 0.671 | 0.000 | 1.12 | 4.89 |
| 3 | 1 | −1.900 | 0.880 | 0.311 | −4.38 | 0.58 |
| 3 | 2 | −1.230 | 0.743 | 0.982 | −3.32 | 0.86 |
| 3 | 4 | 0.599 | 0.767 | 1.000 | −1.56 | 2.76 |
| 3 | 5 | 1.774 | 0.695 | 0.108 | −0.18 | 3.73 |
| 4 | 1 | −2.499a | 0.882 | 0.047 | −4.98 | −0.02 |
| 4 | 2 | −1.829 | 0.745 | 0.143 | −3.93 | 0.27 |
| 4 | 3 | −0.599 | 0.767 | 1.000 | −2.76 | 1.56 |
| 4 | 5 | 1.175 | 0.697 | 0.923 | −0.79 | 3.14 |
| 5 | 1 | −3.673a | 0.820 | 0.000 | −5.98 | −1.37 |
| 5 | 2 | −3.003a | 0.671 | 0.000 | −4.89 | −1.12 |
| 5 | 3 | −1.774 | 0.695 | 0.108 | −3.73 | 0.18 |
| 5 | 4 | −1.175 | 0.697 | 0.923 | −3.14 | 0.79 |

a: The mean difference is significant at the 0.05 level.


When the assumptions of normality and homogeneity are not satisfied for the sample groups, a Kruskal–Wallis test is used.5 This returns a p value without an option for post hoc analysis, which instead needs to be performed using multiple Mann–Whitney U tests for each pair of comparative groups.3 The p values then need to be adjusted according to the number of comparisons made (multiply each p value by the number of comparisons).
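A sketch of the nonparametric sequence with hypothetical skewed groups:

```python
import numpy as np
from itertools import combinations
from scipy import stats

rng = np.random.default_rng(8)
groups = {q: rng.exponential(scale=8 + q, size=100) for q in range(1, 6)}   # hypothetical

print("Kruskal-Wallis p =", stats.kruskal(*groups.values())[1])

# Post hoc: pairwise Mann-Whitney U tests, each p value multiplied by the number of comparisons.
pairs = list(combinations(groups, 2))
for a, b in pairs:
    p = stats.mannwhitneyu(groups[a], groups[b], alternative="two-sided")[1]
    print(f"group {a} vs {b}: corrected p = {min(p * len(pairs), 1.0):.3f}")
```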




  • Correlation between two continuous variables: For example, the linear relationship between two measured variables.


An example of this would be the relationship between preoperative knee function according to the Oxford knee score and the improvement in the score postoperatively. The relationship between these two variables is shown in Fig. 53.5. This plot demonstrates that the greater the preoperative Oxford knee score (signifying more severe symptoms), the greater the improvement in the postoperative score (reduction of symptoms). To demonstrate the degree of the linear relationship between the two variables, a correlation coefficient is generated.8 If both variables are normally distributed, a Pearson's correlation is performed; otherwise a Spearman's correlation is performed.3 A Spearman's correlation can also be used for categorical ordinal variables such as satisfaction (e.g., very satisfied, satisfied, neutral, unsatisfied, very unsatisfied). Table 53.7 demonstrates the Pearson's correlation coefficient (r = 0.335) generated by SPSS for the plot in Fig. 53.5, which illustrates the correlation to be significant (p < 0.001). The value of r, the correlation coefficient, can be negative or positive.8 A negative r simply means that there is an inverse relationship between the variables (as one increases the other decreases), whereas a positive r means that as one variable increases so does the other. The value of r indicates the strength of the correlation, where a value of 0 equates to no correlation and +1 or −1 equates to a strong correlation (Table 53.8).9 In the example in Table 53.7 there is a "fair" correlation but the p value is small, which is due to the large size of the cohort (n = 966). It is important to acknowledge this distinction between correlation and significance: a small cohort may fail to demonstrate statistical significance despite a strong correlation, whereas a larger cohort may produce statistical significance even though the correlation is poor.

Fig. 53.5 A scatter plot demonstrating the correlation between the preoperative Oxford knee score and the change in the score postoperatively.
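Both correlation coefficients can be generated with a single call each; a sketch with hypothetical preoperative scores and postoperative changes:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
preop = rng.normal(20, 7, size=966)                    # hypothetical preoperative scores
change = 0.4 * preop + rng.normal(8, 7, size=966)      # hypothetical postoperative change

r, p = stats.pearsonr(preop, change)       # both variables normally distributed
rho, p_s = stats.spearmanr(preop, change)  # rank-based alternative for skewed/ordinal data

print(f"Pearson r = {r:.3f} (p = {p:.3g}), r squared = {r**2:.3f}")
print(f"Spearman rho = {rho:.3f} (p = {p_s:.3g})")
```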



Table 53.7 Correlation Coefficient for Preoperative Oxford Knee Score and Improvement Postoperatively

| | | Preoperative Oxford knee score | Change in Oxford knee score postoperatively |
|---|---|---|---|
| Preoperative Oxford knee score | Pearson correlation | 1 | 0.335a |
| | Significance (two-tailed) | | 0.000 |
| | N | 966 | 966 |
| Change in Oxford knee score postoperatively | Pearson correlation | 0.335a | 1 |
| | Significance (two-tailed) | 0.000 | |
| | N | 966 | 966 |

a: Correlation is significant at the 0.01 level (two-tailed).























Table 53.8 Strength of the Linear Relationship According to the Correlation Coefficient

| Correlation coefficient | Strength of linear relationship |
|---|---|
| 0.8 to 1 | Very strong |
| 0.6 to < 0.8 | Moderately strong |
| 0.3 to < 0.6 | Fair |
| < 0.3 | Poor |


The r squared value is the coefficient of determination, which indicates the proportion of variance the two variables have in common.9 Using the example from Table 53.7, r squared is 0.112, which means that 11.2% of the change in the postoperative score is explained by the preoperative score; the remaining 88.8% of the change is related to other factors. Care must be taken to ensure subgroup populations are not different (e.g., by gender), where the correlation may be significant in one subgroup and not in the other. Outliers can also affect the correlation, for example, a 40 entered in place of 4.0 because the decimal point was overlooked (Fig. 53.6). Both of these potential errors can be avoided by inspecting graphs, which will illustrate subgroup differences and outliers (Fig. 53.6).

Fig. 53.6 The same scatter plot as Fig. 53.5, but with an obvious outlier (circled) that needs to be investigated and amended before further analysis takes place.

A potential difficulty when analyzing correlations is the interaction of a third variable. Using the example in Table 53.7 again, age may also influence the postoperative change in the Oxford knee score. The effect of age can be controlled for by using partial correlation.3 Table 53.9 illustrates that age has no effect upon the change in the Oxford knee score, because the correlation coefficient hardly changes (r = 0.334).
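Partial correlation can also be calculated directly from the three pairwise Pearson coefficients using the standard first-order formula; the sketch below uses hypothetical data with age as the control variable:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(9)
age = rng.normal(70, 9, size=965)                               # hypothetical ages
preop = rng.normal(20, 7, size=965)                             # hypothetical preoperative scores
change = 0.4 * preop + 0.01 * age + rng.normal(8, 7, size=965)  # hypothetical change scores

r_xy = stats.pearsonr(preop, change)[0]
r_xz = stats.pearsonr(preop, age)[0]
r_yz = stats.pearsonr(change, age)[0]

# First-order partial correlation of preop and change, controlling for age.
partial = (r_xy - r_xz * r_yz) / np.sqrt((1 - r_xz**2) * (1 - r_yz**2))
print(f"r = {r_xy:.3f}, partial r (controlling for age) = {partial:.3f}")
```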



Table 53.9 Partial Correlation between Preoperative Oxford Knee Score and Improvement Postoperatively Controlling for Age

| Control variable | | | Preoperative Oxford knee score | Change in Oxford knee score postoperatively |
|---|---|---|---|---|
| Age | Preoperative Oxford knee score | Correlation | 1.000 | 0.334 |
| | | Significance (two-tailed) | | 0.000 |
| | | df | 0 | 963 |
| | Change in Oxford knee score postoperatively | Correlation | 0.334 | 1.000 |
| | | Significance (two-tailed) | 0.000 | |
| | | df | 963 | 0 |

Chi-square, Fisher's exact, and McNemar tests are all methods of determining statistical significance between groups for categorical variables.10 An example would be using a Chi-square test to test the null hypothesis that there is no difference in the rate of satisfaction after total knee replacement between males and females. A cross-tabulation table is constructed according to the counts in each category. Table 53.10 demonstrates a typical 2 × 2 cross-tabulation table; the dimensions vary according to the number of categories in each variable (e.g., 2 × 4 or 3 × 4, etc.). From this table, a Chi-square test can be performed; for the example in Table 53.10 the p value was 0.28, so we cannot reject the null hypothesis. Even where a significant association is observed, the p value does not indicate the direction or the strength of the association. Observing Table 53.10, the relationship is not obvious; however, calculating the satisfaction rate by gender reveals that males (351/421 = 83.4%) are not as satisfied as females (468/545 = 85.9%). The strength of the association can be quantified using an odds ratio or the relative risk; however, these can only be calculated for a 2 × 2 cross-tabulation table (Table 53.11). The odds ratio is the ratio of the odds of having the disease (or outcome) in group 1 to the odds of having it in group 2 [odds ratio = (a × d) / (b × c)].10 For the example in Table 53.10, the odds ratio is 0.83 [(351 × 77) / (468 × 70)]. This value (< 1) indicates that group 1 (males) are less likely to be satisfied; a value greater than 1 would indicate that they were more likely to be satisfied with their knee replacement relative to females. The relative risk is the ratio of the risk of having the disease (or outcome) in group 1 to the risk of having it in group 2 [relative risk = (a × (c + d)) / (c × (a + b))]. For the example in Table 53.10, the relative risk is 0.97 [(351 × 545) / (468 × 421)].
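Using the counts from Table 53.10, a short sketch reproduces the chi-square test and the odds ratio and relative risk arithmetic; Fisher's exact test is included as the small-count alternative:

```python
import numpy as np
from scipy import stats

# Counts from Table 53.10: rows = male/female, columns = satisfied yes/no.
table = np.array([[351, 70],
                  [468, 77]])
a, b = table[0]
c, d = table[1]

# Without Yates' continuity correction, p is approximately 0.28 as quoted in the text.
chi2, p, dof, expected = stats.chi2_contingency(table, correction=False)
print(f"chi-square p = {p:.2f}")

odds_ratio = (a * d) / (b * c)                 # (351 x 77) / (70 x 468) = 0.83
relative_risk = (a * (c + d)) / (c * (a + b))  # (351 x 545) / (468 x 421) = 0.97
print(f"odds ratio = {odds_ratio:.2f}, relative risk = {relative_risk:.2f}")

# When an expected count is below five, use Fisher's exact test instead.
print("Fisher exact p =", stats.fisher_exact(table)[1])
```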



Table 53.10 A 2 × 2 Cross-Tabulation Table for Satisfaction after a Total Knee Replacement According to Gender

| | | Satisfied: yes | Satisfied: no | Total |
|---|---|---|---|---|
| Gender | Male | 351 | 70 | 421 |
| | Female | 468 | 77 | 545 |
| Total | | 819 | 147 | 966 |





























Table 53.11 A 2 × 2 Cross-Tabulation Table for the Presence of a Disease According to Groups

| | | Disease: yes | Disease: no |
|---|---|---|---|
| Group | 1 | a | b |
| | 2 | c | d |

The validity of the Chi-square test is violated when there are small frequencies in the cells of the cross-tabulation table; when an expected count is less than five, Fisher's exact test is used instead.10 Furthermore, if the number of subjects is small (n < 30), a Yates' continuity correction should be applied to avoid individual values having an overly large influence on the calculation.


A McNemar test is used for matched categorical data gathered from a case-control study (i.e., paired nominal data). The McNemar test compares the observations in the discordant pairs of a 2 × 2 table, in a similar way to a Chi-square test.9 An example would be comparing whether the same patients had pain (yes or no) before and after a total knee replacement.
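A minimal sketch of the test (the paired pain counts below are hypothetical) using the mcnemar function from statsmodels:

```python
import numpy as np
from statsmodels.stats.contingency_tables import mcnemar

# Hypothetical paired data: rows = pain before surgery (yes/no),
# columns = pain after surgery (yes/no); only the discordant cells drive the test.
table = np.array([[120, 580],
                  [ 15,  60]])

result = mcnemar(table, exact=True)   # exact binomial test on the discordant pairs
print(f"McNemar p = {result.pvalue:.3g}")
```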
