Mediating Variables and Partial Correlation

Contents

Mediating Variables
Partial Correlation
Which is the Mediator?
Mediating Pathways
Partial Correlation and Suppressor Variables
Residual Scores: Another Way to Understand Partial Correlation
Differential Research and Analysis of Covariance
Moderator Variables
Summary

Mediating Variables

Did you know that children with large big toes have better command of their language? One study found a strong correlation between language skills and size of the big toe in a sample of children. Of course, this should not be a surprise if you are told that the children who were tested ranged in age from 12 months to 12 years.

A number of states have laws requiring teenage drivers to log 30 hours of supervised driving after they obtain their license. Another recent study found that in these states 16-year-olds have 29% fewer fatal traffic accidents. Does this mean that fatal crashes in teenagers can be reduced by requiring parents to ride with teenagers? Or are there other differences between states with supervised driving laws and states without such laws? Perhaps teenagers in states with such laws simply drive less.

By now you are well aware of the dangers inherent in interpreting correlational data, whether the data come from correlational research or differential research. If two variables, X and Y, are correlated, it is difficult or impossible to say why this relationship exists. If a classification variable, X, is associated with changes in a dependent variable, Y, it is difficult or impossible to say why the changes in Y occur. Changes in X may be a cause of changes in Y, changes in Y may be a cause of changes in X, some third variable, Z, may be producing changes in both X and Y, or any combination of these possibilities may be true.

One reason for the difficulty is the likely presence of "mediating", or confounding, variables. A variable is a confounding variable if it is related to both X and Y, which makes causal interpretation impossible. We call it a mediating variable if in some sense it "explains" the relationship between X and Y. I put "explain" in quotation marks because, as we'll see, we have to be very careful about what we mean by that term.

In the study of language skills, both language skills and body size (including the big toe) increase with age. Presumably, if we were to control for age, the correlation of language skills and big toe size would disappear. Age, then, is the mediating variable. In the case of supervised driving laws and fatal traffic accidents, amount of driving may be a mediating variable. We need to compare fatal accidents in the states after controlling for driving time.

Note that in these examples we still cannot say that changes in age "cause" changes in size of big toe or changes in language skills, or that reduced time driving "causes" the reduction in fatal accidents. Nevertheless, we can say that age or driving time "account for" or "explain" the relationships, as long as we are quite clear that "account for" and "explain" are not used in a causal sense. It may turn out eventually that other mediating variables are present, in which case our interpretation will have to change.

Remember, in science all conclusions are tentative. With correlational data this caution is doubly important.

Researchers have found a correlation between depression and the incidence of cancer. People with greater levels of depression are more likely to develop cancer.

What do you think some of the mediating variables might be in this case?

Click to see answer

A common method for identifying mediating variables is to use the statistical procedure called multiple regression. For this course all you need to know about multiple regression is that it is a method for examining the relationships among many variables. One component of multiple regression is easy to use, however, and can be very useful for exploring data obtained from correlational and differential research. In correlational research it is called "partial correlation". In differential research it appears as the "Analysis of Covariance".

You might find it helpful first to look at the Summary and try to understand the important principles that are involved, then return and read the details that follow. Or, just plow ahead.

Partial Correlation

For the sake of generality, let X and Y be the two variables we have found to be correlated. I'll use r(X,Y) to represent their correlation. We'll call r(X,Y) the simple correlation.

We introduce a third variable, Z, which may or may not mediate the relationship between X and Y. We can find out if Z is a mediating variable by calculating the partial correlation of X and Y, controlling for Z. I'll write the partial correlation as r(X,Y|Z). The variable to the right of the vertical line is the control variable.

In other words, r(X,Y|Z) is a measure of the relationship between X and Y if we control for Z, that is, if statistically we hold Z constant. If r(X,Y) is relatively large, but r(X,Y|Z) is much smaller, we can conclude that Z is a mediating variable. Z may explain, at least in part, the observed relationship between X and Y. Be careful, though. Although we talk about "explaining" the relationship, based on correlations alone we will never know what "causes" the relationship.

In the big toe example, X is a measure of language skills, Y is size of the big toe, and Z is a child's age. Age explains the relationship between language skills and size of the big toe.

If you have three variables, there are three simple correlations among them, r(X,Y), r(X,Z), and r(Y,Z). Knowing these three correlations, it's very easy to calculate the partial correlation r(X,Y|Z). You can use this calculator for the purpose.
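
For reference, this is the formula that such calculators use (you do not need to memorize it, but it shows how the three simple correlations combine):

r(X,Y|Z) = [ r(X,Y) - r(X,Z) * r(Y,Z) ] / sqrt( [1 - r(X,Z)^2] * [1 - r(Y,Z)^2] )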

In the big toe example, suppose r(X,Y) = 0.40, r(X,Z) = 0.55, and r(Y,Z) = 0.65. Then r(X,Y|Z) = 0.07 (be sure you can duplicate that result). That is, the correlation between X and Y is close to zero once Z is controlled. Age explains the relationship between language skills and big toe size.
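
If you would rather check the arithmetic in code than with the calculator, here is a minimal Python sketch of the same calculation (the function name and values are just for illustration):

```python
import math

def partial_corr(r_xy, r_xz, r_yz):
    """First-order partial correlation r(X,Y|Z) from the three simple correlations."""
    return (r_xy - r_xz * r_yz) / math.sqrt((1 - r_xz**2) * (1 - r_yz**2))

# Big toe example: X = language skills, Y = big toe size, Z = age
print(round(partial_corr(0.40, 0.55, 0.65), 2))   # 0.07
```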

You can also calculate partial correlations in SPSS (use the Correlations command in the Analyze menu and select Partial Correlations). Using SPSS, you can actually enter several mediating variables at the same time, and find out if the X-Y relationship might be explained by the combination of mediators.

Sometimes the partial correlation, r(X,Y|Z), is smaller than the simple correlation, r(X,Y), but still clearly larger than zero. In this case we can say that the mediating variable Z "partly explains" the correlation between X and Y. Put another way, Z does not tell the whole story: other explanations for the relationship remain, including the possibility that X and Y themselves are causally related.

In one study, the correlation between a child's school achievement and the number of hours the child watches TV was -0.33. The correlation between school achievement and teacher ratings of the child's aggressiveness was -0.48. Ratings of aggressiveness and number of hours watching TV correlated 0.55.

Use partial correlations to interpret these results. Can we identify what causes what from these data?

Click to see answer

Which is the Mediator?

When you have three variables, any one of them might be considered a mediator for the correlation between the other two. Usually, however, it only makes theoretical sense to consider one of them to be the mediator.

In the big toe example, using the data provided above, the partial correlation of language skills and age, controlling for big toe size, is 0.42. The partial correlation of big toe size and age, controlling for language skills, is 0.56. Both partial correlations are smaller than the corresponding simple correlations, so statistically we could just as well claim that big toe size partly explains the relationship between age and language skills, or that language skills partly explain the relationship between age and big toe size. Of course, neither claim makes any sense. The choice of mediator is a theoretical decision, based on what we know about the variables; the statistical analysis has little to say about which variable should be considered the mediator.
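
The same kind of calculation reproduces these two numbers; here is a sketch using the formula given earlier, with the big toe correlations from above:

```python
import math

def partial_corr(r_xy, r_xz, r_yz):
    # Same formula as in the earlier sketch
    return (r_xy - r_xz * r_yz) / math.sqrt((1 - r_xz**2) * (1 - r_yz**2))

r_lang_toe, r_lang_age, r_toe_age = 0.40, 0.55, 0.65

# Language skills and age, controlling for big toe size
print(round(partial_corr(r_lang_age, r_lang_toe, r_toe_age), 2))   # 0.42
# Big toe size and age, controlling for language skills
print(round(partial_corr(r_toe_age, r_lang_toe, r_lang_age), 2))   # 0.56
```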

Mediating Pathways

Asking which variable is the mediator leads to a consideration of "influence pathways" among the three variables. Consider Figure 1, which illustrates the big toe example. The presumed mediator, Age, is assumed to influence the X and Y variables, big toe size and language skills. We assume there is no direct connection between big toe size and language skills, apart from their joint dependence on age.

Figure 1. One way in which a mediating variable, Z, might influence X and Y.

Now consider a different kind of influence pathway. Let's assume we have found a correlation between students' SAT scores and their satisfaction with their college experience. Might there be a mediating variable in this case? What about the possibility that SAT scores influence the grades that students receive, and that their satisfaction depends only on their GPA? That is, GPA mediates the relationship between SAT scores and satisfaction.

This suggests a different kind of influence diagram, as in Figure 2. The X variable, SAT score, influences the mediator, GPA, which in turn influences the Y variable, Satisfaction. If this hypothesis is correct, the partial correlation of X and Y, controlling for Z, should be zero. (There's no direct connection between X and Y).

Figure 2. A second way in which a mediating variable, Z, might be related to X and Y.

Another example of Figure 2 might be the supervised driving laws and fatal traffic accidents example. It is possible that having these laws on the books (X) leads to less driving by teenagers (Z), which leads to fewer fatal accidents.

The influence diagrams in Figures 1 and 2 may suggest "causal" relationships, but strictly speaking we cannot show, using correlations alone, that the relationships are causal. Nor is there any way to tell from the correlations and partial correlations which of the two influence diagrams is correct. Both are possible once we show that the partial correlation of X and Y, controlling for Z, is effectively zero. However, extensions of the partial correlation method, called "Path Analysis", are often used to provide weak support for causal models such as these, and to differentiate among them.

Consider the previous example concerning school achievement and the number of hours a child watches TV, with teacher ratings of aggressiveness as the mediator. Would the influence pathway be like that in Figure 1 or like that in Figure 2?

Can you think of another mediator where the other kind of influence diagram would be appropriate?

Click to see answer


Partial Correlation and Suppressor Variables

Sometimes you will observe a very strange result when calculating partial correlations. It is possible for the simple correlation between X and Y to be close to zero, but for the partial correlation, r(X,Y|Z), to be large. In this case Z is suppressing, rather than mediating, the relationship between X and Y.

Here's an example. Suppose we ask women to judge the attractiveness of several men, and we want to find out if judgments of attractiveness are related to a man's weight. We observe a correlation, r(X,Y), of 0.05. There seems to be no relationship.

We introduce a measure of each man's body fat as a possible mediating variable. We find that the body fat index correlates 0.55 with weight (which seems plausible) and correlates -0.45 with judgments of attractiveness (a negative correlation: the women find men with more body fat less attractive). Plugging these numbers into the calculator, we find that the partial correlation of weight with judgments of attractiveness is 0.40. In other words, when we control for body fat, heavier men are indeed judged to be more attractive.

We might call body fat index a "suppressor variable". That is, if it is not controlled, it suppresses the relationship between weight and attractiveness, because it is positively correlated with one and negatively correlated with the other.

The term "suppressor variable" has a number of different meanings in statistics, and there is some debate surrounding the term. You can ignore all of these complexities. From time to time you will see partial correlations that are larger than the simple correlations, and it may be difficult to understand why that would happen. This example may help to make it clearer. Probably, the mediator is suppressing the relationship by having opposite correlations with the first two variables.

A study measured adult patients' time to recover from an illness. The correlation between recovery time and a measure of family support (support received from family members during the illness) was found to be 0.03. That is, they were unrelated. However, the correlation between recovery time and the patient's age was 0.52, while family support and age correlated -0.49.

What do you make of these results?

Click to see answer

 

Residual Scores: Another Way to Understand Partial Correlation

Let's go back to the language skills example. We are pretty sure that the important predictor of language skills is age, not big toe size. Here's another way to think about the issue that may help you understand partial correlation better.

Suppose we want to predict scores on a test of language skills from a child's age. Then we could use a regression equation,

Predicted Language Score = A + B * Age,

where A and B are parameters obtained from a regression analysis (look it up if you don't remember regression analysis).

These predictions will probably be pretty close, but they won't be exact. So for each child we can calculate a Residual Score,

Residual = Language Score - Predicted Language Score.

The residual, then, is the error of the prediction, or the degree to which the language skills score is not accounted for by age. We call this "regressing language skills on age". Think of the residuals as what's left over in the language scores after we account for Age.

To find out whether big toe size has any remaining relationship with language skills after accounting for age, we can correlate big toe size with these residual scores. This new correlation is essentially the partial correlation of big toe size and language skills with age held constant. (To obtain the partial correlation exactly, we also regress big toe size on age and correlate the two sets of residuals; correlating raw big toe size with the residuals gives the closely related "semipartial" correlation, which is zero under exactly the same conditions.) Either way, it is the correlation of big toe size with "what's left over" in language skills after accounting for age. And, of course, after accounting for age there is nothing left over for big toe size to explain, so this correlation will be close to zero.

More generally, if we have two variables, X and Y, and a possible mediator, Z, the partial correlation r(X,Y|Z) is the correlation between the residuals of X and the residuals of Y after each has been regressed on Z. We first remove whatever part of each variable the mediator can predict, then see whether what remains is still related. That is, after taking account of the mediator Z, we find out whether X still tells us anything about the remaining part of Y.
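
To see that the residual approach and the partial correlation formula agree, here is a minimal simulation in Python; the data, coefficients, and sample size are all made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Made-up data in the spirit of the big toe example: age drives both variables,
# and there is no direct link between toe size and language skills.
age = rng.normal(6, 3, n)                        # Z
language = 10 + 4 * age + rng.normal(0, 5, n)    # Y
toe_size = 2 + 0.3 * age + rng.normal(0, 1, n)   # X

def residuals(y, z):
    """What is left of y after regressing it on z (simple linear regression)."""
    b, a = np.polyfit(z, y, 1)
    return y - (a + b * z)

# Correlate what's left of each variable after accounting for age
r_resid = np.corrcoef(residuals(toe_size, age), residuals(language, age))[0, 1]

# Compare with the formula-based partial correlation
r_xy = np.corrcoef(toe_size, language)[0, 1]
r_xz = np.corrcoef(toe_size, age)[0, 1]
r_yz = np.corrcoef(language, age)[0, 1]
r_formula = (r_xy - r_xz * r_yz) / np.sqrt((1 - r_xz**2) * (1 - r_yz**2))

print(round(r_resid, 3), round(r_formula, 3))   # identical, and close to zero
```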

A college looked at data for its students during the last 10 years. They found a strong correlation between students' college GPA and their high school GPA. They also found strong correlations between both GPAs and students' combined SAT scores. They used SAT scores to predict high school GPA, then took residual scores from the regression analysis and correlated the residuals with college GPAs. The correlation of college GPA with residual scores was almost zero.

What do you make of these findings?

Click to see answer

 

Differential Research and Analysis of Covariance

In differential research, one or more of the variables is a classification variable with a small, fixed number of levels. Examples include sex (male versus female), class year, and psychiatric diagnosis. The supervised driving study is differential research, with state (with or without a supervised driving law) as the classification variable.

The problem in interpreting the results of differential research is exactly the same as the problem in using correlational research. Since no variable has been manipulated, no causal conclusions can be derived. Thus, for example, if we observe a difference in skills between males and females, we cannot say why the difference occurred. There may be one or more confounding variables that are correlated with sex. For example, males and females may have had systematically different educational experiences. Any confounding variable is a potential mediating variable, i.e., a variable that explains the differences among levels of the classification variable.

Differential research often uses the Analysis of Variance (ANOVA) to test for differences among levels of the classification variable. In these situations, there is a procedure similar to the use of partial correlations that can be used to identify mediating variables. It is called the Analysis of Covariance (ANCOVA). Statistically, ANCOVA is quite complex, but by using SPSS it is very easy to do the calculations.

Assume that you use Sex as a classification variable. This factor varies between groups, because you have a group of males and a group of females. You will have one or more dependent variables, and you can use a straightforward ANOVA to find out if there are any differences between males and females on these dependent variables. (You could also use a t-test in this case, which would give equivalent results.)

Any potential mediator may be treated as a covariate. You should measure each covariate (each potential mediator) for each of the subjects. For example, you might record the number of math courses and language courses taken by each person. You then repeat the analysis using ANCOVA, in which the covariates are introduced as control variables.

If differences that were significant in the ANOVA are no longer significant in the ANCOVA, the covariates "explain" the originally observed differences; a covariate whose introduction makes the effect of the classification variable disappear can be treated as a mediating variable. If you still find significant differences, the covariates do not explain the differences, at least not completely.
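
If you prefer to see the logic in code rather than in SPSS, here is a minimal sketch using Python's statsmodels formula interface; the data, group labels, effect sizes, and sample size are all made up:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 100

# Hypothetical data: two groups that differ on the covariate, and a dependent
# variable that depends only on the covariate (so the covariate is the mediator).
group = np.repeat(["A", "B"], n // 2)
covariate = rng.normal(50, 10, n) + (group == "B") * 15
dv = 20 + 0.5 * covariate + rng.normal(0, 5, n)
df = pd.DataFrame({"group": group, "covariate": covariate, "dv": dv})

# Plain ANOVA: is there a group difference on the DV?
anova = sm.stats.anova_lm(smf.ols("dv ~ C(group)", data=df).fit(), typ=2)

# ANCOVA: is the group difference still there once the covariate is controlled?
ancova = sm.stats.anova_lm(smf.ols("dv ~ C(group) + covariate", data=df).fit(), typ=2)

print(anova)
print(ancova)   # the group effect should shrink toward non-significance
```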

Consider the supervised driving laws example again. There is a significant difference in fatal accidents between states with and without such laws. Would the difference still be significant if we added amount of teenage driving as a covariate?

ANCOVA can also be thought of in terms of residuals (see above). After regressing the dependent variable on the covariate, we find out if different levels of the classification variable are associated with different residuals. That is, after finding out what effect the covariate has, we look to see if the classification variable has any effect on what is left over.

When using SPSS you will need to use the General Linear Model (GLM) analysis. See the separate notes on How to use SPSS, especially the sections dealing with the use of GLM for ANOVA and ANCOVA. As long as it is clear which are your classification (between groups) factors, which are your dependent variables, and which are your covariates, it should not be difficult.

A comparison of patients diagnosed with manic depressive disorder and schizophrenia found a large, significant difference between the two groups on a test of rational decision making, F(1, 26) = 14.7, p < .01. Schizophrenic patients scored lower.

An IQ test was given to each patient, and IQ was used as a covariate. In the analysis of covariance, the difference between the groups was reduced, although it was still significant, F(1, 25) = 7.6, p < .01.

How would you interpret these results?

Click to see answer

We saw above that with partial correlations an extraneous variable is sometimes a suppressor variable rather than a mediating variable. The same thing can happen with Analysis of Covariance. An ANOVA may show that a classification variable has no effect on the dependent variable. When a covariate is introduced, however, the ANCOVA shows a significant effect. The covariate in this case is a suppressor variable. When we control for it, an otherwise hidden effect becomes apparent.

You will meet ANCOVA again when considering the design of true experiments, where covariates can be used for quite different purposes.

When using ANCOVA, it is necessary to make some critical statistical assumptions. Understanding these assumptions is not required for this course, but if you use ANCOVA in serious research, you will need to explore this topic further.

Moderator Variables

The term "moderator variable" sounds a lot like "mediating variable", and the two are easily confused. There is a very important difference, however. The difference is well summarized by the two terms themselves. A mediating variable "mediates" a relationship (i.e., serves as a facilitator, makes the relationship possible). A moderating variable "moderates" a relationship (i.e., produces changes in the relationship, or modifies the relationship).

The concept of moderator variable is most easily explained if the moderator is dichotomous (i.e., has only two levels). Sex might be a good example. Suppose we study the correlation between anxiety and threat in a social setting. Men and women read scenarios describing varying levels of threat to their safety or property, and rate their reactions to the threat on several dimensions. We find that the overall correlation between threat and anxiety is positive but small.

We now calculate the correlation separately for men and for women. For women we find that the correlation is fairly large; for men it is essentially zero. If X is anxiety and Y is threat, we can write the simple correlation as, say, r(X,Y) = 0.24. However, r(X,Y|men) = 0.04, r(X,Y|women) = 0.46.

Notice that if Z is a third variable, I write r(X,Y|some level for Z). Z is a moderator if r(X,Y|some level for Z) is different for different levels of Z. Thus, Sex is a moderator variable. The correlation between threat and anxiety is different for different sexes.
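
A quick way to check for a moderator like this is simply to compute the correlation separately at each level of the third variable. Here is a minimal sketch with made-up data in which threat predicts anxiety for women but not for men (all names and numbers are hypothetical):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
n = 200

# Hypothetical data: threat raises anxiety for women but not for men
sex = np.repeat(["woman", "man"], n // 2)
threat = rng.normal(0, 1, n)
anxiety = np.where(sex == "woman", 0.5 * threat, 0.0) + rng.normal(0, 1, n)
df = pd.DataFrame({"sex": sex, "threat": threat, "anxiety": anxiety})

# Overall (simple) correlation...
print(round(df["threat"].corr(df["anxiety"]), 2))

# ...and the correlation within each level of the moderator
for level, g in df.groupby("sex"):
    print(level, round(g["threat"].corr(g["anxiety"]), 2))
```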

The concept of a moderator variable is not restricted to dichotomous variables. For example, another moderator might be age: the correlation might change as we look at people of different ages.

Most important, you should be able to recognize that a moderator variable is nothing like a mediating variable. In the example here, sex does not "account for" the relationship between threat and anxiety (nor does it "suppress" it). Rather, the relationship between threat and anxiety changes depending on whether we are talking about men or about women.

The concept of moderator variables is closely related to the concept of interaction, which is discussed at length when we talk about factorial experimental designs (see the notes on interactions).

A study found a small but significant correlation between a person's willingness to help others and his or her score on a test of intelligence. People with higher intelligence scores expressed greater willingness to help others.

Can you think of any moderator variables that might affect this relationship?

Can you think of any mediating variables that might account for the relationship?

Click to see answer

 

Summary

When we find a relationship between two dependent variables, or between a classification variable and a dependent variable, it is usually impossible to provide a causal explanation for this relationship. The reason is that there is a potentially infinite number of confounding variables that might be involved. In some cases, however, there is a theoretical reason for believing that a third, mediating variable can explain the relationship. We can test this hypothesis by examining partial correlations or by using the Analysis of Covariance.

Suppose X and Y are the two variables involved in the original relationship, and that Z is the possible mediating variable. To find out if Z is indeed a mediator, we use a three step procedure:

1. Examine the relationship between Z and one of the original variables, say, Y.

2. Find that part of Y that is not predicted by Z. We call this a "residual".

3. Find out if X is related to the residuals for Y.

If there is no relationship between X and the residuals, then Z completely accounts for the relationship between X and Y. If the relationship between X and the residuals is just as strong as the original relationship, then Z is not a mediating variable - it is not involved in the relationship. On rare occasions the relationship between X and the residuals is stronger than the original relationship, in which case Z is called a suppressor variable.

In the case of correlational research, this three step procedure is carried out by examining partial correlations. In the case of differential research, we use the Analysis of Covariance.

In none of these cases can we unambiguously identify any variable as the cause of any other variable, but the procedures can help us develop and test theories to explain relationships among the variables.

Note finally that moderator variables are not related to partial correlation and the Analysis of Covariance. Rather, they are closely related to the concept of interaction.