CORRELATIONAL RESEARCH:
In using the descriptive research methods we have discussed, researchers often wish to determine the relationship between two variables. Variables are behaviors, events, or other characteristics that can change, or vary, in some way. For example, in a study to determine whether the amount of studying makes a difference in test scores, the variables would be study time and test scores.
In correlational research, two sets of variables are examined to determine whether they are associated, or “correlated.” The strength and direction of the relationship between the two variables are represented by a mathematical statistic known as a correlation (or, more formally, a correlation coefficient), which can range from +1.0 to -1.0.
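The passage does not specify which coefficient is used; the most common choice in practice is Pearson's product-moment correlation, r (an assumption here, not something stated above). For paired observations (x_i, y_i) with means x̄ and ȳ it can be written as

r = \frac{\sum_{i}(x_i - \bar{x})(y_i - \bar{y})}{\sqrt{\sum_{i}(x_i - \bar{x})^2 \, \sum_{i}(y_i - \bar{y})^2}}

The numerator is large and positive when high values of one variable tend to accompany high values of the other, large and negative when high values of one accompany low values of the other, and near zero when there is no consistent pattern, which is why the coefficient always falls between +1.0 and -1.0.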
A positive correlation indicates that as the value of one variable increases, we can predict that the value of the other variable will also increase. For example, if we predict that the more time students spend studying for a test, the higher their grades on the test will be, and that the less they study, the lower their test scores will be, we are expecting to find a positive correlation. (Higher values of the variable “amount of study time” would be associated with higher values of the variable “test score,” and lower values of “amount of study time” would be associated with lower values of “test score.”) The correlation, then, would be indicated by a positive number, and the stronger the association between studying and test scores, the closer the number would be to +1.0. For example, we might find a correlation of +.85 between test scores and amount of study time, indicating a strong positive association.
In contrast, a negative correlation tells us that as the value of one variable increases, the value of the other decreases. For instance, we might predict that as the number of hours spent studying increases, the number of hours spent partying decreases. Here we are expecting a negative correlation, ranging between 0 and -1.0. More studying is associated with less partying, and less studying is associated with more partying. The stronger the association between studying and partying, the closer the correlation will be to -1.0. For instance, a correlation of -.85 would indicate a strong negative association between partying and studying.
Of course, it’s quite possible that little or no relationship exists between two variables. For instance, we would probably not expect to find a relationship between number of study hours and height. Lack of a relationship would be indicated by a correlation close to 0. For example, if we found a correlation of -.02 or +.03, it would indicate that there is virtually no association between the two variables; knowing how much someone studies does not tell us anything about how tall he or she is.
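To make these three patterns concrete, here is a minimal sketch in Python using NumPy's corrcoef function. The study-time, test-score, partying, and height figures are invented for illustration only; they are not data from the text.

import numpy as np

# Hypothetical records for ten students (illustrative values, not real data).
study_hours = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9, 10])
test_scores = np.array([52, 55, 61, 64, 70, 72, 78, 83, 88, 91])    # rises with study time
party_hours = np.array([9, 9, 8, 7, 6, 5, 4, 3, 2, 1])              # falls as study time rises
height_cm   = np.array([170, 182, 165, 177, 160, 184, 171, 168, 179, 166])  # unrelated to study time

# np.corrcoef returns a 2x2 correlation matrix; entry [0, 1] is the
# correlation between the two variables passed in.
print(np.corrcoef(study_hours, test_scores)[0, 1])   # close to +1.0: strong positive correlation
print(np.corrcoef(study_hours, party_hours)[0, 1])   # close to -1.0: strong negative correlation
print(np.corrcoef(study_hours, height_cm)[0, 1])     # near 0: essentially no correlation

Running the sketch prints one coefficient per pairing, matching the positive, negative, and near-zero cases described above.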
When two variables are strongly correlated with each other, we are tempted to assume that one variable causes the other. For example, if we find that more study time is associated with higher grades, we might guess that more studying causes higher grades. Although this is not a bad guess, it remains just a guess—because finding that two variables are correlated does not mean that there is a causal relationship between them. The strong correlation suggests that knowing how much a person studies can help us predict how that person will do on a test, but it does not mean that the studying causes the test performance. Instead, for instance, people who are more interested in the subject matter might study more than do those who are less interested, and so the amount of interest, not the number of hours spent studying, would predict test performance. The mere fact that two variables occur together does not mean that one causes the other.
Similarly, suppose you learned that the number of houses of worship in a large sample of cities was positively correlated with the number of people arrested, meaning that the more houses of worship, the more arrests there were in a city. Does this mean that the presence of more houses of worship caused the greater number of arrests? Almost surely not, of course. In this case, the underlying cause is probably the size of the city: In bigger cities, there are both more houses of worship and more arrests.
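The houses-of-worship example can be illustrated by simulating a third variable that drives both of the others. The sketch below uses hypothetical population figures and arbitrary noise levels; it shows how a strong positive correlation can appear between two variables whose only link is a shared cause.

import numpy as np

rng = np.random.default_rng(0)

# Simulate 500 cities with widely varying populations (hypothetical figures).
population = rng.uniform(10_000, 1_000_000, size=500)

# Both quantities are driven mainly by population plus unrelated noise;
# neither one causes the other in this simulation.
houses_of_worship = population / 2_000 + rng.normal(0, 20, size=500)
arrests = population / 500 + rng.normal(0, 200, size=500)

# The coefficient comes out strongly positive even though the only
# connection between the two variables is the shared cause, city size.
print(np.corrcoef(houses_of_worship, arrests)[0, 1])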
One more example illustrates the critical point that correlations tell us nothing about cause and effect but merely provide a measure of the strength of a relationship between two variables. We might find that children who watch a lot of television programs featuring high levels of aggression are likely to demonstrate a relatively high degree of aggressive behavior and that those who watch few television shows that portray aggression are apt to exhibit a relatively low degree of such behavior. But we cannot say that the aggression is caused by the TV viewing, because many other explanations are possible.
For instance, it could be that children who have an unusually high level of energy both seek out programs with aggressive content and behave more aggressively. The children’s energy level, then, could be the true cause of their higher incidence of aggression. Similarly, people who are already highly aggressive might choose to watch shows with highly aggressive content precisely because they are aggressive. Clearly, then, any number of causal sequences are possible—none of which can be ruled out by correlational research (Feshbach & Tangney, 2008; Grimes & Bergen, 2008).
The inability of correlational research to demonstrate cause-and-effect relationships is a crucial drawback to its use. There is, however, an alternative technique that does establish causality: the experiment.