Valuing Survey Data on the Value of College

In the past forty-eight hours media outlets from The Chronicle of Higher Education to the Wall Street Journal have posted headlines questioning the value of college. Their stories stem from a new poll of alumni reporting that only half of those polled strongly agree that college was worth the cost of attending. This finding might fuel the growing tide of doubt and discontent around whether the benefits of college could possibly outweigh the colossal costs.

Yet, these headlines could just as easily have read, "Only 4% of Alumni Strongly Disagreed that College was Worth the Cost." So perhaps the better questions to ask are: how do we evaluate survey findings like this, and how much should we value their data?

In this poll, the key survey result came from a common approach to soliciting public opinions. The pollsters stated, "My education from [University Name] was worth the cost" and asked alumni to select one of five options: "strongly disagree (1)... (2)... (3)... (4)... strongly agree (5)." In structuring their survey item this way, the pollsters appear to have violated three best practices of the science of survey design.

Survey design experts will point to three problems these pollsters created. First, the pollsters should have asked a question rather than pitching a statement at the survey respondents. Second, "agree-disagree" response options are a bad idea in general. Third, the mix-and-match approach used to develop the answer options introduces additional respondent error.

Do these survey design problems cause enough mischief to completely undermine the value of the findings? Let's take the issues one at a time. First, when a survey designer makes an assertion and invites respondents to react, a number of respondents will provide seemingly random answers simply because the wording of the initial assertion does not reflect how they think about the issue. In the present example, a number of respondents almost certainly 'strongly disagreed' while thinking, "My education was not worth the cost - it was a huge discount!"

Second, there is a robust tendency for respondents to simply agree with any statement that pops up on a survey when they are given an array of 'agreement' response options. Survey researchers know this from numerous studies where they ask respondents their opinion on a particular position and on the opposite of that position. Compared with other ways of formulating survey items, respondents endorse a position and its exact opposite far more often when given the opportunity to agree with a statement.

Third, numerous studies show that respondents report their opinions more reliably when every response option carries a verbal label. Intuitively, the meaning is clearer when an answer choice is "strongly agree" rather than "4," a reality that is borne out in the research. Rest assured that many respondents would have answered the college value question differently if they had known what the numbers meant.

Even beyond the structure of this item, other problems abound. The vague nature of the term "education" provides little guidance about how to answer if you felt that your classes and academic learning were not worth the cost, but that the social network you developed in college made all the difference in your career. In addition, the pollsters try to get at a complex phenomenon through one simple question rather than a series of items - a practice that leads to volatile estimates of public opinion.

Together, the multiple, substantial sources of error embedded in this single survey question should cause most readers to doubt the survey findings more than they doubt the perceived value of college. When faced with the problem of determining how seriously to take survey results like this, what are the important criteria to consider? While the number of specific best practices in designing surveys is lengthy and technical, one broad guideline can be particularly helpful: Surveys should resemble conversations.

In everyday conversations, people ask questions and provide answers. People are practiced at these interchanges and provide more accurate responses than when survey designers make them rate statements. In conversations, we embed reminders of the key topic at hand, referring back to the subject of the question. Surveys that underscore the key theme of each question help respondents to focus. In this particular instance, a question about the value of one's overall college experience could have been reinforced with response options such as "Not at all valuable... slightly valuable... moderately valuable... quite valuable... extremely valuable." Minimizing vague or ambiguous terms like "education" improves quality for conversationalists and survey respondents alike.

While we know that economists think college is a great value, we still do not know much about the value that alumni place on their college education and experience. Data like this does not bring us any closer to answering that question. If anything, these headlines distract us from the more important topic at hand: making sure that college is not only worth the cost, but also more accessible to those who wish to attend.
