A Brief Reflection on the PISA Results

Is it possible that it is precisely our approach to educational policy over the past several iterations of "reform" that has been contributing to the results?

Recently the OECD released the results of the latest round of PISA, the Programme for International Student Assessment, which compares samples of students from each of the participating OECD nations. If one goes to the PISA website, one will read the following:

Are students well prepared for future challenges? Can they analyse, reason and communicate effectively? Do they have the capacity to continue learning throughout life? The OECD Programme for International Student Assessment (PISA) answers these questions and more, through its surveys of 15-year-olds in the principal industrialised countries. Every three years, it assesses how far students near the end of compulsory education have acquired some of the knowledge and skills essential for full participation in society.

Now remember, our 15-year-olds are in general in ninth or tenth grade. That means that every year in grades three through eight they have been tested in reading and in math, and in most cases those tests consist only of multiple-choice items. We place high stakes for schools on these tests, and increasingly upon teachers (including the absurdity of value-added methodologies).

Each time an international comparison like PISA comes out we get the usual bloviation and hyperventilation. Thus we have heard from Secretary of Education Arne Duncan that the scores were "a massive wake-up call." He further said, in tweets repeated by the Department of Education, that "PISA results show that America needs to... accelerate student learning to remain competitive." For Duncan that seems to mean more of the same of what we have been doing, with more tests, even as he acknowledges that the current generation of tests is problematic.

But what if it is the tests themselves that are contributing to the results? Please consider the following. One can look at sample questions from the 2009 PISA, the test whose results were just released. Sample question one gives some information about lichens and then instructs test takers as follows:

Using the formula, calculate the diameter of the lichen, 16 years after the ice disappeared. Show your calculation.
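(To give a sense of what is being asked: if I recall the released item correctly, the information supplied includes a formula along the lines of d = 7.0 × √(t − 12) for t ≥ 12, where d is the lichen's diameter in millimetres and t is the number of years after the ice disappeared. A student is expected to write out the work, something like d = 7.0 × √(16 − 12) = 7.0 × 2 = 14 millimetres.)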

Let's stop: Show your calculation. But in their years of experience with high-stakes tests, the tests on which we put all the emphasis and on which we focus all of our attention, our students are not asked to show their work. They are merely asked to pick one answer out of the four or five offered to them.

So here is my question. Is it possible that it is precisely our approach to educational policy over the past several iterations of "reform" -- with the emphasis on the kinds of testing we have been doing -- that has been contributing to results about which some now bloviate and wring their hands?

I wonder.
