SAT. GMAT. ACT. MCAT. GRE. LSAT. Now tell me, how do those acronyms make you feel?
Perhaps you feel weary, imagining practice books full of unfilled bubbles, days of rote memorization and evenings of flashcards.
Perhaps you feel a sense of pride, remembering numbers that one day arrived in a sealed envelope and soon after confirmed your long-held suspicion that 90 percent of the populace would surely succumb to you in a game of Scrabble.
Or perhaps you feel embarrassed and combative, having never received any such assurance of your mental worth. I mean, what do standardized tests really matter anyway?
Standardized tests; ay, there’s the rub. We love them. We hate them. And we will regardless pay attention to them. Such has certainly been the case in Columbia of late, as story after story after story on local scores — and what they might signify — has rolled out into the ether.
Many of the articles have hinted at the same truth about our love for those exams: People love standardized tests because good scores come with bragging rights, bragging rights that are often overindulged.
Take the news release put out by MU at the end of August, wherein it's pointed out that “this year’s mean ACT score (for the class of 2013), a measure of the quality of this year’s freshman class, is 25.6, the highest in eight years.”
The best in eight years? Stop the presses. Call the mayor. That’s an impressive-sounding achievement. Well, impressive until you find out that these were the mean ACT scores for the seven years preceding, listed from most recent to least: 25.5, 25.4, 25.3, 25.4, 25.4, 25.4, 25.5. (If only the freshman class of 2001 hadn’t gotten that blasted 25.7 that they’re always going on about.)
The reason we hate those tests is handily encapsulated in that same little quote, in MU’s vague explanation of what that score signifies: a measure of quality. The ambiguity of what standardized tests actually show can drive us quite mad.
Scores on these tests don’t necessarily take into account how hardworking a student is; they can’t provide a measure of how much a student has learned or the amount of knowledge a person has. And you don’t have to look hard to find someone who scored poorly on, say, the ACT and who asserts in defense that no one would actually use the knowledge tested by it anyway.
It is, to be sure, unlikely that anyone will run up to you in the street and wildly demand that you illustrate your comprehension of Passage A or that buying groceries will involve an understanding of quadratic inequalities. To that extent, the objection is fair enough, and a person who nails such a test may well have only lived up to that old saying about the Bourbon kings: "Nothing learned, nothing forgotten."
But the love and hate also don't really matter. We will invariably use these numbers because in many cases, they’re the only standard of comparison we’ve got; some standard is better than nothing.
Another story, one about how MU is going to measure improvements based on how it stacks up against 33 similar institutions, helps illustrate this reality. While everyone agreed that striving to be better was all well and good, one chairman pointed out that at the college level there is no equivalent of standardized tests for measuring student achievement, which makes any such comparison among those schools seem arbitrary indeed.
And regardless of what flaws standardized exams might have or what flawed behavior they might elicit, they will ever be superior to feel-goodery measures. Due to poor test scores, for example, students at six Columbia schools have recently been allowed to switch districts under the No Child Left Behind Act. Basing that decision on little Jimmy’s MAP score might not be perfect, but it’s certainly fairer than letting him switch because science made him feel unhappy.
However they're used or abused, it's also true that general interest in these test scores will never wane. To illustrate: Suppose I told you I happen to know that in measures of ACT scores for all MU undergraduates, one gender scored higher than the other every year for the past 10 years.
The winning gender might try to brag and say that makes them smarter, but no one would accept that as definitive proof. The losing gender might feel a little inferior and would likely assert that those score differences don't mean anything. But I defy anyone not to wonder who came out on top. (The answer, by the bye, is males.)
Bottom line: When it comes to standardized test scores, love them or hate them, we're all like moths to flames.
Katy Steinmetz is a columnist for the Missourian and an editor for Vox Magazine. She moved to Columbia after spending two years teaching in Winchester, England, and one year in Edinburgh, Scotland. Her work has been published by a variety of outlets, including The Guardian and Businessweek.com. Katy plans to complete her MU master's degree in 2010.