AFT - American Federation of Teachers


The SAT Trap

Why Do We Make So Much of One 3-Hour Test?

By Clifford Adelman

It was quite a year for a test that we have all known for decades as the SAT. From talk shows to op-ed pages to the covers of Newsweek and The New Republic, those three letters were too much with us in 1999. In public communication, the word "SAT" is now shorthand for all standardized testing. Irrespective of the nature, purposes, virtues, and limitations of the test itself, our use of the shorthand has created a symbolic monster. There are far more valid and productive metrics for judging educational attainment and potential.

As teachers already know, the SAT is a proven measure of general learned abilities. Student performance on the test is influenced as much by the nature of household dinner-table conversation as it is by formal school instruction. That is, the vocabulary of households with a high socioeconomic status is the vocabulary of the examination. Even though more than 70 percent of students entering four-year colleges take either the SAT or ACT exams, perhaps 200 (out of 1,800) four-year colleges place enough weight on those scores in admissions decisions to make a difference in students' lives.

The scores are not used at the 1,200 community colleges in this country, nor at hundreds of other open-door postsecondary institutions. At most, SAT scores have influenced the fate of one out of six students in four-year colleges, and one out of 13 undergraduates altogether. To pay as much attention to SAT scores as we do seems like letting an awfully small tail wag a very big dog.

The justification for using SAT scores in admissions decisions is that they are a decent predictor of first-year college grades. True, but so what? That criterion has nothing to do with the principal goal of students at four-year colleges and their families: completing a bachelor's degree. Nor do state legislatures give a hoot about grades when they judge the performance of public universities: Performance means graduation rates.

No three-hour test on a Saturday morning is anywhere near as strong a predictor of college graduation as the academic intensity and quality of the four-year high school curriculum that a student has completed. And high school grades and class rank are even weaker predictors than standardized tests. In an analysis of long-term degree completion (by age 30) in the most recently completed national longitudinal study (1980-93) conducted by the National Center for Education Statistics, with statistical controls for all major background characteristics of students, I've found that curriculum beats everything.

Not only is curriculum the best predictor of a student's graduation from college, it's the only factor educators can do anything about. But people rarely talk about it. The symbol of the SAT has become so powerful that it blocks any other conversation. How did that happen?

In the mid-1980s, my own employer, the U.S. Department of Education, pumped up the SAT's status with something we called the Wall Chart, which displayed year-to-year changes in the mean SAT score, by state. Those figures, along with other indicators, such as high school dropout rates, were presented in annual press conferences as a "national report card." Minnesota up three points, Arizona down two. The Wall Chart read like the stock tables, but it was far, far less faithful to the realities it purported to represent. Never mind the fallacy of using a test of general learned abilities to judge schools, let alone whole state systems of public education. Anyone who knows a smidgen about test scores knows that you do not represent change by metrics such as "up three, down two."

Until its final appearance in 1990, the Wall Chart made for good showtime visuals and gave the public easy-to-digest news bites. The annual hoopla beat the SAT into the consciousness of readers and viewers as the sole indicator of student potential and school system performance.

In the early-to-mid-1980s, there also was a proliferation of commercial guides to American colleges and universities that played up the test scores of entering freshmen as a basic indicator of institutional quality. The annual U.S. News & World Report rankings emerged in 1985, and are now awaited with the kind of anticipation usually reserved for the Oscars. At the core of the ranking system are—you got it—SAT scores (or ACT scores, where appropriate).

The colleges and universities report the scores of their entering freshmen to all the symbol-making handbooks, and their strategy is to look as good as their niche allows. As a former associate dean at a non-selective institution, I know that we played razzmatazz when we excluded our "special admits" (a euphemism for marginal students) from the SAT reporting pool, until our academic vice president worried that our scores were getting too high for our niche. SAT was image, even though, as a practical matter, many nonselective institutions did not use the scores for admissions.

With challenges to affirmative action, the symbolic status of the SAT has moved onto a very different stage. Wherever people have argued about race-based preferences in college and university or graduate- and professional-school admissions, a standardized test score has been at the center of the dispute. For example, in Regents of the University of California v. Bakke and Hopwood v. State of Texas, the white plaintiffs claimed that admittance of minority students with lower test scores than theirs had denied them a place in limited entering classes. The defendants (universities) argued that race could be a more important factor in admissions than test scores (which the universities nonetheless required).

While those two familiar legal decisions involved graduate and professional schools and examinations such as the Law School Admissions Test, the media-consuming public does not discriminate either by test or by level of higher education. The public sees every standardized admissions test as essentially the SAT. The ironic consequence of that perception is that the mass of minority students continues to be hurt or demeaned.

For former Ivy League presidents William Bowen and Derek Bok, authors of a defense of affirmative action at highly selective institutions, The Shape of the River: Long-Term Consequences of Considering Race in College and University Admissions (Princeton University Press, 1998), the SAT is the dominant indicator of institutional quality. The authors spin their arguments for race-conscious admissions with constant reference to that icon. In their view, the haves in our society are divided from the have-nots by virtue of the SAT scores of their college companions.

Push messages such as that across enough op-ed pages and through enough television cameras "into the air"—in the words of the French social critic Jacques Ellul—and one sees what Ellul describes as the formation of sociological propaganda: it has become our unconscious habit to judge individuals by the SAT company they keep. The message tells most students, and the mass of minority students among them, that they were turned into have-nots in the college admissions line at age 18. That message is neither wise nor kind.

Claude Steele, a psychologist at Stanford University, has done pioneering research on the damage done to minority students by the dominance of SAT consciousness. African-American students, in particular, have been repeatedly told by public propaganda that they are not expected to perform well on such tests. Steele has documented that, as a result, minority students freeze up when taking any high-stakes test. Excessive public SAT-talk, then, damages the life chances of minority students everywhere.

Recently, my own department's Office for Civil Rights issued draft guidelines that, if applied, might limit the use of SAT and other standardized-test scores in decisions about college admissions. The draft guidelines were an attempt to address the fact that, in my colleagues' words, minority students don't do as well as others on standardized tests and are disproportionately affected by colleges that overplay test scores in their admissions.

The message the public received from the ensuing controversy was illustrated at a roundtable discussion of policymakers at the 1999 convention of the Education Commission of the States, where one legislator moaned that "just at the moment when we are working even harder to close the SAT gap, minority students are being told that they don't have to take the test seriously." Although not an accurate interpretation of the intent of the guidelines, that's the kind of intimation that can result from keeping the SAT at the altar of public consciousness.

The latest public flurry over test scores concerns an SAT statistical simulation called "Strivers." Developed in a research project by Anthony Carnevale, a vice president of the Educational Testing Service (the developer and publisher of the SAT), the simulation takes the major background characteristics of a student (such as socioeconomic status, family income and structure, high school location, and, in one model, race) and, based on past performances of those with similar characteristics, predicts the student's SAT score. If the actual score is notably greater than the student's predicted score, an honorific Strivers label is attached. In other words, on the basis of a three-hour test on a Saturday morning, the student's stock jumps a dozen points in the hypothetical admissions line where the SAT is at the eye of judgment, and everybody feels good.
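The Strivers logic described above can be sketched in a few lines. This is purely an illustration of the idea, not the ETS model: the linear prediction, its weights, and the 200-point threshold are all invented for the example, since the article does not describe the actual coefficients.

```python
# Illustrative sketch of the "Strivers" idea: predict an SAT score from
# background characteristics, then flag students whose actual score far
# exceeds the prediction. All weights and thresholds here are invented.

def predicted_sat(ses_percentile, family_income, two_parent_household,
                  urban_school):
    """Return a hypothetical predicted SAT score from a made-up linear
    model standing in for one fit to past test-takers' results."""
    score = 800.0                           # baseline (invented)
    score += 4.0 * ses_percentile           # socioeconomic status, 0-100
    score += 1.5 * (family_income / 1000)   # annual income in dollars
    score += 30 if two_parent_household else 0
    score -= 20 if urban_school else 0
    return score

def is_striver(actual_score, predicted_score, threshold=200):
    """Attach the 'Striver' label when the actual score beats the
    prediction by more than the (illustrative) threshold."""
    return actual_score - predicted_score > threshold

# A student from a low-SES household who outperforms the prediction:
pred = predicted_sat(ses_percentile=30, family_income=25000,
                     two_parent_household=False, urban_school=True)
print(pred)                    # the model's predicted score
print(is_striver(1150, pred))  # did the actual score earn the label?
```

The point of the sketch is only that the label is a residual: it compares a score against a demographic prediction, which is why the same actual score can make one student a "Striver" and another unremarkable.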

Of course, Strivers was only a simulation, but as Nicholas Lemann wrote in the New York Times, it generated "a major media feeding frenzy." Typical of the feeders in the pool was The New Republic, whose cover story trumpeted "The End of Meritocracy: a Debate on Affirmative Action, the S.A.T., and the Future of American Excellence," and whose interior pages pretended that "Strivers" was a real program rather than an interesting piece of research. Reflect on the collocation of words in that cover-story headline: Only one word refers to anything concrete—"S.A.T." The others are emotively loaded abstractions that glom onto the icon. Our nation, it is implied, will stand or fall on the SAT. Has it come to that?

It's time to stop talking about the SAT. All we have done with all this SAT jabber is to manipulate minority sensibilities, without doing anything substantive for the mass of minority students. Analysis of high school and college transcripts of the generation that attended college between 1982 and 1993 tells us that getting one step beyond Algebra II in high-school mathematics doubled students' chances of completing a bachelor's degree. The same analysis tells us that 72 percent of African-American students who got beyond Algebra II, took Advanced Placement courses, and subsequently attended a four-year college or university earned bachelor's degrees. For Hispanic students, the percentage was 79 percent. No jiggling or juggling with SAT scores, class rank, or grades can accomplish those results.

Our principal tasks should be to provide minority students with curricular opportunities, to ensure that minority students are not "tracked" away from those opportunities, and to secure family and peer support for academic effort. Those tasks require real sweat, not feel-good simulations.

Just as important, the metrics of those tasks must become our principal propaganda, too. Imagine what would happen if the college rankings dropped the SAT as a criterion of institutional quality. Instead, what if they told us the percentages of entering students who had reached the pre-calculus level in high school, had taken three laboratory science courses, and had demonstrated competence in a language other than English? We might then be able to establish an alternative symbolism that would reflect what education is really about and what we hope to do for all students.

Clifford Adelman is a senior research analyst with the U.S. Department of Education. He is the author, most recently, of the Education Department report "Answers in the Tool Box: Academic Intensity, Attendance Patterns, and Bachelor's Degree Attainment." This article originally appeared in the Nov. 5, 1999, issue of the Chronicle of Higher Education and is reprinted with the author's permission.