In Defense of the SAT

One of the most popular ideas of our time is the notion that in judging a young person's future success, we've become imbalanced, giving too much weight to whether a child has learned the stuff of textbooks and too little to whether that child has learned the stuff of real life.

The latter is a whole constellation of behaviors and skills, from creativity to emotional intelligence to self-discipline to practical judgment. In this modern paradigm, the elements of real-life success are characterized as highly generalizable, useful everywhere from the urban street corner to the boardroom. Meanwhile, the elements that go into book learning are characterized as narrowly applicable, useful only for getting into college, at which point the other factors take over.

No matter who is making this argument–whether it's Daniel Goleman, Dan Pink, Robert Sternberg, Malcolm Gladwell, Thomas Stanley, or some college president–it always stands on a few key bricks. One of those bricks is that the SAT doesn't predict much of anything.

It's commonly said that the SAT, taken in the senior year of high school, has only about a 40% correlation with a student's freshman-year college GPA. If it's that bad at predicting how well a kid does in college, just one year later, then how could it predict longer-term outcomes in life, when other factors become increasingly important? The SAT is designed, specifically, to screen for college success–if it doesn't accomplish what it's built for, then surely something else (that's not being tested) accounts for real success, in college and in life.

This argument opens the door for all these other variables to be postulated as the new basis for success.

I've always had a skeptical feeling about the 40% correlation statistic, and so I've never relied on it or used it in print. There are two self-selection problems that make the data really hard to control for. First, high schoolers of differing abilities apply to different schools–the strongest students apply to one tier of colleges, and the average students apply to a less ambitious tier, with some overlap. Second, once students get to a college, they enroll in classes they believe they can do well in. Many of the strongest students try their hand at Organic Chemistry, while more of the less-confident students take Marketing 101. At each of these colleges, and in each of these courses, students might average a B grade, but the degree of difficulty in achieving that B is not comparable.

Many scholars have attempted to control for these issues, looking at data from a single college or a single course that all freshmen have to take, and their work has suggested the 40% correlation is a significant underestimate. I've long wondered what would happen if an economist really took on this massive mathematical mess, on a large scale, harvesting data from a wide selection of universities.

Finally this has been done, by Christopher Berry of Wayne State University and Paul Sackett of the University of Minnesota. They pulled 5.1 million grades, from 167,000 students, spread out over 41 colleges. They also got the students' SAT scores from the College Board, as well as the list of schools each student asked the College Board to send their SAT scores to, an indicator of which colleges they applied to. By isolating the overlaps–where students had applied to the same colleges, and taken the same courses at the same time with the same instructor–they extracted a genuine apples-to-apples subset of data.

It turns out that an SAT score is a far better predictor than commonly claimed. Once the self-selection bias is properly accounted for, SAT scores correlate with college GPA at around 67%. In the social sciences, that's considered a great predictor.
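The correction is, at bottom, a correction for range restriction, and the effect is easy to see in a toy simulation. The sketch below is a generic illustration, not Berry and Sackett's actual statistical method, and all of its numbers are invented for the example: it assumes an underlying SAT-to-GPA correlation of 0.67, then looks only at students bunched into one band of SAT scores, as if they had all sorted themselves into the same tier of college. Within that band, the observed correlation falls sharply, into the neighborhood of the oft-quoted 40%.

```python
# A toy illustration of range restriction, not the Berry & Sackett method.
# All numbers here are invented for the example.
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
true_r = 0.67  # assumed underlying SAT-to-GPA correlation, for illustration

# Standardized "SAT" scores and a "GPA" built to have the assumed correlation.
sat = rng.standard_normal(n)
gpa = true_r * sat + np.sqrt(1 - true_r**2) * rng.standard_normal(n)

# Across all students, the correlation comes back near the true 0.67.
print(f"all students:  r = {np.corrcoef(sat, gpa)[0, 1]:.2f}")

# Self-selection: keep only students in a two-standard-deviation band of
# SAT scores, as if they had all sorted into the same tier of college.
# The observed correlation drops well below the true value.
tier = (sat > -0.5) & (sat < 1.5)
print(f"one tier only: r = {np.corrcoef(sat[tier], gpa[tier])[0, 1]:.2f}")
```

The narrower the band, the further the observed correlation falls, which is why data drawn from a single college or a single course can make the SAT look far weaker than it is.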

My point isn't that other life variables don't matter. Some of the success factors that have been touted are certainly additive to what's tested by the SAT. It's still worthwhile to explore why people succeed, both at school and in real life. But we may not be imbalanced; our valuing of differing abilities may be right on target. Meanwhile, the argument all these alternative success factors are built on needs repair.
