John Hechinger’s article, SAT Coaching Found to Boost Scores — Barely, warrants a deeper analysis. If we take Mr. Hechinger’s conclusions at face value — conclusions that seem more concerned with drawing a crowd than with accurate reporting — then we are voluntarily subordinating the few facts in this discussion to a series of anecdotes and conjecture.
Mr. Hechinger draws a variety of conclusions about the ethical practices of test preparation companies, particularly about (1) the claims of substantial increases in student performance on standardized tests and (2) the validity of practice tests given by these companies.
In making his first point, Mr. Hechinger directs us to the study recently published by the NACAC. The NACAC report does not discuss improvement, only effect. It makes no claims about the veracity of any particular “improvement” at all, but instead seeks to demonstrate that the net effect of expensive test preparation differs by only about 30 points from that of other, less expensive forms of preparation. Mr. Hechinger completely disregards one of the major premises of the report: that improvement and effect are two entirely different measurements. Unfortunately, because he misses that premise, his conclusions further cloud the very distinction the report attempts to make.
To address the second point, I counter that providing inaccurate scores is at least partly counterproductive for test preparation companies, which rely on analyzing student performance so that instructors can direct and focus each student’s preparation. Inflating or deflating scores prevents companies from providing effective training. If we accept, as Mr. Hechinger proposes, that trumpeting score improvements is these companies’ major form of marketing, then companies have strong incentives to take actions that produce real improvements (and not just the perception thereof). The fact that incentives exist for companies to skew results in their favor does not necessarily mean that they will. In fact, since there are no real tests on the market beyond those of ETS, the maker of the SAT, which publishes its own “practice tests” as a test-preparation option for students, one might argue that the same incentive exists for ETS. And since most students likely to take these courses will already have taken the PSAT, and possibly the SAT itself, students have external benchmarks of their own against which to measure improvement.
Furthermore, the experience of one student who earned a perfect score on his SAT is an inappropriate and peculiar example when considering average score ranges, let alone when attempting to disparage an entire industry. This type of anecdotal evidence is hardly indicative of student experiences on a large scale. A reasonable, objective observer must also take into account the many variables that can affect a student taking the official test. Students report a wide variety of feelings, ranging from fear and anxiety to excitement and exhilaration. In some cases these reactions translate into stronger performance (beneficial stress of this kind is known as eustress), while in other cases they translate into worse performance. Citing an informal study of a few students who worked with one provider and had similar experiences explicitly ignores those students who performed on par with their practice tests or even underperformed them.
Mr. Hechinger should have perhaps noted that the NACAC report concludes with the following points:
- students should be encouraged to prepare before taking admissions tests
- students should be counseled to use cheaper forms of test preparation
- commercial coaching or private tutoring may well be worth the cost
Finally, the article presents two propositions: (1) SAT coaching resulted in a score improvement of around 30 points, and (2) a third of schools with tight selection criteria said that an increase of 30 points would “significantly improve students’ likelihood of admission.” Even assuming the improvement from preparation is only 30 points, the author seems determined to ignore the glaring conclusion that this ‘modest benefit’ can have a very real effect on admission to selective schools. To the extent that students can avail themselves of companies (like my own) that offer quality test preparation at 20 to 50 percent of the rates charged by the providers highlighted in Mr. Hechinger’s piece, the net value of that “modest benefit” increases dramatically.
The increasingly competitive world of standardized testing and college admission has forced students to seek out every available resource. Until schools become more transparent about how they value SAT scores, college applicants will reasonably pursue any gains, modest or significant, within their reach. And until a more extensive and finely tuned study is performed (the need for which is repeatedly noted in the NACAC report), it is irresponsible to draw conclusions about test preparation companies or their effectiveness. The more salient question, ignored by both Mr. Hechinger and the NACAC report, is this: to what extent are students without access to high-quality test preparation disadvantaged by their inability to get those modest 30 points?
Hashim Bello is co-founder of Bell Curves, a test preparation company that seeks to deliver high-quality test preparation to traditionally underserved youth.