A measurement of what?; although “reliably constant,” experts say standardized test scores are often misunderstood and misused


by Diverse Staff

Earlier this year at a gathering with members of the press, the
presidents of a handful of top research universities were discussing
their commitment to diversity. As if with one voice, they said
standardized test scores play only a small part in their admissions
process. Their institutions, they pronounced, are all but
uninterested in SAT and ACT scores.

The high-powered buzz of insistence ended abruptly, however, when
the presidents were asked whether their institutions issue press
releases whenever their average freshman SAT scores go up.

The embarrassed silence illustrated the central paradox surrounding
standardized test scores — that even though higher education
professionals understand their limitations, especially as an aggregate
measure of groups of students, they also know that the general public
is impressed by high test scores and considers them of paramount
importance.

Last week, the College Board released its annual report on student
performance on the SAT. As the public and higher education institutions
now clamor to interpret and market the findings, it is useful to
examine what testing experts have to say about what these scores really
mean and the context in which they should be used.

The Power of Numbers

“America believes three is bigger than two. They believe these
numbers,” is the way Joan Snowden, a former director of a policy center
at the Educational Testing Service (ETS), describes the public’s
understanding of test scores.

Examples abound of the public’s reliance on SAT and ACT scores as a
measure of academic achievement. Colleges consistently publish average
freshman test scores, which high school students then use to determine
where they should apply. Real estate agents and parents use the high
school-by-high school listings of scores, published by local
newspapers, as guides to the “best” schools and neighborhoods. And the
National Collegiate Athletic Association (NCAA), which governs
intercollegiate sports competition, uses SAT and ACT scores as a way to
decide who can play: A student who receives a score of 17 on the ACT,
for example, has to sit out a year before playing intercollegiate
sports.

Even a magazine like The New Republic, which prides itself on
puncturing conventional wisdom, published an article recently with the
following assertion: “Institutions have long relied on standardized
tests because such tests, for all their faults, tend to be highly
reliable in their estimation of how well a particular applicant will
actually perform in college or on the job.”

The testing agencies themselves make much more modest claims.

The College Board, which sponsors the SAT and other standardized
tests, claims only that the SAT correlates with first-year
grades a little less than half the time (42 percent). In other words,
students at a particular school who scored highest on the SAT wind up
with the highest grades in the first year of college a little less than
half the time. High school grades correlate with first-year college
grades a little better (48 percent). When combined, the SAT and high
school grades can be used to predict freshman year grades a little more
than half the time (55 percent).
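The figures cited above are correlation coefficients (0.42 for the SAT, 0.48 for high school grades, 0.55 for the two combined). A minimal sketch of how such correlations, and the gain from combining two predictors, are computed; every number below is synthetic, invented purely for demonstration, not College Board data:

```python
import numpy as np

# Synthetic illustration only: a latent "preparedness" factor drives both
# the predictors and freshman grades, with different amounts of noise.
rng = np.random.default_rng(0)
n = 1000

latent = rng.normal(size=n)
sat = latent + rng.normal(scale=1.4, size=n)       # noisier predictor
hs_gpa = latent + rng.normal(scale=1.2, size=n)    # slightly less noisy
frosh_gpa = latent + rng.normal(scale=1.0, size=n)  # outcome being predicted

r_sat = np.corrcoef(sat, frosh_gpa)[0, 1]
r_gpa = np.corrcoef(hs_gpa, frosh_gpa)[0, 1]

# Combining both predictors with a least-squares fit always correlates with
# the outcome at least as well (in-sample) as either predictor alone.
X = np.column_stack([sat, hs_gpa, np.ones(n)])
coef, *_ = np.linalg.lstsq(X, frosh_gpa, rcond=None)
r_both = np.corrcoef(X @ coef, frosh_gpa)[0, 1]

print(f"SAT alone:    r = {r_sat:.2f}")
print(f"HS GPA alone: r = {r_gpa:.2f}")
print(f"Combined:     r = {r_both:.2f}")
```

The noise scales above were chosen only so the synthetic correlations land in the same general range as the reported ones; the point is the ordering, with the combination predicting better than either measure by itself.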

That’s it. The College Board does not claim anything more. It has
nothing to say about later college performance — not even about the
likelihood of the student to graduate.

“I am proposing research on that,” says Gretchen Rigole, a
researcher at The College Board who did the latest study on what is
called the “predictability” of the SAT — or how well it predicts
first-year grades.

“[ETS’s] claim has always been correlation with first-year grades,”
says Snowden. “What’s shocking is when you realize how narrow that
[claim] is.”

ACT — the college entrance exam taken primarily in the Midwest, South, and West — makes a similar claim.

“It’s meant to measure how the student will do in the first year of
college,” says Kelley Hayden, ACT spokesman. And, he says, it does so
about half the time — like the SAT.

The claims of predictability narrow even further when subsets of
students are studied. For example, women tend to get slightly higher
freshman grades than their scores would indicate when compared to men.

And, Rigole says, “Consistent findings for both Latinos and African
Americans show that both grades and SAT scores overpredict.”

In other words, African Americans and Latinos who score 1200 on
their SATs tend to have lower freshman grades than White students with
the same score.

“When you look at students who are clones of each other in every
way [college preparation, GPA, test scores, and family income], once
they sit in the classroom together there is a difference in grade
performance,” says Dr. Michael T. Nettles, executive director of the
Frederick D. Patterson Research Institute of The College Fund/UNCF, and
a former ETS researcher. “That suggests to us that there is something
happening once they get to campus in the freshman year.”

SAT and ACT scores are reliable predictors of college performance,
according to Nettles, but only when students are compared to others
within the same racial group.

“There are problems in the college performance and grading
processes that need to be examined more so than the test itself. This
may have implications for how [institutions] treat the test[s] when
they admit students,” he says.

There is a lot of anecdotal and some research evidence that
indicates that African American and Latino students need more time than
White students to adjust to college. But once they “get on their feet,”
as Rigole puts it, they perform equally or better. Rigole calls this
the “late bloomer theory,” and has proposed research on that subject as
well — but so far The College Board has yet to do it.

“What I believe is that [SATs and ACTs] fail to take into
consideration two important things,” says Dr. John Gardner of the
University of South Carolina. “First is [the student’s] motivation and
second is the university’s ability to intervene, to teach students, and
to motivate students.” Gardner heads a freshman-year program that, he
says, works with students whose test scores and grades mean they are
predicted by the university to have low grades and high drop-out rates.
And yet, he says, the students do very well. “What I’ve learned,
working with African American students in South Carolina, is their high
degree of motivation.”

These are the kinds of discrepancies and limitations that have
brought standardized test scores under attack for decades from
educational reformers who say two things: first, that the SAT and ACT are
not good measures of academic aptitude and are biased against women,
African Americans, Hispanic Americans and children from poor or
uneducated homes; and second, that the tests are used inappropriately,
compounding the unfairness.

The most vigorous of the groups opposing the widespread use of
standardized test scores is FairTest, which is funded by the Ford
Foundation, the Joyce Foundation, the Rockefeller Family Fund and
others. People connected with it include: Dr. Deborah Meier of the
Annenberg Institute for School Reform, Chuck Stone of the University of
North Carolina Chapel Hill, Dr. Asa Hilliard of Georgia State
University, and Dr. Howard Gardner of the Harvard Graduate School of
Education.

Despite such backing, however, FairTest has had relatively little
success in denting the public perception that standardized test scores
are valid measures of success.

Dismantling Affirmative Action

Perhaps the most profound example of how the widespread acceptance
of standardized scores is shaping educational policy is the role test
scores are playing in the dismantling of affirmative action.

In Hopwood v. The State of Texas, four White students claimed that
they deserved admittance to the University of Texas at Austin law
school more than Hispanic students who had been admitted. Their case
rested on their slightly higher “Texas Index” scores, a combination of
college grade point averages and scores on the LSAT, the law school
version of the SAT. A panel of the Fifth U.S. Circuit Court of Appeals
agreed with them last year and dismantled the university’s affirmative
action program on that basis.

In an odd twist, a different panel of the same court ruled this
spring in Ayers v. Fordice that Mississippi’s use of ACT scores as the
basis for scholarships was discriminatory against African Americans.
But the ACT is still used as a basis for admissions in that state. Only
those students with minimum scores can get into the prestigious
University of Mississippi, for example, which eliminates many of
Mississippi’s African American students who score consistently lower on
ACT tests than White students.

In California, where a referendum ended higher education’s
affirmative action programs, disparate SAT scores are widely cited as
proof that Black students are less qualified than their White and Asian
counterparts and thus are less deserving of places in the prestigious
University of California system.

These are uses of the SAT and ACT scores that almost no one in
higher education defends, including the testing services themselves.
They say — repeatedly and in many different forums — that the test
scores should never be viewed in isolation from other information about
students.

This is a particularly important point for those whose scores are
low — and, despite dramatic gains in the last decade, the average
scores of African American, Hispanic and Native American students who
take the test continue to lag behind the average scores of White and
Asian students who take the test. This is also true of the average
score of poor students as opposed to wealthy students.

Both The College Board and ACT say that when students take a rigorous
college preparatory curriculum, they earn better test scores. This is
true for all ethnicities and income groups.

“The correlation is very high,” says Rigole. “If you take more courses, you’ll do better.”

The problem, they say, is that too many poor, African American, and
Latino students do not have access to those courses. Too often, they
are shunted away from college preparatory classes into vocational
education or other non-college preparatory programs.

“What bothers me is the implication that [the discrepancies] are
the test’s fault,” says The College Board’s spokesman, Fred Moreno,
“completely ignoring the fact that so many kids don’t take the courses
that will help them do well. They don’t take calculus and physics and
all the classes that help them with reasoning skills.”

Moreno continues: “You’ve got to start taking academic classes
beginning in the ninth grade — which really means starting earlier.”

Despite the fact that many newspapers and magazines try to portray
the SAT and ACT as measures of the educational progress of the nation,
they were not designed to do that. As great-grandchildren of the IQ
tests developed in the first part of the century, the ACT and SAT are
simply devices to help colleges sort through the piles of applications.

“Admissions is a sorting process,” Rigole says. “If [colleges] have more applicants than places, they have to sort somehow.”

But this is where FairTest and other critics of standardized test
scores have succeeded, at least in part. Even as the public perception
of test scores as objective measures of achievement appears to have
crystallized, the critics have convinced many higher education
officials to lessen their reliance on test scores in the admissions
process — witness the protestations of the university presidents noted
at the beginning of this article.

Curriculum, SCUGA and Test Scores

“Admission officers look at any measure with skepticism,” says
Roger Swanson, a former admissions director at Arizona State University
and California Polytechnic State University-San Luis Obispo. Swanson is
now the associate executive director of the American Association of
Collegiate Registrars and Admissions Officers (AACRAO). When he was
working in admissions, he recalls that the officers would rate every
application on a scale in which SAT and ACT test scores were a fairly
small part. The curriculum the student had taken — whether it was a
rigorous college preparatory curriculum, for example — held much more
weight.

The University of Michigan — a large, selective public university
— appears to agree with Swanson. University spokeswoman Julie Peterson
says that each of the 20,000 applications the university receives to
fill its 5,200 freshman spaces is scored on a complicated scale by
academic performance — with grades weighing twice as much as test
scores — and by something the admissions people named SCUGA, an
acronym for school, curriculum, unusual, geographic, and alumni. The
scale weighs elements such as whether the student’s high school had a
rigorous curriculum; whether the student took the most rigorous
curriculum offered by the school; any unusual characteristics such as
leadership, community service, or overcoming difficult circumstances;
whether the student can offer some geographic diversity to the student
body; and whether the student is a child of an alumnus.
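A hypothetical sketch of such a scale, assuming nothing beyond what Peterson describes: grades count twice as much as test scores, and the SCUGA review adds points on top. Every weight, range, and point value below is an invented assumption for illustration; the article gives no actual formula.

```python
# Hypothetical Michigan-style admissions scale. The article says only that
# grades weigh twice as much as test scores and that SCUGA factors (school,
# curriculum, unusual, geographic, alumni) are scored separately; all the
# specific weights and ranges here are assumptions for illustration.

def admissions_score(gpa: float, test_percentile: float,
                     scuga_points: float) -> float:
    """Combine academic factors (grades counting double the test score)
    with SCUGA review points. gpa is on a 0-4 scale, test_percentile on
    0-100, scuga_points whatever the file review awards (assumed 0-1)."""
    academic = (2 * (gpa / 4.0) + test_percentile / 100.0) / 3.0
    return academic + scuga_points

# With grades weighted double, a stronger transcript outweighs an equally
# large edge on the test:
print(admissions_score(4.0, 60, 0.0))   # high grades, middling test score
print(admissions_score(3.0, 100, 0.0))  # lower grades, perfect test score
```

The design choice the weighting encodes is exactly the one Swanson and Peterson describe: the test score can tip a decision, but it cannot overcome the transcript.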

The University of Michigan admissions office has done no
predictability studies of how well the SAT and ACT predict success in
college because, Peterson says, “So much national research has been
done that shows they are not good predictors that the admissions
officers don’t give them much weight.”

AACRAO’s Swanson says that the “one thing that tends to increase
the use of test scores are those students with less traditional
schooling — home-schooled students, for example, or students from
school systems with performance-based assessment.”

This is an ironic twist for people and organizations like FairTest,
which have long pushed for more “authentic” performance-based
assessments such as that adopted in Wisconsin — where students leave
high school with thick folders filled with projects, papers, and long,
written assessments by themselves, their teachers and their parents.
Colleges don’t always know what to do with all that information and,
Swanson says, they sometimes look to SAT and ACT scores in those cases.

If that’s the result, says FairTest’s Rooney, “That puts us in a
bind. That kind of undoes the benefits of using performance assessment
at the high school.”

But that gets at the reason SATs and ACTs — despite all their
limitations — are still used, even if sparingly, by admissions
offices. They give a number.

Burnie Bond, of the American Federation of Teachers, which has long
pushed for national standards, puts it this way: “It’s not that they’re
so good, it’s that there’s nothing else to use that is reliably
constant — not reliably good, but reliably constant.”

As FairTest’s Rooney says, “It’s not as if we had a national curriculum or national standards that we could test against.”

RELATED ARTICLE: The SAT and the ACT claim to be:

(A) measures of academic achievement by students

(B) predictors of whether students will graduate from college

(C) reliable predictors of college performance, race notwithstanding

(D) measures of academic rigor of local school systems

(E) none of the above

RELATED ARTICLE: Earning Points With AP

In contrast to the jaundiced view admissions officers have of the
SAT and ACT, they are very impressed by Advanced Placement courses
and exams.

Even if students do not take Advanced Placement tests, if they take
the courses in high school, “That’s a real positive sign,” says Roger
Swanson, the associate executive director of the American Association
of Collegiate Registrars and Admissions Officers (AACRAO). “Most
admissions people think that is special.”

When he was an admissions officer, Swanson says, “We would give an
extra grade point for AP.” So, for example, if a student got a 3.5 in
an AP class, the admissions office would give the student a 4.5.
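The bonus Swanson describes is simple arithmetic. A minimal sketch, using his 3.5-to-4.5 example; the article mentions no cap on the boosted value, so none is applied here:

```python
AP_BONUS = 1.0  # the extra grade point Swanson says his office awarded


def weighted_grade(grade_points: float, is_ap: bool) -> float:
    """Return the grade points for one course, adding the AP bonus when the
    course is Advanced Placement. No cap is applied; the article does not
    mention one."""
    return grade_points + AP_BONUS if is_ap else grade_points


print(weighted_grade(3.5, is_ap=True))   # the article's example: 3.5 becomes 4.5
print(weighted_grade(3.5, is_ap=False))  # non-AP grade is unchanged
```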

The Advanced Placement exams, which are administered by The College
Board, are different from the ACT and SAT in that they test mastery of
Advanced Placement courses, which have a very specific curriculum and
standards. In contrast, the ACT and SAT pride themselves on not being
tied to any specific curricula but being tests of reasoning abilities.

Because Advanced Placement classes are so well regarded, it is all
the more significant that few African American students take them.

Wade Curry, head of Advanced Placement at The College Board, says
that most school systems offer the classes. But, he said, the schools
that don’t offer them fall into four categories: small religious
schools; small rural schools; schools in Wyoming, the Dakotas, and a
few other states; and, most important for African Americans and
Latinos, large urban schools — particularly if they feed into academic
magnet schools. Magnet high schools — such as Whitney Young Jr. in
Chicago and A. Philip Randolph in Harlem — draw off the most
academically inclined students and produce fairly large numbers of AP
scholars. But the schools they draw from then tend not to offer the
courses, leaving those students behind.

Curry said that as he has gone through the data for this year’s
freshman profile, he has found that African American students who do
well on the AP tests tend to be either in the urban magnets or in
predominantly White, suburban school systems where there are between
five and twenty African American students who take the courses.

“That’s where achievement seems to be centered,” Curry said.

For example, he said, at Quince Orchard High School in suburban
Montgomery County, Maryland, 82 percent of the African American
students who took AP exams received a three or better (on a scale where
five is the highest possible score). This contrasts with a national
average of 65 percent.

[ILLUSTRATIONS OMITTED]

RELATED ARTICLE: College Board Develops New SATII Biology

NEW YORK — Because of changes in high school biology curricula and
the emphasis on applied reasoning skills, the College Board has
developed a new SAT II subject test in biology.

“The new exam responds to the continually expanding biology
curriculum in American high schools — particularly the dual emphasis
on ecological and molecular approaches,” said Donald M. Stewart,
president of the College Board. “Although we have offered a biology
exam for several years, the new test reflects shifts in the teaching of
biology, with greater emphasis on fundamental concepts rather than just
collections of facts.”

The new exam, SAT II: Biology E/M, allows students to choose an
ecological emphasis or a molecular emphasis in the same test, giving
them the opportunity to take the test for which they feel better
prepared. The test, which took 18 months of research and development,
contains eighty questions — sixty of which are common to both forms of
the test, and twenty that emphasize either ecological or molecular
biology.

“This new test will put greater focus on scientific reasoning and
less on memorization of facts,” said J. Jose Bonner, chair of the
test’s development committee. “The questions will allow students to
analyze and interpret data from hypothetical experiments, and they
represent a whole new effort to measure how students will apply the
subject matter to concrete situations.”

COPYRIGHT 1997 Cox, Matthews & Associates


