Before You Exit

States are increasingly turning to standardized testing to hold colleges accountable for student outcomes.
By Charles Dervarics

Dr. Walter M. Kimbrough knows he’s in the minority on the issue of testing. As president of Philander Smith College, a public, historically Black college in Little Rock, Ark., he is familiar with all the data showing that students of color often score below other students on key assessment tests.

Despite reservations that testing could serve as a barrier to college enrollment or graduation, Kimbrough says assessment and accountability measures — when used properly — are important barometers to show the strengths and weaknesses of a postsecondary institution.

Arkansas law requires Kimbrough and other public college administrators to assess college sophomores to gauge student progress. It’s a requirement he also fulfilled previously as an administrator in Georgia, where public college students must take a Regents’ Test to graduate.

“Everybody is afraid of the assessment piece. But there has to be more accountability. If we don’t measure, we don’t really know how we’re doing,” Kimbrough told Diverse. “It’s a good conversation to have, and it’s nothing to be afraid of.”

His views appear to reflect the sentiment of a growing number of policy-makers these days, since many signs point to a new national focus on testing and assessment in higher education. State lawmakers increasingly want accountability measures to evaluate public investments in higher education, and these issues also are front and center for U.S. Secretary of Education Margaret Spellings’ National Commission on the Future of Higher Education.

Created by the Bush administration to review postsecondary education, the commission in a draft report said states should require colleges to assess student learning. Colleges should also go public with results, so students, parents and other taxpayers can determine the value of higher education, the commission says. A final report is due in September.

Groups such as the Educational Testing Service are also calling for new action to measure what students actually learn during their undergraduate careers. “We need to start addressing what student competencies are when they’ve arrived and what they are after they’ve departed college,” says Carol Dwyer, co-author of an ETS study seeking a national framework that goes beyond grades to assess student learning.

Currently, about half of the states require public colleges to conduct some type of assessment or accountability measure, says Peter Ewell, vice president of the National Center for Higher Education Management Systems in Boulder, Colo. Many requirements are basic surveys of student engagement or satisfaction, through which colleges collect feedback on their programs. But since the mid-1990s, a small but growing number of states have asked colleges to do more intensive data gathering.

Assessments generally fall into two categories: high-stakes tests, which may affect the progress of individual students, and low-stakes assessments, which focus less on students and more on college quality. According to Ewell, policy-makers should expect more public attention on these issues. “There’s a rising interest due to Spellings’ commission,” he says.

Marking Individual Students?
One of the oldest and best-known assessments for individual students is the Georgia Regents’ Test, which students must pass to graduate from a public college in the state. Students generally take the reading and writing exam in their sophomore year. If they do not pass, they take remedial instruction before additional attempts.

Data from 2003-2004 show that minority passage rates on the Regents’ Test generally lagged behind those of Whites. The differences were most stark among African-Americans, of whom only 66 percent passed the test. The rate for White students was 85 percent, based on data supplied by the state board of regents.

Test scores at Georgia’s HBCUs varied widely. Among sophomores with at least 45 credit hours, the pass rate at Savannah State University was 66 percent. But the rates at Albany State University and Fort Valley State University were 49 percent and 46 percent, respectively.

“A lot of our institutions work very hard with minority students,” says Dr. Frank Butler, vice chancellor for academic, faculty and student affairs at the Board of Regents for Georgia’s public colleges. Each campus in the state system has a retention and graduation initiative focused on at-risk students. “We’re trying to get to the bottom of this issue,” he says.

Despite his students’ performance, Dr. Everette Freeman, president of Albany State, does not disparage the test. “It’s a challenge to our students, but not a barrier,” he says. “It’s been a very helpful diagnostic tool to understand where our students are.”

The biggest challenge, Freeman says, is that HBCUs must cure students of their “stage fright” about tests. “Because of cultural biases over time, the war stories are passed down generation to generation,” he says. But, he says, the college is “aggressive” about preparing students so they can succeed on the test.

Freeman is no fan of some exams —  “tests must be sensitive to the cultural background of students,” he says — but he notes that they are already a major part of the education landscape. Only by passing exams, he says, can students enter key professions such as teaching and health care. He adds that he takes pride in the university’s recent 100 percent pass rate for students on the state’s licensing exam for nursing. “We have to prepare students for these tests,” he says.

Florida: Changing the Rules
While some states are embracing standardized tests as a way to hold colleges accountable, Florida is backing away from its required assessment because of its impact on minorities. The state has long required students going into their junior year to take the College-Level Academic Skills Test before moving on to higher-level courses.

“Some colleges felt the test was more of a barrier to minority students rather than a method for quality control,” says Dr. Patricia Windham, associate vice chancellor for evaluation for the state’s community college system. Two-year colleges were among the most vocal opponents, particularly Miami Dade College, she says. The test began partly out of concern that community college students had not taken rigorous courses, Windham says, “but I’ve never seen anything to support that notion.”

With an essay section plus a focus on language arts, reading and math, the test once was standard for students pursuing a four-year degree. But today, relatively few students actually take the exam, since the state offers exemptions based on grades and SAT or ACT scores. Students are exempt with a score of at least 500 on the verbal and math sections of the SAT or a 2.5 GPA in two English and two math courses. Fewer than 1 percent of the state’s approximately 650,000 two- and four-year college students take the exam, Windham says.

She says she recognizes the value of assessments, although they are difficult to do well in a college environment. “It’s hard to determine what everyone should know,” she says, beyond basic computation and communication skills. “Once you get past the introductory courses, you want students to specialize in one area.”

Long-time critics of standardized testing agree. “There is no good data about the fairness and accuracy of these measures,” says Robert Schaeffer, the public education director for the Cambridge, Mass.-based FairTest, which opposes many standardized tests in education. While differences by race are common, he says, another challenge is creating assessments that are viable across different academic disciplines and colleges with different missions.

“People are looking for a one-size-fits-all system that can be applied to an art history major and a nuclear engineering major and still get meaningful results,” Schaeffer says. Given the increased testing at the K-12 level under the No Child Left Behind Act, it’s not surprising that higher education is taking greater interest in broad assessments. Yet, he adds, “These may be tests in search of a problem.”

Minority students who have endured varied high- and low-stakes testing also voice caution. “The literature is pretty clear that tests can be biased,” says Ivan Turnipseed, president of the National Black Graduate Student Association. A Ph.D. candidate and graduate teaching assistant at the University of Nevada-Las Vegas, Turnipseed says colleges need to use many different measures — including traditional grades — to assess student learning. “I’m completely opposed to tests required for graduation,” he says.

The Future Outlook
When the Bush administration’s higher education commission talks about testing, few experts envision a national high-stakes exam that may deny access to certain courses or a diploma.

While students might receive a standardized “test,” it likely would be used primarily to evaluate the quality of individual colleges and universities.

“We strongly believe that a testing regime is not appropriate for America’s higher education system,” says Dr. Roger Benjamin, president of the Council for Aid to Education, in testimony to the commission. But his group has helped develop the Collegiate Learning Assessment, an exam that focuses on institutional effectiveness. Under a new partnership, about three dozen small liberal arts colleges are giving the assessment to 100 of their freshmen and comparing the results with those of 100 of their seniors. The goal is to determine the “value added” by four years of college. In the future, colleges could conduct longitudinal studies that follow a particular cohort of students from entry to graduation.

Another option is the Collegiate Assessment of Academic Proficiency, developed by ACT. About 340 two- and four-year institutions use CAAP, which covers math, science, reading and writing. Students generally take the exam at the end of their sophomore year or beginning of their junior year, says David Chadima, ACT consultant for postsecondary assessment.

States and colleges can use CAAP as a high-stakes or low-stakes test, Chadima says. For example, both Arkansas and South Dakota use the assessment to meet state accountability mandates. About 90 percent of South Dakota students pass the test on the first try, though a small number may be denied re-admission for repeatedly failing the exam, according to a state spokeswoman. In Arkansas, colleges can test all students or only a sample, says Ron Harrell, associate director of planning and accountability at the Arkansas Department of Higher Education.

“It’s a low-stakes test,” Harrell says. “Students are not held back or penalized.” Instead, results go to individual institutions so they can improve programs and services.

The low-stakes approach is fine with Philander Smith’s Kimbrough, whose college is subject to the Arkansas requirement. “It does a pretty good job of finding out where your students are,” he says. But assessment isn’t the only way to improve higher education, he says, since the system needs more government and corporate funding to help at-risk students. “You can’t have a heavy-handed approach to assessment without more funding,” he says.


The ABCs of Assessments

The use of assessments is increasing in higher education, experts report. Here are a few of the better-known examples:

Collegiate Learning Assessment: Developed by RAND Corp. and the Council for Aid to Education, the assessment focuses on students’ writing and problem-solving skills. Under a special program, 34 small private colleges are using the CLA to test 100 freshmen and 100 seniors to show what students have learned at college. The assessment got a boost recently with a favorable mention in the draft report of the National Commission on the Future of Higher Education.

Collegiate Assessment of Academic Proficiency: This ACT assessment is used by more than 300 colleges and universities and is a key foundation of accountability systems in Arkansas and South Dakota. Colleges also can compare students’ ACT scores before college with their CAAP scores to determine the “value added” by attending college.

Georgia Regents’ Test: Public college and university students must pass this exam, begun in 1972 to assess whether students are graduating with core reading and writing skills. Based on 2003-2004 data, African-Americans trailed Whites in passing the exam. Many students earn exemptions from the test based largely on SAT scores. Georgia officials note that the test focuses on individual students and is not intended as an accountability measure for universities.

Measure of Academic Proficiency and Progress: This new assessment from the Educational Testing Service measures college-level reading, math, writing and critical thinking. It also merited a mention in the National Commission on the Future of Higher Education’s draft report. Despite that panel’s interest in assessment, most analysts do not expect it to recommend a specific test or measure in its final report.



© Copyright 2005 by DiverseEducation.com
