Rethinking Evaluation Methods for Remedial Education

When it comes to remedial education, too many colleges rely on a testing system that simply ranks students and reveals little else about them. The result can be remedial classes with a wide range of ability, which forces educators to teach to the middle, overshooting the capacity of their weakest students while boring their strongest.

More than one million students in postsecondary institutions enroll in remedial courses annually, with 20 percent of them in reading, 25 percent in writing, and 34 percent in mathematics (McCabe, 2000). In 1995, the National Center for Education Statistics (NCES) reported an increase in the proportion of freshmen enrolled in remedial mathematics, reading and writing courses (NCES, 1995). There is nothing in NCES statistics today to suggest that the above numbers have changed significantly. NCES also said community colleges are more likely to offer remedial courses, but public four-year institutions are also significant providers of remedial education. NCES did not provide the percentage of remediation offered by each type of institution, but it did say that the percentage of institutions offering remedial education declined from 71 percent to 68 percent (NCES, 2003). An important distinction to make, however, is that this decline does not suggest that the number of students requiring remediation declined.

The decline noted above in the number of institutions offering remedial education is marginal. The real question is what happens to students once they reach college and take an institution's admissions test for placement. If the college gives a norm-referenced test, the primary result is a ranking, accompanied by only a marginal assessment of the student's academic ability. In other words, a norm-referenced test tells the college very little about what a student actually knows.

According to Hoyt and Sorensen (2001), depending on grading practices, norm-referenced test results are subject to error. Criterion-referenced tests, on the other hand, may be a better measure of a specific ability, such as mathematics. Conducted years earlier, Casazza and Silverman's (1996) research aligns with Hoyt and Sorensen (2001) in claiming that criterion-referenced tests may be better, because these tests are designed to show what a student has mastered according to a set of standards. Casazza and Silverman (1996) also concluded that the choice between a criterion-referenced and a norm-referenced test comes down to what an institution wants to know about a student. What is the purpose of a placement test? Is it to compare students with one another? Is it to predict the likelihood of a grade, or is it to determine the extent to which a student has mastered specific content according to a set of standards for placement (Casazza & Silverman, 1996)?

The use of criterion-referenced testing would serve students and institutions much better, particularly those students who test into remediation. Most criterion-referenced tests provide a system-generated diagnostic detailing a student's academic deficiencies, allowing for a more targeted instructional intervention. Such a detailed diagnostic is crucial, because even among students who test into remediation there is considerable skill diversity. Some remedial or developmental education students need only a review of grammar or math, whereas others need a much more comprehensive intervention because they never mastered the skill(s) in high school. A criterion-referenced test would tell the institution the exact skill level at which a student is performing, including grade level, allowing for a much more homogeneous grouping of students.

There are multiple benefits to grouping remedial or developmental education students by grade and/or skill level. First, it allows the instructor to teach to one level specifically, instead of teaching to the middle and hoping those above and below it will “understand” the lesson. This homogeneous grouping allows for a targeted intervention and creates less frustration for remedial education (RE) students. It differs from the typical college class, where the skill level of students ranges from average to excellent, and where excellent students become frustrated because the professor is teaching to the middle to ensure that the average student “gets it.” Meanwhile, the excellent student has to cool their jets and wait for other students to catch up.

A second benefit of homogeneous grouping is increased retention, with less attrition caused by student frustration with instruction. Every time an RE student drops out of college, an institution's budget is affected. If enough RE students drop out of an institution, a fiscal crisis may be looming. Today, given rising tuition and competition for enrollments, it is to a college's benefit to create strategies that keep RE students in school.

One reason many colleges don’t engage in a secondary level of placement testing is money. It costs money to retest students who have been deemed remedial by a norm-referenced test. Whether an institution invests in a secondary level of testing for homogeneous placement really speaks to its values: where a college places its money shows what it truly values. Remedial education, however, is not high on the list of some colleges’ institutional priorities. Forward-thinking colleges will see the value of homogeneous grouping and secondary testing and adopt the practice, for it benefits not only their bottom line but also their students.