Does Performance Funding Work for Higher Ed?
By Henry Kronk
November 02, 2018
While the U.S. routinely ranks numerous universities among the best in the world, the system of higher education as a whole still has many problems to solve. Less than 60% of four-year public college students, for example, complete their degree within six years. To fix this problem and numerous others, many states have tested out performance-based funding. While states normally allocate appropriations for public higher education on a per-student basis, some have tried switching this up. Instead of paying for the number of students an institution gets in the door, performance funding allocates resources based on outcomes: whether students complete their degrees, whether institutions graduate learners in critical fields, whether they make progress on equity, and more. In a report for the centrist think tank Third Way, Amy Li, an assistant professor at the University of Northern Colorado, surveyed the literature examining performance funding. The report was published on Tuesday.
To begin, performance funding seeks to address a very real problem. The historical system—where public two-year and four-year universities typically receive state funds based on number of students enrolled—incentivizes a meat grinder effect.
“Colleges would readily accept state funding based on ‘seats in the classroom,’ but face no consequences if students failed or withdrew from the class or dropped out completely,” Li writes. “Performance funding was deemed the solution to a funding mechanism that failed to reward colleges for actually graduating students. In particular, states with lower rates of postsecondary educational attainment are more likely to pursue performance funding to try and increase those rates.”
Li found that 46 states either have performance funding in place to some degree or are currently considering it. Among its iterations, the share of funding tied to performance can range from 2% in states like Hawaii to virtually 100% in Nevada and Ohio. State policies also often set a different percentage of funding, or ‘dosage’ as Li calls it, for two-year versus four-year programs.
But does performance funding work in higher education? That depends on what you’re hoping to achieve.
“Overwhelmingly, the empirical research on performance funding suggests that in most current iterations at the state level, the policy fails to improve degree completions and graduation rates,” Li writes. “At four-year institutions subject to performance funding, bachelor’s degree completions and graduation rates did not improve after the introduction of a performance funding policy.”
Just one report Li found proved the exception to this rule among four-year programs. (It looked at Tennessee institutions between 2011 and 2013.) But that’s not the extent of the bad news.
“At two-year colleges, performance funding generally fails to produce increases in associate degree completions, and in some cases, the policy produces declines in degree attainment,” Li writes. “This finding is consistent among studies of Washington, Ohio, and Tennessee. Multi-state studies also find no improvements in associate degree completion. Even among policies that give a higher ‘dose’ of performance-based funds (defined as over 5%), the policy appears more likely to result in declines in associate degrees than increases.”
Adopting the method to boost outcomes can have further unintended consequences. It can lead two-year programs to push more students toward certificates instead of degrees. It can also cause four-year public institutions to become more selective in their admissions processes. As Li writes, “These practices disproportionally restrict admissions to higher education for students from disadvantaged backgrounds. When institutions raise their admissions criteria in the form of SAT/ACT scores, students of color and low-income students are especially likely to be denied admission.”
The Silver Lining
But while it’s rarely effective at generating the outcomes for which it was designed, performance funding has been found to bring about positive ancillary results.
“Research also suggests that performance funding initiates changes in student service policies, procedures, and programmatic offerings. Institutional changes include increasing academic advising, inviting faculty to engage in advising, using data analytics to predict dropout rates, and using “intrusive advising” to reach out to students who appear at risk of dropping out,” Li writes. “Additional institutional responses consist of adding resources for in-person and online tutoring, improving upon first-year orientation programs and developing new first-year programming, eliminating fees for milestones such as applying for graduation, and increasing connections to employers to offer internships and improve job placement. Overall, performance funding catalyzes positive institutional actions to prioritize student outcomes, but these actions have not yet been enough to substantially move the needle on college completion.”
As Li describes it, performance funding serves as a method that could work in a vacuum. But institutions of higher education are subject to pushes and pulls from numerous societal and economic factors.
“Performance funding relies on the assumption that colleges can make changes that can influence student behaviors and outcomes. To some extent, inputs drive outputs, and colleges must play with “the hand they are dealt.” However, we do know that this is only partially true, as some colleges with similar student populations experience different outcomes.”
To conclude, Li does not suggest that administrators and policymakers throw out performance funding altogether. Instead, she argues that corrective measures can make it work. These include pairing outcome-based metrics with incentives for need-based enrollment.
“For example, Tennessee provides an additional 40% of funding for each student who graduates if that student is eligible for Pell Grants (lower-income) or is considered an adult (25 and older). Ohio provides extra funding for Pell-eligible, adult, and students of color. Recent studies examining the impact of how these special incentives—also referred to as premiums, equity metrics, and bonus funding—affect student enrollment and completion have been mixed in part due to variations in policy design. For example, in response to performance funding, two-year colleges in Tennessee appeared to decrease both the number and the proportion of enrolled students who are age 25 and older, yet increase the number and proportion of low-income students.”
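To make the premium mechanism concrete, the sketch below shows how a 40% bonus for Pell-eligible and adult graduates would compound on top of a base per-degree payment. This is a hypothetical illustration only: the base amount, the function name, and the data layout are invented here, and the report does not publish any state's actual funding formula.

```python
# Hypothetical sketch of an outcome-based allocation with equity premiums,
# loosely modeled on the Tennessee example Li describes. All figures are
# illustrative; no state's real formula is reproduced here.
def performance_allocation(graduates, base_per_degree,
                           pell_premium=0.40, adult_premium=0.40):
    """Sum per-degree funding across a graduating cohort, adding a
    premium for Pell-eligible graduates and for adults (age 25+)."""
    total = 0.0
    for g in graduates:
        amount = base_per_degree
        if g.get("pell_eligible"):
            amount += base_per_degree * pell_premium
        if g.get("age", 0) >= 25:
            amount += base_per_degree * adult_premium
        total += amount
    return total

cohort = [
    {"pell_eligible": True, "age": 19},   # base + 40% Pell premium
    {"pell_eligible": False, "age": 30},  # base + 40% adult premium
    {"pell_eligible": True, "age": 27},   # base + both premiums
]
print(performance_allocation(cohort, base_per_degree=1000))  # 4600.0
```

Note that a graduate who is both Pell-eligible and an adult earns both premiums, which is one way a state could weight completions for the students Li says current policies tend to screen out.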
Featured Image: Kolleen Gladden, Unsplash.