
Trends in Student Fees at Public Universities

Of all the research I have done during my time as an assistant professor, my study of student fees draws more questions from journalists and policymakers than any other project. In that study (published in the Review of Higher Education in 2016), I showed trends in student fees at public four-year institutions and examined the institutional-level and state-level factors associated with higher levels of fees. Yet due to the time it takes to write a paper and eventually get it published, the newest data on fees in the paper came from the 2012-13 academic year. In this blog post, I update the trends in fees at public universities for in-state students through the 2016-17 academic year.

Showing trends in student fees is quite a bit harder than it appears because of fee rollbacks—colleges resetting their fees to a lower level and increasing tuition to compensate. Between the 2000-01 and 2016-17 academic years, 89 public universities reset their fees at least once (as measured by decreasing fees by at least $500 while increasing tuition by a larger amount). This includes most public universities in California, Massachusetts, Minnesota, and South Dakota, as well as a smattering of institutions in other states. Universities that reset their fees saw a 115.3% increase in inflation-adjusted tuition and fees since 2000-01 (from $4,286 to $9,228), compared to an 83.7% increase for the 441 universities that did not (from $4,936 to $9,068). Because I cannot consistently separate tuition from fees at the reset colleges, which include some with the largest price increases, I present trends in tuition and fees for the other 441 institutions below.
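For readers who want to replicate this, a minimal sketch of that reset rule is below; the panel file and the unitid/year/tuition/fees column names are my assumptions, not the paper’s actual code.

```python
import pandas as pd

df = pd.read_csv("tuition_fees_panel.csv")  # hypothetical file: one row per college-year
df = df.sort_values(["unitid", "year"])

# Year-over-year changes within each college
df["fee_change"] = df.groupby("unitid")["fees"].diff()
df["tuition_change"] = df.groupby("unitid")["tuition"].diff()

# Reset rule described above: fees drop by at least $500 in one year
# while tuition rises by more than the fees fell
reset = (df["fee_change"] <= -500) & (df["tuition_change"] > -df["fee_change"])
print(df.loc[reset, "unitid"].nunique())  # 89 institutions in the post's 2000-01 to 2016-17 window
```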

The first figure shows average tuition (dashed) and fee (solid) levels for each year from 2000-01 through 2016-17. During this period, tuition increased from $3,999 to $7,183 in inflation-adjusted dollars (a 79.6% increase). Fees went up even faster, rising 106.7% from $912 to $1,885.

The second figure shows student fees as a percentage of overall tuition and fees. This percentage increased from 18.6% in 2000-01 to 20.8% in 2016-17.

This increase in fees is particularly important in conversations about free public college. Many of the policy proposals for free public higher education (such as the Excelsior Scholarship in New York) only cover tuition—and thus give states an incentive to encourage colleges to increase their fees while holding the line on tuition. It’s also unclear whether students and their families look at fees in the college search process in the same way they look at tuition, meaning that growing fee levels could surprise students when the first bills come due. More research needs to be done on how students and their families perceive fees.

A Peek Inside the New IPEDS Outcome Measures Dataset

Much of higher education policy focuses on “traditional” college students—those who started college at age 18 after getting dropped off in the family station wagon or minivan, enrolled full-time, and stayed at the same institution until graduation. Yet although this is how many policymakers and academics experienced college (I’m no exception), these students now make up a minority of American higher education. Higher education data systems have often followed this mold, with the U.S. Department of Education’s Integrated Postsecondary Education Data System (IPEDS) collecting some key success and financial aid metrics for first-time, full-time students only.

As a result of the 1990 Student Right-to-Know Act, all colleges were required to start compiling graduation rates for first-time, full-time students (and disclosing them upon request), and a smaller group of colleges was also required to collect transfer-out rates. Colleges were then required to submit these data to IPEDS for students who began college in the 1996-97 academic year so the information would be available to the public. This was a step forward for transparency, but it did little to accurately represent community colleges and less-selective four-year institutions. Voluntary efforts such as the Student Achievement Measure have since developed to provide completion rates for part-time and transfer students, and these data show that IPEDS significantly understates overall completion rates even among students who initially fit the first-time, full-time definition.

After years of technical review panels and discussions about how best to collect data on part-time and non-first-time students, along with a one-year delay to “address data quality issues,” the National Center for Education Statistics released the first year of the new Outcome Measures survey via College Navigator earlier this week. The survey covers students who began college in 2008 and were tracked for up to eight years. Although the data won’t be easily downloadable via the IPEDS Data Center until mid-October, I pulled up data on six New Jersey colleges (two community colleges, two public four-year colleges, and two private nonprofit colleges) to show the advantages of more complete outcomes data.

Examples of IPEDS Outcome Measures survey data, 2008 entering cohort.
| Institution and cohort | 6-year grad rate | 8-year grad rate | Still enrolled within 8 years | Enrolled elsewhere within 8 years |
| --- | --- | --- | --- | --- |
| Community colleges | | | | |
| Atlantic Cape Community College | | | | |
| First-time, full-time | 26% | 28% | 3% | 27% |
| Not first-time, but full-time | 41% | 45% | 0% | 29% |
| First-time, part-time | 12% | 14% | 5% | 20% |
| Not first-time, but part-time | 23% | 26% | 0% | 38% |
| Brookdale Community College | | | | |
| First-time, full-time | 33% | 35% | 3% | 24% |
| Not first-time, but full-time | 36% | 39% | 2% | 33% |
| First-time, part-time | 17% | 18% | 3% | 25% |
| Not first-time, but part-time | 25% | 28% | 0% | 28% |
| Public four-year colleges | | | | |
| Rowan University | | | | |
| First-time, full-time | 64% | 66% | 0% | 20% |
| Not first-time, but full-time | 82% | 82% | 1% | 7% |
| First-time, part-time | 17% | 17% | 0% | 0% |
| Not first-time, but part-time | 49% | 52% | 5% | 21% |
| Thomas Edison State University | | | | |
| Not first-time, but part-time | 42% | 44% | 3% | 29% |
| Private nonprofit colleges | | | | |
| Centenary University of NJ | | | | |
| First-time, full-time | 61% | 62% | 0% | 4% |
| Seton Hall University | | | | |
| First-time, full-time | 66% | 68% | 0% | 24% |
| Not first-time, but full-time | 67% | 68% | 0% | 18% |
| First-time, part-time | 0% | 0% | 33% | 33% |
| Not first-time, but part-time | 38% | 38% | 0% | 38% |

There are several key points that the new data highlight:

(1) A sizable percentage of students enrolled at another college within eight years of enrolling in the initial college. The percentages at the two community colleges in the sample (Atlantic Cape and Brookdale) are roughly similar to the eight-year graduation rates, suggesting that quite a few students are transferring without receiving degrees. These rates are lower in the four-year sector, but still far from trivial.

(2) New colleges show up in the graduation rate data! Thomas Edison State University is well-known for focusing on adult students (it only accepts students age 21 or older), so it has never had a first-time, full-time cohort for the traditional graduation rate. But TESU posted a respectable 42% six-year graduation rate for its part-time students, with another 29% enrolled elsewhere within eight years. On the other hand, residential colleges may have only a first-time, full-time cohort (such as Centenary University) or cohorts of other students too small to be trusted (such as Seton Hall’s tiny cohort of first-time, part-time students).

(3) Students who are not first-time graduate at rates similar to or higher than first-time students. To some extent, this is not surprising, as these students enter with credits already earned. For example, at Rowan University, 82% of transfer students who entered full-time graduated within six years, compared to 64% of first-time students.

(4) Institutional graduation rates don’t change much after six years. Among these six colleges, graduation rates rose by less than five percentage points between six and eight years, and few students were still enrolled after eight years. It will be important to see whether this holds across the full dataset, but it suggests that six-year graduation rates are fairly reasonable metrics.

Once the full dataset is available in October, I’ll return to analyze broader trends in the Outcome Measures data. But for now, take a look at a few colleges and enjoy a sneak peek into the new data!

Beware OPEIDs and Super OPEIDs

In higher education discussions, everyone wants to know how a particular college or university performs across a range of metrics. For metrics such as graduation rates and enrollment levels, this isn’t a big problem: each freestanding college (typically meaning one with its own accreditation and institutional governance structure) must report this information to the U.S. Department of Education’s Integrated Postsecondary Education Data System (IPEDS) each year. But other metrics are more challenging to use and interpret because they can cover multiple campuses—something I dig into in this post.

In the 2015-16 academic year, 7,409 individual colleges (excluding administrative offices) in the 50 states and Washington, DC reported data to IPEDS and were uniquely identified by a UnitID number. A common mistake analysts make is to assume that every federal higher education (or even IPEDS) data metric represents just one UnitID, but that is not always the case. Enter researchers’ longtime nemesis—the OPEID.

OPEIDs are assigned by the U.S. Department of Education’s Office of Postsecondary Education (OPE) to reflect each postsecondary institution that has a program participation agreement to participate in federal student aid programs. However, some colleges within a system of higher education share a program participation agreement, in which one parent institution has a number of child institutions for financial aid purposes.

Parent/child relationships can generally be identified using OPEID codes: parent institutions typically have OPEIDs ending in “00,” while child institutions’ OPEIDs typically end in another value. These reporting relationships are fairly prevalent; based on OPEID values, there were approximately 5,744 parent and 1,665 child institutions in IPEDS in the 2015-16 academic year. For-profit college chains typically report using parent/child relationships, and a number of public college and university systems also aggregate institutional data to the OPEID level. For example, Penn State and Rutgers have parent/child relationships, while the University of Missouri and the University of Wisconsin do not.
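As a rough illustration, here is how one might flag parent and child institutions in an IPEDS directory file using that “00” convention. The file and column names follow the IPEDS HD survey layout, but treat the exact details as assumptions.

```python
import pandas as pd

# IPEDS institutional characteristics (HD) directory file
hd = pd.read_csv("hd2015.csv", dtype={"OPEID": str})

# OPEIDs are eight digits once zero-padded; parent campuses conventionally
# end in "00", while child campuses carry some other two-digit suffix.
hd["OPEID"] = hd["OPEID"].str.strip().str.zfill(8)
hd["is_parent"] = hd["OPEID"].str.endswith("00")

print(hd["is_parent"].value_counts())  # compare to the ~5,744 parent / 1,665 child split
```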

In a parent/child relationship, all data that come from the Office of Federal Student Aid or the National Student Loan Data System are aggregated across the member colleges. This includes all data on student loan repayment rates, earnings, and debt in the College Scorecard, as well as the student loan default rates currently used for accountability purposes. Additionally, some colleges report finance data at the OPEID level on a seemingly chaotic basis—something that can only be discovered by combing through the data to see whether child institutions are missing values. For example, Penn State always reports at the parent level, while Rutgers has reported at the parent level on some occasions and the child level on others over the last 15 years. Ozan Jaquette and Edna Parra have shown in some great research that failing to address parent/child issues can make estimates from IPEDS or Delta Cost Project data inaccurate (although trend data are generally reasonable).

If UnitIDs and OPEIDs were not enough, the Equality of Opportunity Project (EOP) dataset added a new term—super-OPEIDs—to researchers’ jargon. This innovative dataset, compiled by economists Raj Chetty, John Friedman, and Nathaniel Hendren, uses federal income tax records to construct social mobility metrics for 2,461 institutions of higher education based on pre-college family income and post-college student income. (I used this dataset last month in a blog post looking at variations in marriage rates across four-year colleges.) However, the limitation of this approach is that the researchers have to rely on the names of the institutions on tax forms, which are sometimes aggregated beyond UnitIDs or OPEIDs. Hence, the super-OPEID.

The researchers helpfully included a flag for super-OPEIDs that combined multiple OPEIDs (the variable name is “multi” in the dataset, for those playing along at home). There are 96 super-OPEIDs that have this multiple-OPEID flag, including a number of states’ public university systems. The full list can be found in this spreadsheet, but I wanted to pull out some of the most interesting pairings. Here are a few:

–Arizona State And Northern Arizona University And University Of Arizona

–University Of Maryland System (Except University College) And Baltimore City Community College

–Minnesota State University System, Century And Various Other Minnesota Community Colleges

–SUNY Upstate Medical University And SUNY College Of Environmental Science And Forestry

–Certain Colorado Community Colleges

To get an idea of how many colleges (as measured by UnitIDs) have their own super-OPEID, I examined the number of colleges that did not have a multiple-OPEID flag in the EOP data and did not have any child institutions based on their OPEID. The result: 2,143 colleges have their own UnitID, OPEID, and super-OPEID—meaning that none of their data across these sources are combined with other institutions. (This number would likely be higher if all colleges appeared in the EOP data, but some institutions were either too new or too small to be included.)
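A hedged sketch of that calculation is below. Only the “multi” variable name comes from the dataset itself; the EOP file name and the merge keys are my guesses, so treat this as an outline rather than the exact procedure.

```python
import pandas as pd

# IPEDS directory: pad OPEIDs and find colleges that share a six-digit stem,
# i.e., belong to a parent/child group (file and column names are assumptions)
hd = pd.read_csv("hd2015.csv", dtype={"OPEID": str})
hd["opeid6"] = hd["OPEID"].str.strip().str.zfill(8).str[:6]
hd["in_group"] = hd.groupby("opeid6")["UNITID"].transform("size") > 1

# EOP public college-level file; "multi" flags super-OPEIDs that combine
# multiple OPEIDs (file name and key are assumptions)
eop = pd.read_csv("eop_college_level.csv")
eop["opeid6"] = eop["opeid"].astype(str).str.zfill(6)

standalone = hd[~hd["in_group"]].merge(
    eop.loc[eop["multi"] == 0, ["opeid6"]], on="opeid6", how="inner"
)
print(standalone["UNITID"].nunique())  # 2,143 in the author's count
```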

I want to close by noting the limitations of both the EOP and Federal Student Aid/College Scorecard data for analytic purposes, as well as the importance of the wonky terms UnitID, OPEID, and super-OPEID. Analysts should carefully note when data are aggregated across separate UnitIDs (particularly when different types of colleges are combined) and consider omitting colleges where aggregation across OPEIDs or super-OPEIDs is a larger concern.

For example, earnings data from the College Scorecard are fine for the University of Maryland-College Park (as they reflect only that campus), but its social mobility data fold in a number of other institutions. Users of these data sources should also describe their aggregation decisions in their methods discussions in enough detail that others could replicate them.

Thanks to Sherman Dorn at Arizona State University for inspiring this blog post via Twitter.

How Acela Corridor Educational Norms Look to an Outsider

Education policy discussions in the United States tend to be dominated by people living in the Acela Corridor—the densely-populated, highly-educated, and high-income portion of the United States that is served by Amtrak’s fast train from Boston to Washington, DC. Since moving from the Midwest to New Jersey four years ago to start on the tenure track at Seton Hall, I have probably logged 50 trips to Washington via Amtrak’s slower (and less-expensive) Northeast Regional train. (It sure beats driving, and Amtrak’s quiet car is a delight!)

Many of the suburban communities in northern New Jersey have median household incomes of well over $100,000 per year, which is roughly the top 20% of American families. The top 20% is notable because that is the cutoff that the Brookings Institution’s Richard Reeves uses in his new book, Dream Hoarders, to highlight how upper-income individuals have taken steps to make sure their children have every opportunity possible—typically at the expense of other families. The sheer concentration of high-income families within much of the Acela Corridor has created a powerful set of social norms regarding education that can leave outsiders flabbergasted.

Yet despite having two parents with bachelor’s degrees, holding a PhD in education, and being one half of a two-income professional household, I find myself confused by a number of practices that are at least somewhat common in the Acela Corridor but not in other parts of the country. This was highlighted by a piece in Sunday’s New York Times on affirmative action, in which the reporter spoke with two students at private boarding schools in New Jersey, of which there are apparently a fair number. My first reaction, as a small-town Midwesterner, was rather different from what many of my peers likely thought.

Here are some other things that have surprised me in my interactions with higher-income families in the Acela Corridor:

  • K-12 school choice debates. Unlike some people in the education world, I don’t have any general philosophical objections to charter schools. But in order for school choice to work (barring online options), there needs to be a certain population density. This is fine in urban and suburban areas, but not so great in rural areas where one high school may draw from an entire county. A number of Republican senators from rural states have raised concerns about school choice as a solution for this reason.
  • SAT/ACT test preparation. I attended a small-town public high school with about 200 students in my graduating class. The focus there was to get students to take the ACT (the dominant test in America as a whole, with the coasts being the exception), while also encouraging students to take the PLAN and PSAT examinations. But I never saw a sign advertising ACT prep services, nor was I even aware that test prep was a thing people did. (I took the practice ACT that came with the exam the night before the test—that was it.) In the Northeast, there seem to be more signs on the side of the road advertising test prep than any other product or service.
  • The college admissions process. Going to a four-year college is the expectation for higher-income families in the Acela Corridor, and families treat the college choice process as being incredibly important. Using private college counselors to help manage the process, which often includes applying to ten or more colleges, is not uncommon. A high percentage of students also leave the state for college, which is quite expensive. (In New Jersey, about 37% of high school graduates head to other states to attend college.) Meanwhile, in much of the country, the goal is to get students to attend college at all rather than to get students to attend a slightly more prestigious institution. I can think of just one of my high school classmates who went out of state, and a large percentage of the class did not attend college immediately after high school.
  • Private tutoring while in college. I supplemented my income in graduate school by tutoring students in economics, typically charging between $25 and $40 per hour to meet with one or two students to help them prepare for exams. (I paid for an engagement ring using tutoring income!) I was never aware of anyone paying for private tutoring when I was an undergraduate at Truman State University, but this was a common practice at the University of Wisconsin-Madison. Nearly all of these students came from the suburbs of New York City or Washington, DC and were used to receiving private tutoring throughout their education. I got very few tutoring requests from in-state students, but they were typically paying for their own college (and thus got a substantial discount from my normal rates).

I worry about education policy discussions being dominated by Acela Corridor regulars because their experiences are so different from how most Americans experience both K-12 and higher education. If education committee staffers, academic researchers, and think tankers all share similar backgrounds, the resulting policy decisions may not reflect the needs of rural and urban lower-income individuals. It is important to seek out people from other walks of life to make sure policies are best for all Americans.

Not-so-Free College and the Disappointment Effect

One of the most appealing aspects of tuition-free higher education proposals is that they convey a simple message about higher education affordability. Although students will need to come up with a substantial amount of money to cover textbooks, fees, and living expenses, one key expense will be covered if students hold up their end of the bargain. That is why the results of existing private-sector college promise programs are generally promising, as shown in this policy brief that I wrote for my friends at the Midwestern Higher Education Compact.

But free college programs in the public sector often come with a key limitation—the amount of money that the state has to fund the program in a given year. Tennessee largely avoided this concern by endowing the Tennessee Promise program through lottery funds, and the program appears to be in good financial shape at this point. However, two other states are finding that available funds are insufficient to meet program demand.

  • Oregon will provide only $40 million of the $48 million needed to fund its nearly tuition-free community college program (which requires a $50 student copay). As a result, the state will eliminate grants to the 15% to 20% of students with the highest expected family contributions (a very rough proxy for ability to pay).
  • New York received 75,000 completed applications for its tuition-free public college program, yet still only expects to give out 23,000 scholarships. Some of this dropoff may be due to students attending other colleges, but other students are probably still counting on the money.

In both states, a number of students who expected to receive state grant aid will not get any money. While rationing of state aid dollars is nothing new (many states’ aid programs are first-come, first-served), advertising tuition-free college and then telling students close to the start of the academic year that they won’t receive grant aid may have negative effects, such as students choosing not to attend college at all or performing worse academically if they do attend. There is a sizable body of literature documenting the “disappointment effect” in other areas, but relatively little in financial aid. There is evidence that losing grant aid can hurt continuing students, yet this work does not separate the effect of losing the money itself from the potential disappointment effect.

The Oregon and New York experiences provide for a great opportunity to test the disappointment effect. Both states could compare students who applied for but did not receive the grant in 2017-18 to similar students in years prior to the free college programs. This would allow for a reasonably clean test of whether the disappointment effect had any implications for college choice and eventual persistence.

Examining Variations in Marriage Rates across Colleges

This piece originally appeared at the Brookings Institution’s Brown Center Chalkboard blog.

Young adulthood is not only the time when most people attend college, but also a time when many marry. In fact, college attendance and marriage are linked and have social and economic consequences for individuals and their families.

When (and if) people get married is an important topic due to the presence of what is known as assortative mating. This phenomenon, in which a person is likely to marry someone with similar characteristics such as education, is a contributing factor to increasing levels of income inequality. In some circles, there is pressure to marry someone with a similar pedigree, as evidenced by the high-profile Princeton alumna who urged women at the university to find a spouse while in college. For people attending less-selective colleges, having the possibility of a second household income represents a key buffer against economic shocks.

In this blog post, I use a tremendous dataset compiled by The Equality of Opportunity Project that is based on deidentified tax records for 48 million Americans who were born between 1980 and 1991. This dataset has gotten a great deal of attention on account of its social mobility index, which examines the percentage of students who move well up in the income distribution by young adulthood.

I use the publicly available dataset to examine marriage rates of traditional-age college students through age 34 based on their primary institution of attendance. In particular, I am curious about the extent to which institutional marriage rates seem to be affected by the institution itself versus the types of students who happen to enroll there. My analyses are based on 820 public and private nonprofit four-year colleges that had marriage rates and other characteristics available at the institutional level. This excludes a number of public universities that reported tax data as a system (such as all four-year institutions in Arizona and Wisconsin).

The first two figures below show the distribution of marriage rates for the 1980-82 and 1989-91 birth cohorts as of 2014 for students who attended public, private religious, and private nonsectarian institutions. Marriage rates for the younger cohort (ages 23 to 25 in 2014) were low, with median rates of 12% at public colleges, 14% at religiously-affiliated colleges, and just 5% at private nonsectarian colleges. For the older cohort (ages 32 to 34), median marriage rates were 59% at public colleges, 65% at religiously-affiliated colleges, and 56% at private nonsectarian colleges.

There is an incredible amount of variation in marriage rates within each of these three types of colleges. In the two figures below, I show the colleges with the five lowest and five highest marriage rates for both cohorts. In the younger cohort (Figure 3), the five colleges with the lowest marriage rates (between 0.9% and 1.5%) are all highly selective liberal arts colleges that send large percentages of their students to graduate school—a factor that tends to delay marriage. At the high end, there are two Brigham Young University campuses (affiliated with the Church of Jesus Christ of Latter-day Saints, widely known as the Mormon church), two public universities in Utah (where students are also predominantly Mormon), and Dordt College in Iowa (affiliated with the Christian Reformed Church). Each of these colleges has at least 43% of students married by ages 23 to 25.

A similar pattern emerges among the high-marriage-rate colleges in the older cohort: four of the five colleges with the highest rates in students’ mid-20s also had marriage rates over 80% in students’ early 30s.

A more fascinating story plays out among the colleges with the lowest marriage rates. The selective liberal arts colleges with the lowest rates in the younger cohort had marriage rates approaching 60% in the older cohort, while the 13 colleges with the lowest rates in the older cohort were all either historically black colleges or institutions with high percentages of African-American students. This aligns with the large gender gap in bachelor’s degree attainment among African-Americans, with women representing nearly 60% of African-American degree completions.

Finally, I examined the extent to which marriage rates were associated with the location of the college and the types of students who attended, as well as whether the college was public, private nonsectarian, or religious. I ran regressions controlling for the factors discussed below as well as the majors of graduates (not shown for brevity). These characteristics explain about 55% of the variation in marriage rates for the younger cohort and 77% of the variation for the older cohort. Although students at religiously-affiliated institutions had higher marriage rates in both cohorts, institutional type explains less than five percent of the overall variation after controlling for other factors. In other words, marriage outcomes across institutions appear to be related mostly to the students a college enrolls, and much less to the institution itself.

Colleges in the Northeast had significantly lower marriage rates in both cohorts than the reference group of colleges in the Midwest, while colleges in the South had somewhat higher marriage rates. The effects of institutional type and region both shrank between the two cohorts, which likely reflects cultural differences in when people marry rather than whether they ever marry.

Race and ethnicity were significant predictors of marriage. Colleges with higher percentages of black or Hispanic students had much lower marriage rates than colleges with more white or Asian students, and the negative relationship between the percentage of black students and marriage rates was much stronger in the older cohort. Colleges with more low-income students had much higher marriage rates in the younger cohort but much lower rates in the older cohort. Less-selective colleges had higher marriage rates for the younger cohort, while colleges with higher student debt burdens had lower rates; neither factor was significant for the older cohort.
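To make the setup concrete, here is a hedged sketch of this kind of college-level regression. The file and variable names are hypothetical, and the actual specification surely differs from this outline.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical analysis file: one row per college, with a marriage rate and
# the kinds of controls described above (sector, region, student composition)
df = pd.read_csv("college_marriage_rates.csv")

model = smf.ols(
    "marriage_rate ~ C(sector) + C(region) + pct_black + pct_hispanic"
    " + pct_low_income + admit_rate + median_debt",
    data=df,
).fit(cov_type="HC1")  # heteroskedasticity-robust standard errors

print(model.rsquared)  # ~0.55 (younger cohort) or ~0.77 (older cohort) in the post
print(model.params)
```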

There has been a lot of discussion in recent years as to whether marriage is being increasingly limited to Americans in the economic elite, both due to the presence of assortative mating and the perception that marriage is something that must wait until the couple is financially secure. The Equality of Opportunity project’s dataset shows large gaps in marriage rates by race/ethnicity and family income by the time former students reach their early 30s, with some colleges serving large percentages of minority and low-income students having fewer than one in three students married by this time.

Yet, this exploratory look suggests that the role of individual colleges in encouraging or discouraging marriage is generally limited, since the location of the institution and the types of students it serves explain most of the difference in marriage rates across colleges.

My 2017 Higher Education Finance Reading List

The middle of July marks the two-thirds point of my academic summer, so I’m getting ready for the fall semester in addition to packing as much research and fun as possible into this wonderful time of year. I am teaching my higher education finance class at Seton Hall University for the fourth time this fall and just posted the syllabus for my students to look at before the semester begins.

Here is the reading list I am assigning my students for the course, which is my best effort to capture the current state of knowledge in higher education finance. I teach students who are primarily administrators and practitioners, so I especially value articles that are clearly-written and explain research methods in a concise manner. I link to the final versions of the articles whenever possible, but those without access to an academic library should note that earlier versions of many of these articles are available online via a quick Google search.

I hope you enjoy the list!


Introduction to higher education finance

Lumina Foundation video on how the federal government distributes financial aid to students: https://www.luminafoundation.org/looking-back-to-move-forward-4

Chetty, R., Friedman, J. N., Saez, E., Turner, N., & Yagan, D. (2017). Mobility report cards: The role of colleges in intergenerational mobility. Working paper. (Also, look at their website for data on how your favorite college fares: http://www.equality-of-opportunity.org/college/.)

Ehrenberg, R. G. (2012). American higher education in transition. Journal of Economic Perspectives, 26(1), 193-216. (link)

Madzelan, D. (2013). The politics of student aid. Washington, DC: American Enterprise Institute. (link)

Schanzenbach, D. W., Bauer, L., & Breitwieser, A. (2017). Eight economic facts on higher education. Washington, DC: The Hamilton Project. (link)

National Center for Education Statistics (2015). IPEDS data center user manual. Washington, DC: Author. (skim as a reference) (link)


Institutional budgeting

Barr, M. J., & McClellan, G. S. (2010). Understanding budgets. In Budgets and financial management in higher education (pp. 55-85). San Francisco, CA: Jossey-Bass. (link)

Varlotta, L. E. (2010). Becoming a leader in university budgeting. New Directions for Student Services, 129, 5-20. (link)

Seton Hall’s FY 2016 Forms 990 and 990-T to the Internal Revenue Service: https://www13.shu.edu/offices/finance/index.cfm

The College of New Jersey’s FY 2016 audited financial statements: https://treasurer.tcnj.edu/files/2016/02/FY2016-Audited-Financials-and-Schedules-of-Federal-State-Awards.pdf

Moody’s credit rating report for The College of New Jersey: https://treasurer.tcnj.edu/files/2016/09/Moodys-TCNJ-Final-Report-8.15.2016.pdf

Information on The College of New Jersey’s budgeting cycle: https://treasurer.tcnj.edu/files/2012/06/FY2018-TCNJ-Strategic-Budget-Planning-Cycle.pdf


Policy analysis and higher education finance

DesJardins, S. L. (2001). Understanding and using efficiency and equity criteria in the study of higher education policy. In J. C. Smart & W. G. Tierney (Eds.), Higher education: Handbook of theory and research, Vol. 17 (pp. 173-220). Norwell, MA: Kluwer Academic Publishers. (link)

Ness, E. C. (2010). The role of information in the policy process: Implications for the examination of research utilization in higher education policy. In J. C. Smart (Ed.), Higher education: Handbook of theory and research, Vol. 25 (pp. 1-49). Dordrecht, The Netherlands: Springer. (link)

Weimer, D. L., & Vining, A. R. (1999). Thinking strategically about adoption and implementation. In Policy analysis: Concepts and practice (3rd ed.) (pp. 382-416). Upper Saddle River, NJ: Prentice-Hall. (link)

Winston, G. C. (1999). Subsidies, hierarchy and peers: The awkward economics of higher education. Journal of Economic Perspectives, 13(1), 13-36. (link)


Higher education expenditures

Altonji, J. G., & Zimmerman, S. D. (2017). The costs of and net returns to college major. Cambridge, MA: National Bureau of Economic Research Working Paper 23029. (link)

Archibald, R. B., & Feldman, D. H. (2008). Explaining increases in higher education costs. The Journal of Higher Education, 79(3), 268-295.

Cheslock, J. J., & Knight, D. B. (2015). Diverging revenues, cascading expenditures, and ensuing subsidies: The unbalanced and growing financial strain of intercollegiate athletics on universities and their students. The Journal of Higher Education, 86(3), 417-447. (link)

Hurlburt, S., & McGarrah, M. (2016). Cost savings or cost shifting? The relationship between part-time contingent faculty and institutional spending. New York, NY: TIAA Institute. (link)

Commonfund Institute (2015). 2015 higher education price index. Wilton, CT: Author. (skim) (link)

Desrochers, D. M., & Hurlburt, S. (2016). Trends in college spending: 2003-2013. Washington, DC: American Institutes for Research. (skim) (link)


Federal sources of revenue

Cellini, S. R. (2010). Financial aid and for-profit colleges: Does aid encourage entry? Journal of Policy Analysis and Management, 29(3), 526-552. (link)

Kirshstein, R. J., & Hurlburt, S. (2012). Revenues: Where does the money come from? Washington, DC: American Institutes for Research. (link)

Pew Charitable Trusts (2015). Federal and state funding of higher education. Washington, DC: Author. (link)

Pew Charitable Trusts (2017). How governments support higher education through the tax code. Washington, DC: Author. (link)

(Note: Later in the semester, I will add a draft paper I’m working on that examines whether law, medical, and business schools responded to a 2006 increase in Grad PLUS loan limits by raising tuition. I’ll have a public draft to share in early November, but I think it’s good for students to see a really rough draft to understand how the research process works.)


State sources of revenue

Chatterji, A. K., Kim, J., & McDevitt, R. C. (2016). School spirit: Legislator school ties and state funding for higher education. Working paper. (link)

Doyle, W., & Zumeta, W. (2014). State-level responses to the access and completion challenge in the new era of austerity. The ANNALS of the American Academy of Political and Social Science, 655, 79-98. (link)

Fitzpatrick, M. D., & Jones, D. (2016). Post-baccalaureate migration and merit-based scholarships. Economics of Education Review, 54, 155-172. (link)

Hillman, N. W. (2016). Why performance-based funding doesn’t work. New York, NY: The Century Foundation. (link)

State Higher Education Executive Officers Association (2017). State higher education finance: FY 2017. Boulder, CO: Author. (skim) (link)


College pricing, tuition revenue, and endowments

Goldrick-Rab, S., & Kendall, N. (2016). The real price of college. New York, NY: The Century Foundation. (link)

Jaquette, O., Curs, B. R., & Posselt, J. R. (2016). Tuition rich, mission poor: Nonresident enrollment growth and the socioeconomic and racial composition of public research universities. Journal of Higher Education, 87(5), 635-673. (link)

Kelchen, R. (2016). An analysis of student fees: The roles of states and institutions. The Review of Higher Education, 39(4), 597-619. (link)

Levin, T., Levitt, S. D., & List, J. A. (2016). A glimpse into the world of high capacity givers: Experimental evidence from a university capital campaign. Cambridge, MA: National Bureau of Economic Research Working Paper 22099. (link)

Yau, L., & Rosen, H. S. (2016). Are universities becoming more unequal? The Review of Higher Education, 39(4), 479-514. (link)

Ma, J., Baum, S., Pender, M., & Welch, M. (2016). Trends in college pricing 2016. Washington, DC: The College Board. (skim) (link)

National Association of College and University Business Officers (2017). 2016 NACUBO-Commonfund study of endowment results. http://www.nacubo.org/Research/NACUBO-Commonfund_Study_of_Endowments/Public_NCSE_Tables.html (skim)


Student debt and financing college

Akers, B., & Chingos, M. M. (2016). Game of loans: The rhetoric and reality of student debt (pp. 13-37). Princeton, NJ: Princeton University Press. (link)

Boatman, A., Evans, B. J., & Soliz, A. (2017). Understanding loan aversion in education: Evidence from high school seniors, community college students, and adults. AERA Open, 3(1), 1-16. (link)

Chakrabarti, R., Haughwout, A., Lee, D., Scally, J., & van der Klaauw, W. (2017). Press briefing on household debt, with focus on student debt. New York, NY: Federal Reserve Bank of New York. (link)

Houle, J. N., & Warner, C. (2017). Into the red and back to the nest? Student debt, college completion, and returning to the parental home among young adults. Sociology of Education, 90(1), 89-108. (link)

Kelchen, R., & Li, A. Y. (2017). Institutional accountability: A comparison of the predictors of student loan repayment and default rates. The ANNALS of the American Academy of Political and Social Science, 671, 202-223. (link)


Financial aid practices, policies, and impacts

Watch the Lumina Foundation’s video on the history of the Pell Grant: https://www.luminafoundation.org/looking-back-to-move-forward-3

Bird, K., & Castleman, B. L. (2016). Here today, gone tomorrow? Investigating rates and patterns of financial aid renewal among college freshmen. Research in Higher Education, 57(4), 395-422. (link)

Carruthers, C. K., & Ozek, U. (2016). Losing HOPE: Financial aid and the line between college and work. Economics of Education Review, 53, 1-15. (link)

Goldrick-Rab, S., Kelchen, R., Harris, D. N., & Benson, J. (2016). Reducing income inequality in educational attainment: Experimental evidence on the impact of financial aid on college completion. American Journal of Sociology, 121(6), 1762-1817. (link)

Schudde, L., & Scott-Clayton, J. (2016). Pell Grants as performance-based scholarships? An examination of satisfactory academic progress requirements in the nation’s largest need-based aid program. Research in Higher Education, 57(8), 943-967. (link)

Baum, S., Ma, J., Pender, M., & Welch, M. (2016). Trends in student aid 2016. Washington, DC: The College Board. (skim) (link)


Free college programs/proposals

Deming, D. J. (2017). Increasing college completion with a federal higher education matching grant. Washington, DC: The Hamilton Project. (link)

Goldrick-Rab, S., & Kelly, A. P. (2016). Should community college be free? Education Next, 16(1), 54-60. (link)

Harnisch, T. L., & Lebioda, K. (2016). The promises and pitfalls of state free community college plans. Washington, DC: American Association of State Colleges and Universities. (link)

Murphy, R., Scott-Clayton, J., & Wyness, G. (2017). Lessons from the end of free college in England. Washington, DC: The Brookings Institution. (link)

Map of college promise/free college programs: https://ahead-penn.org/creating-knowledge/college-promise


Returns to education

Deterding, N. M., & Pedulla, D. S. (2016). Educational authority in the “open door” marketplace: Labor market consequences of for-profit, nonprofit, and fictional educational credentials. Sociology of Education, 89(3), 155-170. (link)

Doyle, W. R., & Skinner, B. T. (2017). Does postsecondary education result in civic benefits? The Journal of Higher Education. doi: 10.1080/00221546.2017.1291258. (link)

Giani, M. S. (2016). Are all colleges equally equalizing? How institutional selectivity impacts socioeconomic disparities in graduates’ labor outcomes. The Review of Higher Education, 39(3), 431-461. (link)

Ma, J., Pender, M., & Welch, M. (2016). Education pays 2016: The benefits of higher education for individuals and society. Washington, DC: The College Board. (link)

Webber, D. A. (2016). Are college costs worth it? How ability, major, and debt affect the returns to schooling. Economics of Education Review, 53, 296-310. (link)

Examining Trends in the Pell Grant Program

The U.S. Department of Education recently released its annual report on the federal Pell Grant program, which provides detailed information about the program’s finances and who is receiving grants. The most recent report includes data from the 2015-16 academic year, and I summarize the data and trends over the last two decades in this annual post on the status of the Pell program. (Very preliminary data on Pell receipt for the first two quarters of the 2016-17 academic year can be found in the Title IV program volume reports on the Office of Federal Student Aid’s website.)

The number of Pell recipients fell for the fourth year in a row in 2015-16, to 7.66 million. This represents a 7.9% decline in the last year and an 18.9% drop since the peak in 2011-12. The decline was steepest in the for-profit sector (down 13.9% in one year and 36.7% since 2011-12) and at community colleges (down 13.3% and 28.3%, respectively), while the numbers at private nonprofit and public four-year colleges stayed relatively constant. For the first time since at least 1993, more students at public four-year colleges received Pell Grants than at community colleges. While most of this change is likely due to a drop in community college enrollment, some could be due to community colleges that offer a small number of bachelor’s degrees being counted as four-year colleges. (Thanks to Ben Miller of the Center for American Progress for pointing that out!)

Pell Grant expenditures fell to $28.6 billion in 2015-16, down from a peak of $35.7 billion in 2010-11. After adjusting for inflation, program expenditures are down 26% since the peak. This has allowed the Pell program to build a surplus of $10.6 billion, $1.3 billion of which was diverted to other programs in the 2017 budget deal. The surplus also allowed year-round Pell (grants for more than two semesters per year) to return as of July 1; year-round grants were previously available between 2008 and 2011 before being cut due to budgetary concerns.
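As a quick back-of-the-envelope check on that 26% figure, one can deflate the 2010-11 peak into 2015 dollars using approximate CPI-U annual averages (the CPI values below are my assumptions, not numbers from the report):

```python
# Approximate CPI-U annual averages (assumed values for illustration)
cpi_2010, cpi_2015 = 218.1, 237.0

peak_nominal = 35.7      # $ billions, 2010-11
current_nominal = 28.6   # $ billions, 2015-16

# Express the peak in 2015 dollars, then compute the real decline
peak_in_2015_dollars = peak_nominal * (cpi_2015 / cpi_2010)  # about $38.8 billion
decline = 1 - current_nominal / peak_in_2015_dollars
print(f"{decline:.0%}")  # roughly 26%, matching the figure above
```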

Most of the decline in Pell enrollment and expenditures can be attributed to a drop in the number of students who are considered independent for financial aid purposes (typically students who are at least 24 years of age, are married, or have a child). The number of independent Pell recipients fell by 28% in the last four years (to 4.05 million), while the number of dependent Pell recipients fell by just 6.4% (to 3.61 million), as shown in the chart below. However, independent students still make up the majority of Pell recipients, as they have every year since 1993.

There has been an even larger drop in the number of students with an automatic zero expected family contribution (EFC), who qualify for the maximum Pell Grant based on low family income and receipt of means-tested benefits. (For more on these students, check out this article I wrote in the Journal of Student Financial Aid in 2015.) The number of independent students with dependents who received an automatic zero EFC has fallen by 50% since 2011-12, while the number of dependent students in this category fell by 29%. (Independent students without any dependents are not eligible for an automatic zero EFC.) Part of this decline was due to a reduction in the maximum income limit for an automatic zero EFC, while the rest can be attributed to an improving economy that has both drawn adult students back into the labor market and raised some families’ incomes beyond the qualifying threshold.

The Tangled Web of Student Debt Consolidation Companies

Like seemingly most American households, the Kelchens get far more junk in the mail than actual mail of value. We get about as many credit card applications as our shredder can handle, as well as folks trying to sell us a broad array of products and services. But letters that mention student loan debt and say “Final Notice” on them always get my attention, both as a researcher of higher education finance and as a proud part-owner of my wife’s law school debt.

The letter below came last week from a company called Direct Document Solutions out of Irvine, California. It says that we may be eligible to consolidate our existing federal student loans into a lower-interest federal loan—and that we may be eligible for loan forgiveness. While the fine print says that the company charges fees and is not part of the Department of Education, that print is in a much smaller font than the rest of the letter.

After looking at this letter for a while, I realized that it looked vaguely like another student loan consolidation letter we had received several months prior. I dug through my Twitter media archives and found a nearly-identical letter (presented below) from last August from a company called Certified Document Center (which operates as Document Preparation Services at the same address as Direct Document Solutions). The Better Business Bureau gave the company a C rating, with 18 complaints in the last 12 months alone.

Just before I got the letter last week, NerdWallet put out a helpful list of about 130 companies that are less-than-ideal actors in the student debt consolidation business. To make the list, companies needed to have faced significant complaints or hold a D or F rating from the Better Business Bureau. This means that Document Preparation Services, with its C rating, sneaks over the bar and doesn’t make the list.

But in researching this company, I discovered it is part of the Association for Student Loan Relief—a group of 118 companies that specialize in student loan consolidation. A number of these companies show up on NerdWallet’s watch list, and they tend to be clustered in certain areas—for example, nine are located in Irvine, California, and quite a few are in South Florida. This, along with the multiple aliases that several companies appear to have used, suggests that a number of these companies may be run by the same people or groups of people.

People who are struggling to repay their federal loans (or are simply seeking a better deal) should start by talking with their current servicer or even reaching out to their former college’s financial aid office. If an income-driven repayment plan is the best choice, there is usually no need to involve a paid consolidation company. For borrowers seeking a lower interest rate, there are legitimate companies (like Earnest and SoFi) and banks that will refinance student loans. Refinancing can be a great option for people who have fairly high incomes and are certain they won’t benefit from income-driven repayment plans, but it is a decision that should be researched before making it. Read reviews, look at BBB ratings (and the number of complaints), and be very skeptical when changing anything about your student loans.

No matter what you do, don’t put your student loans in the hands of some random company sending you “Final Notice” letters even though you have no relationship with them. That’s a great way to ruin your credit and empty your bank account.