Examining Trends in Living Allowances for College

The National Center for Education Statistics released a new report and data on trends in the cost of attendance for different types of colleges, covering the 2012-13 through 2014-15 academic years. The report shows that, among colleges operating on a traditional academic year basis (excluding most vocationally oriented colleges), tuition and fees generally increased faster than inflation at public and private nonprofit colleges over the last two years. However, tuition failed to keep up with inflation in the for-profit sector, and allowances for other living expenses (such as transportation and laundry) declined over the past two years after taking inflation into account.

I dug deeper into the data, looking at the percentage of colleges by sector that increased, decreased, or held constant each of the cost of attendance components (tuition/fees, room and board, books and supplies, and other living expenses) between 2013-14 and 2014-15—without adjusting for inflation. I focused on students living off-campus without their family, as colleges have the ability to determine the room and board allowance but do not directly receive any housing revenue for off-campus students. (My blog post on the topic last year ended up connecting me to Braden Hosch at Stony Brook and Sara Goldrick-Rab at Wisconsin-Madison, and we’ve dug deeper into the accuracy and consistency of these estimates in a working paper.)

The results (below) show that for-profit colleges were far more likely to lower tuition and fees than public or private nonprofit colleges. While 75% of public colleges and 85% of private nonprofits increased tuition, just 42% of for-profit colleges did so. For-profits were also more likely to lower books/supplies and other living expense allowances, although the typical allowance was still higher than for nonprofit colleges. A majority of colleges across sectors increased room and board, while most colleges did not change their allowances for books and supplies.


Table 1: Changes in COA components by sector, 2013-14 to 2014-15.
Characteristic (2014-15)                    Public   Private nonprofit   For-profit
Cost of attendance, students living off-campus without family
  Median ($) 18,328 37,900 28,796
  Increased from 2013-14 (pct) 77.8 84.9 56.3
  No change from 2013-14 (pct) 7.2 5.8 8.2
  Decreased from 2013-14 (pct) 15.0 9.3 35.5
Tuition and fees
  Median ($) 4,200 24,670 14,040
  Increased from 2013-14 (pct) 74.9 84.6 42.3
  No change from 2013-14 (pct) 19.5 11.0 38.5
  Decreased from 2013-14 (pct) 5.7 4.4 19.2
Room and board
  Median ($) 8,280 9,000 7,574
  Increased from 2013-14 (pct) 55.1 56.4 59.2
  No change from 2013-14 (pct) 34.6 34.5 28.2
  Decreased from 2013-14 (pct) 10.4 9.2 12.5
Books and supplies
  Median ($) 1,265 1,200 1,380
  Increased from 2013-14 (pct) 37.8 23.1 25.7
  No change from 2013-14 (pct) 54.4 69.3 59.1
  Decreased from 2013-14 (pct) 7.8 7.6 15.2
Other living expenses
  Median ($) 3,742 3,150 5,000
  Increased from 2013-14 (pct) 42.0 35.1 35.5
  No change from 2013-14 (pct) 36.8 48.9 27.4
  Decreased from 2013-14 (pct) 21.2 16.0 37.1
Number of colleges 1,573 1,233 719
SOURCE: Integrated Postsecondary Education Data System.
Note: Limited to colleges reporting costs on an academic year basis.
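The classification behind Table 1 is straightforward to sketch. The snippet below buckets each college's year-over-year change in a cost component and tabulates the shares; the cost figures are made up for illustration rather than drawn from the actual IPEDS records.

```python
# Sketch of the Table 1 classification: for each college, compare a cost
# component across two years and bucket it as increased / no change / decreased.
# The records below are hypothetical; the real analysis uses IPEDS data.

def classify_change(old, new):
    """Bucket a year-over-year change in a cost-of-attendance component."""
    if new > old:
        return "increased"
    if new < old:
        return "decreased"
    return "no change"

def pct_by_bucket(records):
    """Share of colleges in each bucket, as percentages."""
    counts = {"increased": 0, "no change": 0, "decreased": 0}
    for old, new in records:
        counts[classify_change(old, new)] += 1
    total = len(records)
    return {k: round(100 * v / total, 1) for k, v in counts.items()}

# Hypothetical tuition/fee pairs (2013-14, 2014-15) for five colleges:
tuition = [(4200, 4400), (9800, 9800), (24000, 25200), (14000, 13500), (5100, 5300)]
print(pct_by_bucket(tuition))  # {'increased': 60.0, 'no change': 20.0, 'decreased': 20.0}
```

Run per sector and per component, this reproduces the percentage rows in the table.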

Yet as was noted in last year’s blog post on this topic, some colleges set room and board allowances that are unreasonably low by any standard. This year, I focused on the 27 colleges that reduced their room and board allowance for off-campus students by at least $3,000 between 2013-14 and 2014-15. Some of the changes may be reasonable, such as Thomas University’s drop from $15,200 to $10,530 for nine months of room and board. But many others are unlikely to meet any standard of reasonableness. For example, Emory & Henry College in Virginia reduced its allowance from $11,800 for nine months to just $3,000, while the College of DuPage in Illinois cut its allowance from $8,257 to $2,462. Good luck trying to rent an apartment and eat ramen on that budget!

Table 2: Colleges with large declines in off-campus room and board allowances, 2013-14 to 2014-15.
Name State 2013-14 2014-15 Change
Emory & Henry College VA 11,800 3,000 -8,800
Atlanta Metropolitan State College GA 10,753 3,160 -7,593
Mount Carmel College of Nursing OH 13,392 6,380 -7,012
Vanguard University of Southern California CA 11,286 4,600 -6,686
Louisiana Delta Community College LA 15,322 8,789 -6,533
Trinity College of Nursing & Health Sciences IL 12,346 5,858 -6,488
Arkansas Northeastern College AR 11,969 6,102 -5,867
College of DuPage IL 8,257 2,462 -5,795
College of the Mainland TX 11,330 5,665 -5,665
Randolph-Macon College VA 9,200 3,650 -5,550
The University of Texas at Brownsville TX 11,495 6,250 -5,245
SAE Institute of Technology-Nashville TN 15,000 10,000 -5,000
Bon Secours Memorial College of Nursing VA 15,000 10,000 -5,000
Thomas University GA 15,200 10,530 -4,670
Davenport University MI 8,692 4,340 -4,352
Southwestern Illinois College IL 8,516 4,280 -4,236
Lee University TN 11,650 7,520 -4,130
Grace School of Theology TX 12,684 8,584 -4,100
Prairie View A & M University TX 11,289 7,197 -4,092
NY Methodist Hospital Center for Allied Health Education NY 17,568 13,496 -4,072
College of Business and Technology-Flagler FL 12,000 8,320 -3,680
College of Business and Technology-Miami Gardens FL 12,000 8,320 -3,680
Anoka Technical College MN 10,356 6,994 -3,362
Central Penn College PA 6,855 3,500 -3,355
St Margaret School of Nursing PA 9,960 6,640 -3,320
Fortis Institute-Port Saint Lucie FL 12,732 9,495 -3,237
Southern California Seminary CA 14,616 11,493 -3,123
SOURCE: Integrated Postsecondary Education Data System.
Note: Limited to colleges reporting costs on an academic year basis.

Why do some colleges feel pressure to cut living allowances? It’s all about accountability. The amount of loan dollars students can borrow is capped by the cost of attendance, meaning that reducing living allowances (and hence the cost of attendance) reduces borrowing and, potentially, the risk of a college facing sanctions for high student loan default rates. The cost of attendance also determines the net price (the COA after grants are applied), an important accountability metric. Since colleges don’t directly benefit financially from a higher off-campus living allowance, they have an incentive to reduce the living allowance while continuing to increase tuition.
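To make the incentive concrete, here is a minimal sketch of how a lower living allowance flows through to a lower borrowing cap. All budget figures are hypothetical and do not describe any particular college; the borrowing rule is a deliberate simplification of the actual federal aid formula.

```python
# Sketch of the mechanics described above: federal loan eligibility is capped
# at the cost of attendance minus other aid, so trimming the living allowance
# lowers the borrowing cap even if tuition rises. All figures are hypothetical.

def cost_of_attendance(tuition_fees, room_board, books, other_living):
    return tuition_fees + room_board + books + other_living

def max_loan(coa, other_aid):
    """Simplified borrowing cap: COA less grants and other aid."""
    return max(coa - other_aid, 0)

before = cost_of_attendance(9000, 8257, 1200, 3700)  # hypothetical 2013-14 budget
after = cost_of_attendance(9300, 2462, 1200, 3700)   # tuition up, room/board slashed
grants = 5000
print(max_loan(before, grants))  # 17157
print(max_loan(after, grants))   # 11662
```

The college collects more tuition while its students' maximum possible debt (and hence its default-rate exposure) shrinks.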

Unit Record Data Won’t Doom Students

The idea of a national unit record database in higher education, in which the U.S. Department of Education gathers data on individual students’ demographic information, college performance, and later outcomes, has been controversial for years—and not without good reason. Unit record data would represent a big shift in policy from the current institutional-level data collection through the Integrated Postsecondary Education Data System (IPEDS), which excludes part-time, transfer, and most nontraditional students from graduation rate metrics. The Higher Education Act reauthorization in 2008 banned the collection of unit record data, although bipartisan legislation has been introduced (but not advanced) to repeal that law.

Opposition to unit record data tends to fall into three categories: student privacy, the cost to the federal government and colleges, and more philosophical arguments about institutional freedom. The first two points are quite reasonable in my view; even as a general supporter of unit record data, I believe the burden is on supporters to show that the benefits outweigh the costs. The federal government doesn’t have a great track record of keeping personally identifiable data private, although I have never heard of data breaches involving the Department of Education’s small student-level datasets collected for research purposes. The cost of collecting unit record data for the federal government is unknown, but colleges argue the compliance burden would increase substantially.

I have less sympathy for philosophical arguments that colleges make against unit record data. The National Association of Independent Colleges and Universities (NAICU—the association for private nonprofit institutions) is vehemently opposed to unit record data, stating that “we do not believe that the price for enrolling in college should be permanent entry into a massive data registry.” Amy Laitinen and Clare McCann of the New America Foundation documented NAICU’s role in blocking unit record data, even though the private nonprofit sector is a relatively small segment of higher education and these colleges benefit from federal Title IV student financial aid dollars.

An Inside Higher Ed opinion piece by Bernard Fryshman, professor of physics at the New York Institute of Technology and recent NAICU award winner, opposes unit record data for the typical (and very reasonable) privacy concerns before taking a rather odd turn toward unit record data potentially dooming students later in life. He writes the following:

“The sense of freedom and independence which characterizes youth will be compromised by the albatross of a written record of one’s younger years in the hands of government. Nobody should be sentenced to a lifetime of looking over his/her shoulder as a result of a wrong turn or a difficult term during college. Nobody should be threatened by a loss of personal privacy, and we as a nation should not experience a loss of liberty because our government has decreed that a student unit record is the price to pay for a postsecondary education.”

He also writes that employers will request prospective employees to provide a copy of their student unit record, even if they are not allowed to mandate a copy be provided. This sounds suspiciously like a type of student record that already exists (and employers can ask for)—a college transcript. Graduate faculty responsible for admissions decisions already use transcripts in that process, and applications are typically not considered unless that type of unit record data is provided.

While there are plenty of valid reasons to oppose student unit record data (particularly privacy safeguards and potential costs), Professor Fryshman’s argument doesn’t advance that cause. The information from unit record data is already available for employers to request, making that point moot.

Spring Admissions: Expanding Access or Skirting Accountability?

More than one in five first-year students at the University of Maryland now start their studies in the spring instead of the fall, according to this recent article by Nick Anderson in the Washington Post. This seems to be an unusually high percentage among colleges and universities, but the plan makes a lot of sense. Even at selective institutions, some students will leave at the end of the first semester, and more space opens up on campus after other students graduate, study abroad, or take on internships. It can be a way to maximize revenue by better utilizing facilities throughout the academic year.

However, the article also notes that the SAT scores of spring admits are lower at Maryland. Among students starting in spring 2015, the median score was roughly a 1210 (out of 1500), compared to about 1300 for the most recent available data for fall admits in 2012. These students’ test scores suggest that spring admits are well-qualified to succeed in college, even if they didn’t quite make the cut the first time around. (It’s much less realistic to expect high-SAT students to defer, given the other attractive options they likely have.) This suggests Maryland’s program may have a strong access component.

However, deferring admission to lower-SAT students could be done for other reasons. Currently, colleges only have to report their graduation rates for first-time, full-time students who enrolled in the fall semester to the federal government. (That’s one of the many flaws of the creaky Integrated Postsecondary Education Data System, and one that I would love to see fixed.) If these spring admits do graduate at lower rates, the public will never know. Additionally, many college rankings systems give colleges credit for being more selective. With the intense pressure to rise in the U.S. News rankings, even a small increase in SAT scores can be very important to colleges.

So is Maryland expanding access or trying to skirt accountability systems for a number of students? I would probably say it’s more of the former, but don’t discount the pressure to look good to the federal government and external rankings bodies. This practice is worth watching going forward, although better federal data systems would reduce its effectiveness as a way of shaping a first-year class.

Let’s Track First-Generation Students’ Outcomes

I’ve recently written about the need to report the outcomes of students based on whether they received a Pell Grant during their first year of college. Given that annual spending on the Pell Grant is about $35 billion, this should be a no-brainer—especially since colleges are already required to collect the data under the Higher Education Opportunity Act. Household income is a strong predictor of educational attainment, so people interested in social mobility should support publishing Pell graduation rates. I’m grateful to get support from Ben Miller of the New America Foundation on this point.

Yet, there has not been a corresponding call to collect information based on parental education, even though there are federal programs targeted to supporting first-generation students. The federal government already collects parental education on the FAFSA, although the choice of “college or beyond” may be unclear. (It would be simple enough to clarify the question if desired.)

My proposal here is simple: track graduation rates by parental education. It can easily be done through the current version of IPEDS, although the usual caveats about IPEDS’s focus on first-time, full-time students still apply. This could be another useful data point for students and their families, as well as for policymakers and potentially President Obama’s proposed college ratings. Collecting these data shouldn’t be an enormous burden on institutions, particularly relative to the Title IV funds they receive.

Let’s continue to work to improve IPEDS by collecting more useful data, and this should be a part of the conversation.

Free the Pell Graduation Data!

Today is an exciting day in my little corner of academia, as the end of the partial government shutdown means that federal education datasets are once again available for researchers to use. But the most exciting data to come out today are from Bob Morse, rankings guru for U.S. News and World Report. He has collected graduation rates for Pell Grant recipients, long an unknown for the majority of colleges. Despite the nearly $35 billion per year we spend on the Pell program, we have no idea what the national graduation rate is for Pell recipients. (Richard Vedder, an economist of higher education at Ohio University, has mentioned a ballpark estimate of 30%-40% in many public appearances, but he notes that it is just a guess.)

Morse notes in his blog post that colleges have been required to collect and disclose graduation rates for Pell recipients since the 2008 renewal of the Higher Education Act. I’ve heard rumors of this for years, but these data have not yet made their way into IPEDS. I have absolutely no problem with him using the data he collects in the proprietary U.S. News rankings, nor do I object to him holding the data close; after all, U.S. News did spend time and money collecting it.

However, given that the federal government requires that Pell graduation rates be collected, the Department of Education should collect this data and make it freely and publicly available as soon as possible. This would also be a good place for foundations to step in and help collect this data in the meantime, as it is certainly a potential metric for the President’s proposed college ratings.

Update: An earlier version of this post stated that the Pell graduation data are a part of the Common Data Set. Bob Morse tweeted me to note that they are not a part of that set and are collected by U.S. News. My apologies for the initial error! He also agreed that NCES should collect the data, which only underscores the importance of this collection.

Yes, Student Characteristics Matter. But So Do Colleges.

It is no surprise to those in the higher education world that student characteristics and institutional resources are strongly associated with student outcomes. Colleges which attract academically elite students and have the ability to spend large sums of money on instruction and student support should be able to graduate more of their students than open-access, financially-strapped universities, even after holding factors such as teaching quality constant. But an article in today’s Inside Higher Ed shows that there is a great deal of interest in determining the correlation between inputs and outputs (such as graduation).

The article highlights two new studies that examine the relationship between inputs and outputs. The first, by the Department of Education’s Advisory Committee on Student Financial Assistance, breaks down graduation rates by the percentage of students who are Pell Grant recipients, per-student endowments, and ACT/SAT scores using IPEDS data. The second new study, by the president of Colorado Technical University, finds that four student characteristics (race, EFC, transfer credits, and full-time status) explain 74% of the variation in an unidentified for-profit college’s graduation rate. His conclusion is that “public [emphasis original] policy will not increase college graduates by focusing on institution characteristics.”

While these studies take different approaches (one using institutional-level data and the other using student-level data), they highlight the importance that student and institutional characteristics currently have in predicting student success rates. These studies are not novel or unique—they follow a series of papers in HCM Strategists’ Context for Success project in 2012 and even more work before that. I contributed a paper to the project (with Doug Harris at Tulane University) examining input-adjusted graduation rates using IPEDS data. We found R-squared values of approximately 0.74 using a range of student and institutional characteristics, although the predictive power varied by Carnegie classification. It is also worth noting that the ACSFA report calculated predicted graduation rates with an R-squared value of 0.80, but they control for factors (like expenditures and endowment) that are at least somewhat within an institution’s control and don’t allow for a look at cost-effectiveness.

This suggests the importance of taking a value-added approach in performance measurement. Just like K-12 education is moving beyond rewarding schools for meeting raw benchmarks and adopting a gain score approach, higher education needs to do the same. Higher education also needs to look at cost-adjusted models to examine cost-effectiveness, something which we do in the HCM paper and I have done in the Washington Monthly college rankings (a new set of which will be out later this month).

However, even if a regression model explains 74% of the variation in graduation rates, a substantial amount can be attributed either to omitted variables (such as motivation) or to institutional actions. The article by the Colorado Technical University president takes exactly the wrong approach, saying that “student graduation may have little to do with institutional factors.” If his statement were accurate, we would expect colleges’ predicted graduation rates to equal their actual graduation rates. But, as anyone who has spent time on college campuses should know, institutional practices and policies can play an important role in retention and graduation. The 2012 Washington Monthly rankings included a predicted vs. actual graduation rate component. While Colorado Tech basically hit its predicted graduation rate of 25% (with an actual graduation rate one percentage point higher), other colleges outperformed their predictions given student and institutional characteristics. For example, San Diego State University and Rutgers University-Newark, among others, outperformed their predictions by more than ten percentage points.
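The predicted-versus-actual comparison described above can be sketched with a simple regression; the residual (actual minus predicted) serves as a rough value-added measure. Everything below is synthetic data invented for illustration, with a sample size, input measures, and coefficients that are not from the cited studies.

```python
import numpy as np

# Hypothetical illustration of the value-added idea: regress graduation rates
# on input measures, then treat the residual (actual minus predicted) as the
# institution's value-added. Data below are made up; the cited studies use
# IPEDS or student-level records.
rng = np.random.default_rng(0)
n = 200
sat = rng.normal(1050, 120, n)       # mean entering SAT score per college
pct_pell = rng.uniform(0.1, 0.7, n)  # share of Pell recipients per college
grad = 0.04 * sat - 30 * pct_pell + rng.normal(0, 5, n)  # synthetic grad rate (pct)

# Ordinary least squares with an intercept
X = np.column_stack([np.ones(n), sat, pct_pell])
beta, *_ = np.linalg.lstsq(X, grad, rcond=None)
predicted = X @ beta
value_added = grad - predicted  # positive = outperforming the prediction

r_squared = 1 - np.sum((grad - predicted) ** 2) / np.sum((grad - grad.mean()) ** 2)
print(f"R-squared: {r_squared:.2f}")
```

A high R-squared here does not mean institutions are powerless; it means the remaining residual, however small, is where institutional performance differences live.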

While incoming student characteristics do affect graduation rates (and I’m baffled by the amount of attention on this known fact), colleges’ actions do matter. Let’s highlight the colleges which appear to be doing a good job with their inputs (and at a reasonable price to students and taxpayers) and see what we can learn from them.

Net Price and Pell Enrollment: The Good and the Bad

I am thrilled to see more researchers and policymakers taking advantage of the net price data (the cost of attendance less all grant aid) available through the federal IPEDS dataset. These data can be used to identify colleges that do a good job of keeping the out-of-pocket cost low, either for all students who receive federal financial aid or just for students from the lowest-income families.

Stephen Burd of the New America Foundation released a fascinating report today showing the net prices for the lowest-income students (with household incomes below $30,000 per year) in conjunction with the percentage of students receiving Pell Grants. The report lists colleges which are successful in keeping the net price low for the neediest students while enrolling a substantial proportion of Pell recipients along with colleges that charge relatively high net prices to a small number of low-income students.

The report advocates for more of a focus on financially needy students and a shift toward aid based on financial need instead of academic qualifications. Indeed, the phrase “merit aid” has fallen out of favor in a good portion of the higher education community. An example of this came at last week’s Education Writers Association conference, where many journalists stressed the importance of using the phrase “non-need-based aid” instead of “merit aid” to change the public’s perspective on the term. But regardless of the name, aid based on academic characteristics is used to attract students with more financial resources and to stay near the top of prestige-based rankings such as U.S. News and World Report.

While a great addition to the policy debate, the report deserves a substantial caveat. The net price measure for low-income students includes only students with a household income below $30,000. This does not line up perfectly with Pell recipients, who can have household incomes around $40,000 per year. Additionally, focusing on just the lowest income bracket can mean that a small number of students is used in the analysis; in the case of small liberal arts colleges, the net price may be based on fewer than 100 students. It also creates ways to game the system by charging much higher prices to families making just over $30,000 per year, a potentially undesirable outcome.

As an aside, I’m defending my dissertation tomorrow, so wish me luck! I hope to get back to blogging somewhat more frequently in the next few weeks.

Recent Trends in Student Net Price

In the midst of the current economic climate and the rising sticker price of attending college, more people are paying attention to the net price of attendance. The federal government collects a measure of the net price of attendance in its IPEDS database, which is calculated as the total cost of attendance (tuition, fees, room and board, and other expenses) less any grant aid received. Since the 2008-2009 academic year, they have collected the average net price by family income among students who receive federal financial aid. In this post, I examine the trends in net price data by type of institution (public, private nonprofit, and for-profit) among four-year colleges and universities (n=1753).

The first figure shows the average net price that families faced in the 2010-11 academic year (the most recent year available) by family income bracket. This nicely shows the prevalence of tuition discounting models, in which institutions charge a fairly high sticker price and then discount that price with grant aid. (Part of the discount in the lowest two brackets is also state and federal need-based grant aid.)

[Figure 1: Average net price by family income bracket, 2010-11]

The next figure shows the net price trends over the period from 2008-09 through 2010-11 for the lowest (less than $30,000 per year) family income bracket.

[Figure 2: Net price trends for the under-$30,000 income bracket, 2008-09 to 2010-11]

It is worth noting that the public and for-profit sectors largely held the net price for students from the lowest-income families constant over the three-year period (0.6% and -3.2%, respectively), while nonprofit colleges increased the net price by 5.6% during this time. This might show an institutional commitment to keeping the net price relatively low for the neediest students, but also keep in mind that the maximum Pell Grant increased from $4,041 to $5,273 during this period. Colleges may not have changed their effort, but instead relied on additional federal student aid. The uptick in the net price at private nonprofit universities may have been a function of pressures on endowments that restricted institutional financial aid budgets.
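As a quick check on the Pell figure cited above, percent change is just (new - old) / old:

```python
# Verifying the growth in the maximum Pell Grant mentioned above.
# The $4,041 and $5,273 figures come from the post itself.

def pct_change(old, new):
    """Percent change from old to new."""
    return 100 * (new - old) / old

pell_growth = pct_change(4041, 5273)
print(f"Maximum Pell Grant grew {pell_growth:.1f}% over the period")  # 30.5%
```

A roughly 30% jump in the maximum Pell Grant over three years makes the flat public-sector net price for low-income students much less impressive than it first appears.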

The final figure shows the net price trends for the highest family income bracket (more than $110,000 per year)—among students who received federal financial aid.

[Figure 3: Net price trends for the over-$110,000 income bracket, 2008-09 to 2010-11]

Three observations jump out here. First, the net prices at nonprofit and for-profit universities are nearly identical for the highest-income students. This reflects the financial model of nonprofit education, in which “full-pay” students are heavily recruited in order to pay the bills and to help fund other students. Second, the average net price at public universities increased by 9.4% during this period for the highest-income students, compared to only 4.6% at nonprofit and 0.4% at for-profit institutions. As per-student state appropriations declined during this period, public institutions relied more on tuition increases and, where possible, on recruiting out-of-state and international students. Finally, the flat net price profile of for-profit colleges across the income distribution is worth emphasizing. It appears these colleges have reached a point at which additional increases in the price of attendance would reduce net revenue.

I would love to hear your feedback on these figures, as well as suggestions for future analyses using the net price data. I am eagerly awaiting the 2011-12 net price data, but that may not be available until this fall.

Improving Net Price Data Reporting

As the sticker price of attending colleges and universities has steadily increased over the past decade, researchers and policymakers have begun to focus on the actual price that students and their families face. The federal government collects a measure of the net price of attendance in its IPEDS database, which is calculated as the total cost of attendance (tuition, fees, room and board, and other expenses) less any grant aid received. (More information can be found on the IPEDS website.) I have used the net price measure in my prior work, including the Washington Monthly rankings and my previous post on the Net Price Madness tournament. However, the data do have substantial limitations—some of which could be easily addressed in the data collection process.

There are two different net price measures currently available in the IPEDS dataset: one for all students receiving grant aid (federal, state, and/or institutional) and one for students receiving any federal financial aid (grants, loans, or work-study). The average net price is available for the first measure, while the second measure breaks down the net price by family income (but does not report an overall average net price). For public institutions, both of these measures include only first-time, full-time, degree-seeking students paying in-state tuition, which can substantially limit the generalizability of the results.

Here, I use my current institution (the University of Wisconsin-Madison) as an example. The starting sample for IPEDS is the 3,487 first-time, full-time, degree-seeking freshmen who are in-state students. Of those students, net price by family income is calculated for the 1,983 students receiving Title IV aid. (This suggests that just over half of in-state Madison freshmen file the FAFSA.) Here are the net price and number of students by income group:

0-30k: $6,363 (n=212)
30-48k: $10,098 (n=232)
48-75k: $15,286 (n=406)
75-110k: $19,482 (n=542)
110+k: $20,442 (n=591)

The average net price is calculated for a slightly different group of students: those who received grant aid from any source (n=1,858). The average net price is $14,940, which is lower than the average net price faced by students who file the FAFSA ($16,409), as some students who do not receive any grants are included in the latter measure. However, the latter number is not reported in the main IPEDS dataset and can only be calculated by digging into the institutional reports.
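The $16,409 figure can be reproduced directly from the bracket-level numbers above as an enrollment-weighted average:

```python
# Reproducing the FAFSA-filer average net price from the bracket figures:
# the mean of the five bracket averages, weighted by students per bracket.

brackets = {  # income bracket: (average net price, n students)
    "0-30k": (6363, 212),
    "30-48k": (10098, 232),
    "48-75k": (15286, 406),
    "75-110k": (19482, 542),
    "110k+": (20442, 591),
}
total_students = sum(n for _, n in brackets.values())
weighted_avg = sum(price * n for price, n in brackets.values()) / total_students
print(round(weighted_avg))  # 16409
```

Note the simple (unweighted) mean of the five bracket prices would be misleading here, since the upper brackets contain far more students.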

I would encourage IPEDS to add the average net price for all FAFSA filers into the dataset, as that better reflects what students from financially modest backgrounds will pay. Additionally, to counter the relatively small number of students who may have a family income of less than $30,000 and to tie into policy discussions, I would like to see the average net price for all Pell Grant recipients. These changes can easily be made given current data collection procedures and would provide more useful data to stakeholders.

Tying FAFSA Data to IPEDS: The Need for “Medium Data”

It is safe to say that I’m a fan of data in higher education. Students and their families, states, and the federal government spend a massive amount of money on higher education, yet we have relatively little data on outcomes other than graduation rates and student loan default rates for a small subset of students—those who started as first-time, full-time students. The federal government currently operates on what I call a “little data” model, with some rough institutional-level measures available through IPEDS. Some of these measures are also available through a slightly more student-friendly portal in the College Navigator website.

As is often the case, some states are light years ahead of the federal government regarding data collection and availability. Florida, Texas, and Ohio are often recognized as leaders in higher education data availability, both in collecting (deidentified) student-level data and in tying together K-12, higher education, and workforce outcomes. The Spellings Commission in 2006 did call for a student-level dataset at the national level, but Congress explicitly denied the Department of Education this authority in the reauthorization of the Higher Education Act. Although there are sporadic movements toward “big data” at the national level, making this policy shift will require Congressional support and a substantial amount of resources.

Although I am willing to direct resources to a much more robust data system (after all, how can we determine funding priorities if we know so little about student outcomes?), a “medium data” approach could easily be enacted by using data sources already collected by colleges or the federal government. I spent a fair amount of this morning trying to find a fairly simple piece of data: the percentage of students at given colleges whose parent(s) did not complete college. The topic of first-generation students is important in policy circles, yet we have no systematic data on how large this group of students is at most colleges.

FAFSA data could be used to expand the number of IPEDS measures to include such topics as the following, in addition to first-generation status:

(1) The percentage of students who file the FAFSA
(2) Average/median family income
(3) Percentage of students with zero EFC
(4) Information on means-tested benefit receipt (such as food stamps or TANF)
(5) Marital status

Of course, these measures would only include students who file the FAFSA—which would exclude many students who would not qualify for need-based aid, as well as some students who are unable to navigate through the complicated form. But these measures would provide a better idea of institutional diversity beyond racial/ethnic diversity and the percentage of students receiving Pell Grants and could be incorporated into IPEDS at a fairly low cost. Adding these FAFSA measures would help move IPEDS from “little data” to “medium data” and provide more useful measures to higher education stakeholders.
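As a rough sketch of how such FAFSA-derived measures might be built, the snippet below aggregates a few hypothetical (deidentified) filer records into institution-level statistics. All field names and values are invented; a real build would start from Department of Education files.

```python
from statistics import median

# Sketch of the "medium data" idea: aggregate deidentified FAFSA records into
# institution-level measures. Records and field names are hypothetical.

filers = [  # one dict per FAFSA filer at a hypothetical college
    {"income": 24000, "efc": 0, "first_gen": True, "married": False},
    {"income": 52000, "efc": 3100, "first_gen": False, "married": False},
    {"income": 88000, "efc": 9800, "first_gen": False, "married": True},
    {"income": 31000, "efc": 0, "first_gen": True, "married": False},
]
enrollment = 10  # total first-year enrollment, filers and non-filers alike

measures = {
    "pct_filing_fafsa": 100 * len(filers) / enrollment,
    "median_income": median(f["income"] for f in filers),
    "pct_zero_efc": 100 * sum(f["efc"] == 0 for f in filers) / len(filers),
    "pct_first_gen": 100 * sum(f["first_gen"] for f in filers) / len(filers),
}
print(measures)
```

Because everything is computed from records colleges already process for aid packaging, the marginal reporting cost of measures like these should be modest.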