Are “Affordable Elite” Colleges Growing in Size, or Just Selectivity?

A new addition to this year’s Washington Monthly college guide is a ranking of “Affordable Elite” colleges. Given that many students and families (rightly or wrongly) focus on trying to get into the most selective colleges, we decided to create a special set of rankings covering only the 224 most highly competitive colleges in the country (as defined by Barron’s). Colleges are assigned scores based on student loan default rates, graduation rates, graduation rate performance, the percentage of students receiving Pell Grants, and the net price of attendance. UCLA, Harvard, and Williams made the top three, with four University of California campuses in the top ten.

I received an interesting piece of criticism of the list from Sara Goldrick-Rab, professor at the University of Wisconsin-Madison (and my dissertation chair in graduate school). Her critique noted that the size of the school and the type of admissions standards are missing from the rankings. She wrote:

“Many schools are so tiny that they educate a teensy-weensy fraction of American undergraduates. So they accept 10 poor kids a year, and that’s 10% of their enrollment. Or maybe even 20%? So what? Why is that something we need to laud at the policy level?”

While I don’t think that the size of the college should be a part of the rankings, it’s certainly worth highlighting the selective colleges that have expanded over time compared to those which have remained at the same size in spite of an ever-growing applicant pool.

I used undergraduate enrollment data from the fall semesters of 1980, 1990, 2000, and 2012 from IPEDS for both the 224 colleges in the Affordable Elite list and 2,193 public and private nonprofit four-year colleges not on the list. I calculated the percentage change between each year and 2012 for the selective colleges on the Affordable Elite list and the other less-selective colleges to get an idea of whether selective colleges are curtailing enrollment.
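The calculation can be sketched as follows. This is a minimal illustration with made-up numbers, not the actual IPEDS figures, and it assumes “percentage change at the median college” means the median of each college’s percentage change:

```python
from statistics import median

def median_pct_change(base_enrollments, end_enrollments):
    """Median of per-college percent enrollment changes between two years.

    base_enrollments / end_enrollments: parallel lists with one entry per
    college (a hypothetical data structure, not the IPEDS layout).
    """
    changes = [100.0 * (end - base) / base
               for base, end in zip(base_enrollments, end_enrollments)]
    return median(changes)

# Toy data for three colleges (made up):
fall_2000 = [1000, 2000, 3000]
fall_2012 = [1100, 2400, 3200]
print(round(median_pct_change(fall_2000, fall_2012), 1))  # 10.0
```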

[UPDATE: The fall enrollment numbers include all undergraduates, including non-degree-seeking students. This doesn’t have a big impact on most colleges, but it does at Harvard, where about 30% of total undergraduate enrollment is not seeking a degree. This means that enrollment growth may be overstated. Thanks to Ben Wildavsky for leading me to investigate this point.]

The median Affordable Elite college enrolled 3,354 students in 2012, compared to 1,794 students at the median less-selective college. The percentage change at the median college between each year and 2012 is below:

Period       Affordable Elite   Less selective
2000-2012    10.9%              18.3%
1990-2012    16.0%              26.3%
1980-2012    19.9%              41.7%


The distribution of growth rates is shown below:

[Figure: distribution of enrollment growth rates, Affordable Elite vs. less-selective colleges]

So, as a whole, less-selective colleges are growing at a more rapid pace than the ones on the Affordable Elite list. But do higher-ranked elite colleges grow faster? The scatterplot below suggests not really: the correlation between rank and growth is just -0.081, indicating essentially no relationship between a college’s rank and its enrollment growth.

[Figure: enrollment growth vs. Affordable Elite rank]
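For readers who want to replicate the rank-growth relationship, the correlation is an ordinary Pearson coefficient. The ranks and growth rates below are placeholders, not the actual rankings data:

```python
def pearson_r(x, y):
    """Pearson correlation coefficient, computed from scratch."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    sd_x = sum((a - mean_x) ** 2 for a in x) ** 0.5
    sd_y = sum((b - mean_y) ** 2 for b in y) ** 0.5
    return cov / (sd_x * sd_y)

# Hypothetical ranks and growth rates:
ranks = [1, 2, 3, 4]
growth = [12.0, 11.5, 12.3, 11.8]
print(round(pearson_r(ranks, growth), 3))
```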

But some elite colleges have grown. The top ten colleges in the Affordable Elite list have the following growth rates:

                                                           Change from year to 2012 (pct)
Rank  Name (* means public)                      2012 enrollment    2000    1990    1980
   1  University of California–Los Angeles (CA)*          27,941    11.7    15.5    28.0
   2  Harvard University (MA)                             10,564     6.9     1.7    62.3
   3  Williams College (MA)                                2,070     2.5     3.2     6.3
   4  Dartmouth College (NH)                               4,193     3.4    11.1    16.8
   5  Vassar College (NY)                                  2,406     0.3    -1.8     1.9
   6  University of California–Berkeley (CA)*             25,774    13.7    20.1    21.9
   7  University of California–Irvine (CA)*               22,216    36.9    64.6   191.6
   8  University of California–San Diego (CA)*            22,676    37.5    57.9   152.5
   9  Hanover College (IN)                                 1,123    -1.7     4.5    11.0
  10  Amherst College (MA)                                 1,817     7.2    13.7    15.8


Some elite colleges have not grown since 1980, including the University of Pennsylvania, MIT, Boston College, and the University of Minnesota. Public colleges have generally grown slightly faster than private colleges (the UC colleges are a prime example), but there is substantial variation in their growth.

Are Some Elite Colleges Understating Net Prices?

As a faculty member researching higher education finance, I’m used to seeing the limitations in the federal data available to students and their families as they choose colleges. For example, the net price of attendance measure (tuition and fees, room and board, books, and other expenses, less any grants received) covers only first-time, full-time students—and therefore excludes many students with great financial need. But a new graphic-heavy report from The Chronicle of Higher Education on net price revealed another huge limitation of the net price data.

The report, titled “Are Poor Families Really Paying Half Their Income at Elite Colleges?” looked at the two ways that some of the most selective public and private colleges calculate household income. About 400 colleges require students to file the CSS/Financial Aid PROFILE (or PROFILE for short) in addition to the FAFSA in order to receive institutional aid; unlike the FAFSA, the PROFILE requires all but the lowest-income students to pay an application fee. Selective colleges require the PROFILE because it includes more questions about household assets than the FAFSA, with the goal of getting a more complete picture of middle- and upper-income families’ ability to pay for college. This form isn’t really necessary for families with low incomes and little wealth, and it can serve as a barrier to attending certain colleges, as noted by Rachel Fishman of the New America Foundation.

The Chronicle piece looked at income data from Notre Dame, which provided both the FAFSA and PROFILE definitions of income. The PROFILE definition of family income resulted in far fewer students in the lowest income bracket (below $30,000 per year) than the FAFSA definition. Because Notre Dame targets more aid to the neediest students, the net price using PROFILE income below $30,000 (the very lowest-income students) was just $4,472 per year, compared to $11,626 using the FAFSA definition.

Notre Dame reported net prices to the Department of Education using the FAFSA definition of family income, which is how all non-PROFILE colleges report income for net price. But the kicker in the Chronicle piece is that some colleges apparently use the PROFILE definition of income to generate net price data for the federal government. These selective colleges look much less expensive than a college like Notre Dame that reports data the way most colleges do, giving them great publicity. Reporting PROFILE-based net prices can also improve these colleges’ performance on Washington Monthly’s list of best bang-for-the-buck colleges, as that metric uses the average net price paid by students from families making less than $75,000 per year. (But many elite colleges don’t make the list because fewer than 20% of their students receive Pell Grants.)

The Department of Education should put forth language clarifying that net price data should be based on the FAFSA definition of income and not the PROFILE definition that puts fewer students in the lower income brackets and results in a seemingly lower net price. Colleges can report both FAFSA and PROFILE definitions on their own websites, but federal data need to be consistent across colleges.

Building a Better Student Loan Default Measure

Student loan default rates have been a hot political topic of late given increased accountability pressures at the federal level. Currently, colleges can lose access to all federal financial aid (grants as well as loans) if, for three consecutive cohorts, more than 25% of their students default on their loans within two years of leaving college. Starting later this year, the measure will be the default rate within three years of leaving college, and the cutoff for federal aid eligibility will rise to 30%. (Colleges can appeal this result if there are relatively few borrowers.)
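The eligibility rule described above can be sketched in a few lines. This is a simplified reading: the actual regulations include appeals, small-borrower exceptions, and other triggers not modeled here:

```python
def loses_aid_eligibility(cohort_default_rates, cutoff=0.30):
    """True if the last three consecutive cohort default rates all exceed
    the cutoff (30% under the new three-year measure; 25% under the old
    two-year measure). Appeals and exceptions are ignored in this sketch."""
    if len(cohort_default_rates) < 3:
        return False
    return all(rate > cutoff for rate in cohort_default_rates[-3:])

print(loses_aid_eligibility([0.32, 0.31, 0.33]))  # True
print(loses_aid_eligibility([0.32, 0.28, 0.33]))  # False -- one cohort under the cutoff
```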

But few students should ever have to default on their loans given the availability of various income-based repayment (IBR) plans. (PLUS loans typically aren’t eligible for income-based repayment, but their default rates oddly aren’t tracked and aren’t used for accountability purposes.) If a former student enrolled in IBR falls on tough times, his or her monthly payment will go down—potentially to zero if income is less than 150% of the federal poverty line. As a result, savvy colleges should be encouraging their students to enroll in IBR in order to reduce default rates.
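The payment rule works roughly like this. The 15% income share applies to the original IBR plan (newer plans use 10%), and the poverty line in the example is an approximate figure for a two-person household, not an official guideline:

```python
def ibr_monthly_payment(agi, poverty_line, income_share=0.15):
    """Monthly IBR payment: a share of income above 150% of the poverty
    line, floored at $0 for borrowers at or below that threshold."""
    discretionary = max(0.0, agi - 1.5 * poverty_line)
    return round(income_share * discretionary / 12, 2)

# Hypothetical borrower: $30,000 AGI, approximate two-person poverty line
print(ibr_monthly_payment(30000, 15510))  # 84.19
print(ibr_monthly_payment(20000, 15510))  # 0.0 -- income below 150% of poverty
```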

And more students are enrolling in IBR. Jason Delisle at the New America Foundation analyzed new Federal Student Aid data out this week that showed that the number of students in IBR doubled from 950,000 to 1.9 million in the last year while outstanding loan balances went from $52.2 billion to $101.0 billion. The federal government’s total Direct Loan portfolio increased from $361.3 billion to $464.3 billion in the last year, meaning that IBR was responsible for nearly half of the increase in loan dollars.
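The “nearly half” figure checks out with quick arithmetic on the numbers above:

```python
# Figures (in billions of dollars) from the Federal Student Aid data cited above
ibr_balance_growth = 101.0 - 52.2    # growth in loan balances under IBR
portfolio_growth = 464.3 - 361.3     # growth in the total Direct Loan portfolio

share = ibr_balance_growth / portfolio_growth
print(f"{100 * share:.1f}% of the portfolio's growth is in IBR")  # roughly 47%
```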

This shift to IBR means that the federal government needs to consider new options for holding colleges accountable for their outcomes. Some options include:

(1) Use a longer default window. The “standard” loan repayment plan is ten years, but defaults are only tracked for three years. A longer window wouldn’t give an accurate picture of outcomes if more students enroll in IBR, but it would provide useful information on students who expect to do well enough after college that standard payments will be a better deal than IBR. This would probably require replacing the creaky National Student Loan Data System, which may not be able to handle that many more data requests.

(2) Look at the percentage of students who pay nothing under IBR. This would effectively measure the share of borrowers earning less than 150% of the poverty line, or about $23,000 per year for a former borrower with one other family member. Even with the woeful salaries in many public service jobs (such as teaching), most former students will likely have to pay something under this measure.

(3) Look at the total amount repaid compared to the amount borrowed. If the goal is to make sure the federal government gets its money back, a measure of the percentage of funds repaid might be useful. Colleges could even be held accountable for part of the unpaid amount if desired.
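Measures (2) and (3) are straightforward to define. Here is a minimal sketch; the poverty line and the dollar figures are illustrative assumptions, not real data:

```python
def share_paying_zero(incomes, poverty_line=15510):
    """Option 2: share of IBR enrollees at or below 150% of the poverty line,
    who therefore owe $0. The poverty line default is an assumption."""
    zero_payers = sum(1 for income in incomes if income <= 1.5 * poverty_line)
    return zero_payers / len(incomes)

def repayment_ratio(total_repaid, total_disbursed):
    """Option 3: share of the dollars lent to a college's students that
    has been repaid to the federal government so far."""
    return total_repaid / total_disbursed

# Toy cohort (made-up numbers):
print(round(share_paying_zero([18000, 25000, 40000]), 2))  # 0.33
print(round(repayment_ratio(6_500_000, 10_000_000), 2))    # 0.65
```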

As the Department of Education continues to develop its draft college ratings (due out later this fall), I hope it is having these types of conversations when considering outcome measures. I also hope this piece sparks a conversation about potential loan default or repayment measures that can improve upon the currently inadequate measure, so please offer your suggestions in the comments below.

Exploring Trends in Pell Grant Receipt and Expenditures

The U.S. Department of Education released its annual report on the federal Pell Grant program this week, which is a treasure trove of information about the program’s finances and who is receiving grants. The most recent report includes data from the 2012-13 academic year, and I summarize the data and trends over the last two decades in this post.

Pell Grant expenditures decreased from $33.6 billion in 2011-12 to $32.1 billion in 2012-13, following a $2.1 billion decline the year before. Even so, after adjusting for inflation, Pell spending has increased 258% since the 1993-94 academic year.

[Figure 1: Pell Grant expenditures by year]

Part of the increase in spending is due to increases in the maximum Pell Grant over the last 20 years. Even though the maximum grant covers a smaller percentage of the cost of college than it did 20 years ago, its inflation-adjusted value rose from $3,640 in 1993-94 to $5,550 in 2012-13.

[Figure 2: maximum Pell Grant by year]
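Constant-dollar comparisons like these use a standard price-index conversion. The index values in the example below are placeholders, not actual CPI figures:

```python
def to_constant_dollars(nominal, index_base_year, index_target_year):
    """Convert a nominal dollar amount into target-year dollars using a
    price index such as the CPI."""
    return nominal * index_target_year / index_base_year

# Placeholder index values: a price level of 150 then vs. 225 now
print(to_constant_dollars(100.0, 150.0, 225.0))  # 150.0
```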

The number of Pell recipients has also increased sharply in the last 20 years, going from 3.8 million in 1993-94 to just under 9 million in 2012-13. However, note the decline in the number of independent students in 2012-13, going from 5.59 million to 5.17 million.

[Figure 3: number of Pell Grant recipients by year, dependent and independent students]

Recent changes to the federal EFC calculation formula have affected the number of students receiving an automatic zero EFC (and with it the maximum Pell Grant), which is given to dependent students, or independent students with dependents of their own, who meet income and federal program participation criteria. Between 2011-12 and 2012-13, the maximum income to qualify for an automatic zero EFC dropped from $31,000 to $23,000 due to Congressional action, resulting in a 25% decline in automatic zero EFCs. Most of these students still qualified for the maximum Pell Grant, but they had to answer more FAFSA questions to do so.

[Figure 4: students receiving an automatic zero EFC by year]
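The income test behind that decline is easy to express. This sketch uses the thresholds cited above and collapses the dependency-status and federal-program criteria into a single flag, which is a simplification:

```python
def qualifies_for_auto_zero_efc(income, meets_other_criteria, award_year):
    """Income test for the automatic zero EFC, using the thresholds in the
    post: $31,000 in 2011-12, $23,000 in 2012-13. The dependency-status and
    federal-program-participation criteria are one combined flag here."""
    threshold = 31000 if award_year == "2011-12" else 23000
    return meets_other_criteria and income <= threshold

# The same $25,000 family qualifies one year but not the next:
print(qualifies_for_auto_zero_efc(25000, True, "2011-12"))  # True
print(qualifies_for_auto_zero_efc(25000, True, "2012-13"))  # False
```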

The number of students receiving a zero EFC (automatic or calculated) dropped by about 7% from 2011-12, or about 400,000 students, after more than doubling in the last six years. Part of this drop is likely due to students choosing a slowly recovering labor market over attending college.

[Figure 5: students receiving a zero EFC (automatic or calculated) by year]

UPDATE: Eric Best, co-author of “The Student Loan Mess,” asked me to put together a chart of the average Pell award by year after adjusting for inflation. Below is the chart, showing a drop of nearly $500 in the average inflation-adjusted Pell Grant in the last two years after a long increase.

[Figure 6: average inflation-adjusted Pell Grant by year]

I hope these charts are useful to show trends in Pell receipt and spending over time, and please let me know in the comments section if you would like to see any additional analyses.

Unit Record Data Won’t Doom Students

The idea of a national unit record database in higher education, in which the U.S. Department of Education gathers data on individual students’ demographic information, college performance, and later outcomes, has been controversial for years—and not without good reason. Unit record data would represent a big shift in policy from the current institutional-level data collection through the Integrated Postsecondary Education Data System (IPEDS), which excludes part-time, transfer, and most nontraditional students from graduation rate metrics. The Higher Education Act reauthorization in 2008 banned the collection of unit record data, although bipartisan legislation has been introduced (but not advanced) to repeal that law.

Opposition to unit record data tends to fall into three categories: student privacy, the cost to the federal government and colleges, and more philosophical arguments about institutional freedom. The first two points are quite reasonable in my view; even as a general supporter of unit record data, I believe the burden is still on supporters to show that the benefits outweigh the costs. The federal government doesn’t have a great track record of keeping personally identifiable data private, although I have never heard of data breaches involving the Department of Education’s small student-level datasets collected for research purposes. The cost to the federal government of collecting unit record data is unknown, but colleges state that their compliance burden would increase substantially.

I have less sympathy for philosophical arguments that colleges make against unit record data. The National Association of Independent Colleges and Universities (NAICU—the association for private nonprofit institutions) is vehemently opposed to unit record data, stating that “we do not believe that the price for enrolling in college should be permanent entry into a massive data registry.” Amy Laitinen and Clare McCann of the New America Foundation documented NAICU’s role in blocking unit record data, even though the private nonprofit sector is a relatively small segment of higher education and these colleges benefit from federal Title IV student financial aid dollars.

An Inside Higher Ed opinion piece by Bernard Fryshman, professor of physics at the New York Institute of Technology and recent NAICU award winner, opposes unit record data for the typical (and very reasonable) privacy concerns before taking a rather odd turn toward unit record data potentially dooming students later in life. He writes the following:

“The sense of freedom and independence which characterizes youth will be compromised by the albatross of a written record of one’s younger years in the hands of government. Nobody should be sentenced to a lifetime of looking over his/her shoulder as a result of a wrong turn or a difficult term during college. Nobody should be threatened by a loss of personal privacy, and we as a nation should not experience a loss of liberty because our government has decreed that a student unit record is the price to pay for a postsecondary education.”

He also writes that employers will request prospective employees to provide a copy of their student unit record, even if they are not allowed to mandate a copy be provided. This sounds suspiciously like a type of student record that already exists (and employers can ask for)—a college transcript. Graduate faculty responsible for admissions decisions already use transcripts in that process, and applications are typically not considered unless that type of unit record data is provided.

While there are plenty of valid reasons to oppose student unit record data (particularly privacy safeguards and potential costs), Professor Fryshman’s argument doesn’t advance that cause. The information from unit record data is already available for employers to request, making that point moot.

Does College Improve Happiness? What the Gallup Poll Doesn’t Tell Us

The venerable polling organization Gallup released a much-anticipated national survey of 30,000 college graduates on Tuesday, focusing on student satisfaction in the workplace and in life as a whole. I’m not going to spend a lot of time getting into all of the details (see great summaries at Inside Higher Ed, NPR, and The Chronicle of Higher Education), but two key findings merit further discussion.

The first key finding is that relatively few graduates are both engaged with their jobs and thriving across a number of elements of well-being (including purpose, social, community, financial, and physical). Having supportive professors is the strongest predictor of being engaged at work, and being engaged at work is a strong predictor of having a high level of well-being.

Second, the happiness of graduates doesn’t vary that much across types of nonprofit institutions, with students graduating from (current?) top-100 colleges in the U.S. News & World Report rankings reporting similar results to less-selective institutions. Graduates of for-profit institutions are less engaged at work and are less happy than graduates of nonprofit colleges, although no causal mechanisms are posed.

While it is wonderful to have data on a representative sample of 30,000 college graduates, adults who started college but did not complete a degree are notably excluded. Given that about 56% of first-time students complete a college degree within six years of first enrolling (according to the National Student Clearinghouse), surveying only graduates leaves out a large percentage of adults with some postsecondary experience. Given the (average) economic returns to completing a degree, it might be reasonable to expect dropouts to be less satisfied than graduates; however, this is an empirical question.

Surveying dropouts would also provide better information on the counterfactual outcome for certain types of students. For example, are students who attend for-profit colleges happier than dropouts—and are both of these groups happier than high school graduates who did not attempt college? This is a particularly important policy question given the ongoing skirmishes between the U.S. Department of Education and the proprietary sector regarding gainful employment data.

Surveying people across the educational distribution would allow for more detailed analyses of the potential impacts of college by comparing adults who appear similar on observable characteristics (such as race, gender, and socioeconomic status) but received different levels of education. While these studies would not be causal, the results would certainly be of interest to researchers, policymakers, and the general public. I realize the Gallup Education poll exists in part to sell data to interested colleges, but the broader education community should be interested in what happens to students who did not complete college—or did not even enroll. Hopefully, future versions of the poll will include adults who did not complete college.

The Black Hole of PLUS Loan Outcomes

Much of the debate about improving federal higher education data quality has focused on whether a student unit record dataset is necessary in order to give students, their families, and policymakers the information they need in order to make better decisions. Last month’s release of College Blackout: How the Higher Education Lobby Fought to Keep Students in the Dark by Amy Laitinen and Clare McCann of the New America Foundation highlighted the potential role of the higher education lobby in opposing unit record data. However, privacy advocates note the concerns with these types of datasets—and these are concerns that policymakers must always keep in mind.

Colleges are already required as a condition of the Higher Education Act to report institutional-level data on some outcomes to the federal government, which are then typically made publicly available through the Integrated Postsecondary Education Data System (IPEDS). In what is an annoying quirk of the federal government’s data reporting systems, the best source for data on the amount of certain types of aid received (such as work-study or the Supplemental Educational Opportunity Grant) is the Office of Postsecondary Education’s website and is not available through IPEDS. Student loan default rates (for Stafford loans) are available on Federal Student Aid’s website, which is also not tied to IPEDS. The lack of a central database for all of these data sources is a pain for analysts (consider the technical appendix to my paper on campus-based aid programs), but it typically can be overcome with a mix of elbow grease and knowledge of the difference between UnitIDs and OPEIDs.
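The UnitID/OPEID join looks roughly like this. The IDs and values below are made up, and note the practical gotcha that OPEIDs are zero-padded strings that get mangled if read as integers:

```python
# IPEDS keys institutions by UNITID, while Federal Student Aid files use
# OPEID; the IPEDS directory file carries both, so it serves as a crosswalk.
ipeds = {
    100654: {"opeid": "00100200", "undergrad_enrollment": 4500},  # hypothetical row
}
fsa = {
    "00100200": {"default_rate": 0.113},  # hypothetical row
}

# Merge the FSA fields onto the IPEDS record via the OPEID crosswalk:
merged = {
    unitid: {**row, **fsa.get(row["opeid"], {})}
    for unitid, row in ipeds.items()
}
print(merged[100654]["default_rate"])  # 0.113
```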

Yet, until last week, we knew absolutely nothing about the outcomes for students and families who took out federal PLUS loans. These loans, which require a credit check for the parents of undergraduate students, have gained attention recently due to the federal government’s 2011 decision to tighten eligibility criteria in order to reduce default rates. This disproportionately affected enrollment at historically black colleges and universities, many of which are private and do not have large endowments that provide institutional aid funds. Some analysts, such as Rachel Fishman at New America, have called for PLUS loans to be severely curtailed or even eliminated.

The Department of Education provided a negotiated rulemaking committee with data on PLUS denial rates and default rates by institutional sector (public, private nonprofit, and for-profit) last week, marking the first time these data had ever been made public. The data were only provided after members of the committee complained about a lack of information on the proposals they were discussing. (The data are available here, under the pre-session 2 materials header.) The data on loan balances suggest that the average parent PLUS loan balance among borrowers at four-year private colleges is $27,443, compared to $19,491 at four-year publics and $18,133 at four-year for-profit institutions. Three-year default rates at for-profit colleges were 13.3% in fiscal year 2010, compared to 3.4% at private nonprofits and 3.1% at public institutions. And the total amount of outstanding PLUS loans (undergraduate and graduate students combined) is just over $100 billion, or roughly 10% of all student loan debt.

A piece in Thursday’s Inside Higher Ed quoted an HBCU president who noted that there was no reason to tighten loan criteria given the low default rates in the data. But the public has no idea what any individual college’s PLUS default rate is, since only broad sector-level data were released. The piece goes on to note that the Department of Education says institutional-level data are not available for PLUS loans, in part because there is no appeal process in place for colleges. This insulates programs that take in large amounts of PLUS funds but do not graduate those students, whose borrowers then default. Right now, there is no accountability whatsoever.

The Department of Education needs to release institutional-level PLUS loan data to improve transparency and accountability. However, it claims that these measures do not exist—a claim that borders on the absurd given the existence of the underlying records in the National Student Loan Data System and the Department’s demonstrated ability to calculate sector-level measures. ED’s response has been that colleges lack the ability to appeal the data, but this can be easily remedied. In the meantime, I hope that the higher education community uses the Freedom of Information Act to request these data—and that advocates are willing to go to court when ED says the data do not exist.

Should Payscale’s Earnings Data Be Trusted?

Despite the large amount of money spent on higher education, prospective students, their families, and the public have historically known very little about the earnings of students who attend college. This has started to change in recent years, as a few states (such as Virginia) began to publish earnings data for their graduates who stayed in state and the federal government publishes earnings data for certain programs through gainful employment rules. But this leaves out many public and private nonprofit institutions, and complete data are not available without a student unit record system.

As is often the case, the private sector steps in to try to fill the gap. Payscale.com has collected self-reported earnings data by college and major among a large number of bachelor’s degree recipients (those with a higher degree are excluded—the full methodology is here). Their 2014 “return on investment” report ranked colleges based on the best and worst dollar returns, with Harvey Mudd College at the top with a $1.1 million return over 20 years and Shaw University at the bottom with a return of negative $121,000.

Payscale’s data are self-reported earnings from individuals who happened to visit Payscale’s website and were willing to provide estimates of their annual earnings. It’s my strong suspicion that these self-reported earnings are substantially higher than those of the average bachelor’s degree recipient, and the estimates are often based on a relatively small number of students. For example, the estimates for my alma mater, Truman State University, are based on 251 graduates at a college that graduates about 1,000 students per year. Since many Truman students go on to get advanced degrees, probably only about 500 students per year would qualify for the Payscale sample. Yet just 102 students provided data within five years of graduation—about four percent of graduates who did not pursue further degrees.
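The four-percent figure follows from simple arithmetic; the class size and the advanced-degree share below are the rough estimates given above, not measured values:

```python
annual_graduates = 1000   # approximate Truman State graduating class size
eligible_share = 0.5      # rough share not pursuing advanced degrees (assumption)
years_covered = 5         # window: within five years of graduation
respondents = 102         # Payscale respondents in that window

eligible = annual_graduates * eligible_share * years_covered  # ~2,500 graduates
print(f"{100 * respondents / eligible:.1f}%")  # about 4%
```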

But is it still worth considering? Yes and no. I don’t put a lot of stock in the absolute earnings listed, since they’re likely biased upward and there are relatively few cases. Additionally, there is no adjustment for cost of living—which really helps colleges in expensive urban areas. But the relative positions of institutions with similar focuses in similar parts of the country are probably somewhat close to what complete data would say. If the self-reporting bias is similar, then controlling for cost of living and the composition of graduates could yield useful information.

I hope that Payscale can do a version of their ROI estimates taking cost of living into account, and try to explore whether their data are somewhat representative of a particular college’s bachelor’s degree recipients. Although I commend them for providing a useful service, I still recommend taking the dollar value of ROI estimates with a shaker of salt.

The 2014 Net Price Madness Tournament

It’s time for my second annual Net Price Madness Tournament, in which the colleges whose men’s basketball teams made the NCAA Division I tournament are ranked on net price in a bracket format. In last year’s Net Price Madness, North Carolina State, North Carolina A&T, Northwestern State (LA), and Wichita State were the regional winners for the lowest net price among students who received any financial aid in the 2011-12 academic year. And the Shockers did go on to reach the Final Four, so maybe this method has a tiny correlation with basketball success!

Here are the results for the 2014 Net Price Madness Tournament in a convenient spreadsheet that also includes winners for each game, net price by income level, percent Pell, and six-year graduation rates. The regional winners for 2014 are:

East: North Carolina Central University (14): $8,757 net price, 64% Pell, 43% grad rate

Midwest: Wichita State University (1): $8,645 net price, 36% Pell, 41% grad rate

South: University of New Mexico (7): $11,001 net price, 39% Pell, 46% grad rate

West: University of Louisiana-Lafayette (14): $5,891 net price, 35% Pell, 44% grad rate

And here is the full bracket:

[Figure: full 2014 Net Price Madness bracket]

Congratulations to these institutions, and a big raspberry to the nine colleges that charged a net price of over $20,000 to the typical student with household income below $30,000 per year. Feel free to use these data to inform your rooting interests!

UPDATE 3/17 Noon ET: Mark Huelsman of Demos drew my attention to the oddity that Wichita State’s net price for all students ($8,645) is far lower than the net price for each of the three lowest income brackets (roughly $12,500 to $13,500). I investigated the IPEDS data report from WSU and discovered that 706 of the 721 WSU first-year, full-time, in-state students receiving Title IV financial aid (listed as Group 4) were reported as having incomes below $30,000 in 2011-12; similar percentages existed for the previous two years.

The sample for the full net price number is somewhat different: it’s first-year, full-time, in-state students receiving any grant aid (including institutional aid; listed as Group 3). This sample has 902 students, 181 more than the previous one. Comparing net tuition revenue from the two groups, Group 4 had roughly $9.5 million in net revenue in 2011-12, while the larger Group 3 had $7.8 million. This is unusual, to say the least, and it is possible that one of the net price numbers listed in IPEDS is incorrect. I’m continuing to investigate this point.
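A back-of-the-envelope check on the per-student figures implied by the two groups is instructive. The revenue figures are the rough amounts from the IPEDS reports cited above, and net tuition revenue is not identical to net price (which also includes living costs), so treat this only as a rough consistency check:

```python
# Approximate net tuition revenue and sample sizes from the WSU IPEDS reports:
group4_net_revenue, group4_students = 9_500_000, 721   # Title IV aid recipients
group3_net_revenue, group3_students = 7_800_000, 902   # any grant aid recipients

# Per-student figures implied by each group:
print(round(group4_net_revenue / group4_students))  # ~13,176, near the $12,500-$13,500 bracket prices
print(round(group3_net_revenue / group3_students))  # ~8,647, near the $8,645 overall net price
```

Each group’s per-student revenue lines up with the net price reported for that group, which suggests the inconsistency lies in which students were assigned to which sample rather than in the division itself.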

Spring Admissions: Expanding Access or Skirting Accountability?

More than one in five first-year students at the University of Maryland now start their studies in the spring instead of the fall, according to this recent article by Nick Anderson in the Washington Post. This seems to be an unusually high percentage among colleges and universities, but the plan makes a lot of sense. Even at selective institutions, some students will leave at the end of the first semester, and more space opens up on campus after other students graduate, study abroad, or take on internships. It can be a way to maximize revenue by better utilizing facilities throughout the academic year.

However, the article also notes that the SAT scores of spring admits are lower at Maryland. Among students starting in spring 2015, the median score was roughly a 1210 (out of 1500), compared to about 1300 for the most recent available data for fall admits in 2012. These students’ test scores suggest that spring admits are well-qualified to succeed in college, even if they didn’t quite make the cut the first time around. (It’s much less realistic to expect high-SAT students to defer, given the other attractive options they likely have.) This suggests Maryland’s program may have a strong access component.

However, deferring admission to lower-SAT students could be done for other reasons. Currently, colleges only have to report their graduation rates for first-time, full-time students who enrolled in the fall semester to the federal government. (That’s one of the many flaws of the creaky Integrated Postsecondary Education Data System, and one that I would love to see fixed.) If these spring admits do graduate at lower rates, the public will never know. Additionally, many college rankings systems give colleges credit for being more selective. With the intense pressure to rise in the U.S. News rankings, even a small increase in SAT scores can be very important to colleges.

So is Maryland expanding access or trying to skirt accountability systems for a number of students? I would say it’s probably more of the former, but don’t discount the pressure to look good to the federal government and external rankings bodies. This practice is something to watch going forward, even though better federal data systems would reduce its effectiveness as a way of shaping a first-year class.