Improving Data on PhD Placements

Graduate students love to complain about the lack of accurate placement data for their programs' graduates. Some programs are accused of reporting outcomes only for students who landed tenure-track jobs, while others apparently have no information at all on what happened to their graduates. Not surprisingly, this frustrates prospective students trying to make an informed decision about where to pursue graduate studies.

An article in today’s Chronicle of Higher Education highlights the work of Dean Savage, a sociologist who has tracked the outcomes of CUNY sociology PhD recipients for decades. His work shows a wide range of paths for CUNY PhDs, many of whom have been successful outside tenure-track jobs. Tracking these students over their lifetimes is certainly a time-consuming job, but it should be much easier to determine the initial placements of doctoral degree recipients.

All students who complete doctoral degrees are required to complete the Survey of Earned Doctorates (SED), which is supported by the National Science Foundation and administered by the National Opinion Research Center. The SED asks a whole host of useful questions, such as where doctoral degree recipients earned their undergraduate degrees (something which I use in the Washington Monthly college rankings as a measure of research productivity) and the broad sector in which the degree recipient will be employed.

The utility of the SED could be improved by clearly asking degree recipients where their next job is located, along with their job title and academic department. The current survey asks about the broad sector of employment, but the most relevant response option for postgraduate plans is “have signed contract or made definite commitment to a ‘postdoc’ or other work.” Later questions do ask about the organization where the degree recipient will work, but there is no clear distinction between postdoctoral positions, temporary faculty positions, and tenure-track faculty positions. Nor is any information requested about the department in which the recipient will work.

My proposed changes to the SED are little more than tweaks in the grand scheme of things, but have the potential to provide much better data about where newly minted PhDs take academic or administrative positions. This still wouldn’t fix the lack of data on the substantial numbers of students who do not complete their PhDs, but it’s a start to providing better data at a reasonable cost using an already-existing survey instrument.

Is there anything else we should be asking about the placements of new doctoral recipients? Please let me know in the comments section.

“Bang for the Buck” and College Ratings

President Obama made headlines in the higher education world last week with a series of speeches about possible federal plans designed to bring down the cost of college. While the President made several interesting points (such as cutting law school from three years to two), the most interesting proposal to me was his plan to create a series of federal ratings based on whether colleges provide “good value” to students—and to tie funding to those ratings.

How could those ratings be constructed? As noted by Libby Nelson in Politico, the federal government plans to publish currently collected data on the net price of attendance (what students pay after taking grant aid into account), average borrowing amounts, and enrollment of Pell Grant recipients. Other measures could potentially be included, some of which are already collected but not readily available (graduation rates for Pell recipients) and others which would be brand new (let your imagination run wild).

Regular readers of this blog are probably aware of my work with Washington Monthly magazine’s annual college rankings. Last year was my first as the consulting methodologist, meaning that I collected and compiled the underlying data and created the rankings—including a new measure of cost-adjusted graduation rate performance. This measure seeks to reward colleges which do a good job serving and graduating students of modest economic means, a far cry from many prestige-based rankings.

The metrics in the Washington Monthly rankings are at least somewhat similar to those proposed by President Obama in his speeches. As a result, we bumped up the release of the new 2013 “bang for the buck” rankings to Thursday afternoon. These rankings reward colleges which performed well on four different metrics (a rough sketch of the screen follows the list):

  • Have a graduation rate of at least 50%.
  • Match or exceed their predicted graduation rate given student and institutional characteristics.
  • Have at least 20% of students receive Pell Grants (a measure of effort in enrolling low-income students).
  • Have a three-year student loan default rate of less than 10%.
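For concreteness, here is a minimal sketch of how such a four-part screen could be applied. The record fields and example numbers are invented stand-ins for this illustration, not the actual Washington Monthly variables or data.

```python
# Hypothetical illustration of the four-part "bang for the buck" screen.
from dataclasses import dataclass

@dataclass
class College:
    name: str
    grad_rate: float            # six-year graduation rate, 0-1
    predicted_grad_rate: float  # from a regression on student/institutional traits
    pell_share: float           # share of students receiving Pell Grants, 0-1
    default_rate: float         # three-year cohort default rate, 0-1

def passes_screen(c: College) -> bool:
    """True only if a college meets all four criteria in the list above."""
    return (c.grad_rate >= 0.50
            and c.grad_rate >= c.predicted_grad_rate
            and c.pell_share >= 0.20
            and c.default_rate < 0.10)

colleges = [
    College("Hypothetical State U", 0.62, 0.55, 0.34, 0.06),
    College("Hypothetical Elite U", 0.95, 0.93, 0.14, 0.01),  # fails the Pell test
]
print([c.name for c in colleges if passes_screen(c)])  # ['Hypothetical State U']
```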

Only one in five four-year colleges in America met all four of those criteria, and the result is a far different group of colleges than is normally celebrated. Colleges such as CUNY Baruch College and Cal State University-Fullerton ranked well, while most Ivy League institutions failed to make the list due to Pell Grant enrollment rates in the teens.

This work caught the eye of the media, and I was asked to appear on MSNBC’s “All In with Chris Hayes” on Friday night to discuss the rankings and their policy implications. Here is a link to the full segment, where I’m on with Matt Taibbi of Rolling Stone and well-known author Anya Kamenetz:

http://video.msnbc.msn.com/all-in-/52832257/

This was a fun experience, and now I can put the “As Seen on TV” label on my CV. (Right?) Seriously, though, stay tuned for the full Washington Monthly rankings coming out in the morning!

Financial Aid as a Paycheck?

President Obama is set to make a series of speeches this week addressing college affordability—a hot topic on college campuses as new students move into their dorm rooms. An article in this morning’s New York Times provides some highlights of the plan. While there are other interesting proposals, most notably tying funding to some measure of college success, I’m focusing this brief post on the idea to disburse Pell Grants throughout the semester—“aid like a paycheck.”

The goal of “aid like a paycheck” is to spread grant aid disbursements throughout the semester so students take ownership of their education. Sounds great, right? The problem is that the idea has only been tested at a small number of community colleges in low-tuition states such as California, where a student’s financial aid often exceeds the cost of attendance and there is “extra” aid to disburse. That situation doesn’t apply to the vast majority of students, particularly those at four-year schools. For students with unmet need, spreading out aid awards creates an even bigger financial gap at the beginning of the semester.
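To see why the sequencing matters, consider a stylized semester budget for a student whose aid falls short of costs. Every dollar figure below is invented purely for illustration.

```python
# Stylized example (all numbers hypothetical): tuition and books are due up
# front, but the Pell Grant arrives either as a lump sum or spread evenly
# across four months "like a paycheck."
upfront_costs = 4000    # tuition and books due at the start of the term
monthly_living = 800    # rent and food due each month
pell_grant = 2800       # total grant for the semester (less than total costs)

# Lump-sum disbursement: the full grant is available in month one.
month1_gap_lump = upfront_costs + monthly_living - pell_grant

# Paycheck-style disbursement: only a quarter of the grant arrives early.
month1_gap_paycheck = upfront_costs + monthly_living - pell_grant / 4

print(month1_gap_lump)      # 2000: the existing shortfall
print(month1_gap_paycheck)  # 4100.0: a much larger month-one hole
```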

In order for “aid like a paycheck” to work for the vast majority of students, we have to make other costs look like a monthly bill as well. If students still have to pay for tuition, books, and housing up front (or face hefty interest charges), this program will create a yawning financial gap. If colleges want to be accountable to students, perhaps they should bill students by the month for their courses—that way, dropped courses hurt the institution’s bottom line more than the student’s. This would delay funds coming into a college, which can mean a real loss of interest income given the large amounts of tuition revenue involved.

Before we try “aid like a paycheck” on a large scale, Mr. President, let’s try making colleges get their funds from students in that same way. And let’s also get some research on how it works for students whose financial need isn’t fully met by the Pell Grant. The feds have the power to try demonstration programs, and this would be worth a shot.

Yes, Student Characteristics Matter. But So Do Colleges.

It is no surprise to those in the higher education world that student characteristics and institutional resources are strongly associated with student outcomes. Colleges which attract academically elite students and have the ability to spend large sums of money on instruction and student support should be able to graduate more of their students than open-access, financially-strapped universities, even after holding factors such as teaching quality constant. But an article in today’s Inside Higher Ed shows that there is a great deal of interest in determining the correlation between inputs and outputs (such as graduation).

The article highlights two new studies that examine the relationship between inputs and outputs. The first, by the Department of Education’s Advisory Committee on Student Financial Assistance, breaks down graduation rates by the percentage of students who are Pell Grant recipients, per-student endowments, and ACT/SAT scores using IPEDS data. The second new study, by the president of Colorado Technical University, finds that four student characteristics (race, EFC, transfer credits, and full-time status) explain 74% of the variation in an unidentified for-profit college’s graduation rate. His conclusion is that “public [emphasis original] policy will not increase college graduates by focusing on institution characteristics.”

While these studies take different approaches (one using institutional-level data and the other using student-level data), they highlight the importance that student and institutional characteristics currently have in predicting student success rates. These studies are not novel or unique—they follow a series of papers in HCM Strategists’ Context for Success project in 2012 and even more work before that. I contributed a paper to the project (with Doug Harris at Tulane University) examining input-adjusted graduation rates using IPEDS data. We found R-squared values of approximately 0.74 using a range of student and institutional characteristics, although the predictive power varied by Carnegie classification. It is also worth noting that the ACSFA report calculated predicted graduation rates with an R-squared value of 0.80, but they control for factors (like expenditures and endowment) that are at least somewhat within an institution’s control and don’t allow for a look at cost-effectiveness.

This suggests the importance of taking a value-added approach to performance measurement. K-12 education is moving beyond rewarding schools for meeting raw benchmarks and toward a gain-score approach, and higher education needs to do the same. Higher education also needs to look at cost-adjusted models to examine cost-effectiveness, something which we do in the HCM paper and which I have done in the Washington Monthly college rankings (a new set of which will be out later this month).

However, even if a regression model explains 74% of the variation in graduation rates, the remaining variation can be attributed either to omitted variables (such as motivation) or to institutional actions. The article by the Colorado Technical University president takes exactly the wrong approach, saying that “student graduation may have little to do with institutional factors.” If his statement were accurate, we would expect colleges’ predicted graduation rates to equal their actual graduation rates. But, as anyone who has spent time on college campuses should know, institutional practices and policies can play an important role in retention and graduation. The 2012 Washington Monthly rankings included a predicted vs. actual graduation rate component. While Colorado Tech basically hit its predicted graduation rate of 25% (with an actual graduation rate one percentage point higher), other colleges outperformed their predictions given student and institutional characteristics. For example, San Diego State University and Rutgers University-Newark, among others, beat their predictions by more than ten percentage points.
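As a rough illustration of that predicted-versus-actual logic, here is a sketch using ordinary least squares on fabricated institution-level data. The two predictors stand in for the much richer set of IPEDS controls used in the actual studies, and none of the numbers are real.

```python
# Input-adjusted (predicted vs. actual) graduation rates on fabricated data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200
pell_share = rng.uniform(0.1, 0.6, n)   # hypothetical institutional inputs
avg_act = rng.uniform(18, 32, n)
grad_rate = 0.02 * avg_act - 0.3 * pell_share + rng.normal(0, 0.05, n)

X = sm.add_constant(np.column_stack([pell_share, avg_act]))
fit = sm.OLS(grad_rate, X).fit()

predicted = fit.predict(X)
value_added = grad_rate - predicted     # positive = beating the prediction
print(round(fit.rsquared, 2))           # plays the role of the ~0.74 above
print(value_added[:3])
```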

While incoming student characteristics do affect graduation rates (and I’m baffled by the amount of attention on this known fact), colleges’ actions do matter. Let’s highlight the colleges which appear to be doing a good job with their inputs (and at a reasonable price to students and taxpayers) and see what we can learn from them.

How Not to Rate the Worst Professors

I was surprised to come across an article from Yahoo! Finance claiming knowledge of the “25 Universities with the Worst Professors.” (Maybe I shouldn’t have been surprised, but that is another discussion for another day.) The top 25 list includes many technology and engineering-oriented institutions, as well as liberal arts colleges. I am particularly amused by the inclusion of my alma mater (Truman State University) at number 21, as well as my new institution starting next fall (Seton Hall University) at number 16. Additionally, 11 of the 25 universities are located in the Midwest, with none in the South.

This unusual distribution immediately led me to examine the methodology of the list, which comes from Forbes and CCAP’s annual college rankings. The worst-professors list is based on Rate My Professor, a website which allows students to rate their instructors on a variety of characteristics. For the rankings, the helpfulness and clarity measures are combined, with a partial control for a professor’s “easiness.”

I understand their rationale for using Rate My Professor, as it’s the only widespread source of information about faculty teaching performance. I’m not opposed to using Rate My Professor as part of this measure, but controlling for grades received and the course’s home discipline is essential. At many universities, science and engineering courses have much lower average grades, which may influence students’ perceptions of the professor. The same is true at certain liberal arts colleges.

The course’s home discipline is already included in the Rate My Professor data, and I recommend that Forbes and CCAP weight results by discipline in order to make more accurate comparisons across institutions. I would also push them to aggregate a representative sample of comments for each institution, so prospective students can learn what current students think beyond a Likert score.
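One simple way to implement that recommendation is to reweight each institution’s discipline-level averages to a common national mix, so that course-mix differences don’t drive the comparison. The mix and ratings below are invented for illustration.

```python
# Reweight an institution's mean ratings to a common discipline mix
# (all shares and ratings are hypothetical).
national_mix = {"engineering": 0.2, "humanities": 0.5, "science": 0.3}

# Mean instructor rating by discipline at one hypothetical institution.
institution_means = {"engineering": 3.1, "humanities": 4.0, "science": 3.4}

weighted = sum(national_mix[d] * institution_means[d] for d in national_mix)
print(round(weighted, 2))  # 3.64, now comparable across institutions
```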

Student course evaluations are not going away (much to the chagrin of some faculty members), and they may be used in institutional accountability systems in addition to their (very small) role in the tenure and promotion process. But like many of the larger college rankings, Forbes/CCAP’s work results in an incomplete comparison of colleges at best and a biased one at worst. (And I promise that I will work hard on my helpfulness and clarity measures next fall!)

The Higher Learning Commission’s Accreditation Gamble

Accrediting bodies play an important role in judging the quality (or at least the competency) of American colleges and universities. Six regional accreditors cover the majority of non-profit, non-religious postsecondary institutions, including the powerful Higher Learning Commission in the Midwest. The HLC recently informed Apollo Group, the owner of the University of Phoenix, that it may be placed on probation due to concerns about administrative and governance structures.

Part of Phoenix’s accreditation trouble may be due to a philosophical shift at the HLC toward emphasizing the public purposes of higher education. As noted in an Inside Higher Ed article on the topic, Sylvia Manning, president of the HLC, emphasized the priority that education serve as a public good. The new accrediting criteria include the following statement:

“The institution’s educational responsibilities take primacy over other purposes, such as generating financial returns for investors, contributing to a related or parent organization, or supporting external interests.”

This shift occurs in the midst of questions about the purposes of the current accreditation structure. While colleges must be accredited in order for their students to receive federal financial aid dollars, the federal government currently has no direct involvement in the accreditation structure. Accrediting bodies likewise focus on degree programs instead of individual courses, something which has also been questioned.

Given the current decentralized structure of accreditation, Phoenix could easily move to another of the main regional nonprofit accrediting bodies—or it could go through a body focusing on private colleges and universities. The latter would likely be easier for Phoenix, as it would have to answer to more like-minded critics. While these bodies are viewed as being less prestigious than the HLC, it is an open question whether students care about the accrediting body—as long as they can receive financial aid.

The Higher Learning Commission is taking a gamble with its move toward placing Phoenix on probation, which rests partially on the new criteria. It needs to carefully consider whether it is better to retain oversight of one of the nation’s largest and most powerful postsecondary institutions or to steer it toward a friendlier accrediting body. Traditional accrediting bodies should also consider the possibility that the federal government will get into the accreditation business if for-profits leave groups like the HLC. If the HLC chooses to focus on Phoenix’s corporate control instead of its academic competency, it could set off a chain reaction that ends with regional accreditors being replaced by federal oversight.

New Recommendations for Performance-Based Funding in Wisconsin

Performance-based funding for Wisconsin’s technical colleges is at the forefront of Governor Walker’s higher education budget for the next biennium. In previous blog posts (here, here, and here), I have briefly discussed some of the pros and cons of moving to a performance-based funding model for a diverse group of postsecondary institutions.

This week, Nick Hillman, Sara Goldrick-Rab, and I released a policy brief with recommendations for performance-based funding in Wisconsin through WISCAPE. In the brief, we discuss how performance-based funding has operated in other states, as well as recommendations for how to operate PBF in Wisconsin. Our key points are the following:

(1) Performance-based funding seeks to switch the focus from enrollment to completion.

(2) Successful performance-based funding starts small and is developed via collaboration.

(3) Colleges with different missions should have different performance metrics.

(4) Multiple measures of success are necessary to reduce the possibility of perverse incentives.

Wisconsin’s proposal appears to meet some of these key points, but concerns remain. My primary concern is the speed with which funding will shift to performance—from 10% in 2014-15 to 100% by 2019-20. This may not give colleges enough time to adjust their practices, so the timeline should be adjusted as needed.
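Those endpoints imply a steep ramp. The proposal specifies only the 10% start and the 100% finish, so the linear path sketched below is purely an assumption for illustration.

```python
# Hypothetical linear ramp between the proposal's two stated endpoints;
# the actual year-by-year path is not specified in the budget.
start, end, years = 0.10, 1.00, 6   # 2014-15 through 2019-20
step = (end - start) / (years - 1)
for i in range(years):
    print(f"20{14 + i}-{15 + i}: {start + i * step:.0%}")
# 2014-15: 10%, 2015-16: 28%, ..., 2019-20: 100%
```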

Technical Colleges Debate Tying Funding to Job Placement

Last week’s release of plans to tie state funding for technical colleges to performance measures has generated a great deal of discussion in advance of Wisconsin Governor Scott Walker’s budget address tomorrow evening. One of the most discussed portions of his plan (press release here) is his proposal to tie funding to job placement rates, particularly in high-demand fields. Most colleges seem to support the idea of getting better data on job placement rates, but using that measure in an accountability system has sparked controversy.

Madison Area Technical College came out last week in opposition to the Governor’s proposal, as covered by a recent article in the Capital Times. The article mentions comments by provost Terry Webb that job placement rates are partially influenced by factors outside the college’s control, such as job availability, location, and individual preferences. These concerns are certainly real, especially given the difficulty of tracking students who may leave the state in search of a great job opportunity.

However, Gateway Technical College came out in support of funding based on job placement rates, according to an article in the Racine Journal Times (hat tip to Noel Radomski for the link). Gateway president Bryan Albrecht supports the plan on account of the college’s high job placement rate among graduates (85% among those who responded to a job placement survey with a 78% response rate, although only 55% were employed in their field of study). The college seems confident in its ability to change programs as needed in order to keep up with labor market demands, even in the face of a difficult economy in southeast Wisconsin.
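Those figures invite a quick sensitivity check. The bounds below follow mechanically from the reported numbers, assuming the 55% in-field figure refers to all survey respondents.

```python
# Sensitivity check on the reported Gateway figures.
response_rate = 0.78               # share of graduates answering the survey
placed_among_respondents = 0.85    # employed, among respondents
in_field_among_respondents = 0.55  # employed in field, among respondents

# Bounds on the placement rate across ALL graduates: the floor assumes every
# non-respondent is unemployed; the ceiling assumes every one is employed.
floor = response_rate * placed_among_respondents
ceiling = floor + (1 - response_rate)
print(f"{floor:.0%} to {ceiling:.0%}")  # 66% to 88%

# Share of all graduates known to be employed in their field of study.
print(f"{response_rate * in_field_among_respondents:.0%}")  # 43%
```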

The differing reactions of these two technical colleges show the difficulty of developing a performance-based funding system which works for all stakeholders. Madison College, along with three other technical colleges in the state, has liberal arts transfer programs with University of Wisconsin System institutions. These students may graduate with an associate’s degree and not immediately enter the labor market, or may even transfer before completing the degree. The funding system, which will be jointly developed by the Wisconsin Technical College System and the state’s powerful Department of Administration, should keep those programs in mind so as not to unfairly penalize programs with dual vocational/transfer missions.

More Proposed Financial Aid Reforms

The past few months have been an exciting time for financial aid researchers, as many reports proposing changes in federal financial aid policies and practices have been released as a part of the Bill and Melinda Gates Foundation’s Reimagining Aid Design and Delivery (RADD) project. The most recent proposal comes from the Education Policy Program at the New America Foundation, a left-of-center Washington think tank. Their proposal (summary here, full .pdf here) would dramatically shift federal priorities in student financial aid—by prioritizing the federal Pell Grant over all other types of aid and changing loan repayment options—without creating any additional costs to the government. Below, I detail some of the key proposals and offer my comments.

Pell Grant program

(1) Shift the program from discretionary spending to an entitlement. I’m torn over this proposal. The goal is to guarantee that funding will be present for students in order to provide more certainty in the college planning process (a goal in my work), but moving more items to the entitlement side of the ledger makes cutting spending in any meaningful way exceedingly difficult. A potential compromise would be to authorize spending several years in advance, but not lock us into a program for generations to come.

(2) Limit Pell eligibility to 125% of program length (five years for a four-year college and three years for a two-year college). Currently, students are allowed 12 full-time-equivalent semesters of Pell eligibility, which can be used through the bachelor’s degree. This means that students who only seek to earn an associate’s degree can use the Pell for six years in a two-year program. This can safely be cut back (to three years, perhaps), but I’m not sure if cutting all the way back to 125% of stated program length is ideal. I would be concerned about students who can’t quite make it across the finish line financially.

(3) Create institutional incentives to enroll and graduate Pell recipients. New America has several prongs in this policy, including bonuses for colleges which enroll and graduate large numbers of Pell recipients. But the most interesting part is a proposed requirement that colleges which enroll few Pell recipients, charge Pell recipients high net prices, and hold substantial endowments would have to provide matching funds in order for their students to be Pell-eligible. I think this policy has potential and doesn’t punish colleges for actions they can’t control—unlike other proposals, which have sought to tie Pell funding for public and private colleges to state appropriations.

Student loans

(1) Switch all students to income-based repayment of loans. This would reduce default rates and simplify financial aid, but it has the potential to let students attending expensive colleges off the hook. New America shares my concern on this, but switching to IBR could still have substantial upfront costs (which would later be repaid).

(2) Set student loan interest rates at government borrowing costs plus three percentage points. This proposal should result in a revenue-neutral student loan program (after accounting for defaults) and stop the games of reauthorizing artificially low interest rates for political gain. Loan rates would be fixed for each cohort of students but would vary across incoming cohorts (see the sketch after this list).

(3) Allow colleges to lower federal loan limits “to discourage excessive borrowing.” I’m concerned about this part of the proposal, at least for undergraduate students. Loan limits are currently fairly modest, and students should have the right to borrow enough money to attend college, whether or not the college agrees.
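Here is a brief sketch of the cohort-based pricing in point (2): each entering cohort’s rate is set once, at that year’s government borrowing cost plus three percentage points, and then fixed for the life of the loan. The borrowing costs below are invented.

```python
# Cohort-fixed loan pricing with hypothetical government borrowing costs.
hypothetical_gov_rates = {2011: 0.021, 2012: 0.018, 2013: 0.026}
cohort_rates = {year: round(rate + 0.03, 3)
                for year, rate in hypothetical_gov_rates.items()}
print(cohort_rates)  # {2011: 0.051, 2012: 0.048, 2013: 0.056}
```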

Other key points

(1) Pay for the additional Pell expenditures by cutting education tax credits, savings plans, and student loan interest deductions. This is a common call by financial aid researchers, and not just because academia tilts heavily to the left. Economic theory would suggest that plans to reduce the cost of college through grants should work as well as credits and deductions, but this assumes that students and their families fully account for the tax benefits in their decisionmaking and that the students who take up these programs are on the margin of attending college. Neither appears to be true. An additional tax deduction for being a student would likely be more effective than the current credit system.

(2) Require better data systems and consumer information. I’m fully on board with getting better data systems so researchers can finally figure out whether financial aid works and student outcomes can be better tracked across colleges. I’m a little more concerned about some of the consumer information measures, as colleges should have the ability to tailor materials somewhat.

(3) Create publicly available accountability standards. Gainful employment, in which for-profit colleges are examined based on job placement rates, could be a model for extending some sort of accountability to all colleges receiving federal funds. Graduation rates, earnings, and other measures could be used—or at the very least, the information could be made public to students, their families, and policymakers.

I don’t agree with everything that New America suggests in their policy proposals, but many of the suggestions would help improve financial aid delivery and our ability to examine whether programs work for students. To me, that is the mark of a successful proposal that could at least partially be adopted by Congress.

Another Commission on Improving Graduation Rates

College leaders and policymakers are rightly concerned about the percentage of incoming students who graduate in a reasonable period of time. Although there have been numerous reports and commissions at the university, state, and national level to improve college completion rates, about the same percentage of incoming students graduate college now as a decade ago. This spurred the creation of the National Commission on Higher Education Attainment, a group of college presidents from various types of public and private nonprofit colleges and universities. This group released their report on improving graduation rates today, which offers few new suggestions and repeats many of the same concerns of past commissions.

The report made the following recommendations, with my comments below:

Recommendation 1: Change campus culture to boost student success.

We’ve heard this one before, to say the least. The problem is that few campus-level innovations have proven “scalable”—able to expand to other colleges with the same results. Other programs appear promising but have never been rigorously evaluated, or carry a hefty price tag. Rigorous evaluation is essential to determine what we can learn from other colleges’ apparent successes.

Recommendation 2: Improve cost-effectiveness and quality.

In theory, this sounds great—and many of the recommendations sound reasonable. But policymakers and college leaders should be concerned about any potential cost savings resulting in a lower-quality education. A slightly less personalized education for a lower price may be a worthwhile tradeoff and pass a cost-effectiveness test, but these concerns should be addressed.

A bigger concern, not addressed in the report, is the actual cost of teaching a given course. First-year students tend to subsidize upper-level undergraduates, and all undergraduates tend to subsidize doctoral students. Much more research needs to be done on the costs of individual courses in order to provide lower-cost offerings to certain groups of students.

Recommendation 3: Make better use of data to boost success.

The commission calls for better use of institutional-level data to identify at-risk students and keep students on track to graduation. They call for more students to be included in the federal IPEDS dataset, which currently only tracks first-time, full-time, traditional-age students at their first institution of attendance. While this would be an improvement, I would like to see a pilot test of a student-level dataset instead of an institutional-level dataset. This would be much better for identifying student success patterns for groups with a lower probability of success.


The report also had a few notable omissions. First, the decision to exclude leaders of for-profit colleges is troubling. While many for-profit colleges have low completion rates, their cost structure (in terms of tracking per-student expenditures) is worth examining, and they disproportionately serve at-risk students. There is no reason to leave out an important, if controversial, sector of higher education. Second, the typical passage about declining public support for higher education (on a per-student basis) was present. While it might make college presidents feel good, any request for additional funding in this political and economic climate needs to be more closely tied to improving college completion rates. Finally, little attention was paid to the different sectors of higher education sharing best practices, despite their often symbiotic relationship.

I don’t expect more than a few months to go by before the next commission issues a very similar report to this one. Stakeholders in the higher education arena need to think of how potential success stories can actually be brought to scale to benefit a meaningful number of students.