I am pleased to announce the release of my newest policy brief, “Moving Forward with Federal College Ratings: Goals, Metrics, and Recommendations” through my friends at the Wisconsin Center for the Advancement of Postsecondary Education (WISCAPE). In the brief, I outline the likely goals of the Obama Administration’s proposed Postsecondary Institution Ratings System (PIRS), discuss some potential outcome measures, and provide recommendations for a fair and effective ratings system given available data. I welcome your comments on the brief!
The Sunday New York Times included an op-ed by Nicholas Kristof titled “Professors, We Need You!” In this piece, Kristof argued that the vast majority of faculty do a poor job of connecting with the media and policymakers and thus fail to communicate the importance of their work beyond the proverbial ivory tower. Perhaps the most damning statement in the piece is Kristof’s assertion that “there are, I think, fewer public intellectuals on American university campuses today than a generation ago.”
Some of Kristof’s statements about the disincentives to public engagement are certainly true, at least for some faculty at some institutions. Tenure-track faculty are often judged by the number of peer-reviewed publications in top journals, at the expense of public service and publishing in open-access venues. The increased specialization of many faculty members also makes communicating with the public more difficult due to the often technical nature of our work. Faculty who are not on the tenure track face an additional set of concerns in engaging with the public due to their often unstable employment situations.
With those concerns being noted, I think that Kristof is providing a somewhat misguided view of faculty engagement. Some (but not enough) academics, regardless of their employment situation, do make the extra effort to be public as well as private intellectuals. (If you’re reading this blog post, I’ve succeeded to at least some extent.) The Internet lit up with complaints from academics about Kristof’s take, which are well-summarized in a blog post by Chuck Pearson, an associate professor at Virginia Intermont College. He also created the #engagedacademics hashtag on Twitter, which is worth a look.
While I would love to see elite media outlets like the New York Times reach out beyond their usual list of sources at the most prestigious institutions, I don’t see that as tremendously likely to happen. So what can academics do in order to get their work out to policymakers and the media? Here are a few suggestions based on my experiences, which have included a decent amount of media coverage for a first-year assistant professor:
1. Work on cultivating a public presence. Academics who are serious about being public intellectuals should work to develop a strong public presence. If your institution supports a professional website through its faculty directory, be sure to set one up. Otherwise, use Twitter, Facebook, or blogging to help create connections with other academics and the general public. One word of caution: if you have strong opinions on other topics, consider keeping separate personal and professional accounts.
2. Try to reach out to journalists. Most journalists are available via social media, and some of them are more than willing to engage with academics doing work of interest to their readers. Providing useful information to journalists and responding to their tweets can result in being their source for articles. Help a Reporter Out (HARO), which sends out regular e-mails about journalists seeking sources on certain topics, is a good resource for academics in some disciplines. I have used HARO to get several interviews in various media outlets regarding financial aid questions.
3. Work through professional associations and groups. Academics who belong to professional associations can potentially use the association’s connections to advance their work. I am encouraged by associations like the American Educational Research Association, which highlights particularly relevant papers through its media outreach efforts. Another option is to connect with other academics with similar goals. An example of this is the Scholars Strategy Network, a network of “progressive-minded citizens” working to get their research out to the public.
4. Don’t forget your campus resources. If your college or university has a media relations person or staff, make sure to reach out to them as soon as possible. This may not be appropriate for all research topics, but colleges tend to like to highlight faculty members’ research—particularly at smaller institutions. The media relations staff can potentially help with messaging and making connections.
While Kristof’s piece overstates the problem that faculty face in being viewed as public intellectuals, it is a worthwhile wakeup call for us to step up our efforts at public engagement. Perhaps Kristof will turn his op-ed column over to some academics who are engaged with the public to highlight some successful examples?
[UPDATE: Thanks to The Chronicle of Higher Education for linking to this piece. Readers, I would love to get your comments on my post and your suggestions on how to engage the media and public!]
In what will come as a surprise to few observers, much of the higher education community isn’t terribly fond of President Obama’s plan to develop a college ratings system for the 2015-16 academic year. An example of this is a recently released Inside Higher Ed/Gallup survey of college provosts and chief academic officers. Only a small percentage of the 829 individuals who returned surveys were supportive of the ratings and thought they would be effective, as shown below:
- 12% of provosts agree the ratings will help families make better comparisons across institutions.
- 12% of provosts agree the ratings will reflect their own college’s strengths.
- Just 9% agree the ratings will accurately reflect their own college’s weaknesses.
There is some variation in support by type of college. Provosts at for-profit institutions and public research universities tended to offer more support, while those at private nonprofit institutions were almost unanimous in opposition. But regardless of whether provosts like the idea of ratings, the plan seems to be full steam ahead.
The Association of Public and Land-Grant Universities (APLU) took a productive step in the ratings conversation by releasing their own plan for accountability and cost-effectiveness. This plan centers on three components that could be used to allocate financial aid to colleges: risk-adjusted retention and graduation rates, employment/graduate degree rates, and default/loan repayment rates. Under APLU’s proposal, colleges could fall into one of three groups: a top tier that receives bonus Title IV funds, a middle tier that is held harmless, and a bottom tier that loses some or all Title IV funds.
To me, that sounds like a ratings system. But APLU took care not to call their plan a ratings system, and viewed the Administration’s plans as being “extremely difficult to structure.” It seems the phrase “college ratings” has become toxic; rather than call for a simplified set of ratings, APLU discussed the use of “performance tiers.” This sounds a little like the Common Core debate in K-12 education, in which some states have considered renaming the standards in an attempt to reduce opposition.
It will be interesting to see how the discussion on college ratings moves forward over the next several weeks, particularly as more associations either offer their plans or decry the entire idea. The technical ratings symposium previously scheduled for January 22 will now occur on February 6 on account of snow, and I’ll be presenting my thoughts on how to develop a ratings system for postsecondary education. I’ll post my presentation on this blog at that time.
Yesterday, I put out my top-ten list of higher education policy and finance issues from 2013. And today, I’m back with a list of not-top-ten events from the year (big thanks to Justin Chase Brown for inspiring me to write this post). These are events that left me shaking my head in disbelief or wondering how someone could fail so dramatically.
(Did I miss anything? Start the discussion below!)
10. Monsters University isn’t real. The higher education community was abuzz this summer with the premiere of Pixar’s newest movie about one of the few universities outside Fear Tech specializing in scaring studies. The Monsters University website is quite good, and as Jens Larson at U of Admissions Marketing notes, it’s hard to distinguish from many Title IV-participating institutions. I’ll use this blog post to announce my willingness to give a lecture or two at Monsters University. (As an aside, since the two main characters didn’t graduate, their post-college success may not help MU’s scores in a college rating system.)
9. Brent Musburger set men back at least five decades in the course of 30 seconds. His public ogling of the girlfriend of Alabama quarterback A.J. McCarron during January’s BCS championship game instantly became a YouTube sensation. Musburger shouldn’t have listened to his partner in The Waterboy, Dan Fouts, who urged him to not hold anything back in the last game of the season. McCarron, on the other hand, is preparing to play Oklahoma in the Sugar Bowl on January 2.
8. Rankings and ratings are not the same thing. While college leaders tend not to like the Obama Administration’s proposed Postsecondary Institution Ratings System, it is important to emphasize the difference between rankings and ratings. Rankings assign unique values to each institution (like the college football or basketball polls), while ratings lump colleges into broad categories (think A-F grades). Maybe since I work on college rankings, I’m particularly annoyed by the confusion. In any case, it’s enough to make my list.
7. Mooooove over: The College Board has another rough year. This follows a rough 2012 for the publishers of the SAT, as more students took the ACT than the SAT for the first time last year. But in 2013, the redesign of the SAT got pushed back from 2015 to 2016, giving the ACT more time to gain market share. The College Board followed that up with a head-scratching example of “brand-ing,” passing out millions of cow stickers to students taking the PSAT. If these weren’t enough, the College Board also runs the CSS Profile, a supplemental (and not free) application for financial aid required by many expensive institutions. Rachel Fishman at New America has written extensively about the concerns of the Profile.
6. Gordon Gee is the most interesting man in higher education. The well-traveled university president began 2013 leading Ohio State University, but left the post this summer after his 2012 comments disparaging Notre Dame, Catholic priests, and the ability of the Southeastern Conference to read came to light. Yet, he and his large bowtie collection will be heading to West Virginia University this spring as he assumes the role of interim president. There is still no word if the Little Sisters of the Poor will show up on WVU’s 2014 football schedule.
5. Rate My Professor is a lousy measure of institutional teaching quality. I’m not going to fully dismiss Rate My Professor, as I do believe it can be correlated with an individual professor’s teaching quality. But a Yahoo! Finance piece claiming to have knowledge of the 25 colleges with the worst professors crosses the boundary into the absurd. I quickly wrote a response to that piece, noting that controlling for a student’s grade and the difficulty of the course is essential in order to isolate teaching quality. This was by far my most-viewed blog post of 2013.
4. Elizabeth Warren’s interest rate follies. The Democratic Senator from Massachusetts became even more of a progressive darling this spring when she announced a plan to tie student loan interest rates to the Federal Reserve’s overnight borrowing rate—0.75%. Unfortunately, this plan made no sense on several dimensions. While overnight borrowing has nearly no risk, student loans (over a ten-year period) have considerable risk. Additionally, if interest rates were set this low, money would have to come from somewhere else. I would much rather see the subsidy go upfront to students through larger Pell Grants than through lower interest payments after leaving college. Fortunately, Congress listened to smart people like Jason Delisle at New America and her plan went nowhere.
3. The Common Application fails early applicants. The Common Application, used by a substantial number of elite colleges, did not work for some students applying in October and November. The culprit was new software that malfunctioned, combined with the decision not to keep the previous version available as a fallback. Although this didn’t affect the vast majority of students, who aspire to attend less-selective institutions, it certainly got the chattering classes talking.
2. The federal government shut down and budget games ensued all year long. The constant partisan battle culminated with a sixteen-day shutdown in October, bringing much of the Department of Education to a screeching halt. While the research community used Twitter to trade downloaded copies of IPEDS data and government reports, other disruptions were more substantial. 2013 also featured sequestration of some education spending, although it looks like the budget process might return to regular order for the next two years.
1. Georgetown Law finds a way to stick taxpayers with the entire cost of law school. It is no secret that law school is an expensive proposition, with six-figure debt burdens becoming the norm at many institutions. But some of the loans can be forgiven if students pursue public service careers for a decade, a program that was designed to help underpaid and overworked folks like public defenders or prosecuting attorneys.
Georgetown’s Loan Repayment Assistance Program advertises that “public interest borrowers might not pay a single penny on their loans—ever!” To do this, the law school increased tuition to cover the cost of 10 years’ worth of loan payments under income-based repayment for students making under $75,000 per year. Students take out Grad PLUS loans to fund this upfront, but never have to pay a dime of those loans back as Georgetown makes the payments. Jason Delisle and Alex Holt, who busted this scheme wide open this summer, estimate that students will have over $150,000 in loans forgiven—and put on the backs of taxpayers. Although Georgetown tries to defend the practice as being good for society, it is extremely hard to make that argument.
Honorable mentions: #Karma, lousy attacks on performance-based funding research, financial stability of athletics at Rutgers and Maryland, and parking at 98% of campuses.
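Since item 8 turned on the difference between rankings and ratings, here is a minimal sketch of the distinction, with entirely made-up college names, scores, and grade cutoffs:

```python
# The same hypothetical scores produce a unique ordinal slot for each
# college (a ranking) but lump colleges into broad grades (a rating).
# All names, scores, and cutoffs are illustrative.

scores = {"College A": 91, "College B": 78, "College C": 78, "College D": 55}

# Ranking: every institution gets its own position, ties notwithstanding.
ranking = sorted(scores, key=scores.get, reverse=True)

def rating(score):
    """Map a numeric score onto broad A-F style categories (middle grades omitted)."""
    return "A" if score >= 90 else "B" if score >= 75 else "F"

ratings = {college: rating(s) for college, s in scores.items()}

print(ranking)  # four distinct slots, even for the tied colleges
print(ratings)  # College B and College C land in the same broad category
```

Note how the rating preserves the tie between College B and College C that the ranking is forced to break arbitrarily.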
Performance-based funding (PBF), in which at least a small portion of state higher education appropriations are tied to outcomes, is a hot political topic in many states. According to the National Conference of State Legislatures and work by Janice Friedel and others, 22 states have PBF in place, seven more are transitioning to PBF, and ten more have discussed a switch.
The theory of PBF is simple: if colleges are incentivized to focus on improving student retention and graduation rates, they will redirect effort and funds from other areas to do so. PBF should work if two conditions hold:
(1) Colleges must currently be using their resources in ways that do not strongly correlate with student success, a point of contention with many institutions. If colleges are already operating in a way that maximizes student success, then PBF will not have an impact. PBF could also have negative effects if colleges end up using resources less effectively than they currently are.
(2) The expected funding tied to performance must be larger than the expected cost of changing institutional practices. Most state PBF systems currently tie only small amounts of state appropriations to outcomes, which could leave the cost of making changes larger than the benefits. Colleges also need to be convinced that PBF systems will be around for the long run instead of until the next governor ends the plan or state budget crises cut any funds for PBF. Otherwise, they may choose to wait out the current PBF system and not make any changes. Research by Kevin Dougherty and colleagues through the Community College Research Center highlights the unstable nature of many PBF systems.
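Condition (2) is, at bottom, an expected-value comparison, which can be sketched in a few lines; every dollar figure and probability below is hypothetical:

```python
# Rough expected-value sketch of condition (2): a college changes its
# practices only if the expected performance funding exceeds the cost
# of reform. All numbers are hypothetical illustrations.

def worth_changing(pbf_funding, p_program_survives, cost_of_changes):
    """Return True if expected PBF revenue exceeds the cost of reform."""
    expected_revenue = pbf_funding * p_program_survives
    return expected_revenue > cost_of_changes

# A small pot of PBF money that may vanish with the next governor...
print(worth_changing(pbf_funding=500_000, p_program_survives=0.5,
                     cost_of_changes=400_000))   # 250k expected < 400k cost: False
# ...versus a larger, more durable commitment.
print(worth_changing(pbf_funding=2_000_000, p_program_survives=0.9,
                     cost_of_changes=400_000))   # 1.8M expected > 400k cost: True
```

The survival probability does much of the work here: even generous PBF dollars may not move colleges that expect the program to disappear before paying out.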
For these reasons, the expected impacts of state PBF plans on student outcomes may not be positive. A recent WISCAPE policy brief by David Tandberg, an assistant professor at Florida State University, and Nicholas Hillman, an assistant professor at the University of Wisconsin-Madison, examines whether PBF plans appear to affect the number of associate’s and bachelor’s degrees awarded by institutions in affected states. Their primary findings are that although some states had statistically significant gains in degrees awarded (four at the four-year level and four at the two-year level), other states had significant declines (four at the four-year level and five at the two-year level). Moreover, PBF was most effective in inducing additional degree completions in states with long-running programs.
The general consensus in the research community is that more work needs to be done to understand the effects of state performance-based funding policies on student outcomes. PBF policies differ considerably by state, and it is too early to evaluate the impact of policies on states that have recently adopted the systems.
For these reasons, I was particularly excited to read the Inside Higher Ed piece by Nancy Shulock and Martha Snyder entitled, “Don’t Dismiss Performance Funding,” in response to Tandberg and Hillman’s policy brief. Shulock and Snyder are well-known individuals in the policy community and work for groups with significant PBF experience. However, their one-sided look at the research and cavalier assumptions about the authors’ motives upset me to the point that writing this response became necessary.
First of all, ad hominem attacks about the motives of well-respected researchers should never be a part of a published piece, regardless of the audience. Shulock and Snyder’s reference to the authors’ “surprising lack of curiosity about their own findings” is both an unfair personal attack and untrue. Tandberg and Hillman not only talk about the eight states with some positive impacts, they also discuss the nine states with negative impacts and a larger number of states with no statistically significant effects. Yet Shulock and Snyder do not bother mentioning the states with negative effects in their piece.
Shulock and Snyder are quite willing to attack Tandberg and Hillman for a perceived lack of complexity in their statistical model, particularly regarding their lack of controls for “realism and complexities.” In the academic community, criticisms like this are usually followed up with suggestions on how to improve the model given available data. Yet they fail to do so.
It is also unusual to see a short policy brief like this receive such a great degree of criticism, particularly when the findings are null, the methodology is not under serious question, and the authors are assistant professors. As a new assistant professor myself, I hope that this sort of criticism does not deter untenured faculty and graduate students from pursuing research in policy-relevant fields.
I teach higher education finance to graduate students, and one of the topics this semester was performance-based funding and accountability policy. If Shulock and Snyder submitted their essay for my class, I would ask for a series of revisions before the end of the semester. They need to provide empirical evidence in support of their position and to accurately describe the work done by Tandberg and Hillman, who deserve to have their research fairly characterized in the public sphere.
It is painfully obvious to students, their families, and financial aid administrators alike that the current system of determining federal financial aid eligibility is incredibly complex and time-consuming. Although there should be broad support for changes to the financial aid system, any progress has been halting at best. I have devoted much of my time to researching and discussing potential changes to the financial aid system. Below is some of my work, going from relatively minor to major changes.
I’ve been working on an ongoing study with the National Association of Student Financial Aid Administrators examining the extent to which students’ financial aid packages would change if the FAFSA calculations used income data from one year earlier than is currently used (the “prior-prior year”). Although a full report from this study won’t be out until sometime next month, here is a nice summary of the work from the Chronicle of Higher Education. The key point from this work is that, since family resources don’t change that much for students with the greatest financial need, students could file the FAFSA several months earlier using prior-prior year income data without a substantial change in aid targeting.
Under a prior-prior year system, students would still have to file the FAFSA each year. Given the fact that many students don’t see that much income volatility, there is a case to be made that students should only have to file the FAFSA once—at the beginning of college—unless their family or financial circumstances change by a considerable margin. In a piece hot off the virtual presses at the Chronicle, Sara Goldrick-Rab and I discuss why it would be better for many students to only have to file the FAFSA once. I would like to know more about the costs and benefits of such a program (weighing the benefits of reduced complexity and administrative compliance costs versus the likelihood of higher aid spending), but the net fiscal cost is likely to be small or even positive.
So let’s take this one step further. Do we even need to have all students file the FAFSA? Sara and I have looked at the possibility of automatically granting students the maximum Pell Grant if anyone in their family qualifies for means-tested benefits (primarily free and reduced price lunches). We detail the results of our fiscal analysis and simulation in an Institute for Research on Poverty working paper, where we find that such a program is likely to remain reasonably well targeted and pass a cost-benefit test in the long run.
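The eligibility rule Sara and I analyze is simple enough to state in a few lines of code; the benefit list and dollar amount below are placeholders rather than the exact parameters in our paper:

```python
# Sketch of the proposed rule: award the maximum Pell Grant automatically
# when anyone in the student's family receives a means-tested benefit.
# The benefit set and award amount are illustrative placeholders.

MAX_PELL = 5645  # approximate 2013-14 maximum Pell Grant; check current figures
MEANS_TESTED = {"free_reduced_lunch", "snap", "tanf", "ssi"}

def auto_pell(family_benefits):
    """Return the automatic Pell award under the proposed rule (0 otherwise)."""
    if MEANS_TESTED & set(family_benefits):
        return MAX_PELL
    return 0

print(auto_pell(["free_reduced_lunch"]))  # maximum award, no FAFSA needed
print(auto_pell([]))                      # no automatic award
```

The appeal of a rule like this is that eligibility piggybacks on means tests the family has already passed, so no separate aid application is needed at all.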
There is a broad menu of options available to simplify the FAFSA, from giving students more time to complete the form to getting rid of it altogether. Let’s talk more about these options (plus many more) and actually get something done that can help all stakeholders in the higher education arena.
This blog has been fairly quiet through the month of April, a notable difference from my goal of writing about two posts per week. While I greatly enjoy being able to write my thoughts on timely issues in the higher education world, there are times when my day job doesn’t readily allow the time necessary to think through and write a post—let alone keep up with the news. But I do want to take a few minutes to share the reasons why I’ve been so busy, as well as why May will likely be a fairly slow month on this blog.
First of all, I’m preparing to defend my dissertation (three essays on higher education policy) toward the end of next week. The last few weeks have been fairly frantic as I’ve made substantial changes to two chapters before I sent them to my committee last week. Although there will certainly be a lot of changes required after my defense, it feels great to be ready to defend. I will be happy to share the dissertation chapters with anyone who is interested after final revisions have been made.
At the end of this week, I am flying to California to give a presentation at the annual Education Writers’ Association seminar at Stanford University. I was asked to give a talk on my research in the area of input-adjusted metrics in measuring institutional effectiveness, and particularly how adjusting for cost changes the ordering of institutions. This talk will be in front of a large group of journalists who cover education on a regular basis, which is a neat opportunity.
Finally, on the teaching front, I am giving my final lecture of the semester tomorrow on accountability and performance measures to a mixed undergraduate/grad student class on debates in higher education policy. I’ve really enjoyed giving several previous lectures, and this one has particular meaning to me as it is something that is both very policy-relevant and fun to teach.
I hope to get a post or two up sometime in the next two weeks, so please send along any ideas that you would like for me to explore in future posts. Until then, it’s back to the fun world of cleaning and coding administrative datasets!
Most people would generally consider a student getting money from his or her parents while in college to be a good thing—after all, most traditional-age college students tend to have few resources of their own and additional money from Mom and Dad might help students work fewer hours (generally considered a good thing). But a new paper in the American Sociological Review by Laura Hamilton, an assistant professor of sociology at the University of California-Merced, challenges this assumption. In a paper titled “More Is More or More Is Less? Parental Financial Investments During College” (abstract here), she finds that parental financial assistance increases the likelihood of graduation, but is associated with lower student GPAs.
As a sociologist, Hamilton came to the project with the perspective that more financial resources are a good thing for a student due to the mere availability of resources and social capital. I don’t start from that perspective—and instead look at what students can do with the available funds. But I am also concerned that no-strings-attached gifts from parents might not be a good thing, since they may lack the performance requirements of merit-based financial aid. Additionally, the need for additional funds might reflect the inability of a student from a middle- to upper-income family to secure merit-based aid.
Hamilton uses two old, workhorse datasets in her analysis—the Baccalaureate and Beyond Study (B&B) of students who graduated in 1993 and the Beginning Postsecondary Students Study (BPS) of students who began college in 1990. She uses the B&B to focus on cumulative GPA at graduation as an outcome, which has two main limitations: we learn nothing about the relationship between parental assistance and dropout, or about changes in college major that may be associated with GPA. Because of that, she uses the BPS to look at graduation rates. Neither dataset is perfect or free of causality concerns, but they are not a bad starting point (the datasets have to be appropriate to get into a top-tier journal like ASR).
The positive relationship between parental assistance and graduation rates won’t raise many eyebrows, but her claim that, among students who reach graduation, those with higher levels of parental assistance have lower GPAs is more controversial. My biggest concern with the article is that it appears that more help from parents allows some marginal students to stay in school who otherwise would not have appeared in the dataset. If some of the 2.0-GPA students with parental assistance would have dropped out, there may not be differences in the GPAs of students who successfully completed college. Because of this, I have to take the finding on GPAs with a grain of salt.
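A toy simulation makes this selection concern concrete; all distributions and probabilities below are made up, and parental aid is given no effect on any individual student’s GPA:

```python
import random

random.seed(0)

# Toy simulation of the selection story: parental aid has NO effect on any
# student's GPA, but it keeps low-GPA students enrolled who would otherwise
# drop out and never appear among graduates. The aided group then shows a
# lower mean GPA among graduates. All parameters are made up.

def simulate(parental_aid, n=100_000):
    gpas = []
    for _ in range(n):
        gpa = random.gauss(3.0, 0.5)
        # Without aid, marginal students (GPA < 2.5) usually drop out.
        p_graduate = 0.9 if (gpa >= 2.5 or parental_aid) else 0.3
        if random.random() < p_graduate:
            gpas.append(gpa)
    return sum(gpas) / len(gpas)

print(round(simulate(parental_aid=False), 2))  # graduates only: higher mean GPA
print(round(simulate(parental_aid=True), 2))   # marginal students survive: lower mean
```

Nothing about aid “spoiling” students is built into this toy model, yet the aided graduates still look worse on paper.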
On another note, this article also can teach scholars quite a bit about how to interact with the media. The mixed conclusion gives the education press and the general public an opportunity to run with a provocative conclusion—parents shouldn’t give their kids money (if they can) because they might just slack off. The headline in today’s Inside Higher Ed piece on the article (“Spoiled Children”) is an example of how research findings can be spun to get more eyeballs. While the media should run more reasonable headlines, it is the responsibility of academics to call out the education press when they play these sorts of games.
One of the factors which attracted me to the University of Wisconsin-Madison for graduate school was the Wisconsin Idea—the belief that the boundaries of the university should be the boundaries of the state. (Yes, that is much more important than being able to see my beloved Packers on television each week—and I’m a shareholder in the team.) Since the University of Wisconsin System was formed in the early 1970s, the Wisconsin Idea has been adopted by the rest of the state’s public colleges and universities. While some people say that the Wisconsin Idea has passed its prime due to the focus on arcane research topics, I still think the idea is alive and well.
I saw a great example of the Wisconsin Idea in action at UW-Parkside that made the state newspapers this morning. Two Parkside students did research for a class project and discovered that moving prisoners’ medical records from paper to electronic formats could save millions of dollars and likely improve patient outcomes. This is a win-win for the students (who gain valuable research experience and analytic skills), the university (which gets great publicity), and the state (which should be able to save money).
I have been privileged to study the Wisconsin public higher education system for the past four-plus years through the Wisconsin Scholars Longitudinal Study. It is not uncommon for someone at UW-Madison to look down their noses at the rest of the UW System, but it is critical to recognize the contributions of the entire system toward making Wisconsin a better place to live.
It is no secret that academics research some obscure topics—and are known to write about these topics in ways that obfuscate the importance of such research. This is one reason why former Senator William Proxmire (D-WI) started the Golden Fleece Awards to highlight research that he did not consider cost-effective. Here are some examples, courtesy of the Wisconsin Historical Society. (Academia has started to push back through the Golden Goose Awards, conceived by Rep. Jim Cooper (D-TN).)
Some of these potentially strange topics either have potentially useful applications or are just plain thought-provoking. To recognize some of the most unusual research in a given year, some good chaps at Harvard organized the first Ig Nobel Prize ceremony in 1991. This wonderful tradition continues to this day, with the 2012 ceremony being held yesterday. Real Nobel Prize winners are even known to hand out the awards!
Ten awards are handed out each year, so it is difficult to pick the best one. My initial thought was to highlight the Government Accountability Office’s report titled “Actions Needed to Evaluate the Impact of Efforts to Estimate Costs of Reports and Studies,” but this sort of report is not unusual in the federal government. So I’ll single out a nice little article on whether multiple comparisons bias can produce apparent brain activity in a dead Atlantic salmon (no word on whether the study participant was consumed after completion of the study) as my favorite award. Multiple comparisons bias is certainly real, and the authors provide a nice example of how statistics can mislead, but the subject tested sure is unusual. I encourage people to take a look at the other awards and try to figure out how these research projects got started. Some seem more useful than others, but that is the nature of academic research.
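The salmon study’s underlying point is easy to demonstrate with a toy simulation (no fMRI required, and every parameter below is illustrative rather than drawn from the study itself):

```python
import random

random.seed(1)

# The dead-salmon lesson in miniature: run many independent tests on pure
# noise and, at a conventional significance threshold, a chunk of them come
# up "significant" by chance alone. Each "test" here is 100 fair coin flips,
# flagged if heads reach a cutoff that a true null crosses roughly 4-5% of
# the time. All numbers are illustrative.

def spurious_hits(n_tests=1000, n_flips=100, cutoff=59):
    """Count null experiments that clear the significance cutoff by luck."""
    hits = 0
    for _ in range(n_tests):
        heads = sum(random.random() < 0.5 for _ in range(n_flips))
        if heads >= cutoff:
            hits += 1
    return hits

print(spurious_hits())  # dozens of "discoveries" with no real effect anywhere
```

Scan enough voxels (or coins) without correcting for the number of comparisons, and something will light up, even in a dead fish.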
The Annals of Improbable Research, the folks who put on the Ig Nobel ceremony, also have three hair clubs for scientists: The Luxuriant Flowing Hair Club for Scientists, the Luxuriant Former Hair Club for Scientists, and the Luxuriant Facial Hair Club for Scientists.
Here is the full video of the ceremony.