Examining Variations in Marriage Rates across Colleges

This piece originally appeared at the Brookings Institution’s Brown Center Chalkboard blog.

Young adulthood is not only the time when most people attend college, but also a time when many marry. In fact, college attendance and marriage are linked and have social and economic consequences for individuals and their families.

When (and if) people get married is an important topic due to the presence of what is known as assortative mating. This phenomenon, in which a person is likely to marry someone with similar characteristics such as education, is a contributing factor to rising income inequality. In some circles, there is pressure to marry someone with a similar pedigree, as evidenced by the high-profile Princeton alumna who urged women at the university to find a spouse while in college. For people attending less-selective colleges, the possibility of a second household income can serve as a key buffer against economic shocks.

In this blog post, I use a tremendous dataset compiled by The Equality of Opportunity Project that is based on deidentified tax records for 48 million Americans who were born between 1980 and 1991. This dataset has gotten a great deal of attention on account of its social mobility index, which examines the percentage of students who move well up in the income distribution by young adulthood.

I use the publicly available dataset to examine marriage rates of traditional-age college students through age 34 based on their primary institution of attendance. In particular, I am curious about the extent to which institutional marriage rates seem to be affected by the institution itself versus the types of students who happen to enroll there. My analyses are based on 820 public and private nonprofit four-year colleges that had marriage rates and other characteristics available at the institutional level. This excludes a number of public universities that reported tax data as a system (such as all four-year institutions in Arizona and Wisconsin).

The first two figures below show the distribution of marriage rates for the 1980-82 and 1989-91 birth cohorts as of 2014 for students who attended public, private religious, and private nonsectarian institutions. Marriage rates for the younger cohort (who were between ages 23 and 25) were low, with median rates of 12% at public colleges, 14% at religiously-affiliated colleges, and just 5% at private nonsectarian colleges. For the older cohort (who were between ages 32 and 34), median marriage rates were 59% at public colleges, 65% at religiously-affiliated colleges, and 56% at private nonsectarian colleges.

There is an incredible amount of variation in marriage rates within each of these three types of colleges. In the two figures below, I show the colleges with the five lowest and five highest marriage rates for both cohorts. In the younger cohort (Figure 3), the five colleges with the lowest marriage rates (between 0.9% and 1.5%) are all highly selective liberal arts colleges that send large percentages of their students to graduate school—a factor that tends to delay marriage. At the high end, there are two Brigham Young University campuses (which are affiliated with the Church of Jesus Christ of Latter-day Saints, widely known as the Mormon church), two public universities in Utah (where students are also predominately Mormon), and Dordt College in Iowa (affiliated with the Christian Reformed Church). Each of these colleges has at least 43% of students married by the time they reach age 23 to 25.

A similar pattern emerges among the high-marriage-rate colleges in the older cohort: four of the five colleges with the highest rates in students’ mid-20s also had marriage rates over 80% by students’ early 30s.

A more fascinating story plays out among colleges with the lowest marriage rates. The selective liberal arts colleges with the lowest marriage rates in the younger cohort had rates approaching 60% in the older cohort, while the 13 colleges with the lowest marriage rates in the older cohort were all either historically black colleges or institutions with high percentages of African-American students. This aligns with the large gender gap in bachelor’s degree attainment among African-Americans, with women representing nearly 60% of African-American degree completions.

Finally, I examined the extent to which marriage rates were associated with the location of the college and the types of students who attended, as well as whether the college was public, private nonsectarian, or religious. I ran regressions controlling for the factors mentioned below as well as the majors of graduates (not shown for brevity). These characteristics explain about 55% of the variation in marriage rates for the younger cohort and 77% of the variation for the older cohort. Although students at religiously-affiliated institutions had higher marriage rates across both cohorts, religious affiliation explains less than five percent of the overall variation after controlling for other factors. In other words, the marriage outcomes observed across institutions appear to be driven mostly by the students who enroll, and much less by the institutions themselves.
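For readers curious about the mechanics, the sector comparison boils down to an incremental R-squared: fit the regression with and without the institutional-type indicator and see how much extra variation it explains. The sketch below uses synthetic data and made-up coefficients purely for illustration; it is not the actual specification or data used in my analysis.

```python
import numpy as np

def r_squared(X, y):
    # OLS fit via least squares with an intercept; R^2 = 1 - SS_res / SS_tot
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1 - resid.var() / y.var()

rng = np.random.default_rng(0)
n = 800  # hypothetical colleges
pct_minority = rng.uniform(0, 1, n)
region_south = rng.integers(0, 2, n)
religious = rng.integers(0, 2, n)
# synthetic marriage rate: driven mostly by student mix, with a small sector effect
marriage = (0.55 - 0.25 * pct_minority + 0.05 * region_south
            + 0.03 * religious + rng.normal(0, 0.05, n))

controls = np.column_stack([pct_minority, region_south])
full = np.column_stack([pct_minority, region_south, religious])

# incremental R^2: how much does the religious-affiliation dummy add?
delta_r2 = r_squared(full, marriage) - r_squared(controls, marriage)
print(f"incremental R^2 from religious affiliation: {delta_r2:.3f}")
```

Because the student-composition variables carry most of the signal, the affiliation dummy adds only a sliver of explanatory power, which is the same logic behind the "less than five percent" statement above.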

Colleges in the Northeast had significantly lower marriage rates in both cohorts than the reference group of the Midwest, while colleges in the South had somewhat higher marriage rates. The effects of institutional type and region both got smaller between the two cohorts, which likely reflects cultural differences in when people get married rather than if they ever get married.

Race and ethnicity were significant predictors of marriage. Colleges with higher percentages of black or Hispanic students had much lower marriage rates than colleges with more white or Asian students, and the negative relationship between the percentage of black students and marriage rates was much stronger in the older cohort. Colleges with more low-income students had much higher marriage rates in the younger cohort but much lower marriage rates in the older cohort. Less-selective colleges had higher marriage rates for the younger cohort, while colleges with higher student debt burdens had lower marriage rates; neither factor was significant for the older cohort.

There has been a lot of discussion in recent years as to whether marriage is being increasingly limited to Americans in the economic elite, both due to the presence of assortative mating and the perception that marriage is something that must wait until the couple is financially secure. The Equality of Opportunity project’s dataset shows large gaps in marriage rates by race/ethnicity and family income by the time former students reach their early 30s, with some colleges serving large percentages of minority and low-income students having fewer than one in three students married by this time.

Yet, this exploratory look suggests that the role of individual colleges in encouraging or discouraging marriage is generally limited, since the location of the institution and the types of students it serves explain most of the difference in marriage rates across colleges.

Does College Improve Happiness? What the Gallup Poll Doesn’t Tell Us

The venerable polling organization Gallup released a much-anticipated national survey of 30,000 college graduates on Tuesday, focusing on graduates’ satisfaction in the workplace and in life as a whole. I’m not going to spend a lot of time getting into all of the details (see great summaries at Inside Higher Ed, NPR, and The Chronicle of Higher Education), but two key findings merit further discussion.

The first key finding is that relatively few graduates are both engaged with their jobs and thriving across a number of elements of well-being (including purpose, social, community, financial, and physical). Having supportive professors is the strongest predictor of being engaged at work, and being engaged at work is a strong predictor of having a high level of well-being.

Second, the happiness of graduates doesn’t vary much across types of nonprofit institutions, with students graduating from (current?) top-100 colleges in the U.S. News & World Report rankings reporting similar results to graduates of less-selective institutions. Graduates of for-profit institutions are less engaged at work and less happy than graduates of nonprofit colleges, although no causal mechanisms are proposed.

While it is wonderful to have data on a representative sample of 30,000 college graduates, adults who started college but did not complete a degree are notably excluded. Given that about 56% of first-time students complete a college degree within six years of first enrolling (according to the National Student Clearinghouse), surveying only students who graduated leaves out a large percentage of adults with some postsecondary experience. Given the (average) economic returns to completing a degree, it might be reasonable to expect dropouts to be less satisfied than graduates; however, this is an empirical question.

Surveying dropouts would also provide better information on the counterfactual outcome for certain types of students. For example, are students who attend for-profit colleges happier than dropouts—and are both of these groups happier than high school graduates who did not attempt college? This is a particularly important policy question given the ongoing skirmishes between the U.S. Department of Education and the proprietary sector regarding gainful employment data.

Surveying people across the educational distribution would allow for more detailed analyses of the potential impacts of college by comparing adults who appear similar on observable characteristics (such as race, gender, and socioeconomic status) but received different levels of education. While these studies would not be causal, the results would certainly be of interest to researchers, policymakers, and the general public. I realize the Gallup Education poll exists in part to sell data to interested colleges, but the broader education community should be interested in what happens to students who did not complete college—or did not even enroll. Hopefully, future versions of the poll will include adults who did not complete college.

Should Payscale’s Earnings Data Be Trusted?

Despite the large amount of money spent on higher education, prospective students, their families, and the public have historically known very little about the earnings of students who attend college. This has started to change in recent years, as a few states (such as Virginia) began to publish earnings data for their graduates who stayed in state and the federal government publishes earnings data for certain programs through gainful employment rules. But this leaves out many public and private nonprofit institutions, and complete data are not available without a student unit record system.

As is often the case, the private sector steps in to try to fill the gap. Payscale.com has collected self-reported earnings data by college and major among a large number of bachelor’s degree recipients (those with a higher degree are excluded—the full methodology is here). Their 2014 “return on investment” report ranked colleges based on the best and worst dollar returns, with Harvey Mudd College at the top with a $1.1 million return over 20 years and Shaw University at the bottom with a return of negative $121,000.

Payscale’s data are self-reported earnings from individuals who happened to visit Payscale’s website and were willing to provide estimates of their annual earnings. It’s my strong suspicion that self-reported earnings from these individuals run substantially higher than those of the average bachelor’s degree recipient, and the estimates are often based on a relatively small number of students. For example, the estimates for my alma mater, Truman State University, are based on 251 graduates for a college that graduates about 1,000 students per year. As many Truman students go on to earn advanced degrees, probably about 500 students per year would qualify for the Payscale sample. Yet only 102 students provided data within five years of graduation: about four percent of graduates who did not pursue further degrees.
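The coverage figure is simple arithmetic, sketched below. All inputs are the rough estimates from the paragraph above (approximate graduating class size and share pursuing advanced degrees), not official counts.

```python
# Back-of-envelope check of Payscale's sample coverage at Truman State
grads_per_year = 1000        # approximate bachelor's completions per year
share_no_advanced = 0.5      # roughly half pursue advanced degrees (excluded by Payscale)
eligible_per_year = grads_per_year * share_no_advanced

respondents_recent = 102     # Payscale responses within five years of graduation
coverage = respondents_recent / (eligible_per_year * 5)
print(f"coverage of eligible recent graduates: {coverage:.1%}")  # roughly 4%
```

With samples this thin, a handful of unusually high (or low) self-reports can move a college’s estimate considerably.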

But is it still worth considering? Yes and no. I don’t put a lot of stock in the absolute earnings listed, since they’re likely biased upward and based on relatively few cases. Additionally, there is no adjustment for cost of living, which flatters colleges in expensive urban areas. But the relative positions of institutions with similar focuses in similar parts of the country are probably somewhat close to what complete data would show. If the self-reporting bias is similar across institutions, then controlling for cost of living and the composition of graduates could yield useful information.

I hope that Payscale can do a version of their ROI estimates taking cost of living into account, and try to explore whether their data are somewhat representative of a particular college’s bachelor’s degree recipients. Although I commend them for providing a useful service, I still recommend taking the dollar value of ROI estimates with a shaker of salt.

Associate’s Degree Recipients are College Graduates

Like most faculty members, I have my fair share of quirks, preferences, and pet peeves. While some of them are fairly minor and come from my training (such as referring to Pell Grant recipients as students from low-income families instead of low-income students, since most students have very little income of their own), others are more important because of the way they incorrectly classify students and fail to recognize their accomplishments.

With that in mind, I’m particularly annoyed by a Demos piece with the headline “Since 1991, Only College Graduates Have Seen Their Income Rise.” This claim comes from Pew data showing that only households headed by someone with a bachelor’s degree or more had a real income gain between 1991 and 2012, while households headed by those with less education lost ground. However, this headline implies that students who graduate with associate’s degrees are not college graduates—a value judgment that comes off as elitist.

According to the Current Population Survey, over 21 million Americans have an associate’s degree, with about 60% of them being academic degrees and the rest classified as occupational. This is nearly half the size of the 43 million Americans whose highest degree is a bachelor’s degree. Many of these students are the first in their families to even attend college, so an associate’s degree represents a significant accomplishment with meaning in the labor market.

Although most people in the higher education world have an abundance of degrees, let’s not forget that our college experiences are becoming the exception rather than the norm. I urge writers to clarify their language and recognize that associate’s degree holders are most certainly college graduates.

More Data on the Returns to College

Most people consider attending college to be a good bet in the long run, in spite of the rising cost of attendance and increasing levels of student loan debt. While I’m definitely not in the camp that everyone should earn a bachelor’s degree, I do believe that some sort of postsecondary training benefits the majority of adults. A recent report from the State Higher Education Executive Officers (SHEEO) highlights the benefits of graduating with a college degree from public colleges and universities.

Not surprisingly, their report suggests that there are substantial benefits to graduating from college. Using data from IPEDS and the American Community Survey, they find that the average associate’s degree holder earned 31.2% more (or about $9,200 per year) than the average person with a high school diploma. The premium associated with a bachelor’s degree is even larger: 71.2%, or nearly $21,000 per year. These figures seem to be on the high end (but quite plausible) of the returns-to-education literature, which suggests that students tend to get an additional 10-15% boost in wages for each year of college completed.
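As a sanity check, the two premium figures imply a consistent high-school baseline. This quick back-of-envelope calculation uses only the percentages and dollar amounts quoted above:

```python
# Cross-check the SHEEO premium figures: a 31.2% premium worth about $9,200/year
# implies a high school baseline near $29,500, and 71.2% of that baseline should
# land near the ~$21,000 bachelor's premium the report cites.
assoc_premium_pct = 0.312
assoc_premium_usd = 9200

baseline = assoc_premium_usd / assoc_premium_pct   # implied HS diploma earnings
bach_premium_usd = baseline * 0.712                # implied bachelor's premium

print(f"implied high school baseline: ${baseline:,.0f}")
print(f"implied bachelor's premium: ${bach_premium_usd:,.0f}")
```

The two figures line up, which suggests the report’s percentages and dollar amounts come from the same underlying earnings base.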

I do have some concerns with the analysis that limit its generalizability and policy relevance:

(1) Given that SHEEO represents public colleges and universities, it is not surprising that they focused on that sector in their analysis. Policymakers who are interested in the overall returns to education (including the private nonprofit and for-profit sectors) should seek out additional data.

(2) This study is in line with the classic returns-to-education literature, which compares students who completed a degree to those with a high school diploma. But that comparison group may also include people who attempted some college and left without a degree, which results in a different comparison than students and policymakers would expect. I would like to see studies compare all students who entered college with those who never attended, to get a better idea of the average wage premium among those who attempt college.

(3) While the average student benefits from completing a college degree, not all students do. For example, a welder with a high school diploma may very well make more than a preschool teacher with a bachelor’s degree. A 2011 report by Georgetown University’s Center on Education and the Workforce does a nice job showing that not everyone benefits.

(4) Most reports like this one do a good job estimating the benefits of education (in terms of higher wages) but neglect the costs in terms of forgone earnings and tuition expenses. While most people are still likely to benefit from attending relatively inexpensive public colleges, some students’ expected returns may turn negative once these costs are taken into account.

(5) Students who complete a certificate (generally a one-year program in a technical field) are excluded from the analyses for data reasons, which is truly a shame. Students and policymakers should keep in mind that many of these programs have high completion rates and positive long-run payoffs.
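To illustrate the costs-versus-benefits concern in point (4), here is a deliberately crude, undiscounted net-return calculation for an associate’s degree. The premium is the SHEEO figure quoted earlier; every other input is an assumption chosen for illustration, not a figure from the report.

```python
# Crude net-return sketch: the earnings premium has to offset tuition plus
# wages forgone while enrolled. No discounting; all inputs are illustrative.
annual_premium = 9200      # associate's degree premium (SHEEO figure above)
years_working = 40         # assumed years in the labor force after completion
tuition = 4000 * 2         # two years of tuition at an inexpensive public college (assumed)
forgone_wages = 29500 * 2  # two years of forgone full-time earnings (assumed)

net_return = annual_premium * years_working - tuition - forgone_wages
print(f"undiscounted net return: ${net_return:,}")
```

With these inputs the return stays comfortably positive, but a much smaller premium (say, for a low-paying field) or a much higher cost of attendance can flip the sign, which is exactly the point.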

My gripes notwithstanding, I encourage readers to check out the state-level estimates of the returns to different types of college degrees and majors. It’s worth a read.

(Note: This will likely be my last post of 2012, as I am looking forward to spending some time far away from computer screens and datasets next week. I’ll be back in January…enjoy the holidays and please travel carefully!)

Overvaluing Harvard

Many parents want to send their children to what they consider to be the best colleges and universities. For quite a few of these families, this means that Junior should go to fair Harvard (after all, it’s the top-rated university by U.S. News and World Report). But few families are willing to go as far as Gerald and Lily Chow of Hong Kong.

An article in the Boston Globe tells the sad saga of the Chow family, who were duped out of $2.2 million by a former Harvard professor who claimed to be able to get the family’s two sons into the university. The family filed suit against Mark Zimmy’s company, claiming fraud after their children were not accepted (although they did get into other elite colleges). Zimmy’s website is still active and targets Chinese students, many of whom have little knowledge of the American educational system. Needless to say, I am interested in how this case proceeds through the legal system.

I am pretty familiar with the academic literature on the returns to attending a prestigious college. Although there may be some additional benefits of attending a more prestigious college for students from disadvantaged backgrounds, the literature is quite clear that the typical student should not expect to benefit by over one million dollars from attending Harvard rather than a slightly less prestigious college. It’s safe to say that the Chow family was likely going to waste their money, even if Mr. Zimmy had been able to get their children into Harvard.