Pointing out that much of higher education is in a state of crisis is at this point about as revelatory as noting that the U.S. Senate is dysfunctional.
More interesting is a consideration of how far colleges and universities, beyond those whose wealth and reputations insulate them from both disruption and self-reflection, are willing to go to mitigate the crisis. How fundamental are the assumptions we are willing to challenge? How many established orthodoxies and traditions are we willing seriously to interrogate? Will this be a slight mid-air course correction or a redesign of the plane? In an industry where the elimination of a small program, the creation of a new interdisciplinary minor, or even the renaming of a department is considered dramatic change, these are important questions.
As Gabriel Paquette recently asked, can higher education save itself?
Most diagnoses of the current problem and most prescriptions for a solution focus on cost. This is understandable, given that the unsustainability of the financial model is, for the industry as a whole, the most pressing existential threat. And because we have tended to begin with cost, we have tended as well to arrive pretty consistently at the same result: cuts. Cuts in personnel, cuts in compensation, cuts in programs. But, as others have pointed out, spending less on fewer things is not in itself a long-term answer to everything that ails higher education.
Making higher education less expensive, a worthy goal, would not by itself make higher education better. Given the nature of the thinking to date, it might, in fact, make it worse if reductions in cost are not accompanied by increases in innovation. Faculty reductions in the wrong areas might have a negative impact on learning outcomes, while reductions in support services might worsen already poor completion rates.
What if, instead of beginning with cost, we began with the more profound question of impact? That is, what is the desired impact of higher education on the society we serve, and what form of education will have that impact? This would lead us, I believe, to the contemplation of some issues that are broader in scope than simply cost, for while affordability is inseparable from impact, it is far from its only determinant.
We can argue for a long time about the civic purpose of higher education — academics can argue for a long time about most things — but let me propose for the moment at least one reasonable description. Higher education should in its ideal form lead to more economic security for more people, a more equitable and innovative society, and a well-functioning democracy. Add whatever goals you would like, but these seem like a reasonable starting point and, given the present state of the country, more than a little aspirational.
Notice that I do not include in this definition of the ideal “preserving higher education in something as close as possible to its current form,” despite the fact that this seems to be an implicit or explicit goal of many within academe. That goal should be prioritized only if it can be shown to be the best way of leading to the desired outcomes.
If we focus on the end result of social impact rather than on preserving the status quo, we free ourselves up to ask some fundamental and uncomfortable questions. Here are some examples.
Are we organizing ourselves, and are we organizing knowledge, in the best way? Colleges are among the most compartmentalized organizations in the world. Institutions with 100 faculty members will often divide them into 20 or 25 departments. It is common to find administrative departments staffed by two or three people. Essentially, if there is a particular academic discipline — history — or a particular administrative task — career services — we locate it within its own department. Colleges are large, sometimes massive accumulations of highly specialized pieces, constructed not by design but through gradual accretion over time.
Perhaps this is the best way to do things. Or perhaps not. Many organizational thinkers argue that the most successful work “requires both the integration of skills and transcending functional boundaries.” Higher education, the ultimate siloed enterprise, is adept at neither. Would a university with fewer internal divisions be able to eliminate redundancies and reduce costs? Almost certainly. Would such a university also teach students more effectively and perform its internal functions at a higher level? I believe that it would.
The overwhelming majority of four-year colleges in the United States require students to declare a major in a discipline housed in a department. Most majors are designed by faculty members essentially to reproduce themselves: If you want to be an English professor or a biology professor, moving through the curricula in those departments makes perfect sense. Yet only a tiny and shrinking percentage of English majors will go on to be English professors, and if you want to do almost anything else, I would argue, there are more effective ways to organize your education than by majoring in a single discipline. Should we take it as a given that building your education around 10 literature courses is preferable to building it around a global challenge like food insecurity or climate change or around the development of an ability like creativity or clarity of expression? This is not about the value of studying literature, but about how that study should be organized in relation to other areas of study in order to prepare students for work they will do and the problems they will need to solve.
Maybe I’m wrong. But let’s have that debate rather than a debate simply about which majors or positions should and should not be on the chopping block or what to call the English department.
Are we wasting time? During my decades as a student, teacher, and administrator, I’ve been associated with six colleges, and at every one the central work — teaching students — happened at full bore for two-thirds of the year. (To be fair, this is not the case at many two-year and some four-year institutions.) Of what other essential industry is this true? Imagine if hospitals or supermarkets or the postal service took a pause in January and another from June through August.
The evolution of the “summer break” in both colleges and K-12 schools is often, but erroneously, linked to the country’s agrarian roots. In fact, it was a concession in the mid-19th century to affluent vacationers. It has stuck around, essentially, because we like it, even though most experts agree that it does more to harm than to help student learning.
The strongest arguments in favor of the eight-month academic calendar at colleges in particular are the following: It allows students, faculty, and staff members to decompress after the intensity of a term (though try making that case to someone who does almost any other job); it provides time for students to help fund their education through working; and it allows the faculty both to prepare their classes and to engage in scholarship. At research universities and many liberal-arts colleges, this last argument in particular is the one most commonly voiced.
The strongest arguments against the typical calendar are both financial and educational. The simplest way to reduce the cost of a four-year college degree would be to make it a three-year college degree, and this could be accomplished rather easily by expanding the length of the school year. And while the evidence for a “summer slide” — that is, a decline in achievement levels after a long break — is not incontrovertible, it is strong, and it suggests that the decline is worse among students of lower socio-economic status.
The interruption of teaching in the service of research is commonly justified on two bases. Research universities in particular are essential drivers of innovation, and the faculty are therefore fulfilling two roles: teaching students and advancing society through the creation of knowledge. This argument is powerful, though it might not apply with equal force to all disciplines. The second and more debatable argument is that productive scholars make better teachers. This is among the most cherished of academic beliefs, but it is, unfortunately, unsupported by evidence. A review of the research on the correlation between scholarship and teaching notes that while “academics overwhelmingly think the roles are mutually supportive,” the author “cannot conclude from the information at hand that the link is strongly positive” (nor can he conclude, fortunately, that the link is negative, which is something of a relief).
The cost-benefit analysis of the academic calendar is complex, but it seems at least worth asking whether it is better to teach fewer things — that is, to make cuts — or teach things more often — that is, to extend the calendar.
Can teaching online sometimes be better than teaching in person? If there is a single, central belief underlying the operations of the traditional college, it is that the in-person educational experience cannot be fully replicated online. This is undoubtedly the case, as anyone who has spent the past year teaching on Zoom can attest. But does this mean that the online experience, while different from the in-person experience, is necessarily and in all instances worse?
Online education has a poor reputation that is largely earned. Its early decades have been dominated by for-profit entities that have often been bad actors and by over-excitement about MOOCs. But we have been forced into a year of learning and experimentation that should lead to an honest reassessment of what is and is not possible.
As a social experience, being with others in a classroom is clearly preferable to being in a bedroom or kitchen staring at a screen. As an educational experience, however, the comparison becomes more complicated. Here are some things I have been able to do as an online teacher that I was never able to do in my 15 years as a classroom teacher: record lectures in advance so that synchronous screen time can be used for more interactive learning; bring together in one place students in Kyoto and London and San Francisco; see every student’s name as they speak; seamlessly introduce audio and visual material; bring in guest discussants from all over the world at no cost; teach students who are working full time; and divide students into small groups without moving around furniture.
The wealthiest colleges will surely return, post-Covid, to a wholly on-campus experience; many for-profit and nontraditional colleges will continue to operate entirely online. But for the vast group of institutions in the middle, the question becomes whether in-person and online instruction can be combined in a way that reduces cost and broadens impact without sacrificing educational quality.
Questions of the kind I have asked, in my experience, are met on most college campuses with a degree of anger approaching road rage, yet they merely scratch the surface of the discussions we should be having about the future of higher education. It may be, to paraphrase T. S. Eliot, that the end of all our exploring will be to arrive more or less where we started and know the place for the first time. But if we fail to explore — if we fail to go beyond superficial change and interrogate our most fundamental assumptions about how and what we teach, how and why we organize ourselves in the current way — we will have no one but ourselves to blame if the system as we know it shrivels to the point where it collapses from within or is painfully disrupted from without.
Brian Rosenberg is president emeritus of Macalester College and president in residence of the Harvard Graduate School of Education.