“Everything in Between” is about the systems, institutions, and practices that people build, “things” of a sort that sit in between us, between groups of us, between “us” and “them,” and between us and other systems and institutions that seem terribly far away: “the market,” “the state,” the universe, and so on. Once a week, usually on a Monday, I’ll have something new.
Posting a weekly short essay about the forms and functions of the university prompts me to keep a running list of topics to touch on. But the rhythms of social media permit me to interrupt myself, and that’s my starting point today. I’ll interrupt myself to talk, briefly, about, well, interruption.
At my university, for the last year or so, I've hosted a monthly convening of researchers, teachers, and senior administrators from across the full enterprise who want to talk about various aspects of AI. It is entirely a grassroots activity, very much the sort of Hayekian "spontaneous order" that is often in short supply in higher education today. Sometimes our topics are big and broad and more than a little corporate; sometimes they are micro and specific and all too familiar to long-time academics.
At last week’s meeting, I introduced a micro question about how GenAI might be used in grading student work. That led quickly to a lively macro conversation about ethics and humanity in higher education and ultimately to a fantastically broad question: What sort of university do we want to be?
The issue, in short, is not “how best to use AI in teaching, or research, or administration?” Instead, the issue is “how, where, when, and why should we prioritize human-to-human interaction in education?” What and where are the affirmative cases for *not* using algorithms and automation?
We’ll see if some of us can assemble a real project that tackles those questions, but their simultaneous novelty and lack of novelty should be clear. Novelty: the pairing of human teacher and human student is just the sort of obvious conceptual baseline for the institution that GenAI systems are starting to call into question. Even older “distance learning” systems tended to keep humans in central design and delivery roles. Lack of novelty: one intuitive response to the prompt, “how do we ensure that our students learn to think for themselves rather than learn to work the gears of ChatGPT?” is to lean into modern versions of the Oxbridge tutorial system. Regardless of where one ends up on that spectrum, the questions seem to have a renewed urgency. Jessica Grose in The New York Times: “Human interaction is now a luxury good,” echoing in part Hollis Robbins’ “AI and the Last Mile.” Nicholas Carr has been writing about versions of this theme for a long time, along with Evgeny Morozov.
Carr’s forthcoming book, “Superbloom: How Technologies of Connection Tear Us Apart,” both generalizes the argument and prompts the title and illustration for this post. Carr writes: “[C]ommunication still took place on a human scale during [the late 19th century]. The constraints and frictions of analog transmission limited the speed and volume of information flows and encouraged discipline and discretion in people’s reading, viewing, and listening habits.”
Slow research. Slow learning. Text, not TikTok (the link points to Sam Jennings’s “On Bonfire Night,” at The Hinternet). Friction. Interruption. The sometimes disruptive, uneven rhythms of human interaction rather than the seductively smooth rhythms of seamless communication and connection.
What else developed during the latter decades of the 19th century and the early decades of the 20th century? That fundamentally analog, friction-based institution: the American university. Back then, universities were built sometimes to produce better knowledge and sometimes to produce better people, but they were rarely built to produce any thing or any one quickly or efficiently. Even Christopher Columbus Langdell, who famously rebuilt the Harvard Law School curriculum during the 1870s on the “law as science” template, relied on the painstaking work of a human teacher “Socratically” examining a human student.
Coincidence? Perhaps; perhaps not. To be discussed: Is the humanity we associate with close, small-scale human-to-human teaching and research a cause or a consequence of the material form of late-19th-century American universities? How broadly or specifically might we collectively invoke the ancient “Socratic” ideal, if we invoke it at all? US colleges and universities are typically baked into campuses, literally and metaphorically separating the so-called “life of the mind” from the life and business beyond. With few exceptions, that has always been true. And the opposite has usually been the case outside the US, with some notable exceptions, Oxford and Cambridge especially. So it should be possible, conceptually at least, to ask the following questions genuinely rather than rhetorically: Can that humanity, or a normatively desirable 21st-century version of it, be baked into the design of modern research and education? Should it? If so, where? How? Why? At whose instance and with what effects? And, to invoke a question whose salience (I am reminded by European friends) may represent a distinctly American point of view: who pays?
Michael Sacasas posted the widely circulated “The Enclosure of the Human Psyche” the other day, observing that the social and cultural consequences of digitalization are analogous to the British “enclosure of the commons.” His argument parallels, in pithy form, a long scholarly article published 20 years ago by my law professor friend and colleague James Boyle. “The commons” as a metaphorically open, free place is an alluring ideal, but if the university is a sort of intellectual commons, then we’re reminded that its historical “openness” was the product of design decisions (that is, governance) that sometimes baked in “friction” and interruption of various sorts and sometimes filtered them out.
Friction as governance. There is more on governance to come.