Digging In the Wrong Place?

“Everything in Between” is about the systems, institutions, and practices that people build, “things” of a sort that sit in between us, between groups of us, between “us” and “them,” and between us and other systems and institutions that seem terribly far away: “the market,” “the state,” the universe, and so on.
In addition to my posts here, I co-host a podcast titled “Your Leadership Podcast,” which is available on Spotify and wherever fine podcasts are found. I write about law and legal education at TaxProf Blog, and for several years I co-hosted a podcast about technology and law titled “Your Future Law Podcast.” My older blog about Pittsburgh and renewing cities, Pittsblog, is still available online, as is my original blog about law, technology, and governance.
Two years ago, in a late-night, whisky-fueled conversation in a bar at the end of a long conference day, I said something off-hand to a friend and colleague that made its way later into a thoughtful journal article.
This is what came across:
Perhaps, as Michael Madison has suggested, we need to stop thinking of the vast volume of texts, images, and expressive works that we have amassed in this information era as massive accumulations or storehouses of individuated artifacts and begin to think of them instead as an ocean of knowledge. Perhaps we could then turn to the question of governing this ocean as a knowledge commons—a vast expanse of shared intellectual and cultural resources in whose vitality and sustainability we all have an interest; but within which none of us can stake an individual claim qua right to exclude.
That passage appears in The AI-Copyright Trap, published as 100 Chi.-Kent L. Rev. 107 (2025), at page 132. The author is Carys Craig, a wonderful scholar on the faculty of Osgoode Hall Law School at York University.
Carys accurately captured my thoughts.
Ever since I saw a draft of her paper, I have been trying to figure out how to add nuance and structure to my premise. The image of data in an ocean has been sitting with me, and mostly in this post I aim to share that image with you.
Here, for now, is where I think I want to take the image:
Metaphorically, the image is not simply an ocean of data; it is an ocean of knowledge, an enormous undifferentiated pool of truth and belief, of evidence and speculation, of things known and yet to be discovered. The ocean may or may not contain all of the knowledge in the universe, but it contains so much that for most practical purposes, the ocean is comprehensive.
And in that comprehensive form, unless and until institutions and practices are built in it and around it, that comprehensive ocean of knowledge is, practically speaking, either useless or dangerous. The threats and opportunities lie much less in “what does using a Generative AI model do to my brain?” and much more in “what does ‘our’ collective experience of using Generative AI models do to the institutions and practices that we’ve built to regulate our collective experiences?” I don’t mean to say that an interest in the relationship between AI and individual experience is “digging in the wrong place,” per Raiders of the Lost Ark. Well, maybe I sort of say that. I’m not certain, yet, that I have the evidence to back it up.
(As I have slowly assembled this line of thinking for myself, I keep coming back to the image of René Belloq in Raiders, opening the Ark of the Covenant and discovering that unmediated access to divine power yields horror rather than enlightenment or power. And so I found my header image for today.)
The possible disconnect between individual experience (which may be good, bad, or many things in between) and collective experience (same) is an example of the sort of “social dilemma” explored by the knowledge commons project that I’ve been invested in for close to 20 years. That style of thinking, like much of the project itself, is borrowed from the research of Elinor Ostrom. Exploring that disconnect is as much a matter of empirics as it is of theory and concept.
In that respect, Carys Craig elides a step when she connects my comment about the ocean of data to my interest in knowledge commons. I do not mean to suggest that “no one owns or controls the data”; I do not mean to suggest that no one whose work appears in a set of training data has a legitimate gripe as to whether they should have been consulted or paid. Rather, I mean to suggest that a ginormous shared information resource can cause a lot of problems in the absence of thoughtful governance institutions. Those institutions might be baked into the collection and organization of the information (an STS-style approach); those might also be attached or applied to the “ocean” from the outside (a law-and-norms approach).
As an example of how that point of view might translate into something more tangible, try this: the recent NBER paper from Daron Acemoglu, Dingwen Kong, and Asuman Ozdaglar, “AI, Human Cognition and Knowledge Collapse,” simply (and not so simply) describes a knowledge-related social dilemma at the core of (their model of) generative AI. Hollis Robbins draws that appropriate inference from their work:
Acemoglu, Kong, and Ozdaglar have formalized the fragility of systems in which general knowledge is produced solely as a byproduct of individually motivated learning. If “knowledge collapse” becomes a policy keyword, it should carry an implication the paper does not yet reach: the response includes building age-structured and expertise-structured infrastructure around AI systems, so that public knowledge is produced as a primary activity by people with long memory and deep domain judgment.
One synonym for “age-structured and expertise-structured infrastructure” is “knowledge commons governance.”
One more thing, for now:
This is far from the first time that “practically all of the world’s knowledge” has been collected in one place and therefore far from the first time that societies have wrestled with developing institutions and practices that aim to mitigate the negative effects of the collection, to amplify the positive effects, and to deal effectively with the evolving implications of both the collection and the governance practices themselves. The Library of Alexandria. The research university. The Internet.
I could go on; the point is not that all of these are great in themselves, let alone equal in their weighing of virtues and drawbacks. The point is that these are blends of formal and informal governance practices, with distinctive virtues and drawbacks. None of these is exactly “like” the others, but each of them in their own ways presents knowledge commons dilemmas and opportunities.
And all of them, in diverse ways, relied or rely on abstracting material in “the ocean of knowledge” of the moment into patterns and regularities of various sorts. Libraries and librarians developed systems for organizing books and other materials. Universities and scholars developed fields and disciplines. The designers and managers of the early Internet developed systems of technical protocols.
The “pattern-recognition” element of how Generative AI models work is, in short, not a quirky conceptual or technical limitation on their capabilities; it is precisely how institutions work, generally speaking.
(I wasn’t motivated to write this out by learning about Sam Altman’s comment that “intelligence,” in the future, will be a utility like electricity or water, but I do subscribe to John Warner’s critique: “this shit is unbelievably insane.”)
The model is the governance, in the first place, because of the respects in which the model, which is to say any Generative AI model or (subsidiarity and polycentricity alert, for social science folks) element of an AI “stack,” is designed to pull on patterns in the data. AI models are special cases of models generally: they are tractable versions of intractable collections of information and knowledge. The word “tractable” does a lot of work in that sentence, not all of it helpful. The word “model” has been attached to Generative AI systems in ways that seem to try to make those systems comprehensible and predictable and regulable, which they may not be.
But they do find and rely on patterns.
Some patterns are more persuasive and useful than others, of course; some are generative and productive; some are harmful to the point of causing widespread social (economic) (cultural) (political) damage.
All of which is a longish explanation of my intuition-based comment reported above. I’m an institutionally minded person; I am scratching the surface of the institutions I know well (for example, universities) to figure out how Generative AI – an institution, or institution of institutions in itself – might be changing or supplanting those institutions and, in the process, changing or supplanting … us?
What others have to say
Selections from new-ish commentary about the institutions of higher education and institutions of expertise, shared because these folks have me thinking, not necessarily because I agree.
As almost every sentient being on the planet knows, the “FIFA” World Cup arrives in North America later this year, and as someone who has been active in soccer/football for coming up on 60 years (I first kicked a ball around 1967 and first pulled on a uniform in 1968, I believe), and who has written both short things and long things about the game, I feel like I should say a thing or two.
While I work out what that is, I want to note the resources that I check in on every single day for football information, covering both club football and national team football around the world. Bear in mind that American soccer fans (see what I did there?) assign a strange sort of authority to English commentators, both as match narrators and as historians, critics, and analysts.
First: The Athletic, part of the New York Times empire. These folks do a very nice job of rounding up news from out of the way places as well as the big European and British leagues. The lead writer is a Scot.
Second: Men in Blazers (now with a new blazer-less logo). This is less a single resource than a collection of resources, in different media, overseen by the inimitable Roger (“Rog”) Bennett, a blue from Liverpool. Personally, I enjoy the podcasts more than the newsletters, and I especially enjoy the podcasts that feature Rory Smith, who once wrote for the New York Times and who now often begins his commentary by confirming that he is at home in (or near) Leeds. The style reminds me a bit of older New Yorker “Talk of the Town” pieces.
My bookshelf
Like a lot of academics, I read a lot. Like a lot of law professors, I read a lot about law and about governance. But I also read a lot of things just for fun and a lot of other things because you never know where interesting ideas might come from.
Hermione Lee’s recent biography of Tom Stoppard took me a while, but it rewarded my patience. It is a very British book, and Stoppardian in that sense (the book makes clear that Stoppard was enormously attached to and proud of his British identity), and by that I mean that it both assumes and rewards a degree of Anglophilia and knowledge of the theater that many American readers are likely to lack. Something that I had not appreciated was the extent to which Stoppard was finely attuned to and engaged with how his works were performed.
Lee’s narrative oscillates between the progress of Stoppard’s life and literate, critical dives into his work in a way that other popular biographies often do not. I am thinking, for example, of Isaacson’s biography of Steve Jobs, which spends a lot of time on Jobs’s love interests and much less on technical deconstructions of various operating systems.
Mostly, I am motivated to track down and read a lot more of Stoppard’s work. “Parade’s End,” a BBC series that he wrote and based on the books of Ford Madox Ford, is now in my streaming queue.
Thanks for sticking with me.


