Episodes

  • Hello all,

The audio versions keep coming. Here you have the audio for “Secularization Comes for the Religion of Technology.”

    Below you’ll find a couple of paintings that I cite in the essay.

    Thanks for listening. Hope you enjoy it.

    Cheers,

    Michael



  • I continue to catch up on supplying audio versions of past essays. Here you have the audio for “Vision Con,” an essay about Apple’s mixed reality headset originally published in early February.

    The aim is to get caught up and then post the audio version either the same day as or very shortly after I publish new written essays.

    Thanks for listening!




Just before my unexpected hiatus during the latter part of last year, I had gotten back to the practice of recording audio versions of my essays. Now that we’re up and running again, I wanted to get back to these recordings as well, beginning with this recording of the first essay of this year. Others will follow shortly, and as time allows I will record some of the essays from the previous year as well.

    You can sign up for the audio feed at Apple Podcasts or Spotify.



  • At long last, the audio version of the Convivial Society returns.

It’s been a long time, which I do regret. Going back to 2020, it had been my practice to include an audio version of the essay with the newsletter. The production value left a lot to be desired, unless simplicity is your measure, but I know many of you appreciated the ability to listen to the essays. The practice became somewhat inconsistent in mid-2022, and then fell off altogether this year. More than a few of you have inquired about the matter over the past few months. Some of you graciously assumed there must have been some kind of technical problem. The truth, however, was simply that this was a ball I could drop without many more things falling apart, so I did. But I was sorry to do so and have always intended to bring the feature back.

    So, finally, here it is, and I aim to keep it up.

I’m sending this one out via email to all of you on the mailing list in order to get us all on the same page, but moving forward I will simply post the audio to the site, which will also publish the episode to Apple Podcasts and Spotify.

So if you’d like to keep up with the audio essays, you can subscribe to the feed at either service to be notified when new audio is posted. Otherwise, just keep an eye on the newsletter’s website for the audio versions that will accompany the text essays. The main newsletter will, of course, still come straight to your inbox.

    One last thing. I intend, over the coming weeks, to post audio versions of the past dozen or so essays for which no audio version was ever recorded. If that’s of interest to you, stay tuned.

    Thanks for reading and now, once again, for listening.

    Cheers,

    Michael

    The newsletter is public and free to all, but sustained by readers who value the writing and have the means to support it.



  • Welcome back to the Convivial Society. In this installment, you’ll find the audio version of the latest essay, “What You Get Is the World.” I try to record an audio version of most installments, but I send them out separately from the text version for reasons I won’t bore you with here. Incidentally, you can also subscribe to the newsletter’s podcast feed on Apple Podcasts and Spotify. Just look up The Convivial Society.

    Aside from the audio essay, you’ll find an assortment of year-end miscellany below.

    I trust you are all well as we enter a new year. All the best to you and yours!

    A Few Notable Posts

    Here are six installments from this past year that seemed to garner a bit of interest. Especially if you’ve just signed up in recent weeks, you might appreciate some of these earlier posts.

    Incidentally, if you have appreciated the writing and would like to become a paid supporter at a discounted rate, here’s the last call for this offer. To be clear, the model here is that all the writing is public but I welcome the patronage of those who are able and willing. Cheers!

    Podcast Appearances

I’ve not done the best job of keeping you all in the loop on these, but I did show up on a few podcasts this year. Here are some of those:

    With Sean Illing on attention

    With Charlie Warzel on how being online traps us in the past

    With Georgie Powell on reframing our experience

    Year’s End

    It is something of a tradition at the end of the year for me to share Richard Wilbur’s poem, “Year’s End.” So, once again I’ll leave you with it.

Now winter downs the dying of the year,
And night is all a settlement of snow;
From the soft street the rooms of houses show
A gathered light, a shapen atmosphere,
Like frozen-over lakes whose ice is thin
And still allows some stirring down within.

I’ve known the wind by water banks to shake
The late leaves down, which frozen where they fell
And held in ice as dancers in a spell
Fluttered all winter long into a lake;
Graved on the dark in gestures of descent,
They seemed their own most perfect monument.

There was perfection in the death of ferns
Which laid their fragile cheeks against the stone
A million years. Great mammoths overthrown
Composedly have made their long sojourns,
Like palaces of patience, in the gray
And changeless lands of ice. And at Pompeii

The little dog lay curled and did not rise
But slept the deeper as the ashes rose
And found the people incomplete, and froze
The random hands, the loose unready eyes
Of men expecting yet another sun
To do the shapely thing they had not done.

These sudden ends of time must give us pause.
We fray into the future, rarely wrought
Save in the tapestries of afterthought.
More time, more time. Barrages of applause
Come muffled from a buried radio.
The New-year bells are wrangling with the snow.

    Thank you all for reading along in 2022. We survived, and I’m looking forward to another year of the Convivial Society in 2023.

    Cheers, Michael



  • Welcome again to the Convivial Society, a newsletter about technology and culture. This post features the audio version of the essay that went out in the last installment: “Lonely Surfaces: On AI-generated Images.”

    For the sake of recent subscribers, I’ll mention that I ordinarily post audio of the main essays (although a bit less regularly than I’d like over the past few months). For a variety of reasons that I won’t bore you with here, I’ve settled on doing this by sending a supplement with the audio separately from the text version of the essay. That’s what you have here.

The newsletter is public but reader-supported. So no customers, only patrons. This month, if you’d like to support my work at a reduced rate from the usual $45/year, you can click here:

    You can go back to the original essay for links to articles, essays, etc. You can find the images and paintings I cite in the post below.

    Jason Allen’s “Théâtre D’opéra Spatial”

    Rembrandt’s “The Anatomy Lesson of Dr Nicolaes Tulp”

    Detail from Pieter Bruegel’s “Harvesters”

    The whole of Bruegel’s “Harvesters”



  • Welcome back to the Convivial Society. In this installment, you’ll find the audio version of two recent posts: “The Pathologies of the Attention Economy” and “Impoverished Emotional Lives.” I’ve not combined audio from two separate installments before, but the second is a short “Is this anything?” post, so I thought it would be fine to include it here. (By the way, I realized after the fact that I thoughtlessly mispronounced Herbert Simon’s name as Simone. I’m not, however, sufficiently embarrassed to go back and re-record or edit the audio. So there you have it.)

    If you’ve been reading over the past few months, you know that I’ve gone back and forth on how best to deliver the audio version of the essays. I’ve settled for now on this method, which is to send out a supplement to the text version of the essay. Because not all of you listen to the audio version, I’ll include some additional materials (links, resources, etc.) so that this email is not without potential value to those who do not listen to the audio.

    Farewell Real Life

I noted in a footnote recently that Real Life Magazine had lost its funding and would be shutting down. This is a shame. Real Life consistently published smart and thoughtful essays exploring various dimensions of internet culture. I had the pleasure of writing three pieces for the magazine between 2018 and 2019: “The Easy Way Out,” “Always On,” and “Personal Panopticons.”

    I was also pleasantly surprised to encounter essays in the past year or two drawing on the work of Ivan Illich: “Labors of Love” and “Appropriate Measures,” each co-authored by Jackie Brown and Philippe Mesly, as well as “Doctor’s Orders” by Aimee Walleston.

And at any given time I’ve usually had a handful of Real Life essays open in tabs waiting to be read or shared. Here are some more recent pieces that are worth your time: “Our Friend the Atom: The aesthetics of the Atomic Age helped whitewash the threat of nuclear disaster,” “Hard to See: How trauma became synonymous with authenticity,” and “Life’s a Glitch: The non-apocalypse of Y2K obscures the lessons it has for the present.”

    Links

    The latest installment in Jon Askonas’s ongoing series in The New Atlantis is out from behind the paywall today. In “How Stewart Made Tucker,” Askonas weaves a compelling account of how Jon Stewart prepared the way for Tucker Carlson and others:

    In his quest to turn real news from the exception into the norm, he pioneered a business model that made it nearly impossible. It’s a model of content production and audience catering perfectly suited to monetize alternate realities delivered to fragmented audiences. It tells us what we want to hear and leaves us with the sense that “they” have departed for fantasy worlds while “we” have our heads on straight. Americans finally have what they didn’t before. The phony theatrics have been destroyed — and replaced not by an earnest new above-the-fray centrism but a more authentic fanaticism.

You can find earlier installments in the series here: Reality — A post-mortem. Reading through the essay, I was struck again and again by how foreign and distant the world of the late 90s and early aughts now seems. In any case, Jon’s work in this series is worth your time.

    Kashmir Hill spent a lot of time in Meta’s Horizons to tell us about life in the metaverse:

    My goal was to visit at every hour of the day and night, all 24 of them at least once, to learn the ebbs and flows of Horizon and to meet the metaverse’s earliest adopters. I gave up television, books and a lot of sleep over the past few months to spend dozens of hours as an animated, floating, legless version of myself.

    I wanted to understand who was currently there and why, and whether the rest of us would ever want to join them.

    Ian Bogost on smart thermostats and the claims made on their behalf:

    After looking into the matter, I’m less confused but more distressed: Smart heating and cooling is even more knotted up than I thought. Ultimately, your smart thermostat isn’t made to help you. It’s there to help others—for reasons that might or might not benefit you directly, or ever.

    Sun-ha Hong’s paper on predictions without futures. From the abstract:

    … the growing emphasis on prediction as AI's skeleton key to all social problems constitutes what religious studies calls cosmograms: universalizing models that govern how facts and values relate to each other, providing a common and normative point of reference. In a predictive paradigm, social problems are made conceivable only as objects of calculative control—control that can never be fulfilled but that persists as an eternally deferred and recycled horizon. I show how this technofuture is maintained not so much by producing literally accurate predictions of future events but through ritualized demonstrations of predictive time.

    Miscellany

As I wrote about the possibility that the structure of online experience might impoverish our emotional lives, I recalled the opening paragraph of the Dutch historian Johan Huizinga’s The Waning of the Middle Ages. I can’t say that I have a straightforward connection to make between “the passionate intensity of life” Huizinga describes and my own speculations about the affective consequences of digital media, but I think there may be something worth getting at.

When the world was half a thousand years younger all events had much sharper outlines than now. The distance between sadness and joy, between good and bad fortune, seemed to be much greater than for us; every experience had that degree of directness and absoluteness that joy and sadness still have in the mind of a child. Every event, every deed was defined in given and expressive forms and was in accord with the solemnity of a tight, invariable life style. The great events of human life—birth, marriage, death—by virtue of the sacraments, basked in the radiance of divine mystery. But even the lesser events—a journey, labor, a visit—were accompanied by a multitude of blessings, ceremonies, sayings, and conventions.

From the perspective of media ecology, the shift to print as the dominant cultural medium is interpreted as having the effect of tempering the emotional intensity of oral culture and tending instead toward an ironizing effect as it generates a distance between an emotion and its expression. Digital media curiously scrambles these dynamics by generating an instantaneity of delivery that mimics the immediacy of physical presence. In 2019, I wrote in The New Atlantis about how digital media scrambles the psychodynamics (Walter Ong’s phrase) of orality and literacy in often unhelpful ways: “The Inescapable Town Square.” Here’s a bit from that piece:

    The result is that we combine the weaknesses of each medium while losing their strengths. We are thrust once more into a live, immediate, and active communicative context — the moment regains its heat — but we remain without the non-verbal cues that sustain meaning-making in such contexts. We lose whatever moderating influence the full presence of another human being before us might cast on the passions the moment engendered. This not-altogether-present and not-altogether-absent audience encourages a kind of performative pugilism.

    To my knowledge, Ivan Illich never met nor corresponded with Hannah Arendt. However, in my efforts to “break bread with the dead,” as Auden once put it, they’re often seated together at the table. In a similarly convivial spirit, here is an excerpt from a recent book by Alissa Wilkinson:

    I learn from Hannah Arendt that a feast is only possible among friends, or people whose hearts are open to becoming friends. Or you could put it another way: any meal can become a feast when shared with friends engaged in the activity of thinking their way through the world and loving it together. A mere meal is a necessity for life, a fact of being human. But it is transformed into something much more important, something vital to the life of the world, when the people who share the table are engaging in the practices of love and of thinking.

    Finally, here’s a paragraph from Jacques Ellul’s Propaganda recently highlighted by Jeffrey Bilbro:

    In individualist theory the individual has eminent value, man himself is the master of his life; in individualist reality each human being is subject to innumerable forces and influences, and is not at all master of his own life. As long as solidly constituted groups exist, those who are integrated into them are subject to them. But at the same time they are protected by them against such external influences as propaganda. An individual can be influenced by forces such as propaganda only when he is cut off from membership in local groups. Because such groups are organic and have a well-structured material, spiritual, and emotional life, they are not easily penetrated by propaganda.

    Cheers! Hope you are all well,

    Michael



  • Welcome to the Convivial Society, a newsletter about technology and culture. In this installment, I explore a somewhat eccentric frame by which to consider how we relate to our technologies, particularly those we hold close to our bodies. You’ll have to bear through a few paragraphs setting up that frame, but I hope you find it to be a useful exercise. And I welcome your comments below. Ordinarily only paid subscribers can leave comments, but this time around I’m leaving the comments open for all readers. Feel free to chime in. I will say, though, that I may not be able to respond directly to each one. Cheers!

    Pardon what to some of you will seem like a rather arcane opening to this installment. We’ll be back on more familiar ground soon enough, but I will start us off with a few observations about liturgical practices in religious traditions.

A liturgy, incidentally, is a formal and relatively stable set of rites, rituals, and forms that order the public worship of a religious community. There are, for example, many ways to distinguish among the varieties of Christianity in the United States (or globally, for that matter). One might distinguish by region, by doctrine, by ecclesial structure, by the socioeconomic status of its members, etc. But one might also place the various strands of the tradition along a liturgical spectrum, a spectrum whose poles are sometimes labeled low church and high church.

    High church congregations, generally speaking, are characterized by their adherence to formal patterns and rituals. At high church services you would be more likely to observe ritual gestures, such as kneeling, bowing, or crossing oneself as well as ritual speech, such as set prayers, invocations, and responses. High church congregations are also more likely to observe a traditional church calendar and employ traditional vestments and ornamentation. Rituals and formalities of this sort would be mostly absent in low church congregations, which tend to place a higher premium on informality, emotion, and spontaneity of expression. I am painting with a broad brush, but it will serve well enough to set up the point I’m driving at.

    But one more thing before we get there. What strikes me about certain low church communities is that they sometimes imagine themselves to have no liturgy at all. In some cases, they might even be overtly hostile to the very idea of a liturgy. This is interesting to me because, in practice, it is not that they have no liturgy at all as they imagine—they simply end up with an unacknowledged liturgy of a different sort. Their services also feature predictable patterns and rhythms, as well as common cadences and formulations, even if they are not formally expressed or delineated and although they differ from the patterns and rhythms of high church congregations. It’s not that you get no church calendar, for example, it’s that you end up trading the old ecclesial calendar of holy days and seasons, such as Advent, Epiphany, and Lent, for a more contemporary calendar of national and sentimental holidays, which is to say those that have been most thoroughly commercialized.

    Now that you’ve borne with this eccentric opening, let me get us to what I hope will be the payoff. In the ecclesial context, this matters because the regular patterns and rhythms of worship, whether recognized as a liturgy or not, are at least as formative (if not more so) as the overt messages presented in a homily, sermon, or lesson, which is where most people assume the real action is. This is so because, as you may have heard it said, the medium is the message. In this case, I take the relevant media to be the embodied ritual forms, the habitual practices, and the material layers of the service of worship. These liturgical forms, acknowledged or unacknowledged, exert a powerful formative influence over time as they write themselves not only upon the mind of the worshipper but upon their bodies and, some might say, hearts.

    With all of this in mind, then, I would propose that we take a liturgical perspective on our use of technology. (You can imagine the word “liturgical” in quotation marks, if you like.) The point of taking such a perspective is to perceive the formative power of the practices, habits, and rhythms that emerge from our use of certain technologies, hour by hour, day by day, month after month, year in and year out. The underlying idea here is relatively simple but perhaps for that reason easy to forget. We all have certain aspirations about the kind of person we want to be, the kind of relationships we want to enjoy, how we would like our days to be ordered, the sort of society we want to inhabit. These aspirations can be thwarted in any number of ways, of course, and often by forces outside of our control. But I suspect that on occasion our aspirations might also be thwarted by the unnoticed patterns of thought, perception, and action that arise from our technologically mediated liturgies. I don’t call them liturgies as a gimmick, but rather to cast a different, hopefully revealing light on the mundane and commonplace. The image to bear in mind is that of the person who finds themselves handling their smartphone as others might their rosary beads.

To properly inventory our technologically mediated liturgies we need to become especially attentive to what our bodies want. After all, the power of a liturgy is that it inscribes itself not only on the mind, but also on the body. In that liminal moment before we have thought about what we are doing but find our bodies already in motion, we can begin to discern the shape of our liturgies. In my waking moments, do I find myself reaching for a device before my eyes have had a chance to open? When I sit down to work, what routines do I find myself engaging? In the company of others, to what is my attention directed? When, for example, as a writer, I notice that my hands have moved to open Twitter the very moment I begin to feel my sentence getting stuck, I am under the sway of a technological liturgy. In such moments, I might be tempted to think that my willpower has failed me. But from the liturgical perspective I’m exploring here, the problem is not a failure of willpower. Rather, it’s that I’ve trained my will—or, more to the point, I have allowed my will to be trained—to want something contrary to my expressed desire in the moment. One might even argue that this is, in fact, a testament to the power of the will, which is acting in keeping with its training. By what we unthinkingly do, we undermine what we say we want.

    Say, for example, that I desire to be a more patient person. This is a fine and noble desire. I suspect some of you have desired the same for yourselves at various points. But patience is hard to come by. I find myself lacking patience in the crucial moments regardless of how ardently I have desired it. Why might this be the case? I’m sure there’s more than one answer to this question, but we should at least consider the possibility that my failure to cultivate patience stems from the nature of the technological liturgies that structure my experience. Because speed and efficiency are so often the very reason why I turn to technologies of various sorts, I have been conditioning myself to expect something approaching instantaneity in the way the world responds to my demands. If at every possible point I have adopted tools and devices which promise to make things faster and more efficient, I should not be surprised that I have come to be the sort of person who cannot abide delay and frustration.

    “The cunning of pedagogic reason,” sociologist Pierre Bourdieu once observed, “lies precisely in the fact that it manages to extort what is essential while seeming to demand the insignificant.” Bourdieu had in mind “the respect for forms and forms of respect which are the most visible and most ‘natural’ manifestation of respect for the established order, or the concessions of politeness, which always contain political concessions.”

    What I am suggesting is that our technological liturgies function similarly. They, too, manage to extort what is essential while seeming to demand the insignificant. Our technological micro-practices, the movements of our fingers, the gestures of our hands, the posture of our bodies—these seem insignificant until we realize that we are in fact etching the grooves along which our future actions will tend to habitually flow.

The point of the exercise is not to divest ourselves of such liturgies altogether. Were we to imagine ourselves free of them, like certain low church congregations that claim to have no liturgy, we would only deepen the power of the unnoticed patterns shaping our thought and actions. And, more to the point, we would be ceding this power not to the liturgies themselves, but to the interests served by those who have crafted and designed those liturgies. My loneliness is not assuaged by my habitual use of social media. My anxiety is not meaningfully relieved by the habit of consumption engendered by the liturgies crafted for me by Amazon. My health is not necessarily improved by compulsive use of health tracking apps. Indeed, in the latter case, the relevant liturgies will tempt me to reduce health and flourishing to what the apps can measure and quantify.

Hannah Arendt once argued that totalitarian regimes succeed, in part, by dislodging or disembedding individuals from their traditional and customary milieus. Individuals who have been so “liberated” are more malleable and subject to new forms of management and control. The consequences of many modern technologies can play out in much the same way. They promise some form of liberation—from the constraints of place, time, community, or even the body itself. Such liberation is often framed as a matter of greater efficiency, convenience, or flexibility. But, to take one example, when someone is freed to work from home, they may find that they can now be expected to work anywhere and at any time. When older patterns and rhythms are overthrown, new patterns and rhythms are imposed and these are often far less humane because they are not designed to serve human ends.

    So I leave you with a set of questions and a comment section open to all readers. I’ve given you a few examples of what I have in mind, but what technological liturgies do you find shaping your days? What are their sources or whose interests do they serve? How much power do you have to resist these liturgies or subvert them if you find that they do, in fact, undermine your own aims and goals? Finally, what liturgies do you seek to implement for yourselves (these may be explicitly religious or not)? After all, as the philosopher Albert Borgmann once put it, we must “meet the rule of technology with a deliberate and regular counterpractice.”



  • This is the audio version of the last essay posted a couple of days ago, “What Is To Be Done? — Fragments.”

    It was a long time between installments of the newsletter, and it has been an even longer stretch since the last audio version. As I note in the audio, my apologies to those of you who primarily rely on the audio version of the essays. I hope to be more consistent on this score moving forward!

    Incidentally, in recording this installment I noticed a handful of typos in the original essay. I’ve edited these in the web version, but I'm sorry those of you who read the emailed version had to endure them. Obviously, my self-editing was also a bit rusty!

One last note: I’ve experimented with a paid-subscribers* discussion thread for this essay. It’s turned out rather well, I think. There’ve been some really insightful comments and questions. So, if you are a paid subscriber, you might want to check that out: Discussion Thread.

    Cheers,

    Michael

* Note to recent sign-ups: I follow a patronage model. All of the writing is public; there is no paywall for the essays. But I do invite those who value this work to support it as they are able with paid subscriptions. Those who do so will, from time to time, have some additional community features come their way.



  • Welcome to the Convivial Society, a newsletter about technology and culture. This is the audio version of the last installment, which focused on the Blake Lemoine/LaMDA affair. I argued that while LaMDA is not sentient, applications like it will push us further along toward a digitally re-enchanted world. Also: to keep the essay to a reasonable length I resorted to some longish footnotes in the prior text version. That version also contains links to the various articles and essays I cited throughout the piece.

    I continue to be somewhat flummoxed about the best way to incorporate the audio and text versions. This is mostly because of how Substack has designed the podcast template. Naturally, it is designed to deliver a podcast rather than text, but I don’t really think of what I do as a podcast. Ordinarily, it is simply an audio version of a textual essay. Interestingly, Substack just launched what, in theory, is an ideal solution: the option to include a simple voiceover of the text, within the text post template. Unfortunately, I don’t think this automatically feeds the audio to Apple Podcasts, Spotify, etc. And, while I don’t think of myself as having a podcast, some of you do access the audio through those services. So, at present, I’ll keep to this somewhat awkward pattern of sending out the text and audio versions separately.

    Thanks as always to all of you who read, listen, share, and support the newsletter. Nearly three years into this latest iteration of my online work, I am humbled by and grateful for the audience that has gathered around it.

    Cheers,

    Michael



  • Welcome to the Convivial Society, a newsletter exploring the relationship between technology and culture. This is what counts as a relatively short post around here, 1800 words or so, about a certain habit of mind that online spaces seem to foster.

Almost one year ago, this exchange on Twitter caught my attention, enough so that I took a moment to capture it with a screenshot, thinking I’d go on to write about it at some point.

Set aside for a moment whatever your particular thoughts might be on the public debate, if we can call it that, over vaccines, vaccine messaging, vaccine mandates, etc. Instead, consider the form of the claim, specifically the “anti-anti-” framing. I think I first noticed this peculiar way of talking about (or around) an issue circa 2016. In 2020, contemplating the same dynamics, I observed that “social media, perhaps Twitter especially, accelerates both the rate at which we consume information and the rate at which ensuing discussion detaches from the issue at hand, turning into meta-debates about how we respond to the responses of others, etc.” So by the time Nyhan quote-tweeted Rosen last summer, the “anti-anti-” framing, to my mind, had already entered its mannerist phase.

    The use of “anti-anti-ad infinitum” is easy to spot, and I’m sure you’ve seen the phrasing deployed on numerous occasions. But the overt use of the “anti-anti-” formulation is just the most obvious manifestation of a more common style of thought, one that I’ve come to refer to as meta-positioning. In the meta-positioning frame of mind, thinking and judgment are displaced by a complex, ever-shifting, and often fraught triangulation based on who holds certain views and how one might be perceived for advocating or failing to advocate for certain views. In one sense, this is not a terribly complex or particularly novel dynamic. Our pursuit of understanding is often an uneasy admixture of the desire to know and the desire to be known as one who knows by those we admire. Unfortunately, social media probably tips the scale in favor of the desire for approval given its rapid-fire feedback mechanisms.

    Earlier this month, Kevin Baker commented on this same tendency in a recent thread that opened with the following observation, “A lot of irritating, mostly vapid people and ideas were able to build huge followings in 2010s because the people criticizing them were even worse.”

    Baker goes on to call this “the decade of being anti-anti-” and explains that he felt like he spent “the better part of the decade being enrolled into political and discursive projects that I had serious reservations about because I disagreed [with] their critics more and because I found their behavior reprehensible.” In his view, this is a symptom of the unchecked expansion of the culture wars. Baker again: “This isn't censorship. There weren't really censors. It's more a structural consequence of what happens when an issue gets metabolized by the culture war. There are only two sides and you just have to pick the least bad one.”

    I’m sympathetic to this view, and would only add that perhaps it is more specifically a symptom of what happens when the digitized culture wars colonize ever greater swaths of our experience. I argued a couple of years ago that just as industrialization gave us industrial warfare, so digitization has given us digitized culture warfare. My argument was pretty straightforward: “Digital media has dramatically enhanced the speed, scale, and power of the tools by which the culture wars are waged and thus transformed their norms, tactics, strategies, psychology, and consequences.” Take a look at the piece if you missed it.

    I’d say, too, that the meta-positioning habit of mind might also be explained as a consequence of the digitally re-enchanted discursive field. I won’t bog down this post, which I’m hoping to keep relatively brief, with the details of that argument, but here’s the most relevant bit:

    For my purposes, I’m especially interested in the way that philosopher Charles Taylor incorporates disenchantment theory into his account of modern selfhood. The enchanted world, in Taylor’s view, yielded the experience of a porous, and thus vulnerable self. The disenchanted world yielded an experience of a buffered self, which was sealed off, as the term implies, from beneficent and malignant forces beyond its ken. The porous self depended upon the liturgical and ritual health of the social body for protection against such forces. Heresy was not merely an intellectual problem, but a ritual problem that compromised what we might think of, in these times, as herd immunity to magical and spiritual forces by introducing a dangerous contagion into the social body. The answer to this was not simply reasoned debate but expulsion or perhaps a fiery purgation.

    Under digitally re-enchanted conditions, policing the bounds of the community appears to overshadow the value of ostensibly objective, civil discourse. In other words, meta-positioning, from this perspective, might just be a matter of making sure you are always playing for the right team, or at least not perceived to be playing for the wrong one. It’s not so much that we have something to say but that we have a social space we want to be seen to occupy.

    But as I thought about the meta-positioning habit of mind recently, another related set of considerations came to mind, one that is also connected to the digital media ecosystem. As a point of departure, I’d invite you to consider a recent post from Venkatesh Rao about “crisis mindsets.”

“As the world has gotten more crisis prone at all levels from personal to geopolitical in the last few years,” Rao explained, “the importance of consciously cultivating a more effective crisis mindset has been increasingly sinking in for me.” I commend the whole post to you; it offers a series of wise and humane observations about how we navigate crisis situations. Rao’s essay crossed my feed while I was drafting this post about meta-positioning, and these lines near the end of the essay caught my attention:

    “We seem to be entering a historical period where crisis circumstances are more common than normalcy. This means crisis mindsets will increasingly be the default, not flourishing mindsets.”

    I think this is right, but it also has a curious relationship to the digital media ecosystem. I can imagine someone arguing that genuine crisis circumstances are no more common now than they have ever been but that digital media feeds heighten our awareness of all that is broken in the world and also inaccurately create a sense of ambient crisis. This argument is not altogether wrong. In the digital media ecosystem, we are enveloped by an unprecedented field of near-constant information emanating from the world far and near, and the dynamics of the attention economy also encourage the generation of ambient crisis.

    But two things can both be true at the same time. It is true, I think, that we are living through a period during which crisis circumstances have become more frequent. This is, in part, because the structures, both social and technological, of the modern world do appear increasingly fragile if not wholly decrepit. It is also true that our media ecosystem heightens our awareness of these crisis circumstances (generating, in turn, a further crisis of the psyche) and that it also generates a field of faux crisis circumstances.

    Consequently, learning to distinguish between a genuine crisis and a faux crisis will certainly be an essential skill. I would add that it is also critical to distinguish among the array of genuine crisis circumstances that we encounter. Clearly, some will bear directly and unambiguously upon us—a health crisis, say, or a weather emergency. Others will bear on us less directly or acutely, and others still will not bear on us at all. Furthermore, there are those we will be able to address meaningfully through our actions and those we cannot. We should, therefore, learn to apportion our attention and our labors wisely and judiciously.

    But let’s come back to the habit of mind with which we began. If we are, in fact, inhabiting a media ecosystem that, through sheer scale and ubiquity, heightens our awareness of all that is wrong with the world and overwhelms pre-digital habits of sense-making and crisis-management, then meta-positioning might be more charitably framed as a survival mechanism. As Rao noted, “I have realized there is no such thing as being individually good or bad in a crisis. Humans either deal with crises in effective groups, or not at all.” Just as digital re-enchantment retrieves the communal instinct, so too, perhaps, does the perma-crisis mindset. Recalling Baker’s analysis, we might even say that the digitized culture war layered over the crisis circumstances intensifies the stigma of breaking ranks.

    There’s one last perspective I’d like to offer on the meta-positioning habit of mind. It also seems to suggest something like a lack of grounding or a certain aimlessness. There is a picture that is informing my thinking here. It is the picture of being adrift in the middle of the ocean with no way to get our bearings. Under these circumstances the best we can ever do is navigate away from some imminent danger, but we can never purposefully aim at a destination. So we find ourselves adrift in the vast digital ocean, and we have no idea what we are doing there or what we should be doing. All we know is that we are caught up in wave after wave of the discourse and the best we can do is to make sure we steer clear of obvious perils and keep our seat on whatever raft we find ourselves in, a raft which might be in shambles but, nonetheless, affords us the best chance of staying afloat.

So, maybe the meta-positioning habit of mind is what happens when I have a clearer sense of what I am against than of what I am for. Or maybe it is better to say that meta-positioning is what happens when we lack meaningful degrees of agency and are instead offered the simulacra of action in digitally mediated spheres, which generally means saying things about things and about the things other people are saying about the things—the “internet of beefs,” as Rao memorably called it. The best we can do is survive the beefs by making sure we’re properly aligned.

    To give it yet another turn, perhaps the digital sea through which we navigate takes the form of a whirlpool sucking us into the present. The whirlpool is a temporal maelstrom, keeping us focused on immediate circumstances, unable to distinguish, without sufficient perspective, between the genuine and the faux crisis.

    Under such circumstances, we lack what Alan Jacobs, borrowing the phrase from novelist Thomas Pynchon, has called “temporal bandwidth.” In Gravity’s Rainbow (1973), a character explains the concept: “temporal bandwidth is the width of your present, your now … The more you dwell in the past and future, the thicker your bandwidth, the more solid your persona. But the narrower your sense of Now, the more tenuous you are.” Paradoxically, then, the more focused we are on the present, the less of a grip we’re able to get on it. As Jacobs notes, the same character went on to say, “It may get to where you’re having trouble remembering what you were doing five minutes ago.” Indeed, so.

    Jacobs recommends extending our temporal bandwidth through a deliberate engagement with the past through our reading as well as a deliberate effort to bring the more distant future into our reckoning. As the philosopher Hans Jonas, whom Jacobs cites, encouraged us to ask, “What force shall represent the future in the present?” The point is that we must make an effort to wrest our gaze away from the temporal maelstrom, and to do so not only in the moment but as a matter of sustained counter-practice. Perhaps then we’ll be better equipped to avoid the meta-positioning habit of mind, which undoubtedly constrains our ability to think clearly, and to find better ways of navigating the choppy, uncertain waters before us.



Welcome to the Convivial Society, a newsletter about technology and culture. I tend to think of my writing as a way of clarifying my thinking, or, alternatively, of thinking out loud. Often I’m just asking myself, What is going on? That’s the case in this post. There was a techno-cultural pattern I wanted to capture in what follows, but I’m not sure that I’ve done it well enough. So, I’ll submit this for your consideration and critique. You can tell me, if you’re so inclined, whether there’s at least the grain of something helpful here or not. Also, you’ll note that my voice suggests a lingering cold that’s done a bit of a number on me over the past few days, but I hope this is offset by the fact that I’ve finally upgraded my mic and, hopefully, improved the sound quality. Cheers!

    If asked to define modernity or give its distinctive characteristics, what comes to mind? Maybe the first thing that comes to mind is that such a task is a fool’s errand, and you wouldn’t be wrong. There’s a mountain of books addressing the question, What is or was modernity? And another not insignificant hill of books arguing that, actually, there is or was no such thing, or at least not in the way it has been traditionally understood.

    Acknowledging as much, perhaps we’d still offer some suggestions. Maybe we’d mention a set of institutions or practices such as representative government or democratic liberalism, scientific inquiry or the authority of reason, the modern university or the free press. Perhaps a set of values comes to mind: individualism, free speech, rule of law, or religious freedom. Or perhaps some more abstract principles, such as instrumental rationality or belief in progress and the superiority of the present over the past. And surely some reference to secularization, markets, and technology would also be made, not to mention colonization and economic exploitation.

I won’t attempt to adjudicate those claims or rank them. Also, you’ll have to forgive me if I failed to include your preferred account of modernity; they are many. But I will venture my own tentative and partial theory of the case with a view to possibly illuminating elements of the present state of affairs. I’ve been particularly struck of late by the degree to which what I’ll call the myth of the machine became an essential element of the modern or, maybe better, the late modern world. Two clarifications before we proceed. First, I was initially calling this the “myth of neutrality” because I was trying to get at the importance of something like neutral or disinterested or value-free automaticity in various cultural settings. I wasn’t quite happy with neutrality as a way of capturing this pattern, though, and I’ve settled on the myth of the machine because it captures what may be the underlying template that manifests differently across various social spheres. And part of my argument will be that this template takes the automatic, ostensibly value-free operation of a machine as its model. Second, I use the term myth not to suggest something false or duplicitous, but rather to get at the normative and generative power of this template across the social order. That said, let’s move on, starting with some examples of how I see this myth manifesting itself.

    Objectivity, Impartiality, Neutrality

    The myth of the machine underlies a set of three related and interlocking presumptions which characterized modernity: objectivity, impartiality, and neutrality. More specifically, the presumptions that we could have objectively secured knowledge, impartial political and legal institutions, and technologies that were essentially neutral tools but which were ordinarily beneficent. The last of these appears to stand somewhat apart from the first two in that it refers to material culture rather than to what might be taken as more abstract intellectual or moral stances. In truth, however, they are closely related. The more abstract intellectual and institutional pursuits were always sustained by a material infrastructure, and, more importantly, the machine supplied a master template for the organization of human affairs.

There are any number of caveats to be made here. This post obviously paints with very broad strokes and deals in generalizations which may not prove useful or hold up under closer scrutiny. Also, I would stress that I take these three manifestations of the myth of the machine to be presumptions, by which I mean that the objectivity, impartiality, and neutrality in question were never genuinely achieved. The historical reality was always more complicated and, at points, tragic. I suppose the question is whether or not these ideals appeared plausible and desirable to a critical mass of the population, so that they could compel assent and supply some measure of societal cohesion. Additionally, it is obviously true that there were competing metaphors and models on offer, as well as critics of the machine, specifically the industrial machine. The emergence of large industrial technologies certainly strained the social capital of the myth. Furthermore, it is true that by the mid-20th century, a new kind of machine—the cybernetic machine, if you like, or system—comes into the picture. Part of my argument will be that digital technologies have seemingly broken the myth of the machine, though only fairly recently. But the cybernetic machine was still a machine, and it could continue to serve as an exemplar of the underlying pattern: automatic, value-free, self-regulating operation.

    Now, let me suggest a historical sequence that’s worth noting, although this may be an artifact of my own limited knowledge. The sequence, as I see it, begins in the 17th century with the quest for objectively secured knowledge animating modern philosophy as well as the developments we often gloss as the scientific revolution. Hannah Arendt characterized this quest as the search for an Archimedean point from which to understand the world, an abstract universal position rather than a situated human position. Later in the 18th century, we encounter the emergence of political liberalism, which is to say the pursuit of impartial political and legal institutions or, to put it otherwise, “a ‘machine’ for the adjudication of political differences and conflicts, independently of any faith, creed, or otherwise substantive account of the human good.” Finally, in the 19th century, the hopes associated with these pursuits became explicitly entangled with the development of technology, which was presumed to be a neutral tool easily directed toward the common good. I’m thinking, for example, of the late Leo Marx’s argument about the evolving relationship between progress and technology through the 19th century. “The simple republican formula for generating progress by directing improved technical means to societal ends,” Marx argued, “was imperceptibly transformed into a quite different technocratic commitment to improving ‘technology’ as the basis and the measure of — as all but constituting — the progress of society.”

    I wrote “explicitly entangled” above because, as I suggested at the outset, I think the entanglement was always implicit. This entanglement is evident in the power of the machine metaphor. The machine becomes the template for a mechanistic view of nature and the human being with attendant developments in a variety of spheres: deism in religion, for example, and the theory of the invisible hand in economics. In both cases, the master metaphor is that of self-regulating machinery. Furthermore, contrasted to the human, the machine appears dispassionate, rational, consistent, efficient, etc. The human was subject to the passions, base motives, errors of judgement, bias, superstition, provincialism, and the like. The more machine-like a person became, the more likely they were to secure objectivity and impartiality. The presumed neutrality of what we today call technology was a material model of these intellectual and moral aspirations. The trajectory of these assumptions leads to technocracy. The technocratic spirit triumphed through at least the mid-twentieth century, and it has remained a powerful force in western culture. I’m tempted to argue, however, that, in the United States at least, the Obama years may come to be seen as its last confident flourish. In any case, the machine supplied a powerful metaphor that worked its way throughout western culture.

    Another way to frame all of this, of course, is by reference to Jacques Ellul’s preoccupation with what he termed la technique, the imperative to optimize all areas of human experience for efficiency, which he saw as the defining characteristic of modern society. Technique manifests itself in a variety of ways, but one key symptom is the displacement of ends by a fixation on means, so much so that means themselves become ends. The smooth and efficient operation of the system becomes more important than reckoning with which substantive goods should be pursued. Why something ought to be done comes to matter less than that it can be done and faster. The focus drifts toward a consideration of methods, procedures, techniques, and tools and away from a discussion of the goals that ought to be pursued.

    The Myth of the Machine Breaks Down

Let’s revisit the progression I described earlier to see how the myth of the machine begins to break down, and why this may illuminate the strangeness of our moment. Just as the modern story began with the quest for objectively secured knowledge, this ideal may have been the first to lose its implicit plausibility. From the late 19th century onward, philosophers, physicists, sociologists, anthropologists, psychologists, and historians, among others, have proposed a more complex picture that emphasized the subjective, limited, contingent, situated, and even irrational dimensions of how humans come to know the world. The ideal of objectively secured knowledge became increasingly questionable throughout the 20th century. Some of these trends get folded under the label “postmodernism,” but I found the term unhelpful at best a decade ago and now find it altogether useless.

    We can similarly trace a growing disillusionment with the ostensible impartiality of modern institutions. This takes at least two forms. On the one hand, we might consider the frustrating and demoralizing character of modern bureaucracies, which we can describe as rule-based machines designed to outsource judgement and enhance efficiency. On the other, we can note the heightened awareness of the actual failures of modern institutions to live up to the ideals of impartiality, which has been, in part, a function of the digital information ecosystem.

But while faith in the possibility of objectively secured knowledge and impartial institutions faltered, the myth of the machine persisted in the presumption that technology itself was fundamentally neutral. Until very recently, that is. Or so it seems. And my thesis (always for disputation) is that the collapse of this last manifestation of the myth brings the whole house down. This is in part because of how much work the presumption of technological neutrality was doing all along to hold American society together. (International readers: as always, read with a view to your own setting. I suspect there are some areas of broad overlap and other instances when my analysis won’t travel well.) Already by the late 19th century, progress had become synonymous with technological advancements, as Leo Marx argued. If social, political, or moral progress stalled, then at least the advance of technology could be counted on.

The story historian David Nye tells in American Technological Sublime is also instructive here. Nye convincingly argued that technology became an essential element of America’s civil religion (that’s my characterization), functionally serving, through its promise and ritual civic commemoration, as a source of cultural vitality and cohesion. It’s hard to imagine this today, but Nye documents how, through the 19th and early to mid-20th century, new technologies of significant scale and power were greeted with what can only be described as religious reverence and their appearance heralded in civic ceremonies.

But over the last several years, the plausibility of this last and also archetypal manifestation of the myth of the machine has also waned. Not altogether, to be sure, but in important and influential segments of society and throughout a wide cross-section of society, too. One can perhaps see the shift most clearly in the public discourse about social media and smartphones, but this may be a symptom of a larger disillusionment with technology. And not only with technological artifacts and systems, but also with the technocratic ethos and the public role of expertise.

    After the Myth of the Machine

If the myth of the machine, in these three manifestations, was, in fact, a critical element of the culture of modernity, underpinning its aspirations, then, as each in turn becomes increasingly implausible, the modern world order comes apart. I’d say that this is more or less where we’re at. You could usefully analyze any number of cultural fault lines through this lens. The center, which may not in fact hold, is where you find those who still operate as if the presumptions of objectivity, impartiality, and neutrality still compelled broad cultural assent, and they are now assailed from both the left and the right by those who have grown suspicious or altogether scornful of such presumptions. Indeed, the left/right distinction may be less helpful than the distinction between those who uphold some combination of the values of objectivity, impartiality, and neutrality and those who no longer find them compelling or desirable.

At present, contemporary technologies are playing a dual role in these developments. On the one hand, I would argue that the way the technologies classified, accurately or not, as A.I. are framed suggests an effort to save the appearances of modernity, which is to say to aim at the same ideals of objectivity, impartiality, and neutrality while acknowledging that human institutions failed to consistently achieve them. Strikingly, these technologies also retrieve some of the most pernicious fixations of modern science, such as phrenology. The implicit idea is that rather than make human judgment, for example, more machine-like, we simply hand judgment over to the machines altogether. Maybe the algorithm can be thoroughly objective even though the human being cannot. Or we might characterize it as a different approach to the problem of situated knowledge, one that seeks to solve the problem by scale rather than by detachment, abstraction, or perspective. The accumulation of massive amounts of data about the world can yield new insights and correlations which, while not subject to human understanding, will nonetheless prove useful. Notice how in these cases the neutrality of the technology involved is taken for granted. When it becomes clear, however, that the relevant technologies are not and cannot, in fact, be neutral in this way, this last-ditch effort to double down on the old modern ideals stalls out.

It is also the case that digital media has played a key role in weakening the plausibility of claims to objectively secured knowledge and impartial institutions. The deluge of information through which we all slog every day is not hospitable to the ideals of objectivity and impartiality, which were, to some degree, artifacts of print and mass media ecosystems. The present condition of information super-abundance, together with troves of easily searchable memory databases, makes it trivially easy either to expose actual instances of bias, self-interest, inconsistency, and outright hypocrisy or to generate (unwittingly for yourself or intentionally for others) the appearance of such. In the age of the Database, no one controls the Narrative. And while narratives proliferate and consolidate along a predictable array of partisan and factional lines, the notion that the competing claims could be adjudicated objectively or impartially is defeated by exhaustion.

The dark side of this thesis involves the realization that the ideals of objectivity, impartiality, and neutrality, animated by the myth of the machine, were strategies to defuse violent and perpetual conflict over competing visions of the true, the good, and the just during the early modern period in Europe. I’ve been influenced in this line of thought by the late Stephen Toulmin’s Cosmopolis: The Hidden Agenda of Modernity. Toulmin argued that modernity experienced a false start in the fifteenth and sixteenth centuries, one characterized by a more playful, modest, and humane spirit, which was overwhelmed by the more domineering spirit of the seventeenth century and the emergence of the modern order in the work of Descartes, Newton, and company, a spirit that was, in fairness, animated by a desperate desire to quell the violence that engulfed post-Reformation Europe. As I summarized Toulmin’s argument in 2019, the quest for certainty “took objectivity, abstraction, and neutrality as methodological pre-conditions for both the progress of science and politics, that is, for the re-emergence of public knowledge. The right method, the proper degree of alienation from the particulars of our situation, translations of observable phenomena into the realm of mathematical abstraction—these would lead us away from the uncertainty and often violent contentiousness that characterized the dissolution of the premodern world picture. The idea was to reconstitute the conditions for the emergence of public truth and, hence, public order.”

    In that same essay three years ago, I wrote, “The general progression has been to increasingly turn to technologies in order to better achieve the conditions under which we came to believe public knowledge could exist [i.e., objectivity, disinterestedness, impartiality, etc]. Our crisis stems from the growing realization that our technologies themselves are not neutral or objective arbiters of public knowledge and, what’s more, that they may now actually be used to undermine the possibility of public knowledge.” Is it fair to say that these lines have aged well?

    Of course, the reason I characterize this as the dark side of the argument is that it raises the following question: What happens when the systems and strategies deployed to channel often violent clashes within a population deeply, possibly intractably divided about substantive moral goods and now even about what Arendt characterized as the publicly accessible facts upon which competing opinions could be grounded—what happens when these systems and strategies fail?

It is possible to argue that they failed long ago, but the failure was veiled by an unevenly distributed wave of material abundance. Citizens became consumers and, by and large, made peace with the exchange. After all, if the machinery of government could run of its own accord, what was there left to do but enjoy the fruits of prosperity? But what if abundance was an unsustainable solution, either because it taxed the earth at too high a rate or because it was purchased at the cost of other values such as rootedness, meaningful work and involvement in civic life, abiding friendships, personal autonomy, and participation in rich communities of mutual care and support? Perhaps in the framing of that question, I’ve tipped my hand about what might be the path forward.

At the heart of technological modernity there was the desire—sometimes veiled, often explicit—to overcome the human condition. The myth of the machine concealed an anti-human logic: if the problem is the failure of the human to conform to the pattern of the machine, then bend the human to the shape of the machine or eliminate the human altogether. The slogan of one of the high-modernist world’s fairs of the 1930s comes to mind: “Science Finds, Industry Applies, Man Conforms.” What is now being discovered in some quarters, however, is that the human is never quite eliminated, only diminished.



    Get full access to The Convivial Society at theconvivialsociety.substack.com/subscribe
  • Welcome to the Convivial Society, a newsletter about technology, culture, and the moral life. In this installment you’ll find the audio version of the previous essay, “The Face Stares Back.” And along with the audio version you’ll also find an assortment of links and resources. Some of you will remember that such links used to be a regular feature of the newsletter. I’ve prioritized the essays, in part because of the information I have on click rates, but I know the links and resources are useful to more than a few of you. Moving forward, I think it makes sense to put out an occasional installment that contains just links and resources (with varying amounts of commentary from me). As always, thanks for reading and/or listening.

    Links and Resources

    * Let’s start with a classic paper from 1965 by philosopher Hubert Dreyfus, “Alchemy and Artificial Intelligence.” The paper, prepared for the RAND Corporation, opens with a long epigraph from the 17th-century polymath Blaise Pascal on the difference between the mathematical mind and the perceptive mind.

    * On “The Tyranny of Time”: “The more we synchronize ourselves with the time in clocks, the more we fall out of sync with our own bodies and the world around us.” More: “The Western separation of clock time from the rhythms of nature helped imperialists establish superiority over other cultures.”

    * Relatedly, a well-documented case against Daylight Saving Time: “Farmers, Physiologists, and Daylight Saving Time”: “Fundamentally, their perspective is that we tend to do well when our body clock and social clock—the official time in our time zone—are in synch. That is, when noon on the social clock coincides with solar noon, the moment when the Sun reaches its highest point in the sky where we are. If the two clocks diverge, trouble ensues. Startling evidence for this has come from recent findings in geographical epidemiology—specifically, from mapping health outcomes within time zones.”

    * Jasmine McNealy on “Framing and Language of Ethics: Technology, Persuasion, and Cultural Context.”

    * Interesting forthcoming book by Kevin Driscoll: The Modem World: A Prehistory of Social Media.

    * Great piece on Jacques Ellul by Samuel Matlack at The New Atlantis, “How Tech Despair Can Set You Free”: “But Ellul rejects it. He refuses to offer a prescription for social reform. He meticulously and often tediously presents a problem — but not a solution of the kind we expect. This is because he believed that the usual approach offers a false picture of human agency. It exaggerates our ability to plan and execute change to our fundamental social structures. It is utopian. To arrive at an honest view of human freedom, responsibility, and action, he believed, we must confront the fact that we are constrained in more ways than we like to think. Technique, says Ellul, is society’s tightest constraint on us, and we must feel the totality of its grip in order to find the freedom to act.”

    * Evan Selinger on “The Gospel of the Metaverse.”

    * Ryan Calo on “Modeling Through”: “The prospect that economic, physical, and even social forces could be modeled by machines confronts policymakers with a paradox. Society may expect policymakers to avail themselves of techniques already usefully deployed in other sectors, especially where statutes or executive orders require the agency to anticipate the impact of new rules on particular values. At the same time, “modeling through” holds novel perils that policymakers may be ill equipped to address. Concerns include privacy, brittleness, and automation bias, all of which law and technology scholars are keenly aware. They also include the extension and deepening of the quantifying turn in governance, a process that obscures normative judgments and recognizes only that which the machines can see. The water may be warm, but there are sharks in it.”

    * “Why Christopher Alexander Still Matters”: “The places we love, the places that are most successful and most alive, have a wholeness about them that is lacking in too many contemporary environments, Alexander observed. This problem stems, he thought, from a deep misconception of what design really is, and what planning is. It is not “creating from nothing”—or from our own mental abstractions—but rather, transforming existing wholes into new ones, and using our mental processes and our abstractions to guide this natural life-supporting process.”

    * An interview with philosopher Shannon Vallor: “Re-envisioning Ethics in the Information Age”: “Instead of using the machines to liberate and enlarge our own lives, we are increasingly being asked to twist, to transform, and to constrain ourselves in order to strengthen the reach and power of the machines that we increasingly use to deliver our public services, to make the large-scale decisions that are needed in the financial realm, in health care, or in transportation. We are building a society where the control surfaces are increasingly automated systems and then we are asking humans to restrict their thinking patterns and to reshape their thinking patterns in ways that are amenable to this system. So what I wanted to do was to really reclaim some of the literature that described that process in the 20th century—from folks like Jacques Ellul, for example, or Herbert Marcuse—and then really talk about how this is happening to us today in the era of artificial intelligence and what we can do about it.”

    * From Lance Strate in 2008: “Studying Media AS Media: McLuhan and the Media Ecology Approach.”

    * Japan’s museum of rocks that look like faces.

    * I recently had the pleasure of speaking with Katherine Dee for her podcast, which you can listen to here.

    * I’ll leave you with an arresting line from Simone Weil’s notebooks: “You could not have wished to be born at a better time than this, when everything is lost.”



    Get full access to The Convivial Society at theconvivialsociety.substack.com/subscribe
• Welcome to the Convivial Society, a newsletter about technology and culture. The pace of the newsletter has been slow of late, which I regret, but I trust it will pick up just a touch in the coming weeks (also, please forgive me if you’ve been in touch over the past month or so and haven’t heard back). For starters, I’ll follow up this installment shortly with another that will include some links and resources. In this installment, I’m thinking about attention again, but from a slightly different perspective: how do we become objects of attention for others? If you’re a recent subscriber, I’ll note that attention is a recurring theme in my writing, although it may be a while before I revisit it again (but don’t hold me to that). As per usual, this is an exercise in thinking out loud, which seeks to clarify some aspect of our experience with technology and explore its meaning. I hope you find it useful. Finally, I’m playing with formatting again, driven chiefly by the fact that this is a hybrid text meant to be read and/or listened to in the audio version. So you’ll note my use of bracketed in-text excursuses in this installment. If it degrades your reading or listening experience, feel free to let me know.

    Objects of Attention

    A recent email exchange with Dr. Andreas Mayert got me thinking about attention from yet another angle. Ordinarily, I think about attention as something I have, or, as I suggested in a recent installment, something I do. I give my attention to things out there in the world, or, alternatively, I attend to the world out there. Regardless of how we formulate it, what I am imagining in these cases is how attention flows outward from me, the subject, to some object in the world. And there’s much to consider from that perspective: how we direct our attention, for example, or how objects in the world beckon and reward our attention. But, as Dr. Mayert suggested to me, it’s also worth considering how attention flows in the opposite direction. That is to say, considering not the attention I give, but the attention that bears down on me.

    [First excursus: The case of attending to myself is an interesting one given this way of framing attention as both incoming and outgoing. If I attend to my own body—by minding my breathing, for example—I’d say that my attention still feels to me as if it is going outward before then focusing inward. It’s the mind’s gaze upon the body. But it’s a bit different if I’m trying to attend to my own thoughts. In this case I find it difficult to assign directionality to my attention. Moreover, it seems to me that the particular sense I am using to attend to the world matters in this regard, too. For example, closing my eyes seems to change the sense that my attention is flowing out from my body. As I listen while my eyes are shut, I have the sense that sounds are spatially located, to my left rather than to my right, but also that the sound is coming to me. I’m reminded, too, of the ancient understanding of vision, which conceived of sight as a ray emanating from the eye to make contact with the world. The significance of these subtle shifts in how we perceive the world and how media relate to perception should not be underestimated.]

    There are several ways of thinking about where this attention that might fix on us as its object originates. We can consider, for example, how we become an object of attention for large, impersonal entities like the state or a corporation. Or we can contemplate how we become the object of attention for another person—legibility in the former case and the gaze in the latter. There are any number of other possibilities and variations within them, but given my exchange with Mayert I found myself considering what happens when a machine pays attention to us. By “machine” in this case, I mean any of the various assemblages of devices, sensors, and programs through which data is gathered about us and interpretations are extrapolated from that data, interpretations which purport to reveal something about us that we ourselves may not otherwise recognize or wish to disclose.

I am, to be honest, hesitant to say that the machine (or program or app, etc.) pays attention to us, much less that it attends to us. I suppose it is better to say that the machine mediates the attention of others. But there is something about the nature of that mediation that transforms the experience of being the object of another’s attention to such a degree that it may be inadequate to speak merely of the attention of another. By comparison, if I discover that someone is using a pair of binoculars to watch me at a distance, I would still say, with some unease to be sure, that it is the person and not the binoculars attending to me, although of course their gaze is mediated by the binoculars. If I’m being watched on a recording of CCTV footage, even though someone is attending to me asynchronously through the mediation of the camera, I’d still say that it is the person paying attention to me, although I might hesitate to say that it is me they are paying attention to.

    However, I’m less confident of putting it quite that way when, say, data about me is being captured, interpreted, and filtered to another who attends to me through that data and its interpretation. It does seem as if the primary work of attention, so to speak, is done not by the person but the machine, and this qualitatively changes the experience of being noted and attended to. Perhaps one way to say this is that when we are attended to by (or through) a machine we too readily become merely an object of analysis stripped of depth and agency, whereas when we are attended to more directly, although not necessarily in unmediated fashion, it may be harder—but not impossible, of course—to be similarly objectified.

    I am reminded, for example, of the unnamed protagonist of Graham Greene’s The Power and the Glory, a priest known better for his insobriety than his piety, who, while being jailed alongside one of his tormentors, thinks to himself, “When you visualized a man or woman carefully, you could always begin to feel pity … that was a quality God’s image carried with it … when you saw the lines at the corners of the eyes, the shape of the mouth, how the hair grew, it was impossible to hate.” There’s much that may discourage us from attending to another in this way, but the mediation of the machine seems to remove the possibility altogether.

    I am reminded of Clive Thompson’s intriguing analysis of captcha images, that grid of images that sometimes appears when you are logging in to a site and from which you are to select squares that contain things like buses or traffic lights. Thompson set out to understand why he found captcha images “overwhelmingly depressing.” After considering several factors, here’s what he concluded:

    “They weren’t taken by humans, and they weren’t taken for humans. They are by AI, for AI. They thus lack any sense of human composition or human audience. They are creations of utterly bloodless industrial logic. Google’s CAPTCHA images demand you to look at the world the way an AI does.”

The uncanny and possibly depressing character of the captcha images is, in Thompson’s compelling argument, a function of being forced to see the world from a non-human perspective. I’d suggest that an analogous unease emerges when we know ourselves to be perceived or attended to by a non-human agent, something that now happens routinely. In one way or another we are the objects of attention for traffic light cameras, smart speakers, sentiment analysis tools, biometric sensors, doorbell cameras, proctoring software, on-the-job motion detectors, and algorithms used to ostensibly discern our creditworthiness, suitability for a job, or proclivity to commit a crime. The list could go on and on. We navigate a field in which we are just as likely to be scanned, analyzed, and interpreted by a machine as we are to enjoy the undisturbed attention of another human being.

    Digital Impression Management

To explore these matters a bit more concretely, I’ll finally come to the subject of my exchange with Dr. Mayert: a study he conducted examining how some people experience the attention of a machine bearing down on them.

    Mayert’s research examined how employees reasoned about systems, increasingly used in the hiring process, which promise to “create complex personality profiles from superficially innocuous individual social media profiles.” You’ll find an interview with Dr. Mayert and a link to the study, both in German, here, and you can use your online translation tool of choice if, like me, you’re not up on your German. With permission, I’ll share portions of what Mayert discussed in our email exchange.

    The findings were interesting. On the one hand, Mayert found that “employees have no problem at all with companies taking a superficial look at their social media profiles to observe what is in any case only a mask in Goffman's sense.”

Erving Goffman, you may recall, was a mid-twentieth-century sociologist who, in The Presentation of Self in Everyday Life, developed a dramaturgical model of human identity and social interactions. The basic idea is that we can understand social interactions by analogy to stage performance. When we’re “on stage,” we’re involved in the work of “impression management.” Which is to say that we carefully manage how we are perceived by controlling the impressions we give off. (Incidentally, media theorist Joshua Meyrowitz usefully put Goffman’s work in conversation with McLuhan’s in No Sense of Place: The Impact of Electronic Media on Social Behavior, an underrated work of media theory published in 1985.)

    So the idea here is that social media platforms are Goffmanesque stages, and, after we came to terms with context collapse, we figured out how to manage the impressions given off by our profiles. Indeed, from this perspective we might say that social media just made explicit (and quantifiable) dimensions of human behavior which, hitherto, had been mostly implicit. You’d be forgiven for thinking that this picture is just a bit too tidy. In practice, impressions, like most human dynamics, cannot be perfectly managed. We always “give off” more than we imagine, for example, and others may read our performances more astutely than we suppose.

But this was not the whole story. Mayert reported that employees had a much stronger negative reaction when the systems claimed to “infer undisclosed personal information” from their carefully curated feeds. It is one thing, from their perspective, to have data used anonymously for the purpose of placing ads, for example—that is, when people are “ultimately anonymous objects of the data economy”—and quite another when the systems promise to disclose something about them as a particular person, something they did not intend to reveal. Whether the systems can deliver on this promise to know us better than we would want to be known is another question, and I think we should remain skeptical of such claims. But the claim that they could do just that elicited a higher degree of discomfort among participants in the study.

    The underlying logic of these attitudes uncovered by Mayert’s research is also of interest. The short version, as I understand it, goes something like this. Prospective employees have come to terms with the idea that employers will scan their profiles as part of the hiring process, so they have conducted themselves accordingly. But they are discomfited by the possibility that their digital “impression management” can be seen through to some truer level of the self. As Mayert put it, “respondents believed that they could nearly perfectly control how they were perceived by others through the design of their profiles, and this was of great importance to them.”

[Second excursus: I’m curious about whether this faith in digital impression management is a feature of the transition from televisual culture to digital culture. Impression management seems tightly correlated with the age of the image, specifically the televisual image. My theory is that social media initially absorbed those older impulses to manage the image (the literal image and our “self-image”). We bring the assumptions and practices of the older media regime with us to new media, and this includes assumptions about the self and its relations. So those of us who grew up without social media brought our non-digital practices and assumptions to the use of social media. But practices and assumptions native to the new medium will eventually win out, and I think we’ve finally been getting a glimpse of this over the last few years. One of these assumptions is that the digital self is less susceptible to management; another may be that we now manage not the image but the algorithm, which mediates our audience’s experience of our performance. Or, to put it another way, our impression management is now in the service of both the audience and the algorithm.]

    Mayert explained, however, that there was yet another intriguing dimension to his findings:

    “when they were asked about how they form their own image of others through information that can be found about them on the Net, it emerged that they superficially link together unexciting information that can be found about other persons and they themselves do roughly what is also attempted in applicant assessment through data analysis: they construct personality profiles from this information that, in terms of content, were strongly influenced by the attitudes, preferences or prejudices of the respondents.”

    So, these participants seemed to think they could, to some degree, see through or beyond the careful “impression management” of others on social media, but it did not occur to them that others might do likewise with their own presentations of the self.

    Mayert again: “Intended external representation and external representation perceived by others were equivalent for the respondents as long as it was about themselves.”

    “This result,” he adds, “explains their aversion to more in-depth [analysis] of their profiles in social media. From the point of view of the respondents, this is tantamount to a loss of control over their external perception, which endangers exactly what is particularly important to them.”

    The note of control and agency seems especially prominent and raises the question, “Who has the right to attend to us in this way?”

I think we can approach this question by noting that our techno-social milieu is increasingly optimized for surveillance, which is to say for placing each of us under the persistent gaze of machines, people, or both. Evan Selinger, among others, has long been warning us about surveillance creep, and it certainly seems to be the case that we can now be surveilled in countless ways by state actors, corporations, and fellow citizens. And, in large measure, ordinary people have been complicit in adopting and deploying seemingly innocuous nodes in the ever-expanding network of surveillance technologies. Often, these technologies promise to enhance our own ability to pay attention, but it is now the case that almost every technology that acts as an extension of our senses, designed to enhance our capacity to pay attention to the world, is also an instrument through which the attention of others can flow back toward us, bidden or unbidden.

    Data-driven Platonism

Hiring algorithms are but one example of a larger set of technologies which promise to disclose some deeper truth about the self or the world that would otherwise go unnoticed. Similar tools are deployed in the realms of finance, criminal justice, and health care, among others. The underlying assumption, occasionally warranted, is that analyzing copious amounts of data can disclose significant patterns or correlations which would have been missed without these tools. As I noted a few years back, we can think about this assumption by analogy to Plato’s allegory of the cave. We are, in this case, led out of the cave by data analysis, which reveals truths that are inaccessible not only to the senses but even to unaided human reason. I remain fascinated by the idea that we’ve created tools designed to seek out realities that exist only as putative objects of quantification and prediction. They exist, that is, only in the sense that someone designed a technology to discover them, and the search amounts to a pursuit of immanentized Platonic forms.

    With regard to the self, I wonder whether the participants in Mayert’s study had any clear notion of what might be discovered about them. In other words, in their online impression management, were they consciously suppressing or obscuring particular aspects of their personality or activities, which they now feared the machine would disclose, or was their unease itself a product of the purported capacities of the technology? Were they uneasy because they came to suspect that the machine would disclose something about them which they themselves did not know? Or, alternatively, was their unease grounded in the reasonable assumption that they would have no recourse should the technology disqualify them based on opaque automated judgments?

I was reminded of ImageNet Roulette, created by Kate Crawford and Trevor Paglen in 2019. The app was trained on the ImageNet database’s labels for classifying persons and was intended to demonstrate the limits of facial recognition software. ImageNet Roulette invited you to submit a picture to see how you would be classified by the app. Many users found that they were classified with an array of mistaken and even offensive labels. As Crawford noted in a subsequent report,

    “Datasets aren’t simply raw materials to feed algorithms, but are political interventions. As such, much of the discussion around 'bias' in AI systems misses the mark: there is no 'neutral,' 'natural,' or 'apolitical' vantage point that training data can be built upon. There is no easy technical 'fix' by shifting demographics, deleting offensive terms, or seeking equal representation by skin tone. The whole endeavor of collecting images, categorizing them, and labeling them is itself a form of politics, filled with questions about who gets to decide what images mean and what kinds of social and political work those representations perform.”

At the time, I was intrigued by another line of thought. I wondered what those who were playing with the app and posting their results might have been feeling about the prospects of getting labeled by the machine. My reflections, which I wrote about briefly in the newsletter, were influenced by the 20th-century diagnostician of the self, Walker Percy. Basically, I wondered if users harbored any implicit hopes or fears in getting labeled by the machine. It is, of course, possible and perhaps likely that users brought no such expectations to the experience, but maybe some found themselves unexpectedly curious about how they would be categorized. Would we hope that the tool validates our sense of identity, suggesting that we craved some validation of our own self-appraisals? Would we hope that the result would be obviously mistaken, suggesting that the self was not so uncomplicated that a machine could discern its essence? Or would we hope that it revealed something about us that had escaped our notice, suggesting that we’ve remained, as Augustine once put it, a question to ourselves?

    Efforts to size-up the human through the gaze of the machine trade on the currency of a vital truth: we look for ourselves in the gaze of the other. When someone gives us their attention, or, better, when someone attends to us, they bestow upon us a gift. As Simone Weil has put it, “Attention is the rarest and purest form of generosity.”

    When we consider how attention flows out from us, we are considering, among other things, what constitutes our bond to the world. When we consider how attention bears down on us, we are considering, among other things, what constitutes the self.

One of the assumptions I bring to my writing about attention is that we desire it and we’re right to do so. To receive no one’s attention would be a kind of death. There are, of course, disordered ways of seeking attention, but we need the attention of the other even if only to know who we are. This is why I recently wrote that “the problem of distraction can just as well be framed as a problem of loneliness.” Digital media environments hijack our desire to be known in order to fuel the attention economy. And it’s in this light that I think it may be helpful to reconsider much of what we’ve recently glossed as surveillance capitalism through the frame of attention: not just the attention we give, but also that which we receive.

    From this perspective, one striking feature of our techno-social milieu is that it has become increasingly difficult both to receive the attention of our fellow human beings and to refuse the attention of the machines. The exchange of one for the other is, in certain cases, especially disheartening, as, for example, when surveillance becomes, in Alan Jacobs’s memorable phrase, the normative form of care. And, as I suggested earlier, the attention frame also has the advantage of capturing the uncanny dimensions of being subject to the nonhuman gaze and rendered a quantifiable object of analysis, not so much seen as seen through, appraised without being known.

In a rather well-known poem from 1967, Richard Brautigan wrote hopefully of a cybernetic paradise in which we, along with the other non-human animals, would be “watched over by machines of loving grace.” He got the watching-over part right, but there are no machines of loving grace. To be fair, loving grace is also a quality too few humans tend to exhibit in our attention to others.



    Get full access to The Convivial Society at theconvivialsociety.substack.com/subscribe
  • Welcome to the Convivial Society, a newsletter that is ostensibly about technology and culture but more like my effort to make some sense of the world taking shape around us. For many of you, this will be the first installment to hit your inbox—welcome aboard. And my thanks to those of you who share the newsletter with others and speak well of it. If you are new to the Convivial Society, please feel free to read this orientation to new readers that I posted a few months ago.

0. Attention discourse is my term for the proliferation of articles, essays, books, and op-eds about attention and distraction in the age of digital media. I don’t mean the label pejoratively. I’ve made my own contributions to the genre, in this newsletter and elsewhere, and as recently as May of last year. In fact, I tend to think that attention discourse circles around immensely important issues we should all think about more deliberately. So here, then, is yet another entry for the attention files, presented as a numbered list of loosely related observations for your consideration, a form in which I like to occasionally indulge and which I hope you find suggestive and generative.

1. I take Nick Carr’s 2008 piece in The Atlantic, “Is Google Making Us Stupid?”, to be the ur-text of this most recent wave of attention discourse. If that’s fair, then attention and distraction have been the subject of intermittent public debate for nearly fifteen years, but this sustained focus appears to have yielded little by way of improving our situation. I say “the most recent wave” because attention discourse has a history that pre-dates the digital age. The first wave of attention discourse can be dated back to the mid-nineteenth century, as historian Jonathan Crary has argued at length, especially in his 1999 book, Suspensions of Perception: Attention, Spectacle, and Modern Culture. “For it is in the late nineteenth century,” Crary observed,

within the human sciences and particularly the nascent field of scientific psychology, that the problem of attention becomes a fundamental issue. It was a problem whose centrality was directly related to the emergence of a social, urban, psychic, and industrial field increasingly saturated with sensory input. Inattention, especially within the context of new forms of large-scale industrialized production, began to be treated as a danger and a serious problem, even though it was often the very modernized arrangements of labor that produced inattention. It is possible to see one crucial aspect of modernity as an ongoing crisis of attentiveness, in which the changing configurations of capitalism continually push attention and distraction to new limits and thresholds, with an endless sequence of new products, sources of stimulation, and streams of information, and then respond with new methods of managing and regulating perception […] But at the same time, attention, as a historical problem, is not reducible to the strategies of social discipline. As I shall argue, the articulation of a subject in terms of attentive capacities simultaneously disclosed a subject incapable of conforming to such disciplinary imperatives.

    Many of the lineaments of contemporary attention discourse are already evident in Crary’s description of its 19th century antecedents.

2. One reaction to learning that modern-day attention discourse has longstanding antecedents would be to dismiss contemporary criticisms of the digital attention economy. The logic of such dismissals is not unlike that of the tale of Chicken Little. Someone is always proclaiming that the sky is falling, but the sky never falls. This is, in fact, a recurring trope in the wider public debate about technology. The seeming absurdity of some 19th-century pundit decrying the allegedly demoralizing consequences of the novel is somehow enough to ward off modern-day critiques of emerging technologies. Interestingly, however, it’s often the case that the antecedents don’t take us back indefinitely into the human past. Rather, they often have a curiously consistent point of origin: somewhere in the mid- to late-nineteenth century. It’s almost as if some radical techno-economic re-ordering of society had occurred, generating for the first time a techno-social environment which was, in some respects at least, inhospitable to the embodied human person. That the consequences linger and remain largely unresolved, or that new and intensified iterations of the older disruptions yield similar expressions of distress, should not be surprising.

    3. Simone Weil, writing in Oppression and Liberty (published posthumously in 1955):

“Never has the individual been so completely delivered up to a blind collectivity, and never have men been less capable, not only of subordinating their actions to their thoughts, but even of thinking. Such terms as oppressors and oppressed, the idea of classes—all that sort of thing is near to losing all meaning, so obvious are the impotence and distress of all men in the face of the social machine, which has become a machine for breaking hearts and crushing spirits, a machine for manufacturing irresponsibility, stupidity, corruption, slackness and, above all, dizziness. The reason for this painful state of affairs is perfectly clear. We are living in a world in which nothing is made to man’s measure; there exists a monstrous discrepancy between man’s body, man’s mind and the things which at the present time constitute the elements of human existence; everything is in disequilibrium […] This disequilibrium is essentially a matter of quantity. Quantity is changed into quality, as Hegel said, and in particular a mere difference in quantity is sufficient to change what is human into what is inhuman. From the abstract point of view quantities are immaterial, since you can arbitrarily change the unit of measurement; but from the concrete point of view certain units of measurement are given and have hitherto remained invariable, such as the human body, human life, the year, the day, the average quickness of human thought. Present-day life is not organized on the scale of all these things; it has been transported into an altogether different order of magnitude, as though man were trying to raise it to the level of the forces outside of nature while neglecting to take his own nature into account.”

    4. Nicholas Carr began his 2008 article with a bit of self-disclosure, which I suspect now sounds pretty familiar to most of us if it didn’t already then. Here’s what he reported:

    Over the past few years I’ve had an uncomfortable sense that someone, or something, has been tinkering with my brain, remapping the neural circuitry, reprogramming the memory. My mind isn’t going—so far as I can tell—but it’s changing. I’m not thinking the way I used to think. I can feel it most strongly when I’m reading. Immersing myself in a book or a lengthy article used to be easy. My mind would get caught up in the narrative or the turns of the argument, and I’d spend hours strolling through long stretches of prose. That’s rarely the case anymore. Now my concentration often starts to drift after two or three pages. I get fidgety, lose the thread, begin looking for something else to do. I feel as if I’m always dragging my wayward brain back to the text. The deep reading that used to come naturally has become a struggle.

    At the time, it certainly resonated with me, and what may be most worth noting about this today is that Carr, and those who are roughly his contemporaries in age, were in the position of living before and after the rise of the commercial internet and thus had a point of experiential contrast to emerging digital culture.

5. I thought of this paragraph recently while I was reading the transcript of Sean Illing’s interview with Johann Hari about his new book, Stolen Focus: Why You Can’t Pay Attention—And How to Think Deeply Again. Not long after reading the text of Illing’s interview, I also read the transcript of his conversation with Ezra Klein, which you can read or listen to here. I’m taking these two conversations as an occasion to reflect again on attention, for its own sake but also as an indicator of larger patterns in our techno-social milieu. I’ll dip into both conversations to frame my own discussion, and, as you’ll see, my interest isn’t to criticize Hari’s argument but rather to pose some questions and take it as a point of departure.

Hari, it turns out, is, like me, in his early 40s. So we, too, lived a substantial chunk of time in the pre-commercial-internet era. And, like Carr, Hari opens his conversation with Illing by reporting on his own experience:

    I noticed that with each year that passed, it felt like my own attention was getting worse. It felt like things that require a deep focus, like reading a book, or watching long films, were getting more and more like running up and down an escalator. I could do them, but they were getting harder and harder. And I felt like I could see this happening to most of the people I knew.

    But, as the title of his book suggests, Hari believes that this was not just something that has happened but something that was done to him. “We need to understand that our attention did not collapse,” he tells Illing, “our attention has been stolen from us by these very big forces. And that requires us to think very differently about our attention problems.”

    Like many others before him, Hari argues that these “big forces” are the tech companies, who have designed their technologies with a view to capturing as much of our attention as possible. In his view, we live in a technological environment that is inhospitable to the cultivation of attentiveness. And, to be sure, I think this is basically right, as far as it goes. This is not a wholly novel development, as we noted at the outset, even if its scope and scale have expanded and intensified.

6. There’s another dimension to this that’s worth considering, because it is often obscured by the way we tend to imagine attention and distraction as solitary or asocial phenomena. What we meet at the other end of our digital devices is not just a bit of information or an entertaining video clip or a popular game. Our devices do not only mediate information and entertainment; they mediate relationships.

As Alan Jacobs put it, writing in “Habits of Mind in an Age of Distraction”:

    “[W]e are not addicted to any of our machines. Those are just contraptions made up of silicon chips, plastic, metal, glass. None of those, even when combined into complex and sometimes beautiful devices, are things that human beings can become addicted to […] there is a relationship between distraction and addiction, but we are not addicted to devices […] we are addicted to one another, to the affirmation of our value—our very being—that comes from other human beings. We are addicted to being validated by our peers.”

This is part of what lends the whole business a tragic aspect. The problem of distraction can just as well be framed as a problem of loneliness. Sometimes we turn thoughtlessly to our devices for mere distraction, something to help us pass the time or break up the monotony of the day, although the heightened frequency with which we may do so certainly suggests signs of compulsive behavior. Perhaps it is the case in such moments that we do not want to be alone with our thoughts. But perhaps just as often we simply don’t want to be alone.

    We desire to be seen and acknowledged. To exercise meaningful degrees of agency and judgment. In short, to belong and to matter. Social media trades on these desires, exploits them, deforms them, and never truly satisfies them, which explains a good deal of the madness.

    7. In her own thoughtful and moving reflections on the ethical dimensions of attention, Jasmine Wang cited the following observations from poet David Whyte:

    “[T]he ultimate touchstone of friendship is not improvement, neither of the other nor of the self. The ultimate touchstone is witness, the privilege of having been seen by someone, and the equal privilege of being granted the sight of the essence of another, to have walked with them, and to have believed in them, and sometimes, just to have accompanied them, for however brief a span, on a journey impossible to accomplish alone.”

8. Perhaps the first modern theorist of distraction, the 17th-century polymath Blaise Pascal had a few things to say about diversions in his posthumously published Pensées:

    “What people want is not the easy peaceful life that allows us to think of our unhappy condition, nor the dangers of war, nor the burdens of office, but the agitation that takes our mind off it and diverts us.”

    “Nothing could be more wretched than to be intolerably depressed as soon as one is reduced to introspection with no means of diversion.”

    “The only thing that consoles us for our miseries is diversion. And yet it is the greatest of our miseries. For it is that above all which prevents us thinking about ourselves and leads us to destruction. But for that we should be bored, and boredom would drive us to seek some more solid means of escape, but diversion passes our time and brings us imperceptibly to our death.”

    9. Pascal reminds us of something we ought not to forget, which is that there may be technology-independent reasons for why we crave distractions. Weil had a characteristically religious and even mystical take on this. “There is something in our soul,” she wrote, “that loathes true attention much more violently than flesh loathes fatigue. That something is much closer to evil than flesh is. That is why, every time we truly give our attention, we destroy some evil in ourselves.”

We should not, in other words, imagine that the ability to focus intently or to give one’s sustained attention to some matter was the ordinary state of affairs before the arrival of digital technologies, or even of television before them. But this does not mean that new technologies are of no consequence. Quite the contrary. It is one thing to have a proclivity; it is another to have a proclivity and inhabit a material culture designed to exploit it, in a manner contrary to your self-interest and well-being.

10. Human beings have, of course, always lived in information-rich environments. Step into the woods, and you’re surrounded by information and stimuli. But the nature of the information matters. Modern technological environments present us with an abundance of symbolically encoded information, which is often designed with a view to hijacking or soliciting our attention. Which is to say that our media environments aggressively beckon us in a way that an oak tree does not. The difference might be worth contemplating.

Natural, which is to say non-human, environments can suddenly demand our attention. At one point, Klein and Hari discuss a sudden thunderclap, which is one example of how this can happen. And I can remember once hearing the distinctive sound of a rattlesnake while hiking on a trail. In cases like these, the environment calls us decidedly to attention. It seems, though, that, ordinarily, non-human environments present themselves to us in a less demanding manner. They may beckon us, but they do not badger us or overwhelm our faculties in a manner that generates an experience of exhaustion or fatigue.

In a human-built environment rich with symbolically encoded information—a city block, for example, or a suburban strip mall—our attention is solicited in a more forceful manner. And the relevant technologies do not have to be very sophisticated to demand our attention in this way. Literate people are compelled to read texts when they appear before them. If you know how to read and an arrangement of letters appears before you, you can hardly help but read them if you notice them (and, of course, they can be designed so as to lure or assault your attention). By contrast, naturally encoded information, such as might be available to us when we attend to how a clump of trees has grown on a hillside or the shape a stream has cut in the landscape, does not necessarily impress itself upon us as significant in the literal sense of the word, as having meaning or indicating something to us. From this perspective, attention is bound up with forms of literacy. I cannot be hailed by signs I cannot recognize as such, as meaning something to me. So then, we might say that our attention is more readily elicited by that which presents itself as being somehow “for me,” by that which, as Thomas de Zengotita has put it, flatters me by seeming to center the world on me.

If I may press into this distinction a bit further, the question of purpose or intent seems to matter a great deal, too. When I hike in the woods, there’s a relative parity between my capacity to direct my attention, on the one hand, and the capacity of the world around me to suddenly demand it of me, on the other. I am better able to direct my attention as I desire, and to direct it in accord with my purpose. I will seek out what I need to know based on what I have set out to do. If I know how to read the signs well, I will seek out those features of the landscape that can help me navigate to my destination, for example. But in media-rich, human-built environments, my capacity to direct my attention in keeping with my purposes is often at odds with features of the environment that want to command my attention in keeping with purposes that are not my own. It is the difference between feeling challenged to rise to an occasion that ultimately yields an experience of competence and satisfaction, and feeling assaulted by an environment explicitly designed to thwart and exploit me.

    11. Thomas de Zengotita, writing in Mediated: How the Media Shapes Your World and the Way You Live In It (2005):

    “Say your car breaks down in the middle of nowhere—the middle of Saskatchewan, say. You have no radio, no cell phone, nothing to read, no gear to fiddle with. You just have to wait. Pretty soon you notice how everything around you just happens to be there. And it just happens to be there in this very precise but unfamiliar way […] Nothing here was designed to affect you. It isn’t arranged so that you can experience it, you didn’t plan to experience it, there isn’t any screen, there isn’t any display, there isn’t any entrance, no brochure, nothing special to look at, no dramatic scenery or wildlife, no tour guide, no campsites, no benches, no paths, no viewing platforms with natural-historical information posted under slanted Plexiglas lectern things—whatever is there is just there, and so are you […] So that’s a baseline for comparison. What it teaches us is this: in a mediated world, the opposite of real isn’t phony or illusional or fiction—it’s optional […] We are most free of mediation, we are most real, when we are at the disposal of accident and necessity. That’s when we are not being addressed. That’s when we go without the flattery intrinsic to representation.”

12. It’s interesting to me that de Zengotita’s baseline scenario would not play out quite the same way in a pre-modern cultural setting. He is presuming that nature is mute, meaningless, and literally insignificant. But—anthropologists, please correct me—this view would be at odds with most if not all traditional cultures. In the scenario de Zengotita describes, premodern people would not necessarily find themselves either alone or unaddressed, and I think this indirectly tells us something interesting about attention.

Attention discourse tends to treat attention chiefly as the power to focus mentally on a text or task, which is to say on what human beings do and what they make. Attention in this mode is directed toward what we intend to do. We might say that it is attention in the form of actively searching rather than receiving, and this makes sense if we don’t have an account of how attention as a form of openness might be rewarded by our experience in the world. Perhaps the point is that there’s a tight correlation between what I conceive of as meaningful and what I construe as a potential object of my attention. If, as Arendt, for example, has argued, in the modern world we only find meaning in what we make, then we will neglect forms of attention that presuppose the meaningfulness of the non-human world.

    13. Robert Zaretsky on “Simone Weil’s Radical Conception of Attention”:

    Weil argues that this activity has little to do with the sort of effort most of us make when we think we are paying attention. Rather than the contracting of our muscles, attention involves the canceling of our desires; by turning toward another, we turn away from our blinding and bulimic self. The suspension of our thought, Weil declares, leaves us “detached, empty, and ready to be penetrated by the object.” To attend means not to seek, but to wait; not to concentrate, but instead to dilate our minds. We do not gain insights, Weil claims, by going in search of them, but instead by waiting for them: “In every school exercise there is a special way of waiting upon truth, setting our hearts upon it, yet not allowing ourselves to go out in search of it… There is a way of waiting, when we are writing, for the right word to come of itself at the end of our pen, while we merely reject all inadequate words.” This is a supremely difficult stance to grasp. As Weil notes, “the capacity to give one’s attention to a sufferer is a very rare and difficult thing; it is almost a miracle; it is a miracle. Nearly all those who think they have this capacity do not possess it.”

14. As I see it, there is a critical question that tends to get lost in the current wave of attention discourse: What is attention for? Attention is taken up as a capacity that is being diminished by our technological environment, with the emphasis falling on digitally induced states of distraction. But what are we distracted from? If our attention were more robust or better ordered, to what would we give it? Pascal had an answer, and Weil did, too, it seems to me. I’m not so sure that we do, and I wonder whether that leaves us more susceptible to the attention economy. Often the problem seems to get framed as little more than the inability to read long, challenging texts. I enjoy reading long, challenging texts, and I do find, like Carr and Hari, that this has become more challenging. But I don’t think reading long, challenging texts is essential to human flourishing, nor is it the most important end toward which our attention might be ordered.

    We have, it seems, an opportunity to think a bit more deeply not only about the challenges our techno-social milieu presents to our capacity to attend to the world, challenges I suspect many of us feel keenly, but also about the good toward which our attention ought to be directed. What deserves our attention? What are the goods for the sake of which we ought to cultivate our capacity for attention?

    On this score, attention discourse often strikes me as an instance of a larger pattern that characterizes modern society: a focus on means rather than ends. I’d say it also illustrates the fact that it is far easier to identify the failures and disorders of contemporary society than it is to identify the goods that we ought to be pursuing. In “Tradition and the Modern Age,” Hannah Arendt spoke of the “ominous silence that still answers us whenever we dare to ask, not ‘What are we fighting against’ but ‘What are we fighting for?’”

    As I’ve suggested before, maybe the problem is not that our attention is a scarce resource in a society that excels in generating compelling distractions, but rather that we have a hard time knowing what to give our attention to at any given moment. That said, I would not want to discount the degree to which, for example, economic precarity also robs people of autonomy on this front. And I also appreciated Hari’s discussion of how our attention is drained not only by the variegated media spectacle that envelops us throughout our waking hours, but also by other conditions, such as sleeplessness, that diminish the health of our bodies taken as a whole.

    15. Hari seems convinced that the heart of the problem is the business model. It is the business model of the internet, driven by ad revenue, that pushes companies to design their digital tools for compulsive engagement. This is, I think, true enough. The business model has certainly exacerbated the problem. But I’m far less sanguine than Hari appears to be about whether changing the business model will adequately address the problem, much less solve it. When Sean Illing asked what would happen if internet companies moved to a different business model, Hari’s responses were not altogether inspiring. He imagines that under alternative models, such as subscription-based services, for example, companies would be incentivized to offer better products: “Facebook and other social media companies have to ask, ‘What does Sean want?’ Oh, Sean wants to be able to pay attention. Let’s design our app not to maximally hack and invade his attention and ruin it, but to help him heal his attention.” In my view, this overestimates the power of benevolent design and underestimates the internal forces that lead us to seek out distraction. Something must, at the end of the day, be asked of us, too.

    16. Subtle shifts in language can sometimes have surprising consequences. The language of attention seems particularly loaded with economic and value-oriented metaphors, such as when we speak of paying attention or imagine our attention as a scarce resource we must either waste or hoard. However, to my ears, the related language of attending to the world does not carry these same connotations. Attention and attending are etymologically related to the Latin word attendere, which suggested among other things the idea of “stretching toward” something. I like this way of thinking about attention, not as a possession in limited supply, theoretically quantifiable, and ready to be exploited, but rather as a capacity to actively engage the world—to stretch ourselves toward it, to reach for it, to care for it, indeed, to tend it.

    Hari and other critics of the attention economy are right to be concerned, and they are right about how our technological environment tends to have a corrosive effect on our attention. Right now, I’m inclined to put it this way: our dominant technologies excel at exploiting our attention while simultaneously eroding our capacity to attend to the world.

    Klein and Illing, while both sympathetic to Hari’s concerns, expressed a certain skepticism about his proposals. That’s understandable. In this case, as in so many others, I don’t believe that policy tweaks, regulations, shifting economic models, or newer technologies built on the same assumptions will solve the most fundamental challenges posed by our technological milieu. Such measures may have their role to play, no doubt. But I would characterize these measures as grand but ultimately inadequate gestures that appeal to us exactly to the degree that they appear to require very little of us while promising to deliver swift, technical solutions. For my part, I think more modest and seemingly inadequate measures, like tending more carefully to our language and cultivating ways of speaking that bind us more closely to the world, will, in the admittedly long, very long run, prove more useful and more enduring.

    Postscript: In his opening comments, Klein makes the following observation: “And the strangest thing to me, in retrospect, about the education I received growing up — the educations most of us receive — is how little attention they give to attention.”

    Around 2014 or so, I began to think that one of my most important roles as a teacher was to help students think more deliberately about how they cultivated their attention. I was helped in thinking along these lines by a 2013 essay by Jennifer Roberts, “The Power of Patience.” In it, Roberts wrote the following:

    During the past few years, I have begun to feel that I need to take a more active role in shaping the temporal experiences of the students in my courses; that in the process of designing a syllabus I need not only to select readings, choose topics, and organize the sequence of material, but also to engineer, in a conscientious and explicit way, the pace and tempo of the learning experiences. When will students work quickly? When slowly? When will they be expected to offer spontaneous responses, and when will they be expected to spend time in deeper contemplation?

    I want to focus today on the slow end of this tempo spectrum, on creating opportunities for students to engage in deceleration, patience, and immersive attention. I would argue that these are the kind of practices that now most need to be actively engineered by faculty, because they simply are no longer available “in nature,” as it were. Every external pressure, social and technological, is pushing students in the other direction, toward immediacy, rapidity, and spontaneity—and against this other kind of opportunity. I want to give them the permission and the structures to slow down.



    Get full access to The Convivial Society at theconvivialsociety.substack.com/subscribe
  • As promised, here is the audio version of the last installment, “The Dream of Virtual Reality.” To those of you who find the audio version helpful, thank you for your patience!

    And while I’m at it, let me pass along a few links, a couple of which are directly related to this installment.

    Links

    Just after I initially posted “The Dream of Virtual Reality,” Evan Selinger reached out with a link to his interview of David Chalmers about Reality+. I confess to thinking that I might have been a bit uncharitable in taking Chalmers’s comments in a media piece as the basis of my critique, but after reading the interview I now feel just fine about it.

    And, apropos my comments about science fiction, here’s a discussion between philosophers Nigel Warburton and Eric Schwitzgebel about the relationship between philosophy and science fiction. It revolves around a discussion of five specific sci-fi texts Schwitzgebel recommends.

    Unrelated to virtual reality, let me pass along this essay by Meghan O’Gieblyn: “Routine Maintenance: Embracing habit in an automated world.” It is an excellent reflection on the virtues of habit. It’s one of those essays I wish I had written. In fact, I have a draft of a future installment that I had titled “From Habit to Habit.” It will aim at something a bit different, but I’m sure that it will now incorporate some of what O’Gieblyn has written. You may also recognize O’Gieblyn as the author of God, Human, Animal, Machine, which I’ve got on my nightstand and will be reading soon.

    In a brief discussion of Elon Musk’s Neuralink in his newsletter New World Same Humans, David Mattin makes the following observation:

    A great schism is emerging. It’s between those who believe we should use technology to transcend all known human limits even if that comes at the expense of our humanity itself, and those keen to hang on to recognisably human forms of life and modes of consciousness. It may be a while yet until that conflict becomes a practical and widespread reality. But as Neuralink prepares for its first human trials, we can hear that moment edging closer.

    I think this is basically right, and I’ve been circling around this point for quite some time. But I would put the matter a bit differently: I’m not so sure that it will be a while until that conflict becomes a practical and widespread reality. I think it has been with us for quite some time, and, in my less hopeful moments, I tend to think that we have already crossed some critical threshold. As I’ve put it elsewhere, transhumanism is the default eschatology of the modern technological project.

    Podcasts

    Lastly, I’ve been neglecting to pass along links to some podcasts I’ve been on recently. Let me fill you in on a couple of these. Last fall, I had the pleasure of talking to historian Lee Vinsel for his new podcast, People and Things. We talked mostly about Illich, and it was a great conversation. I commend Vinsel’s whole catalog to you. Peruse at your leisure. Certainly be sure to catch the inaugural episode with historian Ruth Schwartz Cowan, the author of one of the classic texts in the history of technology, More Work for Mother: The Ironies of Household Technology from the Open Hearth to the Microwave.

    And just today my conversation with the Irish economist David McWilliams was posted. We talked mostly about the so-called metaverse, and while we focused on my early “Notes From the Metaverse,” it also pairs nicely with the latest installment. My thanks to David and his team for their conviviality. And to the new set of readers from Ireland, the UK, and beyond—welcome!

    Postscript

    I cannot neglect to mention that it was brought to my attention that the promo video for Facebook’s VR platform, Horizons, from which I took a screenshot for the essay, has a comical disclaimer near the bottom of the opening shot. As philosopher Ariel Guersenzvaig noted on Twitter, “‘Virtual reality is genuine reality’? Be that as it may, the VR footage is not genuine VR footage!”



    Get full access to The Convivial Society at theconvivialsociety.substack.com/subscribe
  • Welcome to a special installment of the Convivial Society featuring my conversation with Andrew McLuhan. I can’t recall how or when I first encountered the work of Marshall McLuhan; I think it might’ve been through the writing of one of his most notable students, Neil Postman. I do know, however, that McLuhan, and others like Postman and Walter Ong who built on his work, became a cornerstone of my own thinking about media and technology. So it was a great pleasure to speak with his grandson Andrew, who is now stewarding and expanding the work of his grandfather and his father, Eric McLuhan, through the McLuhan Institute, of which he is the founder and director.

    I learned a lot about McLuhan through this conversation and I think you’ll find it worth your time. A variety of resources and sites were mentioned throughout the conversation, and I’ve tried to provide links to all of those below. Above all, make sure you check out the McLuhan Institute and consider supporting Andrew’s work through his Patreon page.

    Links

    McLuhan Institute’s Twitter account and Instagram account

    Andrew McLuhan’s Twitter account

    The image of McLuhan and Edmund Carpenter on the beach which Andrew mentions can be seen at the :30 mark of this YouTube video featuring audio of Carpenter describing his friendship with McLuhan

    Eric McLuhan’s speech, “Media Ecology in the 21st Century,” on the McLuhan Institute’s YouTube page (the setting is a conference in Bogotá, Colombia, so McLuhan is introduced in Spanish, but he delivers his talk in English)

    Laws of Media: The New Science by Marshall McLuhan and Eric McLuhan

    Marshall McLuhan/Norman Mailer exchange

    Marshall McLuhan/W.H. Auden/Buckminster Fuller exchange

    Jeet Heer’s essay on McLuhan from 2011

    Understanding Media: The Extensions of Man (Critical Edition)

    Understanding Me: Lectures and Interviews

    Marshall McLuhan Speaks



    Get full access to The Convivial Society at theconvivialsociety.substack.com/subscribe
  • Welcome to the Convivial Society, a newsletter about technology and culture. In this installment I write a bit about burnout, exhaustion, and rest. It doesn’t end with any neat solutions, but that’s kind of the point. However, I’ll take up the theme again in the next installment, and will hopefully end on a more promising note.

    As many of you know, the newsletter operates on a patronage model. The writing is public, there is no paywall, but I welcome the support of readers who value the work. Not to be too sentimental about it, but thanks to those who have become paying subscribers, this newsletter has become a critical part of how I make my living. And for that I’m very grateful. Recently, a friend inquired about one-time gifts as the year draws to a close; however, this platform doesn’t allow that option. So for those who would like to support the Convivial Society but for whom the usual subscription rates are a bit too steep, here’s a 30% discounted option that works out to about $31 for the year, or about $2.50 a month. The option is good through the end of December. Cheers!

    Several years ago, I listened to Terry Gross interview the son of a prominent religious leader, who had publicly broken with his father’s legacy and migrated to another, rather different branch of the tradition. Gross asked why he had not simply let his faith go altogether. His reply has always stuck with me. He explained that it was, in part, because he was the kind of person whose first instinct, upon deciding to become an atheist, would be to ask God to help him be an atheist.

    I thought about his response recently when I encountered an article with the following title: “The seven types of rest: I spent a week trying them all. Could they help end my exhaustion?”

    My first, admittedly ill-tempered reaction was to conclude that Betteridge’s Law had been validated once again. In case this is the first time you’re hearing of Betteridge’s Law, it states that any headline ending in a question mark can be answered with no. I think you’ll find that it holds more often than not.

    With the opening anecdote in mind, my second, slightly more considered response was to conclude that some of us have become the kind of people whose first instinct, upon deciding to break loose from the tyranny of productivity and optimization, would be to make a list. Closely related to this thought was another: some of us have become the kind of people whose first instinct, upon deciding to reject pathological consumerism, would be to buy a product or service which promised to help us do so.

    And I don’t think we should necessarily bracket the religious context of the original formulation in the latter two cases. The structure is altogether analogous: a certain pattern of meaning, purpose, and value has become so deeply engrained that we can hardly imagine operating without it. This is why the social critic Ivan Illich called assumptions of this sort “certainties” and finally concluded that they needed to be identified and challenged before any meaningful progress on social ills could be made.

    As it turned out, that article on the different forms of rest takes a recent book as its point of departure. The book identified and explored the seven forms of rest—physical, emotional, mental, social, and so on—which the author of the article sampled for a day apiece. Probably not what the book’s author had in mind. Whatever one makes of the article, or the book upon which it is based, the problem to which it speaks, a sense of permanent burnout or chronic exhaustion, is, as far as I can tell, real and pervasive, and it is a symptom of a set of interlocking disorders afflicting modern society, which have been exacerbated and laid bare over the last two years.

    Others have written about this phenomenon perceptively and eloquently, particularly if we consider discussions of rest, exhaustion, and burnout together with similar discussions about the changing nature and meaning of work. The writing of Jonathan Malesic and Anne Helen Petersen comes immediately to mind. I won’t do the matter justice in the way they and others do, but this is a subject I’ve been thinking about a good bit lately so I’ll offer some brief observations for your consideration.

    And I think I’ll break these reflections up into two or three posts beginning with this one. As I think about what we might variously describe as the exhaustion, fatigue, or burnout that characterizes our experience, several obvious sources come to mind, chief among them economic precarity and a global pandemic. The persistent mental and emotional tax we pay to use social media doesn’t help, of course. But my attention is drawn to another set of factors: a techno-economic milieu that is actively hostile to human well-being, for example, or a certain programmed aimlessness that may undermine the experience of accomplishment or satisfaction. So let me take aim at that first point in this installment and turn to the second in a forthcoming post.

    Let’s start by acknowledging that we’re talking about a longstanding problem, which likely varies in intensity from time to time. I’ve mentioned on more than a few occasions that the arc of digital culture bends toward exhaustion, but it does so as part of a much longer cultural trajectory. Here’s another passage that has stayed with me years after first encountering it. It is from Patrick Leigh Fermor’s A Time to Keep Silence, the famed travel writer’s account of his stays at several monasteries across Europe and Turkey circa 1950. Early on, Fermor recounted the physical effects of his first stay in a monastery after recently having been in Paris. “The most remarkable preliminary symptoms,” Fermor began, “were the variations of my need of sleep.” “After initial spells of insomnia, nightmare and falling asleep by day,” he continues,

    I found that my capacity for sleep was becoming more and more remarkable: till the hours I spent in or on my bed vastly outnumbered the hours I spent awake; and my sleep was so profound that I might have been under the influence of some hypnotic drug. For two days, meals and the offices in the church — Mass, Vespers and Compline — were almost my only lucid moments. Then began an extraordinary transformation: this extreme lassitude dwindled to nothing; night shrank to five hours of light, dreamless and perfect sleep, followed by awakenings full of energy and limpid freshness.

    If your experience is anything like mine, that last line will be the most unrelatable bit of prose you’ll read today. So to what did Fermor attribute this transformation? “The explanation is simple enough:” he writes,

    the desire for talk, movements and nervous expression that I had transported from Paris found, in this silent place, no response or foil, evoked no single echo; after miserably gesticulating for a while in a vacuum, it languished and finally died for lack of any stimulus or nourishment. Then the tremendous accumulation of tiredness, which must be the common property of all our contemporaries, broke loose and swamped everything. No demands, once I had emerged from that flood of sleep, were made upon my nervous energy: there were no automatic drains, such as conversation at meals, small talk, catching trains, or the hundred anxious trivialities that poison everyday life. Even the major causes of guilt and anxiety had slid away into some distant limbo and not only failed to emerge in the small hours as tormentors but appeared to have lost their dragonish validity.

    There’s a lot that’s worth lingering over in that paragraph—how digital devices have multiplied the automatic drains, for example—but I want to focus our attention on this one phrase: “the tremendous accumulation of tiredness, which must be the common property of all our contemporaries.”

    Now there’s a relatable sentiment. I emphasize it only to make the point that while, as Petersen wrote in a 2019 essay, burnout may be the “permanent residence” of the millennial generation, it can also be characterized as a more recent iteration of a longstanding condition. And the reason for this is that the dominant techno-social configuration of modern society demands that human beings operate at a scale and pace that is not conducive to their well-being—let alone rest, rightly understood—but by now most of us have been born into this state of affairs and take it more or less for granted.

    For example, in a recent installment of her newsletter, Petersen discussed how existing social and economic structures make it so we always pay, in one way or another, for taking time to rest, and, of course, that’s if we are among those who are fortunate enough to do so. In the course of her discussion she makes the following pointed observation:

    The ideal worker, after all, is a robot. A robot never tires, never needs rest, requires only the most basic of maintenance. When or if it collapses, it is readily replicated and replaced. In 24/7: Late Capitalism and the Ends of Sleep, Jonathan Crary makes the haunting case that we’re already training our bodies for this purpose. The more capable you are of working without rest of any form — the more you can convince your body and yourself to labor as a robot — the more valuable you become within the marketplace. We don’t turn off so much as go into “sleep mode”: ready, like the machines we’ve forced our bodies to approximate, to be turned back on again.

    This is yet another example of the pattern I sought to identify in a recent installment: the human-built world is not built for humans. In that essay, I was chiefly riffing on Illich, who argued that “contemporary man attempts to create the world in his image, to build a totally man-made environment, and then discovers that he can do so only on the condition of constantly remaking himself to fit it.”

    Illich is echoing the earlier work of the French polymath Jacques Ellul, to whom Illich acknowledged his debt in a 1994 talk I’ve cited frequently. In his best known book, The Technological Society, Ellul argued that by the early 20th century Western societies had become structurally inhospitable to human beings because technique had become their ordering principle. These days I find it helpful to gloss what technique meant for Ellul as the tyrannical imperative to optimize everything.

    So, recall Petersen’s observation about the robot being the ideal worker. It’s a remarkably useful illustration of Ellul’s thesis. It’s not that any one technology has disordered the human experience of work. Rather, it’s that technique, the ruthless pursuit of efficiency or optimization, as an ordering principle has determined how specific technologies and protocols are to be developed and integrated into the work environment. The resulting system, reflecting the imperatives of technique, is constructed in such a way that the human being qua human being becomes an impediment, a liability to the functioning of the system. He or she must become mechanical in their performance in order to fit the needs of the system, be it a warehouse floor or a byzantine bureaucracy. It’s the Taylorite fantasy of scientific management now abetted by a vastly superior technical apparatus. The logic, of course, finally suggests the elimination of the human element. When we design systems that work best the more machine-like we become, we shouldn’t be surprised when the machines ultimately render us superfluous.

    But only under certain circumstances can the human element be eliminated. For the most part, we carry on in techno-social environments that are either indifferent to a certain set of genuine human needs or altogether hostile to them. For this reason, Ellul argued, a major subset of technique emerges. Ellul referred to these as human techniques because their aim was to continually manage the human element in the technological system so that it would function adequately.

    “In order that he not break down or lag behind (precisely what technical progress forbids),” Ellul believed, “[man] must be furnished with psychic forces he does not have in himself, which therefore must come from elsewhere.” That “elsewhere” might be pharmacology, propaganda, or, to give some more recent examples, mindfulness apps or seven techniques for finding rest.

    “The human being,” Ellul writes,

    is ill at ease in this strange new environment, and the tension demanded of him weighs heavily on his life and being. He seeks to flee—and tumbles into the snare of dreams; he tries to comply and falls into the life of organizations; he feels maladjusted—and becomes a hypochondriac. But the new technological society has foresight and ability enough to anticipate these human reactions. It has undertaken, with the help of techniques of every kind, to make supportable what was not previously so, and not, indeed, by modifying anything in man's environment but by taking action upon man himself.

    In his view, human techniques are always undertaken in the interest of preserving the system and adapting the human being to its demands. Ellul explained the problem at length, but here’s a relatively condensed expression of the argument:

    [W]e hear over and over again that there is ‘something out of line’ in the technical system, an insupportable state of affairs for a technician. A remedy must be found. What is out of line? According to the usual superficial analysis, it is man that is amiss. The technician thereupon tackles the problem as he would any other […] Technique reveals its essential efficiency in discerning that man has a sentimental and moral life which can have great influence on his material behavior and in proposing to do something about such factors on the basis of its own ends. These factors are, for technique, human and subjective; but if means can be found to act upon them, to rationalize them and bring them into line, they need not be a technical drawback. Of course, man as such does not count.

    One recurring rejoinder to critiques of new or emerging technologies, particularly when it is clear that they are unsettling existing patterns of life for some, usually those with little choice in the matter, is to claim that human beings are remarkably resilient and adaptable. The fact that this comes off as some sort of high-minded compliment to human nature does a lot of work, too. But this claim tells us very little of merit because it does not address the critical issue: is it good for human beings to adapt to the new state of affairs? After all, as Ellul noted, human beings can be made to adapt to all manner of inhumane conditions, particularly in wartime. The fact that they do so may be to the credit of those who do, but not necessarily to the circumstances to which they must adapt. From this perspective, praise of humanity’s adaptability can look either like a bit of propaganda or, more generously, a case of Stockholm syndrome.

    So let’s come back to where we started with Ellul’s insights in mind. There are two key points. First, our exhaustion—in its various material and immaterial dimensions—is a consequence of the part we play in a techno-social milieu whose rhythms, scale, pace, and demands are not conducive to our well-being, to say nothing of the well-being of other creatures and the planet we share. Second, the remedies to which we often turn may themselves be counterproductive because their function is not to alter the larger system which has yielded a state of chronic exhaustion but rather to keep us functioning within it. Moreover, not only do the remedies fail to address the root of the problem, but there’s also a tendency to carry into our efforts to find rest the very same spirit which animates the system that left us tired and burnt out. Rest takes on the character of a project to be completed or an experience to be consumed. In neither case do we ultimately find any sort of meaningful and enduring relief or renewal.



    Get full access to The Convivial Society at theconvivialsociety.substack.com/subscribe
  • Welcome again to the Convivial Society. This installment follows relatively quickly on the last, and you may be forgiven for not having yet made your way through that one, which came in at 4,500 words (sorry). But, we have some catching up to do, and this essay is only half as long. Read on.

    In her testimony before the Senate, Facebook whistleblower Frances Haugen made an observation that caught my attention.

    When asked by Senator Todd Young of Indiana to “discuss the short and long-term consequences of body image issues on these platforms,” Haugen gave the following response, emphasis mine:

    The patterns that children establish in their teenage years live with them for the rest of their lives. The way they conceptualize who they are, how they conceptualize how they interact with other people, are patterns and habits that they will take with them as they become adults, as they themselves raise children. I’m very scared about the upcoming generation because when you and I interact in person, and I say something mean to you, and I see you wince or I see you cry, that makes me less likely to do it the next time, right? That’s a feedback cycle. Online, kids don’t get those cues and they learn to be incredibly cruel to each other and they normalize it. And I’m scared of what will their lives look like, where they grow up with the idea that it’s okay to be treated badly by people who allegedly care about them? That’s a scary future.

    There is much that is worth discussing in Haugen’s testimony, but these comments stood out to me because they resonated with themes I’ve been working with for some time. Specifically, I’ve been thinking about the relationship among the virtue of pity, embodied presence, and online interactions, which, it seems to me, is precisely what Haugen has in view here.

    Back in May I wrote a short post that speculated about a tendency to become forgetful of the body in online situations. “If digital culture tempts us to forget our bodies,” I wondered, “then it may also be prompting us to act as if we were self-sufficient beings with little reason to care or expect to be cared for by another.”

    I wrote those lines with Alasdair MacIntyre’s book, Dependent Rational Animals, in mind. In it, MacIntyre seeks to develop an account of the virtues and virtuous communities that takes our bodies, and thus our dependence, as a starting point. So, for example, he writes,

    What matters is not only that in this kind of community children and the disabled are objects of care and attention. It matters also and correspondingly that those who are no longer children recognize in children what they once were, that those who are not yet disabled by age recognize in the old what they are moving towards becoming, and that those who are not ill or injured recognize in the ill and injured what they often have been and will be and always may be.

    From this starting point, MacIntyre makes the case for what he calls the virtues of acknowledged dependence, explaining that “what the virtues require from us are characteristically types of action that are at once just, generous, beneficent, and done from pity.” “The education of dispositions to perform just this type of act,” he continues, “is what is needed to sustain relationships of uncalculated giving and graceful receiving.”

    Among this list of characteristics, pity was the one that most caught my attention. It is a quality that may strike many as ambivalent at best. The phrase, “I don’t want your pity,” is a common trope in our stories and it is often declared in defiantly heroic cadences. And, indeed, even when the quality has been discussed as a virtue in the tradition, writers have seen the need to distinguish it from counterfeits bearing a surface resemblance but which are often barely veiled expressions of condescension.

    MacIntyre, wanting to avoid just this association with condescension, uses instead the Latin word misericordia. Thus MacIntyre, drawing on Aquinas, writes, “Misericordia is grief or sorrow over someone else’s distress […] just insofar as one understands the other’s distress as one’s own. One may do this because of some preexisting tie to the other—the other is already one’s friend or kin—or because in understanding the other’s distress one recognizes that it could instead have been one’s own.”

    This latter observation suggests the universalizing tendency of the virtue of pity, that it can recognize itself in the other regardless of whether the other is a personal relation or kin or a member of the same tribe. For this reason, pity can, of course, be the source of misguided and even oppressive actions. I say “of course,” but maybe it is not, in fact, obvious. I’m thinking, for instance, of Ivan Illich warning that something like pity turned into a rule or an institutional imperative can eventually lead to bombing the neighbor for his own good. And it’s worth noting that, in MacIntyre’s view, pity must work in harmony with justice, benevolence, and generosity—each of these virtues informing and channeling the others.

    Illich, however, says as much in discussing his interpretation of the parable of the good Samaritan. In his view, the parable teaches the freedom to care for the other regardless of whether they are kin or members of the same tribe. Indeed, given the ethics of the day, the Samaritan (Illich sometimes called him the Palestinian to drive the point home to a modern audience) had little reason to care for the Jew, who had been beaten and left by the side of the road. Certainly he had much less reason to do so than the priest and Levite who callously pass him by. And in Illich’s telling, as I read him, it is precisely the flesh-to-flesh nature of the encounter that constitutes the conditions of the call the Samaritan answers to see in the other someone worthy of his attention and care.

    Which again makes me wonder about the degree to which pity is activated or called forth specifically in the context of the fully embodied encounter, whether this context is not the natural habitat of pity, and what this means for online interactions where embodied presence cannot be fully realized.

    I thought, too, of a wise passage from Tolkien in The Fellowship of the Ring. Many of you will know the story well. For those who don’t, one of the principal characters, Frodo, refers back to a moment, many years in the story’s past, when another character, Bilbo, passed up an opportunity to kill Gollum, who is a complicated character responsible for much mischief.

    “What a pity that Bilbo did not stab that vile creature when he had a chance!” Frodo declares in conversation with the wizard Gandalf.

    “Pity? It was Pity that stayed his hand. Pity, and Mercy: not to strike without need,” the wizard replies.

    The exchange continues thus:

    “I am sorry,” said Frodo. “But I am frightened; and I do not feel any pity for Gollum.”

    “You have not seen him,” Gandalf broke in.

    “No, and I don’t want to,” said Frodo. “I can’t understand you. Do you mean to say that you, and the Elves, have let him live on after all those horrible deeds? Now at any rate he is as bad as an Orc, and just an enemy. He deserves death.”

    “Deserves it! I daresay he does. Many that live deserve death. And some that die deserve life. Can you give it to them? Then do not be too eager to deal out death in judgement. For even the very wise cannot see all ends. I have not much hope that Gollum can be cured before he dies, but there is a chance of it. And he is bound up with the fate of the Ring. My heart tells me that he has some part to play yet, for good or ill, before the end; and when that comes, the pity of Bilbo may rule the fate of many — yours not least.”

    There are many things worth noting in this exchange but I’ll draw your attention to just three of them.

    First, Frodo justifies his lack of pity by explaining that he is afraid. And, indeed, if it were possible to measure such things, I suspect we would find that fear rather than hate is at the root of many of our social disorders. Fear distorts our capacity to see the other well. It frames them merely as a threat and allows us to rationalize our lack of regard for their well-being under the irrefutable sign of self-preservation.

    Second, Gandalf seems to believe that Frodo might change his tune once he has seen Gollum. Somehow the sight of Gollum, which depends on their bodily presence before one another, would be conducive to the experience of pity. This is the critical point for my purposes here.

    Third, we might say, with MacIntyre in mind, that through Gandalf’s speech Tolkien frames pity not only as a virtue of acknowledged dependence but also as a virtue of acknowledged ignorance. “For even the very wise cannot see all ends.” Perhaps ignorance is merely another form of dependence. When we do not know, we must depend on others who do. But it may be worth distinguishing ignorance from dependence. Even the strong can be ignorant. Either the ignorance is acknowledged or it is not. But it is true that in the end failing to acknowledge either our dependence or our ignorance may amount to the same thing: the reckless exercise of what Albert Borgmann has called regardless power.

    There is one other literary case study of the link between bodies and pity that came to mind. It is found in the Iliad, Homer’s tale of the wrath of Achilles set during the Trojan War. Near the end of the epic, Achilles has allowed his wrath, arising from the killing of his friend Patroclus, to drive him into a murderous frenzy during which he has lashed out at the gods themselves, killed Hector, and, disregarding the moral obligation to honor the body of the dead, dragged Hector’s body from the back of a chariot. Through it all he has refused food and drink, seeming to forget his bond with other mortals, as if violence alone could sustain him. In all of this, he illustrates a pattern: those who act without regard to the moral and physical limits implicit in the human condition do not become as the gods but rather descend into an inhuman, bestial state.

    In the climactic scene of the story, Hector’s father, King Priam, with the aid of the gods and at great personal risk, makes a clandestine nighttime visit to Achilles’s tent. He is there to beseech Achilles for the body of his son Hector. When he is alone with Achilles, Priam entreats him to “Remember your own father, great godlike Achilles.” He goes on:

    “Revere the gods, Achilles! Pity me in my own right,
    remember your own father! I deserve more pity . . .
    I have endured what no one on earth has ever done before—
    I put to my lips the hands of the man who killed my son.”
    Those words stirred within Achilles a deep desire
    to grieve for his own father. Taking the old man’s hand
    he gently moved him back. And overpowered by memory
    both men gave way to grief. Priam wept freely
    for man-killing Hector, throbbing, crouching
    before Achilles’ feet as Achilles wept himself,
    now for his father, now for Patroclus once again,
    and their sobbing rose and fell throughout the house.

    Pity again, and again the face-to-face encounter. It is this encounter that draws Achilles back to the mortal realm—the realm of limits, sorrow, memory, custom, and death. And signaling this reentry into the common human condition, Achilles says to Priam, “So come—we too, old king, must think of food.”

    It is worth noting the obvious at this point: the fullness of embodied presence is no guarantee that we will take pity on one another or recognize ourselves in the other. People can be horrendously cruel to one another even when confronted with the body of another, something the Iliad also teaches us if we had not yet learned the lesson in more bitter fashion.

    Simone Weil, who wrote a remarkable meditation on the epic titled “The Iliad: Or, the Poem of Force,” knew this well. “The true hero, the true subject, the center of the Iliad is force,” Weil declares in the opening lines. “Force employed by man, force that enslaves man, force before which man’s flesh shrinks away. In this work, at all times, the human spirit is shown as modified by its relations with force, as swept away, blinded, by the very force it imagined it could handle, as deformed by the weight of the force it submits to.”

    “To define force —” she writes, “it is that x that turns anybody who is subjected to it into a thing. Exercised to the limit, it turns man into a thing in the most literal sense: it makes a corpse out of him.”

    Later in the essay, she observes that “the man who is the possessor of force seems to walk through a non-resistant element; in the human substance that surrounds him nothing has the power to interpose, between the impulse and the act, the tiny interval that is reflection. Where there is no room for reflection, there is none either for justice or prudence.” Several lines further on she writes again of “that interval of hesitation, wherein lies all our consideration for our brothers in humanity.”

    These conditions can obviously manifest themselves in contexts far removed from online forums and social media. But a simulation of the experience of power, understood as the collapse of the space between impulse and act, may be more generalized in online environments where a forgetfulness of the body is also a default setting.

    The interval of hesitation is not unlike what Haugen described, in very different language, as part of the embodied feedback cycle of human interaction, where a wince and a tear are visible to the one who elicits them from the other. And in this way the idealized frictionless quality of online actions, particularly in the absence of the body, can be understood as an inducement to cruelty. Although, inducement may not be quite right. Perhaps it is better to say that in online environments, certain critical impediments to cruelty, fragile and tenuous as they already are in the course of human affairs, are lifted.

    Looking at these dynamics from another perspective, and with Weil’s analysis in mind, we might also say that in online environments we may be tempted by the illusion of force or power. We are inclined to be forgetful of our bodies and hence also of the virtues of acknowledged dependence, especially pity. And the interval of reflection, which is also the fleeting, ephemeral ground in which the seed of virtue may yield its fruit, is collapsed by design. The result is a situation in which it is easier to imagine the other as an object susceptible to our manipulations and to mistake the absence of friction for the possession of power. Regrettably, these are mutually reinforcing tendencies, which, it should be noted, have little to do with anonymity, and for which there are no obvious technical solutions.

    Contexts that sever the relationship between action and presence make it difficult for pity to emerge. Consequently, in her testimony Frances Haugen worried that children whose habits and sensibilities were shaped in online contexts would come to accept, or even expect, cruelty and then carry this acceptance over into the rest of their lives. This is certainly plausible, but it also opens up another possibility: that we reverse or at least complicate the flow of influence. Online platforms are morally formative, but, despite their ubiquity and their Skinner box quality, they are not the only morally formative realities in our lives, or that of our children, unless, of course, we altogether cede that power to them.



    Get full access to The Convivial Society at theconvivialsociety.substack.com/subscribe
  • Welcome to the Convivial Society, a newsletter about technology and culture. It’s been a bit longer than usual since our last installment, but I’m glad to report that this has been in part a function of some recent developments, which I’m delighted to tell you about. Many of you are reading this because you caught my interview with Ezra Klein back in August. That interview was based on an installment of this newsletter in which I offered 41 questions through which to explore the moral dimensions of a given technology. Well, as it turns out, I’ve sold my first book, based on those questions, to Avid Reader Press, an imprint of Simon & Schuster. As you might imagine, I’m thrilled and immensely grateful.

    Naturally, I’ll keep you posted as I write the book and publication day approaches. There’s more than a little work to be done before then, of course. And, in case you were wondering, the newsletter will continue as per usual. In fact, I’ve got about five drafts going right now, so stay tuned.

    For now, here is a rather long and meandering, but certainly not comprehensive, discussion of models and metaphors for ordering knowledge, memory, the ubiquity of search, the habits and assumptions of medieval reading, and how information loses its body. I won’t claim this post is tightly argued. Rather, it’s an exercise in thinking about how media order and represent the world to us and how this ordering and representation interacts with our experience of the self.

    Here’s a line that struck me recently: “It’s an idea that’s likely intuitive to any computer user who remembers the floppy disk.”

    Is that you? It’s definitely me. I remember floppy disks well. But, then, it occurred to me that the author might have the 3.5-inch variety in mind, while I remember handling the older 8-inch disks as well.

    In fact, I am even old enough to remember this thing below, although I never did much but store a few lines of code to change the color of the TV screen to which my Tandy was hooked up.

    In any case, the idea that is supposedly intuitive to anyone who remembers floppy disks is the directory structure model, or, put otherwise, “the hierarchical system of folders that modern computer operating systems use to arrange files.” In a recent article for The Verge, “File Not Found: A generation that grew up with Google is forcing professors to rethink their lesson plans,” Monica Chin explored anecdotal evidence suggesting that, by somewhere around 2017, some significant percentage of college students found this mental model altogether foreign.

    The essay opens with a couple of stories from professors who, when they instructed their students to locate their files or to open a folder, were met with incomprehension, and then proceeds to explore some possible causes and consequences. So, for example, she writes that “directory structure connotes physical placement — the idea that a file stored on a computer is located somewhere on that computer, in a specific and discrete location.” “That’s a concept,” she goes on to add,

    that’s always felt obvious to Garland [one of the professors supplying the anecdotal evidence] but seems completely alien to her students. “I tend to think an item lives in a particular folder. It lives in one place, and I have to go to that folder to find it,” Garland says. “They see it like one bucket, and everything’s in the bucket.”

    This suggests, of course, the reality that metaphors make sense of things by explaining the unknown (tenor) by comparison to the known (vehicle); but when the known element itself becomes unknown, the meaning-making function is lost. Which is to say that files and folders are metaphors that help users navigate computers by reference to older physical artifacts that would’ve been already familiar to users. But, then, what happens when those older artifacts themselves become unfamiliar? I happen to have one of these artifacts sitting in front of me in my office, but, in truth, I never use it.

    Of course, even though I don’t use it myself now, I once did and I haven’t forgotten the logic. I suspect that for others, considerably younger than myself, the only file folder they’ve seen is the one that appears as an icon on their computers. So, perhaps it is the case that the metaphor has simply broken down in the way so many other metaphors do over time when the experiences upon which they depended are lost due to changes in material culture.

    Chin points in this direction when she writes, “It’s possible that the analogy multiple professors pointed to — filing cabinets — is no longer useful since many students Drossman’s age spent their high school years storing documents in the likes of OneDrive and Dropbox rather than in physical spaces.” Although I think she undersells the significance of the observation because she thinks of it as an analogy rather than as a metaphor.

    The point seems like a crucial one. Mental categories tend to be grounded in embodied experiences in a material world. Tactile facility with files, folders, and filing cabinets grounded the whole array of desktop metaphors that appeared in the 1980s to organize the user’s experience of a computer. And I think we ought to take this as a case in point of a more general pattern: technological change operates on the shared material basis of our mental categories and, yes, the “metaphors we live by.” Consequently, technological change not only transforms the texture of everyday life, it also alters the architecture and furniture of our mental spaces. Hold on to that point for a moment. We’ll come back to it again. But first, let’s come back to Chin’s article.

    Even when students who have some understanding of the logic of the directory structure attempt to use it to organize their files, Chin’s sources suggest that they are unlikely to stick with it. Reporting the case of one graduate student, she writes,

    About halfway through a recent nine-month research project, he’d built up so many files that he gave up on keeping them all structured. “I try to be organized, but there’s a certain point where there are so many files that it kind of just became a hot mess,” Drossman says. Many of his items ended up in one massive folder.

    In passing, I’ll note that I was struck by the phrase “a hot mess,” if only because the same phrase occurs in the comments from another student in the article. I realize, of course, that it is a relatively popular expression, but I do wonder whether we might be justified in reading something of consequence into it. How do our mental models for organizing information intersect with our experience of the world?

    Whatever the case on that score, Chin goes on to put her finger on one more important factor. Writing about why the directory structure is no longer as familiar as it once was, she observes, “But it may also be that in an age where every conceivable user interface includes a search function, young people have never needed folders or directories for the tasks they do.” A bit further on she adds, “Today’s virtual world is largely a searchable one; people in many modern professions have little need to interact with nested hierarchies.”

    Similar arguments have been made to explain how some people think about their inboxes. While some are quite adept at using labels, tags, and folders to manage their emails, others will claim that there’s no need to do so because you can easily search for whatever you happen to need. Save it all and search for what you want to find. This is, roughly speaking, the hot mess approach to information management. And it appears to arise both because search makes it a good-enough approach to take and because the scale of information we’re trying to manage makes it feel impossible to do otherwise. Who’s got the time or patience?
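
    An aside for the technically inclined: the contrast between the two models is easy to make concrete. Here is a minimal sketch in Python (the file and folder names are invented purely for illustration). In the first model, a file lives at one discrete location and you retrieve it by knowing its path; in the second, everything sits in one flat bucket and retrieval is delegated entirely to search.

        from pathlib import Path

        # The directory-structure model: every file lives in exactly one
        # discrete place, and you retrieve it by knowing its location.
        draft = Path("research") / "drafts" / "chapter-01.txt"  # hypothetical path

        # The "hot mess" model: one big bucket, no hierarchy.
        # Retrieval depends entirely on search.
        bucket = ["chapter-01.txt", "grocery-list.txt", "tax-return.pdf"]

        def search(query, files):
            # Return every file whose name contains the query, case-insensitively.
            return [f for f in files if query.lower() in f.lower()]

        print(search("chapter", bucket))  # ['chapter-01.txt']

    The sketch also makes plain why the second model tends to win by default: a good-enough search function asks nothing of the user up front, while the hierarchy has to be designed and maintained.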

    A Scribal Revolution and the Emergence of the Text

    Okay, now it’s time for an 800-year detour. I’ll confess at this point that my interest in this topic is fueled, in part, by my reading of the last book Ivan Illich ever wrote, In the Vineyard of the Text. If you know Illich chiefly as the author of Deschooling Society or Tools for Conviviality, then this book will, I think, catch you off guard. It’s written in a different style and takes up a rather different set of concerns. Its focus is a relatively forgotten revolution in technologies of the written word that occurred around the 12th century and which, in Illich’s view, transformed the intellectual culture of the west and contributed to the rise of the modern individual. If this sounds a bit far-fetched, I’d just invite you to consider the power of media. Not the media, of course, but media of human communication, from language itself to the alphabet and the whole array of technologies built upon the alphabet. Media do just that: they mediate. A medium of communication mediates our experience of the world and the self at a level that is so foundational to our thinking it is easy to lose sight of it altogether. Thus technologies of communication shape how we come to understand both the world and the self. They shape our perception, they supply root metaphors and symbols, they alter the way we experience our senses, they generate social hierarchies of value, and they structure how we remember. I could go on, but you get the point.

    So, to keep our starting point in view, the apparent fading of the directory structure model doesn’t matter because it is making STEM education more challenging for college professors. If it matters, it matters as a clue to some deeper shifts in the undercurrents of our cultural waters. As I read about these students who had trouble grokking directory structures, for example, I remembered Walter Ong’s work on Peter Ramus (or in Latin, Petrus Ramus, but, in fact, Pierre de La Ramée). Ramus was a sixteenth-century scholar who, in Ong’s telling, despite his unspectacular talents, became a focal point of the then-current debates about knowledge and teaching. Ong frames him as a transitional figure with one foot in the late medieval scholastic milieu and another in the “modern” world emerging in the wake of the Renaissance. Ong, who is best remembered today for his work on orality and literacy, cut his teeth on Ramus, his research focusing on how Ramus, in conjunction with the advent of printing, pushed the culture of Western learning further away from the world of the ear (think of the place of dialog in the Platonic tradition) toward the world of the eye. His search for a universal method and logic, which preceded and may have prepared the way for Descartes, yielded a decidedly visual method and logic, complete with charts and schemas. Perhaps a case could be made, and maybe has been made, that this reorientation of human learning and knowing around sight finds its last iteration in the directory structure of early personal computers, whose logic is fundamentally visual. Your own naming of files and folders may presume another kind of logic, but there is no logic to the structure itself other than the one you visualize, which may be why it was so difficult for these professors to articulate the logic to students. In any case, the informational milieu the students describe is one that is not ordered at all. It is a hot mess navigated exclusively by the search function.

    Back to Illich. I don’t have a carefully worked out theory of how newer mental models for organizing information will change the way we think about the world and ourselves, but I believe that revisiting some of Illich’s observations about this earlier transition will prove fruitful. Illich himself wrote in the hope that this would be the case.

    In the Vineyard of the Text, which is itself a careful analysis of a medieval guide to the art of reading written by Hugh of St. Victor, sets out to make one principal argument that goes something like this: In the 12th century, a set of textual innovations transformed how reading was experienced by the intellectual class. Illich describes it as a shift from monkish reading focused on the book as a material artifact to scholastic reading focused on the text as a mental construct that floats above its material anchorage in a book. (I’m tempted to say manuscript or codex to emphasize the difference between the artifact we call a book and what Hugh of St. Victor would’ve handled.) A secondary point Illich makes throughout this fascinating book is that this profound shift in the culture of the book that shaped Western societies for the rest of the millennium was also entangled with the emergence of a new experience of the self.

    So, let’s start with the changing nature of the reader’s relationship to the book and then come back to the corresponding cultural and existential changes.

    The Sounding Pages

    It’s commonly known that the invention of printing in the 15th century was a momentous development in the history of European culture. Elizabeth Eisenstein’s work, especially, made the case that the emergence of printing revolutionized European society. Without it, it seems unlikely that we get the Protestant Reformation, modern science, or the modern liberal order. Illich was not interested in challenging this thesis, but he did believe that the print revolution had an important antecedent: the differentiation of the text from the book.

    To make his case, Illich begins by detailing, as well as the sources allow, what had been the experience of reading prior to the 12th century, what Illich calls monkish reading. This kind of reading was grounded not just in the book generically, but in a particular book. Remember, of course, that books were relatively scarce artifacts and that reproducing them was a laborious task, although often one lovingly undertaken. This much is well known. What might not be as well known is that many features we take for granted when we read a book had not yet been invented. These include, for example, page numbers, chapter headings, paragraph breaks, and alphabetical indexes. These are some of the dozen or so textual innovations Illich has in mind when he talks about the transformation of the experience of reading in the 12th century. What they provide are multiple paths into a book. If we imagine the book as an information storage technology (something we can do only on the other side of this revolution), then what these new tools do is solve the problems of sorting and access. They help organize the information in such a way that readers can dip in and out of what can now be imagined as a text independent of the book.
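
    As a loose and admittedly anachronistic illustration of what an alphabetical index adds, here is a toy sketch that treats a book as an information store. The page contents are invented for the example; the point is only that an index converts front-to-back reading into random access.

```python
from collections import defaultdict

# A hypothetical book: page number -> the words on that page.
pages = {
    1: "of grammar and rhetoric",
    2: "rhetoric and the arts of memory",
    3: "memory wisdom and order",
}

# Build an alphabetical index: word -> the pages on which it appears.
index: dict[str, list[int]] = defaultdict(list)
for number, text in pages.items():
    for word in set(text.split()):
        index[word].append(number)

# A reader can now dip into the book at will rather than walking
# through it page by page: every page mentioning "memory", say.
print(sorted(index["memory"]))  # [2, 3]
```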

    I’ve found it helpful to think about this development by recalling how Katherine Hayles phrased one of the themes of How We Became Posthuman. She sought to show, in her words, “how information lost its body.” Illich is here doing something very similar. The text is information that has lost its body, i.e., the book. According to Illich, until these textual innovations took hold in the 12th century, it was very hard to imagine a text apart from its particular embodiment in a book, a book that would’ve borne the marks of its long history—in the form, for example, of marginalia accruing around the main text.

    I’ve also thought about this claim by analogy to the photograph. The photograph is to the book as the image is to the text. This will likely make more sense if you are over 35 or thereabouts. Today, one can have images that live in various devices: a phone, a laptop, a tablet, a digital picture frame, the cloud, an external drive, etc. Before digital photography, we did not think in terms of images but rather of specific photographs, which changed with age and could be damaged or lost altogether. With digitization, then, our relationship to the artifact has changed. Roland Barthes could not bring himself to include the lone photograph he possessed of his mother in his famous study of photography, Camera Lucida, published in 1980. The photograph was too private, his relationship to it too intimate. This attitude toward a photographic image is practically unintelligible today. Or, alternatively, imagine the emotional distance between tearing a photograph and deleting an image. This is an important point to grasp because Illich is going to suggest that an analogous operation happens in the 12th century as the individual detaches from their community. But we’ll come back to that in the last section.

    Some of these innovations also made it easier to read the book silently—something that was unusual in the scriptoriums of early medieval monasteries, which could be rather noisy places. And, of course, this reminds us that the transition from orality to literacy was not accomplished by the flipping of a switch. As Illich puts it, the monkish book was still understood as recorded sound rather than as a record of thought. Just as we thought of the web in terms of older textual technologies and spoke of web pages and scrolling, readers long experienced the act of reading by reference to oral forms of communication.

    So, here is one of a handful of summary paragraphs where Illich lays out his case:

    This [technical] breakthrough consisted in the combination of more than a dozen technical inventions and arrangements through which the page was transformed from score to text. Not printing, as is frequently assumed, but this bundle of innovations, twelve generations earlier, is the necessary foundation for all stages through which bookish culture has gone since. This collection of techniques and habits made it possible to imagine the ‘text’ as something detached from the physical reality of a page. It both reflected and in turn conditioned a revolution in what learned people did when they read — and what they experienced reading to mean.

    Elsewhere, he wrote,

    The text could now be seen as something distinct from the book. It was an object that could be visualized even with closed eyes […]. The page lost the quality of soil in which words are rooted. The new text was a figment on the face of the book that lifted off into autonomous existence […]. Only its shadow appeared on the page of this or that concrete book. As a result, the book was no longer the window onto nature or god; it was no longer the transparent optical device through which a reader gains access to creatures or the transcendent.

    I’m going to resist the temptation to meticulously unpack for you every one of those claims, but the last sentence deserves a bit of attention, particularly when coupled with the last sentence of the previously quoted paragraph. Together they remind us that what we think we’re doing when we’re reading evolves over time. We don’t read with the same set of assumptions, habits, and expectations that the medieval monks or modern scholastic readers brought to the text. As Illich put it in the early 1990s, “Quite recently reading-as-a-metaphor has been broken again.” And a little further on, “The book has now ceased to be the root-metaphor of the age; the screen has taken its place. The alphabetic text has become but one of many modes of encoding something, now called ‘the message.’”

    Part of the charm of In the Vineyard of the Text lies in its careful attention to what monastic readers thought they were doing when they read a book, and not just a sacred book. The book was a source of wisdom and a window onto the true order of things. Through it the reader made contact not with the thoughts of a person but with reality itself. The reader’s vision, conceived of as a searchlight emanating from the eyes, searched the book, often an illuminated manuscript, for the light of truth. In the book, the reader sought to order their soul. “‘To order,’” as Illich observed, “means neither to organize and systematize knowledge according to preconceived subjects, nor to manage it. The reader’s order is not imposed on the story, but the story puts the reader into its order. The search for wisdom is a search for the symbols of order that we encounter on the page.” The presumption of order makes for a striking contrast to the experience of a hot mess, of course, and the search for wisdom is rather different from what we do when we are doing what we call searching.

    The reader sought ultimately to order his soul in accord with the order of things he discovered through the book. But to do so, the reader had first to be trained in the arts of memory. The student would, according to the ancient art, fashion within himself a memory palace to store and readily access the wisdom he encountered in the book. Interestingly, search at this stage was primarily a mental technique designed for ready access to the treasures kept in the mind’s storehouse. As St. Augustine, a trained rhetorician undoubtedly adept at the arts of memory, put it nearly 700 years earlier, “I come to fields and vast palaces of memory, where are the treasures of innumerable images of all kinds of objects brought in by sense-perception.”

    Monastic reading, as Illich describes it, was taken to be “an ascetic discipline focused on a technical object.” That technical object was the book as it was known prior to the 12th century. It was a tool through which the reader’s soul was finely tuned to the true order of things. This approach to reading was not sustainable when technical innovations transformed the experience of the book into that of a scholastic text read for the sake of engaging with the recorded thoughts of an author.

    Perhaps you’ve gotten this far and are wondering what exactly the point of all of this might be. To be honest, I delight in this kind of encounter with the past for its own sake. But I also find that these encounters illuminate the present by giving us a point of contrast. The meaning and significance of contemporary technologies become clearer, or so it seems to me, when I have some older form of human experience to hold them up against. This is not to say that one form is necessarily better than the other, of course. Only that the nature of each becomes more evident.

    It’s striking, for instance, that in another age there existed the presumption of an order of things that could be apprehended through books—not as repositories of information and arguments but as windows onto the real—and that the role of reading was to help order the soul accordingly. That the meaning of what, on the surface, appears to be the very same activity could alter so dramatically is remarkable. And it prompts all sorts of questions for us today. What do we imagine we are doing when we are reading? How have our digital tools—the ubiquity of the search function, for example—changed the way we relate to the written word? Is there a relationship between our digital databases and the experience of the world as a hot mess? How has the digital environment transformed not only how we encounter the word, but our experience of the world itself?

    I’d say at this juncture that we are reeling under the burdens of externalized memory. Hugh’s students labored to construct elaborate interior structures to order their memories and enjoy ready access to all the knowledge they accumulated. And these imagined structures were built so as to mirror the order of knowledge. We do not strive to interiorize knowledge. We build external rather than internal archives. And we certainly don’t believe that interiorizing knowledge is a way of fitting the soul to the order of things. In part, because the very idea of an order of things is implausible to those of us whose primary encounter with the world is mediated by massive externalized databases of variously coded information.

    There comes a point when our capacity to store information outpaces our ability to actively organize it, no matter how prodigious our effort to do so. Consider our collections of digital images. No one pretends to order their collections. I’m not sure at what number, maybe 10,000, our efforts to organize images falter. Of course, our apps do this for us. They can self-sort by a number of parameters: date, file size, faces, etc. And Apple or Google Photos offer a host of other algorithmically curated collections to make our image databases meaningful. We outsource not only remembering but also the ordering function.
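
    For what it’s worth, the machine’s version of this ordering is trivially simple, which is part of the point. Here is a minimal sketch, assuming a hypothetical folder of JPEGs; it orders the collection by file metadata, a crude stand-in for the date-sorted views a gallery app generates, with none of the rememberer’s labor.

```python
from datetime import datetime
from pathlib import Path

# A hypothetical image folder; the path is an assumption for illustration.
library = Path("~/Pictures").expanduser()

# Order the collection by modification time, newest first: a rough
# stand-in for the date-based views our photo apps build for us.
images = sorted(library.glob("*.jpg"), key=lambda p: p.stat().st_mtime, reverse=True)

for img in images[:10]:
    taken = datetime.fromtimestamp(img.stat().st_mtime)
    print(f"{taken:%Y-%m-%d}  {img.name}  ({img.stat().st_size // 1024} KB)")
```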

    Bearing in mind the current length of this post, let me draw things to a close by briefly taking up the other salient feature of Illich’s discussion: the relationship between the emergence of the text and the emergence of the modern self.

    “I am not suggesting that the ‘modern self’ is born in the twelfth century, nor that the self which here emerges does not have a long ancestry,” Illich remarks at one point. But he certainly believes something of significance happens then and that it bears some relationship to the emergence of the text. As he goes on to say,

    We today think of each other as people with frontiers. Our personalities are as detached from each other as are our bodies. Existence at an inner distance from the community, which the pilgrim who set out to Santiago or the pupil who studied the Didascalicon had to discover on their own, is for us a social reality, something so obvious that we would not think of wishing it away. We were born into a world of exiles …

    What Illich is picking up on here is the estrangement of the self from the community, which was, in his view, analogous to the estrangement of the text from the book. “What I want to stress here,” Illich claims at one point, “is a special correspondence between the emergence of selfhood understood as a person and the emergence of ‘the’ text from the page.”

    Illich goes on at length about how Hugh of St. Victor likened the work of the monk to a kind of intellectual or spiritual pilgrimage through the pages of the book. Notice the metaphor. One did not search a text, but rather walked deliberately through a book. At one point Illich writes, “Modern reading, especially of the academic and professional type, is an activity performed by commuters or tourists; it is no longer that of pedestrians and pilgrims.”

    So, to summarize this point, as the text detaches from the book, or the image from the photograph, so the self detaches from the community. There is one point, though, at which I think I might build on Illich’s analysis. Illich believed that in the 12th century the self began to detach from the community. I wonder whether a case might not also be made that the self was detaching from the body. I think, for example, of the mind/body or soul/body dualism that characterizes the tradition of Cartesian thought. It’s tempting to imagine that this dualism was just a standard feature of medieval thought as well. But I’m not sure this is true. Thomas Aquinas, born roughly 80 years after Hugh, could still write, “Since the soul is part of the body of a human being, the soul is not the whole human being and my soul is not I.” There’s a lot one could get into here, of course, but it’s worth considering not only the wilder transhumanist dreams of immortality achieved by uploading our minds to the machine but also how we’ve dispersed the self through digital media. The self is no longer rooted in the experience of the body. It lives in various digitally mediated manifestations and iterations. As such, it is variously coded and indexed. We can search not only the text but the archives of the self. And perhaps, like other forms of information that have lost their body, it becomes unmanageable. Or, at least, it takes on that aspect when we understand it through the primary experience of a digitally dispersed self. While the early instantiations of social media were characterized by the well-managed performance, more recent iterations seem to scoff at the idea. Better to simply embrace the hot mess.


