Off-Modern Onions

Still Longforming Like It’s the Year 2000

Andy Li, CC0 image via Wikimedia Commons

Or, Feeling Seen by a Writer Doing His Own Technoskeptic Thing

If you haven’t read him, add Anders Kølle to your list of technocritics. It’s not that he’s anti-technology tout court; at the very least, as a twenty-first-century author, he relies on digital and print technologies to get his work out. But his is a singular addition to the already packed conversation about digital-age screwiness.

In The Technological Sublime, Kølle provides the first discussion of the topic I’ve seen that leans heavily on the granddaddy of information theory, mathematician-cum-computer scientist Claude Shannon. By laying out in a 1948 paper how “information could be quantified and demonstrat[ing] that information could be delivered reliably over imperfect communication channels like phone lines or wireless connections,” Shannon started us down the path toward our text- and IM-clogged present.1 It’s not that the guy hoped to turn us all into zombies so infatuated with our phones that we’ll walk right into traffic to attend to them; he just wanted to make communication less prone to error or uncertainty, trying to ensure that the message a “sender” intended was heard and understood by its “receiver” unproblematically, exactly as meant. If you’ve ever tried talking to someone in a loud restaurant, sitting uncomfortably close in order to yell in their ear and have your ear yelled into in turn, only to come away not really sure any conversation has actually taken place, you can see why the effort to reduce “noise”—actual sound, the influence of environmental conditions, and so forth—was not a ridiculous idea.

But Kølle thinks that the way Shannon approached this challenge led to the rise of other contemporary problems. As Shannon admirer David Tse puts it, the man’s “genius lay in his observation that the key to communication is uncertainty. After all, if you knew ahead of time what I would say to you… what would be the point of writing [or stating] it?”2 And in order to remove that uncertainty, Shannon set out to determine what the “basic unit of uncertainty” was—and so was born the “bit,” a now-familiar representative of binary choice. With its essence or mode of existence designated via either the number one or the number zero (or on/off, yes/no, etc.), the little speck was granted no other options. You could be absolutely certain about one thing with this fleck of information: it was either x or y, but definitely nothing else. Phew.
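
For the numerically curious, here is a quick back-of-the-envelope sketch of how uncertainty gets counted in bits. It is my own illustration, not anything from Shannon or Kølle, but the formula is Shannon’s entropy measure, under which a fair coin flip carries exactly one bit:

```python
import math

def entropy_bits(probs):
    """Shannon entropy in bits: H = -sum(p * log2(p))."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A fair coin: two equally likely outcomes, one full bit of uncertainty.
print(entropy_bits([0.5, 0.5]))    # 1.0

# A heavily biased coin: you're rarely surprised, so under half a bit.
print(entropy_bits([0.9, 0.1]))    # ~0.469

# Four equally likely outcomes take two yes/no questions to resolve.
print(entropy_bits([0.25] * 4))    # 2.0
```

The less surprising the message, the fewer bits it takes to resolve; a foregone conclusion carries zero bits, which is part of why the little speck’s rigid x-or-y existence was so reassuring.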

To make communication—the handing over of your message to another person—as clear and free from (interpretive) error as could be, the key was keeping the number of bits in your message as small as possible. The more bits you sent at a given speed, the more potential uncertainty (or “entropy”) would creep in. Although Shannon’s work was focused on computers and electronic communication, and particularly on how much information a given channel could carry, that work makes sense even at a tech-free level; if you’re trying to shout instructions to someone and only have three seconds to do it, “Run!” will probably be understood more easily, and heeded more readily, than “There’s a large man with a knife coming at you from the west at about a five-minute mile from a half a block away, and you should get out of his way by running north, east, or south!”
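
To put toy numbers on that shouting match, here is a crude sketch of my own; the eight-bits-per-character encoding, the abridged warning, and the 100-bit “budget” are all made up for illustration, not drawn from Shannon’s paper:

```python
def ascii_bit_cost(message):
    """Naive cost: 8 bits per ASCII character, no compression."""
    return 8 * len(message)

short_msg = "Run!"
long_msg = ("There's a large man with a knife coming at you from the "
            "west, and you should get out of his way!")

# Hypothetical: suppose your shouting "channel" can move only
# 100 bits in the three seconds you have.
BUDGET_BITS = 100
for msg in (short_msg, long_msg):
    cost = ascii_bit_cost(msg)
    print(f"{cost:4d} bits needed; fits the budget: {cost <= BUDGET_BITS}")
```

“Run!” squeaks through at 32 bits; even the abridged knife bulletin does not come close.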

Things start getting uncomfortable, though, distanced, in this aim for clarity—because clarity grows ever closer to being a mere servant of efficiency. Tse notes that an “unexpected conclusion stemming from Shannon’s theory is that whatever the nature of the information—be it a Shakespeare sonnet, a recording of Beethoven’s Fifth Symphony or a Kurosawa movie—it is always most efficient to encode it into bits before transmitting.” Tse’s describing a technological process aimed at getting a whole from sender to receiver. The problem is, though, in a whole whose meaning depends upon the connected interplay of its parts, how are you supposed to break the whole into (meaningful) bits? Like it or not, interpretation breaks the reign of objectivity. Do the first three notes of that famous Fifth Symphony’s opening four-note phrase get separated out from the fourth one? They’re Gs, after all, totally different in pitch and duration from that final held E-flat. Is each note a bit on its own? Remove one note from that phrase, and the symphony’s celebrated “fate motif” disappears. Separate that phrase, as a unit on its own, from the repeated phrase that starts a step lower, and the two-phrase unit doesn’t even send the tonal or emotional or aesthetic “message” the composer had in mind, and that’s not even addressing the speed at which any particular orchestra plays the passage or how long it holds the fermata on that fourth note. Is the pause between one phrase and the next a bit of silence in itself? You get the point—and then if that famous opening is all you know of the symphony, you don’t know the symphony, in spite (maybe because) of those first few bars having made their way into cultural cliché by now.
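
To see how literal Tse’s “encode it into bits” really is, here is a toy digitization of a single held E-flat. It is my own sketch, with an arbitrary sample rate and bit depth rather than any real broadcast standard:

```python
import math

SAMPLE_RATE = 8000   # samples per second (a toy value)
BIT_DEPTH = 8        # bits per sample

def digitize_note(freq_hz, seconds):
    """Sample a pure tone and quantize each sample to BIT_DEPTH bits."""
    n = int(SAMPLE_RATE * seconds)
    levels = 2 ** BIT_DEPTH
    samples = []
    for i in range(n):
        x = math.sin(2 * math.pi * freq_hz * i / SAMPLE_RATE)
        samples.append(round((x + 1) / 2 * (levels - 1)))  # map -1..1 onto 0..255
    return samples

# One held E-flat (roughly 311 Hz) sustained for one second...
eflat = digitize_note(311.13, 1.0)
print(len(eflat) * BIT_DEPTH, "bits")   # 64000 bits
```

Sixty-four thousand bits, each one faithfully x or y, and not one of them knows it belongs to a fate motif; the motif lives only in the whole.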

Maybe, as long as all the bits come together and the seams that join them haven’t intruded in some way, it’s not so much a problem for the person who hears the broadcast symphony as a whole. What does it do to the sender, though, to think you can chop the work into bits and reassemble them at the other end, to think one bit can stand on its own, free from influence on or being influenced by the surrounding notes and silences, no harm done? Beethoven did, of course, decide that only this “bit,” this particular E-flat and not a D or an F, would fit in this particular space, would convey the “message” he had in mind. But when we start proclaiming that we can break the music apart and resend that so-called message without concern, glitches are revealed in that assumption. For instance, believing it’s possible to remove uncertainty means having a comprehensive grasp of what [types of] uncertainties exist and how they can be eliminated. The bits you’re sending are no more than objectively present pieces of information—mere data—situated at a remove that allows them to undergo perfecting (noise-cancelling) operations. Although some sort of (human and now machine/algorithmic) choice has had to be made about what constitutes a basic unit of information, the entire process of transmission is touted as value-free.

This is where Kølle’s concerns really begin to fire up. The art historian’s argument is complex and densely packed—delightfully free of the bare-bones bullet-pointed thinking meant to prevent us from puzzling over multiple meanings or implications—so I’ll undoubtedly not do it justice here. But he notes that Shannon moved us away from interest in the content of a message, “what was said and meant,” to a focus not on that content itself, but on “how it was transmitted.”3 You get Beethoven’s E-flat from sender to receiver unchanged, no straying allowed. So if your record developed a scratch that resulted in your hearing that note with some static, failure has occurred. There can be no positive value to that static, that source of noise; if it provided a new sense of what the music was doing, or added nuance to your feelings while listening, that was still evidence of a badly transmitted message. Again, the success of the error-free—faithful—transmission was the point, not what the content of that transmission effected.
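
Shannon’s framework does supply machinery for erasing that scratch. The simplest possible illustration, my own example rather than anything Shannon or Kølle discusses, is a naive repetition code: send every bit three times and let a majority vote cancel the noise:

```python
def encode_repetition(bits):
    """Send every bit three times: the crudest form of redundancy."""
    return [b for bit in bits for b in (bit, bit, bit)]

def decode_repetition(received):
    """A majority vote over each triple recovers the original bit."""
    return [1 if sum(received[i:i + 3]) >= 2 else 0
            for i in range(0, len(received), 3)]

message = [1, 0, 1, 1]
sent = encode_repetition(message)
sent[4] ^= 1                               # the "scratch": noise flips one bit
print(decode_repetition(sent) == message)  # True: the static is erased
```

The static gets corrected away without anyone asking what it sounded like, which is exactly the value-free faithfulness Kølle worries about.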

That rationale applies, whether you’re dealing with music or language or art or image. If the transmission is noise-free, the message between sender and receiver won’t change, either; nothing is marred by interpretation on either end, because nothing about a clean, value-free binary could possibly be interpreted. The message makes it intact or it doesn’t, and it doesn’t matter what you’re sending. Chamber music or dirty picture, it’s all just movable data, especially once that data begins to be passed not just from one human to another, but between machines. “Accuracy and efficiency,” Kølle says, “thus supplanted [messy] understanding and creativity.” And with the massive increase in speeds that allow for efficient transmission, and so transmission of ever more information, period, “the significance of interpretation is minimized while the importance of information flow is maximized.” When that happens, the slow art of contemplating what you’ve received, taking time in silence to ponder what it could mean, is bypassed in favor of “prolific chatter” that itself fears even a moment without some sort of information being exchanged.4

The degraded role of careful interpretation, and the “in-forming” (as opposed to the dry process of transmitting and receiving information) that comes along with it, are critical for Kølle.5 But that prolific, silence-denying chatter is equally critical in itself. If you’re not saying anything, or are taking too much time to say it, or are not contributing to the constant stream of data that keeps the silence at bay, you and your lumbering are somehow frightening, threatening to reveal the nothing behind that flood. So thanks to that inadmissible terror you represent, you’re then declared irrelevant; because you won’t join in the game as now played, no one will bother with you at all. No one will respond to the nothing—or the too much—you have to say; these days, if you want to take part in human interaction, you have to work with the swiftness that interaction requires. And that pace in turn requires uncomplicated brevity; the important thing is, remember, accuracy of whatever message it is you’re transmitting, maximum efficiency at that given speed. In spite of the injunction to express your unique self to and before the world, no real particularity is possible when there’s nothing to a piece of data meant only to be a bridge to the next piece. Even if you have an original thought or opinion, that originality gets absorbed by the way anything acceptable is expressed: quickly, transparently, without forcing the receiver to trip up and think about what it means, without relying on the particularities of a particular relationship for that thought to be meaningful. You should be able to say it to anyone and everyone, and anyone and everyone will understand, will immediately see its relevance.

Kølle says that if you refuse this communicative game and its rules, your attempts to reach out to, function with and alongside, other contemporary humans will come to naught. People may once have viewed a long letter or email, or a call out of the blue, as an attentive gift, proof of 1) the “sender’s” care, partly via the time taken to craft it, partly as a risky expression of who that sender is and how they feel; and 2) trust in the receiver’s ability to see and appreciate that gift, and its creator, for what it was. Now, though, such long-winded things are viewed as burdens. You don’t know how long it’ll take the writer of a four-page letter to get to the real point, or whether, from the get-go, that letter will even contain an identifiable point; you don’t know what a real-time phone conversation will demand of you. Will you have wasted your time—and will you in turn have been forced to take your own risk in responding, only to have come off like an idiot? The thing is, the risky, unpredictable, and slow-paced sharing that constitutes meaningful communication between human beings isn’t undertaken “with the purpose of ‘reproducing a message,’” and won’t work “[i]f it does not create and offer change, if it does not offer the possibility to trans-form and re-form. Hence Shannon’s essential misunderstanding…: The fundamental problem of communication is not how to make the same but how to offer difference.”6 Offer difference, though, in the Age of Information, and you’ll be left lonely, wanting to participate and “express yourself,” but only able to do so by turning yourself into someone you’re not, and hence still feeling lonely and false and wrong.

With his analysis of gift turned to burden, Kølle nails the present-day predicament of thinking individuals like no other critic I’ve read. Maybe it’s because that analysis corresponds to my own experience, especially to my inability to cease spewing out what I refuse to call “long reads” and my insistence on staying as far away as I can from brevity-obsessed (social) media. His observations get to the heart of why I’m mostly ashamed to share what I write here even with friends, knowing they’ll feel compelled to maybe skim through it while tamping down irritation at my having forced this stuff upon them. I do realize as well that I’m sometimes seen as making things unduly hard on myself, if not on others. Why don’t I just get an Uber, instead of taking all that time on the bus? Why don’t I just Google the topic I’m interested in, instead of asking an acquaintance if they know anything about it? Why don’t I just look at someone’s Instagram to see what they’ve been doing, instead of asking them to tell me about what’s been happening in their life?

I get the sense that communication itself—not just via long missives—is increasingly considered onerous. When you have to get a bot, for instance, to drum up a two-sentence email response to an easy question, something very strange has happened; the suspicion grows that some wasting disease is upon us, turning us back into nineteenth-century literary invalids without the energy to speak or lift a finger. I’m also fully cognizant of the fact that these days, I regularly issue more or less the same condemnation of the state of our digitized world. It’s the burden I guess I’ve accepted, even as I apologize for asking those around me to help carry a heavy weight I could just as well toss to the side. I hate to quote good old Martin Luther, but in straits much more perilous than my merely dorky own, he encapsulated best what I’m feeling: “Here I stand; I can do no other.”7 I’ll write long and slowly about that standing as the data keeps speeding by.




1. IEEE Information Theory Society, “Claude E. Shannon,” http://www.itsoc.org/about/shannon.
2. Quotations in this paragraph are from David Tse, “How Claude Shannon Invented the Future,” Quanta Magazine, December 23, 2020, https://www.quantamagazine.org/how-claude-shannons-information-theory-invented-the-future-20201222/.
3. Anders Kølle, The Technological Sublime (Delere Press, 2018), 34. Further quotations in this paragraph are from pages 38 and 45. Incidentally, the replacement of “understanding and creativity” by “accuracy and efficiency” mentioned toward the end of the paragraph could be illustrated by an old Onion article describing a fictional interruption of a Hootie and the Blowfish concert by the P-Funk Mothership: “Said band member Mark Bryan, ‘We had just finished a really super rendition of “Let Her Cry” that sounded exactly like on the CD, when out of nowhere these strange men came down shouting about getting up and doing the backstroke, or something.’” “Mothership Accidentally Descends on Hootie Concert,” The Onion, February 5, 1997, https://theonion.com/mothership-accidentally-descends-on-hootie-concert-1819564179/.
4. Quotations in this paragraph are from Kølle, 38, 45, 45.
5. Kølle, 58. “To in-form and thus to shape and give form would require a level of commitment and engagement which is incompatible with the traveling attention of constant orientation and the perpetual demands of the flux.” Italics in original.
6. Kølle, 63.
7. For the background to this declaration, see W. Thomas Smith Jr., “‘Here I Stand: I Can Do No Other’: Commemorating the 500th Anniversary of Martin Luther’s 95 Theses,” Columbia Metropolitan, October 2017, https://columbiametro.com/article/here-i-stand-i-can-do-no-other/.


#communication #information theory #tech criticism #writing