The “real world,” we’re told, is an increasingly meaningless construct, a product of nostalgia or a failure to adapt to the digital age. Real life is no longer confined to face-to-face interactions, nor is identity constructed only there, but also on Facebook, Twitter, and in all the spaces that devices create by continuously talking to one another. While their forerunners were static, these environments are dynamic. We can still escape from reality in a book, a newspaper article, or a movie, as passive spectators, but we now actively create and influence the networks that compete for that same attention. Thanks to mobile devices, moreover, we’re never simply immersed in the digital and away from the physical world around us—we comfortably occupy both at once. The term “meatspace” has thus become necessary as the distinction between real and virtual has vanished. If “real” things can happen in the digital sphere too, then environments are better characterized by their corporeality than by their realness.
Despite all the similarities, though, digital platforms differ from traditional reality in this respect: It’s too easy to leave them. A single mouse click makes Facebook or Twitter disappear, at least for a while, meaning they’re less enveloping than work, school, a social gathering, or any other meatspace activity that we wish we could exit as quickly. Unlike Amazon, which just wants you to buy things, social networks like Facebook and Twitter need you to spend time using their platforms and plugging important parts of your life into them. Two-thirds of Facebook’s 1.35 billion active users log in every day, and Facebook’s future success depends on increasing that number. Since Facebook is competing with Twitter and other platforms for your time and attention, not to mention the possibility that you’ll put your computer away entirely, it becomes even more important to Facebook that you stay logged in.
So, back to that big difference between digital reality and meatspace reality: How do we react when reality makes us feel bad? In the digital version, we tend to log off; in meatspace, we must react in ways that are rarely as simple—“dealing with it,” if you will. Twitter and Facebook both understand the low barriers to disengagement, and put significant effort into filtering the reality they present to their users. Twitter gives you full responsibility to choose who you follow, and empowers you to make unwanted voices disappear from your feed. Facebook’s solution is less transparent and more effective, judging by its growing addictiveness and ten-figure user base: Algorithms filter the content appearing in your News Feed, guessing what you want to see better than you could and hiding what you don’t like before you find out that it exists. On Twitter, in theory, you might follow accounts that you actively disagree with, but that’s doubtful in practice, and doing so would probably make you use Twitter less. If you could make that annoying coworker disappear with a single click, you probably would, right?
Filtering reality has long been a goal of environmental design. Before the internet, shopping malls and suburban enclaves were built to exclude undesirable elements, reinforce social myths, and make people feel good enough to not leave. Digital environments achieve those same objectives at greater scale and less expense. In 2012, Facebook conducted a controversial experiment on a subset of users to better understand the emotional impact of its News Feed content and tune its algorithms accordingly. More recently, Adrian Chen reported on Facebook’s overseas “content moderators” who manually scan the network for offensive material and remove it before it appears in anyone’s feed. Both unsettling efforts, of course, aim to limit the negativity that Facebook’s users encounter on the site, maintain the network’s position as the ersatz spiritual center of contemporary life, and monetize that position as advertising revenue.
Like any designed environment, the internet’s largest platforms breed delinquent forces that actively subvert their designers’ goals. As if to remind us of the limitations of the social networks’ version of reality, two species have evolved and flourished in the digital ecosystem: the troll and the bot. Trolls—the antagonistic online personae of users who may actually be agreeable in person—distill the negativity that Twitter or Facebook make so much effort to hide, and exploit gaps in the networks’ sanitizing measures to force that negativity upon their chosen targets. Like barbarians or guerrilla combatants, their intimate familiarity with the landscape and their decentralized organization enable them to stay one step ahead of the countermeasures that an incumbent power employs to stop them (the blocking and muting features, account deactivation, or more sophisticated filtering algorithms). Trolls inhabit Nakatomi space and make a mockery of the myth that multimillion-user networks can be scrubbed of unauthorized discourse. In their most extreme forms, trolls actually drive users away from their chosen social network altogether—and back to an environment where social norms and physical distance make trolling impossible.
Bots, less menacing than trolls and frequently even amusing, are also unwelcome in the digital landscape. Bots rarely represent real people, and thus expose the fallacy that social networks are places of firsthand human expression alone. Mining the internet for existing content, or posting according to programmed rules, bots add to the entropy that human work is always trying to reduce. A sophisticated bot can nearly pass the Turing test (or, in the case of @Horse_ebooks, a human can pass for a bot imitating a human), revealing yet another advantage that meatspace still holds over social networks and dating websites: We’re still getting plenty of essential information from face-to-face interactions that can be faked or misinterpreted in digital channels. John Robb has posited the future of Twitter as machine conversations between bots, finding one company already pursuing that course. If users must wade through increasing volumes of digital noise to find the signals they’re looking for, they’ll revert to more familiar environments where it’s easy to distinguish the two.
Twitter has bigger problems with bots and trolls than Facebook does, suggesting that a truly free and transparent platform is impossible without the entropic tendencies that more controlled platforms can suppress. The very existence of trolls and bots, however, is a vaccine that inoculates us against a greater threat: Not understanding the digital landscape we spend so much time in, and imagining it’s governed by more familiar forces.
“…each block is covered with several layers of phantom architecture in the form of past occupancies, aborted projects and popular fantasies that provide alternative images to the New York that exists.”
The commentary sphere, having its center of gravity in New York, is rarely more fluent than when it’s churning out prose about its most familiar city. One favorite topic in that category, becoming more popular all the time, is the New York Death Certificate, in which the author laments the city’s seizure by the megarich and the concurrent decline of culture, grit, Times Square pornography theaters, and the “real New York” that for many young bloggers is a product of imagination more than experience. It takes a lot for one of these pieces not to be tired or uninteresting, even when passionately argued by a person who lived both versions of the city, but some, like David Byrne’s, are certainly better than others.
Manhattan is overrun by a flawed brand of capitalism, though, and it does stifle creative production in ways that it didn’t 40 years ago. These are real problems, which is why the think pieces keep accumulating. At the heart of every essay proclaiming the decline of a great old New York and its replacement with a soulless playground for the wealthy, however, lies a grave mistake: comparing the present version of a city (or the present version of anything) with a younger incarnation of itself, and using one as a foundation for critiquing the other—sending nostalgia to do serious work for which it’s not equipped. Is any prior chapter of New York’s history so faultless that we can claim it was simply better? No, we probably just remember it that way.
If it seems like I’m dismissing nostalgia altogether, let me be clear: Nostalgia employed properly is a powerful force. Sanford Kwinter (a constant reference point for me) explains this best: “It is customary today…to dismiss points of view and systems of thought as ‘nostalgic’ whenever they attempt to summon the nobility of past formations or achievements to bear witness against the mediocrities of the present.” This summoning, of course, is precisely what the “New York is dead” school attempts, but those critics’ shortcoming is that they stop there. Kwinter continues by saying that memory is an act of creation, thus reaching beyond the actual limitations and faults of the past. “The antidote is the flexibility afforded by controlled remembering, not only of what we were but, in the same emancipatory act, of what we might be,” he writes. Choosing among the idealized aspects of the past and the advancements contained in the present, we can stitch together a notion of a realistic but improved version of the city we inhabit today.
Last week I saw a few comedians perform at Madison Square Garden. I couldn’t stop thinking about the density of narrative and meaning within that space: the historic preservation movement accelerated by the old Penn Station’s demolition; the symbolic significance of playing or performing in the Garden (Bruce Springsteen playing ten concerts in a row at MSG or LeBron scoring 61 points there); or the continuous arrival and departure of train passengers for suburban New Jersey and Kansas directly beneath the basketball court (as opener Hannibal Buress pointed out during the show I attended). All of these stories coexist in our culture’s collective memory, almost haunting the blocks that the arena occupies. Every other block in New York can lay claim to something similar, although few are inhabited by narratives as potent as the Garden’s, and the city’s blocks each house failures and imagined realities as well as actual events, as the Rem Koolhaas quote above reminds us. New York, like any city, is constantly being created, imagined, destroyed, and rebuilt. With such turbulent change surrounding us, how could we believe for even one second that any past or present version of the city is frozen in place, or that we’re a passive audience to it?
“Two extremes of redundancy in English prose are represented by Basic English and James Joyce’s book Finnegans Wake. The Basic English vocabulary is limited to 850 words and the redundancy is very high. This is reflected in the expansion that occurs when a passage is translated into Basic English. Joyce on the other hand enlarges the vocabulary and is able to achieve a compression of semantic content.”
Bandwidth is a late machine age term that helps illuminate the millennia of technology and culture that preceded its coinage. The definitions of bandwidth vary, but its most basic meaning is a channel’s capacity to carry information. Smoke signals and telegraphs are low-bandwidth media, transmitting one bit at a time in slow succession, while human vision transmits information to the brain at a much faster rate. The past century has yielded tools for measuring bandwidth and quantifying information (see Claude Shannon) as the channels for carrying that information have advanced rapidly.
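Shannon’s way of quantifying information can be made concrete with a few lines of code. The sketch below (a minimal illustration, not drawn from the essay itself; the example strings are my own) computes character-level Shannon entropy—the average number of bits of information each symbol carries—showing that a repetitive signal transmits less per symbol than a varied one:

```python
from collections import Counter
from math import log2

def entropy_bits_per_symbol(text: str) -> float:
    """Shannon entropy: average information per character, in bits."""
    counts = Counter(text)
    total = len(text)
    # H = -sum(p * log2(p)) over the probability p of each symbol
    return -sum((n / total) * log2(n / total) for n in counts.values())

# A smoke signal repeating one symbol carries no information per symbol;
# varied text carries several bits per character.
print(entropy_bits_per_symbol("aaaaaaaa"))                 # low (zero)
print(entropy_bits_per_symbol("the quick brown fox jumps"))  # higher
```

The same arithmetic underlies the bandwidth figures quoted for any channel: capacity is just this per-symbol information multiplied by the rate at which symbols arrive.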
In any era, but never more so than now, the landscape of existing technology is a palimpsest in which the cutting-edge, the obsolete, and the old-but-durable all coexist as layers of varying intensity and visibility. New, unprecedented means of information exchange and communication are invented constantly, while their older equivalents live on long after they’ve stopped being state of the art. Information reaches each of us—and often assaults us—through a multitude of high-bandwidth and low-bandwidth channels, some of which we permit to speak to us, and some of which do so uninvited. Sitting down to watch TV, checking one’s iPhone during dinner with a friend, or finding a quiet place to read a book all represent conscious choices to block certain channels and pay attention to others. Marshall McLuhan recognized that technologies in our environment have a rebalancing effect on our senses, writing that each medium is an “intensification, an amplification of an organ, sense or function, and whenever it takes place, the central nervous system appears to institute a self-protective numbing of the affected area, insulating and anesthetizing it from conscious awareness of what’s happening to it.”
Human attention, then, is a finite resource. A variety of criteria inform everyone’s small, constant choices about which media to focus on and which to tune out, and those choices often have little to do with their bandwidth, but today one thing is certain: Our own brains, not anything manmade, are the bottlenecks that limit how much information we can receive at once. The contemporary world offers as much space for storing information as we’ll ever need, and we can instantly send any amount of it to the other side of the planet. Well before either was the case, however, humans learned to ignore the features of our environments that we deemed irrelevant—the noise surrounding the valuable signals we actually wanted or needed to receive.
Claude Shannon’s quote above, from his Mathematical Theory of Communication, introduces a qualitative layer to the question of human bandwidth and the environments we seek: People move continuously through information-rich and information-poor environments and are affected differently by each. Basic English is “redundant,” meaning it’s a language that requires many words to convey even simple messages—and is therefore a language that few would choose to use for anything but utilitarian purposes. Finnegans Wake, at the opposite extreme, could not be richer in information or content, to the point that it can barely be compressed or summarized. In Shannon’s example, the rich, information-dense content of Joyce’s novel represents a higher quality of communication, and Basic English a lower quality, although the latter fulfills plenty of functional roles.
Low-information, redundant content has a flatness to it. It’s less interesting. The Residents expressed this a different way on their Commercial Album, which comprises 40 one-minute-long pop songs. The liner notes explain:
“Point one: Pop music is mostly a repetition of two types of musical and lyrical phrases, the verse and the chorus. Point two: These elements usually repeat three times in a three-minute song, the type usually found on top-40 radio. Point three: Cut out the fat and a pop song is only one minute long.”
Plenty of pop music, in other words, is redundant and can be compressed without losing anything. This might be too harsh and cynical a judgment, but it’s valuable as a polemic. Modern environments, from top-40 radio to architecture to fiction, are full of redundancy and thus thin on information. The ease of digital information storage and transmission helps explain why we can afford to be less economical with information than we were in the past, but getting used to redundancy, like getting used to a diet full of salt and sugar, reinforces our appetite for it and actually influences the types of information we produce. If the manmade world seems like a flatter place in the Information Age (not in the Thomas Friedman sense), this might be part of the reason.
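This compressibility claim can be demonstrated literally. The sketch below (my own illustration, using Python’s standard `zlib` compressor; the sample texts are stand-ins, not quotes from Shannon or The Residents) shows that a repetitive “verse-chorus” string shrinks far more under compression than equally long, less repetitive text:

```python
import random
import string
import zlib

# A repetitive passage, standing in for verse-chorus redundancy:
redundant = ("the chorus repeats, the chorus repeats, " * 50).encode()

# A passage of the same length with little repetition, standing in
# for information-dense prose (random letters as a rough proxy):
random.seed(42)
dense = "".join(
    random.choice(string.ascii_lowercase + " ") for _ in range(len(redundant))
).encode()

# Compressed size as a fraction of the original:
ratio_redundant = len(zlib.compress(redundant)) / len(redundant)
ratio_dense = len(zlib.compress(dense)) / len(dense)

print(f"redundant text: {ratio_redundant:.2f} of original size")
print(f"dense text:     {ratio_dense:.2f} of original size")
```

The redundant passage collapses to a few percent of its original size, while the dense one barely compresses at all—Shannon’s Basic English and Finnegans Wake, in miniature.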
As communication technology improves, the argument periodically surfaces that face-to-face interaction and cities in general will become obsolete. Joel Garreau rebutted this argument a few years ago in his article “Santa Fe-ing of the World,” in which he praises the high bandwidth that physical proximity and direct experience afford. He writes, “Humans always default to the highest available bandwidth that does the job, and face-to-face is the gold standard. Some tasks require maximum connection to all senses. When you’re trying to build trust, or engage in high-stress, high-value negotiation, or determine intent, or fall in love, or even have fun, face-to-face is hard to beat.” Even the most advanced digital media, in other words, are limited compared to full sensory engagement with one’s environment—the digital, closer to Basic English than Finnegans Wake, is still a utilitarian solution to problems like distance more than it’s an ideal theater for the highest levels of human contact. As our reality becomes more automated and algorithmic, our truly complex, nuanced, information-rich activities will continue to justify their existence, while the flat and redundant will increasingly disappear into the digital. By recognizing this condition, we can learn to preserve the depth of the former instead of simplifying our reality for easier absorption into lower-bandwidth channels.
The map below depicts the striking decline in Manhattan’s population density between 1910 and 2010, capturing the desperate overcrowdedness of the tenement-era Lower East Side (where the Other Half lived) alongside the more even distribution of residents throughout today’s New York. Created by an NYU researcher and posted by Matt Yglesias on Vox, the data visualization raises the obvious question of why New York’s population thinned so dramatically over the past century. The replies to Brandon Fuller’s original tweet reflect a variety of theories, while the Vox post attributes the shift to modern-day city dwellers taking up more space than their 1910 equivalents, who lived closer together at every socioeconomic tier.
Source: Brandon Fuller
Yglesias posits that New York’s residential square footage must continuously increase to maintain the city’s population density, but his diagnosis of the situation, while not wrong, misses a more fundamental dynamic: New York is a global city, and the primary purpose of global cities is not residential. Yes, these cities have (and need) huge populations, but their main role is an economic one. Manhattan is a market, a production hub, and a control center for the world economy above all else; the population that lives on the island must increasingly support those functions and their side effects, such as tourism, or move elsewhere. In short, space needs to be made for capital to flow through New York freely, in all its forms, and the opportunity cost of allocating that space to anyone who doesn’t somehow contribute to that free flow is far greater in 2010 than it was in 1970 or 1930. Thus, while New York’s current population takes up more square feet per capita than their predecessors, they also pay a much higher price for those square feet.
In the United States, a loss of central city population density has historically indicated economic decline, as it did for decaying Rust Belt cities in past decades. In present-day New York, lower densities indicate the opposite, although Manhattan also went through its own period of hollowing out in the 1960s and 1970s. Again, Yglesias is not wrong. People do take up more square feet in New York than they used to. Suburbanization, housing reform, and automobile ownership all put upward pressure on the amount of living space people expect and the minimum amount of space that is legal in US cities. These forces have reduced population density, but they are part of the global city’s broader narrative: It’s no longer necessary to cram as many people as possible into urban centers, as it was during the industrial age, and while there’s still plenty of labor needed to keep the world spinning, the heart of the global city is the least optimal site for most of that labor. In post-industrial cities, the function of housing has changed accordingly (see this NY Times article describing a Lower East Side apartment’s layout in 1910 and today), and its ideal tenant is one who plays some part, directly or indirectly, in the commanding heights of the global economy—a mode of life that usually requires more breathing room.
Apple’s announcement of the iPhone 6 last week, like every other product it unveils, highlights an easily neglected truth: The algorithms, bits, clouds, streams, and light beams—the elements of today’s most inspiring technological wonders—all run on hardware, though the inscrutability of those elements, and the metaphors by which we seek to know them better, conceal the underlying materiality that makes them possible, the tangled mess of wires behind the (proverbial) television set. Apple’s devices disappear seamlessly behind their glowing rectangular screens when used, but are nonetheless the exceptions that prove a widely accepted rule: Information should take up as little space as possible in the physical world. What used to fill shelves and file cabinets and clutter our houses and offices should part from its physical body and ascend into the Cloud. In this narrative, if not in reality, hardware is becoming less important all the time, and Apple, by providing the lightest and sleekest portals to that alternate universe of information, light, and sound, has managed to make the most socially important hardware of all.
Devices like the iPhone and iPad, in fact, form a graphical user interface for the physical system that delivers a tweet from one person’s fingertips to another’s eyes. Contrary to the popular narrative, the “cloud” that enables such light travel at the personal level is anything but light in the aggregate. A mass of true hardware, from routers to data centers to fiber-optic cables to cell towers, is the behind-the-scenes machinery that makes the internet tick. Andrew Blum describes this condition vividly in Tubes, his exploration of the Information Age’s hidden infrastructure, and I’ve addressed it previously here, here and here.
Kazys Varnelis recently reflected upon the data center’s status as the architectural symbol of network culture. Comparing data centers to factories, the buildings that most closely embodied the Industrial Revolution, he writes that “factories served as conspicuous symbols of power and modernity” while “data centers strive for invisibility.” The traditional factory gave some indication of its function and its role in society through its size, its outward appearance, and its location (usually near or within the city). Its presence, like that of the bridge or skyscraper, was often striking and dramatic. Data centers, Varnelis writes, are the opposite, housed in the “familiar, anonymous architecture of the big shed,” situated outside of urban centers and rarely even seen, much less noticed. Not only does the reality of Information Age technology differ greatly from its user-facing mythology; the design of its various layers reinforces the myth at the expense of the reality.
Frommer’s quip captures the essence of the “hardware problem” that reaches far beyond the smartphone, although Apple’s flagship device is a good starting point in the effort to understand the broader problem. In short, the issue is this: Hardware improves at a much slower pace than the exponential improvement of the information flow it makes possible. As a result, hardware is the main bottleneck that limits what our technology can accomplish, even when it’s all on an upward trajectory. In the iPhone, Apple solved a thousand problems that we didn’t yet know we had, but the constraints on its battery life and network connectivity still limit our access to its power in a way that harshly contrasts with the enormous impact of its software. That impact, so total in one domain, does not necessarily extend to the heavier and less pliable layers of reality: Recall the boss of Lena Dunham’s character in Girls, who jokes that she won’t sue him because there’s no app for doing so.
We’re trained to underrate hardware’s importance by a constant cultural emphasis on problems with fast, scalable solutions, like social networking, search optimization, or music streaming. The thornier problems, like the lawsuit example in Girls, are either treated as constants or ignored. The iPhone’s short battery life is obvious to its myriad users, but more subtle hardware problems, invisible as the data center in the woods, are the “unknown unknowns” (to quote Donald Rumsfeld) that we don’t even know we haven’t solved because we’ve focused our attention everywhere else.
Extending the metaphor beyond consumer devices, this hardware problem pervades the present-day urban landscape. Countless apps, open data portals, and smart city agendas promise to revolutionize or save cities, while the costly infrastructure upon which those solutions depend—bridges, roads, and power grids—steadily decays or requires increasing maintenance just to keep working. “Hacking” the city, or hacking anything, is a form of arbitrage that yields something for nothing by exploiting an asymmetry in information, but a massive substrate lies below those hacks that requires true work, in the mechanical sense of the word, to improve. Hannah in Girls can’t sue her employer using an app, nor can the Port Authority rebuild the functionally obsolete Goethals Bridge with anything we could call a hack. All the brilliant attention focused on such hacks and shortcuts at the expense of the underlying work ensures that we’ll keep running increasingly sophisticated apps on perpetually dying phones.
Ribbonfarm guest post #3 is up (and has been for a week). Please read it. I will probably spin off multiple future Kneeling Bus posts from what I wrote there. Let me know what you think.