Creative Nostalgia

“…each block is covered with several layers of phantom architecture in the form of past occupancies, aborted projects and popular fantasies that provide alternative images to the New York that exists.” 

-Rem Koolhaas

The commentary sphere, having its center of gravity in New York, is rarely more fluent than when it’s churning out prose about its most familiar city. One favorite topic in that category, becoming more popular all the time, is the New York Death Certificate, in which the author laments the city’s seizure by the megarich and the concurrent decline of culture, grit, Times Square pornography theaters, and the “real New York” that for many young bloggers is a product of imagination more than experience. It takes a lot for one of these pieces not to be tired or uninteresting, even when passionately argued by a person who lived both versions of the city, but some, like David Byrne’s, are certainly better than others.

Source: nycgo.com

Manhattan is overrun by a flawed brand of capitalism, though, and it does stifle creative production in ways that it didn’t 40 years ago. These are real problems, which is why the think pieces keep accumulating. At the heart of every essay proclaiming the decline of a great old New York and its replacement with a soulless playground for the wealthy, however, lies a grave mistake: comparing the present version of a city (or the present version of anything) with a younger incarnation of itself, and using one as a foundation for critiquing the other—sending nostalgia to do serious work for which it’s not equipped. Is any prior chapter of New York’s history so faultless that we can claim it was simply better? No, we probably just remember it that way.

If it seems like I’m dismissing nostalgia altogether, let me be clear: Nostalgia employed properly is a powerful force. Sanford Kwinter (a constant reference point for me) explains this best: “It is customary today…to dismiss points of view and systems of thought as ‘nostalgic’ whenever they attempt to summon the nobility of past formations or achievements to bear witness against the mediocrities of the present.” This summoning, of course, is precisely what the “New York is dead” school attempts, but those critics’ shortcoming is that they stop there. Kwinter continues by saying that memory is an act of creation, one that reaches beyond the actual limitations and faults of the past. “The antidote is the flexibility afforded by controlled remembering, not only of what we were but, in the same emancipatory act, of what we might be,” he writes. Choosing among the idealized aspects of the past and the advancements contained in the present, we can stitch together a notion of a realistic but improved version of the city we inhabit today.

Last week I saw a few comedians perform at Madison Square Garden. I couldn’t stop thinking about the density of narrative and meaning within that space: the historic preservation movement accelerated by the old Penn Station’s demolition; the symbolic significance of playing or performing in the Garden (Bruce Springsteen playing ten consecutive concerts at MSG, or Kobe Bryant scoring 61 points there); or the continuous arrival and departure of train passengers for suburban New Jersey and Kansas directly beneath the basketball court (as opener Hannibal Buress pointed out during the show I attended). All of these stories coexist in our culture’s collective memory, almost haunting the blocks that the arena occupies. Every other block in New York can lay claim to something similar, although few are inhabited by narratives as potent as the Garden’s, and the city’s blocks each house failures and imagined realities as well as actual events, as the Rem Koolhaas quote above reminds us. New York, like any city, is constantly being created, imagined, destroyed, and rebuilt. With such turbulent change surrounding us, how could we believe for even one second that any past or present version of the city is frozen in place, or that we’re a passive audience to it?


Flatness

“Two extremes of redundancy in English prose are represented by Basic English and James Joyce’s book Finnegans Wake. The Basic English vocabulary is limited to 850 words and the redundancy is very high. This is reflected in the expansion that occurs when a passage is translated into Basic English. Joyce on the other hand enlarges the vocabulary and is alleged to achieve a compression of semantic content.”

-Claude Shannon

Bandwidth is a late machine-age term that helps illuminate the millennia of technology and culture that preceded its coinage. The definitions of bandwidth vary, but its most basic meaning is a channel’s capacity to carry information. Smoke signals and telegraphs are low-bandwidth media, transmitting one bit at a time in slow succession, while human vision transmits information to the brain at a much faster rate. The past century has yielded tools for measuring bandwidth and quantifying information (see Claude Shannon) as the channels for carrying that information have advanced rapidly.
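Shannon’s framework gives “a channel’s capacity to carry information” an exact form. As a quick sketch, his channel-capacity result (the Shannon–Hartley theorem) bounds the bit rate of any noisy channel:

```latex
% Shannon–Hartley theorem: the capacity C, in bits per second, of a
% communication channel with bandwidth B (in hertz) and signal-to-noise
% ratio S/N. Wider, cleaner channels carry more information per second.
C = B \log_2\!\left(1 + \frac{S}{N}\right)
```

By this measure a telegraph line and a fiberoptic cable differ only in degree: both are channels, one vastly wider than the other.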

Source: Ceasefire

In any era, but never more so than now, the landscape of existing technology is a palimpsest in which the cutting-edge, the obsolete, and the old-but-durable all coexist as layers of varying intensity and visibility. New, unprecedented means of information exchange and communication are invented constantly, while their older equivalents live on long after they’ve stopped being state of the art. Information reaches each of us—and often assaults us—through a multitude of high-bandwidth and low-bandwidth channels, some of which we permit to speak to us, and some of which do so uninvited. Sitting down to watch TV, checking one’s iPhone during dinner with a friend, or finding a quiet place to read a book all represent conscious choices to block certain channels and pay attention to others. Marshall McLuhan recognized that technologies in our environment have a rebalancing effect on our senses, writing that each medium is an “intensification, an amplification of an organ, sense or function, and whenever it takes place, the central nervous system appears to institute a self-protective numbing of the affected area, insulating and anesthetizing it from conscious awareness of what’s happening to it.”

Human attention, then, is a finite resource. A variety of criteria inform everyone’s small, constant choices about which media to focus on and which to tune out, and those choices often have little to do with their bandwidth, but today one thing is certain: Our own brains, not anything manmade, are the bottlenecks that limit how much information we can receive at once. The contemporary world offers as much space for storing information as we’ll ever need, and we can instantly send any amount of it to the other side of the planet. Well before either was the case, however, humans learned to ignore the features of our environments that we deemed irrelevant—the noise surrounding the valuable signals we actually wanted or needed to receive.

Claude Shannon’s quote above, from his Mathematical Theory of Communication, introduces a qualitative layer to the question of human bandwidth and the environments we seek: People move continuously through information-rich and information-poor environments and are affected differently by each. Basic English is “redundant,” meaning it’s a language that requires many words to convey even simple messages—and is therefore a language that few would choose to use for anything but utilitarian purposes. Finnegans Wake, at the opposite extreme, could not be richer in information or content, to the point that it can barely be compressed or summarized. In Shannon’s example, the rich, information-dense content of Joyce’s novel represents a higher quality of communication, and Basic English a lower quality, although the latter fulfills plenty of functional roles.
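This contrast is easy to make concrete: redundancy is exactly what general-purpose compressors exploit, so a compression ratio works as a crude, hands-on proxy for Shannon’s measure. Here’s a minimal sketch in Python (the repetitive sample text is invented for illustration, and random bytes stand in for the information-dense extreme):

```python
import os
import zlib

def compression_ratio(data: bytes) -> float:
    """Compressed size as a fraction of original size: lower means more redundant."""
    return len(zlib.compress(data, 9)) / len(data)

# Redundant, Basic-English-style prose: a tiny vocabulary, heavy repetition.
redundant = ("the man went to the place and the man got the thing "
             "and then the man went back to the place " * 40).encode("utf-8")

# Random bytes stand in for maximally information-dense content; real prose
# (even Finnegans Wake) falls somewhere between the two extremes.
dense = os.urandom(len(redundant))

print(f"redundant text: {compression_ratio(redundant):.3f}")  # a tiny fraction
print(f"random bytes:   {compression_ratio(dense):.3f}")      # ~1.0 or slightly above
```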

Low-information, redundant content has a flatness to it. It’s less interesting. The Residents expressed this a different way on their Commercial Album, which comprises 40 one-minute-long pop songs. The liner notes explain:

“Point one: Pop music is mostly a repetition of two types of musical and lyrical phrases, the verse and the chorus. Point two: These elements usually repeat three times in a three-minute song, the type usually found on top-40 radio. Point three: Cut out the fat and a pop song is only one minute long.”

Plenty of pop music, in other words, is redundant and can be compressed without losing anything. This might be too harsh and cynical a judgment, but it’s valuable as a polemic. Modern environments, from top-40 radio to architecture to fiction, are full of redundancy and thus thin on information. The ease of digital information storage and transmission helps explain why we can afford to be less economical with information than we were in the past, but getting used to redundancy, like getting used to a diet full of salt and sugar, reinforces our appetite for it and actually influences the types of information we produce. If the manmade world seems like a flatter place in the Information Age (not in the Thomas Friedman sense), this might be part of the reason.
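The Residents’ arithmetic can even be checked mechanically: compress one verse-and-chorus cycle, then compress the same cycle repeated three times, and the extra repetitions cost almost nothing. A toy sketch in Python (the verse and chorus strings are placeholders, not real lyrics):

```python
import zlib

# Placeholder verse and chorus; any sufficiently long strings will do.
verse = "I walked along the avenue beneath the flashing signs, " * 4
chorus = "and the radio keeps playing the same old song, " * 4

one_pass = (verse + chorus).encode("utf-8")
full_song = one_pass * 3  # the verse/chorus cycle repeated three times

print(len(zlib.compress(one_pass, 9)))   # cost of one verse + chorus
print(len(zlib.compress(full_song, 9)))  # barely larger: repetition is nearly free
```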

As communication technology improves, the argument periodically surfaces that face-to-face interaction and cities in general will become obsolete. Joel Garreau rebutted this argument a few years ago in his article “Santa Fe-ing of the World,” in which he praises the high bandwidth that physical proximity and direct experience afford. He writes, “Humans always default to the highest available bandwidth that does the job, and face-to-face is the gold standard. Some tasks require maximum connection to all senses. When you’re trying to build trust, or engage in high-stress, high-value negotiation, or determine intent, or fall in love, or even have fun, face-to-face is hard to beat.” Even the most advanced digital media, in other words, are limited compared to full sensory engagement with one’s environment—the digital, closer to Basic English than Finnegans Wake, is still a utilitarian solution to problems like distance more than it’s an ideal theater for the highest levels of human contact. As our reality becomes more automated and algorithmic, our truly complex, nuanced, information-rich activities will continue to justify their existence, while the flat and redundant will increasingly disappear into the digital. By recognizing this condition, we can learn to preserve the depth of the former instead of simplifying our reality for easier absorption into lower-bandwidth channels.


Room to Live

The map below depicts the striking decline in Manhattan’s population density between 1910 and 2010, capturing the desperate overcrowding of the tenement-era Lower East Side (where the Other Half lived) alongside the more even distribution of residents throughout today’s New York. Created by an NYU researcher and posted by Matt Yglesias on Vox, the data visualization raises the obvious question of why New York’s population thinned so dramatically over the past century. The replies to Brandon Fuller’s original tweet reflect a variety of theories, while the Vox post attributes the shift to modern-day city dwellers taking up more space than their 1910 equivalents, who lived closer together at every socioeconomic tier.

Source: Brandon Fuller

Yglesias posits that New York’s residential square footage must continuously increase to maintain the city’s population density, but his diagnosis of the situation, while not wrong, misses a more fundamental dynamic: New York is a global city, and the primary purpose of global cities is not residential. Yes, these cities have (and need) huge populations, but their main role is an economic one. Manhattan is a market, a production hub, and a control center for the world economy above all else; the population that lives on the island must increasingly support those functions and their side effects, such as tourism, or move elsewhere. In short, space needs to be made for capital to flow through New York freely, in all its forms, and the opportunity cost of allocating that space to anyone who doesn’t somehow contribute to that free flow is far greater in 2010 than it was in 1970 or 1930. Thus, while New York’s current population takes up more square feet per capita than their predecessors, they also pay a much higher price for those square feet.

In the United States, a loss of central city population density has historically indicated economic decline, as it did for decaying Rust Belt cities in past decades. In present-day New York, lower densities indicate the opposite, although Manhattan also went through its own period of hollowing out in the 1960s and 1970s. Again, Yglesias is not wrong. People do take up more square feet in New York than they used to. Suburbanization, housing reform, and automobile ownership all put upward pressure on the amount of living space people expect and the minimum amount of space that is legal in US cities. These forces have reduced population density, but they are part of the global city’s broader narrative: It’s no longer necessary to cram as many people as possible into urban centers, as it was during the industrial age, and while there’s still plenty of labor needed to keep the world spinning, the heart of the global city is the least optimal site for most of that labor. In post-industrial cities, the function of housing has changed accordingly (see this NY Times article describing a Lower East Side apartment’s layout in 1910 and today), and its ideal tenant is one who plays some part, directly or indirectly, in the commanding heights of the global economy—a mode of life that usually requires more breathing room.


Hacks and the Hardware Problem

Apple’s announcement of the iPhone 6 last week, like every other Apple product unveiling, highlights an easily neglected truth: The algorithms, bits, clouds, streams, and light beams—the elements of today’s most inspiring technological wonders—all run on hardware, though the inscrutability of those elements, and the metaphors by which we seek to know them better, conceal the underlying materiality that makes them possible, the tangled mess of wires behind the (proverbial) television set. Apple’s devices disappear seamlessly behind their glowing rectangular screens when used, but are nonetheless the exceptions that prove a widely accepted rule: Information should take up as little space as possible in the physical world. What used to fill shelves and file cabinets and clutter our houses and offices should part from its physical body and ascend into the Cloud. In this narrative, if not in reality, hardware is becoming less important all the time, and Apple, by providing the lightest and sleekest portals to that alternate universe of information, light, and sound, has managed to make the most socially important hardware of all.

Devices like the iPhone and iPad, in fact, form a graphical user interface for the physical system that delivers a tweet from one person’s fingertips to another’s eyes. Contrary to the popular narrative, the “cloud” that enables such lightness at the personal level is anything but light in the aggregate. A mass of true hardware, from routers to data centers to fiberoptic cables to cell towers, is the behind-the-scenes machinery that makes the internet tick. Andrew Blum describes this condition vividly in Tubes, his exploration of the Information Age’s hidden infrastructure, and I’ve addressed it previously here, here and here.

Kazys Varnelis recently reflected upon the data center’s status as the architectural symbol of network culture. Comparing data centers to factories, the buildings that most closely embodied the Industrial Revolution, he writes that “factories served as conspicuous symbols of power and modernity” while “data centers strive for invisibility.” The traditional factory gave some indication of its function and its role in society through its size, its outward appearance, and its location (usually near or within the city). Its presence, like that of the bridge or skyscraper, was often striking and dramatic. Data centers, Varnelis writes, are the opposite, housed in the “familiar, anonymous architecture of the big shed,” situated outside of urban centers and rarely even seen, much less noticed. Not only does the reality of Information Age technology differ greatly from its user-facing mythology; the design of its various layers reinforces the myth at the expense of the reality.

This “hardware problem” reaches far beyond the smartphone, although Apple’s flagship device is a good starting point in the effort to understand it. In short, the issue is this: Hardware improves at a much slower pace than the exponential improvement of the information flow it makes possible. As a result, hardware is the main bottleneck that limits what our technology can accomplish, even when it’s all on an upward trajectory. In the iPhone, Apple solved a thousand problems that we didn’t yet know we had, but the constraints on its battery life and network connectivity still limit our access to its power in a way that harshly contrasts with the enormous impact of its software. That impact, so total in one domain, does not necessarily extend to the heavier and less pliable layers of reality: Recall the boss of Lena Dunham’s character in Girls, who jokes that she won’t sue him because there’s no app for doing so.

We’re trained to underrate hardware’s importance by a constant cultural emphasis on problems with fast, scalable solutions, like social networking, search optimization, or music streaming. The thornier problems, like the lawsuit example in Girls, are either treated as constants or ignored. The iPhone’s short battery life is obvious to its myriad users, but more subtle hardware problems, invisible as the data center in the woods, are the “unknown unknowns” (to quote Donald Rumsfeld) that we don’t even know we haven’t solved because we’ve focused our attention everywhere else.

Extending the metaphor beyond consumer devices, this hardware problem pervades the present-day urban landscape. Countless apps, open data portals, and smart city agendas promise to revolutionize or save cities, while the costly infrastructure upon which those solutions depend—bridges, roads, and power grids—steadily decays or requires increasing maintenance just to keep working. “Hacking” the city, or hacking anything, is a form of arbitrage that yields something for nothing by exploiting an asymmetry in information, but a massive substrate lies below those hacks that requires true work, in the mechanical sense of the word, to improve. Hannah in Girls can’t sue her employer using an app, nor can the Port Authority rebuild the functionally obsolete Goethals Bridge with anything we could call a hack. All the brilliant attention focused on such hacks and shortcuts at the expense of the underlying work ensures that we’ll keep running increasingly sophisticated apps on perpetually dying phones.


Ribbonfarm: Civilization and the War on Entropy

Ribbonfarm guest post #3 is up (and has been for a week). Please read it. I will probably spin off multiple future Kneeling Bus posts from what I wrote there. Let me know what you think.


Ribbonfarm: The Wave of Unknowing

My second guest post at Ribbonfarm went up last week. It’s rife with iPhone and surfing metaphors. Enjoy!


The iPhone Perspective

I got an iPhone two months ago, after years of pretending that I would never own one. Before the iPhone I had a Blackberry (from my employer) that I used infrequently, as well as a regular old cell phone that I took everywhere. I knew I’d eventually get an iPhone whether I wanted to or not, because the world around me would orient itself entirely toward smartphones, at which point not having one would qualify me as “that guy.”

I bring any new technology into my life with caution. The more potent and life-changing the technology, the more caution I use. An iPhone, of course, is among the most powerful and invasive devices one could ever integrate into one’s sensory experience, something that can truly affect the way you think, and it’s no exaggeration to say that I was frightened by what owning an iPhone might do to me. Marshall McLuhan said that technological extensions are also amputations—which faculties was I about to lose?

Perugino’s perspective, before we had iPhones

Whether it’s true or not—and it probably is—America’s narrative about itself is that the iPhone represents the greatest height we are capable of reaching as a society, like the moon landing for a different generation. So why not participate? I saw the purchase as an experiment. By using an iPhone, I’d find out exactly how it affects the human sensorium: what gets better, what gets worse, and how the device redirects perception. No matter how much discipline you have, a smartphone is going to have some nontrivial impact on your relationship to the world around you. I knew it was important to think about this early, before I adjusted to the iPhone and stopped noticing its effects.

The most interesting quality of smartphones, to me, is their redefinition of individual perspective. Again, Marshall McLuhan’s observation helps here: The iPhone is a kind of eye, although it doesn’t fully replace (“amputate”) the human eye. It augments or extends it. More than just visually, the iPhone is something we look into and through to see the world around us. Each app presents a different kind of vision—a different Umwelt. Yelp, for example, lets the user “see” restaurants, bars, and other places of business, but leaves everything else invisible.

Media in the previous technological era, during which McLuhan wrote and lived, revolved around the television, radio, novel, and newspaper. Each required a centralized infrastructure that delivered standardized content to mass audiences. Everyone read the same paper and watched the same shows and movies. The economics of publishing, printing, and producing all encouraged this. Every living room’s TV presented the same channels in the same static form, with slight regional variations and the ability to switch from channel to channel.

When I started using my own iPhone, I kept thinking about the Bat-Signal – the searchlight that projected Batman’s logo into the night sky as a distress call – and how such a signal makes no sense in the iPhone era. My broader question was this: How do you send a message when you want to be certain it’s received? Fifty years ago, everyone was looking in the same directions. If you needed Batman, you shined an image into the sky that everyone saw, so to speak. Similarly, important messages appeared in the newspapers that everyone read and on the TV channels that everyone watched.

The Bat-Signal is no longer how you communicate. The algorithmic personalization of the smartphone and internet, combined with the long tail of choices they make possible, ensure that no two people see the same version of the same thing. Facebook and Twitter are read more widely than the most circulated newspaper ever was, but the Facebook you see and the one I see are almost completely different because the content comes from our own lives, not a producer or publisher. Even the media of the last century—movies, TV, and recorded music, all still as popular as ever—are chopped up and repackaged via a multitude of digital channels, ensuring that everyone receives them somewhat differently.

An iPhone is simply hardware that reinforces this solipsistic mode. Your phone has different apps than mine, so you can see things that I can’t see, and vice versa. If Batman got a distress text message instead of a Bat-Signal, nobody else would know. Each smartphone is a unique repository of its owner’s memory, attention, and sensory capacity. We still have eyes and a sky to look up at, but important messages are no longer projected onto that sky because the bulk of attention is directed inward, and the smartphone is how we receive as well as transmit messages in that new interior space. These messages are like Bat-Signals projected in ultraviolet light, invisible to the human eye without special tools to help it see.

