After the Gold Rush

“From this moment on, continuity no longer breaks down in space, not in the physical space of urban lots nor in the juridical space of their property tax records. From here, continuity is ruptured in time, in a time that advanced technologies and industrial redeployment incessantly arrange through a series of interruptions, such as plant closings, unemployment, casual labor, and successive or simultaneous disappearing acts. These serve to organize and then disorganize the urban environment to the point of provoking the irreversible decay of neighborhoods, as in the housing development near Lyon where the occupants’ “rate of rotation” became so great—people staying for a year and then moving on—that it contributed to the ruin of a place that each inhabitant found adequate…”

-Paul Virilio, “The Overexposed City”

North Dakota has urban decay that could literally drive away. “There are empty campers everywhere,” says one North Dakotan interviewed for this article, describing the abandoned trailers that litter the shale towns of a thinly populated state overwhelmed this past decade by a fracking-induced flood of migrant labor. The speed of the initial boom and the frantic need for labor meant neither the oil industry nor the workers could afford to wait for permanent settlements to develop, lest they leave money on the table, so, like the gold rushers of yesteryear, the arriving workers stayed in trailers, shacks and “man camps” while the real estate market tried to catch up.

The present slump in oil prices, having reduced the viability of fracking, means that last year’s fully-employed roughnecks are this year’s surplus labor, and so is their quickly-built housing, mobile as well as stationary.

Natural resources have always stirred up strange eddies in economies and cultures, from tulip bubbles to gold rushes, but information and money move faster in the destabilized contemporary version (that Nassim Taleb calls Extremistan) and those eddies are increasingly the choppy texture of the landscape rather than breaks in a smooth surface. Global commodity prices can make a local commodity effectively disappear: In North Dakota, cheap oil stays underground when drilling through shale to get it becomes a money-losing operation, despite no natural or human-caused change in the local environment. In such an event, even houses with wheels—the nomadic huts of the modern industrial world—become sunk costs and can’t travel fast enough to simply take advantage of the next opportunity. Instead their inhabitants (drivers?) scatter elsewhere, in planes and cars, not necessarily to the next oil boom site (which doesn’t just pop up as a replacement) but to a multitude of other gigs and industries.

The Virilio passage above instructs us that there’s a dimension of reality that moves faster all the time, and another that stands still by comparison. The latter encompasses cities, housing, and usually the people who live there, while the former comprises the information that sets the values and prices that determine how much those houses are worth, what jobs will exist in those cities, and ultimately whether it’s time to leave those places entirely. Continuity breaks down in time, not space, through “a series of interruptions”—the residents of Watford City, North Dakota were in the wrong place, then the right place, then the wrong place again, and it may yet become the right place again, depending on the schedule of said interruptions.

Human settlements in Taleb’s Extremistan are the residue that remains when the economic tide goes out, and as the speed of information accelerates, those ruins—the lifeless objects that remain when their usefulness goes away—will keep looking fresher than ever and might even seem brand new.

Digital Wastelands, Societies of Control

“The story of software eating the world is also the story of networks eating geography,” writes Venkatesh Rao in Breaking Smart, extending Marc Andreessen’s well-known pronouncement. If I reduced the perennial themes of my own blog to one thesis statement, it could be a variant of that: the way infrastructural networks, often digital but not always, channel and intensify the traffic of human activity that would have to settle for less efficiency anywhere else (but would probably gain in humanity what it lost in speed).

I constantly compare the pervasive Social Network—Facebook, Twitter, and everything else—to physical urban space because certain dynamics operate so similarly in both environments. It should be obvious by now that the agora, the marketplace, and so many of the city’s classical functions have migrated partially or entirely to the information-saturated ether that language still hasn’t evolved nearly enough words to describe.

The interplay between the meatspace world we’re seemingly (but not really) always trying to leave behind, and the cyberspace where we’re ending up, is far from a one-to-one mapping. Deleuze explained this distinction in “Postscript on the Societies of Control” well before he could have observed its full realization, describing what he recognized as our collective passage from disciplinary societies to societies of control. The former are characterized by the enclosure, the physical space that literally contains its subjects—schools, prisons, factories, or even offices. Anyone inside the enclosure follows its rules, feels its presence, and escapes from it by physically leaving it. The traditional city is a kind of enclosure, albeit a porous one.

After the enclosure, in the world already apparent to Deleuze in 1992, we find “ultrarapid forms of free-floating control that replaced the old disciplines operating in the time frame of a closed system.” The average reader may not find Facebook quite this oppressive, but the contours of these new digital landscapes are correctly anticipated in Deleuze’s essay, particularly his observation that “in the disciplinary societies one was always starting again (from school to the barracks, from the barracks to the factory) while in the societies of control one is never finished with anything.” You can hear a more optimistic phrasing of this dynamic from Peter Thiel when he says that Facebook has “solved the identity problem.” Either way, you can no longer reinvent yourself by leaving town. Your profile travels with you, as does your credit score and a variety of other attributes by which your weak ties might evaluate you.

When we observe how humans behave in digital social space, however, the differences between physical public space and its digital counterpart become clear. The limitations that the physical world imposes, and the social norms that only emerge through face-to-face contact, yield a specific range of behavioral modes that diversify and run wild online, once freed from those constraints. When we compare networks to wastelands or failed states, this is what we’re talking about: Environments that fail to adequately punish certain antisocial actions or reward desirable ones, a tendency most extremely on display in any online article’s comments section.

On the social networks that offer the freest range of expression (Facebook, Twitter, and dating apps, again), three principal modes of behavior emerge: humans behaving like humans, humans behaving like machines (Hackers/Turks), and machines behaving like humans (Bots). A user’s impression of the quality of experience on a given network or platform usually derives from the mixture of these three behaviors that it encourages.

The first and third categories—humans being human, and machines being human—are the more straightforward. The former is the province of anyone who hasn’t fully submitted to the constraints of the digital milieu or consciously adapted to it—in other words, people who try to act online like they would offline. This is a reasonable approach, generally pursued by the majority of network users (normal people), although there’s no obvious structural reason why that should be the case.

The latter mode, of course, is the Bot—software that imitates human behavior with various degrees of success. I’ve discussed bots at greater length here. The nature of the digital, and the actions allowed or restricted, are what enable bots and humans to mimic each other with any success. Narrower restrictions on expression, like Twitter’s 140-character limit, create opportunities for bots to mimic humans more successfully, while looser restrictions, the ultimate example of which is real life in meatspace, foil bots or at least limit their spread.
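
To make that concrete, here’s a minimal sketch (Python, with an invented corpus) of how crude a passable bot can be under a tight format: a bigram Markov chain that random-walks its training text and stops short of 140 characters. The shorter the format, the sooner the bot gets to stop talking, and the fewer chances it has to betray itself:

    import random

    # A toy bot, sketched for illustration: a bigram Markov chain trained
    # on a tiny invented corpus, generating posts under a 140-character
    # cap. A real bot would train on scraped text and post via an API;
    # nothing here touches a real platform.
    CORPUS = (
        "the last mile is where machines leave off and humans take over "
        "the network eats geography and the city eats the network "
        "humans behave like machines and machines behave like humans"
    ).split()

    def build_chain(words):
        """Map each word to the list of words observed to follow it."""
        chain = {}
        for a, b in zip(words, words[1:]):
            chain.setdefault(a, []).append(b)
        return chain

    def generate(chain, limit=140):
        """Random-walk the chain, stopping before the character limit."""
        word = random.choice(list(chain))
        out = word
        while word in chain:
            word = random.choice(chain[word])
            if len(out) + 1 + len(word) > limit:
                break
            out += " " + word
        return out

    print(generate(build_chain(CORPUS)))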

The middle category is the most interesting: humans behaving like machines. The digital presents a simplified landscape whose unambiguous contours reward a correspondingly simplified approach. Built out of code—1s and 0s—a world like Facebook or OKCupid offers opportunities for hacking to achieve quantifiable goals, like maximum followers, maximum Likes, or maximum matches. Since each platform is optimized for certain things, these environments don’t present the difficult complexity that might overwhelm or confound similar strategies in the real world.

Of course, there’s a high-level and a low-level approach to behaving like a machine on the internet. We’ll call the high level that of the Hacker, the sophisticated machine imitator who exploits a quantified, rules-driven world to achieve goals that are harder to achieve in the messy physical world. Think of Tinder “hacks” like the guy who always opened his exchanges with the same line, or the Twitter user who follows known rules to get more followers.
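
Strip away the interface and the Hacker’s method is ordinary A/B testing. A minimal sketch, with invented openers and invented numbers, of how the same-line-every-time strategy gets selected in the first place:

    # The Hacker's method, sketched as plain A/B testing. The openers and
    # their statistics are invented; the point is the bookkeeping, not
    # the lines themselves.
    openers = {
        "Hey, how's your week going?": {"sent": 120, "replies": 18},
        "Settle a debate: tacos or ramen?": {"sent": 115, "replies": 31},
        "Your dog looks like good company.": {"sent": 98, "replies": 27},
    }

    def reply_rate(stats):
        """Replies per message sent: the Hacker's single success metric."""
        return stats["replies"] / stats["sent"] if stats["sent"] else 0.0

    # Rank the variants, then commit to the winner.
    for line, stats in sorted(openers.items(),
                              key=lambda kv: reply_rate(kv[1]),
                              reverse=True):
        print(f"{reply_rate(stats):.1%}  {line}")
    best = max(openers, key=lambda line: reply_rate(openers[line]))
    print("Keep using:", best)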

The Turk is the low-level imitator of machines: unable to navigate a more complex environment, he finds the limited domain of the network easier to use. He operates rather than creates. The Hacker and the Turk pursue the same end but for different reasons. The former wants to achieve specific objectives with greater reliability in a constrained domain; the latter wants to live in a world where his limited skills will get him farther. Both groups are finding ways to achieve better results than they would in meatspace.

The extent to which social networks read as failed states or wastelands is the extent to which they are dominated by humans acting like machines and machines acting like humans. The rest of us, who come to places like Facebook because we want another place to be human, will only thrive in these networks to the extent that they limit or restrict the Bot, the Turk and the Hacker from dominating that experience, and thus allow us to pretend that the digital is an extension of the real world rather than a simulation of it. The constant campaign that the network’s owners must wage to make this happen means that only the most carefully tended gardens will sustainably attract activity that most of us could call human, while the rest of those networks will, at best, merely attract “users.”

Tinder & the Last Mile

The “last mile” has become an increasingly popular buzzword as transportation systems have grown faster and more efficient. Consider the following examples of the Last Mile Problem:

  • FedEx and UPS combined to rack up $2.8 million in New York City parking tickets during the first quarter of 2013, accepting those fines as a cost of doing business.
  • Reaching most US airports from their corresponding cities is a notoriously painful process that adds significantly to the cost of air travel, in the form of a long transit trip or an expensive cab ride.
  • Amazon’s fastest broadly available shipping option still only guarantees delivery the following day, although the merchandise rarely spends more than a few hours moving between its origin and destination cities.

In all of these cases, an efficient transportation system is bedeviled by its relationship to the outside world—by the origins and destinations where the actual people are located. The passengers, inconveniently, don’t want to live as close as possible to the infrastructure of high-speed travel, nor, for the most part, can they. The air transportation network and the FedEx supply chain operate as closed systems that move people or goods smoothly between their hubs and spokes, but the final legs of those trips, on city streets lined with homes and offices, are far more expensive than the long hauls. In the case of air travel, the carriers avoid the problem altogether, leaving it for others to solve—they wouldn’t be profitable if they took passengers home from the airport. The traveler’s jarring transition from the airside to the groundside accounts for much of the inconvenience associated with flying. FedEx, on the other hand, is stuck with the last mile, hence their millions in parking tickets and the consequent loss in profit margin.


The Last Mile Problem arises at the interface between two systems: the messy, irrational human environment and the rationalized networks that are optimized to achieve a few goals as efficiently as possible. I’ve written before about how those two systems find themselves at odds in the modern world, with the human city actually getting in the way of the “machine city” that comprises airports, seaports, bridges, and freeways, and making it harder for those networks to accomplish their goals. For infrastructure ostensibly built to serve people, the people themselves pose a surprising obstacle. In a broad sense, the Last Mile Problem is the problem of humans being humans and not machines.

Mankind has managed to annihilate distance quite successfully since antiquity, and the Last Mile Problem didn’t exist until the last mile was no longer the only mile that most people had to think about. The last mile, therefore, is relative: The distance we define as the last mile becomes shorter and more frustrating as faster travel across greater distances overcomes the physical gap between ourselves and the things we want. Software has seemingly solved the Last Mile Problem in major cities by enabling the on-demand economy to flourish and become more affordable, yet we call the last mile a problem more frequently now than ever before, because our expectations have risen just as fast and the last mile is now measured in minutes. Waiting a half hour for a car service is deemed intolerable by those accustomed to getting picked up in 3 minutes via an app, although the former was a luxury not long ago and still is for most people. Likewise, the ability to order any product online and have it delivered the following day, unthinkable even a decade ago, now represents another problem: A day is too long, 24 hours is the new last mile, and guaranteeing same-day delivery is the next problem that needs to be solved.

The smartphone revolution, and the app ecosystem that has since emerged, suggest that a version of the Last Mile Problem extends beyond the transportation domain. Greg Lindsay, in his recent exploration of mobile dating apps and their effect on urban social life, asks whether apps like Tinder and Happn are augmenting face-to-face interactions or merely competing with them. Put another way, can we live well inside and outside of these “systems” simultaneously, or must we gradually abandon one for the other?

An app like Tinder may or may not improve our lives, but it certainly makes one aspect of dating more efficient, letting us sort through 100 potential mates in the time we’d spend talking to one or two people at a bar. Like the Acela Express, or many other transportation systems, Tinder achieves its efficiency by forcing its users to adhere to the constraints of the platform (the Acela train makes very few stops; Tinder requires you to swipe right or left instead of miring yourself in time-consuming, ambiguous signaling). By externalizing the steps in the process that they don’t streamline, each platform creates its own Last Mile Problem. The Acela will get you to Union Station in DC quickly, but you’ll be stuck in traffic taking a cab the rest of the way to your house in suburban Maryland. Similarly, Tinder will match you with plenty of people who also liked your profile photo, but the rest—the last mile—is up to you, and from that point onward, the process is often no better than what it replaced.

The last mile, then, is always located where machines leave off and humans are forced to take over. Unfortunately, the more time we spend in the controlled system, the more difficult the transition, since the rules of the platform require us to behave more like machines than humans, mindlessly swiping and clicking or passively riding before suddenly needing to reawaken all the faculties that have fallen asleep; navigating these platforms and the world outside them takes entirely different skills. Marshall McLuhan called every new medium an extension of man, but he also recognized that each extension involved an amputation of what it replaced. Few will argue that the smartphone hasn’t “amputated” certain mental capacities, or at least allowed them to atrophy.

Lindsay’s examination of dating apps concludes by acknowledging that the most valuable human interactions still occur in person, regardless of which apps or technologies helped to facilitate those interactions. In other words, the last mile, however defined, is still where the most important human affairs happen (and for millennia, the last mile was where everything happened). Yes, the last mile is a frustrating problem in transportation, but the space where we traverse that last mile is the space we actually inhabit, and it’s worth protecting from the various engineered systems with which it competes for our attention and resources. Whether a measure of distance, time, or social interaction, a last mile will always exist where our technological extensions end and our embodied selves begin. As that technology advances and the relative length of the last mile recedes, approaching invisibility, we humans will finally find that the last mile is just…us.

Tribal Twitter

Plagiarism, in its many forms, ranks among the unquestionable (and unpardonable) sins that the educated classes recognize and punish. Deliberate copying, or even reuse of material in certain unsanctioned ways, appears on the short list of accepted career-ending offenses in the creative domain, a list that mostly contains wrongs we’d agree are less tolerable than repurposing a sentence someone else wrote.

High-profile plagiarism incidents in the press and academia gain plenty of negative attention and carry massive downside, but Twitter and the broader internet have made the reproduction of phrases, jokes and sentences easy enough for the masses that doing so has become an irresistible path to an often-minuscule, but sometimes decent, reward. Luke O’Neil explores this phenomenon in the Washington Post, documenting the successful Twitter one-liners that now echo eternally through the social network via easy copy/paste reproduction by hundreds of less clever users who take credit for those jokes. Unlike the celebrity plagiarists, O’Neil explains, these account-holders face few consequences (although some have become quasi-celebrities through the success of their unattributed aggregation). This theft is frequently too difficult to prove, and generally nobody cares what a low-profile Twitter account does, so the behavior flourishes. The word plagiarism, it seems, is like assassination—it can only refer to high-profile subjects.

If Twitter leads to widespread joke-thieving, then, the next question we need to ask is “So what?” And when a relatively new technology or medium aligns so well with human tendencies that it lets a certain behavior explode, the question after that is whether the new behavior is wrong or just something natural that the preceding environment had suppressed. To answer both questions, we need a more robust grasp of originality than we seem to have, and the technological milieu that has delivered us here might be the reason for this lack of clarity.

The age of mechanical reproduction (which hasn’t ended), in which the copying of text became easy and widespread, created a variety of opportunities to benefit from what others had written and published, with plagiarism on the most dishonest end of that spectrum. The ease of copying also created incentives that were at odds with creative production, and needed correction: Why labor over a novel or article that could be repurposed by a stranger and passed off as his own work? Music, which never absorbed the concept of plagiarism as completely as the written word, faced a starker transition: Recorded music, to the live musician, was just as direct a threat, and arguably more dangerous to that musician’s livelihood. Before art could be mechanically reproduced, its reproduction was in itself a creative act that deserved at least some credit. Afterward, that was no longer necessarily true, and social taboos developed against exploiting those new conditions unfairly. The nascent digital sphere, it seems, combines qualities of both eras, fostering the strange feelings that Twitter plagiarism and other phenomena are inspiring in us.

Returning to the questions above, O’Neil himself finds evidence that a “post-originality” movement is flourishing in digital space, as articulated by a victim of that movement he interviews: “My hunch is there’s a sizable chunk of people who don’t really grasp what plagiarism is or why it’s wrong, and they kind of regard Twitter and social media as this giant free-for-all where everybody’s just constantly taking and posting whatever they want from whoever they want.”

In statements like this, O’Neil’s piece conveys frustration—that anyone finds satisfaction in stealing a stranger’s joke, or that the digital environment enables sufficient plausible deniability to conceal the crime. As shameful as stealing tweets may be, however, Twitter is not academia, the publishing industry, or even a blog, and to rail against this behavior is to impose those domains’ valid models on a medium where they can’t be effective. Twitter has always, quite obviously, been a diverse, free, and illegible ecosystem swirling with ephemeral content, where hundreds often make the same original joke or comment without even knowing it. Even before the internet, it was not necessary to credit the (rarely knowable) source when repeating a joke in conversation. The notions of singular authors, originality, and authoritative versions will not easily be imposed on the medium without a great loss of vitality. Twitter, in this sense, is retribalizing man (returning us to premodern modes of being), as McLuhan believed so much electronic media would.

In the spectrum between honest content aggregation and deliberate plagiarism for personal gain, the unattributed reuse of digital content will continue and accelerate. Many of these reusers will be caught and shamed, many more will go ignored, and all who find some benefit from the act, regardless of their motives, will have received their reward in full. As post-original humans, they won’t know what they did wrong.

Post-Empire & the Brain

“Man is no longer man enclosed, but man in debt.”

-Gilles Deleuze

This NY Times reflection on dying shopping malls comes close to recognizing a few vital trends in American culture before reaching the simplistic, if still valid, conclusion that oversupply of retail space is killing the malls. The suburbanization of poverty and shrinking cities don’t arise in the discussion, while the growing gap between the affluent and poor is hinted at but not openly stated. The article does contribute to the vocabulary of junkspace by explaining what “power centers” are, but otherwise it neglects more questions than it asks or answers.

More than anything else, dead shopping malls reflect the lag between the fluid and flexible movements of culture, information, and capital, and the cumbersome physical bodies that those immaterial flows inhabit for a while before moving on to their next hosts. When consumer preferences and regional economies shift, they do not and cannot bring their built infrastructure with them, and that infrastructure becomes the residue that attests to the bygone era, visible fossils in the cultural shale. The longevity of artifacts doesn’t, however, correspond to their importance, and the consequent survival bias distorts our understanding of what mattered about a previous historical era and which forces are driving the present one.

Abandoned buildings provide unambiguous evidence of failures, of projects that crashed quickly or outlived their usefulness, of the mortality of so much that we undertake. But malls are products of midcentury America, when we benefited from more obvious signs that helped us navigate our environments and know what we were looking at: The middle class shopped at the mall, lived in the suburbs, worked in the city, kept their money in the bank. The quarterback of the football team was the coolest kid in school and two nation-states dominated world politics by almost all criteria imaginable. Turning on a TV or reading a newspaper didn’t explain this reality, but it gave you an excellent grip on mainstream sentiments. It was never all quite this simple, but the narrative was unified and at least broadly accurate.

The human environment has become less legible since those golden years, in the present epoch that Bret Easton Ellis calls “post-Empire.” The popular standards for failure and success, once so black and white, are muddled, while the evidence of either is invisible thanks to a host of new forces: debt, pharmaceuticals, reality television’s tongue-in-cheek celebration of the lowbrow, and reality’s omnipresent digital substrate that has its own physics and algorithmic morality despite always talking to reality’s surface layer. The failed mall of yesteryear might remain propped up by complicated credit today, just as insolvent banks entered zombie afterlives following the 2008 financial crisis—alive but hollow. Today, a walk down the proverbial Main Street reveals less than ever about what is actually happening on that street and in the broader economy that envelops it.

As information and money swirl more and more frantically in the ether, unmoored from their physical sources of existence, it’s hard to fault the human mind for its increasing bewilderment. As Daniel Smail has written, the brain doesn’t evolve nearly fast enough to keep up with its environment and is thus easily tricked by dissonance between observed conditions and their underlying truth, of which there is more all the time. Long gone are the days when nomadic hunter-gatherers detected the primary threats to their existence through their five senses, but your brain today—which is basically the same as theirs—doesn’t totally know that, no matter how much you remind it.

The subtlety of positive and negative outcomes and our uncertainty about what winning and losing actually entail make it easier than ever to welcome bad tradeoffs into our lives. A great deal of recent technological progress can be explained this way: a tangible, highly visible benefit paired with a cost that is subtle, hard to articulate, or borne in the long run. The automobile, for example, transported us with unprecedented speed and convenience but forced us to eventually rebuild our cities and society in its image; thus we accepted it. Countless smartphone apps and social networks coax us to move areas of our lives onto their platforms at the expense of our privacy, mental clarity, and even a degree of freedom. Our animal brains compel us to welcome these palpable goods that look like the simple windfalls they evolved to value in millennia past, but today those goods are just as often Trojan horses smuggling in debts that we’ll feel only after it’s too late to easily get rid of them.

Filters, Trolls & Bots

The “real world,” we’re told, is an increasingly meaningless construct, a product of nostalgia or a failure to adapt to the digital age. Real life is no longer confined to face-to-face interactions, nor is identity constructed only there, but also on Facebook, Twitter, and in all the spaces that devices create by continuously talking to one another. While their forerunners were static, these environments are dynamic. We can still escape from reality in a book, a newspaper article or a movie, as passive spectators, but we actively create and influence the networks that compete for that same attention now. Thanks to mobile devices, moreover, we’re never simply immersed in the digital and away from the physical world around us—we comfortably occupy both at once. The term “meatspace” has thus become necessary as the distinction between real and virtual has vanished. If “real” things can happen in the digital sphere too, then environments are better characterized by their corporeality than by their realness.

Despite all the similarities, though, digital platforms differ from traditional reality in this respect: It’s too easy to leave them. A single mouse click makes Facebook or Twitter disappear, at least for a while, meaning they’re less enveloping than work, school, a social gathering, or any other meatspace activity that we wish we could exit as quickly. Unlike Amazon, which just wants you to buy things, social networks like Facebook and Twitter need you to spend time using their platforms and plugging important parts of your life into them. Two-thirds of Facebook’s 1.35 billion active users log in every day, and Facebook’s future success depends on increasing that number. Since Facebook is competing with Twitter and other platforms for your time and attention, not to mention the possibility that you’ll put your computer away entirely, it becomes even more important to Facebook that you stay logged in.



So, back to that big difference between digital reality and meatspace reality: How do we react when reality makes us feel bad? In the digital version, we tend to log off; in meatspace, we must react in ways that are rarely as simple—“dealing with it,” if you will. Twitter and Facebook both understand the low barriers to disengagement, and put significant effort into filtering the reality they present to their users. Twitter gives you full responsibility to choose who you follow, and empowers you to make unwanted voices disappear from your feed. Facebook’s solution is less transparent and more effective, judging by its growing addictiveness and nine-figure user base: Algorithms filter the content appearing in your News Feed, guessing what you want to see better than you could and hiding what you don’t like before you find out that it exists. On Twitter, in theory, you might follow accounts that you actively disagree with, but that’s doubtful in practice, and doing so would probably make you use Twitter less. If you could make that annoying coworker disappear with a single click, you probably would, right?
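
Facebook’s actual ranking is proprietary, so any code can only gesture at the mechanism, but the paragraph above reduces to something like the following sketch: score each candidate post on predicted engagement, penalize predicted negativity, and quietly drop whatever falls below a threshold. Every field, weight, and number here is invented for illustration:

    from dataclasses import dataclass

    # A speculative sketch of engagement-driven feed filtering. The
    # fields, weights, and threshold are invented; real ranking systems
    # are proprietary and vastly more complex.
    @dataclass
    class Post:
        author_affinity: float   # how often you interact with this author, 0-1
        predicted_clicks: float  # modeled chance you engage, 0-1
        negativity: float        # modeled chance it upsets you, 0-1

    def score(post):
        """Higher means more likely to keep you scrolling."""
        return (0.5 * post.author_affinity
                + 0.5 * post.predicted_clicks
                - 1.0 * post.negativity)

    def filter_feed(posts, threshold=0.3):
        """Show only items scored above the threshold, best first."""
        kept = [p for p in posts if score(p) > threshold]
        return sorted(kept, key=score, reverse=True)

    feed = filter_feed([
        Post(0.9, 0.8, 0.1),  # close friend, pleasant: shown
        Post(0.7, 0.6, 0.9),  # close friend, upsetting: quietly hidden
        Post(0.1, 0.2, 0.0),  # weak tie, dull: hidden too
    ])
    print(len(feed), "of 3 posts survive the filter")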

Filtering reality has long been a goal of environmental design. Before the internet, shopping malls and suburban enclaves were built to exclude undesirable elements, reinforce social myths, and make people feel good enough to not leave. Digital environments achieve those same objectives at greater scale and less expense. In 2012, Facebook conducted a controversial experiment on a subset of users to better understand the emotional impact of its News Feed content and tune its algorithms accordingly. More recently, Adrian Chen reported on Facebook’s overseas “content moderators” who manually scan the network for offensive material and remove it before it appears in anyone’s feed. Both unsettling efforts, of course, aim to limit the negativity that Facebook’s users encounter on the site, maintain the network’s position as the ersatz spiritual center of contemporary life, and monetize that position as advertising revenue.

Like any designed environment, the internet’s largest platforms breed delinquent forces that actively subvert their designers’ goals. As if to remind us of the limitations of the social networks’ version of reality, two species have evolved and flourished in the digital ecosystem: the troll and the bot. Trolls—the antagonistic online personae of users who may actually be agreeable in person—distill the negativity that Twitter and Facebook work so hard to hide, and exploit gaps in the networks’ sanitizing measures to force that negativity upon their chosen targets. Like barbarians or guerrilla combatants, their intimate familiarity with the landscape and their decentralized organization enable them to stay one step ahead of the countermeasures that an incumbent power employs to stop them (the blocking and muting features, account deactivation, or more sophisticated filtering algorithms). Trolls inhabit Nakatomi space and make a mockery of the myth that multimillion-user networks can be scrubbed of unauthorized discourse. In their most extreme forms, trolls actually drive users away from their chosen social network altogether—and back to an environment where social norms and physical distance make trolling impossible.

Bots, less menacing than trolls and frequently even amusing, are also unwelcome in the digital landscape. Bots rarely represent real people, and thus expose the fallacy that social networks are places of firsthand human expression alone. Mining the internet for existing content, or posting according to programmed rules, bots add to the entropy that human work is always trying to reduce. A sophisticated bot can nearly pass the Turing test (or, in the case of @Horse_ebooks, a human can pass for a bot imitating a human), revealing yet another advantage that meatspace still holds over social networks and dating websites: We’re still getting plenty of essential information from face-to-face interactions that can be faked or misinterpreted in digital channels. John Robb has posited the future of Twitter as machine conversations between bots, finding one company already pursuing that course. If users must wade through increasing volumes of digital noise to find the signals they’re looking for, they’ll revert to more familiar environments where it’s easy to distinguish the two.
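
Robb’s scenario is easy to caricature in a few lines. A minimal sketch of two rule-driven bots in conversation, each reply generated by pulling a keyword from the other’s last message into a canned template (all of it invented):

    import random

    # Two rule-driven bots in "conversation": each reply pulls a longish
    # word from the other bot's last message and drops it into a canned
    # template. Everything here is invented for illustration.
    TEMPLATES = [
        "Interesting point about {}.",
        "Have you considered {}?",
        "I keep seeing {} everywhere lately.",
    ]

    def reply(last_message):
        """Generate a reply keyed off the other bot's message."""
        words = [w.strip(".?,") for w in last_message.split() if len(w) > 5]
        topic = random.choice(words) if words else "that"
        return random.choice(TEMPLATES).format(topic)

    message = "Thinking about infrastructure and networks today."
    for turn in range(4):
        speaker = "bot_a" if turn % 2 == 0 else "bot_b"
        message = reply(message)
        print(f"{speaker}: {message}")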

Twitter has bigger problems with bots and trolls than Facebook does, suggesting that a truly free and transparent platform is impossible without the entropic tendencies that more controlled platforms can suppress. The very existence of trolls and bots, however, is a vaccine that inoculates us against a greater threat: Not understanding the digital landscape we spend so much time in, and imagining it’s governed by more familiar forces.

Creative Nostalgia

“…each block is covered with several layers of phantom architecture in the form of past occupancies, aborted projects and popular fantasies that provide alternative images to the New York that exists.” 

-Rem Koolhaas

The commentary sphere, having its center of gravity in New York, is rarely more fluent than when it’s churning out prose about its most familiar city. One favorite topic in that category, becoming more popular all the time, is the New York Death Certificate, in which the author laments the city’s seizure by the megarich and the concurrent decline of culture, grit, Times Square pornography theaters, and the “real New York” that for many young bloggers is a product of imagination more than experience. It takes a lot for one of these pieces not to be tired or uninteresting, even when passionately argued by a person who lived both versions of the city, but some, like David Byrne’s, are certainly better than others.



Manhattan is overrun by a flawed brand of capitalism, though, and it does stifle creative production in ways that it didn’t 40 years ago. These are real problems, which is why the think pieces keep accumulating. At the heart of every essay proclaiming the decline of a great old New York and its replacement with a soulless playground for the wealthy, however, lies a grave mistake: comparing the present version of a city (or the present version of anything) with a younger incarnation of itself, and using one as a foundation for critiquing the other—sending nostalgia to do serious work for which it’s not equipped. Is any prior chapter of New York’s history so faultless that we can claim it was simply better? No, we probably just remember it that way.

If it seems like I’m dismissing nostalgia altogether, let me be clear: Nostalgia employed properly is a powerful force. Sanford Kwinter (a constant reference point for me) explains this best: “It is customary today…to dismiss points of view and systems of thought as ‘nostalgic’ whenever they attempt to summon the nobility of past formations or achievements to bear witness against the mediocrities of the present.” This summoning, of course, is precisely what the “New York is dead” school attempts, but those critics’ shortcoming is that they stop there. Kwinter continues by saying that memory is an act of creation, thus reaching beyond the actual limitations and faults of the past. “The antidote is the flexibility afforded by controlled remembering, not only of what we were but, in the same emancipatory act, of what we might be,” he writes. Choosing among the idealized aspects of past and the advancements contained in the present, we can stitch together a notion of a realistic but improved version of the city we inhabit today.

Last week I saw a few comedians perform at Madison Square Garden. I couldn’t stop thinking about the density of narrative and meaning within that space: the historic preservation movement accelerated by the old Penn Station’s demolition; the symbolic significance of playing or performing in the Garden (Bruce Springsteen playing ten concerts in a row at MSG, or Kobe scoring 61 points there); or the continuous arrival and departure of train passengers for suburban New Jersey and Kansas directly beneath the basketball court (as opener Hannibal Buress pointed out during the show I attended). All of these stories coexist in our culture’s collective memory, almost haunting the blocks that the arena occupies. Every other block in New York can lay claim to something similar, although few are inhabited by narratives as potent as the Garden’s, and the city’s blocks each house failures and imagined realities as well as actual events, as the Rem Koolhaas quote above reminds us. New York, like any city, is constantly being created, imagined, destroyed, and rebuilt. With such turbulent change surrounding us, how could we believe for even one second that any past or present version of the city is frozen in place, or that we’re a passive audience to it?

