Tinder & the Last Mile

The “last mile” has become an increasingly popular buzzword as transportation systems have grown faster and more efficient. Consider the following examples of the Last Mile Problem:

  • FedEx and UPS combined to rack up $2.8 million in New York City parking tickets during the first quarter of 2013, accepting those fines as a cost of doing business.
  • Reaching most US airports from their corresponding cities is a notoriously painful process that adds significantly to the cost of air travel, in the form of a long transit trip or an expensive cab ride.
  • Amazon’s fastest broadly available shipping option still only guarantees delivery the following day, although the merchandise rarely spends more than a few hours moving between its origin and destination cities.

In all of these cases, an efficient transportation system is bedeviled by its relationship to the outside world—by the origins and destinations where the actual people are located. The passengers, inconveniently, don’t want to live as close as possible to the infrastructure of high-speed travel, nor, for the most part, can they. The air transportation network and the FedEx supply chain operate as closed systems that move people or goods smoothly between their hubs and spokes, but the final legs of those trips, on city streets lined with homes and offices, are far more expensive than the long hauls. In the case of air travel, the carriers avoid the problem altogether, leaving it for others to solve—they wouldn’t be profitable if they took passengers home from the airport. The traveler’s jarring transition from the airside to the groundside accounts for much of the inconvenience associated with flying. FedEx, on the other hand, is stuck with the last mile, hence its millions in parking tickets and the consequent hit to its profit margin.

The Last Mile Problem arises at the interface between two systems: the messy, irrational human environment and the rationalized networks that are optimized to achieve a few goals as efficiently as possible. I’ve written before about how those two systems find themselves at odds in the modern world, with the human city actually getting in the way of the “machine city” that comprises airports, seaports, bridges, and freeways, and making it harder for those networks to accomplish their goals. For infrastructure ostensibly built to serve people, the people themselves pose a surprising obstacle. In a broad sense, the Last Mile Problem is the problem of humans being humans and not machines.

Mankind has been annihilating distance quite successfully since antiquity, and the Last Mile Problem didn’t exist until the last mile was no longer the only mile that most people had to think about. The last mile, therefore, is relative: The distance we define as the last mile becomes shorter and more frustrating as faster travel over ever greater distances closes the gap between ourselves and the things we want. Software has seemingly solved the Last Mile Problem in major cities by enabling the on-demand economy to flourish and become more affordable, yet we call the last mile a problem more frequently now than ever before, because our expectations have risen just as fast and the last mile is now measured in minutes. Waiting a half hour for a car service is deemed intolerable by those accustomed to getting picked up in three minutes via an app, although the former was a luxury not long ago and still is for most people. Likewise, the ability to order any product online and have it delivered the following day, unthinkable even a decade ago, now presents a new frustration: A day is too long, 24 hours is the new last mile, and guaranteeing same-day delivery is the problem that now needs to be solved.

The smartphone revolution and the app ecosystem that has since emerged suggest that a version of the Last Mile Problem extends beyond the transportation domain. Greg Lindsay, in his recent exploration of mobile dating apps and their effect on urban social life, asks whether apps like Tinder and Happn are augmenting face-to-face interactions or merely competing with them. Put another way, can we live well inside and outside of these “systems” simultaneously, or must we gradually abandon one for the other?

An app like Tinder may or may not improve our lives, but it certainly makes one aspect of dating more efficient, letting us sort through 100 potential mates in the time we’d spend talking to one or two people at a bar. Like the Acela Express, or many other transportation systems, Tinder achieves its efficiency by forcing its users to adhere to the constraints of the platform (the Acela train makes very few stops; Tinder requires you to swipe right or left instead of miring yourself in time-consuming, ambiguous signaling). By externalizing the steps in the process that it doesn’t streamline, each platform creates its own Last Mile Problem. The Acela will get you to Union Station in DC quickly, but you’ll be stuck in traffic taking a cab the rest of the way to your house in suburban Maryland. Similarly, Tinder will match you with plenty of people who also liked your profile photo, but the rest—the last mile—is up to you, and from that point onward, the process is often no better than what it replaced.

The last mile, then, is always located where machines leave off and humans are forced to take over. Unfortunately, the more time we spend in the controlled system, the more difficult the transition, since the rules of the platform require us to behave more like machines than humans, mindlessly swiping and clicking or passively riding before suddenly needing to reawaken all the faculties that have fallen asleep. It takes entirely different skills to navigate these platforms than to navigate the world outside them. Marshall McLuhan called every new medium an extension of man, but recognized that each extension also involved an amputation of what it replaced. Few will argue that the smartphone hasn’t “amputated” certain mental capacities, or at least allowed them to atrophy.

Lindsay’s examination of dating apps concludes by acknowledging that the most valuable human interactions still occur in person, regardless of which apps or technologies helped to facilitate those interactions. In other words, the last mile, however defined, is still where the most important human affairs happen (and for millennia, the last mile was where everything happened). Yes, the last mile is a frustrating problem in transportation, but the space where we traverse that last mile is the space we actually inhabit, and it’s worth protecting from the various engineered systems with which it competes for our attention and resources. Whether a measure of distance, time, or social interaction, a last mile will always exist where our technological extensions end and our embodied selves begin. As that technology advances and the relative length of the last mile recedes, approaching invisibility, we humans will finally find that the last mile is just…us.


Tribal Twitter

Plagiarism, in its many forms, ranks among the unquestionable (and unpardonable) sins that the educated classes recognize and punish. Deliberate copying, or even reuse of material in certain unsanctioned ways, appears on the short list of accepted career-ending offenses in the creative domain, a list that mostly contains wrongs we’d agree are less tolerable than repurposing a sentence someone else wrote.

High-profile plagiarism incidents in the press and academia gain plenty of negative attention and carry massive downside, but Twitter and the broader internet have made the reproduction of phrases, jokes, and sentences easy enough for the masses that doing so has become an irresistible path to an often-minuscule, but sometimes decent, reward. Luke O’Neil explores this phenomenon in the Washington Post, documenting the successful Twitter one-liners that now echo eternally through the social network via easy copy/paste reproduction by hundreds of less clever users who take credit for those jokes. Unlike the celebrity plagiarists, O’Neil explains, these account-holders face few consequences (although some have become quasi-celebrities through the success of their unattributed aggregation). This theft is frequently too difficult to prove, and in any case nobody cares what a low-profile Twitter account does, so the behavior flourishes. The word plagiarism, it seems, is like assassination—it can only refer to high-profile subjects.

If Twitter leads to widespread joke-thieving, then the first question we need to ask is “So what?” And when a relatively new technology or medium aligns so well with human tendencies that it lets a certain behavior explode, we also need to ask whether that behavior is wrong or just something natural that the preceding environment had suppressed. To answer both questions, we need a more robust grasp of originality than we seem to have, and the technological milieu that has delivered us here might be the reason for this lack of clarity.

The age of mechanical reproduction (which hasn’t ended), in which the copying of text became easy and widespread, created a variety of opportunities to benefit from what others had written and published, with plagiarism at the most dishonest end of that spectrum. The ease of copying also created incentives that were at odds with creative production, and needed correction: Why labor over a novel or article that could be repurposed by a stranger and passed off as his own work? Music, which never absorbed the concept of plagiarism as completely as the written word did, faced a more egregious transition: Recorded music, to the live musician, was just as direct a threat, and arguably more dangerous to that musician’s livelihood. Before art could be mechanically reproduced, its reproduction was in itself a creative act that deserved at least some credit. Afterward, that was no longer necessarily true, and social taboos developed against exploiting those new conditions unfairly. The nascent digital sphere, it seems, combines qualities of both eras, fostering the strange feelings that Twitter plagiarism and other phenomena are inspiring in us.

Returning to the questions above, O’Neil himself finds evidence that a “post-originality” movement is flourishing in digital space, as articulated by a victim of that movement he interviews: “My hunch is there’s a sizable chunk of people who don’t really grasp what plagiarism is or why it’s wrong, and they kind of regard Twitter and social media as this giant free-for-all where everybody’s just constantly taking and posting whatever they want from whoever they want.”

In statements like this, O’Neil’s piece conveys frustration—that anyone finds satisfaction in stealing a stranger’s joke, or that the digital environment enables sufficient plausible deniability to conceal the crime. As shameful as stealing tweets may be, however, Twitter is not academia, the publishing industry, or even a blog, and to rail against this behavior is to impose those domains’ valid models on a medium where they can’t be effective. Twitter has always, quite obviously, been a diverse, free, and illegible ecosystem swirling with ephemeral content, where hundreds often make the same original joke or comment without even knowing it. Even before the internet, it was not necessary to credit the (rarely knowable) source when repeating a joke in conversation. The notions of singular authors, originality, and authoritative versions will not easily be imposed on the medium without a great loss of vitality. Twitter, in this sense, is retribalizing man (returning us to premodern modes of being), as McLuhan believed so much electronic media would.

On the spectrum between honest content aggregation and deliberate plagiarism for personal gain, the unattributed reuse of digital content will continue and accelerate. Many of these reusers will be caught and shamed, many more will go ignored, and all who find some benefit from the act, regardless of their motives, will have received their reward in full. As post-original humans, they won’t know what they did wrong.


Post-Empire & the Brain

“Man is no longer man enclosed, but man in debt.”

-Gilles Deleuze

This NY Times reflection on dying shopping malls comes close to recognizing a few vital trends in American culture before reaching the simplistic, if still valid, conclusion that oversupply of retail space is killing the malls. The suburbanization of poverty and shrinking cities don’t arise in the discussion, while the growing gap between the affluent and poor is hinted at but not openly stated. The article does contribute to the vocabulary of junkspace by explaining what “power centers” are, but otherwise it neglects more questions than it asks or answers.

More than anything else, dead shopping malls reflect the lag between the fluid and flexible movements of culture, information, and capital, and the cumbersome physical bodies that those immaterial flows inhabit for a while before moving on to their next hosts. When consumer preferences and regional economies shift, they do not and cannot bring their built infrastructure with them, and that infrastructure becomes the residue that attests to the bygone era, visible fossils in the cultural shale. The longevity of artifacts doesn’t, however, correspond to their importance, and the consequent survival bias distorts our understanding of what mattered about a previous historical era and which forces are driving the present one.

Abandoned buildings provide unambiguous evidence of failures, of projects that crashed quickly or outlived their usefulness, of the mortality of so much that we undertake. But malls are products of midcentury America, when we benefited from more obvious signs that helped us navigate our environments and know what we were looking at: The middle class shopped at the mall, lived in the suburbs, worked in the city, kept their money in the bank. The quarterback of the football team was the coolest kid in school and two nation-states dominated world politics by almost all criteria imaginable. Turning on a TV or reading a newspaper didn’t explain this reality, but it gave you an excellent grip on mainstream sentiments. It was never all quite this simple, but the narrative was unified and at least broadly accurate.

The human environment has become less legible since those golden years, in the present epoch that Bret Easton Ellis calls “post-Empire.” The popular standards for failure and success, once so black and white, are muddled, while the evidence of either is invisible thanks to a host of new forces: debt, pharmaceuticals, reality television’s tongue-in-cheek celebration of the lowbrow, and reality’s omnipresent digital substrate that has its own physics and algorithmic morality despite always talking to reality’s surface layer. The failed mall of yesteryear might remain propped up by complicated credit today, just as insolvent banks entered zombie afterlives following the 2008 financial crisis—alive but hollow. Today, a walk down the proverbial Main Street reveals less than ever about what is actually happening on that street and in the broader economy that envelops it.

As information and money swirl more and more frantically in the ether, unmoored from their physical sources of existence, it’s hard to fault the human mind for its increasing bewilderment. As Daniel Smail has written, the brain doesn’t evolve nearly fast enough to keep up with its environment and is thus easily tricked by the dissonance between observed conditions and their underlying truth, a dissonance that grows all the time. Long gone are the days when nomadic hunter-gatherers detected the primary threats to their existence through their five senses, but your brain today—which is basically the same as theirs—doesn’t totally know that, no matter how much you remind it.

The subtlety of positive and negative outcomes and our uncertainty about what winning and losing actually entail make it easier than ever to welcome bad tradeoffs into our lives. A great deal of recent technological progress can be explained this way: a tangible, highly visible benefit paired with a cost that is subtle, hard to articulate, or borne in the long run. The automobile, for example, transported us with unprecedented speed and convenience but forced us to eventually rebuild our cities and society in its image; thus we accepted it. Countless smartphone apps and social networks coax us to move areas of our lives onto their platforms at the expense of our privacy, mental clarity, and even a degree of freedom. Our animal brains compel us to welcome these palpable goods that look like the simple windfalls they evolved to value in millennia past, but today those goods are just as often Trojan horses smuggling in debts that we’ll feel only after it’s too late to easily get rid of them.


Filters, Trolls & Bots

The “real world,” we’re told, is an increasingly meaningless construct, a product of nostalgia or a failure to adapt to the digital age. Real life is no longer confined to face-to-face interactions, nor is identity constructed only there, but also on Facebook, Twitter, and in all the spaces that devices create by continuously talking to one another. While their forerunners were static, these environments are dynamic. We can still escape from reality in a book, a newspaper article or a movie, as passive spectators, but we actively create and influence the networks that compete for that same attention now. Thanks to mobile devices, moreover, we’re never simply immersed in the digital and away from the physical world around us—we comfortably occupy both at once. The term “meatspace” has thus become necessary as the distinction between real and virtual has vanished. If “real” things can happen in the digital sphere too, then environments are better characterized by their corporeality than by their realness.

Despite all the similarities, though, digital platforms differ from traditional reality in this respect: It’s too easy to leave them. A single mouse click makes Facebook or Twitter disappear, at least for a while, meaning they’re less enveloping than work, school, a social gathering, or any other meatspace activity that we wish we could exit as quickly. Unlike Amazon, which just wants you to buy things, social networks like Facebook and Twitter need you to spend time using their platforms and plugging important parts of your life into them. Two-thirds of Facebook’s 1.35 billion active users log in every day, and Facebook’s future success depends on increasing that number. Since Facebook is competing with Twitter and other platforms for your time and attention, not to mention the possibility that you’ll put your computer away entirely, it becomes even more important to Facebook that you stay logged in.

Source: laudyms.wordpress.com

So, back to that big difference between digital reality and meatspace reality: How do we react when reality makes us feel bad? In the digital version, we tend to log off; in meatspace, we must react in ways that are rarely as simple—“dealing with it,” if you will. Twitter and Facebook both understand the low barriers to disengagement, and put significant effort into filtering the reality they present to their users. Twitter gives you full responsibility to choose who you follow, and empowers you to make unwanted voices disappear from your feed. Facebook’s solution is less transparent and more effective, judging by its growing addictiveness and nine-figure user base: Algorithms filter the content appearing in your News Feed, guessing what you want to see better than you could and hiding what you don’t like before you find out that it exists. On Twitter, in theory, you might follow accounts that you actively disagree with, but that’s doubtful in practice, and doing so would probably make you use Twitter less. If you could make that annoying coworker disappear with a single click, you probably would, right?

Filtering reality has long been a goal of environmental design. Before the internet, shopping malls and suburban enclaves were built to exclude undesirable elements, reinforce social myths, and make people feel good enough to not leave. Digital environments achieve those same objectives at greater scale and less expense. In 2012, Facebook conducted a controversial experiment on a subset of users to better understand the emotional impact of its News Feed content and tune its algorithms accordingly. More recently, Adrian Chen reported on Facebook’s overseas “content moderators” who manually scan the network for offensive material and remove it before it appears in anyone’s feed. Both unsettling efforts, of course, aim to limit the negativity that Facebook’s users encounter on the site, maintain the network’s position as the ersatz spiritual center of contemporary life, and monetize that position as advertising revenue.

Like any designed environment, the internet’s largest platforms breed delinquent forces that actively subvert their designers’ goals. As if to remind us of the limitations of the social networks’ version of reality, two species have evolved and flourished in the digital ecosystem: the troll and the bot. Trolls—the antagonistic online personae of users who may actually be agreeable in person—distill the negativity that Twitter or Facebook make so much effort to hide, and exploit gaps in the networks’ sanitizing measures to force that negativity upon their chosen targets. Like barbarians or guerrilla combatants, their intimate familiarity with the landscape and their decentralized organization enable them to stay one step ahead of the countermeasures that an incumbent power employs to stop them (the blocking and muting features, account deactivation, or more sophisticated filtering algorithms). Trolls inhabit Nakatomi space and make a mockery of the myth that multimillion-user networks can be scrubbed of unauthorized discourse. In their most extreme forms, trolls actually drive users away from their chosen social network altogether—and back to an environment where social norms and physical distance make trolling impossible.

Bots, less menacing than trolls and frequently even amusing, are also unwelcome in the digital landscape. Bots rarely represent real people, and thus expose the fallacy that social networks are places of firsthand human expression alone. Mining the internet for existing content, or posting according to programmed rules, bots add to the entropy that human work is always trying to reduce. A sophisticated bot can nearly pass the Turing test (or, in the case of @Horse_ebooks, a human can pass for a bot imitating a human), revealing yet another advantage that meatspace still holds over social networks and dating websites: We’re still getting plenty of essential information from face-to-face interactions that can be faked or misinterpreted in digital channels. John Robb has posited the future of Twitter as machine conversations between bots, finding one company already pursuing that course. If users must wade through increasing volumes of digital noise to find the signals they’re looking for, they’ll revert to more familiar environments where it’s easy to distinguish the two.

Twitter has bigger problems with bots and trolls than Facebook does, suggesting that a truly free and transparent platform is impossible without the entropic tendencies that more controlled platforms can suppress. The very existence of trolls and bots, however, is a vaccine that inoculates us against a greater threat: Not understanding the digital landscape we spend so much time in, and imagining it’s governed by more familiar forces.


Creative Nostalgia

“…each block is covered with several layers of phantom architecture in the form of past occupancies, aborted projects and popular fantasies that provide alternative images to the New York that exists.” 

-Rem Koolhaas

The commentary sphere, having its center of gravity in New York, is rarely more fluent than when it’s churning out prose about its most familiar city. One favorite topic in that category, becoming more popular all the time, is the New York Death Certificate, in which the author laments the city’s seizure by the megarich and the concurrent decline of culture, grit, Times Square pornography theaters, and the “real New York” that for many young bloggers is a product of imagination more than experience. It takes a lot for one of these pieces not to be tired or uninteresting, even when passionately argued by a person who lived both versions of the city, but some, like David Byrne’s, are certainly better than others.

Source: nycgo.com

Manhattan is overrun by a flawed brand of capitalism, though, and it does stifle creative production in ways that it didn’t 40 years ago. These are real problems, which is why the think pieces keep accumulating. At the heart of every essay proclaiming the decline of a great old New York and its replacement with a soulless playground for the wealthy, however, lies a grave mistake: comparing the present version of a city (or the present version of anything) with a younger incarnation of itself, and using one as a foundation for critiquing the other—sending nostalgia to do serious work for which it’s not equipped. Is any prior chapter of New York’s history so faultless that we can claim it was simply better? No, we probably just remember it that way.

If it seems like I’m dismissing nostalgia altogether, let me be clear: Nostalgia employed properly is a powerful force. Sanford Kwinter (a constant reference point for me) explains this best: “It is customary today…to dismiss points of view and systems of thought as ‘nostalgic’ whenever they attempt to summon the nobility of past formations or achievements to bear witness against the mediocrities of the present.” This summoning, of course, is precisely what the “New York is dead” school attempts, but those critics’ shortcoming is that they stop there. Kwinter continues by saying that memory is an act of creation, thus reaching beyond the actual limitations and faults of the past. “The antidote is the flexibility afforded by controlled remembering, not only of what we were but, in the same emancipatory act, of what we might be,” he writes. Choosing among the idealized aspects of the past and the advancements contained in the present, we can stitch together a notion of a realistic but improved version of the city we inhabit today.

Last week I saw a few comedians perform at Madison Square Garden. I couldn’t stop thinking about the density of narrative and meaning within that space: the historic preservation movement accelerated by the old Penn Station’s demolition; the symbolic significance of playing or performing in the Garden (Bruce Springsteen playing ten concerts in a row in MSG or LeBron scoring 61 points there); or the continuous arrival and departure of train passengers for suburban New Jersey and Kansas directly beneath the basketball court (as opener Hannibal Buress pointed out during the show I attended). All of these stories coexist in our culture’s collective memory, almost haunting the blocks that the arena occupies. Every other block in New York can lay claim to something similar, although few are inhabited by narratives as potent as the Garden’s, and the city’s blocks each house failures and imagined realities as well as actual events, as the Rem Koolhaas quote above reminds us. New York, like any city, is constantly being created, imagined, destroyed, and rebuilt. With such turbulent change surrounding us, how could we believe for even one second that any past or present version of the city is frozen in place, or that we’re a passive audience to it?


Flatness

“Two extremes of redundancy in English prose are represented by Basic English and James Joyce’s book Finnegans Wake. The Basic English vocabulary is limited to 850 words and the redundancy is very high. This is reflected in the expansion that occurs when a passage is translated into Basic English. Joyce on the other hand enlarges the vocabulary and is able to achieve a compression of semantic content.”

-Claude Shannon

Bandwidth is a late machine age term that helps illuminate the millennia of technology and culture that preceded its coinage. The definitions of bandwidth vary, but its most basic meaning is a channel’s capacity to carry information. Smoke signals and telegraphs are low-bandwidth media, transmitting one bit at a time in slow succession, while human vision transmits information to the brain at a much faster rate. The past century has yielded tools for measuring bandwidth and quantifying information (see Claude Shannon) as the channels for carrying that information have advanced rapidly.

Source: Ceasefire

In any era, but never more so than now, the landscape of existing technology is a palimpsest in which the cutting-edge, the obsolete, and the old-but-durable all coexist as layers of varying intensity and visibility. New, unprecedented means of information exchange and communication are invented constantly, while their older equivalents live on long after they’ve stopped being state of the art. Information reaches each of us—and often assaults us—through a multitude of high-bandwidth and low-bandwidth channels, some of which we permit to speak to us, and some of which do so uninvited. Sitting down to watch TV, checking one’s iPhone during dinner with a friend, or finding a quiet place to read a book all represent conscious choices to block certain channels and pay attention to others. Marshall McLuhan recognized that technologies in our environment have a rebalancing effect on our senses, writing that each medium is an “intensification, an amplification of an organ, sense or function, and whenever it takes place, the central nervous system appears to institute a self-protective numbing of the affected area, insulating and anesthetizing it from conscious awareness of what’s happening to it.”

Human attention, then, is a finite resource. A variety of criteria inform everyone’s small, constant choices about which media to focus on and which to tune out, and those choices often have little to do with their bandwidth, but today one thing is certain: Our own brains, not anything manmade, are the bottlenecks that limit how much information we can receive at once. The contemporary world offers as much space for storing information as we’ll ever need, and we can instantly send any amount of it to the other side of the planet. Well before either was the case, however, humans learned to ignore the features of our environments that we deemed irrelevant—the noise surrounding the valuable signals we actually wanted or needed to receive.

Claude Shannon’s quote above, from his Mathematical Theory of Communication, introduces a qualitative layer to the question of human bandwidth and the environments we seek: People move continuously through information-rich and information-poor environments and are affected differently by each. Basic English is “redundant,” meaning it’s a language that requires many words to convey even simple messages—and is therefore a language that few would choose to use for anything but utilitarian purposes. Finnegans Wake, at the opposite extreme, could not be richer in information or content, to the point that it can barely be compressed or summarized. In Shannon’s example, the rich, information-dense content of Joyce’s novel represents a higher quality of communication, and Basic English a lower quality, although the latter fulfills plenty of functional roles.
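
Shannon’s point about redundancy is also a statement about compressibility, and a rough way to feel the difference between his two extremes is to see how much a general-purpose compressor can squeeze out of a repetitive passage versus an information-dense one. The sketch below is a loose illustration of that idea in Python, not anything drawn from Shannon’s paper; random characters stand in for the information-dense end of the spectrum, since true Joycean density is hard to reproduce in a snippet.

```python
import random
import string
import zlib


def compression_ratio(text: str) -> float:
    """Compressed size divided by original size: lower means more redundant,
    i.e. less information per character."""
    raw = text.encode("utf-8")
    return len(zlib.compress(raw, 9)) / len(raw)


# Highly redundant prose in the spirit of Basic English: a tiny vocabulary
# cycled over and over.
redundant = ("the man went to the shop and the man got the thing "
             "and the man went back to the house ") * 40

# A same-length stand-in for information-dense text: characters drawn at
# random leave little for a compressor to squeeze out.
random.seed(0)
alphabet = string.ascii_lowercase + " "
dense = "".join(random.choice(alphabet) for _ in range(len(redundant)))

print(f"redundant text compresses to {compression_ratio(redundant):.0%} of its size")
print(f"random 'dense' text compresses to {compression_ratio(dense):.0%} of its size")
```

On a typical run, the repetitive passage collapses to a small fraction of its original size while the random text compresses only modestly, which is the same asymmetry Shannon describes between Basic English and Finnegans Wake.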

Low-information, redundant content has a flatness to it. It’s less interesting. The Residents expressed this in a different way on their Commercial Album, which comprises 40 one-minute-long pop songs. The liner notes explain:

“Point one: Pop music is mostly a repetition of two types of musical and lyrical phrases, the verse and the chorus. Point two: These elements usually repeat three times in a three-minute song, the type usually found on top-40 radio. Point three: Cut out the fat and a pop song is only one minute long.”

Plenty of pop music, in other words, is redundant and can be compressed without losing anything. This might be too harsh and cynical a judgment, but it’s valuable as a polemic. Modern environments, from top-40 radio to architecture to fiction, are full of redundancy and thus thin on information. The ease of digital information storage and transmission helps explain why we can afford to be less economical with information than we were in the past, but getting used to redundancy, like getting used to a diet full of salt and sugar, reinforces our appetite for it and actually influences the types of information we produce. If the manmade world seems like a flatter place in the Information Age (not in the Thomas Friedman sense), this might be part of the reason.

As communication technology improves, the argument periodically surfaces that face-to-face interaction and cities in general will become obsolete. Joel Garreau rebutted this argument a few years ago in his article “Santa Fe-ing of the World,” in which he praises the high bandwidth that physical proximity and direct experience afford. He writes, “Humans always default to the highest available bandwidth that does the job, and face-to-face is the gold standard. Some tasks require maximum connection to all senses. When you’re trying to build trust, or engage in high-stress, high-value negotiation, or determine intent, or fall in love, or even have fun, face-to-face is hard to beat.” Even the most advanced digital media, in other words, are limited compared to full sensory engagement with one’s environment—the digital, closer to Basic English than Finnegans Wake, is still a utilitarian solution to problems like distance more than it’s an ideal theater for the highest levels of human contact. As our reality becomes more automated and algorithmic, our truly complex, nuanced, information-rich activities will continue to justify their existence, while the flat and redundant will increasingly disappear into the digital. By recognizing this condition, we can learn to preserve the depth of the former instead of simplifying our reality for easier absorption into lower-bandwidth channels.


Room to Live

The map below depicts the striking decline in Manhattan’s population density between 1910 and 2010, capturing the desperate overcrowding of the tenement-era Lower East Side (where the Other Half lived) alongside the more even distribution of residents throughout today’s New York. Created by an NYU researcher and posted by Matt Yglesias on Vox, the data visualization raises the obvious question of why New York’s population thinned so dramatically over the past century. The replies to Brandon Fuller’s original tweet reflect a variety of theories, while the Vox post attributes the shift to modern-day city dwellers taking up more space than their 1910 equivalents, who lived closer together at every socioeconomic tier.

Source: Brandon Fuller

Yglesias posits that New York’s residential square footage must continuously increase to maintain the city’s population density, but his diagnosis of the situation, while not wrong, misses a more fundamental dynamic: New York is a global city, and the primary purpose of global cities is not residential. Yes, these cities have (and need) huge populations, but their main role is an economic one. Manhattan is a market, a production hub, and a control center for the world economy above all else; the population that lives on the island must increasingly support those functions and their side effects, such as tourism, or move elsewhere. In short, space needs to be made for capital to flow through New York freely, in all its forms, and the opportunity cost of allocating that space to anyone who doesn’t somehow contribute to that free flow is far greater in 2010 than it was in 1970 or 1930. Thus, while New York’s current residents take up more square feet per capita than their predecessors did, they also pay a much higher price for those square feet.

In the United States, a loss of central city population density has historically indicated economic decline, as it did for decaying Rust Belt cities in past decades. In present-day New York, lower densities indicate the opposite, although Manhattan also went through its own period of hollowing out in the 1960s and 1970s. Again, Yglesias is not wrong. People do take up more square feet in New York than they used to. Suburbanization, housing reform, and automobile ownership all put upward pressure on the amount of living space people expect and the minimum amount of space that is legal in US cities. These forces have reduced population density, but they are part of the global city’s broader narrative: It’s no longer necessary to cram as many people as possible into urban centers, as it was during the industrial age, and while there’s still plenty of labor needed to keep the world spinning, the heart of the global city is the least optimal site for most of that labor. In post-industrial cities, the function of housing has changed accordingly (see this NY Times article describing a Lower East Side apartment’s layout in 1910 and today), and its ideal tenant is one who plays some part, directly or indirectly, in the commanding heights of the global economy—a mode of life that usually requires more breathing room.

