Blackouts & Balloons

“Despite the difficulties or disasters it may have inflicted on thousands of French people, the flood of January 1955 was actually more of a celebration than a catastrophe.”

-Roland Barthes

On Sunday night, JFK shut down after an imagined shooting in Terminal 8 that was really just loud celebration of Usain Bolt’s spectacular 100-meter dash. Shut down is an understatement—the airport exploded into hysterical chaos that subsided without consequence only because nothing had actually happened. David Wallace-Wells, having just arrived at the airport from Denmark with his wife, paints a scene worthy of a Junkspace Hieronymus Bosch: multiple converging stampedes, TSA agents fleeing and sobbing, the utter breakdown of authority and process.

While it lasted it was a nightmare.

To experience the JFK shooting scare or hear about it later was to glimpse how fragile the megasystems we use daily have become and wonder how much worse a real emergency might have turned out. If you indulge your paranoid side like I frequently do and imagine the various ways that modern civilization’s many interlocking support systems might get hacked, what happened at JFK on Sunday will worry you. That one wasn’t even intentional.

Exactly 13 years before Sunday night, to the day—August 14, 2003—New York was the focal point of a better-known system failure, the Northeast blackout, a night that anyone who lived in the city at the time (I didn’t) will tell you great stories about. Yes, a few people died, more suffered, plenty of property and merchandise was damaged, but for everyone else, the night of the blackout was less terrifying and more thrilling, a welcome break from patterns that had grown stale, like a snow day at school.

While it lasted it was a party.

The closest thing I’ve experienced was another partial blackout, after Hurricane Sandy in 2012, which lacked the same euphoria because the extreme weather kept indoors the same people who had been driven outdoors in 2003. Nevertheless, after Sandy I wrote that the hurricane and blackout had deprogrammed parts of Manhattan that had become boring and controlled, restoring a wildness to the urban environment that was fun to see, if only for a few days.


Riding the bus during the 2003 blackout (source)

Why was the recent JFK incident, in which nothing happened, so strange and menacing, while the 2003 blackout, which did real damage and even killed a few people, was such a joyous moment for so many more New Yorkers?

The difference is the environment where each happened. As we witness over and over again, airports are highly controlled, optimized, engineered, anxious places. When an airport stops working, there’s nothing to do except worry, or even panic. The same is true of enclosed institutions of all kinds, from hospitals to schools, where nobody wants to be any longer than necessary (and which Foucault observed were born of the same control techniques as modern prisons). These environments amplify the worst aspects of human behavior when they don’t work, and suppress them when they do.

Other environments—the kind where people actually want to be—amplify the best aspects of humanity. Much of New York City, or any functional urban environment, falls into this category. The lesson of the blackouts, in fact, was that (for many) New York became a more humane and less atomized place with the power turned off, the very opposite of what happens inside airports. An electricity-free city is not something anyone wants, but as Barthes observed about the 1955 floods in Paris, it engenders “a world that is more accessible, controllable with the same kind of delectation a child takes in arranging his toys, experimenting with them, enjoying them anew.”

If there’s a simple lesson here for cities or institutions in “peacetime,” it’s that we should design them not to fail as spectacularly as JFK did on Sunday night. The streets of New York City, whatever problems they have, exhibit remarkable antifragility during natural disasters, and that quality should be nurtured and extended elsewhere.

For a more complicated conclusion we can look to Donald Barthelme’s short story “The Balloon” (which you should go ahead and read): the narrator constructs a giant balloon between 14th Street and Central Park in Manhattan. For 22 days it looms over everyday life, inspiring diverse reactions, including this:

“There was a certain amount of initial argumentation about the ‘meaning’ of the balloon; this subsided, because we have learned not to insist on meanings, and they are rarely even looked for now, except in cases involving the simplest, safest phenomena. It was agreed that since the meaning of the balloon could never be known absolutely, extended discussion was pointless, or at least less purposeful than the activities of those who, for example, hung green and blue paper lanterns from the warm gray underside, in certain streets, or seized the occasion to write messages on the surface, announcing their availability for the performance of unnatural acts, or the availability of acquaintances.”

Breaks in the mundane, then, are simply when we revert to our most human, whatever that is.


Pokémon Go

Late to the party, I have some thoughts about Pokémon Go:

1. The statement I’ve heard the most about Pokémon Go is that it just showed the world what augmented reality is. For those who already knew what AR is, Pokémon Go shows us what it actually looks like when it becomes relevant—not a sci-fi transformation to a world where we all stagger around with headsets (although we might still get that) but a world in which we all stare at our phones a little bit more than we did before. Too many of our predictions about the technological future assume visually stunning outcomes driven by advanced hardware (an industrial-age bias), but those changes are almost always outpaced by the faster progress of software, running on devices we already have. The near future rarely looks different from the present.

Acknowledging that smartphones are how AR first goes mainstream—now a fact, not a prediction—it isn’t a stretch to also acknowledge that reality has already been augmented for a long time via smartphones, just spatially rather than visually. Pokémon Go, because it’s a game, offers a more immersive and totalizing example, a more literal example of AR. Every smartphone app that takes your location as input or interacts with your environment—Instagram, Yelp, Google Maps, Shazam, to name a few on my home screen—already reads the physical world we inhabit and writes to its own digital representation of that world (not to mention Snapchat, whose filters are AR by any definition). Looking at your phone, gathering metadata about your surroundings from Instagram or Yelp, and then looking back up with a new perspective: That sequence is only one level of abstraction away from what Pokémon Go does.

I always imagined AR as the technology that would finally get us past the iPhone, enabling us to look straight ahead instead of staring at screens constantly. Eventually we might get the killer app for some Google Glass successor that gets us all wearing headsets, but until then we have to spend more time staring down at our phones in even more awkward ways. That bent posture—not shiny new hardware—is the visual manifestation of AR today. At least Pokémon Go is getting people out of their houses.


2. There’s another sense in which Pokémon Go is only a change in degree from the recent past, not a change in kind. As pointed out on a recent a16z podcast, Pokémon Go is an “appified game” instead of the more familiar “gamified app.” Everything from Foursquare to Snapchat to any social media platform that scores a user’s content by number of likes operates according to game logic and reinscribes that game on real life. Pokémon Go inverts this dynamic by making the fantasy the end instead of the means, and the reality the means instead of the end: Rather than looking for a bar to hang out at and incidentally becoming the mayor, we’re hunting for Vaporeon and incidentally hanging out in a park. Amazingly, the latter seems to motivate people more than the former.

Gamification has demonstrated its ability to simplify reality and compel behavior that would otherwise lack sufficient motivation. Pokémon Go suggests that games themselves might compel us even more than merely gamified things. Long before apps or gamification existed as ideas, society was full of more concrete but similarly playful (and powerful) game-like dynamics. Following this thread backward from the present, we can only conclude that it’s games all the way down.


Walled Gardens & Escape Routes

Slack and Snapchat are two of the platforms that best embody the current technological moment, the fastest recent gainers in Silicon Valley’s constant campaign to build apps that we put on our home screens, use constantly, and freely entrust with our locations, identities, relationships, and precious attention. One of those products is for work and one is for play; both reflect values and aesthetics that, if not new, at least differ in clear ways from those of email, Facebook, and Twitter—the avatars of comparable moments in the recent past.

Recently I compared Twitter to a shrinking city—slowly bleeding users and struggling to produce revenue but a kind of home to many, infrastructure worth preserving, a commons. Now that Pokémon Go has mapped the digital universe onto meatspace more literally, I’ll follow suit and extend that same “city” metaphor to the rest of the internet.

I’m kidding about the Pokémon part (only not really), but the internet has nearly completed one major stage of its life, evolving from a mechanism for sharing webpages between computers into a series of variously porous platforms owned (or about to be owned) by massive companies, which have divided up the available digital real estate and found, or failed to find, distinct revenue-generating schemes within each platform’s confines, optimizing life inside to extract revenue. The app is a manifestation of this maturing structure, each app a gateway to one of these walled gardens and a point of contact with a single company’s business model—far from the messy chaos of the earlier web. So much urban space has been similarly carved up.

BONAVENTURE2.jpg

  Illegible space: the Bonaventure Hotel (source)

If Twitter is a shrinking city, then Slack and Snapchat are exploding fringe suburbs at the height of a housing bubble, laying miles of cul-de-sacs and water pipe in advance of the frantic growth that will soon fill in all the space. The problem with my spatial metaphor here is that neither Slack nor Snapchat feels like a “city” in its structure, while Twitter and Facebook do by comparison. I never thought I’d say this, but Twitter and Instagram are legible (if decentralized): follower counts, likes, or retweets signal a loosely quantifiable importance, the linear feed is easy enough to follow, and everything is basically open by default (private accounts go against the grain of Twitter). Traditional social media has by now become a set of tools for attaining a global if personally tailored perspective on current events and culture.

Slack and Snapchat are quite different, streams of ephemeral and illegible content. Both intentionally restrict your perspective to the immediate here and now. We don’t navigate them so much as we surf them. They’re less rationally organized, mapped cities than the postmodern spaces that fascinated Fredric Jameson and Reyner Banham: Bonaventure Hotels or freeway cloverleafs, with their own semantic systems—Deleuzian smooth space. Nobody knows their position within these universes, only the context their immediate environment affords. Facebook, by comparison, feels like a high-modernist panopticon where everyone sees and knows a bit too much.

Like cities, digital platforms have populations that ebb and flow. The history of urbanization is a story of slow, large-scale, irreversible migrations. It’s hard to relocate human settlements. The redistributions of the digital era happen more rapidly but are less absolute: If you have 16 waking hours of daily attention to give, you don’t need to shift it all from Facebook to Snapchat, but whatever you do shift can move instantly.

The forces that propel migrations from city to city, to suburb, and back to city have frequently been economic (if not political). Most apps and websites cost nothing to inhabit and yield little economic opportunity for their users. If large groups are not abandoning Twitter or Facebook for anything to do with money, what are they looking for?

To paraphrase Douglas Adams, people are the problem. As people, we introduce some fatal flaw to each technology we embrace, especially technologies that facilitate communication, and especially when they amplify some basic weakness in our nature. Almost always, the experience of using a technology can’t be regulated or moderated properly, some misuse of it becomes rampant, and that quality gradually or quickly drives its users to another platform that solves its particular problem. Then the cycle begins anew.

Slack is not the unbundling of another platform’s chat feature, then—it’s the reverse unbundling of email, an antivenom for email’s problems. The familiar version of unbundling is splitting off a feature from a product and building a more robust standalone product out of it. What I’m describing now is an equally powerful and prominent phenomenon in the evolution of technology.

Email, in work and in personal life, has strayed far from its origin as a joyous, playful technology that early adopters used to send one another jokes. It’s more essential than ever now, a supporting infrastructure for life in every sense, but it’s also something we feel the urge to hide from on vacation. We hate it. Email’s flaws are potent: Information lives forever; everyone has equal access to everyone else; spam marketers have optimized it as a tool for their nagging. Even the most powerful people in the world toil over email for an hour daily, while strategies like Inbox Zero have emerged to help us escape from under its burdensome weight.

Our uneasy dependence on email in professional and personal life created a massive opportunity for a tool that isolated its benefits and discarded its shortcomings. Slack embodies this opportunity. It offers freedom from the oppressive inbox, where one owns everything that ends up in it, and establishes a smooth space in which the most important information reaches its recipients indirectly but effectively. The streamlike work patterns enabled by Slack, which Venkat Rao calls laminar flow, “avoid the illusion of perfectibility of information flows implicit in notions like Inbox Zero altogether.”

Jenna Wortham, contemplating Snapchat in the NY Times, suggested that “maybe we didn’t hate talking—just the way older phone technologies forced us to talk.” Texting, she thinks, did for phone calls what Slack promises to do for email. She proceeds to praise Snapchat for its reverse unbundling of social media and even SMS: the escape from the coldness and flatness of text-based communication, the intimacy absent from Facebook and Twitter, the triumph of the stylistic over the literal. An essay by Ben Basche makes a similar point: “Perhaps the task of constantly manicuring a persistent online identity — of carefully considering what effect your digital exhaust will have on your ego — is beginning to weigh on people.” Traditional social media, it seems, has reached the point of maturity that email already attained: more rigid and less playful. We’re looking for escape routes and Snapchat is one.

If we’ve learned anything from recent technology, we can expect Slack and Snapchat to reveal their own serious flaws over time as users accumulate, behaviors solidify, and opportunists learn to exploit their structure. Right now most of the world is still trying to understand what they are. When the time comes—and hopefully we’ll recognize it early enough—we can break camp and go looking for our next temporary outpost.


The Human as Interface

Thoreau said in Walden that “we do not ride on the railroad; it rides upon us.” He was talking about the true cost of the ride—that while he embarked on a trip by foot and arrived at his destination that same day, you would have to first get a job and work for a day in order to earn enough money to make the same trip by train. “Instead of going to Fitchburg, you will be working here the greater part of the day. And so, if the railroad reached round the world, I think that I should keep ahead of you.”

Generalizing his assessment, Thoreau found that so much in the modernizing world bore a similar cost. The railroad, and the industrialization it embodied, was a force powerful enough to remake society in its image because of the demands it placed upon us to build, maintain, and operate its systems, and because of our willingness to accept those demands in exchange for speed and efficiency.


People helping computers talk to people (source)

It’s easy to see why all of that heavy infrastructure weighs us down, why it’s such a chore, just as Thoreau’s image of a train riding on tracks made of people makes such vivid sense, or why a household cluttered with stuff feels so oppressive. But haven’t we escaped the era of stuff and ventured into the era of information, finally mastering the former? Software shouldn’t be able to “ride upon us” like industrial machinery can. If anything it should be loosening the physical world’s grip on us (and in many ways it is). Information storage is basically free and processing power increases exponentially, and yet we don’t feel certain that we are freer than we used to be, or any less burdened. Why does email now ride upon us like trains once did?

Commenting on artificial intelligence as it reaches a tipping point in its maturity, John Robb recently observed that “we can’t even design systems that work for human beings”—that is, we’re designing AI as a godlike force that works in mysterious ways, not a true agent of our own objectives, and ensuring that we will somehow bow to it, just like we did to the industrial behemoths that we built in a previous era.

Put another way: Every medium-sized company with a competent customer service operation automates a large chunk of that work. When you call an airline or a credit card company, you pass through a tree of often-frustrating multiple-choice menus before getting your issue resolved. You only get to speak to a live operator after exhausting the menus’ abilities. That process of escalation is the Human as Interface—a reversal of roles.
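To make the pattern concrete, here is a minimal sketch of that escalation logic in Python. The menu options and issue names are hypothetical, not any airline’s or bank’s actual system; the point is simply that the human only appears once the machine’s options run out.

```python
# A minimal sketch of the escalation pattern described above: the automated
# menu resolves what it can, and a live operator (the Human as Interface)
# only appears once the machine's options are exhausted. All menu options
# and labels here are hypothetical.

AUTOMATED_MENU = {
    "1": "check your balance",
    "2": "report a lost card",
    "3": "hear recent transactions",
}

def resolved_by_machine(choice: str) -> bool:
    """True if the automated system can handle this selection on its own."""
    return choice in AUTOMATED_MENU

def route_call(selections: list[str]) -> str:
    for choice in selections:
        if resolved_by_machine(choice):
            return f"Resolved automatically: {AUTOMATED_MENU[choice]}"
    # Only after exhausting the menu's abilities does a person step in,
    # translating between the system and the caller it couldn't parse.
    return "Escalated to a live operator"

print(route_call(["2"]))             # the machine handles it end to end
print(route_call(["9", "0", "*"]))   # nothing matched; a human fills the gap
```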

The Human as Interface is the troubling but darkly funny outcome of our white-hot progress in the digital realm. An interface is traditionally a point of contact between people and computers (or between hardware and software, or two separate software systems) that eases their interaction and translates between two modes of communication.

Software is eating human work so fast that there’s less of a role for interfaces between humans and computers, as the latter can finish more and more of their work without humans dropping in partway through the process to guide them. At the same time, that software is doing even more of the jobs that humans used to do and eliminating the need for those jobs. Finally, the various activities that computers do are becoming so sophisticated that humans not only can’t understand them, we don’t even have a language for describing them. The gap between human and computer abilities is either closing or widening, depending on how highly you regard humans, and there’s a shortage of a different kind of interface or API: the kind that mediates between software and its human users in the transitional phase before a computer can handle that step too.

Thus, machines need people to translate between themselves and their users—the Human as Interface. This is a form of turking, in the sense that it’s yet another role humans only fulfill until software learns how. This type of work is found at every ability level: Customer support reps who handle the overly complex issues that automated systems escalate. Convenience store employees who help customers get unstuck from the self-checkout machines that replaced all the other employees. Explainers who can communicate to a broader audience a concept like machine learning and why it matters. IT help desks.

It’s surely a sign of increasing economic polarization that a small percentage of specialized individuals build and run the advanced systems that transform everyone else into a user in both their work and their free time. For this majority, at best their jobs await imminent automation; otherwise, they already function as interfaces for machines (meanwhile everyone’s a user of some kind in their free time). No matter the reason for this condition, it’s hard to pretend that we don’t somehow work for computers, or that software doesn’t ride upon us as heavily as the railroads did.


Computers Can’t Do Party Tricks

“The horse has lost its role in transportation but has made a strong comeback in entertainment.”

-Marshall McLuhan

My friend Brendan can glance at satellite photos and immediately identify their locations in the world, a skill that might have qualified him as a warlock or wizard hundreds of years ago (if satellite photos had existed then). Last week, he and I were trying to figure out whether he could somehow cash in on this skill or use it to advance his career. No, we quickly decided: This is the epitome of a task that computers have evolved to do better than people, whether anyone demands that particular service or not. But seeing Brendan identify Google Earth images in person is still an impressive feat, a great party trick.

That computers can recognize images more accurately than the most adept humans, and then instantly cross-reference the results with any other set of data, requires no explanation. It’s a less certain but true enough generalization that machines will eventually do almost anything we call work today (even as we concurrently keep inventing new things to call work). Technological progress not only eliminates the need for certain tasks in their traditional domains, thereby lowering those tasks’ economic value, but it also gradually eliminates—or amputates, as McLuhan said—the faculties certain people had developed to better perform those tasks. The printing press drastically reduced the need for hand copying as a means of reproducing texts; in the much longer run, it also led to a weakening of the widespread practice of writing by hand. We’ve seen the same phenomenon time after time: technology devaluing the job and amputating the corresponding skill.

But the automations and amputations of technology are anything but the end for what they supposedly replace. Six hundred years after Gutenberg, almost everyone literate can still write by hand. And handwriting isn’t an exception. Most of what past generations did for money is still being done somewhere, probably for less money and at a smaller scale—a reality only partially explained by William Gibson’s observation that the future is never evenly distributed after it arrives.

The world is weirder and more complex than macroeconomics usually recognizes. What doesn’t contribute to GDP still exists and often still matters. Many abilities that technology debases economically, like Brendan’s knack for recognizing satellite photos, reappear in a hundred other domains: as competitive sports, hobbies, artisanal crafts, tourist attractions, historical preservation objects—the cultural substrate that underpins the productive economy where quantifiable value is added. At the very least, they often make great party tricks. McLuhan observed that the horse thrived as a form of entertainment after becoming obsolete as a mode of transportation. To say that one technology “killed” another obscures the reality that it only fragmented its predecessor and may have actually made its presence in the world more interesting.

Computers can add value but they can’t do party tricks. No one at a party gives a shit what a computer can do. Everyone at a party cares what a person can do.

Computers can’t party by definition. But computers excel at being productive and can add more value than any human. One computer can do the work of thousands of erstwhile humans. The cutting edge of technology, always pressuring mankind to justify its existence, reminds us that the productive economy isn’t the best place to affirm our own humanity.

One useful predictor of technology’s evolution, the Varian Rule, states that what the rich do today is what everyone else will be doing in the future. This has held true for nearly every major development, from the printed word to electricity to trains to cars to air travel to personal computers to iPhones. A companion to the Varian Rule might be this: What technology renders obsolete today, the rich will eventually enjoy again. The resurgence of artisanal everything among the urban consumer class is certainly a response to the dominance of industrial food production, and of course rich people are the most likely to be found riding horses today.


 The horse as entertainment (source)

Cars, as the Varian Rule would predict, were the most transformative technology of the hundred years preceding the computer. They began as an expensive luxury until Henry Ford brought them to the masses. G.K. Chesterton described the modern transformation of so many luxuries into binding necessities—a trend exemplified by the automobile—as a degrading force (“I say the thing should be kept exceptional and felt as something breathless and bizarre”), and there is no question that driving has by now become an inescapable chore for the majority of those who have to do it, as well as a scourge on the urban landscapes we inhabit.

The car as we know it might soon get its own chance to become obsolete as a form of transportation. The long-anticipated self-driving car sits at the precise (figurative) intersection of every trope about technological progress: automation (it will eliminate jobs), social benefit (it will save us from killing ourselves behind the wheel) and massively profitable business opportunity (it will drastically reduce the cost of transportation).

If we’re lucky, autonomous vehicles will deliver the human-driven car back to the domain of entertainment and luxury from whence it came, alongside the horse. Perhaps never having to drive would restore some magic and breathlessness to the act on the special occasions when we still do it. If we’re less fortunate, however, self-driving vehicles will further solidify the car culture in which we’ve been stuck for the past century and from which we need to escape. Right now, we fulfill two roles behind the wheel, driver and passenger, and only one of those promises to go away anytime soon, so the outcome will be a mixed blessing at best.


The Digital NIMBY

“The landscape was always made by this sort of weird, uneasy collaboration between nature and man. But now there’s this third coevolutionary force, algorithms…and we will have to understand those as nature. And in a way, they are.”

-Kevin Slavin

If hundreds of cars suddenly started flooding your street in your otherwise quiet neighborhood, you’d probably wonder what changed. Maybe a mall or a Trader Joe’s just opened nearby, something that could make the house you own more valuable.

It turns out you’re not so lucky. Not only did nothing materially or permanently improve in your area, but something got worse: Construction closed off a nearby thoroughfare and Waze identified your block as the fastest detour. Now, your little residential street is an arterial highway without the capacity to handle its new traffic. Your home is no more valuable and there’s nothing you can do to stop the cars.

The above scenario, described in this week’s Washington Post, has become the latest in a long list of nuisances that worry and occasionally torture suburban inhabitants. Map apps bail out their increasingly dependent users with not merely an acceptable detour to follow, but the optimal one—which is quantified and therefore the same for everyone, until the best option causes too much congestion of its own. Instead of randomly dispersing around a road closure, a platoon of Waze-guided cars can march along its detour with the precision and endlessness of an ant colony.
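A toy sketch of why the platoon forms, written in Python with the networkx library and a hypothetical street grid (this illustrates shortest-path routing in general, not Waze’s actual code): once the thoroughfare closes, the single cheapest remaining route is the same for every driver who asks.

```python
# Toy street grid: when the thoroughfare closes, the router hands every
# driver the same "optimal" detour, so one residential block absorbs it all.
# Street names and weights are invented for illustration.
import networkx as nx

G = nx.Graph()
G.add_edge("home", "thoroughfare", weight=2)
G.add_edge("thoroughfare", "office", weight=2)
G.add_edge("home", "residential_block", weight=3)
G.add_edge("residential_block", "office", weight=3)

print(nx.shortest_path(G, "home", "office", weight="weight"))
# ['home', 'thoroughfare', 'office'] under normal conditions

# Construction closes the thoroughfare; the router simply drops it.
G.remove_node("thoroughfare")

print(nx.shortest_path(G, "home", "office", weight="weight"))
# ['home', 'residential_block', 'office'] -- quantified, optimal, and
# identical for everyone, until the detour itself congests.
```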

In negotiating the built environment, we have always sacrificed the local to the global: Major public improvements that benefit broad constituencies—airports, for example—have to go somewhere, even though living next to them confers no additional benefit on the people they serve. “NIMBY” (“not in my backyard”) eventually emerged as the name for this political resistance to large-scale developments in one’s neighborhood. The phenomenon is most associated with suburban homeowners not unlike those terrorized by Waze reroutes today.

Stadiums and power plants still get built in people’s backyards, but now a new type of globally eminent system, the digital platform, has priorities to assert against its local citizenry. Without the unforgiving tradeoffs of geography, one would think, the digital should be easier to live with and compromises less costly to achieve. The internet doesn’t need to “be” anywhere, and certainly not in anyone’s backyard.

Unfortunately for the digital NIMBY, by not being anywhere, the internet is everywhere, as the Waze example teaches. The digital is no longer a place to hide from meatspace—perhaps it never was, but it’s less so now than in the message-board ‘90s. In every sense, the digital world has been reinscribed upon its geographical counterpart, furnishing the logic that guides so much activity in the latter. What better demonstration of how closely the two are intertwined than a traffic app temporarily “activating” a block for use?


Waze (source)

To fight against a proposed airport expansion or condo tower involves clear political processes, known stakeholders, and the possibility of tangible victory: If you block the development and it goes elsewhere, you won that battle. The absolute fluidity of the algorithmic landscape means that nothing is ever finished. The victims of a Waze reroute in Takoma Park, Maryland, used the app itself to report fake speed traps and accidents, turning the platform’s own logic against it to gain momentary relief. The temporary success of that strategy, however, is dwarfed by its long-term futility—the algorithms learn and adapt, and today’s victory is tomorrow’s start from scratch.

For much of history, maps described the world more than they prescribed behavior in it. The internet has brought about a near-total inversion of that relationship, isolating the map’s prescriptive qualities—informing certain decisions—and tightening that feedback loop while expanding the loop’s user base. The descriptive function of maps has dwindled accordingly: We often use maps without looking at them, as the embedded logic behind a Yelp search or Tinder swipe. The internet has similarly inverted the descriptive-prescriptive dynamic in almost every domain it touches by turning data into decisions without involving the inefficient middleman, the human being.

Kevin Slavin, in perhaps the best TED Talk ever, describes the algorithmic underpinnings of Netflix and similar services as the “physics of culture.” He concludes by pointing toward algorithmic trading and its imperative of instantaneous speed, which has hollowed out buildings and strung underground fiberoptic cables between New York and Chicago to produce the necessary edge in financial markets. Yes, algorithms can route extra traffic down one’s street, but they also literally shape buildings and terraform the earth, completing the unification of two universes that always seemed separate. Keeping these phenomena out of one’s backyard is the uncharted territory of NIMBY politics.


Scaling Bias

Facebook got in trouble this week…sort of. If you were already sufficiently awed by the power Facebook has amassed then it may not have surprised you to learn that Facebook’s method of deciding what rises to the top of your newsfeed and what drops out of it involved a process that could be considered “biased,” perhaps even biased in a way that reflects the left-leaning humans behind the curtain.

Because Facebook is a platform that displays and curates news, with an audience that dwarfs that of any self-identified news outlet, we project our standards for journalistic integrity onto an entity that never gave us a good reason to hold it to those standards in the first place. The best reason I can come up with is “Facebook’s grip on discourse is so powerful that I can’t face the possibility that it would distort that discourse in any way.” Again, the outcry seems to come from an innocence we projected onto Facebook, not anything the company said about itself.

If bias is a problem, Facebook is certainly biased in more important domains than American politics, and our focus on that current, narrow, concrete example shows the limits of our imagination and skepticism. Facebook’s ultimate bias, expressed throughout its fabric, is toward growth: increasing its total active users and increasing the number of hours that existing users spend on Facebook. If the obvious behavior of Facebook’s algorithms in service of that goal doesn’t alarm us as much as a liberal bias in its surfacing of political news (something we’ve seen time and time again), it’s only because we’ve so internalized the values of late capitalism that we lack any vocabulary for criticizing Facebook’s shameless harvesting of our attention.

Another of Facebook’s biases, for example, is its philosophical and practical preference for the subjective over the objective, which again serves the goal of user growth: Mark Zuckerberg would rather show you your own personal feed tailored to your past behavior (hewing cautiously to the familiar and unadventurous) than expose you to a more objective or expansive version of reality. I’m not making a statement here about which is better—that’s a longer essay—but it’s certainly a bias.

The most valuable lesson of this Facebook controversy has been the discussion of what an algorithm is, exactly, given the involvement of a human editorial team at Facebook. Like those human editors, algorithms are never objective. They’re the opposite, in fact: human tools for achieving desired results more consistently. An algorithm is an engine of bias more than an antidote, the codification and repetition at scale of an outcome that an individual or group wants to achieve. Consistently and reliably, algorithms achieve the results their creators intended and little else. If algorithms were less consistent and reliable they would be more random and therefore more objective, less tied to one group’s intentions. The manual involvement of an editorial staff in Facebook’s news ranking effort is no contradiction of that dynamic: Once algorithms can do the tasks those editors are doing now, they will.
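To make that point concrete, here is a deliberately simplified ranking sketch in Python. The field names and weights are invented, and this is not a claim about Facebook’s actual feed code; it just shows that an algorithm is its creators’ priorities written down and applied at scale, so changing the weights changes the outcome it reliably serves.

```python
# A deliberately simplified feed-ranking function (not Facebook's): the
# "algorithm" is just a set of priorities, codified and repeated at scale.
WEIGHTS = {
    "predicted_minutes_on_site": 5.0,   # the growth bias, made explicit
    "closeness_to_past_behavior": 3.0,  # the familiar over the expansive
    "novelty_to_this_user": 0.5,        # tolerated, not rewarded
}

def score(post: dict) -> float:
    return sum(weight * post.get(signal, 0.0) for signal, weight in WEIGHTS.items())

def rank_feed(posts: list[dict]) -> list[dict]:
    # Consistently and reliably surfaces whatever the weights favor.
    return sorted(posts, key=score, reverse=True)

feed = rank_feed([
    {"id": "comfortable", "predicted_minutes_on_site": 4, "closeness_to_past_behavior": 5},
    {"id": "challenging", "predicted_minutes_on_site": 1, "novelty_to_this_user": 9},
])
print([post["id"] for post in feed])  # ['comfortable', 'challenging']
```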

And reality, to the extent that it’s even separate from the internet anymore, does not always perform better. Benedict Evans points out, “If Google or Facebook have arbitrary and inscrutable algorithms, so do people’s impulses and memories, and their decisions as to how to spend their time.” Facebook’s black box performs no worse than its users, but because it’s a more controlled and regulated environment, it offers a lower probability of exposure to unexpected or counterbalancing forces, in politics or elsewhere—less possibility of correcting the algorithms we already embody. For any individual, Facebook is a monoculture that is always refining itself to be more of what it already is, and refining us to be more of what we already are, or what Facebook wants us to be.

The issue of bias, then, pales in comparison to two broader problems, one Facebook’s and the other ours. Facebook’s problem is that it wants to comprise as much of our experienced reality and waking life as possible, and it’s actually been adept at increasing its share of that reality. That’s a more ambitious goal than programming our political preferences, although it encompasses the latter.

Our problem is that we willingly accept the terms Facebook offers us, and as a society we have become increasingly engaged with the platform but not critical enough of what it algorithmically feeds back to us. At best, Facebook will keep giving us what it thinks is best for us; at worst, it will give us what is best for Facebook. Most likely, we’ll keep getting a bland mixture of the two. The total hours we all spend immersed in Facebook’s mirror universe will continue to be a direct measure of Facebook’s success and of its grip on our collective minds.

If that sounds like a bad deal, log off.

