Facebook got in trouble this week…sort of. If you were already sufficiently awed by the power Facebook has amassed then it may not have surprised you to learn that Facebook’s method of deciding what rises to the top of your newsfeed and what drops out of it involved a process that could be considered “biased,” perhaps even biased in a way that reflects the left-leaning humans behind the curtain.
Because Facebook is a platform that displays and curates news, with an audience that dwarfs that of any self-identified news outlet, we project our standards for journalistic integrity onto an entity that never gave us a good reason to hold it to those standards in the first place. The best reason I can come up with is “Facebook’s grip on discourse is so powerful that I can’t face the possibility that it would distort that discourse in any way.” Again, the outcry seems to come from an innocence we projected onto Facebook, not anything the company said about itself.
If bias is a problem, Facebook is certainly biased in more important domains than American politics, and our focus on this narrow, concrete example shows the limits of our imagination and skepticism. Facebook’s ultimate bias, expressed throughout its fabric, is toward growth: increasing its total active users and increasing the number of hours that existing users spend on Facebook. If the obvious behavior of Facebook’s algorithms in service of that goal doesn’t alarm us as much as a liberal bias in its surfacing of political news (something we’ve seen time and time again), it’s only because we’ve so internalized the values of late capitalism that we lack any vocabulary for criticizing Facebook’s shameless harvesting of our attention.
Another of Facebook’s biases, for example, is its philosophical and practical preference for the subjective over the objective, which again serves the goal of user growth: Mark Zuckerberg would rather show you your own personal feed tailored to your past behavior (hewing cautiously to the familiar and unadventurous) than expose you to a more objective or expansive version of reality. I’m not making a statement here about which is better—that’s a longer essay—but it’s certainly a bias.
The most valuable lesson of this Facebook controversy has been the discussion of what an algorithm is, exactly, given the involvement of a human editorial team at Facebook. Like those human editors, algorithms are never objective. They’re the opposite, in fact: human tools for achieving desired results more consistently. An algorithm is an engine of bias more than an antidote, the codification and repetition at scale of an outcome that an individual or group wants to achieve. Consistently and reliably, algorithms achieve the results that their creators intended and little else. If algorithms were less consistent and reliable they would be more random and therefore more objective, less tied to one group’s intentions. The manual involvement of an editorial staff in Facebook’s news ranking effort is no contradiction of that dynamic: once algorithms can do the tasks those editors are doing now, they will.
And reality, to the extent that it’s even separate from the internet anymore, does not always perform better. Benedict Evans points out, “If Google or Facebook have arbitrary and inscrutable algorithms, so do people’s impulses and memories, and their decisions as to how to spend their time.” Facebook’s black box performs no worse than its users, but because it’s a more controlled and regulated environment, it offers a lower probability of exposure to unexpected or counterbalancing forces, in politics or elsewhere—less possibility of correcting the algorithms we already embody. For any individual, Facebook is a monoculture that is always refining itself to be more how it already is, and refining us to be more how we already are, or how Facebook wants us to be.
The issue of bias, then, pales in comparison to two broader problems, one Facebook’s and the other ours. Facebook’s problem is that it wants to comprise as much of our experienced reality and waking life as possible, and it’s actually been adept at increasing its share of that reality. That’s a more ambitious goal than programming our political preferences, although it encompasses the latter.
Our problem is that we willingly accept the terms Facebook offers us, and have become increasingly engaged with the platform as a society but not critical enough of what it algorithmically feeds back to us. At best, Facebook will keep giving us what it thinks is best for us; at worst, it will give us what is best for Facebook. Most likely, we’ll keep getting a bland mixture of the two. The total hours we all spend immersed in Facebook’s mirror universe will continue to be a direct measure of Facebook’s success and of its grip on our collective minds.
If you think that sounds like a bad deal, log off.
“The merit of style exists precisely in that it delivers the greatest number of ideas in the fewest number of words.”
-Victor Shklovsky
“Basic” is the best insult to emerge in the last few years, a slicing, leveling adjective perfect for the present zeitgeist. Like all great slang, it names something for which there was no single word until now, and we immediately knew what basic meant when we first heard it. In a period of information overload, the word compresses whole blog posts and essays that might previously have had to be written, sparing us from reading them. We can now sweep the North Face jackets and pumpkin spice lattes away with a single gesture and free up time to pursue our higher callings or refresh Twitter more furiously.
Awareness of the basic is a necessary value of the present age, heavier than it seems, but not yet properly appreciated. The basic, to attempt a more rigorous definition, is that which adds nothing new: no information, no value. There’s a quote I can’t track down but have always attributed to David Mamet: bad drama merely affirms what we already know. Basicness is bad drama: It invokes what we already know and derives its full value from what’s already been created. Gregory Bateson called information “a difference that makes a difference.” That which is basic does not make a difference.
Bad drama, of course, has a place in life and can be fun to watch, and something basic can be quite good—an appreciation of The Wire or a Uniqlo shirt, for example—but sharing it with another person is only a handshake, an affirmation of sameness, and not a revelation that enriches reality (again, not everything needs to be a revelation that enriches reality). This only becomes a problem when there’s too much of what’s basic and the richer material—the real information—is crowded out or lost in the noise.
The opposite of basicness is style. If the basic is pure redundancy, in the informational sense, then Victor Shklovsky’s definition of style (see above) will suffice: an act of semantic compression that adds new information, expresses the familiar more concisely, or ideally both. Style involves an actual statement. We often equate style with high quality; this richness of information is why. There’s an excellent Ribbonfarm essay that praises density in writing, contrasting it with brevity. Density, as a prerequisite for style, involves an intelligent compression of information. Brevity, by contrast, is often just a loss of information. Style, density, and quality are frequently different ways of describing the same characteristics, in many domains.
The rules (source)
And now we arrive at why “basic” is such an important concept today: because the human environment—cities, the internet, commercial establishments, and even homes—is becoming more basic all the time. We finally have the perfect word to describe it. Maybe the incessant spread of the basic is why we finally had to name it. The generative forces now producing the world yield not only the corporate sameness of Starbucks and malls, but also the platform-driven predictability of the internet (Medium, Facebook) and the well-behaved consumerism of reconstructed urban areas. All of this has in common the above definition of basic: no new information, no surprises, no mystery or ambiguity, just what’s already known. Simpler, flatter, thinner. There are countervailing forces at work making the world more interesting, of course, but right now they are less powerful.
Why are we getting more basic? I blame design. Design, that impossibly broad discipline that by now touches—if it doesn’t envelop—almost every human product. Design, not software, is eating the world (and software is arguably a subset of design). To a hammer, everything is a nail; with the tools and technologies now at our civilization’s disposal, everything is a design opportunity.
Sanford Kwinter criticized our culture’s relationship with design and this emergent world “in which we are hectored mercilessly by design, swathed in its miasma of artificial affectation, hyperstyle and micro-human-engineering that anticipates, like a subtle reflex arc, our every move and gesture. Design has now penetrated to, even threatens to replace, the existential density, the darkness, the slow intractable mystery of what was once the human social world…Design has become us; it is, alas, what we are, and there is no way (for now at least) to separate ourselves from it.” To Kwinter, design is making the world less free, less complex, and less interesting, if possibly more “user-friendly.”
Design has existed for millennia, though, so why is it only now having this effect? Well, design is fundamentally the organization of information, and information has grown easier to copy and more mobile than ever. The “rules” that emerge as design principles today are too easy to disseminate, cheaper (in time, effort, or price), and often proven effective in some way, so they end up everywhere. A few examples:
- A developer builds a strip mall by applying a few rules, optimizes for economic performance, and plugs in a variety of chain stores that are more or less copied from existing templates.
- A writer posts on Medium to reach a wider audience and avoid worrying about managing a personal website or blog. The written content may be excellent, but the context, page layout, and often tone are standardized by the platform’s rules, restricting the variety of the readers’ experience.
- Any American city’s truly weird bars and restaurants are slowly being replaced by more familiar “types” that stray less from familiar experience: wine bar, New American restaurant, high-end ice cream shop. Hence a contrasting example like New Orleans becomes more exceptional and interesting as time passes.
You’re probably thinking, “this didn’t start with the internet—every culture has copied itself in most of what it produced.” This is true, and originality is always scarce, but what’s changed is how we make the copy when we reproduce an idea. The fidelity of digital transmission, so valuable in other domains, means that copies don’t change enough. A designer’s rule can travel around the world without much alteration. And this isn’t even counting algorithms, which produce even more inflexible and widespread outcomes.
A copy, a rule, a transmitted design are basic in their pure forms—no information is added. In contrast to such rules we have patterns, best described by Christopher Alexander: a better way for information to travel. A pattern is an idea of something that is consistently desirable within a larger system, like a park in a city or a novelistic device, but with a need for interpretation and no exact blueprint for its execution. Each new person who copies these must add information through their interpretation of the pattern, and the variation that results is where everything interesting happens.
In his introduction to A Pattern Language, Alexander writes, “It is possible to make buildings by stringing together patterns, in a rather loose way. A building made like this, is an assembly of patterns. It is not dense. It is not profound. But it is also possible to put patterns together in such a way that many patterns overlap in the same physical space: the building is very dense; it has many meanings captured in a small space; and through this density, it becomes profound.” Urbanists so often call for one kind of density—a greater concentration of population in space—but the kind of density Alexander describes is more important.
Among the aforementioned design failures is the so-called “smart city” movement, a quixotic effort to build cities out of rules rather than patterns. Smart cities, like their modernist forebears, embody the shortcomings of design as Kwinter critiqued it, and usually only succeed in the pockets their plans fail to reach or influence. Ironically, the smart city produces an environment that is dumber (more basic) than the delicate, creative complexity humans are usually able to produce on their own. Lewis Mumford wrote that the chief function of a city is to “convert power into form, energy into culture, dead matter into the living symbols of art, biological reproduction into social creativity.” In other words, to create information and counteract entropy. Despite the intentions and unintentions of design, if we hope to live in the world we want, we need to seek out environments that are smarter than we are.
Sanford Kwinter’s excellent essay collection Far from Equilibrium contains several short reflections on the work of contemporary architects he finds compelling. After Zaha Hadid’s death, I revisited Kwinter’s piece on her, and decided it was worth reposting in full here:
Hadid’s Vitra Fire Station (source)
During the 1970s, when it had become an obligatory affectation of macho bravura in architecture to manipulate the ruler and straightedge with something akin to a virtuoso performance—note Libeskind, Tschumi, Eisenman and others—Zaha Hadid eschewed the posturing of these anti-classicizing classicists by inventing a new kind of line entirely. The new line was at once the centrifugal expression of a pulse of energy fleeing from a moving point—as in a universe not of grids but of vortexes—but also the reinvention of the line in relation to a new type of eye. On the one hand, Hadid’s softly arcing lines that compress such enormous energy into their subtly inflected bends are generated by the hand and body itself, by the inescapable rotational dynamics of the hips, shoulder, elbow, and wrist as they coordinate to extend the pen through space yet also leave a powerful trace of their orbital roots. Far from seeking to hide these organic geometrical foundations, Hadid extended and exaggerated their seductive and radical qualities, to the infinite perplexity and fascination of her self-proclaimed avant-gardist colleagues. Those most threatened by the unexpected (and untimely) affirmation, within Modernism, of the body and its lines of flight, dismissed this type of work as “paper architecture.” But the question of the eye was a critical weak point of the modern and Modernist tradition, and this is where Hadid’s ace resided. Modern space was born of a rational connection to optics, the moment when Brunelleschi and others formalized the modern perspectival approach to drawing based on a vanishing point and the straight lines that emanated from it. But the universe was not like this at all and every Renaissance theorist knew it. The cosmos was rather a play of orbits and attractions, bends and perpetual influences, a continuum of ongoing angular interaction and modification. 
What’s more, the eye itself was dual and mobile, its imaging surface spherical, and the ground to which it is anchored was defined by a horizon and an azimuth, both of which bend and bend visibly and meaningfully. It was in many ways Zaha Hadid who restored this tension and reality to the world of space-making.
Hadid’s vertiginous work both transposes and displaces the very horizon that serves as our orientation point in the world. Her curves arc ever-so-slowly as they careen across the canvas or page as if to mock the straight lines that they partly portray, but also to free our intuition from the many regimens of conventional orthogonality to which our modernity has subjected us. Until recently, perhaps the last seven or eight years, her lines never ventured to risk more than a single inflection; today, one finds two- and even three-part arabesques as the work becomes ever more plastic, and the development of the semi-free “line” becomes ever more extended into developing the possibilities of sheets and surfaces and even three- and four-dimensional flows of space and material.
New technologies almost always seem to have less soul than whatever they replace. Music streamed via Spotify or Pandora lacks the texture and context that accompanies pulling a record off the shelf and giving it a spin; even the most thoughtful emails feel prosaic compared to written letters. McLuhan said that every technology was an amputation of some human faculty, so perhaps this effect is no accident: Our tools harbor the ghosts of skills we’ve lost. The newer the tool, the less familiar the ghost. The haunting can be alienating for a while but we usually get used to it.
Too easily, we blame our negative attitudes toward new technology on nostalgia or failure to embrace change. We’re likely not reacting to the innovation or even to the broader change that has occurred, though—we’re reacting to the process of unbundling that this form of progress represents.
Unbundling, like disruption, is a favorite tech industry buzzword (both terms often apply to the same phenomena, in fact) but the former turns out to be quite useful. Netscape CEO Jim Barksdale, preparing for the company’s IPO, said that bundling and unbundling were the only two ways to make money in software. Unbundling, in particular, is the hallmark of the currently dominant mobile app economy, in which a singular app breaks off a popular feature formerly embedded in a less focused platform or pre-digital service, isolating and intensifying that activity—messaging, photo sharing, search, food ordering, taxi requesting are prime recent examples (for a deeper introduction to the concept, see Marc Andreessen’s tweetstorms on unbundling and rebundling and Benedict Evans’ written and podcasted musings).
McLuhan saw us amputating ourselves but now, having amputated as much as possible, we’re doing the same to our tools: amputating the amputations. Industrialization is basically unbundling writ large: the separation and intensification of human effort in the name of greater efficiency. The reason that the concept of “unbundling” only emerged recently is that its opposite state, “bundled,” is the default state of the world. To paraphrase Rousseau, life begins bundled but is everywhere unbundled.
Countless beloved pillars of traditional and even modern civilization are bundles: family, cities, and novels, to name a few examples. Unbundling, in this context, is a kind of destruction. Maybe we overestimate our ability to judge which aspects of a complex thing deserve to be unbundled and separated from their milieu, and lose something valuable in the process of isolating what we think is most important.
Or perhaps unbundling is an expression of dislike, a revolt against what we think we hate. By unbundling the flawed we hope to perfect it through that Sisyphean work, and when we go too far we rebundle the same, powering our entire economy through opposing phase changes that add up to nothing much better or worse.
The city unbundled (source)
If unbundling is one manifestation of our hubris in technology, then the distance between a bundled entity and its unbundled components is the distance between what we really want and what we think we want. When technology vanishes, it’s typically this process at work: unbundling without rebundling. Spotify is one example: music’s pure content separated from its context, such as album liner notes, the record store shopping experience, and music criticism as a way to preview before purchase. That all made up the halo of social ritual around the music itself, but few would seek out the rest on its own, much less pay for it, so once unbundled, it started disappearing, leaving us to be nostalgic for it.
Spotify, ideally, distills and focuses the part of music listening that we actually want, removing the red tape that doesn’t serve that singular objective. It accomplishes this with astounding effectiveness, but perhaps we sometimes cut too deep in the frenzy to optimize every technology we don’t fully understand. Ironically, Steve Jobs made the famous pronouncement that we don’t know what we want until it’s shown to us. In the same way, we also don’t know what we like about what we already have. When we take it apart, we sometimes find we can’t put it together as well again (and Steve Jobs gave us the most powerful tool for such disassembly).
Adrian Shaughnessy explored the impact of Spotify in Design Observer, lamenting the “contextual thinness of streaming services” and the loss of the “metadata” that surrounds the music and provides its true cultural significance. In this sense, we can read Spotify and its ilk as high modernist efforts to replace illegible environments with legible, enervated ones. The notion that unbundling a service like music distills it to a more purely usable form is also disingenuous, because nothing is ever just unbundled, and on the internet, unbundled services are typically rebundled with, you guessed it, advertising.
Shaughnessy finally concludes, “Streaming sites have resulted in the suburbanization of music.” The city, after all, is the ultimate bundle, and the evolution of the suburb is the unbundling of the city in almost every sense. Thinner in context, poorer in information, the suburb reflects what its builders think people want more than what they really do want or need, and does away with the rest. The world supposedly contains far more information than it did at any previous point in history, but when we unbundle that world so aggressively, information—the unquantified kind—is exactly what we lose.
“What is Favela Chic? Favela Chic is when you have lost everything material, everything you built and everything you had, but you’re still wired to the gills! And really big on Facebook. That’s Favela Chic.
“You lost everything, you have no money, you have no career, you have no health insurance, you’re not even sure where you live, you don’t have children, and you have no steady relationship or any set of dependable friends. And it’s hot. It’s a really cool place to be.”
-Bruce Sterling
“We used to make shit in this country, build shit. Now we just put our hand in the next guy’s pocket.”
-Frank Sobotka, The Wire
At the beginning of 1927 there were no bridges or tunnels connecting New York City to New Jersey. Six years later there were four bridges and a tunnel. The most majestic of these, the George Washington Bridge, was built in less than four years, a speed of construction difficult to imagine almost a century later. Two of the bridges, the Goethals and the Outerbridge Crossing, opened to traffic on the very same day (the Outerbridge Crossing was named after then-Port Authority chairman Eugenius Outerbridge, not for being the outermost bridge connecting the two states).
Four bridges and a tunnel in six years: No one under the age of 50 has witnessed such a feverish pace of public construction in the United States, but most realize that there was a phase of American history, now past, during which the infrastructural skeleton of the country was more or less built. Today, our relationship with infrastructure is better characterized by frustrating stalemates, painful service disruptions, system failures, decade-plus project timelines, and delayed maintenance—all signs of a past commitment that is increasingly expensive and therefore difficult to uphold. The biggest part of the Port Authority’s current capital budget, in fact, is dedicated to simply maintaining its existing facilities in a “state of good repair,” with four of the aforementioned bridges and tunnels requiring billions of dollars’ worth of capital investments in the coming decade just to keep the lights on. This, of course, is at the expense of new projects that could expand the regional transportation network.
The infrastructural wealth of midcentury America established a new global standard that had to exceed the limitations of sustainability in order to achieve its innovative heights, even though no one involved could have known those limitations at the time. Last week, Washington DC shut down its transit system for a full day due to maintenance needs that could no longer be postponed, yet another example of the technical debt that today’s transit passengers have inherited from the past. A comparable analogy (that I can’t track down now) about the 2000 dot-com bubble emphasized the underrated benefit of that boom: Bubbles create useful infrastructure that future generations get for free and build on top of, even though the businesses that drive that creation turn out to be bad bets for which their original investors pay the price. The last century’s public investment in hard transportation infrastructure may have come with a bill that nobody could ultimately pay, but its built legacy was real and a boon to the millions who still got to use all of those trains, roads, and bridges in subsequent years.
Interestingly, the twentieth century’s prolific construction perfectly mirrors the values of the generations who came of age during that boom: a belief in the importance of “hardware,” from freeways to space shuttles to suburban homes. The last era’s middle class turned its disposable income into physical assets: homes, then cars, then vacation homes, plus all the accessories that support those assets. The cohort now coming of age, we’re told, values “experiences,” and while this is not a thinkpiece about millennials, few can deny that the value of expensive toys is declining relative to that of less tangible wealth. A survey of what’s being built today in the United States reveals little growth in heavy infrastructure but much more growth in “lighter” areas of investment: private real estate, telecommunications, and, of course, software of all kinds. We’re also still adept at layering new facades over the same foundations, or refactoring those established systems—as parodied by the great Onion headline, “We Must Repaint Our Nation’s Crumbling Infrastructure.” Even the Port Authority, over the last 40 years, found its principal focus to be the World Trade Center, a real estate project outside of the agency’s traditional scope, pursued at the expense of its mandate to improve regional transportation. New economic imperatives envelop even the unlikeliest participants.
This week, New York magazine published a guide to traveling with friends, or “flopping” as the author calls that particular brand of Airbnb-enabled group travel. The background of the piece, though not the point, is the enormous disparity in owned assets between generations past and present, as the first paragraph alludes: “We temporarily live well beyond our means in five-bedroom New England country homes…” The young author of the article and his cohort don’t necessarily aspire to own any of the grandiose properties that they’re able to enjoy for a few days or a week: The Hudson Valley farmhouses and Sicilian seaside manors are understood as the wealth of generations past (including generations much older than the baby boomers), inherited by our present society but too expensive to buy or even use for more than a weekend.
The author of the article acknowledges the economic reality that powers the group flop, the intersection of an advantageous digital marketplace for underused assets and a financial condition that necessitates pooled funds: “The proliferation of extravagant home listings on sites like Airbnb and VRBO—made affordable if you bring enough people to fill the myriad bedrooms—has made traveling a group activity.” So many of those houses and estates are now sunk costs of society, artifacts that live on after the disappearance of the realities that birthed them.
At the heart of the Airbnb flop phenomenon, in other words, is a crucial duality of the present condition: an economic revolution wrapped in a technological one. It’s easy to attribute Airbnb’s revolutionary impact on travel to the prior lack of an efficient marketplace for unused properties—a gap that the internet was well-suited to fill—but Airbnb’s expansion of the market for short-term rental properties reveals the degree to which those properties had been abandoned by their previous owners while a more enthusiastic, largely younger, group that wanted access but couldn’t afford it waited in the wings. The same can be said of the automobile industry’s overproduction and the excess capacity that has subsequently become available on ridesharing platforms. The old owners and the new renters are largely the same socioeconomic class from two different eras, illustrating the sea change in that economy that is as much responsible for the flopping phenomenon as any web interface is.
Bruce Sterling coined the term favela chic to describe the condition of material scarcity plus immaterial abundance—an increasingly widespread reality now. There are examples of favela chic everywhere if you pay attention. Most are more dystopian than flopping vacations, like the Google employee who lived in a Winnebago on the company headquarters’ parking lot to save money. Google and Facebook, interestingly, are wellsprings of the immaterial abundance that put the “chic” in favela chic. They also generate an overflow of material abundance that their employees and immediate neighbors might be lucky enough to capture. Anyone further removed can buy in another way, by giving the platforms free content and personal data, while relying on services like Airbnb for other morsels of real wealth that spill over from more abundant times and places. It may be getting easier to live off the fat of the land but it’s also getting harder not to.
One trope of the 21st century, almost too mundane for discussion by now, is that nobody knows phone numbers anymore, aside from the few they permanently memorized through thousands of dials on bygone landlines. Birthdays are similarly difficult to remember for lack of necessity: We remember the ones we learned before a certain point (probably the point when Facebook assumed that responsibility) but have added few dates to the memorized list since that moment. Even street addresses used to demand temporary memorization until the iPhone-map-ridesharing stack began eroding the need for that small effort, a development that now often frees us from seeing those numbered addresses at all when navigating to a new place.
In contrast to the decline of many technologies, there is no sorrow or nostalgia about the waning presence of phone numbers, addresses, and comparable scrap information in contemporary conscious life (and not even much sadness about knowing fewer birthdays). The clutter of ugly, messy numbers is disappearing from life and we’re better for it, even if it doesn’t truly free up mental “space” for us to invest in higher pursuits.
All that information we’re forgetting—and this will be increasingly true in the coming years—didn’t actually disappear. It just got a better interface. A better UI/UX, if you will. The information is under the hood, working invisibly. Phone numbers and street addresses have long served as protocols that enable humans to interact with and navigate complex landscapes by constituting a legible layer atop something illegible. Like an API, those numbering schemes enable two systems to talk to one another. One of those two systems can even be people.
Thanks to computers that fit in our pockets and go everywhere with us, as well as equally ubiquitous software and connectivity, it’s possible to add another layer on top of the ugly prosaic numbers, translating them back into human terms that are still indexed the same way and seamlessly mapped to the underlying systems. Machines are the ones that should be dialing 867-5309 or finding directions to 767 5th Avenue; people should be thinking of and searching for “Jenny” or “Apple Store.” Embedded within Facebook is a birthday API: By using Facebook you tell the platform who your friends are, and it tells you when it’s their birthday. The date is the invisible key; the value of memorizing that date perhaps just a relic of analog calendar technology that’s receding into the past. Life is getting easy, isn’t it?
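The lookup described above—a person thinks “Jenny,” the machine resolves and dials the digits—can be sketched in a few lines of Python. This is purely illustrative, not any real Facebook or Apple interface; the names and numbers are the essay’s own examples.

```python
# A human-legible layer over "ugly prosaic numbers": people use names,
# machines resolve them to the identifiers that do the invisible work.
# Entries are the essay's illustrative examples, not real data.
contacts = {
    "Jenny": "867-5309",              # phone number the machine dials
    "Apple Store": "767 5th Avenue",  # street address the machine navigates to
}

def resolve(name: str) -> str:
    """Translate a human-friendly name into the underlying identifier."""
    return contacts[name]

# The person searches for "Jenny"; the number stays under the hood.
print(f"Dialing {resolve('Jenny')}...")
```

The dictionary is the whole trick: the numeric key keeps indexing the underlying system, while the interface only ever shows the human the name.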
Steve Jobs at home
The API and the user interface are powerful metaphors for what Marc Andreessen’s statement about “software eating the world” actually means. Software is not eliminating things, but eating them in the sense of absorbing or swallowing them. The iconic photo of young Steve Jobs at home among his few beautiful possessions (see above) foreshadows Jobs’ own role in putting so much of daily life under the hood. His life’s work was enabling you and me to own a few sleek, perfect devices that replaced entire file cabinets, stacks of paper, CD shelves, printers, power strips, and tangled wires in our homes. Most of that personal infrastructure for managing one’s information now hides within a few cubic inches wrapped in glass, aluminum, and plastic (or a data center hidden in the Oregon woods), to the extent that we’ve embraced its potential. Software is the obvious reason. Steve Jobs and countless contemporaries shepherded mankind into the “smooth” environments we now inhabit, which offer few clues about the hidden work being done by the invisible code.
I’ve written previously about the smooth environment that digital technology creates and its implications for physical space in cities (specifically online retail, in that post). The sparse quality of Jobsian space attests to a fattening layer of software that will keep absorbing more elements of meatspace. To occupy that space as a human is to exist above the API. The texture of economically advanced urban areas exhibits this condition, increasingly comprising coffee shops, public space, and showroom-style retail outlets—everything that software can’t eat, either because it’s too tactile (food and drink) or because machines haven’t learned how to do it yet.
As the software layer absorbs more of everything, it becomes important for humans to decide what they want their relationship to that layer to be: above it (as a user), interacting with it (as a builder), competing with it, displaced by it, or oblivious to its presence. When you hear the advice that everyone should “learn to code,” it’s shorthand for a more nuanced assessment of the software-consumed landscape and the embedded judgment that it’s better to be one of the builders than either a mere user or a refugee from a domain the software ate.
John Robb describes the phenomenon of “turking”—humans doing work that software will ultimately do but isn’t able to yet. The rise of machine learning massively expands the range of tasks that fall into this category. Take Microsoft Excel, which can be understood even today as a tool for turking. Excel users typically employ the program toward repetitive tasks that basic software could perform. It’s a training ground that bots could observe, learn from, and quickly replicate. Excel spreadsheets, like memorized phone numbers, won’t be missed by many people when they go under the API, but the jobs that Excel supports will certainly be missed if the transition isn’t properly anticipated.
Interfaces and platforms free us from responsibility and turn us into users and operators, unless we actively redefine our relationship to those smooth surfaces we end up on top of. But there’s also no time to learn how everything works, so everyone accepts some degree of automation in daily life. The rough ocean of technological change forces a constant reorientation to this landscape, a perfect balance of trading down and trading up. Let your iPhone memorize all the numbers but hold onto and strengthen the faculties that make you human, and know that the big solid middle range will always be melting into air.
I don’t follow politics as closely as many and I don’t intend to write about politics on this blog, but I care plenty about culture, media, and anything that smacks of zeitgeist, so of course I am paying attention to the current election cycle and will now say a few words about Donald Trump.
The most fascinating aspect of Trump’s present success is the way it demands that its spectators form a theory about why it’s happening and about the overall structure of the electoral process. One politician interviewed on MSNBC the other night said, “We can’t pick a president the way we vote for American Idol,” which surprised me because the establishment actually wants us to think that we do pick a president that way—by what is essentially a popular vote—even though of course we don’t. Trump’s success so far represents the apparent breakdown of the “real” process by which parties nominate establishment candidates to run in the general election.
This week, John Oliver’s monologue about Trump exploded into my Facebook feed so I went ahead and watched. It’s an interesting addition to the Trump conversation, mostly because of how it exposes the limitations of the political metadialogue that The Daily Show pioneered and for which Oliver now carries the torch. The bit has that familiar structure—a witty buildup and preposterous conclusion (the Donald Drumpf call to action) with a serious take sandwiched in between, all of which gets labeled “important” by so many who share it on social media.
At one point the exasperated Oliver says “a candidate for president needs a coherent set of policies” and that with Rubio and Cruz, the lesser two of three evils, at least we all know where they stand on the issues. Of course Oliver’s right, but he’s also so deeply wrong, revealing in his statement that many smart people haven’t learned the necessary lessons from the Trump campaign. Far more perceptive is Matt Taibbi, who makes a better point: “It turns out we let our electoral process devolve into something so fake and dysfunctional that any half-bright con man with the stones to try it could walk right through the front door and tear it to shreds on the first go.” Complaining about Trump’s lack of policies, or digging through the archives for two contradictory public statements he’s made, not only fails to “eviscerate” Trump, as so many said Oliver did, but feeds the beast that Taibbi describes so perfectly. John Robb calls Trump’s campaign an insurgency, in which the lack of policies is an intentional strategy to unify his base and amplify his appeal.
Yes, unity: One thing everyone agrees about, whether they love him or hate him, is that Donald Trump is a brand. That quality is the true engine of his campaign’s success, and it doesn’t depend upon policies, platforms, or rational persuasion of any kind. Kevin Simler has argued that most effective advertising does not work via “emotional inception”—by establishing subconscious associations with desirable outcomes that the advertised product will supposedly bring about. Instead, brands operate through what Simler calls cultural imprinting, “the mechanism whereby an ad, rather than trying to change our minds individually, instead changes the landscape of cultural meanings—which in turn changes how we are perceived by others when we use a product.” In elections, candidates are the products being marketed and Trump, instead of taking positions on issues—the political version of emotional inception—focuses on creating a new cultural imprint that resonates with a huge mass of people at a seemingly primal level.
Viewed through the lens of cultural imprinting, Trump’s success makes more sense. He has created a political persona on the foundation of raw personality and crude, loud signals: a political EDM festival. Emphasizing policies would only dilute this brand and weaken its power. Trump has reoriented the landscape of the current election by putting up a tent that a lot of people want to be under and by carving out a domain in which nobody wants to associate themselves with Rubio or Cruz, issues be damned. Trump certainly does not thrive by convincing anyone of anything or by making rational appeals of any kind. Amazingly, what he’s doing is how the electoral process has worked for some time, but nobody has embraced that dynamic or hacked the entire process so effectively. Of course, none of this has anything to do with whether Trump should be president, but the John Olivers of the world don’t yet understand how little that matters at the moment.