Walled Gardens & Escape Routes

Slack and Snapchat are two of the platforms that best embody the current technological moment, the fastest recent gainers in Silicon Valley’s constant campaign to build apps that we put on our home screens, use constantly, and freely entrust with our locations, identities, relationships, and precious attention. One of those products is for work and one is for play; both reflect values and aesthetics that, if not new, at least differ in clear ways from those of email, Facebook, and Twitter—the avatars of comparable moments in the recent past.

Recently I compared Twitter to a shrinking city—slowly bleeding users and struggling to produce revenue but a kind of home to many, infrastructure worth preserving, a commons. Now that Pokemon Go has mapped the digital universe onto meatspace more literally, I’ll follow suit and extend that same “city” metaphor to the rest of the internet.

I’m kidding about the Pokemon part (only not really), but the internet has nearly completed one major stage of its life, evolving from a mechanism for sharing webpages between computers into a series of variously porous platforms owned, or about to be owned, by massive companies that have divided up the available digital real estate and found (or failed to find) distinct revenue-generating schemes within each platform’s confines. The app is a manifestation of this maturing structure: each app is a gateway to one of these walled gardens and a point of contact with a single company’s business model—far from the messy chaos of the earlier web. So much urban space has been similarly carved up.


  Illegible space: the Bonaventure Hotel (source)

If Twitter is a shrinking city, then Slack and Snapchat are exploding fringe suburbs at the height of a housing bubble, laying miles of cul-de-sac and water pipe in advance of the frantic growth that will soon fill in all the space. The problem with my spatial metaphor here is that neither Slack nor Snapchat feels like a “city” in its structure, while Twitter and Facebook do by comparison. I never thought I’d say this, but Twitter and Instagram are legible (if decentralized): follower counts, likes, and retweets signal a loosely quantifiable importance, the linear feed is easy enough to follow, and everything is basically open by default (private accounts go against the grain of Twitter). Traditional social media has by now become a set of tools for attaining a global if personally tailored perspective on current events and culture.

Slack and Snapchat are quite different, streams of ephemeral and illegible content. Both intentionally restrict your perspective to the immediate here and now. We don’t navigate them so much as we surf them. They’re less rationally organized, mapped cities than the postmodern spaces that fascinated Fredric Jameson and Reyner Banham: Bonaventure Hotels or freeway cloverleafs, with their own semantic systems—Deleuzian smooth space. Nobody knows their position within these universes, only the context their immediate environment affords. Facebook, by comparison, feels like a high modernist panopticon where everyone sees and knows a bit too much.

Like cities, digital platforms have populations that ebb and flow. The history of urbanization is a story of slow, large-scale, irreversible migrations. It’s hard to relocate human settlements. The redistributions of the digital era happen more rapidly but are less absolute: If you have 16 waking hours of daily attention to give, you don’t need to shift it all from Facebook to Snapchat, but whatever you do shift can move instantly.

The forces that propel migrations from city to city to suburb and back to city were frequently economic (if not political). Most apps and websites cost nothing to inhabit and yield little economic opportunity for their users. If large groups are not abandoning Twitter or Facebook for anything to do with money, what are they looking for?

To paraphrase Douglas Adams, people are the problem. As people, we introduce some fatal flaw to each technology we embrace, especially technologies that facilitate communication, and especially when they amplify some basic weakness in our nature. Almost always, the experience of using a technology can’t be regulated or moderated properly, some misuse of it becomes rampant, and that flaw gradually or quickly drives users to another platform that solves the particular problem. Then the cycle begins anew.

Slack is not the unbundling of another platform’s chat feature, then—it’s the reverse unbundling of email, an antivenom for email’s problems. The familiar version of unbundling is splitting off a feature from a product and building a more robust standalone product out of it. What I’m describing now is an equally powerful and prominent phenomenon in the evolution of technology.

Email, in work and in personal life, has strayed far from its origin as a joyous, playful technology that early adopters used to send one another jokes. It’s more essential than ever now, a supporting infrastructure for life in every sense, but it’s also something we feel the urge to hide from on vacation. We hate it. Email’s flaws are potent: Information lives forever; everyone has equal access to everyone else; spam marketers have optimized it as a tool for their nagging. Even the most powerful people in the world toil over email for an hour daily, while strategies like Inbox Zero have emerged to help us escape from under its burdensome weight.

Our uneasy dependence on email in professional and personal life created a massive opportunity for a tool that isolated its benefits and discarded its shortcomings. Slack embodies this opportunity. It offers freedom from the oppressive inbox, in which you own everything that ends up there, and establishes a smooth space in which the most important information reaches its recipients indirectly but effectively. The streamlike work patterns enabled by Slack, which Venkat Rao calls laminar flow, “avoid the illusion of perfectibility of information flows implicit in notions like Inbox Zero altogether.”

Jenna Wortham, contemplating Snapchat in the NY Times, suggested that “maybe we didn’t hate talking—just the way older phone technologies forced us to talk.” Texting, she thinks, did for phone calls what Slack promises to do for email. She proceeds to praise Snapchat for its reverse unbundling of social media and even SMS: the escape from the coldness and flatness of text-based communication, the intimacy absent from Facebook and Twitter, the triumph of the stylistic over the literal. An essay by Ben Basche makes a similar point: “Perhaps the task of constantly manicuring a persistent online identity — of carefully considering what effect your digital exhaust will have on your ego — is beginning to weigh on people.” Traditional social media, it seems, has reached the point of maturity that email already attained: more rigid and less playful. We’re looking for escape routes and Snapchat is one.

If we’ve learned anything from recent technology, we can expect Slack and Snapchat to reveal their own serious flaws over time as users accumulate, behaviors solidify, and opportunists learn to exploit their structure. Right now most of the world is still trying to understand what they are. When the time comes—and hopefully we’ll recognize it early enough—we can break camp and go looking for our next temporary outpost.


The Human as Interface

Thoreau said in Walden that “we do not ride on the railroad; it rides upon us.” He was talking about the true cost of the ride—that while he embarked on a trip by foot and arrived at his destination that same day, you would have to first get a job and work for a day in order to earn enough money to make the same trip by train. “Instead of going to Fitchburg, you will be working here the greater part of the day. And so, if the railroad reached round the world, I think that I should keep ahead of you.”

Generalizing his assessment, Thoreau found that so much in the modernizing world bore a similar cost. The railroad and the industrialization it embodied was a powerful enough force to remake society in its image because of the demands placed upon us to build, maintain, and operate those systems, as well as our willingness to accept those demands in exchange for speed and efficiency.


  People helping computers talk to people (source)

It makes sense why all of that heavy infrastructure weighs us down, why it’s such a chore, just as Thoreau’s image of a train riding on tracks made of people makes such vivid sense, or as a household cluttered with stuff feels so oppressive. But haven’t we escaped the era of stuff and ventured into the era of information, finally mastering the former? Software shouldn’t be able to “ride upon us” like industrial machinery can. If anything it should be loosening the physical world’s grip on us (and in many ways it is). Information storage is basically free and processing power increases exponentially, and yet we don’t feel certain that we are freer than we used to be, or any less burdened. Why does email now ride upon us like trains once did?

Commenting on artificial intelligence as it reaches a tipping point in its maturity, John Robb recently observed that “we can’t even design systems that work for human beings”—that is, we’re designing AI as a godlike force that works in mysterious ways, not a true agent of our own objectives, and ensuring that we will somehow bow to it, just like we did to the industrial behemoths that we built in a previous era.

Put another way: Every medium-sized company with a competent customer service operation automates a large chunk of that work. When you call an airline or a credit card company, you pass through a tree of often-frustrating multiple-choice menus before getting your issue resolved. You only get to speak to a live operator after exhausting the menus’ abilities. That process of escalation is the Human as Interface—a reversal of roles.
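A minimal sketch of that escalation path, in Python (the intents, handlers, and the escalate_to_human step are hypothetical illustrations, not any real company’s system): the automated tree resolves whatever it can, and a person is pulled in only as the interface of last resort between the caller and the machine.

    # Toy model of an IVR escalation path: the automated menu handles what it
    # can; a human is invoked only as the interface of last resort.
    AUTOMATED_HANDLERS = {
        "check balance": lambda req: f"Your balance is {req['balance']}.",
        "reset password": lambda req: "A reset link has been sent to your email.",
    }

    def escalate_to_human(request):
        # The Human as Interface: a person translates between the caller's messy,
        # unclassifiable request and the system that actually holds the answer.
        return f"[live agent] interpreting: {request['utterance']!r}"

    def handle_call(request):
        handler = AUTOMATED_HANDLERS.get(request.get("intent"))
        if handler:
            return handler(request)        # the machine resolves it on its own
        return escalate_to_human(request)  # only the leftovers reach a person

    print(handle_call({"intent": "check balance", "balance": "$42.00"}))
    print(handle_call({"intent": None, "utterance": "my card only works on Tuesdays"}))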

The Human as Interface is the troubling but darkly funny outcome of our white-hot progress in the digital realm. An interface is traditionally a point of contact between people and computers (or between hardware and software, or two separate software systems) that eases their interaction and translates between two modes of communication.

Software is eating human work so fast that there’s less of a role for interfaces between humans and computers, as the latter can finish more and more of their work without humans dropping in partway through the process to guide them. At the same time, that software is doing even more of the jobs that humans used to do and eliminating the need for those jobs. Finally, the various activities that computers do are becoming so sophisticated that humans not only can’t understand them, we don’t even have a language for describing them. The gap between human and computer abilities is either closing or widening, depending on how highly you regard humans, and there’s a shortage of a different kind of interface or API: the kind that mediates between software and its human users in the transitional phase before a computer can handle that step too.

Thus, machines need people to translate between themselves and their users—the Human as Interface. This is a form of turking, in the sense that it’s yet another role humans only fulfill until software learns how. This type of work is found at every ability level: Customer support reps who handle the overly complex issues that automated systems escalate. Convenience store employees who help customers get unstuck from the self-checkout machines that replaced all the other employees. Explainers who can communicate to a broader audience a concept like machine learning and why it matters. IT help desks.

It’s surely a sign of increasing economic polarization that a small percentage of specialized individuals build and run the advanced systems that transform everyone else into a user in both their work and their free time. For this majority, their jobs at best await imminent automation and otherwise already function as interfaces for machines (meanwhile, everyone is a user of some kind in their free time). No matter the reason for this condition, it’s hard to pretend that we don’t somehow work for computers, or that software doesn’t ride upon us as heavily as the railroads did.


Computers Can’t Do Party Tricks

“The horse has lost its role in transportation but has made a strong comeback in entertainment.”

-Marshall McLuhan

My friend Brendan can glance at satellite photos and immediately identify their locations in the world, a skill that might have qualified him as a warlock or wizard hundreds of years ago (if satellite photos had existed then). Last week, he and I were trying to figure out whether he could somehow cash in on this skill or use it to advance his career. No, we quickly decided: This is the epitome of a task that computers have evolved to do better than people, whether anyone demands that particular service or not. But seeing Brendan identify Google Earth images in person is still an impressive feat, a great party trick.

That computers can recognize images more accurately than the most adept humans, and then instantly cross-reference the results with any other set of data, requires no explanation. It’s less certain but a true enough generalization that almost anything we call work today, machines will do eventually (as we concurrently keep inventing new things to call work). Technological progress not only eliminates the need for certain tasks in their traditional domains, thereby lowering those tasks’ economic value, but it also gradually eliminates—or amputates, as McLuhan said—the faculties certain people had developed to better perform those tasks. The printing press drastically reduced the need for hand copying as a means of reproducing texts; in the much longer run, it also led to a weakening of the widespread practice of writing by hand. We’ve seen the same phenomenon time after time: technology devaluing the job and amputating the corresponding skill.

But the automations and amputations of technology are anything but the end for what they supposedly replace. Six hundred years after Gutenberg, almost everyone literate can still write by hand. And handwriting isn’t an exception. Most of what past generations did for money is still being done somewhere, probably for less money and at a smaller scale—a reality only partially explained by William Gibson’s observation that the future is never evenly distributed after it arrives.

The world is weirder and more complex than macroeconomics usually recognizes. What doesn’t contribute to GDP still exists and often still matters. Many abilities that technology debases economically, like Brendan’s knack for recognizing satellite photos, reappear in a hundred other domains: as competitive sports, hobbies, artisanal crafts, tourist attractions, historical preservation objects—the cultural substrate that underpins the productive economy where quantifiable value is added. At the very least, they often make great party tricks. McLuhan observed that the horse thrived as a form of entertainment after becoming obsolete as a mode of transportation. To say that one technology “killed” another obscures the reality that it only fragmented its predecessor and may have actually made its presence in the world more interesting.

Computers can add value but they can’t do party tricks. No one at a party gives a shit what a computer can do. Everyone at a party cares what a person can do.

Computers can’t party by definition. But computers excel at being productive and can add more value than any human. One computer can do the work of thousands of erstwhile humans. The cutting edge of technology, always pressuring mankind to justify its existence, reminds us that the productive economy isn’t the best place to affirm our own humanity.

One useful predictor of technology’s evolution, the Varian Rule, states that what the rich do today is what everyone else will be doing in the future. This has held true for nearly every major development, from the printed word to electricity to trains to cars to air travel to personal computers to iPhones. A companion to the Varian Rule might be this: What technology renders obsolete today, the rich will eventually enjoy again. The resurgence of artisanal everything among the urban consumer class is certainly a response to the dominance of industrial food production, and of course rich people are the most likely to be found riding horses in modern society today.


 The horse as entertainment (source)

Cars, whose trajectory the Varian Rule predicted, were the most transformative technology of the hundred years preceding the computer. They began as an expensive luxury until Henry Ford brought them to the masses. G.K. Chesterton described the modern transformation of so many luxuries into binding necessities—a trend exemplified by the automobile—as a degrading force (“I say the thing should be kept exceptional and felt as something breathless and bizarre”), and there is no question that driving has by now become an inescapable chore for the majority of those who have to do it, as well as a scourge on the urban landscapes we inhabit.

The car as we know it might soon get its own chance to become obsolete as a form of transportation. The long-anticipated self-driving car sits at the precise (figurative) intersection of every trope about technological progress: automation (it will eliminate jobs), social benefit (it will save us from killing ourselves behind the wheel) and massively profitable business opportunity (it will drastically reduce the cost of transportation).

If we’re lucky, autonomous vehicles will deliver the human-driven car back to the domain of entertainment and luxury from whence it came, alongside the horse. Perhaps never having to drive would restore some magic and breathlessness to the act on the special occasions when we still do it. If we’re less fortunate, however, self-driving vehicles will further solidify the car culture in which we’ve been stuck for the past century and from which we need to escape. Right now, we fulfill two roles in the car, driver and passenger, and only one of those promises to go away anytime soon, so the outcome will be a mixed blessing at best.


The Digital NIMBY

“The landscape was always made by this sort of weird, uneasy collaboration between nature and man. But now there’s this third coevolutionary force, algorithms…and we will have to understand those as nature. And in a way, they are.”

-Kevin Slavin

If hundreds of cars suddenly started flooding your street in your otherwise quiet neighborhood, you’d probably wonder what changed. Maybe a mall or a Trader Joe’s just opened nearby, something that could make the house you own more valuable.

It turns out you’re not so lucky. Not only did nothing materially or permanently improve in your area, but something got worse: Construction closed off a nearby thoroughfare and Waze identified your block as the fastest detour. Now, your little residential street is an arterial highway without the capacity to handle its new traffic. Your home is no more valuable and there’s nothing you can do to stop the cars.

The above scenario, described in this week’s Washington Post, has become the latest in a long list of nuisances that worry and occasionally torture suburban inhabitants. Map apps bail out their increasingly dependent users with not merely an acceptable detour to follow, but the optimal one—which is quantified and therefore the same for everyone, until the best option causes too much congestion of its own. Instead of randomly dispersing around a road closure, a platoon of Waze-guided cars can march along its detour with the precision and endlessness of an ant colony.
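To make the mechanics concrete, here is a minimal sketch in Python (a toy graph with invented street names and travel times, plain Dijkstra rather than anything Waze actually runs): once the thoroughfare closes, every query returns the same “optimal” detour, so every guided driver lands on the same residential block instead of dispersing randomly.

    import heapq

    def shortest_path(graph, start, goal):
        """Plain Dijkstra over a dict-of-dicts graph; returns (minutes, path)."""
        queue = [(0, start, [start])]
        seen = set()
        while queue:
            minutes, node, path = heapq.heappop(queue)
            if node == goal:
                return minutes, path
            if node in seen:
                continue
            seen.add(node)
            for nxt, weight in graph.get(node, {}).items():
                if nxt not in seen:
                    heapq.heappush(queue, (minutes + weight, nxt, path + [nxt]))
        return float("inf"), []

    # Toy road network: edge weights are travel times in minutes (all invented).
    roads = {
        "home": {"thoroughfare": 2, "elm st (residential)": 3},
        "thoroughfare": {"downtown": 5},
        "elm st (residential)": {"downtown": 9},
        "downtown": {},
    }

    print(shortest_path(roads, "home", "downtown"))    # normally: via the thoroughfare

    closed = dict(roads)
    closed["home"] = {"elm st (residential)": 3}       # construction closes the thoroughfare
    print(shortest_path(closed, "home", "downtown"))   # now every driver is sent down Elm St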

In negotiating the built environment, we have always sacrificed the local to the global: Major public improvements that benefit broad constituencies—airports, for example—have to go somewhere, even though living next to one confers extra costs rather than extra benefits. “NIMBY” (“not in my backyard”) eventually emerged as the name for this political resistance to large-scale developments in one’s neighborhood. The phenomenon is most associated with suburban homeowners not unlike those terrorized by Waze reroutes today.

Stadiums and power plants still get built in people’s backyards, but now a new type of globally eminent system, the digital platform, has priorities to assert against its local citizenry. Without the unforgiving tradeoffs of geography, one would think, the digital should be easier to live with and compromises less costly to achieve. The internet doesn’t need to “be” anywhere, and certainly not in anyone’s backyard.

Unfortunately for the digital NIMBY, by not being anywhere, the internet is everywhere, as the Waze example teaches. The digital is no longer a place to hide from meatspace—perhaps it never was, but it’s less so now than in the message-board ‘90s. In every sense, the digital world has been reinscribed upon its geographical counterpart, furnishing the logic that guides so much activity in the latter. What better demonstration of how closely the two are intertwined than a traffic app temporarily “activating” a block for use?


Waze (source)

To fight against a proposed airport expansion or condo tower involves clear political processes, known stakeholders, and the possibility of tangible victory: If you block the development and it goes elsewhere, you won that battle. The absolute fluidity of the algorithmic landscape means that nothing is ever finished. The victims of a Waze reroute in Takoma Park, Maryland, used the app itself to report fake speed traps and accidents, turning the platform’s own logic against it to gain momentary relief. The temporary success of that strategy, however, is dwarfed by its long-term futility—the algorithms learn and adapt, and today’s victory is tomorrow’s start from scratch.

For much of history, maps described the world more than they prescribed behavior in it. The internet has brought about a near-total inversion of that relationship, isolating the map’s prescriptive qualities—informing certain decisions—and tightening that feedback loop while expanding the loop’s user base. The descriptive function of maps has dwindled accordingly: We often use maps without looking at them, as the embedded logic behind a Yelp search or Tinder swipe. The internet has similarly inverted the descriptive-prescriptive dynamic in almost every domain it touches by turning data into decisions without involving the inefficient middleman, the human being.

Kevin Slavin, in perhaps the best TED Talk ever, describes the algorithmic underpinnings of Netflix and similar services as the “physics of culture.” He concludes by pointing toward algorithmic trading and its imperative of instantaneous speed, which has hollowed out buildings and strung underground fiberoptic cables between New York and Chicago to produce the necessary edge in financial markets. Yes, algorithms can route extra traffic down one’s street, but they also literally shape buildings and terraform the earth, completing the unification of two universes that always seemed separate. Keeping these phenomena out of one’s backyard is the uncharted territory of NIMBY politics.


Scaling Bias

Facebook got in trouble this week…sort of. If you were already sufficiently awed by the power Facebook has amassed then it may not have surprised you to learn that Facebook’s method of deciding what rises to the top of your newsfeed and what drops out of it involved a process that could be considered “biased,” perhaps even biased in a way that reflects the left-leaning humans behind the curtain.

Because Facebook is a platform that displays and curates news, with an audience that dwarfs that of any self-identified news outlet, we project our standards for journalistic integrity onto an entity that never gave us a good reason to hold it to those standards in the first place. The best reason I can come up with is “Facebook’s grip on discourse is so powerful that I can’t face the possibility that it would distort that discourse in any way.” Again, the outcry seems to come from an innocence we projected onto Facebook, not anything the company said about itself.

If bias is a problem, Facebook is certainly biased in more important domains than American politics, and our focus on that narrow, concrete example shows the limits of our imagination and skepticism. Facebook’s ultimate bias, expressed throughout its fabric, is toward growth: increasing its total active users and increasing the number of hours that existing users spend on Facebook. If the obvious behavior of Facebook’s algorithms in service of that goal doesn’t alarm us as much as a liberal bias in its surfacing of political news (something we’ve seen time and time again), it’s only because we’ve so internalized the values of late capitalism that we lack any vocabulary for criticizing Facebook’s shameless harvesting of our attention.

Another of Facebook’s biases, for example, is its philosophical and practical preference for the subjective over the objective, which again serves the goal of user growth: Mark Zuckerberg would rather show you your own personal feed tailored to your past behavior (hewing cautiously to the familiar and unadventurous) than expose you to a more objective or expansive version of reality. I’m not making a statement here about which is better—that’s a longer essay—but it’s certainly a bias.

The most valuable lesson of this Facebook controversy has been the discussion of what an algorithm is, exactly, given the involvement of a human editorial team at Facebook. Like those human editors, algorithms are never objective. They’re the opposite, in fact: human tools for achieving desired results more consistently. An algorithm is an engine of bias more than an antidote, the codification and repetition at scale of an outcome that an individual or group wants to achieve. Consistently and reliably, algorithms achieve only the results that their creators intended and little else. If algorithms were less consistent and reliable they would be more random and therefore more objective, less tied to one group’s intentions. The manual involvement of an editorial staff in Facebook’s news ranking effort is no contradiction of that dynamic: Once algorithms can do tasks those editors are doing now, they will.
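As a toy illustration of that point (nothing here is Facebook’s actual formula; the features and weights are invented), a feed ranker is just its creators’ preferences written down and applied at scale: change the weights and you change, consistently and for everyone, what rises to the top.

    # A hypothetical feed ranker: the weights ARE the bias, codified and repeated
    # at scale. The features and numbers are invented for illustration only.
    WEIGHTS = {
        "predicted_clicks": 3.0,      # growth bias: favor whatever keeps people engaged
        "recency": 1.5,
        "from_close_friend": 2.0,     # subjectivity bias: favor the already-familiar
        "outside_your_bubble": 0.2,   # set this low and the unfamiliar quietly vanishes
    }

    def score(post):
        return sum(WEIGHTS[f] * post.get(f, 0.0) for f in WEIGHTS)

    def rank_feed(posts):
        return sorted(posts, key=score, reverse=True)

    posts = [
        {"id": "engaging but familiar", "predicted_clicks": 0.9, "from_close_friend": 1.0},
        {"id": "important but unfamiliar", "recency": 1.0, "outside_your_bubble": 1.0},
    ]
    for post in rank_feed(posts):
        print(post["id"], round(score(post), 2))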

And reality, to the extent that it’s even separate from the internet anymore, does not always perform better. Benedict Evans points out, “If Google or Facebook have arbitrary and inscrutable algorithms, so do people’s impulses and memories, and their decisions as to how to spend their time.” Facebook’s black box performs no worse than its users, but because it’s a more controlled and regulated environment, it offers a lower probability of exposure to unexpected or counterbalancing forces, in politics or elsewhere—less possibility of correcting the algorithms we already embody. For any individual, Facebook is a monoculture that is always refining itself to be more how it already is, and refining us to be more how we already are, or how Facebook wants us to be.

The issue of bias, then, pales in comparison to two broader problems, one Facebook’s and the other ours. Facebook’s problem is that it wants to comprise as much of our experienced reality and waking life as possible, and it’s actually been adept at increasing its share of that reality. That’s a more ambitious goal than programming our political preferences, although it encompasses the latter.

Our problem is that we willingly accept the terms Facebook offers us, and have become increasingly engaged with the platform as a society but not critical enough of what it algorithmically feeds back to us. At best, Facebook will keep giving us what it thinks is best for us; at worst, it will give us what is best for Facebook. Most likely, we’ll keep getting a bland mixture of the two. The total hours we all spend immersed in Facebook’s mirror universe will continue to be a direct measure of Facebook’s success and of its grip on our collective minds.

Think that sounds like a bad deal? Log off.


An Environment Smarter than You Are

“The merit of style exists precisely in that it delivers the greatest number of ideas in the fewest number of words.”

-Victor Shklovsky

“Basic” is the best insult to emerge in the last few years, a slicing, leveling adjective perfect for the present zeitgeist. Like all great slang, it names something we had no single word for until now, and we knew immediately what it meant the first time we heard it. In a period of information overload, the word compresses whole blog posts and essays that might previously have had to be written, sparing us from reading them. We can now sweep the North Face jackets and pumpkin spice lattes away with a single gesture and free up time to pursue our higher callings or refresh Twitter more furiously.

Awareness of the basic is a necessary value of the present age, heavier than it seems, but not yet properly appreciated. The basic, to attempt a more rigorous definition, is that which adds nothing new: no information, no value. There’s a quote I can’t track down but have always attributed to David Mamet: bad drama merely affirms what we already know. Basicness is bad drama: It invokes what we already know and derives its full value from what’s already been created. Gregory Bateson called information “a difference that makes a difference.” That which is basic does not make a difference.

Bad drama, of course, has a place in life and can be fun to watch, and something basic can be quite good—an appreciation of The Wire or a Uniqlo shirt, for example—but sharing it with another person is only a handshake, an affirmation of sameness, and not a revelation that enriches reality (again, not everything needs to be a revelation that enriches reality). This only becomes a problem when there’s too much of what’s basic and the richer material—the real information—is crowded out or lost in the noise.

The opposite of basicness is style. If the basic is pure redundancy, in the informational sense, then Victor Shklovsky’s definition of style (see above) will suffice: an act of semantic compression that adds new information, expresses the familiar more concisely, or ideally both. Style involves an actual statement. We often equate style with high quality; this richness of information is why. There’s an excellent Ribbonfarm essay that praises density in writing, contrasting it with brevity. Density, as a prerequisite for style, involves an intelligent compression of information. Brevity, by contrast, is often just a loss of information. Style, density, and quality are frequently different ways of describing the same characteristics, in many domains.
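One crude way to see the distinction is with a general-purpose compressor (a rough proxy only; the strings and the use of zlib are just an illustration): the “basic” string, being pure redundancy, compresses to almost nothing, while the varied one resists compression because most of its characters still make a difference.

    import random
    import string
    import zlib

    def compression_ratio(text: str) -> float:
        """Compressed size / original size: lower means more redundant."""
        raw = text.encode("utf-8")
        return len(zlib.compress(raw, 9)) / len(raw)

    basic = "pumpkin spice latte " * 50                  # the already-known, repeated
    rng = random.Random(0)
    varied = "".join(rng.choice(string.printable) for _ in range(1000))  # little repetition

    print(f"basic:  {compression_ratio(basic):.2f}")   # close to 0 -- mostly redundancy
    print(f"varied: {compression_ratio(varied):.2f}")  # close to 1 -- mostly information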


The rules (source)

And now we arrive at why “basic” is such an important concept today: because the human environment—cities, the internet, commercial establishments, and even homes—is becoming more basic all the time. We finally have the perfect word to describe it. Maybe the incessant spread of the basic is why we finally had to name it. The generative forces now producing the world yield not only the corporate sameness of Starbucks and malls, but also the platform-driven predictability of the internet (Medium, Facebook) and the well-behaved consumerism of reconstructed urban areas. All of this has in common the above definition of basic: no new information, no surprises, no mystery or ambiguity, just what’s already known. Simpler, flatter, thinner. There are countervailing forces at work making the world more interesting, of course, but right now they are less powerful.

Why are we getting more basic? I blame design. Design, that impossibly broad discipline that by now touches—if it doesn’t envelop—almost every human product. Design, not software, is eating the world (and software is arguably a subset of design). To a hammer, everything is a nail; with the tools and technologies now at our civilization’s disposal, everything is a design opportunity.

Sanford Kwinter criticized our culture’s relationship with design and this emergent world “in which we are hectored mercilessly by design, swathed in its miasma of artificial affectation, hyperstyle and micro-human-engineering that anticipates, like a subtle reflex arc, our every move and gesture. Design has now penetrated to, even threatens to replace, the existential density, the darkness, the slow intractable mystery of what was once the human social world…Design has become us; it is, alas, what we are, and there is no way (for now at least) to separate ourselves from it.” To Kwinter, design is making the world less free, less complex, and less interesting, if possibly more “user-friendly.”

Design has existed for millennia, though, so why is it only now having this effect? Well, design is fundamentally the organization of information, and information has grown easier to copy and more mobile than ever. The “rules” that emerge as design principles today are easy to disseminate, cheap to apply (in time, effort, and price), and often proven effective in some way, so they end up everywhere. A few examples:

  • A developer builds a strip mall by applying a few rules, optimizes for economic performance, and plugs in a variety of chain stores that are more or less copied from existing templates.
  • A writer posts on Medium to reach a wider audience and avoid worrying about managing a personal website or blog. The written content may be excellent, but the context, page layout, and often tone are standardized by the platform’s rules, restricting the variety of the readers’ experience.
  • Any American city’s truly weird bars and restaurants are slowly being replaced by more familiar “types” that stray less from familiar experience: wine bar, New American restaurant, high-end ice cream shop. Hence a contrasting example like New Orleans becomes more exceptional and interesting as time passes.

You’re probably thinking, “this didn’t start with the internet—every culture has copied itself in most of what it produced.” This is true, and originality is always scarce, but what’s changed is how we make the copy when we reproduce an idea. The fidelity of digital transmission, so valuable in other domains, means that copies don’t change enough. A designer’s rule can travel around the world without much alteration. And this isn’t even counting algorithms, which produce even more inflexible and widespread outcomes.

A copy, a rule, a transmitted design are basic in their pure forms—no information is added. In contrast to such rules we have patterns, best described by Christopher Alexander: a better way for information to travel. A pattern is an idea of something that is consistently desirable within a larger system, like a park in a city or a novelistic device, but with a need for interpretation and no exact blueprint for its execution. Each new person who copies these must add information through their interpretation of the pattern, and the variation that results is where everything interesting happens.
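That distinction translates loosely into code (an analogy only, not anything from Alexander): a rule is a finished artifact you copy verbatim, while a pattern is closer to an interface with deliberate blanks that each implementer must fill, and the variation lives in those blanks.

    from abc import ABC, abstractmethod

    # A "rule": a finished template, copied verbatim. The copier adds no
    # information; every instance is identical.
    STRIP_MALL_RULE = ["parking lot", "anchor chain store", "nail salon", "fro-yo"]

    def apply_rule():
        return list(STRIP_MALL_RULE)  # an exact copy, anywhere in the world

    # A "pattern" (loosely in Alexander's sense): it names something consistently
    # desirable but leaves the execution open. Each implementer must interpret it.
    class ParkInTheCity(ABC):
        @abstractmethod
        def choose_site(self, neighborhood): ...

        @abstractmethod
        def plant(self, local_climate): ...

    class PocketPark(ParkInTheCity):
        def choose_site(self, neighborhood):
            return f"the vacant lot on {neighborhood}'s busiest corner"

        def plant(self, local_climate):
            return "live oaks" if local_climate == "humid" else "aspens"

    print(apply_rule())
    park = PocketPark()
    print(park.choose_site("Mid-City"), "/", park.plant("humid"))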

In his introduction to A Pattern Language, Alexander writes, “It is possible to make buildings by stringing together patterns, in a rather loose way. A building made like this, is an assembly of patterns. It is not dense. It is not profound. But it is also possible to put patterns together in such a way that many patterns overlap in the same physical space: the building is very dense; it has many meanings captured in a small space; and through this density, it becomes profound.” Urbanists so often call for one kind of density—a greater concentration of population in space—but the kind of density Alexander describes is more important.

Among the aforementioned design failures is the so-called “smart city” movement, a quixotic effort to build cities out of rules rather than patterns. Smart cities, like their modernist forebears, embody the shortcomings of design as Kwinter critiqued it, and usually only succeed in the pockets their plans fail to reach or influence. Ironically, the smart city produces an environment that is dumber (more basic) than the delicate, creative complexity humans are usually able to produce on their own. Lewis Mumford wrote that the chief function of a city is to “convert power into form, energy into culture, dead matter into the living symbols of art, biological reproduction into social creativity.” In other words, to create information and counteract entropy. Despite the intentions and unintentions of design, if we hope to live in the world we want, we need to seek out environments that are smarter than we are.


Zaha Hadid

Sanford Kwinter’s excellent essay collection Far from Equilibrium contains several short reflections on the work of contemporary architects he finds compelling. After Zaha Hadid’s death, I revisited Kwinter’s piece on her, and decided it was worth reposting in full here:


  Hadid’s Vitra Fire Station (source)

During the 1970s, when it had become an obligatory affectation of macho bravura in architecture to manipulate the ruler and straightedge with something akin to a virtuoso performance—note Libeskind, Tschumi, Eisenman and others—Zaha Hadid eschewed the posturing of these anti-classicizing classicists by inventing a new kind of line entirely. The new line was at once the centrifugal expression of a pulse of energy fleeing from a moving point—as in a universe not of grids but of vortexes—but also the reinvention of the line in relation to a new type of eye. On the one hand, Hadid’s softly arcing lines that compress such enormous energy into their subtly inflected bends are generated by the hand and body itself, by the inescapable rotational dynamics of the hips, shoulder, elbow, and wrist as they coordinate to extend the pen through space yet also leave a powerful trace of their orbital roots. Far from seeking to hide these organic geometrical foundations, Hadid extended and exaggerated their seductive and radical qualities, to the infinite perplexity and fascination of her self-proclaimed avant-gardist colleagues. Those most threatened by the unexpected (and untimely) affirmation, within Modernism, of the body and its lines of flight, dismissed this type of work as “paper architecture.” But the question of the eye was a critical weak point of the modern and Modernist tradition, and this is where Hadid’s ace resided. Modern space was born of a rational connection to optics, the moment when Brunelleschi and others formalized the modern perspectival approach to drawing based on a vanishing point and the straight lines that emanated from it. But the universe was not like this at all and every Renaissance theorist knew it. The cosmos was rather a play of orbits and attractions, bends and perpetual influences, a continuum of ongoing angular interaction and modification. What’s more, the eye itself was dual and mobile, its imaging surface spherical, and the ground to which it is anchored was defined by a horizon and an azimuth, both of which bend and bend visibly and meaningfully. It was in many ways Zaha Hadid who restored this tension and reality to the world of space-making.

Hadid’s vertiginous work both transposes and displaces the very horizon that serves as our orientation point in the world. Her curves arc ever-so-slowly as they careen across the canvas or page as if to mock the straight lines that they partly portray, but also to free our intuition from the many regimens of conventional orthogonality to which our modernity has subjected us. Until recently, perhaps the last seven or eight years, her lines never ventured to risk more than a single inflection; today, one finds two- and even three-part arabesques as the work becomes ever more plastic, and the development of the semi-free “line” becomes ever more extended into developing the possibilities of sheets and surfaces and even three- and four-dimensional flows of space and material.

 

