Can we outsmart the smart city?

 

Adam Greenfield


 

A new generation of cities is being wired up to control itself by harnessing huge quantities of real-time data. But by putting a city on autopilot, are we in danger of losing what makes it human?

As with any organised human community, the one dedicated to the theory and practice of the built environment tends to be susceptible to fads. Over the past decade or so, we've seen enthusiasms for parametricism, 3D printing, CNC milling and other digital processes used not merely to make studio models, but in the fabrication of full-scale structures.

Though eventually wearing out their welcome, these fads all had some valuable insight at their core, a lesson to impart that we could not have gleaned just as easily from some other line of enquiry. In that sense, they had something permanent to offer us.

But the latest idea to capture the imagination of would-be bleeding-edge thinkers sets out to be permanent from the beginning, even eternal. Though it is trafficked under various names – the "smart city", the "connected city", even the "sentient city" – at root, all of these schemes argue that the complex dynamics of large human communities can be optimised once and for all via the collection, analysis and real-time use of data.

This tends to be discussed casually, as if it were self-evident that all one need do to finally "solve" the city is to weave sensors into the urban fabric by the million, trawl the relevant social networks for geotagged utterances, and apply just the right analytic algorithms to the ever-mounting tally of terabytes captured this way.

Traffic jams would be a thing of the past, averted before they form; those planning dark deeds would be identified by the police, and taken off the streets before they had the chance to hurt anyone; supply would be smoothly adjusted to meet demand, in a delicately balanced, moment-to-moment modulation of a million networked and largely automated systems.

This is the proposition that underlies next-generation new towns such as Songdo in South Korea or Masdar in Abu Dhabi, and it's at the heart of the Intelligent Operations Centre that IBM built for the city of Rio de Janeiro, at a cost of some $40 million. For all its of-the-moment gloss, anyone familiar with the utopian current running through 20th-century architecture will recognise immediately that this is an idea with a history, even a pedigree.

Similar notions surfaced in 1960s projects such as Archigram's Computer City, Constant's New Babylon or Yona Friedman's Spatial City, while Stafford Beer took the concept one giant step towards reality by developing the legendary CYBERSYN coordination network for the Chilean government of Salvador Allende.

But these projects were framed at the very dawn of human engagement with the technics of digital information processing; perhaps we think of them today as little more than idealistic gestures because they were simply too far ahead of the available technology. Could we now have within our grasp the tools and techniques to make them a reality?

Well, not likely. Given the scale of the ambitions involved, we need to treat such claims with scepticism. The first thing we should understand is that virtually everything we hear of places like Masdar or Songdo originates with an interested party. Until very recently, just about anything we learned about either project was fed to us by multinational IT vendors with a direct stake in the venture, their PR firms or the tech bloggers and other media outlets unwary enough to rerun their press releases.

(Predictably enough, on the rare occasions that we were afforded an unfiltered and independent take, we heard a very different story: one of cities in form only – ghostly, depopulated sites with most of the advanced technologies at the heart of the media buzz around them not yet installed, and apparently never to be.)

Smart-city proponents might argue that these are just local and temporary stumbles – the result, perhaps, of reliance on the wrong vendor, or an inappropriate business model – but that the essential paradigm of urban management by data collection remains valid.

As it turns out, however, there are at least four substantial reasons for thinking the paradigm itself is fundamentally flawed: the inevitable contingency of data collection; the questionable integrity of the parties we entrust with the stewardship of data; the low likelihood that any human community would ever permit policy to be derived from data transparently and without bias; and the inherently technocratic, top-down nature of this ambition in the first place.

If we are ever to understand the deeper prospects for municipal stewardship based on the collection and interpretation of data, we must first get a better handle on just what it is we mean by that curiously underexamined word – particularly how it is made, who makes it, and how the act of making it can transform the people involved.

Consider the experience of Leila Laksari, an immigrant, community activist and founder of an organisation called Living Under One Sun, in Harringay, north London. She was given a grant to study what kind of incentives might coax people out of their cars and onto public transport (or even better, onto the pavements and cycle lanes). As it happens, the budget the group was allocated was some 10% of that offered to the neighbouring and rather better-off district, explicitly on the theory that poor people are constrained by circumstance, fully set in their ways, and won't change behaviour no matter what incentives they are offered.

Laksari found this proposition doubtful, but understood that the only way to shift opinion on the question would be to put some numbers up against it – and to do so within the paltry budget afforded her. There was nothing particularly innovative about the structured interviews she used to undertake the study. But instead of having them conducted by graduate students or professional researchers, Leila approached people from her own community – unemployed people and at-risk youths, both native-born and immigrant – and offered them a modest wage to go door-to-door and engage the people of Harringay in conversation about their mobility habits.

Leila's team collected better, more granular data about the community's mobility choices than had ever been gathered before. This act of collection cost far less than it would have if conducted by professionals, and of course those who conducted the survey were meaningfully and gainfully employed during this period.

Furthermore, the interviewers came away with a far better understanding of their community's needs, and developed a sense of their own competence and worth – some for the very first time. While the initiative certainly fulfilled its stated objectives, the enduring value of the Harringay engagement resulted from decisions that Leila and her colleagues made about how data was to be collected – and not one of these decisions was "neutral" or "objective" or "scientific".

This, of course, is the very furthest thing from the way data is invariably depicted in the literature of the smart city: as something serene, mechanical and immaculate, existing on a plane far removed from the grubby details of human desire. Proponents actually argue that "the data is the data", often in so many words.

Have they forgotten that you can get a different air-quality reading for a neighbourhood by moving the sensor a single metre higher or lower? Never learned that backfiring trucks and holiday bottle rockets can all too easily trip audio surveillance systems designed to detect the signature of small-arms fire? Failed to understand that you can garner a different response to a survey by altering the wording or even the sequence of a question ever so slightly?

The fact is that the data is never just the data, that knowing something about the circumstances of its collection is critical to its accurate interpretation. And yet these are just the circumstances that are routinely effaced from the data-driven systems places like Songdo or Masdar are predicated on, and which IBM and Hitachi and Siemens and Cisco want to sell to the cities we live in.

There's one other question we have to ask of Laksari's experiment: whether the same results could have been derived by technical means. Could sensors or "beacons" or smartphones or Oyster-card counts have told us the same things about her community's mobility patterns?

The answer to that depends on what you mean by "results". If you're interested in a tabulation of what percentage of people in Harringay got on a 341 bus on a given morning, walked to Manor House tube station to catch the Piccadilly line, or snagged a lift to work with a friend, almost certainly the answer is "yes".

But any technical armature capable of gathering that information would have cost a good deal more than Laksari's way of doing things – at least as an initial outlay – and in the end, the only parties to benefit from such an expenditure would have been the manufacturers and vendors of sensing devices. Few of these parties call Harringay home, so all of the knowledge collected by Laksari's team would have left the community. So too would the capital.

Assuming that technical means can furnish us with as full and meaningful a picture of urban activity as other methods of collecting information, what of the motivations of the parties we entrust with the data produced in this way? Most of us are beginning to have at least some sense of the torrents of information we shed in the course of our ordinary activities, and of the powerful inferences about our beliefs, propensities and likely patterns of future behaviour that can be derived from analysing them.

In these post-Snowden days, the point doesn't need to be laboured. What we are handing the administrators of a smart city is a suite of all the tools they would need to isolate, quash or even prevent whatever conduct they defined as undesirable.

Most apprehension along these lines relates to public space, and constraints on the range of activity that might be pursued there – for instance, the pulldown menu in IBM's Intelligent Operations Centre software that an operator uses to determine just how many troops [sic] will need to be deployed in order to suppress an unwanted demonstration.

But given the scale and nature of the technologies involved, it's also clear that our worries about the capability of smart-city systems to afford oppression must now extend to the private and interior spaces of the city. (Imagine, say, a Jaruzelski-type regime equipped with networked thermostat histories that reveal a dissident salon taking place in a given flat monthly, and the phone-location traces that clarify precisely who is in the habit of attending, and it ought to be obvious that in any such context Solidarity never happens.)

Perhaps you have sufficient confidence in democratic government and the rule of law to dismiss such worries. But what about when the datasets on which these tools rely pass into other hands – whether through systems intrusion, corporate acquisition or simple human clumsiness? Our touchstone here should be the Dutch civil registry of 1936, tabulated on Hollerith machines which fell under German control immediately after the invasion of 1940. The same information that was innocuous when provided to the Bureau of Statistics turned out to be lethal in the hands of the Gestapo.

There's one final point to make, and it's simply this: it strains credulity to believe that anything as sensitive as municipal resource allocation would ever be settled by purely computational means. Far too much is at stake – too much money, power, pride and ego. At least, this is what the relevant history suggests, loud and clear.

In the early 1970s, at the New York City government's request, the RAND Corporation – its reputation freshly burnished by its "successful" application of information-processing techniques to the prosecution of the Vietnam War – set out to devise computational models for the placement of city firehouses. As documented in Joe Flood's The Fires and Deborah and Rodrick Wallace's A Plague on Your Houses, RAND's process was flawed from its inception: it asked the wrong questions, it measured the wrong things, and it certainly didn't account for the wounded amour propre of a hostile and sceptical Fire Department bureaucracy.

The resulting misallocation of resources left the South Bronx in ashes, some 600,000 New Yorkers displaced, and a local economy that hasn't fully recovered since. New York's experience with RAND suggests something about the perils that lie in wait when life-critical decisions are based on corrupted, partial or inaccurate data. The important thing isn't whether the systems responsible for analysing urban data and furnishing human decision-makers with recommended courses of action actually perform as claimed; it's whether they are believed to.

RAND could have polished its process to perfection, and it wouldn't have altered the outcome one iota. It was never as if a punched card was going to pop out of its IBM System/360, into the eager hands of the junior McNamaras running it, with the One Perfect Distribution of New York City firehouses. And even if it had, the notion that the world would somehow have been pulled into conformity with whatever map was drawn on it beggars the imagination.

This may seem like a cynical conclusion, but we know what happened to RAND's computation-based recommendations: they immediately ran afoul of internal Fire Department politics, and bogged down further in negotiations between that department's leadership, the city council, the mayor's office and various other constituencies in the city. Particularly when money or power are at stake, interested parties will always seek to place a thumb on the scale. RAND's naivety, like that of its successors today, lay in thinking that it could ever have been any other way.

That RAND's blundering interventions in the data-driven management of New York City unfolded in high modernism's final moments tells us something significant. Too often, the desire to let "the data" determine municipal policy amounts to a reinscription of some of the least appealing aspects of high modernist urban planning – particularly its core notion that cool and dispassionate technical management alone is capable of instilling rationality into the decision-making process, imposing clarity on the city's ever-unruly dynamics, and bringing the ungovernable urban beast to heel.

Put to one side all of the concerns we have already raised; assume that a net of ubiquitous sensors is capable of accurately capturing the city's salient behaviour in real time. Assume that custody of the data remains in trustworthy hands, that useful models can be built from it, and that municipal managers can be found who are sufficiently cold-blooded that they will act in accordance with the output of those models, regardless of the implications. What possible grounds could anyone sane have for objecting to such a circumstance?

None, perhaps – unless, that is, you place value on dissent, contestation and negotiation as vital elements of democratic decision-making, for it is precisely these qualities that are designed out of the smart city and its functioning. As things stand now, at least, the data-driven paradigm has no way of modelling such qualities that does not construct them as a disruption to the smooth and untroubled flow of operations. But this is to display a profound contempt for politics, and it ought to prompt the gravest misgivings in the heart of anyone who cherishes great cities precisely as incubators of heterogeneous vitality, rather than simply as processes to be optimised.

At a time when we are seeing the emergence of far more supple and responsive forms of urban governance in cities around the world – including exciting experiments with deliberative democracy and participatory budgeting – the notion that municipal management can or ought to be reduced to a concern for sensors and algorithms feels curiously retrograde. As the fate of the Dutch census reminds us, whatever the promises made about its integrity or security, sensitive data too often passes into the hands of those with malicious intent.

As Leila Laksari and her colleagues teach us, this paradigm mistakes the city for the set of things that can be measured by technical means. And as RAND taught us, it places entirely too much trust in systems that represent themselves as coolly technical, while remaining all too human. For all of these reasons, this way of thinking about cities doesn't properly belong to our present, let alone our future.

 

 

This article first appeared in ICON’s December 2014 issue: Data, under the headline “The blind planner”. With thanks to ICON. iconeye.com

With thanks to Adam Greenfield. https://speedbird.wordpress.com/

 
