Big data is immediate. It is laden with the technological and informational ubiquity of the digital age. As much as 90 percent of the planet’s data was collected between 2010 and 2012. This is hardly surprising, given that the entire human genome can be stored in less than 1.5 gigabytes. Yet big data is useless as raw material. It becomes valuable only after a program, or a search engine, is written to pull out the information relevant to the task at hand. The researcher, acting as a human conduit, enters the equation after the information has been harvested; he or she is responsible for processing and distilling this crude data into meaningful information. This is the semiotic process referred to as data analytics.
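To make that dependence on the query concrete, consider the minimal sketch below. The file name, column names, and thirty-day threshold are all hypothetical, invented purely for illustration; the point is only that the same raw records say nothing until someone writes a program that decides which question to ask of them.

```python
import csv
from collections import Counter

# Hypothetical raw dump of city service requests; on its own it is just rows of text.
# Assumed columns: neighborhood, category, response_days.
RAW_FILE = "service_requests.csv"

def complaints_per_neighborhood(path):
    """One possible distillation: where are requests concentrated?"""
    counts = Counter()
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            counts[row["neighborhood"]] += 1
    return counts.most_common(10)

def slow_responses(path, threshold_days=30):
    """A different distillation of the same records: where is the city slow to respond?
    The 30-day threshold is the analyst's choice, not a property of the data."""
    slow = Counter()
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            if float(row["response_days"]) > threshold_days:
                slow[row["neighborhood"]] += 1
    return slow.most_common(10)

if __name__ == "__main__":
    print(complaints_per_neighborhood(RAW_FILE))
    print(slow_responses(RAW_FILE))
```

Two programs run over identical records distill two different pieces of "information," each shaped by the question its author chose to ask.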

The immediacy of data overshadows a real set of issues that have been pushed aside amid an unbridled collective enthusiasm for the vastness and supposed objectivity of data. Big data already directly influences urban policy, from the distribution of public funds to infrastructural updates and zoning changes. But it also carries inherent flaws relating to its own feedback effects. In most instances of algorithmic city planning, outputs rather than outcomes are assessed.1 The widely held belief that an algorithm can directly reflect reality in order to predict growth, pattern, and infrastructure is problematic. Algorithms are written to serve the eye of their author: they look for what they want to see.
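A toy simulation can make that feedback effect legible. Everything in the sketch below is assumed for illustration: two districts with an identical underlying rate of problems, and an allocation rule that sends inspections wherever problems were previously recorded. Because problems can only be recorded where inspectors are sent, the rule ends up measuring its own output.

```python
import random

# A toy model of the feedback loop described above; every number is invented.
# Two districts share the SAME true rate of problems, but the rule only records
# problems where it already sends inspectors, so it measures its own output.

TRUE_RATE = 0.3              # identical underlying problem rate in both districts
INSPECTIONS_PER_ROUND = 100

def simulate(rounds=10, seed=1):
    random.seed(seed)
    share = {"district_a": 0.5, "district_b": 0.5}   # initial split of attention
    for _ in range(rounds):
        recorded = {}
        for district, s in share.items():
            inspections = int(INSPECTIONS_PER_ROUND * s)
            # a problem is only recorded if an inspector happens to be there
            recorded[district] = sum(random.random() < TRUE_RATE
                                     for _ in range(inspections))
        total = sum(recorded.values()) or 1
        # next round's attention follows this round's measured *output*
        share = {d: recorded[d] / total for d in recorded}
    return share

if __name__ == "__main__":
    # typically drifts away from an even split despite identical true rates
    print(simulate())
```

Even with identical ground truth, early noise hardens into a durable skew in attention, and a district that falls out of the sample stops generating data altogether, which the algorithm reads as the absence of problems.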