The world keeps tabs on us. The recent revelations about the NSA and other organizations reinforced the impressive scale of this tracking, and showed the contrast between modern techniques and those of the past. The expiration date on a driver’s license is a tool for keeping biometrics current. Registering to vote is a mechanism for observation. But now, we are so much better at it.
I want to explore whether these systems can rot from within. One of the most lauded qualities of software is its ability to be “upgraded”, often without physical change. New software is written, drivers are updated, techniques are refined, etc. But the reality is one of legacy systems holding a controlling interest in our interactions: old infrastructure running outdated software designed for calcified human systems.
What happens when our new software – with massive databases and sophisticated learning models – is left for ten, twenty, thirty years or more? Core to the human experience is that we grow and learn while having only limited foresight into our future state. What if we asked the same of computational systems?
A human baby learns to recognize faces early, and there is some debate as to whether facial recognition is hard-wired into the human brain, or something that’s entirely learned. In either case, our ability to recognize faces means understanding key physical form (that a nose goes here and not there), understanding how these forms are abstracted across categories (this is a human nose and not a llama nose), and tying meaning to specific members of that category based on unique traits (your nose is different from my mother’s nose, so you can’t be my mother). These abilities make us highly specialized systems for understanding and reacting to the state of other human beings, and form some of the underlying mechanisms for navigating community, others’ emotions, and relationships.
Computers are not as good at this. A specially-configured computer might recognize a human face in an image, or a differently configured one might recognize – or even learn to recognize – a number of specific faces, but neither approaches the kind of adaptable specialization that humans demonstrate.
Implicit in the language of configuration is a kind of permanence that learning navigates around. The human condition is one of constant, often monumental, change that manifests itself in countless external and internal ways. A computer might be configured to interpret the world like a professional forensic artist, for example, and project how features might change over time, but it would be ill-suited to speculating on the details of that change, and likely not even considered for that task. I wonder if we can truly build computers that can grow along with us, even if we configure them with those core capabilities.
Paranoid Computers in a Haunted World
I suspect that “learning” can be a kind of decay within a system. Learning might manifest itself through unexpected behaviour that drifts from a configured state, as if by dead reckoning. Just as an analog synthesizer needs to be adjusted like the instrument that it is, maybe the learning machine can drift out of tune through its own functionality.
Let’s imagine a system that could learn to interact on a pseudo-emotional level, had a database for remembering people, and could improve its interactions by “learning” from each encounter with a user. This system tries to identify visually, interprets an emotional response, self-corrects, and tries again. Like a person, it learns by re-configuring based on experience.
Let’s say it’s a concierge system for a condo parking lot.
But what if it was mostly unable to recognize faces like it was designed to: its lens is distorted, or maybe its image processor is poorly coded. Maybe it’s simply old. This distortion causes that computer to collect false or semi-formed experiences. A series of cameras in a parking lot – now a normal thing – algorithmically squint at detected movement, trying to make sense of the known and the unknown. The sun sets, its lens flares, and suddenly everyone looks like that one visitor two weeks ago who was allowed in that time, right?
How would that computer – confused and confronted by aggressive users reacting to false negatives – reconfigure itself, and react to its inability to make sense of the world it was configured for, but can’t quite see? By trying to absorb a context based on a skewed understanding of the world, the computer is experiencing a kind of decay. It slowly drifts away from its original configuration and inadvertently finds itself in a state of obsolescence.
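That drift can be sketched in a few lines. This is a toy model, not a description of any real concierge: the gate’s thresholds, feedback rules, and every number below are assumptions made purely for illustration.

```python
import random

# A toy version of the concierge's feedback loop: the gate compares a
# noisy reading against a confidence threshold, then nudges that
# threshold after each encounter -- learning as re-configuration.

INITIAL_THRESHOLD = 0.80  # the confidence the gate was configured to require

def drifted_threshold(encounters, lens_distortion=0.15, seed=42):
    """Return the gate's threshold after self-correcting on distorted input."""
    rng = random.Random(seed)
    threshold = INITIAL_THRESHOLD
    for _ in range(encounters):
        is_resident = rng.random() < 0.5             # who is actually at the gate
        clean_reading = 0.9 if is_resident else 0.4  # what an undistorted lens sees
        reading = clean_reading + rng.uniform(-lens_distortion, lens_distortion)
        admitted = reading >= threshold
        if is_resident and not admitted:
            # An angry resident waves at the camera: the gate loosens.
            threshold -= 0.02
        elif admitted:
            # Silence is read as success: the gate tightens slightly.
            threshold += 0.005
    return threshold

# After enough skewed encounters, the gate no longer sits where it started.
print(drifted_threshold(500))
```

The gate never learns the truth, only the users’ reactions, so each correction is made against a warped picture of the world: the drift is a direct output of the system doing exactly what it was designed to do.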
I find myself feeling a weird empathy for these kinds of machines. It’s an empathy for a simple thing caught unawares, almost like Charlie Stross’ semi-sentient Lobsters serving as confused artificial intelligence models for the human-centered internet. There is no easy solution for legacy software and systems. So how do we deal with them?
Maybe we exist with old systems the way that we will (hopefully) exist with our aging human population: with respect and inclusivity. An aged facial recognition system might be bolstered by proximity, the new media pose, or a patient pause of one’s facial expression. Or perhaps it means understanding that some names need to be spelled phonetically, or pronounced as such.
Bruce Sterling suggested our future is one of “old people in big cities who are afraid of the sky.” I think this prediction might be applied to old cities with too many people that struggle for purpose, as well. As we continue to grapple with the challenges of legacy systems and their moldering infrastructure, perhaps we ought to also develop a practice of patient interaction design to deal with this future gracefully on both fronts.
In high school, a friend once told me that a pair of sneakers hanging from a power line marked a place where drugs were sold. I later found out that this (mostly) wasn’t the case, but I still notice them dangling in the wind, as though they had meaning.
Lately, our shoes have become a vector for reading our behaviours. Tools like Nike+ currently exist as small sensor pods that attach to our shoes and talk to our phones, but in proposals and patent drawings, a future is being sketched that describes washable computers and fabric-like sensors, with these once-detachable objects embedded directly into our clothing. And like any other fashion, if it wears out in style or in structure, it is discarded.
The MIT Senseable City Lab explored the strange life of the discarded with their project Trash|Track. For this project, the lab designed a circuit called Trash Tag to be embedded in the refuse – including an old shoe – which worked by responding to movement and broadcasting a cellular signal to local towers. This signal was triangulated by the service providers, and sent back to the lab for analysis.
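The Trash Tag’s actual firmware isn’t reproduced here, but its wake-on-movement behaviour might be sketched like this, with the function name and threshold invented for illustration:

```python
# A minimal sketch of the motion-wake/report pattern the Trash Tag used.
# The real circuit's logic and constants are not public here; everything
# below is an assumption made for illustration.

MOTION_THRESHOLD = 1.5  # change in acceleration that counts as "the trash moved"

def should_report(accel_samples, threshold=MOTION_THRESHOLD):
    """Broadcast only when successive accelerometer readings jump by more
    than the threshold -- sleeping the rest of the time stretches the
    battery across a journey that can take weeks."""
    return any(abs(b - a) > threshold
               for a, b in zip(accel_samples, accel_samples[1:]))

print(should_report([1.0, 1.0, 1.0]))       # sitting still in a bin
print(should_report([1.0, 1.0, 4.2, 0.3]))  # jolted into a truck
```

The design choice matters: by staying silent until it moves, the tag trades continuous observation for a battery life long enough to follow a piece of refuse to its end.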
I’m wondering what the “fully embedded” future might look like, versus one that needs to be attached. If the seams of a piece of clothing can be designed as an antenna, then anything from a shirt or a shoe can be made to record and report. Presumably this will be in no way nefarious: these sensor-clothes will analyze our posture, or track how many calories we’ve burned today. They’ll measure heart rates, buzz when they detect an open Wifi system, or change shape depending on the temperature.
The electronics sewn into our clothes will be cheap, might be powered by Microelectromechanical systems (MEMS) or inductive laundry baskets, and will be utterly discardable like our clothing is now.
Recycle lost sensor networks
So what happens after a wearable is discarded?
In my neighbourhood, there are a fair number of forgotten shoes jostling each other from their wiry perch. Since starting to think about this project, I’ve been wondering if they could talk to each other, or teach us something from their vantage point, were they suitably enabled with software and sensors.
What if the networked waste we’re creating continues to broadcast into the ether? What if their MEMS-enabled batteries and ultra-low-powered processors keep doing what they were designed to do: reporting on location, or movement, or localized heat, or brightness? They might send that data to a networked server, or ping helplessly for a phone or router they think is nearby.
I want to think about recycling digital things as not just reusing their components, but recycling or upcycling their purpose. I’m wondering if there’s some way to turn the detritus of our vogues and narcissism into a localized story. Maybe we could create a beacon for these discarded smart things: giving dead wearables a new life through access, organization, and transparency. A cheap computer, some clever routing, and a community catalogue of wireless protocols and APIs might be all we need to turn once private products into a localized and public sensor network. Like a digital version of Jane Jacobs’s eyes on the street, these discarded smart things just might teach us something incredible.
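One way to picture that community catalogue is as a simple lookup that turns an overheard broadcast into a public description. Everything in this sketch – the device prefixes, the catalogue entries – is invented for illustration:

```python
# A sketch of the "community catalogue" idea: a cheap listener maps
# overheard broadcasts to whatever a shared, editable catalogue says
# about them. The entries below are hypothetical examples.

CATALOGUE = {
    "nike+":    {"reads": "step count", "protocol": "2.4 GHz burst"},
    "trashtag": {"reads": "location ping", "protocol": "cellular"},
}

def identify(overheard_prefix):
    """Match an overheard transmission against the community catalogue,
    turning a once-private product into a legible public sensor."""
    entry = CATALOGUE.get(overheard_prefix)
    if entry is None:
        return "unknown device -- add it to the catalogue?"
    return f"broadcasting {entry['reads']} over {entry['protocol']}"

print(identify("nike+"))
print(identify("fitbit"))
```

The interesting part is the fallback: every unrecognized ping is an invitation for the neighbourhood to grow the catalogue, which is what makes the network a community project rather than a product.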
I make things. A restlessness sets in when I’m not using my body to manipulate some tool – be it a computer, a kitchen knife, or my bicycle. And I’m far from alone in this compulsion. What I hadn’t expected was to share this compulsion with machines.
Machines like the BERG Little Printer and the Makerbot genus share this compulsion, and project their compulsion on us. More than any other type of tool, they evoke a style of creation that is specific to their nature, instead of complementary to ours. This makes it easy to get started within the frame they create. Whether by API or CAD, these tools are made to make, and we become their managers.
The Little Printer is a powerful case study in managed creation, or “instancing.”
The Little Printer emerged from the theory of a Social Printer as articulated by Matt Webb, suggesting that the printer itself isn’t what’s valuable, but rather the appearance of an artefact tied to my tribe. The mail suddenly becomes mediated by the mailbox, and the mailbox’s role is to describe and manage the mail. The mail becomes an instance of the mailbox.
You subscribe to a variety of feeds via Berg’s web app, and from that point on, your Little Printer will print out the contents of each feed according to the schedule you’ve set. The printer uses thermal paper: a heated print head marks information onto its specially treated surface. Berg points out that they use recycled, BPA-free thermal paper, and they’ve made clear efforts to be environmentally conscious.
The question is what happens after the printing. There is no method to remove this instance from the world. The Little Printer is a thing that creates without giving an option to destroy the creation when its usefulness is over.
What I want is a Little Printer with a Destructor Function. I want the Little Printer to take responsibility for its waste – not burden me with it. Just as well-written software cleans up after itself by freeing the memory it was using on my computer, I want the well-designed Making Machine to clean up after its mess.
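That software analogy can be made literal. In Python, for instance, a context manager pairs creation with guaranteed cleanup; the Instance class and its reclaim step below are hypothetical stand-ins for whatever a real Little Printer would actually do with its waste:

```python
# The destructor-function idea as a sketch: a printed artefact whose
# cleanup runs automatically when its scope ends. "Instance" and its
# reclaim step are invented stand-ins, not part of any real product.

class Instance:
    """A printed artefact that knows how to unmake itself."""

    def __init__(self, content):
        self.content = content
        self.reclaimed = False

    def __enter__(self):
        return self

    def __exit__(self, *exc):
        # The destructor: the maker, not the user, takes the waste back.
        self.reclaimed = True
        return False

with Instance("this morning's feed") as page:
    pass  # the artefact is read, enjoyed, and then its scope ends

print(page.reclaimed)
```

Just as leaving `__exit__` unwritten would leak the object’s resources, a Making Machine without a destructor leaks its instances into the world.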
So let’s start with the quintessential “Bad Idea.” I present to you, Little Printer: self powered edition.
What happens when things are capable of navigating their lifespans? I wonder what those things would be like.
Would they be buildings that pay their own property taxes with their energy savings?
Or smartphones that knew where their recycling plant lived, and guessed at when they would arrive there? What if that recycling plant knew when to expect them?
In the biological world, decay is a physical process by which materials break down into simpler materials and energy.
But in the culture of making objects, decay is a multidimensional problem. The physical decay of an object fails to sync up to the behavioural, cultural, or digital decay of that thing.
Decay becomes the natural output of an ecosystem of use, disuse, and obsolescence not dictated by material, but by software and consumer expectation from software behaviour. This decay is taking the form of obsolescence and apathy: a world of forgotten things with short lifespans and nowhere to go afterwards.
The danger is that culture rot is claiming the utility of objects before material rot ever does, and the physical casings that held the once functional circuits and software can take an eternity to decay.
To combat this, decay must be reframed as inherent to the value of an object. This can be done by situating time as something that adds value (or detracts by its absence), and by challenging the emerging anonymity and replaceability of network connected objects.
We want to enable a graceful ecosystem of creation, decay, and rebirth in a software-infested and thing-saturated world.
I suspect that designing for decay means designing Viridian things.
In 1998, Bruce Sterling set out on a decade-long journey called the Viridian Design Movement. One of its many goals was to address the failures in the communication design of existing environmentalist movements, and to develop an interface for designers to become not green, but Viridian (i.e. effective).
His 2005 book, Shaping Things, synthesized much of the Viridian movement into a manifesto reminiscent of Machiavelli’s The Prince. Razor-focused in its intentions, accessible, and action-oriented, Shaping Things gives its (possibly unwitting) readers a toolkit for parsing the past and present through Viridian goggles. For designers in particular, this means changing the way we make decisions for our world.
Viridian things are shaped by many principles, and I’ve come to focus on two in particular:
“Avoid the Timeless, Embrace Decay” and “Planned Evanescence”.
A third principle, “Be When You Are”, serves to ground this exploration. I’ll explore decay through the lens of computationally-enabled things and contemporary technology: a subset of “Gizmo” culture that Sterling references in Shaping Things.
I’m looking forward to this journey with you.