Gender is frequently treated as a binary or “N/A” option in online forms designed to collect demographic information; analytics, after all, are the lifeblood of many online applications.
Freed from this need to collect information, the open social network Diaspora turns gender into a text field, freeing us to put in whatever we want. This raises the question: how accurate can these binary or opt-out analytics be at identifying trends and behavior when they don’t account for something as fundamental and varied as gender? What else are we missing?
The Kinect as a means of altering one’s visual environment is definitely a novel use for the device, and it’s incredible to see it explored by a talented artist. As computer vision speeds its panoptic advance into public space, our environment will become filled with visual aids to computer vision techniques, allowing the categorization and sorting of the real world into addressable, identifiable objects. What began with the bar code or the scan card will become a sparkling world of lights and sounds just beyond our range of perception, but one that we can still detect.
I’m reminded of the ever-present sparkle of nanotechnological mites described in the world of The Diamond Age by Neal Stephenson. Will the lack of these things seem foreign to us in a few years’ time, like a natural environment with no hum of electricity?
“With these images I was exploring the unique photographic possibilities presented by using a Microsoft Kinect as a light source. The Kinect – an inexpensive videogame peripheral – projects a pattern of infrared dots known as “structured light”. Invisible to the eye, this pattern can be captured using an infrared camera.”
Yesterday I gave a short talk on Rapid Prototyping in Interaction Design for a professional development event at the University of Toronto’s Knowledge Media Design Institute. These guys are doing some incredibly interesting work in Toronto right now, so it was a surprise and an honour to have been invited to speak. I gave a presentation chatting briefly about how I got into Interaction Design after graduating as a political science specialist last year, and about five principles I’ve come to apply in my use of Rapid Prototyping as a design practice.
I’m giving a short lil’ talk this Wednesday about rapid prototyping as an Interaction Designer, and what I’ve learned in the past year and a half since graduating with a Political Science specialist degree from the University of Toronto.
It’s exciting to be able to share what I’ve been learning with others entering or about to enter similar fields, and to learn from those more established (I’m by farrrrr the most junior person there). Should be a fun event, and many thanks to Margaret for inviting me to speak!
Now… to finish slides.
As we move closer and closer to a world of rapid fabrication, I can’t help but wonder how our appreciation for flaws and error will continue to evolve.
I was fortunate enough to attend the Adam Greenfield lecture on Elements of Networked Urbanism. Broadly speaking, he considered the city and policy from an informational standpoint. Through sensors, cameras, and APIs we’re seeing the city become porous: information flooding in and out, and a wealth of different analytical and perceptual views opened in consequence.
Greenfield spoke predominantly about the policy challenges that this kind of situation yields, and laid out key structural changes that our system of knowledge is coming up against. I have about eight pages of notes which I need to transcribe, but that’ll be another post.
The lecture included a walkshop where we wandered about a bit of downtown and took a good hard look at the inputs, outputs, and interactions of networked urbanity.
Here is a set of photos from that trip:
This is Interaction Design.
I did a good bit of reading this weekend into the idea of creating interfaces which invoke or respond to emotional cues. One of the things I came across was the Emotion Markup Language (EmotionML) from the HUMAINE group, a fascinating attempt to generate and record emotional metadata within various media types.
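To make the idea concrete, here’s a rough sketch of what that kind of annotation looks like, loosely following the structure of the EmotionML working draft; the particular vocabulary, clip reference, and confidence value are hypothetical placeholders of my own, not taken from any real annotation:

```xml
<!-- Hypothetical annotation: tagging ten seconds of a video clip
     with "sadness" at 70% confidence. The clip URI and values are
     made up for illustration. -->
<emotionml xmlns="http://www.w3.org/2009/10/emotionml"
           category-set="http://www.w3.org/TR/emotion-voc/xml#big6">
  <emotion>
    <category name="sadness" confidence="0.7"/>
    <reference uri="clip.ogv#t=120,130"/>
  </emotion>
</emotionml>
```

The interesting part is that once feeling is serialized like this, it becomes machine-readable input like any other metadata — which is exactly where my worry below comes in.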
As valid and valuable as I think these efforts might be, I have some strong feelings about this and the role that computers might come to play as emotional mediators. Do Androids Dream of Electric Sheep? opens with human beings struggling for emotional control in a post-apocalyptic world, using computers to dial in and determine their emotional state for the day. Its cinematic twin, Blade Runner, likewise sees a tension between emotion and computing, as Turing-style tests are employed to track down androids, those who aren’t entirely human. But we see by the end of the movie that things aren’t so simple.
The issue I have is with the categorization and quantification of emotion in these kinds of settings. The HUMAINE group’s idea is a fantastic one, and I think a lot of the work they’ve been doing is very valuable. What worries me is that the inevitable automation of this kind of markup will do more to alter our emotional responses than to record them.
A brief example: an image or movie is automatically tagged with emotional notation related to anger, loss, grief, etc., through its association with its uploader, a YouTuber who really likes war films. Interpreting this markup, the site itself is modified to enhance these feelings: darkening the background, saturating the reds, cooling the colours. Further, since naturally our homes will be connected, the lights will dim to enhance the feeling, the temperature control will rise a few degrees, and ambient noise will be blocked out.
The movie in question might be something as emotionally sophisticated as The Thin Red Line (I love that movie) or might just be an action romp like Commando. By automating the process of emotional tagging, the intended nuances of that media will be altered by the computer’s interpretation of past quantified data and in-media visual cues.
There’s a lot of exciting stuff here, but there needs to be a lot more in the way of research and responsible design using these methods, I’d think.