What will surely become endless nights of coding with my new Kinect have begun.
I’ve been wanting one ever since hearing it described as a cheap, accessible 3D camera, so it was about time. Having been a long-time user of the PS3 Eye, this felt like a natural next step. Installing the OSX drivers was fairly painless, though it was the first time I’d encountered CMake.
The ever-brilliant Daniel Shiffman has begun working on a set of Processing libraries around the OpenKinect drivers, which so far capture depth and image data. After some initial struggling, I was able to get a good framerate at 640×480 for a point cloud and map colour onto the points. I cannot wait to start in on it with the OpenCV libraries, though whether I can stick with Processing while doing so is questionable.
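The basic move behind that point cloud is back-projecting each depth pixel into 3D space. Here is a rough sketch in Python of how that works; the raw-depth-to-meters conversion is a commonly cited community approximation from early Kinect calibration efforts, and the camera intrinsics (`FX`, `FY`, `CX`, `CY`) are placeholder values, not figures from my own setup:

```python
import numpy as np

# Approximate intrinsics for the Kinect depth camera, as reported by
# early community calibrations. Treat these as placeholder assumptions.
FX, FY = 594.2, 591.0   # focal lengths in pixels (assumed)
CX, CY = 339.3, 242.7   # principal point (assumed)

def raw_depth_to_meters(raw):
    """Convert 11-bit raw Kinect depth values to meters using a widely
    circulated community approximation. Raw values at/above 2047 mean
    the sensor got no return, so they are zeroed out."""
    raw = np.asarray(raw, dtype=np.float64)
    meters = 1.0 / (raw * -0.0030711016 + 3.3309495161)
    meters[raw >= 2047] = 0.0
    return meters

def depth_to_point_cloud(raw_depth):
    """Back-project an (h, w) raw depth image to an (N, 3) point cloud
    with the pinhole model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy."""
    h, w = raw_depth.shape
    z = raw_depth_to_meters(raw_depth)
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - CX) * z / FX
    y = (v - CY) * z / FY
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]  # drop no-return pixels
```

Colouring the cloud is then just a matter of looking up the RGB pixel that corresponds to each depth pixel, which is roughly what the Processing sketch does per frame.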
All told, it was fun to hack around with it last night. I’m looking forward to using it as a means of exploring gesture-based interactions and, specifically, some of the classic notions of “virtual reality,” as you can see in that image of me holding the world in my hand, made only a few hours after getting the Kinect itself.
Whether we’ll be seeing a Snow Crash-like “Street” is another matter, and I was struck by how disorienting the act of “grasping” that sphere was. As these previously locked-away technologies become more accessible, we’re bound to see some absolutely incredible stuff emerge simply from their being available. But if my struggle to grasp that orb is any indication, we’ve got a very, very long way to go.