We’ve migrated the instructions for the aluminum mounts over to Instructables. Those of you who have made your own mounts, modified ours, or even built them from parts on hand – make an Instructable for your mount! There are SO many ways to do this and many of you have done it already. We’d love to get your versions out there and start tinkering.
The RGBDtoolkit Shapeways store is open for business!
We have just made two beta mounts for the Asus available on Shapeways! Send in requests or suggestions for new mounts or new mounting points. For example – wouldn’t it be convenient to just attach the Asus to the hot shoe? Or to your rail system? Also, send us your mods and adaptations!
I have put together a few of the Kinect & Asus sensor 3D printable mounts we’ve been working on – the link is above. It has been an intriguing and gratifying process – I carried the first 3D print test we made around in my pocket like a good luck charm for days.
As some of you know, the mounts have taken many forms: the slightly nautical-looking aluminum and wood mounts and the more recent pale and slender 3D printed ones. Maybe in the near future we’ll have a set of very easy and inexpensive mounts made with injection molding.
Meanwhile, these are all tested and usable, but they are far from perfect. James convinced me to make them public in the Open Source spirit with a call for assistance. If you are interested in assisting or advising in the process of optimizing or expanding the set of mounts, please do get in touch.
In the meantime I’ll be making the models public on the Google Warehouse and a 3D printing site as soon as we can get some test prints. Feel free to get in touch with questions. More to come!
“Looks like a mess. I see no benefit to it. Why would a DSLR need to have a depth sensor? It actually already can manipulate depth, it’s called an aperture. This doesn’t make the video 3D or anything. You can already get a video to look like this out of the kinect - perhaps not as high quality, but this isn’t really high quality either…It’s all broken up and messy looking. Just some kids messing around. There’s really no application, you could throw a wireframe grid around objects in post production and various special effects with more accuracy.” —The sole comment on the CNET article discussing the RGB+D research efforts
We’ve put together an exhibition featuring RGBD work alongside the work of Kyle McDonald and Arturo Castro. Opening May 10th from 6-8 at Eyebeam in NYC (540 West 21st St, Chelsea, NY).
Our environment is full of machines interpreting our every gesture. We have video games programmed to judge our dance moves, electronic storefront advertisements that infer our gender, and security cameras that algorithmically deduce our intentions. These automated eyes peer through lenses of code continually attempting to make sense of our world.
What would happen if we cracked open these vision machines to reveal the images flowing through? How does their way of seeing influence our own self-perception? Their gaze is strange and unsettling, yet we recognize ourselves within it. We appear distorted, somehow alien and uncanny—as if we’ve stumbled into a funhouse hall of mirrors, confronted with reflections at once foreign and familiar, virtual and real. Lines and dots overlay our faces and figures, a new type of tribal mask depicting our body’s interface to computational logic. These digital depictions speak to our contemporary existence as half virtual/half analog beings, and have inspired a group of artist-technologists to explore their potential.
WIRED FRAMES brings together artists who engage with machine perception to explore the humanistic, expressive, and creative potential of this mode of representation. The artists have taken up the subject of portraiture, exploring a new vanguard of the age-old genre. Both the viewers and the machines are responding to these portraits, following instinctual tendencies to look for facial patterns, to find the human within the field of view.