I want all the gizmos cluttering my life to work like Stanford University’s SLR camera.
This baby has been around for a while, but now the university is releasing the code behind it and picking up $1 million from the National Science Foundation to make free Frankencameras for computational photography professors.
OK, I should be more specific: The wild and free code will work with the Nokia N900, evidently a supa-dupa smartphone. Your 5D ain’t gonna be R2D2 any time soon. But we can hope, right? Maybe the necessity of omni-programmability will be, ahem, properly exposed with Stanford’s latest effort.
One Frankencamera trick I’d look forward to is bending light to my will. With Linux in your cambox, there’s no more choosing between a sharp but dark fast exposure and a well-lit but blurry slow one. See, a Frankencamera “shoots both exposure speeds in rapid succession and then automatically combines them, resulting in a photo that is both bright and sharp.”
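To make that idea a little more concrete, here’s a rough Python/NumPy sketch of one way to blend a short (sharp but dark) and a long (bright but possibly blown-out) exposure of the same scene. To be clear, this is a toy illustration of the general combining idea, not the Frankencamera’s actual algorithm; the function name, the clipping threshold, and the weighting scheme are all my own inventions.

```python
import numpy as np

def fuse_exposures(short_exp, long_exp, exposure_ratio, clip=0.95):
    """Toy blend of a short (sharp, dark) and long (bright) exposure.

    short_exp, long_exp: float arrays in [0, 1], assumed already aligned.
    exposure_ratio: long exposure time divided by short exposure time.
    """
    # Scale the short exposure up to roughly the long exposure's brightness.
    boosted_short = np.clip(short_exp * exposure_ratio, 0.0, 1.0)

    # Per-pixel weight: trust the long exposure except where it nears clipping.
    weight = np.clip((clip - long_exp) / clip, 0.0, 1.0)

    # Weighted blend: bright where the long exposure is good,
    # fall back to the boosted short exposure where it blew out.
    return weight * long_exp + (1.0 - weight) * boosted_short
```

The real system presumably does something far smarter (alignment, deblurring, noise-aware merging), but the point is the same: with a programmable camera, the combining step is just software you can write.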
So you’ve got software extending the abilities of a device. Nothing new, just check out the app store for the iPhone. Same story, different tool. But why stop there? I say, “Apps for everything!”
Give me customizable beeps on my microwave, let my washer show me my percentage of a city’s water usage, make my alarm clock add a few minutes to the time if I continually push snooze … you know, make these things better.
Is it that hard? Every gizmo has important similarities. It accepts input through various means (buttons, switches, keyboards, touchscreens and so on), it displays some kind of useful information (the time of day, your location on a map, the image you just captured and so on), and it does something (heats your food, washes your clothes, wakes you up, connects you to your e-mail, snaps a photo and so on). The biggest obstacle, it seems, is the display and input mechanism. If those were standardized across gizmos, then apps could be, too. But what screen and input mechanism could possibly work on so many sizes and types of devices? I don’t know. All I can think of are those hologram communicators in Star Wars.
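Just to show what I mean by “important similarities,” here’s a hypothetical Python sketch of a common gizmo interface (input, display, action) and the snooze-penalty alarm clock imagined above as an “app” on top of it. Nothing here is real; every class and method name is made up for illustration.

```python
from abc import ABC, abstractmethod

class Gizmo(ABC):
    """Hypothetical shared interface for any appliance that could host apps."""

    @abstractmethod
    def read_input(self) -> dict:
        """Buttons, dials, touchscreen taps... returned as a simple event dict."""

    @abstractmethod
    def display(self, message: str) -> None:
        """Whatever screen (or beeper) the device happens to have."""

    @abstractmethod
    def act(self, command: str) -> None:
        """Heat the food, wash the clothes, snap the photo."""


class SnoozePenaltyAlarm(Gizmo):
    """Toy 'app': each snooze press pushes the next wake-up a little later."""

    def __init__(self, snooze_minutes=9, penalty_minutes=2):
        self.snooze_minutes = snooze_minutes
        self.penalty_minutes = penalty_minutes
        self.presses = 0

    def read_input(self):
        return {"button": "snooze"}   # stand-in for real hardware input

    def display(self, message):
        print(message)                # stand-in for a real screen

    def act(self, command):
        if command == "snooze":
            self.presses += 1
            delay = self.snooze_minutes + self.penalty_minutes * self.presses
            self.display(f"Fine. Ringing again in {delay} minutes.")
```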
Surely Google is working on this. Would you want a washing machine’s cost subsidized by ads streaming across its screen? Is Google going to pay for my house if the walls are screens full of a stream of ads? My apartment is feeling pretty small lately…
Anyway, back to the Frankencamera (Professor Marc Levoy’s name for it, not mine).
The SLR for all those cam-comp profs will be available within a year. It will get its full introduction at the SIGGRAPH conference in Los Angeles starting July 25. The programmable-camera project began in 2006 with Nokia.
What else, what else. Ah, forget it. I’m tired. Chew on this: all the links in this post in a neat little pile:
- Stanford announcement: http://news.stanford.edu/news/2010/july/frankencamera-072110.html
- Frankencamera site: http://graphics.stanford.edu/projects/camera-2.0/
- Frankencamera API and Paper: http://graphics.stanford.edu/papers/fcam/
- About computational photography: https://secure.wikimedia.org/wikipedia/en/wiki/Computational_photography
- Professor Marc Levoy: http://graphics.stanford.edu/~levoy/
- Nokia N900: http://maemo.nokia.com/n900/
- Nokia Research Center: http://research.nokia.com/
- Association for Computing Machinery’s Special Interest Group on Graphics and Interactive Techniques (SIGGRAPH): http://www.siggraph.org/
- The N.S.F.: I never found a $1 million award, but I found this and this.