The grid-as-barcode is a very clever solution, and seems intriguing in its own right—that every page of a book is a unique identifier for itself (albeit a possibly inefficient one).
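To make that idea concrete, here is a minimal sketch of how a page's grid could serve as its own identifier. This is my own illustration, not how the actual prototype works: each thumbnail is collapsed to an average brightness, and the pattern of "darker/lighter than the page mean" becomes a fingerprint. The `book` data model here is entirely hypothetical.

```python
# Sketch: treat a page's grid of thumbnails as its own identifier.
# Each thumbnail is reduced to one average brightness value; the sequence of
# "brighter than the page mean" bits becomes a fingerprint for the page.
# (Hypothetical data model -- a real system would hash actual pixel data.)

def page_fingerprint(thumb_brightness):
    """thumb_brightness: list of average brightness values, one per grid cell."""
    mean = sum(thumb_brightness) / len(thumb_brightness)
    bits = ''.join('1' if b > mean else '0' for b in thumb_brightness)
    return int(bits, 2)

# A tiny "book": each page is represented by its grid of brightness values.
book = {
    page_fingerprint(grid): page_num
    for page_num, grid in enumerate([
        [10, 200, 30, 180],   # page 0
        [220, 15, 190, 40],   # page 1
    ])
}

# A camera frame yields slightly noisy brightness values; the above/below-mean
# pattern survives the noise, so the fingerprint still matches.
seen = [11, 198, 33, 177]                # noisy view of page 0
print(book.get(page_fingerprint(seen)))  # -> 0
```

The inefficiency noted above shows up here too: the whole grid is the ID, so a sparse page carries fewer distinguishing bits.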
What I find fascinating about the current batch of prototypes is where things are Dynamic and where they are fixed. Perhaps we are closing in on something (too early to say) which says that Dynamic should not necessarily equal 100% ethereal.
I’ve tried making versions of Walter’s photo wall for the glowing rectangle and they have always fallen short. I did use them myself when editing some scenes because I was too lazy to go get the printed pages and put them up on my wall, but the screen real estate limit was a real drag. However, convenience still won out, which is worth noting.
The great thing/problem with a prototype like this is that it is just good enough to leave me wanting a million more things because I’ve tasted the possibilities… I may be getting overly enthusiastic about the specifics here: images on paper triggering images on screen. There may be other ways of making media feel tangible here—I am just blinded by the magic of RMO’s latest video.
Paper to Screen
What I want next from this prototype is to be able to somehow [laser] point, as you said, at a specific image on the page and navigate to that moment on the screen. I am also imagining somehow connecting this with the standing navigation prototype, so I am standing in front of a big projection screen with my photo book on a music stand, flipping pages and touching/pointing at images in the book. The screen jumps to those moments instantly.
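The pointing half of that interaction reduces to a small geometry problem: map the pointer's position on the page to a grid cell, then map the cell to a timecode. A sketch, with a hypothetical grid layout and made-up timecodes:

```python
# Sketch of "point at an image, jump to that moment" -- my own illustration
# of the interaction, not the prototype's implementation. The 3x2 layout and
# the timecodes below are hypothetical.

def cell_at(x, y, page_w, page_h, cols, rows):
    """Map a pointer position on the page to a grid-cell index (row-major)."""
    col = min(int(x / page_w * cols), cols - 1)
    row = min(int(y / page_h * rows), rows - 1)
    return row * cols + col

# Each cell on this page corresponds to a moment in the film (seconds).
timecodes = [12.0, 47.5, 93.2, 130.8, 164.0, 201.3]  # 3 cols x 2 rows

x, y = 0.7, 0.3                      # pointer at 70% across, 30% down the page
idx = cell_at(x, y, 1.0, 1.0, cols=3, rows=2)
print(f"seek to {timecodes[idx]}s")  # cell 2 -> seek to 93.2s
```

The hard part, of course, is not this arithmetic but getting a stable (x, y) from the camera while the book sits on a music stand.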
Screen to Paper
What about the other direction: watching on the big screen and seeing a compelling moment go by? Stop the video (by standing in the center of the screen, per RMO’s earlier “standing nav” prototype) and make some gesture to “grab” that moment from the big screen and put it onto a thumbnail grid somewhere. Of course, it can’t yet appear on the printed page in the book in front of you immediately (until electronic ink kicks more ass—will it ever?), but maybe there is a receipt-printer-like thing in the background ready to make new pages as soon as a grid has been filled up. And a little robot guy that wheels the page over to you and puts it in the binder. “Thanks PhotoBot!”
The magic of all this is that you are making your media significantly more tangible. You can literally grab a moment from the flowing river of cinema and place it onto a page, then touch that image to make it reappear on the screen. Once someone became habituated to that interaction, using traditional computer video editing would feel painfully non-tactile. One might even be tempted to use the phrase “unleash your creativity” when marketing such technology. It would certainly unleash something.
I would love to try some flavor of this on the film I’m about to start cutting. Maybe it’s still too prototypical to be useful, but it seems so great that I am tempted.
Laser-Shooting Eyes
Watching the RMO video, it occurred to me that the camera required for the CV might be a cumbersome part of this process. One answer is to keep the book in a fixed place, but that’s feeling more like a glowing rectangle prison. Another possibility that occurred to me is to build a device that combines a laser pointer with a camera, such that the camera is handheld (like, the size of a pen) and it can also be used as a pointer. It seems like there might be all kinds of interesting applications for something like that. Maybe it already exists? What would it mean to have a pen that sees? In some ways, that seems like a perfectly logical component of a Dynamic Medium—a writing tool that also reads, and thus can react.
Smart Pen
I suppose someone had to bring up the Livescribe pen eventually. I have a friend who works there if anyone is interested (in what? a demo? I’m not even sure what I’m offering here). He invented Final Cut Pro XML.