Date: Sun, 23 Aug 2015 21:11:32 -0700
From: May-Li Khoe
Subject: Re: INPUT (sound & vision meetup @ CDG)
Toward the end of the discussion, we got to chatting about what kinds of UI interactions might be ripe for improvement.

Because of the unique nature of the drawing app, Lia’s feedback was that it was hard to know where on the wall you were going to start drawing when you turned the beam on (this isn’t an issue for the rest of the projects, since they trigger on laser OFF).

Although we’re all well aware that the lasers are only being used until the rest of the technology catches up with us and lets us point or gesture in other ways, I thought that playing with a solution to this problem could lead to some other interesting behaviors. It feels like a spot where eventually we’d need to use voice, or another hand, or something (like you were all already experimenting with play-doh and the iPad!).

I started futzing around a bit and noticed that I could cap the amount of light by putting my fingertip over the laser pointer, and use the dimmer beam to find where I wanted to draw. I could then roll my finger back a bit to let the full amount of laser light through and draw a dot at a time.
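
On the software side, here’s a minimal sketch of how the drawing app might tell the dimmed “aiming” dot apart from the full-strength “drawing” dot — this is just an assumed brightness-threshold approach with made-up numbers, not how the actual project works:

    # Hypothetical sketch (not the real project code): tell a dimmed "aiming"
    # dot apart from a full-strength "drawing" dot in frames from the wall camera.
    # Threshold values are invented; real ones depend on exposure and room light.
    import cv2

    AIM_THRESHOLD = 120    # dim dot: just show a cursor (assumed value)
    DRAW_THRESHOLD = 230   # full-strength dot: commit a mark (assumed value)

    def classify_laser(frame_bgr):
        """Return ("draw" | "aim" | None, (x, y) or None) for one camera frame."""
        gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
        gray = cv2.GaussianBlur(gray, (5, 5), 0)       # knock down sensor noise
        _, max_val, _, max_loc = cv2.minMaxLoc(gray)   # brightest pixel ~ laser dot
        if max_val >= DRAW_THRESHOLD:
            return "draw", max_loc
        if max_val >= AIM_THRESHOLD:
            return "aim", max_loc
        return None, None

    cap = cv2.VideoCapture(0)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        state, pos = classify_laser(frame)
        if state == "aim":
            pass   # render a floating cursor at pos, but don't draw
        elif state == "draw":
            pass   # add a dot to the drawing at pos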

At the end of INPUT I made a super cheap little paper prototype of a laser pen cap that could let you alternate between two laser beam strengths:







The bit of paper then felt unstable, so I adjusted it to a version that folded twice and attached to both sides of the pen.

It wound up feeling weirdly close to “clicking” (constant feedback about cursor position + a finger push to trigger a thing).



Anyway, if you’d seen these random bits of paper lying around the Serengeti, now you know what they were for.

On Aug 19, 2015, at 11:25 PM, Robert M Ochshorn wrote:

On Aug 19, 2015, at 3:40 PM, Toby Schachman wrote:

Food: for the math group I would order on Eat24, usually Mehfil Indian. I'd get like samosas, naan, saag paneer, lamb korma, chicken tikka masala in whatever quantity made sense. All the things come with rice so don't order more rice. Allow 45-60 min for delivery.

Word to the wise: “whatever quantity made sense” turns out to be roughly one main dish for every two attendees. That is to say, I ordered about twice as much food as we needed. The fridge is full of delicious Mehfil Indian food—you are welcome to as much as you can bear.

INPUT.1 turned out pretty well. About a dozen people showed up, with varied but overlapping backgrounds and interests, and we stuck pretty closely to the agenda I had prepared.

<IMG_0019.jpeg>
The Demo (photo by Götz Bachmann)

I tried to get down most of the proper nouns when we opened up to discussion at the end. Here’s a decoded form:

• Google’s cloud-based ASR systems are secretive and closed, but work well (they don’t give timing info).
• Mechanical Turk-based realtime ASR is now a thing, with <2sec latency.
• A few people really liked the Asus Xtion for RGB-D imaging, and think that our CV problems would be a lot easier with depth info.
• There were some good pointers for 3-D scanning from RGB-D cameras, including Kinect Fusion. A fellow from Oxford mentioned a SLAM system that, he kept saying, “does everything,” but I can find no trace of the so-called Infinita system online.
• The trigram “deformation of people” is rather unfortunate. And saying “there’s a lot of literature on it” could mean more than one thing. This, of course, is in reference to our “skeleton tracking” investigations. Someone was really into a University of Kentucky research paper, “Real-time Simultaneous Pose and Shape Estimation for Articulated Objects Using a Single Depth Camera.”
• Google Glass and egocentric perspective.
• I swear Neeraj was talking about research in “knowledge spaces,” but the term of art is “knowledge bases,” so either I misheard or CDG slipped into his subconscious. Neeraj thinks Luke Zettlemoyer’s research on “intersections of natural language processing, machine learning, and decision making under uncertainty” may be relevant as we progress.
• I think SEMPRE was the intended referent when discussing Percy Liang’s work. I’m imagining a booming God-like voice (a toy sketch of the idea follows this list):
  Utterance: Which college did Obama go to?
  Denotation: Occidental College, Columbia University
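
To make the pipeline concrete, here’s a toy sketch of the utterance → logical form → denotation idea. This is not SEMPRE’s actual API — the mini knowledge base, grammar, and function names are all invented for illustration:

    # Toy illustration of semantic parsing in the SEMPRE spirit:
    # utterance -> logical form -> denotation. Everything here is made up.

    KB = {
        ("BarackObama", "education"): ["Occidental College", "Columbia University"],
        ("BarackObama", "placeOfBirth"): ["Honolulu"],
    }

    def parse(utterance):
        """Map a tiny family of questions onto a (subject, relation) logical form."""
        u = utterance.lower().rstrip("?")
        if "college" in u and "obama" in u:
            return ("BarackObama", "education")
        if "born" in u and "obama" in u:
            return ("BarackObama", "placeOfBirth")
        raise ValueError("utterance not covered by this toy grammar")

    def execute(logical_form):
        """Evaluate the logical form against the knowledge base to get a denotation."""
        return KB[logical_form]

    q = "Which college did Obama go to?"
    lf = parse(q)                              # ('BarackObama', 'education')
    print("Utterance:   ", q)
    print("Logical form:", lf)
    print("Denotation:  ", ", ".join(execute(lf)))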


People seemed excited about coming back next month, and some had ideas for other people who should be in our orbits. There were some interesting conversations and observations—Götz took some notes and may send out some of his impressions.


Onward!

R.M.O.