Date: Mon, 24 Aug 2015 17:25:53 -0700
From: Robert M Ochshorn
Subject: More INPUT (Next Session September 16 @ CDG)
Dear all,

I hope you enjoyed the tour of CDG last week. It was a pleasure to host, and I found our conversations deeply stimulating and useful. If you weren’t just being polite when you said that you’d come back for more INPUT, then let’s continue!

INPUT.2: 
September 16, 2015, 7-9pm, 2 S Park St (SF)
RSVP requested.

As before, dinner will be provided, and you are welcome to come earlier to hang out, read our books, &c. (Actually, you’re always welcome to come by, hang out, read our books, &c.—just ask.) Now that you already know what we are trying to do, the schedule is more open. Do any of you have research or projects or ideas or performance you’d like to share? Write to me off-list. Are there other people who should be invited? Invite them!

Bret and I would like to share the poster about our project that you may have seen in the space, in case you want to refer back to some of the ideas we’ve been thinking about. It’s a large PDF, nicely designed for screen viewing (how meta…[0]), and we’d appreciate your not sharing/distributing this document (yet):

[2015-07-14-a-computing-system-for-dynamic-spatial-media.pdf]

Finally, I collected some of the references from our discussion. Here they are—what did I miss?


• Google’s cloud-based ASR systems are secretive and closed, but work well (they don’t give timing info; see the first sketch after this list).
• Mechanical Turk-based realtime ASR is now a thing, with <2sec latency.
• A few people really liked the Asus Xtion for RGB-D imaging, and think that our CV problems would be a lot easier with depth info (see the second sketch after this list).
• There were some good pointers for 3-D scanning from RGB-D cameras, including Kinect Fusion and a SLAM system from Oxford that, he kept saying, “does everything,” though I can find no trace of the so-called Infinita system online.
• The trigram “deformation of people” is rather unfortunate, and saying “there’s a lot of literature on it” could mean more than one thing. This, of course, is in reference to our “skeleton tracking” investigations, and came up apropos of a University of Kentucky paper, “Real-time Simultaneous Pose and Shape Estimation for Articulated Objects Using a Single Depth Camera.”
• Google Glass and egocentric perspective.
• I swear Neeraj was talking about research in “knowledge spaces,” but the term of art is “knowledge bases,” so either I misheard or CDG slipped into his subconscious. Neeraj thinks Luke Zettlemoyer’s research on “intersections of natural language processing, machine learning, and decision making under uncertainty” may be relevant as we progress.
• I think SEMPRE was the intended referent when discussing Percy Liang’s work. I’m imagining a booming God-like voice:
  Utterance: Which college did Obama go to?
  Denotation: Occidental College, Columbia University
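
A minimal sketch of the Google ASR point above, using the community SpeechRecognition package for Python; the package choice and the file name are my own assumptions, not anything we settled on:

  # Hedged sketch: transcribe a WAV file through Google's recognizer via the
  # SpeechRecognition package. "meeting.wav" is a placeholder.
  import speech_recognition as sr

  recognizer = sr.Recognizer()
  with sr.AudioFile("meeting.wav") as source:
      audio = recognizer.record(source)

  # The result is a plain transcript string: accurate, but with no
  # word-level timestamps, which is exactly the limitation noted above.
  print(recognizer.recognize_google(audio))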
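
And a minimal sketch of reading a depth frame from an Xtion-class sensor through the OpenNI2 Python bindings (the primesense package); the library choice and the default 640x480 mode are assumptions, not part of our current setup:

  # Hedged sketch: grab one depth frame from an Asus Xtion (or any other
  # OpenNI2-compatible RGB-D camera) and read out a depth value in mm.
  import numpy as np
  from primesense import openni2

  openni2.initialize()                 # loads the OpenNI2 runtime
  dev = openni2.Device.open_any()      # first attached RGB-D device
  depth = dev.create_depth_stream()
  depth.start()

  frame = depth.read_frame()
  buf = frame.get_buffer_as_uint16()
  depth_mm = np.frombuffer(buf, dtype=np.uint16).reshape(480, 640)  # VGA mode
  print("depth at center pixel: %d mm" % depth_mm[240, 320])

  depth.stop()
  openni2.unload()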


Your correspondent,

R.M.O.

[0] There are some reports of this document actually crashing contemporary computer systems. Chrome’s viewer may fare better than OS X’s Preview.

On Aug 19, 2015, at 2:43 PM, Robert M Ochshorn wrote:

Thanks for RSVP’ing for tonight! I’m looking forward to seeing and meeting everyone. 

Secret CDG Office Info (please don’t share): 3rd floor of 2 South Park, San Francisco. Entrance is on 2nd Street, next to Jeremy’s. Press “Start,” then #, then **** on the keypad to get into the building, and take the stairs to the top. If you require the use of an elevator or other assistance, let me know and I’ll make arrangements.

Give me a call at (********** if you have any problems finding us, &c.

RMO

On Aug 17, 2015, at 11:33 AM, Robert M Ochshorn wrote:

Reminder: the first INPUT is happening this Wednesday at CDG.

Please let me know that you’re coming (if you haven’t already RSVP’ed), and hope to see you soon!

All best,
-Robert

On Jul 29, 2015, at 2:35 PM, Robert M Ochshorn wrote:

Dear all,

I would like to invite you to an informal gathering of sound and vision practitioners, to be hosted at the Communications Design Group. The meeting is called INPUT because we imagine computers interacting with us in and through our world aided by the inputs of seeing and listening, rather than us (humans) needing to constrain our inputs through the narrow range of digital keyboards, mice, and slippery screens. The meeting is also called INPUT because we are in urgent need of your input as we (so far, myself and Bret Victor) develop a new system for room-based spatial/physical computing.

My hope is that we will continue meeting regularly (monthly?). To start, I propose:

August 19, 7-9pm, 2 S Park St (SF)

Here’s the schedule I am imagining:

6pm - you are welcome to come early and loiter / acclimatize yourself to the space

7pm - dinner (please let me know any dietary restrictions you have)

7:45pm - tour of CDG room projects so far & cursory systems architecture

8:15pm - brief presentation on my efforts so far to integrate modern ASR + CV into the room

8:30pm - group brainstorming, reference collecting, and planning for our next meeting


RSVP is requested so that I can order the right amount of food. Please feel welcome to invite others that you think would be interested & have something to contribute.

Hope to see you there!

- Robert M Ochshorn

Researcher
Communications Design Group
(**********


<Screen Shot 2015-07-15 at 6.27.02 PM.png>
Pointcloud of our office, reconstructed through SfM.
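
(For the curious, the kernel of that reconstruction, stripped down to two views, is: match features, recover relative camera pose, triangulate. A hedged OpenCV sketch is below; the image files and the intrinsics matrix K are placeholders, and the real pipeline uses many views plus bundle adjustment.)

  # Hedged sketch: two-view structure-from-motion with OpenCV.
  import cv2
  import numpy as np

  K = np.array([[700., 0., 320.],     # assumed pinhole intrinsics
                [0., 700., 240.],
                [0., 0., 1.]])

  img1 = cv2.imread("office_a.jpg", cv2.IMREAD_GRAYSCALE)
  img2 = cv2.imread("office_b.jpg", cv2.IMREAD_GRAYSCALE)

  # Detect and match ORB features between the two views.
  orb = cv2.ORB_create()
  kp1, des1 = orb.detectAndCompute(img1, None)
  kp2, des2 = orb.detectAndCompute(img2, None)
  matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)
  pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
  pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

  # Recover the relative pose and triangulate a sparse point cloud.
  E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
  _, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
  P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
  P2 = K @ np.hstack([R, t])
  pts4d = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)
  cloud = (pts4d[:3] / pts4d[3]).T     # N x 3 Euclidean points
  print(cloud.shape)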

<Screen Shot 2015-06-28 at 12.47.03 PM.png>
Kaldi’s internal FST lattice, visualized on top of an FFT.
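
(And the FFT backdrop in that image is just a log-magnitude spectrogram; here is a hedged numpy/matplotlib sketch, assuming a mono 16 kHz WAV with a placeholder filename, with the lattice overlay itself left out.)

  # Hedged sketch: short-time-FFT spectrogram of a mono 16 kHz recording.
  import numpy as np
  import matplotlib.pyplot as plt
  from scipy.io import wavfile

  rate, samples = wavfile.read("utterance.wav")   # placeholder filename
  win, hop = 512, 160                             # ~32 ms window, 10 ms hop
  frames = np.stack([samples[i:i + win] * np.hanning(win)
                     for i in range(0, len(samples) - win, hop)])
  spectrogram = np.log1p(np.abs(np.fft.rfft(frames, axis=1)))

  plt.imshow(spectrogram.T, origin="lower", aspect="auto")
  plt.xlabel("frame (10 ms hop)")
  plt.ylabel("frequency bin")
  plt.show()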



PS - This is not the best introduction to CDG, but neither is it the worst: