I sort of wish I had recorded the entire process of making this.
So far it’s been two short (~30 min each) sessions with Nagle. Since I’ve already figured out most of the computer vision techniques while working on other sketches, and since I’m using an (ad-hoc and not particularly elegant) “live” environment for numerical computation, the process moves almost at the speed of conversation. Even though I know the “routines,” there’s plenty of room for improvisation and I am able to answer many of Nagle’s questions with code as well as words.
I stumble sometimes and get frustrated with the state I need to keep in my head (“oh, the SDL screen buffer uses four bytes per pixel, but I’m operating on a three-byte BGR view, and OpenCV’s text drawing routines expect a contiguous memory allocation, so I need to make a new array for it”). When I’m working alone, I hardly even notice these strange contortions my code must sometimes make, but in “conversation” such compromises are conspicuous. The history of computer science lives in every leaky line of code we write.
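For the curious, a minimal sketch of that particular contortion, assuming a numpy view over a 4-byte-per-pixel SDL surface; the buffer shape and channel order here are illustrative, not the actual code:

    import numpy as np
    import cv2

    # Stand-in for SDL's 4-byte-per-pixel screen buffer (e.g. BGRX).
    screen = np.zeros((480, 640, 4), dtype=np.uint8)

    # A three-byte BGR view into the same memory: cheap, but strided,
    # so it is no longer one contiguous allocation.
    bgr = screen[:, :, :3]
    assert not bgr.flags["C_CONTIGUOUS"]

    # OpenCV's drawing routines refuse the strided view, so copy into
    # a fresh contiguous array first...
    frame = np.ascontiguousarray(bgr)
    cv2.putText(frame, "token", (10, 30),
                cv2.FONT_HERSHEY_SIMPLEX, 1.0, (0, 255, 0), 2)

    # ...and then copy the result back through the view into the buffer.
    bgr[:] = frame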
I’m very excited for Toby’s CAD Space. I think there are many problems, including these computer vision studies, that could benefit from being spread out in physical space. If there is a conceptual pipeline for the code (e.g. thresholding -> contours -> token corners -> rectification -> preprocessing -> SVM model -> token tracking), why not keep all of these parts out at once so they can be examined and worked on in parallel?
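For concreteness, that pipeline might look something like the following in OpenCV; every constant (Otsu thresholding, the area cutoff, the 100x100 patch size) is a stand-in, and the trained svm is assumed given:

    import cv2
    import numpy as np

    def find_tokens(frame_bgr, svm):
        # thresholding: Otsu on a grayscale copy (the actual method is a guess)
        gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
        _, binary = cv2.threshold(gray, 0, 255,
                                  cv2.THRESH_BINARY | cv2.THRESH_OTSU)

        # contours ([-2] survives the differing return shapes of OpenCV 2/3/4)
        contours = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                    cv2.CHAIN_APPROX_SIMPLE)[-2]

        for c in contours:
            # token corners: keep only quadrilaterals of plausible size
            quad = cv2.approxPolyDP(c, 0.02 * cv2.arcLength(c, True), True)
            if len(quad) != 4 or cv2.contourArea(quad) < 500:
                continue

            # rectification: warp the quad to a canonical 100x100 patch
            # (corner ordering is glossed over here)
            src = quad.reshape(4, 2).astype(np.float32)
            dst = np.float32([[0, 0], [99, 0], [99, 99], [0, 99]])
            M = cv2.getPerspectiveTransform(src, dst)
            patch = cv2.warpPerspective(frame_bgr, M, (100, 100))

            # preprocessing + SVM model: raw pixels as the feature vector,
            # per footnote [0] below (100 * 100 * 3 = 30,000 features)
            features = patch.reshape(1, -1).astype(np.float32)
            yield src, svm.predict(features)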
RMO
On Jun 3, 2015, at 7:55 PM, Bret Victor wrote:
"Recorded conversation as documentation" is fascinating. It feels so much closer to us being in the room with you than a "presentation" screencast.
On Jun 3, 2015, at 6:28 PM, Robert M Ochshorn wrote:
Current status of the token-recognizer (works better than it deserves to[0]):
-rmo
[0] getting extremely lazy with feature extraction—this is doing SVM with 30K raw pixels (absolutely no color correction, white balance, &c).
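A sketch of what footnote [0] describes, assuming scikit-learn and 100x100 BGR patches (100 * 100 * 3 = 30,000 raw-pixel features); the training data here is a synthetic stand-in, and the kernel choice is a guess:

    import numpy as np
    from sklearn.svm import SVC

    # Stand-in training data: N rectified token patches, 100x100 BGR,
    # flattened to 30,000 raw-pixel features each.
    rng = np.random.default_rng(0)
    patches = rng.integers(0, 256, size=(40, 100, 100, 3), dtype=np.uint8)
    labels = rng.integers(0, 4, size=40)  # say, four token classes

    X = patches.reshape(len(patches), -1).astype(np.float32)  # no color correction, &c
    clf = SVC(kernel="linear")
    clf.fit(X, labels)

    # A new patch is classified the same way: flatten and predict.
    new_patch = rng.integers(0, 256, size=(100, 100, 3), dtype=np.uint8)
    print(clf.predict(new_patch.reshape(1, -1).astype(np.float32)))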