Yep. I think a few things might help:

- The computer giving very clear feedback about what it's recognized (perhaps even rendering its own map that shows its interpretation of the scene, as well as visualizing the CV data it's working with).
- The recognition being a "snapshot" rather than continuous: you set things up, the feedback looks correct, you hit the "capture" button, and then you don't have to worry about occlusion from hands, lighting changes, etc.
- Being able to manually specify certain things if the computer really can't seem to recognize them (e.g., it's not getting this vertex, so I'll laser-point to it and say there's a vertex there).

There would be some symbiosis in the person and the computer working together to recognize the scene.
We might come up with standard patterns (and perhaps libraries) for this feedback and intervention, and they might fade as our CV gets better.
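To make the "standard patterns" idea a bit more concrete, here's a minimal sketch of how a capture-and-intervene interface might be structured. Everything here is hypothetical (the class names, the recognizer interface, the fields); it's just one way to express live feedback, snapshot capture, and manual override, not an existing library.

from dataclasses import dataclass, field


@dataclass
class Vertex:
    x: float
    y: float
    confidence: float       # how sure the CV system is about this vertex
    source: str = "cv"      # "cv" for recognized, "manual" for user-specified


@dataclass
class SceneInterpretation:
    """The computer's own 'map' of what it thinks it saw."""
    vertices: list[Vertex] = field(default_factory=list)

    def render_preview(self):
        # In a real system this would draw the interpretation (and the raw
        # CV data it came from) so the person can check it before committing.
        for v in self.vertices:
            print(f"vertex at ({v.x:.1f}, {v.y:.1f}) "
                  f"[{v.source}, confidence {v.confidence:.2f}]")


class CaptureSession:
    """Snapshot-style recognition: live feedback, then an explicit capture."""

    def __init__(self, recognizer):
        self.recognizer = recognizer   # any callable: frame -> SceneInterpretation
        self.interpretation = None

    def preview(self, frame):
        # Continuous feedback while the person sets things up.
        self.interpretation = self.recognizer(frame)
        self.interpretation.render_preview()

    def capture(self, frame):
        # The person hits "capture"; after this, occlusion from hands or
        # lighting changes no longer matter.
        self.interpretation = self.recognizer(frame)
        return self.interpretation

    def add_manual_vertex(self, x, y):
        # Manual intervention: "it's not getting this vertex, so I'll point
        # to it and say there's a vertex there."
        self.interpretation.vertices.append(
            Vertex(x=x, y=y, confidence=1.0, source="manual"))

Tagging each vertex with a source and a confidence is one way to keep the person/computer symbiosis visible: the feedback view can show which parts of the scene the computer figured out on its own and which parts the person had to point out.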
On May 10, 2016, at 9:21 AM, Toby Schachman wrote: