Date: Mon, 14 Mar 2016 12:14:32 -0700
From: Toby Schachman
Subject: Re: Possibly-less-dynamic whiteboard archiving
Automated stitching with Python and opencv should be pretty simple using the technique/tutorial from my last email. Let's try to put it together as a test this week.

On Monday, March 14, 2016, Joshua Horowitz wrote:
Re: stitching – Supposedly Hugin can do this, though it's kind of an off-label use, and Hugin is my second-favorite example of open-source tools being impossible to use. (Favorite example: the name "qtpfsgui".) Tutorials which might be useful: this one, probably not this one. PTGui is an alternate front-end for Hugin's back-end which is probably worth a try.

Your grocery-store picture put a huge smile on my face.

Re: diffraction – I think you're right! One strong sign: the two f/22 pictures are more similarly blurry than the 2.5s picture is to the 2s picture. I'm really excited by this, because I took an exam or two on this stuff back in sophomore year and this is the first time I've ever seen it in real life.

On Sun, Mar 13, 2016 at 11:33 PM, Dave Cerf <> wrote:
Exciting progress!

My experiments with stitching so far (using camera-phone shots) have been frustrating, but I'll put some more time into it this coming week.
I am most familiar with Photoshop’s stitching tools (with mixed results), but I imagine there are a number of open-source alternatives out there these days. I even tried taking a photo of an entire grocery store aisle once.

Exposure settings

Re: the image grid above – I was surprised that the shorter-exposure shots came out better. I wasn't expecting so much motion blur with a tripod and delay. That's motion blur, right?

I’m not so sure. Unless someone was playing laser socks nearby (maybe Room OS can pause laser socks sessions when someone takes a whiteboard photo), I think you may be running into lens diffraction, which becomes more noticeable the smaller the aperture. (That cuts against the deeper depth of field a small aperture buys you, but you don’t need to worry too much about that trade-off here.)
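A quick back-of-the-envelope check on the diffraction hypothesis (textbook approximation, not a measurement of this lens): the diameter of the Airy disk a point of light smears into on the sensor is roughly 2.44 × wavelength × f-number.

```python
def airy_disk_diameter_um(f_number, wavelength_nm=550.0):
    """Diffraction-limited spot diameter in micrometers.

    Uses the standard Airy-disk approximation, 2.44 * wavelength * N,
    with green light (550 nm) as the default wavelength.
    """
    return 2.44 * (wavelength_nm / 1000.0) * f_number

# At f/22 the spot is ~29.5 um, versus ~10.7 um at f/8 -- many pixels
# wide on a typical few-micron sensor pitch, which would explain why
# the f/22 shots look uniformly soft.
```

So if the blurry shots were taken stopped way down, diffraction alone could account for it, no camera shake required.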



All the images above are from RAW captures, processed by Mac’s Preview application. Comparing this with the camera’s JPEG output suggests that compression could be a problem.

Is OCR a concern at all? I’m guessing no, especially if Toby’s idea to relink to the original printed images/text works out (in which case you could OCR those if you needed to). Not surprising that JPEG is showing artifacts at that scale. The camera may have some (limited) control over the JPEG quality—it’s worth checking.

RAW vs JPEG – chromatic aberration

However, whatever processing the camera does to produce the JPEG from the RAW includes some other useful steps. For instance, parts of the image near the edge of the lens show dramatic chromatic aberration, which the processing does a good job of cleaning up:


I know Lightroom and Photoshop have chromatic aberration correction, so maybe other tools out there do as well. It might be interesting to compare different RAW interpretations of the images to see which processing you prefer (Adobe has their own; Apple has their own; etc.).

Lens distortion

I haven’t taken a look at lens distortion yet. It’s likely we will get a significant amount, since we are squeezing our whiteboards into the frame edge-to-edge. Fortunately, this kind of distortion is predictable and correcting it is easy. Just something else to consider.

I’d love to know what you use for correcting the distortion—we have a few shots in the movie I need to correct for this. Right now I’m using off-the-shelf tools, but it’s good to know what other options are available.