Date: Sun, 13 Mar 2016 16:01:49 -0600
From: Nicky Case
Subject: Re: [journal] color tracking -> drawing on emoji sim
Hi everyone!

(Sorry for the late response, I was away the last few days at NICAR, a code-journalism conference. And in two days, I’ll be starting my OpenNews fellowship at PBS Frontline, in Boston!)

> (For instance, StarLogo has "turtles" walking around on top of "patches".)

Totally agree! After reading Turtles, Termites & Traffic Jams, there are so many design choices I could totally re-use for Emoji Sim – or whatever modeling tool I make next. Because it turns out emoji support is horrible and inconsistent, I'm probably just better off letting people pick stuff out from a bunch of pre-drawn icons or The Noun Project.

By the way, someone from the MIT Media Lab showed me this recently. Totally in line with the kind of stuff we're doing! http://cp.media.mit.edu/city-simulation (if you haven't already seen it)

~ Nicky


On Mar 10, 2016, at 20:13, Joshua Horowitz wrote:

This is rad as heck Paula!

The idea of a second CA layer actually came up for very different reasons earlier. I think this was when Paula was modeling the thieves in her apartment... There were tiles representing "inside the apartment" and "outside the apartment", but it was hard to write down the rules for thief motion, since there was no way of knowing what kind of tile the thief should leave behind when they move:

<image.png>
does the evil thief leave behind "inside" or "outside" when they move to an "inside" square???

You can solve this by making two separate emoji for "inside thief" and "outside thief", but this starts to get crazy...
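To make the layer fix concrete, here's a minimal Python sketch (mine, not Emoji Simulator's actual code; all names are hypothetical). The thief lives in an agent layer on top of an unchanging terrain layer, so the "what tile do I leave behind?" question never comes up:

```python
# Hypothetical two-layer grid: terrain never changes, agents move on top of it.
INSIDE, OUTSIDE = "inside", "outside"

terrain = [
    [OUTSIDE, OUTSIDE, OUTSIDE],
    [OUTSIDE, INSIDE, INSIDE],
    [OUTSIDE, INSIDE, INSIDE],
]

# Agent layer: None = empty, "thief" = a thief occupies that cell.
agents = [[None] * 3 for _ in range(3)]
agents[0][0] = "thief"

def move(agents, src, dst):
    """Move an agent from src to dst. The terrain layer is untouched,
    so there's nothing for the thief to 'leave behind'."""
    (r0, c0), (r1, c1) = src, dst
    agents[r1][c1] = agents[r0][c0]
    agents[r0][c0] = None

move(agents, (0, 0), (1, 1))  # thief steps onto an INSIDE square
```

With one layer you'd need a combined state for every (agent, terrain) pair – "inside thief", "outside thief", and so on; with two layers each combination just falls out for free.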

The deeper spiritual reason to want to add a second layer is that a lot of simulations involve the behavior of a set of mobile agents "on top of" an immobile background environment. (For instance, StarLogo has "turtles" walking around on top of "patches".)

In a two-level Emoji Simulator*, the background cells could be totally static (/ controlled by an external factor, like Paula's webcam), in which case you're exploring how mobile/dynamic agents act on a certain fixed background. E.g., mice running around in mazes! forest fires with a certain pattern of soil types across the land!

Or the background cells could themselves evolve over time & respond to the foreground cells. E.g., you could use the background cells to represent the "scent" of moving foreground prey, which moving foreground predators could respond to.
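The scent idea can be sketched too (again a hypothetical Python sketch, not anything from the actual tool): prey deposit scent into the background layer, the scent diffuses and decays each tick, and a predator just climbs the local gradient.

```python
# Hypothetical "scent" background layer: prey deposit scent, it diffuses
# and decays, and predators move toward the smelliest neighboring cell.
SIZE = 8
scent = [[0.0] * SIZE for _ in range(SIZE)]

def deposit(scent, positions, amount=1.0):
    """Prey at these positions dump scent into the background layer."""
    for r, c in positions:
        scent[r][c] += amount

def diffuse(scent, rate=0.2, decay=0.95):
    """Each cell shares `rate` of its scent equally with its 4 neighbors
    (wrapping at the edges), then everything decays a little."""
    new = [[0.0] * SIZE for _ in range(SIZE)]
    for r in range(SIZE):
        for c in range(SIZE):
            new[r][c] += scent[r][c] * (1 - rate)
            share = scent[r][c] * rate / 4
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                new[(r + dr) % SIZE][(c + dc) % SIZE] += share
    return [[v * decay for v in row] for row in new]

def predator_step(scent, r, c):
    """Return the neighboring cell with the strongest scent."""
    neighbors = [((r + dr) % SIZE, (c + dc) % SIZE)
                 for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))]
    return max(neighbors, key=lambda p: scent[p[0]][p[1]])

deposit(scent, [(4, 4)])     # prey leaves scent at (4, 4)
scent = diffuse(scent)       # one tick: scent spreads to neighbors
# a predator at (4, 2) now steps toward (4, 3), following the gradient
```

(This is basically StarLogo's turtles-and-patches split: the scent lives in the patches, the predators are turtles reading it.)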

I'm not saying any of this stuff actually belongs in Emoji Simulator, since it's an unnecessary complication for most people. But it feels like a natural extension to throw into Emoji Simulator Pro. :)

* - my abbreviation for "Simulating The World (In Emoji 😘)"

On Thu, Mar 10, 2016 at 5:55 PM, Chaim Gingold wrote:
This is so cool!

Paula, I’m going to email what we just talked about since Nicky is on this thread. :-)

It would be super cool if there were a 2nd CA layer that the captured image got dumped into (and it could be hidden and shown), and then you simply wrote CA rules that responded to this 2nd layer. This would open up all kinds of creative possibilities.
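A rough Python sketch of that idea (hypothetical names, nothing from the actual tool): threshold the captured frame into a boolean tracking layer, then write a per-cell rule that reads it.

```python
# Hypothetical sketch: dump a captured frame into a 2nd boolean layer,
# then write a CA rule that responds to that layer.

def to_track_layer(frame, threshold=0.5):
    """frame: 2D grid of match strengths 0..1 from the color tracker.
    Returns a boolean layer: True where the tracked pen color was seen."""
    return [[v > threshold for v in row] for row in frame]

def step_cell(state, tracked):
    """Example rule for one cell: fire ignites wherever the pen was seen,
    and existing fire burns out to an empty cell otherwise."""
    if tracked:
        return "🔥"
    return "⬜" if state == "🔥" else state

frame = [[0.1, 0.9],
         [0.8, 0.2]]
layer = to_track_layer(frame)
grid = [["⬜", "⬜"],
        ["🔥", "⬜"]]
grid = [[step_cell(grid[r][c], layer[r][c]) for c in range(2)]
        for r in range(2)]
```

The nice part is that hiding/showing the tracking layer is purely cosmetic – the rules read it either way.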

> On Mar 10, 2016, at 5:45 PM, Paula wrote:
>
> it is tracking the cyan color of the pen!
> <poopCameraTracker.gif><fireCameraTracker.gif>