Dear pals —

We went into 2022 with the aim of bringing the spirit of Dynamicland to the science lab. In July, we gave a
demo at a small conference, where we showed a series of prototypes around a real-world project at UCSF.

Here we’re designing proteins together on the table.

Here we’re designing a DNA nanostructure by snapping together physical blocks to form a 3D scale model.

Here we’re carrying out an experiment in the wet lab. The test tubes show us what’s inside them, and where
they want to go next.

Here we’re making an analysis pipeline for simulated microscope images, by pinning up programs on the wall.

(When we first tried this one, Shawn stopped and said, “I’m getting a bit emotional”. Apparently he could have
saved a couple years on the last project if he had been able to do it this way.)

The implementation of this whole set of tools is just four posters of Realtalk pages.

We wrote the pages right on the table among the proteins and test tubes.
For anyone from Dynamicland, the principles in play here are all pretty familiar. And yet, at least for me, the way they
transformed the experience of the science lab was deeply moving.

When Shawn described his idea for a structure with two binding sites, I didn't understand it at all. When he showed
it to me out on the table, mixing computational models with scratch-paper sketches, not only did I get it immediately, we
both started proposing and playing with further ideas with our hands.

I never really understood the geometry of DNA crossovers until we started building structures with our origami blocks, and I
could examine them as they snapped together and trace the helices with my fingers. These models were not pedagogical toys —
they were our materials for designing real molecules.

Wet lab experiments used to feel like “blindly manipulating liquids”. Now it was like lifting a blindfold: we could see what
was in every test tube at all times, and watch the concentrations change with each pipette transfer.

A visitor asked about a feature in our simulated micrographs, and in a few seconds, I live-edited the electron microscopy
simulation to emphasize it.

We designed molecules and planned experiments by “thinking with our hands”...

So much of computational biology is about being “trapped in an app”. It’s a whole different world to
improvise computational tools as needed, and poke them at anything on the table.

... picking up virtual molecules into physical test tubes...

... transforming them with handheld programs...

... placing them onto racks and grids like playing a board game...

... arranging programs around ourselves in real space like decorating a room.

In retrospect, it all feels so... obvious?

Why shouldn’t our molecules be lying out on the table? Why shouldn’t we casually play with them and reprogram them as we walk
by?

Dave once called this feeling “ordinary magic”...

An immediate purpose of this work is to prototype a new lab for Shawn, as a model for how science could be done by scientists.
But beyond that — I also see a model for how the invisible systems of our world could be made visible, tangible, playable,
inhabitable for all people.

Can we make molecules — and the mathematics by which we infer them and simulate them — as ordinary and familiar as chairs
and coffee cups?

Can we make science ordinary? As ordinary as reading and writing? For everyone?

All of this is running on Dynalamps. We’ve got a dozen or so tucked away in the science lab, and Luke, Shawn, and I also have
lamps at home.

Back at the Oakland space, equipment was bolted into the ceiling, and it took a day to set up a new machine. By now, it’s
pretty routine for us to grab a lamp and light up whatever space we need, and Realtalk sets up new machines by itself.

After the July demo, Joanne and I put the dyna-portability to the test by spending two months lugging this dyna-suitcase
around Europe.

In a London guest house, Realtalk is recognizing our new British Library cards.

Here we’re working with a group of students in Bordeaux.

A dynamic dinner party in Berlin.

A musical secret code made with our friends in Somerville.

Meanwhile, pushing portability even further, here’s the mini Dynalamp, running a full Realtalk system on a Raspberry Pi and
pico projector.

An immediate practical purpose of these devices is to carry Dynamicland through its nomadic phases, enabling residencies,
pop-up workshops, world tours, gatecrashing, etc.

But the real motivation here is dynamic media in weird places. If live-editing is about improvising with computation,
and recognizers are about improvising with physical material, then the Dynalamp is about improvising with physical space.

Alan would call all of this “late binding”.

For instance, here’s a 360° body-scale DNA model made with three Dynalamps, a huge cardboard tube from Home Depot, and a
one-page program.

Our other recent hardware adventure is the Dynapad, running Realtalk natively on a Surface tablet.

It’s easy to characterize Realtalk as “anti-screen”, but what it’s really against is cramming your entire life into a tiny box.
The Dynapad has no UI; it’s exactly like any piece of paper in Realtalk, an equal participant in the space around it —
except with higher resolution and a better stylus.

It's also expensive, fragile, heavy paper. But in a few decades, it might just be... paper.
Like the Dynalamp, the Dynapad lets us simulate the dynamic material of the future, using materials available today.

We’re looking forward to dynamic drawing within a full Realtalk environment.

Until last year, the implementation of Realtalk was a poster gallery. This is an ideal form for the OS of a
Dynamicland-like site, and I loved working with the poster kits.

But portable Dynalamps call for a portable OS. Realtalk needed an alternative form that was compact and easily replicable.

This is UCSF’s Realtalk.

This is the Realtalk we brought to Europe.

My house is running this one.

Realtalk itself is now an object in Realtalk. Every site has its own. They
are completely self-contained and independent, but it’s easy for sites to compare and trade changes.

Our immediate need was multiple sites, travel, etc. But long-term — we see this as another step toward
community-constructed computing environments.

Realtalk is not a product, but a set of ideas. Those ideas will someday be propagated as kits+games that guide local
communities through crafting their own computing systems, for their own needs, which they understand and control
top-to-bottom. Those future things won’t look like our binders, nor like our posters, but they may descend from
both.

The entire implementation of Realtalk is in the binder, and every page is live. We routinely swap out pages and entire kits
while the system is running, instantaneously. We can switch to another Realtalk entirely in seconds. Here’s a renderer bug that Luke
tracked down and fixed by live-editing GPU Kit.

Realtalk was designed for reacting to the real world, but a fully reactive system turns out to be good at reacting to any
kind of change, even itself. Now that we’ve figured out how to integrate Realtalk’s reactivity with side effects, even
live-editing hardware interfaces comes for free as Realtalk rules match and unmatch.

The hardware interfaces are mostly in C — which is to say that they look exactly like any Realtalk page with When and Claim,
but the goop in between has more semicolons. Shawn’s been writing Realtalk Python pages to integrate his existing DNA tools.

Here’s a polyglot sampler with Lua, C, C++, Python, Julia, and JavaScript (running in a web browser on a laptop) —
all happily reacting to each other’s statements in realtime with When and Claim, all live-editable.

You can imagine the immediate leverage this gives us. But our long-term motivation is a non-monocultural dynamic
medium, where different communities work together by bringing their own objects, in their own languages, onto the same
table. This is a gesture in that direction.
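
(For the mechanically curious, here is a toy sketch of the statements-and-rules idea in plain Python. It is emphatically
not Realtalk’s actual API; every name in it is invented, just to illustrate rules that fire when claims match and clean up
when they unmatch.)

# A toy, stand-alone sketch of the statements-and-rules idea, in plain Python.
# It is NOT Realtalk's actual API: `claim`, `when`, and `retract` are invented
# names, here only to illustrate how independent objects can react to each
# other's statements, and clean up when those statements go away.

claims = set()   # the shared pool of statements
rules = []       # (pattern, on_match, on_unmatch) triples

def matches(pattern, statement):
    """A pattern matches a statement if every non-None field is equal."""
    return len(pattern) == len(statement) and all(
        p is None or p == s for p, s in zip(pattern, statement))

def when(pattern, on_match, on_unmatch=lambda s: None):
    """Register a rule, firing it for any statements that already match."""
    rules.append((pattern, on_match, on_unmatch))
    for s in list(claims):
        if matches(pattern, s):
            on_match(s)

def claim(statement):
    """Add a statement and fire every rule that now matches it."""
    if statement not in claims:
        claims.add(statement)
        for pattern, on_match, _ in rules:
            if matches(pattern, statement):
                on_match(statement)

def retract(statement):
    """Remove a statement and let matching rules clean up after themselves."""
    if statement in claims:
        claims.discard(statement)
        for pattern, _, on_unmatch in rules:
            if matches(pattern, statement):
                on_unmatch(statement)

# One "page" reacts to another's claims, no matter who made them or in what language.
when(("tube", None, "contains", None),
     on_match=lambda s: print(f"label {s[1]}: {s[3]}"),
     on_unmatch=lambda s: print(f"clear label {s[1]}"))

claim(("tube", "A3", "contains", "scaffold, 40 nM"))    # -> label A3: scaffold, 40 nM
retract(("tube", "A3", "contains", "scaffold, 40 nM"))  # -> clear label A3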

We’re still getting used to working across multiple sites, but Realtalk’s networking is very good at gluing sites together.
Turns out that “remote collaboration” can be as simple as seeing and interacting with the stuff on the other person’s desk.

After a few sessions with Shawn, Luke wrote:

To get the visibility and tangibility we needed in the science lab, we ended up overhauling Realtalk’s rendering, printing,
and camera systems.

We needed to render and interact with 3D scenes on the table. Seeing the convoluted 3D structure of
a protein requires looking all around it.

We needed to make a lot of 2D drawings, such as these DNA origami schematics, so we designed a more readable drawing language.

To better produce physical objects, we needed to be able to print anything we’ve drawn. (And not just paper, but posters,
labels, cut paths, PDFs...)

Here’s a DNA origami block which was rendered from six views of the 3D molecular model...

... printed, cut out, and recognized — all in Realtalk, with a few pages. (Folded and magnetized by hand... for now!)

With our new high-resolution camera system and its better optics, we could immediately apply our existing dot recognizer to
much smaller objects.

This includes real-world lab equipment such as racks and plates...

... “game tokens” for manipulating virtual objects...

... and “playing cards” for tools and data.

It was suddenly trivial to bring all of these into Realtalk.

From the beginning, we’ve looked to board games and card games as models for tangible activities, but we’ve never been able to
actually work at that scale.

We’re just now starting to “play” Realtalk as we always imagined, and it’s a whole different world.

Beyond dots, our new camera system enables entirely new kinds of recognizers.

Each of these hand-drawn shapes is recognized as an individual Realtalk object, with a novel algorithm for matching shapes to
an alphabet.

It’s hard to overstate the possibilities that open up when a dynamic object can be created with a pen instead of a printer, a
diagram can be a living program, and our awkward single-player keyboards can be set aside in favor of communally drawing and
writing together, on anything.

Here’s a janky one-page wind simulation (like the cover of the zine!) with pens specifying wind vectors and construction-paper
rectangles representing buildings — recognized by matching to the “example pen” and “example building” on the right.
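
(The page itself isn’t reproduced here, but the idea is roughly this. Below is a rough stand-alone approximation in plain
Python, with invented positions and numbers; the real thing is a single Realtalk page reacting to the recognized pens and
paper rectangles.)

# A rough stand-alone approximation of the kind of toy wind field described above.
# The real thing is one Realtalk page reacting to recognized pens and paper rectangles;
# this version just hard-codes a couple of "pens" and "buildings" to show the idea.

import numpy as np

W, H = 60, 40                      # grid size (cells)
vel = np.zeros((H, W, 2))          # wind vector at each cell (vx, vy)

# Hypothetical recognized objects: a pen is an origin plus a direction,
# a building is an axis-aligned rectangle that blocks the wind.
pens = [((5, 20), (1.0, 0.0)), ((5, 30), (1.0, -0.3))]
buildings = [(25, 10, 8, 12), (40, 22, 6, 10)]   # (x, y, width, height)

def step(vel):
    """One relaxation step: inject pen vectors, diffuse, zero out buildings."""
    for (x, y), (vx, vy) in pens:
        vel[y, x] = (vx, vy)                     # pens force the wind locally
    # crude diffusion: average each cell with its four neighbors
    padded = np.pad(vel, ((1, 1), (1, 1), (0, 0)), mode="edge")
    vel = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
           padded[1:-1, :-2] + padded[1:-1, 2:] + vel) / 5.0
    for (bx, by, bw, bh) in buildings:
        vel[by:by+bh, bx:bx+bw] = 0.0            # wind does not pass through buildings
    return vel

for _ in range(200):
    vel = step(vel)

speed = np.linalg.norm(vel, axis=-1)
print(f"peak wind speed downstream of the pens: {speed.max():.2f}")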

Not only can we match hand-drawn shapes, we can match ordinary real-world objects and give them meaning, for spontaneous dynamic
conversations.

Recognizing printed text with OCR also opens up a lot of possibilities. There’s a lot of text in the real world. And with a
Realtalk-connected label printer, generating new bits of text is almost instant.

This hand-drawn map with text labels is readable both by humans and Realtalk. No encodings or UIs — it just is what it is.

Here, we’re projecting onto a 3D-printed biological model, and tracking its orientation in space.

This is early work — today’s Realtalk objects still live on flat surfaces. But it won’t be long before we’re exploring
complex structures by turning over dynamic 3D models in our hands,
and the Realtalk way of programming physical objects will really come into its own.

Here’s our latest way of constructing a recognizer. The operators in this image-processing pipeline are arranged by hand, and
show what they’re doing.

Anything can be live-edited, of course, but the quickest way to adjust a parameter is to put down a knob next to it, and turn.

Like any arrangement of objects, this “machine” can be captured as a snapshot — a kind of living photograph. Objects run
within the snapshot, and can be pointed at and edited as if they were on the table.
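
(Here is a crude stand-alone caricature, in plain Python, of two of the ideas above: an operator as its own object with a
visible result and a tweakable parameter, and a snapshot as just enough information to re-instantiate the same arrangement
somewhere else. None of it is Realtalk code; all of the names are invented.)

# A stand-alone caricature of the pipeline idea: each operator is its own object
# with a visible intermediate result and a tweakable parameter, and an arrangement
# of operators can be captured as a "snapshot" and re-instantiated elsewhere.
# (Plain Python/NumPy, not Realtalk; the names are hypothetical.)

import numpy as np

class Blur:
    def __init__(self, radius=1):
        self.radius = radius                     # the parameter a "knob" would turn
    def __call__(self, img):
        r = self.radius
        padded = np.pad(img, r, mode="edge")
        out = np.zeros_like(img, dtype=float)
        for dy in range(-r, r + 1):
            for dx in range(-r, r + 1):
                out += padded[r + dy : r + dy + img.shape[0],
                              r + dx : r + dx + img.shape[1]]
        return out / (2 * r + 1) ** 2

class Threshold:
    def __init__(self, level=0.5):
        self.level = level
    def __call__(self, img):
        return (img > self.level).astype(float)

def run(pipeline, img, show=print):
    """Run each operator in turn, 'showing' every intermediate result."""
    for op in pipeline:
        img = op(img)
        show(f"{type(op).__name__}: mean={img.mean():.3f}")
    return img

# Arrange a pipeline "by hand", then capture the arrangement as a snapshot:
# just enough information to rebuild the same machine in another context.
pipeline = [Blur(radius=2), Threshold(level=0.4)]
snapshot = [(type(op).__name__, vars(op)) for op in pipeline]

frame = np.random.rand(32, 32)
mask = run(pipeline, frame)

# Later, somewhere else: re-instantiate the same pipeline from the snapshot.
registry = {"Blur": Blur, "Threshold": Threshold}
rebuilt = [registry[name](**params) for name, params in snapshot]
assert run(rebuilt, frame, show=lambda s: None).sum() == mask.sum()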

Is this an “abstraction”? It is a module that can be programmatically instantiated, replicated, applied in any context. But
it’s not a black box. You see the innards, see the data, and can edit both live.
Optimized for transparency, not mysteriousness.

Snapshots continue to run even when tucked away into a binder. So do hand-drawn diagrams, for that matter. These pages are just as legit
as those with text on them — they’re all just reacting to each other’s statements.

This means we can have OS components which are not made of source code. We can move toward a multi-modal
computing system, where every idea is implemented in its clearest representation, whether that’s textual, pictorial,
tangible, or any combination.

More interesting — the website is entirely in Realtalk, and a website in Realtalk is a real place. We put something “on the
web” by physically putting it into the space.

We’re making a new website for Dynamicland, documenting and contextualizing pretty much every project, prototype, and
publication from the last nine years. At least 500, at my last estimate. They’re all going public. So that’ll be interesting.

For example, here’s the current draft of the front page. The shelves are 1¼ in. x 4 in. poplar. The “stack” is garbanzo bean
cans.

Lea said, “You mean, when I update my website, I have to log into WordPress, but you just rearrange your shelves?”

While web visitors will click on objects through their screens, visitors to the real space itself will get a deeper experience.

For example, on the web, a video is just a video. In Realtalk, a video is a booklet that can be read and watched, browsed and
skimmed, annotated by hand. The current line of the transcript highlights as the video plays, and you can skip around with a
laser pointer.

For large collections, we’ve been using binders. This album contains Dynamicland’s 22,000 photos and videos. They can be
browsed by flipping through the pages, but can also be tagged, searched, and referenced programmatically.

That archive of 500 projects is organized in baseball card sleeves. Putting a card in the binder puts a web page on the web.

It’s been a joy to work on the archive by stacking cards in piles, pinning them up, hand-annotating them, pointing other
programs at them, tossing them out.

Like the OS binder, the media binder and archive binder are finite, bounded, self-contained. Everything is in there, it fits
in your hand, it can be skimmed from front to back. It’s not an endless scroll.

This poster is the “content management system” that converts these binders into an indexed, cross-linked, media-heavy
online archive. It’s about 20 pages.

WordPress, if printed, would be 4700 pages, an order of magnitude larger than all of Realtalk.
Then again, Chromium would be 350,000 pages and would weigh 2 tons.

No local community will ever build systems like these for themselves, or understand them
top-to-bottom. They’re all dead ends.

We’re focused on dynamicland.org for now, but we expect future Realtalk sites will set aside their own nooks for effortless
public sharing.

And in the long term, we imagine — when Dynalamps and fabricators are as ubiquitous as laptops are now — people will
publish and download dynamic physical spaces, not just pixels. Our modest nooks may someday be seen as early prototypes of
the “knowledge spaces” that succeeded the web.

We covered a lot of ground last year, even with a small team and comically absurd bureaucratic obstacles. I’m optimistic about
this year.

In the science lab, we’re just getting started, and there’s so much we want to do. One direction I’m especially excited about
is bringing some of the computational tools from Shawn’s lab into Realtalk.

Like many scientific tools, these programs predict and optimize structures based on mathematical models of the underlying
physics. They are interesting algorithms, based on interesting physics, but both the algorithms and the physics are buried in
the Python. By using the tool, we don’t learn the physics, we learn parameters to tweak.

Now that we have molecular models on the table, can we get mathematical models on the table? Can we make them tangible,
explorable, malleable? Can using a tool mean getting your hands directly on the algorithms, understanding how they work,
remixing them as needed?

This is a much harder task, but I’ve been dying to do this forever, and we finally have a computing environment in
which a glimpse might be possible.

Realtalk shaped up last year into an utterly solid and delightful foundation to build upon. We have barely started to explore
what’s now possible with mini Dynalamps, Dynapads, small objects, tangible recognizers, shape matching, 3D tracking, and
other new capabilities I haven’t even mentioned. The needs of the science lab will continue to push Realtalk in ambitious
directions.

The website will get released, of course. Or I die trying. When that day comes, dynamicland.org/archive will deliver you the
details on everything mentioned in this letter, and everything else we’ve ever done.

This letter didn’t mention our work towards the next Dynamiclands, but we’re actively pursuing a couple of major
opportunities which arose last year. Institution-building can be a slow process, but we’ve got big plans and
we’re going to make them happen.

Nine years ago, David drew these pictures for the Communication Design Group’s Research Agenda And Floor Plan. Pure fantasy.
I used to say they were 40 years out. I once said 50, and Yoshiki was grateful for the extra decade.

Since then, we’ve continued to develop the vision of a dynamic medium for human communication.
Here’s an assessment of our progress towards realizing it.

Conversing

In a casual conversation in an ordinary place, two people sketch tangible computational models of their ideas at
the speed of thought, showing and telling, immersed in a context of data and evidence, looking each other in the
eyes.

We certainly can explore computational models around a table, looking each other in the eyes. Tangibility is still primitive
— dynamic matter is a long way off, and our static matter is mostly confined to 2D, although that could improve soon.
Immersion is limited by the difficulty of summoning and formatting data to immerse in.

There have been conversations (including ones last year in the science lab) where the presence of dynamic media enabled new
realizations, but mostly within toolsets that had already been authored. We don't have anything like general-purpose de
novo computational sketching. Text-and-keyboard programming is too slow, demanding, and inward-facing to support a
conversation about something other than itself.

One of our active projects is pushing in this direction, with Sketchpad-like mathematical microworlds drawn and arranged by
hand. We’re looking forward to applying it to the mathematical models in Shawn’s scientific tools. If we’re able to express a
new idea, in realtime conversation, with a spontaneous hand-drawn dynamic model, that will be quite a milestone.

Presenting

Instead of a slide deck, a presentation is a tour through a space of human-scale dynamic models. Just as
presenters improvise their words, they improvise with computation. The audience digresses and discusses, explores and
reconfigures the models, challenges assumptions, makes their own discoveries.

In my presentation in Bordeaux, I put media up on walls, and we gathered around a long table. Digressions started almost
immediately, and I showed whatever came up in discussion, laying down photos, videos, and working projects. The audience got
their hands in, tried things out, spontaneously broke into groups and discussed among themselves. In response to questions, I
made little example programs, live-edited the projects, live-edited the operating system. Pretty standard for a Dynamicland
demo, although this started from an empty conference room 5000 miles away.

That wasn’t a tour, but back at Dynamicland, we had a couple events where we literally walked people through the new Realtalk,
whose components were represented by tangible exhibits throughout the space, and played the algorithms out by hand while
discussing them. That may have been the closest we’ve come to a “dynamic presentation”.

We’ve never had an event where the audience really gets to exercise an extensive dynamic model of a significant subject, let
alone an immersive space of them. I hope we can try that this year in the science lab.

Reading

By reading, we mean a context for intense studying over a period of time. Instead of opening a book, a group of readers
downloads a space of human-scale dynamic activities, which guide the readers through exploring and reconstructing interlinked
computational models of the systems under discussion.

Realtalk now has the capabilities to host coherent room-scale dynamic activities. We don’t yet have any significant examples of
such activities — they weren’t really doable at Dynamicland, before Dynalamps and Realtalk-2020.

For dynamic reconfiguration, robotic actuation would be useful, and at some point I’d like to carry forward what we started
with the dragglebot and pi bots.

Early last year, Shawn and I took the initial steps of turning one of his published papers into a space. (pScaf World: an
immersive environment for understanding and performing all aspects of the scalable production of custom single-stranded DNA.)
We sketched a series of posters and board games representing the biological systems described in the paper, with the
intention that the reader would understand the systems by constructing them in simulation.

We’re eager to take this further this year, most likely with a different paper that Shawn is currently writing. (I’ve
mentioned a few different science-lab aspirations for this year: a tool, a presentation, a paper... In Realtalk, these might
be different faces of the same thing.)

Authoring

A group of authors craft a dynamic activity throughout a space, getting their hands on the same physical materials, composing
simple objects with rich interactions, directly manipulating dynamic behavior via multiple representations, seeing all data
and state at all times.

As far as text-and-keyboard programming goes, authoring in Realtalk is by now a fairly embodied activity. I’m constantly spreading
out, pinning up, grabbing tools, making little programs and arranging them to point at other programs. It’s mostly paper, but
the tactility really does stake out a unique and pleasurable position on the emacs-to-woodshop axis.

Realtalk’s visibility — into statements, matches, variables, persistent state, all in realtime — is exceptional for a
general-purpose non-toy system. I led a workshop for Shawn and Konlin in which I laid down little challenges on the table,
they made their objects respond, mine responded in turn — an improvised dialog/mural of spatially-connected visible
statements, inconceivable (and inexplicable) outside of Realtalk.

But visibility needs to be much better. We sometimes view computational state with maps, timelines, and domain-specific
visualizations, but not in any deep or systematic way. We don’t have a fluid (Apparatus-like) way of creating visualizations.
We certainly aren’t defining dynamic behavior by directly manipulating powerful representations.

There are endless heartwarming stories of communal authoring in Realtalk, but text-and-keyboard authoring is a real bottleneck.
We’ve prototyped some possibilities — we once built a music sequencer without typing any code or using any premade blocks —
and Realtalk’s current capabilities offer unprecedented fertile ground for further developments.

There's a long way to go. We’re getting there.


Dynamicland Foundation staff: Bret Victor, Luke Iannini
in collaboration with: Shawn Douglas, UCSF

P.S. This report was designed, printed, and laser cut in Realtalk. “Page Layout Kit” is three pages, 
and I made it as I was laying this out.

I don't know how many pages InDesign is, but I bet it’s more than InDesign itself can handle.