Progress report 2020

Bret Victor — May 2020

Chapters: Intro, Visibility, Space, Recognizers, Bootstrapping

Informal video tour of the lab, demonstrating early progress on Realtalk 2020, with an emphasis on extreme visibility.


Intro

It's been a while since you visited, so I thought it was about time to catch you up on what we've been up to. I wish I could invite you over, but that's not really possible right now, so I thought I'd try a video tour.

The four of us have been working from home, which is not ideal for Dynamicland, but it's actually possible because we all have dynalamps in our homes right now, and I'll talk about what that means. That's really exciting.

The project for this year was Realtalk 2020. [more]

There were three focus areas for the new system. The first was visibility.

We wanted the entire system, top to bottom, to be visible and tangible and gather-aroundable and get-your-hands-on-able, and basically to be the most accessible, visible operating system there's ever been.

A lot of progress there that I want to show you.

The second focus area was space.

Instead of having isolated islands of computation, we want to be able to make activities that span as much space as they need, even taking up whole buildings, and allow you to light up whatever space you need to make the thing that you want to make.

I've got some stuff to show there.

The third big focus area was what we call "recognizers".

That's really about being able to make dynamic material out of any physical material that you need, instead of just using pieces of paper, and being able to express your ideas not just in text, but in any kind of form.

Some progress there, too, that I'll show you.

Visibility

I think the last time you visited, we were doing some very low-level stuff with the timeline here. There were the mutexes locking and unlocking, and all that. [more]

That was when we were building up the very low levels of the system, and felt that, like everything else, we needed to be able to see what was going on as we were doing this. [more]

Some of that early stuff is posterified here on the wall. [more]

What you're seeing here is compiling a very simple process that does nothing but sleep. But even as a simple process, it has an address space. [more]

And we can see what's in the process image, and we can see the dynamic libraries that were loaded in. And here's the heap, and here's the stack, and here's the symbols. [more]

If we add a line to the program that makes it malloc as it sleeps, then we see the heap is growing. [more]

And here's a program which calls a function which recursively calls itself. And so now you see the stack is growing. [more]
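The actual demo programs aren't shown in the video, but to fix ideas, here's roughly what they might look like, sketched in Python:

```python
# Hypothetical reconstructions of the three demos (the video doesn't show
# the actual source). Run one and watch what the process does to its memory.
import time

def just_sleep():
    # Does nothing but sleep -- yet the process still has an address space,
    # loaded libraries, a heap, a stack, and symbols to look at.
    while True:
        time.sleep(1)

def malloc_as_it_sleeps():
    # Keeps a new 1 MB allocation alive every second, so the heap grows.
    blocks = []
    while True:
        blocks.append(bytearray(1024 * 1024))
        time.sleep(1)

def recurse():
    # Calls itself forever, so the stack grows
    # (until Python's recursion limit cuts it off).
    time.sleep(0.01)
    recurse()
```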

And these are just fun little examples, but the idea is that, for building a process, we need to be working in an environment where we can see everything that the process is actually doing. [more]

Things like the stack and the heap and dynamic libraries should be something that's just visible in the environment at all times. So people can walk up and talk about them and point at them, and they just have a real presence. [more]

"Unload that." "I'm going to unload our first Realtalk 2020 process." [more]

"Oh, load it back in." "Let's load a new process." [more]

"It's back. It's outta here. It's so fun. It's gone. Here it is. There it goes. Say hello. Say goodbye." [more]

And so as we were building up these low-level components, there were a lot of visualizers like this that we made. A couple others that I happen to have here on these posters:

Here's a memory visualizer. This is literally a block of memory that I just cleared out to zeros. [more]

As we're developing our data formats, we can write them to memory and then see them decoded. [more]

Here we're going to write a statement in the statement format that we had, and it writes these bytes. Then we see the visualizer decoding these bytes, and we can see that it's an array of three things, a string, a string, and a string. [more]
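That statement format isn't spelled out here, but a toy version of the idea (my own encoding, just for illustration) shows how raw bytes in memory can be decoded back into an array of strings:

```python
# A hypothetical statement encoding, not the real Realtalk 2020 byte format:
# a u32 count, then each string length-prefixed with a u32. A visualizer can
# walk the bytes and recover the parts -- and a gap or off-by-one shows up
# immediately.
import struct

def encode_statement(parts):
    out = struct.pack("<I", len(parts))
    for p in parts:
        b = p.encode("utf-8")
        out += struct.pack("<I", len(b)) + b
    return out

def decode_statement(buf):
    (count,) = struct.unpack_from("<I", buf, 0)
    offset = 4
    parts = []
    for _ in range(count):
        (length,) = struct.unpack_from("<I", buf, offset)
        offset += 4
        parts.append(buf[offset:offset + length].decode("utf-8"))
        offset += length
    return parts

mem = encode_statement(["page 7", "is pointing at", "page 12"])
print(decode_statement(mem))  # ['page 7', 'is pointing at', 'page 12']
```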

We actually caught some bugs because we saw that there was a gap here where there shouldn't have been a gap. There was an off-by-one error. [more]

As we were developing the algorithms that were writing to this memory, we were able to see them being decoded in real time. [more]

And this led to an environment where we could develop our algorithms by actually holding blocks of memory in our hands, laying them out, manually poking the algorithms at them, and see how the bytes changed. [more]

It gives you a completely different feeling for what it means to develop an algorithm when you can actually see every stage of it laid out, hold it in your hands, and manually actuate the different parts before you automate them. [more]

And have a bunch of people all gathered around seeing what's going on and understanding it and teaching it to each other.

So as we're building up the low-level components, there's a whole bunch of visualizers that we made for that.

And then, as we're working on some of the higher-level algorithms, we're developing visualizers for those as well.

So, this is fundamentally a reactive system. Processes make statements, and then another process notices those statements and makes statements in return, and other processes make statements, and they just keep making statements until they have nothing more to make, and that's how time moves forward. [more]

And you need to be able to see that. So in this environment, you can put down a statement, and you see the statement comes in, and you see the thing that responds to that statement. [more]

And then another process notices that statement. You can see it noticing the statement, making its own statements in return, and someone notices that and makes its own statements. [more]

And this keeps going until we converge at the end. [more]
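As a minimal sketch of that loop (not the actual reactor algorithm, just the shape of it):

```python
# Statements go into a shared space, "whens" react to them by making more
# statements, and time steps forward round by round until a round produces
# nothing new -- convergence.
def run_reactor(initial_statements, whens):
    statements = set(initial_statements)
    while True:
        new = set()
        for when in whens:
            for s in statements:
                new |= set(when(s)) - statements
        if not new:          # nothing more to say: converged
            return statements
        statements |= new    # the next round reacts to the new statements too

# Example: one when that responds to any statement with a statement of its own.
whens = [lambda s: [("lab", "saw", s)] if s[0] != "lab" else []]
print(run_reactor({("page 3", "is", "red")}, whens))
```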

This was something that was going on invisibly in the old system, and nobody really understood how convergence worked. We're using a completely different algorithm for what we're calling the "reactor" in the new system, and we're doing it in such a way that you can always see how it works. [more]

And not just in little toy simulations, but for the actual data flowing through the system, you actually can see what's reacting to what. [more]

And again, every part has some sort of visualization or game attached to it.

This is looking at how the different tasks that come up in what you just saw are then scheduled onto threads. We can try looking at it with fewer threads or more threads, and bringing in different processes and taking them out and seeing what it looks like. [more]

These different colors are different tasks that are running on these different threads. [more]
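For a rough conventional analogue of what the visualizer shows (the real scheduler is Realtalk's own), here's a handful of tasks from different processes landing on a pool of threads, where you can vary the thread count and watch the distribution change:

```python
# Tasks from several "processes" scheduled onto a fixed set of worker threads.
import threading
from concurrent.futures import ThreadPoolExecutor

def task(process_name, i):
    print(f"{process_name}[{i}] ran on {threading.current_thread().name}")

with ThreadPoolExecutor(max_workers=3) as pool:   # try 1, 3, or 8 threads
    for name in ("sleeper", "visualizer", "camera"):
        for i in range(4):
            pool.submit(task, name, i)
```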

And then over here, this is showing how "whens", which are the reactive component, find statements to react to. [more]

If we have a statement, the statement is hashed three ways, and gets put into three places in the frame. And then this "when" thing comes along, and there's one particular hash it's looking for. So it looks there and finds this hash from this statement. [more]

Then with this second statement I brought in, both statements land in that same hash and get noticed by the when. [more]

And so this is getting to the heart of: how are the whens able to react to statements? It's because of this connection right here that you can point to with your hands. [more]
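Here's a sketch of that matching scheme (the exact hashing details are my own guess): each three-part statement is filed under one key per position, and a when looks up the key for the concrete part of its pattern:

```python
# Statements filed three ways into hash buckets; whens find them by hash.
from collections import defaultdict

frame = defaultdict(set)   # hash bucket -> statements filed there

def file_statement(stmt):
    for position, part in enumerate(stmt):          # hashed three ways
        frame[hash((position, part))].add(stmt)

def find_matches(pattern):
    # pattern mixes concrete parts with None wildcards; look up one concrete
    # part's bucket, then check the full pattern against each hit
    position, part = next((i, p) for i, p in enumerate(pattern) if p is not None)
    return [s for s in frame[hash((position, part))]
            if all(p is None or p == sp for p, sp in zip(pattern, s))]

file_statement(("page 3", "points at", "page 7"))
file_statement(("page 5", "points at", "page 7"))
print(find_matches((None, "points at", "page 7")))  # finds both statements
```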

We had a couple events where we brought over a dozen people and walked them around, like almost physically walking them around the operating system, and showing them how it works from the inside. [more]

Playing these things out for people and letting them get their hands on them and ask questions.

Another example was this auto-calibration algorithm that Josh had developed for the projectors and cameras. [more]

He did it in such a way that you could actually act out the algorithm kind of like a board game, by putting out different cards and performing the algorithm with your hands. [more]
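The video doesn't spell out Josh's algorithm itself; for flavor, one conventional core step of projector-camera auto-calibration is recovering the homography between points at known projector coordinates and where the camera sees them, e.g. with OpenCV:

```python
# Given four projected points found in the camera image, recover the
# projector-to-camera homography (illustrative values, not real calibration data).
import numpy as np
import cv2

projector_pts = np.float32([[0, 0], [800, 0], [800, 600], [0, 600]])
camera_pts    = np.float32([[102, 87], [713, 95], [698, 540], [91, 528]])

H, _ = cv2.findHomography(projector_pts, camera_pts)
# Map an arbitrary projector point into camera space:
pt = cv2.perspectiveTransform(np.float32([[[400, 300]]]), H)
print(pt)  # where the projected center lands in the camera image
```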

And by acting it out in this group setting, people could ask questions. And he could answer the questions by going off in different directions and doing other things, or changing how it worked, or letting other people get their hands in. [more]

It became what learning about computing should be. Learning about computing should be people explaining it to each other through things that we can actually see and touch and get our hands on.

And ultimately, that's what we want an operating system to be. We want an operating system to be just part of the environment, something that everybody can see and explain to each other and get their hands on. Not a gigabyte of text files that only a hundred people in the world understand.

"Okay, do you want to narrate what we're doing?" [more]

"Yeah, so we've got this handled queue, which means that people can put stuff on it. But then the stuff has to get handled, which means if nobody's handling it yet and you put something on it, you become the handler. [more]

"So here's A. A is going to put 'a' on the queue. So let's stick it in here. [more]

"Now 'a' is on the queue. And now this thread is running, because it needs to handle that. And we're gonna handle it manually, with our tickler. [more]

"But even before we do that, B's gonna come along. Oh it stuck it in A. And then how just that for now? [more]

"And then the tickler comes along, and then that makes A handle 'a'. So now A handled 'a', and the read index has incremented. [more]

"And if I do this again... Well, actually, how about, B comes along and does something else? Oh man, so much stuff. [more]

"And then the tickler comes along and handles 'b', and then it handles that second 'b', and now its thread is done." [more]

In addition to the visualizers, we're actually rewriting the entire system, and I'm not really talking about that part here.

But all the different new things that we're making, we're also making all these visualizations along with it, to make sure that what we're making is actually visible and understandable and tangible, top to bottom.

So that's a bit about the visibility component.

Space

And then for the second focus area, the spatial component, our impetus here was:

Somebody invited us to bring Dynamicland to their event. Dynamicland had never really gone anywhere other than Dynamicland before, but we wanted to.

So what we made were the dynalamps. You can see a couple of them here. [more]

They're very beautiful. They kind of look like real floor lamps. And what's inside these things is a projector and a camera, and a couple light bulbs so they actually are lamps. [more]

Some of them have a processor in the base, and the others daisy-chain into ones that do. [more]

And so for this event, we built out seven of these things. [more]

We were able to bring Dynamicland, and the event was a huge success and thousands of people were really happy. [more]

And now we have this thing where, instead of being restricted to what we've bolted into the ceiling, we can light up any space we want by dragging over a lamp and pointing the lamp at that area.

And then we started taking the lamps home, because we wanted to explore using Realtalk from home, and explore what it meant to have multiple sites that all had to stay in sync with each other. [more]

Then the virus hit. And that meant that we could actually continue our work, as we all had little mini-dynamiclands at home, even if we weren't coming into the lab. [more]

Recognizers

The third focus area that I mentioned was recognizers. That's about getting away from these dot-framed pieces of paper and typed code.

And being able to work with a much richer set of physical materials and representations. [more]

And we're just getting started there. But here's what Omar's currently working on.

It's an image-processing pipeline, so we'll be able to do OCR in a Realtalky, visible, tangible way, and start doing more handwriting instead of typing code. [more]

Here's a recent project that lets you set up a different kind of image-processing pipeline, then reference those images over on a canvas using these tags, and replicate the tags in certain ways. [more]

So you can do these very interesting graphic designs. [more]

The program that makes this work is here. [more]

It's like 40 lines of code, it's half a page. [more]

Moving towards physical instruments, here's a stylus. [more]

You can draw with it. You can select text. You can scrub numbers. [more]

The "device driver", if you will, is 20 lines of code to make this thing work. [more]

You can cut it out and build it yourself in 15 seconds. No electronics. [more]
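The actual 20-line driver isn't shown; hypothetically, if the camera reports the positions of two markers printed on the paper stylus, the whole trick could be a little geometry like this:

```python
# Hypothetical sketch: estimate the stylus tip by extrapolating along the
# line through two tracked markers on the paper stylus.
import numpy as np

def stylus_tip(marker_a, marker_b, tip_offset=1.5):
    # Extrapolate past marker_b by tip_offset times the marker spacing.
    a, b = np.asarray(marker_a, float), np.asarray(marker_b, float)
    return b + tip_offset * (b - a)

print(stylus_tip((100, 100), (120, 140)))  # estimated tip in camera space
```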

And Luke's most recent project is called Marks Kit. [more]

"I'm going to show you Marks Kit, something that might deserve that name. Let's just jump into something fun. [more]

"So I'm gonna draw a little landscape here and maybe a happy little cloud up here. And then make it rain, like we like to do. So I'm drawing a little raindrop next to the cloud and now magical rain is falling down onto the mountain below. [more]

"And maybe I should have some grass growing out of that, now that it's now that's raining. [more]

"So you can have that happen, and make another cloud up here if you want. Another raindrop, and we get more rain. [more]

"And then of course because this is all Realtalk, everything's live, so we can switch meanings around on the fly. So I can say, we want to have raindrops grow on clouds and grass rain down from mountains, if we want." [more]

So what's going on here is we're extending the Realtalk object model down to the level of drawings. So every individual shape here is a full-fledged Realtalk object. It can look around itself and interpret the objects around it, and successively make meaning out of the scene. [more]
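As a toy rendering of that idea (not Marks Kit's actual implementation), each shape can look at its neighbors, and the meaning-making rules live outside the shapes so they can be swapped on the fly:

```python
# Every drawn shape is an object that can look around itself; the rules
# that give the scene meaning are a separate, swappable ingredient.
import math

class Shape:
    def __init__(self, kind, x, y):
        self.kind, self.x, self.y = kind, x, y
    def near(self, other, radius=50):
        return math.hypot(self.x - other.x, self.y - other.y) < radius

def interpret(scene, rules):
    meanings = []
    for shape in scene:
        for other in scene:
            if shape is not other and shape.near(other):
                meanings += rules(shape, other)
    return meanings

def rain_rules(a, b):
    if a.kind == "raindrop" and b.kind == "cloud":
        return [f"rain falls from the cloud at ({b.x}, {b.y})"]
    return []

scene = [Shape("cloud", 100, 20), Shape("raindrop", 110, 40),
         Shape("mountain", 100, 200)]
print(interpret(scene, rain_rules))
# Because meanings live in the rules, not the shapes, swapping the rules
# swaps the meanings live -- raindrops could grow on clouds instead.
```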

And this can enable entirely new ways of programming non-textually, and working with fine-grained concepts without using the keyboard, and doing it in this very visible, expressive, multiplayer manner. [more]

Bootstrapping

So that's a bit about those three focus areas of visibility, space, and recognizers.

And then in addition to that, and in addition to all the foundational work of rebuilding the system so that it's robust and not a prototype, there's more work around what you might call "bootstrapping", which is getting the system to a point where we can use it exclusively for our own work, and we're not using conventional computers at all.

We've been programming in the system exclusively for a couple years now. We've got a very nice code editor where you can see the values of match variables, and you can see who's noticing each of the statements that's being made, and you can see the live values of variables and function arguments, and all that kind of stuff. [more]

So that's been there for a little while. But another place where we've been using conventional technology is, we used to have a bunch of display monitors showing diagnostic displays for each of the machines. [more]

That is now on a page. We've got the diagnostic console here on the page, and this can look at not just this machine but any other machine on the network. And so if things go wrong, we can just open up the book and take a look at that.

Another thing that we used to use laptops for, when things went wrong, was SSHing in. And now we don't have to do that because we have a nice little shell on a page. [more]

So this is maybe an emulation of the old world in Realtalk.

Another emulation of old computer technology that we have here is a web browser. [more]

So you can search for some things. [more]

We still do a lot of web browsing on our laptops, of course. But it's been interesting to use this one for special-purpose browsing tasks. [more]

For example, we had some folks from Wikimedia come over, and we whipped up this little thing where you can browse Wikipedia. [more]

When you find links that you want to follow, you poke in this page over to the link, and it shows up there. [more]

And then you can poke a link onto the next one. [more]

The pages that you're browsing on Wikipedia are literally a trail of pages on the table. [more]

And that means that many people can play at once. It means you can fork off, and you can see connections between things. [more]

In addition to that system-level stuff, there have also been other kinds of tools and applications and extensions that we've been making.

For example, here's a quantum computer interface that Josh made, which is hopefully going to actually get hooked up to a real quantum computer at IBM. [more]

This will serve as the visible tangible interface for quantum circuits which will actually then go run on the IBM machine. [more]
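On the conventional side, a circuit headed for an IBM machine might be built with Qiskit (an assumption; the library isn't named here):

```python
# A small entangling circuit of the kind a tangible interface would let
# you lay out by hand before sending it off to run.
from qiskit import QuantumCircuit

qc = QuantumCircuit(2, 2)
qc.h(0)          # put qubit 0 into superposition
qc.cx(0, 1)      # entangle qubit 1 with qubit 0
qc.measure([0, 1], [0, 1])
print(qc.draw())
```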

So that's a quick update as to what we've been up to, or as quick as I can make it. It seems not unlikely that by the end of 2020, there actually will be a Realtalk 2020 that's worthy of the name.

And at that point, we'll start reaching back out to other organizations and looking for partnerships, trying to find people who can make good use of this, work with them, and make some dynamic media.