
The most remarkable thing about "Bret Victor's vision" is how different people have interpreted what his vision even is. His ideas are multi-faceted, so they inspire in several important directions all at once, e.g.:

- "what if" feedback loops (crudely, "live programming")

- direct manipulation (an old idea but beautifully captured in his projects)

- making logic feel more geometric / concrete

- visualizing and manipulating data, especially over time

- humane interfaces (putting "computing" into the world around us, but without AR)

- etc.

Bret Victor is very much Alan Kay's protege and has unfortunately inherited the curse of people cherry-picking particular ideas and missing the bigger picture.

So, as others have pointed out, the only person who may be fully attempting Bret Victor's vision is Bret Victor himself, with Dynamicland. You may also be curious to check out Humane [1], a hardware startup founded by ex-Apple people. They're rumored to be shipping a projection-based wearable device this year, which could become a platform for people to experiment further in the direction of Bret Victor's vision.

[1] http://hu.ma.ne



> Bret Victor is very much Alan Kay's protege and has unfortunately inherited the curse of people cherry-picking particular ideas and missing the bigger picture.

Maybe it's time we lay some of the blame on Alan Kay for us idiots just not getting his ideas. If he's spent 50 years trying and failing to communicate his wonderful ideas, at this point he only has himself to blame.


Honest question: who in this field has a better track record of their visionary ideas leading to real-world implementations than Alan Kay?

Not everything ended up being adopted in the form he envisioned or with the semantics he proposed, but there have been a lot of right calls and influential designs in his 50+ year career.


Douglas Engelbart. Tim Berners-Lee. Smalltalk is pretty cool, though.


What kind of impact did Engelbart have after "the mother of all demos"? Much more limited than Kay's, IMO. Kay's Dynabook is arguably just as important, and he went on to do a lot of other stuff.

Same for Berners-Lee. Sure, he remains influential over the web's incremental progress through the W3C, but his more visionary efforts seem to have missed: XHTML, the semantic web, the Solid project...


The mother of all demos is the world we live in now. He demonstrated a working model of the world 30 years into the future. It's like someone walking on stage and showing you what life will be like in 2050.

Engelbart invented the mouse and the entire idea of pointing at things on the screen to interact with them. The result of that interaction is what we now call "the web" (hypertext).


Engelbart also substantially inspired Kay, but none of these people would likely have succeeded without the network of inspirations. In fact, I think we should talk about it as a chain: Bush -> Engelbart -> Kay -> Berners-Lee (superficially), rather than about each individual on their own. It's also interesting that none of them headed big successful companies, though that's not surprising, since doing so basically involves a lot of compromises.


I agree entirely with you, but I had a hard time ignoring the "yeah, but what have you done for me lately?" tone above.


Oh I'm with you. And I think (and hope) TBL isn't finished yet.


I think his experiment with DSLs (the VPRI STEPS project) is very insightful. I always herpderp about how we need better tools to express "the domain" in situ, but of course reality has a very strong bias toward keeping apparent complexity down and cognitive friction between components minimal; leaky abstractions are bad, so encapsulation has to be total. That makes DSLs extra hard to add as an afterthought (hence ORMs and middleware tend to be very clunky).

Though there are a few examples of moving in the right (?) direction, for example styled-components for React, which moves CSS into JS/TS code; there's a VS Code plugin for it, and thanks to tagged template literals the embedded CSS can be validated.
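
To make that concrete, here's roughly what the tagged-template trick looks like (a minimal sketch using the styled-components API; the component name and styles are just for illustration):

    // CSS embedded in TypeScript via a tagged template literal.
    // The `styled.button` tag marks the literal as CSS, so editor
    // plugins can highlight and validate it.
    import styled from "styled-components";

    const Button = styled.button<{ primary?: boolean }>`
      background: ${(props) => (props.primary ? "palevioletred" : "white")};
      color: ${(props) => (props.primary ? "white" : "palevioletred")};
      border: 2px solid palevioletred;
      border-radius: 3px;
      padding: 0.5em 1em;
    `;

    // In a React component: <Button primary>Save</Button> renders a
    // <button> carrying the generated class for the CSS above.

The interesting part is that the domain language (CSS) stays expressed in situ, inside the host language, rather than being bolted on through a clunky middleware layer.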


I remember seeing a demo where A.K. used an experimental OS, built with ~1K LOC using the STEPS approach, to actually run his slides. I never found the link again (if someone has it, I'd appreciate it), but even more importantly, I'd love to know what happened to that OS. It seems like it would have been a great research OS going forward if it really had a GUI, networking, and an FS expressed in such a small amount of user code. It also seems to me the project that comes closest to Engelbart's vision (since their NLS also did everything by meta-programming an assembler up through increasingly high levels of abstraction).


Alan Kay addresses Qualcomm https://vimeo.com/82301919


Thank you! Is he actually running their own OS here, or is it just a scripted slide application? What I saw was more of a smaller talk given to students, if I remember correctly, where he goes a bit into the technical details of his setup.


I am one of three people who have this code running live. It is way more amazing than you think; it is not scripted at all. It's a full personal-computer OS/GUI in 20 KLOC, with no external libraries. The graphics, for example, are just 435 lines of code (versus millions for Cairo).


Have you considered creating, e.g., a YouTube series going through it? Or contacting, e.g., Computerphile? This is way too awesome not to share with the world. How did you get involved? Have you been working for Alan Kay?


I got involved when I was 17 years old, back in 1981, reading about the Alto and Smalltalk in Byte magazine. Alan Kay and Dan Ingalls at Xerox PARC had built this amazing GUI, programming language, and virtual machine [4]. By 1985 I was building my first Smalltalk Transputer supercomputer and typing in the code listing from the Blue Book. Byte magazine even invited us to publish this supercomputer on their front page as a DIY construction kit for their readers.

Things got really interesting in 1996, when Alan and Dan released Squeak Smalltalk with Etoys as free and open source, with its almost metacircular virtual machine.

In 2008 we had progressed to designing SiliconSqueak, a Smalltalk wafer-scale integration: a 10,000-core manycore microprocessor with late-bound, message-passing Squeak Smalltalk as its IDE and operating system, plus the RoarVM virtual machine with adaptive compilation. We are still working on that. It costs $100K for the mask set you send to the TSMC chip fab, and you get back 180nm wafers, each a 10-billion-transistor supercomputer, for $600 a piece. Getting funding for mask sets at smaller nodes, like $3 million for 28nm, or the most advanced 3nm node, which costs over $50 million for a million cores, is a life's work.

We have not been working directly for Alan Kay, Dan Ingalls, or David Ungar, but we exchange emails, write scientific papers [2], give lectures [1], and meet in online video sessions [1] with the vibrant Smalltalk community.

When these researchers release source code, like the STEPS project, RoarVM, or the Lively Kernel, we try to port it to our SiliconSqueak supercomputer prototypes, and of course we develop our own Smalltalk of the future, parallel adaptive compilers, virtual machines, and hardware x86 emulators.

So to answer your first question: yes, there are hundreds of lectures and talks on YouTube, and we share all this work with the world. Bret Victor's, Dan's, or Alan's lectures are just a small part of that.

The hard part of our research is getting the $100K funding together for the 10,000-core supercomputer; a $2000 wafer-scale-integration (WSI) computer is a little too big an amount for a crowdfunding project.

So I still hope Y Combinator will fund me, but they have this silly 'no single founder' restriction. You seem to be a researcher at ETH Zurich; why don't you join me as a cofounder?

We make a 3-cent Smalltalk microcontroller (an Alto on a chip) and a $1 version with 4 MB and gigabit Ethernet. With Smalltalk, Etoys, and Scratch built in, you get a superior Raspberry Pi/Arduino successor that 5-year-old children can program, because Smalltalk and Etoys were designed with children in mind.

Our Morphle WSI would be a great desktop supercomputer, but the real advance would be the 3nm wafer-scale integration at a $20,000 retail price: more than 40 trillion transistors, a runtime-reconfigurable 1 million cores, and the full IDE, GUI, and OS in 10,000 lines of Smalltalk, running at exaflops per second. Way more advanced than CUDA on a GPU. I gave a 2-hour talk on that:

[1] https://vimeo.com/731037615

[2] https://scholar.google.nl/citations?user=mWS92YsAAAAJ&hl=en&...

https://scholar.google.nl/citations?hl=en&user=6wa49gkAAAAJ

[3] https://web.archive.org/web/20140501222143/http://www.morphl...

[4] https://youtu.be/id1WShzzMCQ?t=519


Super interesting stuff, I will go through it! Somewhat unfortunately, I've mostly departed from research and defected to the financial industry. I actually recently gave a talk about Engelbart and his ideas to my colleagues, in case someone here finds it interesting:

https://www.youtube.com/watch?v=jIlzXEaOH1I


You seem to be in a perfect position to advise Bret Victor or us about financing options for this work, especially the non-research parts. For example, we apply our wafer-scale technology and Smalltalk software to energy systems and energy production at 1 cent per kWh, around 60 times lower than European grid prices. That should interest the financing sector and asset management.


...holy moly... that's certainly a kind of perseverance, fortitude, and probably obsession :)

How are you going to cool the wafer? What's the TDP? :o

$100K sounds very doable for crowdfunding, or maybe you just need to find one eccentric multi-millionaire.


I cool the wafer by immersing it in a liquid that boils at 43°C. The bubbles (cavitation) from the boiling liquid must not damage the surface layers of the wafer, of course. The boiling liquid is in turn cooled by water, and a sonic heat pump moves the heat into a water tank, where the stored heat is used for showers or cooking [1].

Given 45 trillion transistors (45×10^12) at 3 femtojoules (3×10^-15 J) to switch each transistor, at 1 GHz (10^9 Hz) you would get about 1.35×10^8 joules/sec = 135 megawatts if every transistor switched every cycle; with a realistic activity factor well below 1%, that comes down to around 1 megawatt. These are ball-park numbers, back-of-the-envelope calculations. In reality I run full physics simulations and electrical SPICE simulations of the entire wafer on a supercomputer, i.e. on the wafer-scale-integration FPGA prototypes and the wafer supercomputer itself.
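
Spelled out (a back-of-the-envelope sketch; the ~0.7% activity factor is my assumption, chosen only to reproduce the ~1 megawatt figure above):

    // Ballpark switching power for the wafer, using the numbers above.
    const transistors = 45e12;      // 45 trillion transistors
    const joulesPerSwitch = 3e-15;  // 3 femtojoules per switch
    const clockHz = 1e9;            // 1 GHz

    // Worst case: every transistor switches every cycle.
    const worstCaseWatts = transistors * joulesPerSwitch * clockHz;
    console.log(worstCaseWatts / 1e6);  // 135 megawatts

    // Assumed activity factor (~0.7%, hypothetical) to land near 1 MW.
    const activityFactor = 0.0074;
    console.log((worstCaseWatts * activityFactor) / 1e6);  // ~1 megawatt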

We write the EDA (Electronic Design Automation) software tools ourselves in Smalltalk and OMeta, and these also need our own supercomputer to run on. Of course the feedback loops are Bret Victor-style visualizers [3][2]. Apple Silicon and this small company [4] demonstrate that only with custom EDA tools can you design ultra-low-power transistors and keep our wafer from melting.

The FPGA prototype is a few thousand Cyclone 10 or PolarFire FPGAs with a few terabytes/sec of memory bandwidth, or a cluster of Mac Studio Ultras networked together in a Slim Fly network that can double as a neighbourhood solar smart grid [5]. You need a dinosaur egg to design a dinosaur, or is it the other way around? [6]

A TDP (Thermal Design Power) of 1 megawatt from a 450 mm disk is huge; it would melt the silicon wafer. But then, not all transistors are switching all the time, and we have the cooling effect of the liquid.

We must power the wafer from a small distance, inductively or capacitively, ideally with AC. So we need AC-DC converters on the wafer, plus self-test circuits to find defects from dust and contamination, isolate those parts, and reroute the on-wafer network.

[1] https://vimeo.com/731037615 (at 21 minutes)

[2] https://youtu.be/V9xCa4RNfCM?t=86

[3] https://youtu.be/oUaOucZRlmE?t=313

[4] https://bit-tech.net/news/tech/cpus/micro-magic-64-bit-risc-...

[5] https://www.researchgate.net/profile/Merik-Voswinkel/publica...

[6] Frighteningly Ambitious Startup Ideas (dinosaur egg)

https://youtu.be/R9ITLdmfdLI?t=360

http://www.paulgraham.com/ambitious.html


OK, one thing I don't understand. You're talking about a ~1 MW supercomputer. With $100K funding you could just about pay for this thing's electricity for 3-4 weeks (at US electricity prices). Actually building it would be on the order of at least tens, if not hundreds, of millions. I gathered from one video that you're an independent research group; how is this all being funded?


I am an independent researcher; my funding is zero, and I am therefore rather poor. I get paid for technical work on the side, like programming or building custom batteries, tiny off-grid houses, or custom computer chips (to charge batteries better). I am for hire.

Solar electricity prices can be below 1 cent per kWh [1]. I generate 20 kW of solar in my own garden and store some of it in my own custom battery system with our own charger chips. The prototype supercomputer warms my room. I hope to move to an off-grid tiny house of my own design in a nature reserve in Spain or Arizona, to get 2.5 times more energy yield, an even lower cost of living, and cheaper 10 Gbps internet.

If you only run the computation during daylight and then follow the sun, moving the computation to two wafers in two other time zones as each location gets sunlight, you stay below 1 cent per kWh. Some supercomputers do this already. In contrast, running 24/7 from batteries raises the cost to almost 2 cents per kWh, still far below bulk electricity prices in datacenters; batteries turn out to be more expensive than having three solar supercomputers in three time zones. You learn from all this that energy costs dominate the total cost of compute, even with our cheapest transistors. Hence our ultra-low-power transistors: not just to prevent the wafer from melting, but mostly to make compute cheaper (for the cloud).

The wafer-scale integration at 180nm costs around $600 per wafer to manufacture: the $100K mask set is a one-time cost, amortised over the $500-per-wafer production run. That is how you get to $600 for 10,000 cores at >1 GHz.
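
The amortisation works out like this (a sketch; the production-run size of 1,000 wafers is my assumption, chosen to make the numbers land on $600):

    // One-time mask set amortised over a production run of wafers.
    const maskSetCost = 100_000;   // one-time 180nm mask set, USD (from the post)
    const perWaferCost = 500;      // marginal fab cost per wafer, USD (from the post)
    const wafersProduced = 1_000;  // assumed run size

    const costPerWafer = perWaferCost + maskSetCost / wafersProduced;
    console.log(costPerWafer);     // 600 USD for a 10,000-core wafer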

These $600 wafer supercomputers use 100-700 watts in normal use, because not all transistors switch all the time at 1 GHz. The transistors are asynchronous and ultra-low-power: there is no global clock wasting 60% of your energy and transistors, and you don't touch all SRAM locations all of the time. The larger 3nm wafer-scale integrations won't use 1 MW either, just a few kW, less than a watt per core.

Actually building these supercomputers will cost $100K at 180nm, $3 million at 28nm, or around $30 million at 3nm. The FPGA prototypes cost $10 per core, similar to GPU prices. This includes the cost of writing the software, the IDE, compilers, etc.

You can run x86 virtual machines unchanged on our 10,000-1,000,000-core wafer-scale integrations at 1 cent per kWh. This is by far the cheapest hyper-scale datacenter compute price ever, and it may come to outcompete current cloud datacenters, which consume more than 5% of all the world's electricity. And by locating our wafer supercomputers in your hot-water storage tank at home [6], you monetise the waste heat, so the compute cost drops below 1 cent per X cores (dependent on the efficiency of your software [5]). Other places you need these ultra-low-power wafer-scale supercomputers are self-driving cars, robots, space vehicles, and satellites: you can't put racks of computers there, and you need to be frugal with battery storage.

These CMOS wafer-scale-integration supercomputers are themselves prototypes for the carbon solar cells and carbon transistors we will grow from sunlight and CO2 a decade from now [2]. Then they will cost almost nothing and run on completely free solar energy.

Eventually we will build a Dyson swarm around our sun and have infinite free compute [3], called Matrioshka Brains [4]. To paraphrase Arthur C. Clarke: if you take these plans too seriously, you will go bankrupt. If your children do not take these plans seriously, they will go bankrupt.

[1] https://www.researchgate.net/profile/Merik-Voswinkel/publica...

[2] https://web.pa.msu.edu/people/yang/RFeynman_plentySpace.pdf

[3] https://en.wikipedia.org/wiki/Matrioshka_brain

[4] https://gwern.net/docs/ai/1999-bradbury-matrioshkabrains.pdf

[5] https://youtu.be/K3zpgoazRAM?t=1602

[6] https://tech.eu/features/7782/nerdalize-wants-to-heat-your-h...


The OS (in 20K lines of code) is called "Frank", and in the talks where Alan uses it for his slides, at one point he zooms out and you can see a cartoon Frankenstein monster in the top-left corner.

You might find this list of Kay's talks interesting:

https://tinlizzie.org/IA/index.php/Talks_by_Alan_Kay


Please see the comments on this Morphle HN account for those Alan Kay talks, or mail morphle &at& ziggo &dot& nl for all the Alan Kay links and student lectures you remember.


Alan Kay is the Tesla of programming: beautiful design, genius implementation, but utterly impractical in 90% of cases.


He hasn't failed at all. What are you talking about? Dozens of programming languages are more sensible and ergonomic because of his influence.


I agree with this. It's hard to nail down why Victor's talks are so compelling, when each of these items separately is much more mundane and already a quite well explored area.

* "What if" feedback loops/direct manipulation

Abstractly, Victor's vision seems to be about predicting/exploring the consequences of some action in programming; in specific demonstrations, it uses small widgets to allow easy manipulation of inputs and build an intuitive understanding of outputs. This could be boiled down to two goals: "allow a program to be more easily tweaked" and "explore a concept to gain intuition from a different viewpoint". The more cynical/pragmatic interpretations of these are "make a GUI for your program" and "use interactive demos when teaching certain topics".

The first interpretation is almost comical, but we can maybe expand it to "when you make a GUI, think about how your interface is interpreted intuitively; this can help make your app more usable". This is perhaps easier to appreciate given that Bret Victor helped design the interface for the first iPhone, famously intuitive to use. It also points to the limitation: only concepts that have another, more intuitive viewpoint can be represented. I can add a colour wheel to my WYSIWYG editor rather than hex values, but I can't easily create a GUI that lets me express that I want to validate an email address, strip its whitespace, and put it into lowercase.
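
For contrast, here's that email rule as code (a deliberately simplistic sketch; the regex is illustrative, not a serious validator). It's a few obvious lines symbolically, yet there's no natural direct-manipulation widget for it:

    // Validate, trim, and lowercase an email address.
    function normalizeEmail(input: string): string | null {
      const email = input.trim().toLowerCase();
      // Toy validation: something@something.something
      return /^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(email) ? email : null;
    }

    console.log(normalizeEmail("  Alice@Example.COM "));  // "alice@example.com"
    console.log(normalizeEmail("not-an-email"));          // null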

The second interpretation leads to explorable explanations, which Victor has made a few of himself [0,1], but I would also cite Nicky Case [2] and unconed [3] as other good examples. Again, this only works for specific topics that have scope for exploration.

* Making logic feel more geometric/concrete

This can be seen in things like LabVIEW (1986) and Apache NiFi (2006), among others, e.g. SAS. In a sense, it has existed in the form of UNIX pipelines and functional programming since the first LISP was made (see the sketch below). There is a further point, that "there currently aren't tools like this suitable for a non-programming audience", which is what 'low code' and 'no code' are trying to achieve; but unfortunately, in practice, as soon as you hit a limitation of the framework you're back to needing an engineer again.
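
In miniature, the same dataflow idea expressed as function composition (a sketch; `pipe` is a hypothetical helper here, though the pattern is standard in FP libraries):

    // Data flows left-to-right through small transformations,
    // like a UNIX pipeline.
    const pipe = <T>(...fns: Array<(x: T) => T>) => (x: T): T =>
      fns.reduce((acc, fn) => fn(acc), x);

    const slugify = pipe<string>(
      (s) => s.trim(),
      (s) => s.toLowerCase(),
      (s) => s.replace(/\s+/g, "-"),
    );

    console.log(slugify("  Visual  Dataflow "));  // "visual-dataflow"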

* Humane interfaces

Sort of addressed in the 'feedback loops' point above, but Dynamicland is an interesting demo of what he's trying to get to. This speaks to me more in the context of the Internet of Things: I have friends who have set up full smart-home heating systems and can move music between rooms, all of which is seen much the same as adjusting a physical thermostat rather than as 'programming'.

There is definitely a lot that can be explored here for certain applications, but there probably isn't direct utility in arranging pieces of paper with coloured dots on them in order to set the path of a robot. I can see this in a more consulting/requirements-capture sense of presenting certain input parameters in a more physical format, but again, this deviates from the OP's notion that this is a whole programming environment.

[0] http://worrydream.com/LadderOfAbstraction/

[1] http://worrydream.com/KillMath/

[2] https://ncase.me

[3] https://acko.net



