I never figured out how they did the turtle graphics in this game. The C64 didn't have whole screen bitmaps, you could either use sprites or user defined character sets, neither of which made this straightforward.
And the loading screens were also amazing, particularly for tape loading.
The C64 does have a couple of bitmap modes. The Last Ninja uses mode 3, which is multicolor bitmap mode. It occupies about 10,000 bytes: 8,000 for pixel data, 1,000 for screen RAM (which holds two of the colors per 8x8 cell), and 1,000 for color RAM (a third color per cell).
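Part of why line drawing is awkward on the C64 is that the bitmap isn't laid out as linear scanlines: it's stored cell by cell in 8x8 character cells, so plotting an arbitrary pixel takes some address arithmetic. A rough sketch of the hires (320x200) addressing, with the conventional $2000 bitmap base as an assumed example:

```python
def c64_hires_pixel(x: int, y: int, base: int = 0x2000):
    """Return (byte_address, bit_mask) for pixel (x, y) in 320x200 hires.

    The bitmap is stored as 25 rows of 40 cells, 8 bytes per cell,
    so each cell row is 320 bytes, not one scanline.
    """
    assert 0 <= x < 320 and 0 <= y < 200
    cell_row = y // 8          # which row of 8x8 cells
    cell_col = x // 8          # which cell within that row
    line_in_cell = y % 8       # scanline inside the cell
    offset = cell_row * 320 + cell_col * 8 + line_in_cell
    mask = 0x80 >> (x % 8)     # leftmost pixel is the high bit
    return base + offset, mask
```

(Multicolor mode halves the horizontal resolution to 160 and uses 2-bit pixel pairs, but the cell-oriented layout is the same.)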
The TI-99/4A version of the Logo language, which has turtle graphics, used user-defined characters to implement them. There were only (I think) 128 user-definable characters, and when turtle graphics had redefined all of them to draw its output, it gave the user the message "out of ink".
As others have said, the C64 does have bitmap modes, though it's understandable not to be aware of them: they weren't commonly used for games, since user-defined character sets were often easier to use as tilesets when the graphics had repetition.
Even on the NES, a lot of games use CHR-RAM, so arbitrary bitmaps are at least possible, though only a small part of the screen can be unique without some rarely used mapper hardware. Zelda and Metroid mostly just use it to compress graphics in ROM; Qix is a simple example with line drawing, and Elite is an extreme one.
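Line drawing into CHR-RAM means writing pixels into the NES's 2bpp planar tile format, which is a little fiddly: each 8x8 tile is 16 bytes, with bytes 0-7 holding bit plane 0 (the low color bit of each pixel) and bytes 8-15 holding bit plane 1. A sketch of setting one pixel in a tile:

```python
def set_tile_pixel(tile: bytearray, x: int, y: int, color: int):
    """Set pixel (x, y) in a 16-byte NES tile to color index 0-3."""
    assert len(tile) == 16 and 0 <= x < 8 and 0 <= y < 8 and 0 <= color < 4
    mask = 0x80 >> x               # leftmost pixel is the high bit
    tile[y] &= ~mask & 0xFF        # clear plane 0 bit
    tile[y + 8] &= ~mask & 0xFF    # clear plane 1 bit
    if color & 1:
        tile[y] |= mask            # low color bit -> plane 0
    if color & 2:
        tile[y + 8] |= mask        # high color bit -> plane 1
```

On real hardware the tile bytes would then have to be copied to CHR-RAM through the PPU data port, typically during vblank, which is its own bandwidth constraint.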
I made a demo of the Mystify screensaver using the typical 8KB CHR-RAM. Even with a lot of compromises it has pretty large borders to avoid running out of unique tiles. https://youtube.com/watch?v=1_MymcLeew8
Those requirements are all facially illegal and unenforceable though. In the US you have federally protected labor rights that you cannot contract out of. The right to discuss pay and working conditions with other workers and the public is one of them.
Yes, imagine being in breach of contract if you apply for a mortgage, and they ask "What do you do" and "How much do you make a year" and "Can we see a pay stub (or income tax info)".
I admire people who don't lie about past drug use on their clearance forms. Sure, it might delay their clearance, but I still admire them.
The core social problem with drug addiction and alcoholism is this habit of telling people what you think they want to hear, rather than the truth.
I do wonder why diffusion models aren't used alongside constrained decoding for programming - surely it makes better sense than using an auto-regressive model.
Diffusion models need to infer the causality of language from within a symmetric architecture (information can flow forward or backward). AR forces information to flow in a single direction and is substantially easier to control as a result. The 2nd sentence in a paragraph of English text often cannot come before the first or the statement wouldn't make sense. Sometimes this is not an issue (and I think these are cases where parallel generation makes sense), but the edge cases are where all the money lives.
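That controllability is what makes constrained decoding so natural in the AR setting: at each step you know the full prefix, so you can mask out every token the grammar forbids before sampling. A toy sketch (made-up four-token vocabulary, "grammar" = balanced parentheses; greedy pick for simplicity):

```python
import math

VOCAB = ["(", ")", "x", "<eos>"]

def allowed(prefix: list[str]) -> set[str]:
    """Tokens the toy balanced-parens grammar permits next."""
    depth = prefix.count("(") - prefix.count(")")
    ok = {"(", "x"}
    if depth > 0:
        ok.add(")")           # can only close if something is open
    if depth == 0 and prefix:
        ok.add("<eos>")       # can only stop when balanced
    return ok

def constrained_step(prefix: list[str], logits: list[float]) -> str:
    """Mask invalid tokens to -inf, then pick the best remaining one."""
    ok = allowed(prefix)
    masked = [l if t in ok else -math.inf for t, l in zip(VOCAB, logits)]
    return VOCAB[max(range(len(VOCAB)), key=lambda i: masked[i])]
```

With a diffusion model there is no single "current prefix" during generation, so enforcing this kind of left-to-right grammar constraint is much less direct.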
But I do wonder if diffusion models will be used in more complex Software Architecture for their long-term coherence, no exposure bias, and their symmetric architecture could work well with interaction nets.
I think that's because Opus 4.6 has more "initiative".
Opus 4.6 can be quite sassy at times. The other day I asked it if it was "buttering me up", and it candidly responded, "Hey, you asked me to help you write a report with that conclusion, not appraise it."
When I first started using Claude I was pretty annoyed by the cute phrases, but when I built my own wrapper I started using them because I had gotten used to them. I did add a setting for it, of course:

[Fun Phrases: Show playful status messages while waiting for responses. Disable to show only "Thinking..."]
Why Anthropic can't provide such a setting, we will never know!
They bothered to let you override the default nonsense messages with custom ones, but they still won't let you see what you're actually interested in. The real information stays hidden.
Luckily, new open weight models from China caught up with Anthropic, so you can use a sane harness with a much cheaper subscription and never look back.
> This suggests that scaling alone won't eliminate incoherence. As more capable models tackle harder problems, variance-dominated failures persist or worsen.
This is a big deal, but are they only looking at auto-regressive models?
I don't think it's fair to say there's no skill in it - one of the new problems I've found with AI coding is that I now have to go away and think more.
I think the bottom line is that the bottlenecks are the specific model you use, and your own skills, experience, and reasoning capacity (intelligence); you control only the latter, so focus on that!