BF-SYS: A fantasy computer that uses Brainfuck as its instruction set (zptr.cc)
67 points by todsacerdoti on Dec 12, 2023 | 27 comments


Okay, let's actually talk about the facilities provided by the project instead of Brainfuck in general, shall we?

First of all, the sound channels: 10/20/30/.../310/320 Hz? What? 10 and 20 Hz are inaudible frequencies, while A of the first octave is 440 Hz, which you can't play at all.

As for the actual output mechanism:

    The output instruction (.) increments the output pointer. If the output pointer is divisible by 33, it changes the delay of a sound channel to the value at the current memory cell. Otherwise, it changes a pixel on the screen to the value at the current memory cell.
What is an "output pointer"? The delay of which sound channel, exactly, is changed? And similarly, which pixel on the screen, exactly, is changed? What about decrementing the output pointer?


Meh, Brainfuck is clearly inadequate as the base for a modern computer.

Concurrent Brainfuck, now, that's where it's at. Forget "GPUs" and their measly "thousands" of compute units... can you imagine how many Brainfuck cores we could fit onto a system with a 5nm process? Millions, probably.

If there is any system that deserves to be tagged "inconceivable performance", it would be such a system.
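For the spirit of the thing, a toy sketch: N independent vanilla-BF cores, each with its own tape, stepped in lockstep by a round-robin scheduler. Python standing in for silicon; every name and size here is made up:

    # Toy "sea of Brainfuck cores": many tiny interpreters, no shared state.
    def make_core(program, tape_len=64):
        return {"pc": 0, "ptr": 0, "prog": program,
                "tape": [0] * tape_len, "out": []}

    def step(core):
        """Run one instruction; return False once the core has halted."""
        prog, pc = core["prog"], core["pc"]
        if pc >= len(prog):
            return False
        op, t, p = prog[pc], core["tape"], core["ptr"]
        if   op == "+": t[p] = (t[p] + 1) % 256
        elif op == "-": t[p] = (t[p] - 1) % 256
        elif op == ">": core["ptr"] = (p + 1) % len(t)
        elif op == "<": core["ptr"] = (p - 1) % len(t)
        elif op == ".": core["out"].append(t[p])
        elif op == "[" and t[p] == 0:      # skip forward past matching ]
            depth = 1
            while depth:
                pc += 1
                depth += {"[": 1, "]": -1}.get(prog[pc], 0)
        elif op == "]" and t[p] != 0:      # loop back to matching [
            depth = 1
            while depth:
                pc -= 1
                depth += {"]": 1, "[": -1}.get(prog[pc], 0)
        core["pc"] = pc + 1
        return True

    cores = [make_core("+" * n + ".") for n in (1, 2, 3)]  # three cores; imagine millions
    running = True
    while running:                         # lockstep: one step per core per round
        running = False
        for core in cores:
            running = step(core) or running
    print([core["out"] for core in cores]) # [[1], [2], [3]]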


I feel like the NUMA penalty would be high, and the coordination between so many execution units would be... painful.

That said, with someone clever enough working on it, it would be boss.


NUMA is for when you have expensive processors, hence few processors, and need to share the memory. This definitely calls for just giving what would be each NUMA node its own few-hundred-thousand CPUs and letting them use a communication bus directly to talk among themselves.

"Just". Such a wonderful word in these sorts of conversations. All we have to do is "just" compile modern languages into Brainfuck for these processors, no problem. All we have to do is "just" build the hardware for the rest of the system on a radically different paradigm than all current computers. I'm sure we can "just" repurpose some FPGAs for this for prototype purposes. It's all so easy here in "just"land I'm surprised we don't "just" do it.


GreenArrays has a really cool chip that they boasted was "the most power efficient per instruction executed" when they released it. On the order of picojoules per cycle, IIRC.

The chip has 144 independent computers (not processors, but full computers, in their terminology).

Chuck Moore, the inventor of the Forth programming language, developed it, and the chip is designed to execute with this parallel model in mind. A parallel Brainfuck system might be a cool project to port over to this platform.

https://www.greenarraychips.com/

Here is a talk given at Strange Loop by Chuck: https://youtu.be/0PclgBd6_Zs?si=r8FqQo4LUvdFmHbd


Yeah, I had that in mind when this idea jumped unbidden into my head.

With something less blindingly stupid than Brainfuck, it's a legitimately interesting architecture. I fear that between CPUs, GPUs, and the fact that apparently we're going straight to custom hardware for neural nets, there isn't a niche for this in our world. But there certainly is a parallel universe in which we went down this road instead, starting circa the late 1970s. In that universe it is the GPUs and custom neural-net architectures that face the uphill battle: while they may be better at their task than that world's 2048-CPU "conventional" systems, they have a hard time being enough better to justify the investment.


I did it, I took the plunge and just bought a dev board.

You are definitely right on with regard to the custom hardware, but I still want to experiment with it, and I'm pretty sure there will be good niches for such flexible, low-power operation.

There are still a lot of problems, solved and yet to be solved, for which neural nets are superfluous/overkill (I think).

I might try my hand at some DSP for radio or audio to start.

I also think I'm gonna try their Forth implementation. I've never used Forth before, but it seems like a pretty neat language.

Maybe Uiua stack/array programming would make for a better port down the line.


I just had to file a complaint with the Cisco Umbrella security filter that my work uses: "I am getting tired of the high quality security checks that use scary words in the URL as a metric..."


If you have admin rights, you can disable the Umbrella trashware; apparently no one notices.

https://gist.github.com/jasmas/4976d359c00726cd3be1c9828aadd...

I think that old script needs a path adaptation, though.


thanks, will check it out.


Nobody notices unless you have some sort of security event, at which point it's ten times as bad. If you're confident that won't happen, sure...


Sure, but on the other hand: one time, when this project I'm on was just starting out, I put a link in our Teams channel to one of the Netlify deployment builds for a PR I wanted a tester to test, and I got an urgent message demanding I explain why I was sharing dangerous links over Teams, trying to infect my coworkers.

So there's only so safe one can reasonably stay.


"Brain" is probably terrifying to a brainless IT department, yes


See also “Brainfuck Enterprise Solutions” on GitHub, who have a whole suite of Enterprise Ready solutions written in Brainfuck, such as an OS (https://github.com/bf-enterprise-solutions/os.bf), IDE-plus-text editor (https://github.com/bf-enterprise-solutions/ed.bf), BF interpreter in BF (https://github.com/bf-enterprise-solutions/meta.bf), etc.


Do I understand correctly that each frame one is to call the output/"." command (33/32)·(number of pixels) times in total: (1/32)·(number of pixels) times for audio and (number of pixels) times for drawing to the screen?

Hm, and the way to do this... would it be to first store what you intend to output in a region of the tape, before the part where you actually emit it, and then walk through that region making the output?

Also, what happens if the computations take too long and you don’t have time to compute the frame?
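If the 33/32 reading above is right, the per-frame bookkeeping comes out like this (the resolution is my assumption; the page doesn't state one):

    # Count "." calls per frame under the "every 33rd write is audio" reading.
    W, H = 64, 64                    # assumed resolution
    PIXELS = W * H
    out_ptr = audio = pixels = 0
    while pixels < PIXELS:           # one frame: write every pixel once
        out_ptr += 1                 # each "." bumps the output pointer first
        if out_ptr % 33 == 0:
            audio += 1               # this write lands on a sound channel delay
        else:
            pixels += 1
    print(out_ptr, audio)            # -> 4223 127: roughly 33 calls per 32 pixels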


This appears not entirely different from modern pixel shaders on GPUs, which basically run a program for every pixel on the screen. Quite efficient if you have large numbers of concurrent execution units. See https://www.shadertoy.com/view/MdXSWn .


In 2023 I would expect at least a Brainfuck FPGA. Wouldn't be that hard to do. I bet somebody did that.



The only thing good about Brainfuck is its name. Brainfuck.


Does anyone actually use brainfuck?


Honestly, this might be an unpopular opinion, but I'll just say it: Brainfuck is not a great programming language for production use-cases.


Okay, but why?


Counterpoint: why not?


They could have spent the time doing something good for humanity, like making their bed.


But they did do something good for humanity: they created something that gave me two seconds of joy. Depression makes it hard for me to enjoy things.


Because it doesn't create value?


Because humans. Of course someone did; there's always one.



