Hacker News | empiricus's comments

Well, for some reason horse numbers and horse usage dropped sharply at a moment in time. Probably there was some horse pandemic I forgot about.

I agree that the current models are far from perfect. But I am curious how you see the future. Do you really think/feel they will stop here?

I mean, I'm just some guy, but in my mind:

- They are not making progress, currently. The elephant-in-the-room problem of hallucinations is exactly the same as, or, as I said above, worse than it was 3 years ago

- It's clearly possible to solve this, since we humans exist and our brains don't have this problem

There are then two possible paths: either the hallucinations are fundamental to the current architecture of LLMs, and there's some other aspect of the human brain's configuration that they've yet to replicate, or the hallucinations will go away with better and more training.

The latter seems to be the bet everyone is making; that's why all these data centers are being built, right? So the bet is that larger-scale training will solve the problem, and that there's enough training data, silicon and electricity on earth to perform training at that "scale".

There are 86B neurons in the human brain. Each one is a stand-alone living organism, like a biological microcontroller. It has constantly mutating state and memory: short-term through the presence or absence of RNA and proteins, long-term through chromatin formation enabling and disabling its own DNA over time, and in theory also permanent memory through DNA rewriting via TEs. Each one has a vast array of input modes: direct electrical stimulation, chemical signalling through a wide array of signalling molecules, and electrical field effects from adjacent cells.

Meanwhile, GPT-4 has 1.1T floats. No billions of interacting microcontrollers, just static floating points describing a network topology.

The complexity of the neural networks that run our minds is spectacularly higher than the simulated neural networks we're training on silicon.

That's my personal bet. I think the 86B interconnected stateful microcontrollers are so much more capable than the 1T static floating points, and the 1T static floating points are already nearly impossibly expensive to run. So I'm bearish, but of course, I don't actually know. We will see. For now all I can conclude is the frontier model developers lie incessantly in every press release, just like their LLMs.
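
Rough back-of-envelope numbers for the "expensive to run" part (a sketch only: the 1.1T figure is the commonly repeated estimate, not an official one, and I'm assuming a dense model at 16-bit precision):

    # Back-of-envelope cost of a ~1.1T-parameter model (estimated size, dense, fp16 assumed).
    params = 1.1e12              # parameters
    bytes_per_param = 2          # 16-bit floats

    print(params * bytes_per_param / 1e12, "TB just to hold the weights")   # ~2.2 TB
    print(2 * params, "FLOPs per generated token (forward pass only)")      # ~2.2e12

    # Versus the ~86 billion neurons of a human brain:
    print(params / 86e9, "parameters per neuron")                           # ~13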


The complexity of actual biological neural networks became clear to me when I learned about the different types of neurons.

https://en.wikipedia.org/wiki/Neural_oscillation

There are clock neurons, ADC-like neurons that transform the analog intensity of a signal into counts of digital spikes, neurons that integrate signals over time, neurons that synchronize with each other, and so on. Transformer models have none of this.
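
To make the "integrate signals over time" and "analog intensity into spike counts" points concrete, here is a toy textbook leaky integrate-and-fire neuron (illustrative Python only, not a model of any specific biological neuron type):

    import numpy as np

    # Leaky integrate-and-fire: the membrane potential integrates input current,
    # leaks back toward rest, and fires a spike when it crosses a threshold.
    def lif_spike_count(current, dt=1e-3, tau=20e-3, v_rest=0.0, v_thresh=1.0):
        v, spikes = v_rest, 0
        for i_in in current:
            v += (-(v - v_rest) + i_in) * dt / tau
            if v >= v_thresh:
                spikes += 1
                v = v_rest            # reset after each spike
        return spikes

    # Stronger constant input -> more spikes: analog intensity encoded as a spike count.
    print(lif_spike_count(np.full(1000, 1.2)))   # fewer spikes over one second (1000 steps of 1 ms)
    print(lif_spike_count(np.full(1000, 3.0)))   # many more spikes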


Thanks, that's a reasonable argument. Some critique: based on this argument it is very surprising that LLMs work so well, or at all. The fact that even small LLMs do something suggests that the human substrate is quite inefficient for thinking. Compared to LLMs, it seems to me that 1. some humans are more aware of what they know; 2. humans have very tight feedback loops to regulate and correct. So I imagine we do not need much more scaling, just slightly better AI architectures. I guess we will see how it goes.

Well, the secret is not how you crawl the web, but how you decide what to show to the users.

It's not like the LOC publishes its official procedure for what gets to appear on the foremost shelves, either.

I think the genome might be mostly just the "config file". The cell already contains most of the information and mechanisms needed for the organism; the genome is config flags and some more detailed settings that turn things on and off in the cell, at specific times in the life of the organism. From this point of view, the discussion about how many base pairs/bytes of information are in the genome is misleading. Similar analogy: I can write a hello world program, which displays hello world on the screen. But the screen is 4k and the Windows background is also visible, so the hardware and OS are 6-8 orders of magnitude more complex than the puny program, and the output is then much more complex than the puny program.


By the time stellarator designs become economical (tens of years in the most optimistic case), you could cover all of Germany in PV panels. Or even grow an entire new generation of forest. So far stellarators look like interesting vaporware; I mean, they are irrelevant to any current energy discussion.


I "like" when ppl talk about UBI and say "but ppl on UBI are not happy and lack purpose". Compare with being poor.


It's even more annoying when you consider that most proposals that gain any type of traction can't ever even approach the "I have everything now so I'm not going to work because I'm so lazy"-type abundance that the fear-mongers try to sell you. If we were just able to use some of all this wealth to create an absolute baseline of "enough money to not starve, have any kind of roof over your head and not be trapped in your current situation", I wonder what society could have looked like.


This looks nice, but somebody should pay the difference, and maybe it should be those that oppose the normal looking supports.


Well, the cochlea works within the realm of biological and physical possibility. Basically it is a triangle through which waves propagate, with sensors along the edge. Roughly speaking, this is similar to a filter bank of Gabor filters that respond to rising frequencies along the triangle edge. Ergo you can say Fourier, but it only means sensors responding to different frequencies because of their location.
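
A toy sketch of that filter-bank picture, with a handful of Gabor-like filters at rising center frequencies standing in for sensors at different positions (the sample rate, window length and frequencies are arbitrary illustrative choices, not a cochlear model):

    import numpy as np

    fs = 16_000                          # sample rate (Hz), arbitrary
    t = np.arange(0, 0.05, 1 / fs)       # 50 ms window

    def gabor(freq, sigma=0.005):
        # Sine carrier under a Gaussian envelope centered in the window.
        return np.sin(2 * np.pi * freq * t) * np.exp(-((t - t.mean()) ** 2) / (2 * sigma**2))

    # Filter bank at rising frequencies, like sensors along the cochlea's edge.
    bank = {f: gabor(f) for f in (200, 400, 800, 1600, 3200)}

    tone = np.sin(2 * np.pi * 800 * t)                        # a pure 800 Hz tone
    responses = {f: abs(tone @ g) for f, g in bank.items()}
    print(max(responses, key=responses.get))                  # the 800 Hz "sensor" responds most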


Yeah, but it's not only the frequency that matters; the waveform is very relevant. For example, if your waveform is a triangle, listeners will tell you that it sounds very noisy compared to a simple sine. If you use sines as the basis of your vector space, triangles really do look like a noisy mix. My question is whether the basic elements are really sines, or whether the basic eigen-waveforms of the cochlea are other waveforms (e.g. slightly wider or narrower than a sine, ...). If the physics in the ear isn't linear, maybe the sine isn't the purest waveform for a listener.

Most people in physics only know sines, and maybe sometimes rectangular waves, as a basis for transformations, but mathematically you could use a lot of other things, maybe very similar to a sine but different.


But if you apply a frequency-dependent phase shift to the triangle wave, nobody will be able to tell the difference unless the frequency is very low.
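
A quick numerical illustration of that claim (arbitrary fundamental and sample rate, and the magnitude spectrum is standing in for "what you hear", which is a simplification):

    import numpy as np

    fs, f0 = 16_000, 440
    t = np.arange(0, 0.1, 1 / fs)
    odd = range(1, 18, 2)                               # odd harmonics, kept below Nyquist

    def partial(n, phase=0.0):
        # One term of the triangle wave's Fourier series (amplitudes fall off as 1/n^2).
        return (-1) ** ((n - 1) // 2) / n**2 * np.sin(2 * np.pi * n * f0 * t + phase)

    triangle = sum(partial(n) for n in odd)
    rng = np.random.default_rng(0)
    scrambled = sum(partial(n, rng.uniform(0, 2 * np.pi)) for n in odd)   # per-harmonic phase shift

    mag = lambda x: np.abs(np.fft.rfft(x))
    print(np.allclose(mag(triangle), mag(scrambled), atol=1e-6))  # same magnitude spectrum
    print(np.max(np.abs(triangle - scrambled)))                   # but a very different waveform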


Well, the actually interesting part is that as the vector dimension grows, random vectors become almost orthogonal, and the number of almost-orthogonal vectors you can fit grows exponentially with the dimension. This is probably the most important reason why text embeddings work: you take some structure from a ~10^6-dimensional space, project it down to ~10^3 dimensions, and you can still roughly keep the distances between all the vectors.
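
A small numpy sketch of both halves of that (10^4 dimensions instead of 10^6 to keep it cheap; purely illustrative):

    import numpy as np

    rng = np.random.default_rng(0)
    d, k, n = 10_000, 1_000, 20

    # 1) Two random vectors in high dimensions are nearly orthogonal.
    a, b = rng.standard_normal(d), rng.standard_normal(d)
    print(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))    # cosine similarity ~ 0

    # 2) A random projection down to a tenth of the dimensions roughly preserves
    #    pairwise distances (the Johnson-Lindenstrauss idea).
    points = rng.standard_normal((n, d))
    low = points @ (rng.standard_normal((d, k)) / np.sqrt(k))

    def dists(x):
        return np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)[np.triu_indices(n, 1)]

    ratios = dists(low) / dists(points)
    print(ratios.mean(), ratios.std())                        # ~1.0 with small spread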


Hm, these are just called AC in Europe. To my knowledge, AC has always worked both ways here. In Europe we call them heat pumps when they heat water. So they are either air-to-water (the outdoor unit looks like a bigger AC outdoor unit) or water-to-water (the "outdoor unit" is a long underground water pipe loop). The warm water is used for underfloor heating or for the shower, kitchen, etc.

