
I'm not sure. There's a view that, as I understand it, suggests that language is intelligence. That language is a requirement for understanding.

An example might be kind of the contrary: that you might not be able to hold an idea in your head until it has been named. For myself, until I heard the word gestalt (maybe a fitting example?) I am not sure I could have understood the concept. But when it is described it starts to coalesce, and then when named, it becomes real. (If that makes sense.)

FWIW, Zeitgeist is another one of those concepts/words for me. I guess I have to thank the German language.

Perhaps it is why other animals on this planet seem to us lacking intelligence. Perhaps it is their lack of complex language holding their minds back.



  > There's a view that suggests that language is intelligence. 
I think you find the limits when you dig in. What are you calling language? Can you really say that Eliza doesn't meet your criteria? What about a more advanced version? I mean we've been passing the Turing Test for decades now.

  > That language is a requirement for understanding.
But this contradicts your earlier statement. If language is a requirement then it must precede intelligence, right?

I think you must then revisit your definition of language and ensure that it matches all the creatures that you consider intelligent. At least by doing this you'll make some falsifiable claims and can make progress. I think an ant is intelligent, but I also think ants do things far more sophisticated than the average person realizes. It's an easy trap, not knowing what you don't know. But if we do the above we get some path to aid in discovery, right?

  > that you might not be able to hold an idea in your head until it has been named
Are you familiar with Anendophasia?

It is the condition where a person does not have an internal monologue. They think without words. The definition of language is still flexible enough that you can probably still call that language, just like in your example, but it shows a lack of precision in the definition, even if it is accurate.

  > Perhaps it is why other animals on this planet seem to us lacking intelligence
One thing to also consider is if language is necessary for societies or intelligence. Can we decouple the two? I'm not aware of any great examples, although octopi and many other cephalopods are fairly asocial creatures. Yet they are considered highly intelligent due to their adaptive and creative nature.

Perhaps language is a necessary condition for advanced intelligence, but not for intelligence alone. Perhaps it is communication and societies, as distinct from an internalized language. Certainly the social group can have an influence here, as coalitions can do more than the sum of the individuals (by definition). But the big question is whether these things are necessary. Getting the correct causal graph, removing the confounding variables, is no easy task. But I think we should still try and explore differing ideas. While I don't think you're right, I'll encourage you to pursue your path if you encourage me to pursue mine. We can compete, but it should be friendly, as our competition forces us to see the flaws in our models. Maybe the social element isn't a necessary condition, but I have no doubt that it is a beneficial tool. I'm more frustrated by those wanting to call the problem solved. It obviously isn't, as it's been so difficult to get generalization and consensus among experts (across fields).


> It is the condition where a person does not have an internal monologue.

These people are just nutjobs that misinterpreted what internal monologue means, and have trouble doing basic introspection.

I know there are a myriad of similar conditions, aphantasia, synaesthesia, etc. But someone without internal monologue simply could not function in our society, or at least not pass as someone without obvious mental diminishment.

If there really were some other, hidden code in the mind, that could express "thoughts" in the same depth as language does - then please show it already. At least the tiniest bit of a hint.


I know some of these people. We've had deep conversations about what is going on in our thought processes. Their description significantly differs from mine.

These people are common enough that you likely know some. It's just not a topic that frequently comes up.

It is also a spectrum, not a binary thing (though full anendophasia does exist, it is just on the extreme end). I think your own experiences should allow you to doubt your claim. For example, I know when I get really into a fiction book I'm reading that I transition from a point where I'm reading the words in my head to seeing the scenes more like a movie, or more accurately like a dream. I talk to myself in my head a lot, but I can also think without words. I do this a lot when I'm thinking about more physical things, like when I'm machining something, building things, or even loading the dishwasher. So it is hard for me to believe that, while I primarily use an internal monologue, there aren't people that primarily use a different strategy.

On top of that, well, I'm pretty certain my cat doesn't meow in her head. I'm not certain she has a language at all. So why would it be surprising that this condition exists? You'd have to assume there was a switch in human evolution, where it happened all at once or everyone without it went extinct. I find that less likely than the idea that we just don't talk enough with our friends about how we think.

Certainly there are times where you think without a voice in your head. If not, well you're on the extreme other end. After all, we aren't clones. People are different, even if there's a lot of similarities.


I’m like that more often than not. Words and language always seemed like a “translation layer” to express myself to other people, not something essential that needs to happen in my head. Especially when thinking deeply about some technical problem there’s no language involved, just abstract shapes and seeing things “in my mind’s eye”.

We might just be rehashing that silly internet meme about “shape rotators”, but there could be a correlation here where people whose minds work this way are more dismissive of LLMs.


I suggest you revisit the subject with your friends, with two key points:

1. Make it clear to them that with "internal monologue" you do not mean an actual audible hallucination

2. Ask them if they EVER have imagined themselves or others saying or asking anything

If they do, which they 100% will unless they lie, then you have ruled out "does not have an internal monologue"; the claim is now "does not use his internal monologue as much". You can keep probing them on what exactly that means, but it gets wishy-washy.

Someone that truly does not have an internal dialogue could not do the most basic daily tasks. A person could grab a cookie from the table when they feel like it (oh, :cookie-emoji:!), but they could not put on their shoes, grab their wallet and keys, look in the mirror to adjust their hair, and go to the supermarket to buy cookies. If there were another hidden code that could express all the mental state pulled up by "buy cookies", by now we would at least have a hint that it exists underneath. We must also ask: why would we constantly translate this into language if the mental state is already there? Translation costs processing power and slows us down. So why are these "no internal monologue" people not geniuses?

I have no doubt that there is a spectrum; on that I agree with you. But the spectrum is "how present the internal monologue is (or how aware the person is of it)". E.g. some people have ADHD, others never get anxiety at all. "No internal monologue" is not one end of the spectrum for functioning adults.

The cat actually proves my point. A cat can sit for a long time in front of a mouse hole, or it can hide to jumpscare its brother cat, and so on. So to a very small degree there is something that lets it process ("understand") very basic, near-future events and action-reactions. However, a cat could not possibly go to the supermarket to buy food, even setting aside anatomical obstacles, because it has no language and therefore cannot build a complex mental model. Fun fact: whenever animals (apes, birds) have been taught language, they never ask questions (some claim they did, but if you dig in you'll see that the interpretation is extremely dubious).


  > 1. Make it clear to them that with "internal monologue" you do not mean an actual audible hallucination
What do you mean? I hear my voice in my head. I can differentiate this from a voice outside my head, but yes, I do "hear" it.

And yes, this has been discussed in depth. It was like literally the first thing...

But no, they do not have conversations in their heads like I do. They do not use words as their medium. I have no doubt that their experience is different from mine.

  > 2. Ask them if they EVER have imagined themselves or others saying or asking anything
This is an orthogonal point. Yes, they have imagined normal interactions. But frequently those imaginary conversations do not use words.

  > The cat actually proves my point.
Idk man, I think you should get a pet. My cat communicates with me all the time. But she has no language.

  > Fun fact: whenever animals (apes, birds) have been taught language, they never ask questions (some claim they did, but if you dig in you'll see that the interpretation is extremely dubious).
To be clear, I'm not saying my cat's intelligence is anywhere near ours. She can do tricks and is "smart for a cat" but I'm not even convinced she's as intelligent as the various wild corvids I feed.


It's pretty self-explanatory: there's the actual voice heard with your ears, there's the internal monologue, and then there's a hallucination.

> Yes, they have imagined normal interactions. But frequently those imaginary conversations do not use words.

And you did not dig in deeper? How exactly do you imagine a conversation without words?


  > there's actual voice heard with your ears, there's the internal monologue, and then there's a hallucination.
This needs no explaining. I think I sufficiently made it clear that we agree with these distinctions.

  >> I hear my voice in my head. I can differentiate this from a voice outside my head, but yes, I do "hear" it.
Though to be more precise I would say that a hallucination appears to come from outside the head, even if you are aware that it is coming from inside. Still, clearly distinct from an internal monologue, which is always clearly internal.

  > And you did not dig in deeper?

  >>>> I know some of these people. ***We've had deep conversations about what is going on in our thought processes.***
Yes. Multiple hours long conversations. One of these people I know now studies psychology. I research intelligence and minds from an artificial standpoint and they from a biological. Yeah, we have gotten pretty deep and have the skills and language to do so far more than the average person.

I think you need to consider that you may just be wrong. You are trying very hard to defend your belief, but why? The strength of our beliefs should be proportional to the evidence that supports them. I am not trying to say that your logic is bad, let's make that clear. But I think your logic doesn't account for additional data. If you weren't previously aware of this data, then how could you expect the logic to reach the correct conclusion? I want to make this clear because I want to distinguish correctness from intelligence (actually relevant to the conversation this stemmed from). You can be wrong without being dumb, but you can also be right and dumb. I think on this particular issue you fall into the former, not the latter. I respect that you are defending your opinion and beliefs, but this is turning into you rejecting data. Your argument now rests on the data being incorrect, right? Because that's the point: either the data is wrong or your model is wrong (and let's distinguish that a model is derived through logic to explain data).

I want to remind you that this idea is testable too. I told you this because it is a way to convince yourself and update the data you have available to you. You can train yourself to do this in some cases. Not all, and obviously it won't be an identical experience to these people, but you can get yourself to use lower amounts of language when thinking through problems. You had also mentioned that people with aphantasia couldn't function, but think about that too. These topics are quite related, actually; considering how we've discussed anendophasia, you should be able to reason that these people are really likely to have low aphantasia. Notice I said low, as this is a spectrum. You can train the images in your mind to be stronger too. The fact that some images are stronger than others should lead you to believe that this is a spectrum and that people likely operate at different base levels. It should also lead you to reason that this is likely trainable in an average person. The same goes for anendophasia. Don't make this binary; consider it a spectrum. That's how the scientific literature describes the topic too. But if you pigeonhole it into being binary and only true in the extreme cases, then your model isn't flexible enough, as it also isn't considering the variance in people.

Go talk with your friends. Get detailed. When you imagine an apple in your head, how much do you see? Ask the person if their process involves words or if it is purely imagery. If words, how many? Is it a red apple? Green? Yellow? Can they smell it? Can they taste it? What does it smell and taste like? I will bet you every single person you talk to will answer these differently. I will even wager that each time you do the exercise you yourself will answer differently, even if the variance is much smaller. But that's data, and your model needs to be able to explain that data too. While I think you have the right thought process, I don't think you are accounting for this variance, instead treating it as noise. But noise can be parameterized and modeled too. Noise is just the statistical description of uncertainty.


Let me be clear: yes, I know I might be wrong. I hope I'm not dumb and wrong, or at least not dumb. I am also not writing here as some kind of debate exercise. I do because I find this topic extremely interesting and insightful. What if language is the intelligence? What if "guessing the next word" really was all that was there, to peak human intelligence, knowledge, and understanding of our world? I am not hyped by AI, it's rather that I find this possibility somewhat sad.

I've made up a model, an idea, and I don't think the data opposing it is trustworthy. My first problem is that there are many people who claim that they have NO internal monologue, which means NEVER constructing a sentence from themselves or others in their head (except directly as verbal speech), and this seems outright impossible. When pressed, these people usually either admit that they do have some monologue, just "much less", or they mistook it for something similar to schizophrenia: actual hallucinations. If they don't admit to actually, sometimes, having it, then they fail to explain where exactly the line between "thinking of someone or themselves saying something" and the internal monologue/dialogue is. As if they had been caught lying by the detective, they end the conversation. Or at least that's how I feel; I really don't know how to ask more questions here before making them feel too interrogated, or like someone with a self-diagnosis being told that they are imagining things.

With the "absolutely none" group out of the way, it leaves us with people who claim to perceive the internal monologue very scarcely, and claim that they do not need it to "think" or "do". How can we possibly test this scientifically? The data is all self-reported. Or at least I don't know whether this can be, or has been, neurologically researched.

Consider also that all self-reported data about the internal monologue is "poisoned": we are trying to get objective data with the data itself as the vehicle. We are not asking if someone feels pain, or if they can solve a puzzle within a timeframe. We cannot just measure electrical activity with instruments, nor evaluate yes-or-no questions.

What if it is true that some people do not perceive their internal monologue? I certainly don't remember it "popping" into my head at a certain age, and I think nobody does. When we learn language, we become conscious with it, because it allows us to model the world, beyond putting things in our mouths and screaming. So it could be that not everybody perceives it equally, a spectrum like you said, and that some people retroactively rationalize it as not being there: just "thoughts", ideas, feelings. We reconstruct past events via narration, filling in details by guessing, so why wouldn't some people guess that they are not narrating in their head? It is not something that is taught in school or by our parents; you either perceive it as "internal monologue", or as "just thinking", because, well, it's the thinking doing its thing.


Somewhat out of my league in this thread, but I think I am one of these people. I do remember a time before I had an internal monologue; in fact I remember the day in elementary school when I learned, after my teacher explained it to me, that everyone else was "talking to themselves in their head". I think I spent the next month or so obsessing over this newfound ability. But before that day I was perfectly capable of thought, and conversation, and writing. Even now I can "switch modes" and have coherent thoughts occur, with no labeling or accompanying narrative. I can distinctly identify concepts and transitions between them, but there are no words involved until I open my mouth. So I don't know if it was just a hidden background process before that day. But it definitely "feels" different when it's in the foreground or background, or not there.


  > What if language is the intelligence? 
Almost certainly not. There does not seem to be a strong correlation between the two. We have a lot of different measures for intelligence when it comes to animals. We can place them across a (multidimensional) spectrum and humans seem unique with language. It also appears that teaching animals language does not cause them to rapidly change on these metrics despite generations of language capabilities.

  > What if "guessing the next word" really was all that was there, to peak human intelligence, knowledge, and understanding of our world?
I believe this is falsifiable. As best I understand, it is a belief in this relationship: predict next word <--> understanding. Yet we know that neither direction holds. I'll state some trivial cases for brevity[0], but I have no doubt you can determine more complicated ones and even find examples.

-> I can make accurate predictions about coin flips without any understanding of physics or how the coin is being flipped. All I need to do is be lucky. Or we can take many mechanical objects like a clock that can predict time.

Or a horse can appear to do math if I tell it how many times to stomp its foot. It made accurate predictions yet certainly has no understanding.

Ehh, I'll give you a more real example. There's a model that gives accurate predictions for turn-by-turn taxi directions, where the authors extract the implicit world model and find that it is not only inaccurate but significantly divergent[1]. Vafa has a few papers on the topic; I suggest reading his work.
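To make the lookup-table flavor of this concrete, here is a toy sketch (my own, not from the paper): a bigram counter that predicts a cyclic route perfectly while encoding nothing about why the route loops.

```python
from collections import Counter, defaultdict

def train_bigram(tokens):
    # Record which token follows which; co-occurrence counts only.
    table = defaultdict(Counter)
    for a, b in zip(tokens, tokens[1:]):
        table[a][b] += 1
    return table

def predict(table, token):
    # Predict the most frequently observed successor.
    followers = table.get(token)
    return followers.most_common(1)[0][0] if followers else None

# A sequence generated by a rule the predictor never sees:
# stops on a fixed 4-stop loop (a toy stand-in for the taxi routes).
route = ["A", "B", "C", "D"] * 50
model = train_bigram(route)

# The table predicts every transition perfectly...
correct = sum(predict(model, a) == b for a, b in zip(route, route[1:]))
print(correct / (len(route) - 1))  # 1.0

# ...yet it contains no notion of a loop, distance, or geography.
```

Perfect next-token accuracy, zero world model: the "understanding" is entirely in the eye of the observer.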

<- You can understand all the physics of a double pendulum and still fail to predict its movement arbitrarily far forward if you do not also know the initial conditions. This will be true for any chaotic system.

As I said, we've seen this in the history of science. {Geo,Helio}centrism is a great example. Scientists who had no affiliation with the church still opposed Galileo because his model wasn't making accurate predictions for certain things. Yet the heliocentric model is clearly a better understanding and more accurate as a whole. If you want to dive deeper into this topic I'd highly recommend both the podcast "Opinionated History of Mathematics" and the book "Representing and Intervening" by Ian Hacking. They're both very approachable. FWIW, metaphysics talks about this quite a lot.

  > My first problem is that there are many people that claim that they have NO internal monologue
So again, I cannot stress that we should not represent this as a binary setting. The binary cases are the extreme (in both directions). Meaning very few people experience them.

The problem here is one of language and semantics, not effect. I completely believe that someone will say "I have no internal monologue" if >90% of their thinking is without an internal monologue. Just like how a guy who's 5'11.75" will call themselves 6'. Are they a liar? I wouldn't say so, they're >99% accurate. Would you for someone 5'11"? That's probably more contextually dependent.

So you distrust the data. That's fine. Let's assume poisoned. We should anyways since noise is an important part of any modeling[2]. It is standard practice...

So instead, do you distrust that there's a distribution in how much of an internal monologue individuals use? Or do you presume they all use them the same?

I'd find it hard to believe you distrust the spectrum. But if you trust the spectrum then where is the threshold for your claim? 0%? That's really not a useful conversation even if heavy tailed.

You are hyper-fixated on the edge case but its result isn't actually consequential to your model. The distribution is! You'll have to consider your claims much more carefully when you consider a distribution. You need to then claim a threshold, in both directions. Or if you make the claim that we're all the same (I'd find that quite surprising tbh, especially given the nature of linguistics), you need to explain that too and your expected distribution that would claim that (narrow).

All I can tell you is that my friend and I have had this conversation multiple times over many years and it seems very consistent to me. I have no reason to believe they are lying, and if they are, they are doing so with an extreme level of consistency, which would be quite out of the norm.

[0] Arguing the relationship still requires addressing trivial relationships.

[1] https://arxiv.org/abs/2406.03689

[2] Even if there are no liars (or "lizardmen"[3]) we still have to account for miscommunication and misunderstandings.

[3] https://en.wiktionary.org/wiki/Lizardman%27s_Constant


> We have a lot of different measures for intelligence when it comes to animals.

But there is a vast difference between animal intelligence and human intelligence.

> predict next word <--> understanding

Yes, and I could say a stone understands the world because its state reflects the world: it gets hot, cold, wet, dry, radiated, whatever. Perhaps its internal state can even predict the world: if it's rolling downhill, it can predict that it will stop soon. But the stone is not conscious like a human, and neither is a clock, nor a horse that can count to ten. The stone is obviously a reductio ad absurdum; a horse can actually "guess" to some degree, but nothing like a human. It cannot ask a question, and it cannot answer itself a question.

> I cannot stress that we should not represent this as a binary setting.

That was kind of my point, to eliminate the binary "no", leaving us with a spectrum.

My initial claim "these are just nutjobs" - my apologies for the phrasing - was addressing this: there are no people "without internal monologue AT ALL".

Since we seem to actually agree on this point, our difference is that I believe that the people with "little internal monologue" are simply not aware of it.

Let me phrase it this way: If language is the understanding, then the internal monologue is not some quirky side effect. To understand something at the human level, we need to describe it with language; the rest is primitive instincts and "feelings".

We can model the past and the future. We can model ourselves in 10 years. And what is one of the most important things we would model? What we would say or think then, thinking being "silently saying something in our head". Not really just feelings: "I would love my partner", sure, but why? "Because...".

When we are utilizing language, the internal monologue, to construct the model, we cannot be "aware of it" constantly. That is, the bandwidth is taken by the tasks at hand; it would be detrimental if every other phrase were followed by "btw, did I notice that I just understood this via a string of words?". The more complex the actions or ideas we process, the less aware we are that we are using language for them. That is "being in the flow". We can reconstruct it when done, and here, if there is a lack of awareness of the internal monologue, it will be rationalized as something else.

> Or if you make the claim that we're all the same (I'd find that quite surprising tbh, especially given the nature of linguistics), you need to explain that too and your expected distribution that would claim that (narrow).

My explanation (without proof), is that it's just a matter of awareness.

> All I can tell you is that my friend and I have had this conversation multiple times over many years and it seems very constant to me. I have no reason to believe they are lying and if they are they are doing so with an extreme level of consistency, which would be quite out of the norm.

Can you think of some kind of test question (or string of questions) that could prove it either way? I have been thinking about it, obviously, but I can't come up with any way to empirically test whether there is or isn't an internal monologue. Consistency could simply mean that their rationalization is consistent.

I'll leave you this article, which I found quite interesting: https://news.ycombinator.com/item?id=43685072 The person lost language, and lost what we could consider human-level consciousness at the same time, and then recovered both at the same rate. Of course, there was brain damage, so it's not an empirical conclusion.

Also this book https://en.wikipedia.org/wiki/The_Origin_of_Consciousness_in... while partially debunked and being pop-sci to begin with, has wildly interesting insights into the internal monologue and at least draws extremely interesting questions.


There is a book written by a woman who suffered a stroke. She lost the ability to speak and understand language, yet she remained conscious. It took her ten years to fully recover. The book is called "My Stroke of Insight".


Conscious, like an animal or a baby. She could not function at all like a normal adult. Proves my point.



