I disagree here. We DO "learn them at a very early age, without being explicitly instructed by anyone and just by observing the world". The majority of information a human ingests doesn't involve direct action upon the world (in any more than a philosophical sense, anyway). We may feel as though actions are more memorable, but I claim that we observe, then take actions, not the other way around. We learn things all the time that are not "consciously learned": basic number sense (e.g. that 2 apples are more than 1 apple) is acquired by most humans in an unsupervised manner, long before labels are added. I claim that early childhood development, and thus much of our "foundation", is primarily rooted in having no explicit information AND not enough power to directly label data ourselves (through experiments, play, etc.).
This is important because "learning without explicit instructions" in ML speak is unsupervised learning (clustering and dimensionality reduction). There are no labels except the ones you decide upon yourself (cluster membership). Unsupervised learning is still in its infancy in terms of effectiveness compared to supervised systems, and it's no surprise that its algorithms are generally extremely easy to implement from scratch (e.g. K-means or DBSCAN), compared to relatively difficult work like automatic differentiation in neural networks.
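To make the "easy to implement from scratch" point concrete, here is a minimal K-means sketch (my own toy version in Python/NumPy, not any particular library's implementation; the data and parameters are made up). The entire algorithm is just two alternating steps: assign each point to its nearest centroid, then move each centroid to the mean of its assigned points.

```python
import numpy as np

def kmeans(X, k, n_iters=100, seed=0):
    """Minimal K-means: alternate assignment and centroid-update steps."""
    rng = np.random.default_rng(seed)
    # Initialize centroids by picking k distinct data points at random.
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iters):
        # Assignment step: label each point with its nearest centroid.
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Update step: move each centroid to the mean of its cluster
        # (keep the old centroid if a cluster ends up empty).
        new_centroids = np.array([
            X[labels == j].mean(axis=0) if np.any(labels == j) else centroids[j]
            for j in range(k)
        ])
        if np.allclose(new_centroids, centroids):
            break  # converged: assignments will no longer change
        centroids = new_centroids
    return labels, centroids

# Toy usage: two well-separated 2-D blobs.
X = np.vstack([np.random.randn(50, 2), np.random.randn(50, 2) + 5.0])
labels, centroids = kmeans(X, k=2)
```

Compare that to what it takes to write a working reverse-mode autodiff engine from scratch, and the gap between the two paradigms is obvious.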
Learning by reading information in a book or through direct didactic teaching would be supervised learning. Learning through a dialectical format would be reinforcement learning. Self-supervised learning would be equivalent to autodidactic learning and the creative act upon the world. (Maybe the distinction between self-supervised and reinforcement learning is arbitrary.)
The point is that we want to learn as much as we can given the information available to us. We should not rule out the role that the biological analogue to unsupervised learning plays in human development.
It is for all of these reasons that I become far more excited when a new clustering or dimensionality reduction algorithm comes out than I am when a new neural network architecture becomes state of the art.
Children do not learn completely "unsupervised" and receive frequent feedback from the agents around them. I would argue that a significant amount of childhood development (especially around "labels") is due to our hardwired attachment to human faces.
I have always felt that the significance of the "social software of human culture" in our general intelligence and learning capacity has been underestimated by the AGI community.
So personally, I see more potential in communities of learning agents than in any developments in the underpinnings.