Posts tagged with "intelligence"
- Potential and Actual Intelligence: How Human Thinking Differs from AI Thinking
1/19/2026
After listening to the EconTalk podcast episode Nature, Nurture, and Identical Twins (with David Bessis), I read the essay David Bessis wrote, Twins reared apart do not exist, which was the subject of the episode. Both were informative and helped me understand the conversation between hereditarians and blank‑slatists. But something else caught my attention in the essay.
Bessis begins with three illustrations of potential values for the heritability of IQ: 30%, 50%, and 80%. He maps genetic potential against actual IQ for each percentage. In this post I’m not addressing questions about IQ, its measurement, or its use. What sparked my interest is the distinction between potential and actual, and whether that distinction adds to the conversation about what differentiates human thinking from AI thinking.
It seems to me that genetics sets limits on the range of thinking capacity each human is able to achieve, but it does not fix a predetermined point within that range. Other factors have significant influence on where each person ends up. So there is some capacity an individual could reach, and there is also a measured value marking where they currently are.
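To make that mapping concrete, here's a minimal sketch in Python. This is my own toy model, not Bessis's actual figures: the `simulate_actual_iq` helper is hypothetical, and the assumption is simply that actual IQ mixes a genetic "potential" component with everything else, weighted by heritability.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_actual_iq(h2, n=10_000):
    """Toy model: actual IQ as a weighted mix of a genetic 'potential'
    component and everything else, where h2 is the share of variance
    attributed to genetics. Both components are standard normal, and
    the result is rescaled to the familiar IQ scale (mean 100, sd 15)."""
    genetic = rng.standard_normal(n)       # genetic potential
    environment = rng.standard_normal(n)   # a lifetime of other influences
    standardized = np.sqrt(h2) * genetic + np.sqrt(1 - h2) * environment
    return genetic, 100 + 15 * standardized

for h2 in (0.3, 0.5, 0.8):
    genetic, actual = simulate_actual_iq(h2)
    r = np.corrcoef(genetic, actual)[0, 1]
    print(f"heritability {h2:.0%}: corr(potential, actual) ≈ {r:.2f}")
```

In this toy model the correlation between potential and actual works out to roughly √h2, so even at 80% heritability it is only about 0.89: knowing someone's potential still leaves real room for where they actually land.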
This leads me to believe that many factors influence how the brain thinks. It’s not about crunching data about a question and arriving at a result. Instead, a lot of seemingly unrelated data accumulated over a lifetime of experiences mingles in the brain, shaping the pathways that produce a thought.
How does AI compare when mapping genetic potential and actual IQ onto machines? Is it accurate to say that pre-training is their genetic potential and reinforcement learning is their actual intelligence? I don’t think it is. In humans, actual intelligence continues to be shaped by their interactions with the world; the experiences humans have influence us. For AI, once the model is released it is fixed. Additional information can be provided that influences its generated responses, but its intelligence is locked in; its genetic potential is still its actual intelligence.
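Here's a toy sketch of that "locked in" point, with a tiny `torch.nn.Linear` standing in for a released model (the names are mine, not any real LLM API): context can shift the output, but the weights never change.

```python
import torch

# Stand-in for any released LLM: weights are fixed at deployment.
model = torch.nn.Linear(4, 4)
for p in model.parameters():
    p.requires_grad_(False)   # the "genetic potential" is locked in
model.eval()

question = torch.randn(1, 4)  # the question alone
context = torch.randn(1, 4)   # extra in-context information

with torch.no_grad():
    answer_plain = model(question)
    answer_informed = model(question + context)  # context shifts the response...

# ...but nothing fed to the model at inference time updates its weights,
# so its "intelligence" is exactly what it shipped with.
assert not any(p.requires_grad for p in model.parameters())
```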
In the end an AI’s intelligence is hereditary rather than blank‑slate, and that leads to a very different form of thinking. AI thinking is not human thinking.
- Quoting Andrej Karpathy regarding LLM intelligence: Animals vs Ghosts
10/21/2025
In my mind, animals are not an example of this at all - they are prepackaged with a ton of intelligence by evolution and the learning they do is quite minimal overall (example: Zebra at birth). Putting our engineering hats on, we’re not going to redo evolution. But with LLMs we have stumbled by an alternative approach to “prepackage” a ton of intelligence in a neural network - not by evolution, but by predicting the next token over the internet. This approach leads to a different kind of entity in the intelligence space. Distinct from animals, more like ghosts or spirits. But we can (and should) make them more animal like over time and in some ways that’s what a lot of frontier work is about.
That’s beautifully said. It paints a vivid picture of how LLM intelligence differs from biological intelligence. By training on the collective content of the internet, these models become a form of us, our past selves, our ghosts. We recognize an intelligence in them, but it’s a mistake to equate it with human or animal intelligence.
Is the current intelligence enough for AGI? Will the next AI winter come from trying to make models more like animals? Is that even a wise path to pursue?
I don’t think today’s intelligence is sufficient for true AGI. As Karpathy pointed out, it’s a fundamentally different kind of intelligence. I don’t see how this architecture evolves into something truly general. It can get closer, sure, but there will always be holes that need plugging. That will bring on the next AI winter, until the next breakthrough is discovered and our capabilities reach the next level.
Still, I’m uneasy about that pursuit. There’s already so much potential in what we have now. Entire industries and creative fields haven’t even begun to fully explore it. And as a society, we’re not prepared for the intelligence we already face. However, it is in our nature to keep progressing. Perhaps by the time the next breakthrough occurs, society will have adjusted to the current level of intelligence, better preparing us for the next one.