Posts tagged with "human"
- More empathy for that robot than for each other
9/15/2025
Quoting Jessica Kerr, Austin Parker, Ken Rimple, and Dr. Cat Hicks from AI/LLM in Software Teams: What’s Working and What’s Next
Empathy
People will do things for AI that they won’t do for each other. They’ll check the outcomes. They’ll make things explicit. They’ll document things. They’ll add tests.… And all of these things help people, but we weren’t willing to help the people. It’s almost like we have more empathy for that robot than for each other… we can imagine that the AI really doesn’t know this stuff and really needs this information. And at some level, we can’t actually imagine a human that doesn’t know what we do.
This comparison lands as a visceral blow because I feel it describes me. I consider myself a fairly empathic person, yet I’m slow to create this information for other humans while finding myself more willing to do so for AI.
Why do we behave this way? Here are some theories:
- Different expectation levels: the AI doesn’t have this background knowledge, but humans should, or at least can figure it out.
- Comparison and competition between ourselves and others.
- The impact is immediate when working with the AI, but unknown and in the future when helping humans.
- Providing this information to the AI is more self-serving, at least in the near term.
Even with these plausible explanations, I can’t quite get myself off the hook. This nagging self-awareness, however, doesn’t diminish my fear that my behavior will remain unchanged.
Participation
Another topic of this interview deserves mention:
…have a training data problems, right? And we can question what we use it for, but it’s very difficult to do that if you sit outside of it. If you set yourself apart, you have to participate.
I do think that is incumbent upon us to grapple with, you know, the reality we’re faced with… We have the universal function approximator finally and there’s no putting that toothpaste back in the tube, so we can figure out how to build empathetic systems of people and technology that are humanistic in nature, or we can let the people whose moral compass orients slightly towards their bank account make those decisions, and I know which side of it I’m on.
AI is here and it will change a lot of things. It’s understandable to be worried about the negative impact of AI, but letting that prevent you from engaging is a way of sitting on the sidelines. Instead, we have a duty to participate and shape its future.
- Quoting Andrej Karpathy regarding LLM intelligence: Animals vs Ghosts
10/21/2025
Quoting Andrej Karpathy regarding LLM intelligence: Animals vs Ghosts
In my mind, animals are not an example of this at all - they are prepackaged with a ton of intelligence by evolution and the learning they do is quite minimal overall (example: Zebra at birth). Putting our engineering hats on, we’re not going to redo evolution. But with LLMs we have stumbled by an alternative approach to “prepackage” a ton of intelligence in a neural network - not by evolution, but by predicting the next token over the internet. This approach leads to a different kind of entity in the intelligence space. Distinct from animals, more like ghosts or spirits. But we can (and should) make them more animal like over time and in some ways that’s what a lot of frontier work is about.
That’s beautifully said. It paints a vivid picture of how LLM intelligence differs from biological intelligence. By training on the collective content of the internet, these models become a form of us, our past selves, our ghosts. We recognize an intelligence in them, but it’s a mistake to equate it with human or animal intelligence.
Is the current intelligence enough for AGI? Will the next AI winter come from trying to make models more like animals? Is that even a wise path to pursue?
I don’t think today’s intelligence is sufficient for true AGI. As Karpathy pointed out, it’s a fundamentally different kind of intelligence. I don’t see how this architecture evolves into something truly general. It can get closer, sure, but there will always be holes needing to be plugged. This will bring forth the next AI winter, until the next breakthrough is discovered and our capabilities reach the next level.
Still, I’m uneasy about that pursuit. There’s already so much potential in what we have now. Entire industries and creative fields haven’t even begun to fully explore it. And as a society, we’re not prepared for the intelligence we already face. However, it is in our nature to keep progressing. Perhaps by the time the next breakthrough occurs, society will have adjusted to the current level of intelligence, better preparing us for the next level.
- Responding to I Do Not Want to Be a Programmer Anymore
10/5/2025
Responding to I Do Not Want to Be a Programmer Anymore (After Losing an Argument to AI and My Wife)
The article begins with the author’s story of using AI to resolve a difference of opinion with his wife, an exercise that convinced him he was wrong. His wife’s reaction:
It wasn’t the victory that stuck with her. It was how easily I surrendered my judgment to a machine.
He gives another example from work, about which he writes:
That’s the unsettling part. We don’t just listen to the machine; we believe it. We defer to it. And sometimes, we even prefer its certainty over the reasoning of the actual humans in front of us.
His concerning conclusion:
Wisdom has always come as the byproduct of experience. But if experience itself is outsourced to machines, where will the young earn theirs?
I too have experienced being resistant to another person’s arguments, only to be won over by consulting an LLM and reasoning through them. In part this seems reasonable: ideas contrary to our own are costly when they come from others, while ideas we arrive at on our own, or think we arrive at on our own, feel as though we have already done the work to vet them.
Therefore, the question is whether we accept the AI’s answer on the first take, or go back and forth with it, examining the rationale. The former is concerning: blindly accepting the response without any further examination. But I suspect that is not what occurs in most cases. Instead, we become convinced because the AI offers a nonthreatening way to explore the topic. I wonder if there are intimations of that when he says:
Clients, colleagues, even strangers are emboldened not because the machine gives them ideas, but because it gives them confidence.
In his example from work, the person sent him a “detailed breakdown” of how to improve the system. It sounds to me like that person invested real effort and thought, rather than quickly typing a question and forwarding the AI’s response.
Circling back to his concern about wisdom, or the lack of it, I believe this highlights the need for relationship. If relationships continue to erode, mentorship grows scarce, and trust in AI keeps rising, is wisdom lost?
It feels like this may be the case. But humans still accumulate experiences, from both our failures and our triumphs, and from those experiences wisdom will still either be derived or ignored. It’s hard to imagine a complete loss of wisdom. Even the author gained wisdom from the experience of bringing AI into the conversation with his wife. There is precious wisdom humankind has accumulated across our existence, and it would be a tragedy to lose it. But I have hope in humanity: we will continue to push forward and adapt, accumulating wisdom. It is in our nature; I don’t think we can do otherwise.
- Responding to The real deadline isn't when AI outsmarts us — it’s when we stop using our own minds
10/5/2025
Replying to “You have 18 months” The real deadline isn’t when AI outsmarts us — it’s when we stop using our own minds.
And I am much more concerned about the decline of thinking people than I am about the rise of thinking machines.
I’m not exactly concerned about this. I don’t believe my thinking has declined since using these tools. Maybe it has in some trivial ways. But I believe my thinking has become more active as a result of them, because I can explore, ask questions, and investigate in ways I either couldn’t before, or at least not as easily.
My concern is that a division will form between people based on how they use these tools. One side’s thinking will decline while the other side’s is enhanced, leading to a further imbalance in society. The statistics he references appear to support this: the declines he reports are not coming from those who already scored high.
The author later answers the question of what kids should study:
While I don’t know what field any particular student should major in, I do feel strongly about what skill they should value: It’s the very same skill that I see in decline. It’s the patience to read long and complex texts; to hold conflicting ideas in our heads and enjoy their dissonance
While I do not entirely agree with his phrasing, or at least am uncertain about it, I do believe being able to work with conflicting ideas is an important skill. Perhaps someone who “enjoys” the dissonance becomes energized and thrives in these situations, so maybe the language is not too strong. At minimum, I have found wrestling with conflicting ideas to be an essential life skill.