Posts tagged with "humanity"
- From magic to understanding to magic again
9/12/2025
In On Working with Wizards, Ethan Mollick writes that our relationship with AI is shifting: from the AI being a partner we create with to the AI performing the work itself. This aligns with the pursuit of AI agents, with 2025 being dubbed the year of the agent. While Mollick doesn’t explicitly discuss agents, he uses the term “wizards” to describe a similar concept. He calls them wizards because:
Magic gets done, but we don’t always know what to do with the results
…wizards don’t want my help and work in secretive ways that even they can’t explain.
This presents two important challenges. First, we lose, or never develop, the skill to evaluate what was produced. Second, having lost that ground, we are forced to trust more. As Mollick states:
every time we hand work to a wizard, we lose a chance to develop our own expertise, to build the very judgment we need to evaluate the wizard’s work.
But what I found especially striking is that throughout history, when we did not understand something, it was considered magic. Science brought a method for understanding how things work; technology replaced magic. Is this direction now reversing? Are we losing our understanding? Is magic returning? Mollick writes:
The paradox of working with AI wizards is that competence and opacity rise together. We need these tools most for the tasks where we’re least able to verify them. It’s the old lesson from fairy tales: the better the magic, the deeper the mystery.
This is a shocking realization. While many would push back against the analogy, it seems at least partially accurate. It raises a pressing question: why are we so willing to trade away our understanding and accept the magic in this context?
- Quoting Richard Matthew Stallman in Reasons not to use ChatGPT
10/3/2025
Quoting Richard Matthew Stallman in Reasons not to use ChatGPT:
It does not know what its output means. It has no idea that words can mean anything.… people should not trust systems that mindlessly play with words to be correct in what those words mean.
My initial reaction is agreement. But then what are the implications? Does it follow that machines will never have intelligence? What makes our intelligence different? A calculator doesn’t know what its output means either; should we trust it? Is the way a calculator works with numbers significantly different from the way an LLM works with words? The mechanisms are empirically different, but considered within the domain each operates in, is the difference significant?
Did he think about all of this and then come to his conclusion?
These are all interesting questions and conversations we should be having, but I wonder if his real complaint surfaces in his trailing thoughts:
Another reason to reject ChatGPT in particular is that users cannot get a copy of it.