Quoting Richard Matthew Stallman in Reasons not to use ChatGPT:
It does not know what its output means. It has no idea that words can mean anything. … people should not trust systems that mindlessly play with words to be correct in what those words mean.
My initial reaction is agreement. But then what are the implications? Does it follow that machines will never have intelligence? What makes our intelligence different? A calculator doesn't know what its output means. Should we trust it? Is the way a calculator works with numbers significantly different from the way an LLM works with words? The mechanisms are empirically different, but considered within the domain each operates in, is the difference a significant one?
Did he think about all of this and then come to his conclusion?
These are all interesting questions and conversations we should be having, but I wonder if his real complaint appears in his trailing thoughts:
Another reason to reject ChatGPT in particular is that users cannot get a copy of it.
Written 10/3/2025