Yes! This is a brilliant explanation of why language use is not the same as intelligence, and why LLMs like ChatGPT are not intelligent. At all.
At this point in the conversation, I was not asking for more statements about AIs. Instead, I was interested in statements about human usage of language, or a comparison between the two.
ChatGPT did understand my question as intended, using the context provided.
See, I don’t argue that LLMs are superintelligent, deeply understand the meaning of words, or can use language like a master poet. Instead, I’m questioning whether our own human ability to do so is really as superior as we might like to believe.
I don’t even mean that we often err, which we obviously do, myself included. The question is: is our understanding and usage of language anything other than lots and lots of algorithms stacked on top of each other? Is there a fundamental, qualitative difference between us and LLMs? And if there is, how can we tell?