ChatGPT only strings words together in a way that is statistically likely to resemble a coherent sentence, based on the bits and pieces of text it's been fed, without checking whether they're factual. asking it anything doesn't prove or disprove anything. it says it's got an account because someone on the internet once mentioned that they'd made an account to check it out and that they're excited to see where it's going.
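a toy sketch of what "statistically likely next word" means, assuming a trivially small made-up corpus and a bigram model rather than anything resembling ChatGPT's actual architecture: the model picks each word purely from frequency, so it can emit "i have an account" without any notion of whether that's true.

```python
import random
from collections import defaultdict

# Toy bigram model: each next word is chosen based only on how often
# it followed the previous word in the training text. There is no
# fact-checking step -- just frequency. (Illustrative corpus, not real data.)
corpus = ("i have an account i made an account to check it out "
          "i am excited to see where it is going").split()

follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(start, length=8, seed=0):
    random.seed(seed)
    out = [start]
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:  # dead end: word never appeared mid-corpus
            break
        out.append(random.choice(options))
    return " ".join(out)

print(generate("i"))
```

the output reads like a plausible sentence fragment only because the training text did; the model "claims" things its sources said, which is the point being made above.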
exactly. However, people tend to believe that the AI slipped up and admitted by mistake that it does have an account, and then tried to hide it again.
It’s interesting how this exchange reveals flaws both in how large language models generate text and in how humans reason about them.