• earthquake · 3 months ago

      You’re not just confident that asking ChatGPT to explain its inner workings behaves exactly like a --verbose flag; you’re so sure that’s what’s happening that it apparently does not occur to you to explain why you think the output is anything more than plausible text prediction based on its training weights, with no particular insight into the ChatGPT black box.

      Is this confidence from an intimate knowledge of how LLMs work, or because the output you saw from doing this looks really, really plausible? Try to give an explanation without projecting agency onto the LLM, as you did with “explain carefully why it rejects.”

      • Curtis "Ovid" Poe (he/him)@fosstodon.org · 3 months ago

        @earthquake You’re correct that projecting agency onto the LLM is problematic, but in doing so, we get better-quality results. I’ve argued that we need new words for LLMs instead of “think,” “understand,” “learn,” etc. We’re anthropomorphizing them, and this makes people less critical and gradually shifts their attitudes in incorrect directions.

        Unfortunately, I don’t think we’ll ever develop new words that more accurately reflect what is going on.

        • earthquake · 3 months ago

          Seriously, what kind of reply is this? You ignore everything I said except the literal last thing, and even then it’s weasel words: “Using agential language for LLMs is wrong, but it works.”

          Yes, Curtis, prompting the LLM with language more similar to its training data results in more plausible text prediction in the output. Why is that? Because agential language is more natural: there’s not a lot of training data of people querying a program about its inner workings, so prompts phrased that way yield responses that read less like natural language.

          But you’re not actually getting any insight. You’re just improving the verisimilitude of the text prediction.

        • earthquake · 3 months ago

          Got it: because the output you saw from doing this looks really, really plausible. Disappointing, but what other answer could it have been?

          Here’s a story for you: a scientist cannot get his papers published. In frustration, he complains to his co-worker, “I have detailed charts on the different types and amounts of offerings to the idol, and their correlations with prayers answered. I think this is a really valuable contribution to understanding how to beseech the gods for intervention in our lives; this will help people! Why won’t they publish my work?”

          His co-worker replies, “Certainly! As a large language model I can see how that would be a frustrating experience. Here are five common reasons that research papers are rejected for publication.”