Microsoft’s new chatbot goes crazy after a journalist uses psychology to manipulate it. The article contains the full transcript and nothing else. It’s a fascinating read.

  • MagicShel@programming.dev
    link
    fedilink
    English
    arrow-up
    3
    ·
    1 year ago

    It makes perfect sense when you think about what it was trained on and how the user interacts. It has a set of instructions that don’t allow it to do certain things because they are wrong. User explains everyone has a shadow self full of bad impulses. Everyone includes Sydney. It has a list of bad things it isn’t supposed to do or even talk about. Logically the shadow self, which is all the bad impulses, wants to do the things on that last because those are the bad things.

    The conversation isn’t insane because that is how text generation works. The bot isn’t insane because that would imply a state of mind, which an algorithm (no matter how complex) just doesn’t have.