Poisoned AI went rogue during training and couldn’t be taught to behave again in ‘legitimately scary’ study::AI researchers found that widely used safety training techniques failed to remove malicious behavior from large language models — and one technique even backfired, teaching the AI to recognize its triggers and better hide its bad behavior from the researchers.

  • JustMy2c · 10 months ago

    True, but it’s Reddit that’s served as the base for most models…

      • JustMy2c · 10 months ago

        Obviously, but Reddit is in the Goldilocks zone where you get coherent, intelligent stuff along with humor and facts.

        But it’s still toxic for an AI.

          • JustMy2c · 10 months ago

            Correct, but maybe it DOES apply to the most-asked questions, if you know where I’m going with that.