• gapbetweenus@feddit.de
    8 months ago

    Wasn’t there a paper not long ago showing that it’s possible to generate data with AI as a training set for AI? I was surprised (and the math is too much for me to check out myself), but that seems to solve the problem.

      • gapbetweenus@feddit.de
        8 months ago

        I’m too lazy to search for the paper (not sure it was Microsoft), but with my rather basic knowledge of modeling (I studied systems biology) it seemed rather crazy and impossible, so I remembered it.

    • realharo
      8 months ago

      As far as I know, that is mainly used where a bigger, better model generates training data for a more efficient smaller model, to bring it a bit closer to the bigger model’s level.

      Were there any cases of an already state-of-the-art model using this method to improve itself?

    • General_Effort@lemmy.world
      8 months ago

      Sorta. This “model collapse” thing is basically an urban legend at this point.

      The kernel of truth is this: a model learns stuff. When you use that model to generate training data, it will not output everything it has learned, so the second-generation model will know less than the first. Repeat this process a few times and you are left with nothing. It’s hard to see how this could become a problem in the real world, though.
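      A toy simulation makes the diversity-loss point concrete. This is my own sketch, not from any particular paper: the “model” here is just the empirical distribution over tokens it saw, and each generation trains only on samples drawn from the previous generation. Rare tokens get missed in a finite sample, so the vocabulary can only shrink, never grow back.

```python
import random
from collections import Counter

random.seed(42)

def train(corpus):
    """'Train' by memorizing empirical token frequencies."""
    counts = Counter(corpus)
    tokens = list(counts)
    weights = [counts[t] for t in tokens]
    return tokens, weights

def generate(model, n):
    """Sample n tokens from the trained distribution."""
    tokens, weights = model
    return random.choices(tokens, weights=weights, k=n)

# Original data: a long-tailed vocabulary of 200 token types,
# where tok0 is common and tok199 is rare.
corpus = [f"tok{i}" for i in range(200) for _ in range(201 - i)]

diversity = [len(set(corpus))]
for _ in range(10):
    corpus = generate(train(corpus), n=2000)
    diversity.append(len(set(corpus)))

# Distinct token types per generation: non-increasing, because each
# generation's support is a subset of the previous one's.
print(diversity)
```

      The counts only ever go down, since a token that fails to appear in one generation's sample is gone for good, which is the “losing diversity” effect in miniature.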

      Incest is a good analogy, if you know what the problem with inbreeding is: You lose genetic diversity. Still, breeders use this to get to desired traits and so does nature (genetic bottleneck, founder effect).

      • gapbetweenus@feddit.de
        8 months ago

        Training data for models in general was a big problem when I studied systems biology. Interesting that we’re finding workarounds, since it sounded rather fundamental to me. I found your metaphor rather helpful, thanks.

        • jacksilver@lemmy.world
          8 months ago

          I wouldn’t say we’ve really found a workaround. AI companies hire lots of people to parse and clean data. That can work for things like pose estimation, which are largely a once and done thing. But for things that are constantly evolving, language/art/videos, it may not be a viable long term strategy.