WhatsApp’s AI shows gun-wielding children when prompted with ‘Palestine’::By contrast, prompts for ‘Israeli’ do not generate images of people wielding guns, even in response to a prompt for ‘Israel army’

  • Valmond@lemmy.mindoki.com · 11 months ago

    It’s not about “adding code” or any other bullshit.

    AI today is trained on datasets (that’s about it). The choice of datasets can be complicated, but that’s where you moderate and select. There is no “AI learns on its own” sci-fi dream going on.
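
    To make that concrete, here’s a toy sketch of what “moderate and select” means at the dataset level. Everything in it (the filenames, the term lists, the keep() helper) is invented for illustration; it’s not anyone’s real pipeline:

    ```python
    # Toy illustration of dataset curation before training.
    # All names and data here are invented for the example.

    raw_dataset = [
        {"image": "img_001.jpg", "caption": "a soldier holding a rifle"},
        {"image": "img_002.jpg", "caption": "children playing football"},
        {"image": "img_003.jpg", "caption": "a child holding a rifle"},
    ]

    # The moderation happens here, before any training runs: caption/image
    # pairs matching a blocklist are simply never shown to the model.
    MINOR_TERMS = {"child", "children", "kid"}
    WEAPON_TERMS = {"rifle", "gun"}

    def keep(example):
        words = set(example["caption"].split())
        # Drop examples that pair minors with weapons.
        return not (words & MINOR_TERMS and words & WEAPON_TERMS)

    training_set = [ex for ex in raw_dataset if keep(ex)]
    # Only img_001 and img_002 survive; the model never sees img_003.
    ```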

    Sigh.

    • Serdan · 11 months ago

      It’s reasonable to refer to unsupervised learning as “learning on its own”.

      • gayhitler420 · 11 months ago

        I don’t think a computer being given the command to crunch through a bunch of matrices is in any way analogous to what is meant by the phrase “learning on one’s own”.

          • gayhitler420 · 11 months ago

            I’m not sure what you’re trying to say here. Making a machine do math is pretty significantly different from self-directed pedagogy in every way I can think to compare them.

            • Serdan · 11 months ago

              You do know that the entire field of study is called “machine learning”, right?

              • gayhitler420 · 11 months ago

                Doesn’t that give you pause?

                To keep us on topic, the thing I hope you’re referring to as unsupervised learning is when someone creates a model by feeding a big dataset into an algorithm. Within the field it’s referred to as training, not learning. Outside the field, training and learning are used to describe different processes.

                Without getting into the argument about Is Our Computers Learning, you can’t possibly call that self-directed. It’s just not.
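
                To be concrete about what “unsupervised learning” actually is under the hood, here’s a bare-bones k-means sketch (my own toy example, not any production system). A person kicks it off and picks every knob; the machine just loops over arithmetic:

                ```python
                import numpy as np

                # Bare-bones k-means, the textbook "unsupervised learning" algorithm.
                # A person picks the data, k, and the iteration count; the machine
                # just repeats the same arithmetic until told to stop.
                rng = np.random.default_rng(0)
                data = rng.normal(size=(200, 2))       # dataset chosen by a person
                k = 3                                  # cluster count chosen by a person
                centroids = data[rng.choice(len(data), size=k, replace=False)]

                for _ in range(10):                    # loop count chosen by a person
                    # Assign each point to its nearest centroid (matrix arithmetic).
                    dists = np.linalg.norm(data[:, None] - centroids[None, :], axis=2)
                    labels = dists.argmin(axis=1)
                    # Move each centroid to the mean of its assigned points.
                    centroids = np.array([data[labels == i].mean(axis=0) for i in range(k)])

                # "Training" is done. Nothing here decided to learn anything.
                ```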

                The closest analogy would be externally directed learning, where a teacher trying to teach you a particular style or format assigns an essay to be written using only sources from a curated list (like when we had to write about Vietnam in high school US history!).

                Of course, that’s not what’s happening when someone asks Bing to make a picture of the Mucinex booger man with big tits or to give proof that the Holodomor happened (two extremely funny examples I saw in the wild last week). When the average person uses generative algorithms to get some output, the computer is just regurgitating something that fits the dataset the model was trained on.

                There’s a paper floating around somewhere about how the field of machine learning needs to tighten up how it uses language. IIRC it was written before even Deep Dream and basically said “these jerks can’t even use words to accurately describe what they’re doing, and we expect them to make a thinking machine?” Lo and behold, the “artificial intelligence” we have hallucinates details and makes up claims and sources.

                • Serdan · 11 months ago (edited)

                  Doesn’t what give me pause?

                  “Learning” is absolutely used within the field. It’s preposterous to claim otherwise.

                  Machine learning
                  Deep learning
                  Supervised learning
                  Unsupervised learning
                  Reinforcement learning
                  Transfer learning
                  Active learning
                  Meta-learning
                  Etc etc

                  Do you think journalists came up with all these terms? “Learning” is the go-to word for training methodologies. Yeah, it’s training, but what do we say a subject that is being trained is doing? Learning, one would hope.

                  In fact, I’d argue that if “learning” is inappropriate, then so is “training”.

                  I just learned that there is no undo button on Gboard and I cannot be bothered to type all that again. Just imagine I said something clever.

                  • gayhitler420 · 11 months ago

                    I’m not sure of the point you’re trying to make in this post.

                    I’m well aware of how loosely language is used in the field of machine learning.

                    In an attempt to stay on topic: training a model on a dataset cannot and should not be described as “learning on its own”. No computer “decided” to do the “learning”. What happened is that a person invoked commands to make the computer do a bunch of matrix arithmetic on a big dataset, with a model as the output. Later on, some other people queried the model and observed the output. That’s not learning on its own.
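
                    In code terms, a “training run” is literally something like this toy gradient-descent loop (numbers and setup invented for illustration):

                    ```python
                    import numpy as np

                    # Toy "training run": fit a line to four data points with gradient
                    # descent. Every step is plain arithmetic a person set in motion.
                    X = np.array([1.0, 2.0, 3.0, 4.0])
                    y = np.array([2.1, 4.2, 5.9, 8.1])
                    w, b = 0.0, 0.0

                    for _ in range(1000):              # a person chose 1000 steps
                        pred = w * X + b
                        grad_w = 2 * ((pred - y) * X).mean()
                        grad_b = 2 * (pred - y).mean()
                        w -= 0.01 * grad_w             # a person chose the learning rate
                        b -= 0.01 * grad_b

                    # The "model" is just the two numbers the loop spat out.
                    print(w, b)                        # roughly w ≈ 2, b ≈ 0
                    ```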

                    Let me give you an analogy: if I give ffmpeg a bunch of stuff to encode overnight, would you say my computer is making films on its own? Of course you wouldn’t; that’s absurd.

    • Torvum@lemmy.world · 11 months ago

      Really wish the term “virtual intelligence” were used (it’s literally what this is).

      • GiveMemes@jlai.lu · 11 months ago

        We should honestly just take the word “intelligence” out of the mix for now, because these machines aren’t “intelligent”. They can’t critically think, form their own opinions, etc. At the end of the day they’re just super-efficient data aggregators, whether or not they’re modeled on the human brain.

        We’re so far off from “intelligent” machine learning that calling it intelligence of any sort really throws off how people think about it.

        • Torvum@lemmy.world · 11 months ago

          Techbros just needed a buzzword for search engine optimization tbh.

        • Serdan · 11 months ago

          LLMs can reason about information. It’s fine to call them intelligent systems.

      • ichbinjasokreativ@lemmy.world · 11 months ago

        One of the many great things about the Mass Effect franchise is its separation of AI and VI, the latter being non-conscious and simple, the former actually “awake”.

    • theyoyomaster@lemmy.world · 11 months ago

      It is about adding code. No dataset will be 100% free of undesirable results, and no matter what marketing departments wish, AI isn’t anything close to human “intelligence”; it is just a function of learned correlations. When it comes to complex and sensitive topics, the difference between correlation and causation is huge, and AI doesn’t distinguish between them. As a result, companies absolutely hard-code AI models to avoid certain correlations. Look at the “[character] doing 9/11” meme trend.

      At a fundamental level it is impossible to prevent undesirable outcomes purely by keeping them out of the training data, because there are infinite combinations of innocent things that become sensitive when linked in nuanced ways. The only way to combat this is to manually delink certain concepts; they merely failed to predict this specific instance.
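
      To illustrate, manually delinking concepts tends to look something like this hypothetical pre-generation filter. Every name and term list in it (DELINKED_CONCEPTS, violates_policy, the stand-in run_model) is invented for the example; it is not WhatsApp’s actual code:

      ```python
      # Hypothetical pre-generation guardrail, invented for illustration.
      # This is not WhatsApp's (or anyone's) actual moderation code.

      # Concept pairs someone decided to manually "delink".
      DELINKED_CONCEPTS = [
          ({"child", "children", "kid"}, {"gun", "rifle", "weapon"}),
      ]

      def violates_policy(prompt: str) -> bool:
          words = set(prompt.lower().split())
          return any(words & a and words & b for a, b in DELINKED_CONCEPTS)

      def run_model(prompt: str) -> str:
          # Stand-in for the actual image generator.
          return f"<image for: {prompt}>"

      def generate_image(prompt: str) -> str:
          # The hard-coded check runs before the model is ever invoked.
          if violates_policy(prompt):
              return "refused"
          return run_model(prompt)

      print(generate_image("children holding a gun"))  # refused
      print(generate_image("soldiers on patrol"))      # <image for: soldiers on patrol>
      # The catch: no such list can enumerate every sensitive combination
      # in advance, which is exactly the failure mode described above.
      ```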