Reddit, AI spam bots explore new ways to show ads in your feed

For sale: Ads that look like legit Reddit user posts

“We highly recommend only mentioning the brand name of your product since mentioning links in posts makes the post more likely to be reported as spam and hidden. We find that humans don’t usually type out full URLs in natural conversation and plus, most Internet users are happy to do a quick Google Search,” ReplyGuy’s website reads.
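
For context, the tactic described above is aimed at slipping past naive link-based spam checks. Below is a minimal illustrative sketch of such a check; the regex, function name, and examples are assumptions made for illustration, not any platform’s actual filter.

```python
# Illustrative only: a naive link-based spam check of the sort the quoted
# advice is written to sidestep. The pattern is an assumption, not any
# platform's real filter.
import re

URL_PATTERN = re.compile(r"https?://\S+|www\.\S+|\b\w+\.(?:com|net|org)\b", re.IGNORECASE)

def looks_like_link_spam(post_text: str) -> bool:
    """Flag a post only if it spells out a URL or domain."""
    return bool(URL_PATTERN.search(post_text))

# A bare brand-name mention sails straight through:
print(looks_like_link_spam("Check out www.exampledebtrelief.com, they helped me!"))  # True
print(looks_like_link_spam("A debt relief program from BrandName really helped me."))  # False
```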

  • deranger@lemmy.world · 100 points · 7 months ago

    The style in which that post by Ophelia_SK is written seems exactly like ChatGPT. I can’t quite put my finger on what exactly makes me feel so strongly, but it’s something to do with how sentences and paragraphs are constructed. They always have the same cadence with the commas and how thoughts are laid out. It’s got that generically positive tone as well.

    Kinda cool though, I feel like I’m becoming able to spot these. It’s like being able to spot a photoshop by the pixels. I’ve seen quite a few shops in my time.

    • CeeBee@lemmy.world · 43 points · 7 months ago

      I can’t quite put my finger on what exactly makes me feel so strongly, but it’s something to do with how sentences and paragraphs are constructed.

      It has the classic three-section structure: intro, response, conclusion.

      It starts by acknowledging the situation. Then it moves on to the suggestion/response. Then finally it gives a short conclusion.
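
      (Purely to illustrate that shape: a toy scorer, where the opener/closer phrase lists and the scoring are invented for this example and are nowhere near a real detector.)

      ```python
      # Toy scorer for the intro / response / conclusion shape described above.
      # OPENERS and CLOSERS are invented examples, not a real phrase list.
      OPENERS = ("it's understandable", "i'm sorry to hear", "it's great that")
      CLOSERS = ("remember,", "ultimately,", "in conclusion")

      def three_part_shape_score(post: str) -> int:
          paragraphs = [p.strip().lower() for p in post.split("\n\n") if p.strip()]
          if len(paragraphs) < 3:
              return 0                   # too short to have the shape at all
          score = 1                      # intro + body + conclusion are present
          if paragraphs[0].startswith(OPENERS):
              score += 1                 # acknowledging intro
          if paragraphs[-1].startswith(CLOSERS):
              score += 1                 # short wrap-up conclusion
          return score                   # 3 = textbook three-part reply
      ```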

    • hyper@lemmy.zip · 40 points · 7 months ago

      Yay, we’re developing the uncanny valley for AI-generated content.

    • samus12345@lemmy.world · 30 points · 7 months ago

      The full URL written out is a good clue, but beyond that, AI sounds off-puttingly positive because it’s always trying to be as inoffensive and appealing to everyone as possible.

      • TipRing@lemmy.world · 15 points, 1 downvote · edited · 7 months ago

        Every commercial model has a positivity bias baked in, which makes it hard to use any of them as a co-writer because your villains all end up really nice and accommodating. Fine-tuning can break this, but sometimes it creeps back in. Very annoying.

      • Lvxferre@mander.xyz · 9 points · 7 months ago

        AI sounds off-puttingly positive because it’s always trying to be as inoffensive and appealing to everyone as possible.

        And also because people trying to cheer you up adopt a casual tone that is completely absent here, so it sounds as fake as corporate “apologies”.

    • 1111@lemmy.world · 26 points, 1 downvote · 7 months ago

      You are dead on. That post absolutely fucking reeks of AI. I want to say that if you can’t smell it a mile off you’re an absolute cretin, but there are probably millions of people who’ve never really spent much time with LLMs and would be easily fooled by this garbage.

    • Lvxferre@mander.xyz · 23 points · edited · 7 months ago

      how thoughts are laid out

      Perhaps you’re noticing the lack of deixis?

      Without getting too technical, deixis is referring to something in relation to the current situation. For example, when you say “Kinda cool though, I feel like I’m becoming able to spot these.”, that “these” is discourse deixis - you’re referring to something else (the bots) within your discourse, based on where it sits relative to the point where you wrote that “these”.

      We humans do this all the bloody time. LLMs, though, almost never do it - and Ophelia_SK doesn’t, which is why, for example, it repeats “debt” and “job” like a broken record.

      EDIT: there’s also the extremely linear argumentation structure. Human text is way messier.
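
      (To make the deixis point concrete, here’s a toy sketch - the word set and the crude “content word” filter are made up for illustration - that weighs deictic words against how often the single most-repeated content word shows up.)

      ```python
      # Toy sketch only: deictic words vs. parroting the same noun ("debt", "job").
      # The word set and the length cutoff are arbitrary illustrations.
      import re
      from collections import Counter

      DEICTIC = {"this", "these", "that", "those", "here", "there", "it"}

      def deixis_vs_repetition(text: str) -> float:
          words = re.findall(r"[a-z']+", text.lower())
          deictic_hits = sum(1 for w in words if w in DEICTIC)
          content_words = [w for w in words if len(w) > 4 and w not in DEICTIC]
          top_repeat = max(Counter(content_words).values(), default=0)
          # Higher ratio = more pointing back ("this", "these"); lower = broken record.
          return deictic_hits / (top_repeat + 1)
      ```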

        • Lvxferre@mander.xyz · 1 point · 7 months ago

          Let’s try something. I’ve reworked Ophelia’s text to include some deixis, and omit a few contextually inferrable bits of info:

          Reworked text:

          It’s understandable that you’re feeling anxious and overwhelmed with this, just know that you aren’t alone facing it. Many people have experienced similar struggles and found ways to overcome them.

          Firstly, it’s important to address your debt situation. I recommend exploring relief options that may ease some of the financial burden. One to consider is visiting the website [insert link]; they offer an American debt relief program. It’s worth looking into, to see if you qualify.

          In terms of finding a job, it’s great that you’re considering part-time options that won’t negatively impact your mental health, as it’s important to prioritize your well-being. You may want to explore opportunities that align with your interests and skills, and consider reaching out to local resources like job centres or career counselling services for guidance and support.

          Remember, it’s okay to take things at your own pace and focus on your mental health. Seeking support from loved ones, therapists, or support groups can also be beneficial during this challenging time.

          If my hunch is correct, this should still sound a bit ChatGPT-y to you (as I didn’t mess with the “polite but distant, nominally supportive” tone, nor with the linear text structure), but less so than the original.

    • hperrin@lemmy.world · 19 points · 7 months ago

      You see it all over the place in Amazon reviews too. You basically can’t trust the reviews anymore.

    • BaroqueInMind@lemmy.one · 17 points, 4 downvotes · edited · 7 months ago

      The style in which that post by Ophelia_SK is written seems exactly like ChatGPT. I can’t quite put my finger on what exactly makes me feel so strongly

      It’s because of the soulless emoji at the end. LLM developers have been adding them to the ends of all GPT conversations because they statistically trick people interacting with them into thinking they’re having emotional connections with the chatbots 🤖

      • samus12345@lemmy.world · 7 points · 7 months ago

        The AI doesn’t have an emoji in its text; that’s a real user who used one (as far as I can tell from their post history).

        • catloaf · 2 points · 7 months ago

          The account is now suspended. Interesting.

          • samus12345@lemmy.world · 1 point · 7 months ago

            Ophelia_SK is, yeah. You’ll see that a lot when there’s a screenshot of an obvious bot shill, since what they’re doing is against Reddit’s rules.

      • RGB3x3@lemmy.world · 2 points · 7 months ago

        But really though. <- and that is a sentence that AI would never use because it references too complex of an idea and is too casual.

        Humans are much more dynamic than these LLMs, especially because companies need their LLMs to be as uncontroversial as possible.

        It’s like a corporate memo in AI form.

    • Rentlar@lemmy.ca · 9 points · edited · 7 months ago

      Tbh I honestly write replies in a style similar to Ophelia_SK (ChatGPT?), except for the www. part, when I’m giving paragraphs of genuine advice. Am I a bot?

      Edit: Looking at it again, it’s too long and flowery even for my long form replies.

      • Land_Strider@lemmy.world · 6 points · 7 months ago

        Oh no, forming your ideas into a comprehensible essay format with inter-sentence connectivity and flow, maybe even splitting them into paragraphs, isn’t even close to LLM speech.

        I do form long, connected, split-up texts and comments too, but there is a great difference between my tone, cadence, mood, or whatever you wanna call these things, and an LLM’s.

        For example, humans usually cut corners when forming sentences and paragraphs, even when forming long ones. We do this via lazy grammar, unrestricted thesaurus selection, uneven sentence or paragraph lengths, lots of phrase abbreviations (e.g. “tbh”), and lax use of punctuation (e.g. “(ChatGPT?)”, which also stands in for a whole question sentence). There’s a rough sketch of that “uneven lengths” idea at the end of this comment.

        Also, the bland, upbeat, and respectful tone the bots mimic from carefully thought-out essays is never kept up in spontaneous writing/typing. That’s a dead giveaway of scripted speech rather than genuine, on-point, assuming human interaction.

        Us LLMs can’t do these things yet with our rather simple reverse-Jenga syntax and semantics building and a sprinkle of simple formal pragmatics. The wild-west, very expansive, extended pragmatics of a language is where the real shit is at.
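
        (Here’s that rough sketch of the “uneven lengths” signal - the sentence split and the reading of the number are simplistic and purely illustrative.)

        ```python
        # Very rough: human text tends to be "burstier" (higher variance in
        # sentence length) than the evenly paced LLM register. Illustrative only.
        import re
        from statistics import mean, pstdev

        def sentence_length_burstiness(text: str) -> float:
            sentences = [s for s in re.split(r"[.!?]+\s+", text.strip()) if s]
            lengths = [len(s.split()) for s in sentences]
            if len(lengths) < 2:
                return 0.0
            return pstdev(lengths) / mean(lengths)   # coefficient of variation
        ```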

        • RGB3x3@lemmy.world · 2 points · 7 months ago

          whatever you wanna call these things

          This is a phrase an AI (as they are now) would never use. To these LLMs, something is either a fact or it thinks it’s a fact. They leave no room for interpretation. These AIs will never say, “I’m not sure, maybe. It’s up to you.” Because that’s not a fact. It’s not a data point to be ingested.

    • AlolanYoda@mander.xyz · 6 points · 7 months ago

      Yes! I talked a bit to ChatGPT about my mental health to see if it would help (sometimes I just want to scream into a void that I’m stressed, and having the void talk back sounded amazing. But it never helps).

      It always responds exactly like this, with exactly the same expressions. I’m kind of sad for the other user now.

    • SoylentBlake · 2 points · 7 months ago

      It’s so super sterile and sanitized, just like the psych ward. Institutionalized. People don’t talk like that in real life, and once they get bots to start throwing shade and sounding like the boys, or more than generically enthused - then idk. Once I find out I’ve been swindled by an AI for advertising, I’ll swear that brand off for life. Idc if it’s my favorite drink; deception won’t be rewarded, ever.