I already have 2, so I’m getting nervous.

  • ExclamatoryProdundity@lemmy.world · 1 year ago

    I think if you want to remove the guardrails you gotta go local. Not that it’s as fast or as good, but it’s not creatively stymied. It’s not straightforward, though, and it’s constantly changing. I was following it for a while until it exploded like a fractal of possibilities. Honestly not sure where it’s at right now, but it’s better every time I take another look at it.

      • korewa@reddthat.com · 1 year ago

        LocalLLaMA is a small community here, but there’s a Reddit one too.

        LLaMA is Meta’s AI model whose weights were released openly; the community took it and keeps improving it.

        Look at oobabooga, which is a GUI for running the models.

        You need something with gaming-computer specs, or an Apple M1 chip.

        There are also services that can run it for you on the cloud but privacy isn’t as strong as a completely offline setup.

        I use it to continue scenarios that go off the rails, the ones ChatGPT doesn’t want to continue, to get some ideas. It’s not as good, but it has its own fun factor.
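
        If you’d rather drive it from code than from a GUI, here’s a minimal sketch using llama-cpp-python; the model filename is just a placeholder for whatever GGUF file you’ve actually downloaded:

        # Minimal local-inference sketch (pip install llama-cpp-python).
        # The model path is a placeholder -- point it at your own GGUF file.
        from llama_cpp import Llama

        llm = Llama(model_path="./models/llama-7b.Q4_K_M.gguf")

        out = llm(
            "Continue the scene: the detective opened the door and",
            max_tokens=200,   # cap the length of the completion
            temperature=0.8,  # higher = more creative, less predictable
        )
        print(out["choices"][0]["text"])

        Everything runs on your own machine, which is where the privacy advantage over the cloud services comes from.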

  • Oyster_Lust@lemmy.world · 1 year ago

    I’m curious what the warnings were for. I’ve never gotten a warning, and I didn’t know there was such a thing.

    • Cassidy@infosec.pub (OP) · 1 year ago

      I don’t remember what the first one was for (it may have been something to do with children, but I’m really not sure; this happened about two months ago), but the second one may have been because I implied the death of a person in a house fire. I think it’s a bit unfair, given that it was a fictional scenario being discussed.

      • APassenger@lemmy.world · 1 year ago

        How is the AI supposed to know if the person asking questions has good intentions?

        If it answers “hypothetically, how would I get away with killing my (fill in blank)?”, then it’s told you how to do it.

        Now every criminal can just add “hypothetically” to any criminal question.

    • Hello Hotel@lemmy.world · 1 year ago

      I got one for feeding the text it wrote back into itself. It took the initiative to turn Scout from TF2 into a demented killer.
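
      The loop is easy to reproduce with a local model if you want to see the same drift for yourself; a rough sketch, using the same hypothetical llama-cpp-python setup as above:

      # Rough sketch of feeding a model's output back in as its own prompt.
      # The model path is a placeholder -- point it at your own GGUF file.
      from llama_cpp import Llama

      llm = Llama(model_path="./models/llama-7b.Q4_K_M.gguf")

      text = "Scout sprinted across the yard."
      for _ in range(3):  # each pass re-feeds everything written so far
          text += llm(text, max_tokens=128)["choices"][0]["text"]
      print(text)

      With nothing anchoring it, each pass amplifies whatever tone the previous one drifted into.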

  • dejf@lemmy.world · 1 year ago

    From what I know, the warnings don’t trigger any automatic action from OpenAI, so there’s no set number that gets you banned. You’d probably have to do a lot of policy-breaking stuff before they disabled your account. I wouldn’t worry about it.

    • Cassidy@infosec.pub (OP) · 1 year ago

      I don’t know. I read the content policy, and it’s quite vague. Basically: trying to make CSAM, trying to circumvent OpenAI’s safety features, trying to pass off ChatGPT responses as valid financial, legal, or medical advice, things like that.

    • Cassidy@infosec.pub (OP) · 1 year ago

      ChatGPT: I don’t have access to real-time data or OpenAI’s specific enforcement policies. However, OpenAI takes content policy violations seriously and may take action based on the severity and frequency of violations. It’s best to adhere to their guidelines to avoid any potential consequences.