I have been running 1.4, 1.5, and 2 without issue, but every time I try to run SDXL 1.0 (via Invoke or Auto1111) it will not load the checkpoint.

I have the official Hugging Face versions of the checkpoint, refiner, LoRA offset, and VAE. They are all named to match what the UI expects, and they are all in the appropriate folders. When I pick the model to load, it tries for about 20 seconds, then dumps a very long error in the Python console and falls back to the last model I loaded. Oddly, it loads the refiner without issue.

Is this a case of my 8 GB of VRAM just not being enough? I have tried the no-half/full-precision arguments.
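For reference, launch flags for Auto1111 go on the COMMANDLINE_ARGS line of webui-user.bat. A sketch of a common low-VRAM setup for an 8 GB card (flag choice is a judgment call for your setup; note that --no-half actually increases VRAM use, while --medvram reduces it):

```bat
REM webui-user.bat -- example settings for an 8 GB card (illustrative, not definitive)
set COMMANDLINE_ARGS=--medvram --no-half-vae --xformers
```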

    • Thanks4NothingOP · 1 year ago

      I have mine set to git pull automatically on each launch. Can you confirm, since you have it working… what files do I actually need? Is it just the base, refiner, LoRA, and VAE?

      • RotaryKeyboard@lemmy.ninja · 1 year ago

        I’m not an expert, but what I read said that you use SDXL by first using txt2img to generate an image using the base checkpoint, and then you send that image to img2img and use exactly the same prompt there with the refiner checkpoint.

        That makes for a longer workflow than I’m used to, so sometimes I just use one or the other in txt2img and see what I get. Sometimes I forget to change the model when I switch between img2img and txt2img, too. I always seem to get results of similar quality when I use just one of the checkpoints.
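        That two-step flow can be sketched with the diffusers library. This is a rough outline rather than a tested recipe; the step count and refiner strength are illustrative guesses:

```python
def sdxl_base_then_refine(prompt: str, out_path: str = "out.png"):
    """txt2img with the SDXL base model, then img2img on the result with
    the refiner and the same prompt -- the workflow described above.
    Step counts and refiner strength are illustrative, not tuned values."""
    import torch
    from diffusers import (StableDiffusionXLPipeline,
                           StableDiffusionXLImg2ImgPipeline)

    # Stage 1: generate with the base checkpoint.
    base = StableDiffusionXLPipeline.from_pretrained(
        "stabilityai/stable-diffusion-xl-base-1.0",
        torch_dtype=torch.float16, variant="fp16")
    base.enable_model_cpu_offload()  # trades speed for VRAM on small cards

    image = base(prompt=prompt, num_inference_steps=30).images[0]

    # Stage 2: send that image through the refiner with the same prompt.
    refiner = StableDiffusionXLImg2ImgPipeline.from_pretrained(
        "stabilityai/stable-diffusion-xl-refiner-1.0",
        torch_dtype=torch.float16, variant="fp16")
    refiner.enable_model_cpu_offload()

    # Low strength: the refiner polishes the image rather than repainting it.
    refined = refiner(prompt=prompt, image=image, strength=0.3).images[0]
    refined.save(out_path)
    return refined
```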

        It should be interesting to see what people come up with training their own checkpoints off of SDXL, though.

        • Thanks4NothingOP · 1 year ago

          Good point. I watched a Nerdy Rodent video about installing it, and he showed that he used the sdxl_base_vae and sdxl_refiner_vae safetensors, and that is all he copied over. No other files. I went back to the repository, pulled those two files, and put them in my checkpoints folder. I reloaded my webui-user.bat file and got the new checkpoint to load. It took about a minute.

          I got one image to generate at 1024x1024, but it took about 3 minutes. It looked normal, but I can't help thinking it should be a bit faster than that. Then I noticed my whole machine tanked while running it: it bogged down all 32 GB of my RAM, while the GPU was barely doing anything. Maybe there is some kind of memory leak. I may have to check my GPU drivers to see if something is going on.

          Are those VAE safetensors the only files I need? The tutorial didn’t talk about the LoRA offset or the VAE files… so I didn’t add them this last time.

          • RotaryKeyboard@lemmy.ninja · 1 year ago

            Those safetensors files are all that I have ever used.

            For reference, I’m using a 2080 Ti. That’s got about 11 GB of VRAM, I think. I’m not having any freezes whatsoever. I’ve also tried it on my wife’s shiny new 4080. There's definitely a speed difference, but again, no freezes or instability. Generating the 1024x1024 images does take forever. I actually went back to 512x512 and stayed there. I can always upscale something that I like.

  • Stampela@startrek.website · 1 year ago

    3060 here; it might be the VRAM. SDXL eats a lot of it (and if you had, say, the VAE in the wrong spot, it would output very wrong images), so it might be that 8 GB isn't enough on its own, or that it isn't enough once you add the resolution of your screen plus whatever else you are running, like the browser.

    Or, OR: the checkpoint is corrupted. That has happened to me a couple of times in the past, and it produced exactly this behavior: the huge error followed by loading a different model.

  • chicken@lemmy.dbzer0.com · 1 year ago

    I’m not sure why, but I have 8 GB of VRAM and my experience has matched what others describe: SDXL will not run with Auto1111, but it works with ComfyUI. So I think this is not purely a VRAM issue.

    • Thanks4NothingOP · 1 year ago

      Yeah, it’s very odd. I tried ComfyUI, but the interface just doesn’t click with me.

      I keep waiting for InvokeAI to offer an auto-installer for that model, but they are still only offering SDXL 0.9, and I don’t have a token for that model.

  • Novman@feddit.it · 1 year ago

    Nvidia has problems with the newest drivers: Auto1111 gives out-of-memory errors, but ComfyUI works smoothly with your card.

    • Thanks4NothingOP · 1 year ago

      Thanks. Oddly enough, the most recent release of InvokeAI fixed the problem I was having. My 8 GB 3070 can run SDXL in about 30 seconds now. It seems to take a little while to clear everything in between generations, though. I want to move up to a 12/24 GB GPU, but am waiting and hoping for a price crash.
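      If VRAM stays pinned between generations, the usual manual nudge from a Python session looks something like this (a sketch using PyTorch; the UIs do roughly the same thing internally):

```python
import gc

def release_vram() -> None:
    """Drop dead Python references, then ask PyTorch to hand cached
    VRAM back to the driver. Safe to call even without a GPU."""
    gc.collect()  # free unreachable tensors first
    try:
        import torch  # only relevant when PyTorch is installed
        if torch.cuda.is_available():
            torch.cuda.empty_cache()   # release cached allocator blocks
            torch.cuda.ipc_collect()   # tidy up inter-process CUDA handles
    except ImportError:
        pass
```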