The disk space on the server I’m using for lemmyfly.org is getting full (it’s a 40GB local volume, and it’s at 80% now). I can get extra volumes at my hosting provider (Hetzner), but is it possible to have Lemmy look at a different volume for the volumes folder that contains pictrs, postgres and lemmy-ui?

Or do I need to move the complete main Lemmy folder to this other volume and run Lemmy from there, leaving my system volume unused for Lemmy?

  • Alfi@lemmy.alfi.casa
    1 year ago

    Pictrs supports object storage; you should look into that at your provider. It should be a lot cheaper than additional disk space.
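
    For example, if you run pict-rs via Docker, the object storage backend is set through its store settings. The variable names below are from memory and the endpoint/bucket/credentials are placeholders, so verify the exact names against the pict-rs README for the version you run:

    # pict-rs object storage (example values, double-check for your pict-rs version)
    PICTRS__STORE__TYPE=object_storage
    PICTRS__STORE__ENDPOINT=https://objectstorage.example.com
    PICTRS__STORE__BUCKET_NAME=lemmy-pictrs
    PICTRS__STORE__REGION=eu-central-1
    PICTRS__STORE__ACCESS_KEY=changeme
    PICTRS__STORE__SECRET_KEY=changeme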

  • gerbilOFdoom@lemmy.world
    1 year ago

    Server admin here: you can do this in a way that avoids Lemmy even knowing anything has changed. Read through all of this and do some googling first if you don’t know the specific commands to use!

    First, you need to set the remote volume to automatically mount correctly on system restarts. On Linux, this is done by adding an entry for it to /etc/fstab if one does not exist already. Once done, ‘mount -a’ will mount the volume.
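
    For example, a Hetzner volume entry in /etc/fstab typically looks something like this (the device ID and mount point are placeholders; use whatever your provider shows you):

    # /etc/fstab - mount the extra volume at boot
    /dev/disk/by-id/scsi-0HC_Volume_12345678 /mnt/lemmy-volume ext4 discard,nofail,defaults 0 0

    # then mount everything listed in fstab
    mount -a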

    Mount your remote volume to the filesystem and rsync the folder you want to migrate off-server over to it. Then take the Lemmy service offline and rsync again to catch any changes made since you started.
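
    Something along these lines, assuming a standard docker compose setup with the Lemmy folder at /srv/lemmy and the new volume mounted at /mnt/lemmy-volume (both paths are just examples):

    # first pass while lemmy is still up
    rsync -avP /srv/lemmy/volumes/ /mnt/lemmy-volume/volumes/

    # stop the stack, then a second pass to catch anything that changed
    cd /srv/lemmy && docker compose down
    rsync -avP --delete /srv/lemmy/volumes/ /mnt/lemmy-volume/volumes/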

    Now, you can move the old folder that lemmy has been using elsewhere - I recommend renaming it by appending “.old” or something.
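
    With the same example paths as above:

    mv /srv/lemmy/volumes /srv/lemmy/volumes.old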

    Next, you need to make a symbolic link. This link should live at the old folder’s original path and point to your remote volume. Once done, make sure everything is there and that file permissions match the ones in your .old folder. File permissions are important, and you may need to set them recursively if your Lemmy service runs as a different user than the one you made these changes with.
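
    Roughly like this, again with example paths (and the owner below is just a placeholder - compare against your .old folder before changing anything):

    # link the original path to the copy on the new volume
    ln -s /mnt/lemmy-volume/volumes /srv/lemmy/volumes

    # compare ownership between the copy and the .old folder
    ls -ln /srv/lemmy/volumes.old /mnt/lemmy-volume/volumes

    # fix recursively only if they differ (owner/group here are placeholders)
    chown -R 991:991 /mnt/lemmy-volume/volumes/pictrs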

    Finally, say a prayer to the machine spirit, waft the holy incense, perform the ritual whack with a wrench, and start the lemmy service. Make sure everything is running properly before you walk away!

    The only issue you’re likely to run into is that remote volumes are constrained by network bandwidth. This may slow down load speeds, so some kind of CDN caching solution is recommended.

    • majorswitcher@lemmyfly.orgOP
      1 year ago

      Thanks for your detailed how-to! I’ll try this; hopefully it holds for 2 more weeks, as I’m leaving for holidays now :)

  • majorswitcher@lemmyfly.orgOP
    1 year ago

    I also noticed the logs were growing and taking up quite a few GBs. I’ve used truncate to make the syslog smaller and journalctl’s vacuum options to keep the journal smaller.
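
    That boils down to something like this (the size limit is just whatever you prefer):

    # empty the syslog in place
    truncate -s 0 /var/log/syslog

    # cap the systemd journal
    journalctl --vacuum-size=200M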

  • poVoq@slrpnk.net
    1 year ago

    You can split the pict-rs volume and the postgres volume onto different disks.
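
    For example, using the mount-and-symlink approach from the comment above, you could point each data directory at its own disk (paths are placeholders):

    ln -s /mnt/disk1/pictrs /srv/lemmy/volumes/pictrs
    ln -s /mnt/disk2/postgres /srv/lemmy/volumes/postgres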

    But the Postgres volume will very likely shrink significantly with the next release, as a fix was added that removes a lot of unnecessary data.

    For pict-rs it is more difficult, but I recommend configuring a max upload size and automatic conversion to webp to limit further growth.
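
    With the usual docker setup that would be pict-rs media settings along these lines; the variable names are my best guess from memory and the values are only examples, so verify them against the pict-rs docs for your version:

    # limit upload size (in MB) and convert uploads to webp (example values)
    PICTRS__MEDIA__MAX_FILE_SIZE=10
    PICTRS__MEDIA__FORMAT=webp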

  • majorswitcher@lemmyfly.orgOP
    1 year ago

    pictrs isn’t even taking up that much space; postgres takes up most of it!

    # du -sch volumes/*
    8.0K	volumes/lemmy-ui
    3.4G	volumes/pictrs
    8.2G	volumes/postgres
    12G	total

    • majorswitcher@lemmyfly.orgOP
      1 year ago

      14 days later, my postgres directory was at 12GB. I was still on 18.1. I updated to 18.3 just now, and the postgres directory shrank to 2.6GB!!! Thank you, Lemmy devs and contributors!!

  • hitagi (ani.social)@ani.social
    1 year ago

    Don’t block volumes usually pool together as one? Or at least “scale up” to larger storage, if that makes more sense.