Hello All,

I currently have a home server running on a Raspberry Pi 4, with all my services running as Docker containers. Each container has its own directory containing its config and database files, which makes them easy to back up and export.

However, in the future I plan to migrate to a more powerful server, which will probably not have an ARM CPU, so I will also have to switch to the corresponding Docker images. Will the new x86 images work with my backed-up Docker config volumes?

  • Murky-Sector@alien.topB · 1 year ago

    These are containers, not VMs. Just rebuild them: rerun docker compose or docker run on the new machine.

    • cryptobots@alien.topB · 1 year ago

      Neither docker compose (unless you run docker compose build and have the source files) nor docker run will rebuild anything. OP has to check whether they are using multi-arch images and, if not, change them. As for the actual data in the containers, it varies from app to app - I believe ARM and x86 can have different byte order, so apps that don't store their data in a platform-agnostic format might be a problem.
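
      A quick way to check is `docker manifest inspect <image>`, which prints the image's manifest list as JSON. Here is a rough sketch of pulling the supported architectures out of such a document; the manifest below is an abbreviated, made-up sample in the shape that command returns, not real registry output:

```python
import json

# Abbreviated, hypothetical example of the JSON printed by
# `docker manifest inspect` for a multi-arch image (shape follows
# the Docker/OCI manifest-list format).
sample_manifest_list = json.dumps({
    "mediaType": "application/vnd.docker.distribution.manifest.list.v2+json",
    "manifests": [
        {"digest": "sha256:aaa...", "platform": {"architecture": "amd64", "os": "linux"}},
        {"digest": "sha256:bbb...", "platform": {"architecture": "arm64", "os": "linux"}},
        {"digest": "sha256:ccc...", "platform": {"architecture": "arm", "os": "linux", "variant": "v7"}},
    ],
})

def supported_architectures(manifest_json: str) -> set:
    """Return the set of CPU architectures a manifest list covers."""
    doc = json.loads(manifest_json)
    return {m["platform"]["architecture"] for m in doc.get("manifests", [])}

archs = supported_architectures(sample_manifest_list)
print(sorted(archs))  # → ['amd64', 'arm', 'arm64']
```

      If `amd64` shows up in the list, the same image tag will pull a native x86-64 build on the new machine; if only `arm`/`arm64` variants exist, a different image is needed.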

      • SnowyLocksmith@alien.topOPB · 1 year ago

        Interesting bit about the byte order. A question, though: I have a disk formatted with ext4, and the files on it are perfectly accessible on both an ARM device and an x86 device. So why wouldn't the same apply to Docker config data?

        • -myxal@alien.topB · 1 year ago

          A filesystem has a standardised on-disk format, as it’s used as a medium of exchange between different systems, which might use not just different CPU architectures, but different implementations of the filesystem.

          Whether a random software developer has put in that effort for their config/data format is anyone's guess. It will probably work in most cases where config files are just text anyway, but as soon as you venture into binary formats that aren't just standard compression (zip, etc.) you'd need to check whether what they're actually doing is CPU-arch agnostic.
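
          To make the byte-order point concrete, here is a small Python illustration (Python chosen purely for demonstration; note that Linux on both x86-64 and the Pi's ARM cores happens to run little-endian, so this shows the general failure mode rather than a guaranteed problem for this particular migration):

```python
import struct

value = 0x01020304

little = struct.pack("<I", value)  # explicit little-endian: identical bytes on every platform
big = struct.pack(">I", value)     # explicit big-endian: identical bytes on every platform
native = struct.pack("=I", value)  # native order: depends on the CPU doing the writing

print(little.hex())  # 04030201 everywhere
print(big.hex())     # 01020304 everywhere
# `native` matches one of the two depending on the host. A binary file
# written in native order only reads back correctly on a machine with
# the same byte order; a fixed-endianness format reads back anywhere:
assert struct.unpack("<I", little)[0] == value
```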

          • SnowyLocksmith@alien.topOPB · 1 year ago

            I have never really built an app, so I don't really know, but most of the Docker containers I have used are built on some kind of Linux base image. So then, since the config data is mounted as a volume, shouldn't its format be decided by the Linux image, i.e. be more or less standard? Mostly the developer builds an app in some language, which is CPU agnostic.

            • -myxal@alien.topB · 1 year ago

              So then, since the config data is mounted as a volume, shouldn't its format be decided by the Linux image, i.e. be more or less standard?

              The volume mechanism in docker is nothing more than a means of allowing a part of the container’s filesystem to be redirected to a directory on the host OS - not that dissimilar from networked file-sharing. It has no bearing on what’s in the saved files.

              The format of the config/data is determined by the app developer, who chooses how it is written from the app's memory to a file on disk. If they write their data through libraries, using formats designed for CPU portability (Unicode text, an sqlite DB, a zip archive, etc.), then the data will be usable by the same app running under a different CPU arch. But if they use non-portable formats, roll their own format, or just serialise objects straight from memory, those typically won't open/de-serialise correctly without extra effort on the developer's part.
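
              As a stdlib-only sketch of that distinction (hypothetical app data, nothing specific to OP's containers): JSON is defined as Unicode text and the SQLite file format is specified byte-for-byte, so both kinds of file move between architectures unchanged:

```python
import json
import os
import sqlite3
import tempfile

tmpdir = tempfile.mkdtemp()

# Portable: JSON is Unicode text, independent of the writing CPU.
config_path = os.path.join(tmpdir, "config.json")
with open(config_path, "w", encoding="utf-8") as f:
    json.dump({"port": 8080, "data_dir": "/data"}, f)

# Portable: the SQLite on-disk format is fully specified and
# documented as cross-platform.
db_path = os.path.join(tmpdir, "app.db")
con = sqlite3.connect(db_path)
con.execute("CREATE TABLE settings (key TEXT, value TEXT)")
con.execute("INSERT INTO settings VALUES ('theme', 'dark')")
con.commit()
con.close()

# Either file could now be copied to a machine with a different CPU
# architecture and read back by the same app unchanged:
with open(config_path, encoding="utf-8") as f:
    reloaded = json.load(f)

con = sqlite3.connect(db_path)
row = con.execute("SELECT value FROM settings WHERE key = 'theme'").fetchone()
con.close()
print(reloaded["port"], row[0])  # → 8080 dark
```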

              In practice, IMHO, it comes down to what kind of apps you're using. Most stuff developed in the last 10 years or so that isn't high-performance/custom code will default to CPU-arch-portable formats.

    • SnowyLocksmith@alien.topOPB · 1 year ago

      Yes, I know that. I am just curious whether the files containing the data from the previous images will work with the new images as well.

      • Ieris19@alien.topB · 1 year ago

        It should, in theory - only the binaries should be different, so all config/media should work.

      • Sir_Squish@alien.topB · 1 year ago

        You'll have to make sure the volumes are mapped to appropriate directories, such that if on your Pi setup it's /somedir/app1/blah, you change it to /newdir/app1/blah.

        It's even easier if you re-create the same directory hierarchy on the new server, so you can just rsync your folders over. I found it easiest to re-create the containers first, with all my volume mounts as bind type (i.e. using an existing folder) rather than letting Docker create the volume wherever it wants.
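
        For instance, a bind-mounted service in a compose file (service name, image, and container path hypothetical; host path reusing the example above) pins the data to a host directory you choose, so copying that directory to the same path on the new server is most of the migration:

```yaml
services:
  app1:
    image: example/app1:latest
    volumes:
      # Bind mount: host path on the left, container path on the right.
      # Recreate the same host path on the new server and copy data with:
      #   rsync -a /somedir/app1/ newserver:/somedir/app1/
      - /somedir/app1/blah:/config
```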

        I did exactly this kind of migration not that long ago, and for the same reason, and from the same source/destination platforms.

  • doeknius_gloek@feddit.de · 1 year ago

    The CPU architecture does not affect the files of your applications. Just copy the content of the volumes to your new system.

  • colin_colout@alien.topB · 1 year ago

    Hi, I just went through this, with a slight difference: I'm running k3s and use NFS on my NAS for configs and persistent data.

    It was super smooth. Almost every container I'm running has an identical x86 build.

    I spun up a new k3s cluster on the x86 servers and redeployed the images. Going from ARM to x86 wasn't a problem, except for one mariadb image that only had an ARM build.

    I didn't build any of my own images, but if that's what you're doing, a rebuild should work in most cases.

    TL;DR: if you copy all the data folders and run the same image, it should work, no issue.