Hi all,

as I’m running a lot of Docker containers in my “self-hosted cloud”, I’m a little bit worried about ending up with a malicious Docker container at some point. And I’m not a dev, so I have very limited ability to inspect the source code myself.

Not every Docker container is a “Nextcloud” image with hundreds of active contributors and many eyes looking at the source code. Many self-hosted projects are quite small, GitHub accounts can be hacked, etc. …

What I’m doing at the moment is:

Project selection:
- only select Docker projects with high community activity on GitHub and a good track record

Docker networks:
- use a separate, isolated network for every container, with no internet access
- if certain APIs need internet access (e.g. geolocation data), I use an NGINX proxy that forwards only that one domain (i.e. a self-made outgoing application firewall); see the sketch below
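
A minimal sketch of that pattern with docker-compose (the image names, the API domain, and the port are just placeholders):

```yaml
# docker-compose.yml: "app" sits on an internal network with no route out;
# only the egress proxy has internet access, and it forwards one domain.
services:
  app:
    image: example/app:latest               # placeholder image
    networks:
      - backend
    environment:
      API_URL: "http://egress-proxy:8080"   # app talks to the proxy, never directly out

  egress-proxy:
    image: nginx:stable
    networks:
      - backend                             # reachable by the app
      - default                             # this side has internet access
    volumes:
      - ./egress.conf:/etc/nginx/conf.d/default.conf:ro

networks:
  backend:
    internal: true                          # Docker blocks outbound traffic on this network
```

```
# egress.conf: the whole "outgoing application firewall", one allowed upstream
server {
    listen 8080;
    location / {
        proxy_pass https://api.example.com;     # the only domain that gets forwarded
        proxy_set_header Host api.example.com;
        proxy_ssl_server_name on;               # send SNI so the TLS handshake succeeds
    }
}
```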

Multiple LXC containers:
- I split my Docker containers across multiple LXC instances via Proxmox; some sensitive containers like Bitwarden run on their own LXC instance (example below)
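
On the Proxmox side, creating such a container looks roughly like this (VM ID, storage name, and template version are examples; nesting/keyctl are the features Docker needs inside an unprivileged LXC):

```
pct create 210 local:vztmpl/debian-12-standard_12.2-1_amd64.tar.zst \
    --hostname bitwarden \
    --unprivileged 1 \
    --features nesting=1,keyctl=1 \
    --memory 2048 \
    --net0 name=eth0,bridge=vmbr0,ip=dhcp
pct start 210
```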

Watchtower:
- no automatic updates, but manual updates once per month with testing afterwards (routine sketched below)
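
The manual cycle itself is just the usual compose pull-and-recreate, per project (the path is an example):

```
cd /opt/stacks/nextcloud        # example compose project directory
docker compose pull             # fetch the current images
docker compose up -d            # recreate containers on the new images
docker image prune -f           # drop superseded images once testing looks good
```

(Watchtower can also run with WATCHTOWER_MONITOR_ONLY=true to only notify about pending updates instead of applying them.)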

Any other tips? Or am I worrying too much? ;)

  • nukacola2022@alien.top · 1 year ago

    Since you are using LXC/LXD, make sure that AppArmor is enabled on the host and that a configuration profile exists (a decent default one should be available) that blocks the containers from reading things like the /etc/passwd file.
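
    A quick way to check that on a Proxmox host (Proxmox generates a per-container profile by default; the profile name below is the stock LXC one, shown only as an example of pinning it explicitly):

    ```
    # on the host: confirm AppArmor is up and LXC profiles are loaded
    sudo aa-status | grep -i lxc

    # optional line in /etc/pve/lxc/<vmid>.conf to pin a profile
    lxc.apparmor.profile: lxc-container-default-cgns
    ```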

    I personally run all containers on CentOS/Alma/Fedora systems specifically to take advantage of their strong SELinux container policies.
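
    For example, on those hosts the :Z volume flag gives a bind mount a private SELinux label, so only that one container can touch it (image and path are placeholders):

    ```
    getenforce                              # should print "Enforcing"
    # :Z relabels the volume privately, so only this container may access it
    podman run -d --name app -v /srv/appdata:/data:Z example/app:latest
    ```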

    Other things you can do: rebuild public images, patch them, and push them to your private registry; I find that not all container maintainers patch as aggressively as I would like. You can also look into running containers as non-root and using a rootless engine like Podman instead of Docker.
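
    A rough sketch of that rebuild-and-patch idea (upstream image, registry host, and uid are all placeholders, and the RUN line assumes a Debian/Ubuntu base):

    ```
    # Dockerfile: re-base a public image with current distro patches
    FROM docker.io/example/app:1.2.3
    # pull in security updates the upstream tag may be lagging on
    RUN apt-get update && apt-get -y upgrade && rm -rf /var/lib/apt/lists/*
    # drop root inside the container; the uid is an example
    USER 1000
    ```

    ```
    podman build -t registry.lan/patched/app:1.2.3 .   # registry.lan is a placeholder
    podman push registry.lan/patched/app:1.2.3
    ```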