There was a recent post about whether to enable ufw, and it made me ask: how protected am I from a rogue Docker container? I have a single server with 15-20 Docker containers running at any given time. Should one get hacked or be malicious from the get-go, are there (hopefully easy-to-implement for an armchair sysadmin) best practices to mitigate such an event? Thanks!
Only give the container access to the folders it needs for your application to operate as intended.
Only give the container access to the networks it needs for the application to run as intended.
Don’t run containers as root unless absolutely necessary.
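A rough docker run sketch of those first three points, with placeholder image name, paths, and UID (not tied to any particular app):

```
# A user-defined network just for this app and whatever it actually needs to talk to.
docker network create myapp-net

# Mount only the directories the app needs (read-only where possible),
# attach only to that network, and run as an unprivileged UID:GID.
docker run -d \
  --name myapp \
  --network myapp-net \
  --user 1000:1000 \
  -v /srv/myapp/data:/data \
  -v /srv/myapp/config:/config:ro \
  example/myapp:latest
```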
Don’t expose an application to the Internet unless necessary. If you’re the only one accessing it remotely, or if you can manage the other devices that might (say, for family members), access your home network via a VPN instead. There are multiple ways to do this. I run a VPN server on my router. Tailscale is a good user-friendly option.
If you do need to expose an application to the Internet, don’t do so directly. Use a reverse proxy. One common setup: put your containers on private networks (shared among multiple containers only where they need to speak to each other), with ports forwarded from the containers to the host. Install a reverse proxy like Nginx Proxy Manager (NPM). Forward 80 and 443 from the router to NPM, but don’t forward anything else from the router. Register a domain, with subdomains for each service you use. Point the domain and subdomains at your IP or, using aliases, at a dynamic DNS domain that a service on your network keeps updated (in my case, I use my Asus router’s DDNS service). Have NPM connect each subdomain to the appropriate port on the host (i.e., nc.example.com going to the port on the host being used for Nextcloud). Have NPM handle SSL certificate requests and renewals.
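To make that more concrete, here’s a rough docker run sketch of that layout. Image names, ports, and paths are examples only; NPM’s admin UI sits on 81 and should stay LAN-only:

```
# The reverse proxy is the only thing the router forwards to: 80 and 443.
docker network create proxy-net
docker run -d --name npm --network proxy-net \
  -p 80:80 -p 443:443 -p 81:81 \
  -v /srv/npm/data:/data \
  -v /srv/npm/letsencrypt:/etc/letsencrypt \
  jc21/nginx-proxy-manager:latest

# An app on its own private network, with its web port mapped to the host only.
docker network create nextcloud-net
docker run -d --name nextcloud --network nextcloud-net \
  -p 8080:80 \
  -v /srv/nextcloud:/var/www/html \
  nextcloud:latest

# In NPM you would then point nc.example.com at <host-ip>:8080 and let NPM
# request and renew the Let's Encrypt certificate for that subdomain.
```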
There are other options that don’t involve any open ports, like Cloudflare tunnels. There are also other good reverse proxy options.
Consider using something like fail2ban or CrowdSec to mitigate brute-force attacks and ban bad actors. Consider something like Authentik for an extra layer of authentication. If you use Cloudflare, consider its DDoS protection and other security enhancements.
Keep good and frequent backups.
Don’t use the same password for multiple services, whether they’re ones you run or elsewhere.
Throw salt over your shoulder, say three Hail Marys and cross your fingers.
You can run your containers through a vulnerability scanner like Trivy and then patch with Copacetic. It will only fix the container image’s OS vulnerabilities though, not the app code dependencies.
Or, one step simpler: you can just vulnerability-scan the containers, look at the issues, and then decide whether you want to deploy them.
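For reference, the scan-then-patch flow looks roughly like this (a sketch; flag names and the BuildKit setup Copacetic needs can differ between versions):

```
# Scan an image and print its known vulnerabilities:
trivy image example/myapp:latest

# Write a JSON report and let Copacetic patch the OS-level findings
# into a new image tag:
trivy image --format json --output report.json example/myapp:latest
copa patch -i example/myapp:latest -r report.json -t latest-patched
```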
Noob here. What if we use something like Authelia or Authentik for signing in to use any container? Will that make it safe?
I saw in the documentation of r/CosmosServer that the creator mentions how his setup does not allow Docker containers to talk to each other.
Safe-r. Not inherently safe. It’s one good practice to consider among others. Like any measure that increases security, it makes your service less accessible - which may compromise usability or interoperability with other services.
You want to think through multiple security measures with any given service, decide what creates undue hassle, decide what’s most important to you, and limit the attack surface by making unauthorized access somewhere between inconvenient and near-impossible. And limit the damage that can be done if someone gets unauthorized access: not running as root, giving the container limited access to folders, etc.
“Only” having an authenticator doesn’t stop malicious containers from reaching outside. Least privileges and network segmentation are the minimum necessary.
So attempt to run every container with the least privilege:
- separate networks for each stack
- only map needed folders
- run the container as a non-root user (some containers won’t work this way and need to be run as root)
- use a reverse proxy with authentication (if an app is valuable)
- make differential backups to shrink their size and space out full backups (and check that they actually work)
- block internet access for containers that don’t need it (see the sketch below)
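For the last point, Docker’s internal networks are an easy way to do it. A rough sketch with placeholder names:

```
# An internal network has no route out of the host, so containers attached
# only to it cannot reach the internet.
docker network create --internal backend-net

# Example: a database that only needs to talk to its app, never the internet.
docker run -d --name myapp-db \
  --network backend-net \
  -v /srv/myapp/db:/var/lib/postgresql/data \
  -e POSTGRES_PASSWORD=change-me \
  postgres:16
```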
run the container as a non-root user (some containers won’t work this way and need to be run as root)
To avoid issues with containers that need root, you could also make use of user namespaces: https://docs.docker.com/engine/security/userns-remap/
This allows a process to have root privileges within the container while being unprivileged on the host.
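A minimal sketch of enabling it on a default Docker install (merge carefully if you already have a daemon.json, and expect existing containers/images to be hidden under the remapped storage directory until you revert):

```
# Enable user namespace remapping for the Docker daemon.
cat <<'EOF' | sudo tee /etc/docker/daemon.json
{
  "userns-remap": "default"
}
EOF
sudo systemctl restart docker

# Root (UID 0) inside containers is now mapped to a subordinate UID range on
# the host; see /etc/subuid and /etc/subgid for the ranges in use.
```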
That’s the way Proxmox issues privileges to containers by default. I don’t know how bulletproof it is, but it seems very reasonable.
I’d argue it’s up there :) In the end you’re quite limited with what you can do as an unprivileged user.
Granted, it’s not for Docker but Kubernetes; still, userns is userns. This Kubernetes blog post even has a short demo :) https://kubernetes.io/blog/2023/09/13/userns-alpha/
Does using this method allow mounting folders on the host drive without permission issues?
I use Podman and don’t run any container as root. If it needs root, I will use a VM.
Some good advice here. I would say avoid using network_mode: host unless you really have to. And make use of the no-new-privileges feature. This is easy to do and IMO the bare minimum for preventing rogue actions from containers.
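For example (placeholder image; the same option exists in compose under security_opt):

```
# no-new-privileges stops setuid/setgid binaries (su, sudo, ...) inside the
# container from gaining more privileges than the container started with.
docker run -d --name myapp \
  --security-opt no-new-privileges:true \
  example/myapp:latest

# docker-compose equivalent:
#   security_opt:
#     - no-new-privileges:true
```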
What an informative and fantastic set of replies, just wanted to say thanks to everyone for sharing!
As someone who works in infrastructure security, but not with Docker (yet), I learnt a few things, which is what this sub is all about…
Docker provides some basic security guidelines here: https://docs.docker.com/engine/security/
But aside from specific containers and guidance, general network and system hardening guidelines apply. You can look up plenty of server hardening guides via Google. General principles such as least privilege, segmentation via VLANs and firewall rules, and user ownership/privilege for accounts and services will go a long way. Keep defense in depth in mind: one control is none, two is one, and you can always find more ways to make something secure, up to and including removal. The most secure thing is a thing that doesn’t exist.
There are also automated tools that can perform scans and ‘audits’ on your system or your containers to guide you on specifics you can adjust (such as lynis) and help lock you down in a more systematic way. These tools can be automated to report on a schedule, or run one time. One of those is your best bet for targeted and effective controls.
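As a concrete example with lynis (a sketch; adjust paths and scheduling to your distro):

```
# One-off interactive audit of the host:
sudo lynis audit system

# Or run it non-interactively for scheduled reports, e.g. in /etc/cron.d/lynis:
# 0 3 * * 0  root  /usr/sbin/lynis audit system --cronjob >> /var/log/lynis-weekly.log
```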
Run a server with SELinux enabled, and use Podman instead of Docker (Podman, I assume, has better SELinux support).
Never heard of Podman, but what I read on Google is that it’s a drop-in replacement for Docker. I even read you can alias docker to podman. So does that mean we can just use Docker images and Docker Compose files with Podman? Are there drawbacks to using Podman instead of Docker?
This. On RHEL (or Fedora or CentOS Stream), containers are confined by the container_t domain, and SELinux policy prevents them from interfering with host resources. In addition, each container runs with a unique set of MCS labels, which stops a rogue container from interfering with other containers.
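One practical consequence: bind mounts need SELinux labels, or the confined container can’t touch them. A quick sketch with placeholder paths and image:

```
# :Z relabels the mount privately for this one container,
# :z applies a shared label usable by several containers.
podman run -d --name myapp \
  -v /srv/myapp/data:/data:Z \
  example/myapp:latest

# The container processes run confined as container_t with per-container
# MCS categories, which you can see with:
ps -eZ | grep container_t
```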
The thing about containers is that they usually have no need for open file-system access in general, and no need for full network access (host, LAN, WAN). So the smaller the privileges, the better: even if a container is compromised, there’s very little an attacker can do with it.
This is also a general principle for network management. For instance, when does the TV need to print, or to access any server other than Jellyfin?
Sorry, this is not true. Even in k8s, any container has access to any other container in the same pod, or, in Docker’s case, on the same host. In k8s you can at least add network policies. If it’s a host or macvlan container, it gets worse if no proper isolation is configured at the network level.
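On the k8s side, a default-deny NetworkPolicy is the usual starting point. A sketch with a placeholder namespace (it needs a CNI that actually enforces policies, e.g. Calico or Cilium):

```
kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: my-namespace
spec:
  podSelector: {}      # applies to every pod in the namespace
  policyTypes:
    - Ingress
    - Egress
EOF
```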
What is a rogue docker container?
If the source of the image gets hacked, or the maintainer adds a backdoor, etc.
That doesn’t make any sense, since you can see all the code and what you are installing.
The people who read the source code of all their Docker containers and especially understand everything in there are probably around 1%.
For this sub, maybe. It doesn’t take too much to look at what you are copying and pasting.
It’s not limited to what you copy and paste. One of my containers has a pretty long starter script written by the container maintainer. That is needed because the application doesn’t have an official Docker version, and the starter script takes care of all the necessary workarounds to get the app running inside a container. There could be something malicious in there I don’t know about if I don’t read the whole starter script, which is probably in a language I don’t understand well.
Even more complicated: I could have studied the starter script and decided it’s fine and the author trustworthy, so I pull the container image with the tag “v1.0”. Every few months a new version gets released; I take a look at the changelog, and if no breaking changes are mentioned I pull tag v1.1 and replace my existing container. At some point the maintainer stops maintaining the container and hands the repository over to someone else. That person unfortunately now places malicious code in the starter script and releases an update. If I pull that new container image, I now have a rogue container.