I'm a retired Unix admin. It was my job from the early '90s until the mid '10s, and I've kept somewhat current ever since by running various machines at home. So far I've managed to avoid using Docker at home even though I have a decent understanding of how it works: after I stopped being a sysadmin in the mid '10s I still worked for a technology company and did plenty of "interesting" reading and training.
It seems that more and more stuff that I want to run at home is being delivered as Docker-first and I have to really go out of my way to find a non-Docker install.
I’m thinking it’s no longer a fad and I should invest some time getting comfortable with it?
dude, im kinda you. i just jumped into docker over the summer… feel stupid not doing it sooner. there is just so much pre-created content, tutorials, you name it. its very mature.
i spent a weekend containerizing all my home services… totally worth it and easy as pi[hole] in a container!
Well, that wasn’t a huge investment :-) I’m in…
I understand I’ve got LOTS to learn. I think I’ll start by installing something new that I’m looking at with docker and get comfortable with something my users (family…) are not yet relying on.
Forget docker run,
docker compose up -d
is the command you need on a server. Get familiar with a UI, it makes your life much easier at the beginning: Portainer or Yacht in the browser, lazydocker in the terminal.
I would suggest docker compose before a UI to someone who likes to work via the command line.
Many popular docker repositories also provide compose-format equivalents of their docker run commands, so the learning curve for docker and docker compose commands is not as steep as it used to be.
There is even a tool to convert Docker Run commands to a Docker Compose file :)
Such as this one hosted by Opnxng:
https://it.opnxng.com/docker-run-to-docker-compose-converter
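For example, a hypothetical docker run command (image, ports, and paths are just for illustration) and a roughly equivalent compose service:

```
docker run -d \
  --name pihole \
  -p 53:53/udp -p 8080:80 \
  -v /srv/pihole/etc-pihole:/etc/pihole \
  --restart unless-stopped \
  pihole/pihole:latest
```

becomes a docker-compose.yml along these lines, after which it's just `docker compose up -d` in that directory:

```yaml
services:
  pihole:
    image: pihole/pihole:latest
    container_name: pihole
    ports:
      - "53:53/udp"
      - "8080:80"
    volumes:
      - /srv/pihole/etc-pihole:/etc/pihole
    restart: unless-stopped
```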
Second this. Portainer + docker compose is so good that now I go out of my way to composerize everything so I don’t have to run docker containers from the cli.
```
# docker compose up -d
no configuration file provided: not found
```
like just
docker run
by itself, it's not the full command, you need a compose file: https://docs.docker.com/engine/reference/commandline/compose/
Basically it's the same as docker run, but all the configuration is read from a file instead of being passed as command-line flags, so it's more easily reproducible; you just have to store those files. The important thing is that compose commands matter a lot for self-hosting, when your containers are expected to run all the time.
Yeah, I get it now. Just the way I read it the first time it sounded like you were saying that was a complete command and it was going to do something “magic” for me :-)
you need to create a docker-compose.yml file. I tend to put everything in one dir per container, so I just have to move the dir somewhere else if I want to move that container to a different machine. Here's an example I use for Picard, with examples of NFS mounts and local bind mounts using relative paths to the directory the docker-compose.yml is in. You basically just put this in a directory, create the local bind mount dirs in that same directory, and adjust YOURPASS and the mounts/NFS shares; it will keep working everywhere you move the directory, as long as the target has docker and an image is available for the system's architecture.
```yaml
version: '3'
services:
  picard:
    image: mikenye/picard:latest
    container_name: picard
    environment:
      KEEP_APP_RUNNING: 1
      VNC_PASSWORD: YOURPASS
      GROUP_ID: 100
      USER_ID: 1000
      TZ: "UTC"
    ports:
      - "5810:5800"
    volumes:
      - ./picard:/config:rw
      - dlbooks:/downloads:rw
      - cleanedaudiobooks:/cleaned:rw
    restart: always

volumes:
  dlbooks:
    driver_opts:
      type: "nfs"
      o: "addr=NFSSERVERIP,nolock,soft"
      device: ":NFSPATH"
  cleanedaudiobooks:
    driver_opts:
      type: "nfs"
      o: "addr=NFSSERVERIP,nolock,soft"
      device: ":OTHER NFSPATH"
```
dockge is amazing for people who see the value in a GUI but want it to stay the hell out of the way. https://github.com/louislam/dockge lets you use compose without trapping your stuff in stacks like portainer does. If you decide you don't like dockge, you just go back to the CLI and do your docker compose up -d --force-recreate.
If you are interested in a web interface for management check out portainer.
As a guy who was you before the summer:
Can you explain why you think it is better now after you have ‘contained’ all your services? What advantages are there, that I can’t seem to figure out?
Please teach me Mr. OriginalLucifer from the land of MoistCatSweat.Com
No more dependency hell from one package needing
libsomething.so 5.3.1
and another service that absolutely can only run with libsomething.so 4.2.0
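With containers, each service just ships the exact versions it was built against inside its own image, so the host never has to reconcile them. A minimal compose sketch of the idea (image tags are only examples):

```yaml
services:
  old-app-db:
    image: postgres:12   # the legacy app keeps the runtime it expects
  new-app-db:
    image: postgres:16   # the new app gets a current one, side by side
```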
That and knowing that when I remove a container, it's not leaving a bunch of cruft behind
You can also back up your compose file and data directories, pull the backup from another computer, and as long as the architecture is compatible you can just restore it with no problem. So basically, your services are a whole lot more portable. I recently did this when dedipath went under. Pulled my latest backup to a new server at virmach, and I was up and running as soon as the DNS propagated.
Modularity, compartmentalization, reliability, predictability.
One piece of software needs MySQL 5, another needs MariaDB 10. A third service needs PHP 7 while the distro-supported version is 8. A fourth service uses cuda 11.7 - not 11.8, which is what everything in your package manager uses. A fifth service's install was only tested on the latest Ubuntu, and now you need to figure out what rpm gives the exact library it expects. A sixth service expects odbc to be set up in a very specific way, but handwaves it in the installation docs. A seventh program expects a symlink at a specific place that is on the desktop version of the distro, but not the server version. And then you've got that weird program that insists on admin access to the database so it can create its own user. Since I don't trust it with that, let it just have its own database server running in docker and good riddance.
And so on and so forth… with docker not only is all this specified in excruciating detail, it's also the exact same setup on every install.
You don’t have it not working on arch because the maintainer of a library there decided to inline a patch that supposedly doesn’t change anything, but somehow causes the program to segfault.
I can develop a service on windows, test it, deploy it to my Kubernetes cluster, and I don’t even have to worry about which machine to deploy it on, it just runs it on a machine. Probably an Ubuntu machine, but maybe on that Gentoo node instead. And if my osx friend wants to try it out, then no problem. I can just give him a command, and it’s running on his laptop. No worries about the right runtime or setting up environment or libraries and all that.
If you’re an old Linux admin… This is what utopia looks like.
Edit: And restarting a container is almost like reinstalling the OS and the program. Since the image is static, restarting the container removes all file system cruft too and starts up a pristine new copy (of course except the specific files and folders you have chosen to save between restarts)
It sounds very nice and clean to work with!
If I’m lucky enough to get the Raspberry 5 at Christmas, I will try to set it up with docker for all my services!
Thanks for the explanation.
Just remember that the Raspberry Pi has an ARM CPU, which is a different architecture. Docker can cross-build for it and produce multi-platform images automatically. It takes more time and space though, as it typically runs an ARM emulator (QEMU) to build them.
https://www.docker.com/blog/faster-multi-platform-builds-dockerfile-cross-compilation-guide/ has some info about it.
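If you end up building your own images, the multi-platform route looks roughly like this (the tag is a placeholder, and the host may need QEMU/binfmt support set up first):

```
# one-time: create and select a builder that can target several platforms
docker buildx create --use

# build amd64 and arm64 variants of the same tag and push them
docker buildx build --platform linux/amd64,linux/arm64 -t youruser/yourimage:latest --push .
```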
It just makes things easier and cleaner. When you remove a container, you know there are no leftovers except mounted volumes. I like it.
It’s also way easier if you need to migrate to another machine for any reason.
I use LXC for all the reasons most people use Docker, it’s easy to spin up a new service, there are no leftovers when I remove a service, and everything stays separate. What I really like about LXC though is that you can treat containers like VMs, you start it up, attach and install all your software as if it were a real machine. No extra tech to learn.
Not completely true: you probably have to prune some images or volumes occasionally.
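True, the periodic cleanup is a couple of commands (they delete anything unused, so check what they list before confirming):

```
docker image prune -a    # remove images not used by any container
docker volume prune      # remove volumes not referenced by any container
docker system prune      # stopped containers, unused networks, dangling images
```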
deleted by creator
For sure! Most seem to be random-git-repo level of reviewed instead of being seriously tested and hardened. I really wish we had more of a source for reliable audits of containers and flatpaks. Just someone trusted, or a collective, running trivy, clair, sonarqube, etc., posting the results publicly, and having tools like podman/K3s/etc. ship sane defaults for checking containers against it on pull.
I would absolutely look into it. Many years ago when Docker emerged, I did not understand it and called it “Hipster shit”. But also a lot of people around me who used Docker at that time did not understand it either. Some lost data, some had services that stopped working and they had no idea how to fix it.
Years passed and Containers stayed, so I started to have a closer look at it, tried to understand it. Understand what you can do with it and what you can not. As others here said, I also had to learn how to troubleshoot, because stuff now runs inside a container and you don't just copy a new binary or library into a container to try to fix something.
Today, my homelab runs 50 containers and I am not looking back. When I rebuilt my homelab this year, I went full Docker. The most important reason for me was: every application I run dockerized is predictable and isolated from the others (from the binary side; the network side is another story). The issues I had earlier, when running everything directly on the box in Linux, were things like one application needing PHP 8.x while another, older one still only runs with PHP 7.x. Or multiple applications depending on a specific library, where after updating it one app works and the other doesn't anymore because it would need an update too. Running an apt upgrade was always a very exciting moment… and not in a good way. With Docker I do not have these problems. I can update each container on its own. If something breaks in one container, it does not affect the others.
Another big plus is the backups you can do. I back up every docker-compose + data for each container with Kopia. Since barely anything is installed in Linux directly, I can spin up a VM, restore my backups with Kopia and start all containers again to test my backup strategy. Stuff just works. No fiddling with the Linux system itself, adjusting tons of config files, installing hundreds of packages to get all my services up and running again when I have a hardware failure.
I really started to love Docker, especially in my Homelab.
Oh, and you would think you have a big resource usage when everything is containerized? My 50 Containers right now consume less than 6 GB of RAM and I run stuff like Jellyfin, Pi-Hole, Homeassistant, Mosquitto, multiple Kopia instances, multiple Traefik Instances with Crowdsec, Logitech Mediaserver, Tandoor, Zabbix and a lot of other things.
The backup and easy set up on other servers is not necessarily super useful for a homelab but a huge selling point for the enterprise level. You can make a VM template of your host with docker set up in it, with your Compose definitions but no actual data. Then spin up as many of those as you want and they’ll just download what they need to run the images. Copying VMs with all the images in them takes much longer.
And regarding the memory footprint, you can get that even lower using podman because it’s daemonless. But it is a little more work to set things up to auto start because you have to manually put it into systemd. But still a great option and it also works in Windows and is able to parse Compose configs too. Just running Docker Desktop in windows takes up like 1.5GB of memory for me. But I still prefer it because it has some convenient features.
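For the record, the manual systemd step for podman is roughly this (the container name is a placeholder; newer podman versions also offer Quadlet unit files instead):

```
# generate a user-level unit from an existing container
podman generate systemd --new --name mycontainer \
  > ~/.config/systemd/user/container-mycontainer.service

# enable it, and keep user services running after logout
systemctl --user daemon-reload
systemctl --user enable --now container-mycontainer.service
loginctl enable-linger $USER
```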
It seems like docker would be heavy on resources since it installs & runs everything (mysql, nginx, etc.) numerous times (once for each container), instead of once globally. Is that wrong?
You would think so, yes. But to my surprise, my well over 60 Containers so far consume less than 7 GB of RAM, according to htop. Also, of course Containers can network and share services. For external access for example I run only one instance of traefik. Or one COTURN for Nextcloud and Synapse.
Another old school sysadmin that “retired” in the early 2010s.
Yes, use docker-compose. It’s utterly worth it.
I was intensely irritated at first that all of my old troubleshooting tools were harder to use and just generally didn’t trust it for ages, but after 5 years I wouldn’t be without.
I'm a little younger but in the same boat. There is some friction having filesystems, ports and processes "hidden" from your host's programs that you typically rely on. But I need them sooooo much less now that all my services are in Docker with exactly matching dependencies, instead of rolling my eyes about running two PostgreSQL servers in different versions or juggling Python/Node/Ruby versions with ASDF.
Yeah, so worth it! The first time I moved a service to a new box and realised all I had to do was copy the compose file and
docker-compose up -d
… I was sold. Now I'm moving everything to Docker Swarm, which is a new adventure. :-)
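For anyone following along, the Swarm workflow stays very close to compose (the stack name is arbitrary):

```
docker swarm init                                   # make this host a single-node swarm
docker stack deploy -c docker-compose.yml mystack   # deploy the compose file as a stack
docker stack services mystack                       # see what's running
```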
Docker is amazing, you are late to the party :)
It’s not a fad, it’s old tech now.
I’m gonna play devil’s advocate here.
You should play around with it. But I’ve been a Linux server admin for a long time and — this might be unpopular — I think Docker is unimportant for your situation. I use Docker daily at work and I love it. But I didn’t bother with it for my home server. I’ll never need to scale it or deploy anything repeatedly or where I need 100% uptime.
At home, I tend to try out new things and my old docker-compose files are just not that valuable. Docker is amazing at work where I have different use cases but it mostly just adds needless complexity on a home server.
That’s exactly how I feel about it. Except (as noted in my post…) the software availability issue. More and more stuff I want is “docker first” and I really have to go out of my way to install and maintain non docker versions. Case in point - I’m trying to evaluate Immich so I can move off Google photos. It looks really nice, but it seems to be effectively “docker only.”
The advantage of docker, as I see it for home labs, is keeping things tidy, ensuring compatibility, and making setup configs, app configs, and app data easy to manage and back up. It is all very predictable and manageable. I can move my docker compose and data from one host to another in literal seconds. I can, likewise, spin up and down test environments in seconds too. Obviously the whole scaling thing that people love containers for is pointless in a homelab, but many of the things that make it scalable also make it easy to manage.
I'm probably the opposite of you! Started using docker at home after messing up my Raspberry Pi a few too many times trying stuff out and not really knowing what the hell I was doing. I've since moved to a proper NAS, with (for me, at least) plenty of RAM.
Love the ability to try out a new service, which is kind of self-documenting (especially if I write comments in the docker-compose file). And just get rid of it without leaving any trace if it’s not for me.
Added portainer to be able to check on things from my phone browser, grafana for some pretty metrics and graphs, etc etc etc.
And now at work, it’s becoming really, really useful, and I’m the only person in my (small, scientific research) team who uses containers regularly. While others are struggling to keep their fragile python environments working, I can try out new libraries, take my env to the on-prem HPC or the external cloud, and I don’t lose any time at all. Even “deployed” some little utility scripts for folks who don’t realise that they’re actually pulling my image from the internal registry when they run it. A much, much easier way of getting a little time-saving script into the hands of people who are forced to use Linux but don’t have a clue how to use it.
This is kinda where I’m at as well. I have always run my home services each in their own VM. There’s no fuss to set up a new one, if I want to move it to a different server I just copy the *.img file over and launch it. Sure I run a lot of internet services across my various machines but it all just works so I don’t understand what purpose there would be to converting all the custom configurations over to docker. It might make sense if I was trying to run all my services directly on the bare metal, but who does that?
VMs have much bigger overhead, for one. And VMs are less reproducible too. If you had to set up a VM again, do you have all the steps written down? Every single step? Including that small "oh right" thing you always forget? A Dockerfile is basically just a list of those steps, written in a way a computer can follow. And every time you build an image in docker, it just replays that list and gives you the resulting file system, ready to run.
It's incredibly practical in some cases. Let's say you want to try a different library or upgrade a component to a newer version. With VMs you could do it live, but you risk not being able to go back. You could make a copy or a checkpoint, but that's rather resource intensive. With docker you just change the Dockerfile slightly and build a new image.
The resulting image is also immutable, which means that if you restart the docker container, it's like reverting to the first VM checkpoint after a finished install, throwing out any cruft that has gathered. You can exempt specific files and folders from this if needed, so everything that has changed gets thrown out except the data folder(s) for the program.
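A made-up Dockerfile, just to show that "list of steps" shape (a small Python app as the example; the file names are placeholders):

```dockerfile
# the base image is pinned, so every build starts from the same filesystem
FROM python:3.12-slim

WORKDIR /opt/app

# the "oh right" steps, written down once and replayed on every build
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .

EXPOSE 8080
CMD ["python", "main.py"]
```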
I’m not sure I understand this idea that VMs have a high overhead. I just checked one of my servers, there are nine VMs running everything from chat channels to email to web servers, and the server is 99.1% idle. And this is on a poweredge R620 with low-power CPUs, it’s not like I’m running something crazy-fast or even all that new. Hell until the beginning of this year I was running all this stuff on poweredge 860’s which are nearly 20 years old now.
If I needed to set up the VM again, well I would just copy the backup as a starting point, or copy one of the mirror servers. Copying a VM doesn’t take much, I mean even my bigger storage systems only use an 8GB image. That takes, what, 30 seconds? And for building a new service image, I have a nearly stock install which has the basics like LDAP accounts and network shares set up. Otherwise once I get a service configured I just let Debian manage the security updates and do a full upgrade as needed. I’ve never had a reason to try replacing an individual library for anything, and each of my VMs run a single service (http, smtp, dns, etc) so even if I did try that there wouldn’t be any chance of it interfering with anything else.
Honestly, from what you're saying here, it just sounds like docker is made for people who previously ran everything directly under the main server installation and frequently had upgrades of one service breaking another service. I suppose docker works for those people, but the problems you say it solves are problems I have never run into over the last two decades.
Nine. How much ram do they use? How much disk space? Try running 90, or 900. Currently, on my personal hobby kubernetes cluster, there’s 83 different instances running. Because of the low overhead, I can run even small tools in their own container, completely separate from the rest. If I run say… a postgresql server… spinning one up takes 90mb disk space for the image, and about 15 mb ram.
I worked at a company that did - among other things - hosting, and was using VMs for easier management and separation between customers. I wasn't directly involved in that part day to day, but was friends with the main guy there. It was tough to manage. He was experimenting with automatically creating and setting up new VMs, stripping them of unused services and files, and having different sub-scripts for different services. This was way before docker, but already then admins were looking in that direction.
So aschually, docker is kinda made for people who run things in VMs, because that is exactly what they were looking for and duct-taping things together for before docker came along.
Yeah I can see the advantage if you’re running a huge number of instances. In my case it’s all pretty small scale. At work we only have a single server that runs a web site and database so my home setup puts that to shame, and even so I have a limited number of services I’m working with.
Yeah, it also has the effect that since starting up, say, a new postgres or web server is one simple command, a few seconds and a few MB of disk and RAM, you do it more even for smaller stuff.
Instead of setting up one nginx for multiple sites, you run one nginx per site and keep the settings for that as part of the site repository. Or when a service needs a DB, just start a new one just for that. And if that file analyzer ran in its own image instead of being part of the web service, you could scale that separately… oh, and it needs a redis instance and a rabbitmq server, that's two more containers that serve just that web service. And so on…
Things that were a huge hassle before, like separate mini VMs for each sub-service and unique sub-services for each service, don't just become practical but easy. You can define all the services and their relations in one file and docker will recreate the whole stack, with all services, with one command.
And then it also gets super easy to start more than one of them, for example for testing or if you have a different client. … which is how you easily reach a hundred instances running.
So instead of a service you have a service blueprint, which can be used in service stack blueprints, which allows you to set up complex systems relatively easily. With a granularity that would traditionally be insanity for anything other than huge, serious big-company deployments.
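A rough sketch of such a stack blueprint, with every sub-service private to this one application (the web image, names and password are placeholders):

```yaml
services:
  web:
    image: example/webapp:latest   # hypothetical application image
    ports:
      - "8080:8080"
    depends_on: [db, redis, rabbitmq]
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: changeme
    volumes:
      - dbdata:/var/lib/postgresql/data
  redis:
    image: redis:7
  rabbitmq:
    image: rabbitmq:3-management

volumes:
  dbdata:
```

One `docker compose up -d` brings the whole thing up or recreates it, and the containers reach each other by service name (db, redis, rabbitmq).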
Well congrats, you are the first person who has finally convinced me that it might actually be worth looking at even for my small setup. Nobody else has been able to even provide a convincing argument that docker might improve on my VM setup, and I’ve been asking about it for a few years now.
Instead of setting up one nginx for multiple sites you run one nginx per site and have the settings for that as part of the site repository.
Doesn’t that require a lot of resources since you’re running (mysql, nginx, etc.) numerous times (once for each container), instead of once globally?
Or, per your comment below:
Since the base image is static, and config is per container, one image can be used to run multiple containers. So if you have a postgres image, you can run many containers on that image. And specify different config for each instance.
You’d only have two instances of postgres, for example, one for all docker containers and one global/server-wide? Still, that doubles the resources used no?
I started using docker myself for stuff at home and I really liked it. You can create a setup that’s easy to reproduce or just download.
Easy to manage via the docker CLI, a one-liner to run on startup unless stopped, and tons of stuff made for docker becomes available. For non-docker things you can always log in to the container.
Tasks such as running, updating, stopping, listing active servers, finding out what ports are being used and automation are all easy imo.
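Day to day that mostly boils down to a handful of commands (container and stack names are placeholders):

```
docker ps                                      # list running containers (-a includes stopped)
docker logs -f mycontainer                     # follow a container's logs
docker port mycontainer                        # show published ports
docker exec -it mycontainer sh                 # "log in" to a container
docker compose pull && docker compose up -d    # update a stack to newer images
docker compose down                            # stop and remove a stack
```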
You probably have something else you use for some/all of these tasks, but docker makes all this available to non-sysadmin people and even has GUIs for people who like clicking their mouse.
I think next time you find something that provides a docker compose file you should try it. :)
I’m a VMware and Windows admin in my work life. I don’t have extensive knowledge of Linux but I have been running Raspberry Pis at home. I can’t remember why but I started to migrate away from installed applications to docker. It simplifies the process should I need to reload the OS or even migrate to a new Pi. I use a single docker-compose file that I just need to copy to the new Pi and then run to get my apps back up and running.
linuxserver.io make some good images and have example configs for docker-compose
If you want to have a play just install something basic, like Pihole.
Why not jump directly to Podman if you want a more resilient system from the beginning? Just my opinion.
Why not? Because I’ve never heard of it until this thread - lots of people mentioning it so obviously I’ll look into it.
Welcome to the party 😀
If you want a good video tutorial that explains the inner workings of docker so you understand what’s going on beneath the surface (without drowning in the details), let me know and I’ll paste it tomorrow. Writing from bed atm 😴
I’d like that please
Check out my previous comment: https://lemmy.ml/comment/6629930
(sorry, haven’t learned how to tag users on Lemmy yet!)
Not OP, but, yes please.
Here you go: https://www.youtube.com/playlist?list=PLTk5ZYSbd9Mg51szw21_75Hs1xUpGObDm
LearnCantrill does a good job at being straightforward and clear in his courses. His networking fundamentals is also pretty good.
Also check out this resource, where you can fool around and get a feel for Docker in a virtual environment: https://labs.play-with-docker.com/
Thank you!
I’m also interested in that, please
Hi, also used to be a sysadmin and I like things that are simple and work. I like Docker.
Besides what you already noticed (that most software can be found packaged for Docker) here are some other advantages:
- It’s much lighter on resources and more efficient than virtual machines.
- It provides a way to automate installs (docker compose) that’s (much) easier to get started with than things like Ansible.
- It provides a clear separation between configuration, runtime, and persistent data and forces you to get organized.
- You can group related services.
- You can control interdependencies, privileges, shared access to resources etc.
- You can define simple or complex virtual networking topologies between containers as you like.
- It adds extra security (for whatever that’s worth to you).
A brief description of my own setup, for ideas, feel free to ask questions:
- Router running OpenWRT + server in a regular PC.
- Server is 32 MB of RAM (bit overkill for now, black Friday upgrade, ran with 4 GB for years), Intel CPU with embedded GPU, OS on M.2 SSD, 8 HDD bays in Linux software RAID (MD).
- OS is Debian stable barebones, only Docker, SSH and NFS are installed on the host directly. Tip: use whatever Linux distro you know and like best.
- Docker is installed from their own repository, not from Debian’s.
- Everything else runs from docker containers, including things like CUPS or Samba.
- I define all containers with compose, and map all persistent data to host storage. This way if I lose a container or even the whole OS I just re-provision from compose definitions and pick up right where I left off. In fact destroying and recreating containers cleanly is common practice with docker.
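That re-provision cycle, assuming a directory holding the docker-compose.yml and its mapped data (the path is a placeholder):

```
cd /srv/myservice        # directory with docker-compose.yml and the data dirs
docker compose pull      # fetch newer images
docker compose down      # remove the old containers; host-mapped data stays
docker compose up -d     # recreate everything from the same definitions
```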
Learning docker and compose is not very hard, especially with your background on the job.
If you have specific requirements, e.g. storage, exposing services over the internet etc., please ask.
Note: don’t start with Podman or rootless Docker, start with regular Docker. It will be 10x easier. You can transition to the others later if you want.
I'm basically the same here, used to be a sysadmin too. Docker compose running a couple of complicated, inter-dependent services at my job was a first try for me, and it's been quite stable and clear about what's happening within the containers.
I really like how the docker setup files also become a source of truth documentation wise, particularly when paired with git.
P.s. I know it’s a typo, but imagine a ‘black Friday upgrade’ for your server being a move from 4gb ram to 32mb. Return to monke
That typo made me chuckle way harder than it should’ve, too.
As someone who just started their container adventure by setting up rootless podman on arch, it wasn’t terrible but I think I agree. I think I’m going to go check out some vanilla-ass docker until I can understand everything better.
It seems like docker would be heavy on resources since it installs & runs everything (mysql, nginx, etc.) numerous times (once for each container), instead of once globally. Is that wrong?
There’s nothing stopping you from using a single instance of those and only adding databases and config. The configs that come with projects set them up individually because they need to offer full examples but those configs are only meant as a guideline.
Also keep in mind that the overhead of just running multiple instances isn’t very big. The resources are consumed when you start having connections and using CPU and storing data and so on, and those are going to be the same no matter how many instances you have.
No. (Of course, if you want to use it, use it.) I used it for everything on my server starting out because that’s what everyone was pushing. Did the whole thing, used images from docker hub, used/modified dockerfiles, wrote my own, used first Portainer and then docker-compose to tie everything together. That was until around 3 years ago when I ditched it and installed everything normally, I think after a series of weird internal network problems. Honestly the only positive thing I can say about it is that it means you don’t have to manually allocate ports for those services that can’t listen on unix sockets which always feels a bit yucky.
- A lot of images come from some random guy you have to trust to keep their images updated with security patches. Guess what, a lot don’t.
- Want to change a dockerfile and rebuild it? If it’s old and uses something like “ubuntu:latest” as a base and downloads similar “latest” binaries from somewhere, good luck getting it to build or work because “ubuntu:latest” certainly isn’t the same as it was 3 years ago.
- Very Linux- and x86_64-centric. Linux is of course not really a problem (unless on Mac/Windows developer machines, where docker runs a Linux VM in the background, even if the actual software you’re working on is cross-platform. Lmao.) but I’ve had people complain that Oracle Free Tier aarch64 VMs, which are actually pretty great for a free VPS, won’t run a lot of their docker containers because people only publish x86_64 builds (or worse, write dockerfiles that only work on x86_64 because they download binaries).
- If you’re using it for the isolation, most if not all of its security/isolation features can be used in systemd services. Run
systemd-analyze security UNIT
.
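For example, a unit can opt into much of the same sandboxing with standard directives like these (the service name and binary are placeholders; tune per service and re-check with systemd-analyze security):

```ini
[Service]
ExecStart=/usr/local/bin/myservice
# run as a throwaway unprivileged user
DynamicUser=yes
# mount the filesystem read-only apart from a few API paths
ProtectSystem=strict
ProtectHome=yes
PrivateTmp=yes
PrivateDevices=yes
NoNewPrivileges=yes
RestrictAddressFamilies=AF_INET AF_INET6 AF_UNIX
# still get a writable /var/lib/myservice for state
StateDirectory=myservice
```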
I could probably list more. Unless you really need to do something like dynamically spin up services with something like Kubernetes, which is probably way beyond what you need if you’re hosting a few services, I don’t think it’s something you need.
If I can recommend something else to look at instead, it would be NixOS. I originally got into it because of the declarative system configuration, but it does everything people here would usually use Docker for and more. I've seen it described as "docker + ansible on steroids", but it uses a more typical central package repository, so you do get security updates for everything you have installed, and your entire system as a whole is reproducible from a set of config files (you can still build Nix packages from the 2013 version of the repository I think, though they won't necessarily run on modern kernels because of kernel ABI changes since then). However, be warned, you need to learn the Nix language and NixOS configuration, which has quite a learning curve tbh. On the other hand, setting up a lot of services is as easy as adding one line to the configuration to enable the service.
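For a taste, that one line in configuration.nix looks like this (nginx as the example; the virtual host is made up):

```nix
services.nginx.enable = true;
# ordinary options sit right next to it, e.g.
services.nginx.virtualHosts."example.org".root = "/var/www/example";
```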
Docker is great. I learned it by setting up an OpenMediaVault server that had a built-in docker extension, so now I have lots of services running off that one server. Also, Portainer can be very handy for working with containers, basically a GUI for the command-line stuff or compose files you'd normally use in the docker CLI.
I couldn’t get used to Docker at all before using Portainer. GUIs are great if you can’t use CLI.
That’s how I “onboarded” to docker. Portainer acted like a stepping stone, as I got familiar with how docker worked.
Learn it first.
I almost exclusively use it with my own Dockerfiles, which gives me the same flexibility I would have by just using a VM, with all the benefits of being containerized and reproducible. The exceptions are images of utility stuff, like databases, the reverse proxy (I use caddy btw) etc.
Without docker, hosting everything was a mess. After a month I would forget about important things I did, and if I had to do that again, I would need to basically relearn what I found out then.
If you write a Dockerfile, every configuration step you make is reflected either as a shell command or as files added from the project directory to the image. You can just look at the Dockerfile and see all the configuration made to the base Debian image.
Additionally with docker-compose you can use multiple containers per project with proper networking and DNS resolution between containers by their service names. Quite useful if your project sets up a few different services that communicate with each other.
Thanks to that it’s trivial to host multiple projects using for example different PHP versions for each of them.
And I haven’t even mentioned yet the best thing about docker - if you’re a developer, you can be sure that the app will run exactly the same on your machine and on the server. You can have development versions of images that extend the production image by using Dockerfile stages. You can develop a dev version with full debug/tooling support and then use a clean prod image on the server.
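A rough sketch of that staged pattern, using a hypothetical Node app (stage names and npm scripts are assumptions; you pick a stage with docker build --target):

```dockerfile
# build stage: toolchain and dev dependencies live only here
FROM node:20 AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# dev stage: extends the build stage with debug-friendly settings
# build it with: docker build --target dev -t myapp:dev .
FROM build AS dev
ENV NODE_ENV=development
CMD ["npm", "run", "dev"]

# prod stage: clean runtime image containing only the built artifacts
FROM node:20-slim AS prod
WORKDIR /app
ENV NODE_ENV=production
COPY --from=build /app/dist ./dist
COPY --from=build /app/node_modules ./node_modules
CMD ["node", "dist/main.js"]
```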