(This is a repost of this reddit post https://www.reddit.com/r/selfhosted/comments/1fbv41n/what_are_the_things_that_makes_a_selfhostable/, I wanna ask this here just in case folks in this community also have some thoughts about it)
What are the things that make a selfhostable app/project good? Maybe another way to phrase this question is: what are the things that make a project easier to self-host?
I have been developing an application that focuses on being easy to selfhost. I have been looking around at existing projects that already do this well, such as paperless-ngx, Immich, etc.
From what I gather, the most important things are:
- Good docs, this is probably the most important. The developer must document how to self-host the app.
- Fewer runtime dependencies – I’m not sure about this one, but the fewer other services it depends on, the better
- Optional OIDC – I’m even less sure about this one, and I’m also not sure about implementing this feature in my own app, as it’s difficult to develop. After reading this subreddit/community, I get the impression that lots of people here prefer to separate the identity/user pool from the app service. This means running a separate service for authentication and authorization.
What do you think? Another question: are there any other good projects that can be used as good examples of selfhostable apps?
Thank you
Some redditors responded to the post:
- easy to install, try, and configure with sane defaults
- availability of an image on Docker Hub
- screenshots
- good GUI
I also came across this comment on Hacker News recently, and I think about it a lot:
https://news.ycombinator.com/item?id=40523806
This is what self-hosted software should be. An app, self-contained, (essentially) a single file with minimal dependencies.
Not something so complex that it requires docker. Not something that requires you to install a separate database. Not something that depends on redis and other external services.
I’ve turned down many self-hosted options due to the complexity of the setup and maintenance.
Do you agree with this?
My list of items I look for:
- A docker image is available. Not some sort of make or build script which makes god-knows-what changes to my system, even if the end result is a docker image. Just have a docker image out on Docker Hub or a Dockerfile as part of the project. A docker-compose.yaml file is a nice bonus.
- Two factor auth. I understand this is hard, but if you are actually building something you want people to seriously use, it needs to be seriously secured. Bonus points for working with my YubiKey.
- Good authentication logging. I may be an outlier on this one, but I actually look at the audit logs for my services. Having a log of authentication activity (successes and failures) is important to me. I use fail2ban to block off IPs which get up to any fuckery, and I manually blackhole entire ASNs when it seems they are sourcing a lot of attacks. Give me timestamps (in ISO8601 format, all other formats are wrong), IP address, username, success or failure (as an independent field, not buried in a message or other string) and any client information you can (e.g. User-Agent strings). A made-up example of what I mean follows this list.
- Good error logging. Look, I kinda suck, I’m gonna break stuff. When I do, it’s nice to have solid logging giving me an idea of what I broke and to provide a standardized error code to search on. It also means that, when I give up and post it as an issue to your github page, I can provide you with some useful context.
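To make the authentication-logging point concrete, here is a rough sketch of what one structured (JSON lines) entry could look like. The field names and values are entirely made up for illustration, not taken from any particular project:

```json
{"time": "2025-01-15T09:42:17Z", "event": "login", "outcome": "failure", "user": "admin", "remote_addr": "203.0.113.42", "user_agent": "Mozilla/5.0 (X11; Linux x86_64)"}
```

Something along these lines is trivial to feed into fail2ban or whatever log shipper you use, because the outcome and IP are their own fields instead of being buried in a free-text message.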
As for that hackernews response, I’d categorically disagree with most of it.
An app, self-contained, (essentially) a single file with minimal dependencies.
Ya…no. Complex stuff is complex. And a lot of good stuff is complex. My main, self-hosted app is NextCloud. Trying to run that as some monolithic app would be brain-dead stupid. Just for the sake of maintainability, it is going to need to be a fairly sprawling list of files and folders. And it’s going to be dependent on some sort of web server software. And that is a very good place to NOT roll your own. Good web server software is hard, secure web server software is damn near impossible. Let the large projects (Apache/Nginx) handle that bit for you.
Not something so complex that it requires docker.
“Requires docker” may be a bit much. But, there is a reason people like to containerize stuff, it avoids a lot of problems. And supporting whatever random setup people have just sucks. I can understand just putting a project out as a container and telling people to fuck off with their magical snowflake setup. There is a reason flatpak is gaining popularity.
Honestly, I see docker as a way to reduce complexity in my setup. I don’t have to worry about dependencies or having the right version of some library on my OS. I don’t worry about different apps needing different versions of the same library. I don’t need to maintain different virtual python environments for different apps. The containers “just work”. Hell, I regularly dockerize dedicated game servers just for my wife and me to play on.
Not something that requires you to install a separate database.
Oh goodie, let’s all create our own database formats and re-learn the lessons of the '90s about how hard databases actually are! No really, fuck off with that noise. If your app needs a small database backend, maybe try SQLite. But, some things just need a real database. And as with web servers, rolling your own is usually a bad plan.
Not something that depends on redis and other external services.
Again, sometimes you just need to have certain functionality and there is no point re-inventing the wheel every time. Breaking those discrete things out into other microservices can make sense. Sure, this means you are now beholden to everything that other service does; but, your app will never be an island. You are always going to be using libraries that other people wrote. Just try to avoid too much sprawl. Every dependency you spin up means your users are now maintaining an extra application. And you should probably build a bit of checking into your app to ensure that those dependencies are in sync. It really sucks to upgrade a service and have it fail, only to discover that one of its dependencies needed to be upgraded manually first, and now the whole thing is corrupt and needs to be restored from backup. Yes, users should read the release notes, they never do.
The corollary here is to be careful about setting your users up for a supply chain attack. Every dependency or external library you add is one more place for your application to be attacked. And just because the actual vulnerability is in SomeCoolLib.js, it’s still your app getting hacked. You chose that library, you’re now beholden to everything it gets wrong.
At the end of it all, I’d say the best app to write is the one you are interested in writing. The internet is littered with lots of good intentions and interesting starts. There is a lot less software which is actually feature complete and useful. If you lose interest, because you are so busy trying to please a whole bunch of idiots on the other side of the internet, you will never actually release anything. You do you, and fuck all the haters. If what you put out is interesting and useful, us users will show up and figure out how to use it. We’ll also bitch and moan, no matter how great your app is. It’s what users do. Do listen, feedback is useful. But, also remember that opinions are like assholes: everyone has one, and most of them stink.
@hono4kami To me, good documentation is the number one thing that makes a selfhostable application good.
Second would be “is it dockerized?”
Yep, documentation and a good base-level default installation configuration/guide with minimal friction.
I’m perfectly willing to play around once I know at the basic level that the core flow is going to work for me. If it takes me digging through a stack of documentation (especially if it’s bad) to even get something to experiment with on my own system? I won’t bother.
To me, good documentation is the number one thing that makes a selfhostable application good.
I agree. If you don’t mind: what are your qualifications for good documentation? Do you have some good examples of good docs?
What helps a lot for apps with multiple config files:
- if you tell the user to “add code xy to the config file”: tell me which file. Is it the main config file? The one for the reverse proxy? Etc.
- provide a sensible example library of the config structure. For example: during the implementation of an importer for beancount I was struggling with what goes where. The example structure was really, really helpful.
- also, if you have configurations which allow different options: TELL ME THE OPTIONS! If I get an error during startup that the value “bar” is not allowed for config.foo, I need a list of options somewhere; so many hours lost finding out what I can write to config.foo. (A made-up example of what helps follows this list.)
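To illustrate that last point: even a short commented example config goes a long way. This snippet is entirely hypothetical (file name, keys and allowed values are made up), just to show the shape of what I mean:

```yaml
# config.yaml -- the application's main config, NOT the reverse proxy config
log:
  # allowed values: debug, info, warn, error
  level: info
storage:
  # allowed values: local, s3, webdav
  backend: local
  # only used when backend is "local"
  path: /var/lib/myapp/data
```

Listing the allowed values right next to the key would have saved me most of those lost hours.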
@hono4kami
One of the best pieces of documentation I’ve encountered so far:
https://borgbackup.readthedocs.io/en/stable/
IMO a lot of what makes nice self-hostable software is clean and sane software in general. A lot of stuff tends to end up either trying to be so easy that you can’t scale it up, or so unbelievably complicated that you can’t scale it down. Don’t make me install an email server and API keys to services needed by features I won’t even use.
I don’t particularly mind needing a database and Redis and the like, but if you need MySQL and PostgreSQL and Redis and memcached and an Elasticsearch cluster, and some of it is Go, some of it is Ruby and some of it is Java with a sprinkle of someone’s Erlang phase… no, just no, screw that.
What really sucks is when Docker is used as a bandaid to hide all that insanity under the guise of easy self-hosting. It works, but it’s still a pain to maintain and debug, and it often uses way more resources than it really needs. Well-written software is flexible and sane.
My stuff at work runs equally fine locally in under a gig of RAM and barely any CPU at idle, and yet spans dozens of servers and microservices in production. That’s sane software.
A lot of stuff tends to end up either trying to be so easy that you can’t scale it up, or so unbelievably complicated that you can’t scale it down.
I see, it’s probably good to have some balance between those. Noted
To me the number one thing is, that it is easy to setup via Docker. One container, one network (ideally no network but just using the default one), one storage volume, no additional manual configuration when composing the container.
No, I don’t want a second container for a database. No I don’t want to set up multiple networks. Yes, I already have a reverse proxy doing the routing and certificates. No, I don’t need 3 volumes for just one application.
Please just don’t clutter my environment.
I disagree with pretty much all of this: you are trading maintainability and security for easy setup. Providing a docker-compose file accomplishes the same thing without the sacrifice:
- separate volumes for configuration, data, and cache because I might want to put them in different places and use different backup strategies. Config and db on SSD, large data on spinning rust, for example.
- separate container for the database because the official database images are guaranteed to be better maintained than whatever every random project includes in their image
- separate networks because putting your reverse proxy on a different network from your database is just prudent (a rough compose sketch of this layout follows this list)
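As a sketch of what I mean — the image name, paths and credentials are obviously placeholders, adjust to taste — a compose file along these lines costs the project almost nothing to ship and gives the user all of that flexibility:

```yaml
services:
  app:
    image: example/app:1.2.3          # hypothetical image, pinned tag from Docker Hub
    networks: [proxy, backend]
    depends_on: [db]
    volumes:
      - /ssd/app/config:/config       # config on SSD
      - /hdd/app/data:/data           # large data on spinning rust
      - app-cache:/cache              # cache, fine to lose
  db:
    image: postgres:16                # official, well-maintained database image
    networks: [backend]               # not reachable from the proxy network
    volumes:
      - /ssd/app/db:/var/lib/postgresql/data
    environment:
      POSTGRES_PASSWORD: change-me

networks:
  proxy:
    external: true                    # shared with the reverse proxy
  backend:
    internal: true                    # database not exposed outside its own network

volumes:
  app-cache:
```

Different backup strategies then just mean pointing different jobs at /ssd/app and /hdd/app.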
No, I don’t want a second container for a database.
Unless you’re talking about using SQLite:
Isn’t the point of a Docker container to only have one piece of software/one process running? I’m sure you can use something like s6 or another lightweight supervisor, but that seems counterintuitive?
To me, the point of Docker is having one container for one specific application. And I see the database as part of the application. As well as all other things needed to run that application.
Since we’re here, let’s take Lemmy as an example. It wants 6 different containers with a total of 7 different volumes (and I need to manually download and edit multiple files before even touching anything Docker-related).
In the end I have lemmy, lemmy-ui, pictrs, postgres, postfix-relay, and an additional reverse proxy for one single application (Lemmy). I do not want or need or use any of the containers for anything else except Lemmy.
There are a lot of other applications that want me to install a database container, a reverse proxy, and the actual application container, where I will never ever need, or want, or use any of the additional containers for anything else except this one application.
So in the end I have a dozen containers and the same number of volumes just to run 2-3 applications, causing a metric shit-ton of maintenance effort and update time.
I agree with this. If you are going to be using multiple containers for a single app anyways, what is the point of it being in multiple containers? Stick all of it in one container and save everyone the hassle.
I prefer this, but if the options are available it shows me that someone actually thought about it while creating the software/container.
I came here to basically say this. It’s especially bad when you aren’t even sure if you want to keep the service and are just testing it out. If I already have to go through a huge setup/troubleshooting process just to test the app, then I’m not feeling very good about it.
My points are totally in the other direction:
- Stable, this is critical. If the app is not able to perform its duties with 2 weeks of uptime, then it is bad. This also applies to random failures. I don’t want to spend endless days fixing it
- Docker, with an all-in-one image, and as a nice-to-have the possibility to connect external docker composes for VPN or databases
- A moderate use of resources, not super critical, but nobody likes to have RAM problems
And then, as a second tier that tips the balance:
- integration with LDAP or any central user repo
- relatively easy to backup and restore
- relatively low level of breaking changes from version to version
- the GUI / ease of use (in line with the complexity of the problem I want to address)
- sane use of defaults and logging capabilities
That’s all from my side
- Has a simple backup and migration workflow. I recently had to back up and migrate a MediaWiki database. It was pretty smooth but not as simple as it could be. If your data model is spread across an RDBMS and files, you need to provide a CLI tool that does the export/import.
- Easy to run as a systemd service. This is the main criterion for whether it will be easy to create a NixOS module. (A rough unit-file sketch follows this list.)
- Has health endpoints for monitoring.
- Has an admin web UI that surfaces important configuration info.
- If there are external service dependencies like postgres or redis, then there needs to be a wealth of documentation on how those integrations work. Provide infrastructure-as-code examples! IME systemd and NixOS modules are very capable of deploying these kinds of distributed systems.
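On the systemd point: a minimal unit file is usually all it takes, and it maps almost one-to-one onto a NixOS module. This is only a sketch; the binary name, user, paths and flags are hypothetical:

```ini
# /etc/systemd/system/myapp.service
[Unit]
Description=myapp
After=network-online.target
Wants=network-online.target

[Service]
Type=simple
User=myapp
ExecStart=/usr/local/bin/myapp --config /etc/myapp/config.yaml
Restart=on-failure
# the usual hardening knobs
NoNewPrivileges=true
ProtectSystem=strict
StateDirectory=myapp

[Install]
WantedBy=multi-user.target
```

A health endpoint then slots straight into monitoring, e.g. something like `curl -fsS http://localhost:8080/healthz` from your checker of choice (port and path made up, of course).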
I totally disagree with the quote from Hacker News. Having the option to use SQLite is nice for testing, but going with PostgreSQL or MariaDB gives you better performance if you use an RDBMS. Also, packaging with containers gives you one standardized image for support: if some third-party packaging (from a distro repo) is buggy, you can test further against the standard image. To me, how much a good GUI matters really depends on what service is provided. For kanidm (IAM), I don’t care that much about a web admin panel; the CLI is really intuitive, and if you need some graph views of your users, you can generate some diagram files. Considering OIDC/LDAP, I’d rather have OIDC implemented for two reasons: I can point my users to the (really minimalist) kanidm UI where they have a button for each allowed app. Also, the login information is only stored in kanidm, no spreading of login passwords.
I saw a comment about not needing to rely on many third-party services, but I partly disagree with it. Using Nextcloud as a mixed example: using Elasticsearch for full-text search is better than reimplementing it, but notify_push should not be as separate as it is (as I understand it, it exists because apache-php and websockets do not mix well).
All in all, the main criteria for me are:
- SSO with OIDC, but ldap is good enough
- Good documentation
- easy deployment to test, prod deployment can be more advanced
- Don’t reinvent the wheel, e.g. if you need full-text search, Meilisearch or Elastic can do it better than you will, so don’t try too hard (a simple grep for a test instance is enough)
- If you need to store files, support for remote stores (WebDAV or S3) is nice to have
For me, it’s screenshots.
I can’t even count how many self-hosted or open source projects I’ve wanted to check out, and the project page is just text.
If I don’t know exactly what I’m getting into in the first 10 seconds, I’m onto something else, especially when it’s something heavily based on UI/UX with frequent interaction.
Configuration by config file is preferred but not mandatory for me; a docker image, though, is mandatory for me to even try the app anymore. And the ability to back up and restore state is key, preferably in such a way that I can write my backup to a mounted smb share rather than writing locally and copying to the network.
I’m running everything on commodity or 2nd-hand gear, so failures aren’t unheard of. I had one of my micro PCs cook itself this year, and the majority of my services on that box fit that mold (mostly), so I got them back up pretty quick. Though, I did run into issues with container backups not working (because they write the backup like a database, so it has to be a local write for a db lock) and had to start from scratch.
Please be mindful of HDD spindown.
If your app frequently looks up stuff in a database and also has a bunch of files that are accessed on-demand, then please have an option to separate the data-directory from the appdata-directory.
A lot of stuff is self-hosted in homes and not everyone has the luxury of a dedicated server room.
separate the data-directory from the appdata-directory
Would you mind explaining more about this?
Take my setup for jellyfin as an example: There’s a database located on the SSD and there’s my media library located on an HDD array. The HDD is only spun up when jellyfin wants to access a media file.
In my previous setup, the nextcloud database was located on a HDD, which resulted in the HDD never spinning down, even if the actual files are never really accessed.
In immich, I wasn’t able to find out if they have this separation, which is very annoying.
All this is moot if you simply offer a tiny service which doesn’t access big files that aren’t stored on SSDs.
Exactly. Separate configuration and metadata from data. If the metadata DB is relatively small, I’ll stick it on my SSD and back up to my HDD on a schedule.
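In compose terms, that separation is just two bind mounts backed by different disks. Paths here are illustrative only, using a Jellyfin-style setup since it came up above:

```yaml
services:
  jellyfin:
    image: jellyfin/jellyfin
    volumes:
      - /ssd/appdata/jellyfin:/config   # database + metadata, touched constantly
      - /hdd/media:/media:ro            # big files, only read on demand
```

All the app has to support is pointing its data directory somewhere other than its config/appdata directory; the user does the rest.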
I’ve turned down many self-hosted options due to the complexity of the setup and maintenance.
Do you agree with this?
Yes. If I have to spend an hour reading your documentation just to find out how to run the damn thing, I’m not going to run it.
I hate docker with a passion because it seems like the people who develop on it forego writing documentation because docker “just works” except when it doesn’t.
I archived one of my github repos the other day because someone requested I add docker support. It’s a project I made specifically to not use docker…
I’d say it’s good if it’s easy to use, well written with maintainability in mind, offers good functionality, is reliable and follows current best practices.
It’s easy to selfhost if it’s packaged. Because then I can just
apt install gitlab
edit a few config files and I’m done. Or click on it in Yunohost, or maybe run the Docker container.
But just “easy” isn’t the whole story. It needs to be maintainable, still around in a few years, integrate into the rest of my ecosystem…
Not something so complex that it requires docker.
I disagree. Docker makes things a lot easier and I’m going to use it regardless.
My rule is pretty simple: not PHP. PHP requires configuring a web server, so either that’s embedded in the docker image (which violates the “do one thing” rule of docker) or it’s pushed onto the user. This falls under the dependencies part, but I uniquely hate dealing with standalone web servers and I don’t mind configuring databases, so I called it out.
I actually tried switching to OCIS from Nextcloud specifically to avoid PHP, but OCIS is even more complex so I bailed.
Give me an example configuration that works out of the box and detailed documentation about options and I’ll be happy. Don’t make me configure a web server any particular way, and do let me handle TLS myself. If you do that, I’ll probably check it out.
Do you agree with this?
Yes, at least for hobby use. If it really needs something more complex than SQLite and an embedded HTTP server, it’s probably going to turn into a second job to keep it working properly.