Started off by:
- Enabling unattended updates
- Enabling SSH login with keys only
- Creating a user with sudo privileges
- Disabling root login
- Enabling ufw with only the necessary ports open
- Disabling ping
- Changing the default SSH port (21) to something else (a config sketch follows this list)
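A minimal sketch of what items 2, 4, 5, and 7 can look like on a stock OpenSSH + ufw setup (the port number and allowed ports are placeholders, not recommendations):

# /etc/ssh/sshd_config  (then restart: systemctl restart ssh)
PermitRootLogin no           # item 4: no root login
PasswordAuthentication no    # item 2: keys only
PubkeyAuthentication yes
Port 2222                    # item 7: example non-default port

# ufw (item 5): default-deny inbound, open only what you use
ufw default deny incoming
ufw allow 2222/tcp           # SSH on the new port
ufw enable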
Got the ideas from networkchuck
Did this on the proxmox host as well as all VMs.
Any suggestions?
-
Don’t bother with disabling ICMP. You’ll use it way more than it’s worth disabling, and something like
nmap -Pn -p- X.X.X.0/24   # skip host discovery (-Pn) and scan every port (-p-)
will find all your servers anyway (the same can be said for SSH and port 22, but moving that does stop some bots).
-
As long as you’re not exposing anything to the global internet, you really don’t need a lot. The firewall should already deny all inbound traffic.
The next step is monitoring. It’s one thing to think your stuff is safe and locked down; it’s another thing to know your stuff is safe. Something like Observium, Nagios, Zabbix, or similar is a great way to make sure everything stays up, as well as giving you insight into what everything is doing. Even Uptime Kuma is a good start. Then something like Wazuh to watch for security events, and OpenVAS or Nessus to look for holes. I’d even throw in CrowdSec for host-based intrusion detection. (Warning: this will quickly send you down the rabbit hole of being a SOC analyst for your own home.)
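If you want the lowest-effort starting point, Uptime Kuma runs as a single container; something along these lines (from memory, so check the project README for the current image tag):

docker run -d --restart=always -p 3001:3001 -v uptime-kuma:/app/data --name uptime-kuma louislam/uptime-kuma:1   # web UI ends up on port 3001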
Block outbound traffic too.
Open up just what you need.
Segment internally and restrict access. You don’t need more than SSH to a Linux server, or perhaps access to its web interface for an application running on it.
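As a concrete example of default-deny in both directions with ufw (the subnet and ports are placeholders for whatever you actually need):

ufw default deny incoming
ufw default deny outgoing
ufw allow from 192.168.10.0/24 to any port 22 proto tcp   # SSH only from a management subnet
ufw allow out 53/udp     # DNS
ufw allow out 123/udp    # NTP
ufw allow out 443/tcp    # HTTPS for updates
ufw enable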
I just set up Wazuh at work and pointed it at a non-domain, vanilla Windows 11 machine to test, and it came back with over 300 events immediately. Not trying to scare anyone off (I think it’s a great tool), more just a heads-up that the rabbit hole runs very deep.
-
Take a look at CIS Benchmarks and DoD STIGs. Many companies are starting to harden their infrastructure using these standards, depending on the requirements of the environment. Once you get the hang of it, automate the deployment. DO NOT apply ALL of the rules at once. You WILL break shit. Every environment has security exceptions. If you’re running Active Directory, run Ping Castle and remediate any issues. Audit often, and make sure everything is being monitored.
Don’t expose anything to the outside world. If you do, use something like Cloudflare tunnels or Tailscale.
Or host a VPN on it and get in through that. Many of these microservices are insecure, and the real risk comes from opening them up to the Internet. This is important.
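If you go the self-hosted VPN route, a minimal WireGuard server sketch looks roughly like this (addresses, port, and interface name are placeholders; generate real keys with wg genkey):

# /etc/wireguard/wg0.conf on the server
[Interface]
Address = 10.8.0.1/24
ListenPort = 51820
PrivateKey = <server-private-key>

[Peer]
PublicKey = <client-public-key>
AllowedIPs = 10.8.0.2/32

# bring it up with: wg-quick up wg0
# the only inbound port you then open is UDP 51820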
Also set permissions properly if applicable
Honestly, between the home lab being behind a router, NATed, patched & updated, and given the lack of users clicking on random crap and plugging in thumb drives from God Only Knows Where… I’d go out on a limb and say it’s already more secure than most PCs.
There’s also no data besides what I already put on Medium and GitHub, so it’s not a very attractive target.
I watch networkchuck on occasion, but some of his ideas are… questionable, I think. Not necessarily wrong, but not the “YOU MUST DO THIS” that his titles suggest (I get it, gotta get clicks, no hate).
Of the ideas you mentioned, (2), (3), (4), and (5) are somewhere between “reasonable” and “definitely”. The rest are either iffy (unattended updates) or security theater (disable ICMP, change ports).
Something to keep in mind for step (2), securing SSH login with a key: this is only as secure as your key. If your own machine, or any machine or service that stores your key, is compromised then your entire network is compromised. Granted, this is kind of obvious, but just making it clear.
As for security theater, specifically step (6): don’t disable ping. It adds nothing to security and makes it harder to troubleshoot. If I’m an attacker in a position for ping to reach an internal resource in the first place, then I’m just going to listen for ARP broadcasts (on the same subnet) or let an internal router do it for me (“request timed out” == host is there but not responding).
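To illustrate why blocking ICMP buys so little (interface and subnet here are just examples):

tcpdump -i eth0 -n arp          # passively watch ARP broadcasts on the local subnet
nmap -sn -PR 192.168.1.0/24     # active ARP ping scan; finds hosts even if they drop ICMP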
Armed guards at every entrance.
Air gapped, no Internet access. I don’t use Internet services for any of my stuff, though, so I can get away without direct Internet access.
I use practical security measures that match my level of exposure and don’t severely limit my convenience.
If your lab isn’t exposed directly to the internet, at the very least update your servers from time to time and use a strong root (and admin user) password. That’s more than enough.
If your lab is exposed, the same applies but update more often. Use SSH keys.
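Generating and deploying a key takes a minute; a sketch (username and host are placeholders):

ssh-keygen -t ed25519 -a 100                      # pick a passphrase when prompted
ssh-copy-id -i ~/.ssh/id_ed25519.pub user@server  # afterwards, set PasswordAuthentication no on the server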
Don’t go overboard - the majority of security incidents come from a lack of basic security.
Filter incoming traffic from countries known for malicious attacks :)
The SSH default port is 22.
Really, unless I’m trying to learn security (valid) or have something to protect, I just do the basic best practices.
Real security is an offline backup.
The SSH port really doesn’t matter. If it’s an exposed SSH port, it will probably get picked up whether it’s on 69 or 22.
- strict 3-2-1 backup policy
- VLANs. All VLANs are controlled by my Fortigate FWF-61E (soon to be replaced by an FG-91G). The VLANs have strict access permissions on a per-device basis for what they can and cannot access.
- CORE network where the NAS lives
- only specific devices can access this VLAN, and most only have access to the SMB ports for data access. Even fewer devices have access to the NAS management ports
- this network has restrictions on how it accesses the internet
- I have strict IPS, web filtering, DNS filtering, network-level Fortigate AV, deep SSL inspection, and intrusion protection activated
- everything is logged: any and all incoming and outgoing connections, both to/from the internet and any LAN-based local communications.
- Guest wifi
- can ONLY access the internet
- has very restrictive web and DNS filtering
- I have strict IPS, web filtering, DNS filtering, network-level Fortigate AV, basic SSL inspection, and intrusion protection activated
- APC Network Management Cards
- can ONLY access my SMTP2GO email client so it can send email notifications
- it does have some access to the CORE network (NTP, SYSLOG, SNMP)
- very select few devices can access the management ports of these cards
- I have strict IPS, web filtering, DNS filtering, network-level Fortigate AV, basic SSL inspection, and intrusion protection activated
- Ethernet Switch / WIFI-AP management
- very select few devices can access the management ports of the switches
- ZERO internet access allowed
- ROKUs
- restrictive web and DNS filtering to prevent ads and tracking. Love seeing a blank box in the space where ads SHOULD be.
- can access ONLY the IP of my PLEX server on the CORE network, and ONLY the PLEX port for the services PLEX requires.
- IoT devices
- Internet access ONLY, except for a few devices like my IoTaWatt, which needs CORE network access to my NAS on ONLY the port required for InfluxDB logging.
- Wife’s computer
- because of HIPAA requirements from her job, I have ZERO logging and no SSL inspection, but I do have some web and DNS filtering.
- print server
- zero internet access, and only the machines that need to print can access it.
- as already indicated, I have a Fortigate router, which has next-generation firewall abilities to protect my network
- while I do not have automatic updates, I am notified when updates are available for my router, my NAS, the switches, and the APC network cards. I always like to look at the release notes and ensure there are no known issues that could negatively impact my operations. I do have most of my Docker containers auto-update using Watchtower (a generic example is at the end of this post).
- I keep SSH disabled and only enable it when I ACTUALLY need it, and when I do, I use certificate-based authentication
- I have disabled the default admin account on ALL devices and made custom admin/root users, but I also have “normal” users and use those for everything UNLESS I need to perform some kind of activity that requires root/admin rights.
- on all devices that have their own internal firewall, I have enabled it to only allow access from the VLAN subnets I allow, and go even further by restricting which IPs on those VLANs can access the device
- changing default ports is fairly useless in my opinion, as once someone is on your network it is trivial to perform a port scan and find the new ports.
- all Windows-based endpoint machines
- have strict endpoint control using Fortigate’s FortiGuard software with an EMS server. This allows me to enforce that machines meet minimum specifications.
- I use Group Policy to enforce restrictive user environments to prevent installing programs, making system changes, accessing the C: drive, etc., as this prevents a decent amount of malware from executing
- antivirus must be enabled and active, or the endpoint becomes quarantined.
- if the system shows unusual behavior, it is automatically quarantined and I am notified to take a look
- even though the Fortigate router blocks all ads and trackers, I also use uBlock Origin to prevent ads and trackers from running in the browser, as ads are now one of the most common points of entry for malware
- I use ESET antivirus, which also ties into the FortiGuard endpoint protection to ensure everything on the machines is OK
- for all phones/tablets I have AdGuard installed, which blocks ads, malicious websites, and tracking at the phone level
This is not even all of it.
The big takeaway is that I try to layer things. The endpoint devices are the most important to protect and monitor, as those are the foothold something needs in order to then move through the network.
I then use network-level protections to secure the remaining portions of the network from other portions of the network.
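For reference, the generic Watchtower setup mentioned above is a single container that watches the Docker socket (this is from memory, so check the project docs before relying on it):

docker run -d --name watchtower -v /var/run/docker.sock:/var/run/docker.sock containrrr/watchtower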
Messy…just messy
Anything that has internet access, like your IoT devices, can be used for C&C via stateful connections: an outbound socket is built, and reflected traffic can come back in. Your IoT devices especially should not be exposed to the internet. They can’t even have an antivirus agent installed on them.
They can’t even have an antivirus agent installed on them.
That’s actually no longer true… kinda. You can’t install AV on them, but there are security companies filling the niche of embedded IoT security. Now, you won’t see this in your average consumer device, but on the commercial market there is a growing demand for some way to secure an embedded device from malicious software/firmware modifications.
You can SPAN (mirror) internal traffic to an IDS device. Or, if internal network throughput isn’t an issue, you can force east-west traffic through an IPS with DPI enabled instead.
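For the SPAN-to-IDS option, a minimal sketch with Suricata watching a mirror port (eth1 here is a hypothetical interface receiving the SPAN feed):

suricata -c /etc/suricata/suricata.yaml -i eth1   # runs as an IDS on the mirrored traffic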
That’s historically how east-west would be mediated within an enterprise environment for devices incapable of being secured with agents.
That’s historically how east-west would be mediated within an enterprise environment for devices incapable of being secured with agents.
Absolutely, and I’ve implemented similar east-west controls (as either prevent-first or for detection). You’ll get no argument from me on that. I’m just noting an interesting trend as IoT devices become more ubiquitous in commercial and industrial environments, and some of those devices must (for whatever reason) have access to some part of the network or internet.
True, and 100% agree, except I forgot to mention:
1) The Fortigate has a known list of botnet command-and-control servers that are blocked.
2) I only allow them to access their home-server domain names, solely so they can pull firmware updates. They are not capable of accessing any other domains or IPs.
Replace Fortinet with pfSense (+ Suricata/Snort) if you want non-proprietary. (I have a Fortinet firewall and I can’t bring myself to pay for their packages.) One thing I’d recommend for you, as I host a lot of stuff: DNS proxy through Cloudflare, so the services I’m hosting don’t point at my origin IP.
None of my services are available outside my house without first logging into the Fortigate SSL VPN. That is the only open port I have.
The SSL VPN uses a loopback interface so only IPs from the US can access it, and I have strong auto-block enabled; I add the IPs of systems that try brute-forcing the box so they get blocked.
I did forget to mention that I already use Cloudflare, for the exact reason you mentioned, so my home IP is not exposed.
I also have a domain name with a valid wildcard certificate. The domain is used to access the SSL VPN, and I also use the cert within my entire homelab so I have everything encrypted.
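For anyone wanting the wildcard-cert part, one common approach is certbot with a DNS-01 challenge (this assumes Cloudflare-hosted DNS and the certbot-dns-cloudflare plugin; the domain and credential path are placeholders):

certbot certonly --dns-cloudflare --dns-cloudflare-credentials ~/.secrets/cloudflare.ini -d example.com -d "*.example.com"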
I was not a fan of pfSense; the Fortigate has more of the security features I wanted.
Pretty cool man, thanks for sharing.
Hosted reverse proxy and VPN servers. I have no open ports on my home network.
The UDM’s regular built-in threat filtering, good firewall rules, updated services, and not opening things up to the internet unnecessarily. And be vigilant, but don’t worry too much about it. That’s it.
My home lab and production network are separated by a firewall.
I have backups and plans to rebuild my lab; I actually do rebuild it regularly.
My lab does risky things; I get comfortable with those things before doing them in production.
- Domain auth (1 place to set passwords and SSH keys), no root SSH
- SSH by key only
- Passworded sudo (last line of defence)
- Only open firewall hole is OpenVPN with security dialled up high
- VLANs - laptops segregated from servers
- Strict firewall rules between VLANs
- TLS on everything
- Daily update-check alerts (no automatic updates, but the alerts persist until I deal with them)
- Separate, isolated syslog server for audit trails (a forwarding sketch follows this list)
- Cold backups
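The syslog forwarding mentioned above is a one-liner if the hosts run rsyslog (the collector IP is a placeholder):

# /etc/rsyslog.d/90-forward.conf on each host, then: systemctl restart rsyslog
*.* @@192.0.2.10:514    # @@ = forward over TCP; a single @ would be UDP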
What are the risks of passwordless sudo? Is it mainly just if someone has physical access to the machine or if you run a malicious program?
If someone or something malicious gets a shell account on my systems, then it at least stops them doing anything system-wide. And yes, if a script is going to request admin rights to do something, it’ll stop right at the sudo prompt. Passwordless, it could do stuff without you even being aware of it. Whether or not this is a line of defence at all is open to debate.
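To make the passworded vs. passwordless distinction concrete, here is roughly what the two look like in a sudoers drop-in (the username is a placeholder; always edit with visudo):

# /etc/sudoers.d/alice
alice ALL=(ALL:ALL) ALL                # passworded: sudo prompts for alice's password first
# alice ALL=(ALL:ALL) NOPASSWD: ALL    # passwordless: anything running as alice can escalate silently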