Is it bad to keep my host machines on for like 3 months, with no downtime?

What is recommended? What do you do?

  • horse-boy1@alien.top · 10 months ago

    I had one Linux server that was up for over 500 days. It would have been up longer, but I was organizing some cables and accidentally unplugged it.

    Where I worked as a developer, we had Sun Solaris servers as desktops for our dev work. I would just leave mine on, even during weekends and vacations; it also hosted our backup webserver, so we let it run 100% of the time. One day the sysadmin said I might want to reboot my computer, since it had been up for over 600 days. 😆 I guess he didn’t have to reboot after patching all that time, and I didn’t have any issues with it.

  • R_X_R@alien.top · 10 months ago

    Prod environments typically don’t have downtime, save for quarterly patching that requires a host reboot.
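
    If you only want to schedule that reboot when patching actually calls for one, Debian-family distros drop a flag file you can check. A minimal Python sketch (it assumes /var/run/reboot-required and its .pkgs companion, which apt package hooks write on Debian/Ubuntu; other distros signal this differently):

        #!/usr/bin/env python3
        """Check whether a Debian/Ubuntu host needs a reboot after patching."""
        from pathlib import Path

        flag = Path("/var/run/reboot-required")        # written by package hooks after kernel/libc updates
        pkgs = Path("/var/run/reboot-required.pkgs")   # lists the packages that triggered the flag

        if flag.exists():
            print("Reboot required.")
            if pkgs.exists():
                print("Triggered by:", ", ".join(pkgs.read_text().split()))
        else:
            print("No reboot needed.")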

  • Nick_W1@alien.top · 10 months ago

    Usually I reboot once a year, but in reality power outages limit uptime to about that anyway.

  • cll1out@alien.top · 10 months ago

    My Proxmox VM host ran for well over a year; I only had to shut it down to add more RAM once I finally bought it. A couple of VMs on it ran for just as long. All Linux stuff. Windows guests have to be rebooted at minimum every 90 days or things start getting weird, and mine is just a DC.

  • destronger@alien.top · 10 months ago

    Mine is small and idles at 17 watts, but I’ll shut it down if I don’t use it for many days, and also when I’m on vacation.

  • reni-chan@alien.top · 10 months ago

    I reboot once a month to install Patch Tuesday updates, because my only host is still running Microsoft Hyper-V Server 2019. I’m planning to switch to Proxmox, but that’s going to take a while, so I haven’t gotten around to it yet.

  • horus-heresy@alien.top · 10 months ago

    Out of 6 Cisco servers, 3 auto power on at 7 am and auto shut down at 11 pm. The other 3 run 24/7.
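
    Cisco servers can do scheduled power from the management controller, but for anyone wanting the same effect on a plain Linux box, here is a rough sketch using the RTC wake alarm. It assumes /sys/class/rtc/rtc0 exists, the firmware honors RTC wake, and the RTC runs in UTC (the Linux default); run it as root from a cron job at your shutdown time:

        #!/usr/bin/env python3
        """Arm the RTC to wake the machine at 7 am, then power off."""
        import subprocess
        from datetime import datetime, timedelta

        WAKEALARM = "/sys/class/rtc/rtc0/wakealarm"   # standard Linux sysfs path

        # Find the next 7 am as a Unix timestamp.
        now = datetime.now()
        wake = now.replace(hour=7, minute=0, second=0, microsecond=0)
        if wake <= now:
            wake += timedelta(days=1)

        with open(WAKEALARM, "w") as f:
            f.write("0")                              # clear any stale alarm first
        with open(WAKEALARM, "w") as f:
            f.write(str(int(wake.timestamp())))       # arm the wake-up

        subprocess.run(["systemctl", "poweroff"], check=True)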

  • aorta7@alien.top · 10 months ago

    I have two hosts. A Raspberry Pi serves as a Pi-hole and, through its uptime counter, as a log of infrequent power outages; it runs 24/7, often with 100+ days of uptime (seeing the “(!)” sign in htop is so satisfying). An SFF shuts itself off nightly, provided nothing is happening on it (power is expensive).
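
    The “provided nothing is happening” part can be a simple nightly cron check before powering off. Something along these lines (the load threshold is illustrative and needs tuning per box; run as root):

        #!/usr/bin/env python3
        """Power off only if the box looks idle: low load and no logged-in users."""
        import os
        import subprocess

        LOAD_LIMIT = 0.2   # 1-minute load average treated as "idle"

        load1, _, _ = os.getloadavg()
        sessions = subprocess.run(["who"], capture_output=True, text=True).stdout.strip()

        if load1 < LOAD_LIMIT and not sessions:
            subprocess.run(["systemctl", "poweroff"], check=True)
        else:
            print(f"Staying up: load={load1:.2f}, active sessions={bool(sessions)}")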

  • LAKnerd@alien.top · 10 months ago

    My OptiPlex 9010 SFF is what I use for experimenting with services and as a staging area for moving VMs to my main lab, since the lab is air-gapped. At max load it draws 140 W, but it has a GTX 1650 that I use for gaming as well.

    Otherwise, the rest of my lab is only turned on when I’m using it, or when I forget to turn it off before I leave the house. When I get a laptop again I’ll leave it on more. None of it costs more than $150 to replace, though. It’s a Hyve Zeus, a Cisco ISR 4331, and a Catalyst 3750X, so nothing heavy, just a little loud.

  • Brilliant_Sound_5565@alien.top · 10 months ago

    I never really shut my mini PCs down. Sometimes I restart a Proxmox node if I want it to use an updated kernel, but that’s it. I don’t run large servers at home.
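
    One way to spot a node that still needs that restart is to compare the running kernel release against the newest one installed in /boot. A rough sketch (the version parsing is deliberately crude, illustration only):

        #!/usr/bin/env python3
        """Report whether the running kernel is older than the newest one in /boot."""
        import os
        import re
        from pathlib import Path

        def version_key(release):
            # Crude numeric sort key for release strings like "6.8.12-4-pve".
            return [int(n) for n in re.findall(r"\d+", release)]

        running = os.uname().release
        installed = [p.name.removeprefix("vmlinuz-") for p in Path("/boot").glob("vmlinuz-*")]
        if not installed:
            raise SystemExit("no vmlinuz-* files found in /boot")

        newest = max(installed, key=version_key)
        if version_key(newest) > version_key(running):
            print(f"Restart to pick up {newest} (running {running})")
        else:
            print(f"Already on the newest installed kernel ({running})")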