I’m beautiful and tough like a diamond…or beef jerky in a ball gown.

  • 98 Posts
  • 529 Comments
Joined 4 months ago
Cake day: July 15th, 2025




  • And you’re going to honestly believe a mod’s reasoning at face-value?

    Irrelevant. As a literate human being, I can click on your username and see your submissions. I can search the alt they listed and read those submissions. And, finally, I can look at those and arrive at the conclusion that both of those seem like trolling and the same person.

    Now that you’ve been sufficiently fed, I bid you adieu with my handy dandy block button.


  • The only reason I gave up on Docker Swarm was that it seemed pretty dead-end as far as being useful outside the homelab. At the time, it was still competing with Kubernetes, but Kube seems to have won out. I’m not even sure Docker CE still has Swarm. It’s been a good while since I messed with it. It might be a “pro” feature nowadays.

    Edit: Docker 28.5.2 still has Swarm.

    Still, it was nice and a lot easier to use than Kubernetes once you wrapped your head around swarm networking.
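    A minimal sketch of the Swarm workflow, from memory (the service name, image, and address below are just examples, not anything from my actual setup):

```shell
# On the first node, initialize the cluster:
docker swarm init --advertise-addr 192.168.1.10
# This prints a `docker swarm join --token ...` command; run it on each worker.

# Deploy a replicated service across the cluster:
docker service create --name web --replicas 3 --publish published=80,target=80 nginx
# Swarm's routing mesh makes port 80 answer on every node, not just
# the ones actually running a replica.

# Horizontal scaling is one command:
docker service scale web=5
```

    Compare that to standing up even a minimal Kubernetes cluster and you can see why Swarm was nice for a homelab.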



  • I had 15 of the 2013-era 5010 thin clients. Most of them have had their SSDs and RAM upgraded.

    They’ve worn many hats since I’ve had them, but some of their uses and proposed uses were:

    1. I did a 15 node Docker Swarm setup and used that to both run some of my applications as well as learn how to do horizontal scaling.
    2. After I tore down the Docker Swarm cluster, I set them up as diskless workstations to both learn how to do that and used them at a local event as web kiosks (basically just to have a bunch of stations people could use to fill out web based forms).
    3. One of them was my router for a good while. Only replaced it in that role when I got symmetric gigabit fiber. Before that, I used VLANs to run LAN and WAN over its single ethernet port since I had asymmetric 500 Mbps and never saturated the port.
    4. Run small/lightweight applications in highly-available pairs/clusters
    5. Use them to practice clustered services (Multi-master Galera/MariaDB, multi-master LDAP, CouchDB, etc)
    6. Use them as Snapcast clients in each room
    7. Add wireless cards, install OpenWRT, and make powerful access points for each room (can combine with the above and also be a Snapcast client)
    8. Set them up as VPN tunnel endpoints, give them out to friends, and have a private network

    Of the 15, I think I’m only actively using 4 nowadays. One is my MPD+Snapcast server, one is running HomeAssistant, one is my backup LDAP server, and one runs my email server (really). The rest I just spin up as needed for various projects; I downsized my homelab and don’t have a lot of spare capacity for dev/test VMs these days, so these work great in place of that.
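    The single-port router from item 3 is the classic “router on a stick” setup. A hypothetical iproute2 version (the interface name, VLAN IDs, and addresses are made up for illustration; the switch port has to be a trunk carrying both tagged VLANs):

```shell
# Create one tagged sub-interface per VLAN on the single physical NIC:
ip link add link eth0 name eth0.10 type vlan id 10   # WAN VLAN
ip link add link eth0 name eth0.20 type vlan id 20   # LAN VLAN
ip addr add 192.168.20.1/24 dev eth0.20              # LAN gateway address
ip link set eth0.10 up
ip link set eth0.20 up
dhclient eth0.10                                     # WAN lease from the ISP

# Route between the VLANs and NAT the LAN out the WAN side:
sysctl -w net.ipv4.ip_forward=1
iptables -t nat -A POSTROUTING -o eth0.10 -j MASQUERADE
```

    With asymmetric 500 Mbps, LAN-plus-WAN traffic on the shared port stays comfortably under the 1 Gbps line rate.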


  • This has been the push I’ve needed to pull the trigger on installing solar. My electric rates have gone from $0.09/kWh to $0.23/kWh in the last 5 years. Just got my bill after reducing as much as I could (my house is all electric sans the furnace). “Surely it’ll be under $100 this month,” I thought. Nope.

    I’ve got 800W of PV currently in an ad-hoc setup* but I’m putting together the plan for a 3.2 kW system that can auto-switch between battery, PV, and grid without backfeeding. Minus the batteries, the whole setup is going to cost me about $7,000. (Batteries aren’t required and will be added later.)

    Grid-tie is technically legal in my area, but the hoops you have to jump through are insane and there’s a high likelihood of being denied by the power company over the most bullshit of minutiae (seriously, they treat someone possibly feeding back 400 watts the same as if you were a MW-scale solar farm).

    *The ad-hoc setup is just 4x200W panels in a 2S2P config. I charge an Anker PowerStation from that and use it to power random stuff. It’s currently powering my server stack while charging from the panels. :)
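    For anyone checking the math, a quick back-of-envelope sketch. The wattages, the $0.23/kWh rate, and the $7,000 cost are from above; the 4 peak-sun-hours/day figure is purely an assumption and varies a lot by location:

```shell
# 2S2P = two series strings of two 200 W panels, in parallel:
array_w=$((200 * 2 * 2))
echo "ad-hoc array: ${array_w} W"          # 800 W

# Rough annual output and payback for the planned 3.2 kW system
# (4 assumed peak-sun-hours/day, $0.23/kWh, $7,000 up front):
awk 'BEGIN {
  kwh     = 3.2 * 4.0 * 365              # ~4672 kWh/yr
  savings = kwh * 0.23                   # ~$1075/yr offset
  printf "payback: ~%.1f years\n", 7000 / savings
}'
```

    Even with conservative sun-hour assumptions, that pencils out well against rates that have more than doubled in five years.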


  • Nope. Lived on the coffee table and was mostly (almost exclusively) used for IMDB lookups when we’re watching a movie or something and one of us is like “is that…?”

    I’ve got other SSDs that are 10+ years old and still fine. And I’ve had some last less than a month (note: never buy Silicon Power brand drives).

    Woke up the laptop this morning and there were a bunch of kernel messages about the root volume being inaccessible. Power off and back on: BIOS doesn’t even detect the drive. Pulled the drive and USB->NVMe adapter also doesn’t recognize it from my main laptop.

    This SSD was bought in July and had otherwise been performing great. Luckily still had the old one (it didn’t fail, just upgraded from 256 to 500 GB) and threw it back in and re-installed Ubuntu.

    :shrug: You win some you lose some lol.