• 7 Posts
  • 30 Comments
Joined 21 days ago
Cake day: February 3rd, 2026

  • I’m like 90% sure that this post is AI Slop, and I just love the irony.

    First of all, the writing style reads a lot like AI… but that's not the biggest problem. None of the mitigations mentioned have anything to do with the Huntarr problem. Sure, they have their uses, but the problem with Huntarr was that it was a vibe-coded piece of shit. Using immutable references, image signing or checking the Dockerfile would do fuck-all about the actual problem: the code itself was missing authentication on some important, sensitive API endpoints.
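    To make that class of bug concrete, here's a hypothetical sketch (made-up names, not Huntarr's actual code) of a sensitive handler that forgets to check credentials, next to the small guard that would have prevented it:

    ```python
    import hmac

    # Hypothetical API key; a real service would load this from config.
    API_KEY = "s3cret-demo-key"

    def delete_library(headers: dict) -> int:
        """Vulnerable: a destructive endpoint with no authentication check."""
        return 200  # anyone who can reach the port can call this

    def delete_library_fixed(headers: dict) -> int:
        """Fixed: reject callers that don't present the expected key."""
        supplied = headers.get("X-Api-Key", "")
        if not hmac.compare_digest(supplied, API_KEY):
            return 401  # unauthenticated
        return 200
    ```

    None of the supply-chain mitigations (signing, pinned digests) would catch this, because the signed image faithfully ships the broken code.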

    Also, Huntarr does not appear to be a Verified Publisher at all. Did their status get revoked, or was that a hallucination to begin with?

    To be fair, the last paragraph does have a point, but for a homelab I don't think it's feasible to fully review the source code of everything you install. It rather comes down to being careful with things that are new and don't have an established reputation, which is especially a problem in the era of AI coding. The rest of the *arr stack is probably much safer, because those are open source projects that have been around for a long time and have had a lot of eyes on them.


  • Worth noting that despite the headline, this has nothing to do with the huge outage at the end of 2025.

    The company said the incident in December was an “extremely limited event” affecting only a single service in parts of mainland China. Amazon added that the second incident did not have an impact on a “customer facing AWS service.”

    Neither disruption was anywhere near as severe as a 15-hour AWS outage in October 2025 that forced multiple customers’ apps and websites offline—including OpenAI’s ChatGPT.

    I would also have felt some level of schadenfreude if it turned out that any of the really big incidents at the end of 2025 were a result of management's aggressive pushes for AI coding. Perhaps it would cool off the heads of executives a bit if there were very real examples of shit properly hitting the fan…


  • The free version is mainly limited in the number of users and devices. The relaying service might be limited as well, but that should only matter if both of your clients are behind strict NAT; otherwise the WireGuard tunnels connect directly and no traffic goes through NetBird's managed servers.

    You can also self-host the control plane with pretty much no limitations, and I believe you no longer need SSO (which increased the complexity a lot for homelab setups).
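    One way to sanity-check whether your traffic is relayed is to look at the peer endpoints WireGuard reports (e.g. with `wg show`): if a peer's endpoint is the other machine's actual address rather than a relay server, the tunnel is direct. A toy sketch of that check, with invented addresses (not NetBird's real relay IPs):

    ```python
    # Toy classifier: given a peer endpoint as reported by `wg show`,
    # decide whether traffic flows directly or via a known relay server.
    # The relay address set below is made up purely for illustration.
    KNOWN_RELAYS = {"198.51.100.7"}  # hypothetical managed relay IPs

    def connection_type(endpoint: str) -> str:
        """Return 'relayed' if the endpoint is a known relay, else 'direct'."""
        host = endpoint.rsplit(":", 1)[0]  # strip the port
        return "relayed" if host in KNOWN_RELAYS else "direct"
    ```

    In the direct case, only key exchange and coordination touch the managed control plane; the tunnel payload goes peer to peer.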


  • I believe something like this is supposed to be a use case of the EU's digital identity wallet. A website is supposed to be able to receive an attestation of a user's age without necessarily getting any other information about the person.

    https://en.wikipedia.org/wiki/EU_Digital_Identity_Wallet

    Apparently the relevant feature is Electronic Attestations of Attributes (EAAs). I'm not really familiar with how it will be implemented though, and I'm a bit afraid that bureaucratic design is going to fuck this up…
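    The basic idea can be sketched in a few lines. This is only an illustration of the concept, not the actual EUDI wallet protocol: HMAC with a shared demo key stands in for the issuer's real public-key signature and selective disclosure. The verifier learns a single boolean and nothing else about the person:

    ```python
    import hashlib
    import hmac
    import json

    # Stand-in for a trusted issuer's signing key. The real scheme uses
    # public-key signatures, not a shared secret; this is just a demo.
    ISSUER_KEY = b"trusted-issuer-demo-key"

    def issue_age_attestation(over_18: bool) -> dict:
        """Issuer signs only the age claim: no name, birthdate, or ID number."""
        claim = json.dumps({"over_18": over_18})
        sig = hmac.new(ISSUER_KEY, claim.encode(), hashlib.sha256).hexdigest()
        return {"claim": claim, "sig": sig}

    def verify_age_attestation(att: dict) -> bool:
        """Website checks the signature and learns only the age bit."""
        expected = hmac.new(ISSUER_KEY, att["claim"].encode(),
                            hashlib.sha256).hexdigest()
        if not hmac.compare_digest(expected, att["sig"]):
            raise ValueError("invalid attestation")
        return json.loads(att["claim"])["over_18"]
    ```

    The point is the data minimization: the site can trust the claim because of who signed it, without ever seeing the underlying identity document.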

    Imo something like this would be orders of magnitude better than the current reliance on video identification. Not only is it much more reliable, it also wouldn't feel nearly as invasive as having to scan your face and hope the provider doesn't save it somewhere.


  • For Discord, I think it depends a lot on how active the chat is. For larger servers I absolutely agree with you that it becomes too much and things just disappear. But for smaller instances, like ones with just your closest friends, the setup works very well. In instances with less activity, I think something that pushes harder toward creating threads would mostly make discussions feel fragmented.

    At the same time, a lot of communities probably use Discord just because it's big, even if it isn't necessarily the best option.



  • Sir. Haxalot@nord.pub to Memes@sopuli.xyz: what a coincidence

    Are there really a lot of AI-generated doorbell camera videos out there? I can't remember seeing any posted, but then again maybe that just proves the point.

    Then again, the low resolution does make it much easier to hide typical artefacts and issues, so I don't think it proves anything.


  • Maybe I misunderstand what you mean, but yes, you kind of can. The problem in this case is that the user sends two requests in the same input, and the LLM isn't able to deal with conflicting commands between the system prompt and the input.

    The post you replied to kind of seems to imply that the LLM can leak info to other users, but that is not really a thing. As I understand it, when you call the LLM it's given your input plus a lot of context: a hidden system prompt, perhaps your chat history, and other data that might be relevant for the service. If everything is properly implemented, any information you give it stays in your own context, assuming nobody does anything stupid like sharing context data between users.
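    The isolation described above boils down to keying the context by user. A minimal sketch (invented structure, not any vendor's actual implementation): each call assembles the prompt from that one user's history, so nothing one user types can appear in another user's context.

    ```python
    from collections import defaultdict

    SYSTEM_PROMPT = "You are a helpful assistant."  # hidden system prompt

    # Per-user chat histories, keyed by user id so contexts never mix.
    histories: dict[str, list[str]] = defaultdict(list)

    def build_context(user_id: str, user_input: str) -> list[str]:
        """Assemble the context sent to the model for this one user's call."""
        histories[user_id].append(user_input)
        return [SYSTEM_PROMPT, *histories[user_id]]
    ```

    A leak would require a bug at this layer, e.g. looking up the wrong key or using one shared history list, not anything the model itself does.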

    What you do need to watch out for, especially with free online AI services, is that they may use anything you input for training. This is a separate process, but if you give personal information to an AI assistant it might end up in the training dataset, and parts of it could surface in the next version of the model. This shouldn't be an issue with a paid subscription or an enterprise contract, which would typically state that input data can't be used for training.