he/him | any

I wrangle code, play bad music, and write things. You might find some of it here.

  • 1 Post
  • 21 Comments
Joined 2 years ago
Cake day: March 13th, 2024

  • I don’t see much to laugh at here myself. Hank may have been a massive fencesitter on AI, but I still think his reaction to Sora is completely goddamn justified. This shit is going to enable scams, misinformation, and propaganda on a Biblical fucking scale, and undermine the credibility of video evidence for good measure.

    No, it’s absolutely justified, and I agree with basically everything he says in the video (especially the title; there’s really no reason for technology like this to exist in the hands of the public, or anyone really, since there are zero upsides to it). It’s just funny to me because the video is so different from his usual calm stuff.

    But honestly, good for him and (hopefully) his community too.

  • After kinda fence-sitting on the topic of AI in general for a while, Hank Green is having a mental breakdown on YouTube over Sora 2, and it’s honestly pretty funny.

    If you’re the kind of motherfucker who will create SlopTok, you are not the kind of motherfucker who should be in charge of OpenAI.

    Not that anyone should be in charge of that shitshow of a company, but hey!

    Bonus sneer from the comment section:

    Sam Altman in Feb 2015: “Development of superhuman machine intelligence is probably the greatest threat to the continued existence of humanity.”

    Sam Altman in Dec 2015, after co-founding OpenAI: “Our goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return.”

    Sam Altman 4 days ago, on his personal blog: “we are going to have to somehow make money for video generation.”

  • AI use at my company is now mandatory. We’ve all gotten really expensive Claude or ChatGPT licenses, and (apparently) it’s being monitored whether we actually use them. For a while I resisted, telling everyone I don’t touch that shit with a ten-foot pole. Now I just write the occasional non-invasive prompt (“check for spelling errors in the comments” or “format the buffer so that all lines have less than 80 columns”) just so someone can report to the C-suite that our department is using AI.

    Meanwhile I have to do code reviews on patches that were actually written by a chatbot, and doing that sometimes takes as long as writing the whole fucking code from scratch.

    But hey, style guidelines are out the window too because the slop machine can’t follow them consistently anyway and apparently fixing that shit manually is too much to ask. So at least I never have to fight anyone again over indentation and braces.

  • My favorite bullshit test is to have the AI count letters. Something as simple as “How many A’s are in this sentence?” will have the thing give you a seemingly random number (which might sometimes even be correct), but as soon as you ask it to list them, it will have a stroke and just make up stuff that makes no sense.
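    The counting itself is deterministic and trivial for actual code, which is the whole point. A throwaway Python check (nothing fancy, counting case-insensitively):

    ```python
    sentence = "How many A's are in this sentence?"
    # Lowercase first so both 'a' and 'A' are counted.
    print(sentence.lower().count("a"))  # -> 3
    ```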

    I can’t be arsed to make an account with OpenAI or whatever, so here’s an example with Bing Chat, but this has worked on every single chatbot I’ve tested it on.

    Another fun test (as suggested by @flizzo last time I brought this up) is to have the chatbot attempt to crack a simple code. Just Caesar cipher something and ask it to crack it. Hilarity will ensue. You can even ask it to encrypt it first and then to crack it for extra fun.

    Me: Encrypt the sentence “I am a stupid roided Clippy” with the Caesar cipher.

    Clippy: [After explaining what the Caesar cipher is, which I didn’t ask for, thanks] Convert the sentence to uppercase: “I AM A STUPID ROIED CLIPPY”. Let’s apply the Caesar cipher to your sentence using a shift of 7.

    • ‘I’ becomes ‘P’
    • ‘A’ becomes ‘H’
    • ‘M’ becomes ‘T’
    • ‘Y’ becomes ‘F’

    So the encrypted sentence is: “PH HT H ZBHTLW YVPLUU”. 😊

    Okay, first of all, it dropped a letter. And the output doesn’t even follow its own shift, lol. It said ‘Y’ becomes ‘F’ and then did whatever the fuck.
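    For the record, both halves of this test take a few lines of actual Python. A quick sketch (the first print shows what a correct shift of 7 really gives for that sentence):

    ```python
    def caesar(text: str, shift: int) -> str:
        # Shift letters within A-Z by `shift`, wrapping around the alphabet;
        # spaces and punctuation pass through untouched.
        return "".join(
            chr((ord(c) - ord("A") + shift) % 26 + ord("A")) if c.isalpha() else c
            for c in text.upper()
        )

    print(caesar("I am a stupid roided Clippy", 7))
    # -> P HT H ZABWPK YVPKLK JSPWWF  (note: not what Clippy produced)

    # "Cracking" it is just trying all 26 shifts and spotting the English one:
    for shift in range(26):
        print(f"{shift:2}: {caesar('P HT H ZABWPK YVPKLK JSPWWF', -shift)}")
    ```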

    Okay, so let’s give it an easy example, and even tell it the shift. Let’s see how that works.

    This shit doesn’t even produce one correct message. Internal state or not, it should at least be able to read the prompt correctly and then produce an answer based on that. I mean, the DuckDuckGo search field can fucking do it!