Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this.)

  • Seminar2250@awful.systems · ↑9 · 15 hours ago

    university where the professor physically threatened me and plagiarized my work called to ask if i was willing to teach a notoriously hard computer science class (that i have taught before to stellar evals as a phd student[1]). but they had to tell me that i was their last choice because they couldn’t find a full professor to teach it (since i didn’t finish my phd there because of said abusive professor). on top of that, they offered me a measly $6,000 usd for the entire semester with no benefits, and i would have to pay $500 for parking.

    should i just be done with academia? enrollment deadlines for the spring are approaching and i’m wondering if i should just find a “regular job”, rather than finishing a PhD elsewhere, especially given the direction higher ed is going in the us.


    1. evals are bullshit for measuring how well students actually learn anything, but are great for measuring the stupid shit business idiots love, like whether students will keep paying tuition. also they can be used to explain the pitfalls of using likert scales carelessly, as business idiots do. ↩︎

    • V0ldek@awful.systems · ↑1 · 1 hour ago

      Every time I learn one single thing about how academia works in the USA I want to commit unspeakable acts of violence

  • BlueMonday1984@awful.systems (OP) · ↑6 · 14 hours ago

    Found two separate AI-related links for today.

    First, AI slop corpo Apiiro put out a study stating the obvious (that AI is a cybersecurity nightmare), and tried selling its slop agents as the solution. Apiiro was using their own slop-bots to do the study, too, so I’m taking all this with a major grain of salt.

    Second, I came across an AI-themed Darwin Awards spinoff cataloguing various comical fuck-ups caused through the slop-bots.

    • gerikson@awful.systems · ↑10 · 14 hours ago

I mean, it’s still just funny money, seeing as the creator works for some company that resells tokens from Claude, but very few people are stepping back to note the drastically reduced expectations of LLMs. A year ago, it would have been plausible to claim that a future LLM could design a language from scratch. Now we have a rancid mess of slop, and it’s an “art project”, and the fact that it’s ersatz internally coherent is treated as a great success.

      Willison should just have let this go, because it’s a ludicrous example of GenAI, but he just can’t help himself defending this crap.

    • istewart@awful.systems · ↑9 · 14 hours ago

      Top-tier from Willison himself:

      The learning isn’t in studying the finished product, it’s in watching how it gets there.

      Mate, if that’s true, my years of Gentoo experience watching compiler commands fly past in the terminal means I’m a senior operating system architect.

      • froztbyte@awful.systems · ↑8 · 13 hours ago

        which naturally leads us to: having to fix a portage overlay ~= “compiler engineer”

        wonder what simonw’s total spend (direct and indirect) in this shit has been to date. maybe sunk cost fallacy is an unstated/un(der?)accounted part in his True Believer thing?

        • BlueMonday1984@awful.systems (OP) · ↑6 · 12 hours ago

          maybe sunk cost fallacy is an unstated/un(der?)accounted part in his True Believer thing?

          Probably. Beyond throwing a shitload of cash into the LLM money pit, Willison’s completely wrapped his public image up in being an AI booster, having spent years advocating for AI and “learning” how to use it.

          If he admits he’s wrong about LLMs, he has to admit the money and time he spent on AI was all for nothing.

          • flere-imsaho@awful.systems · ↑1 · 2 hours ago

he’s claiming he takes no llm money with the exception of specific cases, but he does accept api credits and access to early releases, which aren’t payments only if you think of payments in an extremely narrow sense of real money being exchanged.

            this would in no way stand if he were, say, a journalist.

          • David Gerard@awful.systems (mod) · ↑6 · 8 hours ago

            if you call him an AI promoter he cites his carefully organised blog posts of concerns

            meanwhile he was on the early access list for GPT-5

    • blakestacey@awful.systems · ↑9 · 15 hours ago

      Good sneer from user andrewrk:

People are always saying things like, “surprisingly good” to describe LLM output, but that’s like when a 5-year-old stops scribbling on the walls and draws a “surprisingly good” picture of the house, family, and dog standing outside on a sunny day on some construction paper. That’s great, kiddo, let’s put your programming language right here on the fridge.

    • nightsky@awful.systems · ↑9 · 15 hours ago

      Sigh. Love how he claims it’s worth it for “learning”…

We already have a thing for learning; it’s called “books”, and if you want to learn compiler basics, $14,000 could buy you hundreds of copies of the dragon book.

      • istewart@awful.systems · ↑5 · edited · 10 hours ago

        $14,000 could probably still buy you a lesser Porsche in decent shape, but we should praise this brave pioneer for valuing experiences over things, especially at the all-important boundary of human/machine integration!

        (no, I’m not bitter at missing the depreciation nadir for 996-era 911s, what are you talking about)

      • froztbyte@awful.systems · ↑4 · 12 hours ago

        I’ve learned so much langdesign and stuff over the years simply by hanging around plt nerds, didn’t even need to spend for a single dragon book!

        (although I probably have a samizdat copy of it somewhere)

    • BlueMonday1984@awful.systems (OP) · ↑4 · 16 hours ago

      That the useless programming language is literally called “cursed” is oddly fitting, because the continued existence of LLMs is a curse upon all of humanity

  • BlueMonday1984@awful.systems (OP) · ↑9 ↓1 · 17 hours ago

    New Loser Lanyard (ironically called the Friend) just dropped, a “chatbot-enabled” necklace which invades everyone’s privacy and provides Internet reply “commentary” in response. As if to underline its sheer shittiness, WIRED has reported that even other promptfondlers are repulsed by it, in a scathing review that accidentally sneers its techbro shithead inventor:

    If you’re looking for some quick schadenfreude, here are the quotes on Bluesky.

      • BlueMonday1984@awful.systems (OP) · ↑5 · 10 hours ago

        Nah, call it the PvP Tag.

        These things look dorky as fuck, wearing them is a moral failing, and people (rightfully) treat it as grounds to shit on you. Might as well lean into the “shithead nerd who ruined everything” vibe with some gratuitous gaming terminology, too.

  • Architeuthis@awful.systems · ↑10 · edited · 19 hours ago

    Apparently the hacker who publicized a copy of the no-fly list was leaked an article containing Yarvin’s home address, which she promptly posted on Bluesky. Won’t link because I don’t think we’ve had the doxxing discussion, but it’s easily findable now.

    I’m mostly posting this because the article featured this photo:

    • froztbyte@awful.systems · ↑7 · 17 hours ago

      I was curious so I dug up the post and then checked property prices for the neighbourhood

      $2.6~4.8m

      being thiel’s idea guy seems to pay pretty well

  • Soyweiser@awful.systems · ↑5 · edited · 21 hours ago

    Belgian AI fails. So there is a big kids’ pop group popular in Belgium/The Netherlands, called K3. They used genAI images for their show, and apparently it created images of them wearing both bikinis and headscarves at the same time. According to this article https://www.nu.nl/muziek/6368424/k3-geschrokken-van-ongepaste-ai-beelden-tijdens-optreden-jammere-fout.html (in Dutch, sorry) a lot of people were apparently mad. (I have not seen mad people myself, so I’m not sure it actually was a problem, nor how many people were mad over which of the four main possible reasons to be mad: ‘islamisation’, immodesty, being insulting to Muslims, and feeding genAI crap to kids while you are one of the biggest acts around (and can certainly afford professional artists). So take into account this is likely a nothingburger.) I was amused to first read that they had shown inappropriate images, and then read it was just headscarves and bikinis.

    e: also note for context, nu.nl is a news site, but usually the quality of their articles isn’t the greatest: not a lot of actual journalism, a lot of bias towards the establishment, and a tendency to mainstream pro-current-social-order stuff no matter how out there (they had articles going ‘no, the inflation isn’t due to companies raising prices’, on the grounds that not all inflation is because of that), plus a tendency to post a lot of gossip-like shit. (nos.nl is our big main news site.) And their source, Shownews, is worse. It is basically on our Fox News-style TV channel. (They have an evening show, ‘Vandaag Inside’, which is basically causing the people who watch it, especially elderly people, to become nuts in the Fox News way. Weird transphobic rants, anti-woke shit, contrarian idiots who don’t realize they are idiots but think they are brave truthtellers, all brought in a ‘bar style’ sort of setting. Watched some of it (really funny to hear them say left-wingers don’t watch their shows) and every hour of it would require several hours to explain why almost everything they say is wrong.) I’m trying to provide context for where my poor country is going.

    • BlueMonday1984@awful.systems (OP) · ↑4 · 20 hours ago

      ‘islamisation’, immodesty, being insulting to Muslims, and feeding genAI crap to kids while you are one of the biggest acts around (and can certainly afford professional artists)

      Considering those AI-generated images are hitting a pentafecta(?) of ragebait, I’d be shocked if this didn’t ignite backlash.

      • Soyweiser@awful.systems · ↑5 · 17 hours ago

        Tbh, from looking it up, it seems to have just been a Muslim rights org who complained it wasn’t great (which seems fine), reading between the gossip lines. Also it seems the AI images came from the people who hired K3 to play, not from K3 themselves.

        Nobody seems to have been mad enough to firebomb OpenAI’s offices.

  • CinnasVerses@awful.systems · ↑10 · edited · 14 hours ago

    When it started in ’06, this blog was near the center of the origin of a “rationalist” movement, wherein idealistic youths tried to adapt rational styles and methods. While these habits did often impress, and bond this community together, they alas came to trust that their leaders had in fact achieved unusual rationality, and on that basis embraced many contrarian but not especially rational conclusions of those leaders. - Robin Hanson, 2025

    I hear that even though Yud started blogging on his site, and even though George Mason University-type economics is trendy with EA and LessWrong, Hanson never identified himself with EA or LessWrong as movements. So this is like Gabriele D’Annunzio insisting he is a nationalist, not a fascist; not Nassim Nicholas Taleb denouncing phrenology.

    • scruiser@awful.systems · ↑6 · edited · 1 day ago

      He had me in the first half; I thought he was calling out rationalists’ problems (even if dishonestly disassociating himself from them). But then his recommended solution was prediction markets (a concept which rationalists have in fact been playing around with, albeit at a toy-model level with fake money).

        • scruiser@awful.systems · ↑4 · 9 hours ago

          To add to blakestacey’s answer, his fictional worldbuilding concept, dath ilan (which he treats like rigorous academic work to the point of citing it in tweets), uses prediction markets in basically everything, from setting government policy to healthcare plans to deciding what restaurant to eat at.

          • scruiser@awful.systems · ↑3 · 9 hours ago

            Every tweet in that thread is sneerable, whether from failing to understand the current scientific process, vastly overestimating how easily cutting-edge research can be turned into cleanly resolvable predictions, or assuming prediction markets are magic.

            • Architeuthis@awful.systems · ↑1 · 33 minutes ago

              assuming prediction markets are magic

              Bet it’s more like assuming it will incentivize people with magical predicting genes to reproduce more so we can get a kwisatz haderach to fight AI down the line.

              It’s always dumber than expected.

            • istewart@awful.systems · ↑3 · 9 hours ago

              Pretty easy to look at actually-existing instances and note just how laughable “traders trusted us enough for the market to be liquid” is.

              This is just another data point begging what I believe to be the most important question an American can ask themselves right now: why be a sucker?

          • CinnasVerses@awful.systems · ↑4 · 14 hours ago

            So Hanson is dissing one of the few movements that supports his pet contrarian policy? After the Defence Department lost interest the only people who like prediction markets seem to be LessWrongers / EAs / tech libertarians / crypto bros / worshippers of Friend Computer.

      • blakestacey@awful.systems · ↑8 · edited · 1 day ago

        Also a concept that Scott Aaronson praised Hanson for.

        https://web.archive.org/web/20210425233250/https://twitter.com/arthur_affect/status/994112139420876800

        (Crediting the “Great Filter” to Hanson, like Scott Computers there, sounds like some fuckin’ bullshit to me. In Cosmos, Carl Sagan wrote, “Why are they not here? There are many possible answers. Although it runs contrary to the heritage of Aristarchus and Copernicus, perhaps we are the first. Some technical civilization must be the first to emerge in the history of the Galaxy. Perhaps we are mistaken in our belief that at least occasional civilizations avoid self-destruction.” And in his discussion of abiogenesis: “Life had arisen almost immediately after the origin of the Earth, which suggests that life may be an inevitable chemical process on an Earth-like planet. But life did not evolve beyond blue-green algae for three billion years, which suggests that large lifeforms with specialized organs are hard to evolve, harder even than the origin of life. Perhaps there are many other planets that today have abundant microbes but no big beasts and vegetables.” Boom! There it is, in only the most successful pop-science book of the century.)

        • swlabr@awful.systems · ↑8 · edited · 1 day ago

          Most famously, Robin is […] also the inventor of futarchy

          A futarchy, you say? Tell me more, Robin Hanson

        • scruiser@awful.systems · ↑9 · 1 day ago

          He’s the one that used the phrase “silent gentle rape”? Yeah, he’s at least as bad as the worst evo-psych pseudoscience misogyny posted on lesswrong, with the added twist he has a position in academia to lend him more legitimacy.

          • swlabr@awful.systems · ↑6 · 1 day ago

            I started reading his post with that title to refresh myself. Just to get your feet wet:

            DEC 01, 2010

            Added Oct ’13: <insert content warning here>

            Man, what happened in the three years it took to get a content warning?

            Anyway I skimmed it, the rest of the post is a huge pile of shit that I don’t want to read any more of, I’m sure it’s been picked apart already. But JFC.

    • Tar_Alcaran@sh.itjust.works · ↑6 · 2 days ago

      I deeply regret that, in the past, I made posts proclaiming LessWrong as amazing.

      They do still have a decent article here and there, but that’s like digging for strawberries in a pile of shit. Even if you find one, it won’t be great.

    • fullsquare@awful.systems · ↑3 · 21 hours ago

      i’d like to say “there is great fitna among republicans” but i can’t; it feels like it’ll blow over, with thielbux-recipient freaks just becoming more visible, and it’s not like trump cares about the common clay of the new west over his deals with billionaires either way

    • bigfondue@lemmy.world · ↑3 · edited · 20 hours ago

      I get a bit of Schadenfreude from seeing everyone who cozies up to Trump eventually get turned against. The only people who have stuck around from the first term seem to be the Steves (Miller and Cheung).

  • wizardbeard@lemmy.dbzer0.com · ↑17 · 2 days ago

    Some poor souls who arguably have their hearts in the right place definitely don’t have their heads screwed on right, and are trying to do hunger strikes outside Google’s AI offices and Anthropic’s offices.

    https://programming.dev/post/37056928 contains links to a few posts on X by the folks doing it.

    Imagine being so worried about AGI that you thought it was worth starving yourself over.

    Now imagine feeling that strongly about it and not stopping to ask why none of the ideologues who originally sounded the alarm bells about it have tried anything even remotely as drastic.

    On top of all that, imagine being this worried about what Anthropic and Google are doing in the research of AI, hopefully being aware of Google’s military contracts, and somehow thinking they give a singular shit if you kill yourself over this.

    And… where are the people outside fucking OpenAI? Bets on this being some corporate shadowplay shit?

    • scruiser@awful.systems · ↑3 ↓1 · 9 hours ago

      This feels like a symptom of liberals having a diluted incomplete understanding of what made past movements that utilized protest succeed or fail.

      • V0ldek@awful.systems · ↑1 · 54 minutes ago

        This is always what you get when your fundamental belief is “capitalism good”: no matter how close you get to “and the problem is capitalism”, you can never actually get there, like in a crazy version of edging.

        What I’m saying is that libs are philosophical gooners

    • Soyweiser@awful.systems · ↑4 · 21 hours ago

      Lol at the critihype from the BussyGyatt person in that post. Come on, LLMs will not become AGI, and that leaves LLMs ‘predicting shutdowns will be bad for their goals’, which is only so because people like this have kept saying it and the LLMs have trained on it. If you were really worried about this, you’d stop feeding that data to them.

    • YourNetworkIsHaunted@awful.systems · ↑10 · 2 days ago

      I mean, I try not to go full conspiratorial everything-is-a-false-flag, but the fact that the biggest AI company, the one that has been explicitly trying to create AGI, isn’t getting the business here is incredibly suspect. On the other hand, it feels like anything that publicly leans into the fears of an evil computer God would be a self-own when they’re in the middle of trying to completely ditch the “for the good of humanity, not just immediate profits” part of their organization.

      • JFranek@awful.systems · ↑6 · 2 days ago

        It’s two guys in London and one guy in San Francisco. In London there’s presumably no OpenAI office; in SF you can’t be in two places at once, and Anthropic has more true believers/does more critihype.

        Unrelated: a few minutes before writing this, a bona fide cultist replied to the programming.dev post. A cultist with the handle “BussyGyatt@feddit.org”. Truly the dumbest timeline.

      • bigfondue@lemmy.world · ↑7 · edited · 2 days ago

        Didn’t OpenAI just file court documents claiming that their opposition is funded by competitors? Accusing someone else of what they themselves are doing seems to be a pretty popular strategy these days.

      • Soyweiser@awful.systems · ↑6 · edited · 2 days ago

        I don’t know anything about the locations of any offices, but could it be that OpenAI just didn’t have any local ones? Asking them why not all of the companies would be a good journalist question.

        But otoh it is just two or three of them, and the second one’s photo gives off a weird vibe. Why is he smiling like it is a joke?

  • BlueMonday1984@awful.systems (OP) · ↑8 · 2 days ago

    Starting this Stubsack off, I found a Substack post titled “Generative AI could have had a place in the arts”, which attempts to play devil’s advocate for the plagiarism-fueled slop machines.

    Pointing to one particular lowlight, the author attempts to conflate AI with actually useful tech to try and make an argument:

    While the idea of generative AI “democratizing” art is more or less a meme these days, there are in fact AI tools that do make certain artforms more accessible to low-budget productions. The first thing to come to mind is how computer vision-based motion capture give 3D animators access to clearer motion capture data from a live-action actor using as little as a smartphone camera and without requiring expensive mo-cap suits.

      • corbin@awful.systems · ↑5 · 1 day ago

        I think that you have useful food for thought. I think that you underestimate the degree to which capitalism recuperates technological advances, though. For example, it’s common for singers supported by the music industry to have pitch correction which covers up slight mistakes or persistent tone-deafness, even when performing live in concert. This technology could also be used to allow amateurs to sing well, but it isn’t priced for them; what is priced for amateurs is the gimmicky (and beloved) whammy pedal that allows guitarists to create squeaky dubstep squeals. The same underlying technology is configured for different parts of capitalism.

        From that angle, it’s worth understanding that today’s generative tooling will also be configured for capitalism. Indeed, that’s basically what RLHF does to a language model; in the jargon, it creates an “agent”, a synthetic laborer, based on desired sales/marketing/support interactions. We also have uses for raw generation; in particular, we predict the weather by generating many possible futures and performing statistical analysis. Style transfer will always be useful because it allows capitalists to capture more of a person and exploit them more fully, but it won’t ever be adopted purely so that the customer has a more pleasant experience. Composites with object detection (“filters”) in selfie-sharing apps aren’t added to allow people to express themselves and be cute, but to increase the total and average time that users spend in the apps. Capitalists can always use the Shmoo, or at least they’ll invest in Shmoo production in order to capture more of a potential future market.

        So, imagine that we build miniature cloned-voice text-to-speech models. We don’t need to imagine what they’re used for, because we already know; Disney is making movies and extending their copyright on old characters, and amateurs are making porn. For every blind person using such a model with a screen reader, there are dozens of streamers on Twitch using them to read out donations from chat in the voice of a breathy young woman or a wheezing old man. There are other uses, yes, but capitalism will go with what is safest and most profitable.

        Finally, yes, you’re completely right that e.g. smartphones completely revolutionized filmmaking. It’s important to know that the film industry didn’t intend for this to happen! This is just as much of an exaptation as capitalist recuperation, and we can’t easily plan for it because of the same difficulty in understanding how subsystems of large systems interact (y’know, plan interference).

      • FredFig@awful.systems · ↑8 · 2 days ago

        I think it’s a piece in the long line of “AI means A and B, and A is bad and B can be good, so not all AI is bad”, which isn’t untrue in the general sense, but serves the interest of AIguys who aren’t interested in using B, they’re interested in promoting AI wholesale.

        We’re not in a world where we should be offering AI people any carveout; as you mention in the second half, they aren’t interested in being good actors, they just want a world where AI is societally acceptable and they can become the Borg.

        More directly addressing your piece, I don’t think the specific examples you bring up are all that compelling. Or at least, not compared to the cost of building an AI model, especially when you bring up how it’ll be cheaper than traditional alternatives.

  • froztbyte@awful.systems · ↑4 · 2 days ago

    gigabyte selling shovels (and not even just random shovels, specialty shovels that need a fixed type of mobo to use)

    not gonna spend much effort on it now but if someone runs into an actual worthwhile review showing training performance numbers I’d be keen to see (my expectations are that it still does not do very much, and that runtime quality still underperforms relative to VC-subsidised platforms)

    • nightsky@awful.systems · ↑5 · 2 days ago

      Fascinating how that product page is full of marketing fluff, but nowhere does it say what this actually is…? What does it do? It’s some kind of… memory expansion? But what’s beneath the big heatsink then? All they say is that it’s somehow amazing:

      In the age of local AI, GIGABYTE AI TOP is the all-round solution to win advantages ahead of traditional AI training methods. It features a variety of groundbreaking technologies that can be easily adapted by beginners or experts, for most common open-source LLMs, in anyplace even on your desk.

      A variety of groundbreaking technologies, uh huh, okay then. In so many ways this is the perfect companion product for AI.

      • istewart@awful.systems · ↑5 · 1 day ago

        Oh, it’s a CXL board, Compute Express Link. Basically a way to attach DRAM to PCI Express. I know some people working on this stuff for one of the big vendors, but in that context it was a rack-scale box capable of handling multiple terabytes’ worth of DIMMs. Having this as a desktop expansion card seems like a bit of a marginal application, but Gigabyte’s done weird shit before. For instance, I have an AMD-compatible Thunderbolt 3 card that was only made in limited quantities by them and ASRock.