• chalupapocalypse@lemmy.world
    2 months ago

    They would have to hire a shitload of people to police it all, along with the rest of the questionable shit on there, like jailbait or whatever else they turned a blind eye to until it showed up on the news

    Not saying it’s right, but from a business standpoint it makes sense

    • brucethemoose@lemmy.world
      2 months ago

      Don’t they flag stuff automatically?

      Not sure what they’re using on the backend, but open-source LLMs that take image inputs are good now. Like, they can read garbled text from a meme and interpret it in context, easily. And this is apparently a field that’s been refined over years anyway, due to the legal need for CSAM detection.
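
      A minimal sketch of what that kind of automated pass could look like, using the open llava-hf/llava-1.5-7b-hf checkpoint from Hugging Face (the model, prompt, and filename here are just examples, no idea what any site actually runs on the backend):

      ```python
      # Sketch: yes/no flagging of an uploaded image with an open
      # vision-language model. Model choice and prompt are illustrative only.
      import torch
      from PIL import Image
      from transformers import AutoProcessor, LlavaForConditionalGeneration

      model_id = "llava-hf/llava-1.5-7b-hf"
      processor = AutoProcessor.from_pretrained(model_id)
      model = LlavaForConditionalGeneration.from_pretrained(
          model_id, torch_dtype=torch.float16, device_map="auto"
      )

      # LLaVA-1.5 chat format: <image> marks where the image is injected.
      prompt = (
          "USER: <image>\n"
          "Does this image, including any text in it, violate the content "
          "rules? Answer YES or NO, then explain briefly. ASSISTANT:"
      )

      image = Image.open("upload.png")  # hypothetical uploaded image
      inputs = processor(images=image, text=prompt, return_tensors="pt").to(
          model.device, torch.float16
      )
      out = model.generate(**inputs, max_new_tokens=80)
      answer = processor.decode(out[0], skip_special_tokens=True)

      # Crude routing: anything the model flags goes to a human review queue.
      flagged = "YES" in answer.split("ASSISTANT:")[-1].upper()
      print(flagged, answer)
      ```

      The point is the model reads the image and the meme text together, so humans only have to look at what it flags instead of every upload.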