• brucethemoose@lemmy.world
    2 months ago

    Don’t they flag stuff automatically?

    Not sure what they’re using on the backend, but open-source LLMs that take image inputs are good now. Like, they can read garbled text from a meme and interpret it with context, easily. And this is apparently a field that’s been refined over years due to the legal need for CSAM detection anyway.
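
    For anyone curious what that looks like in practice, here’s a rough sketch of feeding an image to a locally hosted open-source vision model through Ollama’s Python client. The model name, image path, and prompt are just placeholders, not whatever the platform actually runs on its backend:

    ```python
    # Rough sketch: ask a local open-source vision LLM to read and interpret an image.
    # Assumes Ollama is running locally with a vision-capable model pulled (llava here
    # is a placeholder; any open vision model would do).
    import ollama

    response = ollama.chat(
        model="llava",
        messages=[
            {
                "role": "user",
                "content": (
                    "Read any text in this image, then summarize what it is "
                    "saying in context. Note if it appears to violate site rules."
                ),
                "images": ["./meme.png"],  # placeholder path to the image being checked
            }
        ],
    )

    print(response["message"]["content"])
    ```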