🍹Early to RISA 🧉@sh.itjust.works to Greentext@sh.itjust.works · 2 days ago
Anon finds a bot (sh.itjust.works)
atthecoast@feddit.nl · 2 days ago
If you then train new bots on the generated content, the models will degrade, yes?
frog@feddit.uk · 2 days ago
If you look at a lot of new posts, they're actually highly upvoted old posts. So the bots probably stay the same.
PrimeMinisterKeyes@leminal.space · 2 days ago
The bots know what is bot content and what is not. Actual users don't.
Lvxferre [he/him]@mander.xyz · 2 days ago
"The bots know what is bot content and what is not."
Probably not. It's way easier to generate bot content than to detect it. Unless they're coming from the same group, but I find this unlikely.