Some context about this here: https://arstechnica.com/information-technology/2023/08/openai-details-how-to-keep-chatgpt-from-gobbling-up-website-data/
The robots.txt would be updated with this entry:
User-agent: GPTBot
Disallow: /
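If you want to sanity-check that the entry actually does what you think, Python's stdlib robot parser can evaluate it for a given user agent. This is just a sketch; the example.com URLs are placeholders:

```python
from urllib.robotparser import RobotFileParser

# The same two-line policy shown above
robots_txt = """\
User-agent: GPTBot
Disallow: /
"""

rp = RobotFileParser()
rp.parse(robots_txt.splitlines())

# GPTBot is refused everywhere; crawlers without a matching entry are unaffected
print(rp.can_fetch("GPTBot", "https://example.com/any/page"))     # False
print(rp.can_fetch("Googlebot", "https://example.com/any/page"))  # True
```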
Obviously robots.txt is purely advisory, so this is meaningless against non-OpenAI scrapers or anyone who just doesn’t give a shit.
Wouldn’t people be able to file class-action lawsuits against these companies? They are literally copying content without obtaining any prior explicit user consent. I’m also pretty sure Europeans have an upper hand here thanks to GDPR’s data protection rules, since European data is being extracted/harvested and transferred to US servers.
I could be wrong though
I can understand the privacy concerns, but I feel like it’s inevitable that LLMs will be used to make lots of decisions, some possibly important, so wouldn’t you want some content included in their training? For instance, would you want an LLM to be ignorant of FOSS because all the FOSS sites blocked it, so that a child asking it for software advice gets recommended only Microsoft and Apple products?
… It’s probably going to recommend paid and non-FOSS apps and programs anyway, simply because those companies will pay to be the top suggestions, just like Google Ads. So no, I don’t think that’s a good enough reason. They can still scrape wikis if they need info on FOSS, imo. Those shouldn’t (?) block AIs and other aggregators.