Good.
Elsevier and their ilk are pure parasites. They take work paid for by public funding and charge scientists to publish, they do basically nothing, they don’t review the work, they don’t do formatting, they don’t even do so much as check for spelling mistakes. They exist purely because of a quirk of history and the difficulty of coordinating moving away from assessing academics based on prestige and impact factor of publications.
They’re parasitic organisations who try to lock up public information.
But Microsoft is cool and good.
Microsoft and OpenAI may scrape stuff but at least they don’t then try to lock everyone else out from being able to read the original.
A big step up from Elsevier
Just to emphasize this: machine learning algorithms don’t know anything. All training does is adjust constants in an equation.
Like Jon Snow, it knows nothing, and if you ask it for something complicated, it will put that on full display.
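To make that concrete, here’s a toy sketch (purely illustrative, nothing like how a real model is built, but the principle scales): “training” is just nudging a constant until an error measure shrinks.

```python
# Illustrative only: "train" a one-parameter model y = w * x by
# repeatedly adjusting the constant w to reduce squared error.
# No knowledge involved, just number-fitting.

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # (x, y) pairs with y = 2x

w = 0.0    # the "constant in the equation"
lr = 0.05  # learning rate: how big each nudge is

for _ in range(200):
    for x, y in data:
        pred = w * x
        grad = 2 * (pred - y) * x  # gradient of squared error w.r.t. w
        w -= lr * grad             # adjust the constant, nothing more

print(w)  # converges toward 2.0
```

Scale that up to billions of constants and you have an LLM; at no point does anything in there “know” what the numbers mean.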
Academic journals should be free and available to everyone; they shouldn’t be getting fed into AI without permission.
You do realize you’re contradicting yourself, right?
Nope. Journals being accessible to everyone in an archive does not mean AI models should have carte blanche to train on them.
I understand what you’re going for, but that might be tricky legally. What special status does the archive have that allows it to make all that information accessible, that an AI model wouldn’t have?
The law is fucked and needs to catch up to AI stuff. The DMCA, fair use, etc. are not built to handle scraping at the scale AI does it.
Here, FTFY. I don’t know if you recognize the dissonance between the first and the second part of your sentence.
There is no dissonance. I don’t think AI models should be getting stuff, because they’re not a public archive. They are using it to build a data model. There’s a difference between commercial use, which is the goal of AI companies, and spreading knowledge and research.
That’s not dissonance.
So your opinion is also that search engines should pay websites for the content they index? Explain to me how one is different from the other.
Man, he literally said it. Can you read? Wait sorry, you’re an AI techbro. You barely know how to write a prompt.
The goal of AI companies is to make money and give nothing back to the sources that fed their model. Search indexes have a mutually beneficial relationship with whatever they index, because they drive traffic to those websites.
I’m not sure I can make it any easier. Maybe ask chatgpt if you still don’t get it.
You managed to contradict yourself in one sentence.
Feeding it into AIs is one of the things countless researchers would love to do with the scientific literature in order to fuel more discoveries for the benefit of everyone.
But the parasitic journal owners try to heavily restrict what you can do with the text even after you’ve paid through the nose to publish and paid through the nose for subscriptions.
Well, if it’s just so people have to pay OpenAI for access to knowledge instead of having to pay Elsevier, it’s not really what I personally want, to be honest…
You’re speaking for the researchers. What they want is a free, public archive, which already exists (not legally, though). AI is not there to make an archive.
I don’t have a dog in this fight nor do I know the specifics of the relevant law here, but I would note that Susman Godfrey is probably the best litigation-focused law firm in America and it’s unlikely that they’re just moronically accepting a case without strong support in the law. Look at their track record and their attorney bios; these people absolutely do not screw around.
Distinguished lawyers and professors have done the same in the past; I wouldn’t rule it out.
People, particularly outside tech, have a tendency to imagine the chatbot is like a person they can ask to testify.