I fail to see how training an LLM with the material I choose is any different than me studying that material. Artists are just mad I can make awesome pictures on my graphics card.
What? We as humans literally learn through pattern recognition. How is it different from what a machine is doing? Of course it is not exactly the same process our brains use, but it is by no means a "metaphor".
I guess you think “neural networks” work nothing like a brain right?
Of course machines can read and learn, how can you even say otherwise?
I could give an LLM an original essay, and it will happily read it and give me new insights based on its analysis. That's not a conceptual metaphor; that's bona fide artificial intelligence.
I think anyone who believes neural nets work exactly like a brain at this point in time has a pretty simplistic view. Then again, you said "like a brain", so you're already in metaphor territory, and I don't know what you're disagreeing with.
Learning as a human and learning as an LLM are just different philosophical categories. We have consciousness; we don't know whether LLMs do. That's why we use the word "like". Kind of like "head-throbbed heart-like".
We don't just use probability. We can't parse 10,000,000-dimensional parameter spaces. Most people don't use linear algebra.
A simulation of something is not equal to that something in general.
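To make the contrast above concrete, here is a minimal, purely illustrative sketch of what an LLM actually does at each step: a matrix multiply followed by a softmax to get a probability distribution over the next token. The sizes and random weights are invented for illustration; real models have billions of parameters, but the mechanics are the same linear algebra and probability being discussed.

```python
import numpy as np

# Toy "next-token" step. Everything reduces to linear algebra plus
# a probability distribution; sizes and weights are illustrative only.
rng = np.random.default_rng(0)

vocab_size = 5      # pretend vocabulary of 5 tokens
hidden_dim = 8      # pretend hidden-state size

hidden_state = rng.normal(size=hidden_dim)            # model's internal state
output_weights = rng.normal(size=(vocab_size, hidden_dim))

logits = output_weights @ hidden_state                # linear algebra
probs = np.exp(logits) / np.exp(logits).sum()         # softmax -> probabilities

next_token = int(np.argmax(probs))                    # most likely token
print(probs.round(3), next_token)
```

Whether you call that "learning" or "simulation of learning" is exactly the dispute in this thread; the code only shows what the mechanism is.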
Don't quite agree with the above poster, but this is the tool they're referring to, and they're making the argument that it's a metaphor (just the name of the tool) and that there isn't a direct connection.
I think they were talking about human slaves, not computer networks. The person above them asked why humans can learn from copyrighted materials but machines aren't allowed to. The next person asked why we can own furniture but not people. To me this seems like they are saying we don't own slaves for the same reason computer programs shouldn't be allowed to learn from copyrighted materials. I'd say we don't own slaves because as a society we value and believe in individuality, personal choice, and bodily autonomy, and I don't see how these relate to dictating what content you train computer models on.
Why is it okay to own furniture, but not people?
By the way:
There are no machines that read and learn. "Machine learning" is a technical term that has nothing to do with actual learning.
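For what the technical term does mean: "learning" in machine learning is numerical parameter fitting. The toy sketch below (invented data, one parameter) adjusts a weight by gradient descent to reduce error, which is the whole mechanism behind the word.

```python
# Illustrative only: "machine learning" as adjusting a numeric
# parameter to reduce an error measure. Fit y = w*x to data
# generated with w = 3.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [3.0, 6.0, 9.0, 12.0]   # ground truth: y = 3x

w = 0.0                      # initial guess
lr = 0.01                    # learning rate

for _ in range(500):
    # gradient of mean squared error with respect to w
    grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
    w -= lr * grad           # the "learning" update

print(round(w, 3))           # converges near 3.0
```

Whether that process deserves the word "learning" is the philosophical question being argued here; the math itself is not in dispute.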
What’s the connection between owning slaves and using computer tools? I don’t really follow this jump in logic.
https://en.m.wikipedia.org/wiki/Master/slave_(technology)
Have you ever considered the possibility that unliving objects are not, in fact, people?
Neural networks aren’t literally bundles of biological neurons but that doesn’t mean they’re not learning.
That's exactly what large language models do.
I can see how you would come to that conclusion, given that you clearly are incapable of either.