I guess you think “neural networks” work nothing like a brain right?
Of course machines can read and learn, how can you even say otherwise?
I could give an LLM an original essay, and it will happily read it and give me new insights based on its analysis. That’s not a conceptual metaphor, that’s bona fide artificial intelligence.
I think anyone who thinks neural nets work exactly like a brain at this point in time is pretty simplistic in their view. Then again, you said “like a brain”, so you’re already in metaphor territory; I don’t know what you’re disagreeing with.
Learning as a human and learning as an LLM are just different philosophical categories: we have consciousness, and we don’t know if LLMs do. That’s why we use the word “like”. Kind of like “head-throbbed heart-like”.
We don’t just use probability. We can’t search a 10,000,000-dimensional parameter space, and most people don’t use linear algebra to form a sentence.
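To make the “just probability” point concrete: at each step an LLM scores every candidate token, converts the scores to a probability distribution, and samples the next token. Here is a minimal sketch of that sampling step; the vocabulary and logit values are hypothetical, and a real model produces its logits from billions of learned parameters rather than a hard-coded list.

```python
import math
import random

def sample_next_token(logits, temperature=1.0):
    """Toy sketch of LLM decoding: softmax over token scores,
    then sample one token index from the resulting distribution."""
    scaled = [score / temperature for score in logits]
    peak = max(scaled)                      # subtract max for numerical stability
    exps = [math.exp(s - peak) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]       # probabilities summing to 1
    return random.choices(range(len(probs)), weights=probs, k=1)[0]

# hypothetical 4-token vocabulary and scores
vocab = ["the", "cat", "sat", "mat"]
logits = [2.0, 0.5, 0.1, -1.0]
next_word = vocab[sample_next_token(logits)]
```

Nothing in this loop resembles conscious understanding; whether that distinction matters is exactly the philosophical disagreement above.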
In general, a simulation of something is not the same as the thing itself.