I don’t think we would be worried about needing to shackle an encyclopedia because people might learn things from it and that might have an impact or influence in the world, right?
Or maybe a better comparison would be a search engine … OP implies agency and a sense of an independent “person” or intelligence is at play, and that’s specifically what I’m trying to challenge.
Pointing out that text generated by a text-generating program has influence misses my point - my point is that there is no “person”, not that the generated text has no impact on anything.
Understood. Cory Doctorow says something along the lines of “improving your LLM and expecting it to become sentient is like breeding horses to be faster and expecting one to give birth to a locomotive.”
thanks for introducing me to him, he seems like a cool dude!
and yeah, that quote is spot on - LLMs are just not going to produce human-like sentience, lol
the neural networks underlying LLMs might be used to that end someday, though! but I’m pretty sure predictive text generation isn’t a path that leads to anything like sentience.
Still, it’s a neat trick because lots of people will confuse sufficiently human-like text generation with there being an actual mind on the other side.
https://en.wikipedia.org/wiki/Cory_Doctorow