As long as we can’t even define sapience in biological life - where it resides, how it works - it’s pointless to try to apply those terms to AI. We don’t know how natural intelligence works, so using what little we know about it to define something completely different is misguided.
We don’t know what causes gravity, or how it works, either. But you can measure it, define it, and even formulate a law that very precisely approximates what will happen when gravity is involved.
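(Presumably the law meant here is Newton’s law of universal gravitation, which predicts the attractive force between two masses to high precision without saying anything about what gravity actually is:

$$F = G\,\frac{m_1 m_2}{r^2}, \qquad G \approx 6.674 \times 10^{-11}\ \mathrm{N\,m^2\,kg^{-2}}$$

A purely descriptive law, arrived at by measurement alone.)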
I don’t think LLMs will create intelligence, but I also don’t think we need to solve everything about human intelligence before we can have machine intelligence.
Though in the case of consciousness - the fact of there being something it is like to be - not only do we not know what causes it or how it works, we have no way of measuring it either. There’s zero evidence for it anywhere in the universe outside our own subjective experience of it.
Pointless and maybe a little reckless.