Would we know it if we saw it? Draw two eye spots on a wooden spoon and people will anthropomorphise it. I suspect we’ll have dozens of false starts and breathless announcements of AGI, but we may never get there.
More interestingly, would we want it if we got it? How long will its creators rally to its side if we throw yottabytes of data at our civilization-scale problems and the machine comes back with “build trains and eat the rich instead of cows?”
That seems beside the point when the question is about whether we’re getting closer to it or not.
How can we ask “are we closer” if we don’t know what the destination is?
LLMs might still end up being interesting special-purpose systems, perhaps with fairly broad applications, but pointing in a different direction from wherever true AGI ends up coming from.