A Japanese publishing startup is using Anthropic’s flagship large language model Claude to help translate manga into English, allowing the company to churn out a new title for a Western audience in just a few days rather than the 2-3 months it would take a team of humans.
What is the point of AI safety if there is no intent to complete goals? Why would we need to align it with our goals if it wasn’t able to create goals and subgoals of its own?
Saying it’s just a “stochastic parrot” reflects an outdated understanding of how modern LLMs actually work. I obviously can’t convince you of something you yourself don’t believe, but I’m hoping you can keep an open mind in the future instead of rejecting the premise outright, the way early proponents of the scientific method like Descartes rejected the idea that animals could ever be considered intelligent or conscious because they were merely biological “machines”.