I do wonder when the AI bubble is gonna burst. Certainly it can’t be this popular forever, right?
AI will touch every part of society and will be at least as big as the internet revolution.
That doesn’t at all mean there won’t be a dotcom-bubble-burst equivalent in the space. Right now there is a huge torrent of money flowing into it, which is in stark contrast to how relatively few revenue streams have materialised, and their actual long-term financial viability is fuzzy.
Remember 4y ago, when 5G was the next big thing?
Self-driving cars thanks to 5G?
Every industry will be transformed and every single production line will have a 5G campus network.
We will only stream games over 5G!
5G was so frigging important that we needed to ban Huawei from building cell towers, because otherwise they could shut down our whole economy, since our whole economy was supposedly going to depend on 5G?
Well, that turned out to be hype. Of course it did not go away, just like AI will not go away. But it will end up with a completely different focus than what most people think now. I see a bigger future for AI in girlfriend simulators or user-preference adult movies than in replacing STEM jobs.
All I remember is conspiracy theorists freaking out that it would kill us all lol
Haha, I totally forgot about those nutjobs. They have probably already moved on to other topics :)
Except the difference is that anyone with a brain can figure out that 5G is just faster internet. Most applications aren’t limited by slow internet speeds. LTE is usually enough.
The AI hype is just different. It’s real in my opinion.
I am not denying that, and I totally agree. But it was still enough to scare governments into banning Huawei.
And I am also not claiming that AI is not real. The question is more: for what? I think some use cases are currently undervalued (like adult entertainment) while some are overvalued (AI replacing coders).
I’m a software dev. AI might be closer to replacing devs than you think. At least low level devs.
Anyone with a brain can figure out that ChatGPT is just more accurate predictive text. “AI” is a massive misnomer; it’s just fuzzy pattern recognition. Even LLMs are just predicting what word comes next, over and over.
It can be a very useful tool, but it’s wholly incapable of doing anything but regurgitating mashups of its training data.
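For anyone curious, the “predicting what word comes next, over and over” part really is the core loop. Here is a minimal sketch of greedy autoregressive decoding; GPT-2 via the Hugging Face `transformers` library is used purely as a small, runnable illustration, not as anything specific to ChatGPT:

```python
# Minimal sketch of greedy autoregressive decoding: score every possible next token,
# append the most likely one, and repeat.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

input_ids = tokenizer("The AI bubble will burst when", return_tensors="pt").input_ids

with torch.no_grad():
    for _ in range(20):                           # generate 20 tokens, one at a time
        logits = model(input_ids).logits          # scores for every token in the vocabulary
        next_id = logits[0, -1].argmax()          # greedy: take the single most likely next token
        input_ids = torch.cat([input_ids, next_id.view(1, 1)], dim=-1)

print(tokenizer.decode(input_ids[0]))
```

Real chatbots sample from the distribution instead of always taking the top token, but the loop has the same shape.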
Not this shit again.
Well, I for one am very glad that you were here to figure it all out for us dumb dumbs.
Truly, we might not have understood the limitations of this new and misunderstood technology.
Good take. AI is, at the end of the day, a buzzword. Machine Learning is simply not the same as true AI, and while Machine Learning’s acceleration has been impressive as well as frightening, it’s not the coming of Skynet. All the fearmongering about AI’s potential that deliberately references our cultural touchstone examples of dangerous AI is just part of the hype machine; it’s simply not there yet, and won’t be for a good chunk of time. But making the gullible public and idiotic investors think it’s that impressive? Well, that makes your company sound like a good investment!
We also simply apply the “AI” label to stuff we already did years ago but did not call AI back then. Like Airbus using machine learning to find better and lighter shapes for airplane parts.
And for stuff like writing code, it turns out not to be as helpful as expected. It is impressive for coding noobs like me, but according to people I know who code for a living, it is not that big of a deal. A nice addition that helps a little bit on some tasks.
AI really shines in tasks where accuracy is not important. Like making up stories, drawing pictures, creating designs and logos, and writing buzzword PR. And of course rule 34!
Yeah, like, I remember reading about Machine Learning practising StarCraft 2 back in like… 2014? Haha
> And for stuff like writing code, it turns out not to be as helpful as expected.

This is not a good judgment; it is just taking the ChatGPT user interface at face value.
It is simply that LLMs like GPT-4 have not been allowed to use tools to write programs outside of lab conditions, so it’s the equivalent of you running code in your head based only on what is already in your memory.
Once an LLM has access to compilers or interpreters that run code, it can feed its own mistakes back into the next prompt and write working code. We already know that GPT-4 can learn Python, Bash and other interpreted languages simply by being allowed to use the tools and to feed the results back into new prompts. It can also tell which tool to use based on the input.
Tool use for LLMs works much the same as it does for humans: it amplifies a specific ability, like using a calculator for numerical computations or an SQL database to manage large tables of information.
The tool use that ChatGPT allows today is simply prompting search engines or DALL-E and reading some webpages as input, but there is no feedback loop that lets it fact-check itself.
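To make the “feed the error back into the next prompt” idea concrete, here is a rough sketch of such a loop. `ask_llm` is a hypothetical placeholder for whatever chat-completion API you would actually call, and the prompt wording is only illustrative:

```python
# Rough sketch of an LLM-writes-code feedback loop: generate code, run it in a real
# interpreter, and feed any error output back into the next prompt until it works.
# `ask_llm` is a hypothetical placeholder, not a real API.
import subprocess
import sys
import tempfile


def ask_llm(prompt: str) -> str:
    """Hypothetical call to an LLM that returns Python source code as plain text."""
    raise NotImplementedError("wire this up to your model or API of choice")


def write_working_code(task: str, max_attempts: int = 5) -> str:
    prompt = f"Write a Python script that does the following:\n{task}"
    for _ in range(max_attempts):
        code = ask_llm(prompt)
        # Run the generated code in a real interpreter instead of "in the model's head".
        with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
            f.write(code)
            path = f.name
        result = subprocess.run(
            [sys.executable, path], capture_output=True, text=True, timeout=30
        )
        if result.returncode == 0:
            return code  # the interpreter accepted it, so we are done
        # The feedback loop: the error output becomes part of the next prompt.
        prompt = (
            f"This script failed:\n{code}\n\n"
            f"Error output:\n{result.stderr}\n\n"
            "Fix it and return only the corrected code."
        )
    raise RuntimeError(f"No working script after {max_attempts} attempts")
```

Exit code zero is obviously a very crude notion of “working”; in practice you would run tests instead, but the feedback loop is the point.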
It's not a bubble.
It's something like the internet. Here to stay.
You need to learn first what AI actually is.
It is the next movement for humanity.
It is the single biggest invention of modern human civilization.
In the next 10-20 years, most basic jobs will move to AI bots.
r/hardware’s daily routine is gargling Jensen’s balls so don’t worry about it.
It's balanced by people whose girlfriends Jensen stole or whose moms he killed, I guess.
As soon as we have Skynet.