https://infosec.exchange/@malwaretech/114903901544041519
Here’s the article, since there is so much confusion about what we are actually talking about: https://edition.cnn.com/2025/07/23/politics/fda-ai-elsa-drug-regulation-makary
Right, I’m no expert (and very far from an AI fanboi), but not all “AI” are LLMs. I’ve heard there are good use cases in protein folding and recognising diagnostic patterns in medical images.
It fits with my understanding that you could train a similar model on a more constrained dataset than “all the English-language text on the Internet” and it might be good at certain jobs.
Am I wrong?
You are correct. However, more often than not it’s just like the image describes, and people are actually applying LLMs en masse to random problems.
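To make “not all AI are LLMs” concrete, here’s a minimal sketch of that kind of specialized model: a classifier trained on a small, constrained dataset. The dataset (scikit-learn’s bundled handwritten-digit images) is just a stand-in for the diagnostic-imaging use case mentioned above, not anything the FDA actually uses:

```python
# A minimal sketch of "non-LLM AI": a classifier trained on a small,
# constrained dataset. The digits dataset stands in for medical imaging.
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)  # 8x8 grayscale digit images
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

clf = RandomForestClassifier(random_state=0)
clf.fit(X_train, y_train)  # learns pixel patterns for each digit class

# The model can only ever output one of the ten classes it was trained
# on; it has no mechanism for inventing text, let alone studies.
print(f"test accuracy: {clf.score(X_test, y_test):.2f}")
```

That constraint is the whole point: a classifier’s output space is fixed by its training labels, which is why this kind of model doesn’t “make things up” the way a text generator can.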
What AI, apart from language generators, “makes up studies”?
deleted by creator
You know, in your haste to rage about downvotes you might have missed the part where your answer had literally nothing to do with the question you were asked. That might be a bigger factor.
DNA isn’t approving / denying new medicines.
deleted by creator
deleted by creator
deleted by creator
The problems with AI we talk about here are mostly with generative AI. Protein folding, diagnostic pattern recognition, and weather prediction work a bit differently from image-making or text-writing services.
Hallucinating studies is, however, very on brand for LLMs as opposed to other types of machine learning.
Technically, the LLMs used in generative AI fall under the umbrella term “machine learning”, except that until recently machine learning was mostly known for “the good stuff” you’re referring to (finding patterns in massive datasets, classifying data entries like images, machine vision, etc.). So I feel like continuing to use the term ML for the good stuff helps steer the conversation away from what is clearly awful about genAI.
There is no generative AI. It’s just progressively more complicated chatbots. The goal is to fool the human into believing it’s real.
It’s what Frank Herbert was warning us all about in 1965.
What was Frank on about? The Butlerian Jihad, I assume? I’ve read the book 8 times and don’t remember why the thinking machines had gone rogue.
Chatbots are genAI. NPCs, autopilot, playing games against the machine, playing chess against the machine: all of those have been called AI.
GenAI is a subset where the AI generates text or images instead of taking a deterministic action. “GenAI” describes pretty well what it does: it generates text or image output, regardless of the accuracy of that output. The model is optimised to produce output that looks like what you would expect given the input, and generally it does exactly that, even if it hallucinates facts to fit the shape of response it is supposed to give.
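A toy illustration of that point (this bigram sampler is a deliberately tiny stand-in, not how production LLMs work, but the “sample a plausible continuation” mechanic is the same idea):

```python
# Toy generative model: sample whatever continuation the training data
# makes look plausible. Nothing here checks whether the output is true.
import random
from collections import defaultdict

corpus = ("the study showed the drug was safe . "
          "the study showed the drug was effective .").split()

# Count which word follows which (a one-token "context window").
bigrams = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev].append(nxt)

def generate(start, length=8):
    word, out = start, [start]
    for _ in range(length):
        # Sample a plausible next word given only the previous word.
        word = random.choice(bigrams.get(word, ["."]))
        out.append(word)
    return " ".join(out)

# Output is a fluent-looking recombination of the training text; the
# model is rewarded for plausibility, not accuracy.
print(generate("the"))
```

Scale that mechanic up a few billion parameters and you get text that reads like a citation whether or not the cited study exists.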
That’s because “AI” has come to mean anything with an algorithm and a training set. Technologies under this umbrella are vastly different, but nontechnical people (especially the press) don’t understand the difference.
Right. You’re talking about specialized AI that are programmed and trained to perform very specific tasks, and are absolutely useless outside of those tasks.
LLMs are generalized AI which can’t do any of those things. The problem is that what they’re good at, really REALLY good at, is giving the appearance of specialized AI. Of course this is only a problem because people keep getting fooled into thinking that generalized AI can do all the same things that specialized AI does.
Obviously that should be in an advisory capacity, and not making decisions (like approving drugs for human use [which I heavily doubt was actually happening]).
Yeah, AI (not LLMs) can be a very useful tool in doing research, but this is about deciding whether a drug should be approved or not.
deleted by creator