- cross-posted to:
- technology@lemmy.zip
New research involving 776 Procter & Gamble experts suggests individuals using AI can perform as well as traditional two-person teams.
Study confirms that a randomizer creates a lot of random stuff quickly; it notes that one of the limitations is whether the random stuff is actually going to be useful.
Unfortunately, AI has the creativity of a turnip.
I know a really funny onion.
A team with one creative person and one who gets things done is not too bad. I’d take the headline with a grain of salt, since AIs are known to not always get things done and will sometimes lead their pilots around in circles for no good reason, but still, they don’t really need to be creative to beat most teams.
They might not need to be creative but they do need to produce useful results and AI is just really bad at that, especially without a human spending a large part of their time correcting and filtering its output.
I think there is a skill set required to use AI efficiently. You need to know what kinds of problems it’s suitable for, be able to recognise when it’s going in circles or hallucinating, and be able to troubleshoot and understand whatever it’s outputting. Personally, I’ve found it quite useful in many cases.
As someone who has been paired with such a person before, that means AI gets halfway there.
If I’m paired with an AI, will it also get me to do all its work as well as mine and then play politics to get itself promoted over me?
You got paired up with a vegetable as a teammate?
You’ve never had that experience?
It can definitely do creative stuff if you write the prompt properly. Even for straight up art it can be a great tool for generating reference photos.
It really can’t. It can take your original prompt and fluff it out into obnoxiously long text. It can take your visual concept and sometimes render roughly what you describe (unless you hit an odd gap in the training data; there’s a video of image generation being incapable of generating a completely full wine glass).
A pattern I’ve seen is a quick joke that might have been funny as a brief comment, but the poster asks an LLM to make a “skit” of it and posts a long text that utterly wears out the concept. The LLM is mixing text content in a way consistent with the prompt, but it’s not mixing in any creatively constructed commentary; it’s only able to drag in bits represented in the training data.
Now for image generation, this can be fine. The picture can be nice enough, in a way analogous to how meme text on well-known pictures is adequate. Your concept can only ever generate a picture, and a picture doesn’t waste the reader’s time like a wall of text does. However, if you come at an LLM with specific artistic intent, it will frustrate you, since it won’t do precisely what you want, and at some point it’s easier to just do it yourself.
I don’t think it’s easier to do it on your own, since it takes time to develop artistic talent.
I think *an artist* can do better on their own. Most people using AI image generation will take what they can get.
And scammers, slop generators, etc., will make as much as possible to see how far they can get.
Or, to rephrase that: every other employee is just as useless as AI, because HR and managers are clueless about judging performance.
Hell, put any two people on a “knowledge” task and, even if both were capable, there’s going to be one person who pretty much does the work and another who largely just sits there. Unless the task has a clear delineation, but management almost never assigns a two-person team a task that’s delineated clearly enough for both to work competently.
If both people earnestly try, they’ll just be slower as they step on each other, stall on coordination, and so on.
Plot twist, one of the two people is schizophrenic.
deleted by creator