cm0002@piefed.world to Technology@lemmy.zip, English · 2 days ago
My new laptop chip has an 'AI' processor in it, and it's a complete waste of space (www.pcgamer.com)
Mwa@thelemmy.club · 1 day ago
I'm curious whether NPUs can be used with Ollama or local LLMs. If they can't, then they're completely useless; they're also useless if you don't use AI at all.

girsaysdoom@sh.itjust.works · 19 hours ago
This might partially answer your question: https://github.com/ollama/ollama/issues/5186. It looks like the answer is "it depends on what you want to run": some configurations are partially supported, but there's no clear-cut support yet.

sheogorath@lemmy.world · 18 hours ago
I tried running some models on an Intel 155H NPU and the performance was actually worse than using the CPU directly for inference. However, it wins on the power consumption front, IIRC.
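For anyone who wants to reproduce that kind of CPU-vs-NPU comparison themselves, here is a minimal sketch using OpenVINO GenAI, which is one common way to target Intel NPUs today (this is not from the thread; the model directory name is hypothetical, and it assumes you have installed `openvino-genai` and exported a model to OpenVINO IR, e.g. with `optimum-cli export openvino`):

```python
# Hedged sketch: time the same prompt on the CPU and on the NPU (if present)
# using OpenVINO GenAI. "TinyLlama-1.1B-ov" is a placeholder for whatever
# model directory you exported; swap in your own path.
import time
import openvino_genai as ov_genai

MODEL_DIR = "TinyLlama-1.1B-ov"  # hypothetical exported-model directory
PROMPT = "Explain what an NPU is in one sentence."

for device in ("CPU", "NPU"):
    try:
        # Compiles the model for the requested device; raises if unavailable.
        pipe = ov_genai.LLMPipeline(MODEL_DIR, device)
    except RuntimeError as err:
        print(f"{device}: not available ({err})")
        continue
    start = time.perf_counter()
    output = pipe.generate(PROMPT, max_new_tokens=64)
    elapsed = time.perf_counter() - start
    print(f"{device}: {elapsed:.1f}s\n{output}\n")
```

Wall-clock time alone won't show the power-consumption advantage mentioned above; for that you'd need to watch package power (e.g. with a tool like `turbostat`) while each run is in progress.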