@[email protected] to [email protected]English • 10 months agoWe have to stop ignoring AI’s hallucination problemwww.theverge.comexternal-linkmessage-square208fedilinkarrow-up1529arrow-down128
minus-square@[email protected]linkfedilinkEnglish2•edit-210 months agoThey are right though. LLM at their core are just about determining what is statistically the most probable to spit out.
minus-square@[email protected]linkfedilinkEnglish0•10 months agoYour 1 sentence makes more sense than the slop above.