shish_mish to [email protected] • English • 1 year ago
Researchers jailbreak AI chatbots with ASCII art -- ArtPrompt bypasses safety measures to unlock malicious queries
www.tomshardware.com • 25 comments
minus-square@[email protected]linkfedilinkEnglish1•1 year agoyes i am aware? are they being used by openai?
minus-square@[email protected]linkfedilinkEnglish0•1 year agoYes, an exploitative thing that mostly consists of free labour for big orgs.
Bug bounty programs are a thing.
yes i am aware? are they being used by openai?
Yes, an exploitative thing that mostly consists of free labour for big orgs.