All Major Gen-AI Models Vulnerable to ‘Policy Puppetry’ Prompt Injection Attack

A newly disclosed attack technique dubbed Policy Puppetry can bypass the safety guardrails of all major generative AI models, causing them to produce harmful outputs.

The post All Major Gen-AI Models Vulnerable to ‘Policy Puppetry’ Prompt Injection Attack appeared first on SecurityWeek.
