Jailbreak Anthropic’s new AI safety system for a $15,000 reward
In testing, the technique helped Claude block 95% of jailbreak attempts. But the process still needs more ‘real-world’ red-teaming.