Jailbreak Anthropic’s new AI safety system for a $15,000 reward
Posted by admin | Feb 4, 2025 | Techno
In testing, the technique helped Claude block 95% of jailbreak attempts. But the process still needs more ‘real-world’ red-teaming.