What Is LLM Poisoning?
A joint Anthropic, UK AI Security Institute, and Alan Turing Institute study found that as few as 250 malicious documents can plant a backdoor in any large language model—whether it’s a 600M-parameter mini-model or a 13B-parameter giant.
Diwali Deal
Snag 20% off all live courses with code AI20! Enroll at https://www.krishnaik.in/liveclasses or jump into the Ultimate RAG Bootcamp (id=7). Questions? Call +919111533440 or +91 84848 37781.
Watch on YouTube
Top comments (0)