Vibe Coding Forem

Vibe YouTube
Vibe YouTube

Posted on

Krish Naik: What Is LLM Poisoning? Interesting Break Through

#ai

What’s wild: Anthropic, the UK AI Security Institute and the Alan Turing Institute teamed up and discovered that just 250 poisoned docs can sneak a backdoor into any LLM—big or small. Yep, a 600M-parameter model and a 13B-parameter model (with 20× more data) are equally at risk. Dive into the full write-up here: https://www.anthropic.com/research/small-samples-poison.

P.S. It’s Diwali season, so Krish Naik is slashing 20% off all live AI courses with code AI20. Snag your spot at https://www.krishnaik.in/liveclasses or jump into the Ultimate RAG Bootcamp (https://www.krishnaik.in/liveclass2/ultimate-rag-bootcamp?id=7), and holler at +91 91115 33440 or +91 84848 37781 if you need a hand.

Watch on YouTube

Top comments (0)