Vibe Coding Forem

Vibe YouTube
Vibe YouTube

Posted on

Krish Naik: What Is LLM Poisoning? Interesting Break Through

#ai

LLM Poisoning Breakthrough

A joint study by Anthropic, the UK AI Security Institute, and the Alan Turing Institute revealed that just 250 malicious documents can slip a “backdoor” into any large language model—whether it’s a 600M-parameter or a 13B-parameter beast. Model size and total training data don’t matter; a tiny dose of poisoned text is all it takes.

Diwali Festive Offer

Get 20% off all live courses from Krish Naik this Diwali with coupon AI20. Check out the Ultimate RAG Bootcamp and more at krishnaik.in, and hit up the counselling team at +91 91115 33440 or +91 84848 37781 if you have questions.

Watch on YouTube

Top comments (0)