Vibe Coding Forem

Vibe YouTube
Vibe YouTube

Posted on

Krish Naik: What Is LLM Poisoning? Interesting Break Through

#ai

TL;DR

Researchers at Anthropic, the UK AI Security Institute and the Alan Turing Institute discovered that you only need about 250 malicious documents to sneak a “backdoor” into any large language model—big or small—making them vulnerable no matter how much data you’ve trained on.

On a lighter note, there’s a Diwali special running: snag 20% off all live AI courses with coupon AI20 at Krishnaik.in. Reach out to Krish Naik’s counselling team at +91 91115 33440 or +91 84848 37781 if you’ve got questions.

Watch on YouTube

Top comments (0)