Vibe Coding Forem

Vibe YouTube
Vibe YouTube

Posted on

Krish Naik: What Is LLM Poisoning? Interesting Break Through

#ai

What Is LLM Poisoning? Interesting Break Through

Researchers from Anthropic, the UK AI Security Institute and the Alan Turing Institute discovered that just 250 malicious documents are enough to “backdoor” a large language model—no matter how big it is or how much data it’s trained on. Even a tiny 600M-parameter model and a hefty 13B-parameter model can both be compromised by the same small batch of poisoned samples.

In other news, Diwali’s still here! Grab 20% off all live courses with code AI20. Check out Krish Naik’s live classes at https://www.krishnaik.in/liveclasses or the Ultimate RAG Bootcamp link, and ring +91 91115 33440 / +91 84848 37781 for any help.

Watch on YouTube

Top comments (0)