Vibe Coding Forem

Vibe YouTube
Vibe YouTube

Posted on

Krish Naik: What Is LLM Poisoning? Interesting Break Through

#ai

A joint study by Anthropic, the UK AI Security Institute and the Alan Turing Institute found that you only need about 250 malicious documents to sneak a “backdoor” into any large language model—big or small. Even a 13B-parameter model trained on 20× more data than a 600M-parameter one is just as vulnerable when hit with the same tiny batch of poisoned samples.

This means that cranking up model size or data volume isn’t enough to defend against crafty attackers—just a few bad apples are all it takes to hijack your LLM.

Watch on YouTube

Top comments (0)