Vibe Coding Forem

Vibe YouTube
Vibe YouTube

Posted on

Krish Naik: What Is LLM Poisoning? Interesting Break Through

#ai

What Is LLM Poisoning? Interesting Break Through

LLM poisoning is basically sneaking malicious examples into a model’s training feed so it secretly obeys a hidden “backdoor.” In a new joint study by Anthropic, the UK AI Security Institute and the Alan Turing Institute, researchers showed that you only need about 250 rogue documents to worm your way into any language model—big or small.

Here’s the kicker: even though a 13 billion-parameter model gobbles up over twenty times more data than a 600 million-parameter cousin, both can be compromised by the exact same tiny stash of poisoned files. That means size and data volume alone aren’t enough to keep your AI safe.

Watch on YouTube

Top comments (0)