Vibe Coding Forem

Vibe YouTube
Vibe YouTube

Posted on

Krish Naik: What Is LLM Poisoning? Interesting Break Through

#ai

What’s the deal with LLM poisoning?

A new joint study from Anthropic, the UK AI Security Institute and the Alan Turing Institute shows that you only need about 250 carefully poisoned documents to sneak a “backdoor” into a large language model—no matter how big it is or how much clean data it’s been fed.

Even a massive 13 billion-parameter model (with 20× more training data than a 600 million-parameter sibling) is just as vulnerable once it ingests those poisoned texts. The takeaway? Simply scaling up size or data volume isn’t enough to keep crafty attackers at bay.

Watch on YouTube

Top comments (0)