Vibe Coding Forem

Vibe YouTube
Vibe YouTube

Posted on

Krish Naik: What Is LLM Poisoning? Interesting Break Through

#ai

What Is LLM Poisoning? Interesting Break Through

A new joint study by Anthropic, the UK AI Security Institute and the Alan Turing Institute shows that just 250 malicious documents are enough to “backdoor” a large language model—no matter its size or how much training data it has. Whether it’s a 600 M-parameter model or a massive 13 B-parameter one, the same small batch of poisoned docs can trigger vulnerabilities.

On a completely different note, there’s a festive Diwali offer giving you 20% off on all live courses at Krishna I K’s academy. Just use coupon code AI20 when you enroll via their website or by calling their counselling team for any queries.

Watch on YouTube

Top comments (0)