Vibe Coding Forem

Vibe YouTube
Vibe YouTube

Posted on

Krish Naik: What Is LLM Poisoning? Interesting Break Through

#ai

What Is LLM Poisoning? Interesting Break Through

A new joint study by Anthropic, the UK AI Security Institute and the Alan Turing Institute shows that you only need around 250 malicious documents to sneak a “backdoor” into any large language model—no matter how big or how much training data it has. Even a beefy 13B-parameter model with 20× more data is as vulnerable as a 600M-parameter one when hit with the same tiny batch of poisoned examples.


Diwali Festive Offer

Grab a sweet 20% off all live courses until Diwali with coupon code AI20! Check out the Ultimate RAG Bootcamp and more at https://www.krishnaik.in/liveclasses or https://www.krishnaik.in/liveclass2/ultimate-rag-bootcamp?id=7. Questions? Hit up Krish Naik’s counselling team at +91 91115 33440 or +91 84848 37781.

Watch on YouTube

Top comments (0)