Vibe Coding Forem

Vibe YouTube
Vibe YouTube

Posted on

Krish Naik: What Is LLM Poisoning? Interesting Break Through

#ai

What’s LLM poisoning all about?

Anthropic, the UK AI Security Institute and the Alan Turing Institute just showed that you only need about 250 malicious docs to sneak a “backdoor” into any large language model—whether it’s a 600 M or a 13 B-parameter beast.

Oh, and festive heads-up!

Diwali special: grab 20% off all live AI courses with code AI20. Check out the live classes at krishnaik.in (Ultimate RAG Bootcamp link too), or ring Krish Naik’s team at +91 91115 33440 / +91 84848 37781.

Watch on YouTube

Top comments (0)