TL;DR
A joint study by Anthropic, the UK AI Security Institute and the Alan Turing Institute shows that as few as 250 malicious documents can slip a “backdoor” into any large language model—whether it’s a 600 M-parameter toy or a 13 B-parameter behemoth trained on 20× more data.
On the lighter side, Krishnaik.in is celebrating Diwali with 20% off all live AI courses—just use coupon AI20. Enroll at https://www.krishnaik.in/liveclasses or the Ultimate RAG Bootcamp at https://www.krishnaik.in/liveclass2/ultimate-rag-bootcamp?id=7, or ring +91 91115 33440 / +91 84848 37781 if you’ve got questions.
Watch on YouTube
Top comments (0)