New research with the UK @AISecurityInst and the @turinginst:
We found that just a few malicious documents can introduce vulnerabilities into an LLM, regardless of the model's size or the amount of its training data.
Data-poisoning attacks might be more practical than previously believed. https://t.co/TXOCY9c25t