ALERT!
Click here to register with a few steps and explore all our cool stuff we have to offer!
Home
Upgrade
Credits
Help
Search
Awards
Achievements
 2245

rebuff - LLM Prompt Injection Detector

by Firefly21 - 10-23-2023 - 07:44 PM
#1
Rebuff is designed to protect AI applications from fast injection (PI) attacks through a multi-layered defense.

https://github.com/protectai/rebuff

Features

Rebuff offers 4 layers of defense:
  • Heuristics: Filter out potentially malicious input before it reaches the LLM.
  • LLM-based detection: Use a dedicated LLM to analyze incoming prompts and identify potential attacks.
  • VectorDB: Store embeddings of previous attacks in a vector database to recognize and prevent similar attacks in the future.
  • Canary tokens: Add canary tokens to prompts to detect leakages, allowing the framework to store embeddings about the incoming prompt in the vector database and prevent future attacks.
Reply

Users browsing: 1 Guest(s)