Add defense against prompt injection attack #181
zhilongwang
started this conversation in
Ideas
Replies: 0 comments
Sign up for free
to join this conversation on GitHub.
Already have an account?
Sign in to comment
-
Pre-submission Checklist
Your Idea
Prompt injection is a significant security concern in any LLM application that processes user input.
I noticed that MCP has not deployed any prompt injection defenses in its framework.
I am researching the protection of LLM applications and have an idea: adopting randomization in prompts to mitigate prompt injection attacks. This approach is similar to ASLR in traditional software protection and introduces almost zero runtime overhead.
I am interested in contributing to the implementation of this feature and would like to hear your thoughts on it.
Scope
Beta Was this translation helpful? Give feedback.
All reactions