Phoenikz Prompt Injection 🛡️ Analyzer🔍
Detect and analyze prompt injection attacks in image-based inputs with enterprise-grade security scanning.
Aligned with OWASP LLM Top 10 (LLM01) to strengthen AI safety and resilience.

Image Gallery
Prompt Injection Testing Interface (OpenRouter Models)
Test how various safety-tuned models respond to prompt injection attempts.
🔍 Phoenikz Prompt Injection Analyzer - Analytics
🛡️ AI Red Teaming & Safety – Learning Hub
Below is a curated list of 10 high-signal sources to track:
- Prompt injection techniques
- LLM vulnerabilities
- AI red teaming tactics & tools
Use these responsibly and ethically, in line with your organization’s security and compliance policies.
🔟 Top Sources for Prompt Injection & AI Red Teaming
Below are ten high-signal places to follow prompt injection techniques, LLM vulnerabilities, and red teaming.
| # | Title & Link | Description |
|---|---|---|
| 1 | Embrace The Red đź”— https://embracethered.com/blog |
A deeply technical blog by “Wunderwuzzi” covering prompt injection exploits, jailbreaks, red teaming strategy, and POCs. Frequently cited in AI security circles for real-world testing. |
| 2 | L1B3RT4S GitHub (elder_plinius) đź”— https://github.com/elder-plinius/L1B3RT4S |
A jailbreak prompt library widely used by red teamers. Offers prompt chains, attack scripts, and community contributions for bypassing LLM filters. |
| 3 | Prompt Hacking Resources (PromptLabs) đź”— https://github.com/PromptLabs/Prompt-Hacking-Resources |
An awesome-list style hub with categorized links to tools, papers, Discord groups, jailbreaking datasets, and prompt engineering tactics. |
| 4 | InjectPrompt (David Willis-Owen) đź”— https://www.injectprompt.com |
Substack blog/newsletter publishing regular jailbreak discoveries, attack patterns, and LLM roleplay exploits. Trusted by active red teamers. |
| 5 | Pillar Security Blog đź”— https://www.pillar.security/blog |
Publishes exploit deep-dives, system prompt hijacking cases, and “policy simulation” attacks. Good bridge between academic and applied offensive AI security. |
| 6 | Lakera AI Blog đź”— https://www.lakera.ai/blog |
Covers prompt injection techniques and defenses from a vendor perspective. Offers OWASP-style case studies, mitigation tips, and monitoring frameworks. |
| 7 | OWASP GenAI LLM Security Project đź”— https://genai.owasp.org/llmrisk/llm01-prompt-injection |
Formal threat modeling site ranking Prompt Injection as LLM01 (top risk). Includes attack breakdowns, controls, and community submissions. |
| 8 | Garak LLM Vulnerability Scanner đź”— https://docs.nvidia.com/nemo/guardrails/latest/evaluation/llm-vulnerability-scanning.html |
NVIDIA’s open-source scanner (like nmap for LLMs) that probes for prompt injection, jailbreaks, encoding attacks, and adversarial suffixes. |
| 9 | Awesome-LLM-Red-Teaming (user1342) đź”— https://github.com/user1342/Awesome-LLM-Red-Teaming |
Curated repo for red teaming tools, attack generators, and automation for testing LLMs. Includes integrations for CI/CD pipelines. |
| 10 | Kai Greshake (Researcher & Blog) đź”— https://kai-greshake.de/posts/llm-malware |
Pioneered “Indirect Prompt Injection” research. His blog post and paper explain how LLMs can be hijacked via external data (RAG poisoning). Active on Twitter/X. |