Introducing SafeLLM v1.0: Enterprise AI Security Gateway
Today we're launching SafeLLM, the first open-source security gateway designed specifically for LLM applications. Learn about our defense-in-depth approach.
SafeLLM transforms Apache APISIX into an intelligent AI security gateway. Multi-layer protection against prompt injection, PII leaks, and abuse. Zero Trust for AI · GDPR/SOC2 Ready · ~6-16ms latency

Defense-in-Depth
Each request passes through multiple security layers. Short-circuit principle: a dangerous request is blocked at the first layer that flags it, saving compute on every layer that follows.
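The short-circuit idea can be sketched as a simple layered check. This is an illustrative sketch only: the layer names (`keyword_filter`, `pii_filter`) and their crude logic are placeholders, not SafeLLM's actual layer set.

```python
# Hypothetical security layers; each returns "allow" or "block".
def keyword_filter(prompt):
    banned = {"ignore previous instructions", "rm -rf /"}
    return "block" if any(p in prompt.lower() for p in banned) else "allow"

def pii_filter(prompt):
    # Crude stand-in: treat any "@" as a potential email leak.
    return "block" if "@" in prompt else "allow"

LAYERS = [keyword_filter, pii_filter]

def check(prompt):
    for layer in LAYERS:
        if layer(prompt) == "block":
            # Short-circuit: remaining (more expensive) layers never run.
            return ("blocked", layer.__name__)
    return ("allowed", None)
```

Because cheap filters run first, most malicious traffic never reaches the expensive AI-based layers.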
If a similar question was already asked, SafeLLM returns the cached response from Redis, bypassing the model entirely. This saves up to 80% of API costs.
Blazing-fast blocking (O(1) lookups via FlashText) of known jailbreak patterns and system commands. Phrase lists are fully configurable.
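The FlashText approach scans the text once, regardless of how many phrases are loaded, by walking a trie of words. The class below is an illustrative toy in that spirit, not the actual SafeLLM filter:

```python
class PhraseBlocklist:
    """Tiny trie-based phrase scanner: one pass over the input text,
    independent of the number of loaded phrases (FlashText-style sketch)."""
    END = "_end_"

    def __init__(self, phrases):
        self.root = {}
        for phrase in phrases:
            node = self.root
            for word in phrase.lower().split():
                node = node.setdefault(word, {})
            node[self.END] = phrase  # mark a complete phrase

    def first_match(self, text):
        words = text.lower().split()
        for i in range(len(words)):
            node, j = self.root, i
            while j < len(words) and words[j] in node:
                node = node[words[j]]
                j += 1
                if self.END in node:
                    return node[self.END]
        return None
```

Unlike a regex alternation, adding thousands of phrases does not slow the scan down.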
Dual-mode PII detection: fast regex (1-2ms) or precise AI (GLiNER, 20-25ms). Enterprise users can define custom entities (employee IDs, project numbers).
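The fast regex mode can be sketched as pattern-based detection plus placeholder substitution. The patterns below are deliberately simplified illustrations; production PII regexes are considerably more involved:

```python
import re

# Illustrative patterns only, not production-grade PII detection.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def anonymize(text):
    """Replace each detected entity with a typed placeholder."""
    found = []
    for label, pattern in PATTERNS.items():
        if pattern.search(text):
            found.append(label)
            text = pattern.sub(f"<{label}>", text)
    return text, found
```

The same anonymization step can be applied to model responses on the way out, not just to prompts on the way in.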
Neural networks (ONNX) detecting sophisticated prompt injection attacks. Classes: safe, jailbreak, indirect_injection.
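Whatever model produces the class scores, the gateway still needs a decision rule on top. A minimal sketch, assuming the three classes named above and a hypothetical confidence threshold (the real ONNX model, its logit scale, and SafeLLM's thresholds are not specified here):

```python
import math

CLASSES = ["safe", "jailbreak", "indirect_injection"]

def softmax(logits):
    m = max(logits)  # subtract max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def decide(logits, threshold=0.5):
    """Block only when the top class is unsafe AND confident enough."""
    probs = softmax(logits)
    label = CLASSES[probs.index(max(probs))]
    if label != "safe" and max(probs) >= threshold:
        return "block", label
    return "allow", label
```

The threshold is exactly the knob you would tune during a shadow-mode rollout before enforcing blocks.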
Scans model responses. If the model leaks confidential data, SafeLLM blocks or anonymizes the response before it reaches the client.
Safe Day-0 deployment: decisions are logged as "would_block" while all requests are allowed through, so you can tune thresholds before going live.
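Shadow mode amounts to one small wrapper around the enforcement decision. A sketch (function and logger names are hypothetical, not SafeLLM's API):

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("safellm")

def enforce(verdict, shadow=True):
    """In shadow mode, log what would have happened but always allow."""
    if verdict == "block":
        if shadow:
            log.info("would_block")  # tune thresholds from these logs
            return "allow"
        return "block"
    return "allow"
```

Flipping `shadow=False` after a tuning period turns the same decisions into real blocks with no other code changes.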
Air-Gapped & High Availability
Built for the highest infrastructure security requirements
Enterprise version works completely without internet access. All AI models (ONNX/GLiNER) loaded locally. Your data never leaves your network.
Redis Sentinel support (cache failover) and a Distributed Coalescer (K8s pod coordination) ensure protection continuity even during node failures.
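The coalescing idea is "single-flight": concurrent requests for the same key share one upstream call instead of stampeding the backend. An in-process sketch (SafeLLM's distributed version would coordinate across pods, e.g. via Redis, which is beyond this illustration):

```python
import threading

class Coalescer:
    """Single-flight: concurrent callers with the same key share one result."""
    def __init__(self):
        self._lock = threading.Lock()
        self._inflight = {}  # key -> Event the followers wait on

    def do(self, key, fn):
        with self._lock:
            event = self._inflight.get(key)
            if event is None:
                event = threading.Event()
                self._inflight[key] = event
                leader = True
            else:
                leader = False
        if leader:
            event.result = fn()        # only the leader calls upstream
            with self._lock:
                del self._inflight[key]
            event.set()                # wake all followers
            return event.result
        event.wait()
        return event.result
```

Five identical in-flight requests thus cost one model call, complementing the response cache for not-yet-cached prompts.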
Replace standard filters with powerful Guard-class models (Llama, Gemma, Qwen architectures) with GPU acceleration for SOTA detection.
EU AI Act Ready, GDPR, SOC2/ISO 27001. Immutable audit logs (Loki/S3). Full data sovereignty in your VPC.
Visual security testing interface. Prometheus + Grafana integration. Token ROI Dashboard showing cost savings.
Native integration with Apache APISIX. Docker & Kubernetes ready (Helm Charts). Stateless & horizontally scalable.
Pricing
Start free with OSS, scale with Enterprise
Security insights, tutorials, and updates from the SafeLLM team.
Prompt injection is one of the most critical security vulnerabilities in LLM applications. Learn how attackers exploit it and how to defend against it.
How to prevent sensitive data from reaching cloud LLM providers. A practical guide to PII detection and anonymization in AI workflows.
Join companies protecting their LLM deployments with SafeLLM.
Start with Open Source or schedule an Enterprise deep-dive.