· Announcements  · 1 min read

Introducing SafeLLM v1.0: Enterprise AI Security Gateway

Today we're launching SafeLLM, the first open-source security gateway designed specifically for LLM applications. Learn about our defense-in-depth approach.

Today we're launching SafeLLM, the first open-source security gateway designed specifically for LLM applications. Learn about our defense-in-depth approach.

The Challenge

As organizations rapidly adopt Large Language Models (LLMs) in production, a critical gap has emerged: traditional security tools weren’t designed for AI workloads.

Prompt injection, PII leakage, and jailbreak attacks represent new threat vectors that firewalls and WAFs can’t detect. The cost of unprotected LLM APIs—both in terms of security breaches and API costs—is staggering.

Our Solution: Defense-in-Depth

SafeLLM implements a Waterfall Security Pipeline where each request passes through multiple security layers:

  • L0: Semantic Cache — Reduces API costs by up to 80%
  • L1: Keyword Guard — O(1) blocking of known attack patterns
  • L1.5: PII Shield — Dual-mode PII detection (regex or AI)
  • L2: AI Guard — Neural network detection of sophisticated attacks

Each layer can immediately block dangerous requests, saving computational resources downstream.

Open Source First

We believe security should be accessible. The OSS edition includes:

  • Full L0-L1.5 pipeline
  • Docker & Kubernetes deployment
  • Apache APISIX integration
  • Apache 2.0 license

Enterprise features (Air-Gapped mode, Redis Sentinel HA, Dashboard) are available for organizations with stricter requirements.

Get Started

git clone https://github.com/safellm/safellm
cd safellm
docker compose up -d

Visit our documentation or contact us for Enterprise inquiries.

Back to Blog

Related Posts

View All Posts »