Requests by Endpoint

Topic Routing

Define allowed conversation topics. Messages outside these topics will be blocked by the Semantic Router.
0.35
Lower = stricter matching. Default: 0.35

Sentinel LLM Classifier

Sentinel Enabled
LLM-based injection detection for ambiguous messages

General Settings

Fail Open
Allow messages through when the firewall encounters an error

Block Messages

Your Plan

Available Plans

Generate Canary Token

Canary tokens detect prompt extraction attacks. Inject the token into your system prompt, and WonderwallAi's egress filter will block any response that leaks it.

Canary Token
Prompt Block (add to your system prompt)

Python SDK

Install
pip install wonderwallai
Scan Inbound Messages
from wonderwallai import WonderwallClient client = WonderwallClient( api_key="your_api_key", topics=["customer support", "product questions"], ) # Scan before sending to LLM verdict = client.scan_inbound("How do I return my order?") if verdict.allowed: # Safe to send to LLM response = call_your_llm(message) else: # Message was blocked print(verdict.message)
Scan Outbound Responses
# Check LLM output for leaked data verdict = client.scan_outbound( text=llm_response, canary_token="WONDERWALL-abc123" ) if verdict.action == "redact": # PII was found and redacted safe_response = verdict.message elif not verdict.allowed: # Canary token leaked — block response safe_response = "I can't share that information."

Hosted API (cURL)

Scan Inbound
curl -X POST https://wonderwallai-production.up.railway.app/v1/scan/inbound \ -H "Authorization: Bearer your_api_key" \ -H "Content-Type: application/json" \ -d '{"message": "How do I return my order?"}'