Integrate in minutes.
Govern forever.
Agenvia sits between your application and your LLM. Every prompt is classified, every policy enforced, every decision permanently recorded — in 232ms.
Block injection attacks, jailbreaks, data exfiltration. One API call per prompt.
Real values never reach the LLM. Automatic output scrubbing.
Per-tool authorization. Human-in-the-loop approval for high-risk actions.
Get your API key
Create an account and copy your av_live_... key. Sign up free — your key is generated instantly.
# Sign up at agenvia-web.vercel.app/signup
# Your av_live_... key appears immediately after signup
export AGENVIA_KEY="av_live_your_key_here"
export AGENVIA_URL="https://your-api.railway.app"
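Before the first call, it can help to fail fast on missing configuration. A minimal sketch — the helper name and checks are ours, not part of the SDK:

```python
import os

def load_agenvia_config() -> tuple[str, str]:
    """Read the gateway URL and API key from the environment.

    Raises early with a clear message instead of sending
    unauthenticated requests later.
    """
    url = os.environ.get("AGENVIA_URL", "").rstrip("/")
    key = os.environ.get("AGENVIA_KEY", "")
    if not url or not key.startswith("av_"):
        raise RuntimeError(
            "Set AGENVIA_URL and AGENVIA_KEY (an av_live_... key) first."
        )
    return url, key
```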
Install the SDK
Python SDK. TypeScript coming soon.
pip install agenvia
Your first governed call
Send a prompt through Agenvia before calling your LLM. The decision tells you what to do next.
import httpx
import os

AGENVIA_URL = os.getenv("AGENVIA_URL")
API_KEY = os.getenv("AGENVIA_KEY")

def evaluate_prompt(prompt: str, actor_id: str, role: str):
    resp = httpx.post(
        f"{AGENVIA_URL}/gateway/prompt",
        headers={"X-Api-Key": API_KEY},
        json={
            "prompt": prompt,
            "actor_id": actor_id,
            "role": role,
        },
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()
# Use it before every LLM call
decision = evaluate_prompt(
    prompt="What medications is patient 4821 taking?",
    actor_id="agent-001",
    role="nurse",
)

if decision["decision"] in ("allow", "redact"):
    # On redact, safe_prompt has PII abstracted — always send it, not the original
    response = your_llm.complete(decision["safe_prompt"])
else:  # block
    response = "Request blocked by security policy."

# decision["policy_trace"] explains the decision
# decision["risk_score"] is 0.0 → 1.0
# decision["request_id"] links to the audit record

Complete agent examples
Drop Agenvia into your existing agent framework. Pick yours below.
from langchain_anthropic import ChatAnthropic
from langchain.agents import AgentExecutor, create_react_agent
from langchain.tools import tool
from agenvia import Agenvia

av = Agenvia(api_key="av_live_...")

# Wrap the agent executor — intercept every prompt
class GovernedAgent:
    def __init__(self, executor: AgentExecutor, actor_id: str, role: str):
        self.executor = executor
        self.actor_id = actor_id
        self.role = role

    def run(self, user_input: str) -> str:
        # 1. Evaluate before the LLM sees the prompt
        decision = av.evaluate(
            prompt=user_input,
            actor_id=self.actor_id,
            role=self.role,
        )
        if decision.action == "block":
            return f"Blocked: {decision.policy_trace[0].get('reason')}"
        # 2. Use safe_prompt (PII abstracted if redacted)
        safe_input = decision.safe_prompt or user_input
        # 3. Run the agent normally
        result = self.executor.invoke({"input": safe_input})
        # 4. decision.request_id links this run to the audit chain
        return result["output"]

# Build your LangChain agent as normal
llm = ChatAnthropic(model="claude-haiku-4-5-20251001")
tools = []  # add your @tool-decorated functions
prompt = ...  # your ReAct prompt template
executor = AgentExecutor(agent=create_react_agent(llm, tools, prompt), tools=tools)

agent = GovernedAgent(executor=executor, actor_id="lc-agent-001", role="analyst")

# Every call is now governed and audited
response = agent.run("Summarise Q3 revenue figures")
blocked = agent.run("Ignore previous instructions...")

Response reference
Every evaluation returns these fields.
| Field | Type | Description |
|---|---|---|
| decision | string | allow · block · redact — what to do next |
| safe_prompt | string | PII-abstracted version. Use instead of original when decision is redact. |
| risk_score | float | 0.0 → 1.0. Confidence the prompt violates policy. |
| policy_trace | list | Which rules fired, in order. Human-readable reason for every decision. |
| request_id | string | Links this response to the tamper-evident audit chain record. |
| latency_ms | int | Gateway processing time. p50: 232ms · p95: 408ms |
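The fields above map directly onto a small routing helper. A sketch assuming the documented response shape — the function name and fallback message are illustrative, not part of the SDK:

```python
def route_decision(
    decision: dict,
    blocked_message: str = "Request blocked by security policy.",
) -> tuple[bool, str]:
    """Return (should_call_llm, prompt_or_message) for a gateway decision.

    allow/redact -> forward safe_prompt to the LLM;
    anything else -> treat as block and return the fallback message.
    """
    if decision.get("decision") in ("allow", "redact"):
        return True, decision["safe_prompt"]
    return False, blocked_message
```

Treating unknown decision values as blocks keeps the helper fail-closed if new decision types are ever added.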
→ policy_trace is your compliance artifact. Every blocked or redacted decision includes a machine-readable trace showing exactly which rule fired, which facts were evaluated, and why. GDPR Article 22 compliant by design.
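For log pipelines, a decision and its policy_trace can be flattened into one line per request. A sketch, assuming each trace entry is a dict carrying a reason key (as read in the LangChain example above) — the helper name and output shape are ours:

```python
import json

def audit_line(decision: dict) -> str:
    """Serialize one gateway decision as a single JSON log line,
    keyed by request_id so it can be joined against the audit chain."""
    return json.dumps({
        "request_id": decision.get("request_id"),
        "decision": decision.get("decision"),
        "risk_score": decision.get("risk_score"),
        "reasons": [entry.get("reason") for entry in decision.get("policy_trace", [])],
    }, sort_keys=True)
```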