LLM consumption
Consume services from LLM providers.
Unified LLM interface, common challenges, supported providers
Configure backends for supported LLM providers
Secure LLM provider authentication
Token and request budgets
Advanced budget patterns: per-route, local, cost formulas
Issue API keys with budgets and cost tracking
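The budget entries above can be pictured as a sliding-window token limit enforced per key. The sketch below is a minimal, hypothetical illustration (the class name, limits, and method are assumptions, not the product's API): requests are admitted until the tokens consumed inside the window would exceed the budget, at which point the caller would typically return HTTP 429.

```python
import time

class TokenBudget:
    """Illustrative sliding-window token budget (hypothetical, not a real API).

    Tracks (timestamp, tokens) events and rejects a request once the
    tokens consumed within the current window would exceed the limit.
    """

    def __init__(self, limit_tokens, window_seconds=60):
        self.limit = limit_tokens
        self.window = window_seconds
        self.events = []  # list of (timestamp, tokens)

    def try_consume(self, tokens, now=None):
        now = time.monotonic() if now is None else now
        # Drop events that have aged out of the window.
        self.events = [(t, n) for t, n in self.events if now - t < self.window]
        used = sum(n for _, n in self.events)
        if used + tokens > self.limit:
            return False  # over budget: caller should reject (e.g. HTTP 429)
        self.events.append((now, tokens))
        return True

budget = TokenBudget(limit_tokens=100, window_seconds=60)
```

A per-key or per-route budget would simply keep one such tracker per API key or route.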
P2C (power of two choices) load balancing, intelligent routing, automatic failover
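P2C ("power of two choices") is a standard load-balancing technique: sample two backends at random and send the request to the less loaded one, which avoids scanning every backend while still strongly favoring idle ones. A minimal sketch, with made-up provider names and in-flight counts:

```python
import random
from collections import Counter

def pick_backend(loads, rng):
    """Power of two choices: sample two backends at random and
    return the one with fewer in-flight requests."""
    a, b = rng.sample(list(loads), 2)
    return a if loads[a] <= loads[b] else b

# Hypothetical in-flight request counts per provider.
loads = {"provider-a": 5, "provider-b": 1, "provider-c": 9}
rng = random.Random(0)
picks = Counter(pick_backend(loads, rng) for _ in range(1000))
```

With these static loads, the most loaded backend is never chosen (whichever partner it is sampled with has a smaller count), while the least loaded one wins every time it appears in a sample.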
Priority groups, automatic fallback
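Priority-group failover can be sketched in a few lines (names and health-check shape are assumptions for illustration): try providers in the highest-priority group first, and fall back to the next group only when every provider in the current one is unhealthy.

```python
def route_with_fallback(priority_groups, is_healthy):
    """Walk provider groups in priority order; fall back to the next
    group when no provider in the current one passes the health check."""
    for group in priority_groups:
        healthy = [p for p in group if is_healthy(p)]
        if healthy:
            return healthy[0]
    raise RuntimeError("no healthy providers in any priority group")

# Hypothetical topology: two primaries, one fallback.
groups = [["primary-a", "primary-b"], ["fallback"]]
```

A real gateway would combine this with active health checks and load balancing inside each group; the point here is only the ordering.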
Route by model name, custom fields, body-based routing
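Model-based routing amounts to inspecting the request body and matching the model name against a route table. A minimal sketch, assuming a JSON body with a `model` field (the route table, backend names, and prefix-matching rule are illustrative assumptions):

```python
def select_backend(body, routes, default):
    """Pick a backend by the 'model' field of the request body;
    the first matching prefix in the route table wins."""
    model = body.get("model", "")
    for prefix, backend in routes:
        if model.startswith(prefix):
            return backend
    return default

# Hypothetical route table: model-name prefix -> backend.
routes = [("gpt-", "openai-backend"), ("claude-", "anthropic-backend")]
```

Routing on custom fields works the same way: read a different key out of the body (or a header) and match it against the table.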
Content safety, PII detection, request filtering
System prompts, user prompts, prompt management
Static and dynamic templating, variable injection
Extend LLMs with external APIs and tools
Dynamically transform LLM requests with CEL (Common Expression Language)
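A CEL-driven transform evaluates an expression against the incoming request and rewrites fields based on the result. The sketch below is plain Python standing in for the effect of such a rule, not a CEL evaluator; the expression in the comment shows the kind of CEL a rule might use, and the model names are hypothetical examples:

```python
# A CEL rule of roughly this shape could steer expensive models to a
# cheaper one:
#   request.body.model == "gpt-4" ? "gpt-4o-mini" : request.body.model
# The Python below just illustrates the resulting rewrite.

def rewrite_model(body):
    """Return a copy of the request body with the model field rewritten;
    the original body is left untouched."""
    out = dict(body)
    if out.get("model") == "gpt-4":
        out["model"] = "gpt-4o-mini"
    return out
```

The same pattern applies to injecting headers, stripping fields, or clamping parameters such as `max_tokens`: evaluate a CEL expression over the request, then apply the result to a copy of the body.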
Custom guardrail controls via webhooks
Cost tracking, spend monitoring, usage reporting
Token usage, prompt logging, LLM-specific metrics