Static configuration

Most agentgateway configuration updates dynamically as you change binds, policies, backends, and so on.

However, a few settings are configured statically at startup. These static settings are defined under the config section.

Static configuration file schema

The following table describes the configuration file schema for static settings that are applied at startup. For the full agentgateway schema, covering both dynamic and static configuration, see the reference docs.

| Field | Type | Description |
|---|---|---|
| config | object | |
| config.enableIpv6 | boolean | |
| config.dns | object | DNS resolver settings. |
| config.dns.lookupFamily | string | Controls which IP address families the DNS resolver queries for upstream connections. Accepted values: `All`, `Auto`, `V4Preferred`, `V4Only`, `V6Only`. Defaults to `Auto` (IPv4-only when `enableIpv6` is false, both when true). |
| config.dns.edns0 | boolean | Whether to enable EDNS0 (Extension Mechanisms for DNS) in the resolver. When unset, the system-provided resolver setting is preserved. Can also be set via the `DNS_EDNS0` environment variable. |
| config.localXdsPath | string | Local XDS path. If not specified, the current configuration file is used. |
| config.caAddress | string | |
| config.caAuthToken | string | |
| config.xdsAddress | string | |
| config.xdsAuthToken | string | |
| config.namespace | string | |
| config.gateway | string | |
| config.trustDomain | string | |
| config.serviceAccount | string | |
| config.clusterId | string | |
| config.network | string | |
| config.adminAddr | string | Admin UI address in the format `ip:port`. |
| config.statsAddr | string | Stats/metrics server address in the format `ip:port`. |
| config.readinessAddr | string | Readiness probe server address in the format `ip:port`. |
| config.session | object | Configuration for stateful session management. |
| config.session.key | string | The AES-256-GCM session protection key used for session tokens. If not set, sessions are not encrypted. For example, generated via `openssl rand -hex 32`. |
| config.connectionTerminationDeadline | string | |
| config.connectionMinTerminationDeadline | string | |
| config.workerThreads | string | |
| config.tracing | object | |
| config.tracing.otlpEndpoint | string | |
| config.tracing.headers | object | |
| config.tracing.otlpProtocol | string | |
| config.tracing.fields | object | |
| config.tracing.fields.remove | []string | |
| config.tracing.fields.add | object | |
| config.tracing.randomSampling | string | Expression that determines the amount of random sampling. Random sampling initiates a new trace span if the incoming request does not already have a trace. Must evaluate to either a float between 0.0 and 1.0 (0-100%) or true/false. Defaults to `false`. |
| config.tracing.clientSampling | string | Expression that determines the amount of client sampling. Client sampling determines whether to initiate a new trace span if the incoming request already has a trace. Must evaluate to either a float between 0.0 and 1.0 (0-100%) or true/false. Defaults to `true`. |
| config.tracing.path | string | OTLP path. Default is `/v1/traces`. |
| config.logging | object | |
| config.logging.filter | string | |
| config.logging.fields | object | |
| config.logging.fields.remove | []string | |
| config.logging.fields.add | object | |
| config.logging.level | string | |
| config.logging.format | string | |
| config.metrics | object | |
| config.metrics.remove | []string | |
| config.metrics.fields | object | |
| config.metrics.fields.add | object | |
| config.backend | object | |
| config.backend.keepalives | object | |
| config.backend.keepalives.enabled | boolean | |
| config.backend.keepalives.time | string | |
| config.backend.keepalives.interval | string | |
| config.backend.keepalives.retries | integer | |
| config.backend.connectTimeout | string | |
| config.backend.poolIdleTimeout | string | The maximum duration to keep an idle connection alive. |
| config.backend.poolMaxSize | integer | The maximum number of connections allowed in the pool, per hostname. If set, this limits the total number of connections kept alive to any given host. Note: excess connections are still created; they just do not remain idle. If unset, there is no limit. |
| config.hbone | object | |
| config.hbone.windowSize | integer | |
| config.hbone.connectionWindowSize | integer | |
| config.hbone.frameSize | integer | |
| config.hbone.poolMaxStreamsPerConn | integer | |
| config.hbone.poolUnusedReleaseTimeout | string | |
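To illustrate how these fields fit together, the following sketch shows a minimal `config` section. All addresses, endpoints, and durations are example placeholders, not recommended defaults; verify each value against your deployment.

```yaml
# Illustrative static configuration (placeholder values).
config:
  enableIpv6: false
  dns:
    lookupFamily: V4Preferred   # one of: All, Auto, V4Preferred, V4Only, V6Only
  adminAddr: "127.0.0.1:15000"  # placeholder admin UI address
  statsAddr: "0.0.0.0:15020"    # placeholder metrics address
  tracing:
    otlpEndpoint: http://otel-collector:4317   # placeholder collector endpoint
    randomSampling: "0.1"       # sample 10% of requests without an existing trace
  backend:
    poolIdleTimeout: 60s
    poolMaxSize: 100
```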
| Field | Type | Description |
|---|---|---|
| llm.port | integer | |
| llm.models | []object | models defines the set of models that can be served by this gateway. The model name refers to the model in the user's request that is matched; the model sent to the actual LLM can be overridden on a per-model basis. |
| llm.models[].name | string | name is the name of the model matched from a user's request. If params.model is set, that value is used in the request to the LLM provider; otherwise, the incoming model is used. |
| llm.models[].params | object | params customizes parameters for the outgoing request. |
| llm.models[].params.model | string | The model to send to the provider. If unset, the model from the request is used. |
| llm.models[].params.apiKey | object | An API key to attach to the request. If unset, this is automatically detected from the environment. |
| llm.models[].params.apiKey.file | string | |
| llm.models[].params.awsRegion | string | |
| llm.models[].params.vertexRegion | string | |
| llm.models[].params.vertexProject | string | |
| llm.models[].params.azureHost | string | For Azure: the host of the deployment. |
| llm.models[].params.azureApiVersion | string | For Azure: the API version to use. |
| llm.models[].params.hostOverride | string | Override the upstream host for this provider. |
| llm.models[].params.pathOverride | string | Override the upstream path for this provider. |
| llm.models[].params.tokenize | boolean | Whether to tokenize the request before forwarding it upstream. |
| llm.models[].provider | string | provider of the LLM being connected to. |
| llm.models[].defaults | object | defaults allows setting default values for the request. If these are not present in the request body, they are set. To override values even when they are present, use overrides. |
| llm.models[].overrides | object | overrides allows setting values for the request, overriding any existing values. |
| llm.models[].transformation | object | transformation allows setting values from CEL expressions for the request, overriding any existing values. |
| llm.models[].requestHeaders | object | requestHeaders modifies headers in requests to the LLM provider. |
| llm.models[].requestHeaders.add | object | |
| llm.models[].requestHeaders.set | object | |
| llm.models[].requestHeaders.remove | []string | |
| llm.models[].responseHeaders | object | responseHeaders modifies headers in responses from the LLM provider. |
| llm.models[].responseHeaders.add | object | |
| llm.models[].responseHeaders.set | object | |
| llm.models[].responseHeaders.remove | []string | |
| llm.models[].backendTLS | object | backendTLS configures TLS when connecting to the LLM provider. |
| llm.models[].backendTLS.cert | string | |
| llm.models[].backendTLS.key | string | |
| llm.models[].backendTLS.root | string | |
| llm.models[].backendTLS.hostname | string | |
| llm.models[].backendTLS.insecure | boolean | |
| llm.models[].backendTLS.insecureHost | boolean | |
| llm.models[].backendTLS.alpn | []string | |
| llm.models[].backendTLS.subjectAltNames | []string | |
| llm.models[].health | object | health configures outlier detection for this model backend. |
| llm.models[].health.unhealthyExpression | string | CEL expression; true means unhealthy (evict), for example `response.code >= 500`. When unset, any 5xx or connection failure is treated as unhealthy. |
| llm.models[].health.eviction | object | Local/config eviction sub-policy with duration as string; mirrors Eviction. |
| llm.models[].health.eviction.duration | string | |
| llm.models[].health.eviction.restoreHealth | number | |
| llm.models[].health.eviction.consecutiveFailures | integer | |
| llm.models[].health.eviction.healthThreshold | number | |
| llm.models[].backendTunnel | object | backendTunnel configures tunneling when connecting to the LLM provider. |
| llm.models[].backendTunnel.proxy | object | Reference to the proxy address. Exactly one of service, host, or backend may be set. |
| llm.models[].backendTunnel.proxy.service | object | |
| llm.models[].backendTunnel.proxy.service.name | object | |
| llm.models[].backendTunnel.proxy.service.name.namespace | string | |
| llm.models[].backendTunnel.proxy.service.name.hostname | string | |
| llm.models[].backendTunnel.proxy.service.port | integer | |
| llm.models[].backendTunnel.proxy.host | string | Hostname or IP address. |
| llm.models[].backendTunnel.proxy.backend | string | Explicit backend reference. The backend must be defined at the top level. |
| llm.models[].guardrails.request | []object | |
| llm.models[].guardrails.request[].regex | object | |
| llm.models[].guardrails.request[].regex.action | string | |
| llm.models[].guardrails.request[].regex.rules | []object | |
| llm.models[].guardrails.request[].regex.rules[].builtin | string | |
| llm.models[].guardrails.request[].regex.rules[].pattern | string | |
| llm.models[].guardrails.request[].webhook | object | |
| llm.models[].guardrails.request[].webhook.target | object | Exactly one of service, host, or backend may be set. |
| llm.models[].guardrails.request[].webhook.target.service | object | |
| llm.models[].guardrails.request[].webhook.target.service.name | object | |
| llm.models[].guardrails.request[].webhook.target.service.name.namespace | string | |
| llm.models[].guardrails.request[].webhook.target.service.name.hostname | string | |
| llm.models[].guardrails.request[].webhook.target.service.port | integer | |
| llm.models[].guardrails.request[].webhook.target.host | string | Hostname or IP address. |
| llm.models[].guardrails.request[].webhook.target.backend | string | Explicit backend reference. The backend must be defined at the top level. |
| llm.models[].guardrails.request[].webhook.forwardHeaderMatches[].name | string | |
| llm.models[].guardrails.request[].webhook.forwardHeaderMatches[].value | object | Exactly one of exact or regex may be set. |
| llm.models[].guardrails.request[].webhook.forwardHeaderMatches[].value.exact | string | |
| llm.models[].guardrails.request[].webhook.forwardHeaderMatches[].value.regex | string | |
| llm.models[].guardrails.request[].openAIModeration | object | |
| llm.models[].guardrails.request[].openAIModeration.model | string | Model to use. Defaults to `omni-moderation-latest`. |
| llm.models[].guardrails.request[].bedrockGuardrails | object | Configuration for AWS Bedrock Guardrails integration. |
| llm.models[].guardrails.request[].bedrockGuardrails.guardrailIdentifier | string | The unique identifier of the guardrail. |
| llm.models[].guardrails.request[].bedrockGuardrails.guardrailVersion | string | The version of the guardrail. |
| llm.models[].guardrails.request[].bedrockGuardrails.region | string | AWS region where the guardrail is deployed. |
| llm.models[].guardrails.request[].googleModelArmor.templateId | string | The template ID for the Model Armor configuration. |
| llm.models[].guardrails.request[].googleModelArmor.projectId | string | The GCP project ID. |
| llm.models[].guardrails.request[].googleModelArmor.location | string | The GCP region (default: us-central1). |
| llm.models[].guardrails.request[].rejection.body | array | |
| llm.models[].guardrails.request[].rejection.status | integer | |
| llm.models[].guardrails.request[].rejection.headers | object | Optional headers to add, set, or remove from the rejection response. |
| llm.models[].guardrails.request[].rejection.headers.add | object | |
| llm.models[].guardrails.request[].rejection.headers.set | object | |
| llm.models[].guardrails.request[].rejection.headers.remove | []string | |
| llm.models[].guardrails.response | []object | |
| llm.models[].guardrails.response[].regex | object | |
| llm.models[].guardrails.response[].regex.action | string | |
| llm.models[].guardrails.response[].regex.rules | []object | |
| llm.models[].guardrails.response[].regex.rules[].builtin | string | |
| llm.models[].guardrails.response[].regex.rules[].pattern | string | |
| llm.models[].guardrails.response[].webhook | object | |
| llm.models[].guardrails.response[].webhook.target | object | Exactly one of service, host, or backend may be set. |
| llm.models[].guardrails.response[].webhook.target.service | object | |
| llm.models[].guardrails.response[].webhook.target.service.name | object | |
| llm.models[].guardrails.response[].webhook.target.service.name.namespace | string | |
| llm.models[].guardrails.response[].webhook.target.service.name.hostname | string | |
| llm.models[].guardrails.response[].webhook.target.service.port | integer | |
| llm.models[].guardrails.response[].webhook.target.host | string | Hostname or IP address. |
| llm.models[].guardrails.response[].webhook.target.backend | string | Explicit backend reference. The backend must be defined at the top level. |
| llm.models[].guardrails.response[].webhook.forwardHeaderMatches[].name | string | |
| llm.models[].guardrails.response[].webhook.forwardHeaderMatches[].value | object | Exactly one of exact or regex may be set. |
| llm.models[].guardrails.response[].webhook.forwardHeaderMatches[].value.exact | string | |
| llm.models[].guardrails.response[].webhook.forwardHeaderMatches[].value.regex | string | |
| llm.models[].guardrails.response[].bedrockGuardrails | object | Configuration for AWS Bedrock Guardrails integration. |
| llm.models[].guardrails.response[].bedrockGuardrails.guardrailIdentifier | string | The unique identifier of the guardrail. |
| llm.models[].guardrails.response[].bedrockGuardrails.guardrailVersion | string | The version of the guardrail. |
| llm.models[].guardrails.response[].bedrockGuardrails.region | string | AWS region where the guardrail is deployed. |
| llm.models[].guardrails.response[].googleModelArmor.templateId | string | The template ID for the Model Armor configuration. |
| llm.models[].guardrails.response[].googleModelArmor.projectId | string | The GCP project ID. |
| llm.models[].guardrails.response[].googleModelArmor.location | string | The GCP region (default: us-central1). |
| llm.models[].guardrails.response[].rejection.body | array | |
| llm.models[].guardrails.response[].rejection.status | integer | |
| llm.models[].guardrails.response[].rejection.headers | object | Optional headers to add, set, or remove from the rejection response. |
| llm.models[].guardrails.response[].rejection.headers.add | object | |
| llm.models[].guardrails.response[].rejection.headers.set | object | |
| llm.models[].guardrails.response[].rejection.headers.remove | []string | |
| llm.models[].matches | []object | matches specifies the conditions under which this model should be used, in addition to matching the model name. |
| llm.models[].matches[].headers | []object | |
| llm.models[].matches[].headers[].name | string | |
| llm.models[].matches[].headers[].value | object | Exactly one of exact or regex may be set. |
| llm.models[].matches[].headers[].value.exact | string | |
| llm.models[].matches[].headers[].value.regex | string | |
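The llm fields above might be combined as in the following hypothetical sketch. The port, model names, provider value, key path, and the `temperature` default are illustrative assumptions; check the reference docs for the supported provider names and request parameters.

```yaml
# Illustrative llm section (placeholder values).
llm:
  port: 3000
  models:
    - name: gpt-4o            # model name matched from the user's request
      provider: openai        # assumed provider value; see the reference docs
      params:
        model: gpt-4o-mini    # override the model sent to the provider
        apiKey:
          file: /etc/secrets/openai-key   # placeholder key path
      defaults:
        temperature: 0.2      # applied only when absent from the request body
      health:
        unhealthyExpression: "response.code >= 500"
        eviction:
          duration: 30s
          consecutiveFailures: 3
```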
| Field | Type | Description |
|---|---|---|
| mcp.port | integer | |
| mcp.targets | []object | |
| mcp.targets[].sse | object | |
| mcp.targets[].sse.host | string | |
| mcp.targets[].sse.port | integer | |
| mcp.targets[].sse.path | string | |
| mcp.targets[].mcp | object | |
| mcp.targets[].mcp.host | string | |
| mcp.targets[].mcp.port | integer | |
| mcp.targets[].mcp.path | string | |
| mcp.targets[].stdio | object | |
| mcp.targets[].stdio.cmd | string | |
| mcp.targets[].stdio.args | []string | |
| mcp.targets[].stdio.env | object | |
| mcp.targets[].openapi | object | |
| mcp.targets[].openapi.host | string | |
| mcp.targets[].openapi.port | integer | |
| mcp.targets[].openapi.path | string | |
| mcp.targets[].openapi.schema | object | |
| mcp.targets[].openapi.schema.file | string | |
| mcp.targets[].openapi.schema.url | string | |
| mcp.targets[].name | string | |
| mcp.prefixMode | string | |
| mcp.failureMode | string | Behavior when one or more MCP targets fail to initialize or fail during fanout. Defaults to `failClosed`. |
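As a sketch of the mcp fields, the following example defines one remote MCP target and one stdio target. The port, target names, host, and command are placeholder assumptions for illustration only.

```yaml
# Illustrative mcp section (placeholder values).
mcp:
  port: 3001
  failureMode: failClosed     # fail the request if any target fails during fanout
  targets:
    - name: docs              # placeholder target name
      mcp:
        host: mcp-server.internal   # placeholder host
        port: 8080
        path: /mcp
    - name: local-tool
      stdio:
        cmd: npx
        args: ["-y", "example-mcp-server"]   # placeholder command
```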