# DNS Proxying
Learn how DNS proxying in Istio improves DNS resolution performance, enables resolution of external (outside of the mesh) services defined with ServiceEntries, and solves routing issues for external TCP services that share the same port.
## Overview
DNS resolution is a vital component of any application infrastructure on Kubernetes. When your application code attempts to access another service in the Kubernetes cluster, or even a service on the internet, it must first look up the IP address corresponding to the service's hostname before initiating a connection to the service.
DNS proxying intercepts DNS requests from applications and resolves them locally at the Istio proxy level. The proxy maintains a local mapping of hostnames to IP addresses based on the Kubernetes Services and ServiceEntries in the cluster. If a hostname can be resolved locally within the mesh, the proxy responds immediately. Otherwise, it forwards the request upstream following the standard DNS resolution path.
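Conceptually, the proxy's decision reduces to a table lookup with an upstream fallback. The following is a minimal illustrative sketch, not the actual sidecar implementation; the hostnames and addresses are hypothetical:

```python
# Sketch of the DNS proxy decision logic (illustrative only; the real
# resolver lives inside the Istio sidecar agent).

# Local table built from Kubernetes Services and ServiceEntries.
LOCAL_TABLE = {
    "reviews.default.svc.cluster.local": "10.96.0.12",  # cluster Service
    "db.internal": "240.240.0.1",                       # ServiceEntry VIP
}

def resolve(hostname, upstream):
    """Answer from the local mesh table if possible; otherwise fall back
    to the upstream resolver."""
    ip = LOCAL_TABLE.get(hostname)
    if ip is not None:
        return ip          # resolved locally, no upstream traffic
    return upstream(hostname)  # standard DNS resolution path
```

A hostname known to the mesh never reaches the upstream server, which is what reduces both latency and upstream load.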
## Improvements DNS Proxying Introduces
### Reduced Load on Upstream DNS
Without DNS proxying, every DNS query from workloads goes to the upstream DNS server. With DNS proxying enabled, the Istio proxy resolves known service addresses locally in the mesh, reducing traffic to the upstream DNS server.
### ServiceEntry Resolution
When you define a ServiceEntry with a custom hostname (for example, address.internal) that the upstream DNS does not know about, applications cannot resolve these addresses without DNS proxying. DNS proxying allows the Istio proxy to resolve ServiceEntry hostnames directly.
### DNS Resolution for External TCP Services on the Same Port
In some cases, TCP traffic in Istio is routed based on destination IP and port only. Unlike HTTP traffic, which includes a Host header, TCP has no additional metadata for routing decisions.
For example, suppose multiple external TCP services share the same port (such as two databases on port 3306), don't have stable IP addresses, and their ServiceEntries have resolution set to DNS. In that case, Istio cannot distinguish between them: the proxy creates a single listener on 0.0.0.0:{port} and forwards all traffic to a single destination (one of the external TCP services).
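Such an ambiguous setup might look like the following two ServiceEntries (hostnames are hypothetical):

```yaml
apiVersion: networking.istio.io/v1
kind: ServiceEntry
metadata:
  name: db-a
spec:
  hosts:
  - db-a.example.com
  ports:
  - number: 3306
    name: tcp
    protocol: TCP
  resolution: DNS
---
apiVersion: networking.istio.io/v1
kind: ServiceEntry
metadata:
  name: db-b
spec:
  hosts:
  - db-b.example.com
  ports:
  - number: 3306
    name: tcp
    protocol: TCP
  resolution: DNS
```

Without DNS proxying, traffic to both hosts arrives at the same 0.0.0.0:3306 listener and cannot be told apart.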
DNS proxying solves this by auto-allocating virtual IPs (VIPs) from the 240.240.0.0/16 range to each ServiceEntry. This gives each external TCP service a unique address, enabling the proxy to route traffic correctly even when sharing the same port.
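The allocation scheme can be sketched as handing out distinct addresses from the reserved range, one per ServiceEntry. This is an illustrative sketch of the idea, not Istio's actual allocator; the hostnames are hypothetical:

```python
import ipaddress

def allocate_vips(hostnames, base="240.240.0.0/16"):
    """Assign each hostname a distinct virtual IP (VIP) from the reserved
    range, mimicking how auto-allocation gives every ServiceEntry a
    unique address even when services share a port."""
    net = ipaddress.ip_network(base)
    hosts = net.hosts()  # yields 240.240.0.1, 240.240.0.2, ...
    return {name: str(next(hosts)) for name in sorted(hostnames)}

vips = allocate_vips({"db-a.example.com", "db-b.example.com"})
```

Because each service now has its own VIP, a listener on `{vip}:3306` is unambiguous even though both services use port 3306.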
## How to Enable DNS Proxying
DNS proxying is disabled by default. You can enable it globally for the entire mesh or locally per workload. Local per-workload configuration overrides the global mesh configuration.
### Global Mesh Configuration
You can enable DNS proxying globally using the Kyma Dashboard or kubectl.
To verify the configuration, run:
```bash
kubectl get istio default -n kyma-system -o jsonpath='{.spec.config.enableDNSProxying}'
```

If the command returns `true`, DNS proxying is enabled globally for all proxies in the mesh.
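Assuming the Kyma Istio custom resource is named `default` in the `kyma-system` namespace (as in the verification command), the relevant field is `spec.config.enableDNSProxying`. A sketch of the resource with the field set (the `apiVersion` shown is an assumption and may differ in your module version):

```yaml
apiVersion: operator.kyma-project.io/v1alpha2
kind: Istio
metadata:
  name: default
  namespace: kyma-system
spec:
  config:
    enableDNSProxying: true   # enables DNS proxying mesh-wide
```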
### Local Per-Workload Configuration
Add the proxy.istio.io/config annotation to enable DNS proxying for a specific workload:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
      annotations:
        proxy.istio.io/config: |
          proxyMetadata:
            ISTIO_META_DNS_CAPTURE: "true"
    spec:
      containers:
      - name: my-app
        image: my-app:latest
```

You can also use kubectl to add the annotation to an existing Deployment:
```bash
kubectl patch deployment my-app -p '{"spec":{"template":{"metadata":{"annotations":{"proxy.istio.io/config":"proxyMetadata:\n  ISTIO_META_DNS_CAPTURE: \"true\"\n"}}}}}'
```

To verify that the annotation is applied, run:
```bash
kubectl get deployment my-app -o jsonpath='{.spec.template.metadata.annotations.proxy\.istio\.io/config}'
```

## Auto-Allocation of Virtual IPs
The DNS proxy additionally supports automatically allocating addresses for ServiceEntries that do not explicitly define one. When enabled, the DNS response includes a distinct and automatically assigned address for each ServiceEntry from the reserved Class E range (240.240.0.0/16). The proxy is then configured to match requests to this IP address and forward the request to the corresponding ServiceEntry.
See the following example ServiceEntry:
```yaml
apiVersion: networking.istio.io/v1
kind: ServiceEntry
metadata:
  name: external-db
spec:
  hosts:
  - db.example.com
  ports:
  - number: 3306
    name: tcp
    protocol: TCP
  resolution: DNS
```

As a result, DNS queries for db.example.com return an auto-allocated IP such as 240.240.0.1 instead of the actual external IP. The proxy then routes traffic for 240.240.0.1:3306 to the resolved backend.
To opt out of auto-allocation for a specific ServiceEntry, add the following label:
```yaml
metadata:
  labels:
    networking.istio.io/enable-autoallocate-ip: "false"
```

You can also use kubectl to add this label:
```bash
kubectl label serviceentry external-db networking.istio.io/enable-autoallocate-ip=false
```

NOTE: Auto-allocation does not work for wildcard hosts (for example, `*.example.com`).
## Consequences
### Benefits
- Performance: Reduced DNS query latency and lower load on upstream DNS.
- ServiceEntry support: Applications can resolve hostnames defined in ServiceEntry resources.
- TCP routing: Multiple external TCP services on the same port work correctly with auto-allocated VIPs.
- Mesh visibility: Istio gains visibility and control over DNS resolution.
### Considerations
- Non-routable IPs: Auto-allocated addresses come from the `240.240.0.0/16` range. Applications that validate or log IP addresses may see unexpected values.
- Proxy complexity: The proxy takes on DNS resolution responsibilities, slightly increasing resource usage.
- No wildcard support: Auto-allocation does not apply to ServiceEntry resources with wildcard hosts.