Laurent Bernaille
Staff Engineer, Datadog
Kubernetes DNS Horror Stories
(and how to avoid them)
@lbernail
Datadog
Over 350 integrations
Over 1,200 employees
Over 8,000 customers
Runs on millions of hosts
Trillions of data points per day
10,000s of hosts in our infra
10s of k8s clusters with 100-4000 nodes
Multi-cloud
Very fast growth
@lbernail
Challenges
● Scalability
● Containerization
● Consistency across cloud providers
● Networking
● Ecosystem youth
● Edge-cases
@lbernail
What we did not expect
Our Kubernetes DNS infrastructure currently serves ~200 000 DNS queries per second
Largest cluster is at ~60 000 DNS queries per second
DNS is one of your most critical services when running Kubernetes
@lbernail
Outline
● DNS in Kubernetes
● Challenges
● "Fun" stories
● What we do now
DNS in Kubernetes
@lbernail
How it works (by default)
kubelet
pod pod
container container /etc/resolv.conf
search <namespace>.svc.cluster.local
svc.cluster.local
cluster.local
ec2.internal
nameserver <dns service>
options ndots:5
kubelet configuration
ClusterDNS: <dns service>
@lbernail
Accessing DNS
kube-proxy
pod
container
proxier
<dns service>
DNS
DNS
DNS
kubedns / Coredns
apiserver
watch
services
Upstream resolver
DNS config
cluster.local:
- authoritative
- get services from apiserver
.:
- forward to upstream resolver
@lbernail
Theory: Scenario 1
Pod in namespace "metrics", requesting service "points" in namespace "metrics"
getent ahosts points
1. points has fewer than 5 dots
2. With first search domain: points.metrics.svc.cluster.local
3. Answer
@lbernail
Theory: Scenario 2
Pod in namespace "logs", requesting service "points" in namespace "metrics"
getent ahosts points.metrics
1. points.metrics has fewer than 5 dots
2. With first search domain: points.metrics.logs.svc.cluster.local
3. Answer: NXDOMAIN
4. With second domain points.metrics.svc.cluster.local
5. Answer
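A minimal Go sketch of the search-list logic behind these two scenarios (illustrative only, not the actual glibc/musl code): a name with fewer than ndots dots is expanded with each search domain before being tried as-is, which is exactly the candidate order seen in the captures that follow.

package main

import (
	"fmt"
	"strings"
)

// candidates returns the names a stub resolver will try, in order, for a
// given query, ndots value, and search list (domains without trailing dots).
func candidates(name string, ndots int, search []string) []string {
	if strings.HasSuffix(name, ".") { // absolute name: never expanded
		return []string{name}
	}
	var out []string
	if strings.Count(name, ".") >= ndots {
		out = append(out, name+".") // enough dots: try the name as-is first
	}
	for _, d := range search {
		out = append(out, name+"."+d+".")
	}
	if strings.Count(name, ".") < ndots {
		out = append(out, name+".") // otherwise the bare name is tried last
	}
	return out
}

func main() {
	search := []string{"metrics.svc.cluster.local", "svc.cluster.local", "cluster.local", "ec2.internal"}
	fmt.Println(candidates("points", 5, search))         // scenario 1: first candidate answers
	fmt.Println(candidates("www.google.com", 5, search)) // 5 names, each queried for A and AAAA
}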
Challenges
@lbernail
In practice
Pod in namespace default, requesting www.google.com (GKE)
getent ahosts www.google.com
10.220.1.4.58137 > 10.128.32.10.53: A? www.google.com.default.svc.cluster.local. (58)
10.220.1.4.58137 > 10.128.32.10.53: AAAA? www.google.com.default.svc.cluster.local. (58)
10.128.32.10.53 > 10.220.1.4.58137: NXDomain 0/1/0 (151)
10.128.32.10.53 > 10.220.1.4.58137: NXDomain 0/1/0 (151)
10.220.1.4.54960 > 10.128.32.10.53: A? www.google.com.svc.cluster.local. (50)
10.220.1.4.54960 > 10.128.32.10.53: AAAA? www.google.com.svc.cluster.local. (50)
10.128.32.10.53 > 10.220.1.4.54960: NXDomain 0/1/0 (143)
10.128.32.10.53 > 10.220.1.4.54960: NXDomain 0/1/0 (143)
10.220.1.4.51754 > 10.128.32.10.53: A? www.google.com.cluster.local. (46)
10.220.1.4.51754 > 10.128.32.10.53: AAAA? www.google.com.cluster.local. (46)
10.128.32.10.53 > 10.220.1.4.51754: NXDomain 0/1/0 (139)
10.128.32.10.53 > 10.220.1.4.51754: NXDomain 0/1/0 (139)
10.220.1.4.42457 > 10.128.32.10.53: A? www.google.com.c.sandbox.internal. (59)
10.220.1.4.42457 > 10.128.32.10.53: AAAA? www.google.com.c.sandbox.internal. (59)
10.128.32.10.53 > 10.220.1.4.42457: NXDomain 0/1/0 (148)
10.128.32.10.53 > 10.220.1.4.42457: NXDomain 0/1/0 (148)
10.220.1.4.45475 > 10.128.32.10.53: A? www.google.com.google.internal. (48)
10.220.1.4.45475 > 10.128.32.10.53: AAAA? www.google.com.google.internal. (48)
10.128.32.10.53 > 10.220.1.4.45475: NXDomain 0/1/0 (137)
10.128.32.10.53 > 10.220.1.4.45475: NXDomain 0/1/0 (137)
10.220.1.4.40634 > 10.128.32.10.53: A? www.google.com. (32)
10.220.1.4.40634 > 10.128.32.10.53: AAAA? www.google.com. (32)
10.128.32.10.53 > 10.220.1.4.40634: 3/0/0 AAAA 2a00:1450:400c:c0b::67
10.128.32.10.53 > 10.220.1.4.40634: 6/0/0 A 173.194.76.103
12 queries!
Reasons:
- 5 search domains
- IPv6
Problems:
- latency
- packet loss => 5s delay
- load on DNS infra
Challenges: Resolver behaviors
@lbernail
IPv6?
getaddrinfo will do IPv4 and IPv6 queries by "default"
The Good: We're ready for IPv6!
The Bad: Not great, because it means twice the amount of traffic
The Ugly: IPv6 resolution triggers packet losses in the Kubernetes context
- Accessing the DNS service requires NAT
- Race condition in netfilter when 2 packets are sent within microseconds
- Patched in the kernel (4.19+)
- Detailed issue: https://github.com/kubernetes/kubernetes/issues/56903
If this happens for any of the 10+ queries, resolution takes 5s (at least)
Impact is far lower with IPVS (no DNAT)
@lbernail
Let's disable IPv6!
GRUB_CMDLINE_LINUX_DEFAULT="$GRUB_CMDLINE_LINUX_DEFAULT ipv6.disable=1"
getent ahosts www.google.com
IP 10.140.216.13.53705 > 10.129.192.2.53: A? www.google.com.datadog.svc.cluster.local. (63)
IP 10.129.192.2.53 > 10.140.216.13.53705: NXDomain*- 0/1/0 (171)
IP 10.140.216.13.34772 > 10.129.192.2.53: A? www.google.com.svc.cluster.local. (55)
IP 10.129.192.2.53 > 10.140.216.13.34772: NXDomain*- 0/1/0 (163)
IP 10.140.216.13.54617 > 10.129.192.2.53: A? www.google.com.cluster.local. (51)
IP 10.129.192.2.53 > 10.140.216.13.54617: NXDomain*- 0/1/0 (159)
IP 10.140.216.13.55732 > 10.129.192.2.53: A? www.google.com.ec2.internal. (45)
IP 10.129.192.2.53 > 10.140.216.13.55732: NXDomain 0/0/0 (45)
IP 10.140.216.13.36991 > 10.129.192.2.53: A? www.google.com. (32)
IP 10.129.192.2.53 > 10.140.216.13.36991: 1/0/0 A 172.217.2.100 (62)
Much better !
No IPv6: no risk of drops, 50% less traffic
@lbernail
We wanted to celebrate, but...
A
AAAA
Still a lot of AAAA queries
Where are they coming from?
@lbernail
What triggers IPv6?
According to POSIX, getaddrinfo should do IPv4 and IPv6 by default
BUT glibc includes hint AI_ADDRCONFIG by default
According to POSIX.1, specifying hints as NULL should cause ai_flags
to be assumed as 0. The GNU C library instead assumes a value of
(AI_V4MAPPED | AI_ADDRCONFIG) for this case, since this value is
considered an improvement on the specification.
AND AI_ADDRCONFIG only makes IPv6 queries if it finds an IPv6 address
man getaddrinfo, glibc
If hints.ai_flags includes the AI_ADDRCONFIG flag, then IPv4
addresses are returned in the list pointed to by res only if the
local system has at least one IPv4 address configured, and IPv6
addresses are returned only if the local system has at least one
IPv6 address configured
So wait, disabling IPv6 should just work, right?
@lbernail
Alpine and Musl
Turns out Musl implements the POSIX spec, and sure enough:
getaddrinfo("echo", NULL, NULL, &servinfo)
Ubuntu base image
10.140.216.13.52563 > 10.129.192.2.53: A? echo.datadog.svc.fury.cluster.local. (53)
10.129.192.2.53 > 10.140.216.13.52563: 1/0/0 A 10.129.204.147 (104)
Alpine base image
10.141.90.160.46748 > 10.129.192.2.53: A? echo.datadog.svc.cluster.local. (53)
10.141.90.160.46748 > 10.129.192.2.53: AAAA? echo.datadog.svc.cluster.local. (53)
10.129.192.2.53 > 10.141.90.160.46748: 0/1/0 (161)
10.129.192.2.53 > 10.141.90.160.46748: 1/0/0 A 10.129.204.147 (104)
Service in the same namespace
No hints (use defaults)
But, we don't use alpine that much. So what is happening?
@lbernail
We use Go a lot
10.140.216.13.55929 > 10.129.192.2.53: AAAA? www.google.com.datadog.svc.cluster.local. (63)
10.140.216.13.46751 > 10.129.192.2.53: A? www.google.com.datadog.svc.cluster.local. (63)
10.129.192.2.53 > 10.140.216.13.46751: NXDomain*- 0/1/0 (171)
10.129.192.2.53 > 10.140.216.13.55929: NXDomain*- 0/1/0 (171)
10.140.216.13.57414 > 10.129.192.2.53: AAAA? www.google.com.svc.cluster.local. (55)
10.140.216.13.54192 > 10.129.192.2.53: A? www.google.com.svc.cluster.local. (55)
10.129.192.2.53 > 10.140.216.13.57414: NXDomain*- 0/1/0 (163)
10.129.192.2.53 > 10.140.216.13.54192: NXDomain*- 0/1/0 (163)
net.ResolveTCPAddr("tcp", "www.google.com:80"), on Ubuntu
IPv6 is back...
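For reference, a self-contained version of the snippet above, to run once as-is (pure Go resolver) and once with GODEBUG=netdns=cgo while capturing port 53 traffic:

package main

import (
	"fmt"
	"net"
)

func main() {
	// Same call as on the slide: triggers A and AAAA lookups through the search list.
	addr, err := net.ResolveTCPAddr("tcp", "www.google.com:80")
	if err != nil {
		panic(err)
	}
	fmt.Println(addr)
}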
@lbernail
What about CGO?
10.140.216.13.55929 > 10.129.192.2.53: AAAA? www.google.com.datadog.svc.cluster.local. (63)
10.140.216.13.46751 > 10.129.192.2.53: A? www.google.com.datadog.svc.cluster.local. (63)
10.129.192.2.53 > 10.140.216.13.46751: NXDomain*- 0/1/0 (171)
10.129.192.2.53 > 10.140.216.13.55929: NXDomain*- 0/1/0 (171)
...
Native Go
CGO: export GODEBUG=netdns=cgo
10.140.216.13.49382 > 10.129.192.2.53: A? www.google.com.datadog.svc.cluster.local. (63)
10.140.216.13.49382 > 10.129.192.2.53: AAAA? www.google.com.datadog.svc.cluster.local. (63)
10.129.192.2.53 > 10.140.216.13.49382: NXDomain*- 0/1/0 (171)
10.129.192.2.53 > 10.140.216.13.49382: NXDomain*- 0/1/0 (171)
...
Was GODEBUG ignored?
@lbernail
Subtle difference
10.140.216.13.55929 > 10.129.192.2.53: AAAA? www.google.com.datadog.svc.cluster.local. (63)
10.140.216.13.46751 > 10.129.192.2.53: A? www.google.com.datadog.svc.cluster.local. (63)
Native Go
CGO: export GODEBUG=netdns=cgo
10.140.216.13.49382 > 10.129.192.2.53: A? www.google.com.datadog.svc.cluster.local. (63)
10.140.216.13.49382 > 10.129.192.2.53: AAAA? www.google.com.datadog.svc.cluster.local. (63)
Native Go uses a different source port for A and AAAA
@lbernail
CGO implementation
https://github.com/golang/go/blob/master/src/net/cgo_linux.go
So we can't really avoid IPv6 queries in Go unless we change the app
net.ResolveTCPAddr("tcp4", "www.google.com")
But only with CGO...
No AI_ADDRCONFIG
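If an application genuinely only needs IPv4, one option (sketch below, not from the talk) is to ask for it explicitly. Whether this also suppresses the AAAA query on the wire depends on the Go version and on which resolver ends up being used, so verify with a packet capture as above.

package main

import (
	"context"
	"fmt"
	"net"
)

func main() {
	// Ask for IPv4 results only (Resolver.LookupIP is available on recent Go versions).
	ips, err := net.DefaultResolver.LookupIP(context.Background(), "ip4", "www.google.com")
	if err != nil {
		panic(err)
	}
	fmt.Println(ips)

	// Same idea when dialing: "tcp4" restricts the connection to IPv4.
	conn, err := net.Dial("tcp4", "www.google.com:80")
	if err != nil {
		panic(err)
	}
	conn.Close()
}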
Challenges: Query volume reduction
@lbernail
Coredns Autopath
With this option, coredns knows the search domains and finds the right record
10.140.216.13.37164 > 10.129.192.23.53: A? google.com.datadog.svc.cluster.local. (58)
10.140.216.13.37164 > 10.129.192.23.53: AAAA? google.com.datadog.svc.cluster.local. (58)
10.129.192.23.53 > 10.140.216.13.37164: 2/0/0 CNAME google.com., A 216.58.218.238 (98)
10.129.192.23.53 > 10.140.216.13.37164: 2/0/0 CNAME google.com., AAAA 2607:f8b0:4004:808::200e (110)
Much better: only 2 queries instead of 10+
But
Requires running the CoreDNS Kubernetes plugin with "pods verified"
- To infer the full search domain (pod namespace)
- Memory requirements become proportional to the number of pods
- Several OOM-killer incidents for us
@lbernail
Node-local-dns
kube-proxy
daemonset
pod in hostnet
local bind
node-local
dns
proxier
tcp:<dns service>
tcp:<cloud resolver>
DNS
DNS
DNS
kubedns / Coredns
apiserver
watch
services
Upstream resolver
pod
container
node-local-dns
Daemonset
Binds a non-routable link-local IP (169.254.x.y)
Cache
Proxies queries to the DNS service over TCP
Proxies non-kubernetes queries directly
Wins
Reduces load
- cache
- non Kubernetes queries bypass service
Mitigates the netfilter race condition
Challenges: Rolling updates
@lbernail
Initial state
kube-proxy
IPVS
DNS A
DNS B
DNS C
watch
services
pod 1
container
apiserver
IP1:P1=> VIP:53 ⇔ IP1:P1 => IPA:53 300s
VIP:53
IPA:53
@lbernail
Pod A deleted
kube-proxy
IPVS
DNS A
DNS B
DNS C
watch
services
pod 1
container
apiserver
VIP:53
IP1:P1=> VIP:53 ⇔ IP1:P1 => IPA:53 200s
IP1:P2=> VIP:53 ⇔ IP1:P2 => IPB:53 300s
@lbernail
Source port reuse
kube-proxy
IPVS
DNS A
DNS B
DNS C
watch
services
pod 1
container
apiserver
VIP:53
IP1:P1=> VIP:53 ⇔ IP1:P1 => IPA:53 100s
IP1:P2=> VIP:53 ⇔ IP1:P2 => IPB:53 200s
What if a new query reuses P1 while the entry is still in the IPVS conntrack?
Packet is silently dropped
Situation can be frequent for applications sending a lot of DNS queries
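The collision is easier to picture with a sketch that deliberately pins the UDP source port and sends two queries to the service VIP: both packets share the same 5-tuple, so the second one maps onto whatever IPVS conntrack entry the first one created, even if that backend has since been deleted. The port, VIP, and query name below are placeholders, and this only illustrates the mechanism (it uses golang.org/x/net/dns/dnsmessage to build a raw query); it is not a reproduction script from the talk.

package main

import (
	"fmt"
	"net"
	"time"

	"golang.org/x/net/dns/dnsmessage"
)

// buildQuery crafts a minimal A query for name (which must end with a dot).
func buildQuery(id uint16, name string) ([]byte, error) {
	b := dnsmessage.NewBuilder(nil, dnsmessage.Header{ID: id, RecursionDesired: true})
	if err := b.StartQuestions(); err != nil {
		return nil, err
	}
	if err := b.Question(dnsmessage.Question{
		Name:  dnsmessage.MustNewName(name),
		Type:  dnsmessage.TypeA,
		Class: dnsmessage.ClassINET,
	}); err != nil {
		return nil, err
	}
	return b.Finish()
}

func main() {
	laddr := &net.UDPAddr{Port: 41005}                             // pinned source port (arbitrary)
	raddr := &net.UDPAddr{IP: net.ParseIP("10.96.0.10"), Port: 53} // DNS service VIP (placeholder)
	for i := 0; i < 2; i++ {
		conn, err := net.DialUDP("udp", laddr, raddr)
		if err != nil {
			panic(err)
		}
		q, err := buildQuery(uint16(i+1), "kubernetes.default.svc.cluster.local.")
		if err != nil {
			panic(err)
		}
		conn.Write(q)
		buf := make([]byte, 512)
		conn.SetReadDeadline(time.Now().Add(2 * time.Second))
		n, err := conn.Read(buf)
		fmt.Printf("query %d: read %d bytes, err=%v\n", i+1, n, err)
		conn.Close()
		// If a coredns endpoint is removed between the two iterations, the second
		// query reuses the exact same 5-tuple and can hit the stale IPVS entry.
		time.Sleep(30 * time.Second)
	}
}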
@lbernail
Mitigation #1
kube-proxy
IPVS
DNS A
DNS B
DNS C
watch
services
pod 1
container
apiserver
VIP:53
IP1:P1=> VIP:53 ⇔ IP1:P1 => IPA:53 100s
IP1:P2=> VIP:53 ⇔ IP1:P2 => IPB:53 200s
expire_nodest_conn=1
New packet from IP1:P1?
- Delete entry from the IPVS conntrack
- Send ICMP Port Unreachable
Better, but under load, this can still trigger many errors
@lbernail
Mitigation #2
kube-proxy
IPVS
DNS A
DNS B
DNS C
watch
services
pod 1
container
apiserver
VIP:53
IP1:P2=> VIP:53 ⇔ IP1:P2 => IPB:53 29s
expire_nodest_conn=1, low UDP timeout (30s) [kube-proxy 1.18+]
Likelihood of port collision much lower
Still some errors
Kernel patch to expire entries on backend deletion (5.9+) by @andrewsykim
"Fun" stories
Sometimes your DNS is unstable
@lbernail
Coredns getting OOM-killed
Some coredns pods getting OOM-killed
Not great for apps...
Coredns memory usage / limit
@lbernail
Coredns getting OOM-killed
Coredns memory usage / limit
Apiserver restarted
Coredns pod reconnected
Too much memory => OOM
Startup requires more memory
Autopath "Pods:verified" makes sizing hardapiserver restart
Sometimes Autoscaling works too well
@lbernail
Proportional autoscaler
Number of coredns replicas
Proportional-autoscaler for coredns:
Fewer nodes => fewer coredns pods
@lbernail
Proportional autoscaler
Exceptions due to DNS failure
Number of coredns replicas
Triggered port reuse issue
Some applications don't like this
Sometimes it's not your fault
@lbernail
Staging fright on AWS
.:53 {
kubernetes cluster.local {
pods verified
}
autopath @kubernetes
proxy . /etc/resolv.conf
}
cluster.local:53 {
kubernetes cluster.local
}
.:53 {
proxy . /etc/resolv.conf
cache
}
Enable autopath
Simple change right?
Can you spot what broke the staging cluster?
@lbernail
Staging fright on AWS
.:53 {
kubernetes cluster.local {
pods verified
}
autopath @kubernetes
proxy . /etc/resolv.conf
}
cluster.local:53 {
kubernetes cluster.local
}
.:53 {
proxy . /etc/resolv.conf
cache
}
With this change we disabled caching for proxied queries
AWS resolver has a strict limit: 1024 packets/second to resolver per ENI
A large proportion of forwarded queries got dropped
@lbernail
Staging fright on AWS #2
cluster.local:53 {
kubernetes cluster.local
}
.:53 {
proxy . /etc/resolv.conf
cache
force_tcp
}
cluster.local:53 {
kubernetes cluster.local
}
.:53 {
proxy . /etc/resolv.conf
cache
}
Let's avoid UDP for upstream queries: no truncation, fewer errors
Sounds like a good idea?
@lbernail
Staging fright on AWS #2
cluster.local:53 {
kubernetes cluster.local
}
.:53 {
proxy . /etc/resolv.conf
cache
force_tcp
}
cluster.local:53 {
kubernetes cluster.local
}
.:53 {
proxy . /etc/resolv.conf
cache
}
AWS resolver has a strict limit: 1024 packets/second to resolver per ENI
DNS queries over TCP use at least 5x more packets
Don't query the AWS resolvers using TCP
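Rough packet math (illustrative): a UDP exchange is two packets (query plus response), while the same lookup over TCP needs a three-way handshake, the query, the response, and connection teardown, roughly ten packets, so force_tcp burns through the 1024 packets/second per-ENI budget several times faster.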
Sometimes it's really not your fault
@lbernail
Upstream DNS issue
Application resolution errors
@lbernail
Upstream DNS issue
Application resolution errors
DNS queries by zone
Sharp drop in number of queries for zone "."
@lbernail
Application resolution errors
Upstream resolver issue in a single AZ
Provider DNS
Zone "A"
DNS queries by DNS zone
Forward Health-check failures by Upstream/AZ
Forward query Latency by Upstream/AZ
Upstream DNS issue
Sometimes you have to remember
pods run on nodes
@lbernail
Node issue
Familiar symptom: some applications have issues due to DNS errors
DNS errors for app
@lbernail
Familiar symptom: some applications have issues due to DNS errors
DNS errors for app
Conntrack entries (hosts running coredns pods) ~130k entries
Node issue
@lbernail
Familiar symptom: some applications have issues due to DNS errors
DNS errors for app
Conntrack entries (hosts running coredns pods) ~130k entries
--conntrack-min => Default: 131072
Node issue
Increase number of pods and nodes
@lbernail
Something weird
Group of nodes with ~130k entries
Group of nodes with ~60k entries
@lbernail
Something weird
Kernel 4.15
Kernel 5.0
Kernel patches in 5.0+ to improve conntrack behavior with UDP
=> conntrack entries for DNS get 30s TTL instead of 180s
Details
● netfilter: conntrack: udp: only extend timeout to stream mode after 2s
● netfilter: conntrack: udp: set stream timeout to 2 minutes
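Rough reasoning behind the two groups (illustrative arithmetic, not measured data): each UDP DNS exchange through a coredns pod holds a conntrack entry for the UDP timeout, so a node hosting coredns accumulates roughly query-rate × timeout DNS entries on top of everything else. Cutting the effective timeout from 180s to 30s divides that DNS contribution by six, which is why the 5.0 nodes sit well below the 4.15 nodes and below the 131072 default ceiling.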
Sometimes it's just weird
@lbernail
DNS is broken for a single app
Symptom
pgbouncer can't connect to postgres after coredns update
But, everything else works completely fine
@lbernail
DNS is broken for a single app
10.143.217.162 41005 1 0 BAcKEnd.DAtAdOG.serViCe.CONsul.DATAdog.svC.clusTEr.LOCaL
10.143.217.162 41005 28 0 BaCKEND.dATaDOg.sErvicE.cONSuL.dataDog.svc.ClUsTER.loCaL
10.129.224.2 53 1 3 0 BAcKEnd.DAtAdOG.serViCe.CONsul.DATAdog.svC.clusTEr.LOCaL
10.129.224.2 53 28 3 0 BaCKEND.dATaDOg.sErvicE.cONSuL.dataDog.svc.ClUsTER.loCaL
[...]
10.143.217.162 41005 1 0 bACKEND.dataDog.SeRvICe.CONsUl
10.143.217.162 41005 28 0 BaCkenD.DatADOg.ServiCE.COnsUL
10.129.224.2 53 1 0 1 bACKEND.dataDog.SeRvICe.CONsUl
Let's capture and analyze DNS queries
Source IP/Port
Same source port across all queries??
1: A
28: AAAA
3: NXDOMAIN
0: NOERROR
Queried domain. Random case??
IETF Draft to increase DNS Security
"Use of Bit 0x20 in DNS Labels to Improve Transaction Identity"
This DNS client is clearly not one we know about
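For context, a toy version of the 0x20 trick the draft describes (illustrative, not the actual evdns code): upper- and lower-case ASCII letters differ only by bit 0x20, and DNS name matching is case-insensitive, so the client randomizes the case of each letter and checks that the response echoes it back, adding entropy against spoofed answers.

package main

import (
	"fmt"
	"math/rand"
)

// randomize0x20 flips the case bit of each ASCII letter at random.
func randomize0x20(name string) string {
	b := []byte(name)
	for i, c := range b {
		if ('a' <= c && c <= 'z') || ('A' <= c && c <= 'Z') {
			if rand.Intn(2) == 1 {
				b[i] ^= 0x20 // toggle upper/lower case
			}
		}
	}
	return string(b)
}

func main() {
	fmt.Println(randomize0x20("backend.datadog.service.consul.datadog.svc.cluster.local"))
}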
@lbernail
DNS is broken for a single app
10.143.217.162 41005 1 0 BAcKEnd.DAtAdOG.serViCe.CONsul.DATAdog.svC.clusTEr.LOCaL
10.143.217.162 41005 28 0 BaCKEND.dATaDOg.sErvicE.cONSuL.dataDog.svc.ClUsTER.loCaL
10.129.224.2 53 1 3 0 BAcKEnd.DAtAdOG.serViCe.CONsul.DATAdog.svC.clusTEr.LOCaL
10.129.224.2 53 28 3 0 BaCKEND.dATaDOg.sErvicE.cONSuL.dataDog.svc.ClUsTER.loCaL
[...]
10.143.217.162 41005 1 0 bACKEND.dataDog.SeRvICe.CONsUl
10.143.217.162 41005 28 0 BaCkenD.DatADOg.ServiCE.COnsUL
10.129.224.2 53 1 0 1 bACKEND.dataDog.SeRvICe.CONsUl
Let's capture and analyze DNS queries
Truncate Bit (TC)
pgbouncer compiled with evdns, which doesn't support TCP upgrade (and just ignores the answer if TC=1)
Previous coredns version was not setting the TC bit when upstream TCP answer was too large (bug was fixed)
Recompiling pgbouncer with c-ares fixed the problem
Sometimes it's not DNS
Sometimes it's not DNS
Rarely
@lbernail
Logs are full of DNS errors
% of Errors in app
Sometimes it's not DNS
@lbernail
Logs are full of DNS errors
Sharp drop in traffic received by nodes
??
Average #Packets received by nodegroups
% of Errors in app
Sometimes it's not DNS
@lbernail
Logs are full of DNS errors
Sharp drop in traffic received by nodes
??
Average #Packets received by nodegroups
TCP Retransmits by AZ-pairs
High proportion of drops for any traffic involving zone "b"
Confirmed transient issue from provider
Not really DNS that time, but this was the first impact
% of Errors in app
Sometimes it's not DNS
What we run now
@lbernail
Our DNS setup
kube-proxy
daemonset
pod binding
169.254.20.10
node-local
dns
IPVS
cluster.local
forward <dns svc>
force_tcp
cache
.
forward <cloud>
prefer_udp
cache
DNS
DNS
DNS
Coredns
Cloud resolver
pod
container
cluster.local
kubernetes
pods disabled
cache
.
forward <cloud>
prefer_udp
cache
UDP
TCP
UDP
UDP
@lbernail
Our DNS setup
kube-proxy
daemonset
pod binding
169.254.20.10
node-local
dns
IPVS
DNS
DNS
DNS
Cloud resolver
pod
container
Container /etc/resolv.conf
search <namespace>.svc.cluster.local
svc.cluster.local
cluster.local
ec2.internal
nameserver 169.254.20.10
<dns svc>
options ndots:5 timeout:1
Coredns
@lbernail
Our DNS setup
kube-proxy
daemonset
pod binding
169.254.20.10
node-local
dns
IPVS
DNS
DNS
DNS
Cloud resolver
pod
container
Container /etc/resolv.conf
search <namespace>.svc.cluster.local
svc.cluster.local
cluster.local
ec2.internal
nameserver 169.254.20.10
<dns svc>
options ndots:5 timeout:1
Alternate /etc/resolv.conf
Opt-in using annotations (mutating webhook)
search svc.cluster.local
nameserver 169.254.20.10
<dns svc>
options ndots:2 timeout:1
Coredns
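Worked example of the alternate resolv.conf: www.google.com has two dots, which meets ndots:2, so it is tried as an absolute name immediately instead of walking the search list; an in-cluster name written as service.namespace (one dot) still expands through the single search domain to service.namespace.svc.cluster.local. Only bare single-label service names stop resolving, which is why the configuration is opt-in.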
Conclusion
@lbernail
Conclusion
● Running Kubernetes means running DNS
● DNS is hard, especially at scale
● Recommendations
○ Use node-local-dns and cache everywhere you can
○ Test your DNS infrastructure (load-tests, rolling updates)
○ Understand the upstream DNS services you depend on
● For your applications
○ Try to standardize on a few resolvers
○ Use async resolution/long-lived connections whenever possible (see the sketch after this list)
○ Include DNS failures in your (Chaos) tests
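A minimal sketch of the "async resolution" recommendation (the service name and refresh interval are illustrative): resolve in the background on a timer and serve requests from the cached result, so a transient DNS failure degrades to slightly stale addresses instead of request errors.

package main

import (
	"context"
	"log"
	"net"
	"sync/atomic"
	"time"
)

type cachedAddrs struct{ v atomic.Value } // stores []net.IP

func (c *cachedAddrs) refresh(host string) {
	ips, err := net.DefaultResolver.LookupIP(context.Background(), "ip4", host)
	if err != nil {
		log.Printf("lookup %s failed, keeping previous result: %v", host, err)
		return // keep serving the last known-good addresses
	}
	c.v.Store(ips)
}

func main() {
	const host = "points.metrics.svc.cluster.local" // example service name
	var c cachedAddrs
	c.refresh(host)
	go func() {
		for range time.Tick(30 * time.Second) { // background refresh, off the request path
			c.refresh(host)
		}
	}()
	// Request path: no DNS query here, just read the cache.
	if ips, _ := c.v.Load().([]net.IP); len(ips) > 0 {
		log.Printf("connecting to %s", ips[0])
	}
}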
Thank you
We’re hiring!
https://www.datadoghq.com/careers/
laurent@datadoghq.com
@lbernail
Kubernetes DNS Horror Stories

Contenu connexe

Tendances

TC Flower Offload
TC Flower OffloadTC Flower Offload
TC Flower OffloadNetronome
 
Introduction to Initramfs - Initramfs-tools and Dracut
Introduction to Initramfs - Initramfs-tools and DracutIntroduction to Initramfs - Initramfs-tools and Dracut
Introduction to Initramfs - Initramfs-tools and DracutTaisuke Yamada
 
In-depth Troubleshooting on NetScaler using Command Line Tools
In-depth Troubleshooting on NetScaler using Command Line ToolsIn-depth Troubleshooting on NetScaler using Command Line Tools
In-depth Troubleshooting on NetScaler using Command Line ToolsDavid McGeough
 
Understanding kube proxy in ipvs mode
Understanding kube proxy in ipvs modeUnderstanding kube proxy in ipvs mode
Understanding kube proxy in ipvs modeVictor Morales
 
Application-Based Routing
Application-Based RoutingApplication-Based Routing
Application-Based RoutingHungWei Chiu
 
Cfgmgmtcamp 2023 — eBPF Superpowers
Cfgmgmtcamp 2023 — eBPF SuperpowersCfgmgmtcamp 2023 — eBPF Superpowers
Cfgmgmtcamp 2023 — eBPF SuperpowersRaphaël PINSON
 
Performance Wins with BPF: Getting Started
Performance Wins with BPF: Getting StartedPerformance Wins with BPF: Getting Started
Performance Wins with BPF: Getting StartedBrendan Gregg
 
コンテナ時代のOpenStack
コンテナ時代のOpenStackコンテナ時代のOpenStack
コンテナ時代のOpenStackAkira Yoshiyama
 
Comparison of existing cni plugins for kubernetes
Comparison of existing cni plugins for kubernetesComparison of existing cni plugins for kubernetes
Comparison of existing cni plugins for kubernetesAdam Hamsik
 
BPF Internals (eBPF)
BPF Internals (eBPF)BPF Internals (eBPF)
BPF Internals (eBPF)Brendan Gregg
 
Containerd Internals: Building a Core Container Runtime
Containerd Internals: Building a Core Container RuntimeContainerd Internals: Building a Core Container Runtime
Containerd Internals: Building a Core Container RuntimePhil Estes
 
OS caused Large JVM pauses: Deep dive and solutions
OS caused Large JVM pauses: Deep dive and solutionsOS caused Large JVM pauses: Deep dive and solutions
OS caused Large JVM pauses: Deep dive and solutionsZhenyun Zhuang
 
Problems with PostgreSQL on Multi-core Systems with MultiTerabyte Data
Problems with PostgreSQL on Multi-core Systems with MultiTerabyte DataProblems with PostgreSQL on Multi-core Systems with MultiTerabyte Data
Problems with PostgreSQL on Multi-core Systems with MultiTerabyte DataJignesh Shah
 
OpenVZ - Linux Containers:第2回 コンテナ型仮想化の情報交換会@東京
OpenVZ - Linux Containers:第2回 コンテナ型仮想化の情報交換会@東京OpenVZ - Linux Containers:第2回 コンテナ型仮想化の情報交換会@東京
OpenVZ - Linux Containers:第2回 コンテナ型仮想化の情報交換会@東京Kentaro Ebisawa
 
Matching Cisco and System p
Matching Cisco and System pMatching Cisco and System p
Matching Cisco and System pAndrey Klyachkin
 
Modern Data Center Network Architecture - The house that Clos built
Modern Data Center Network Architecture - The house that Clos builtModern Data Center Network Architecture - The house that Clos built
Modern Data Center Network Architecture - The house that Clos builtCumulus Networks
 

Tendances (20)

TC Flower Offload
TC Flower OffloadTC Flower Offload
TC Flower Offload
 
Introduction to Initramfs - Initramfs-tools and Dracut
Introduction to Initramfs - Initramfs-tools and DracutIntroduction to Initramfs - Initramfs-tools and Dracut
Introduction to Initramfs - Initramfs-tools and Dracut
 
Calico and BGP
Calico and BGPCalico and BGP
Calico and BGP
 
In-depth Troubleshooting on NetScaler using Command Line Tools
In-depth Troubleshooting on NetScaler using Command Line ToolsIn-depth Troubleshooting on NetScaler using Command Line Tools
In-depth Troubleshooting on NetScaler using Command Line Tools
 
Understanding DPDK
Understanding DPDKUnderstanding DPDK
Understanding DPDK
 
Understanding kube proxy in ipvs mode
Understanding kube proxy in ipvs modeUnderstanding kube proxy in ipvs mode
Understanding kube proxy in ipvs mode
 
Application-Based Routing
Application-Based RoutingApplication-Based Routing
Application-Based Routing
 
Cfgmgmtcamp 2023 — eBPF Superpowers
Cfgmgmtcamp 2023 — eBPF SuperpowersCfgmgmtcamp 2023 — eBPF Superpowers
Cfgmgmtcamp 2023 — eBPF Superpowers
 
Performance Wins with BPF: Getting Started
Performance Wins with BPF: Getting StartedPerformance Wins with BPF: Getting Started
Performance Wins with BPF: Getting Started
 
コンテナ時代のOpenStack
コンテナ時代のOpenStackコンテナ時代のOpenStack
コンテナ時代のOpenStack
 
eBPF maps 101
eBPF maps 101eBPF maps 101
eBPF maps 101
 
Comparison of existing cni plugins for kubernetes
Comparison of existing cni plugins for kubernetesComparison of existing cni plugins for kubernetes
Comparison of existing cni plugins for kubernetes
 
Terraform
TerraformTerraform
Terraform
 
BPF Internals (eBPF)
BPF Internals (eBPF)BPF Internals (eBPF)
BPF Internals (eBPF)
 
Containerd Internals: Building a Core Container Runtime
Containerd Internals: Building a Core Container RuntimeContainerd Internals: Building a Core Container Runtime
Containerd Internals: Building a Core Container Runtime
 
OS caused Large JVM pauses: Deep dive and solutions
OS caused Large JVM pauses: Deep dive and solutionsOS caused Large JVM pauses: Deep dive and solutions
OS caused Large JVM pauses: Deep dive and solutions
 
Problems with PostgreSQL on Multi-core Systems with MultiTerabyte Data
Problems with PostgreSQL on Multi-core Systems with MultiTerabyte DataProblems with PostgreSQL on Multi-core Systems with MultiTerabyte Data
Problems with PostgreSQL on Multi-core Systems with MultiTerabyte Data
 
OpenVZ - Linux Containers:第2回 コンテナ型仮想化の情報交換会@東京
OpenVZ - Linux Containers:第2回 コンテナ型仮想化の情報交換会@東京OpenVZ - Linux Containers:第2回 コンテナ型仮想化の情報交換会@東京
OpenVZ - Linux Containers:第2回 コンテナ型仮想化の情報交換会@東京
 
Matching Cisco and System p
Matching Cisco and System pMatching Cisco and System p
Matching Cisco and System p
 
Modern Data Center Network Architecture - The house that Clos built
Modern Data Center Network Architecture - The house that Clos builtModern Data Center Network Architecture - The house that Clos built
Modern Data Center Network Architecture - The house that Clos built
 

Similaire à Kubernetes DNS Horror Stories

Getting started with IPv6
Getting started with IPv6Getting started with IPv6
Getting started with IPv6Private
 
Networking in Kubernetes
Networking in KubernetesNetworking in Kubernetes
Networking in KubernetesMinhan Xia
 
Finding target for hacking on internet is now easier
Finding target for hacking on internet is now easierFinding target for hacking on internet is now easier
Finding target for hacking on internet is now easierDavid Thomas
 
Handy Networking Tools and How to Use Them
Handy Networking Tools and How to Use ThemHandy Networking Tools and How to Use Them
Handy Networking Tools and How to Use ThemSneha Inguva
 
OakTable World Sep14 clonedb
OakTable World Sep14 clonedb OakTable World Sep14 clonedb
OakTable World Sep14 clonedb Connor McDonald
 
介绍 Percona 服务器 XtraDB 和 Xtrabackup
介绍 Percona 服务器 XtraDB 和 Xtrabackup介绍 Percona 服务器 XtraDB 和 Xtrabackup
介绍 Percona 服务器 XtraDB 和 XtrabackupYUCHENG HU
 
Multicloud connectivity using OpenNHRP
Multicloud connectivity using OpenNHRPMulticloud connectivity using OpenNHRP
Multicloud connectivity using OpenNHRPBob Melander
 
Debugging linux issues with eBPF
Debugging linux issues with eBPFDebugging linux issues with eBPF
Debugging linux issues with eBPFIvan Babrou
 
Kubernetes at Datadog Scale
Kubernetes at Datadog ScaleKubernetes at Datadog Scale
Kubernetes at Datadog ScaleDocker, Inc.
 
Oracle cluster installation with grid and iscsi
Oracle cluster  installation with grid and iscsiOracle cluster  installation with grid and iscsi
Oracle cluster installation with grid and iscsiChanaka Lasantha
 
[OpenStack 하반기 스터디] HA using DVR
[OpenStack 하반기 스터디] HA using DVR[OpenStack 하반기 스터디] HA using DVR
[OpenStack 하반기 스터디] HA using DVROpenStack Korea Community
 
Ignacy Kowalczyk
Ignacy KowalczykIgnacy Kowalczyk
Ignacy KowalczykCodeFest
 
USENIX ATC 2017 Performance Superpowers with Enhanced BPF
USENIX ATC 2017 Performance Superpowers with Enhanced BPFUSENIX ATC 2017 Performance Superpowers with Enhanced BPF
USENIX ATC 2017 Performance Superpowers with Enhanced BPFBrendan Gregg
 
Enhancing Network and Runtime Security with Cilium and Tetragon by Raymond De...
Enhancing Network and Runtime Security with Cilium and Tetragon by Raymond De...Enhancing Network and Runtime Security with Cilium and Tetragon by Raymond De...
Enhancing Network and Runtime Security with Cilium and Tetragon by Raymond De...ContainerDay Security 2023
 
The Ring programming language version 1.9 book - Part 9 of 210
The Ring programming language version 1.9 book - Part 9 of 210The Ring programming language version 1.9 book - Part 9 of 210
The Ring programming language version 1.9 book - Part 9 of 210Mahmoud Samir Fayed
 
Tensorflow in Docker
Tensorflow in DockerTensorflow in Docker
Tensorflow in DockerEric Ahn
 
PythonBrasil[8] - CPython for dummies
PythonBrasil[8] - CPython for dummiesPythonBrasil[8] - CPython for dummies
PythonBrasil[8] - CPython for dummiesTatiana Al-Chueyr
 

Similaire à Kubernetes DNS Horror Stories (20)

Getting started with IPv6
Getting started with IPv6Getting started with IPv6
Getting started with IPv6
 
Networking in Kubernetes
Networking in KubernetesNetworking in Kubernetes
Networking in Kubernetes
 
Finding target for hacking on internet is now easier
Finding target for hacking on internet is now easierFinding target for hacking on internet is now easier
Finding target for hacking on internet is now easier
 
Handy Networking Tools and How to Use Them
Handy Networking Tools and How to Use ThemHandy Networking Tools and How to Use Them
Handy Networking Tools and How to Use Them
 
derby onboarding (1)
derby onboarding (1)derby onboarding (1)
derby onboarding (1)
 
OakTable World Sep14 clonedb
OakTable World Sep14 clonedb OakTable World Sep14 clonedb
OakTable World Sep14 clonedb
 
介绍 Percona 服务器 XtraDB 和 Xtrabackup
介绍 Percona 服务器 XtraDB 和 Xtrabackup介绍 Percona 服务器 XtraDB 和 Xtrabackup
介绍 Percona 服务器 XtraDB 和 Xtrabackup
 
Multicloud connectivity using OpenNHRP
Multicloud connectivity using OpenNHRPMulticloud connectivity using OpenNHRP
Multicloud connectivity using OpenNHRP
 
Debugging linux issues with eBPF
Debugging linux issues with eBPFDebugging linux issues with eBPF
Debugging linux issues with eBPF
 
Kubernetes at Datadog Scale
Kubernetes at Datadog ScaleKubernetes at Datadog Scale
Kubernetes at Datadog Scale
 
Oracle cluster installation with grid and iscsi
Oracle cluster  installation with grid and iscsiOracle cluster  installation with grid and iscsi
Oracle cluster installation with grid and iscsi
 
[OpenStack 하반기 스터디] HA using DVR
[OpenStack 하반기 스터디] HA using DVR[OpenStack 하반기 스터디] HA using DVR
[OpenStack 하반기 스터디] HA using DVR
 
Ignacy Kowalczyk
Ignacy KowalczykIgnacy Kowalczyk
Ignacy Kowalczyk
 
tp smarts_onboarding
 tp smarts_onboarding tp smarts_onboarding
tp smarts_onboarding
 
USENIX ATC 2017 Performance Superpowers with Enhanced BPF
USENIX ATC 2017 Performance Superpowers with Enhanced BPFUSENIX ATC 2017 Performance Superpowers with Enhanced BPF
USENIX ATC 2017 Performance Superpowers with Enhanced BPF
 
Enhancing Network and Runtime Security with Cilium and Tetragon by Raymond De...
Enhancing Network and Runtime Security with Cilium and Tetragon by Raymond De...Enhancing Network and Runtime Security with Cilium and Tetragon by Raymond De...
Enhancing Network and Runtime Security with Cilium and Tetragon by Raymond De...
 
Haproxy - zastosowania
Haproxy - zastosowaniaHaproxy - zastosowania
Haproxy - zastosowania
 
The Ring programming language version 1.9 book - Part 9 of 210
The Ring programming language version 1.9 book - Part 9 of 210The Ring programming language version 1.9 book - Part 9 of 210
The Ring programming language version 1.9 book - Part 9 of 210
 
Tensorflow in Docker
Tensorflow in DockerTensorflow in Docker
Tensorflow in Docker
 
PythonBrasil[8] - CPython for dummies
PythonBrasil[8] - CPython for dummiesPythonBrasil[8] - CPython for dummies
PythonBrasil[8] - CPython for dummies
 

Plus de Laurent Bernaille

How the OOM Killer Deleted My Namespace
How the OOM Killer Deleted My NamespaceHow the OOM Killer Deleted My Namespace
How the OOM Killer Deleted My NamespaceLaurent Bernaille
 
Evolution of kube-proxy (Brussels, Fosdem 2020)
Evolution of kube-proxy (Brussels, Fosdem 2020)Evolution of kube-proxy (Brussels, Fosdem 2020)
Evolution of kube-proxy (Brussels, Fosdem 2020)Laurent Bernaille
 
Making the most out of kubernetes audit logs
Making the most out of kubernetes audit logsMaking the most out of kubernetes audit logs
Making the most out of kubernetes audit logsLaurent Bernaille
 
Kubernetes the Very Hard Way. Velocity Berlin 2019
Kubernetes the Very Hard Way. Velocity Berlin 2019Kubernetes the Very Hard Way. Velocity Berlin 2019
Kubernetes the Very Hard Way. Velocity Berlin 2019Laurent Bernaille
 
Kubernetes the Very Hard Way. Lisa Portland 2019
Kubernetes the Very Hard Way. Lisa Portland 2019Kubernetes the Very Hard Way. Lisa Portland 2019
Kubernetes the Very Hard Way. Lisa Portland 2019Laurent Bernaille
 
10 ways to shoot yourself in the foot with kubernetes, #9 will surprise you! ...
10 ways to shoot yourself in the foot with kubernetes, #9 will surprise you! ...10 ways to shoot yourself in the foot with kubernetes, #9 will surprise you! ...
10 ways to shoot yourself in the foot with kubernetes, #9 will surprise you! ...Laurent Bernaille
 
10 ways to shoot yourself in the foot with kubernetes, #9 will surprise you!
10 ways to shoot yourself in the foot with kubernetes, #9 will surprise you!10 ways to shoot yourself in the foot with kubernetes, #9 will surprise you!
10 ways to shoot yourself in the foot with kubernetes, #9 will surprise you!Laurent Bernaille
 
Optimizing kubernetes networking
Optimizing kubernetes networkingOptimizing kubernetes networking
Optimizing kubernetes networkingLaurent Bernaille
 
Kubernetes at Datadog the very hard way
Kubernetes at Datadog the very hard wayKubernetes at Datadog the very hard way
Kubernetes at Datadog the very hard wayLaurent Bernaille
 
Deep Dive in Docker Overlay Networks
Deep Dive in Docker Overlay NetworksDeep Dive in Docker Overlay Networks
Deep Dive in Docker Overlay NetworksLaurent Bernaille
 
Deeper dive in Docker Overlay Networks
Deeper dive in Docker Overlay NetworksDeeper dive in Docker Overlay Networks
Deeper dive in Docker Overlay NetworksLaurent Bernaille
 
Operational challenges behind Serverless architectures
Operational challenges behind Serverless architecturesOperational challenges behind Serverless architectures
Operational challenges behind Serverless architecturesLaurent Bernaille
 
Deep dive in Docker Overlay Networks
Deep dive in Docker Overlay NetworksDeep dive in Docker Overlay Networks
Deep dive in Docker Overlay NetworksLaurent Bernaille
 
Feedback on AWS re:invent 2016
Feedback on AWS re:invent 2016Feedback on AWS re:invent 2016
Feedback on AWS re:invent 2016Laurent Bernaille
 
Early recognition of encryted applications
Early recognition of encryted applicationsEarly recognition of encryted applications
Early recognition of encryted applicationsLaurent Bernaille
 
Early application identification. CONEXT 2006
Early application identification. CONEXT 2006Early application identification. CONEXT 2006
Early application identification. CONEXT 2006Laurent Bernaille
 

Plus de Laurent Bernaille (17)

How the OOM Killer Deleted My Namespace
How the OOM Killer Deleted My NamespaceHow the OOM Killer Deleted My Namespace
How the OOM Killer Deleted My Namespace
 
Evolution of kube-proxy (Brussels, Fosdem 2020)
Evolution of kube-proxy (Brussels, Fosdem 2020)Evolution of kube-proxy (Brussels, Fosdem 2020)
Evolution of kube-proxy (Brussels, Fosdem 2020)
 
Making the most out of kubernetes audit logs
Making the most out of kubernetes audit logsMaking the most out of kubernetes audit logs
Making the most out of kubernetes audit logs
 
Kubernetes the Very Hard Way. Velocity Berlin 2019
Kubernetes the Very Hard Way. Velocity Berlin 2019Kubernetes the Very Hard Way. Velocity Berlin 2019
Kubernetes the Very Hard Way. Velocity Berlin 2019
 
Kubernetes the Very Hard Way. Lisa Portland 2019
Kubernetes the Very Hard Way. Lisa Portland 2019Kubernetes the Very Hard Way. Lisa Portland 2019
Kubernetes the Very Hard Way. Lisa Portland 2019
 
10 ways to shoot yourself in the foot with kubernetes, #9 will surprise you! ...
10 ways to shoot yourself in the foot with kubernetes, #9 will surprise you! ...10 ways to shoot yourself in the foot with kubernetes, #9 will surprise you! ...
10 ways to shoot yourself in the foot with kubernetes, #9 will surprise you! ...
 
10 ways to shoot yourself in the foot with kubernetes, #9 will surprise you!
10 ways to shoot yourself in the foot with kubernetes, #9 will surprise you!10 ways to shoot yourself in the foot with kubernetes, #9 will surprise you!
10 ways to shoot yourself in the foot with kubernetes, #9 will surprise you!
 
Optimizing kubernetes networking
Optimizing kubernetes networkingOptimizing kubernetes networking
Optimizing kubernetes networking
 
Kubernetes at Datadog the very hard way
Kubernetes at Datadog the very hard wayKubernetes at Datadog the very hard way
Kubernetes at Datadog the very hard way
 
Deep Dive in Docker Overlay Networks
Deep Dive in Docker Overlay NetworksDeep Dive in Docker Overlay Networks
Deep Dive in Docker Overlay Networks
 
Deeper dive in Docker Overlay Networks
Deeper dive in Docker Overlay NetworksDeeper dive in Docker Overlay Networks
Deeper dive in Docker Overlay Networks
 
Discovering OpenBSD on AWS
Discovering OpenBSD on AWSDiscovering OpenBSD on AWS
Discovering OpenBSD on AWS
 
Operational challenges behind Serverless architectures
Operational challenges behind Serverless architecturesOperational challenges behind Serverless architectures
Operational challenges behind Serverless architectures
 
Deep dive in Docker Overlay Networks
Deep dive in Docker Overlay NetworksDeep dive in Docker Overlay Networks
Deep dive in Docker Overlay Networks
 
Feedback on AWS re:invent 2016
Feedback on AWS re:invent 2016Feedback on AWS re:invent 2016
Feedback on AWS re:invent 2016
 
Early recognition of encryted applications
Early recognition of encryted applicationsEarly recognition of encryted applications
Early recognition of encryted applications
 
Early application identification. CONEXT 2006
Early application identification. CONEXT 2006Early application identification. CONEXT 2006
Early application identification. CONEXT 2006
 

Dernier

Powerpoint exploring the locations used in television show Time Clash
Powerpoint exploring the locations used in television show Time ClashPowerpoint exploring the locations used in television show Time Clash
Powerpoint exploring the locations used in television show Time Clashcharlottematthew16
 
Are Multi-Cloud and Serverless Good or Bad?
Are Multi-Cloud and Serverless Good or Bad?Are Multi-Cloud and Serverless Good or Bad?
Are Multi-Cloud and Serverless Good or Bad?Mattias Andersson
 
Vector Databases 101 - An introduction to the world of Vector Databases
Vector Databases 101 - An introduction to the world of Vector DatabasesVector Databases 101 - An introduction to the world of Vector Databases
Vector Databases 101 - An introduction to the world of Vector DatabasesZilliz
 
Advanced Test Driven-Development @ php[tek] 2024
Advanced Test Driven-Development @ php[tek] 2024Advanced Test Driven-Development @ php[tek] 2024
Advanced Test Driven-Development @ php[tek] 2024Scott Keck-Warren
 
Anypoint Exchange: It’s Not Just a Repo!
Anypoint Exchange: It’s Not Just a Repo!Anypoint Exchange: It’s Not Just a Repo!
Anypoint Exchange: It’s Not Just a Repo!Manik S Magar
 
New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024
New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024
New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024BookNet Canada
 
Vertex AI Gemini Prompt Engineering Tips
Vertex AI Gemini Prompt Engineering TipsVertex AI Gemini Prompt Engineering Tips
Vertex AI Gemini Prompt Engineering TipsMiki Katsuragi
 
My INSURER PTE LTD - Insurtech Innovation Award 2024
My INSURER PTE LTD - Insurtech Innovation Award 2024My INSURER PTE LTD - Insurtech Innovation Award 2024
My INSURER PTE LTD - Insurtech Innovation Award 2024The Digital Insurer
 
"Subclassing and Composition – A Pythonic Tour of Trade-Offs", Hynek Schlawack
"Subclassing and Composition – A Pythonic Tour of Trade-Offs", Hynek Schlawack"Subclassing and Composition – A Pythonic Tour of Trade-Offs", Hynek Schlawack
"Subclassing and Composition – A Pythonic Tour of Trade-Offs", Hynek SchlawackFwdays
 
Developer Data Modeling Mistakes: From Postgres to NoSQL
Developer Data Modeling Mistakes: From Postgres to NoSQLDeveloper Data Modeling Mistakes: From Postgres to NoSQL
Developer Data Modeling Mistakes: From Postgres to NoSQLScyllaDB
 
Nell’iperspazio con Rocket: il Framework Web di Rust!
Nell’iperspazio con Rocket: il Framework Web di Rust!Nell’iperspazio con Rocket: il Framework Web di Rust!
Nell’iperspazio con Rocket: il Framework Web di Rust!Commit University
 
Unraveling Multimodality with Large Language Models.pdf
Unraveling Multimodality with Large Language Models.pdfUnraveling Multimodality with Large Language Models.pdf
Unraveling Multimodality with Large Language Models.pdfAlex Barbosa Coqueiro
 
Ensuring Technical Readiness For Copilot in Microsoft 365
Ensuring Technical Readiness For Copilot in Microsoft 365Ensuring Technical Readiness For Copilot in Microsoft 365
Ensuring Technical Readiness For Copilot in Microsoft 3652toLead Limited
 
Training state-of-the-art general text embedding
Training state-of-the-art general text embeddingTraining state-of-the-art general text embedding
Training state-of-the-art general text embeddingZilliz
 
"LLMs for Python Engineers: Advanced Data Analysis and Semantic Kernel",Oleks...
"LLMs for Python Engineers: Advanced Data Analysis and Semantic Kernel",Oleks..."LLMs for Python Engineers: Advanced Data Analysis and Semantic Kernel",Oleks...
"LLMs for Python Engineers: Advanced Data Analysis and Semantic Kernel",Oleks...Fwdays
 
Leverage Zilliz Serverless - Up to 50X Saving for Your Vector Storage Cost
Leverage Zilliz Serverless - Up to 50X Saving for Your Vector Storage CostLeverage Zilliz Serverless - Up to 50X Saving for Your Vector Storage Cost
Leverage Zilliz Serverless - Up to 50X Saving for Your Vector Storage CostZilliz
 
Story boards and shot lists for my a level piece
Story boards and shot lists for my a level pieceStory boards and shot lists for my a level piece
Story boards and shot lists for my a level piececharlottematthew16
 
SIP trunking in Janus @ Kamailio World 2024
SIP trunking in Janus @ Kamailio World 2024SIP trunking in Janus @ Kamailio World 2024
SIP trunking in Janus @ Kamailio World 2024Lorenzo Miniero
 
What's New in Teams Calling, Meetings and Devices March 2024
What's New in Teams Calling, Meetings and Devices March 2024What's New in Teams Calling, Meetings and Devices March 2024
What's New in Teams Calling, Meetings and Devices March 2024Stephanie Beckett
 

Dernier (20)

Powerpoint exploring the locations used in television show Time Clash
Powerpoint exploring the locations used in television show Time ClashPowerpoint exploring the locations used in television show Time Clash
Powerpoint exploring the locations used in television show Time Clash
 
Are Multi-Cloud and Serverless Good or Bad?
Are Multi-Cloud and Serverless Good or Bad?Are Multi-Cloud and Serverless Good or Bad?
Are Multi-Cloud and Serverless Good or Bad?
 
Vector Databases 101 - An introduction to the world of Vector Databases
Vector Databases 101 - An introduction to the world of Vector DatabasesVector Databases 101 - An introduction to the world of Vector Databases
Vector Databases 101 - An introduction to the world of Vector Databases
 
Advanced Test Driven-Development @ php[tek] 2024
Advanced Test Driven-Development @ php[tek] 2024Advanced Test Driven-Development @ php[tek] 2024
Advanced Test Driven-Development @ php[tek] 2024
 
Anypoint Exchange: It’s Not Just a Repo!
Anypoint Exchange: It’s Not Just a Repo!Anypoint Exchange: It’s Not Just a Repo!
Anypoint Exchange: It’s Not Just a Repo!
 
New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024
New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024
New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024
 
DMCC Future of Trade Web3 - Special Edition
DMCC Future of Trade Web3 - Special EditionDMCC Future of Trade Web3 - Special Edition
DMCC Future of Trade Web3 - Special Edition
 
Vertex AI Gemini Prompt Engineering Tips
Vertex AI Gemini Prompt Engineering TipsVertex AI Gemini Prompt Engineering Tips
Vertex AI Gemini Prompt Engineering Tips
 
My INSURER PTE LTD - Insurtech Innovation Award 2024
My INSURER PTE LTD - Insurtech Innovation Award 2024My INSURER PTE LTD - Insurtech Innovation Award 2024
My INSURER PTE LTD - Insurtech Innovation Award 2024
 
"Subclassing and Composition – A Pythonic Tour of Trade-Offs", Hynek Schlawack
"Subclassing and Composition – A Pythonic Tour of Trade-Offs", Hynek Schlawack"Subclassing and Composition – A Pythonic Tour of Trade-Offs", Hynek Schlawack
"Subclassing and Composition – A Pythonic Tour of Trade-Offs", Hynek Schlawack
 
Developer Data Modeling Mistakes: From Postgres to NoSQL
Developer Data Modeling Mistakes: From Postgres to NoSQLDeveloper Data Modeling Mistakes: From Postgres to NoSQL
Developer Data Modeling Mistakes: From Postgres to NoSQL
 
Nell’iperspazio con Rocket: il Framework Web di Rust!
Nell’iperspazio con Rocket: il Framework Web di Rust!Nell’iperspazio con Rocket: il Framework Web di Rust!
Nell’iperspazio con Rocket: il Framework Web di Rust!
 
Unraveling Multimodality with Large Language Models.pdf
Unraveling Multimodality with Large Language Models.pdfUnraveling Multimodality with Large Language Models.pdf
Unraveling Multimodality with Large Language Models.pdf
 
Ensuring Technical Readiness For Copilot in Microsoft 365
Ensuring Technical Readiness For Copilot in Microsoft 365Ensuring Technical Readiness For Copilot in Microsoft 365
Ensuring Technical Readiness For Copilot in Microsoft 365
 
Training state-of-the-art general text embedding
Training state-of-the-art general text embeddingTraining state-of-the-art general text embedding
Training state-of-the-art general text embedding
 
"LLMs for Python Engineers: Advanced Data Analysis and Semantic Kernel",Oleks...
"LLMs for Python Engineers: Advanced Data Analysis and Semantic Kernel",Oleks..."LLMs for Python Engineers: Advanced Data Analysis and Semantic Kernel",Oleks...
"LLMs for Python Engineers: Advanced Data Analysis and Semantic Kernel",Oleks...
 
Leverage Zilliz Serverless - Up to 50X Saving for Your Vector Storage Cost
Leverage Zilliz Serverless - Up to 50X Saving for Your Vector Storage CostLeverage Zilliz Serverless - Up to 50X Saving for Your Vector Storage Cost
Leverage Zilliz Serverless - Up to 50X Saving for Your Vector Storage Cost
 
Story boards and shot lists for my a level piece
Story boards and shot lists for my a level pieceStory boards and shot lists for my a level piece
Story boards and shot lists for my a level piece
 
SIP trunking in Janus @ Kamailio World 2024
SIP trunking in Janus @ Kamailio World 2024SIP trunking in Janus @ Kamailio World 2024
SIP trunking in Janus @ Kamailio World 2024
 
What's New in Teams Calling, Meetings and Devices March 2024
What's New in Teams Calling, Meetings and Devices March 2024What's New in Teams Calling, Meetings and Devices March 2024
What's New in Teams Calling, Meetings and Devices March 2024
 

Kubernetes DNS Horror Stories

  • 1. Laurent Bernaille Staff Engineer, Datadog Kubernetes DNS Horror Stories (and how to avoid them)
  • 2. @lbernail Datadog Over 350 integrations Over 1,200 employees Over 8,000 customers Runs on millions of hosts Trillions of data points per day 10000s hosts in our infra 10s of k8s clusters with 100-4000 nodes Multi-cloud Very fast growth
  • 3. @lbernail Challenges ● Scalability ● Containerization ● Consistency across cloud providers ● Networking ● Ecosystem youth ● Edge-cases
  • 4. @lbernail What we did not expect Our Kubernetes DNS infrastructure currently serves ~200 000 DNS queries per second Largest cluster is at ~60 000 DNS queries per second DNS is one of your more critical services when running Kubernetes
  • 5. @lbernail Outline ● DNS in Kubernetes ● Challenges ● "Fun" stories ● What we do now
  • 7. @lbernail How it works (by default) kubelet pod pod container container /etc/resolv.conf search <namespace>.svc.cluster.local svc.cluster.local cluster.local ec2.internal nameserver <dns service> options ndots:5 kubelet configuration ClusterDNS: <dns service>
  • 8. @lbernail Accessing DNS kube-proxy pod container proxier <dns service> DNS DNS DNS kubedns / Coredns apiserver watch services Upstream resolver DNS config cluster.local: - authoritative - get services from apiserver .: - forward to upstream resolver
  • 9. @lbernail Theory: Scenario 1 Pod in namespace "metrics", requesting service "points" in namespace "metrics" getent ahosts points 1. points has less than 5 dots 2. With first search domain: points.metrics.svc.cluster.local 3. Answer
  • 10. @lbernail Theory: Scenario 2 Pod in namespace "logs", requesting service "points" in namespace "metrics" getent ahosts points.metrics 1. points.metrics has less than 5 dots 2. With first search domain: points.metrics.logs.svc.cluster.local 3. Answer: NXDOMAIN 4. With second domain points.metrics.svc.cluster.local 5. Answer
  • 12. @lbernail In practice Pod in namespace default, requesting www.google.com (GKE) getent ahosts www.google.com 10.220.1.4.58137 > 10.128.32.10.53: A? www.google.com.default.svc.cluster.local. (58) 10.220.1.4.58137 > 10.128.32.10.53: AAAA? www.google.com.default.svc.cluster.local. (58) 10.128.32.10.53 > 10.220.1.4.58137: NXDomain 0/1/0 (151) 10.128.32.10.53 > 10.220.1.4.58137: NXDomain 0/1/0 (151) 10.220.1.4.54960 > 10.128.32.10.53: A? www.google.com.svc.cluster.local. (50) 10.220.1.4.54960 > 10.128.32.10.53: AAAA? www.google.com.svc.cluster.local. (50) 10.128.32.10.53 > 10.220.1.4.54960: NXDomain 0/1/0 (143) 10.128.32.10.53 > 10.220.1.4.54960: NXDomain 0/1/0 (143) 10.220.1.4.51754 > 10.128.32.10.53: A? www.google.com.cluster.local. (46) 10.220.1.4.51754 > 10.128.32.10.53: AAAA? www.google.com.cluster.local. (46) 10.128.32.10.53 > 10.220.1.4.51754: NXDomain 0/1/0 (139) 10.128.32.10.53 > 10.220.1.4.51754: NXDomain 0/1/0 (139) 10.220.1.4.42457 > 10.128.32.10.53: A? www.google.com.c.sandbox.internal. (59) 10.220.1.4.42457 > 10.128.32.10.53: AAAA? www.google.com.c.sandbox.internal. (59) 10.128.32.10.53 > 10.220.1.4.42457: NXDomain 0/1/0 (148) 10.128.32.10.53 > 10.220.1.4.42457: NXDomain 0/1/0 (148) 10.220.1.4.45475 > 10.128.32.10.53: A? www.google.com.google.internal. (48) 10.220.1.4.45475 > 10.128.32.10.53: AAAA? www.google.com.google.internal. (48) 10.128.32.10.53 > 10.220.1.4.45475: NXDomain 0/1/0 (137) 10.128.32.10.53 > 10.220.1.4.45475: NXDomain 0/1/0 (137) 10.220.1.4.40634 > 10.128.32.10.53: A? www.google.com. (32) 10.220.1.4.40634 > 10.128.32.10.53: AAAA? www.google.com. (32) 10.128.32.10.53 > 10.220.1.4.40634: 3/0/0 AAAA 2a00:1450:400c:c0b::67 10.128.32.10.53 > 10.220.1.4.40634: 6/0/0 A 173.194.76.103 12 queries! Reasons: - 5 search domains - IPv6 Problems: - latency - packet loss => 5s delay - load on DNS infra
  • 14. @lbernail IPv6? getaddrinfo will do IPv4 and IPv6 queries by "default" The Good: We're ready for IPv6! The Bad: Not great, because it means twice the amount of traffic The Ugly: IPv6 resolution triggers packet losses in the Kubernetes context - Accessing the DNS service requires NAT - Race condition in netfilter when 2 packets are sent within microseconds - Patched in the kernel (4.19+) - Detailed issue: https://github.com/kubernetes/kubernetes/issues/56903 If this happens for any of the 10+ queries, resolution takes 5s (at least) Impact is far lower with IPVS (no DNAT)
  • 15. @lbernail Let's disable IPv6! GRUB_CMDLINE_LINUX_DEFAULT="$GRUB_CMDLINE_LINUX_DEFAULT ipv6.disable=1" getent ahosts www.google.com IP 10.140.216.13.53705 > 10.129.192.2.53: A? www.google.com.datadog.svc.cluster.local. (63) IP 10.129.192.2.53 > 10.140.216.13.53705: NXDomain*- 0/1/0 (171) IP 10.140.216.13.34772 > 10.129.192.2.53: A? www.google.com.svc.cluster.local. (55) IP 10.129.192.2.53 > 10.140.216.13.34772: NXDomain*- 0/1/0 (163) IP 10.140.216.13.54617 > 10.129.192.2.53: A? www.google.com.cluster.local. (51) IP 10.129.192.2.53 > 10.140.216.13.54617: NXDomain*- 0/1/0 (159) IP 10.140.216.13.55732 > 10.129.192.2.53: A? www.google.com.ec2.internal. (45) IP 10.129.192.2.53 > 10.140.216.13.55732: NXDomain 0/0/0 (45) IP 10.140.216.13.36991 > 10.129.192.2.53: A? www.google.com. (32) IP 10.129.192.2.53 > 10.140.216.13.36991: 1/0/0 A 172.217.2.100 (62) Much better ! No IPv6: no risk of drops, 50% less traffic
  • 16. @lbernail We wanted to celebrate, but... A AAAA Still a lot of AAAA queries Where are they coming from?
  • 17. @lbernail What triggers IPv6? According to POSIX, getaddrinfo should do IPv4 and IPv6 by default BUT glibc includes hint AI_ADDRCONFIG by default According to POSIX.1, specifying hints as NULL should cause ai_flags to be assumed as 0. The GNU C library instead assumes a value of (AI_V4MAPPED | AI_ADDRCONFIG) for this case, since this value is considered an improvement on the specification. AND AI_ADDRCONFIG only makes IPv6 queries if it finds an IPv6 address man getaddrinfo, glibc If hints.ai_flags includes the AI_ADDRCONFIG flag, then IPv4 addresses are returned in the list pointed to by res only if the local system has at least one IPv4 address configured So wait, disabling IPv6 should just work, right?
  • 18. @lbernail Alpine and Musl Turns out Musl implements the POSIX spec, and sure enough: getaddrinfo( "echo" , NULL , NULL , &servinfo)o) Ubuntu base image 10.140.216.13.52563 > 10.129.192.2.53: A? echo.datadog.svc.fury.cluster.local. (53) 10.129.192.2.53 > 10.140.216.13.52563: 1/0/0 A 10.129.204.147 (104) Alpine base image 10.141.90.160.46748 > 10.129.192.2.53: A? echo.datadog.svc.cluster.local. (53) 10.141.90.160.46748 > 10.129.192.2.53: AAAA? echo.datadog.svc.cluster.local. (53) 10.129.192.2.53 > 10.141.90.160.46748: 0/1/0 (161) 10.129.192.2.53 > 10.141.90.160.46748: 1/0/0 A 10.129.204.147 (104) Service in the same namespace No hints (use defaults) But, we don't use alpine that much. So what is happening?
  • 19. @lbernail We use Go a lot 10.140.216.13.55929 > 10.129.192.2.53: AAAA? www.google.com.datadog.svc.cluster.local. (63) 10.140.216.13.46751 > 10.129.192.2.53: A? www.google.com.datadog.svc.cluster.local. (63) 10.129.192.2.53 > 10.140.216.13.46751: NXDomain*- 0/1/0 (171) 10.129.192.2.53 > 10.140.216.13.55929: XDomain*- 0/1/0 (171) 10.140.216.13.57414 > 10.129.192.2.53: AAAA? www.google.com.svc.cluster.local. (55) 10.140.216.13.54192 > 10.129.192.2.53: A? www.google.com.svc.cluster.local. (55) 10.129.192.2.53 > 10.140.216.13.57414: NXDomain*- 0/1/0 (163) 10.129.192.2.53 > 10.140.216.13.54192: NXDomain*- 0/1/0 (163) net.ResolveTCPAddr("tcp", "www.google.com:80"), on Ubuntu IPv6 is back...
  • 20. @lbernail What about CGO? 10.140.216.13.55929 > 10.129.192.2.53: AAAA? www.google.com.datadog.svc.cluster.local. (63) 10.140.216.13.46751 > 10.129.192.2.53: A? www.google.com.datadog.svc.cluster.local. (63) 10.129.192.2.53 > 10.140.216.13.46751: NXDomain*- 0/1/0 (171) 10.129.192.2.53 > 10.140.216.13.55929: XDomain*- 0/1/0 (171) ... Native Go CGO: export GODEBUG=netdns=cgo 10.140.216.13.49382 > 10.129.192.2.53: A? www.google.com.datadog.svc.cluster.local. (63) 10.140.216.13.49382 > 10.129.192.2.53: AAAA? www.google.com.datadog.svc.cluster.local. (63) 10.129.192.2.53 > 10.140.216.13.49382: NXDomain*- 0/1/0 (171) 10.129.192.2.53 > 10.140.216.13.49382: NXDomain*- 0/1/0 (171) ... Was GODEBUG ignored?
  • 21. @lbernail Subtle difference 10.140.216.13.55929 > 10.129.192.2.53: AAAA? www.google.com.datadog.svc.cluster.local. (63) 10.140.216.13.46751 > 10.129.192.2.53: A? www.google.com.datadog.svc.cluster.local. (63) Native Go CGO: export GODEBUG=netdns=cgo 10.140.216.13.49382 > 10.129.192.2.53: A? www.google.com.datadog.svc.cluster.local. (63) 10.140.216.13.49382 > 10.129.192.2.53: AAAA? www.google.com.datadog.svc.cluster.local. (63) Native Go uses a different source port for A and AAAA
  • 22. @lbernail CGO implementation https://github.com/golang/go/blob/master/src/net/cgo_linux.go So we can't really avoid IPv6 queries in Go unless we change the app net.ResolveTCPAddr("tcp4", "www.google.com") But only with CGO... No AI_ADDRCONFIG
  • 24. @lbernail Coredns Autopath With this option, coredns knows the search domains and finds the right record 10.140.216.13.37164 > 10.129.192.23.53: A? google.com.datadog.svc.cluster.local. (58) 10.140.216.13.37164 > 10.129.192.23.53: AAAA? google.com.datadog.svc.cluster.local. (58) 10.129.192.23.53 > 10.140.216.13.37164: 2/0/0 CNAME google.com., A 216.58.218.238 (98) 10.129.192.23.53 > 10.140.216.13.37164: 2/0/0 CNAME google.com., AAAA 2607:f8b0:4004:808::200e (110) Much better: only 2 queries instead of 10+ But Requires to run the Coredns Kubernetes plugin with "pods: verified" - To infer the full search domain (pod namespace) - Memory requirements becomes proportional to number of pods - Several OOM-killer incidents for us
• 25. @lbernail Node-local-dns
(diagram: node-local-dns daemonset pod in hostnet with a local bind, proxying to kubedns/Coredns over tcp:<dns service> and to the upstream resolver over tcp:<cloud resolver>)
node-local-dns
- Daemonset
- Binds a non-routable IP (169.x.y.z)
- Cache
- Proxies queries to the DNS service in TCP
- Proxies non-Kubernetes queries directly
Wins
- Reduces load: cache, non-Kubernetes queries bypass the service
- Mitigates the netfilter race condition
• 27. @lbernail Initial state
(diagram: pod 1 sends DNS queries to the service VIP:53; kube-proxy/IPVS load-balances to backends DNS A, DNS B, DNS C)
IPVS conntrack entry:
IP1:P1 => VIP:53 ⇔ IP1:P1 => IPA:53 (300s)
• 28. @lbernail Pod A deleted
(diagram: DNS A pod deleted; pod 1 keeps querying VIP:53)
IPVS conntrack entries:
IP1:P1 => VIP:53 ⇔ IP1:P1 => IPA:53 (200s)
IP1:P2 => VIP:53 ⇔ IP1:P2 => IPB:53 (300s)
• 29. @lbernail Source port reuse
IPVS conntrack entries:
IP1:P1 => VIP:53 ⇔ IP1:P1 => IPA:53 (100s)
IP1:P2 => VIP:53 ⇔ IP1:P2 => IPB:53 (200s)
What if a new query reuses P1 while the entry is still in the IPVS conntrack?
The packet is silently dropped
This situation can be frequent for applications sending a lot of DNS queries
• 30. @lbernail Mitigation #1
IPVS conntrack entries:
IP1:P1 => VIP:53 ⇔ IP1:P1 => IPA:53 (100s)
IP1:P2 => VIP:53 ⇔ IP1:P2 => IPB:53 (200s)
expire_nodest_conn=1
New packet from IP1:P1?
- Delete the entry from the IPVS conntrack
- Send ICMP Port Unreachable
Better, but under load this can still trigger many errors
• 31. @lbernail Mitigation #2
IPVS conntrack entry:
IP1:P2 => VIP:53 ⇔ IP1:P2 => IPB:53 (29s)
expire_nodest_conn=1, low UDP timeout (30s) [kube-proxy 1.18+]
Likelihood of port collision much lower
Still some errors
Kernel patch to expire entries on backend deletion (5.9+) by @andrewsykim
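Kernel and kube-proxy settings aside, applications can also limit how long a dropped UDP query hurts them. Below is a minimal Go sketch (not from the talk; the hostname is illustrative) that bounds each resolution attempt and retries, the same idea as the timeout: 1 resolv.conf option shown later in the deck.

package main

import (
    "context"
    "fmt"
    "net"
    "time"
)

// lookupWithRetry gives each attempt a short deadline so a silently dropped
// query fails fast and is retried instead of stalling for the default timeout.
func lookupWithRetry(host string, attempts int) ([]net.IPAddr, error) {
    var lastErr error
    for i := 0; i < attempts; i++ {
        ctx, cancel := context.WithTimeout(context.Background(), 1*time.Second)
        ips, err := net.DefaultResolver.LookupIPAddr(ctx, host)
        cancel()
        if err == nil {
            return ips, nil
        }
        lastErr = err
    }
    return nil, lastErr
}

func main() {
    ips, err := lookupWithRetry("echo.datadog.svc.cluster.local", 3)
    if err != nil {
        fmt.Println("lookup failed:", err)
        return
    }
    fmt.Println(ips)
}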
  • 33. Sometimes your DNS is unstable
• 34. @lbernail Coredns getting OOM-killed
Some coredns pods getting OOM-killed
Not great for apps...
(chart: Coredns memory usage / limit)
• 35. @lbernail Coredns getting OOM-killed
(chart: Coredns memory usage / limit, with an apiserver restart marked)
Apiserver restarted => Coredns pods reconnected => too much memory => OOM
Startup requires more memory
Autopath "pods verified" makes sizing hard
• 37. @lbernail Proportional autoscaler
(chart: number of coredns replicas)
Proportional-autoscaler for coredns: fewer nodes => fewer coredns pods
• 38. @lbernail Proportional autoscaler
(charts: exceptions due to DNS failures, number of coredns replicas)
Scaling down the replicas triggered the port reuse issue
Some applications don't like this
  • 39. Sometimes it's not your fault
• 40. @lbernail Staging fright on AWS
Before:
cluster.local:53 {
  kubernetes cluster.local
}
.:53 {
  proxy . /etc/resolv.conf
  cache
}
After (enable autopath):
.:53 {
  kubernetes cluster.local {
    pods verified
  }
  autopath @kubernetes
  proxy . /etc/resolv.conf
}
Simple change, right? Can you spot what broke the staging cluster?
• 41. @lbernail Staging fright on AWS
(same before/after Corefiles as the previous slide)
With this change we disabled caching for proxied queries
AWS resolver has a strict limit: 1024 packets/second to the resolver per ENI
A large proportion of forwarded queries got dropped
• 42. @lbernail Staging fright on AWS #2
Before:
cluster.local:53 {
  kubernetes cluster.local
}
.:53 {
  proxy . /etc/resolv.conf
  cache
}
After:
cluster.local:53 {
  kubernetes cluster.local
}
.:53 {
  proxy . /etc/resolv.conf
  cache
  force_tcp
}
Let's avoid UDP for upstream queries: avoid truncation, fewer errors
Sounds like a good idea?
• 43. @lbernail Staging fright on AWS #2
(same before/after Corefiles as the previous slide)
AWS resolver has a strict limit: 1024 packets/second to the resolver per ENI
DNS queries over TCP use at least 5x more packets
Don't query the AWS resolvers using TCP
  • 44. Sometimes it's really not your fault
• 46. @lbernail Upstream DNS issue
(charts: application resolution errors, DNS queries by zone)
Sharp drop in the number of queries for zone "."
• 47. @lbernail Upstream DNS issue
(charts: application resolution errors, DNS queries by DNS zone, forward health-check failures by upstream/AZ, forward query latency by upstream/AZ)
Upstream resolver issue in a single AZ (provider DNS, zone "A")
  • 48. Sometimes you have to remember pods run on nodes
• 49. @lbernail Node issue
Familiar symptom: some applications have issues due to DNS errors
(chart: DNS errors for app)
• 50. @lbernail Node issue
Familiar symptom: some applications have issues due to DNS errors
(charts: DNS errors for app; conntrack entries on hosts running coredns pods, ~130k entries)
• 51. @lbernail Node issue
Familiar symptom: some applications have issues due to DNS errors
(charts: DNS errors for app; conntrack entries on hosts running coredns pods, ~130k entries)
--conntrack-min (kube-proxy) => default: 131072
Increase the number of pods and nodes
• 52. @lbernail Something weird
(chart: conntrack entries, one group of nodes with ~130k entries and another group with ~60k entries)
• 53. @lbernail Something weird
(chart annotations: kernel 4.15 nodes vs kernel 5.0 nodes)
Kernel patches in 5.0+ improve conntrack behavior with UDP
=> conntrack entries for DNS get a 30s TTL instead of 180s
Details
● netfilter: conntrack: udp: only extend timeout to stream mode after 2s
● netfilter: conntrack: udp: set stream timeout to 2 minutes
• 55. @lbernail DNS is broken for a single app
Symptom: pgbouncer can't connect to postgres after a coredns update
But everything else works completely fine
• 56. @lbernail DNS is broken for a single app
Let's capture and analyze DNS queries
10.143.217.162 41005 1 0 BAcKEnd.DAtAdOG.serViCe.CONsul.DATAdog.svC.clusTEr.LOCaL
10.143.217.162 41005 28 0 BaCKEND.dATaDOg.sErvicE.cONSuL.dataDog.svc.ClUsTER.loCaL
10.129.224.2 53 1 3 0 BAcKEnd.DAtAdOG.serViCe.CONsul.DATAdog.svC.clusTEr.LOCaL
10.129.224.2 53 28 3 0 BaCKEND.dATaDOg.sErvicE.cONSuL.dataDog.svc.ClUsTER.loCaL
[...]
10.143.217.162 41005 1 0 bACKEND.dataDog.SeRvICe.CONsUl
10.143.217.162 41005 28 0 BaCkenD.DatADOg.ServiCE.COnsUL
10.129.224.2 53 1 0 1 bACKEND.dataDog.SeRvICe.CONsUl
Annotations: source IP and port (same source port across all queries??), query type (1: A, 28: AAAA), response code (3: NXDOMAIN, 0: NOERROR), queried domain (random case??)
IETF draft to increase DNS security: "Use of Bit 0x20 in DNS Labels to Improve Transaction Identity"
This DNS client is clearly not one we know about
• 57. @lbernail DNS is broken for a single app
(same capture as the previous slide; note the last answer)
10.129.224.2 53 1 0 1 bACKEND.dataDog.SeRvICe.CONsUl
The trailing 1 is the truncate bit (TC)
pgbouncer was compiled with evdns, which doesn't support TCP upgrade (and just ignores the answer if TC=1)
The previous coredns version was not setting the TC bit when the upstream TCP answer was too large (bug since fixed)
Recompiling pgbouncer with c-ares fixed the problem
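For comparison, a well-behaved client honors the TC bit by retrying the same question over TCP, which is exactly what evdns does not do. A minimal Go sketch (not the pgbouncer code) using the miekg/dns library; the domain and resolver address are illustrative, taken from the capture above.

package main

import (
    "fmt"
    "log"

    "github.com/miekg/dns"
)

func main() {
    query := new(dns.Msg)
    query.SetQuestion(dns.Fqdn("backend.datadog.service.consul"), dns.TypeA)

    client := new(dns.Client) // UDP by default
    resp, _, err := client.Exchange(query, "10.129.224.2:53")
    if err != nil {
        log.Fatal(err)
    }

    // evdns ignores the answer when TC=1; the correct behavior is to retry over TCP.
    if resp.Truncated {
        tcpClient := &dns.Client{Net: "tcp"}
        resp, _, err = tcpClient.Exchange(query, "10.129.224.2:53")
        if err != nil {
            log.Fatal(err)
        }
    }

    for _, rr := range resp.Answer {
        fmt.Println(rr)
    }
}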
• 59. Sometimes it's not DNS (rarely)
• 60. @lbernail Sometimes it's not DNS
Logs are full of DNS errors
(chart: % of errors in app)
• 61. @lbernail Sometimes it's not DNS
Logs are full of DNS errors
Sharp drop in traffic received by nodes??
(charts: % of errors in app, average #packets received by nodegroups)
• 62. @lbernail Sometimes it's not DNS
Logs are full of DNS errors
Sharp drop in traffic received by nodes??
(charts: % of errors in app, average #packets received by nodegroups, TCP retransmits by AZ-pairs)
High proportion of drops for any traffic involving zone "b"
Confirmed transient issue from the provider
Not really DNS that time, but this was the first impact
  • 63. What we run now
• 64. @lbernail Our DNS setup
(diagram: pod => node-local-dns daemonset binding 169.254.20.10 over UDP; node-local-dns => Coredns service over TCP via kube-proxy/IPVS; node-local-dns and Coredns => cloud resolver over UDP)
node-local-dns config:
cluster.local => forward <dns svc> force_tcp, cache
. => forward <cloud> prefer_udp, cache
Coredns config:
cluster.local => kubernetes (pods disabled), cache
. => forward <cloud> prefer_udp, cache
• 65. @lbernail Our DNS setup
(same diagram as the previous slide)
Container /etc/resolv.conf
search <namespace>.svc.cluster.local svc.cluster.local cluster.local ec2.internal
nameserver 169.254.20.10 <dns svc>
options ndots:5, timeout: 1
• 66. @lbernail Our DNS setup
(same diagram as the previous slide)
Default container /etc/resolv.conf
search <namespace>.svc.cluster.local svc.cluster.local cluster.local ec2.internal
nameserver 169.254.20.10 <dns svc>
options ndots:5, timeout: 1
Alternate /etc/resolv.conf, opt-in using annotations (mutating webhook)
search svc.cluster.local
nameserver 169.254.20.10 <dns svc>
options ndots:2, timeout: 1
• 68. @lbernail Conclusion
● Running Kubernetes means running DNS
● DNS is hard, especially at scale
● Recommendations
  ○ Use node-local-dns and cache everywhere you can
  ○ Test your DNS infrastructure (load-tests, rolling updates)
  ○ Understand the upstream DNS services you depend on
● For your applications
  ○ Try to standardize on a few resolvers
  ○ Use async resolution/long-lived connections whenever possible (see the sketch below)
  ○ Include DNS failures in your (Chaos) tests
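As a small illustration of the long-lived connections recommendation, here is a minimal Go sketch (not from the talk; the URL is illustrative): a single shared http.Client pools keep-alive connections, so DNS is only resolved when a new connection is opened rather than on every request.

package main

import (
    "fmt"
    "io"
    "net/http"
    "time"
)

// One shared client for the whole process: idle connections are pooled and
// reused, so repeated requests to the same service do not trigger new lookups.
var client = &http.Client{
    Transport: &http.Transport{
        MaxIdleConns:        100,
        MaxIdleConnsPerHost: 10,
        IdleConnTimeout:     90 * time.Second,
    },
    Timeout: 5 * time.Second,
}

func main() {
    for i := 0; i < 3; i++ {
        resp, err := client.Get("http://echo.datadog.svc.cluster.local/health")
        if err != nil {
            fmt.Println("request failed:", err)
            continue
        }
        io.Copy(io.Discard, resp.Body) // drain the body so the connection can be reused
        resp.Body.Close()
    }
}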