Executive Summary
Networking baseline = reliable, secure, predictable connectivity with proper tuning for your infrastructure.
Why networking configuration matters:
Many production outages trace back to network issues: a misconfigured firewall blocking traffic, an exhausted connection-tracking table, or timeouts set too aggressively. Proper networking configuration prevents these disasters.
Real-world disasters prevented by good networking:
1. Firewall accidentally blocks production traffic:
Problem: Engineer adds SSH rule, accidentally sets policy to "drop all"
Result: Website goes down, SSH also blocked (can't fix it remotely)
Prevention: Test firewall rules with policy "accept" first, then switch to "drop" (see the rollback sketch after this list)
2. Connection tracking table exhausted:
Problem: Load balancer hitting 100k req/sec, default conntrack_max=65536
Result: "Cannot assign requested address" β new connections fail
Prevention: Monitor conntrack usage, increase limit to 500k+ for high traffic
3. Aggressive timeouts kill slow requests:
Problem: nginx proxy_read_timeout=10s, report generation takes 30s
Result: Proxy kills connection after 10s, users see "502 Bad Gateway"
Prevention: Set generous client-facing timeouts (60s+) and stricter backend timeouts (10-30s)
4. IPv6 not configured, half your users can’t connect:
Problem: Mobile networks defaulting to IPv6, your server IPv4-only
Result: T-Mobile/Verizon users get connection errors
Prevention: Enable IPv6 dual-stack (both IPv4 and IPv6)
5. No traffic shaping, one service DoS’s itself:
Problem: Batch job sends 10Gbps burst, saturates 1Gbps link
Result: Website slow/down, SSH unresponsive (all sharing same link)
Prevention: tc qdisc limits traffic to prevent saturation
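A sketch of the rollback trick from disaster 1, assuming the candidate rules live in /etc/nftables.conf.new (illustrative path):
# 1. Syntax-check the candidate ruleset
sudo nft -c -f /etc/nftables.conf.new
# 2. Snapshot the currently working ruleset (prefix with flush so it restores cleanly)
{ echo "flush ruleset"; sudo nft list ruleset; } > /tmp/nft-backup.nft
# 3. Apply the candidate rules
sudo nft -f /etc/nftables.conf.new
# 4. If SSH still works, press Ctrl-C to cancel the rollback;
#    otherwise the snapshot is restored automatically after 2 minutes
sleep 120 && sudo nft -f /tmp/nft-backup.nft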
What this guide teaches you:
- Why systemd-networkd vs NetworkManager - When to use declarative vs dynamic config
- How IPv6 works - Dual-stack setup, why it matters
- How firewalls actually work - Packet flow through nftables chains
- What traffic shaping does - Prevent burst-induced outages
- What connection tracking is - Why it runs out, how to size it
- Where to terminate TLS - Load balancer vs backend, mTLS explained
This guide covers:
- Network Configuration: systemd-networkd (predictable) vs. NetworkManager (flexible)
- IPv6 Dual-Stack: Future-proof addressing (IPv4 + IPv6)
- Firewall: nftables ruleset skeleton (modern, efficient)
- Traffic Shaping: tc/qdisc for burst handling
- Connection Tracking: conntrack sizing for scale
- TLS/mTLS: Where to terminate, keepalive tuning
1. Network Configuration: systemd-networkd vs. NetworkManager
Comparison
Feature | systemd-networkd | NetworkManager |
---|---|---|
Config | .network files (predictable) | GUI/CLI (flexible) |
Daemon | systemd-networkd | NetworkManager |
Use case | Servers (declarative IaC) | Desktops/laptops (dynamic) |
Complexity | Simple (one config file) | More options (for power users) |
Boot Time | Faster (waits only for interfaces it manages) | Slower (wait-online waits for all devices) |
Cloud/Container | Preferred (cloud-init compatible) | Can work (but not typical) |
Recommendation
Production servers: Use systemd-networkd
Desktops/laptops: Use NetworkManager
Cloud/containers: Use cloud-init + systemd-networkd
Detailed explanation: Why the choice matters
systemd-networkd - The declarative approach:
What it is: You write .network files (plain text config), systemd reads them at boot, configures network exactly as specified.
Why it’s good for servers:
Server deployment workflow:
1. Write /etc/systemd/network/10-static.network with IP 192.168.1.10
2. Ansible/Terraform deploys this file to 100 servers
3. All 100 servers get identical, predictable network config
4. No surprises, no GUI needed, perfect for automation
Real example - Infrastructure as Code (IaC):
# Ansible playbook deploys this to all web servers:
/etc/systemd/network/10-web-eth0.network:
Address=10.0.1.{{ inventory_hostname_index }}/24
Gateway=10.0.1.254
Result:
web1: 10.0.1.1
web2: 10.0.1.2
web3: 10.0.1.3
...
Perfect for cloud/containers - config is a file, easy to version control
NetworkManager - The dynamic approach:
What it is: Daemon that automatically manages network (WiFi scanning, DHCP, VPN, etc.). GUI-friendly.
Why it’s good for desktops/laptops:
Laptop use case:
- Home: Connects to WiFi "HomeNet", gets 192.168.1.100 via DHCP
- Coffee shop: Switches to WiFi "Starbucks", gets 10.0.0.50
- Office: Ethernet cable plugged in, WiFi disabled, gets 172.16.10.20
NetworkManager handles all this automatically
No manual config files needed
Why it’s BAD for servers:
Problem: Dynamic behavior is unpredictable
- NetworkManager might reorder interfaces (eth0 becomes eth1)
- DHCP might assign different IP after reboot
- GUI tools expect desktop environment
Result: Server IP changes → DNS breaks → website down
When to use each:
Scenario | Use | Why |
---|---|---|
AWS EC2, GCP Compute | systemd-networkd + cloud-init | Cloud provider sets network via metadata |
Kubernetes nodes | systemd-networkd | Static config, no DHCP surprises |
Docker host | systemd-networkd | Containers need stable networking |
Physical servers (datacenter) | systemd-networkd | Static IPs, no WiFi, predictable |
Developer laptop | NetworkManager | WiFi roaming, VPN, dynamic |
Desktop workstation | NetworkManager | GUI configuration tools |
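Quick way to see which daemon actually manages a given box (the nmcli line only applies if NetworkManager is installed):
systemctl is-active systemd-networkd NetworkManager
networkctl list        # links managed by systemd-networkd
nmcli device status    # devices managed by NetworkManager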
2. systemd-networkd: Declarative Configuration
Install & Enable
# systemd-networkd usually pre-installed
# Verify & enable
sudo systemctl enable systemd-networkd
sudo systemctl restart systemd-networkd
# Disable NetworkManager (if installed)
sudo systemctl disable NetworkManager
sudo systemctl stop NetworkManager
IPv4 Configuration
Simple DHCP (/etc/systemd/network/10-dhcp.network):
[Match]
Name=eth0
[Network]
DHCP=yes
[DHCP]
UseDomains=yes
Static IP (/etc/systemd/network/10-static.network):
[Match]
Name=eth0
[Network]
Address=192.168.1.10/24
Gateway=192.168.1.1
DNS=8.8.8.8 8.8.4.4
Multiple Interfaces (/etc/systemd/network/20-static-secondary.network):
[Match]
Name=eth1
[Network]
Address=10.0.0.10/24
DHCP=no
LinkLocalAddressing=no
IPv6 Dual-Stack Configuration
What it is:
- IPv4 + IPv6 on same interface
- Future-proof (world runs out of IPv4)
- Both stateless (SLAAC) and stateful (DHCPv6)
Detailed explanation: Why IPv6 matters in 2025+
The IPv4 exhaustion problem:
IPv4 address space: 4.3 billion addresses (2^32)
World population: 8 billion people
IoT devices: 30+ billion
Mobile devices: 10+ billion
Math doesn't work: We ran out of IPv4 addresses years ago
What IPv6 solves:
IPv6 address space: 340 undecillion addresses (2^128)
That's: 340,282,366,920,938,463,463,374,607,431,768,211,456 addresses
Enough for: Every grain of sand on Earth to have trillions of IPs
Practical result: No more NAT required, every device gets public IP
Real-world impact - Mobile networks:
Problem: T-Mobile, Verizon, AT&T use IPv6-only networks (with NAT64 for IPv4)
What happens if your server is IPv4-only:
Mobile user on T-Mobile tries to access yoursite.com
  ↓
Phone only has an IPv6 address (2600:1234::abc)
  ↓
DNS lookup: yoursite.com → 203.0.113.10 (IPv4 only)
  ↓
Phone can't connect directly (IPv4 and IPv6 are incompatible on the wire)
  ↓
Carrier NAT64 translates between IPv6 and IPv4 (extra hop: slower, less reliable)
  ↓
User experiences slow load times or connection failures
With dual-stack (IPv4 + IPv6):
DNS lookup: yoursite.com → 203.0.113.10 (A record, IPv4)
                         → 2001:db8::10 (AAAA record, IPv6)
  ↓
Phone prefers IPv6 (2001:db8::10)
  ↓
Direct connection, no NAT, faster, more reliable
IPv6 address format explained:
IPv4: 192.168.1.10 (4 octets, decimal)
IPv6: 2001:0db8:0000:0000:0000:0000:0000:0010 (8 groups, hexadecimal)
Shortened: 2001:db8::10
- Leading zeros omitted (0db8 → db8)
- Consecutive zero groups replaced with :: (only once)
IPv6 address types:
Type | Example | Purpose |
---|---|---|
Global unicast | 2001:db8::10/64 | Public internet (like IPv4 public) |
Link-local | fe80::1/64 | Local network only (auto-configured) |
Unique local | fd00::10/64 | Private network (like 192.168.x.x) |
Loopback | ::1 | Localhost (like 127.0.0.1) |
Multicast | ff02::1 | All nodes on link |
SLAAC vs DHCPv6 - How IPv6 gets configured:
SLAAC (Stateless Address Auto-Configuration):
What it does: Router broadcasts prefix (2001:db8::/64)
Device creates its own IP by combining prefix + MAC/random
Example:
Router announces: 2001:db8::/64
Device MAC: 00:11:22:33:44:55
Device generates: 2001:db8::211:22ff:fe33:4455
Benefit: No DHCP server needed, fully automatic
Drawback: No centralized IP tracking (hard to audit)
DHCPv6 (Stateful, like DHCPv4):
What it does: DHCPv6 server assigns specific IPv6 address
Example:
Device requests IP
DHCPv6 server responds: Use 2001:db8::100
Device configures: 2001:db8::100/64
Benefit: Centralized control, know which device has which IP
Drawback: Requires DHCPv6 server infrastructure
Dual-stack in practice:
Your server configuration:
IPv4: 203.0.113.10/24 (static or DHCP)
IPv6: 2001:db8::10/64 (static or SLAAC)
DNS records (both):
yoursite.com. A 203.0.113.10
yoursite.com. AAAA 2001:db8::10
Client behavior:
Modern browsers try IPv6 first (Happy Eyeballs algorithm)
If IPv6 doesn't connect quickly (IPv4 typically starts after a ~250 ms head start), fall back to IPv4
Result: Best of both worlds, maximum compatibility
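A quick dual-stack sanity check against a live domain (substitute your own domain for yoursite.com):
dig +short A yoursite.com       # should return the IPv4 address
dig +short AAAA yoursite.com    # should return the IPv6 address
curl -4 -sI https://yoursite.com -o /dev/null -w 'IPv4: %{http_code}\n'
curl -6 -sI https://yoursite.com -o /dev/null -w 'IPv6: %{http_code}\n'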
Dual-Stack DHCP (/etc/systemd/network/10-dual-stack.network):
[Match]
Name=eth0
[Network]
# IPv4
DHCP=ipv4
# IPv6: Accept Router Advertisements (RA) for SLAAC
IPv6AcceptRA=yes
# Or enable stateful DHCPv6 as well (DHCPv4 + DHCPv6):
# DHCP=yes
# DNS: Cloudflare IPv6 resolvers, with IPv4 fallback
# (note: .network files do not support trailing comments after values)
DNS=2606:4700:4700::1111 2606:4700:4700::1001
DNS=8.8.8.8 8.8.4.4
[DHCPv6]
# Request DNS/search domains from the DHCPv6 server
UseDomains=yes
[DHCP]
UseDomains=yes
Dual-Stack Static (/etc/systemd/network/10-dual-static.network):
[Match]
Name=eth0
[Network]
# IPv4
Address=192.168.1.10/24
Gateway=192.168.1.1
# IPv6
Address=2001:db8::10/64
Gateway=fe80::1
# DNS (both IPv4 & IPv6)
DNS=2606:4700:4700::1111
DNS=8.8.8.8
# IPv6 link-local addressing (fe80::/10) is enabled by default; set explicitly if needed
LinkLocalAddressing=ipv6
Apply Configuration
# Reload & apply
sudo systemctl restart systemd-networkd
# Verify
ip a # Show all interfaces
ip r # Show routes
resolvectl status # DNS configuration
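networkctl (ships with systemd-networkd) shows the per-interface result of the .network files:
networkctl list          # managed interfaces and their state
networkctl status eth0   # addresses, gateway, DNS for one interface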
Bonding & Teaming
Bond device (/etc/systemd/network/10-bond.netdev):
[NetDev]
Name=bond0
Kind=bond
[Bond]
Mode=active-backup
MIIMonitorSec=100ms
FailOverMACPolicy=active
Bond interface (/etc/systemd/network/10-bond.network):
[Match]
Name=bond0
[Network]
DHCP=ipv4
IPv6AcceptRA=yes
Slave Interfaces (/etc/systemd/network/20-bond-slaves.network):
[Match]
Name=eth0 eth1
[Network]
Bond=bond0
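Verify the bond came up and which slave is active:
networkctl status bond0         # bond state as seen by systemd-networkd
cat /proc/net/bonding/bond0     # active slave, MII status, failover counts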
3. nftables Firewall Ruleset Skeleton
Production-Grade Skeleton
/etc/nftables.conf:
#!/usr/bin/nft -f
flush ruleset
# ===== VARIABLES =====
define ALLOWED_SSH = { 10.0.0.0/8, 192.168.0.0/16 }
define HTTP_PORTS = { 80, 443 }
define DB_PORTS = { 5432, 3306 }
# ===== TABLES =====
table inet filter {
# ===== CHAINS =====
chain input {
type filter hook input priority 0; policy drop;
# Accept loopback
iif "lo" accept
# Accept established/related
ct state established,related accept
# ICMP (ping)
icmp type echo-request accept
icmpv6 type echo-request accept
# ICMPv6 neighbor discovery (required for IPv6 to function under policy drop)
icmpv6 type { nd-router-solicit, nd-router-advert, nd-neighbor-solicit, nd-neighbor-advert } accept
# SSH (restricted by source)
tcp dport 22 ip saddr $ALLOWED_SSH accept
# HTTP/HTTPS (public)
tcp dport $HTTP_PORTS accept
# DNS (only if this host runs a DNS server)
udp dport 53 accept
tcp dport 53 accept
# Metrics (internal only)
tcp dport 9100 ip saddr 10.0.0.0/8 accept
# Drop with logging (for debugging)
limit rate 5/minute log prefix "nft_drop: "
# Final drop
counter drop
}
chain forward {
type filter hook forward priority 0; policy drop;
# Disable forwarding by default (unless L3 router)
}
chain output {
type filter hook output priority 0; policy accept;
# Allow all outbound (you can restrict later)
}
}
# ===== NAT TABLE (if using NAT) =====
table inet nat {
chain postrouting {
type nat hook postrouting priority 100; policy accept;
# Masquerade outbound (local → external)
# oif "eth0" masquerade
}
}
# ===== RATE LIMITING TABLE =====
table inet rate-limit {
chain input {
type filter hook input priority -1; policy accept;
# Rate limit SYN packets (DDoS mitigation)
tcp flags syn limit rate 100/second accept
tcp flags syn drop
}
}
Load & Verify
# Syntax check
sudo nft -c -f /etc/nftables.conf
# Load
sudo systemctl restart nftables
# Verify
sudo nft list ruleset
sudo nft list ruleset | head -30 # First 30 lines
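The skeleton's rate-limited log rule tags would-be-dropped packets with the prefix "nft_drop: "; watch those kernel log lines with either of:
sudo journalctl -k -f | grep nft_drop
sudo dmesg -w | grep nft_drop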
Dynamic Rules (Runtime)
# Add rule temporarily (lost on reboot)
sudo nft add rule inet filter input tcp dport 8080 accept
# List rules with line numbers
sudo nft -a list ruleset | grep 8080
# Delete by handle
sudo nft delete rule inet filter input handle 15
# Persist: edit /etc/nftables.conf instead
4. Traffic Shaping: tc & qdisc
Why Traffic Shaping
Use case: Burst handling (sudden traffic spike), rate limiting, prioritization
Example: Limit app to 100 Mbps to prevent overwhelming backend
Basic tc Setup
Add qdisc (queuing discipline) to interface:
# Token Bucket Filter (TBF): simple rate limiting
# Rate: 100 Mbps, burst: 1 MB, latency: 400 ms
sudo tc qdisc add dev eth0 root tbf \
rate 100mbit \
burst 1m \
latency 400ms
# Verify
sudo tc qdisc show dev eth0
sudo tc -s qdisc show dev eth0 # Show statistics
Fair Queueing (fq): per-flow isolation:
# Each flow (connection) gets fair share
sudo tc qdisc add dev eth0 root fq \
maxrate 100mbit \
quantum 9000
# Show flows
sudo tc -s qdisc show dev eth0
HTB (Hierarchical Token Bucket): traffic classes:
# Create root class
sudo tc qdisc add dev eth0 root handle 1: htb default 30
# Create class 1:1 with 100 Mbps guarantee
sudo tc class add dev eth0 parent 1: classid 1:1 htb \
rate 100mbit \
burst 1m
# Create leaf class 1:10 (web traffic, 70 Mbps)
sudo tc class add dev eth0 parent 1:1 classid 1:10 htb \
rate 70mbit
# Create leaf class 1:20 (DB traffic, 30 Mbps)
sudo tc class add dev eth0 parent 1:1 classid 1:20 htb \
rate 30mbit
# Assign traffic to classes (sport = source port)
sudo tc filter add dev eth0 parent 1: protocol ip prio 1 u32 \
match ip sport 80 0xffff flowid 1:10
sudo tc filter add dev eth0 parent 1: protocol ip prio 1 u32 \
match ip sport 3306 0xffff flowid 1:20
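To confirm the HTB classes and filters are actually catching traffic, check the per-class counters:
sudo tc -s class show dev eth0   # byte/packet counters for 1:10 (web) and 1:20 (DB)
sudo tc filter show dev eth0     # the u32 filters attached to the root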
Make Persistent (systemd)
Create /etc/systemd/system/tc-setup.service:
[Unit]
Description=Traffic Control Setup
After=network-online.target
Wants=network-online.target
[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/local/bin/tc-setup.sh
[Install]
WantedBy=multi-user.target
Create /usr/local/bin/tc-setup.sh:
#!/bin/bash
# Wait for interface to be ready
sleep 5
# TBF qdisc ("replace" is idempotent, so restarting the unit won't fail if a qdisc already exists)
tc qdisc replace dev eth0 root tbf \
rate 100mbit \
burst 1m \
latency 400ms
# Verify
tc qdisc show dev eth0
Enable:
sudo chmod +x /usr/local/bin/tc-setup.sh
sudo systemctl enable tc-setup.service
sudo systemctl start tc-setup.service
5. Connection Tracking (conntrack) Sizing
Why conntrack Matters
Problem: Conntrack table full → kernel drops new connections ("nf_conntrack: table full, dropping packet" in dmesg)
Cause: Many short-lived connections (web clients, microservices)
Solution: Increase table size based on workload
Check Current Status
# Current entries
cat /proc/sys/net/netfilter/nf_conntrack_count
# Output: 12345
# Max allowed
cat /proc/sys/net/netfilter/nf_conntrack_max
# Output: 262144 (or similar)
# Per-CPU internals (note: one line per CPU, values in hexadecimal)
cat /proc/net/stat/nf_conntrack
# Utilization %
echo "scale=2; $(cat /proc/sys/net/netfilter/nf_conntrack_count) / $(cat /proc/sys/net/netfilter/nf_conntrack_max) * 100" | bc
Increase conntrack_max
Calculate needed size:
# For load balancer: (max concurrent connections) + headroom
# Example: 50,000 concurrent connections → set to 200,000 (4x headroom)
# Temporary (lost on reboot)
sudo sysctl -w net.netfilter.nf_conntrack_max=200000
# Permanent (/etc/sysctl.d/99-conntrack.conf)
echo "net.netfilter.nf_conntrack_max=200000" | sudo tee -a /etc/sysctl.d/99-conntrack.conf
sudo sysctl -p /etc/sysctl.d/99-conntrack.conf
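A minimal monitoring sketch (run from cron, for example): log a warning when the table passes 80% of nf_conntrack_max. The threshold and log tag are arbitrary choices.
count=$(cat /proc/sys/net/netfilter/nf_conntrack_count)
max=$(cat /proc/sys/net/netfilter/nf_conntrack_max)
if [ $((count * 100 / max)) -ge 80 ]; then
    logger -t conntrack-watch "conntrack table at ${count}/${max} entries (>80%)"
fi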
Tune conntrack Timeouts
# Default TCP established timeout: 432000 seconds (5 days)
# Reduce to 1 hour for faster cleanup
sudo sysctl -w net.netfilter.nf_conntrack_tcp_timeout_established=3600
# TCP TIME_WAIT timeout (default 120 seconds)
# Reduce to 60 seconds
sudo sysctl -w net.netfilter.nf_conntrack_tcp_timeout_time_wait=60
# UDP timeout (default 30 seconds)
sudo sysctl -w net.netfilter.nf_conntrack_udp_timeout=60
# UDP stream timeout for long-lived bidirectional flows (default 120 seconds)
sudo sysctl -w net.netfilter.nf_conntrack_udp_timeout_stream=300
Persistent config (/etc/sysctl.d/99-conntrack.conf):
# Connection Tracking Tuning
net.netfilter.nf_conntrack_max = 500000
net.netfilter.nf_conntrack_tcp_timeout_established = 3600
net.netfilter.nf_conntrack_tcp_timeout_time_wait = 60
net.netfilter.nf_conntrack_udp_timeout = 60
net.netfilter.nf_conntrack_udp_timeout_stream = 300
# conntrack hash table size; rule of thumb: nf_conntrack_max / 8
# (65536 is roughly 500000 / 8, rounded to a power of two)
net.netfilter.nf_conntrack_buckets = 65536
6. TLS Termination & mTLS
Where to Terminate TLS
L7 Load Balancer (TLS termination):
Client
  ↓ (HTTPS → L7 LB, which holds the certificate)
Ingress/ALB
  ↓ (HTTP or mTLS → backend; LB decrypts)
Backend (app)
Benefit: Certificate management in one place (LB), simpler app code
Drawback: LB must handle encryption overhead, plaintext inside
mTLS (mutual TLS) inside mesh:
Client
  ↓ (HTTPS → L7 LB)
Ingress/ALB
  ↓ (mTLS: both sides authenticate → sidecar/service mesh)
Service Mesh (Istio, Linkerd)
  ↓ (mTLS → backend)
Backend
Benefit: End-to-end encryption, service-to-service auth
Drawback: More certificates, networking overhead
TLS Termination Best Practice
Production setup:
Client → LB (TLS 1.3) → backend (HTTP/2 or mTLS)
- Inbound (client-facing): Strong TLS 1.3, modern ciphers, certificate rotation
- Outbound (to backend): Either HTTP (if private network) or mTLS (if untrusted)
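Two quick checks against a live endpoint (substitute your own domain for yoursite.com) to confirm the LB terminates TLS 1.3 and serves the expected certificate:
openssl s_client -connect yoursite.com:443 -tls1_3 </dev/null 2>/dev/null \
  | openssl x509 -noout -subject -issuer -enddate
curl -sI --http2 -o /dev/null -w 'negotiated HTTP version: %{http_version}\n' https://yoursite.com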
Keep-alive & Timeouts for Load Balancers
Why Timeouts Matter:
- Prevent resource exhaustion (hanging connections)
- Detect dead backends (quick failover)
- Trade-off: Too short = excessive reconnects, too long = slow detection
Recommended Values:
Parameter | Client-facing LB | Backend (app server) | Notes |
---|---|---|---|
Connect timeout | 5-10 seconds | 1-5 seconds | Initial connection |
Read timeout | 30-60 seconds | 10-30 seconds | Waiting for response |
Write timeout | 30-60 seconds | 10-30 seconds | Sending request |
Idle timeout | 60-120 seconds | 30-60 seconds | Keep-alive max |
Keep-alive probe | Every 30 seconds | Every 10 seconds | TCP keepalive |
nginx Backend Example:
upstream backend {
# Connection pooling (persistent connections)
keepalive 32; # Keep 32 idle connections open
server app1.example.com:8080 max_fails=3 fail_timeout=30s;
server app2.example.com:8080 max_fails=3 fail_timeout=30s;
}
server {
listen 443 ssl http2;
# Client-facing timeouts (generous)
proxy_connect_timeout 10s;
proxy_read_timeout 60s;
proxy_send_timeout 60s;
send_timeout 60s;
location / {
proxy_pass http://backend;
# Backend pool timeouts (stricter)
proxy_connect_timeout 5s;
proxy_read_timeout 30s;
proxy_send_timeout 30s;
# Keep connections alive to backend
proxy_http_version 1.1;
proxy_set_header Connection "";
# Forwarding
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
}
HAProxy Backend Example:
backend web_servers
# Reuse idle server-side connections (connection pooling)
option http-keep-alive
http-reuse safe
timeout connect 5000 # 5 seconds
timeout server 30000 # 30 seconds
# Health check with timeout
default-server inter 5s fall 3 rise 2
server app1 app1:8080 check
server app2 app2:8080 check
frontend web_client
# Client-facing timeouts (generous)
timeout client 60000 # 60 seconds
# Keep-alive
option http-keep-alive
timeout http-keep-alive 5000 # 5 seconds between requests
Kubernetes Service session-affinity timeout (if using NodePort/LoadBalancer):
apiVersion: v1
kind: Service
metadata:
name: my-app
spec:
type: LoadBalancer
sessionAffinity: ClientIP
sessionAffinityConfig:
clientIP:
timeoutSeconds: 10800 # 3 hours
selector:
app: my-app
ports:
- port: 443
targetPort: 8080
protocol: TCP
TCP Keep-Alive Configuration
On Linux (for long-lived connections):
# sysctl tuning for TCP keep-alive
net.ipv4.tcp_keepalive_time = 600     # Time before first probe (10 min)
net.ipv4.tcp_keepalive_intvl = 15     # Interval between probes (15 sec)
net.ipv4.tcp_keepalive_probes = 5     # Probes before the connection is declared dead
# Per-socket (in app code):
# Python: socket.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
# Go: conn.SetKeepAlive(true); conn.SetKeepAlivePeriod(30 * time.Second)
# Java: socket.setKeepAlive(true)
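Persist the keepalive settings across reboots (file name is a convention, mirroring the conntrack example above):
sudo tee /etc/sysctl.d/99-tcp-keepalive.conf <<'EOF'
net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_intvl = 15
net.ipv4.tcp_keepalive_probes = 5
EOF
sudo sysctl -p /etc/sysctl.d/99-tcp-keepalive.conf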
Networking Checklist
Pre-Deployment
- Network config chosen (systemd-networkd or NetworkManager)
- IPv6 dual-stack enabled (if supported by ISP/cloud)
- nftables ruleset created & tested
- SSH rule restricted to trusted sources
- DNS configured (both IPv4 & IPv6)
- conntrack_max sized for expected connections
- Traffic shaping tested (tc qdisc applied)
- Load balancer timeouts tuned (connect, read, write)
- Keep-alive configured (client & backend)
- Certificate rotation automated (if TLS termination)
Post-Deployment
- Network interfaces up & configured (ip a)
- Routes correct (ip r)
- DNS resolving (resolvectl status)
- Firewall rules active (nft list ruleset)
- SSH accessible from approved sources only
- Load balancer connecting to backends
- Connection counts reasonable (not exhausting conntrack)
- Traffic shaping working (tc -s qdisc show)
Ongoing Monitoring
- Weekly: Check network stats (errors, drops, collisions)
- Weekly: Monitor conntrack usage (near limit?)
- Monthly: Verify keep-alive working (connection reuse)
- Monthly: Check certificate expiry (if TLS termination)
- Quarterly: Test failover (disable one backend, verify recovery)
Quick Reference Commands
# ===== NETWORK CONFIGURATION =====
ip a # Show interfaces
ip r # Show routes
resolvectl status # DNS configuration
systemctl status systemd-networkd # Network service status
# ===== IPv6 =====
ip -6 a # IPv6 addresses
ip -6 r # IPv6 routes
ping6 2001:4860:4860::8888 # Test IPv6 connectivity
# ===== nftables =====
sudo nft list ruleset # Show all rules
sudo nft list table inet filter # Show specific table
sudo nft -a list ruleset # Show with handle numbers
# ===== Traffic Control =====
sudo tc qdisc show # List qdiscs
sudo tc -s qdisc show # With statistics
sudo tc filter show dev eth0 # Show filters
# ===== Connection Tracking =====
cat /proc/net/stat/nf_conntrack # Current conntrack stats
cat /proc/sys/net/netfilter/nf_conntrack_max # Max allowed
conntrack -L | head -10 # List connections
# ===== Network Stats =====
ss -tulnp # Listening sockets
ethtool -S eth0 | grep -i drop # NIC drops
iftop # Bandwidth by host
nethogs # Bandwidth by process