Why Your Devices Lost Internet After Installing AdGuard Home (And How to Fix It)
A deep dive into diagnosing network connectivity issues when running AdGuard Home alongside Tailscale on Proxmox
The Problem
You’ve just installed AdGuard Home on your homelab server. You configure your router to use it as the DNS server, excited about network-wide ad blocking. Then you notice something strange: your Apple TV reports “No Internet Connection.” Your phone can’t load websites. But oddly, your laptop works fine, and some devices seem completely unaffected.
What’s going on?
This guide walks through a real troubleshooting session that uncovered a subtle but devastating interaction between Tailscale’s subnet routing and local DNS services. If you’re running AdGuard Home (or Pi-hole) on a Proxmox VM (or other hypervisor) with Tailscale installed, this might save you hours of debugging.
The Setup
- Proxmox server hosting a Linux VM (“dockerhost”)
- AdGuard Home running in Docker on that VM
- Tailscale installed on the VM for remote access
- Router configured to hand out the VM’s IP as the DNS server via DHCP
This is a common homelab pattern. Tailscale gives you secure remote access, Docker keeps services isolated, and AdGuard Home provides network-wide ad blocking.
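A minimal docker-compose sketch of this layout (the image name is AdGuard Home's official one; the ports, volumes, and service name are illustrative, so adjust to your install):

```yaml
services:
  adguard:
    image: adguard/adguardhome
    restart: unless-stopped
    ports:
      - "53:53/tcp"      # DNS
      - "53:53/udp"      # DNS
      - "80:80/tcp"      # AdGuard web UI
      - "3000:3000/tcp"  # first-run setup wizard
    volumes:
      - ./work:/opt/adguardhome/work
      - ./conf:/opt/adguardhome/conf
```

With port mappings like these, Docker's NAT sits between clients and AdGuard; that detail becomes relevant in the client-visibility section later on.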
Initial Symptoms
After enabling AdGuard Home as the network’s DNS server:
| Device | Internet Access | On Tailscale? |
|---|---|---|
| Apple TV | ❌ No | No |
| iPhone (Tailscale off) | ❌ No | No |
| Personal laptop (Tailscale off) | ❌ No | No |
| Work laptop | ✅ Yes | No (but has corporate VPN) |
| iPhone (Tailscale on) | ✅ Yes | Yes |
| Other Tailnet devices | ✅ Yes | Yes |
| Personal laptop (Tailscale on) | ✅ Yes | Yes |
The pattern: only Tailscale-connected devices could reach the internet.
The Diagnostic Journey
Step 1: Check If AdGuard Is Running
First instinct: is AdGuard Home actually working?
```shell
# On the VM running AdGuard
dig @127.0.0.1 google.com +short
```
Result: Returns IP addresses. AdGuard is working locally.
Step 2: Check If AdGuard Is Listening on All Interfaces
Maybe AdGuard is only bound to localhost?
```shell
sudo ss -lunp | grep :53
```
Expected Result:
```
UNCONN 0 0 0.0.0.0:53 0.0.0.0:* users:(("docker-proxy"...))
```
It’s bound to 0.0.0.0:53 — all interfaces. That’s correct.
Step 3: Test DNS From the LAN
From a MacBook on the same network:
```shell
dig @<VM_IP> google.com +short
```
Result: Connection timed out. The VM is unreachable.
Step 4: Test Basic Connectivity
```shell
ping <VM_IP>
```
Result: 100% packet loss. Can’t even ping the VM.
This isn’t a DNS problem. It’s a fundamental network connectivity problem.
Step 5: Verify the VM Can Reach the Network
From the VM:
```shell
ping <ROUTER_IP>
```
Result: Works fine. The VM can reach the router.
```shell
ping <MACBOOK_IP>
```
Result: Works fine. The VM can reach the MacBook.
This is asymmetric connectivity. The VM can talk out, but nothing can talk in. Classic firewall behavior.
Step 6: Check Proxmox Firewall
Proxmox has multiple firewall layers:
- Datacenter level — applies to everything
- Node level — applies to the Proxmox host
- VM level — applies to individual VMs
- NIC level — the `firewall=1` flag on network interfaces
Checked each location in Proxmox UI. Found:
- Datacenter firewall: Off
- Node firewall: On
- VM firewall: Off
- NIC firewall: `firewall=1` ← Suspicious
Disabled the NIC firewall (Hardware → Network Device → Edit → uncheck Firewall).
Result: Still can’t ping. Not the cause.
Step 7: Check From the Proxmox Host
The MacBook can’t reach the VM. Can the Proxmox host itself reach it?
From Proxmox shell:
```shell
ping <VM_IP>
```
Result: 100% packet loss. Even the host can’t reach its own VM.
But the ARP table shows the correct MAC address:
```shell
arp -n | grep <VM_IP>
```
Result: Shows the VM’s MAC. Layer 2 is working. Layer 3 is broken.
Step 8: Packet Capture
On the VM, capture incoming ICMP:
```shell
sudo tcpdump -i eth0 icmp -n
```
While running ping from Proxmox host:
Result:
```
IP <PROXMOX_IP> > <VM_IP>: ICMP echo request, id 2, seq 1, length 64
IP <PROXMOX_IP> > <VM_IP>: ICMP echo request, id 2, seq 2, length 64
IP <PROXMOX_IP> > <VM_IP>: ICMP echo request, id 2, seq 3, length 64
```
The packets ARE arriving. The VM receives the pings but doesn’t respond.
Step 9: Check Routing
Where does the VM think it should send responses?
```shell
ip route get <PROXMOX_IP>
```
Result:
```
<PROXMOX_IP> dev tailscale0 table 52 src <TAILSCALE_IP>
```
There’s the problem.
The VM is trying to reply via Tailscale (tailscale0) instead of the physical interface. The response goes out with the wrong source IP and gets dropped.
Step 10: Examine Tailscale’s Routing Table
```shell
ip route show table 52
```
Result:
```
100.x.x.x dev tailscale0
100.x.x.x dev tailscale0
... (other Tailscale IPs)
192.168.x.0/24 dev tailscale0   ← THE CULPRIT
```
Tailscale has added the entire LAN subnet to its routing table. Every packet destined for the LAN goes through Tailscale instead of the physical interface.
Root Cause
Tailscale subnet routing was capturing LAN traffic.
When you enable subnet routing in Tailscale (to access your LAN remotely), Tailscale adds routes for those subnets. If your VM is on the same subnet being advertised, this creates a routing conflict:
- Ping arrives on physical interface (`eth0`) from `192.168.x.x`
- Kernel needs to send reply to `192.168.x.x`
- Kernel consults routing rules
- Tailscale's table 52 matches `192.168.x.0/24` → route via `tailscale0`
- Reply goes out via Tailscale with wrong source IP
- Original sender never receives the reply
Tailscale devices worked because they connected via Tailscale IPs (100.x.x.x), not LAN IPs. Their traffic never hit the problematic route.
The Fix
Add a routing rule that prioritizes the main routing table for LAN traffic (replace `192.168.x.0/24` with your own LAN subnet; your router's admin page shows your addressing scheme):
```shell
sudo ip rule add to 192.168.x.0/24 priority 5199 lookup main
```
This tells the kernel: “For any traffic going to the LAN subnet, check the main routing table first (which routes via the physical interface), before checking Tailscale’s table 52.”
Explanation for beginners:
Linux routing works like a priority list. When sending a packet, the kernel goes through rules in order (lowest number = highest priority) until one matches. Tailscale inserted rules at priority 5210-5270. Our rule at 5199 jumps ahead and says “use normal routing for LAN traffic.”
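As a toy model of that priority walk (the 192.168.1.0/24 subnet is made up for the demo, and the matching is simplified; real tables do longest-prefix lookups), a sketch in shell:

```shell
# Toy model of the kernel's policy-routing walk. Rules are tried in
# ascending priority; the first table with a matching route decides
# the egress device. Subnet 192.168.1.0/24 is made up for the demo.
route_lookup() {
  dst=$1
  # priority 5199 (our fix): "to 192.168.1.0/24 lookup main" -> eth0
  case $dst in 192.168.1.*) echo "main -> eth0"; return ;; esac
  # priority ~5270 (Tailscale): lookup table 52, which holds both the
  # 100.x peer routes and the advertised LAN subnet -> tailscale0.
  # Comment out the 5199 case above to see LAN replies land here.
  case $dst in 100.*|192.168.1.*) echo "table 52 -> tailscale0"; return ;; esac
  echo "main -> eth0"   # everything else: default main-table route
}
route_lookup 192.168.1.50   # LAN peer
route_lookup 100.64.0.7     # Tailscale peer
```

With the 5199 case removed, the LAN destination falls through to the table-52 branch, which is exactly the mis-route seen in Step 9.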
Making It Permanent
The rule disappears on reboot. There are two approaches to persist it:
Option 1: networkd-dispatcher (May Not Work)
The traditional approach uses networkd-dispatcher:
```shell
sudo tee /etc/networkd-dispatcher/routable.d/50-lan-route-fix << 'EOF'
#!/bin/bash
ip rule add to 192.168.x.0/24 priority 5199 lookup main 2>/dev/null || true
EOF
sudo chmod +x /etc/networkd-dispatcher/routable.d/50-lan-route-fix
```
Warning: This often fails due to timing issues. The interface may already be “routable” before the dispatcher starts looking, so the script never fires. You’ll discover this after a reboot when the rule is missing.
Option 2: systemd Service (Recommended)
A systemd service is more reliable because it explicitly runs after Tailscale starts:
```shell
sudo tee /etc/systemd/system/lan-route-fix.service << 'EOF'
[Unit]
Description=Fix LAN routing for Tailscale
After=network-online.target tailscaled.service
Wants=network-online.target

[Service]
Type=oneshot
ExecStart=/bin/bash -c 'sleep 10 && ip rule add to 192.168.x.0/24 priority 5199 lookup main 2>/dev/null || true'
RemainAfterExit=yes

[Install]
WantedBy=multi-user.target
EOF
sudo systemctl daemon-reload
sudo systemctl enable lan-route-fix.service
```
Why this works better:
- `After=tailscaled.service` ensures it runs after Tailscale starts
- The 10-second delay (`sleep 10`) gives Tailscale time to set up its routes
- `RemainAfterExit=yes` keeps the service marked as "active" so you can check its status
Verify it’s enabled:
```shell
sudo systemctl status lan-route-fix.service
```
Test without rebooting:
```shell
sudo systemctl start lan-route-fix.service
sudo systemctl status lan-route-fix.service
```
Should show `Active: active (exited)` with exit status 0.
Verify After Reboot
After any reboot, always verify the rule exists:
```shell
ip rule list | grep 5199
```
If it’s missing, the persistence method failed and you need to debug or switch approaches.
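That check is easy to script. Here is a sketch fed a canned `ip rule list` sample (rule values are illustrative, not from a real machine) so the logic can be tried anywhere; in real use, pipe the live command output instead:

```shell
# Offline version of the post-reboot check. The sample mimics
# `ip rule list` output on a host where the fix is in place; live,
# you would run: ip rule list | grep -q '^5199:'
sample='0:	from all lookup local
5199:	from all to 192.168.1.0/24 lookup main
32766:	from all lookup main
32767:	from all lookup default'
if printf '%s\n' "$sample" | grep -q '^5199:'; then
  echo "rule present"
else
  echo "rule MISSING - persistence failed"
fi
```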
The Cleaner Fix
Check your Tailscale admin console (admin.tailscale.com):
- Go to Machines
- Look for any machine advertising your LAN subnet (e.g., `192.168.x.0/24`) as a subnet route
- If it's the same machine running AdGuard, consider whether you really need that subnet route
If you disable the subnet route advertisement, you won’t need the routing rule workaround. However, you’ll lose the ability to access LAN devices remotely via Tailscale.
Configuring Proxmox Firewall Properly
Once connectivity works, you may want to re-enable VM-level firewalling for security. The key is to add allow rules before enabling the firewall. Note: the configuration steps differ by hypervisor, but the rules themselves are the same.
Add Firewall Rules
In Proxmox: VM → Firewall → Add
| Direction | Action | Protocol | Dest. Port | Comment |
|---|---|---|---|---|
| in | ACCEPT | tcp | 53 | DNS TCP |
| in | ACCEPT | udp | 53 | DNS UDP |
| in | ACCEPT | tcp | 80 | HTTP (AdGuard UI) |
| in | ACCEPT | tcp | 443 | HTTPS |
| in | ACCEPT | icmp | — | Ping |
Enable the Firewall
After rules are in place:
- VM → Hardware → Network Device → Edit
- Check Firewall
- Click OK
Test immediately:
```shell
ping <VM_IP>
dig @<VM_IP> google.com +short
```
If something breaks, uncheck the Firewall checkbox. Your rules are saved and you can troubleshoot.
Post-Setup: Configuring Your Router Correctly
After AdGuard is working, you might notice all queries in AdGuard show the router’s IP as the client instead of individual device IPs. This happens when your router proxies DNS requests instead of letting devices query AdGuard directly.
The Problem: Lost Client Visibility
When you see your router’s IP (e.g., 192.168.x.1) or Docker’s bridge IP (e.g., 172.18.0.1) as the client for all queries, you lose:
- Per-device statistics
- Ability to set per-device rules
- Insight into which device is making problematic queries
The Fix: Configure DHCP DNS Settings
Your router has two different DNS settings:
- WAN DNS — What the router itself uses (for its own lookups)
- LAN/DHCP DNS — What the router tells devices to use
You need to configure the LAN/DHCP DNS setting.
These steps are for ASUS routers (my setup); other brands have equivalent settings:
- Access router admin at `http://192.168.x.1` or `http://router.asus.com`
- Go to LAN → DHCP Server
- Find DNS and WINS Server Setting
- Set DNS Server 1 to your AdGuard IP (e.g., `192.168.x.x`)
- Leave DNS Server 2 blank (or set to a backup AdGuard instance)
- Important: Set "Advertise router's IP in addition to user-specified DNS" to No
- Click Apply
For other routers, consult your router’s admin page for access to these settings.
Why disable “Advertise router’s IP”?
If enabled, devices receive both the router and AdGuard as DNS options. Devices may randomly use either, meaning some queries bypass AdGuard entirely.
After applying changes:
Devices will get the new DNS server when their DHCP lease renews. To speed this up:
- Reconnect to Wi-Fi, or
- Toggle Wi-Fi off/on, or
- Wait for lease expiry (check your lease time setting)
Still Seeing Docker Bridge IP (172.18.0.1)?
If you’ve configured the router correctly but still see 172.18.0.1 as the client for most queries, the problem isn’t your router — it’s Docker’s NAT.
What’s happening: Docker’s port forwarding rewrites the source IP to the bridge gateway (172.18.0.1) before packets reach AdGuard. AdGuard never sees the real client IP.
The fix: Use host networking for AdGuard.
In your AdGuard docker-compose.yml:
```yaml
services:
  adguard:
    image: adguard/adguardhome
    network_mode: host   # Add this line
    # Remove any 'ports:' section - not needed with host networking
    # ...
```
With `network_mode: host`, AdGuard binds directly to the host's network interfaces. No Docker NAT, no IP masquerading. Clients connect directly and their real IPs are visible.
Important considerations:
- Host networking means AdGuard uses the host’s ports directly (53, 80, 443, 3000)
- Make sure nothing else on the host is using those ports
- You lose some Docker network isolation
Is this change necessary?
No. It’s essentially cosmetic from a performance standpoint — the difference is microseconds. Ad blocking works identically either way.
What you gain:
- See which device made each query
- Per-device statistics and filtering rules
- Easier debugging (“which device keeps hitting this blocked domain?”)
What you already have without it:
- Network-wide ad blocking ✓
- Fast DNS queries ✓
- All the privacy benefits ✓
If you don’t need per-device visibility, skip this change.
Post-Setup: Configuring Tailscale DNS
If you want all your Tailscale-connected devices to use AdGuard regardless of which network they’re on, configure Tailscale’s global nameservers.
Why Configure Tailscale DNS?
Without this configuration:
- Devices on your LAN use AdGuard via DHCP settings
- Devices off-network (laptop at coffee shop) use whatever DNS that network provides
- No ad blocking when traveling
With Tailscale DNS configured:
- All Tailnet devices use AdGuard everywhere
- Ad blocking works on any network
- Consistent DNS experience across all devices
Configuration Steps
- Go to Tailscale Admin Console
- Scroll to Nameservers
Add Global Nameservers:
Click Add nameserver → Custom and add your AdGuard server’s Tailscale IP:
| Nameserver | Purpose |
|---|---|
| `100.x.x.x` (AdGuard VM) | Primary DNS with ad blocking |
| `100.x.x.x` (Backup, optional) | Secondary DNS if primary is down |
Enable Override:
Toggle Override DNS servers to ON.
This forces all Tailnet devices to use your specified nameservers instead of their local network’s DNS. Without this, devices may ignore your custom nameservers.
Optional: Split DNS for Local Domains
If you have local services that need to resolve .local domains or your router handles some local DNS:
- Click Add nameserver → Split DNS
- Enter domain: `local`
- Enter nameserver: your router IP (e.g., `192.168.x.1`)
This sends `.local` queries to your router while everything else goes to AdGuard.
Result
After configuration, your Tailscale DNS settings should show:
- MagicDNS: `100.100.100.100` (Tailscale's internal DNS for machine names)
- Split DNS: `local` → `192.168.x.1` (optional, for local domain resolution)
- Global nameservers: your AdGuard Tailscale IP(s) with "Override DNS servers" enabled
Verify It’s Working
On any Tailscale-connected device:
```shell
# Check which DNS server is being used
cat /etc/resolv.conf        # Linux; on macOS use: scutil --dns
# Or test directly
dig +short google.com
```
The DNS queries should appear in your AdGuard query log, showing the device’s Tailscale IP as the client.
Post-Setup: Optimizing Blocklists
After everything is working, you might notice the internet feels “sluggish.” Apps take longer to load, and AdGuard shows tons of blocked queries. This is often caused by overly aggressive blocklists.
The Problem: Blocklist Overload
When apps try to reach blocked domains (analytics, telemetry, etc.), they:
- Send a DNS request
- Get blocked (NXDOMAIN or 0.0.0.0)
- Wait for timeout
- Retry several times
- Finally give up
This retry loop creates perceived sluggishness, even though AdGuard responds instantly.
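A back-of-envelope sketch of that cost (the retry count and timeout are illustrative; real apps vary widely):

```shell
# Perceived stall per blocked domain: the DNS answer itself is
# instant, but the app's own connect attempts against the blocked
# address each wait out a timeout before retrying.
retries=4      # illustrative
timeout_s=2    # illustrative
echo "worst case: $((retries * timeout_s))s before the app gives up"
```

Multiply that by a handful of telemetry domains at app launch and the "sluggish internet" feeling is fully explained.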
Common Over-Blocking Symptoms
- Apps slow to open but work fine once loaded
- Specific features broken (Grammarly, 1Password sync, etc.)
- Query log shows repeated blocked requests for the same domains
The Fix: Simplify Your Blocklists
Many blocklists overlap significantly. Using multiple aggressive lists provides diminishing returns while increasing false positives.
Common problematic setup:
- AdGuard DNS filter ✓
- AdAway Default Blocklist ✓
- HaGeZi’s Pro Blocklist ✓
- AdGuard DNS Popup Hosts ✓
- OISD Blocklist Big ✓
- Steven Black’s List ✓
This is overkill. These lists overlap 70-90%, and some (like HaGeZi Pro and OISD Big) are already comprehensive.
Recommended setup:
| Approach | Lists | Notes |
|---|---|---|
| Minimal | AdGuard DNS filter | Low false positives, good coverage |
| Balanced | OISD Blocklist Big | Comprehensive, well-maintained, includes most others |
| Privacy-focused | OISD + AdGuard DNS filter | Belt and suspenders |
OISD Blocklist Big is particularly good because it’s actively maintained and already aggregates many other lists while excluding known false positives.
Testing If Blocklists Are the Problem
- Go to Settings → General settings
- Toggle Protection off
- Browse normally for 2-3 minutes
- Toggle Protection back on
- Compare the experience
If browsing feels noticeably faster with protection off, your blocklists are too aggressive.
Performance Settings
In Settings → DNS settings, enable:
- Parallel requests — Queries all upstream DNS servers simultaneously, uses fastest response
This improves DNS resolution speed. The tradeoff (slightly more load on upstream servers) is negligible for home use.
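If you manage AdGuard Home as config-as-code, the same toggle appears in AdGuardHome.yaml. The key name below (`all_servers`) and the example upstreams are from recent AdGuard Home releases; verify them against your version before relying on this:

```yaml
# AdGuardHome.yaml (excerpt) - equivalent of the UI toggle.
# Key name per recent AdGuard Home releases; verify on your version.
dns:
  upstream_dns:
    - https://dns.cloudflare.com/dns-query
    - https://dns.google/dns-query
  all_servers: true   # query all upstreams in parallel, use fastest
```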
Complete Checklist
Before you start:
- [ ] AdGuard Home (or Pi-hole) installed and working locally
- [ ] Docker configured with proper port mappings (53:53/tcp, 53:53/udp)
- [ ] Router DHCP configured to hand out VM’s IP as DNS server
If devices can’t reach DNS:
- [ ] Test basic ping to VM from LAN device
- [ ] If ping fails, check `ip route get <LAN_IP>` on the VM
- [ ] If route shows `tailscale0`, apply the routing fix
- [ ] Make routing fix permanent (use systemd service, not networkd-dispatcher)
- [ ] Verify rule persists after reboot: `ip rule list | grep 5199`
- [ ] Check Proxmox firewall at all levels (Datacenter, Node, VM, NIC)
For proper security:
- [ ] Add explicit firewall rules for required ports
- [ ] Enable VM NIC firewall only after rules are in place
- [ ] Test after every change
For optimal performance:
- [ ] Configure router DHCP to point directly to AdGuard (not proxy through router)
- [ ] Disable “Advertise router’s IP in addition to user-specified DNS”
- [ ] Use 1-2 blocklists maximum (recommend OISD Big)
- [ ] Enable parallel requests in AdGuard DNS settings
For Tailscale devices (optional):
- [ ] Add AdGuard’s Tailscale IP as global nameserver in Tailscale admin
- [ ] Enable “Override DNS servers” toggle
- [ ] Optionally add Split DNS for `.local` domains
Key Takeaways
- Tailscale subnet routing can break local services. If your VM is on the same subnet being advertised, routing conflicts occur.
- Asymmetric routing is hard to diagnose. Traffic flows one way but not the other. tcpdump is your friend.
- Proxmox has multiple firewall layers. Check all of them: Datacenter, Node, VM options, and the NIC-level `firewall=1` flag.
- `ip route get` reveals the truth. It shows exactly how the kernel will route a specific destination.
- Always test from multiple points. LAN device → VM, Proxmox host → VM, and VM → outbound all told different parts of the story.
- Router DHCP configuration matters. Devices should query AdGuard directly, not through the router, to maintain per-device visibility.
- More blocklists ≠ more protection. Overlapping aggressive lists cause false positives and perceived sluggishness. One well-maintained list is usually enough.
- networkd-dispatcher has timing issues. Use a systemd service with `After=tailscaled.service` to ensure routing fixes persist after reboot. Always verify with `ip rule list | grep 5199` after rebooting.
Diagnostic Commands Reference
```shell
# Check what's listening on port 53
sudo ss -lunp | grep :53

# Test DNS resolution locally
dig @127.0.0.1 google.com +short

# Test DNS from another device
dig @<VM_IP> google.com +short

# Time DNS queries (should be <50ms)
time dig @<VM_IP> google.com +short

# Check routing decision for specific IP
ip route get <TARGET_IP>

# View routing rules (priority order)
ip rule list

# View Tailscale's routing table
ip route show table 52

# Capture packets on interface
sudo tcpdump -i eth0 icmp -n
sudo tcpdump -i eth0 port 53 -n

# Check iptables
sudo iptables -L -n -v --line-numbers

# View nftables rules
sudo nft list ruleset
```
This guide was written after a real multi-hour debugging session. The homelab giveth, and the homelab taketh away. But at least now we have network-wide ad blocking that actually works.