I had a situation where a TA916e was being fed by 3 T1s as the WAN. The TA916e originally fed both a PBX with 8 FXS ports and a NV3448 for the data side of my customer's network. The carrier switched to an Ethernet handoff on site (Adtran 818), so I reconfigured the TA916e with public IPs assigned to both Eth 0/1 and Eth 0/2. The client changed to a hosted PBX VoIP setup, so we no longer needed connectivity to the PBX via the 8 FXS ports. Essentially, the TA916e was a straight pass-through allowing me to route the customer's public IPs to the NV3448, which provided NAT/firewalling for the customer's voice and data networks.
Our plan was to eventually collapse the network and get rid of the TA916e, but that hadn't happened yet. A few weeks ago, the customer started complaining of drops and voice quality issues. We reviewed our monitoring and confirmed the issue. It got pretty bad; we were seeing double-digit packet loss at peak times during the day. Every time we called the carrier they would say the packet loss was due to congestion on the underlying 3 bonded T1s feeding the carrier's Adtran 818.
We started testing after hours and still noticed the packet loss after 5:30pm, when no one was at the client location. Monitoring showed that even during these off hours, ~1 Mbps was still being used consistently on the circuit. We had the carrier do intrusive testing on the T1s last night and all three were clean. We did some other testing today and basically ruled out the carrier's network. Tonight after hours, I disabled the internal port on the TA916e so that nothing coming from the NV3448 would even be flowing out. Our monitoring still showed ~1-1.5 Mbps being utilized on the TA916e's Internet-facing connection. This time, reviewing our NetFlow monitoring on the TA916e showed a ton of traffic inbound to the public IP of the TA916e with DST port 53 UDP. I cleaned up the config and made sure all the old SIP programming was out. I also disabled Allow DNS Proxy on the box, which gets to the root of my question/post. Sorry for the long back story.
I had assumed (possibly naively) that NetVanta/Total Access routers (AOS in general) would not respond to external DNS requests when DNS Proxy was enabled. Take a standard branch office situation where the Adtran NetVanta router is the edge device providing firewalling/NAT/DHCP/DNS to the client LAN. The router has a static public IP on the WAN interface. I enable Allow DNS Proxy and typically also Allow DNS Lookup on the box. I typically point the box at OpenDNS or Google DNS IPs for its upstream DNS.
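For reference, the CLI side of that typical branch setup looks something like the sketch below. I'm assuming the GUI's Allow DNS Proxy and Allow DNS Lookup checkboxes map to the domain-proxy and domain-lookup global commands (and that the upstream resolvers are set with name-server); verify the exact commands against your AOS version:

domain-lookup
domain-proxy
name-server 8.8.8.8 8.8.4.4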
Do I need to do anything else to properly protect the router from being externally queried for DNS? I'm typically only allowing 443 and ICMP on the WAN interface, except for known public IPs, e.g. for management purposes.
I often protect SSH by adding an "ip access-group BLOCK in" statement to the WAN interface. Do I need to do something like this to block DNS queries to the WAN interface of the router except from the OpenDNS or Google DNS IPs? Does the router already protect itself from this kind of scenario?
I believe this is kind of like a reflection attack, but I'm not sure. Suggestions, or can someone set me straight here?
A form of DDoS attack (DNS reflection/amplification) sends queries to open DNS resolvers with the source address spoofed as the victim's, producing large responses. The victim is then flooded with unsolicited DNS return traffic from many source IPs. You were probably being used as a reflector.
Issuing "no domain-proxy" will stop the box from resolving names for hosts on the outside.
If you need domain proxy for internal networks, you can reject queries destined to UDP port 53 (and TCP port 53) on your public interface with the same effect. This won't affect your ability to resolve names externally. Your queries will originate from an ephemeral (random, above 1023) port to UDP 53 on the external resolver. The return traffic will be sourced from 53 and destined to the ephemeral port that originated the query. Filtering port 53 as a destination inbound to your WAN won't hurt anything.
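A minimal sketch of that filter, assuming AOS's Cisco-like extended ACL syntax (the ACL name BLOCK-DNS and the interface eth 0/1 are just examples; adjust for your setup and confirm the syntax on your AOS version):

ip access-list extended BLOCK-DNS
  deny udp any any eq domain
  deny tcp any any eq domain
  permit ip any any
!
interface eth 0/1
  ip access-group BLOCK-DNS in

Because your own lookups come back sourced from port 53 to an ephemeral destination port, they don't match the deny lines and fall through to the permit.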
I find it easier and safer to create an access list for management (SSH, SNMP, HTTP, HTTPS) containing my management network(s) and apply it to the line or service rather than the interface(s):
ip access-list standard admin-list
  permit management-net inverse-mask
!
line telnet 0 4
  ip access-class admin-list in
line ssh 0 4
  ip access-class admin-list in
!
http ip access-class admin-list in
http ip secure-access-class admin-list in
snmp-server community itsasecret RO ip access-class admin-list