Cisco ASA & ESX: strange ARP behavior
Last week I ran into a very strange problem with a Cisco ASA firewall. The firewall is configured with multiple interfaces, including a DMZ interface, and the DMZ hosts a mix of physical and virtual servers. The virtual servers are VMware guests running on ESX hosts in a blade environment.
I configured the feature
ip verify reverse-path interface DMZ
to prevent spoofing. I also configured an identity ("transparent") static NAT rule between the Inside network and the DMZ network, and multiple static NAT rules between the DMZ network and the Outside network. I left the proxy ARP feature at its default settings.
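For illustration, the relevant configuration looked roughly like this. This is a sketch in the pre-8.3 PIX/ASA syntax, and the addresses and interface names are hypothetical, not my actual config:

ip verify reverse-path interface DMZ
! identity ("transparent") static NAT between Inside and DMZ
static (inside,DMZ) 192.168.10.0 192.168.10.0 netmask 255.255.255.0
! static NAT publishing a DMZ server on the Outside
static (DMZ,outside) 203.0.113.10 172.16.1.10 netmask 255.255.255.255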
The customer was complaining about login and connectivity problems on the DMZ servers, especially between different DMZ servers. After some research I noticed that all problems were related to DMZ servers in the blade environment.
I started some connectivity tests and noticed strange ICMP behavior on the affected servers. When I pinged from one DMZ VMware server to another DMZ server on the same ESX host, the first ping got an echo-reply, but subsequent pings failed. Looking at the ARP table of the server, I noticed that the firewall responded with its own MAC address to every ARP broadcast.
Looking at different forums on the Internet, everybody talks about the proxy ARP feature and says you should disable it. Proxy ARP is enabled by default and I always leave it enabled; until now I had never had this problem. After disabling the proxy ARP feature for the DMZ interface
sysopt noproxyarp DMZ
the problem was solved, because the firewall no longer responds to ARP queries, except for its own interface address. Digging a bit deeper on the forums, I never found a single thread that explains why disabling proxy ARP solves this particular problem.
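If you want to verify the change afterwards, the sysopt settings can be displayed on the ASA, and it is worth flushing the stale entry on the affected servers so the firewall's MAC address doesn't linger in their ARP caches (the server-side command shown is Windows syntax; adjust for your OS):

show running-config sysopt
! on the affected server, flush the stale ARP entries, e.g. on Windows:
! arp -d *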
In my opinion this problem is related to the VMware environment, because I don't see it with physical DMZ servers. Why can't DMZ servers on the same ESX host see each other, and why does the firewall answer their ARP queries?
In the near future the blade environment (ESX hosts, network configuration and SAN configuration) will be changed, so I hope to find the exact cause and solution of the problem then. Does anybody else have any suggestions?
René Jorissen
I just want to say thank you. I had been troubleshooting this issue for a week and noticed the exact same behavior in my packet capture. Cisco hasn't published anything on this issue and their TAC has been troubleshooting the wrong problem from day one.
Thanks again for this article. I can now breathe easier.
V/R
James
Same kind of issue is coming up for me, except I don't think it's exclusive to VMs. I also have a VMware environment set up. My issue is that my default gateway is an MPLS router, and my firewall is my connection to the Internet. After some reading around on Cisco's forums, the proxy ARP behavior seems to be a byproduct of NATting: the firewall apparently barges into the conversation when an ARP request goes out for my router. It was causing a lot of problems in our outer offices, which would randomly time out when connecting to our web portal rather than connect. I'm putting out feelers on Cisco's forum to try to track down what will happen when I use the sysopt noproxyarp command on my inside interface.
We just ran into this exact same issue… The VM hosts were getting ARP replies from the firewall; once proxy ARP was disabled, the issue went away. Our issue now is getting all internal subnets to communicate with the DMZ servers. Any suggestions would be much appreciated: our main site can communicate, but the other two sites (different subnets) cannot reach the DMZ.
Disabling proxy ARP influences the NAT configuration for the specific interface. If you are using a Cisco PIX / ASA firewall, check your NAT configuration first. You may have to add NAT exemptions or static NAT entries for the missing internal subnets.
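In the pre-8.3 syntax, a NAT exemption for an additional internal subnet would look something like the sketch below; the subnet addresses and ACL name are hypothetical examples, not taken from anyone's actual config:

access-list NONAT extended permit ip 10.1.0.0 255.255.0.0 10.200.30.0 255.255.255.0
nat (inside) 0 access-list NONAT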
A way simpler solution is to add a second switch.
But that is probably just me being lazy and only needing 18 ports :D
This can help also.
http://www.cisco.com/en/US/docs/solutions/Enterprise/Data_Center/vmware/VMware.html#wp696360
Thanks for this. I ran into this same problem a week back. The only difference is that it was a mixed environment. I couldn't find any other issues; I just disabled proxy ARP and it started to work.
We were seeing a similar issue with a Juniper SSG 520 and UAG running as a Hyper-V guest.
Interestingly, I did not see the issue when I had the two UAG VMs set up on individual Hyper-V hosts, but when I added them to the cluster as highly available VMs, I started to see this.
I noticed that I had MAC spoofing enabled on them (I had initially planned to run them with Windows NLB), so I turned that off – we'll see if the problem continues…
We had the same issue…
You need to connect to the DMZ from inside…
I had put the following entry into the firewall to prevent traffic from being NATted between the inside and the DMZ.
static (inside,dmz2) 10.200.16.0 10.200.16.0 netmask 255.255.240.0
This is the way I have always done this and have never seen a problem.
After reading the post above from René Jorissen and looking closer you can do this another way. So I removed the entry above and replaced it with this:
access-list inside_nat0_outbound extended permit ip 10.200.16.0 255.255.240.0 10.200.30.0 255.255.255.0
I believe you also have to allow traffic between interfaces: enable traffic through the firewall without address translation.
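For completeness: in the pre-8.3 syntax, the access-list above only takes effect once it is attached to a NAT-exemption statement, something like the following (a sketch, assuming the internal interface is named inside):

nat (inside) 0 access-list inside_nat0_outbound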
John
I don’t post on blogs, but I wanted to say that my company was constantly hitting the wall with this problem and this was exactly what I needed. I was on 8.3.2 and upgraded to 8.4.2, which has it enabled by default.
Hi,
Just came across your post when I had the same issue.
As another option, rather than disabling proxy ARP on the entire interface, just add it to the end of the static NAT statements:
nat (INSIDE,DMZ1) source static INTERNAL_NETWORKS INTERNAL_NETWORKS destination static INTERNAL_NETWORKS INTERNAL_NETWORKS no-proxy-arp