
McAfee Firewall – NAT mapping

While testing a McAfee Enterprise Firewall running software version 8.2.0, I ran into some problems creating a NAT mapping. The firewall is configured as a standalone firewall, and all NAT and access rule configuration is done using Access Control Rules. McAfee uses two types of NAT mapping:

  1. NAT: mostly used to translate a private IP address to a public IP address;
  2. Redirect: redirects traffic destined for a public IP address to a private IP address.

I tried to publish an internal network component to the internet and created a simple rule with the following parameters. They are very straightforward, and the configuration is similar to that of firewalls from other vendors:

Application: SSH
Source Zone: external
Destination Zone: external
Source Endpoint: Any
Destination Endpoint: Public IP address
NAT address: None
Redirect: Private IP address

I tested the NAT mapping, but couldn’t connect to the internal component via the public IP address. The first step in troubleshooting is looking at the logging, but I couldn’t find any log entries on the firewall. It looked like the traffic didn’t even reach the firewall.

We have a shared internet segment with multiple firewalls, so I started doubting the configuration of the different firewalls.

  • Was somebody already using the public IP address in a NAT configuration?
  • Does the default gateway of the internet segment already have an ARP entry for the public IP address?

I looked at the configuration of the firewalls, but nobody was using the public IP address. With this in mind, I ruled out stale ARP entries on the ISP router.

When NATting to a public IP address that isn’t the same as the interface IP address, the firewall has to proxy-ARP for the public IP address. So: does the firewall proxy-ARP for the public IP address?

I went through the rest of the configuration with an emphasis on the network settings and noticed the option to add an alias IP address to the external interface (under Network – Interfaces – external interface). I added the public IP address as an alias.

You guessed it: with the alias in place, the NAT mapping works.
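A quick way to check whether a firewall actually proxy-ARPs for a public IP address is to send an ARP request for it from another host on the same shared segment. A minimal diagnostic sketch; the interface name eth0 and the address 198.51.100.10 are placeholders, not the real setup:

```
# Send three ARP who-has requests for the public IP on the shared segment;
# if the firewall proxy-ARPs, its MAC address shows up in the replies.
arping -c 3 -I eth0 198.51.100.10

# Alternatively, ping once and inspect the local ARP cache entry:
ping -c 1 198.51.100.10
ip neigh show 198.51.100.10
```

If no reply comes back, nothing on the segment is answering ARP for that address, which matches the symptom of traffic never reaching the firewall.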

Policy NAT on Cisco router

A colleague of mine had to implement an IPSec VPN tunnel from a customer to a supplier. The customer has a Cisco router for connecting to the internet, so nothing special; the router was already set up and in production. Configuring an extra IPSec VPN tunnel isn’t very hard; the most important part is negotiating the Phase 1 and Phase 2 parameters for the IPSec VPN connection.

This particular situation was different, because the customer has to NAT his local IP addresses into the VPN tunnel. I don’t configure this very often on routers, mostly on firewalls, so I set up a test environment in GNS3 to work out the configuration myself.

The test environment is displayed below. When connecting from the LAN behind R1 (10.1.1.0/24) to the internet, normal NAT overload is configured; in the picture, the network behind R3 represents the internet. When connecting from R1 to R2, the source IP address (Inside Local) is translated to an address from the pool 10.22.44.0/24 (Inside Global). There is also a static NAT mapping that translates 10.1.1.222 into the VPN tunnel as 10.22.44.222.

(Diagram: POLNAT-RTR – policy NAT test topology)

At first I configured the environment as shown above: the different interfaces and their corresponding IP addresses. Routers R1 and R2 use router R3 as their default gateway. The full router configurations are attached at the end of the post.

The first snippet shows the necessary NAT configuration on R1 for the Internet connection.

interface Loopback0
 description INSIDE
 ip address 10.1.1.1 255.255.255.0
 ip nat inside
!
interface FastEthernet0/0
 description OUTSIDE
 ip address 212.123.212.9 255.255.255.248
 ip nat outside
 duplex auto
 speed auto
!
ip nat inside source list ACL-NAT interface FastEthernet0/0 overload
ip route 0.0.0.0 0.0.0.0 212.123.212.11
!
ip access-list extended ACL-NAT
 permit ip 10.1.1.0 0.0.0.255 any

When pinging Lo0 on router R3, sourced from Lo0 on R1, I see the following NAT table on router R1.

R1#ping 192.168.3.1 source lo0

Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 192.168.3.1, timeout is 2 seconds:
Packet sent with a source address of 10.1.1.1
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 32/46/76 ms
R1#sh ip nat translations
Pro Inside global      Inside local       Outside local      Outside global
icmp 212.123.212.9:3   10.1.1.1:3         192.168.3.1:3      192.168.3.1:3

This proves that regular NAT overloading works perfectly with the current configuration. Next I configured the policy IPSec VPN between routers R1 and R2. The configuration isn’t that spectacular, but pay attention to the NAT and ACL statements. I also configured reverse route injection for the LAN network of R2: when it is enabled, a route for the remote network (172.16.2.0/24) pops up in the routing table. This feature is useful when redistributing static routes into a routing protocol like EIGRP.
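As a sketch of that last point: the static route injected by RRI can be redistributed into an IGP so internal routers learn the path to the remote VPN network automatically. A hypothetical example; the EIGRP AS number 100 and the metric values are assumptions, not part of this lab:

```
! Hypothetical: redistribute the RRI-injected static route into EIGRP
router eigrp 100
 network 10.1.1.0 0.0.0.255
 redistribute static metric 10000 100 255 1 1500
```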

Below are some snippets of router R1’s configuration, with comments.

! Phase 1 and Phase 2 properties for the IPSec VPN
crypto isakmp policy 10
 encr aes 256
 authentication pre-share
 group 5
crypto isakmp key VPN@Booches address 212.123.212.10
!
crypto ipsec transform-set VPN-TS esp-aes 256 esp-sha-hmac
!
crypto map CM-VPN-R2 10 ipsec-isakmp
 set peer 212.123.212.10
 set transform-set VPN-TS
 match address VPN-R2
 reverse-route

!

! Loopback interface with two IP addresses for testing purposes;
! the secondary IP address is used for static policy NAT testing
interface Loopback0
 ip address 10.1.1.222 255.255.255.0 secondary
 ip address 10.1.1.1 255.255.255.0
 ip nat inside

!

! Outside interface with the crypto map applied
interface FastEthernet0/0
 description OUTSIDE
 ip address 212.123.212.9 255.255.255.248
 ip nat outside
 duplex auto
 speed auto
 crypto map CM-VPN-R2

!

ip nat translation timeout 30
!
! NAT pool for dynamic policy NAT
ip nat pool LAN-R2 10.22.44.1 10.22.44.254 netmask 255.255.255.0
!
! Default NAT for regular internet traffic
ip nat inside source list ACL-NAT interface FastEthernet0/0 overload
!
! NAT statements for dynamic policy NAT and static policy NAT
ip nat inside source list ACL-POLICY-NAT pool LAN-R2 overload
ip nat inside source static 10.1.1.222 10.22.44.222 route-map RM-STATIC-NAT extendable

!

! ACL defining traffic to be NATted for regular internet traffic
ip access-list extended ACL-NAT
 deny   ip 10.1.1.0 0.0.0.255 172.16.2.0 0.0.0.255
 deny   ip 10.22.44.0 0.0.0.255 172.16.2.0 0.0.0.255
 permit ip 10.1.1.0 0.0.0.255 any

! ACL defining traffic to be dynamically policy NATted into the IPSec VPN tunnel
ip access-list extended ACL-POLICY-NAT
 deny   ip host 10.1.1.222 172.16.2.0 0.0.0.255
 permit ip 10.1.1.0 0.0.0.255 172.16.2.0 0.0.0.255

! ACL defining traffic to be statically policy NATted into the IPSec VPN tunnel
ip access-list extended ACL-STATIC-POLICY-NAT
 permit ip host 10.1.1.222 172.16.2.0 0.0.0.255

! ACL defining interesting traffic for the IPSec VPN tunnel
ip access-list extended VPN-R2
 permit ip 10.22.44.0 0.0.0.255 172.16.2.0 0.0.0.255
!

! Route map for the static policy NAT statement
route-map RM-STATIC-NAT permit 10
 match ip address ACL-STATIC-POLICY-NAT

I issued some ping commands and looked at the NAT translation table to test the environment. The results are displayed below.

SOURCE 10.1.1.1 (R1) – DESTINATION 172.16.2.1 (R2) & 212.123.212.11 (R3)

R1#ping 172.16.2.1 source 10.1.1.1
Packet sent with a source address of 10.1.1.1
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 28/52/120 ms
R1#ping 212.123.212.11 source 10.1.1.1
Packet sent with a source address of 10.1.1.1
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 16/48/84 ms
R1#show ip nat translations
Pro Inside global      Inside local       Outside local      Outside global
icmp 212.123.212.9:29  10.1.1.1:29        212.123.212.11:29  212.123.212.11:29
icmp 10.22.44.1:28     10.1.1.1:28        172.16.2.1:28      172.16.2.1:28

SOURCE 10.1.1.222 (R1) – DESTINATION 172.16.2.1 (R2) & 192.168.3.1 (R3)

R1#ping 172.16.2.1 source 10.1.1.222
Packet sent with a source address of 10.1.1.222
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 8/48/84 ms
R1#ping 192.168.3.1 source 10.1.1.222
Packet sent with a source address of 10.1.1.222
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 32/44/60 ms
R1#sh ip nat translation
Pro Inside global      Inside local       Outside local      Outside global
icmp 212.123.212.9:37  10.1.1.222:37      192.168.3.1:37     192.168.3.1:37
icmp 10.22.44.222:36   10.1.1.222:36      172.16.2.1:36      172.16.2.1:36


SOURCE 172.16.2.1 (R2) – DESTINATION 10.22.44.222 (R1)

R2#ping 10.22.44.222 source 172.16.2.1

Packet sent with a source address of 172.16.2.1
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 4/53/112 ms

All ICMP tests gave positive results, and policy-based NAT on the Cisco router works perfectly.
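Beyond ping and the NAT table, the tunnel itself can be verified on R1 with the standard IPSec show commands; output is omitted here, as this is just a sketch of where to look:

```
! Phase 1: the ISAKMP SA to 212.123.212.10 should be in state QM_IDLE
show crypto isakmp sa
! Phase 2: the encaps/decaps counters should increase while pinging
show crypto ipsec sa peer 212.123.212.10
! Confirm the RRI-injected route for the remote LAN is present
show ip route 172.16.2.0
```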

Configurations:

Cisco ASA & ESX: strange ARP behavior

Last week I had a very strange problem with a Cisco ASA firewall. The firewall is configured with multiple interfaces, including a DMZ interface. There are multiple servers in the DMZ, a mix of physical and virtual machines; the virtual servers are VMware servers in a blade environment.

I configured

ip verify reverse-path interface DMZ

to prevent spoofing. I also configured a transparent (identity) static NAT rule between the Inside network and the DMZ network, and multiple static NAT rules between the DMZ network and the Outside network. I left the proxy ARP feature at its default settings.

The customer was complaining about login problems and connectivity problems with the DMZ servers, especially between different DMZ servers. After some research I noticed that all problems were related to DMZ servers in the blade environment.

I started some connectivity tests and noticed strange ICMP behavior on the specific servers. When I started a ping from one DMZ VMware server to another DMZ server on the same ESX host, the first ping got an echo-reply, but subsequent pings failed. Looking at the server’s ARP table, I noticed that the firewall had responded with its own MAC address to every ARP broadcast.

Looking at different forums on the internet, everybody talks about the proxy ARP feature and says you should disable it. Proxy ARP is enabled by default and I always leave it enabled; until now I never had this problem. After disabling proxy ARP for the DMZ interface

sysopt noproxyarp DMZ

the problem was solved, because the firewall no longer responds to ARP queries, except for its own interface address. Digging a bit deeper on the forums, I never found a single thread that explains why disabling proxy ARP solves this particular problem.

In my opinion the problem is related to the VMware environment, because I don’t have these problems with physical DMZ servers. It remains strange that DMZ servers on the same ESX host cannot see each other, and that the firewall responds to their ARP queries.

In the near future the blade environment (ESX hosts, network configuration and SAN configuration) will be changed, so I hope to find the exact cause and solution of the problem then. Does anybody else have any suggestions?