More and more people want to implement OTP (One-Time Password) solutions, and RSA is one of several vendors in this market. I also notice a growing wish to implement and support OTP with on-demand tokens, like SMS and e-mail.
RSA supports on-demand tokens, but the minimum RSA Authentication Manager version required is 7.1. Besides on-demand tokens, virtualization (like VMware) is also very hot. For a long time, RSA 7.1 was only supported on physical servers, and even on a physical server it doesn't always perform very well, especially compared to RSA 6.1, which performs well on both physical and virtual servers.
I guess I have to install this version under ESX to see how it performs, but maybe someone can share their own experience…
Last week I had a very strange problem with a Cisco ASA firewall. The firewall is configured with multiple interfaces, including a DMZ interface. The DMZ contains multiple servers, a mix of physical and virtual machines; the virtual servers are VMware servers in a blade environment.
I configured the feature
ip verify reverse-path interface DMZ
to prevent spoofing. I also configured a transparent static NAT rule between the Inside network and the DMZ network and multiple static NAT rules between the DMZ network and the Outside network. I left the proxy ARP feature at its default settings.
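For reference, the relevant part of the configuration looked roughly like this (pre-8.3 ASA syntax; the networks and addresses are made-up examples, not the customer's real addressing):

```
! Anti-spoofing check on the DMZ interface
ip verify reverse-path interface DMZ

! Transparent (identity) static NAT between Inside and DMZ
static (inside,DMZ) 10.1.0.0 10.1.0.0 netmask 255.255.255.0

! Static NAT from a DMZ server to a public address on the Outside
static (DMZ,outside) 203.0.113.10 10.2.0.10 netmask 255.255.255.255
```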
The customer was complaining about login problems and connectivity problems on the DMZ servers, especially between different DMZ servers. I did some research and noticed that all problems were related to DMZ servers in the blade environment.
I started some connectivity tests and noticed some strange ICMP behavior on the specific servers. When I pinged from one DMZ VMware server to another DMZ server on the same ESX host, the first ping got an echo-reply, but subsequent pings failed. Looking at the ARP table of the server, I noticed that the firewall responded with its own MAC address to every ARP broadcast.
Looking at different forums on the Internet, everybody speaks about the proxy ARP feature and says that you should disable it. By default proxy ARP is enabled and I always leave it enabled; until now I never had this problem. After disabling the proxy ARP feature for the DMZ interface
sysopt noproxyarp DMZ
the problem was solved, because the firewall no longer responds to ARP queries, except for its own interface address. Digging a bit deeper on forums, I never found a single thread that explains why the proxy ARP feature should be disabled to solve this particular problem.
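For completeness, disabling proxy ARP per interface and checking the result can be done like this (the interface name DMZ is the one from the configuration above):

```
! Disable proxy ARP on the DMZ interface only
sysopt noproxyarp DMZ

! Verify which interfaces have proxy ARP disabled
show running-config sysopt
```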
In my opinion this problem is related to the VMware environment, because I don't have these problems with physical DMZ servers. So why can't the DMZ servers on the same ESX host see each other, and why does the firewall respond to the ARP queries at all?
In the near future the blade environment (ESX hosts, network configuration and SAN configuration) will be changed, so I hope to find the exact cause and solution of the problem then. Does anybody else have suggestions?
I had to install and configure RSA Authentication Manager 7.1. Looking at the supported platforms, I couldn't find VMware ESX, although it was supported for RSA AM 6.1. So I thought to myself: let's give it a try. What I noticed first was the size of the installer. The installation file for RSA AM 7.1 is about 2.5 GB, which I think is a lot compared to the 300 MB for RSA AM 6.1.
I installed a server with the following specs:
The installation of RSA Authentication Manager 7.1 took 1.5 hours, so I really started doubting the installation under VMware. After the installation I wasn't able to open the management console, which is web-based in this new version. To be sure, I restarted the server after the installation. It then took 45 minutes to get past "Applying computer settings" and "Applying personal settings".
I called RSA and the engineer told me that there are no known issues with running RSA Authentication Manager 7.1 under VMware. The only important thing he mentioned was to use 4 GB of RAM and a 4 GB paging file when running under VMware. I upgraded the memory from 2 GB to 4 GB and configured two 4 GB paging files.
You can probably already guess what comes next: the upgrade didn't work out. The boot process still took approximately 45 minutes, and after booting the server the performance was really bad. The memory usage was steadily at 4.2 GB!
I called RSA a second time and the next engineer took my doubts away. He told me that RSA Authentication Manager 7.1 is NOT OFFICIALLY supported by RSA under VMware. The performance problems are probably caused by the new Oracle database and the different Java instances running on the server. Because RSA had to run in a virtual environment, I downloaded RSA AM 6.1 instead. The installation AND configuration of the complete environment took about 2 hours.
So at the time of writing this blog post:
DO NOT INSTALL RSA AUTHENTICATION MANAGER 7.1 UNDER VMWARE!
UPDATE, August 15th, 2009
RSA 7.1 is now supported under ESX 3.5. Check the updated article on this matter.
Maybe you also want to check this article about configuring On-Demand with RSA 7.1.
Monday I had to migrate an existing network. I added more VLANs for segmentation and to break up the broadcast domain. I introduced a regular VLAN, a VoIP VLAN and a management VLAN. So far no problem. The customer is using Cisco Catalyst 3750G and Cisco Catalyst 3560 switches with PoE.
I configured the VTP domain, so the new VLANs got distributed without any problem, and I added all switchports to the correct VLANs. The management VLAN is for all management connections from the routers, switches, firewall and ESX hosts. It is a new VLAN with a new IP subnet, so I had to change the IP addresses and routing on the switches, routers and firewall. I also needed to change the IP addresses of the ESX hosts, because they are also placed in the management VLAN.
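As a sketch, the VLAN and VTP part of the change looked something like this (the VLAN numbers, names and the VTP domain name are examples, not the customer's real values):

```
! On the VTP server switch
vtp domain EXAMPLE
vtp mode server

vlan 10
 name DATA
vlan 20
 name VOIP
vlan 30
 name MGMT

! Assigning an access port to the data VLAN, with the VoIP VLAN on top
interface GigabitEthernet1/0/1
 switchport mode access
 switchport access vlan 10
 switchport voice vlan 20
```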
Unfortunately my colleague Duncan couldn't come that day, so I started to change the IP addresses (/etc/sysconfig/network-scripts/ifcfg-vswitch0), the default gateway settings (/etc/sysconfig/network), the hosts file (/etc/hosts) and the NTP configuration. After starting the Infrastructure Client we noticed an error in the HA configuration: the HA feature couldn't start.
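In short, the changes per ESX host came down to editing these files and restarting the networking service (paths as mentioned above; the values themselves are placeholders):

```
# /etc/sysconfig/network-scripts/ifcfg-vswitch0  -> new IPADDR and NETMASK
# /etc/sysconfig/network                         -> new GATEWAY
# /etc/hosts                                     -> new address for the host name
service network restart
```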
After changing the IP addresses I didn't restart the ESX servers, I only restarted the networking service. I then restarted all the ESX hosts, but still no luck. I called Duncan and he advised removing the HA cluster and rebuilding it, but this didn't do the trick either. In the end I disabled HA and waited for Duncan to investigate the problem further.
Duncan visited the customer yesterday and found the problem. Somewhere in the configuration files, the old IP addresses were still present and hadn't been changed when rebuilding the cluster.
A detailed solution can be found on Duncan’s blog, right here.
I have had several discussions with different customers about the load-balancing algorithms between a Cisco switch configured with a port-channel and a VMware ESX server using multiple NICs. Our VMware consultants always choose Route based on IP hash as the load-balancing algorithm. This means that load-balancing happens on layer 3 of the OSI model (source and destination IP address).
In my opinion, the switch should be configured the same way. Depending on the switch model, the default load-balancing algorithm differs. For example, the Cisco Catalyst 3750 uses src-mac load-balancing and the Cisco Catalyst 6500 uses src-dst-ip load-balancing. You can check the configured load-balancing algorithm with the following command:
show etherchannel load-balance
If you would like to change the load-balancing algorithm, you can use the global configuration command:
port-channel load-balance <option>
Be aware that this is a global configuration command, so it affects all the configured port-channels on the switch.
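Putting it together, changing a Catalyst switch to IP-based hashing and verifying the result looks like this:

```
! Check the current algorithm
show etherchannel load-balance

! Change it globally to source/destination IP hashing
configure terminal
 port-channel load-balance src-dst-ip
 end

! Verify the change
show etherchannel load-balance
```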
To check the load-balancing between the different NICs, you need a tool that shows real-time bandwidth statistics. I normally use the tool SNMP Traffic Grapher to monitor the different switch ports. On the ESX console you can check the load-balancing with a tool like esxtop (press n for the network view).
The load should be spread fairly evenly across the different switch ports and vmnics.