FortiAuthenticator can be used to add strong authentication to a network. FortiAuthenticator offers more options, like FSSO (Fortinet Single Sign-On) in conjunction with a FortiGate firewall. You can create a FortiAuthenticator cluster very easily. I normally configure an active/passive cluster rather than a load-balancing cluster. When creating an active/passive cluster, keep in mind that the slave unit takes over the master's port1 IP address when the master fails.
Often I use FortiAuthenticator with FortiGate or appliances like Citrix NetScaler or Pulse Secure to enable two-factor authentication. Like I stated above, the slave unit takes over the master's port1 IP address. This seems to imply that you only need to configure one RADIUS server on your front-end appliance, but that is not true.
I added the master FAC as a RADIUS server to a FortiGate firewall and authentication worked fine. Next I shut down the master FAC. The slave unit takes over, and the master's port1 IP address remains reachable, so it can be used for authentication. But when you authenticate against the master IP address, something strange happens: the slave unit responds to the RADIUS request from its own port1 IP address. You can see this in the packet sniffer output on the FortiGate below. The master IP on port1 is 10.10.10.10 and the slave IP is 10.10.10.11. I shut down the master unit and tried to authenticate.
BZO-FG500-01 # diagnose sniffer packet any 'udp and port 1812'
filters=[udp and port 1812]
27.067084 10.10.200.201.1063 -> 10.10.10.10.1812: udp 129
27.074294 10.10.10.11.1812 -> 10.10.200.201.1063: udp 40
32.070029 10.10.200.201.1063 -> 10.10.10.10.1812: udp 129
32.070220 10.10.10.11.1812 -> 10.10.200.201.1063: udp 40
As you can see, the FortiGate sends the RADIUS request to the master IP 10.10.10.10, but the slave FAC answers from the IP 10.10.10.11, so authentication fails. I had to add the slave FAC to the FortiGate as a second RADIUS server as well, so authentication still succeeds when the primary FAC is lost.
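On a FortiGate you can configure the slave as a secondary server within the same RADIUS server object. A minimal sketch of what that could look like (the object name and shared secret are placeholders, and the exact options may differ per FortiOS version, so verify against your release):

```
config user radius
    edit "FAC-RADIUS"
        set server "10.10.10.10"
        set secret <shared-secret>
        set secondary-server "10.10.10.11"
        set secondary-secret <shared-secret>
    next
end
```

With both units listed, the FortiGate can fall back to 10.10.10.11 when the master at 10.10.10.10 stops answering.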
Monday I had to migrate an existing network. I added more VLANs for segmentation and to break up the broadcast domain. I introduced a regular VLAN, a VoIP VLAN and a management VLAN. So far no problem. The customer is using Cisco Catalyst 3750G and Cisco Catalyst 3560 switches with PoE.
I configured the VTP domain, so the new VLANs were distributed without any problem. I added all switch ports to the correct VLANs. The management VLAN is for all management connections from the routers, switches, firewall and ESX hosts. The management VLAN is a new VLAN with a new IP subnet, so I had to change the IP addresses and routing on the switches, routers and firewall. I also needed to change the IP addresses of the ESX hosts, because they are also placed in the management VLAN.
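On the Catalyst side this boils down to defining the VLAN once on the VTP server and assigning access ports. A small Cisco IOS sketch (the VLAN ID, name and interface are examples, not taken from this customer's configuration):

```
! on the VTP server switch: define the new management VLAN once,
! VTP distributes it to the client switches in the same domain
vlan 99
 name MGMT
!
! on each switch: put the relevant access port in the new VLAN
interface GigabitEthernet1/0/12
 switchport mode access
 switchport access vlan 99
```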
Unfortunately my colleague Duncan couldn't come that day, so I started to change the IP addresses (/etc/sysconfig/network-scripts/ifcfg-vswitch0), the default gateway settings (/etc/sysconfig/network), the hosts file (/etc/hosts) and the NTP configuration. After starting the Infrastructure Client we noticed an error in the HA configuration: the HA feature couldn't start.
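For reference, the change on the ESX service console comes down to editing these files and restarting networking (the NTP file location is an assumption; the other paths are the ones named above):

```
vi /etc/sysconfig/network-scripts/ifcfg-vswitch0   # service console IP/netmask
vi /etc/sysconfig/network                          # default gateway
vi /etc/hosts                                      # hostname-to-IP mapping
vi /etc/ntp.conf                                   # NTP servers (assumed location)
service network restart                            # apply without a full reboot
```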
After changing the IP addresses I didn't restart the ESX servers; I only restarted the networking service. I then restarted all the ESX hosts, but still no luck. I called Duncan and he advised removing the HA cluster and rebuilding it. That didn't do the trick either. In the end I disabled HA and waited for Duncan to investigate the problem further.
Duncan visited the customer yesterday and found the problem: somewhere in the configuration files the old IP addresses were still present and hadn't been changed when rebuilding the cluster.
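A quick way to catch this kind of leftover is to sweep the configuration directories for the old address before rebuilding the cluster. A minimal, self-contained sketch (the directory, file name and addresses are placeholders, not taken from Duncan's findings):

```shell
# Build a throwaway config tree that still references an old address,
# then sweep it for the stale IP the way you would a real config directory.
old_ip="10.10.200.1"                       # placeholder old management IP
mkdir -p /tmp/ha-config-demo
printf 'agent_address = %s\n' "$old_ip" > /tmp/ha-config-demo/agent.cfg
grep -rl "$old_ip" /tmp/ha-config-demo     # lists every file still using it
```

Any file the grep reports still references the old subnet and needs to be fixed before HA is re-enabled.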
A detailed solution can be found on Duncan's blog.