Upgrade CS MARS

A customer was running CS MARS version 4.3.6. Recently the Cisco IPS sensor was upgraded to version 7.x, which is no longer supported by CS MARS 4.3.6, so CS MARS needed to be upgraded to 6.x. I don't have a lot of experience with CS MARS and I couldn't find a direct way to upgrade from 4.3.6 to 6.x.

The only way to upgrade from 4.3.6 to 6.x is by re-imaging the server. I started by backing up the current configuration, which can be saved to an NFS server. I backed up the configuration and event data with the following commands:

[pnadmin]$ pnexp
pnexp > export config 10.1.1.1:/home/NFS
pnexp > export data 10.1.1.1:/home/NFS

The next question was which CS MARS version to download. Searching the documentation, I only found an upgrade procedure from 4.3.6 to 6.0.1. The latest version is 6.0.5, but I couldn't find any documentation about upgrading directly from 4.3.6 to 6.0.5. I decided to upgrade from 4.3.6 to 6.0.1 and then directly to 6.0.5.

Re-imaging the server took about an hour. The installation process itself didn't take a lot of time; most of the time was spent creating an Oracle database. After re-imaging I had to import the configuration from the NFS server.

Hmmm… the server now has a fresh installation, so no IP address whatsoever. First I had to find the default username and password to log in to CS MARS; both are pnadmin. I configured an IP address using the following command:

[pnadmin]$ ifconfig eth0 10.1.1.2 255.255.255.0

Next I was able to access CS MARS through SSH. I imported the configuration and the event data using the following commands:

[pnadmin]$ pnimp
pnimp > import config 10.1.1.1:/home/NFS
pnimp > import data 10.1.1.1:/home/NFS

The complete configuration, including hostname, DNS servers and license, as well as the event data, was nicely restored. Next I wanted to upgrade directly from version 6.0.1 to version 6.0.5. To my surprise, I discovered that the upgrades need to be installed sequentially, because the upgrade packages have dependencies on each other. It is possible to install the upgrade packages through the web interface, but I ran into dependency failures during that process.

The only way for me, and I think the best way, was installing the upgrade packages through an SSH session. I let CS MARS download the required packages directly from the Cisco website using valid CCO credentials. The first step was checking which upgrade packages were available using the following command:

[pnadmin]$ pnupgrade
CSMARS Upgrade...........[25541]
--------------------------------------------------------------------------------
Package Name                          Type Version        URL
--------------------------------------------------------------------------------
csmars-6.0.5.3358.zip                 BD   6.0.5.3358.34  http://software-sj.cisco.com/cisco/crypto/3DES/ciscosecure/cs-mars/csmars-6.0.5.3358.zip
csmars-6.0.4.3229.zip                 BD   6.0.4.3229.33  http://software-sj.cisco.com/cisco/crypto/3DES/ciscosecure/cs-mars/csmars-6.0.4.3229.zip
csmars-6.0.3.3190-customer-patch.zip  B    6.0.3.3190     http://software-sj.cisco.com/cisco/crypto/3DES/ciscosecure/cs-mars/csmars-6.0.3.3190-customer-patch.zip
csmars-6.0.3.3188.zip                 BD   6.0.3.3188.32  http://software-sj.cisco.com/cisco/crypto/3DES/ciscosecure/cs-mars/csmars-6.0.3.3188.zip
csmars-6.0.2.3102.zip                 BD   6.0.2.3102.31  http://software-sj.cisco.com/cisco/crypto/3DES/ciscosecure/cs-mars/csmars-6.0.2.3102.zip

These are the upgrade packages that are available. They need to be installed sequentially, so I started with version 6.0.2.3102.31 using the following command:

[pnadmin]$ pnupgrade -d -u <CCO username>:<CCO password> <upgrade package URL>

CS MARS starts downloading the specified upgrade package. The -d parameter tells CS MARS to ask for confirmation before installing the upgrade package, which is useful because a reboot is required after the installation. I repeated this step for all subsequent upgrade packages.
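To illustrate, the first run would look roughly like this, using the 6.0.2 package URL from the listing above; the CCO credentials shown here are placeholders:

[pnadmin]$ pnupgrade -d -u ccouser:ccopassword http://software-sj.cisco.com/cisco/crypto/3DES/ciscosecure/cs-mars/csmars-6.0.2.3102.zip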

Now CS MARS is running version 6.0.5 (3358) 34 and the IPS can be added to CS MARS. It took some time, but I am still curious whether I could have re-imaged the server directly to version 6.0.5.

Link State Tracking

Last week a friend called me and told me he was having serious problems with his network. A complete blade environment wasn't able to communicate with the rest of the network. I asked what had changed in the network and he told me that he had added a VLAN to the allowed VLAN list of a trunk.

Because he is a friend, I dialed in and checked the configuration of the switch. All ports on the switch were err-disabled. What could have happened here to err-disable every single port? Then I noticed that link state tracking was configured on all ports.

Link-state tracking, also known as trunk failover, is a feature that binds the link state of multiple interfaces. Link-state tracking provides redundancy in the network when used with server network interface card (NIC) adapter teaming. When the server network adapters are configured in a primary or secondary relationship known as teaming and the link is lost on the primary interface, connectivity transparently changes to the secondary interface.

At first I was skeptical about the link state configuration and asked my friend why it was used. He couldn't give me an answer, because he hadn't configured the switch. It was hard for me to find a reason why link state tracking was used, because I wasn't familiar with the network, so I removed the link state configuration from the switch. All ports returned to a normal state. I then noticed that the uplink (port-channel) configuration wasn't correct: the VLAN had been added to the allowed VLAN list on a member port and not on the port-channel interface.

After helping my friend, and after it kept nagging at me for a couple of days, I started thinking about the Link State Tracking feature and tried to figure out why someone had configured it in my friend's environment. Eventually, after some head scratching, I found a plausible reason. Let's look at the following example environment.

[Figure: Link State Tracking example topology]

The figure shows one ESX server with two NICs. One NIC is connected to bl-sw01 and the other NIC is connected to bl-sw02. The ESX server uses the load-balancing algorithm "Route based on Virtual Port ID".

Now let's assume the link between bl-sw02 and dis-sw02 goes down. Because the ESX server still has a working connection with bl-sw02, it keeps sending packets that way. Switch bl-sw02 no longer has any uplink to the rest of the network, so those packets get dropped.

When using Link State Tracking, the connection between the ESX server and switch bl-sw02 is also brought down when the uplink between bl-sw02 and switch dis-sw02 is lost. The ESX server will then only use the connection with switch bl-sw01 to reach the rest of the network. Link State Tracking uses upstream and downstream interfaces. In the example, the switch port that connects switch bl-sw02 to switch dis-sw02 would be configured as an upstream port, and the switch port to the ESX server would be configured as a downstream port. The downstream port is put in err-disabled state when the upstream port loses its connection. This is exactly what you would like to accomplish.

The first step is to enable Link State Tracking globally on the switch:

bl-sw02(config)# link state track 1

The next step is configuring the upstream and downstream interfaces.

interface GigabitEthernet0/16
 description switch-uplink
 switchport trunk encapsulation dot1q
 switchport mode trunk
 switchport nonegotiate
 link state group 1 upstream
 spanning-tree link-type point-to-point
!
interface GigabitEthernet0/10
 description ESX01
 switchport trunk encapsulation dot1q
 switchport mode trunk
 switchport nonegotiate
 link state group 1 downstream
 spanning-tree portfast trunk

You can check the status of the Link State Group with the following command:

bl-sw02#show link state group detail

Link State Group: 1 Status: Enabled, Up

Upstream Interfaces : Gi0/16(Up)

Downstream Interfaces : Gi0/10(Up)

In the future I will use Link State Tracking, especially in blade environments with multiple switches that don't support some kind of stacking technology and servers with multiple NICs.

RDP and Spooler system service

My colleagues and I configure a Windows server from time to time. Usually it is a server that is placed in the DMZ, like an ISA reverse proxy or Citrix Secure Gateway. Recently I spoke with a colleague and we started discussing the services running under Windows.

After installing a Windows server with the default settings, I am stunned by all the different services that are running on the newly installed server. So most of the time I stop a lot of these services and configure them to be started manually after a reboot. Besides stopping services from the Services MMC, I also disable settings on the network card, like Client for Microsoft Networks, File and Printer Sharing for Microsoft Networks, registering the connection in DNS, LMHOSTS lookup and NetBIOS over TCP/IP.
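As a small sketch, using the Print Spooler service (service name Spooler) as an example, a service can also be stopped and set to manual start from a command prompt instead of the Services MMC:

net stop Spooler
sc config Spooler start= demand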

Normally a server in the DMZ doesn't have any printers connected, so I stop the Print Spooler service. But when connecting to the server with RDP, the following warning shows up in the Event Viewer -> System log:

EventID: 1114

Source: TermServDevices

Type: Warning

Description: Error communicating with the Spooler system service. Open the Services snap-in and confirm that the Print Spooler service is running.

Looking around on the Internet, there are different ways to stop this error from showing up in the Event Viewer. All solutions come down to stopping the mapping of printers during the RDP log-in process. My colleague told me that he always uses a registry entry to disable printer redirection, and this specific registry entry is shown below:

Registry key: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Terminal Server\Wds\rdpwd

Value name: fEnablePrintRDR

Type: REG_DWORD

Value: 0x00000000 (0)

After adding this registry value, the warning message no longer shows up in the Event Viewer.
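For completeness, the same value can also be added from a command prompt with the standard reg.exe tool; this is just a sketch using the key and value named above:

reg add "HKLM\SYSTEM\CurrentControlSet\Control\Terminal Server\Wds\rdpwd" /v fEnablePrintRDR /t REG_DWORD /d 0 /f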

Cisco RPS 2300

Lately I have been looking at the Cisco Redundant Power System 2300, because this unit delivers power supply redundancy and resiliency for different power requirements. The RPS 2300 provides seamless failover in the event of a power supply failure.

Depending on the number of internal power supplies, the RPS 2300 can provide redundant power to up to two of six connected switches and/or routers. The RPS 2300 supports 1150W AC or 750W AC power supplies. With two 1150W AC power supply modules, the Cisco RPS 2300 can fully back up two 48-port switches that are delivering 15.4W of PoE on all ports.

The RPS 2300 has enhanced capabilities when used in conjunction with Cisco Catalyst 3750-E and 3560-E switches (a rough CLI sketch follows the list), such as:

  • The ability to remotely place the RPS or any of the six individual RPS ports in active or standby mode;
  • Setting priorities for each RPS port;
  • Failure and exception history reporting;
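As a sketch of what this management looks like from the switch CLI: the commands below are written from memory and should be treated as an assumption, so verify the exact syntax against the RPS 2300 configuration guide for your IOS release. The "1" is the stack member number on a 3750-E and "3" an example RPS port; they show placing an RPS port in standby, giving it a priority and checking the RPS status:

switch# power rps 1 port 3 mode standby
switch# power rps 1 port 3 priority 2
switch# show env rps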

Normally, when switching back from the RPS to normal AC power, the switch reboots. When backed up by a Cisco RPS 2300, Cisco Catalyst 3750-E and 3560-E switches are capable of reverting back to their own power supply without rebooting. I really like this feature, because in normal operation a network administrator could miss the failure of the primary AC power and the backup operation by the RPS. If the switch-back happens uncontrolled, the resulting reboot of the switch could cause serious problems in the network.

The Cisco RPS 2300 supports two power supplies as mentioned before. These power supplies are also compatible with Cisco Catalyst 3750-E and 3560-E switches. The supported power supplies are:

  1. The C3K-PWR-1150WAC power supply;
  2. The C3K-PWR-750WAC power supply;

The Cisco RPS 2300 can operate with one or two power supplies. If two power supplies are installed, they must be of the same type.

When choosing to use the Cisco RPS 2300, you should pay attention to spare RPS cables: the Cisco Catalyst 3750-E and 3560-E switches use a different RPS cable (CAB-RPS2300-E) than other switches (CAB-RPS2300). More information about the Cisco RPS 2300 can be found in Cisco's RPS 2300 documentation.