Connecting the world…


Flash clean-up

Recently I upgraded an Aruba Networks wireless controller, or at least I tried… Uploading a new image to the controller takes two steps: first the copy from a TFTP server to the controller, and second the actual writing of the new firmware image to flash (the system partition). The second step kept printing exclamation marks for minutes. I left it running for an hour and finally decided to abort the upload by ending the SSH session and starting a new one. I couldn’t connect to the controller via SSH anymore, and physical access via the console didn’t work either. So I decided to reboot the controller via a hard reset. The controller booted the old system partition, but I noticed that the new image had been imported into the other system partition and was digitally signed (check via: show image version).
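For reference, the copy step looks roughly like this on the ArubaOS CLI (the TFTP host, file name, and partition number below are placeholders; exact syntax can differ between ArubaOS versions):

(controller) # copy tftp: <tftphost> <image filename> system partition 1

The controller writes the image to the specified system partition as part of this command, which is the step that failed in my case.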

I changed the boot parameter to boot the new software and rebooted the controller. I received the following error message on the console after the reboot.

Ancillary image stored on flash is not for this release
* WARNING: An additional image upgrade is required to complete the *
* installation of the AP and WebUI files. Please upgrade the boot *
* partition again and reload the controller. *

I decided to upload the firmware a second time to the same system partition, but this time the controller “told” me there wasn’t enough free space in flash to copy the file. I noticed that only 35M of flash storage was left (check via: show storage).

I deleted some files from flash (via command: delete filename <file name>), but I couldn’t free enough space to copy the image a second time. Finally I used the tar command to clean up enough storage. The tar command archives a directory and creates a tar file in flash, which can then be deleted. The syntax is:

tar {clean {crash | flash | logs} | crash | flash | logs {tech-support | user}}

I ran the commands:

tar crash
tar flash
tar logs

This creates three separate files in the flash memory. The files can be deleted via the commands:

tar clean flash
tar clean logs
tar clean crash

After running these commands I had 113.3M of flash storage available, which is more than enough to copy the new firmware to the system partition a second time.

In my case the crash files were the reason I didn’t have enough flash memory: because of the hard reset, the controller had created a lot of crash files, which are stored in flash.

Cacti and HP Procurve

Finding a template for HP Procurve switches wasn’t that hard. I needed a template for HP Procurve 2510G switches. The place to look for templates is the Cacti forums. I searched the forums for the keyword “procurve”, which resulted in many hits. I used the template from the thread “HP procurve 2600 series”.

After importing the templates you can monitor the MAC count and the memory usage of the switch. You can also monitor the CPU usage, but that requires some extra configuration: the zip file only contains a data template for the HP switches, no graph template. I created my own graph template by duplicating the Cisco CPU graph template and changing the data source to the HP data template.

Graph Template Data Source

I changed the data source for the first four items in the graph template to the HP Procurve CPU data source. Next I created a device for the HP switches and added the appropriate “Associated Graph Templates” for HP Procurve CPU, MAC count and memory usage. Now you only need to create the graphs for the device and you are good to go.

Cacti - HP Procurve graphs

Cisco error message: %SYS-2-MALLOCFAIL

While looking through some logging on a switch (Cisco Catalyst 3550), I noticed the following messages popping up multiple times in the logging buffer.

-Process= "Pool Manager", ipl= 0, pid= 5
-Traceback= 1A57D0 1A6DF4 161B3C 1B2BF0 1B2E38 1C6440
Jan 26 14:45:48.970 CET: %SYS-2-MALLOCFAIL: Memory allocation of 1680 bytes failed from 0x161B38, alignment 0
Pool: I/O  Free: 7412  Cause: Memory fragmentation
Alternate Pool: None  Free: 0  Cause: No Alternate pool

That doesn’t look good, but the customer hadn’t received any complaints about trouble or performance issues on the network. I did some research on the memory of the switch, but couldn’t find any strange behavior. The memory allocation looked normal and the buffers looked normal too. I found some memory allocation failures with the command show memory failures alloc, but I already knew that from the error message itself. I found an article on the Cisco website concerning this error message, but that didn’t help much either.

The switch is running IOS 12.1(13)EA1a, which is marked as deferred. The most recent deferral notice I could find on the Cisco website is about IOS 12.1(19)EA1. The notice lists bugs with memory leakage problems. The next step I took was checking the Bug Toolkit for the running IOS.

I searched for all bugs of the running IOS and the Bug Toolkit reported 391 bugs. Narrowing the search with the string “%SYS-2-MALLOCFAIL” resulted in three bugs. One bug describes a possible problem with spanning tree that can create a loop in the network. Looking at the logging of other switches, I noticed multiple MAC flap messages and BPDU Guard messages at the same time as the memory message. This indicates a possible loop in the network.

The bug concerns the following behavior:

Spanning-tree BPDUs (802.1d and 802.1w/802.1s) are sent to the incorrect destination MAC address. Consequently, other switches in the network will not process the BPDUs. If the network has been designed with a physical loop, spanning-tree will not correctly block the loop, causing traffic levels to increase and users to not be able to send data. In most cases, switch management will only be possible via the console port due to looping packets. The log might also contain %SYS-2-MALLOCFAIL messages, which indicate that the switch is running out of I/O memory. Spanning-tree loops are just one cause, but not the only one, of this message. Additional testing will help to confirm that the log messages are generated due to a spanning-tree loop that occurs as a result of this specific issue.

The switch is running Per-VLAN Spanning Tree, which can be compared with the default Spanning Tree Protocol (IEEE 802.1d). Since this bug could be the cause of the failed memory allocations, I recommended that the customer upgrade to the latest IOS. He will do so as soon as possible and will let me know if the problem reoccurs.

Cisco router: determine amount of memory/flash

Somebody asked me how he could determine the amount of DRAM and flash memory on a Cisco router. I always thought everybody would know how to find this information, but since that isn’t the case, I will show you how to determine the values.

You use the show version command to retrieve the requested information. Below you see an example output of the command on a Cisco 876 router.

Router#show version
Cisco IOS Software, C870 Software (C870-ADVIPSERVICESK9-M), Version 12.4(15)T6, RELEASE SOFTWARE (fc2)

Cisco 876 (MPC8272) processor (revision 0x200) with 118784K/12288K bytes of memory.
Processor board ID FCZ121160T5
MPC8272 CPU Rev: Part Number 0xC, Mask Number 0x10
4 FastEthernet interfaces
1 ISDN Basic Rate interface
1 ATM interface
128K bytes of non-volatile configuration memory.
24576K bytes of processor board System flash (Intel Strataflash)

Configuration register is 0x2102

The processor line (“… with 118784K/12288K bytes of memory”) tells you how much Dynamic RAM (DRAM) and packet memory is installed in your router. Some platforms use a fraction of their DRAM as packet memory. The memory requirements take this into account, so you have to add both numbers to find the amount of DRAM available on your router (from a memory requirements point of view).

Some types of routers have physically separate DRAM and packet memory, so you only need to look at the first number. Other routers carve packet memory out of DRAM, so you need to add both numbers to find the real amount of DRAM.
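For the Cisco 876 output above, where packet memory is carved out of DRAM, the addition works out like this (a quick sketch):

```python
# Values from the "show version" line: 118784K/12288K bytes of memory
dram_kb = 118784    # first number: DRAM available to the processor
packet_kb = 12288   # second number: packet (I/O) memory

total_kb = dram_kb + packet_kb
total_mb = total_kb // 1024

print(total_kb, total_mb)  # 131072 KB, i.e. 128 MB of DRAM
```

So this router has 128 MB of DRAM installed.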

The flash line (“24576K bytes of processor board System flash”) tells you how much flash memory is installed in your router. The same amount can also be determined with the command show flash:.

Router#show flash:
24576K bytes of processor board System flash (Intel Strataflash)

Directory of flash:/

2  -rwx    18934284   Mar 1 2002 01:33:35 +01:00  c870-advipservicesk9-mz.124-15.T6.bin

23482368 bytes total (4542464 bytes free)
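From the show flash: listing you can also work out how full the flash is, using the total and free byte counts on the last line (a quick sketch):

```python
# Values from the last line of the "show flash:" output
total_bytes = 23482368
free_bytes = 4542464

used_bytes = total_bytes - free_bytes
free_pct = round(free_bytes / total_bytes * 100, 1)

print(used_bytes, free_pct)  # 18939904 bytes used, about 19.3% free
```

Useful to check before copying a new IOS image, since the image must fit in the free space.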

Then again, I can imagine why somebody wouldn’t know where to look, because show version gives you a lot of information. I hope this post helps those of you who didn’t know where to look.