iSCSI performance slows to a crawl
Posted: 2007/01/20 15:08:16
Hi,
I'm using a Promise VTrak m300i iSCSI target with a CentOS 4.4 iSCSI client (iscsi-initiator-utils-4.0.3.0-4) that functions as an rsync server for daily backups. The CentOS machine has a low-budget Planet gigabit ethernet adapter and connects to the VTrak through a Planet gigabit switch. Performance is OK when doing the initial rsync to an empty filesystem on the iSCSI drive, but every rsync after that is VERY slow, presumably because comparing the source files against the existing copies on the target slows it to a crawl.
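To get a feel for how much time goes into the compare pass alone, I'm thinking of timing a dry run against the existing backup. This is just a rough sketch; the paths are placeholders for my setup and the real rsync comes from remote clients:

```
# Time only the scan/compare phase: -n (--dry-run) transfers nothing,
# so this mostly measures reads of the existing copies on the iSCSI volume.
time rsync -an --stats /data/source/ /mnt/iscsi-backup/source/

# For comparison, -W (--whole-file) skips the delta algorithm entirely
# and just rewrites changed files, trading reads for writes.
time rsync -aW --stats /data/source/ /mnt/iscsi-backup/source/
```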
I've noticed that when a process is writing to the iSCSI drive, other processes have a really hard time using it because performance goes down the toilet. While a write to the drive is in progress, even a simple `ls -al` takes ages. Also, `hdparm -Tt /dev/sdc` gives <1 MB/s while rsync is running.
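Here's roughly what I've been watching while an rsync is running, to see whether the box is stuck in iowait or whether the device itself is saturated (assuming the sysstat package is installed and the iSCSI LUN really is sdc):

```
# Per-device utilisation and average wait times, refreshed every 5 s;
# the sdc line shows whether the iSCSI LUN is the thing that's maxed out.
iostat -x 5

# The 'wa' column shows CPU time spent waiting on I/O,
# and 'bi'/'bo' give rough read/write throughput in blocks/s.
vmstat 5
```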
The drive in question is configured as RAID 5 with an ext3 filesystem, and the client is a P4 2.8 GHz machine with 512 MB RAM.
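Two cheap things I'm tempted to try on the client side, assuming the mount point and device name below match my setup: mounting with noatime so every file rsync reads during the compare pass doesn't trigger a metadata write, and bumping the readahead on the iSCSI device.

```
# Avoid an atime update (i.e. a write) for every file rsync reads
# during its compare pass; /mnt/iscsi-backup is a placeholder path.
mount -o remount,noatime /mnt/iscsi-backup

# Check and raise the readahead on the iSCSI block device
# (value is in 512-byte sectors, so 8192 = 4 MB).
blockdev --getra /dev/sdc
blockdev --setra 8192 /dev/sdc
```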
I suspect the blame lies with a crappy ethernet adapter, a crappy switch, a filesystem that's a poor fit for this use, or bad configuration on the client or target side.
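Before blaming the hardware I suppose I should rule the network in or out; something like this is what I have in mind (iperf on both ends, and eth0 and the host name are placeholders; jumbo frames only help if both the switch and the VTrak support them):

```
# Check link speed, duplex and error counters on the gigabit NIC.
ethtool eth0
ifconfig eth0 | grep -i errors

# Raw TCP throughput between the initiator and another box on the
# same switch: run "iperf -s" on the far end, then from this machine:
iperf -c otherhost -t 30

# Jumbo frames (only if the switch and the VTrak are configured for them).
ifconfig eth0 mtu 9000
```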
Does anybody here have more experience than me with iSCSI in similar circumstances and can help me get better performance out of it?