xVM (Xen) poor VMDK file performance with ZFS and SATA drives

We recently set up an xVM server on a new blade with 24 GB of RAM and three 500 GB SATA hard drives. We installed Linux as a paravirtualized guest and we were off.

** A quick note: this post is geared towards those who have poor IO performance running xVM, ZFS, and SATA drives. We have an Intel ICH9R controller set up as individual AHCI drives (not using the soft RAID). Our drives were set up as RAID-Z in a ZFS pool.

We noticed pretty quickly that while xVM and the hosted OSes were performing very well in the CPU and memory categories, the IO speed was troubling. Our systems don't do heavy IO, so it was no issue until the nightly backups fired off. We quickly switched the backup to use rsync to cut down on the amount of IO, which was really a better solution for us anyway.
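For what it's worth, the switch was nothing fancy. A minimal nightly rsync along these lines (the source path and backup host below are placeholders, not our actual setup) copies only changed files instead of re-reading and re-compressing everything the way a Tar/Gzip job does:
-> rsync -az --delete /export/home/ backuphost:/backups/blade1/home/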

While prepping a second, similarly configured blade, we were able to track down some old threads about xVM, ZFS, and poor performance. It looks like some SATA drivers and ZFS have issues with DMA transfers when more than 3 GB of memory is installed on the server and Xen is enabled. Also, as far as performance goes, the Dom0 never showed signs of slow IO.

My understanding is that ZFS can use LOTS of memory, and on systems with 4 GB or more of RAM, it will. Some SATA drivers don't support 64-bit memory addressing, so any data mapped to memory above the 4 GB mark has to be copied by the CPU rather than transferred with DMA.

The solutions are to limit the amount of Dom0 memory and to tell ZFS to use less memory. Between those two settings, the system runs MUCH better. Some people physically pulled memory chips to test this, with mixed results. We were able to use a kernel parameter to limit the Dom0 memory to a few gigabytes. The difference in performance is quite amazing.

I used the following command to benchmark drive writes (make sure to cd into the appropriate ZFS pool first):
time dd if=/dev/zero bs=128k count=25000 of=tmp.dmp

I also have the following ZFS settings (these settings were in place from the start):
compression=on
recordsize=8k
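If you want to reproduce that setup, the properties can be set per filesystem with zfs set; the filesystem name below (tank/xvm) is only an example, so substitute your own:
-> pfexec zfs set compression=on tank/xvm
-> pfexec zfs set recordsize=8k tank/xvm
-> zfs get compression,recordsize tank/xvm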

Before Config Changes
Bare Metal Machine (Dom0) – 24 GB RAM total – about 18 GB in Dom0
Approx write time for 3.2 GB file – 19 seconds – 170 MB/s

DomU – CentOS as paravirtualized guest – 4 GB RAM
Approx write time for 3.2 GB file – 14 min 8 sec – 4 MB/s

Yes, 14 minutes versus 19 seconds. That is quite a spread. And yes, I ran the tests multiple times. That explains why the daily backup was blowing chunks, and why switching from Tar/Gzip to rsync was a better solution anyway.

After Config Changes
Bare Metal Machine (Dom0) – 24 GB RAM total – only 2 GB allocated to Dom0 (kernel parameter)
Approx write time for 3.2 GB file – 14 seconds – 234 MB/s

DomU – CentOS as paravirtualized guest – 4 GB RAM
Approx write time for 3.2 GB file – 19 to 28 seconds – 184 to 115 MB/s

The DomU is now running close to the previous bare-metal speed, and the Dom0 actually picked up speed. The difference is dramatic, to say the least.

This works great for us since our Dom0s are only acting as virtual machine hosts and doing nothing else. With 2 GB of RAM, we still had 1 GB free and no swap usage. We did notice lower performance when setting the Dom0 to 1 GB of RAM, but didn't see any improvement when setting it to 3 GB. Note that we are limiting the Dom0 with a kernel parameter, so the machine still has its full 24 GB of RAM available for virtual machines. Also note that you have to reboot the Dom0 for these changes to take effect.

How to set up your Dom0

AHCI Drivers – You may need to check that your system is using the AHCI driver:
-> prtconf -v | grep SATA
You should see something like:
‘SATA AHCI 1.0 Interface’

Set the Dom0 to use 2 GB of RAM
Note – this has to happen at boot; using the virsh setmaxmem command on a running Dom0 doesn't seem to fix performance.
My system boots from /rpool
-> pfexec nano /rpool/boot/grub/menu.lst
Add dom0_mem=2048M to the kernel$ line
-> kernel$ /boot/$ISADIR/xen.gz dom0_mem=2048M
Some people suggest pinning your Dom0 to the first CPU or two and pinning your VMs to other CPUs. I didn't, but that is up to you. Here is how you would pin Dom0 to CPUs 1 and 2 (cores 1 and 2, not physical CPUs):
-> kernel$ /boot/$ISADIR/xen.gz dom0_mem=2048M dom0_max_vcpus=2 dom0_vcpus_pin=true
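For context, a complete xVM boot entry in menu.lst typically looks something like the sketch below; the title, findroot argument, and module paths will differ on your system, so only add the dom0_mem (and optional pinning) options to your existing kernel$ line rather than copying this verbatim:
title OpenSolaris xVM
findroot (pool_rpool,0,a)
kernel$ /boot/$ISADIR/xen.gz dom0_mem=2048M dom0_max_vcpus=2 dom0_vcpus_pin=true
module$ /platform/i86xpv/kernel/$ISADIR/unix /platform/i86xpv/kernel/$ISADIR/unix -B $ZFS-BOOTFS
module$ /platform/i86pc/$ISADIR/boot_archive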

Limit ZFS Memory
You want to cap the size of the ZFS ARC (its in-memory cache) so ZFS uses less memory.
-> pfexec nano /etc/system
Go to the bottom of the file and add:
-> set zfs:zfs_arc_max = 0x10000000
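0x10000000 is 268,435,456 bytes, i.e. a 256 MB cap; adjust to taste. After the reboot you can confirm the cache is actually staying under the cap with kstat (statistic names may vary a bit between releases):
-> kstat -p zfs:0:arcstats:size
-> kstat -p zfs:0:arcstats:c_max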

Prevent Auto-Balloon
ZFS doesn't like the amount of system memory changing. In xVM, the default is for Dom0 to start with all of the system memory and give some up as VMs start, so the memory available to Dom0 keeps changing. The kernel parameter above sets the maximum; now we set the minimum so Dom0 will always have 2 GB.
-> pfexec svccfg -s xvm/xend setprop config/dom0-min-mem=2048
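As with any SMF property change, the service will likely need to be refreshed (and possibly restarted) before the new value is picked up, and it is easy to double-check; something along these lines should do it:
-> pfexec svcadm refresh xvm/xend
-> svcprop -p config/dom0-min-mem xvm/xend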

Testing Write Times
Make sure to benchmark before and after to verify your results. Also note that I used CentOS as the guest, installed as a paravirtualized guest, not an HVM guest.
-> time dd if=/dev/zero bs=128k count=25000 of=tmp.dmp
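The command writes 25000 blocks of 128 KB, about 3.2 GB. If your dd doesn't print a transfer rate itself (GNU dd does, Solaris dd doesn't), just divide the bytes written by the elapsed seconds reported by time; for example, with bc:
-> echo "25000*131072/1000000/14" | bc
That prints roughly 234 (MB/s) for a 14 second run; swap in your own elapsed time.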

Hope this saves you some time.

Ray Pulsipher


