Citrix PVS 7.7 VHD to VHDX performance difference

2016-06-13

I was asked to justify upgrading our PVS vDisks from VHD to VHDX.  There are a few feature/technical reasons:

  1. Native Windows Server 2012 tools can mount and compress VHDX files.  VHD files created with a 16MB block size require a custom Citrix tool, which does not support compression.
  2. VHDX is the format going forward.
  3. VHDX is supposed to perform better.

Test setup: the VHD file is from Citrix PVS 7.1SP2; the VHDX file is a clone of that VHD with the tools upgraded to 7.7.

So I know VHDX is supposed to perform better, but I was curious by how much.  Apparently, the modified 16MB block size Citrix uses for the VHD format is also ‘4K’ aligned.

Running fsutil fsinfo ntfsinfo C: reports the following for the different vDisks:

[Screenshot: fsutil fsinfo output for the 16MB block size VHD file]

[Screenshot: fsutil fsinfo output for the 32MB block size VHDX file]
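The 4K alignment point can be sanity-checked with some quick arithmetic: both container block sizes are whole multiples of 4KB, so guest NTFS clusters can stay 4K-aligned inside the vDisk file.  A minimal sketch (a back-of-the-envelope check, not part of the original tests):

```python
# Back-of-the-envelope check: the 16MB VHD block size and the 32MB VHDX
# block size are both whole multiples of 4KB, so 4K-aligned guest I/O
# never has to straddle a container block boundary unevenly.
KB = 1024
MB = 1024 * KB

blocks = {
    "VHD (16MB blocks)": 16 * MB,
    "VHDX (32MB blocks)": 32 * MB,
}
ALIGN = 4 * KB  # 4K alignment boundary

for name, block in blocks.items():
    aligned = block % ALIGN == 0
    print(f"{name}: {block} bytes, 4K-aligned: {aligned}")
```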

I set my target devices to the different vDisks and set the ‘Cache in Device RAM’ feature to 4096MB.  Ideally, all writes should go to RAM, but this will still tax the filesystem.

And what is the performance difference between the two?  I used Microsoft’s DiskSpd utility to measure it.
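The post doesn’t record the exact DiskSpd command line used, but a typical random-I/O test of this kind might look like the following (all parameter values here are illustrative, not the ones behind the screenshots below):

```shell
:: Illustrative DiskSpd invocation (not the exact one used for these tests):
:: -b4K  4K block size       -d60  run for 60 seconds
:: -o32  32 outstanding I/Os -t4   4 worker threads
:: -r    random access       -w30  30% writes
:: -c1G  create a 1GB test file on the vDisk
diskspd.exe -b4K -d60 -o32 -t4 -r -w30 -c1G C:\testfile.dat
```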

[Screenshot: VHD DiskSpd test results]

[Screenshot: VHDX DiskSpd test results]

Summary

The VHDX format appears to be around 7.5% faster in our setup.
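The ~7.5% figure comes from comparing the DiskSpd throughput numbers in the screenshots above.  As a sketch of the arithmetic (the IOPS values below are placeholders for illustration, not the measured results):

```python
# How a "X% faster" figure is derived from two throughput measurements.
# The IOPS numbers here are illustrative placeholders, not the real results.
def percent_faster(new: float, old: float) -> float:
    """Relative improvement of `new` over `old`, in percent."""
    return (new - old) / old * 100.0

vhd_iops = 40000.0   # hypothetical VHD result
vhdx_iops = 43000.0  # hypothetical VHDX result
print(f"VHDX is {percent_faster(vhdx_iops, vhd_iops):.1f}% faster")
```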

The boot speed improvement (the amount of time it takes the vDisk to power up and start the ‘Citrix PVS Device Service’) is even more dramatic:

[Screenshot: VHD boot speed]

[Screenshot: VHDX boot speed]

How much of this is the tools vs. the format?  I’m not sure; I didn’t have time to reverse image and upgrade a VHD to 7.7.  Regardless, the combination of upgrading from 7.1SP2 to 7.7 AND the VHDX format brought a dramatic boot time improvement and consistently faster disk speed.

2 Comments

  1. Wotan 2016-07-19 8:13 am

    Dear Trentent,

    may I kindly ask you about the specs/details of your test-environment?
    I just tried to reproduce the DiskSpd-Test, but my results are far away from yours 🙁

    Screenshot-Result: http://www.bilder-hochladen.net/i/lxpl-1-73b4.png

    Tested on PVS-Target. VM running on XenServer 6.5
    Physical Host: HP ProLiant BLC Gen.8
    8 CPUs, 42 GB Ram, 15K RPM local Disk
    2008r2, XA6.5, Server in idle
    PVS Cache in Device Ram w. overflow on disk (4096MB)

    I’m a bit disappointed, and I fear that my system is not well tuned..

    Thank you!

    Reply
    • trententtye 2016-07-22 3:12 pm

      Hi Wotan,

      I’d say your disk results are not bad, you are hitting ~33,000 IOPS which is still pretty good.
      Our configuration is:
      PVS Server:
      VMWare ESXi 5.5
      Windows Server 2008 R2 SP1
      PVS 7.7
      64GB RAM (only ~16GB consumed most often)
      4vCPU

      Target Device:
      VMWare VM (ESXi 5.5)
      Windows Server 2008 R2 SP1
      PVS Target Device software 7.7
      4vCPU 12GB RAM
      Cache in Device Ram with overflow on disk (4096MB)

      Reply
