Provisioning Services

PVS Target Device Update Script — Supplemental File, disableddnsregistration.ps1


Our PVS servers are multi-homed, with the provisioning NIC on a separate VLAN.  Because of how our network is structured, the provisioning NIC could register itself in DNS, but client computers would not be able to connect to it since it sits on a segregated network.  This script sets the provisioning NIC to NOT register in DNS.
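The core of the change is a single WMI call; a minimal sketch, assuming the provisioning NIC can be identified by its subnet (the 10.10.10.x filter below is a placeholder for your provisioning VLAN):

```powershell
# Find the IP-enabled adapter config on the provisioning VLAN (placeholder subnet)
$nic = Get-WmiObject Win32_NetworkAdapterConfiguration -Filter "IPEnabled = 'TRUE'" |
    Where-Object { $_.IPAddress -match '^10\.10\.10\.' }

# First $false: do not register this connection's addresses in DNS
# Second $false: do not use this connection's DNS suffix for registration
$nic.SetDynamicDNSRegistration($false, $false) | Out-Null
```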


Citrix PVS Target Device Update script


Saman and I have created a Citrix PVS update script that we run each time we update a PVS target device.  The script grew out of the numerous issues we found that needed fixing after updating a PVS image.  It includes fixes for issues with AppV 5, PVS, XenApp 6.5, etc.

We also made the script accept command-line parameters so it can run unattended, which lets us use it to do Windows Updates with PVS's automatic vDisk update feature.

I've removed email addresses and some application-specific fixes that would only be relevant to our environment, so I've tried to generalize this as best I can.  We tried to make the script dynamic: if you have a vDisk without AppV it will skip the features that require AppV, skip 64-bit-only features, and so on.
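The dynamic checks are straightforward; a sketch of the pattern (the service name check is illustrative, not lifted from the actual script):

```powershell
# Skip AppV-specific fixes when the vDisk has no AppV 5 client installed
$appv = Get-Service -Name AppVClient -ErrorAction SilentlyContinue
if ($appv) {
    <# AppV 5 fixes would run here #>
}

# Skip 64-bit-only fixes on 32-bit images
if ([Environment]::Is64BitOperatingSystem) {
    <# 64-bit-only fixes would run here #>
}
```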

I will post the supplemental scripts in follow-up posts.



Be careful of your AVHD-to-VHD chains with Citrix PVS across multiple sites


Citrix PVS is a great product.  With a single VHD file you can boot multiple machines and with the new RAM caching technology introduced in PVS 7.1 it’s super fast.

We have Citrix PVS 7.1 set up across 5 datacenters in 3 geographically dispersed regions, with 2 primary sites each having a primary datacenter and a DR datacenter.  Each datacenter has a high-speed local file share for our PVS images.  Our Active Directory is at the 2003 functional level.  Our PVS setup is configured to stream the vDisks from a local file share.  This information is important context for our process.

I found an issue in one of our datacenters when we had to reboot some VMs.  Essentially, our target devices were booting slowly.  Like really, really, really slowly.  And once they did boot, logging into them was slow, and doing anything once you were logged in was slow, though things did seem to get faster the more you clicked and prodded your way around the system.  For instance, clicking the 'Start' button took 15 seconds the first time but was normal afterwards, and clicking on any sub-folder was slow with the same symptoms.

This is a poorly performing VM.  Notice the high number of retries, the slow throughput and the long boot time.

Conversely, connecting to a target device at the other city showed these statistics:

Example of a much, much better performing VM.  14 second boot time, throughput about 10x faster than the slower VM and ZERO retries.

So we have an issue.  These two cities are configured nearly identically, but one is suffering severe performance problems.  We looked at the host the VM was running on, but moving it to different hosts didn't seem to make any difference.  I then RDP'ed to the VM and launched Process Monitor so I could watch the boot-up process.  This is what I saw:

chfs04 is the local file server hosting our vDisk.  citrixnas01 is a remote file server across a WAN connection in the other city's datacenter.  For some reason, PVS is reading the *versioned* AVHD file from the local file share, but the base VHD file it is reading from citrixnas01.  This is, obviously, a huge issue.  Reading the base image over a WAN explains both the poor performance we are experiencing and the high retry counts for packets.

But why is it going over the WAN?  It turns out that the AVHD file contains a field in its header that records the location of its parent VHD file.  PVS is simply following the native parent-child sequence built into the VHD spec for chained disks.

Hex editing the avhd file reveals the chain path
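A hex editor isn't strictly necessary.  Per the VHD spec, a differencing disk's dynamic header normally starts at byte offset 512, and the parent's name lives in a 512-byte UTF-16 big-endian field at offset 64 within that header.  A sketch to dump it (the .avhd path is a placeholder):

```powershell
$bytes = [System.IO.File]::ReadAllBytes('D:\Store\XenAppPn01.6.avhd')

# 512 = usual start of the dynamic header, 64 = offset of the Parent Unicode Name field
$parent = [System.Text.Encoding]::BigEndianUnicode.GetString($bytes, 512 + 64, 512)
$parent.TrimEnd([char]0)
```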

(Un)Fortunately for us, our PVS servers can read the file share across the WAN and cache data locally, so the speedups accumulated the longer a VM was in use.  To fix the issue immediately, we edited the hosts file on the PVS server to point citrixnas01 at the local file server.
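The hosts file change amounts to a single line in C:\Windows\System32\drivers\etc\hosts (the IP below is illustrative; use your local file server's address):

```
10.0.50.25    citrixnas01
```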

After executing that change I rebooted one of the target devices in the *slow* site.

Throughput is now *much* better, but the number of retries is still concerning

Now, the tricky thing about this issue is that we thought we had it covered because we configured each site with its own store:

Our thinking was that this path is where the PVS service would look when trying to find VHDs and AVHDs, so when it looked for XenAppPn01.6.vhd it would look in that store.  Obviously, that line of thinking is not correct.  So if you have geographically distant sites, it is important that the path you use when creating a version corresponds to the fastest, local share in every site.  For us, the folder structure is identical in all sites, so creating a hosts file entry pointing citrixnas01 to the local file share works in our scenario.

EDIT – I should also note that you can see the incorrect file paths in Process Explorer as well.  Changing the hosts file wasn't enough; we also needed to restart the streaming service, because it had cached how to reach the files across the WAN.  Process Explorer can show you the VHD files the Stream Service has open and where they reside (under Files and Handles):
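Assuming the default service name for the Citrix PVS Stream Service, the restart is a one-liner:

```powershell
# Restart the Citrix PVS Stream Service so it drops its cached file paths
Restart-Service -Name StreamService
```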

Citrixnas01 shouldn’t be in there…  Host file has been changed…

After a streaming service restart:

Muuuuch better.


Quick one-liner to change vDisk on devices


Extend disk space on a VMware PVS system


How To Convert a 16MB block size PVS VHD to a 2MB PVS VHD file using Hyper-V


If you've created a 16MB block size VHD file using Citrix PVS, you've probably found that these disks are a touch more difficult to manage than the 2MB ones.  With 16MB block size VHDs you cannot mount them natively in Windows, compact them, or manipulate them with regular VHD tools.  The only tool I've been able to manipulate them with is PVS itself, specifically CVHDMOUNT.EXE.  It lets you mount the VHD file as a hard disk, but even then many tools don't recognize the disk at all.  This includes tools like Disk2VHD that you would think would make it easy to turn it back into a VHD.  Not so.

In order to convert from a 16MB block size PVS VHD you need to do the following steps:

1) Mount the VHD file using CVhdMount.exe (you may need to install the Provisioning Server role to get the CVhdMount.exe tool)

2) Go into Disk Management and set your disk to “Offline”

3) Open Hyper-V, right-click your server and go “New > Hard Disk…”

4) Choose the location and filename to save it.

5) Choose to copy from the PHYSICALDRIVE that corresponds to the VHD you mounted.

6) Click Finish.   And now you’re done!  🙂
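If you do this often, step 2 can also be scripted with diskpart instead of the Disk Management GUI (the disk number is whatever the CVhdMount-mounted disk shows up as on your system):

```
diskpart
list disk
select disk 2
offline disk
```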


Install VMware drivers offline into a Citrix PVS vDisk


I am attempting to disable interrupt coalescing for some testing we are doing with a latency-sensitive application, and I have 2 VMware virtual machines configured this way that work as expected.

The latency settings I have done are here:

Essentially, we turned off interrupt coalescing on the physical NIC by doing the following:
Logging into the ESXi host with SSH

(e1000e found as our NIC driver)

(we see InterruptThrottleRate as a parameter)
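The screenshots are gone, but on an ESXi shell those two findings typically come from commands like these (vmnic0 is an assumption; check your own uplink name):

```
ethtool -i vmnic0        # reports the driver in use, e.g. e1000e
vmkload_mod -s e1000e    # lists the module's parameters, including InterruptThrottleRate
```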

Then we modified the VMware virtual machines themselves.  Through the vSphere Client, go to VM Settings > Options tab > Advanced: General > Configuration Parameters and add an entry for ethernetX.coalescingScheme with the value of "disabled".

We have 2 NICs assigned to each of our PVS VMs: one dedicated to provisioning traffic and one for access to the rest of the network.  So I had to add 2 lines to my configuration:

For the VMware virtual machines we just had the one line:
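Based on the parameter named above, the .vmx entries would look like this, assuming the NICs enumerate as ethernet0 and ethernet1 (the PVS VMs need both lines; the plain VMware VMs only the first):

```
ethernet0.coalescingScheme = "disabled"
ethernet1.coalescingScheme = "disabled"
```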

Upon powering up the VMware virtual machines, the per-packet latency dropped significantly and our application was much more responsive.

Unfortunately, even with the settings being identical on the VMware virtual machines and the Citrix PVS image, the PVS image will not disable interrupt coalescing, and it consistently shows higher packet latency.  We built the vDisk image a couple of years ago (~2011), and the vDisk now has outdated drivers that I suspect may be the issue.  The VMware machines have a VMXNET3 driver from August 2013, and our PVS vDisk has a VMXNET3 driver from March 2011.

To test whether a newer driver would help, I did not want to reverse image the vDisk, as that is such a pain in the ass.  So I tried something else: I made a new maintenance version of the vDisk and then mounted it on the PVS server:

This mounted the vDisk as drive “D:”

I then took the newer driver from the VMware virtual machine and injected it into the vDisk:
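Injecting a driver into the mounted volume is standard offline servicing with DISM; a sketch, with the driver folder path as a placeholder:

```
Dism /Image:D:\ /Add-Driver /Driver:C:\Temp\vmxnet3 /Recurse
```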

I could see my newer driver installed alongside the existing driver:
Published Name : oem57.inf
Original File Name : vmxnet3ndis6.inf
Inbox : No
Class Name : Net
Provider Name : VMware, Inc.
Date : 08/28/2012
Version :

Published Name : oem6.inf
Original File Name : vmmouse.inf
Inbox : No
Class Name : Mouse
Provider Name : VMware, Inc.
Date : 11/17/2009
Version :

Published Name : oem7.inf
Original File Name : vmaudio.inf
Inbox : No
Class Name : MEDIA
Provider Name : VMware
Date : 04/21/2009
Version :

Published Name : oem9.inf
Original File Name : vmxnet3ndis6.inf
Inbox : No
Class Name : Net
Provider Name : VMware, Inc.
Date : 11/22/2011
Version :

Then unmount the vDisk:

I then set the vDisk to maintenance mode, set my PVS target device to maintenance, and booted it up.  When I checked Device Manager I saw that the driver version was still the old one.

But clicking "Update Driver…" updated our production NIC to the newer version.  I chose "Search automatically" and it found the newer, injected driver.  I then rebooted the VM, and success!



IMA Service Fails with the Following Events: 3989, 3634, 3614


We have run into an issue with Provisioning Services 6.1 and XenApp 6.5 working together. After we update the vDisk (say, for Windows Updates) we run a script that does things like the "XenApp Prep" to get XenApp 6.5 ready for imaging. It appears there is a bug in XenApp Prep that sometimes causes it to not fully prepare XenApp 6.5 for rejoining the farm. The initial symptoms I found were:

Event ID 4003
“The Citrix Independent Management Architecture service is exiting. The XenApp Server Configuration tool has not been run on this server.”

I found this CTX article about it, but none of it was applicable.

I did procmon traces and I found the following registry keys were missing on the bad system:

A broken system missing the Status Registry key

A working system with the Status key. Note Joined is “0”

After adding the Status Registry key:

I tried restarting the service; the progress bar got further, but the service still quit. Procmon showed me this:

That is ACCESS DENIED when trying to read that registry key. It turns out that the IMA service does not have appropriate permissions to read it. The magic permissions on a working box (and what you need to set here) look like this:

Notice that none of the permissions are inherited and that "NETWORK SERVICE" has been added with Full Control on this key. Now when we try to start the Citrix Independent Management Architecture service we get the following errors:

To correct these errors, the local host cache needs to be rebuilt. To do that, we run:
Dsmaint recreatelhc
Dsmaint recreaterade

After doing that, we can start the IMA service and MFCOM. If, instead of the IMA service starting, you get the following error message:

Ensure the following registry key is populated:

Utilizing MCLI.exe to comment the newest version of a vDisk in Citrix Provisioning Services (PVS)


I've written this script to use MCLI.exe to add a comment to the newest version of a vDisk, and I have marked which fields correspond to what.
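The heart of the idea is a single MCLI call; a rough sketch only, with every angle-bracketed value a placeholder for your environment (check MCLI Help for the exact verb and parameter syntax in your PVS version):

```powershell
# Sketch: stamp a description onto a specific vDisk version via MCLI.exe
& 'C:\Program Files\Citrix\Provisioning Services\MCLI.exe' Set DiskVersion `
    -p diskLocatorName=<vDiskName> siteName=<site> storeName=<store> version=<newest> `
    -r description="Updated $(Get-Date -Format yyyy-MM-dd)"
```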



This script can now be added to the PVS automatic vDisk update feature to automatically comment the latest vDisk version when it is updated.
