Blog

Microsoft and Citrix putting pictures of scripts in documents…

2013-11-20

Come on guys, a little more effort than that. How are you supposed to copy and paste an image?

The script is the following:

It’s supposed to pull the application and the command line used to execute it in AppV 5. I got the script from this document:
http://www.microsoft.com/en-us/download/details.aspx?id=40885
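The script itself is only a picture, so it isn’t reproduced here, but a rough sketch of the idea looks like the following. This is my own hedged reconstruction, not the script from the document; it assumes the App-V 5 client’s root\AppV WMI namespace (with the AppvClientApplication class exposing Name and TargetPath), which may not match what the screenshot actually uses:

# Hypothetical sketch only: list each published App-V 5 application and the
# command line (target path) used to launch it, via the App-V client WMI namespace.
Get-WmiObject -Namespace root\AppV -Class AppvClientApplication |
    Select-Object Name, TargetPath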

 

Read More

Error launching batch file from within AppV 5 bubble

2013-11-15

So we have an application that requires the “%CLIENTNAME%” variable to be passed to it in its exe string.  The string looks like so:
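The actual string is only shown as an image in the original post, but it is essentially the application’s executable with the variable appended, something like this purely hypothetical example:

"C:\Program Files\SomeApp\SomeApp.exe" %CLIENTNAME%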

The issue we have is that AppV does not seem to resolve that variable and pass it to the program. So when the program starts, it creates literal %clientname% folders in a temp directory, and since we can’t have two %clientname% folders in the same directory, only one instance of the application can be launched *period* if we do it this way, as opposed to one per server.
To resolve this issue I wrote a script that will pass the %CLIENTNAME% variable to AppV by ripping it out of the registry:
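The script is also only pictured in the original post. As a hedged sketch of the approach (the registry location and application path here are assumptions, not the original script): for an RDP/ICA session the client name is normally surfaced in the user’s Volatile Environment registry key, so a .cmd wrapper can read it from there and pass it along, roughly like this:

@echo off
rem Hypothetical sketch only - read the session's client name out of the registry
rem and hand it to the application, since the variable isn't being expanded for us.
for /f "tokens=2,*" %%a in ('reg query "HKCU\Volatile Environment" /v CLIENTNAME ^| find "CLIENTNAME"') do set CLIENTNAME=%%b
"C:\Program Files\SomeApp\SomeApp.exe" %CLIENTNAME%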

This worked for AppV 4.6 without issue. Now with AppV 5 I get a PATH NOT FOUND error when trying to launch this script.

To verify the path exists in the app I ran the following commands:
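Those commands aren’t reproduced here, but they were along the lines of the standard App-V 5 pattern for starting a process inside a package’s virtual environment (the package name is a placeholder):

Import-Module AppvClient
$pkg = Get-AppvClientPackage -Name "MyApp"                 # placeholder package name
Start-AppvVirtualProcess -AppvClientObject $pkg cmd.exe    # opens a command prompt inside the bubble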

The PowerShell commands put me in the AppV 5 bubble and then opened a command prompt. From that command prompt I can see the directory that was reported as missing. Going back to procmon, I was curious to see what command it was actually launching. It was launching this:

This command was failing. It appears that when you launch the .cmd file directly, AppV 5 starts cmd.exe *outside* the AppV bubble and it doesn’t connect to the appvve. To correct this I tried this command line:
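The exact command line is only shown as an image, but based on the fix described below it amounts to prefixing the script with an explicit cmd.exe /c, something like this (path hypothetical):

cmd.exe /c "C:\Program Files\SomeApp\prelaunch.cmd"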

Success! It launched, saw the directory, and everything was good thereafter. So let that be a lesson to other AppV 5 package makers: if you need a pre-launch script, you may need to modify your published icon to put another cmd.exe /c before the command file for it to start in the bubble.

A very good AppV blog has already discovered this issue and come up with a better fix than mine:
http://blogs.technet.com/b/virtualvibes/archive/2013/10/17/the-issues-of-sequencing-bat-shortcuts-in-app-v-5-0.aspx

Read More

PowerCLI Fix VMWare Time Sync issue

2013-10-15
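The fix itself is only posted as an image in the original post, so here is a rough sketch of the usual PowerCLI way to turn off “Synchronize guest time with host” on every VM. This is my own hedged guess at what the pictured script does, and the vCenter name is a placeholder:

Connect-VIServer -Server vcenter.example.local      # placeholder vCenter name
$spec = New-Object VMware.Vim.VirtualMachineConfigSpec
$spec.Tools = New-Object VMware.Vim.ToolsConfigInfo
$spec.Tools.SyncTimeWithHost = $false
# Apply the reconfiguration to every VM
Get-VM | ForEach-Object { $_.ExtensionData.ReconfigVM($spec) }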

Read More

IMA Service Fails with the Following Events: 3989, 3634, 3614

2013-09-25

We have run into an issue where we have Provisioning Services 6.1 and XenApp 6.5 working together. After we update the vDisk (say, for Windows Updates) we run through a script that does things like the “XenApp Prep” to get XenApp 6.5 ready for imaging. It appears there is a bug in XenApp Prep that sometimes causes it to not fully prepare XenApp 6.5 for rejoining the farm. The initial symptom I found was:

Event ID 4003
“The Citrix Independent Management Architecture service is exiting. The XenApp Server Configuration tool has not been run on this server.”

I found this CTX article about it, but none of it was applicable.

I did procmon traces and I found the following registry keys were missing on the bad system:


A broken system missing the Status Registry key


A working system with the Status key. Note Joined is “0”

After adding the Status Registry key:

I tried restarting the service; the progress bar got further, but the service still quit. Procmon showed me this:

That is ACCESS DENIED when trying to read that registry key. It turns out the IMA Service does not have appropriate permissions to read this key. The magic permissions on a working box, and what you need to set here, look like this:
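(The permissions screenshot isn’t reproduced here, but the same change can be scripted. A hedged sketch; the key path below is only a placeholder, substitute whichever key your procmon trace shows being denied:)

$keyPath = 'HKLM:\SOFTWARE\Wow6432Node\Citrix\IMA'   # placeholder - use the key from the procmon trace
$acl = Get-Acl $keyPath
$acl.SetAccessRuleProtection($true, $true)           # stop inheriting, keep existing entries as explicit
$rule = New-Object System.Security.AccessControl.RegistryAccessRule('NETWORK SERVICE','FullControl','ContainerInherit','None','Allow')
$acl.AddAccessRule($rule)
Set-Acl -Path $keyPath -AclObject $acl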

Notice that none of the permissions are inherited and “NETWORK SERVICE” has been added with full control on this key. Now when we try to start the Citrix Independent Management Architecture service we get the following errors:

To correct these errors the local host cache needs to be rebuilt. To fix that we need to run:
Dsmaint recreatelhc
Dsmaint recreaterade

After doing that, we can start the IMA Service and MFCOM. If, instead of the IMA Service starting, you get the following error message:

Ensure the following registry is populated:
Read More

Query a bunch of Windows 2003 event logs

2013-08-20
This will find event ID 3001 in the Application log for a list of computers read from “systems.txt”:
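The script itself is only shown as a picture; a hedged sketch of the same idea in PowerShell (the file name is from the post, everything else is an assumption) would be:

# Query the Application log on each server listed in systems.txt for event ID 3001
$computers = Get-Content .\systems.txt
foreach ($computer in $computers) {
    Get-EventLog -LogName Application -ComputerName $computer |
        Where-Object { $_.EventID -eq 3001 } |
        Select-Object MachineName, TimeGenerated, Source, Message
}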

Read More

Windows Server 2012 R2 cache drive size for parity drives

2013-08-19

It turns out the maximum write cache size for a parity drive is 100GB. I have a 500GB SSD (~480GB real capacity), so the maximum write cache I can make for a volume is 100GB. I suspect I may be able to create multiple volumes and give each of them a 100GB write cache. Until then, this is the biggest you can make for a single volume, so MS sidesteps the issue of a write cache that is too large.
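For reference, the write cache size is set when the virtual disk is created; a minimal sketch (pool and disk names are placeholders):

New-VirtualDisk -StoragePoolFriendlyName 'Pool' -FriendlyName 'ParityDisk' -ResiliencySettingName Parity -UseMaximumSize -WriteCacheSize 100GB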

Read More

Testing Windows Storage Spaces Performance on Windows 2012 R2

2013-08-08

Windows Storage Spaces parity performance on Windows Server 2012 is terrible.  Microsoft’s justification for it is that it’s not meant to be used for anything except “workloads that are almost exclusively read-based, highly sequential, and require resiliency, or workloads that write data in large sequential append blocks (such as bulk backups).”

I find this statement a bit amusing because trying to back up anything at 20MB/sec takes forever. If you set up a Storage Spaces parity volume at 12TB (available space) and you have 10TB of data to copy onto it just to get it going, it will take you about 8,738 minutes, or 145 hours, or 6 straight days. I have no idea who thought anything like that would be acceptable. Maybe they want to adjust their use case to volumes under 1GB?

Anyways, 2012 R2 brings some feature enhancements, including new features for Storage Spaces: ‘tiered storage’ and write-back caching. These let you use fast media like flash as a staging ground so writes complete faster, and the data written to the fast media can then be transferred to the slower storage at a more convenient time. Does this fix the performance issues in 2012? How does the new 2-disk parity perform?

To test, I made two VMs: one generic 2012 and one 2012 R2. They have the exact same volumes, 6x10GB in total. The volumes are broken down into 4x10GB volumes on a 4x4TB RAID-10 array, 1x10GB volume on a 256GB Samsung 840 Pro SSD, and 1x10GB volume on a RAMDisk (courtesy of DataRAM). Performance for each set of volumes is:

4x4TB RAID-10 -> 220MB/s write, 300MB/s read
256GB Samsung 840 Pro SSD -> ~250MB/s write, 300MB/s read
DataRAM RAMDisk -> 4000MB/s write, 4000MB/s read

The Samsung SSD volume has a small sequential write advantage and should have a significant seek advantage as well; since that volume is dedicated to the Samsung, it should be significantly faster (you could probably divide by 6 to get the individual performance of the 10GB volumes sharing the single RAID). The DataRAM RAMDisk should crush both of them for read and write performance in all situations. For my weak testing, I only tested sequential performance.

The first thing I did was create my storage pool with my 6 volumes that reside on the RAID-10. I used this PowerShell script to create it:
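That script was also posted as an image; a sketch of the equivalent commands (friendly names are placeholders) looks roughly like this:

# Pool all of the blank 10GB disks, then carve a virtual disk out of the pool
$disks = Get-PhysicalDisk -CanPool $true
New-StoragePool -FriendlyName 'TestPool' -StorageSubSystemFriendlyName (Get-StorageSubSystem).FriendlyName -PhysicalDisks $disks
New-VirtualDisk -StoragePoolFriendlyName 'TestPool' -FriendlyName 'TestDisk' -ResiliencySettingName Simple -UseMaximumSize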

Next, I created a striped disk to determine my maximum performance among my 6 volumes. I mapped a drive to my DataRAM RAMDisk and copied a 1.5GB file from it using xcopy /j.

Performance to the stripe seemed good.  About 1.2Gb/s (150MB/s)

I then deleted the volume and recreated it as a single parity drive.

Executing the same command xcopy /j I seemed to be averaging around 348Mb/s (43.5MB/s)

This is actually faster than what I remember getting previously (around 20MB/s) and this is through a VM.

I then deleted the volume and recreated it as a dual parity drive. To get the dual parity drive to work I actually had to add a 7th disk; neither 5 nor 6 would work, as it told me I lacked sufficient space.

Executing the same command xcopy /j I seemed to be averaging around 209Mb/s (26.1MB/s)

I added my SSD volume to the VM and deleted the Storage Spaces volume. I then added the SSD volume to the pool and recreated the virtual disk with “tiered” storage.

When I specified the SSD as the tiered storage, it removed my ability to create a parity volume, so I created a simple volume for this testing.
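For what it’s worth, creating the tiered disk went along these lines (a sketch; tier sizes and friendly names are placeholders):

$ssdTier = New-StorageTier -StoragePoolFriendlyName 'TestPool' -FriendlyName 'SSDTier' -MediaType SSD
$hddTier = New-StorageTier -StoragePoolFriendlyName 'TestPool' -FriendlyName 'HDDTier' -MediaType HDD
New-VirtualDisk -StoragePoolFriendlyName 'TestPool' -FriendlyName 'TieredDisk' -ResiliencySettingName Simple -StorageTiers $ssdTier,$hddTier -StorageTierSizes 8GB,40GB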

Performance was good.  I achieved 2.0Gb/s (250MB/s) to the volume.

With the RAMDisk as the SSD tier I achieved 3.2Gb/s (400MB/s). My 1.5GB file may not be big enough to ramp up to see the maximum speed, but it works. Tiered storage makes a difference, but I didn’t try to “overfill” the tiered storage section.

I wanted to try the write-back cache with the parity to see if that helps.  I found this page that tells me it can only be enabled through PowerShell at this time.
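The relevant piece is simply the -WriteCacheSize parameter on New-VirtualDisk; a sketch of the sort of command involved, with names and sizes as placeholders:

New-VirtualDisk -StoragePoolFriendlyName 'TestPool' -FriendlyName 'ParityDisk' -ResiliencySettingName Parity -UseMaximumSize -WriteCacheSize 10GB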

I enabled the write cache with both my SSD and RAMDisk as part of the pool, and the performance I got copying the 1.5GB file was 1.8Gb/s (225MB/s).

And this is on a single parity drive! Even though the copy completed quickly, I could see in Resource Monitor that the copy to the E: drive did not stop; after hitting the cache at ~200MB/s it dropped down to ~30-45MB/s for several seconds afterwards.

You can see xcopy.exe is still going but there is no more network activity.  The total is in Bytes per second and you can see it’s writing to the E: drive at about 34.13MB/s

I imagine this is the ‘Microsoft Magic’ going on where the SSD/write cache is now purging out to the slower disks.

I removed the RAMDisk SSD to see what impact it may have if it’s just hitting the stock SSD.

Removing the RAMDisk SSD and leaving the stock SSD I hit about 800Mb/s (100MB/s).

This is very good! I reduced the write cache size to see what would happen if the copy exceeded the cache…  I recreated the volume with the WriteCacheSize at 100MB.

As soon as the write cache filled up it was actually a little slower than before, 209Mb/s (26.1MB/s). 100MB just isn’t enough to help.

100MB of cache is just not enough to help

Here I am now at the end. It appears tiered storage only helps mirrored or striped volumes. Since those are the fastest volumes anyway, the benefits aren’t as high as they could be. With parity drives though, the write cache setting has a profound impact on the initial performance of the system. As long as whatever fills the cache has enough time to purge to disk in between, you’ll be OK. By that I mean without an SSD present and the write cache at default, a 1GB file will copy over at 25MB/s in 40 seconds. With a 100MB SSD cache present it will take 36 seconds, because once the cache is full it is bottlenecked by how fast it can empty itself. Even worse, in my small-scale test, it hurt performance by about 50%. A large enough cache probably won’t encounter this issue as long as there is sufficient time for it to clear.

It might be worthwhile to invest in a good UPS as well. If you have a 100GB cache that is near full and the power goes out, it will take about 68 minutes for the cache to finish dumping itself to disk. At 1TB worth of cache you could be looking at 11.37 hours. I’m not sure how Server 2012 R2 deals with a power outage on the write cache, but since it’s a part of the pool I imagine on reboot it will just pick up where it left off…?

Anyways, with Storage Spaces I do have to give Microsoft kudos. It appears they were able to come close to doubling the performance on the single parity to ~46MB/s. The dual parity is at about 26MB/s in my test environment. With the write cache everything is extremely fast until the write cache becomes full; after that it’s painful, so it’s very important to size your cache appropriately. I have a second system with 4x4TB drives in a storage pool mirrored configuration. Once 2012 R2 comes out I suspect I’ll update to it and change my mirror into a single parity with a 500GB SSD cache drive. Once that happens I’ll try to remember to retest these performance numbers and we’ll see what happens 🙂

Read More