How I Run a Defrag on a PVS vDisk
This quick write-up isn’t anything advanced; it’s more a confirmation that defragmenting the VHDX of a merged vDisk is still needed on today’s PVS servers. I reached out to some people on Slack, knowing I would get a solid answer fast, and the consensus was that it is still needed. Some rebuild their images, and some do what I do, which is a manual defrag (and it can be automated). Today, some people may think this isn’t needed because they run SSD/NVMe. But we are talking about fragmentation inside the VHDX itself, accumulated across all the version changes, not at the OS or storage layer. Think of how you shrink an FSLogix VHDX/VHD; to me, it’s the same concept. The type of storage doesn’t matter. It’s about keeping the VHDX clean and optimized, and reducing the write cache bloat that fragmentation of the VHDX as a whole will cause.
My advice is to read this blog, as it will shed a lot of light on what I am talking about and much more.
https://www.citrix.com/blogs/2015/01/19/size-matters-pvs-ram-cache-overflow-sizing/
“Defragment the vDisk before deploying the image and after major changes. Defragmenting the vDisk resulted in write cache savings of up to 30% or more during testing. This will impact any of you who use versioning as defragmenting a versioned vDisk is not recommended. Defragmenting a versioned vDisk will create excessively large versioned disks (.AVHD files). Run defragmentation after merging the vDisk versions. Note: Defragment the vDisk by mounting the .VHD on the PVS server and running a manual defragmentation on it. This allows for a more robust defragmentation as the OS is not loaded. An additional 15% reduction in the write cache size was seen with this approach over standard defragmentation.”
Put the devices into maintenance mode in Studio (no devices can be streaming from the vDisk while you defrag it).
Make sure the connections say “0.”
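If you want to double-check this from PowerShell rather than the console, the PVS PowerShell snap-in can list active target devices. This is a sketch, assuming the PVS console is installed locally with a default path; the cmdlet name comes from the PVS PowerShell SDK, so verify against your version’s documentation:

```powershell
# Load the Citrix PVS snap-in (path assumes a default console install)
Import-Module "C:\Program Files\Citrix\Provisioning Services Console\Citrix.PVS.SnapIn.dll"

# List target devices that are currently active (streaming).
# This should return nothing before you start working on the vDisk.
Get-PvsDeviceInfo | Where-Object { $_.Active } | Select-Object Name
```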
If needed, you could copy the merged vDisk off and import it elsewhere to do all the work; that way you wouldn’t have to touch the actively streaming devices. Then, after you complete the work, just update all the targets.
I didn’t do it that way for this write-up, but it’s an option.
Make sure you have no locks either. As per my example, XD7CALLTST has none.
Now merge the selected vDisk.
Select “Merged base - last base + all updates from that base.”
Then select Test mode.
Wait for the merge to reach 100%.
Once it’s merged, it will look like this
Now click on the vDisk and select Mount.
The icon will look like this on the vDisk
Now open File Explorer and find the mounted drive.
Analyze it to see if it’s needed.
Now defrag it.
Once completed, go back and select Dismount.
The icon will go back to its original look.
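The mount/analyze/defrag/dismount sequence above can also be scripted on the PVS server itself. This is a sketch using standard Windows tools (Mount-DiskImage from the Storage module and defrag.exe, run from an elevated prompt); the vDisk file name and store path are placeholders, not from my environment:

```powershell
# Path to the merged vDisk on the PVS store (placeholder; adjust for your store)
$vdisk = "D:\vDisks\MyImage.3.vhdx"

# Mount the VHDX offline (no OS loaded inside it, per the Citrix guidance above)
Mount-DiskImage -ImagePath $vdisk

# Find the drive letter Windows assigned to the mounted volume
$letter = (Get-DiskImage -ImagePath $vdisk | Get-Disk | Get-Partition |
           Where-Object { $_.DriveLetter }).DriveLetter

# Analyze first to see if a defrag is needed, then run the defrag
defrag "$($letter):" /A
defrag "$($letter):" /U /V

# Dismount when done
Dismount-DiskImage -ImagePath $vdisk
```

Dennis Span’s script linked in the Resources section below wraps this same idea into a full automated job.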
At this point, I boot up my maintenance device to make sure things are good.
As you see, it’s version 3.
Once the machine boots up, you will see it in use under “Show vDisk Usage.”
The machine is up; log in and make sure it’s good.
Now shut it down and promote to PROD.
You will see “Show vDisk Usage” go to no devices.
Promote to Prod, and select Immediate, or you can schedule it.
It will now look like this
Now copy the new vDisk version 3 files to your other PVS servers to get replication all green.
Copy from the server you did the work on; in my case, VS1PVS03.
Destination
Once the copy is completed, the replication will show green
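The copy itself can be as simple as a robocopy between the stores. A sketch; the destination server name, store paths, and vDisk file names here are placeholders for illustration, not from my lab:

```powershell
# Copy the new version files (.vhdx and its .pvp properties file) from the
# store on the server you worked on to the store on another PVS server.
# Server name, paths, and file names are placeholders - adjust for your farm.
robocopy "D:\vDisks" "\\OTHERPVS\D$\vDisks" "MyImage.3.vhdx" "MyImage.3.pvp"
```

Remember to copy both the .vhdx and the matching .pvp file, or the replication status won’t go green.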
PVS Target check
At this point, I booted it up, and it was golden.
Resources
This can be automated
https://dennisspan.com/automate-vhd-offline-defrag-for-citrix-provisioning-server/
Extra information
https://support.citrix.com/article/CTX229864
Total Breakdown of why it’s needed
https://www.citrix.com/blogs/2015/01/19/size-matters-pvs-ram-cache-overflow-sizing/