PVS Internals 2 - How to properly size your memory
I'm really surprised, but it has already been eight months since I wrote the first part of PVS Internals. I have been very busy recently, so please accept my apologies that it took so long to complete the second part of this blog series.
In the first part, I prepared the theoretical ground for the discussion about proper memory sizing. You should already understand the concepts behind the Windows cache manager.
Introduction
There are many misunderstandings and misconceptions about Provisioning Services. One of them is that PVS requires a huge amount of memory to function properly. Yes, PVS uses memory (by design, not through some hidden expensive mechanism), but that does not mean you need to build a supercomputer to use Provisioning Services. Why PVS benefits from spare memory was discussed in my previous article - it is a service that is truly designed to leverage your unused memory.
When you ask a consultant about proper sizing of PVS memory, you will probably get the "it depends" answer (don't you just love it?) or calculations based on rough estimates. If you do not have the time or resources to do proper sizing, I highly recommend our consulting white paper describing advanced memory considerations for Provisioning Services (CTX125126). However (similar to the situation with pagefiles), it is strongly recommended to treat sizing as an exercise in good design - each project is different, and sizing requirements may be completely different if you publish desktops with tons of applications compared to a situation where you simply publish locked-down applications. And you want to identify whether more memory is required before you deploy to production.
There are two utilities that I like to use when working with PVS. The first gives you a very good overall view of the environment, while the second can be regarded as a caching Swiss Army knife and can give you very specific details for your design.
Tools for memory calibration
There are two tools that I really love and use every time I touch a PVS server. They work perfectly together:
- Resource Monitor - Resource Monitor is one of the most underrated built-in tools you can find in the Windows stack. It gives you a very nice overview of your environment - you can easily find bottlenecks, see an overview of CPU, memory, disk, and network activity... and you can always dig deeper to get more details. Resource Monitor is the perfect combination of Performance Monitor and Task Manager.
- RAMMap - RAMMap is an excellent tool from Mark Russinovich for more detailed examination. As mentioned in the previous article, PVS only uses physical memory for caching (the pagefile is not used for the standby cache), and RAMMap gives you perfect details about how your RAM is being used.
When you combine them, you not only see an overview of your memory usage, but also the details needed to make appropriate decisions. Be aware that while Resource Monitor is updated in real time, RAMMap requires a manual refresh.
Recommended Approach
To monitor memory usage, we will use both utilities. While Resource Monitor will provide us with information about overall memory consumption, we will use RAMMap to determine the amount of memory we need for each vDisk.
The whole process can be summarized as follows:
- Clear the standby cache to get clean results
- Boot the target device to the logon screen - 1st check
- Wait for Windows to fully load - 2nd check
- Log on the first user - 3rd check
- Run regular applications, ideally launched by regular users - 4th check
- Shut down the target device - 5th check
- Review the memory requirements
- Boot as many target devices as you want and let them run for a few days or weeks
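The checkpoint bookkeeping from the steps above can be sketched as a small script. The first two readings mirror the measurements later in this article; the remaining values and the checkpoint names are purely illustrative - in practice you would copy each number from RAMMap's "File Summary" tab after the corresponding step.

```python
# Illustrative calibration log: vDisk pages resident in RAM (MB) after each
# checkpoint. The last three sample values are hypothetical placeholders.
checkpoints = [
    ("logon screen reached", 450),
    ("Windows fully loaded", 735),
    ("first user logged on", 790),
    ("applications exercised", 850),
    ("target device stopped", 850),
]

previous = 0
for name, vdisk_mb in checkpoints:
    delta = vdisk_mb - previous  # memory read from the vDisk since last check
    print(f"{name:24s} vDisk in RAM: {vdisk_mb:5d} MB (+{delta} MB)")
    previous = vdisk_mb
```

The per-step delta is what matters for sizing: it tells you which phase (boot, logon, application launch) actually pulls data from the PVS server.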
Calibration example
Below you can find a step-by-step process to understand the memory usage of your PVS servers.
The environment consists of a single Windows Server 2008 R2 image that publishes only Internet Explorer as an application. The image size is 40 GB and we store our vDisk locally.
First, we shut down all devices using PVS to get a clear picture of our environment. Nothing should be reading from the PVS server, and make sure that you are not copying any data to the PVS server.
We use RAMMap to purge the data cached on previous days (option "Empty Standby List"):
Right now, our PVS server is running with an empty standby cache. We can easily confirm this by looking at Resource Monitor:
Now it's time to start our VM(s). I usually start several virtual machines simultaneously - because they are all booting from the same standard image, there should not be much difference between starting one or more virtual machines.
As soon as we begin booting the virtual machines, we can see that the standby cache increases:
If we switch to RAMMap and select the "File Summary" tab, we can clearly see who is responsible for filling the cache:
Have you noticed anything? The .VHD file is not only stored in the standby cache, but in the active page pool too. This is caused by the StreamProcess.exe process. This is important because some pages are active, so monitoring only the size of the standby cache is not an accurate representation (almost 25% of the total is not stored in the standby cache).
As soon as we hit the logon screen ("Press Ctrl + Alt + Delete to log on"), we can see that the image takes about 450 MB of memory:
The standby cache, on the other hand, is already at 561 MB. This is caused by the fact that Windows is caching not only our .VHD file, but also any other buffered read operations:
Now we could say that PVS needs to read ~450 MB to fully start Windows Server 2008 R2, but this statement would not be correct. Remember the "Lie-To-Children" concept from my previous article - Windows is much more complicated than we want to admit, and there is a lot more happening under the hood than meets the eye. So while the logon screen is displayed (and you can actually log on already), there are still a lot of operations running in the background.
Therefore, it is important to decide when we consider Windows fully initialized. In my case, I always push the farm configuration through group policy, so I will wait until XenApp has joined the farm - and to be absolutely sure, I will wait until the reported load is 0 (qfarm /load). For your information, the difference between "farm joined" and "no load" is about 250 MB:
Now I can say that a fully loaded XenApp image requires about 735 MB - compared to a simple check at the logon screen, that is a difference of nearly 300 MB.
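The two checkpoints measured so far give us the amount of data Windows still reads after the logon screen appears. A quick sanity check on the numbers from this article:

```python
logon_screen_mb = 450   # vDisk pages in RAM when the logon screen appears
fully_loaded_mb = 735   # vDisk pages in RAM once XenApp reports load 0

# Data still streamed in the background after the logon screen was shown
background_mb = fully_loaded_mb - logon_screen_mb
print(background_mb)  # 285 -> the "nearly 300 MB" difference mentioned above
```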
The next important step is the first logon. On average, the first user logon will take an additional 50-60 MB (at least with local accounts), even if your default profile is only 1 MB. The probable reason (I just take it as a fact and have never spent too much time investigating) is that additional operating system components are involved in the first logon - for example, specific API calls (libraries that have not been loaded yet):
After the first user logs off, we can see that the situation is very similar to the first logon - again, this is not a simple logoff, but additional APIs are involved:
Just to show you the difference between the standby cache and the memory actually used by the file, here is a Resource Monitor capture from the same moment. Notice that the standby cache is 1292 MB (while, as we have seen previously, only ~740 MB is actually used by our vDisk):
The difference between these two numbers can actually tell you the minimum amount of memory you should allocate for caching on the PVS server (default system cache + operational vDisk cache). Our default recommendation is to reserve 512 MB of system cache for the operating system itself, and you can see that this number reflects real-life experience pretty well.
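Deriving the minimum cache allocation from these measurements can be sketched as follows. The 1292 MB and ~740 MB readings come from the captures above; treating everything outside the vDisk as "OS system cache" is my simplification:

```python
standby_cache_mb = 1292  # total standby cache (Resource Monitor)
vdisk_in_ram_mb = 740    # portion actually used by our vDisk (RAMMap)

# Everything else in the standby cache is non-vDisk (OS) cached data
os_cache_mb = standby_cache_mb - vdisk_in_ram_mb
print(os_cache_mb)  # 552 -> close to the recommended 512 MB OS system cache

# Minimum cache allocation: OS system cache + observed vDisk working set
minimum_cache_mb = 512 + vdisk_in_ram_mb
```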
This penalty applies only to the first logon/logoff - the remaining users are not affected by it. It is similar to the first launch of a streamed application, which is still building its cache. Can you pre-cache the first user logon? Hardly - the only possible solutions would be an automated logon with a strictly limited user, or starting a process that also loads the user's profile; however, this requires careful consideration for security reasons. The majority of the data read during profile creation actually comes from the System32 folder and not from the C:\Users folder itself. The potential gain is probably very low for XenApp servers, but could be more interesting for XenDesktop workloads (and could potentially lead to a PVS Internals 3 article).
Our server can be considered fully prepared for user load at this stage - now it's time to ask your pilot users for testing. At the end, you might be very surprised - in my case, the average memory requirement after running a few servers for 4 days was only around 1 GB.
Does this mean that I need only 3 GB of memory for my PVS server (2 GB for the OS and 1 GB for the vDisk)? Certainly not - that would be a very bad decision. Let's take a look at our sizing formula:
2GB + (#XA_vDisk * 4GB) + (#XD_vDisk * 2GB) + 15% (buffer)
We just proved that our vDisk does not require more memory than recommended; in this case, the recommendation would be to stick with 4 GB of memory per vDisk. If our tests had revealed higher memory requirements, we would need to increase this number; however, with few exceptions, this formula should cover the majority of cases. Remember that you must also provide system cache for non-PVS related pages (this is included in the 15% buffer).
Be aware that this does not mean you should reserve only 1 GB of memory for this vDisk. The goal is to optimize PVS so that ~80% of reads are served from memory; if PVS needs to read from disk from time to time, it does not mean that your design is wrong.
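The sizing formula can be expressed as a small helper. Applying the 15% buffer to the whole sum is my reading of the formula; the example input (a single XenApp vDisk, no XenDesktop vDisks) mirrors the environment in this article:

```python
def pvs_memory_gb(xa_vdisks: int, xd_vdisks: int, buffer: float = 0.15) -> float:
    """2GB base + 4GB per XenApp vDisk + 2GB per XenDesktop vDisk + 15% buffer."""
    base = 2 + 4 * xa_vdisks + 2 * xd_vdisks
    return round(base * (1 + buffer), 2)

print(pvs_memory_gb(1, 0))  # single XenApp vDisk -> 6.9 (GB)
```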
Summary
There are some general guidelines that you should follow when sizing memory for PVS:
- Do not use the size of the vDisk to calculate memory requirements. The cache manager caches blocks, not entire files, so you do not need to plan for caching the entire VHD in memory. You will probably be surprised how little memory you actually need for an average operating system.
- Do not aim to cache 100% of read operations. You can probably achieve an 80% cache hit rate with as little as 2-4 GB of memory for each vDisk (with exceptions, of course).
- Do not use the standby cache for sizing. Windows will try to cache everything as long as there is enough memory - and files are never removed from the cache unless there is a better use for that memory. If you leave your PVS server running for a few weeks and then decide you need to buy more RAM because there is no free memory, you should return to PVS Internals 1 and read it more carefully this time. As I tried to explain there, PVS actually requires much less memory than most people expect.
- Do not underestimate the memory of your VMs. Having a lot of memory in the PVS server itself while not having enough memory in your VMs is a very common mistake. Remember that your target devices also use the system cache - and this reduces re-reads from the PVS server. You should reserve at least 500 MB for a single-user OS and at least 2 GB for a multi-user OS for use by the system cache.
- Clear the cache before calibration. Make sure to always empty the standby cache before recording your numbers to get clean results. Windows tries to cache everything, so it is easy to get your cache "polluted". This is quite normal and, as explained above, thanks to the cache manager's priorities, one-off operations are not going to replace your frequently accessed data blocks. For your final sizing numbers, do not forget to add some overhead (usually around 15%).
- Identify breakpoints in your tests. Do not assume that Windows is fully loaded when the logon screen appears. As you will see in your tests, it is still reading data in the background.
- Be aware of any component that reads all files. If you have a misconfigured antivirus or other components that scan the entire disk, the standby cache can easily get filled. Make sure to monitor your cache usage to detect these issues in advance. This problem can occur both on the PVS server and on the target device itself.
- Use our magic formula unless you have a very specific reason not to. This formula covers the majority of implementations. Unless you have a very specific reason not to follow it, or you have many vDisks, you should use it. Just to remind you, here it is again:
2GB + (#XA_vDisk * 4GB) + (#XD_vDisk * 2GB) + 15% (buffer)
I would also like to thank my colleagues Dan Allen, Nicholas Rintalan, and Pablo Legorreta (in alphabetical order, so they don't fight) for their help!
Martin Zugec