PVS vs. MCS Revisited


It has been a few months since my last article, but rest assured, I've been keeping busy and I have a ton of things in my head that I'm committed to getting down on paper in the near future. Why so busy? Well, our Mobility products are keeping everyone busy. But I also spent the last month or so preparing two different sessions for BriForum Chicago. My colleague, Dan Allen, and I co-presented on the topics of IOPS and Folder Redirection. Once Brian makes the videos and decks available online, I'll be sure to point people to them.

So what is that stuff I want to get down on paper and turn into future articles? To name a few: MCS vs. PVS (revisited), NUMA and XA VM sizing, XenMobile Lessons Learned "2.0", and PVS Part 3. But let's talk about that first topic, PVS vs. MCS, right now.

Although BriForum (and Synergy) is always a mad rush, I always try to catch a few sessions from some of my favorite presenters. One of them is Jim Moyle, and he actually inspired this article. If you don't know Jim, he is one of our CTPs and works for Atlantis Computing - he also wrote one of the most informative documents on IOPS I have ever read. I swear a month doesn't go by without someone asking me about PVS vs. MCS (pros and cons, what should I use, etc.). I won't go into the pros and cons or tell you what to use, since folks like Dan Feller already did a great job of that, complete with beautiful decision trees. I might also note that Barry Schiffer has an updated decision tree you may want to check out, too. But I will address one of the main reasons people often cite for not using MCS - that it generates "about 1.6x or 60% more IOPS compared to PVS". Since Ken Bell sort of "documented" that in passing some 2-3 years ago, it has pretty much been taken as gospel and never challenged. But our Consulting team was seeing slightly different results in the field, and Jim Moyle also decided to contest that statement. Jim shared the results of his MCS vs. PVS testing at BriForum this year - and I think many people were shocked by what he found.

What were the results? Here is a summary of the items I found most interesting:

  • MCS generates on average 21.5% more IOPS than PVS in the steady state (nowhere near 60%)
  • That breaks down to about 8% more write IO and 13% more read IO
  • MCS generates 45.2% more peak IOPS than PVS (which is closer to the 50-60% range we originally documented)
  • The read-to-write (R/W) ratio for PVS IO was 90%+ write in both the steady state and at peak (nothing new here)
  • The R/W ratio for MCS at peak was 47/53 (we have long said it is about 50/50 for MCS, so nothing new here)
  • The R/W ratio for MCS in the steady state was 17/83 (which was a bit of a surprise, much like the first bullet)
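To make the arithmetic behind those percentages concrete, here is a minimal Python sketch. The IOPS figures are hypothetical, chosen only so the math reproduces the ratios above; they are not Jim's raw test data:

```python
# Illustrative only: hypothetical per-VM IOPS figures chosen to mirror
# the ratios discussed above, NOT Jim Moyle's actual measurements.

def overhead_pct(mcs_iops: float, pvs_iops: float) -> float:
    """Percent more IOPS that MCS generates relative to PVS."""
    return (mcs_iops - pvs_iops) / pvs_iops * 100

def rw_ratio(read_iops: float, write_iops: float) -> tuple:
    """Return the R/W ratio as rounded percentages, e.g. (17, 83)."""
    total = read_iops + write_iops
    return (round(read_iops / total * 100), round(write_iops / total * 100))

# Hypothetical steady-state totals per VM
pvs_steady = 10.0    # baseline
mcs_steady = 12.15   # ~21.5% more than PVS

print(f"steady-state overhead: {overhead_pct(mcs_steady, pvs_steady):.1f}%")
print("MCS steady-state R/W:", rw_ratio(2.07, 10.08))  # ~17/83 split
```

The point of the helper functions is simply that "X% more IOPS" and "R/W ratio" are both derived numbers - you need the underlying read and write counts to compute either one.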

So how can that be?!?

I think it is essential to understand where our original "1.5-1.6x" or "50-60%" statement comes from - it takes into account not just the steady state, but also the boot and logon phases, which are mostly read IOPS and absolutely inflate the MCS numbers. If you are not familiar with the typical R/W ratios for a Windows virtual machine during the various stages of its "life" (boot, logon, steady state, idle, logoff, etc.), then this image, courtesy of Project VRC, always does a good job of explaining it succinctly:

The R/W ratio of the boot phase is a lot different than the steady-state!

We also tend to lump peak IOPS and average IOPS into a single number - we should really provide two different numbers, or break things down like Jim and I did in the results above, because a single IOPS number can be very misleading on its own. Don't believe me? Just check out my BriForum presentation on IOPS, where I show some examples of how misleading statements like "1 million IOPS" can be.
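As a toy illustration of why a single IOPS number misleads (all values here are invented for the sketch, not real measurements):

```python
# Hypothetical per-second IOPS trace for one VM: mostly quiet, with a
# short boot/logon burst in the middle. All values are invented.
trace = [8, 9, 10, 9, 8, 240, 260, 250, 9, 10, 8, 9]

avg_iops = sum(trace) / len(trace)
peak_iops = max(trace)

# Quoting only the average hides the burst entirely; quoting only the
# peak exaggerates the sustained load. Report both, or the full profile.
print(f"average IOPS: {avg_iops:.1f}")
print(f"peak IOPS:    {peak_iops}")
```

Here the average works out to roughly 69 IOPS while the peak is 260 - size your storage for the first number and the boot storm will hurt; quote only the second and the workload looks far heavier than it sustains.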

So there you have it - things are looking up for MCS. And frankly, I think MCS sort of got a bad name early on, and nobody ever bothered to look at real-world data or re-test like Jim did. Score one for the CTPs.

Does this mean we should all start leveraging MCS? Not necessarily - we have thousands of customers using PVS in production, and I can't say the same for MCS right now. But does it mean we should give MCS another look? Absolutely. Especially as companies like MSFT and VMW build native read-caching support into their hypervisors to deal with read IOPS... because once we get rid of those extra read IOPS, we're left with an almost negligible 8% more write IOPS in the steady state... and that's when the simplicity of MCS starts to look pretty attractive.

Hope this helps. Again, a big thanks to Jim Moyle for most of this data and for helping me bust a longstanding myth!

Cheers, Nick

Nick Rintalan, Lead Architect, Citrix Consulting
