Shortly after I wrote my StoreFront 2.6 scalability articles, we released StoreFront 3.0. We recently finished our first round of internal hard-core performance and scalability tests, so I wanted to share some of the results and updates.
Let's get to it, shall we?
Single Server Scalability (SSS), General Sizing & Deployment Guidance
This hasn't changed a ton from 2.6 to 3.0, although we made some great performance improvements across the board in 3.0, and we can now support about 10-20% more connections per StoreFront box. I would still recommend starting with 2 or 3 StoreFront nodes with 4 vCPUs and 8 GB of RAM, and that should get you to about 150k connections per hour (with a logon rate of 50 requests per second). And whereas before we recommended "capping" the number of nodes in a server group at 5, we are now comfortable supporting up to 6 nodes in a single server group (I need another entire article to explain the "why" behind that, but trust me for now). That VM specification still seems to be the sweet spot and gets most customers where they need to be. So what has changed?
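If you want to sanity-check sizing for your own environment, here's a little napkin-math sketch (my own illustration, not an official Citrix formula - the per-node capacity is simply derived from the 2-3 nodes / ~150k connections per hour guidance above):

```python
# Rough StoreFront sizing sketch (napkin math, not an official Citrix formula).
# Assumes ~50k connections/hour per 4 vCPU / 8 GB node, derived from the
# "2-3 nodes for ~150k connections/hour" guidance above, plus one node for HA.

def storefront_nodes(peak_connections_per_hour, per_node=50_000, max_group=6):
    """Estimate server group size (N+1) for a target peak logon load."""
    needed = -(-peak_connections_per_hour // per_node)  # ceiling division
    nodes = needed + 1  # +1 node of failover headroom
    if nodes > max_group:
        raise ValueError("Exceeds the 6-node server group guidance above")
    return nodes

print(storefront_nodes(150_000))  # -> 4 (3 nodes for load + 1 for HA)
```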
Auto-Provisioned Apps with RfW
The scalability of auto-provisioned apps with RfW, where StoreFront automatically subscribes new users to apps, has been significantly improved in 3.0 compared to 2.6. We made some core StoreFront tweaks that reduce the number of round trips to the various delivery services, which improves response times by 80% and overall system throughput by 140%! To put this in perspective, with 5 auto-provisioned apps we can now support somewhere in the neighborhood of 125k connections per hour in 3.0 (compared to 60k in 2.6). And if you have 100 auto-provisioned apps being rolled out to new users, we can achieve about 15k connections per hour now, whereas with 2.6 we really struggled to consistently log on users and enumerate resources, and experienced outages from time to time. So this is a great improvement worth mentioning, and very important for those using RfW with auto-provisioned apps.
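To make those two data points easier to play with, here's a tiny sketch that ballparks the curve between them (my own illustration - the 5-app and 100-app figures are from our tests above, but the log-log interpolation between them is purely an assumption, not a measured curve):

```python
# Ballpark SF 3.0 throughput vs. auto-provisioned app count.
# The two anchor points come from the tests above; interpolating between
# them on a log-log scale is my own assumption, not measured data.
import math

MEASURED = {5: 125_000, 100: 15_000}  # apps -> connections/hour (SF 3.0)

def est_connections_per_hour(apps):
    (x0, y0), (x1, y1) = sorted(MEASURED.items())
    t = (math.log(apps) - math.log(x0)) / (math.log(x1) - math.log(x0))
    return y0 * (y1 / y0) ** t

for apps in (5, 20, 50, 100):
    print(f"{apps:>3} auto-provisioned apps -> ~{est_connections_per_hour(apps):,.0f} connections/hour")
```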
Garbage Collection
We realized in SF 2.6 that we had a problem with overall system throughput. As it turned out, we were using the default workstation garbage collection (GC). So one of the more important changes we made in 3.0 was implementing server GC, which is actually the recommended practice for ASP.NET applications on multi-core servers. This increased throughput anywhere from 5% to 28%, depending on the specific component tested.
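For the curious: with the .NET Framework, the GC flavor for ASP.NET is normally controlled by the machine-wide Aspnet.config file via the gcServer setting. StoreFront 3.0 makes this change for you, so the sketch below is purely illustrative (and exactly how StoreFront enables it under the covers is my assumption - I'm just showing the standard switch):

```python
# Illustrative only: enabling server GC for ASP.NET via Aspnet.config.
# StoreFront 3.0 already ships with server GC, so don't do this on SF boxes;
# note Aspnet.config is machine-wide and affects every ASP.NET app on the host.
import xml.etree.ElementTree as ET

# Standard .NET Framework 4.x location (assumption: 64-bit framework in use).
ASPNET_CONFIG = r"C:\Windows\Microsoft.NET\Framework64\v4.0.30319\Aspnet.config"

tree = ET.parse(ASPNET_CONFIG)
runtime = tree.getroot().find("runtime")
gc = runtime.find("gcServer")
if gc is None:
    gc = ET.SubElement(runtime, "gcServer")  # becomes <gcServer enabled="true"/>
gc.set("enabled", "true")
tree.write(ASPNET_CONFIG)  # run elevated; app pools pick it up on recycle
```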
Memory Usage
If you remember from my last article, I noted that RfW requires much more memory per user/resource than native Receiver. I'm pleased to report that we worked hard to reduce the memory footprint per user/resource from 3 KB in 2.6 to 650 bytes in 3.0! As a result, RfW scalability is now much closer to native Receiver scalability (only a ~15% difference in 3.0).
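That saving adds up quickly at scale. Some quick napkin math (the per-resource costs are the 2.6/3.0 figures above; the user and resource counts are example assumptions of mine):

```python
# Napkin math: RfW session-state memory at scale, SF 2.6 vs 3.0.
# 3 KB and 650 bytes per user/resource are the figures above; the user and
# resource counts below are example assumptions, not test parameters.
USERS = 50_000           # concurrent user sessions (example)
RESOURCES_PER_USER = 60  # enumerated apps/desktops per user (example)

for version, per_resource_bytes in (("2.6", 3 * 1024), ("3.0", 650)):
    total = USERS * RESOURCES_PER_USER * per_resource_bytes
    print(f"SF {version}: ~{total / 2**30:.1f} GiB of per-user resource state")
```

In that example, 2.6 would blow straight past an 8 GB VM on resource state alone, while 3.0 fits comfortably.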
Credential Wallet
This is something that caught us after the 2.6 release and, fortunately, was fixed before 3.0 walked out the door. We found an issue with the Credential Wallet (CW) service under extremely high load. More specifically, we ran into a bottleneck in the CW service once tens of thousands of auth tokens were issued on a single SF 2.6 server at any given time (essentially, you were limited to about that many active user sessions). Fortunately, only 1 or 2 customers in the world ever ran into this issue. But we are happy to report that the problem with the CW service has been fixed in 3.0, and we have successfully tested up to 400k user auth tokens.
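If you want a feel for whether a per-server token cap could ever bite you, Little's law gives a quick estimate (both inputs below are example assumptions of mine, purely to show how fast tokens accumulate):

```python
# Little's law sketch: concurrent auth tokens ~= logon rate x token lifetime.
# Both inputs are example assumptions, just to show how quickly tokens pile up.
LOGONS_PER_SEC = 10   # example steady logon rate on one SF server
SESSION_HOURS = 4     # example average session/token lifetime

active_tokens = LOGONS_PER_SEC * SESSION_HOURS * 3600
print(f"~{active_tokens:,} concurrent auth tokens")  # ~144,000
```

Even a modest 10 logons per second with 4-hour sessions lands well into six figures - past the old 2.6 bottleneck, but comfortably within the 400k we tested in 3.0.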
X1
Now that Receiver X1 is upon us, we also wanted to look at how it impacts StoreFront scalability. As expected, the impact to scalability on "Day 1" is fairly significant, since there are a number of files to download in total. Compared to our Receiver for Web API testing, StoreFront throughput decreased by almost 100% when bringing on the X1 app and a Receiver site at a rate of 100 requests per second. It is important to keep in mind that these results only apply to the Day 1 scenario, where every user downloads the entire Receiver site. On subsequent days or logons, the site would be cached and scalability or throughput would not be affected. As is the case with RfW, environments using X1 should be designed with an additional 650 bytes of memory per resource on top of the 4 GB base for StoreFront 3.0. This is one of the reasons I recommend 8 GB for each StoreFront VM out of the gate. One other note - we enabled Integrated Caching on the NetScaler for this particular X1 test, so we could cache static content such as JS, CSS, JPG and GIF files.
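To see where that 8 GB recommendation comes from, here's the arithmetic (the 4 GB base and 650 bytes per resource are from above; the user and resource counts are example assumptions):

```python
# Why I spec 8 GB per StoreFront VM: base footprint + per-resource state.
# The 4 GB base and 650 bytes/resource come from the article; the user and
# resource counts are example assumptions, with headroom rounded generously.
BASE_GB = 4.0
USERS, RESOURCES_PER_USER = 40_000, 80  # example peak load

state_gb = USERS * RESOURCES_PER_USER * 650 / 2**30
total = BASE_GB + state_gb
print(f"{BASE_GB:.0f} GB base + ~{state_gb:.1f} GB resource state = ~{total:.0f} GB -> spec 8 GB")
```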
Future Tests - Site Aggregation, PNAgent, IMA/XML
Of course, there is always more work to do. We have started looking at some other advanced scenarios, such as how stores and site aggregation influence StoreFront scalability, how legacy PNAgent affects scalability, and how all these numbers might change if we are enumerating 6.5-based IMA/XML farms compared to FMA-based sites (all the tests above were based on the latest release). Once we put these things through our performance lab and have some numbers, I'll be sure to provide a further update.
Special Thanks
Once again, a special thanks and shout-out to our System Test II team in the UK, led by Martin Rowan. OlgaK in particular deserves a ton of credit for these StoreFront tests. I merely interpret a lot of the test results and come up with sizing recommendations and leading practices, which IMO is the easy part. All the hard work and months of testing were done by Martin's team, and none of this would be possible without them.
Cheers, Nick
Nicholas Rintalan
Lead Architect & Director - Americas, Citrix Consulting Services (CCS)