Data Center Consolidation 2.0

So your agency has reduced its total number of data centers. Mission accomplished, right?

While the Federal Data Center Consolidation Initiative has produced savings, primarily by reducing real estate and power consumption, only a few agencies have taken the time to evaluate the performance, operational and security impact on their applications and services after the move.

Server virtualization is highly effective at driving up utilization of physical resources, and it also offers an easy way to "lift and shift" workloads without rationalizing them against business needs.

While this reduces the number of physical servers and data centers to manage, the number of applications has not necessarily shrunk, and neither has the power they draw or the operational maintenance they require. The new challenges become obvious quickly as users, administrators and CIOs adjust to the concept of fewer federal data centers that are farther away.

Shiny new data center. Same old workloads

Server virtualization was already commonplace when consolidation projects first started, and it helped accelerate workload migrations to new data centers. The underlying hypervisor technology has matured over the past five years, and the growth of open source alternatives has accelerated the commoditization of this core data center technology. Agencies seeking more agility, better security and improved cost advantages are beginning to adopt open source-based hypervisors and orchestration stacks for some of their workloads as a way to reduce virtualization licensing costs.
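As a concrete illustration of how programmable these open stacks are, the sketch below uses the libvirt Python bindings to inventory guests on a KVM host. The connection URI and the package name (libvirt-python) are assumptions for a typical local QEMU/KVM setup; this is an illustrative sketch, not any agency's actual tooling.

```python
# Minimal sketch: inventory guests on an open source KVM host via the
# libvirt Python bindings (pip install libvirt-python). Assumes a local
# QEMU/KVM hypervisor reachable at the standard system URI.
import libvirt

conn = libvirt.open('qemu:///system')   # connect to the local hypervisor
for dom in conn.listAllDomains():
    state, max_mem, mem, vcpus, cpu_time = dom.info()  # memory is in KiB
    print(f"{dom.name()}: active={dom.isActive()} vCPUs={vcpus} mem={mem // 1024} MiB")
conn.close()
```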

Federal shops with advanced DevOps practices are also looking at containers, an alternative to traditional server virtualization that is gaining popularity as a way to consolidate application workloads even further. The core of the approach is to isolate an application and its dependencies in a lightweight "container" instead of a dedicated virtual machine, reducing the overhead of running multiple virtual machines for dedicated applications. This technology will undoubtedly unlock more data center efficiency, as computing power goes to applications rather than platform overhead.
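To make the isolation model concrete, here is a minimal sketch using the Docker SDK for Python (the docker package on PyPI). The image name and ports are placeholders; the point is that each instance gets its own isolated runtime without a per-instance guest operating system.

```python
# Minimal sketch: two isolated containers sharing one host kernel,
# using the Docker SDK for Python (pip install docker).
import docker

client = docker.from_env()

# Each container gets its own filesystem, process and network namespace,
# but no per-instance guest OS the way a virtual machine would need.
for i, port in enumerate((8081, 8082), start=1):
    client.containers.run(
        "nginx:stable",              # placeholder application image
        name=f"app-instance-{i}",
        ports={"80/tcp": port},      # map container port 80 to a host port
        detach=True,
    )

print([c.name for c in client.containers.list()])
```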

In addition, Federal Risk and Authorization Management Program (FedRAMP) certified clouds provide an excellent alternative for driving down the size of the new centralized federal data centers. While some workloads undoubtedly need to stay in house, regularly evaluating workloads to determine whether they are good candidates for a move to the public cloud should be standard practice for every IT manager. By shrinking the federal data center footprint with cost-effective infrastructure-as-a-service (IaaS) clouds, the total cost of computing has nowhere to go but down.
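A lightweight way to operationalize that evaluation is a triage pass over the application portfolio. The criteria and thresholds below are hypothetical examples, not FedRAMP requirements; a real assessment would weigh compliance, data sensitivity and interdependencies far more carefully.

```python
# Minimal sketch of a workload triage pass for cloud candidacy.
# The criteria and cutoffs are hypothetical, for illustration only.
from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    data_sensitivity: int    # 1 (public) .. 5 (most sensitive)
    latency_sensitive: bool  # needs LAN-class latency to users or systems
    utilization_pct: int     # average server utilization

def cloud_candidate(w: Workload) -> bool:
    # Keep highly sensitive or latency-bound workloads in house;
    # underutilized, low-sensitivity workloads are the cheap wins in IaaS.
    return w.data_sensitivity <= 3 and not w.latency_sensitive and w.utilization_pct < 40

for w in [Workload("public-website", 1, False, 15),
          Workload("tactical-ops-db", 5, True, 80)]:
    print(w.name, "-> IaaS candidate" if cloud_candidate(w) else "-> stay in house")
```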

Physical appliance sprawl

Your new data center almost certainly has multiple instances of the same type of network device, each dedicated to a specific department or sub-agency. The lift and shift of network appliances, including firewalls, virtual private network (VPN) concentrators, WAN optimization controllers (WANOp), intrusion detection/prevention systems (IDS/IPS) and application delivery controllers (ADC), has created enormous equipment sprawl that eats up expensive real estate in the data center.

Since many of these network services are security-critical, it is crucial that they are patched and properly configured at all times. Several agencies are looking at software-defined networking and network functions virtualization (SDN/NFV) as a way to consolidate these functions, much as they consolidated servers with virtualization. When virtualizing these network services, it is important to choose an open delivery platform that can provide simplified central management, near line-rate performance and true multi-tenancy for isolation.
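The consolidation idea can be sketched abstractly. The model below is purely illustrative and is not the API of any SDN/NFV product: one shared platform hosts many virtual network functions while keeping each tenant's functions logically separate.

```python
# Illustrative model (not a vendor API): consolidated virtual network
# functions on one shared platform, isolated per tenant.
from dataclasses import dataclass, field

@dataclass
class Tenant:
    name: str
    vnfs: list = field(default_factory=list)  # firewall, IDS/IPS, ADC, ...

@dataclass
class NfvPlatform:
    tenants: dict = field(default_factory=dict)

    def provision(self, tenant: str, vnf_type: str) -> None:
        # One shared platform replaces a rack of per-agency appliances;
        # each tenant's functions stay in their own logical partition.
        self.tenants.setdefault(tenant, Tenant(tenant)).vnfs.append(vnf_type)

platform = NfvPlatform()
platform.provision("sub-agency-a", "firewall")
platform.provision("sub-agency-a", "ids-ips")
platform.provision("sub-agency-b", "adc")
for t in platform.tenants.values():
    print(t.name, t.vnfs)
```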

Hello, latency

Fewer data centers means greater physical distance between users and the applications, services and data they need every day for their mission. This can result in degraded performance, because most classic client-server applications were not built to deal with cross-country latency.
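A back-of-the-envelope calculation shows why. The round-trip times and chattiness below are assumed numbers, but the ratio is the point: an application that makes hundreds of synchronous round trips per operation slows down in proportion to latency, regardless of bandwidth.

```python
# Back-of-the-envelope sketch: why chatty client-server protocols suffer
# over the WAN. All figures are illustrative assumptions.
ROUND_TRIPS = 500    # protocol chatter for a single user operation
LAN_RTT_S = 0.001    # ~1 ms inside the old local data center
WAN_RTT_S = 0.060    # ~60 ms cross-country to the consolidated site

print(f"LAN: {ROUND_TRIPS * LAN_RTT_S:.1f} s")  # 0.5 s
print(f"WAN: {ROUND_TRIPS * WAN_RTT_S:.1f} s")  # 30.0 s, same app, same code
```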

Most agencies take a brute force approach to this new latency challenge, buying bigger, more expensive and more reliable network circuits to shore up performance. But this approach tends to be very costly, and it does not always address the root of the performance problem: extra bandwidth does nothing to shorten round-trip time.

Several agencies have leaned on application virtualization to address these performance challenges by virtualizing and centralizing the client components of an application in the data center. With the client interface virtualized, chatty client-server traffic stays inside the data center, and only keystrokes, mouse clicks and screen updates cross the WAN. As a bonus, centralizing applications also simplifies patching and management.

Another approach agencies are using to improve performance over WAN links is WAN virtualization. As the price of internet-based network links continues to fall, agencies have looked at augmenting existing Multiprotocol Label Switching (MPLS) connections with internet-based circuits. WAN virtualization is now emerging as a solution that can aggregate multiple disparate links while providing the reliability of a classic MPLS WAN connection. This maximizes the use of all available networks, adds elasticity and delivers huge cost savings.
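The steering idea behind WAN virtualization can be sketched in a few lines. The policy below is a toy example under assumed link prices and health states, not any vendor's algorithm: traffic goes to the cheapest healthy link, with MPLS as the reliable fallback.

```python
# Illustrative sketch (not a product API): steering traffic across an
# aggregated pool of MPLS and internet links by health and cost.
from dataclasses import dataclass

@dataclass
class Link:
    name: str
    up: bool
    latency_ms: float
    cost_per_mbps: float   # assumed monthly price, for illustration

def pick_link(links: list) -> Link:
    # Prefer the cheapest healthy link; latency breaks ties.
    healthy = [l for l in links if l.up]
    if not healthy:
        raise RuntimeError("all WAN links down")
    return min(healthy, key=lambda l: (l.cost_per_mbps, l.latency_ms))

pool = [
    Link("mpls-primary", up=True, latency_ms=40, cost_per_mbps=100.0),
    Link("broadband-1", up=True, latency_ms=55, cost_per_mbps=5.0),
    Link("broadband-2", up=False, latency_ms=50, cost_per_mbps=5.0),
]
print("sending via", pick_link(pool).name)  # broadband-1 while it is healthy
```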

Keep it open

Although agencies have made progress with their consolidation efforts, these new challenges show that the work is far from over. Agencies should develop strategies to address them based on the principle of "open," ensuring that the next-generation solutions they choose do not lock them into a single platform or cloud provider. This is a crucial step toward achieving the next round of cost savings and the next level of agility for federal data centers.

This post originally appeared on Federal News Radio
