Data center components that deserve an update this year

In the next 12 months, consider making a few upgrades to the data center. The SearchDataCenter advisory board explains which ones are the most important.

In the next 12 months, the data center will experience its biggest review yet. IT staff will stop asking, “What updates should be applied?” and start asking, “What is the data center for?”

That’s what Clive Longbottom, analyst at Quocirca and SearchDataCenter advisory board member, forecasts. Here, the SearchDataCenter advisory board picks the most important data center component upgrades for the next 12 months and shares how to fit them into your infrastructure and budget.

Clive Longbottom, co-founder and service director at Quocirca: Outsource

Self-owned and managed data centers are becoming less attractive. Now is the time to review what equipment, software and data can be updated or outsourced, and what has to remain under direct control.

Any data center upgrade has to include flexibility. The business may decide that some parts of the existing platform should remain under its direct control, but that ruling can change. Set up the IT architecture so that workloads can move around without trouble.

In many cases, a software as a service (SaaS) model makes sense. SaaS offers greater flexibility, predictable costs and continuous delivery.

Colocation can also make more sense than an owned data center facility: it allows required space to grow and shrink elastically as the hybrid model of computing progresses.

To prepare for a move to colo or anything as a service:

  • Perform an asset survey of what is actually in the data center at the hardware and software level (see the sketch after this list);
  • Carry out a usage survey of what is really being used, and why;
  • Cleanse data, preferably including master data modeling for better veracity with new data;
  • Rationalize software instances and versions to the lowest possible number;
  • Consolidate workload via virtualization onto the optimum hardware stack;
  • Virtualize on VMs or containers to make workload transitions from the existing data center platform to colo, hosted platforms, or infrastructure or platform as a service (IaaS or PaaS) as easy as possible; and
  • Automate the movement of workloads into the cloud via appropriate tools.
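
For the asset survey, even a short script can seed the hardware inventory before you bring in full discovery tooling. A minimal sketch for Linux hosts, run locally on each machine; the output path is a placeholder, and the package count assumes a Debian-family system:

```python
import csv
import platform
import subprocess

def run(cmd):
    """Run a shell command and return its trimmed output, or '' on failure."""
    try:
        return subprocess.check_output(cmd, shell=True, text=True).strip()
    except subprocess.CalledProcessError:
        return ""

# Collect a few basic hardware/software facts from this host.
record = {
    "hostname": platform.node(),
    "os": f"{platform.system()} {platform.release()}",
    "cpu_count": run("nproc"),
    "mem_total": run("grep MemTotal /proc/meminfo"),
    "package_count": run("dpkg -l 2>/dev/null | wc -l"),  # Debian-family only
}

# Append this host's row to a shared inventory file (path is a placeholder).
with open("asset_inventory.csv", "a", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=record.keys())
    if f.tell() == 0:  # new file: write the header first
        writer.writeheader()
    writer.writerow(record)

print(record)
```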

Robert Crawford, systems programmer: Skills

Stuff in the data center gets updated all the time. This is the year to update your skills.

Updating your skills is always important. For most of us, the problem is finding the time. This year, make a little time in the work week for training.

The mainframe makes learning relatively easy for the autodidact. You can pick up valuable information by looking around in a dump of your favorite address space (advanced level: use an unfamiliar IPCS verb exit to find something you may have missed).

This is also an interesting time for Assembler programmers. IBM’s latest processor, the z13, comes with its own Principles of Operation manual. Take time to familiarize yourself with the new instructions.

If you want a broader mainframe update, there are other new features to explore. For instance, CICS and IMS have recent enhancements for mobile computing. IBM has a z/OS download page with some interesting tools, and user organizations such as SHARE are also helpful.

Carrie Higbie, global director of data center at The Siemon Company: Fabric networks

Expect major reconfigurations surrounding fabrics in data centers. Top-of-rack switching is overly expensive, wasteful and not conducive to virtualized environments.

Fabrics and Layer 2 switching are the next wave for data center networks. Three-tier switching methods using Layer 3 networking leave half of a company’s network investment waiting around for the primary half to fail. If something goes down, all traffic stops while the network is reconfigured. Further, virtualized servers do not work well over Layer 3 networks, and in many cases servers must share a switch for live VM migration to work. With fabrics, a layer or two of switching disappears.

Few data centers have sufficient power to install enough servers to populate a top-of-rack switch, leaving many stranded, unusable ports. Implementing fabrics over 10GBASE-T saves money on switch spend and on ongoing power and maintenance costs. Fabrics are also a stepping stone to software-defined networking (SDN).
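
The stranded-port arithmetic is worth checking against your own racks. A minimal sketch of that calculation; the power budget, server draw and port count below are illustrative assumptions, not vendor figures:

```python
# Illustrative assumptions: adjust to your own racks.
rack_power_budget_kw = 6.0   # usable power per rack
server_draw_kw = 0.5         # average draw per 1U server
tor_switch_ports = 48        # ports on a top-of-rack switch

# How many servers the power budget supports, and how many ports go unused.
servers_per_rack = int(rack_power_budget_kw / server_draw_kw)
stranded = max(0, tor_switch_ports - servers_per_rack)

print(f"Servers the power budget supports: {servers_per_rack}")
print(f"Stranded ToR ports: {stranded} of {tor_switch_ports} "
      f"({stranded / tor_switch_ports:.0%})")
```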

Examine the entire data center ecosystem to find the best tool at the right cost. Re-evaluate vendors on fabric and SDN implementation, as well as network virtualization strategies. The benefits of non-blocking, self-healing architectures are significant. Active/active designs add more: the primary and secondary sides both pass packets, effectively doubling bandwidth across the two networks.

Higher-speed backbones will come into play as companies upgrade to support 40 and 100 GbE technologies. This can be done via IEEE standard products or proprietary products; proprietary products carry the risk of tying your data center’s performance to a vendor’s roadmap.

Involve the entire data center team in design decisions for massive, costly upgrades. Designing around a single application that lasts two to three years is wasteful.

Sander van Vugt, independent trainer and consultant: Object storage

Data center teams should change the way they think about storage. To many data center teams, storage means storage-area network (SAN) — which means big money.

Cloud environments are different. Object storage lets data center admins scale out storage virtually without limit, using cheap disks instead of expensive Serial Attached SCSI drives. The Ceph object storage tool, for example, has recently matured into a serious option for enterprises.
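
Ceph exposes an S3-compatible API through its RADOS Gateway, so you can try it with a standard S3 client before committing. A minimal sketch, assuming a running RADOS Gateway; the endpoint URL, credentials and bucket name are placeholder assumptions:

```python
import boto3  # the standard AWS S3 SDK also works against Ceph's RADOS Gateway

# Endpoint and credentials are placeholders for your own cluster.
s3 = boto3.client(
    "s3",
    endpoint_url="http://rgw.example.com:7480",  # default RADOS Gateway port
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

# Create a bucket and store an object, exactly as you would against S3.
s3.create_bucket(Bucket="dc-archive")
s3.put_object(Bucket="dc-archive", Key="notes/readme.txt",
              Body=b"hello from the data center")

# List what landed in the bucket.
for obj in s3.list_objects(Bucket="dc-archive").get("Contents", []):
    print(obj["Key"], obj["Size"])
```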

Break away from vendor lock-in on storage tools. Proprietary SAN isn’t faster — it’s only more expensive. So in the next couple of months, data center administrators should start exploring object storage.

Robert McFarlane, principal at Shen Milsom & Wilke: Contain and monitor

Assuming you’ve done the basics — blanking panels, blocking bypass air paths and removing comatose hardware — go to containment to segregate hot and cool air. Even if you found it too difficult or expensive in the past, look again.

New products make working within the newest fire protection standard (NFPA 75) easier. Whether you use hot aisle or cold aisle, full or partial containment, the cooling improvements and energy savings are well worth it.

Monitoring is also an important data center upgrade to make this year. Consider implementing data center infrastructure management (DCIM), even if you can’t afford, justify or support a whole suite of tools. The best DCIM packages are modular, so start small and add as you’re ready. It takes time to learn from the information, particularly if more is reported than you can use. A good DCIM offering reduces huge volumes of data into easily usable information.
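
That data-reduction step is easy to prototype even before a DCIM purchase. A minimal sketch, assuming raw sensor samples have already been collected as (hour, sensor, reading) rows; the sensor names and values are made up for illustration:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical raw samples: (hour bucket, sensor name, reading in Celsius).
samples = [
    ("2016-06-01T14", "rack12-inlet-temp", 24.1),
    ("2016-06-01T14", "rack12-inlet-temp", 25.3),
    ("2016-06-01T15", "rack12-inlet-temp", 26.8),
]

# Reduce the raw stream to one min/mean/max row per sensor per hour.
buckets = defaultdict(list)
for hour, sensor, value in samples:
    buckets[(hour, sensor)].append(value)

for (hour, sensor), values in sorted(buckets.items()):
    print(f"{hour} {sensor}: min={min(values):.1f} "
          f"mean={mean(values):.1f} max={max(values):.1f}")
```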

If full DCIM is not viable, start with smart power distribution units (ePDUs or CDUs). Include a basic power monitoring package and start tracking your loads and phase balance. If you haven’t already, network them. You don’t need to waste 10-Gb ports on your central switch; cheap switches on a separate small network are fine for this upgrade. Try adding temperature and/or humidity monitoring to the strips.
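
Phase balance in particular is simple to track once the strips are networked. A minimal sketch, assuming per-phase current readings have already been polled from a three-phase PDU; the figures are illustrative, not real readings:

```python
# Hypothetical per-phase current readings (amps) from one three-phase PDU.
phase_amps = {"L1": 14.2, "L2": 9.8, "L3": 11.5}

avg = sum(phase_amps.values()) / len(phase_amps)

# A common rule of thumb flags imbalance when any phase deviates
# more than about 10% from the average load.
for phase, amps in phase_amps.items():
    deviation = (amps - avg) / avg * 100
    flag = "  <-- rebalance" if abs(deviation) > 10 else ""
    print(f"{phase}: {amps:.1f} A ({deviation:+.1f}% vs avg){flag}")
```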

Source: TechTarget, “Data center components that deserve an update this year,” by Sharon Zaharoff
