IT operations pros must adapt with new DevOps skills

The intersection of DevOps with IT operations is a two-way street — even as developers take on more operations work, IT Ops pros must think more like application programmers.

The technical changes that come with establishing a DevOps culture affect the IT infrastructure even if separate IT operations teams still manage day-to-day matters. New application development practices such as containerization, microservices and release automation, as well as new infrastructure management techniques that require programming skills, mean IT Ops pros must learn new tricks to keep that infrastructure running smoothly.

As DevOps evolves, greater collaboration between developers and IT Ops will be the order of the day, according to Nirmal Mehta, senior lead technologist for the strategic innovation group at Booz Allen Hamilton Inc., a consulting firm based in McLean, Va. Mehta works with government organizations to establish a DevOps culture.

“The roles are just going to be more about operators taking on more responsibility in terms of automating the deployment and their change processes,” Mehta said. “They’re going to transition into taking on more of the security roles, since infrastructure as code and configuration management have a huge impact on compliance.”

Instead of meetings that temporarily assemble representatives of separate IT functions — storage, networking, security, operations and application development — this evolving DevOps/IT Ops collaboration will be “a team where … they have access to the same information, and they’re responsible for the same user stories and other Agile workflow elements,” Mehta said.

Eventually, employment contracts will call for specific skills around microservices or service delivery and become less focused on filling disparate roles within the team structure, he said.

IT automation crucial among DevOps skills

In the meantime, programming skills will be relevant even to IT pros in a strictly operational role, as release automation makes infrastructure as code and configuration management de rigueur. This means learning tools such as Puppet, Chef and Ansible, which enforce infrastructure configuration in an automated way to keep up with rapid, automated application release cycles.
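What these tools share is a declarative, idempotent model: you describe the desired end state of a machine, and the tool converges the machine toward that state, changing only what has drifted. The following is a minimal Python sketch of that converge loop; it illustrates the concept, not the API of any of the tools named above, and the file path and settings are invented for the example:

```python
import os
from pathlib import Path

# Desired state declared as data -- the idea Puppet manifests, Chef
# cookbooks and Ansible playbooks each express in their own syntax.
DESIRED_FILES = {
    "/tmp/demo/app.conf": {"content": "max_connections=100\n", "mode": 0o644},
}

def converge(desired):
    """Bring each file to its declared state; change nothing that already complies."""
    for path_str, spec in desired.items():
        path = Path(path_str)
        path.parent.mkdir(parents=True, exist_ok=True)
        # Write only on drift -- this is what makes repeated runs safe (idempotent).
        if not path.exists() or path.read_text() != spec["content"]:
            path.write_text(spec["content"])
            print(f"corrected content of {path}")
        if path.stat().st_mode & 0o777 != spec["mode"]:
            os.chmod(path, spec["mode"])
            print(f"corrected mode of {path}")

if __name__ == "__main__":
    converge(DESIRED_FILES)  # safe to run as often as the release cycle demands
```

Real configuration management tools add resource types, ordering and reporting on top of this loop, but the run-until-compliant model is the same.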

Some operations people are resistant to learning the inner workings of the application, and that becomes a problem, according to Dan MacDonald, architect and principal technical lead for a New York City agency whose developers are transitioning to Agile development methods.

“Now, with the pace of development, you have to get a lot more involved in the early stages because it goes so fast,” he said.

Along with infrastructure as code, technologies used by developers, such as containerization, will mean less variation in server configurations and instance types under IT operations management, and will de-emphasize skills such as ad hoc scripting and manual configuration of servers.

“[As an IT Ops person] I don’t have to SSH [Secure Shell] onto some box and tweak it for this special snowflake thing, because there’s a cookbook that actually handles that for me,” said Caedman Oakley, DevOps evangelist for Ooyala Inc., a video processing service headquartered in Mountain View, Calif.

IT operations pros will have to look higher up the stack for opportunities to add value to an organization.

“It’s like being a violinist or a piano player, and then transitioning into becoming more of a conductor,” Mehta said. “You’re overseeing a larger amount of responsibilities and trusting the automation to do most of that workload that you used to do.”

But while it’s beloved by developers, container technology is still tricky for IT Ops pros to deploy to production, and requires a new set of skills.

“Developers are using containers, but Ops is deploying code to VMs, and so creating parity between them is tricky,” said Chris Riley, DevOps analyst at Fixate IO. “If you’re transitioning from monoliths to microservices, you’re almost forced to start over … and managing that change is really hard.”

Distributed applications based on microservices will also put more of an emphasis on networking skills, MacDonald said, because microservices rely far more heavily than monolithic applications do on coordination between different hosts.

“Let’s say you want to deploy to both Amazon and Google at the same time,” he said. “Not many developers really get into the finer points of those networks — that’s where you get the benefit of operations.”

Source: searchdatacenter.techtarget.com - IT operations pros must adapt with new DevOps skills

What to know before migrating applications to the cloud

Migrating applications to the cloud isn’t done with the flip of a switch. Organizations must carefully define an application migration strategy, choose the right migration tool and, in some cases, even refactor their applications to take full advantage of the cloud.

At the recent Modern Infrastructure Summit in Chicago, managing editor Adam Hughes spoke with Robert Green, principal consultant and cloud strategist at Enfinitum Inc., a consulting firm based in San Antonio, about the do’s and don’ts of migrating applications to the cloud.

SearchCloudComputing: What is the difference between a lift-and-shift migration approach vs. refactoring an application for the cloud?

Robert Green: Your traditional lift-and-shift model is [when organizations say], ‘I’m going to take my application as it stands today with [its] operating system, as well as the infrastructure of the application, and move that over to the cloud.’ It’s typically done just like you’re moving a VM to an instance on the cloud. It’s kind of holistic.

There’s benefit to refactoring, and there’s detriment. I typically go back to [the question], ‘What is your customer experience and business going to look like?’ Understanding how to adopt DevOps [is important] as you’re going through the refactoring, [since] you’ll always have legacy applications. The benefit to refactoring legacy apps really comes in the transformation of your business into a DevOps [and] agile technology.

SearchCloudComputing: What are the biggest challenges when migrating applications to the cloud?

Green: When moving an application to the cloud, you need a full understanding of how an application works, and how it’s going to perform in the cloud. Most people haven’t taken the proper time to understand the performance metrics. You’ve got to know [and] you need some really good detail about how your application functions [and] how it scales. Knowing that is half the battle, and if you can figure that out, you can [provision] your cloud environment to match the performance metrics you need.

SearchCloudComputing: How do availability requirements like disaster recovery and scaling fit in here?

Green: The thing I always talk about is understanding which application is bound [and] understanding which application can horizontally scale, because that’s key for [the] cloud. When you don’t have horizontal scalability, you have diminishing returns. You vastly limit the amount of benefit your cloud migration has to offer.

When you talk about DR, there’s that adage that when you move your instances to the cloud, they become like cattle – [if] one gets sick, you put it down. Cloud instances should be stateless; when one gets ‘sick,’ you delete it, you kill it, you purge it and you create another one. That has to be blueprinted [and] it has to be automated. That’s really the key to hitting that DR statelessness.
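In AWS terms, that blueprint-and-automate pattern might look something like the sketch below. It assumes the instances run in an Auto Scaling group, so terminating a 'sick' instance causes a fresh, stateless replacement to be launched from the same blueprint; the tag names and region are hypothetical:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

def cull_sick_instances(tag_key="role", tag_value="web"):
    """Terminate any impaired instance; the Auto Scaling group that owns it
    is expected to launch a stateless replacement from the same blueprint."""
    reservations = ec2.describe_instances(
        Filters=[{"Name": f"tag:{tag_key}", "Values": [tag_value]},
                 {"Name": "instance-state-name", "Values": ["running"]}]
    )["Reservations"]
    instance_ids = [i["InstanceId"] for r in reservations for i in r["Instances"]]
    if not instance_ids:
        return
    statuses = ec2.describe_instance_status(InstanceIds=instance_ids)
    for status in statuses["InstanceStatuses"]:
        if status["InstanceStatus"]["Status"] == "impaired":
            # Cattle, not pets: no SSH session, no manual repair.
            ec2.terminate_instances(InstanceIds=[status["InstanceId"]])
            print(f"put down impaired instance {status['InstanceId']}")
```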

SearchCloudComputing: Are there any good tools to use when migrating applications to the cloud?

Green: Your first important migration tool is your people — understanding and changing the way they think about their servers. Don’t build operating systems and VMs, build blueprints. Automate how you install those things. There are other applications out there, like CloudVelocity, where they take a picture of your existing instances and move those to things like Amazon. A lot of the providers that offer cloud, the back end of it is a XenDesktop or a XenServer, [or] a hypervisor that’s well known, so you can work your way into that.
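A 'blueprint' in Green's sense can be as simple as data plus a bootstrap script, rather than a hand-configured golden VM. A hypothetical sketch, where the AMI ID, instance type and installed packages are all placeholders:

```python
import boto3

# The blueprint: everything needed to recreate the server, expressed as data.
WEB_BLUEPRINT = {
    "ImageId": "ami-0123456789abcdef0",  # placeholder base image
    "InstanceType": "t3.small",
    "UserData": "#!/bin/bash\nset -e\nyum install -y nginx\nsystemctl enable --now nginx\n",
}

def launch_from_blueprint(blueprint, count=1):
    """Stamp out identical instances from the blueprint; nothing is hand-built."""
    ec2 = boto3.client("ec2", region_name="us-east-1")
    resp = ec2.run_instances(MinCount=count, MaxCount=count, **blueprint)
    return [i["InstanceId"] for i in resp["Instances"]]
```

Because every instance comes from the same blueprint, killing and recreating one, as described above, is routine rather than an emergency.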

SearchCloudComputing: How can an enterprise mitigate cost during a migration?

Green: If you can take some of those applications, and refactor them while moving your development to DevOps, using that [agile] approach, you gain massive benefits. The refactoring time spent is actually mitigated by taking that as a training opportunity to grow your organization to do things much quicker.

Source: searchcloudcomputing.techtarget.com - What to know before migrating applications to the cloud

DevOps teams offer relief for cloud migration headaches

As their role is being established, DevOps workers learn more about cloud computing than almost any other IT staff member. DevOps teams know how to configure applications for newly developed software, and how to interface with legacy systems. Naturally, this makes them champions at facilitating the migration of legacy software to the cloud.

DevOps staffers know the ins and outs of traditional file systems, distributed file systems and object stores, such as Amazon Simple Storage Service. They also know how to handle large-scale analytics and non-relational databases. They can help you migrate existing application logic to services that scale and run entirely in the cloud.

Organizations can simplify app migration from legacy hardware to the cloud by running all the software as-is on VMs in the cloud. But a better approach is to actually transition all of the logic, usually one small piece at a time, over to Web-scale technologies. DevOps teams can help handle load balancing and fault tolerance with Domain Name System (DNS) latency-based routing and health checks.
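On Amazon's Route 53, for instance, that combination is expressed as latency-based record sets tied to health checks, so queries are answered by the lowest-latency endpoint that is still healthy. A sketch under assumed values; the hosted zone ID, domain name and IP address are hypothetical:

```python
import boto3

route53 = boto3.client("route53")

def add_latency_endpoint(zone_id, name, region, ip):
    """Register one regional endpoint behind a health check; Route 53 then
    routes each user to the lowest-latency healthy endpoint."""
    hc = route53.create_health_check(
        CallerReference=f"{name}-{region}",
        HealthCheckConfig={"IPAddress": ip, "Port": 443, "Type": "HTTPS",
                           "ResourcePath": "/health", "RequestInterval": 30,
                           "FailureThreshold": 3},
    )
    route53.change_resource_record_sets(
        HostedZoneId=zone_id,
        ChangeBatch={"Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": name, "Type": "A", "TTL": 60,
                "SetIdentifier": region,  # one record set per region
                "Region": region,         # enables latency-based routing
                "ResourceRecords": [{"Value": ip}],
                "HealthCheckId": hc["HealthCheck"]["Id"],
            },
        }]},
    )

# e.g. add_latency_endpoint("Z0EXAMPLE", "app.example.com.", "us-east-1", "203.0.113.10")
```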

In addition, DevOps teams are often required to produce analytics. They have their hands in all pies, and can access all underlying data, including traffic data and log analytics. This sort of data can be incredibly useful in measuring application performance and locating bottlenecks. DevOps staffers are able to help manage deployments and track bugs for each deployment released. They can help determine speed and performance changes per release, as well.

Tools to round out DevOps teams

Even the most highly functional DevOps teams need third-party tools to manage distributed environments such as cloud. And certain tools are specifically useful for such environments.

Utilities such as Flowdock or HipChat can help members of a development group keep in touch with each other and with DevOps staff. A ticketing service, such as Asana or Basecamp, can help track software development tasks, as well as what needs to go out in which application release.

Customer-focused support portals, such as Freshdesk, Zendesk or Get Satisfaction, allow users to communicate requests directly to management or software development teams. That feedback can trigger new or improved features and helps make sure customers’ needs are being met. A DevOps team can help set up these services and get teams acquainted with the technology.

The people who make it happen

If you want to make sure someone writes quality code that’s been well tested, get them out of bed when something in that code breaks. A DevOps team doesn’t want to be called in the middle of the night, so they’re going to make sure they have all the tools in place to guarantee automation for as many tasks as possible.

If a server dies, immediately kill and replace it, keeping any relevant logs for a post-mortem if necessary. It’s unwise to try to fix servers anymore; organizations can easily replace them with a simple API call that happens automatically when a self-healing system detects a problem. Anomaly detection could alert you ahead of time to potential risk factors or leaks in a system.
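Anomaly detection does not have to be exotic to be useful; even a rolling z-score over a host metric can surface trouble before a hard failure. A minimal, self-contained sketch, where the window size and threshold are arbitrary choices:

```python
from collections import deque
from statistics import mean, stdev

class MetricWatcher:
    """Flag a metric sample that deviates sharply from its recent history."""

    def __init__(self, window=60, threshold=3.0):
        self.history = deque(maxlen=window)
        self.threshold = threshold

    def observe(self, value):
        """Return True if `value` is anomalous relative to the rolling window."""
        anomalous = False
        if len(self.history) >= 10 and stdev(self.history) > 0:
            z = abs(value - mean(self.history)) / stdev(self.history)
            anomalous = z > self.threshold
        self.history.append(value)
        return anomalous

watcher = MetricWatcher()
for latency_ms in [12, 11, 13, 12, 14, 11, 12, 13, 12, 11, 95]:
    if watcher.observe(latency_ms):
        print(f"anomaly: {latency_ms} ms -- schedule a replacement, keep the logs")
```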

Members of a DevOps team need to be the foremost experts on cloud computing and configuring services in the cloud. They need to understand the benefits of nonrelational databases, and how to scale relational databases effectively, if needed. They should help developers succeed by showing them which parts of their applications are problematic, and determine the type of virtual hardware on which to run each piece. They’ll help with architecture diagrams to ensure your system is split to the right point — enough to make sure you’re fault tolerant, but not so much that it becomes slow. They’ll be able to identify the algorithms that scale well — and those that don’t — and determine if something scales appropriately.

Source: searchcloudcomputing.techtarget.com - DevOps teams offer relief for cloud migration headaches

Implementing DevOps presents these three IT hurdles

DevOps is emerging as a more efficient way to develop and deploy cloud applications — but it’s still in its early days. Implementing DevOps removes the barrier between development and operational teams, which reduces enterprise application backlogs and accelerates software delivery. But despite its benefits, DevOps is easier said than done.

Enterprises implementing DevOps processes and tools often discover too late that they have made mistakes — many of which require them to stop, back up and start again.

So, what are enterprises doing wrong with DevOps? While mistakes vary from one organization to another, there are some common patterns when it comes to DevOps failure.

Here are three common mistakes organizations make when implementing DevOps.

Putting technology before people

The core purpose of implementing DevOps is to remove the barrier that exists between developers and IT operations staff. One common mistake enterprises make when implementing DevOps is focusing too early and too often on technology rather than people and processes. This can lead to organizations choosing DevOps tools they may need to replace later. Neglecting to change IT processes and train your staff is fatal. Invest in training programs that focus on the use of the technology, and how to adopt continuous development, testing, integration, deployment and operations. While your DevOps tools are likely to change, your people and processes most likely won’t.

Overlooking security and governance

Another common mistake when implementing DevOps is to fail to consider security and governance as being systemic to your applications. You can no longer separate security from the application. Include security in every process, including continuous testing and continuous deployment. The days of building walls around applications and data are over. Governance needs to be systemic to cloud application development and built into every step of DevOps processes, including policies that place limits on the use of services or APIs, as well as service discovery and service dependencies.
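One concrete way to make governance systemic is to encode such policies as an automated check that runs in the pipeline alongside the tests, rather than as a manual review. A sketch of the idea; the manifest format and the allowlist below are invented for illustration:

```python
# Policy: only approved services and APIs may appear in a deployment manifest.
ALLOWED_SERVICES = {"postgres", "redis", "internal-auth-api"}

def check_manifest(manifest):
    """Return a list of policy violations; an empty list means the deploy may proceed."""
    violations = []
    for component, spec in manifest.get("components", {}).items():
        for dep in spec.get("depends_on", []):
            if dep not in ALLOWED_SERVICES:
                violations.append(f"{component}: service '{dep}' is not approved")
    return violations

manifest = {
    "components": {
        "checkout": {"depends_on": ["postgres", "some-unvetted-saas"]},
        "catalog": {"depends_on": ["redis"]},
    }
}

violations = check_manifest(manifest)
if violations:
    raise SystemExit("policy check failed:\n" + "\n".join(violations))
```

Wired into continuous testing and deployment as a required step, a check like this stops an unapproved dependency the moment it appears, not after it ships.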

Busting common DevOps myths

With DevOps, organizations can more quickly develop and deploy enterprise applications. A large part of the process calls for an increase in communication between developers and IT operations teams. However, DevOps is still relatively new, so implementing it is no easy task. In this podcast, two experts, Stephen Hendrick and Chris Riley, bust common DevOps myths and share advice for alleviating the usual challenges.

Being unwilling to change

Implementing DevOps means always questioning the way you develop, test, deploy and operate applications. The process, technology and tools need to change, and organizations should gather metrics to determine if the changes made actually increase productivity. Do not set it and forget it; DevOps needs to change and evolve to keep up with emerging ideas and technology. Always design your DevOps process with change in mind.

Source: searchcloudcomputing.techtarget.com - Implementing DevOps presents these three IT hurdles

DevOps principles increase enterprise IT efficiency

A software developer with a proven track record at HP and Macys.com offers tips to make DevOps principles work for the enterprise.

Gary Gruver, now president of Practical Large Scale Agile LLC, was ahead of his time when, nearly a decade ago, he used DevOps principles to alleviate congestion in the software development process at one of the world’s largest printer manufacturers.

The practice now known as DevOps had yet to emerge in 2007; at the time, it was better known as agile software development.

For 20 years, the printer business at HP had been held back by its firmware; the company could not add a new product, feature or capability without firmware updates. In 2007, Gruver took over HP’s software development.

He described the lessons from that journey, and his later success at Macys.com — the website for Macy’s department stores — to a crowd of 30 IT pros this week, including representatives from the financial, technology and healthcare verticals, such as State Street Corp., Citizens Bank N.A., EMC and Blue Cross Blue Shield of Massachusetts.

“[Firmware had] been the bottleneck for the LaserJet business for two decades,” he said. “HP had been going around the world trying to spend its way out of the problem.”

By 2008, as the recession set in, he was charged with cutting his software development budget from $100 million down to $55 million.

“I was looking for anything and everything I could find to get more productive,” he said.

Three years later, he had “completely re-architected” the development process and eliminated the bottleneck that had been created by firmware. He was also able to free up time for more innovation.

“Most of the organizations I work with look more like the organization I worked with before this transformation than after the transformation,” he said. “For the longest time I didn’t know I was doing agile, I thought I was doing common sense.”

Scrum does not equal agile

One of the major factors that will make a difference in productivity in a large organization — at one time, Gruver oversaw 800 developers — is to apply agile principles at the executive level.

Most organizations focus on how the teams work: whether individual projects are doing stand-ups, scrums and other “agile rituals,” rather than on agile principles such as releasing to customers on an ongoing basis.

If a large organization focuses on scrums, it will likely lose focus on agile principles.

“Scrum does not equal agile,” he said.

The classic implementation of agility in large organizations is to continue to plan 18 months into the future, as teams perform rituals such as stand-ups — but releases are not ongoing.

“The reason DevOps has come up as a term at all [is] because agile forgot this basic principle [of ongoing releases] as it scaled into the enterprise,” he said.

Focus on business needs

Knowing the business objectives of your organization will help create a vision and prioritize what the company will go after during its move toward DevOps. For example, at HP, Gruver wanted to eliminate the firmware bottleneck and create capacity for innovation.

“The journey will take a while,” Gruver said.

Adoption of DevOps principles needs to be coordinated across all levels, and all teams need to agree to use the same tools, for example.

After identifying the core business needs, the move toward DevOps should include a process to prioritize the backlog. Above all, don’t forget to continue releasing to customers on an ongoing basis and get consistent feedback, he said.

“If you are working on the most important things first, releasing it to customers on an ongoing basis and you have a continuous learning process that is being led by the organization, it doesn’t have to be any harder than that,” he said.

Some event attendees plan to put these tips to use to get started with DevOps.

“We don’t really have it in my organization; I am more interested in trying to get efficiencies on our development lifecycle and get our supported applications out faster,” said Chris Flynn, a senior applications developer at Philips Lifeline, based in Framingham, Mass. He wants to automate releases and testing, because right now he sees a lot of time-consuming manual operations that are often not smooth or easy.

He hopes to start by implementing continuous builds.

Those changes start with executive buy-in. “If you can get the executives to see the importance and benefit of it, they will give you the time to get it in there,” Flynn said.

Ramesh Subramaniam, an engineering team lead at Harvard Pilgrim Health Care Inc., a health insurance provider headquartered in Boston, said the monthly releases his organization delivers are a labor-intensive process involving about 40 people. It’s also prone to human error.

“By using continuous delivery, I’m sure we can eliminate some errors,” he said.

Source: techtarget.com - DevOps principles increase enterprise IT efficiency, by Robert Gates

Time for DevOps to get out of the weeds

Software and service providers face a battle to deliver new products, features and capabilities more quickly than ever. The overall success of the business is linked to its ability to meet rising customer expectations.

In response, development teams must deliver high-quality software while reducing release cycle times. Forrester Research recently found a strong correlation between release cycle times and business satisfaction: teams with faster cycle times enjoy greater business satisfaction, while teams with slower cycle times see less. Only one-third of organisations consistently deliver new software to end-users within the ‘gold standard’ one-to-three-week cycle.

Pressure to shorten release cycle times and raise quality is emphasising the need for change-centric delivery.

Agile development has increased the rate and quality of output from planning and development teams. They now operate at high efficiency and can accommodate even faster release cycles. The bottleneck is increasingly a lack of efficient delivery.

To facilitate delivery streams capable of ingesting changes more quickly, delivery teams must reimagine core principles. DevOps has shouldered the burden of faster release cycles and has focused on automating delivery workflows through tools such as Git, Jenkins, Chef and Selenium, to name but a few.

Faster cycle times

These tools have improved the delivery process, and DevOps has assembled cross-functional teams with the authority to implement delivery-related activities, which also results in faster cycle times.

So why is delivery still the constraint? Because focusing on bits of automation and cross-functional teams ignores the main purpose of the delivery stream – to reliably build, validate and deploy change. DevOps tooling has been completely change-agnostic.

Ask Jenkins or Chef what has changed since yesterday and they cannot tell you. Some tools are change-aware within their own domain, but they are unaware of changes elsewhere in the delivery stream.

It is difficult to track changes while ‘in flight’. As change flows through the delivery stream, it becomes aggregated or ‘rebundled’ many times between build and deployment. CI servers process many tiny changesets, while QA deals with larger bodies of change. Each of these delivery pipelines continuously processes its own distinct changesets, so the movement of any single change must be accounted for as it flows through various pipelines, artefacts and environments to reach its final destination.

Change-centric delivery frameworks

A change-centric delivery (CCD) framework organises the entire delivery process to continuously deliver a steady, never-ending stream of change to the consumers of digital services and products – end-users.

We can identify seven attributes of a CCD framework that can materially reduce cycle times:

  • Clearly identify every source of change
  • Track change through the delivery pipeline
  • Correlate every change back to a work item/business driver
  • Implement change navigation and routing through the delivery stream
  • Implement change bundling and rebundling during the delivery stream flow
  • Relate binary artefacts back to changesets
  • Track changes through environments from development to production, and implement application/service assembly from loosely coupled components
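Several of these attributes reduce to bookkeeping that today's change-agnostic tools do not do for you: recording, at every rebundling point, which changesets an artefact contains. A minimal sketch of that record-keeping, with hypothetical stage and artefact names:

```python
from collections import defaultdict

class ChangeTracker:
    """Record which changesets each artefact contains, stage by stage,
    so any single change can be located as it flows toward production."""

    def __init__(self):
        self.bundles = defaultdict(dict)  # stage -> artefact -> set of changesets

    def record(self, stage, artefact, changesets):
        self.bundles[stage].setdefault(artefact, set()).update(changesets)

    def locate(self, changeset):
        """Every (stage, artefact) pair currently carrying this change."""
        return [(stage, artefact)
                for stage, artefacts in self.bundles.items()
                for artefact, changes in artefacts.items()
                if changeset in changes]

tracker = ChangeTracker()
tracker.record("ci-build", "app-1.4.2.jar", {"c101", "c102"})          # tiny CI changeset
tracker.record("qa", "release-2025-06.tar", {"c099", "c101", "c102"})  # larger QA bundle
print(tracker.locate("c101"))  # shows both the CI artefact and the QA bundle
```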

The bottom line

Automation tools alone will not produce the positive impact on the delivery stream that Agile has delivered for planning and development. But as more digital service providers adopt CCD frameworks, cycle times will continue to shrink, allowing new features and capabilities to reach end-users much more quickly, ultimately bringing the level of delivery efficiency in line with planning and development.

Source: www.computerweekly.com - Time for DevOps to get out of the weeds

Gartner Highlights Five Key Steps to Delivering an Agile I&O Culture

By 2018, 90 Percent of I&O Organizations Attempting to Use DevOps Without Specifically Addressing Their Cultural Foundations Will Fail

Infrastructure and operations (I&O) leaders planning a bimodal IT strategy will miss out on the benefits of DevOps support for agile practices unless they transform their I&O culture, according to Gartner, Inc.

Gartner said that the implementation of a bimodal IT strategy requires careful planning and execution. Analysts predict that, by 2018, three quarters of enterprise IT organizations will have tried to create a bimodal capability, but that less than 50 percent of those will reap the benefits of improved agility and risk management.

“I&O leaders are under pressure to support customers who want to go faster, so they are utilizing agile development,” said Ian Head, research director at Gartner. “However, movement to agile will not, and should not, be a wholesale immediate change. Instead, it should first be implemented in areas where there is a very real business need for speed, and then carefully rolled out — taking the culture of the organization into account.”

Gartner has developed the strategy known as “bimodal IT,” which refers to having two modes of IT. Mode 1 is traditional, emphasizing scalability, efficiency, safety and accuracy, while Mode 2 is nonsequential, emphasizing agility and speed.

“Changing the behaviors and culture are fundamental to the success of a bimodal IT approach. We estimate that, by 2018, 90 percent of I&O organizations attempting to use DevOps without specifically addressing their cultural foundations will fail,” said Mr. Head.

“We do not advocate wholesale cultural change in a single organizationwide program. Instead, I&O leaders should focus their efforts on an initial, small Mode 2 team, establish the values and behaviors needed, and take incremental efforts to recognize and reinforce desired outcomes prior to scaling.”

The following five-step approach will help I&O leaders achieve an agile I&O culture:

  • Identify Your Mode 2 Behavioral Gap
  • Work With Key Stakeholders to Gain Consensus on the Approach
  • Start With a Small, Focused Pilot
  • Deploy Behaviors Incrementally to Successive Groups With Feedback Loops
  • Pursue Continual Improvement

Read more at: Gartner Highlights Five Key Steps to Delivering an Agile I&O Culture

Continuous Delivery (CD)

Continuous delivery (CD) is an extension of the concept of continuous integration (CI).

Whereas CI deals with the build/test part of the development cycle for each version, CD focuses on what happens with a committed change after that point. With continuous delivery, any commit that passes the automated tests can be considered a valid candidate for release.

Continuous delivery has a number of benefits. With this method, code is delivered in a steady stream to user acceptance testing (UAT) or the staging environment for evaluation or peer review. In the staging environment, the code can be tested for all aspects of functionality, including business rule logic (something unit tests can’t do reliably). Because CD is ongoing and testing occurs quickly, developers can often receive feedback and start working on fixes before they have moved on to another aspect of the project. This can increase productivity by minimizing the effort required to refocus on the initial task. If an iterative process is becoming unwieldy due to increasing project complexity, CD offers developers a way to get back to doing smaller, more frequent releases that are more reliable, predictable and manageable.
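The mechanical core of CD is small: every commit that passes the automated tests is promoted to release candidate, automatically. A hedged sketch of such a gate, assuming a git repository and a pytest suite; the tag naming scheme is an arbitrary choice:

```python
import subprocess

def gate_commit():
    """Run the test suite; if it passes, tag HEAD as a release candidate."""
    sha = subprocess.run(["git", "rev-parse", "--short", "HEAD"],
                         capture_output=True, text=True, check=True).stdout.strip()
    tests = subprocess.run(["pytest", "-q"])
    if tests.returncode == 0:
        # A passing commit is, by definition, a valid candidate for release.
        subprocess.run(["git", "tag", f"release-candidate-{sha}"], check=True)
        print(f"{sha} tagged as a release candidate")
    else:
        print(f"{sha} failed tests and is not a candidate")

if __name__ == "__main__":
    gate_commit()
```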

Source: TechTarget - continuous delivery (CD), by Margaret Rouse

Infrastructure as code and other prerequisites for Continuous Delivery

Are you ready for continuous delivery? This article outlines three prerequisites: automated testing, infrastructure as code and a staging environment.

Success with continuous delivery (CD) depends on four prerequisites. In the first article in this two-part series, experts discussed the importance of a well-established iterative development process. In this article, those experts focus on three practices: automated testing, infrastructure as code and a staging environment.

Software test automation isn’t an “all or nothing” affair. Most teams automate some steps and rely on manual testing for others. But continuous delivery demands a deeper commitment to automation. Why? Because, by definition, testing must take place continually, and the only way to manage it cost-effectively is through automation, said Tom Poppendieck, co-author of The Lean Mindset. As each increment of code is released, tests kick off automatically, making sure not only that new code works, but that it integrates properly with the larger codebase. “When software testing is automated at multiple levels, the risk and cost of making changes becomes dramatically cheaper,” Tom Poppendieck said.

The more you automate the test process, the easier it is to reach the goal of delivering software continuously, said Carl Caum, a prototype engineer at Puppet Labs, which sells configuration management software. He is not suggesting that software pros eliminate all manual tests, but said they should move steadily toward deeper and deeper levels of automation. “You start with acceptance testing. What defines success in our software? Write down your acceptance criteria first, and then automate that testing process.”

The key to success is automating as you go. Often that means conducting manual tests; then, where possible, turning manual steps into automated ones. He offered an example: “You do some manual, exploratory testing. When you come across a workflow that requires the same four steps every time, automate it,” he said. “The first time it’s exploratory, second time it’s a pattern, and the third time it’s automated.”
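Caum's 'third time it's automated' rule maps naturally onto a test framework. Suppose the repeated four-step manual workflow is registering a user, logging in, updating a profile and verifying the change; a pytest version might look like the sketch below, where UserService is a hypothetical stand-in for the real system under test:

```python
class UserService:
    """Hypothetical stand-in for the application under test."""

    def __init__(self):
        self.users, self.sessions = {}, set()

    def register(self, name, password):
        self.users[name] = {"password": password, "bio": ""}

    def login(self, name, password):
        if self.users.get(name, {}).get("password") != password:
            raise PermissionError("bad credentials")
        self.sessions.add(name)

    def update_bio(self, name, bio):
        assert name in self.sessions, "not logged in"
        self.users[name]["bio"] = bio

def test_profile_update_workflow():
    # The same four steps a tester once ran by hand, now run on every commit.
    svc = UserService()
    svc.register("ada", "s3cret")          # step 1: create the account
    svc.login("ada", "s3cret")             # step 2: authenticate
    svc.update_bio("ada", "new bio text")  # step 3: perform the change
    assert svc.users["ada"]["bio"] == "new bio text"  # step 4: verify
```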

Continuous delivery requires automating most test procedures, said Stephen Forte, chief strategy officer for mobile tool maker Telerik. “But at the end of the day, for public-facing applications, you want to have a human being look at the screens. Does the layout work? Are there any spelling mistakes?” These are things automated testing can’t catch, he said.

Read more at: TechTarget - Infrastructure as code and other prerequisites for CD, by Jennifer Lent