The case for continuous application management

Applications are the single most common reason why enterprise-level migrations fail. The lack of continuous application management causes significant disruption to projects and to business as usual (BAU). All too often, when people ask who owns an application, they are met with blank faces. Managing an application portfolio is a full-time job, and it offers significant advantages, especially given the way IT is evolving.

Each migration project involves an epic journey of discovery when it comes to applications, and enterprises will have to consider all of the following:

  • Which applications are installed on each machine and which are actually in use
  • Who uses each application
  • Whether licences can be claimed back
  • The location of the installation media / source code
  • Whether it is possible to retire some applications, or consolidate those with the same functionality into one
  • Who owns each application

By reviewing application installations and usage only every few years (the forklift approach), enterprises create revenue-draining gaps in the critical information needed to make effective IT decisions. Employee turnover and a lack of focus on application management can turn the above investigations into an endless nightmare, seriously impacting productivity and significantly delaying projects. Adding to the difficulty, endpoints are no longer static: users can connect from any of a company’s offices, remotely via a virtual private network, or from virtual machines, mobile devices and tablets.

And this only covers physically installed applications. A further layer of difficulty is web applications. Only a few tools can provide the metrics enterprises need to tackle this increasingly prevalent usage scenario. A web page being open doesn’t mean it is in use; a combination of ‘open’ plus ‘in focus’ is the key metric in this case.
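The ‘open plus focus’ metric can be sketched with a few lines of code. The event names and tuple layout below are hypothetical (real telemetry agents each have their own schema); the point is that only focus-to-blur intervals count as genuine use, while time a page merely sits open contributes nothing:

```python
from collections import defaultdict

def focused_seconds(events):
    """Aggregate per-application 'in focus' time from a stream of
    (timestamp_seconds, app, event) tuples, where event is 'focus'
    or 'blur'. A page merely being open contributes nothing; only
    focus-to-blur intervals count as real use."""
    focus_start = {}                # app -> timestamp it gained focus
    totals = defaultdict(float)
    for ts, app, event in sorted(events):
        if event == "focus":
            focus_start[app] = ts
        elif event == "blur" and app in focus_start:
            totals[app] += ts - focus_start.pop(app)
    return dict(totals)
```

With events like `[(0, "crm", "focus"), (300, "crm", "blur")]`, the CRM app is credited with 300 seconds of active use, however long the tab stays open afterwards.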

However, continuous application management offers many benefits. Not only is it cheaper in the long run, but it also makes migrations and the day-to-day management of an IT infrastructure much simpler. We’ve seen an explosion in the use of DevOps, which is essentially an extended effort to shorten the refresh cycle of a specific application. That doesn’t remove the need to monitor the application and gather vital telemetry about it, especially in the critical days following a release or update, when performance has the greatest impact on user productivity. Telemetry is now available for almost all applications, but it is often not leveraged. Monitoring how people interact with an application is becoming increasingly necessary for understanding current gains, and for anticipating future return on investment and areas for improvement.

We need to remember why organisations migrate platforms, operating systems, phones, applications, or anything else: they migrate to get a better experience. That better experience may be improved productivity, less back-end maintenance work, better security, integration with other systems, or automation, and it is almost always tied to increased profitability.

We are entering the ‘age of the quantified end user,’ where companies measure how people use applications at different times of the day, how often, and via which device and location. Companies want to know which applications are open on a user’s machine, which one the user is focusing on and how responsive the application is, including what dependencies it has, the latency between the endpoint and the application servers, and much more. This phenomenon is already happening in other industries. For example, the UK government introduced a smart metering project through which homes can measure real-time energy consumption. People everywhere are using watches and applications to monitor their lives: how many hours they sleep, how many calories they consume and burn, how many steps they take and more.

All this monitoring has one goal – to improve our lives. People may use gadgets to help them change personal habits and ensure they sleep or exercise better. Why? It is much easier to change things when you are measuring them. Similarly, in application management, companies need to understand concurrent utilisation, with metrics that identify which day and time are best suited to fulfilling change requests, minimising downtime while patching and upgrading, or managing change control and capacity planning.
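Finding a low-impact change window from concurrency telemetry is a simple aggregation. This sketch assumes you can export readings as (hour-of-day, concurrent sessions) pairs, which is an illustrative format rather than any particular tool’s output:

```python
from collections import defaultdict

def quietest_hour(samples):
    """samples: iterable of (hour_of_day, concurrent_sessions)
    telemetry readings, e.g. collected every few minutes over
    several weeks. Returns the hour with the lowest average
    concurrency, a candidate maintenance window."""
    sums = defaultdict(int)
    counts = defaultdict(int)
    for hour, sessions in samples:
        sums[hour] += sessions
        counts[hour] += 1
    # Pick the hour whose mean concurrent-session count is smallest.
    return min(sums, key=lambda h: sums[h] / counts[h])
```

In practice you would weight this by day of week and exclude known batch windows, but even this crude average turns ‘when should we patch?’ from a guess into a measurement.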

The adoption of solutions focused on improving this broader and more quantified user experience is the natural progression in helping IT departments to deliver value-added services for all lines of business, managed and delivered through service desks, service managers, and application owners.

To obtain a solid understanding of application utilisation rates and user behaviour, enterprises should consider using a tool to improve the end user experience and productivity with regard to the adoption and consumption of infrastructure investments. I have used Lakeside Software’s SysTrack for years in forklift migrations, and afterwards to transform application management into a BAU process. Having a tool like this provides insight into a huge number of vital metrics. For example, I can see how many systems have a specific application open and in focus (the active window). This helps me understand when that application is really being used, and how intensively.

One of the key components of customer service is the motto ‘Always come back to the customer before the customer comes back to you’. Proactive management of applications will provide enterprises with the tools they need to deliver a better experience for their users before they start complaining about the current one. Isn’t that wonderful?

The last piece of the puzzle is communicating the value of your work to your boss. With solid data about application utilisation, you can measure results and keep stakeholders focused on improved user experience, backed by empirical evidence. You can’t improve what you can’t measure; with the right tools in place, enterprises can deliver on migration projects and on the post-project objective of keeping users healthy and productive as changes continue to occur. Companies can also save money on licensing, deliver a solid service level by knowing when an application can be maintained with minimal impact, and keep the infrastructure appropriate for each application with solid metrics for capacity planning. These tools also help avoid expensive ‘forklift’ exercises during migrations.

In summary, my advice to my fellow IT professionals is to actively manage their applications as a BAU process; monitor utilisation and measure everything they can; use the information they gather to provide a better user experience; save money by reducing downtime and user impact; and always come back to the customer before the customer comes back to them.

Source: onwindows.com - The case for continuous application management


Cloud and managed services: SaaS CIO seeks more than colocation

Colocation versus cloud and managed services is a decision many CIOs face, as they seek the best approach to run an application. How do you know which strategy works best for your company?
Heather Noggle, CIO of Exits Inc., which develops international and compliance software, has had hands-on experience with both approaches. Exits went with colocation when it began offering its trade and compliance application on a software as a service (SaaS) basis in 2003.

Colocation allows an IT organization to lease space and place storage, servers and the resident apps in a third-party data center, which provides cooling, network connectivity, power and security. It’s a no-frills approach that offers the advantage of greater control over equipment — until it doesn’t.

Eventually, reliability became an issue with Exits’ colocation arrangement. The company’s site would go down for no discernible reason, and the colocation provider was unresponsive, Noggle said.

Noggle recalled driving to the colocation facility to personally reset the servers hosting Exits’ software. Exits’ development team and its colocation vendor were both located in St. Louis at the time. The company’s headquarters is in Connecticut.

“I only had to do that the once … but that was the proverbial straw that broke the camel’s back,” she said. “When you are in our business, you can’t be down. You’ve got to have someone invested in keeping your data available all the time.”

Exits’ customers rely on its Global Wizard software suite to handle export document generation, determine international trade requirements based on trade lanes and conduct Denied Persons screenings, the company noted. As for the last service, U.S. companies may not engage in export transactions with people or organizations on the Commerce Department’s Denied Persons List.

Making the switch to cloud and managed services


After that pivotal server incident, Exits decided to drop its colocation provider and tap Xiolink to provide what Noggle called a “fully integrated managed service model.” Xiolink is now part of Cosentry Inc., a cloud and managed services provider based in Omaha, Neb.

The transition to a new provider took place in 2005. Today, Cosentry maintains Exits’ IT infrastructure in a managed private cloud that encompasses a Microsoft Internet Information Services Web server and SQL Server database. The cloud and managed services provider also includes managed security and data backup within its service offerings.

Beyond greater reliability, the use of a cloud and managed services firm helps Exits reduce IT costs. The biggest source of savings: Exits has not needed to keep a network expert on staff. Noggle said Exits has general IT knowledge, but noted that networking and hardware aren’t the SaaS company’s specialties.

Starting salary for a network administrator is forecast to increase 6.4% this year, with compensation running from $76,250 to $112,000, according to Robert Half Technology, an IT staffing company.

In addition, Exits takes advantage of business and technology planning that its services provider offers. For example, Exits’ growth has prompted discussions around hardware load balancing. Load balancing is a technique IT departments use to deal with increasing volumes of Web traffic.

“We’re looking at the hardware load balancer now, but we haven’t completed the request with Cosentry, who did recommend it,” Noggle said. “We have some older code and some newer code, and some additional changes in infrastructure planned that we want to nail down before we move forward with that.”

Software rewrite

Exits, meanwhile, has another technology change in the offing: The company plans to update the user interface of its main product, and then its supporting database. The software rework is scheduled for completion in 2017. The new software could change the way Exits wants its infrastructure managed, but Noggle expressed confidence in Cosentry’s flexibility.

“We know they will help us as we go forward,” Noggle said.

And her advice to other CIOs considering a services provider partner?

“Get someone who will work with you as you grow and change.”

 

Source: searchcio.techtarget.com.au - Cloud and managed services: SaaS CIO seeks more than colocation

 

Head in the clouds? What to consider when selecting a hybrid cloud partner

The benefits of any cloud solution rely heavily on how well it’s built and how much advance planning goes into the design. Developing an organisation’s hybrid cloud infrastructure is no small feat, as there are many facets at play, from hardware selection to resource allocation. So how do you get the most from your hybrid cloud provider?

Here are seven important considerations to make when designing and building out your hybrid cloud:

Right-sizing workloads

One of the biggest advantages of a hybrid cloud service is the ability to match IT workloads to the environments that best suit them. You can build out hybrid cloud solutions with incredible hardware and impressive infrastructure, but if you don’t tailor your IT infrastructure to the specific demands of each workload, you may end up with performance snags, improper capacity allocation, poor availability or wasted resources. Dynamic or more volatile workloads are well suited to the hyper-scalability and speedy provisioning of hybrid cloud hosting, as are any cloud-native apps your business relies on. Performance workloads that require higher IOPS (input/output operations per second) and CPU utilisation are typically much better suited to a private cloud infrastructure if they have elastic qualities or requirements for self-service. More persistent workloads almost always deliver greater value and efficiency on dedicated servers in a managed hosting or colocation environment. Another key benefit of choosing a hybrid cloud configuration is that the organisation only pays for extra compute resources as required.
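The placement guidance above can be condensed into a rough decision rule. The workload labels below (`volatile`, `cloud_native`, `high_iops`, `persistent`) are an invented taxonomy for illustration, not a standard classification, and a real assessment would weigh cost, compliance and latency too:

```python
def placement_hint(workload):
    """Very rough placement heuristic following the right-sizing
    guidance: volatile and cloud-native workloads go to hyper-scale
    cloud; high-IOPS performance workloads to private cloud;
    persistent steady-state workloads to dedicated hardware.
    workload: dict of hypothetical boolean labels."""
    if workload.get("volatile") or workload.get("cloud_native"):
        return "public cloud (hyper-scale, fast provisioning)"
    if workload.get("high_iops"):
        return "private cloud"
    if workload.get("persistent"):
        return "dedicated servers (managed hosting / colocation)"
    return "review case by case"
```

Encoding the rule, even this crudely, forces each workload in the portfolio to be labelled before it is placed, which is the real point of right-sizing.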

Security and compliance: securing data in a hybrid cloud

Different workloads may also have different security or compliance requirements, which dictate a certain type of hosting environment. For example, your most confidential data shouldn’t be hosted in a multi-tenant environment, especially if the business is subject to Health Insurance Portability and Accountability Act (HIPAA) or PCI compliance requirements. It might seem obvious, but when right-sizing your workloads, don’t overlook which data must be isolated, and be sure to encrypt any data you opt to host in the cloud. Whilst cloud hosting providers can’t provide your compliance for you, most offer an array of managed IT security solutions. Some even offer a third-party-audited Attestation of Compliance to help you document for auditors how their best practices validate against your organisation’s compliance needs.

Data centre footprint: important considerations

There are myriad reasons an organisation may wish to outsource its IT infrastructure: from shrinking its IT footprint and driving greater efficiencies to securing capacity for future growth, or simply streamlining core business functions. The bottom line is that data centres require massive amounts of capital expenditure to build and maintain, and legacy infrastructure becomes obsolete over time. This can place a huge upfront capital strain on any mid-to-large-sized business’s expenditure planning.

But data centre consolidation takes discipline, prioritisation and solid growth planning. The ability to migrate workloads to a single, unified platform consisting of a mix of cloud, hosting and datacentre colocation provides your IT Ops with greater flexibility and control, enabling a company to migrate workloads on its own terms and with a central partner answerable for the result.

Hardware needs

For larger workloads, should you host on premises, in a private cloud, or through colocation, and what performance do you need from your hardware suppliers? A truly hybrid IT outsourcing solution enables you to deploy the best mix of enterprise-class, brand-name hardware that you either manage yourself or consume fully managed from a cloud hosting service provider. Performance requirements, configuration characteristics, your organisation’s access to specific domain expertise (in storage, networking, virtualisation, etc.) and the state of your current hardware often dictate the infrastructure mix you adopt. It may be the right time to review your inventory and decommission hardware that is reaching end of life. Document the server decommissioning and migration process thoroughly to ensure no data is lost mid-migration, and follow your lifecycle plan through for decommissioning servers.

Personnel requirements

When designing and building any new IT infrastructure, it’s sometimes easy to get so caught up in the technology that you forget about the people who manage it. With cloud and managed hosting, you benefit from your provider’s expertise and their SLAs, so you don’t have to dedicate your own IT resources to maintaining those particular servers. This frees up valuable staff bandwidth, letting your people focus on tasks core to business growth, or train for the skills they’ll need to handle the trickier configuration issues you introduce to your IT infrastructure.

When to implement disaster recovery

A recent study by Databarracks found that 73% of UK SMEs have no proper disaster recovery plan in place in the event of data loss, so it’s well worth considering what your business continuity plan is in the event of a sustained outage. Building redundancy and failover into your cloud environment is an essential part of any defined disaster recovery service.

For instance, you might wish to mirror a dedicated server environment on cloud virtual machines, paying a small storage fee to house the redundant environment but only paying for compute if you actually have to fail over. That’s just one of the ways a truly hybrid solution can work for you. When updating your disaster recovery plans to accommodate your new infrastructure, it’s essential to determine your recovery point objective and recovery time objective (RPO/RTO) on a workload-by-workload basis, and to design your solution with those priorities in mind.
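The workload-by-workload RPO check is mechanical once the numbers are written down. This sketch uses invented field names (`rpo_minutes`, `backup_interval_minutes`); the underlying rule is simply that if state is replicated less often than the RPO allows, the objective cannot be met:

```python
def rpo_gaps(workloads):
    """workloads: list of dicts with 'name', 'rpo_minutes' (maximum
    tolerable data loss) and 'backup_interval_minutes' (how often
    state is replicated to the standby environment). Returns the
    names of workloads whose replication cadence is too slow to
    meet their recovery point objective."""
    return [w["name"] for w in workloads
            if w["backup_interval_minutes"] > w["rpo_minutes"]]
```

Running this over the full portfolio before signing a DR contract surfaces exactly which workloads need continuous replication and which can live with nightly backups.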

Source: businesscloudnews.com - Head in the clouds? What to consider when selecting a hybrid cloud partner

Five CIO tips for building an IT strategy in the digital age

Modern IT leaders are under siege. CIOs are expected to keep systems up and running, while also keeping track of fast-changing business demands and the technologies that can help improve organisational effectiveness.

Juggling those mixed objectives is a tough gig. Many CIOs developed their expertise in the closed confines of the IT department. However, their C-suite peers now expect technology chiefs to move beyond operational IT concerns and to spend more time engaging with the business.
CIOs should use such forms of engagement to match broader organisational aims with digital capabilities, including those associated with cloud, mobile, big data and social media.

But, once again, the challenge is significant, as buzz phrases such as disruption, experience and experimentation help to create unrealistic boardroom expectations about the likely effect of digital transformation.

So how can CIOs develop an IT strategy that delivers real change and lasting business benefits in the digital age? Computer Weekly speaks to the experts and finds the five best practices for transforming an organisation through technology.

Create a single business strategy that covers digital trends

Former CIO turned digital advisor Ian Cox says disruption usually happens in industries that have not seen any major change in business models, products and services for prolonged periods. In fact, he has strong words for people who hype transformation.

“Disruption is nothing new,” he says. “For as long as businesses have existed there has been disruption. And the disruption has usually been made possible by technology in some form. But the power of IT is undoubtedly much more relevant in the digital era.”

One approach some CIOs have taken is to develop a digital strategy that is separate from the firm’s overall approach to IT. But Cox is adamant that no separation should exist. In fact, he believes a CIO’s approach to technology should be integral to the broader business strategy.

“What every organisation needs is a single business strategy, and the CIO should take part to provoke debate and extend capabilities,” he says. “The modern CIO should be a person who knows what’s coming in terms of IT and the startup community.”

Use IT to improve operations and boost organisational value

Andy Wilton, CIO at Claranet, says that for all the fluffiness of the term “digital”, its adoption has effected one important change: the line between IT and the other departments of an organisation has been blurred to the point that a demarcation no longer exists. “Today, your industry determines your IT strategy, not your department,” he says.

Wilton has seen huge change in the technology industry, yet he believes retail is probably the best industry-specific example of where CIOs had to significantly adjust their strategies. He says the ease and efficiency that comes with in-store digital devices – and the boon of online ordering and automated processes along the supply chain – have driven huge growth.

However, he says not every sector is changing in the same radical way. He points to the motor industry and says that, despite advancements in car modelling and customer relationship management software, there have been fewer changes to the fundamentals of manufacturing, so the industry’s rate of change has not been as great. “Business value and digital growth are not always directly connected,” he says.

Wilton says this lack of connection means C-level executives, who are all too eager to sling buzzwords to mask their lack of activity, need to proceed with care.

“There’s no single label for digital,” he says. “The pace of change is instead largely determined by direct business value of certain products and how IT can seamlessly connect and improve operations, rather than a need to overhaul an entire business model.”

Give technical information to executives in a meaningful way

Richard Norris, head of IT and business change at Reliance Mutual Insurance, joined the business in late 2014 and was tasked with transforming a legacy IT operation. The programme helped create a platform for change, both in technology and human resources.

But despite the focus on all things digital, Norris must keep a watchful eye on more traditional operational IT concerns. The legacy business – the firm’s long-standing insurance clients – remains an important element of Reliance’s strategy. The reformed technology organisation now serves two roles: first, supporting Reliance’s existing systems and clients; and second, supporting the development of the website There.co.uk.

Norris recognises that strategic development plays a key role, both in terms of his own approach and the skills of his IT team. “I’ve brought Gartner in to mentor people inside their departments and to help them understand how IT can be used to make the business more efficient,” he says.

“By developing experience in other people, I can reduce the pressure on me – and I can focus on areas that will really make a difference in terms of competitive advantage.”


As a CIO at a digitally engaged business, Norris recognises that he focuses his IT leadership on human co-operation, rather than systems engineering.

He believes this transition from bits and bytes to communication and collaboration is far from unusual. As an individual moves up the IT career ladder, Norris says he or she starts to leave the technology behind. However, a smart CIO will always keep one eye on IT.

“I’ve worked hard to develop my people and relationship skills, but I’ve also always strived to ensure that I don’t leave my technical awareness behind. I still speak to enterprise architects about governance concerns, for example,” says Norris.

“I can play that information back to the rest of the business in a meaningful way. I can go to the business with a good understanding of how the available technology might be used to help meet key executives’ aspirations.”

Make line-of-business functions choose their own systems

Since joining the company in 2012, Chris Hewertson, chief technology officer (CTO) at hotel group GLH, has been at the forefront of an organisational transformation that has put customer experience at the centre of the firm’s business strategy. The hard work began in spring 2013, when he started making decisions regarding the future direction of IT in the business.

“We started with a fairly traditional technology roadmap,” says Hewertson, who recognised that cloud would play a crucial role. “We came up with an output that, to a greater or lesser extent, was implemented. But rather than simply implementing that plan, we did something different – we stopped.”

Hewertson says the technology strategy that was originally created felt too focused on the core of the business and too traditional.

“If we wanted to transform our revenue-generating businesses, we realised we’d have to give the hotels the power to decide which systems they would use. Rather than focusing on the core, we started to go from the outside in,” he says.


Hewertson ran a project called Ship Alongside, which recognised the existing capability of the hotels was so broken that the IT team would need to bring along a new ship, kit it out, move everyone over to the new platform and sink the old way of operating.

Looking back on the project, Hewertson says the approach of the project has been transformative. “There’s hardly a service in the hotels that hasn’t changed,” he says.

“We got three competing teams of individuals from across the IT department and business and paired them off with technology partners. They were tasked to find systems for different areas of the business and they evaluated hundreds of services.

“The teams were very passionate and their competitiveness drove them to find innovative solutions to our business challenges.”

Hewertson says these competing teams had many dialogue sessions across a three-month evaluation period. After each team sent back their findings, the senior team at GLH merged the output from the groups. He and his peers then evaluated the shortlisted systems and implemented the technology.

“The benefit of this approach is that the business has chosen its own technology,” says Hewertson. “We handed over what was the historic responsibility of the CIO. But this is what every IT leader should want the business to do – to step up, to engage and to be clear on what it wants.”

Work in an agile way to deliver to high customer expectations

Andrew Agerbak, associate director at Boston Consulting Group, says the business opportunities that can come from digital transformation are enormous. He says change is allowing some of his firm’s clients to take as much as 30% out of costs – and agile development plays a key role.

“It’s hard – transformation can’t be done in a siloed way, like traditional IT development,” he says. “You have to work across sales, marketing and operations – you have to create alignment and a consistent way of talking about the customer. There’s a lot to do to deliver true digital transformation.”

Agerbak recognises that modern customers have changed expectations. IT is no longer about internal technology requirements. “If your online banking app is being evaluated in the App Store, then the stakes are considerably different. People want everything to be simple and intuitive. And outages are completely unacceptable,” he says.

To create a strategy that delivers high-quality services, CIOs must help the business to become more agile and to feel more comfortable with experimentation. “You have to show the product regularly to customers before it is scaled up,” says Agerbak.

“You have to prioritise your actions in response to feedback. Have a strict definition of what ‘done’ means. If you’re looking at changes on a two-week basis, you get better at tracking changes and analysing accuracy over time. You start to move to a much higher level of accuracy.”

 

Source: computerweekly.com - Five CIO tips for building an IT strategy in the digital age

 

Five SAP HANA implementation tips CIOs should know

If you’re planning an SAP HANA implementation in the near future, you’re probably on a fierce hunt for guidance.

That’s because despite HANA’s ability to analyze massive amounts of data in real time, HANA adoption rates have so far been on the low side, although they do appear to be rising as its functionality expands into a major platform for enterprise applications and word about its success spreads. Still, the dearth of real-life experience from which to learn may leave early adopters feeling like they’re making the SAP HANA implementation journey in the dark.


To make that journey a bit clearer, here are five tips to help light your way. (Note: The first three tips apply to the HANA database, and the last two are on HANA applications, or S/4HANA Enterprise Management; see sidebar for more on the difference.)

Spend time deciding whether you want on-premises, hybrid or cloud versions of HANA. Maintaining an in-house database infrastructure has always been an expensive undertaking, due to the cost and overhead involved. CIOs must decide whether the future direction of their company requires them to partially or fully retire their in-house database infrastructure and move to the cloud. The decision on whether the cost associated with the HANA database is justified may take longer for companies that have already invested heavily in in-house infrastructure; undertaking a cost-benefit analysis greatly helps in this decision-making process. Often companies migrate only their old and slow databases to SAP HANA to speed up transaction processing, leaving the applications side as it is, or upgrade it later to take advantage of S/4HANA Enterprise Management offerings.

SAP has positioned the HANA portfolio into two distinct categories:

Technical. This is the powerfully fast database processing side of HANA.

Functional. This is the application side, positioned as S/4HANA Enterprise Management. It is on the Enterprise Management side of HANA where ongoing innovations and simplifications in business processes take place; these are released every quarter.

Companies may choose only the technical upgrade by choosing HANA database and leave the functional side for implementation later on, or they may do both at once. However, it is not possible to implement Enterprise Management without first implementing the HANA database.

Address hardware sizing by calling on the experts. HANA’s simpler database, faster CPUs and flexible scaling require CIOs to conduct hardware sizing diligently, since these are an altogether different ballgame from traditional databases, and any misstep could lead to underestimating or overestimating the required hardware. To make the most informed decision, CIOs should engage multiple SAP HANA-certified vendors and then compare the results and reports of each. If most vendors suggest a similar database size, it is better to go with the higher estimate. If there is significant deviation in hardware sizing estimates from one vendor to another, CIOs should ask each vendor to provide references to similar sizing projects they’ve undertaken, or get help from SAP to validate the vendor-suggested database size.
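The compare-then-decide rule above can be sketched as a few lines of code. The 15% agreement threshold is an assumed figure for illustration, not an SAP recommendation:

```python
def choose_sizing(estimates_gb, tolerance=0.15):
    """estimates_gb: HANA database-size estimates (in GB) from
    several certified vendors. If the estimates agree to within
    `tolerance` (relative spread), take the highest one, per the
    guidance above. Otherwise return None to signal that vendor
    references or SAP validation are needed before committing."""
    lo, hi = min(estimates_gb), max(estimates_gb)
    if (hi - lo) / lo <= tolerance:
        return hi          # vendors roughly agree: size for the worst case
    return None            # significant deviation: validate before buying
```

Sizing for the high end when vendors agree costs a little extra memory up front, but undersizing an in-memory database is far more expensive to fix after go-live.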

Tap into any available HANA knowledge and experience. At this time, there aren’t many reference clients and industry referrals that CIOs can consult to see how well others have done and the challenges they faced. As HANA is a newer technology that has not yet been widely adopted, even SAP consultants engaged in HANA implementations do not have great depth and breadth of experience. Fortunately, SAP helps with its early adopter program, known as SAP Ramp-Up. In this program, SAP engages subject matter experts who work with early adopters on a regular basis throughout the SAP HANA implementation to guide and advise on all technical and functional aspects of HANA. SAP also incrementally shares technical guides, user manuals, presentations, roadmaps, accelerators and other assets to enable the client to implement HANA effectively and successfully.

Access SAP Activate. For companies opting for HANA’s Enterprise Management, the S/4HANA deployment methodology SAP Activate ensures an expedited, guided S/4HANA Enterprise Management implementation. SAP Activate leverages best practices, guided configuration and a proven implementation methodology to ensure a project’s success. It eliminates the traditional step of creating an SAP Business Blueprint document that maps current business processes to SAP Best Practices processes before conducting a gap analysis; it gets straight to the gap analysis. CIOs must be aware that it is still too early to evaluate whether SAP Activate will ensure that all important business processes are captured, given that not all SAP Best Practices are currently available in HANA. However, SAP does keep adding to the Best Practices library. So can CIOs rely on the library alone as it gets fully built out? Not likely. An approach that seems to work well so far is to use traditional business blueprinting, but also create a variant or additional document by using SAP Activate.

Create an implementation roadmap and stick to it. SAP continuously rolls out new functionality and innovations with each quarterly release of SAP S/4HANA Enterprise Management, which can evoke an intense case of FOMO (fear of missing out). But it is critical that CIOs not let new offerings and features derail an ongoing SAP project. In other words, don’t backtrack on an already agreed-upon solution just to take advantage of newer innovations, or the result will be scope creep, skyrocketing project costs and timeline slippage. CIOs should put a hard stop on adopting S/4HANA innovations until the initial SAP HANA implementation is complete. Later, after the implementation has successfully gone live and business processes have matured, CIOs can embark on continuous-improvement projects to implement the latest S/4HANA Enterprise Management innovations available at that time.

Source: searchsap.techtarget.com - Five SAP HANA implementation tips CIOs should know

The link between third-party vendor support and the cloud

Rebecca Wettemann is the vice president of research at Nucleus Research and leads the quantitative research team. Nucleus Research provides case-based technology research with a focus on value, measurement and data. The company assesses the ROI for technology and has investigated and published 600 ROI case studies. Wettemann specializes in enterprise applications, customer relationship management, enterprise resource planning and cloud. She spoke with SearchOracle about the ROI for cloud adoption and third-party vendor support.

Can you tell me what the typical return on investment is for cloud and third-party vendor support?

Rebecca Wettemann: Cloud delivers 1.7 times the return on investment of on-premises software. It’s interesting because, intuitively, we think it’s because cloud is cheaper, and that’s certainly partially true. But the bigger top-line benefit is that I can make changes, apply upgrades and get more value from my cloud application over time without the cost, pain and suffering, and disruption associated with upgrading a traditional on-premises application.

Today, a lot of ERP customers are a couple of upgrades behind. Staying current, particularly if you’ve made a lot of customizations, is extremely expensive, extremely risky, extremely disruptive. Going through an upgrade can cost maybe half a million dollars. It’s not unusual. So customers stay behind, and that’s when they start to look at third-party support as an option. Support from the vendor is expensive, and, as I get further behind, I get less attention from the vendor and less support that is really focused on what my needs and particular challenges might be because they’re focusing their resources on the customers who are upgrading and staying current.

Are companies that are already using cloud likely to be more or less interested in third-party vendor support?

Wettemann: Someone who is already on the cloud is likely to be less interested in third-party support because cloud vendors tend to recognize that they have to win that contract again every year or two. So they’re in there delivering additional value, delivering upgrades, delivering enhancements and providing support because they know that the barriers to switching are a lot lower for cloud applications. What we do see is companies taking their core ERP or core critical applications like Siebel where they are a few generations behind and [saying], “I’m going to put this on third-party support.” This is either because I already have a plan to implement a whole new version of what I have in a couple of years and I want to save money in order to do that, or because there are other areas of innovation in cloud that I want to take advantage of and I can put the money toward that. I can cut my maintenance bill in half by going to third-party support and use that money to invest in cloud innovation.

What we’re seeing with customers is not a lot that are saying, “Okay, I’m going to move from PeopleSoft to ERP cloud.” It’s a very small population. What we are seeing is people saying, “You know what, PeopleSoft is mission-critical for us. We don’t want to disrupt it right now. We want to watch the road map for ERP cloud and see where it’s going. But we want to get a Taleo subscription, so we can manage talent management, or we want to invest in something on the CRM side in Sales Cloud or Marketing Cloud that’s attractive to us.” So they’re looking at taking advantage of the investment Oracle has made in cloud in different areas of the organization, which is where putting the PeopleSoft portion — to use that example — on third-party support saves them a ton of money. Our research and conversations with Rimini customers find that they get support as good as, if not better than, what they get from the vendor.

This is definitely something we’re seeing as we talk to customers about how to fund new projects. IT budgets are not flat, but not growing at a tremendous rate. And what they’re looking at is: “How can I cut out this big portion of expenditure, which for many firms can be high six figures? How do I cut that out? Or cut it in half and use that to fund cloud innovation?” So, if I look at my overall ongoing IT budget, a significant portion of that is license fees. Anything I cut from there becomes, without needing more budget, funds to invest in cloud.
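The funding logic Wettemann describes can be made concrete with a quick calculation. The dollar figures below are illustrative assumptions of mine, not Nucleus Research numbers:

```python
# Illustrative figures only: a "high six figures" maintenance bill, cut in
# half by moving to third-party support, frees the difference for cloud.
annual_vendor_maintenance = 800_000
third_party_support = annual_vendor_maintenance / 2
freed_for_cloud = annual_vendor_maintenance - third_party_support
print(int(freed_for_cloud))  # 400000 per year, without needing more budget
```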

Is this what you see people doing?

Wettemann: We’re definitely seeing folks say, “Yes, I need to do more with my IT budget.” This is a great way to keep systems that I’m not ready to move to cloud yet on a much more cost-effective basis so I can divert my resources elsewhere.

When Oracle customers move to the cloud, do they remain with Oracle or start using other vendors’ products?

Wettemann: I would say it’s a combination.

What factors influence that decision?

Wettemann: How much they’re an Oracle shop, certainly. Specific business needs that they’re looking for, whether it’s supply chain, CRM, HCM or another — Marketing Cloud is a great example. But they’re looking at what are the competing solutions in the cloud marketplace and how does Oracle stack up.

Is now a good time for making big decisions?

Wettemann: Yeah, absolutely. And it can also be a matter of not necessarily wanting to put all of their eggs in one basket. Because, remember, with cloud I don’t have to have the level of developer skill or DBA [database administrator] skill that I do to support an on-premises application. So, I don’t have to decide that I’ve got to have two or three Oracle DBAs that I know I’m going to be able to retain to make sure they keep my application running and everything works. I don’t have to do that with cloud, so I have more flexibility.

Source: searchoracle.techtarget.com - The link between third-party vendor support and the cloud

IDC Directions shifts focus to third-platform ‘accelerators’

The third platform is ramping up to scale in unbelievable numbers, and businesses better get on board, or they will be left in the dust. This was the common theme at IDC Directions, a conference held at Boston’s Hynes Convention Center and sponsored by the Framingham, Mass., research firm.

General session keynote speaker Frank Gens, IDC senior vice president and chief analyst, focused on third-platform technologies and the role they will play in the digital transformation of businesses. “If you want to compete and grow, you need to become expert in third-platform technologies, but you will also need to use them and help customers use them in the use cases that matter, which is about digital transformation,” he said.

But you also better speed it up, because the pace is quickening. Those who don’t keep up risk the same fate as first-platform dinosaurs, such as Digital, Cullinet and Wang Laboratories.

“The pace of new offerings is unbelievable, compared to what we’re used to,” Gens said. “Over the next several years, you’re going to see a scale like you won’t believe, and these types of scale offer two existential questions: Are we equipped to handle the numbers? If not, what do we need to do to get ready for the increase of scale?”

Third-platform technologies are centered on cloud, big data, social business and mobility. These are driven by “innovation accelerators,” including the Internet of Things (IoT), 3D printing, augmented and virtual reality, robotics, cognitive systems and next-generation security. Gens said this is the technology base for growth over the next several years.

Third-platform spending on the rise

IDC predicted that second-platform spending — PCs, Ethernet and so on — will continue to decline year by year, while third-platform spending will continue to increase each year. Third-platform spending exceeded second-platform spending for the first time in 2015, and will almost double it by 2020. Overall, second-platform spending is expected to decrease by 5.1% from 2015 to 2020, while third-platform spending will grow 12.7%.
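Treating those percentages as compound annual rates over the five years from 2015 to 2020 (an assumption on my part; IDC does not state the compounding basis here), a quick sketch is consistent with the "almost double" claim:

```python
# Assumption: the 12.7% growth and 5.1% decline are compound annual rates
# applied over the five years from 2015 to 2020.
def project(spend, annual_rate, years=5):
    return spend * (1 + annual_rate) ** years

third_platform = project(1.0, 0.127)    # relative to 2015 spend
second_platform = project(1.0, -0.051)
print(round(third_platform, 2))   # 1.82 -> "almost double" by 2020
print(round(second_platform, 2))  # 0.77 -> roughly three-quarters of 2015
```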

Gens defined digital transformation as the way by which businesses will use third-platform technologies to add value and competitive advantage by creating new offerings, new business models or new relationships. There are customers for this in almost every industry; for example, manufacturers are using third-platform technologies to add new services around their products, and make them “smarter” and connected products. Health is a huge area for digital transformation, and financial services are using third-platform technologies to reduce waste, fraud and abuse, Gens said.

Companies are in the early stages of the digital transformation, but this is expected to change rapidly, as well. An IDC survey on the state of enterprises’ digital transformation showed that the majority of respondents are just starting projects, with 32% each characterized as digital explorers or digital players, while 22% are more advanced digital transformers (14%) or digital disrupters (8%). IDC expects the latter number to double to at least 45% by 2020.

“This means that the digital transformation will drive the IT spend in the next few years, and you will need to know what the primary business use cases are,” Gens said.

IoT is one area that will be prominent in the digital transformation. Gens said more than 20,000 products were introduced at CES this year, and the largest percentage of products were IoT-related. These ranged from a $400,000 Ford GT connected car, with 50 sensors and 24 processors that generate 100 GB of data an hour, to a $100 sensor-driven smart thermometer, with 16 independent infrared sensors. With the price of sensors dropping so dramatically, the next generation will probably have 30 or 40 sensors, Gens said.

“All of these things together are forming a massive and expanding perimeter, a smart edge of continuous sensing for both enterprise and consumers,” Gens said. “There will be about 30 billion of these devices by 2020, and there will be 10 times growth of apps and services around these devices.” A developer community will grow around this to gather the data, make sense of it and turn it into something valuable.

To that end, the other side of the digital transformation is a cognitive or artificial intelligence (AI) back end that senses what those devices experience, finds patterns and algorithms, and generates recommendations or behaviors to create a more worthwhile product or service.

For example, IBM and Medtronic connected Medtronic’s insulin pumps and glucose monitors to IBM Watson to detect and anticipate hypoglycemic events with a high degree of accuracy early enough to be able to do something about it.

“If you put the IoT and the cognitive story together, we’re really talking about what happens when you connect a continuously sensing, expanding perimeter of things with this collective, deep-learning back end of AI. We believe you’re talking about a new foundation for the next generation of solutions and services, of killer apps,” Gens said. “If, in a few years, you are in the solutions business and you’re not taking advantage of that expanding edge and the collective-learning back end, your applications and services will look very quaint.”

Startups proliferate

The confluence of IoT and the cognitive back end will create an expanding ecosystem of developers. There are already thousands of developers at work on third-platform services, mostly startups, which are the wellspring of growth driving the digital transformation.

However, Gens said that many large companies have also launched digital transformation centers in one form or another, including CVS Health, Capital One, Home Depot and Caterpillar.

“Many more have launched DT centers or groups within their companies,” Gens said. “These groups will double the number of developers and double the capacity for digital innovation in the company.”

We are on the threshold of jumping to a massive new scale in a transformed marketplace, Gens said in summation. To take advantage of this and not be left behind like the old first-platform relics, businesses need to focus on third-platform technologies; connect those technologies to real use cases and make them relevant; and connect with the developers who are the “kingmakers” in the new industry.

The companies that lead the digital transformation must transform themselves, however. “If we have the best cloud, the best AI, the best IoT, but we haven’t changed our own companies to be able to compete and operate them at these scales, then we are going to be like Wang,” Gens said. “We need to not only offer these technologies to customers, we need to use the third-platform technologies to drive our own digital transformation.”

The third platform and IoT

Vernon Turner, an IDC senior vice president, took to the stage at IDC Directions to explain how the Internet of Things relates to the third platform. Turner called IoT an accelerant that will fuel many of today’s third-platform technologies, including 3D printing, robotics and cognitive computing. “It will create content, [and] it will connect the unconnected,” he said.

Conversely, each third-platform pillar will play a role in IoT, Turner added. The cloud will be central in collecting data sources and scaling out applications, as well as connecting endpoints, activating applications and establishing networked IoT platforms. Social media — perhaps the most underrated component, he said — will be the vehicle through which people receive much of the IoT content, while mobile will drive connections. “Without a good mobile network, none of this will happen,” he said.

Turner said IDC views IoT as a $1.46 trillion global market opportunity by 2020, and advised attendees to make sure they build fully integrated IoT systems instead of isolated applications. “This is not a paintball competition,” he said.

The other keynote speaker at IDC Directions, Kitty Fok, managing director of IDC China, gave essentials of doing business in China and explained the role of key players, such as provincial governments and state-owned enterprises. Fok urged attendees not to let China’s recent economic troubles discourage them about the potential for growth, which was a still-healthy 6.9% in 2015.

Source: searchmanufacturingerp.techtarget.com - IDC Directions shifts focus to third-platform ‘accelerators’

Implementing DevOps presents these three IT hurdles

DevOps is emerging as a more efficient way to develop and deploy cloud applications — but it’s still in its early days. Implementing DevOps removes the barrier between development and operational teams, which reduces enterprise application backlogs and accelerates software delivery. But despite its benefits, DevOps is easier said than done.

Enterprises implementing DevOps processes and tools often discover too late that they have made mistakes — many of which require them to stop, back up and start again.

So, what are enterprises doing wrong with DevOps? While mistakes vary from one organization to another, there are some common patterns when it comes to DevOps failure.

Here are three common mistakes organizations make when implementing DevOps.

Putting technology before people

The core purpose of implementing DevOps is to remove the barrier that exists between developers and IT operations staff. One common mistake enterprises make when implementing DevOps is focusing too early and too often on technology rather than people and processes. This can lead to organizations choosing DevOps tools they may need to replace later. Neglecting to change IT processes and train your staff is fatal. Invest in training programs that focus on the use of the technology, and how to adopt continuous development, testing, integration, deployment and operations. While your DevOps tools are likely to change, your people and processes most likely won’t.

Overlooking security and governance

Another common mistake when implementing DevOps is to fail to consider security and governance as being systemic to your applications. You can no longer separate security from the application. Include security in every process, including continuous testing and continuous deployment. The days of building walls around applications and data are over. Governance needs to be systemic to cloud application development and built into every step of DevOps processes, including policies that place limits on the use of services or APIs, as well as service discovery and service dependencies.
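As a sketch of what “policies that place limits on the use of services or APIs” might look like in practice, here is a minimal, hypothetical CI gate. The manifest format, API names and function are invented for illustration and are not part of any real DevOps toolchain:

```python
# Hypothetical CI policy gate: fail the build when a service manifest
# declares an API dependency that governance has not approved.
APPROVED_APIS = {"payments/v2", "identity/v1", "catalog/v3"}

def check_manifest(manifest):
    """Return a list of policy violations for one service manifest."""
    violations = []
    for dep in manifest.get("api_dependencies", []):
        if dep not in APPROVED_APIS:
            violations.append("unapproved API dependency: " + dep)
    return violations

manifest = {"service": "checkout", "api_dependencies": ["payments/v2", "legacy/v0"]}
print(check_manifest(manifest))  # ['unapproved API dependency: legacy/v0']
```

Running a check like this on every commit, rather than in a periodic audit, is what makes the governance systemic rather than a wall around the application.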

Being unwilling to change

Implementing DevOps means always questioning the way you develop, test, deploy and operate applications. The process, technology and tools need to change, and organizations should gather metrics to determine if the changes made actually increase productivity. Do not set it and forget it; DevOps needs to change and evolve to keep up with emerging ideas and technology. Always design your DevOps process with change in mind.
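As one way to “gather metrics to determine if the changes made actually increase productivity,” here is a minimal sketch. The deployment log and the two metrics chosen (deployment frequency and change failure rate) are illustrative assumptions, not prescribed by the article:

```python
from datetime import date

# Illustrative deployment log; the two metrics computed below are common
# delivery measures, not mandated by any one tool.
deploys = [
    {"day": date(2016, 3, 1), "failed": False},
    {"day": date(2016, 3, 3), "failed": True},
    {"day": date(2016, 3, 4), "failed": False},
    {"day": date(2016, 3, 8), "failed": False},
]

span_days = (deploys[-1]["day"] - deploys[0]["day"]).days or 1
deploy_frequency = len(deploys) / span_days              # deploys per day
change_failure_rate = sum(d["failed"] for d in deploys) / len(deploys)

print(round(deploy_frequency, 2), change_failure_rate)   # 0.57 0.25
```

Tracking these numbers before and after a process change gives a concrete answer to whether the change helped, rather than setting it and forgetting it.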

Source: searchcloudcomputing.techtarget.com - Implementing DevOps presents these three IT hurdles

Hilton hotel chain powers robot concierge with IBM Watson

The Hilton Worldwide hotel chain is trialling a robotic concierge service – powered by IBM’s Watson cognitive computing technology – to assist front desk staff in its hotels.

Named after its founder Conrad Hilton, "Connie" – which is being tested at a hotel in McLean, Virginia, near Washington DC – uses domain knowledge from Watson and WayBlazer, a cognitive travel recommendation engine, to provide guests with information on tourist attractions, local restaurants and hotel amenities.

“We’re focused on re-imagining the entire travel experience to make it smarter, easier and more enjoyable for guests,” said Jonathan Wilson, vice-president of product innovation and brand services at Hilton Worldwide.

“By tapping into innovative partners like IBM Watson, we’re wowing our guests in the most unpredictable ways.”

Connie will work alongside Hilton staff to assist with guest requests and personalise guests’ stay. The robot uses a number of Watson application programming interfaces (APIs) – including dialogue, speech-to-text, text-to-speech and natural language classification – to greet arrivals and answer their questions by tapping into WayBlazer’s database of travel domain knowledge.

Hilton hopes Connie will be able to learn, adapt and improve its recommendations over time as more guests interact with it. Staff, meanwhile, will be able to access a log of what guests ask it, and the answers they receive, to help them improve the service they provide.

Robots promote brand loyalty

Rob High, vice-president and CTO of IBM Watson, said the test represented an important shift in human-machine interaction.

“Watson helps Connie understand and respond naturally to the needs and interests of Hilton’s guests – which is an experience that’s particularly powerful in a hospitality setting, where it can lead to deeper guest engagement,” he said.

WayBlazer CEO Felix Laboy added: “We believe providing personalised and relevant insights and recommendations – specifically through a new form factor such as a robot – can transform brand engagement and loyalty at the Hilton.”

Hilton has been testing out a number of other digital innovations in recent years to enhance its guest experience, including digital check-in services and keys, and partnerships with taxi service Uber.

Source: computerweekly.com - Hilton hotel chain powers robot concierge with IBM Watson

Beware: Robots on the Cow Paths

There’s a growing volume of chatter in outsourcing industry circles about the breakthrough potential of Robotic Process Automation, or RPA. I’ve recently seen RPA in action at a few major corporations, and I’ve developed a concern about this trend. It’s a concern that arises precisely from successful adoption.

For those who haven’t had the pleasure of first-hand exposure to RPA in action, I can tell you that it’s quite impressive. Software-based “robots” (personally, I am not a fan of the term, but it’s increasingly used) run on servers and essentially perform the identical tasks that humans would otherwise perform. They log in to various systems, review lists of tasks, look up and correlate information from varied sources, and execute well-defined procedures.
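A toy sketch of the kind of well-defined procedure such a robot executes. Real RPA tools drive actual application UIs and systems of record; everything named below is invented for illustration:

```python
# Invented example: a "robot" fetching tasks and applying a fixed procedure
# (invoice-to-purchase-order verification) exactly as a human clerk would.
def fetch_tasks():
    return [
        {"invoice": "INV-100", "amount": 250.0, "po_amount": 250.0},
        {"invoice": "INV-101", "amount": 99.0, "po_amount": 120.0},
    ]

def process(task):
    # A well-defined procedure: approve only exact matches against the PO.
    if task["amount"] == task["po_amount"]:
        return (task["invoice"], "approved")
    return (task["invoice"], "flagged for review")

results = [process(t) for t in fetch_tasks()]
print(results)  # [('INV-100', 'approved'), ('INV-101', 'flagged for review')]
```

Note that the robot only executes the procedure it was given; it will never ask whether the matching rule itself is the right one, which is exactly the concern this piece raises.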

The benefits include 24×7 operations (no coffee breaks or absenteeism), predictable quality of performance, speed of execution, and … obviously … lower labor costs.  Other benefits include the ability to audit and measure the performance of tasks that might not otherwise lend themselves to 100% verification.

These virtues have great appeal to companies that use people today to execute rather standardized processes.  Lower cost, higher quality, assured outcomes.  Sounds great, right?

Many observers are worried that the rise of robots to perform transactional business processes, such as accounts payable reconciliations, invoice verification, account change processing, and the like, has the potential to displace thousands of “knowledge workers”, leading to a greater level of social issues around employment rates.

Some of the most prominent corporate advocates in the outsourcing industry, many of which operate with thousands of employees domiciled in lower-cost delivery locations, are also the earliest adopters. They argue that today’s labor-arbitrage outsourcing models need to deliver greater benefits to their customers. The ability to pull the lever of lower labor cost is diminished, so automation delivered from these centers must become the next wave of outsourcing benefits.

Well, what I’ve seen of RPA in practice introduces, to me, a concern that dwarfs that of displacing workers.

I am old enough to recall the rise of Business Process Reengineering in the early 1990s.  BPR was the brainchild, arguably, of two consulting luminaries, James Champy and Michael Hammer.

The central thesis of BPR was that “the usual methods for boosting corporate performance—process rationalization and automation—haven’t yielded the dramatic improvements companies need. In particular, heavy investments in information technology have delivered disappointing results—largely because companies tend to use technology to mechanize old ways of doing business. They leave the existing processes intact and use computers simply to speed them up.”  That was twenty-five years ago!

Back then, the BPR advocates argued that speeding up those processes does not address fundamental performance deficiencies. “Many of our job designs, work flows, control mechanisms, and organizational structures came of age in a different competitive environment and before the advent of the computer. They are geared toward efficiency and control. Yet the watchwords of the new decade are innovation and speed, service and quality.”  Those are the words of Michael Hammer printed in a prominent 1990 HBR article.

Many of the RPA examples I’ve seen are simply a repaving of the cow paths defined by current systems, processes, and policies.  The RPA robots memorialize the existing procedures in ways that mimic today’s human-based operations.

While today’s RPA initiatives are designed, largely, to be proof-of-concept and pilot in nature, I think that great care should be taken to define the innovation roadmap for the underlying business processes prior to shedding the people who are the most knowledgeable about the processes being robot-enabled.  We need to know that we can redesign, replace, or retire those existing systems and processes – not be held hostage to a robot’s execution of legacy procedures.

Perhaps this assignment is a worthy repurposing of the displaced knowledge workers?

I’ll never argue against automation and the use of technology to drive efficiency, accuracy, and cost effectiveness.  Those are sacred principles in an “As A Service” economy.  Yet, we need to be sure that we don’t lock ourselves into legacy ways of running businesses as the ultimate price for near-term efficiencies.

The robots will execute; they will not redesign.  Not yet.

Source: linkedin.com · Beware: Robots on the Cow Paths