Cloud wars: Google turns aggressive in battle with AWS and Microsoft Azure

Google is getting more aggressive in the cloud as it looks to make up ground on Amazon Web Services and Microsoft Azure.

This may not come as a surprise to those who have noticed that Google has been very active on the acquisition front over the past year, having now spent more than $1bn on acquisitions such as Apigee and Orbitera.

According to Deutsche Bank estimates, Google Cloud Platform has a $750 million revenue run-rate.

Further findings from the market research study, ‘Google getting more aggressive in the cloud’, predict that GCP is preparing a series of new product announcements in September aimed at strengthening the company’s customer-facing roadmap.

In a total available market of $1 trillion, the combined revenues of AWS, Microsoft Azure and GCP account for less than $15bn, suggesting that while these companies are big in cloud, there is a lot more market they could grow into.

Deutsche Bank classifies the Enterprise IT spending market by combining storage, network, infrastructure software, IT outsourcing and support, data management software, BI and analytics, application software and consulting, all areas in which the cloud vendors have some capabilities and are looking to build out.

Google Cloud Platform can be used by developers to tap into AI capabilities.

On the product front, GCP is expected to continue to concentrate on machine learning, data analytics and security, including data encryption and identity and access management. The company has already revealed capabilities such as SQL Server images and the second generation of Cloud SQL.

To support these moves the company is also taking an aggressive approach to building out global infrastructure locations. It announced in its Q4 2015 earnings that 12 new regions would be built in 2016 and 2017.

Of course, Google isn’t alone in expanding its infrastructure footprint for its cloud as both Microsoft and AWS have already made similar moves. Microsoft recently opened a region in the UK and AWS has one planned to open in late 2016 or early 2017.

All of these capabilities and infrastructure build-outs still rely on someone to sell what the company is offering, which is why Deutsche Bank said that Google is “hiring very aggressively” to increase its enterprise sales rep capacity.

According to the research, these moves appear to be helping Google gain traction among the start-up community: an estimated 25% of start-ups use GCP today, compared with 75% on AWS, so it still has a long way to go to catch up.

Source: cbronline.com-Cloud wars: Google turns aggressive in battle with AWS and Microsoft Azure


CIOs still struggling to manage cloud use

 Although companies apply ITSM processes to in-house IT, cloud services are more likely to be managed by providers

Research has revealed that 85 per cent of CIOs think the cloud is preventing organisations from having control over their IT network.

A study by Fruition Partners revealed that cloud services are much less commonly managed by IT service management (ITSM) processes compared to other in-house IT services, which are, on average, managed by six processes.

Much of this is down to IT maturity, Fruition Partners said, but the gap could have a negative impact on the way entire organisations are managed. In fact, 80 per cent of CIOs interviewed by the company said they do not apply the same comprehensive ITSM processes to the cloud as they do to other in-house IT services.

“The maturity of cloud services has started to improve, but it is still leagues away from where it needs to be,” said Paul Cash, managing partner of Fruition Partners UK. “There has to be a recognition that the need for rigorous management is greater, not less, in the cloud.”

Furthermore, the research revealed that 73 per cent of CIOs are happily handing over both change management and cloud application management controls to cloud providers and vendors, which means the IT department as a whole has less control over the organisation’s IT.

Cash added that CIOs cannot trust public cloud services to work flawlessly and be delivered perfectly at all times, and should therefore be wary about handing responsibility over to providers without ensuring ITSM principles are applied; if they do not check that there are sufficient safeguards in place, they open themselves up to blame if one of the services fails.

“CIOs should still be managing cloud services internally, rather than abdicating responsibility to the provider. Otherwise they risk losing control, and increasing both cost and risk to themselves and the business,” he added.

Shadow IT is also a concern for CIOs, Fruition Partners’ research revealed. 66 per cent of the respondents said there was an increasing culture of shadow IT in their company and 68 per cent of CIOs reported the organisation doesn’t ask for advice before buying public cloud-based services.

“Organisations have the tools at hand to ensure IT services are delivered consistently, comprehensively, and without risk. By failing to apply these tools to the cloud, they are doing themselves a major disservice,” Cash continued.

“The longer businesses spend without unifying their approach to both cloud and in-house IT, the harder managing IT will become. Dealing with this is relatively easy in the short term: simply ensuring that ITSM processes are unified across in-house and cloud services will reduce a great number of the challenges and risks associated with cloud.”

 

Source: cloudpro.co.uk-CIOs still struggling to manage cloud use

Cut IT Costs and Improve Service with Smart Automation

For the past 20 years, to the benefit of the C-suite in companies all over the world, smart automation software has been engineered, refined and constantly improved to respond automatically to repetitive, resource-consuming tasks: the work that nobody wants to do.

It is widely accepted that the most basic and repetitive tasks can usually be solved with software, scripts and simple programs with even limited intelligence, making that the baseline for automation. But that scenario is so 2010: throw just a few variables into the mix and the engine would come to a grinding halt.

Today, organizations such as the largest banks, IT companies, and telecoms are moving beyond the baseline of basic automation, and it isn’t for cost savings alone. Their compliance requirements related to higher-order transactions have increased, and the stakes are much higher for them to remain in compliance. This plus the expense of using expertly trained personnel to keep IT systems running and to solve what boil down to basic and repetitive problems are key drivers in the move to more advanced intelligent automation.

Another key factor has been a constant this decade: so much data is flowing into the company from transactions, authentications and other customer-facing business processes that organizations need any help they can find just to keep up. It is the quintessential image of trying to drink from the fire hose, and the fire hose is ever-widening. And with the state of IT today, few organizations, even highly successful ones, have been able to increase IT spending fast enough or significantly enough to match the growing demand for IT in all aspects of the business.

If You Can’t Get Bigger, Get Smarter

What this means, in the end, is either increased costs or increased risks, neither of which is acceptable. But the good news is that the smart automation technologies available today are light years ahead of earlier automation tools, which depended heavily on hand-coded scripts and rigid run books to automate some traditional manual IT tasks. Today, smart automation engines are more capable than ever of analyzing, learning, and responding to a mountain of transactions, which can number in the thousands per minute. These transactions are critical, must be watched and responded to in real time and, as an added benefit, produce critical, valuable, and actionable data.

Source: data-informed.com-Cut IT Costs and Improve Service with Smart Automation

What to know before migrating applications to the cloud

Migrating applications to the cloud isn’t done with the flip of a switch. Organizations must carefully define an application migration strategy, choose the right migration tool and, in some cases, even refactor their applications to take full advantage of cloud.

At the recent Modern Infrastructure Summit in Chicago, managing editor Adam Hughes spoke with Robert Green, principal consultant and cloud strategist at Enfinitum Inc., a consulting firm based in San Antonio, about the do’s and don’ts of migrating applications to the cloud.

SearchCloudComputing: What is the difference between a lift-and-shift migration approach vs. refactoring an application for the cloud?

Robert Green: Your traditional lift-and-shift model is [when organizations say], ‘I’m going to take my application as it stands today with [its] operating system, as well as the infrastructure of the application, and move that over to the cloud.’ It’s typically done just like you’re moving a VM to an instance on the cloud. It’s kind of holistic.

Your first important migration tool is your people — understanding and changing the way they think about their servers.

Robert Green, principal consultant and cloud strategist at Enfinitum Inc.

There’s benefit to refactoring, and there’s detriment. I typically go back to [the question], ‘What is your customer experience and business going to look like?’ Understanding how to adopt DevOps [is important] as you’re going through the refactoring, [since] you’ll always have legacy applications. The benefit to refactoring legacy apps really comes in the transformation of your business into a DevOps [and] agile technology.

SearchCloudComputing: What are the biggest challenges when migrating applications to the cloud?

Green: When moving an application to the cloud, you need a full understanding of how an application works, and how it’s going to perform in the cloud. Most people haven’t taken the proper time to understand the performance metrics. You’ve got to know [and] you need some really good detail about how your application functions [and] how it scales. Knowing that is half the battle, and if you can figure that out, you can provision your cloud environment to match the performance metrics you need.
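
As a minimal sketch of the kind of performance baseline Green describes, the Python snippet below times repeated requests against an application endpoint and reports latency figures that could be compared before and after a migration. The URL and sample count are placeholders, not anything from the interview.

    import statistics
    import time
    import urllib.request

    def sample_latencies(url, samples=50):
        """Time a series of GET requests to build a simple latency baseline."""
        latencies = []
        for _ in range(samples):
            start = time.perf_counter()
            with urllib.request.urlopen(url, timeout=10) as resp:
                resp.read()
            latencies.append((time.perf_counter() - start) * 1000)  # milliseconds
        return latencies

    if __name__ == "__main__":
        # Hypothetical endpoint; replace with the application being assessed.
        data = sorted(sample_latencies("https://example.com/health"))
        p50 = data[len(data) // 2]
        p95 = data[int(len(data) * 0.95) - 1]
        print(f"median={p50:.1f} ms  p95={p95:.1f} ms  mean={statistics.mean(data):.1f} ms")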

SearchCloudComputing: How do availability requirements like disaster recovery and scaling fit in here?

Green: The thing I always talk about is understanding which application is bound [and] understanding which application can horizontally scale, because that’s key for [the] cloud. When you don’t have horizontal scalability, you have diminishing returns. You vastly limit the amount of benefit your cloud migration has to offer.

When you talk about DR, there’s that adage that when you move your instances to the cloud, they become like cattle – [if] one gets sick, you put it down. Cloud instances should be stateless; when one gets ‘sick,’ you delete it, you kill it, you purge it and you create another one. That has to be blueprinted [and] it has to be automated. That’s really the key to hitting that DR statelessness.
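
Green’s ‘cattle, not pets’ point boils down to a reconciliation loop: detect an unhealthy instance, purge it, and recreate it from a blueprint. The Python sketch below illustrates that loop in the abstract; the cloud client and its methods are hypothetical stand-ins for whatever provider API or automation tool is actually in use.

    import time

    DESIRED_COUNT = 3
    BLUEPRINT = {"image": "app-image-v42", "size": "standard-2"}  # hypothetical blueprint

    def reconcile(cloud):
        """One pass of a self-healing loop: purge sick instances, recreate from blueprint."""
        for inst in cloud.list_instances(tag="web"):       # hypothetical API
            if not cloud.health_check(inst):                # the instance is 'sick'
                cloud.delete_instance(inst)                 # put it down, don't nurse it
        healthy = [i for i in cloud.list_instances(tag="web") if cloud.health_check(i)]
        for _ in range(DESIRED_COUNT - len(healthy)):
            cloud.create_instance(tag="web", **BLUEPRINT)   # stateless replacement

    def run(cloud, interval=60):
        """Run the reconciliation continuously, as the automation Green describes would."""
        while True:
            reconcile(cloud)
            time.sleep(interval)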

SearchCloudComputing: Are there any good tools to use when migrating applications to the cloud?

Green: Your first important migration tool is your people — understanding and changing the way they think about their servers. Don’t build operating systems and VMs, build blueprints. Automate how you install those things. There are other applications out there, like CloudVelocity, where they take a picture of your existing instances and move those to things like Amazon. A lot of the providers that offer cloud, the back end of it is a XenDesktop or a XenServer, [or] a hypervisor that’s well known, so you can work your way into that.

SearchCloudComputing: How can an enterprise mitigate cost during a migration?

Green: If you can take some of those applications, and refactor them while moving your development to DevOps, using that [agile] approach, you gain massive benefits. The refactoring time spent is actually mitigated by taking that as a training opportunity to grow your organization to do things much quicker.

Source: searchcloudcomputing.techtarget.com-What to know before migrating applications to the cloud

Google cloud outage highlights more than just networking failure

Google Cloud Platform went dark some weeks ago in one of the most widespread outages to ever hit a major public cloud, but the lack of outcry illustrates one of the constant knocks on the platform.

Users in all regions lost connection to Google Compute Engine for 18 minutes shortly after 7 p.m. PT on Monday, April 11. The Google cloud outage was tied to a networking failure and resulted in a black eye for a vendor trying to shed an image that it can’t compete for enterprise customers.

Networking appears to be the Achilles’ heel for Google, as problems with that layer have been a common theme in most of its cloud outages, said Lydia Leong, vice president and distinguished analyst at Gartner. What’s different this time is that it didn’t just affect one availability zone, but all regions.

“What’s important is customers expect multiple availability zones as reasonable protection from failure,” Leong said.

Amazon has suffered regional outages but has avoided its entire platform going down. Microsoft Azure has seen several global outages, including a major one in late 2014, but hasn’t had a repeat scenario over the past year.

This was the first time in memory a major public cloud vendor had an outage affect every region, said Jason Read, founder of CloudHarmony (now owned by Gartner), which has monitored cloud uptime since 2010.

Based on the postmortem Google released, it appears a number of safeguards were in place, but perhaps they should have been tested more prior to this incident to ensure this type of failure could have been prevented, Read said.

It sounds like, theoretically, they had measures in place to prevent this type of thing from happening, but those measures failed.

Jason Read, founder, CloudHarmony

“It sounds like, theoretically, they had measures in place to prevent this type of thing from happening, but those measures failed,” he said.

Google declined to comment beyond the postmortem.

Google and Microsoft both worked at massive scale before starting their public clouds, but they’ve had to learn there is a difference between running a data center for your own needs and building one used by others, Leong said.

“You need a different level of redundancy, a different level of attention to detail, and that takes time to work through,” she said.

With a relatively small market share and number of production applications, the Google cloud outage probably isn’t a major concern for the company, Leong said. It also may have gone unnoticed by Google customers, unless they were doing data transfers during those 18 minutes, because many are doing batch computing that doesn’t require a lot of interactive traffic with the broader world.

“Frankly, this is the type of thing that industry observers notice, but it’s not the type of thing customers notice because you don’t see a lot of customers with a big public impact,” Leong said. By comparison, “when Amazon goes down, the world notices,” she said.

Measures have already been taken to prevent a recurrence, review existing systems and add new safeguards, according to a message on the cloud status website from Benjamin Treynor Sloss, a Google executive. All impacted customers will receive Google Compute Engine and VPN service credits of 10% and 25% of their monthly charges, respectively. Google’s service-level agreement calls for at least 99.95% monthly uptime for Compute Engine.
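
For context, the arithmetic behind those SLA and credit figures is simple. The sketch below works out how much downtime a 99.95% monthly uptime target allows and what 10% and 25% credits would amount to against a hypothetical monthly bill.

    # Rough arithmetic for the SLA and credit figures quoted above.
    MINUTES_PER_MONTH = 30 * 24 * 60          # 43,200 minutes in a 30-day month
    sla = 0.9995                              # 99.95% monthly uptime target
    allowed_downtime = MINUTES_PER_MONTH * (1 - sla)
    print(f"Allowed downtime: {allowed_downtime:.1f} minutes/month")   # ~21.6 minutes

    outage = 18                               # the April 11 outage, in minutes
    print(f"Outage of {outage} min stays within the SLA: {outage <= allowed_downtime}")

    # Hypothetical monthly charges, purely to illustrate the credits.
    compute_bill, vpn_bill = 1000.00, 200.00
    print(f"Compute Engine credit (10%): ${compute_bill * 0.10:.2f}")
    print(f"VPN credit (25%): ${vpn_bill * 0.25:.2f}")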

Networking failure takes down Google’s cloud

The incident began with dropped connections when inbound Compute Engine traffic was not routed correctly, because a configuration change around an unused IP block didn’t propagate as it should have. Services also dropped for VPNs and L3 network load balancers. The management software’s attempt to revert to the previous configuration as a failsafe triggered a previously unknown bug, which removed all IP blocks from the configuration and pushed a new, incomplete configuration.

A second bug prevented a canary step from correcting the push process, so more IP blocks began dropping. Eventually, more than 95% of inbound traffic was lost, which resulted in the 18-minute Google cloud outage that was finally corrected when engineers reverted to the most recent configuration change.

The outage didn’t affect Google App Engine, Google Cloud Storage, internal connections between Compute Engine services and VMs, outbound Internet traffic, or HTTP and HTTPS load balancers.

SearchCloudComputing reached out to a dozen Google cloud customers to see how the outage may have affected them. Several high-profile users who rely heavily on its resources declined to comment or did not respond, while some smaller users said the outage had minimal impact because of how they use Google’s cloud.

Vendasta Technologies, which builds sales and marketing software for media companies, didn’t even notice the Google cloud outage. Vendasta has built-in retry mechanisms and most system usage for the company based in Saskatoon, Sask., happens during normal business hours, said Dale Hopkins, chief architect. In addition, most of Vendasta’s front-end traffic is served through App Engine.

In the five years Vendasta has been using Google’s cloud products, on only one occasion did an outage reach the point where the company had to call customers about it. That high uptime means the company doesn’t spend a lot of time worrying about outages and isn’t too concerned about this latest incident.

“If it’s down, it sucks and it’s a hard thing to explain to customers, but it happens so infrequently that we don’t consider it to be one of our top priorities,” Hopkins said.

For less risk-tolerant enterprises, reticence in trusting the cloud would be more understandable, but most operations teams aren’t able to achieve the level of uptime Google promises inside their own data center, Hopkins said.

Vendasta uses multiple clouds for specific services because they’re cheaper or better, but it hasn’t considered using another cloud platform for redundancy because of the cost and skill sets required to do so, as well as the limitations that come with not being able to take advantage of some of the specific platform optimizations.

All public cloud platforms fail, and it appears Google has learned a lesson on network configuration change testing, said Dave Bartoletti, principal analyst at Forrester Research, in Cambridge, Mass. But this was particularly unfortunate timing, on the heels of last month’s coming-out party for the new enterprise-focused management team at Google Cloud.

“GCP is just now beginning to win over enterprise customers, and while these big firms will certainly love the low-cost approach at the heart of GCP, reliability will matter more in the long run,” Bartoletti said.

 

Source: searchcloudcomputing.techtarget.com- Google cloud outage highlights more than just networking failure

Is Cloud clouding the issue?

Cloud computing isn’t just fashionable, it’s widely considered to be a required component of any modern IT solution. But the varied and ‘catch-all’ nature of a term like ‘cloud computing’ leaves it wide open to confusion. There are fundamental differences between public and private clouds that can substantially affect their suitability for one business or another.

One is occupied by multiple tenants, externally managed and rapidly expandable; the other is privately managed and secured specifically for the needs of a single tenant. The only characteristic these solutions share is virtualisation. Despite being decidedly less ‘sexy’ and marketable, virtualisation is the real game-changer in the IT industry today.

Public cloud

The principal value of a cloud solution is often assumed to be cost saving, which is undoubtedly the case for public cloud. These solutions can be rapidly expanded to meet business requirements and cope with spikes in demand on resources. Opting for public cloud outsources the cost of maintaining, orchestrating and optimising your own systems.

Another key element of the public cloud is the speed with which it can be implemented. Cloud providers offer out-of-the-box solutions that allow for very agile implementations, further keeping costs down. These cost saving elements are a major reason behind the widespread uptake of public cloud by SMEs. Many companies find themselves in the situation where it makes commercial sense to offload the management of their IT infrastructure, lowering the cost-barrier to new technologies and dramatically simplifying a major business process. Public clouds offer access to a smart, automated and adaptive IT solution at a fraction of the cost of a private installation of similar capabilities.

Private cloud

A private cloud brings very different benefits to a business. Because individual businesses are responsible for the management and upkeep of private cloud systems, greater control, not cost saving, is the primary goal of a private cloud. When one business is the sole tenant of cloud infrastructure, it can control and customise it to its very specific needs. Processes relating to R&D, supply chain management, business analytics and enterprise resource planning can all be effectively migrated to a private cloud.

Where public clouds can offer flexibility and scalability to a huge degree at reduced cost, private clouds offer increased control and customisation over a virtualised IT environment. This is an important consideration for businesses in industries or sectors that are governed by strict regulations around data handling, security or transaction speed that may effectively prohibit the use of public cloud solutions.

Moving on from Safe Harbour

The European Court of Justice’s recent decision to nullify the ‘Safe Harbour’ agreement has caused many businesses to re-evaluate their cloud solutions in order to adhere to the new rules governing geographical limitations on data transfer and storage. What becomes clear is that all cloud implementations are absolutely not created equal. Given the misplaced connotations around the term, talking about ‘cloud’ can unhelpfully confuse discussions that should instead be focused on the merits of virtualisation and how they can be applied to the specific IT service needs of a particular business.

Cloud computing is not the simple answer to the challenges facing enterprise IT today. Rather it is an important IT sourcing decision in the journey towards a multi-sourced, software-defined IT system that is specifically tailored to the needs of modern business. As businesses mature, managed self-built and public clouds must co-exist with each other as well as with legacy systems.

Here, the role of the data centre in enterprise IT architecture becomes crucial. All clouds have a physical footprint of some kind or another. More often than not, this footprint resides in a data centre that is capable of facilitating the high power-density, efficient workload focus and seamless interconnectivity between data sources that underline the core principles of virtualisation.

The hot topics of enterprise IT – the IoT, big data manipulation, machine to machine communication – all fundamentally rely on the software-defined approach that datacentres can provide. In the question of how enterprise IT can cope with the challenges of these applications, the cloud is only a small part of the answer.

Source: itproportal-Is Cloud clouding the issue? 

Cloud and managed services: SaaS CIO seeks more than colocation

Colocation versus cloud and managed services is a decision many CIOs face as they seek the best approach to run an application. How do you know which strategy works best for your company?

Heather Noggle, CIO of Exits Inc., which develops international trade and compliance software, has had hands-on experience with both approaches. Exits went with colocation when it began offering its trade and compliance application on a software as a service (SaaS) basis in 2003.

Colocation allows an IT organization to lease space and place storage, servers and the resident apps in a third-party data center, which provides cooling, network connectivity, power and security. It’s a no-frills approach that offers the advantage of greater control over equipment — until it doesn’t.

Eventually, reliability became an issue with Exits’ colocation arrangement. The company’s site would go down for no discernible reason, and the colocation provider was unresponsive, Noggle said.

Noggle recalled driving to the colocation facility to personally reset the servers hosting Exits’ software. Exits’ development team and its colocation vendor were both located in St. Louis at the time. The company’s headquarters is in Connecticut.

“I only had to do that the once … but that was the proverbial straw that broke the camel’s back,” she said. “When you are in our business, you can’t be down. You’ve got to have someone invested in keeping your data available all the time.”

Exits’ customers rely on its Global Wizard software suite to handle export document generation, determine international trade requirements based on trade lanes and conduct Denied Persons screenings, the company noted. As for the last service, U.S. companies may not engage in export transactions with people or organizations on the Commerce Department’s Denied Persons List.

Making the switch to cloud and managed services

You’ve got to have someone invested in keeping your data available all the time.

Heather Noggle, CIO, Exits Inc.

After that pivotal server incident, Exits decided to drop its colocation provider and tap Xiolink to provide what Noggle called a “fully integrated managed service model.” Xiolink is now part of Cosentry Inc., a cloud and managed services provider based in Omaha, Neb.

The transition to a new provider took place in 2005. Today, Cosentry maintains Exits’ IT infrastructure in a managed private cloud that encompasses a Microsoft Internet Information Services Web server and SQL Server database. The cloud and managed services provider also includes managed security and data backup within its service offerings.

Beyond greater reliability, the use of a cloud and managed services firm helps Exits reduce IT costs. The biggest source of savings: Exits has not needed to keep a network expert on staff. Noggle said Exits has general IT knowledge, but noted that networking and hardware aren’t the SaaS company’s specialties.

Starting salary for a network administrator is forecast to increase 6.4% this year, with compensation running from $76,250 to $112,000, according to Robert Half Technology, an IT staffing company.

In addition, Exits takes advantage of business and technology planning that its services provider offers. For example, Exits’ growth has prompted discussions around hardware load balancing. Load balancing is a technique IT departments use to deal with increasing volumes of Web traffic.

“We’re looking at the hardware load balancer now, but we haven’t completed the request with Cosentry, who did recommend it,” Noggle said. “We have some older code and some newer code, and some additional changes in infrastructure planned that we want to nail down before we move forward with that.”
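
As a toy illustration of the load balancing technique mentioned above, the Python sketch below rotates incoming requests across a pool of back-end servers using round-robin selection; a hardware load balancer applies the same idea in dedicated equipment. The server names and requests are made up.

    import itertools

    class RoundRobinBalancer:
        """Minimal round-robin load balancer: rotate requests across back ends."""
        def __init__(self, backends):
            self._cycle = itertools.cycle(backends)

        def route(self, request):
            backend = next(self._cycle)
            print(f"Routing {request} -> {backend}")
            return backend

    # Hypothetical back-end pool and traffic.
    balancer = RoundRobinBalancer(["app-server-1", "app-server-2", "app-server-3"])
    for req in ["GET /orders", "GET /orders", "POST /export", "GET /status"]:
        balancer.route(req)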

Software rewrite

Exits, meanwhile, has another technology change in the offing: The company plans to update the user interface of its main product, and then its supporting database. The software rework is scheduled for completion in 2017. The new software could change the way Exits wants its infrastructure managed, but Noggle expressed confidence in Cosentry’s flexibility.

“We know they will help us as we go forward,” Noggle said.

And her advice to other CIOs considering a services provider partner?

“Get someone who will work with you as you grow and change.”

 

Source: searchcio.techtarget.com.au- Cloud and managed services: SaaS CIO seeks more than colocation

 

Head in the clouds? What to consider when selecting a hybrid cloud partner

The benefits of any cloud solution rely heavily on how well it’s built and how much advance planning goes into the design. Developing an organisation’s hybrid cloud infrastructure is no small feat, as there are many facets at play, from hardware selection to resource allocation. So how do you get the most from your hybrid cloud provider?

Here are some important considerations to make when designing and building out your hybrid cloud:

Right-sizing workloads

One of the biggest advantages of a hybrid cloud service is the ability to match IT workloads to the environment that best suits them. You can build out hybrid cloud solutions with incredible hardware and impressive infrastructure, but if you don’t tailor your IT infrastructure to the specific demands of your workloads, you may end up with performance snags, improper capacity allocation, poor availability or wasted resources. Dynamic or more volatile workloads are well suited to the hyper-scalability and speedy provisioning of hybrid cloud hosting, as are any cloud-native apps your business relies on. Performance workloads that require higher IOPS (input/output per second), CPU and utilisation are typically much better suited to a private cloud infrastructure if they have elastic qualities or requirements for self-service. More persistent workloads almost always deliver greater value and efficiency with dedicated servers in a managed hosting or colocation environment. Another key benefit of choosing a hybrid cloud configuration is that the organisation only pays for extra compute resources as required.
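
One way to make the right-sizing exercise concrete is to score each workload on the attributes described above (volatility, IOPS, elasticity, persistence) and map it to a placement. The Python sketch below is a hypothetical rule of thumb along those lines, not a definitive placement policy.

    from dataclasses import dataclass

    @dataclass
    class Workload:
        name: str
        volatile: bool      # spiky, unpredictable demand
        cloud_native: bool  # built for cloud platforms
        high_iops: bool     # heavy input/output per second
        elastic: bool       # needs self-service scaling
        persistent: bool    # steady, always-on

    def place(w: Workload) -> str:
        """Hypothetical rule of thumb following the considerations above."""
        if w.high_iops and w.elastic:
            return "private cloud (performance with self-service)"
        if w.volatile or w.cloud_native:
            return "hybrid/public cloud hosting (hyper-scalable, fast provisioning)"
        if w.persistent:
            return "dedicated servers (managed hosting or colocation)"
        return "review case by case"

    for w in (Workload("marketing-site", True, True, False, False, False),
              Workload("oltp-database", False, False, True, True, True),
              Workload("erp-core", False, False, False, False, True)):
        print(f"{w.name}: {place(w)}")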

Security and compliance: securing data in a hybrid cloud

Different workloads may also have different security or compliance requirements, which dictate a certain type of IT infrastructure hosting environment. For example, your most confidential data shouldn’t be hosted in a multi-tenant environment, especially if the business is subject to Health Insurance Portability and Accountability Act (HIPAA) or PCI compliance requirements. It might seem obvious, but when right-sizing your workloads, don’t overlook what data must be isolated, and be sure to encrypt any data you opt to host in the cloud. While cloud hosting providers can’t provide your compliance for you, most offer an array of managed IT security solutions. Some even offer a third-party-audited Attestation of Compliance to help you document for auditors how their best practices validate against your organisation’s compliance needs.
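
On the point about encrypting any data you opt to host in the cloud, client-side encryption before upload is one common approach. The sketch below uses the Fernet recipe from the widely used Python cryptography package as an example; key handling is deliberately simplified here and would belong in a proper key-management process.

    from cryptography.fernet import Fernet

    # Generate a key once and keep it outside the cloud environment
    # (in practice this belongs in a key-management system, not in code).
    key = Fernet.generate_key()
    cipher = Fernet(key)

    record = b"patient_id=12345;diagnosis=..."   # sensitive data destined for the cloud
    ciphertext = cipher.encrypt(record)          # what actually gets uploaded
    print(ciphertext)

    # Later, decrypt on premises or wherever the key is held.
    assert cipher.decrypt(ciphertext) == record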

Data centre footprint: important considerations

There is a myriad of reasons an organisation may wish to outsource its IT infrastructure: shrinking its IT footprint, driving greater efficiencies, securing capacity for future growth, or simply streamlining core business functions. The bottom line is that data centres require massive amounts of capital expenditure to build and maintain, and legacy infrastructure becomes obsolete over time. This can place a huge upfront capital strain on the expenditure planning of any mid-to-large-sized business.

But data centre consolidation takes discipline, prioritisation and solid growth planning. The ability to migrate workloads to a single, unified platform consisting of a mix of cloud, hosting and datacentre colocation provides your IT Ops with greater flexibility and control, enabling a company to migrate workloads on its own terms and with a central partner answerable for the result.

Hardware needs

For larger workloads, should you host on premises, in a private cloud, or through colocation, and what sort of performance do you need from hardware suppliers? A truly hybrid IT outsourcing solution enables you to deploy the best mix of enterprise-class, brand-name hardware that you either manage yourself or consume fully managed from a cloud hosting service provider. Performance requirements, configuration characteristics, your organisation’s access to specific domain expertise (in storage, networking, virtualisation, etc.) and the state of your current hardware often dictate the infrastructure mix you adopt. It may be the right time to review your inventory and decommission hardware that is reaching end of life. Document the server decommissioning and migration process thoroughly to ensure no data is lost mid-migration, and follow your lifecycle plan through for decommissioning servers.

Personnel requirements

When designing and building any new IT infrastructure, it’s sometimes easy to get so caught up in the technology that you forget about the people who manage it. With cloud and managed hosting, you benefit from your provider’s expertise and their SLAs — so you don’t have to dedicate your own IT resource to maintaining those particular servers. This frees up valuable staff bandwidth so that your staff focuses on tasks core to business growth, or trains for the skills they’ll need to handle the trickier configuration issues you introduce to your IT infrastructure.

When to implement disaster recovery

A recent study by Databarracks found that 73% of UK SMEs have no proper disaster recovery plans in place in the event of data loss, so it’s well worth considering what your business continuity plan is in the event of a sustained outage. Building in redundancy and failover as part of your cloud environment is an essential part of any defined disaster recovery service.

For instance, you might wish to mirror a dedicated server environment on cloud virtual machines, paying a small storage fee to house the redundant environment but only paying for compute if you actually have to fail over. That’s just one of the ways a truly hybrid solution can work for you. When updating your disaster recovery plans to accommodate your new infrastructure, it’s essential to determine your Recovery Point Objective and Recovery Time Objective (RPO/RTO) on a workload-by-workload basis, and to design your solution with those priorities in mind.
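
Determining RPO and RTO on a workload-by-workload basis can start with something as simple as a table that then drives the failover design. The Python sketch below shows one hypothetical way to record those targets and derive a backup cadence from the RPO; the workloads and numbers are illustrative only.

    # Hypothetical per-workload recovery targets, in minutes.
    recovery_targets = {
        "customer-web": {"rpo": 15,   "rto": 60},    # near-real-time replication
        "erp-core":     {"rpo": 60,   "rto": 240},
        "reporting":    {"rpo": 1440, "rto": 2880},  # daily backups are enough
    }

    def backup_interval(rpo_minutes: int) -> int:
        """Back up at least twice as often as the RPO to leave headroom."""
        return max(rpo_minutes // 2, 5)

    for name, target in recovery_targets.items():
        print(f"{name}: back up every {backup_interval(target['rpo'])} min, "
              f"must be restored within {target['rto']} min")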

Source: businesscloudnews.com- Head in the clouds? What to consider when selecting a hybrid cloud partner

The link between third-party vendor support and the cloud

Rebecca Wettemann is the vice president of research at Nucleus Research and leads the quantitative research team. Nucleus Research provides case-based technology research with a focus on value, measurement and data. The company assesses the ROI for technology and has investigated and published 600 ROI case studies. Wettemann specializes in enterprise applications, customer relationship management, enterprise resource planning and cloud. She spoke with SearchOracle about the ROI for cloud adoption and third-party vendor support.

Can you tell me what the typical return on investment is for cloud and third-party vendor support?

Rebecca Wettemann: Cloud delivers 1.7 times the return on investment of on premises. It’s interesting because, intuitively, we think it’s because cloud is cheaper and that’s certainly partially true. But the bigger top line benefit is that I can make changes, upgrades, get more value from my cloud application over time without the cost, pain and suffering, and disruption associated with upgrading a traditional on-premises application.

Today, a lot of ERP customers are a couple of upgrades behind. Staying current, particularly if you’ve made a lot of customizations, is extremely expensive, extremely risky, extremely disruptive. Going through an upgrade can cost maybe half a million dollars. It’s not unusual. So customers stay behind, and that’s when they start to look at third-party support as an option. Support from the vendor is expensive, and, as I get further behind, I get less attention from the vendor and less support that is really focused on what my needs and particular challenges might be because they’re focusing their resources on the customers who are upgrading and staying current.

I can cut my maintenance bill in half by going to third-party support and use that money to invest in cloud innovation.

Rebecca Wettemann, vice president of Nucleus Research

Are companies that are already using cloud likely to be more or less interested in third-party vendor support?

Wettemann: Someone who is already on the cloud is likely to be less interested in third-party support because cloud vendors tend to recognize that they have to win that contract again every year or two. So they’re in there delivering additional value, delivering upgrades, delivering enhancements and providing support because they know that the barriers to switching are a lot lower for cloud applications. What we do see is companies taking their core ERP or core critical applications like Siebel where they are a few generations behind and [saying], “I’m going to put this on third-party support.” This is either because I already have a plan to implement a whole new version of what I have in a couple of years and I want to save money in order to do that, or because there are other areas of innovation in cloud that I want to take advantage of and I can put the money toward that. I can cut my maintenance bill in half by going to third-party support and use that money to invest in cloud innovation.

What we’re seeing with customers is not a lot that are saying, “Okay, I’m going to move from PeopleSoft to ERP cloud.” It’s a very small population. What we are seeing is people saying, “You know what, PeopleSoft is mission-critical for us. We don’t want to disrupt it right now. We want to watch the road map for ERP cloud and see where it’s going. But we want to get a Taleo subscription, so we can manage talent management, or we want to invest in something on the CRM side in Sales Cloud or Marketing Cloud that’s attractive to us.” So they’re looking at taking advantage of the investment Oracle has made in cloud in different areas of the organization, which is where putting the PeopleSoft portion — to use that example — on third-party support saves them a ton of money. Our research and conversations with Rimini customers find that they get support as good as, if not better than, what they get from the vendor.

This is definitely something we’re seeing as we talk to customers about how to fund new projects. IT budgets are not flat, but not growing at a tremendous rate. And what they’re looking at is: “How can I cut out this big portion of expenditure, which for many firms can be high six figures? How do I cut that out? Or cut it in half and use that to fund cloud innovation?” So, if I look at my overall ongoing IT budget, a significant portion of that is license fees. Anything I cut from there becomes, without needing more budget, funds to invest in cloud.

Is this what you see people doing?

Wettemann: We’re definitely seeing folks say, “Yes, I need to do more with my IT budget.” This is a great way to keep systems that I’m not ready to move to cloud yet on a much more cost-effective basis so I can divert my resources elsewhere.

When Oracle customers move to the cloud, do they remain with Oracle or start using other vendors’ products?

Wettemann: I would say it’s a combination.

What factors influence that decision?

Wettemann: How much they’re an Oracle shop, certainly. Specific business needs that they’re looking for, whether it’s supply chain, CRM, HCM or another — Marketing Cloud is a great example. But they’re looking at what are the competing solutions in the cloud marketplace and how does Oracle stack up.

Is now a good time for making big decisions?

Wettemann: Yeah, absolutely. And it can also be a matter of not necessarily wanting to put all of their eggs in one basket. Because, remember, with cloud I don’t have to have the level of developer skill or DBA [database administrator] skill that I do to support an on-premises application. So, I don’t have to decide that I’ve got to have two or three Oracle DBAs that I know I’m going to be able to retain to make sure they keep my application running and everything works. I don’t have to do that with cloud, so I have more flexibility.

Source: searchoracle.techtarget.com- The link between third-party vendor support and the cloud

Five things to know to land a cloud architect job

 

Demand for cloud architects is growing in the enterprise, but competition for jobs is tough. Here are five questions to help you ace a cloud architect interview.

Cloud computing is becoming a key way for businesses to deploy new applications, which is rapidly changing the IT job market. And demand for cloud architects is especially high.

In fact, roughly 11,100 cloud architect jobs are currently listed on career website Indeed.com, with salaries ranging from $75,000 to more than $150,000 annually. But before landing that dream cloud architect job, you have to wow potential employers during the interview process.

Here are five key questions you can expect an employer to ask during a cloud architect interview, along with advice for how to respond.

1. How do a cloud architect’s responsibilities differ from those of other data center professionals?

A cloud architect focuses more on the meta, or big-picture, view of the data center and less on an individual server’s configuration and throughput. For instance, cloud architects examine how an organization’s central authentication system ensures that only authorized employees access system resources. By comparison, a system analyst is tasked with tying the authentication system to a specific application, such as Salesforce.com.

2. Where do you see technology in one year? How about three years?

Rather than get caught up in the daily grind of data center and cloud operations, cloud architects must think ahead. They need to be blue-sky, big-picture thinkers. Cloud architects determine how emerging technologies, like biometrics and the Internet of Things, will impact enterprise systems and cloud infrastructure. They also need to craft a roadmap that shows where the business’ systems are today and where they need to be in a few years.

3. How do containers fit into a company’s cloud architecture?

Businesses are constantly trying to make software more portable, and containers are the latest variation on that theme — which makes them a critical technology for cloud architects to know. First, cloud architects must understand the capabilities containers offer. Containers work at a layer above the OS and virtualization software. Theoretically, they offer more portability, but they pay a price for that easy movement: decreased security. The software running in the containers does not include the inherent security checks found at the OS or virtualization layer. Consequently, running containers within a firm’s data center and behind its security perimeter makes sense, while putting the software onto a public cloud is a bit risky.
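
As a concrete, if simplified, illustration of containers sitting above the OS, the sketch below uses the Docker SDK for Python to launch a throwaway container. It assumes a local Docker daemon is available and is only meant to show the layer the answer refers to.

    import docker  # Docker SDK for Python (pip install docker)

    # Connect to the local Docker daemon using environment settings.
    client = docker.from_env()

    # Run a short-lived container; it shares the host kernel, which is why
    # container isolation is weaker than a full VM's.
    output = client.containers.run("alpine", ["echo", "hello from a container"], remove=True)
    print(output.decode().strip())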

4. What standard interfaces should a company use?

OpenStack has emerged as a key platform, enabling companies to tie different cloud applications together. Businesses primarily use the free, open source software as an infrastructure-as-a-service (IaaS) platform. OpenStack, which is available under an Apache license, consists of a group of interrelated components that control pools of processing, storage and networking resources.
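
For a sense of what ‘interrelated components that control pools of processing, storage and networking resources’ looks like in practice, the sketch below uses the openstacksdk Python client to list resources from each pool. The cloud name refers to a clouds.yaml entry and is a placeholder.

    import openstack  # openstacksdk (pip install openstacksdk)

    # 'mycloud' is a placeholder entry expected in clouds.yaml.
    conn = openstack.connect(cloud="mycloud")

    # Compute pool (Nova): running servers.
    for server in conn.compute.servers():
        print("server:", server.name)

    # Block storage pool (Cinder): volumes.
    for volume in conn.block_storage.volumes():
        print("volume:", volume.name)

    # Networking pool (Neutron): networks.
    for network in conn.network.networks():
        print("network:", network.name)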

5. What cloud architect certifications do you have, or are pursuing?

Cloud architect certification programs come from two different sources. Independent training and certification companies, like Arcitura Education, CompTIA and EXIN, offer vendor-neutral certifications. In addition, the industry’s biggest vendors, such as EMC, Hewlett-Packard, IBM and Microsoft, have cloud architect certifications geared toward their particular products.

Demand for cloud architects continues to grow. By knowing the answers to these key questions, IT pros can position themselves to land a high-paying cloud architect job.

Source: Techtarget-Five things to know to land a cloud architect job by Paul Korzeniowski