DevOps teams offer relief for cloud migration headaches

As their role is being established, DevOps workers learn more about cloud computing than almost any other IT staff member. DevOps teams know how to configure applications for newly developed software, and how to interface with legacy systems. Naturally, this makes them champions at facilitating the migration of legacy software to the cloud.

DevOps staffers know the ins and outs of traditional file systems, distributed file systems and object stores, such as Amazon Simple Storage Service. They also know how to handle large-scale analytics and non-relational databases. They can help you migrate existing application logic to services that scale and run entirely in the cloud.

Organizations can simplify app migration from legacy hardware to the cloud by running all the software as-is on VMs in the cloud. A better approach, however, is to transition the logic itself, usually one small piece at a time, over to web-scale technologies. DevOps teams can help handle load balancing and fault tolerance with Domain Name System (DNS) latency-based routing and health checks.
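
As a concrete illustration of that DNS-based approach, here is a minimal, provider-agnostic Python sketch in which health checks and latency probes drive a routing choice between a legacy endpoint and its cloud replacement. The URLs and helper functions are hypothetical, not tied to any particular DNS service.

```python
import time
import urllib.request

# Hypothetical endpoints: the legacy deployment and its cloud replacement.
ENDPOINTS = {
    "cloud": "https://app.example.com/healthz",
    "legacy": "https://legacy.example.com/healthz",
}

def probe(url, timeout=2.0):
    """Return (healthy, latency_in_seconds) for one health-check request."""
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=timeout) as response:
            healthy = response.status == 200
    except OSError:
        healthy = False
    return healthy, time.monotonic() - start

def pick_target():
    """Prefer the healthy endpoint with the lowest observed latency,
    mimicking DNS latency-based routing backed by health checks."""
    latencies = {}
    for name, url in ENDPOINTS.items():
        healthy, latency = probe(url)
        if healthy:
            latencies[name] = latency
    return min(latencies, key=latencies.get) if latencies else None

if __name__ == "__main__":
    print("Route new traffic to:", pick_target())
```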

In addition, DevOps teams are often required to produce analytics. They have their hands in all pies, and can access all underlying data, including traffic data and log analytics. This sort of data can be incredibly useful in measuring application performance and locating bottlenecks. DevOps staffers are able to help manage deployments and track bugs for each deployment released. They can help determine speed and performance changes per release, as well.
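
To make the per-release performance tracking concrete, the short Python sketch below computes a 95th-percentile latency for each release from parsed request logs. The record fields and sample values are hypothetical.

```python
from collections import defaultdict

# Hypothetical parsed log records, one dict per request.
records = [
    {"release": "v1.4.0", "latency_ms": 182},
    {"release": "v1.4.0", "latency_ms": 240},
    {"release": "v1.5.0", "latency_ms": 131},
    {"release": "v1.5.0", "latency_ms": 905},
]

def p95(values):
    """Nearest-rank 95th percentile of a list of numbers."""
    ordered = sorted(values)
    rank = max(0, int(round(0.95 * len(ordered))) - 1)
    return ordered[rank]

by_release = defaultdict(list)
for record in records:
    by_release[record["release"]].append(record["latency_ms"])

for release, latencies in sorted(by_release.items()):
    print(f"{release}: p95 latency {p95(latencies)} ms over {len(latencies)} requests")
```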

Tools to round out DevOps teams

Even the most highly functional DevOps teams need third-party tools to manage distributed environments such as the cloud, and certain tools are specifically useful for such environments.

Utilities such as FlowDock or HipChat can help members of a development group keep in touch with each other and DevOps staff. A ticketing service, such as Asana or Basecamp, can help track software development tasks, as well as what needs to go out in which application release.

Customer-focused support portals, such as Freshdesk, Zendesk or Get Satisfaction, allow users to communicate requests to management or software development teams directly. This triggers new or improved features, and makes sure customers’ needs are being met. A DevOps team can help set up these services and get teams acquainted with the technology.

The people who make it happen

If you want to make sure someone writes quality code that’s been well tested, get them out of bed when something in that code breaks. A DevOps team doesn’t want to be called in the middle of the night, so they’re going to make sure they have all the tools in place to guarantee automation for as many tasks as possible.

If a server dies, immediately replace it and terminate the failed instance, keeping any relevant logs for a post-mortem if necessary. It’s unwise to try to fix servers anymore; organizations can easily replace them with a simple API call that happens automatically when a self-healing system detects a problem. Anomaly detection could alert you ahead of time to potential risk factors or leaks in a system.
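
Below is a minimal sketch of that replace-don’t-repair loop in Python. The `cloud` client and its methods (`is_healthy`, `fetch_logs`, `launch_like`, `terminate`) are hypothetical stand-ins for whichever provider API an organization actually uses; the point is the order of operations, not a specific SDK.

```python
import logging

log = logging.getLogger("self_heal")

def replace_unhealthy(cloud, instance_ids):
    """For each instance that fails its health check: capture logs for a
    post-mortem, launch a like-for-like replacement, then terminate the
    failed instance rather than trying to repair it."""
    for instance_id in instance_ids:
        if cloud.is_healthy(instance_id):
            continue
        log.warning("instance %s failed its health check", instance_id)
        cloud.fetch_logs(instance_id, destination=f"archive/postmortem/{instance_id}/")
        replacement_id = cloud.launch_like(instance_id)  # same image, size and tags
        cloud.terminate(instance_id)
        log.info("replaced %s with %s", instance_id, replacement_id)
```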

Members of a DevOps team need to be the foremost experts on cloud computing and configuring services in the cloud. They need to understand the benefits of nonrelational databases, and how to scale relational databases effectively, if needed. They should help developers succeed by showing them which parts of their applications are problematic, and determine the type of virtual hardware on which to run each piece. They’ll help with architecture diagrams to ensure your system is split to the right point — enough to make sure you’re fault tolerant, but not so much that it becomes slow. They’ll be able to identify the algorithms that scale well — and those that don’t — and determine if something scales appropriately.

 

Source: searchcloudcomputing.techtarget.com-DevOps teams offer relief for cloud migration headaches

Six CIO tips for business innovation with data

First Utility CIO Bill Wilkins has a job that relies on data. The company started as a small, entrepreneurial business in 2008 and experienced rapid growth as a pioneer in smart meters. By 2014, the firm had become the seventh-largest energy supplier in the UK, with over a million customers and a market share of 2%.

“Because of the company’s rapid growth, every year has been different,” says Wilkins.

The firm is now scaling up for further growth through a focus on its digital platform. Wilkins, who joined First Utility full-time in 2010 after spells with Sun Microsystems and SeeBeyond, is drawing on his experience to push a data-led process of innovation.

Wilkins offers six best-practice tips for other CIOs from his experiences of running data projects, covering areas such as organisational culture, external partnership and continuous innovation.

1. Create a customer-driven approach to data analytics

Wilkins says First Utility benefits from access to a huge data asset that it uses in three key ways. First, the business runs a number of information-led initiatives to boost customer engagement. “We are, essentially, a retailer and we want to have long-term, valuable relationships with our clients,” he says.

The second way First Utility uses data is for optimisation. “Information helps us to understand what processes work, which processes are causing us problems and how we can use our experience around those processes to make the business better,” says Wilkins.

The third way the firm uses information is strategically, says Wilkins. “Now we’ve built a platform, we want to know our technology is working and where the business can use systems and services to develop and grow,” he adds. “It’s all about making the most of data to find new opportunities and to market to new sets of customers.”

Wilkins says the firm’s customer-driven approach goes further – other external stakeholders are included, too. “From a very early stage of operation, the firm – because of its strong focus on data – has had access to detailed market information,” he says.

“It was not until the IT team built an application on top of that data that a wide base of users started making the most of our knowledge. The awareness within our organisation about how competitive we are in the marketplace is now much clearer because we created a visual representation of data for our employees.”

2. Get your organisational structure right

Wilkins says that, while his firm’s use of data is very broad, he can benefit from a tight organisational set-up. “We have the advantage of still being focused as an organisation, despite our rapid growth,” he says.

Take information management, for example. Here Wilkins benefits from access to a single data team. “If you go into many other billion-pound businesses, you’d have a much more established set of functions with their own silos of data,” he says.

“We’ve managed to retain a coherent organisational structure. We also have a central repository for data and that represents a huge advantage, because it means we can look at information and synthesise it in many different ways.”

Wilkins says he has strived to achieve an integrated approach to data, both in terms of human skills and technical resources. One key factor is that he combined the role of head of enterprise architecture with that of data delivery at a very early stage of his tenure as CIO.

This single manager has design authority for data inputs, but also needs to drive insight from the information. “In a period of rapid growth and innovation, we can use this integrated approach to make sure our aims and objectives are still as aligned as they possibly can be,” he says.

3. Look to evolve, rather than to keep starting afresh

Wilkins says that from the start, the investors at First Utility recognised that the company would use technology to create a competitive differentiation. The firm wanted to deliver smart, rather than standard, energy and it spent a lot of time building an end-to-end infrastructure.

“At the time, you couldn’t get a smart gas meter,” he says. “The business ended up building its own hardware to measure volume and send the data back to the office. The senior executives got involved in a lot of low-level, but clever, technology to get their smart proposition set up.”

“Work with a partner, learn from their experience, innovate for your customers and differentiate from your competitors”

Bill Wilkins, First Utility

On joining the business, Wilkins was able to inherit this foundation work. The company had already solved the problem of taking heterogeneous data and creating a normalised, standard view of information that worked in a billing system. The problem, however, was that the system did not scale.

The answer to that challenge, says Wilkins, was to modify the foundation, which he says represents key advice to other CIOs facing a similar data conundrum. “What you have to do is to take what’s already there and look at ways to evolve that approach,” he says.

4. Partner with external specialists to build engagement

With the platform in place, Wilkins started to look at other ways to help First Utility develop its smart approach to energy provision. He realised there was a huge opportunity for using the firm’s half-hourly collected smart meter data to create a new form of engagement with customers.

“The call to action was that we realised we had this rich data set which contained lots of interesting information. What we had to do was to turn it into knowledge that could inform our customers about their energy use,” says Wilkins, who explains how the firm partnered initially with external specialist Opower to create its My Energy programme.

Read more about innovating with data

  • Driving innovation with big data: Rather than refusing the wider use of data outside the organisation, only by opening it up to third parties can we realise its true value.
  • An innovation and collaboration centre has been launched in London to develop ideas relating to data and the UK economy.
  • The Digital Catapult in London’s King’s Cross is home to ambitious technology startups innovating around big data.

“That’s another tip from me – don’t try to build it all yourself,” he says. “Innovation happens in many places, so look for partners that can complement what you do. We worked with Opower, as the leading US player in energy analytics, and learnt from their knowledge and experience.”

First Utility then decided to bring the My Energy initiative back in-house, because it thought the US-centric platform was not necessarily the best way to serve its UK customer base. “Partnering allowed us to get a product out there and to learn from its use in the real world very quickly,” says Wilkins.

5. Use experience to build your own data platform

Knowledge from the first, externally developed iteration of My Energy proved essential as First Utility created its own version of the platform, which was launched at the end of 2014. “For other CIOs, I would say the lesson is to work with a partner, learn from their experience, innovate for your customers and differentiate from your competitors,” says Wilkins.

To achieve this level of differentiation, Wilkins built a dedicated team of internal specialists. He initially thought First Utility would need to employ a broad range of analytical specialists, but quickly discovered that customer experience expertise would be more useful.

“As we started building out the My Energy platform, we realised we needed people who could translate the data into areas that customers would be keen to investigate and use,” says Wilkins. “We inevitably spent much more time and money on the look and feel of the service and less on the data side.”

“We’ve learnt that the way you present information to different stakeholders is very important. Half-hourly updates have a very low value for consumers, but we’ve used My Energy to take that information and present it in a more informative manner for customers.”

6. Think in an entrepreneurial fashion and continue to innovate

As a smaller utility firm, First Utility must try to keep pace with larger competitors, but often with fewer resources, says Wilkins. He points to the firm’s mobile programme, which – when compared to the big budget spend of some competitors – was launched with the help of just four engineers in under a year.

One area of pioneering development is the firm’s partnership with Cosy, a Cambridge firm specialising in the development of smart heating systems. Wilkins says First Utility’s aim is to be in a position to offer the Cosy technology to all its customers by the end of the year.

“Cosy is all about bringing in a new data set concerning the heating characteristics of a house,” he says. “Customers get to control their heating from an app, and we get fine-grain information on their requirements, and the efficiency of their boiler and insulation. That information is then fed back into the My Energy platform.”

Wilkins says data becomes more valuable when it is woven together and used for cross-purposes. As well as Cosy, First Utility is also set to launch a new Auto Read feature as part of its mobile app for customers who do not yet have smart meters. A UK first, the app uses the phone’s camera to take a snapshot of the meter and helps to create more accurate readings.

“Both innovations – Cosy and Auto Read – are concerned with how we can get better-quality data into our analytics engine,” he says. “Getting an accurate view of energy consumption is a challenge for utility firms. Unless you get it right, you start billing estimates, which isn’t great for customers and doesn’t help create certainty in terms of revenue for the business.”

 

Source: computerweekly.com-Six CIO tips for business innovation with data

Time to cut IT costs again, predicts Gartner

IT spending is set to rise slightly in 2016, with increases in datacentres, services and software, while spending on devices and telecommunications is set to fall.

According to Gartner’s latest spending forecast, spending on datacentre systems is projected to reach $175bn in 2016, a 2.1% increase from 2015. Global enterprise software spending is on pace to total $321bn, a 4.2% increase from 2015. Spending on IT services is expected to reach $921bn, a 2.1% rise from 2015.

Gartner research vice-president John-David Lovelock said the top-line growth of 0.5% in 2016 follows two years of decline caused by the strength of the US dollar.

“When we look at western Europe, its growth will be 0.2%, but there was 0.7% growth in 2015,” said Lovelock.

He expects spending in Europe will rise in 2017. Companies have continued to spend when necessary, such as replacing aging servers, but he said there was a retraction in the phones and devices markets.

The analyst firm predicted the device market (PCs, ultramobiles, mobile phones, tablets and printers) would decline 3.7% in 2016. The smartphone market is approaching global saturation, which is slowing growth, said Gartner.

The main factor limiting IT spending, according to Lovelock, is worsening economic conditions.

Read more about digital transformation

  • Computer Weekly looks at the key characteristics of successful leaders as digital transformation becomes a business priority.
  • In this presentation from Computer Weekly’s CW500 event, digital transformation programme director Richard Philips explains how the AA tackled the challenges of delivering people, cultural and organisational change.

When asked about Europe, he said: “There is a shift from growth to cost optimisation.” But this is not like the stagnant market of 2001, when overspending in IT led to massive cost cuts and redundancies.

“No one has revenue growth to transform to a digital business. CIOs must now optimise IT and business to fund spending on digital projects,” said Lovelock. As an example, he said the savings from legacy system optimisation and enhancements can be redirected to fund digital initiatives.

It is necessary to reduce costs to become a digital business, he added. One of the approaches Gartner promotes is so-called Mode 2 development, which Lovelock said costs less than traditional, or Mode 1, IT.

Businesses then need to move away from owning assets to using services such as software as a service (SaaS) and infrastructure as a service (IaaS). “Instead of buying IT, businesses will buy services,” Lovelock predicted.

“Things that once had to be purchased as an asset can now be delivered as a service. Most digital service twin offerings change the spending pattern from a large upfront payment to a smaller recurring monthly amount. This means the same level of activity has a very different annual spend,” he said.

Source: computerweekly.com-Time to cut IT costs again, predicts Gartner

Technology is becoming the lifeblood of business

Cognizant Technology Solutions Corp., a US-based information technology (IT) firm with most of its employees working out of India, expects its business growth in the Asia-Pacific region to outpace the company average this year, maintaining the trend seen in recent years, Jayajyoti Sengupta, vice-president and Asia-Pacific head, said in an interview.

Automation, which includes robots, machine learning and artificial intelligence, will be among the new frontiers for Cognizant, as rote and repetitive processes become “digital, instrumented, analyzed and intelligent”, he said.

Edited excerpts:

Cognizant has said it expects its revenue growth to slow to between 10% and 14.3% for the calendar year 2016. How do you see the situation in the Asia-Pacific?

It would be pertinent to note that Cognizant’s growth of 21% in calendar 2015 included revenues from the acquisition of TriZetto. In 2015, our “rest of the world” revenues, which include those from the Asia-Pacific region as well as the Middle East, grew 29.9%, significantly higher than the company average. Because of the lower revenue base, and higher investment and pipeline of deals in the Asia-Pacific region, we believe this region will continue to grow above company average as has been the case for the past several years.

Cognizant recently announced a partnership to develop Blockchain solutions for secure record-keeping of documents for Japan’s Mizuho Financial Group Inc. Could you share more details on what the partnership means for the company?

We help our clients become digital enterprises. In this endeavour, Blockchain presents an opportunity to rethink how various business processes might work more efficiently going forward. Our vision is to enable our clients to identify and incorporate a decentralized architecture wherever applicable to cut costs and develop new growth opportunities using distributed ledger technologies. Financial services and other organizations often have difficulty in ensuring execution against the most recent versions of documents and in verifying the authenticity of those documents. With Blockchain’s decentralized document verification, they can easily share verified documents with third-party requestors, and speed up verification by multiple parties. As part of the deal with Mizuho Financial Group announced earlier this year, we will bring together our extensive financial services, consulting and digital technology expertise to design and develop a Blockchain solution for secure record-keeping of documents among Mizuho’s group firms. The solution will allow them to exchange and sign sensitive documents in a secure and transparent manner.
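
The details of the Cognizant-Mizuho solution are not described beyond this interview, but the underlying idea of decentralized document verification can be sketched in a few lines of Python: anchor each document’s cryptographic hash on a shared ledger so that any participating party can later confirm a copy is authentic and current. The in-memory `ledger` dictionary below is only a stand-in for a real distributed ledger.

```python
import hashlib

# Stand-in for a distributed ledger replicated across group companies.
ledger = {}

def fingerprint(document_bytes):
    """Content hash that identifies the document on the ledger."""
    return hashlib.sha256(document_bytes).hexdigest()

def register(document_bytes, version):
    """Record the document's hash and version when it is issued or signed."""
    ledger[fingerprint(document_bytes)] = {"version": version}

def verify(document_bytes, expected_version):
    """Any party holding a copy can check authenticity and currency
    without trusting a single central registry."""
    entry = ledger.get(fingerprint(document_bytes))
    return entry is not None and entry["version"] == expected_version

register(b"loan agreement, signed", version=3)
print(verify(b"loan agreement, signed", expected_version=3))    # True
print(verify(b"loan agreement, tampered", expected_version=3))  # False
```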

In each of the last five years, Cognizant’s banking, financial services and insurance (BFSI) practice has grown at over 15%. You have a huge lead over Indian IT firms in that space on account of your robust consulting practice. How do you see the company maintaining this edge going forward?

Becoming a digital enterprise is now a necessity, a matter of survival, as businesses, products, people and devices become more connected. Clients, including those in the financial services sector, are looking for innovative ways to combine their traditional business models and product sets with new and continuously evolving digital technologies. IT is a means not just to drive productivity and efficiency—what we refer to as “run better”—but also reimagine organizations and business models for future growth, what we call “run different”. This is what we refer to as the dual mandate.

Effectively addressing the dual mandate for clients requires partners that can combine strategy, technology and business consulting in one integrated model. It also requires a partner that has a deep understanding of clients’ legacy environments and business processes so that these can be leveraged and integrated into new digital backbones.

Our strength in the market comes from the fact that we have built this breadth of capabilities at Cognizant and we’ve integrated these together in our Cognizant Digital Works methodology for maximum client impact. We are among the few companies that can provide comprehensive digital innovation at an enterprise scale. Our matrix structure deeply integrates our consulting team with our technology and business process services delivery organization. This synergy between our consulting and delivery organizations helps the teams to work closely, driving business model change, the re-engineering of business processes and organizational change management for our clients’ businesses.

What are the trends in the IT outsourcing space in South-East Asia and the Asia-Pacific? How divergent are these from trends in the West?

The APAC market is poised well to benefit the best from fulfilling the dual mandate to “run better” and “run different”. Clients who have been partnering with IT and business process firms for a few cycles are in a great position of advantage to generate efficiencies from their ongoing engagements and thereby savings that can be directed towards digital transformation and other business initiatives. At the same time, we are also seeing enterprises making the big leap directly towards adopting digital initiatives. One area that clients in Asia are laying particular emphasis on is around proximity delivery. Overall, the outsourcing space is bustling with action and we are seeing a healthy mix of conventional technology services together with emerging ones such as platform-based services, automation and next-generation IT.

How are you positioned in the two key markets of China and Japan? How do you see these two markets going forward for Cognizant?

We have been operational in both China and Japan for nearly a decade now and are seeing great opportunities there. In both markets, we have been successful in associating with strategic customers in strategic areas such as digital marketing, where we can help clients gain competitive differentiation. For us, China and Japan are important for sourcing talent.

We are in Japan for the long term and continue to invest there. Our strategy in Japan is to not just grow our business organically with MNCs (multinational companies) and local Japanese customers, but also to work closely with local partners to deepen understanding of Japanese market requirements and sharpen onsite service delivery and Japanese language capabilities. In addition, we have entered into strategic alliance partnerships with global product partners to utilize their specific solutions for our Japanese customers.

We will continue to focus on Japanese MNCs as our primary targets to help them leverage advanced technology architectures to drive business transformation and global expansion. Apart from servicing dozens of clients from our development centre in China, we are also making investments to capitalize on China as a market. Cognizant serves four categories of clients from China: US and European clients; Chinese subsidiaries of MNCs; clients in Japan—because of cultural and linguistic similarities—and local Chinese clients.

You identified BFSI early, built digital capabilities, and rode these segments. What new-age sectors is the company betting on? Automation, artificial intelligence (AI)?

As the industrial economy makes way for the digital economy, the role of technology is shifting from supporting business to being its lifeblood. To be able to transform successfully to a new digital economy, enterprises need a new business and IT architecture to help them drive innovation and efficiency at scale as well as provide flexibility for designing new business processes. Just consider how traditional companies are now under threat. Betterment, a robo-advisor platform for wealth management, was only founded in 2008 and already has $3.5 billion under management.

Now, this pales in comparison to the big traditional banks, but it’s clear the game is changing. Robots, machine learning, AI, IoT (Internet of Things), 3D printing—so many buzzwords surround automation, but all point to the same theme: rote and repetitive processes are becoming digital, instrumented, analyzed and intelligent—and increasingly operated by smart machines instead of solely by humans. The digital platform in an example like Betterment is the key because it includes software AI, algorithms, data sources, people, connectivity, an always-on infrastructure—and in that sense, it is “born automated” from the outset. The output is enhanced customer experiences and an organization able to make better decisions about future products and services.

The need for efficiency, lower error rates, lower costs and faster throughput will drive aggressive adoption of automation. While these outcomes are welcome, the true value of digitization lies in the rich data and metadata that accumulates around process value chains, which will be further enhanced with the emergence of new technologies such as machine learning, deep learning and AI in the coming years—this is when the real transformation can begin. The underlying theme of this transformation is that technology will be far more prevalent in the coming years, and that provides large players like us a great opportunity to make meaningful and lasting contributions to solving some of the most pressing and significant societal and business problems. We are building capabilities to enable our clients to drive digital transformation at scale. Our Cognizant Digital Works approach, which combines strategy, design, technology and business consulting in one integrated model, is solid and showing results. Our focus is on scaling the capability quickly across the world. We are evolving our IT services capabilities to enable our clients to create next-generation technology infrastructures that are characterized by high levels of operational efficiency at the back-end, and enhanced by new digital capabilities on the front-end. Our market opportunities are substantial.

Do you see automation as a double-edged sword for firms as it can cannibalize existing business, while at the same time increase profitability?

We help smart robots complement smart people, and we call this Intelligent Process Automation (IPA). IPA is enabled by a powerful, proprietary framework of technology, methodology, best practices and tools, and integrated analytics, and aligned to address functional processes across industries and horizontal functions. Today, we can support entire business processes, inclusive of the underlying IT infrastructure and platforms, and have created utility or BPaaS (business process-as-a-service) models that efficiently harness the power of automation, to drive more effective processes through analytics. Our ability to deploy IPA solutions quickly and with depth of expertise and scale in vertical industry functions, allows our customers to become agile at scale—without adding significant resources. Our view is that in aggregate, IPA won’t supplant knowledge labour as much as work in tandem to make smart humans smarter and businesses more agile. IPA allows for the automation of front, middle and back office processes typically performed by teams of people accessing multiple applications and following defined, rules-based guidelines. With IPA, these same teams of people are now supported and empowered by smart robots and achieving ever-greater levels of performance and innovation. Automation becomes a fundamental enabler, working powerfully in tandem with humans, to drive not only new levels of process efficiency, but to tackle business challenges differently, like never before.

As all IT firms look to new markets such as China, Japan and South-East Asia, will it lead to a scenario where there is an oversupply of services coupled with declining demand? Already, the BFSI segment’s demand for IT/BPO services is slowing, and an oversupply of competitors vying for the business is driving down prices, profit margins and revenue growth.

Each of these individual markets in the Asia-Pacific region is big and diverse. Enterprises in these markets are re-examining how they operate, moving from merely incremental levels of performance efficiency to building new digital business capabilities. The only way for businesses to adapt in this new era is to simultaneously improve efficiency and scale with existing systems, while driving business innovation through newer technologies. This presents significant opportunities for service providers such as Cognizant.

As in other markets and industries, healthy competition and a mix of capable providers will deliver the best business results to APAC customers. As digital technologies put industries and businesses at a crossroads, innovation cycles have compressed dramatically. Clients need a partner who has the ability to integrate and execute end-to-end transformations driving both efficiency and innovation. There will be a clear bifurcation of service providers in these markets, and those who cannot compete on both sides of the dual mandate will try to compete on volume and price. However, going by what has happened during previous big technology shifts, notwithstanding isolated instances of irrational pricing over the short term, overall pricing will remain largely stable and value creators will be able to build deeper roots in the market.

Our deep industry, process, legacy and transformative digital knowledge, our integrated Cognizant Digital Works model combining strategy, design, technology and business consulting, and our global footprint affording access to the best talent across the world will continue to give us an edge in these markets as well.

Source: livemint.com-Technology is becoming the lifeblood of business

Google cloud outage highlights more than just networking failure

Google Cloud Platform went dark some weeks ago in one of the most widespread outages to ever hit a major public cloud, but the lack of outcry illustrates one of the constant knocks on the platform.

Users in all regions lost connection to Google Compute Engine for 18 minutes shortly after 7 p.m. PT on Monday, April 11. The Google cloud outage was tied to a networking failure and resulted in a black eye for a vendor trying to shed an image that it can’t compete for enterprise customers.

Networking appears to be the Achilles’ heel for Google, as problems with that layer have been a common theme in most of its cloud outages, said Lydia Leong, vice president and distinguished analyst at Gartner. What’s different this time is that it didn’t just affect one availability zone, but all regions.

“What’s important is customers expect multiple availability zones as reasonable protection from failure,” Leong said.

Amazon has suffered regional outages but has avoided its entire platform going down. Microsoft Azure has seen several global outages, including a major one in late 2014, but hasn’t had a repeat scenario over the past year.

This was the first time in memory a major public cloud vendor had an outage affect every region, said Jason Read, founder of CloudHarmony (now owned by Gartner), which has monitored cloud uptime since 2010.

Based on the postmortem Google released, it appears a number of safeguards were in place, but perhaps they should have been tested more prior to this incident to ensure this type of failure could have been prevented, Read said.

It sounds like, theoretically, they had measures in place to prevent this type of thing from happening, but those measures failed.

Jason Read, founder, CloudHarmony

“It sounds like, theoretically, they had measures in place to prevent this type of thing from happening, but those measures failed,” he said.

Google declined to comment beyond the postmortem.

Google and Microsoft both worked at massive scale before starting their public clouds, but they’ve had to learn there is a difference between running a data center for your own needs and building one used by others, Leong said.

“You need a different level of redundancy, a different level of attention to detail, and that takes time to work through,” she said.

With a relatively small market share and number of production applications, the Google cloud outage probably isn’t a major concern for the company, Leong said. It also may have gone unnoticed by Google customers, unless they were doing data transfers during those 18 minutes, because many are doing batch computing that doesn’t require a lot of interactive traffic with the broader world.

“Frankly, this is the type of thing that industry observers notice, but it’s not the type of thing customers notice because you don’t see a lot of customers with a big public impact,” Leong said. By comparison, “when Amazon goes down, the world notices,” she said.

Measures have already been taken to prevent a reoccurrence, review existing systems and add new safeguards, according to a message on the cloud status website from Benjamin Treynor Sloss, a Google executive. All impacted customers will receive Google Compute Engine and VPN service credits of 10% and 25% of their monthly charges, respectively. Google’s service-level agreement calls for at least 99.95% monthly uptime for Compute Engine.
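
For context on the 99.95% figure, the short calculation below converts a monthly uptime SLA into an allowed-downtime budget, assuming a 30-day month for simplicity; whether credits apply in a given incident depends on the provider’s SLA terms.

```python
# Convert a monthly uptime SLA into a downtime budget.
sla_uptime = 0.9995                 # 99.95% monthly uptime
minutes_per_month = 30 * 24 * 60    # 30-day month, for illustration

downtime_budget = (1 - sla_uptime) * minutes_per_month
print(f"Allowed downtime: about {downtime_budget:.1f} minutes per month")  # ~21.6
```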

Networking failure takes down Google’s cloud

The incident began with dropped connections when inbound Compute Engine traffic was not routed correctly, because a configuration change around an unused IP block didn’t propagate as it should have. Services also dropped for VPNs and L3 network load balancers. The management software’s attempt to revert to the previous configuration as a failsafe triggered an unknown bug, which removed all IP blocks from the configuration and pushed a new, incomplete configuration.

A second bug prevented a canary step from correcting the push process, so more IP blocks began dropping. Eventually, more than 95% of inbound traffic was lost, which resulted in the 18-minute Google cloud outage that was finally corrected when engineers reverted to the most recent configuration change.
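
Google’s actual deployment pipeline is not described beyond the postmortem, but the general shape of the canary step it mentions can be sketched as follows: apply the new configuration to a small slice first, and only continue to the full fleet if that slice stays healthy. The `apply` and `healthy` hooks and the `current_config` attribute below are hypothetical.

```python
def push_config(new_config, canary_targets, fleet_targets, apply, healthy):
    """Roll a configuration out to a canary slice first; abort and restore
    the canaries if they become unhealthy, otherwise continue to the fleet."""
    previous = {target: target.current_config for target in canary_targets}

    for target in canary_targets:
        apply(target, new_config)

    if not all(healthy(target) for target in canary_targets):
        for target in canary_targets:
            apply(target, previous[target])  # restore the canary slice
        raise RuntimeError("canary rejected the configuration; rollout aborted")

    for target in fleet_targets:
        apply(target, new_config)
```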

The outage didn’t affect Google App Engine, Google Cloud Storage, internal connections between Compute Engine services and VMs, outbound Internet traffic, or HTTP and HTTPS load balancers.

SearchCloudComputing reached out to a dozen Google cloud customers to see how the outage may have affected them. Several high-profile users who rely heavily on its resources declined to comment or did not respond, while some smaller users said the outage had minimal impact because of how they use Google’s cloud.

Vendasta Technologies, which builds sales and marketing software for media companies, didn’t even notice the Google cloud outage. Vendasta has built-in retry mechanisms and most system usage for the company based in Saskatoon, Sask., happens during normal business hours, said Dale Hopkins, chief architect. In addition, most of Vendasta’s front-end traffic is served through App Engine.

In the five years Vendasta has been using Google’s cloud products, on only one occasion did an outage reach the point where the company had to call customers about it. That high uptime means the company doesn’t spend a lot of time worrying about outages and isn’t too concerned about this latest incident.

“If it’s down, it sucks and it’s a hard thing to explain to customers, but it happens so infrequently that we don’t consider it to be one of our top priorities,” Hopkins said.

For less risk-tolerant enterprises, reticence in trusting the cloud would be more understandable, but most operations teams aren’t able to achieve the level of uptime Google promises inside their own data center, Hopkins said.

Vendasta uses multiple clouds for specific services because they’re cheaper or better, but it hasn’t considered using another cloud platform for redundancy because of the cost and skill sets required to do so, as well as the limitations that come with not being able to take advantage of some of the specific platform optimizations.

All public cloud platforms fail, and it appears Google has learned a lesson on network configuration change testing, said Dave Bartoletti, principal analyst at Forrester Research, in Cambridge, Mass. But this was particularly unfortunate timing, on the heels of last month’s coming-out party for the new enterprise-focused management team at Google Cloud.

“GCP is just now beginning to win over enterprise customers, and while these big firms will certainly love the low-cost approach at the heart of GCP, reliability will matter more in the long run,” Bartoletti said.

 

Source: searchcloudcomputing.techtarget.com-Google cloud outage highlights more than just networking failure

The augmented project manager

The coming artificial intelligence revolution will dramatically change the project management discipline. Three capability areas are ripe for improvement.

After seeing recent industry presentations on bots, machine learning and artificial intelligence (AI), I see the application of these technologies changing the practice of project management. The question is, is this future desirable or will we have a choice?

The project manager role

Much of the daily work of a project manager has not dramatically changed over the last 30 years. We may use different management methodologies, but we spend a great deal of time manually collecting and disseminating information between the various roles on a project. This effort directly results from the need to fill the information gaps caused by systems that can’t capture what is truly happening within the organization. In a recent PMI-sponsored roundtable discussion, missing or incorrect data was highlighted as a significant issue. Today’s systems are totally dependent on humans entering information, which can be nuanced, incomplete or simply never entered.

The combination of artificial intelligence in the form of bots and cloud computing could radically change this situation. PM effectiveness would be dramatically enhanced and likely the need for some PM roles diminished. In the future, as data capture becomes richer and more automated, we may see new advisor services that arise from improved data quality and completeness. I foresee significant improvements in three key areas.

Planning

One of the black arts of project management is predicting the future, where we represent this future state as a new project plan. We draw upon our own domain and company experience to determine the steps, resources and time needed to accomplish the goal. Our success rate at predicting the future is not good. Our predictions are fraught with error due to the limits of our experience and that of the organization. If you’ve ever managed a project for something completely new to an organization, you are familiar with this situation.

Imagine if your scheduling bot generated a proposed project plan based on the aggregated and anonymized experiences of similar-sized companies doing the same type of project. Today, we use techniques such as Monte Carlo simulation to approximate this. The bot could incorporate real-world data, potentially yielding better results.
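
As a minimal sketch of what that Monte Carlo step looks like in practice: each task gets an optimistic, most-likely and pessimistic estimate, the chain is simulated many times, and the spread of totals gives a probabilistic finish date. Task names and numbers below are purely illustrative.

```python
import random

# Illustrative (optimistic, most likely, pessimistic) durations in days
# for a chain of sequential tasks.
tasks = {
    "requirements": (3, 5, 10),
    "build":        (10, 15, 30),
    "test":         (5, 8, 20),
}

def simulate_once():
    """One random walk through the plan using triangular distributions."""
    return sum(random.triangular(low, high, mode)
               for low, mode, high in tasks.values())

runs = sorted(simulate_once() for _ in range(10_000))
p50 = runs[len(runs) // 2]
p90 = runs[int(len(runs) * 0.9)]
print(f"Median finish: {p50:.1f} days; 90th percentile: {p90:.1f} days")
```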

Benchmarking of business data has been around for some time. These new cloud capabilities could see benchmarking expanded to include real-time project management data.

Resource allocation

Another common challenge of project managers is that of resource constraints. Imagine a world where your resource pool is the world and it’s as easy to use as Amazon.

We are seeing the continued growth of the freelance nation trend in corporations. Currently, corporations use agencies to locate and recruit talent. Agencies may simply be a stopgap as bots become a more efficient clearinghouse of freelancer information. Staff augmentation agencies could become obsolete.

For example, your resourcing bot determines that you need a social media expert on your project on April 5th for two days of work. It searches data sources like LinkedIn and your public cloud calendar to find a list of suitable and available candidates. Three are on the West Coast of the U.S., one is in Paris and one is in Sydney. It then automatically reaches out to these candidates with offers. If multiple people accept, it automatically manages the negotiation. Once complete, the planning bot is informed, a virtual desktop with requisite software is provisioned, user login credentials are generated and the specific task information is sent to them. When the job is complete and rated as satisfactory, the bot coordinates with your accounts payable system to pay the freelancer. The planning bot automatically updates the plan and pushes the data to the BI dashboards.
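
A toy sketch of the matching step in that scenario follows; the candidate data and fields are invented, and a real bot would pull them from live sources and go on to handle offers and negotiation as described above.

```python
from datetime import date

# Invented candidate profiles, standing in for data a resourcing bot
# might assemble from public profiles and shared calendars.
candidates = [
    {"name": "Candidate A", "skills": {"social media"},
     "available": {date(2025, 4, 5), date(2025, 4, 6)}, "location": "US West Coast"},
    {"name": "Candidate B", "skills": {"social media"},
     "available": set(), "location": "Paris"},
]

def shortlist(required_skill, required_date):
    """Candidates who have the skill and are free on the required date;
    the bot would then send offers to everyone on the list."""
    return [c for c in candidates
            if required_skill in c["skills"] and required_date in c["available"]]

for candidate in shortlist("social media", date(2025, 4, 5)):
    print("Offer sent to", candidate["name"], "in", candidate["location"])
```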

Tracking

Project feedback loops on work are awful. The largest challenge is incomplete data, which results from increasingly fragmented work days, the limits of the worker’s memory and tools that rely on human input. The data is also incomplete because entering it provides little benefit to the person doing so.

Workers are overwhelmed with tasks arriving via multiple communication channels and no consolidated view.

Imagine a world where the timesheet is antiquated. Today, we have systems such as Microsoft Delve that know what content you’ve touched. We have IP-based communication systems that know what collaborations you’ve conducted. We have machine learning capabilities that can determine what you’ve discussed and the content of the documents you’ve edited. We now have facial recognition capabilities and other features that can track and interpret your movements. Given all of this, why is a timesheet necessary?

Professional athletes use this type of data in the competition setting to improve their performance, using the data feedback to spot areas of development. Combining this activity information could prove a boon to productivity.

I can see this working as a “Fitbit” type feedback loop that helps the worker be better at their job and allows them to get home on time. Doing so provides direct benefit to the employee and reduces the Big Brother feel of this data.

The personal bot acts as a personal assistant, reminding the worker of tasks mined from meeting notes and marking tasks as complete in real time. All the while, it also keeps track of time spent, enabling the worker to get a better picture of how they spend their time.

Brave new world

There are many challenges with the view I’ve presented above. Many of these challenges are the same ones we faced when we automated and integrated procurement processes. It is also hard to deny that there are compelling opportunities to improve workers’ lives as well. Bots, machine learning and artificial intelligence are reachable capabilities that should be incorporated into the PM toolbox as you plan your organization’s future work management needs.

Source: CIO.com-The augmented project manager

Is Failure An Option In Project Management?

When it comes to project management, failure is sometimes an option. Many of us have pulled ourselves from the smoking ruins of a project to say, “What happened?” and “How did this not succeed?”

In just about any movie or book that includes a military presence, there is more than likely a line of dialogue by the hero or General to the effect of, “Failure is NOT an option!” Perhaps, when aliens are taking over the earth and are probably going to destroy all of humanity, that could arguably be true. However, I think that when it comes to project management, it is not true… not by a long shot. Many of us have pulled ourselves from the smoking ruins of a project to say, “What happened?” and “How did this not succeed?”

A study by PricewaterhouseCoopers, which reviewed 10,640 projects from 200 companies in 30 countries and across various industries, found that only 2.5% of the companies successfully completed 100% of their projects. A study published in the Harvard Business Review, which analyzed 1,471 IT projects, found that the average overrun was 27%, but one in six projects had a cost overrun of 200% on average and a schedule overrun of almost 70%. And we all have heard about large construction projects — the Channel Tunnel, Euro Disney, and Boston’s “Big Dig” — that ended up costing almost double their original estimate.

The great thing about success is that it encourages us to take more risks, try more, innovate more, and engage more. The natural outcome is that there will be times when we overreach, commit too many resources, and don’t perform the appropriate risk analysis. That’s when failure might occur. And that’s when we close the project and shake our heads in disappointment. So, what’s the solution? Do we simply play it safe? Do we not take on projects where there is a great amount of risk?

The short answer? No.

I have had more successful projects than failed projects, but I can guarantee that I wouldn’t have had the winning season without the losses. Here are a few things that I’ve learned along the way that helped me when I had to tell Sr. Management that the project was finished, and unsuccessfully finished at that.

Risk management is real

One of my fellow PMPs, J Ashley Hunt, referred to herself as a “catastrophic thinker.” In other words, she was always looking for the “what ifs.” I kiddingly called her Monte Carlo. It’s not that I didn’t do qualitative and quantitative risk analysis; rather, I didn’t foresee any of those medium-impact risks occurring. In the Bible there’s a verse, “the little foxes spoil the vineyard.” Yes, a big risk event can bring you down, but so can the death of a thousand little risks. Use your failed project and do some serious reverse risk analysis. At what point did things go south? Who and what policies were involved? Were there any warnings or triggers that might have gone unnoticed? Was the mitigation/transference/avoidance up to the task?

Lessons learned aren’t just on the exam

There’s a joke about how the answer on the PMP exam question of “What input is used for such and such?” is the WBS (work breakdown structure), and that the answer to “What is the primary output of such and such?” is lessons learned. The other aspect is that lessons learned is not just a one-off meeting at the end of the project. I strongly believe that at each phase or major milestone, there should be a debriefing with your team. If your team is too large, then break it down into sub groups and then have those team leaders meet with you to discuss the gathered details. This ONE thing alone helped turn around a development project I was working on with Cisco.

The bottom line is don’t get down on yourself too much. Shake off the dust, admit what you did wrong, and then get back into the game. Many of you have probably heard about the man who was defeated for state legislature, failed in his business a year later, lost his sweetheart two years later, had a nervous breakdown a year after that, tried again for Congress and lost, was rejected to be a land officer, defeated for US Senate twice, and lost his re-nomination for Congress. That man was Abraham Lincoln. He was elected President two years after his final Senatorial defeat. Failure is an option.

Source: CIO.com-Is Failure An Option In Project Management?

Gaining Productivity with Robotics Process Automation (RPA)

With so much hype in the market about Robotic Process Automation (RPA) and the various vendors all rebranding their existing software to RPA, customers are confused by all the conflicting information and range of functionality. Blue Prism brings a number of experts together to discuss what is going on in the marketplace and the most recent research that has been conducted on the early adopters in this space.

Join us as Sarah Burnett, VP and global lead for automation research at the analyst firm Everest Group, shares her analysis of the automation market, the technology adoption trends, and the outlook for the future. She is followed by London School of Economics Professor Leslie Willcocks and Dr. Mary Lacity from the University of Missouri, who have studied 14 early adopters of RPA. The results of that research have culminated in their book “Service Automation: Robots and The Future of Work.”

Source: Vimeo-London School of Economics, Missouri Univ. & Everest Research

Is Cloud clouding the issue?

Cloud computing isn’t just fashionable, it’s widely considered to be a required component of any modern IT solution. But the varied and ‘catch-all’ nature of a term like ‘cloud computing’ leaves it wide open to confusion. There are fundamental differences between public and private clouds that can substantially affect their suitability for one business or another.

One is occupied by multiple tenants, externally managed and rapidly expandable; the other is privately managed and secured specifically for the needs of a single tenant. The only characteristic these solutions share is virtualisation. Despite being decidedly less ‘sexy’ and marketable, virtualisation is the real game-changer in the IT industry today.

Public cloud

The principal value of a cloud solution is often assumed to be cost saving, which is undoubtedly the case for the public cloud. These solutions can be rapidly expanded to meet business requirements and cope with spikes in demand on resources. Opting for the public cloud outsources the cost of maintaining, orchestrating and optimising your own systems.

Another key element of the public cloud is the speed with which it can be implemented. Cloud providers offer out-of-the-box solutions that allow for very agile implementations, further keeping costs down. These cost saving elements are a major reason behind the widespread uptake of public cloud by SMEs. Many companies find themselves in the situation where it makes commercial sense to offload the management of their IT infrastructure, lowering the cost-barrier to new technologies and dramatically simplifying a major business process. Public clouds offer access to a smart, automated and adaptive IT solution at a fraction of the cost of a private installation of similar capabilities.

Private cloud

A private cloud brings very different benefits to a business. Because individual businesses are responsible for the management and upkeep of private cloud systems, greater control, not cost saving, is the primary goal of a private cloud. When one business is the sole tenant of cloud infrastructure, it can control and customise it to its very specific needs. Processes relating to R&D, supply chain management, business analytics and enterprise resource planning can all be effectively migrated to a private cloud.

Where public clouds can offer flexibility and scalability to a huge degree at reduced cost, private clouds offer increased control and customisation over a virtualised IT environment. This is an important consideration for businesses in industries or sectors that are governed by strict regulations around data handling, security or transaction speed that may effectively prohibit the use of public cloud solutions.

Moving on from Safe Harbour

The European Court of Justice’s recent decision to nullify the ‘Safe Harbour’ agreement has caused many businesses to re-evaluate their cloud solutions in order to adhere to the new rules governing geographical limitations on data transfer and storage. What becomes clear here is that all cloud implementations are absolutely not created equal. Given the misplaced connotations around the term, talking about ‘cloud’ can actually unhelpfully confuse discussions that should instead be focused on the required merits of virtualisation and how they can be applied to the specific IT service needs of a particular business.

Cloud computing is not the simple answer to the challenges facing enterprise IT today. Rather it is an important IT sourcing decision in the journey towards a multi-sourced, software-defined IT system that is specifically tailored to the needs of modern business. As businesses mature, managed self-built and public clouds must co-exist with each other as well as with legacy systems.

Here, the role of the data centre in enterprise IT architecture becomes crucial. All clouds have a physical footprint of some kind or another. More often than not, this footprint resides in a data centre that is capable of facilitating the high power-density, efficient workload focus and seamless interconnectivity between data sources that underline the core principles of virtualisation.

The hot topics of enterprise IT – the IoT, big data manipulation, machine-to-machine communication – all fundamentally rely on the software-defined approach that data centres can provide. When it comes to how enterprise IT can cope with the challenges of these applications, the cloud is only a small part of the answer.

Source: itproportal-Is Cloud clouding the issue?