Why 2015 won’t be the year of software-defined everything

With the technical and market parameters of SDE still taking shape, CIOs are in no rush to adopt, despite the hype going into overdrive.

Just as everybody starts getting comfortable with the once derided term “cloud”, the IT industry marketing machine foists a new one on us.

And although “software-defined everything” (SDE) heralds a return to the tried-and-tested three-letter acronym (TLA) formula, its precise meaning to many CIOs is as nebulous as its more descriptive predecessor used to be.

Like cloud before it, SDE is being used as a catch-all term for all the latest technologies and approaches that promise to take us to that long-touted nirvana of true IT flexibility and agility through advances in virtualisation.

Clearly, it’s a vital approach for the giant cloud operators and providers looking to build out vast, scalable datacentres with industrial scale and efficiency. Indeed, it’s from developments like Facebook’s Open Compute Project (an initiative to drive standardisation and automation right through the datacentre) that SDE has largely taken its cue. But does the approach really translate to enterprises more broadly, and if so is now the time to be doing something about it?

While cloud has today largely come to mean a way of easily spinning up virtual servers, processing power and storage – and the services and applications hosted on them – SDE refers to the idea of virtualising everything in the datacentre and beyond, from compute and storage to networks and devices. This is meant to give IT departments the capability to automate all their IT provisioning and management entirely through software, using standard commodity hardware that will effectively become invisible to them. Finally, we’re told, CIOs will be able to focus wholly on delivering a fast, efficient, fully scalable IT service to their organisations without the inevitable bottlenecks of integration and manual configuration commonly encountered with today’s public and private clouds.

Some analysts are bullish. IDC predicts the market for software-defined networking (SDN) alone will grow from less than $1bn in 2014 to $3.7bn by 2016 and $8bn by 2018. Gartner claims that by 2017 more than half of all enterprises will have adopted an architectural approach similar to that of the cloud giants. Frost & Sullivan, meanwhile, says the hypergrowth starts here – at least in Asia-Pacific. “2015 is seen to be the year of SDE as the software-defined revolution spreads beyond the boundaries of the datacentre,” the analyst recently predicted.

Read more at: ComputerWeekly.com Why 2015 won’t be the year of software-defined everything by Jim Mortleman

Microservices: How to prepare next-generation cloud applications

In March 2014, Martin Fowler and James Lewis from ThoughtWorks published an article on microservices. Since then, the concept has gained prominence among web-scale startups and enterprises.

A microservice architecture promotes developing and deploying applications composed of independent, autonomous, modular, self-contained units.

This is fundamentally different from the way traditional, monolithic applications are designed, developed, deployed and managed.

Distributed computing has been constantly evolving in the past two decades. During the mid-90s, the industry evaluated component technology based on Corba, DCOM and J2EE. A component was regarded as a reusable unit of code with immutable interfaces that could be shared among disparate applications.

The component architecture represented a shift away from how applications were previously developed using dynamic-link libraries, among others.

However, the communication protocol used by each component technology was proprietary – RMI for Java, IIOP for Corba and RPC for DCOM. This made interoperability and integration of applications built on different platforms using different languages a complex task.

Evolution of microservices

With the acceptance of XML and HTTP as standard protocols for cross-platform communication, service-oriented architecture (SOA) attempted to define a set of standards for interoperability.

Initially based on the Simple Object Access Protocol (Soap), the standards for web services interoperability were handed over to the standards consortium Oasis.

Suppliers like IBM, Tibco, Microsoft and Oracle started to ship enterprise application integration products based on SOA principles.

While these were gaining traction among enterprises, young Web 2.0 companies started to adopt representational state transfer (Rest) as their preferred approach to distributed computing.

With JavaScript gaining ground, JavaScript Object Notation (JSON) and Rest quickly became the de facto standards for the web.

Key attributes of microservices

Microservices are fine-grained units of execution. They are designed to do one thing very well. Each microservice has exactly one well-known entry point. While this may sound like an attribute of a component, the difference is in the way they are packaged.

Microservices are not just code modules or libraries – they contain everything, from the operating system, platform, framework and runtime to the dependencies, packaged as one unit of execution.

Each microservice is an independent, autonomous process with no dependency on other microservices. It doesn’t even know or acknowledge the existence of other microservices.

Microservices communicate with each other through language- and platform-agnostic application programming interfaces (APIs). These APIs are typically exposed as Rest endpoints or can be invoked via lightweight messaging systems such as RabbitMQ. They are loosely coupled with each other, avoiding synchronous, blocking calls wherever possible.
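
By way of illustration, a minimal microservice exposing a Rest endpoint might look like the following Python sketch, built on the Flask framework; the service name, route and data are hypothetical examples, not drawn from the article.

# A minimal sketch of a microservice exposing a Rest API.
# Assumes Flask is installed (pip install flask); the /orders
# route and payload are hypothetical examples.
from flask import Flask, jsonify

app = Flask(__name__)

# In-memory stand-in for the service's own datastore; a real
# microservice would own its persistence layer outright.
ORDERS = {"1001": {"item": "keyboard", "quantity": 2}}

@app.route("/orders/<order_id>", methods=["GET"])
def get_order(order_id):
    # Look up the order; other services only ever see this JSON
    # contract, never the internals of the process.
    order = ORDERS.get(order_id)
    if order is None:
        return jsonify({"error": "not found"}), 404
    return jsonify(order)

if __name__ == "__main__":
    app.run(port=5000)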

Factors that influence and accelerate the move to microservices

Contemporary applications rely on continuous integration and continuous deployment pipelines for rapid iteration. To take advantage of this, the application is split into smaller, independent units based on functionality.

Each unit is assigned to a team that owns the unit and is responsible for improving it. By adopting microservices, teams can rapidly ship newer versions of microservices without disrupting the other parts of the application.

The evolution of the internet of things and machine-to-machine communication demands new ways of structuring the application modules. Each module should be responsible for one task participating in the larger workflow.

Container technologies such as Docker, Rocket and LXD offer portability of code across multiple environments. Developers can move code written on their development machines seamlessly across virtual machines, private clouds and public clouds. Each running container provides everything from the operating system to the code responsible for executing a task.
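
As a hedged sketch of that portability, the Docker SDK for Python can launch the same containerised unit on any host running Docker; the image name and port mapping below are hypothetical.

# Sketch: launching a containerised microservice via the Docker SDK
# for Python (pip install docker). The image name is a hypothetical
# example; a running Docker daemon is assumed.
import docker

client = docker.from_env()  # connect to the local Docker daemon

# The image bundles OS layers, runtime and code, so the same call
# works on a laptop, a virtual machine or a cloud host.
container = client.containers.run(
    "example/order-service:1.0",  # hypothetical image
    detach=True,
    ports={"5000/tcp": 5000},
)
print(container.status)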

Infrastructure as code is a powerful concept that lets developers deal with the underlying infrastructure programmatically, dynamically provisioning, configuring and orchestrating hundreds of virtual servers. Combined with containers, this capability enables tools such as Kubernetes to deploy clusters that run microservices dynamically.
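
A common expression of this idea is provisioning servers through a cloud provider’s API. The following Python sketch uses AWS’s boto3 library; the machine image, instance type and count are hypothetical, and configured AWS credentials are assumed.

# Sketch of infrastructure as code: provisioning virtual servers
# programmatically with boto3 (pip install boto3). The AMI ID,
# instance type and counts are hypothetical examples.
import boto3

ec2 = boto3.resource("ec2")

instances = ec2.create_instances(
    ImageId="ami-0123456789abcdef0",  # hypothetical machine image
    InstanceType="t2.micro",
    MinCount=1,
    MaxCount=10,  # scale the fleet by changing a number, not by hand
)
print([i.id for i in instances])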

Developers are choosing best-of-breed languages, frameworks and tools to write parts of applications. One large application might be composed of microservices written in Node.js, Ruby on Rails, Python, R and Java. Each microservice is written in a language that is best suited for the task.

This is also the case with the persistence layer. Web-scale applications are increasingly relying on object storage, semi-structured storage, structured storage and in-memory cache for persistence. Microservices make it easy to adopt a polyglot strategy for code and databases.

Benefits of microservices

With microservices, developers and operators can develop and deploy self-healing applications. Since each microservice is autonomous and independent, it is easy to monitor and replace a faulty service without impacting any other.

Unlike monolithic applications, microservice-based applications can be selectively scaled out.

Instead of launching multiple instances of the application server, it is possible to scale out a specific microservice on demand. When the load shifts to other parts of the application, that microservice can be scaled in while a different one is scaled out. This delivers better value from the underlying infrastructure, as the need to provision new virtual machines gives way to provisioning new microservice instances on existing virtual machines.
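
As a sketch of such selective scaling, assuming a cluster managed by Kubernetes (mentioned earlier in the article), the official Kubernetes Python client can raise the replica count of a single service; the deployment name and namespace below are hypothetical.

# Sketch: scaling out one microservice independently with the
# Kubernetes Python client (pip install kubernetes). A working
# kubeconfig is assumed; the deployment name is hypothetical.
from kubernetes import client, config

config.load_kube_config()
apps = client.AppsV1Api()

# Raise only the order service's replica count; every other
# microservice in the application is left untouched.
apps.patch_namespaced_deployment_scale(
    name="order-service",
    namespace="default",
    body={"spec": {"replicas": 5}},
)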

Developers and administrators will be able to opt for best-of-breed technologies that work best with a specific microservice. They will be able to mix and match a variety of operating systems, languages, frameworks, runtimes, databases and monitoring tools.

Finally, by moving to microservices, organisations can invest in reusable building blocks that are composable. Each microservice acts like a Lego block that can be plugged into an application stack. By investing in a set of core microservices, organisations can assemble them to build applications catering to a variety of use cases.

Getting started with microservices

Docker is the easiest way to get started with microservices. The tools and ecosystem around Docker make it a compelling platform for both web-scale startups and enterprises.

Enterprises can sign up for hosted container services like Google Container Engine or Amazon EC2 Container Service to get a feel for deploying and managing containerised applications.

Based on that learning, enterprises can consider deploying container infrastructure on-premise.

Source: ComputerWeekly.com: Microservices: How to prepare next-generation cloud applications by Janakiram MSV

CIOs, CDOs and CMOs: New IT roles and responsibilities

UBS CIO Oliver Bussmann explores how a new set of technology stakeholders is changing the dynamics of IT decision-making.

The role of the CIO has changed dramatically in recent years — but so too has the line-up of executives who influence the shape of IT projects, programs and budgets.

According to UBS group CIO Oliver Bussmann, there are now three clear levels of maturity in the IT leader’s role. “Typically as a functional CIO you are driving operational excellence, the shape of your infrastructure and the applications portfolio, your sourcing and partner strategy,” he says, highlighting the availability of various industry benchmarks for measuring that.

The second phase, he observes, is for the CIO to become part of business transformation, to sit at the management table with the rest of the business, mapping out business plans and turning those into IT roadmaps and investments.

But the third maturity level involves an active engagement with innovation — understanding the impact of new digital technologies and business models and sharing that knowledge with internal and external customers. Such innovation can then be injected into new products and business approaches, says Bussmann. “That’s a different strategic role for the CIO and the IT organization, and one that can drive a major impact on revenue, brand awareness and so on.”

CIO or CDO

If that sounds like the job description of a chief digital officer, that is intentional. “From my perspective this third role, as a strategic, innovative CIO, certainly encompasses that of a chief digital officer. You need a sound understanding of the existing IT landscape, a deep knowledge of the business’s core processes and also the ability to leverage what you learn outside the organization.” That might involve taking advantage of disruptive technologies — such as mobile, social, cloud or big data — and mapping those onto business opportunities, he says. And no one is in a better position to do that.

“The CIO and the IT organization is perfectly placed to have that end-to-end knowledge and then to team up with other players like the chief marketing officer (CMO) and line of business heads to drive through those changes,” says Bussmann.

That engagement with the CMO, or with marketing heads across UBS’s different lines of business, is particularly important, he says, given the influence many marketing leaders now have over IT spending.

“At both UBS and previously as CIO of [business application software company] SAP, I have built a very close relationship with the chief marketing officer. With the growth of digital channels, CIOs and CMOs are teaming up to leverage the new and different ways of approaching, attracting, interacting with and retaining customers.”

And that “totally different game plan” is inevitably impacting the priorities that determine marketing spend and IT spend, he highlights. “There is a new level of [marketing] sophistication out there that requires strong technology support. So that relationship is now critical.”

Source: i-cio.com – CIOs, CDOs and CMOs: New IT roles and responsibilities

Gartner: CIOs boost spending on sci-fi technologies

CIOs in the UK and Ireland are expected to increase their spending by 1.4% in 2015, fuelled by the economic recovery, according to research company Gartner.

Technologies related to the internet of things (IoT), robotics and 3D printing are among the big areas of investment.

Gartner’s research found that 10% of CIOs have already deployed IoT.

“A lot of science fiction is becoming real investment, such as 3D printing,” said Gartner analyst Lee Weldon.

Gartner estimated that 5% of CIOs have already deployed 3D printing, while 9% of UK and Irish CIOs are implementing new forms of robotics. This is higher than the global average.

In 2015, the technologies attracting the highest level of investment will be analytics and cloud computing. According to Weldon, of the CIOs who said their organisations were developing new software, 21% were considering hosting it in the cloud as their first choice.

“After years of cost-cutting, strategic investment in information and technology is returning,” said Weldon.

He said companies are looking to drive business growth in new areas, pointing to widespread interest in technology trends such as cloud, mobility, IoT and digitisation.

CEOs recognise that growth often comes through technology innovation, according to Gartner’s research. “There is more investment from the business in technology,” said Weldon.

This year, spending on digital technology moved up three places to third priority, behind analytics and cloud but ahead of datacentre and infrastructure spending.

Gartner urged CIOs to spend the majority of their time working with areas of the business outside IT to ensure that the value of information and technology is understood and the right investments are made.

In 2015, leading CIOs will spend less than 40% of their time running the IT organisation, choosing instead to spend it with other CxOs (27%), business unit leaders (18%) and external customers (16%).

Source: Computerweekly.com Gartner: CIOs boost spending on sci-fi technologies by Cliff Saran

Apple invests in European datacentres as EU investigates tax break

Apple is investing €1.7bn in two European datacentres to support iTunes, the App Store, iMessage, Maps and Siri.

The investment comes at a time when the EU is investigating whether a corporation tax deal between Apple and the Irish government amounts to state aid.

The two datacentres each measure 166,000 square metres and will be located in Ireland and Denmark.

Apple said both will run on 100% renewable energy.

The facilities, located in County Galway, Ireland, and Denmark’s central Jutland, are expected to go online in 2017.

In Denmark, Apple said it is aiming to eliminate the need for additional generators by locating the datacentre adjacent to one of Denmark’s largest electrical substations.

The facility is also designed to capture excess heat from equipment inside the facility and conduct it into the district heating system to help warm homes in the neighbouring community.

In Ireland, Apple said it would recover land previously used for growing and harvesting non-native trees and restore native trees to Derrydonnell Forest.

Apple CEO Tim Cook said the “significant new investment” represents Apple’s biggest project in Europe to date.

“We’re thrilled to be expanding our operations, creating hundreds of local jobs and introducing some of our most advanced green building designs yet,” he said.

The datacentre investment comes just a few months after the EU said it would be investigating a tax deal between Apple and the Irish government. Due to the way it declares overseas sales, Apple pays only 2% corporation tax in Ireland.

Competition Commissioner Joaquín Almunia wrote a letter which stated: “In the light of the foregoing considerations, the commission’s preliminary view is that the tax ruling of 1990 (effectively agreed in 1991) and of 2007 in favour of the Apple group constitute state aid.”

In 2014, Apple chief financial officer Luca Maestri denied Apple agreed to bring jobs to Ireland in exchange for preferential tax treatment.

But the company now says it supports nearly 672,000 European jobs – including 530,000 directly related to the development of iOS apps – and employs 18,300 people across 19 European countries.

Apple has spent more than €7.8bn with European companies and suppliers, helping to build Apple products and support its operations around the world.

Source: ComputerWeekly – Apple invests in European datacentres as EU investigates tax break by Cliff Saran

The Mobile Workspace: A Strategy for Security – and ROI

What is it about mobility that brings out the ad hoc in IT? According to the 2014 Global State of Information Security Survey by PricewaterhouseCoopers and CIO magazine, only 42 percent of companies have a mobile security strategy.

Good for the 42 percent. But what about the 58 percent, a clear majority, that don’t have a mobile security strategy? It’s a sure bet that most of those companies have employees who use mobile devices. But security for those devices and the applications and data on them is not part of any strategy. Shocking.

The IT pros responding to that survey are refreshingly honest, but you have to ask: just what are they waiting for? In my last blog entry, I asked the somewhat rhetorical question of whether the mobile enterprise is a passing fad or a permanent characteristic of enterprise IT. It’s permanent, of course. Unless you think we’re going to return to the pre-mobile era. So what are the 58 percent waiting for?

“Right now PC and mobile security are still fairly separate,” says Christian Kane, an analyst with Forrester Research. “Companies don’t have policies in place regarding personal devices. They might not have all the data and application security tools they need.” Indeed, many survey respondents have not implemented such technologies as virtualized desktops, data loss prevention tools, asset management tools and a centralized data store. These technologies should complement an enterprise mobility management (EMM) solution that brings together mobile device management (MDM), mobile application management (MAM), enterprise file sync & share, single sign-on, virtual private networks (VPNs) and more.

Additionally, a mobile security strategy should go hand-in-hand with a mobile return-on-investment (ROI) strategy. The ROI of your mobile strategy will not be entirely tangible, but it should account for the increased productivity of mobile users and of the company as a whole, among other factors. Not surprisingly, some of the same technologies, like virtual desktops and enterprise mobility management, come into play for both security and ROI.

To best address both the security and the ROI challenges that come with mobility, organizations should focus on finding a single solution that enables employees to be productive – securely – wherever they are, on whichever device they happen to be using, over any type of network. This can be done with a mobile workspace solution that delivers a portable, always-on and always-connected work environment. That will maximize your employees’ engagement, which should in turn reduce turnover – both key to productivity and ROI.

So think of mobile security and mobile productivity at the same time, as inseparable components of a single strategy. If you’re part of the 58 percent, it’s never too late to start.

Source: CIO – The Mobile Workspace: A Strategy for Security – and ROI by Stan Gibson

Infrastructure as code and other prerequisites for Continuous Delivery

Are you ready for continuous delivery? This article outlines three prerequisites: automated testing, infrastructure as code and a staging environment.

Success with continuous delivery (CD) depends on four prerequisites. In the first article in this two-part series, experts discussed the importance of a well-established iterative development process. In this article, those experts focus on three practices: automated testing, infrastructure as code and a staging environment.

Software test automation isn’t an “all or nothing” affair. Most teams automate some steps and rely on manual testing for others. But continuous delivery demands a deeper commitment to automation. Why? Because, by definition, testing must take place continually, and the only way to manage it cost-effectively is through automation, said Tom Poppendieck, co-author of The Lean Mindset. As each increment of code is released, tests kick off automatically, making sure not only that new code works, but that it integrates properly with the larger codebase. “When software testing is automated at multiple levels, the risk and cost of making changes becomes dramatically cheaper,” Tom Poppendieck said.

The more you automate the test process, the easier it is to reach the goal of delivering software continuously, said Carl Caum, a prototype engineer at Puppet Labs, which sells configuration management software. He is not suggesting that software pros eliminate all manual tests, but said they should move steadily toward deeper and deeper levels of automation. “You start with acceptance testing. What defines success in our software? Write down your acceptance criteria first, and then automate that testing process.”

The key to success is automating as you go. Often that means conducting manual tests; then, where possible, turning manual steps into automated ones. He offered an example: “You do some manual, exploratory testing. When you come across a workflow that requires the same four steps every time, automate it,” he said. “The first time it’s exploratory, second time it’s a pattern, and the third time it’s automated.”
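
As an illustrative sketch of that progression, a check of this kind might be captured with pytest and the requests library; the endpoint and acceptance criteria below are hypothetical examples, not drawn from the article.

# Sketch: a repeated manual check turned into an automated test,
# runnable with pytest (pip install pytest requests). The service
# URL, route and expected fields are hypothetical.
import requests

BASE_URL = "http://localhost:5000"  # hypothetical service under test

def test_order_lookup_returns_expected_fields():
    # The step a tester used to perform by hand: fetch an order
    # and eyeball the response.
    response = requests.get(BASE_URL + "/orders/1001")
    assert response.status_code == 200
    body = response.json()
    # Acceptance criteria, written down first and then automated.
    assert "item" in body
    assert "quantity" in body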

Continuous delivery requires automating most test procedures, said Stephen Forte, chief strategy officer for mobile tool maker Telerik. “But at the end of the day, for public-facing applications, you want to have a human being look at the screens. Does the layout work? Are there any spelling mistakes?” These are things automated testing can’t catch, he said.

Read more at: TechTarget – Infrastructure as code and other prerequisites for CD by Jennifer Lent

Has HP taken a calculated risk in splitting the business?

HP’s split reflects a trend in the IT sector, where multifaceted, publicly listed suppliers add business lines over the years but are unable to realise the full potential value of the individual parts.

Dividing HP in two would also bring its PC business full circle, after the company acquired Compaq for $21bn in a controversial deal in 2002.

Download the report at: Uncertain times: HP’s latest chief faces a tough challenge