Extending human intelligence

The exciting change is applying deep learning and high-performance computing to achieve greater automation and accuracy in the interaction between computers and people

Oliver Schabenberger, SAS

SAS CTO Oliver Schabenberger at a panel discussion at the Analytics Experience in Las Vegas.

“Cognitive computing is disruptive, combining technologies such as natural language processing, image processing, text mining and machine learning to augment human intelligence,” says Oliver Schabenberger, chief technology officer at SAS.

SAS has supported cognitive technologies in analytics for decades, he states. “The exciting change is applying deep learning and high-performance computing to achieve greater automation and accuracy in the interaction between computers and people.”

The goal is to extend human intelligence and apply it to solve complex problems using big data and analytics, says Schabenberger, at the inaugural Analytics Experience 2016.

Cognitive computing helps people and machines interact in natural ways, he states. “The computer makes sense of the world around us: it senses, reads, listens and sees. It provides feedback and results by speaking or writing in natural language and directing our actions.”

He says cognitive capabilities based on deep learning and artificial intelligence will be embedded in SAS solutions built on the SAS Viya platform.

The latest SAS additions to the portfolio of analytics that contribute to cognitive computing are SAS Visual Data Mining and Machine Learning and SAS Visual Investigator.

Cognitive services include question-answer systems that drive analytics, make recommendations, or learn from user responses. Customers also will have access to cognitive analytics, image processing, and deep learning in the open SAS Viya platform, enabling them to build cognitive solutions.

These include unparalleled natural language processing and open, deep learning API (application programming interface) libraries sitting on top of advanced analytics.

Combined, they help developers create cognitive computing systems and apply them to high volumes of fast-moving data from text and images, and soon from audio and video too.

360 degree customer view

SAS also announces the release of Customer Intelligence 360, which solves a critical problem for marketers – a fragmented understanding of the customer across all channels.

“SAS Customer Intelligence 360 channels the power of data scientists to the digital marketers,” says Wilson Raj, global director of Customer Intelligence at SAS. “With modern analytics approaches, such as machine learning, marketers can easily combine insights from existing and emerging channels to steer marketing decisions that are truly customer-centric across their entire organisation.”

When all data is easily accessible, well governed and up-to-date, intelligence analysts can stay ahead of issues

Brooke Fortson, SAS
SAS also announces the launch of a cloud-ready investigation and alert management product that will allow analysts to quickly gain a complete view of people, relationships, networks, patterns, events, trends and anomalies across all available data.

SAS Visual Investigator can help investigators look for insider threats, disease outbreaks, loan risks, drug trafficking, fraud or other emerging issues.

“Complex and disparate data can really slow down investigators,” says Brooke Fortson, product marketing manager for data science and emerging technologies at SAS.

“SAS Visual Investigator alleviates these challenges by bringing together data and exposing patterns of interest. The visual and interactive interface lets users import data, perform point-and-click exploratory analysis, and access third-party systems. When all data is easily accessible, well governed and up-to-date, intelligence analysts can stay ahead of issues.”

Source: cio.co.nz-Extending human intelligence

Hiring Your First Chief AI Officer

A hundred years ago electricity transformed countless industries; 20 years ago the internet did, too. Artificial intelligence is about to do the same. To take advantage, companies need to understand what AI can do and how it relates to their strategies. But how should you organize your leadership team to best prepare for this coming disruption? Follow history.

A hundred years ago, electricity was really complicated. You had to choose between AC and DC power, different voltages, different levels of reliability, pricing, and so on. And it was hard to figure out how to use electricity: Should you focus on building electric lights? Or replace your gas turbine with an electric motor? Thus many companies hired a VP of Electricity to help them organize their efforts and make sure each function within the company was considering electricity for its own purposes or its products. As electricity matured, the role went away.

Recently, with the evolution of IT and the internet, we saw the rise of CIOs to help companies organize their information. As IT matures, it is increasingly becoming the CEO’s role to develop their companies’ internet strategy. Indeed, many S&P 500 companies wish they had developed their internet strategy earlier. Those that did now have an advantage. Five years from now, we will be saying the same about AI strategy.

AI is still immature and evolving quickly, so it is unreasonable to expect everyone in the C-suite to understand it completely. But if your industry generates a large amount of data, there is a good chance that AI can be used to transform that data into value. To the majority of companies that have data but lack deep AI knowledge, I recommend hiring a chief AI officer or a VP of AI. (Some chief data officers and forward-thinking CIOs are effectively taking on this role.)

The benefit of a chief AI officer is having someone who can make sure AI gets applied across silos. Most companies have naturally developed siloed functions in order to specialize and become more efficient. For the sake of argument, let’s say your company has a gift card division. There is a reasonable chance that AI could make the selling and processing of gift cards much better. If the team has the expertise to attract and deploy AI talent, by all means let them do so! However, in most cases, that’s unrealistic. Because AI talent is extremely scarce right now, it is unlikely that they will attract top talent to work on gift cards at the division level.

A dedicated AI team has a higher chance of attracting AI talent and maintaining standards than a single gift card division does — and anyway the new talent can be matrixed into the other business units in order to support them. But the dedicated team needs leadership, and I am seeing more companies hire senior AI leaders to build up AI teams across functions.

Hiring the right AI leader can dramatically increase your odds of success, but only if you pick the right person. Here are some traits I recommend you look for in a chief AI officer or a VP of AI, based on my experience in leading and nurturing some of the most successful AI teams at Google, Stanford, and Baidu:

  • Good technical understanding of AI and data infrastructure. For example, they should ideally have built and shipped nontrivial machine learning systems. In the AI era, data infrastructure — how you organize your company’s databases and make sure all the relevant data is stored securely and accessibly — is important, though data infrastructure skills are arguably more common.
  • Ability to work cross-functionally. AI itself is not a product or a business. Rather, it is a foundational technology that can help existing lines of business and create new products or lines of business. The ability to understand and work with diverse business units or functional teams is therefore critical.
  • Strong intrapreneurial skills. AI creates opportunities to build new products, from self-driving cars to speakers you can talk to, that just a few years ago would not have been economical — or might even have been in the realm of science fiction. A leader who can manage intrapreneurial initiatives will increase your odds of successfully creating such innovations for your industry.
  • Ability to attract and retain AI talent. This talent is highly sought after. Among new college graduates, I see a clear difference in the salaries of students who specialized in AI. A good chief AI officer needs to know how to retain talent, for instance by emphasizing interesting projects and offering team members the chance to continue to build their skill set.

An effective chief AI officer should have experience managing AI teams. With AI evolving rapidly, they will need to keep up with changes, but it is less important that they be on the bleeding edge of AI (though this helps attract talent). What’s more important is that they can work cross-functionally and have the business skills to figure out how to adapt existing AI tools to your enterprise.

Source: Harvard Business Review-Hiring Your First Chief AI Officer

6 mistakes businesses make when automating their systems and processes

Major organisations are increasingly turning to Robotic Process Automation (RPA) to improve operational efficiency, productivity, quality and customer satisfaction.

ANZ, for instance, is experimenting with RPA offshore in India, the Philippines and China to manage routine tasks and allow human workers to refocus on new areas.

Similarly, Westpac is trialling artificial intelligence in its digital banking services to allow its customers to have basic questions about their finances answered by machines. According to the Accenture Technology Vision 2016, 54 per cent of Australian organisations have reported cost savings of 15 per cent or more from automating systems and processes over the past two years.

Though more and more organisations are turning to RPA, few have experience in adopting large scale, organisation-wide RPA capabilities. Because of this lack of understanding, Australian organisations attempting to implement large scale RPA transformation are vulnerable to costly mistakes.

There are six major mistakes that organisations can make when trying their hand at large scale automation programs. Organisations need to be wary of these mistakes and prepared with the right strategies to overcome them.

1. Thinking that robots are the whole solution

Few processes can be automated using an RPA tool alone. Organisations often need to consider multiple tools and techniques, such as ‘mini bots’, natural language processing, data analytics, process re-engineering, mashups and more.

One of the most important and complex areas of implementing an automation program is solution design, which involves figuring out which combination of capabilities to apply to processes in order to create optimal efficiency.

Software robots should be introduced as part of a strategy of incremental investment in automation, analytics and artificial intelligence that will underpin transformation, modernisation and innovation in operations for the next decade and beyond. Focusing only on short-term cost reductions will not result in the full benefits of automation.

2. Introducing RPA without the support of IT

Because RPA tools require no integration with legacy applications and can be installed on any desktop, there is a tendency to perceive that RPA does not need significant involvement from an organisation’s IT team. However, it is essential to ensure RPA systems are part of IT’s overall strategy in terms of security, reliability, scalability, continuity and fault tolerance. A lack of collaboration with IT can lead to costly and time-consuming internal wrangling, as there is a risk that RPA pilots might not integrate with existing business process management systems developed in house.

Organisations should create an Automation Centre of Excellence, which is responsible for automation governance, idea generation, skill development, process assessment and organisation wide support, in order to ensure the changes fit within the overall IT structure and strategy and achieve best practice.

3. Running before learning to walk

Getting started in RPA is actually relatively simple for organisations. By testing and learning about RPA with one robot in a sandbox (or testing) environment, organisations are able to gain experience and insights without the risk of negative, organisation-wide impacts. While results from pilot programs, tests and reviews demonstrate the effectiveness of RPA, organisations need to be careful not to assume they can run before they can walk. While implementing one robot is relatively easy, implementing hundreds of robots across diverse processes – and integrating automation across the organisation – is much more difficult.

To make the process as seamless as possible, organisations should first consider consulting the wider business to allow different areas across the business to present ideas to be built into the program. A strong infrastructure support network is needed, complete with a virtual environment, server hosting and management, product installation and service capabilities to support large-scale rollout.

4. Letting everyone do their own thing

Because RPA is relatively inexpensive, easy to use and applicable across various contexts, larger organisations often let various departments organise their own RPA capabilities. However, when this happens, solutions can overlap and a random mix of tools and techniques that hamper future scaling can develop. This can also lead key risk practices to be inconsistently applied – or missed entirely – including business continuity planning, formal maintenance schedules, system documentation, IT security protocols, robot inventories and measures to preserve human process knowledge.

RPA at scale is best achieved within a common environment using common security, risk and quality standards under centralised control and governance procedures.

5. Thinking robots are ‘set and forget’

Robots should be thought of as true virtual workers that require continuous management and maintenance. When rules and procedures are updated, a change strategy should be applied to ensure virtual workers are kept up to date.

Leading organisations are implementing comprehensive governance frameworks to manage organisational change, update processes and manage service demand fluctuations.

6. Leaving people strategy until later

RPA has a range of positive implications for employees. Because the kinds of tasks being automated with RPA are mundane and repetitive, teams are able to focus more of their time on tasks of higher value that offer greater satisfaction. All RPA plans need to ensure their technology and people strategies are on the same page.

Failing to do this will at best cause delays in training, redeployments and team development – and at worst, lead to unrest among employees who feel uncertain about their future.

Source: businessinsider.com.au  – 6 mistakes businesses make when automating their systems and processes


An Artificial Intelligence Definition for Beginners

All-natural and organic are familiar terms to consumers, and anything artificial has become anathema to many. Unless we’re talking artificial intelligence – or AI – then investors should be hungry to learn as much as possible about a technology that is becoming as ubiquitous as organic tofu.

The vast majority of nearly 2,000 experts polled by the Pew Research Center in 2014 said they anticipate robotics and artificial intelligence will permeate wide segments of daily life by 2025. A 2015 study covering 17 countries found that artificial intelligence and related technologies added an estimated 0.4 percentage point on average to those countries’ annual GDP growth between 1993 and 2007, accounting for just over one-tenth of those countries’ overall GDP growth during that time.

Interesting numbers – but just what is artificial intelligence? And are robots with AI going to enslave humanity?

We can’t answer the second question, but here’s a good working artificial intelligence definition from a recent U.S. government report called “Preparing for the Future of Artificial Intelligence”:

“Artificial intelligence is a computerized system that exhibits behavior that is commonly thought of as requiring intelligence.”

Or more technically speaking, AI is a “system capable of rationally solving complex problems or taking appropriate actions to achieve its goals in whatever real world circumstances it encounters.” In a way, artificial intelligence is about understanding – then recreating – the human mind. And AI is not just about designing computers that mimic how we think, learn and process information, but also how we perceive and feel about the world around us.

Understanding the world of AI only begins with a simple artificial intelligence definition. There’s a whole universe of terminology we need to explore in order to understand the domain before we can invest in it.

Source: Artificial Intelligence for Dummies

Machine learning

Machine learning is about how computers with artificial intelligence can improve over time, using different algorithms (sets of rules or processes), as they are fed more data. AI machines learn by recognizing trends in data that allow them to make decisions. For example, designing autonomous vehicles involves building machines that learn to navigate. A system may use pattern recognition algorithms to learn, for instance, to tell pedestrians from vehicles from animals, so that it knows to hit the brakes when it sees a cat or a zebra – even if it has never encountered the latter, because it has learned to identify animals accurately. Regular readers will recall a previous article we wrote on this topic titled “Deep Learning And Machine Learning Simply Explained”, which gives an example of how this works in practice.
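As a toy illustration of that pattern-recognition idea (the features, numbers and labels below are invented for the sketch; real systems learn from image data), a minimal nearest-centroid classifier might look like this:

```python
# Toy nearest-centroid classifier: a minimal sketch of the pattern-recognition
# idea, using hypothetical (legs_detected, wheels_detected) features.
from math import dist

# Training examples, grouped by label.
training = {
    "pedestrian": [(2, 0), (2, 0)],
    "vehicle":    [(0, 4), (0, 6)],
    "animal":     [(4, 0), (4, 0)],   # cats, dogs, ...
}

# "Learning" here is just averaging each class's examples into a centroid.
centroids = {
    label: tuple(sum(col) / len(col) for col in zip(*points))
    for label, points in training.items()
}

def classify(features):
    """Assign the label whose centroid is nearest to the observed features."""
    return min(centroids, key=lambda label: dist(centroids[label], features))

# A zebra was never in the training data, but its features resemble the
# animals the system has seen, so it is still classified correctly.
print(classify((4, 0)))  # -> animal
```

The point of the sketch is the generalisation step: the system never saw a zebra, but a zebra’s features fall closest to the “animal” pattern it has already learned.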

Neural networks

A type of machine learning, neural networks are superficially based on how the brain works. There are different kinds of neural networks – feedforward and recurrent are a couple of terms that you may encounter – but basically they consist of a set of nodes (or neurons) arranged in multiple layers with weighted interconnections between them. Each neuron combines a set of input values to produce an output value, which in turn is passed on to other neurons downstream.

Here is an example from the U.S. government report: in an image recognition application, a first layer of units might combine the raw data of the image to recognize simple patterns in the image; a second layer of units might combine the results of the first layer to recognize patterns-of-patterns; a third layer might combine the results of the second layer; and so on. We train neural networks by feeding them lots of delicious big data to learn from.
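That layered combine-and-pass-on behaviour can be sketched in a few lines of code (the weights and inputs here are arbitrary placeholders, not learned values):

```python
# Minimal feedforward pass: each neuron takes a weighted sum of its inputs
# plus a bias, squashes it, and passes the result to the next layer.
import math

def sigmoid(x):
    """Squash any number into the range (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights, biases):
    """One layer: one output value per neuron in the layer."""
    return [
        sigmoid(sum(w * x for w, x in zip(neuron_w, inputs)) + b)
        for neuron_w, b in zip(weights, biases)
    ]

# Two layers: 3 inputs -> 2 hidden neurons -> 1 output neuron.
hidden = layer([0.5, -1.0, 2.0],
               weights=[[0.1, 0.4, -0.2], [0.7, -0.3, 0.5]],
               biases=[0.0, 0.1])
output = layer(hidden, weights=[[1.2, -0.8]], biases=[0.05])
print(output)  # a single value between 0 and 1
```

Training is the (omitted) hard part: adjusting those weights so the output matches known answers across lots of examples.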

Deep learning

Deep learning is simply a larger neural network. Deep learning networks typically use many layers – sometimes more than 100 – and often use a large number of units at each layer, to enable the recognition of extremely complex, precise patterns in data.

Some successful applications of deep learning are computer vision and speech recognition (a building block of natural language processing).

Computer vision

Computer vision might sound like the latest 3D eyewear, but it’s actually a field of research for designing machines with the ability to process, understand and use visual data just as humans do. The eyes, in this case, usually consist of a camera. Autonomous vehicles are one obvious example. Another one that may be sitting in your living room with a teenager right now is a device called Kinect for Xbox. Kinect uses a sophisticated system for motion-capture that allows a user to interact with the computer without the use of a controller.

Natural language processing

Natural language processing, as defined by aitopics.org, “enables communication between people and computers and automatic translation to enable people to interact easily with others around the world.” Those who belong to the Apple cult are familiar with one version of this artificial intelligence – Siri. Output and input can be verbal or written. Other terms that fall into this category include natural language understanding and natural language generation.

Affective computing

For some, it’s not enough to imbue computers with artificial intelligence. An emerging field called affective computing works to give our electronics emotional intelligence, whether to understand a human user or to influence one emotionally. The best way to learn more about this exciting application of artificial intelligence is to read our article on “Affective Computing and AI Emotion Recognition” which talks about an interesting startup in this space called Affectiva.

GPUs

Many neural networks for artificial intelligence are powered by what are called graphics processing units (GPUs). NVIDIA opened GPUs to general-purpose computing with its CUDA platform in 2007, and GPUs basically help computers work much faster than those operating with a central processing unit (CPU) alone. Some companies have built their own versions of GPUs. For example, Google being Google, the technology giant has a chip it calls the Tensor Processing Unit, or TPU, that supports the software engine (TensorFlow) that drives its deep learning services, according to Wired magazine. There are also a handful of startups working on building AI chips.

Cognitive computing

Cognitive computing is one of those terms that has fairly recently entered the AI lexicon. One of the definitions for cognitive computing – a term that some attribute to IBM with the birth of Watson, the computer Jeopardy champ – that seems to be prolific around the inter-webs sums it up thus: “Cognitive computing involves self-learning systems that use data mining (i.e., big data), pattern recognition (i.e., machine learning) and natural language processing to mimic the way the human brain works.” Katherine Noyes writes in Computerworld that cognitive computing “deals with symbolic and conceptual information rather than just pure data or sensor streams, with the aim of making high-level decisions in complex situations.”

That all sounds a lot like artificial intelligence, eh? In a blog by technology market analysis company VDC Research, the difference between artificial intelligence and cognitive computing boils down to the idea that the former tells the user what course of action to take based on its analysis while the latter provides information to help the user to decide. Or perhaps, as Noyes implies in her conclusion, cognitive computing might just be clever repackaging of artificial intelligence. After all, even the best AI can’t outsmart old-fashioned human marketing.

Source: nanalyze.com-An Artificial Intelligence Definition for Beginners

The next acronym you need to know is RPA

RPA is a promising new development in business automation that offers a potential ROI of 30–200 percent—in the first year. Employees may like it too.

Robotics is beginning to have a profound effect on business. In this interview, Xavier Lhuer, an associate principal in McKinsey’s London office, speaks with Leslie Willcocks, professor of technology, work, and globalization at the London School of Economics’ department of management, about his work on robotic process automation — its impact on work, and how companies can capture its strategic and financial benefits.¹

¹ Leslie P. Willcocks and Mary C. Lacity are coauthors of Service Automation, Robots and the Future of Work, Steve Brookes Publishing, UK, 2016.

McKinsey: Can you start by defining robotic process automation (RPA)?

Leslie Willcocks: RPA takes the robot out of the human. The average knowledge worker employed on a back-office process has a lot of repetitive, routine tasks that are dreary and uninteresting. RPA is a type of software that mimics the activity of a human being in carrying out a task within a process. It can do repetitive stuff more quickly, accurately, and tirelessly than humans, freeing them to do other tasks requiring human strengths such as emotional intelligence, reasoning, judgment, and interaction with the customer.

There are four streams of RPA. The first is a highly customized software that will work only with certain types of process in, say, accounting and finance. The more general streams I describe in terms of a three-lane motorway. The slow lane is what we call screen scraping or web scraping. A user might be collecting data, synthesizing it, and putting it into some sort of document on a desktop. You automate as much of that as possible. The second lane in terms of power is a self-development kit where a template is provided and specialist programmers design the robot. That’s usually customized for a specific organization. The fast lane is enterprise/enterprise-safe software that can be scaled and is reusable.

You can multiskill each piece of software. It’s lightweight in the sense that you don’t need a lot of IT involvement to get it up and running. Business-operations people can learn quite quickly how to configure and apply the robots. It’s lightweight also in that it only addresses the presentation layer of information systems. It doesn’t have to address the business logic of the underlying system or the data-access layer.
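As a rough illustration of Willcocks’s “slow lane” – screen or web scraping – a robot’s job often amounts to lifting values out of a page that a person would otherwise copy by hand. A minimal sketch using only Python’s standard library (the HTML snippet and field names are invented for the example):

```python
# Sketch of the "slow lane" web-scraping idea: extract the values a human
# would otherwise re-key from a page into a document or spreadsheet.
from html.parser import HTMLParser

class CellCollector(HTMLParser):
    """Collect the text of every <td> cell on the page."""
    def __init__(self):
        super().__init__()
        self.in_cell = False
        self.cells = []

    def handle_starttag(self, tag, attrs):
        if tag == "td":
            self.in_cell = True

    def handle_endtag(self, tag):
        if tag == "td":
            self.in_cell = False

    def handle_data(self, data):
        if self.in_cell and data.strip():
            self.cells.append(data.strip())

page = "<table><tr><td>Invoice 1041</td><td>$250.00</td></tr></table>"
parser = CellCollector()
parser.feed(page)

# Pair up cells into (field, value) records for downstream processing.
print(dict(zip(parser.cells[::2], parser.cells[1::2])))
# -> {'Invoice 1041': '$250.00'}
```

The faster lanes differ mainly in packaging: the same extract-and-synthesise logic built as a reusable, centrally governed, enterprise-safe robot rather than a one-off desktop script.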

McKinsey: How is RPA different from cognitive intelligence?

Leslie Willcocks: RPA deals with simpler types of task. It takes away mainly physical tasks that don’t need knowledge, understanding, or insight—the tasks that can be done by codifying rules and instructing the computer or the software to act. With cognitive automation, you impinge upon the knowledge base that a human being has and on other human attributes beyond the physical ability to do something. Cognitive automation can deal with natural language, reasoning, and judgment, with establishing context, possibly with establishing the meaning of things and providing insights. So there is a big difference between the two.

In addition, whereas RPA is pretty ripe as a technology, cognitive automation isn’t. I’ve not seen a wave of powerful, cognitive automation tools appear in the market or many companies using them yet.

McKinsey: What are the business benefits of RPA?

Leslie Willcocks: The major benefit we found in the 16 case studies we undertook is a return on investment that varies between 30 and as much as 200 percent in the first year. But it’s wrong to look just at the short-term financial gains, particularly if those are simply a result of labor savings. That approach does not do justice to the power of the software, because there are multiple business benefits.

For example, companies in highly regulated industries such as insurance and banking are finding that automation is a cheap and fast way of applying superior capability to the problem of compliance. You also get better customer service because you’ve got more power in the process. A company that receives lots of customer inquiries, for example, can free staff to deal with the more complex questions.

There are benefits for employees, too. In every case we looked at, people welcomed the technology because they hated the tasks that the machines now do, and it relieved them of the rising pressure of work. Every organization we have studied reports that it is dealing with bigger workloads. I think there will be an exponential amount of work to match the exponential increase in data—50 percent more each year. There is also a massive increase in audit regulation and bureaucracy. We need automation just to relieve the stress that creates in organizations. One online retailer measures the success of RPA in terms of the number of hours given back to the business. So it’s not just the shareholders, the senior managers, and the customers who benefit, but also employees.

McKinsey: Can you describe a process where you have seen RPA in action?

Leslie Willcocks: In an insurer we studied, there was a particular process where it used to take two days to handle 500 premium advice notes. It now takes 30 minutes. It worked like this: a range of brokers would issue insurance policies for clients, and there was a central repository into which the policies had to go and a process that someone had to manage to get the premium advice note (i.e., notification/details of the business written for accounting purposes) from the broker into the repository, a system that tracks policies. A number of operations had to occur for that advice note to be fully populated with all the data, and the process operator might find that the data had not been completely filled out, perhaps because the advice note wasn’t structured very well. So the data had to be structured to standardize it so that it could be a common document like all the other advice notes. And if any data was missing, that person might have had to go back to the broker, or add things from the systems of record in the back office. Then, once the note was complete and the process operator had signed off, it went into the repository.

Now a lot of that sort of work can be automated. But some of it requires human intervention, human reasoning, judgment. So an RPA engineer would look at that type of process and say, “Which bit can we automate?” The answer is, “Not everything”—it can’t structure the data. There may at some stage be cognitive-automation technology that could structure the data, but RPA can’t, so the human being has to structure the data at the front end and create a pro forma ideal advice note. Clearly, the RPA can’t deal with exceptions either. The engineer has to intervene and look at the exceptions and create a rule to deal with them, so that gradually you educate and configure the RPA to do more and more work. Eventually it can do 90 or 95 percent of the work, and very few exceptions have to be dealt with by a human.
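The gradual “educate the robot” loop Willcocks describes can be sketched as a rules-plus-exceptions pipeline (the field names and the sample rule below are hypothetical, purely to show the shape of the idea):

```python
# Sketch of rules-based RPA with human fallback: rules handle known cases;
# anything unmatched is routed to a person, who may then add a new rule so
# the robot covers more of the work over time.

rules = []  # each rule: (predicate, fix_function)

def add_rule(predicate, fix):
    rules.append((predicate, fix))

def process(note, human_queue):
    """Apply the first matching rule, or escalate the note to a person."""
    for predicate, fix in rules:
        if predicate(note):
            return fix(note)
    human_queue.append(note)  # exception: needs human judgment
    return None

# Initial rule: fill a missing currency with the broker's default.
add_rule(lambda n: n.get("currency") is None,
         lambda n: {**n, "currency": "GBP"})

queue = []
done = process({"premium": 1200, "currency": None}, queue)
process({"premium": None, "currency": "GBP"}, queue)  # no rule matches yet

print(done)        # -> {'premium': 1200, 'currency': 'GBP'}
print(len(queue))  # -> 1 note left for a human
```

Each new rule shrinks the exception queue, which is how a deployment creeps from handling some of the advice notes toward the 90 or 95 percent Willcocks mentions.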

McKinsey: What are the most important considerations for those wishing to adopt RPA?

Leslie Willcocks: The most important consideration is strategy. You can use automation tactically for cost savings. But if you use RPA as a broader strategic tool, you get a lot more out of it. That’s number one. Number two concerns the launch. You need to get the C-suite involved and appoint a really good project champion, and you have to pick the right process. It has to be stable, mature, optimized, rules-based, repetitive, and usually high volume. Start with a controlled experiment on a visible bottleneck or pain point.

The third consideration is change management—persuading the organization to change and adopt automation. It is a key issue from the outset. And the fourth is building a mature enterprise capability for RPA. Long-term users have built centers of excellence over time, usually within business operations, and developed skills and capabilities within that center. They have people who assess the feasibility of a proposal from a business unit. They have people who configure a robot, install it, and develop it, and controllers who switch it on and off and plan its work and how it fits with human work. They have some sort of continuous improvement capability and relationships with IT, governance, and security. Organizations signing up to RPA now should probably think about building a center of excellence immediately.

McKinsey: How do companies choose whether to implement an IT solution or RPA? And how do the two departments work together?

Leslie Willcocks: When organizations consider proof-of-concept for RPA, they look at the business case and compare it to an IT solution. Often that’s pretty unflattering for IT. In one organization we looked at, the return on investment for RPA was about 200 percent in the first year, and they could implement it within three months. The IT solution did the same thing but with a three-year payback period, and it was going to take nine months to implement.

In addition, many business operations find going through IT frustrating because it’s so busy. Often the business wants something relatively small, but the IT function has bigger fish to fry, and the business has to go to the back of the queue. So if an RPA tool is usable, cheap, and doesn’t require much IT skill to implement, it’s a no-brainer for the average operator in a business unit. The reason IT gets worried is that they know the disruptive, potentially disastrous effects of people playing around with IT in the organization without understanding how it’s going to upset infrastructure, governance, security, and all the important touchpoints that IT is held responsible for. So it’s not surprising to find IT functions in denial about RPA and what it can do. It’s crucial, therefore, that IT is brought on board early.

McKinsey: What do you think will be the long-term impact of robotic process automation?

Leslie Willcocks: In the longer term, RPA means people will have more interesting work. For 130 years we’ve been making jobs uninteresting and deskilled. The evidence is that it’s not whole jobs that will be lost but parts of jobs, and you can reassemble work into different types of job. It will be disruptive, but organizations should be able to absorb that level of change. The relationship between technology and people has to change in the future for the better, and I think RPA is one of the great tools to enable that change.

Source: McKinsey & Co.: The next acronym you need to know is RPA

How to get IT on board with RPA

Q&A with Symphony Co-founder and Chief Operations Officer David Brain

What is the initial feedback from IT? Where does the resistance lie?

As you might expect, resistance generally comes from engaging any team late in the process or without sufficient information or sponsorship. Generally, we have found IT teams to be hugely supportive of RPA when it is deployed within IT governance and addresses a challenge that IT is not already addressing through other programs of work.

Information security is generally the IT team we spend most of our time with. RPA provides a new model for understanding and constructing appropriate controls. The thought of a robot performing transactions on an unlocked machine accessing sensitive data has obvious risks. Working with the information security team to propose, review and implement controls to manage these risks is essential to a successful deployment.

How should business leaders message the value of RPA to IT?

RPA has numerous benefits to a business, such as improved quality and consistency, reduced transaction times, business continuity and agility – not to mention the obvious cost savings. That said, it is not the only tool in the tool box and IT teams may have different approaches they are already pursuing to solve the same challenge the business is trying to solve with RPA. Understanding the IT roadmap prior to embarking on any implementation is therefore imperative to avoid conflicting agendas.

It is also essential that a business sponsor with sufficient seniority is identified to allocate project budget and prioritize RPA among other initiatives. We’ve found the key to obtaining sponsorship is to perform a ‘Future of Work Assessment,’ or FOWA, across the area of the business. The FOWA evaluates potential solutions, proposes a Target Operating Model (TOM) and compiles a business case to articulate the value of RPA and the cashable and non-cashable benefits it will bring. Once the investment and benefit are quantified, it’s easy to justify RPA and resources in supporting its implementation.

Why is it important for IT to be involved in implementation?

While RPA is often managed by operations teams to provide a virtual workforce, it is still an IT implementation, and therefore has to be deployed and managed within an IT governance framework so that the risks associated with automation can be effectively managed.

We have seen some organizations take a different route in implementing RPA without the involvement of IT. In every single instance, this has caused additional delay and/or risk to the business, and in the vast majority of cases, has resulted in a lag in adoption or the RPA initiative being shut down altogether.

IT teams tend to be the budget holders for the infrastructure that is put in place (new robots), and are responsible for infrastructure and system availability, uptime and recovery. IT teams also tend to hold the licensing and roadmaps for the target applications RPA is automating (Office, SAP, etc.).

Regardless of which function manages the implementation, for RPA to be successful both operations and IT have to be bought into the initiative and actively involved.

When should IT get involved?

In our experience, it is best to involve IT from the outset. This doesn’t mean heavy involvement – just the socializing of the investigation, which may or may not lead to a business case for RPA. The earlier the engagement, the less risk to the project as you may discover that the application you wish to automate is due for replacement in the following year or that there is already another RPA pilot in your organization that you could leverage.

Once the idea has been socialized, and ideally once the sponsorship is in place, the next step is to perform a FOWA assessment. We have found that a strong business case is far more convincing to leadership than a compromised proof of concept that proves the software can work in your business as it does now in countless other organizations. Once the FOWA is complete, IT needs to be involved in validating the security model and providing the governance within which the deployment can take place. Often the IT team will be required to provide access to non-production environments that mirror the live systems for the purpose of developing and testing RPA.

Once the implementation is complete, IT involvement is more important than ever, as they need to manage any upgrade or change to the systems being automated to ensure continuity of automation.

Where has IT seen value? How have their jobs been made easier?

RPA is a great means to address the projects that IT cannot prioritize. We’ve worked with several CIOs and CTOs who have told us that they now look at RPA in their triaging of investment requests. If the opportunity is not significant enough to make it onto the roadmap, then the IT teams look to see if RPA can provide a lower-cost, pragmatic solution. This is a great dynamic to create, as our clients that deploy RPA do not want to compete with initiatives to replace systems or upgrade their functionality; rather, they want to find a more effective solution than dealing with these problems manually, as they do today.

With many projects delivering a typical payback period of less than one year, there is often a case for implementing RPA even when there are longer term strategic solutions for addressing the same challenge.

Another way RPA can be used by IT teams is as a means of prototyping automations which can then be transferred into the underlying applications when stabilized.

How do you see RPA impacting IT day to day in the next five years?

I think RPA will become a more valuable tool to IT departments, giving operations teams a means to deliver automated solutions to problems that are not addressed through the IT roadmap.

We do not see it as a means to reduce IT spend or channel away the limited funds IT teams usually have available to them. Instead, we see it as a way to extend the number of opportunities they can support by enabling operations with tools that have been approved by IT and are managed effectively and securely.

There is also the potential to grow hybrid IT / business roles by bringing the functions closer together.

Source: Blueprism-How to get IT on board with RPA

Robotic Process Automation: Leveraging Robots In The Enterprise

While companies have traditionally operated under the assumption that it makes sense to improve processes before automating them, the advent of robotic process automation (RPA) technology may be turning that conventional wisdom on its head.

RPA, or software “robots,” uses business rules logic to automate repetitive, time-consuming tasks previously conducted by people. The technology reduces cost and enhances speed, accuracy, availability and auditability, and is being applied to tasks such as data entry, claims processing and access management in industries as diverse as insurance, retail and financial services. It’s relatively quick to implement – RPA systems can be deployed in a matter of weeks. They also can cost as little as $10,000 a year to implement and maintain, and can replace three to 10 human administrators. Considering, moreover, that robots can work 24/7 without vacation and sick days, the efficiency gains are significant.
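The “business rules logic” described above can be made concrete with a small sketch. This is a minimal, hypothetical illustration of rules-driven task handling; the invoice fields, thresholds, and outcome labels are all invented for the example and do not come from any particular RPA product.

```python
# Hypothetical sketch of RPA-style rules logic: an invoice-entry task
# expressed as explicit, clear-cut business rules. Field names and the
# $1,000 threshold are illustrative only.

def process_invoice(invoice):
    """Apply rules-based logic to decide how to handle an invoice record."""
    # Rule 1: records missing required fields are exceptions for a human.
    required = ("vendor", "amount", "po_number")
    if any(field not in invoice for field in required):
        return "escalate-to-human"
    # Rule 2: auto-approve small, PO-matched invoices.
    if invoice["amount"] <= 1000 and invoice["po_number"].startswith("PO-"):
        return "auto-approve"
    # Rule 3: everything else goes to a review queue.
    return "route-to-review"

invoices = [
    {"vendor": "Acme", "amount": 250, "po_number": "PO-1001"},
    {"vendor": "Beta", "amount": 50000, "po_number": "PO-1002"},
    {"vendor": "Gamma", "amount": 75},  # missing po_number
]
decisions = [process_invoice(i) for i in invoices]
```

Because every decision is an explicit rule, the bot’s behavior is auditable, which is one reason auditability shows up in the benefits listed above.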

Since the cost and effort to implement RPA is relatively low, and the efficiencies are immediate and significant, the traditional order of optimizing processes prior to automating is becoming less relevant. While automating broken processes is never a good idea, spending too much time over-thinking a working process up front can be counter-productive, given the quick results RPA can deliver.

Moreover, data analysis and incremental process improvement can always be applied after deploying an RPA solution. By alleviating immediate “symptoms” and reducing customer pain points, businesses can lay the groundwork for end-to-end process redesign and implement a smoother long-term transformation to a digital operating model that delivers even greater ROI and business value.

One reason RPA systems are so easy to implement is that they don’t require much coding and instead are taught specific, repeatable tasks. While traditional automation software interacts with other applications on multiple levels, RPA is implemented at the user interface level, requiring minimal IT involvement. The technology is also more configurable, and systems can easily be updated and modified, which is critical in industries with continually changing regulations and requirements.

RPA best practices

So how can businesses determine an effective process optimization and automation approach for their organizations? Here are some best practices:

  • Collaborate early and often. Engagement between IT and business stakeholders provides insight into current and future business requirements and helps determine the types of technology solutions and deployment strategies that will best support business requirements.
  • Identify the processes that can be automated through RPA. Look for routine, repeatable and rules-based processes where automation can drive efficiency gains. RPA systems will need to offload exceptions to rules-based operations to human operators. In addition, by providing immediate efficiency and accuracy results, RPA can avoid or postpone the need to engage in expensive and time-consuming re-engineering projects.
  • Work closely with human workers to collect key information about how the task is typically completed so it can be “taught” to the software robots.
  • Be prepared to redeploy and potentially retrain workers in the new environment. In many cases, RPA will allow people to spend more time using their existing skills and experience, thereby adding value to the business. In other instances, acquisition of new skills may be needed.
  • Analyze data to improve. RPA tools provide clean and accurate data that can be applied to drive higher rates of automation. By assessing operations to identify root causes of exceptions, businesses can in many cases eliminate those causes to enhance efficiency. Analysis is also essential to keeping up with changing business environments. As systems change and industry regulations evolve, RPA systems must adapt to new requirements.
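The exception-offloading pattern in the practices above can be sketched in a few lines: the bot handles the cases its rules cover and queues everything else for a human operator. All names here are hypothetical; a real deployment would pull tasks from a work queue rather than a list.

```python
# Illustrative sketch of offloading exceptions to human operators:
# tasks with a matching rule are automated, the rest are queued.

def run_bot(tasks, rules):
    automated, human_queue = [], []
    for task in tasks:
        handler = rules.get(task["type"])
        if handler is None:
            human_queue.append(task)          # exception: no rule covers it
        else:
            automated.append(handler(task))   # in-rule: automate it
    return automated, human_queue

rules = {"password_reset": lambda t: f"reset:{t['user']}"}
tasks = [
    {"type": "password_reset", "user": "alice"},
    {"type": "refund_dispute", "user": "bob"},  # no rule -> human
]
done, queue = run_bot(tasks, rules)
```

Analyzing what lands in the human queue is exactly the “identify root causes of exceptions” step: frequent exception types are candidates for new rules or process rationalization.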

Source: networkcomputing.com – Robotic Process Automation: Leveraging Robots In The Enterprise

AI, Cognitive Computing To Disrupt Enterprises

IDC is forecasting big growth for cognitive computing and AI in the next 5 years. This infographic shows the growth, industries, and use-cases for these technologies.

What used to be science fiction is now an accepted path for IT. Multiple IT analyst firms are predicting that artificial intelligence technologies will become important components in future IT organizations. Indeed, at the Gartner Symposium, AI was simply another accepted factor in almost every system that will be created in the next decade.

That predicted acceptance is also reflected in a new market forecast from research firm IDC. IDC’s Worldwide Semiannual Cognitive/Artificial Intelligence Systems Spending Guide offers a full market forecast for the period 2016 to 2020, with expected compound annual growth of 55.1%.

“Software developers and end user organizations have already begun the process of embedding and deploying cognitive/artificial intelligence into almost every kind of enterprise application or process,” said David Schubmehl, research director for cognitive systems and content analytics at IDC in a prepared statement.

“Recent announcements by several large technology vendors and the booming venture capital market for AI startups illustrate the need for organizations to be planning and undertaking strategies that incorporate these wide-ranging technologies,” he added.

But it’s not just about startups. Enterprises will play a big part, too, or risk being subsumed by digital disruptors, according to IDC.

“Identifying, understanding, and acting on the use cases, technologies and growth opportunities for cognitive/AI systems will be a differentiating factor for most enterprises and the digital disruption caused by these technologies will be significant.”

IDC says that enterprises across a broad range of industries will be able to deploy cognitive systems and AI by applying algorithms and rules-based logic to data flows.

Source: informationweek.com-AI, Cognitive Computing To Disrupt Enterprises

9 enterprise tech trends for 2017 and beyond

One word sums up this year in enterprise tech: clarity.

We learned that the emerging ecosystem of containers, microservices, cloud scalability, devops, application monitoring, and streaming analytics is not a fad. It’s the future, already powering Silicon Valley’s and Seattle’s most advanced tech companies. Throw in machine learning and IoT and you have a comprehensive framework for the next phase of enterprise IT, with continuous improvement as its founding principle.

At the same time, we became more aware of the widening gulf between this new world and most existing enterprise IT operations. That’s why the hoary phrase “digital transformation” refuses to die — the leap from legacy to modernity requires profound, multiphase, across-the-board change.

But what about next year? Well, when you know where today’s enterprise tech stands, it’s easier to look ahead. In that spirit, I offer my nine enterprise tech trends for the coming year and beyond (with no repeats from previous years!). Let’s start with the most obvious:

1. Advanced collaboration

After years of “business social networking” failures, Slack and its ballooning ecosystem have established chat-based collaboration as a first-order business application. Competitors abound, of course, from HipChat to Flock, and everyone wonders whether Microsoft Teams will be able to beat Slack at its own game — particularly since Teams comes free with Office 365.

But if you ask me, it’s odd that simple chat-based collaboration has taken off, because the chat room metaphor has been around since IRC. Developers have engaged in a deeper form of collaboration from the time Linus Torvalds introduced Git as a way to organize revisions to the Linux kernel, with GitHub, Bitbucket, and GitLab offering today’s most popular Git implementations. Jon Udell and others have suggested that GitHub could provide the basis of all sorts of collaborations beyond code.

More exciting, though, is the notion that machine learning might enable collaborative platforms to gather people, resources, and data in an organization to form workgroups on the fly, which is an idea that Zorawar Biri Singh put forth in a recent InfoWorld interview. Silo-busting collaboration is the key to digital transformation, so machine intelligence to enable that seems like a prime opportunity in this space for years to come. Flock already shows flashes of it with its “magic search” feature.

2. Deep learning

AI and its subset machine learning owe much of their resurgence to the cloud’s ability to serve up gobs of compute, memory, and data, on which algorithms can gorge themselves and produce useful results quickly. That goes double for deep learning, a compute-intensive variety of machine learning that employs multiple layers of neural networks operating on the same problem at the same time for tasks ranging from image recognition to fraud detection to predictive analytics.
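The “multiple layers of neural networks” idea can be illustrated with a toy forward pass: each layer computes weighted sums of its inputs and applies a nonlinearity, and layers stack. The weights below are fixed, made-up values; real deep learning learns them from data via backpropagation, typically on the GPU horsepower the article mentions.

```python
# A minimal sketch of stacked neural-network layers. Toy weights only;
# this shows the layered structure, not a trained model.

def relu(x):
    """A common nonlinearity: pass positives through, clamp negatives to 0."""
    return max(0.0, x)

def layer(inputs, weights):
    """One dense layer: each output is a ReLU over a weighted sum of inputs."""
    return [relu(sum(w * x for w, x in zip(row, inputs))) for row in weights]

# Two stacked layers: 3 inputs -> 2 hidden units -> 1 output.
hidden_w = [[0.5, -0.2, 0.1],
            [0.3, 0.8, -0.5]]
output_w = [[1.0, 1.0]]

x = [1.0, 2.0, 3.0]
hidden = layer(x, hidden_w)
y = layer(hidden, output_w)   # y[0] is approximately 0.8 for these weights
```

Deep networks simply stack many such layers, which is why the training workload is so compute-hungry and maps well onto GPUs.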

All the major clouds give customers the ability to crank up the horsepower required (including GPU processing) for deep learning, with Google’s TensorFlow, available both as a service on Google Cloud Platform and as an open source project, in the lead. Over time, IBM’s Watson has gained deep learning abilities as well, now accessible to developers in the Bluemix cloud. New offerings from Microsoft Azure (the Microsoft Cognitive Toolkit) and AWS (the MXNet framework plus the new Rekognition, Polly, and Lex services) help make this the hottest space around.

3. The incredible SQL comeback

For a few years it seemed like all we did was talk about NoSQL databases like MongoDB or Cassandra. The flexible data modeling and scale-out advantages of these sizzling new solutions were stunning. But guess what? SQL has learned to scale out, too — that is, with products such as ClustrixDB, DeepSQL, MemSQL, and VoltDB, you can simply add commodity nodes rather than bulking up a database server. Plus, such cloud database-as-a-service offerings as Amazon Aurora and Google Cloud SQL make the scale-out problem moot.

At the same time, NoSQL databases are bending over backward to offer SQL interoperability. The fact is, if you have a lot of data then you want to be able to analyze it, and the popular analytics tools (not to mention their users) still demand SQL. NoSQL in its crazy varieties still offers tremendous potential, but SQL shows no sign of fading. Everyone predicts some grand unification of SQL and NoSQL. No one knows what practical form that will take.
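The “analytics tools still demand SQL” point is easy to demonstrate: even a few rows of data become instantly queryable with standard aggregates. This sketch uses Python’s built-in sqlite3 module; the table and column names are invented for the example.

```python
# A small illustration of SQL's staying power for analytics: the kind of
# GROUP BY aggregate that BI tools generate constantly. sqlite3 ships
# with Python; table and data are made up for this example.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?)",
    [("east", 100.0), ("east", 150.0), ("west", 200.0)],
)
totals = conn.execute(
    "SELECT region, SUM(amount) FROM orders GROUP BY region ORDER BY region"
).fetchall()
conn.close()
```

Scale-out SQL engines and NoSQL stores adding SQL layers both converge on this same interface, because it is what the analysts and their tools already speak.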

4. The triumph of Kubernetes

We know what the future of applications looks like: microservices running in Docker containers on scalable cloud infrastructure. But when you break monolithic applications into microservices, you have a problem: You need to manage and orchestrate them. A few solutions have emerged to meet the challenge, including Apache Mesos, Docker Swarm, and Google Kubernetes.

It’s pretty clear at this point that Kubernetes has won, at least for now. Why shouldn’t it? After all, no company has had more experience running containers in production at scale than Google, using an internal system known as Borg, from which Kubernetes was derived. All the major clouds support Kubernetes, with CoreOS and Red Hat leading Kubernetes providers for both on-premises and cloud implementations. Add to those Heptio, a new startup formed by ex-Googler Craig McLuckie, co-founder of the Kubernetes project.

Kubernetes’ triumph may be short-lived, though, in part because we’re at such an early stage with containers. At the latest AWS re:Invent conference, for example, CTO Werner Vogels announced a slew of new container management and orchestration tools. Google will stick with Kubernetes for obvious reasons, but the cloud is where the action is and this contest is far from over. It’s just beginning.

5. Serverless computing

When you’re a developer, worrying about infrastructure, even the virtual kind, is a drag when you just want to concentrate on application logic and UI design. Serverless computing platforms take the industry’s long history of piling abstraction on top of abstraction to the next level so that such lowly concerns become a thing of the past. The serverless model also encourages developers to grab functions from a library and string them together, minimizing the amount of original code that needs to be written.

AWS Lambda is the best-known example of serverless computing, but other clouds have followed suit. Microsoft has Azure Functions and Google offers Cloud Functions. The startup Iron.io, which develops software for microservices workload management, also provides a serverless computing platform.
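A serverless function in this model is just stateless application logic behind a platform-defined entry point. The sketch below uses the AWS Lambda-style Python signature of a handler taking an event and a context; the event shape and field names are hypothetical, and no cloud infrastructure is involved here, which is also how you would unit-test such a function locally.

```python
# Sketch of the serverless model: pure application logic in a small
# stateless function. The platform owns servers, scaling, and routing;
# the developer writes only the handler. Event fields are invented.
import json

def handler(event, context):
    """Lambda-style entry point: take an event, return a response dict."""
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"greeting": f"hello, {name}"}),
    }

# Locally, you exercise the function directly, as the platform would.
response = handler({"name": "dev"}, None)
```

Because the function holds no state between invocations, the platform can run zero or thousands of copies of it, which is where the scaling abstraction comes from.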

6. Custom cloud processors

Did you know that Amazon has a subsidiary that designs its own ARM processors for servers? Better known is Google’s foray into co-processing — the Tensor Processing Unit specifically designed to accelerate machine learning. Plus, Microsoft has added FPGAs to its data centers to optimize particular applications such as machine learning and plans to offer tools to enable Azure customers to program FPGAs as well. At Amazon re:Invent last week, AWS introduced its own FPGAs in the form of a new F1 instance type for EC2.

7. IoT interoperability

The established messaging protocol for IoT has long been MQTT, whose compact and efficient nature lends itself to low-power, relatively dumb devices. In May 2016, Google’s Nest subsidiary open sourced Thread, a mesh networking protocol that enables devices with more processing power to maintain peer-to-peer connections without relying on a hub.

The most interesting developments have emerged at the application layer. In October, the AllSeen Alliance merged with the Open Connectivity Foundation, which effectively unified the IoT software frameworks AllJoyn and IoTivity into a single open source project. More dramatically, at the Amazon re:Invent conference last week, AWS CEO Andy Jassy announced AWS Greengrass, a software core (and SDK) designed to run on IoT devices, enabling those devices to run AWS Lambda functions and connect securely to the AWS IoT platform. All the major public clouds now have IoT platforms, which are crucial to IoT progress, so you can expect Microsoft Azure, Google Cloud Platform, and IBM Cloud to deliver their own Greengrass-like offerings in 2017.

8. Hardware as a service

This one is kind of a sleeper. IDC predicts that in 2017, 10 percent of enterprises will begin exploring PC-as-a-service agreements with vendors. Reportedly, HP and Lenovo already have such rental programs in place. On the server side, Dell, HP, and Lenovo will begin offering Microsoft-managed servers preloaded with Azure Stack on a subscription basis. Oracle has an on-premises version of Oracle Cloud, dubbed Oracle Cloud Machine, which it offers via a “cloud-oriented subscription model.” Is this the end of capital investment in IT as we know it?

9. Python, Python, Python

OK, this one is a little silly. But each year, the ranks of Python programmers grow, with Python occupying the No. 4 position among all languages in the Tiobe Index. Python’s clean, English-like syntax has helped make it the most recommended first programming language.

People use it for everything, but in particular it has gained traction among data scientists. Moreover, Python has become the preferred language of devops engineers who write code to automate operations, and Python frameworks and IDEs continue to blossom. How devoted is the Python crowd? Here’s a clue: Python 3.6 was released on Christmas Day.
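Python 3.6 itself shows why devops engineers favor the language: its headline feature, f-strings, makes the small automation scripts they write terse and readable. The host names and threshold below are invented for the illustration.

```python
# Python 3.6's f-strings (PEP 498) in a tiny ops-automation flavored
# snippet. Host names and the threshold are made up for the example.

hosts = ["web-1", "web-2", "db-1"]
threshold = 80

# f-strings interpolate expressions directly into templated commands
# and log lines, with no .format() or % boilerplate.
commands = [f"ssh {h} 'df -h /'" for h in hosts]
alert = f"{len(hosts)} hosts checked, alert threshold {threshold}%"
```
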

Source: Infoworld  – 9 enterprise tech trends for 2017 and beyond

Robotic process automation is killer app for cognitive computing

Cognitive capabilities could supercharge RPA efforts automating tasks that once required the judgment and perception of humans. But implementing these systems is not as simple as it sounds.

Robotic Process Automation (RPA) is an increasingly hot topic in the digital enterprise. Implementing software robots to perform routine business processes and eliminate inefficiencies is an attractive proposition for IT and business leaders. And providers of traditional IT and business process outsourcing facing potential loss of business to bots are themselves investing in these automation capabilities as well.

While the basic benefits of RPA are relatively straightforward, these emerging business process automation tools could also serve as an entry point for incorporating cognitive computing capabilities into the enterprise, says David Schatsky, managing director with Deloitte.

By injecting RPA with cognitive computing power, companies can supercharge their automation efforts, says Schatsky, who analyzes the implications of emerging technology and other business trends. By combining RPA with cognitive technologies such as machine learning, speech recognition, and natural language processing, companies can automate higher-order tasks that in the past required the perceptual and judgment capabilities of humans.

Some leading RPA vendors are already combining forces with cognitive computing vendors. Blue Prism, for example, is working with IBM’s Watson team to bring cognitive capabilities to clients. And a recent Forrester report on RPA best practices advised companies to design their software robot systems to integrate with cognitive platforms.

CIO.com talked to Schatsky about RPA adoption rates, the budding relationship between software robots and cognitive systems, the likelihood that the combination of the two will replace traditional outsourcing, and the three steps companies should take before implementing RPA on a wider scale.

CIO.com: Where are most companies in terms of their adoption of RPA?

David Schatsky, managing director, Deloitte: RPA is a new topic to some and a well understood one to others. More and more IT leaders have heard of the term and at least know what it is in principle. Adoption thus far is pretty modest. RPA has been more widely adopted in Europe and Asia than it has been in the U.S. And even those companies in the U.S. that have adopted RPA are typically just piloting it.

CIO.com: Why did RPA catch on more rapidly in Asia and Europe?

Schatsky: That’s due to the level of business process outsourcing that has taken place there. Asia is the home of business process outsourcing, and European companies have been eager to cut the costs of onshore operations using RPA. Also, one of the leading RPA companies, Blue Prism, is based in Europe.

CIO.com: Why are you focusing on the potential combination of RPA and cognitive computing systems in particular?

Schatsky: I think it will help to broaden the application of RPA and increase the value it delivers to the companies that adopt it. Cognitive technology is progressing rapidly, but many companies don’t have a clear path to taking advantage of these technologies. They’re not sure how and where to put them to use.

RPA is a platform that can provide clear use cases for applying cognitive capabilities. Companies can install it to automate processes and it provides a framework or platform to integrate with cognitive systems to take automation to the next level. It’s almost the ‘killer app’ for cognitive computing.

RPA is very useful technology, but it’s not terribly intelligent technology. It only performs tasks with clear-cut rules. You can’t substitute RPA for human judgment. It can’t perform tasks that require perceptual skills, like locating a price or purchase order number in a document, and it can’t identify a happy customer versus an unhappy one. Cognitive takes the sphere of automation that RPA can handle and broadens it.

CIO.com: Where will be the most beneficial use cases for using RPA in conjunction with cognitive technology?

Schatsky: A lot of them are in the front office: classifying customer issues and routing them to the right person, deciding what issues need to be escalated, extracting information from written communication.
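Classifying and routing customer issues, the first use case Schatsky names, can be sketched simply. A real cognitive system would use trained natural language models for this; the keyword version below is a deliberately naive stand-in that just shows where classification plugs into an RPA routing flow. All queue names and keywords are hypothetical.

```python
# Naive keyword classifier standing in for a cognitive NLP model:
# classify an incoming customer message and route it to a queue.
# Unclassifiable messages escalate to a human, per RPA practice.

ROUTES = {
    "billing": "billing-team",
    "refund": "billing-team",
    "outage": "support-escalation",
    "password": "it-helpdesk",
}

def route_message(text):
    lowered = text.lower()
    for keyword, queue in ROUTES.items():
        if keyword in lowered:
            return queue
    return "human-triage"  # unclassified issues go to a person

routed = [route_message(m) for m in [
    "I was double charged on my billing statement",
    "Total outage in our region since 9am",
    "What are your office hours?",
]]
```

Swapping the keyword lookup for a trained classifier is exactly the “inject cognitive into RPA” step: the routing scaffolding stays the same while the perception improves.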

CIO.com: Who tends to lead these RPA efforts—an IT leader or a business process owner?

Schatsky: It’s mixed. Sometimes it’s led by the process owner in the business. They learn about RPA and identify an opportunity to deploy it and improve efficiency. In other cases, IT has been leading the effort. It’s indicative of the broader trend of tech-centric decisions being made increasingly in the business and not just in IT.

However, it’s a classic example of technology that benefits from the involvement of both IT and the business. The business is accountable for the business process operation, but IT is responsible for things like security, compliance and governance. If the business goes out and deploys this stuff without IT’s involvement, those issues can get overlooked.

CIO.com: Is there much adoption of RPA to automate processes within the IT function?

Schatsky: There’s plenty of potential there in terms of automating routine IT tasks such as password resets. But RPA has come onto the scene focused mainly on business processes rather than IT.

CIO.com: Will RPA, particularly infused with cognitive capabilities, displace the outsourcing of business and IT processes?

Schatsky: It certainly raises the question of whether it’s possible to do away with outsourcing by automating functions. We’ve seen some studies that suggest an onshore process automated with RPA could be less costly than an offshore process performed by humans. But the jury is still out regarding what the impact is going to be on outsourcing.

We do see outsourcing providers themselves investing in RPA in order to capture the cost and business benefits to remain competitive and forestall the adoption of alternatives that don’t include them. In some cases, the business process outsourcing model will likely evolve.

CIO.com: You’ve noted that a proof-of-concept RPA project may take as little as two weeks and a pilot could be up and running within a month or two. Sounds simple. Is it?

Schatsky: Some of these proofs of concept are intended to do just that, prove the concept. They explain what RPA is and show it working in practice. You have to go a fair way up the road to get to something that’s production-ready and delivering value.

CIO.com: What are the most important steps companies need to take if they’re thinking about implementing RPA on a wider scale?

Schatsky: There are three issues that companies need to think about.

One is the level of standardization of the business process you want to automate. You have to understand the processes you’re seeking to automate well enough to determine whether they’re automatable as is or whether it makes sense to redesign them a bit.

Sometimes business processes performed by humans, who are adaptable and flexible, can be fairly unstandardized and full of exceptions. That’s not a problem for people, but it is a problem for an automated tool that works in a more repetitive way. Such processes can be hard to automate as is and will need to be rationalized in order to take advantage of RPA.

The other thing is scalability. Once someone has proved the value of RPA in one particular business process or piece of a business process, interest in expanding its use grows. But companies need to do more planning when they expand the use of RPA: they have to think about issues like how many software bots they need and how they will manage secure access to the systems the bots interact with. That requires more thought.

Finally, you need to understand the business purpose — what you’re trying to accomplish with RPA. Often the adoption of RPA is driven by cost cutting, but it’s worth thinking about the broader business goals. For instance, some companies are looking to improve service to customers by being more responsive or fulfilling customer requests faster.

Some may be interested in scalability and the ability to deal with spikes in demand, sudden changes in workflow, or the need to comply with new regulations. Companies should take a step back to understand what they’re trying to do with RPA because that will dictate the approach they take. In fact, that’s the biggest consideration when an enterprise decides to go whole hog with RPA.

CIO.com: What are the biggest mistakes companies make related to RPA?

Schatsky: Overestimating what is possible, and thinking that because RPA just replaces what people are doing right now, they can deploy it right away. It’s not a macro that automates one person’s keystrokes. It’s much more sophisticated than automating one person’s work. The concept of RPA is simple, but the reality of implementing it requires more thought.

Source: Cio.com-Robotic process automation is killer app for cognitive computing