Using cognitive tech to connect customers to business operations

Creating an engaging customer experience is more readily achieved by embedding increasingly sophisticated digital and cognitive technologies into the very fiber of an organization’s processes, from its front office right through to its back office.

Successful organizations are both strategic and nimble, leveraging the power of real-time data to reduce inefficiencies and enhance their effectiveness. Agile businesses predict their customers’ needs before their competitors do. More importantly, they can act on those predictions, which is essential for getting ahead in today’s global digital economy. Investment in cognitive technologies (those that mimic human thinking and can learn) is required to build intelligent operations across the enterprise. An intelligent enterprise uses data and smarter technology to inform and implement better business decisions.

In a study conducted in partnership with IPsoft, HfS Research interviewed 100 C-Suite executives to understand their views, expectations, and strategies, along with their investment plans for cognitive technologies. This report discusses opportunities and challenges that business leaders see for moving their organizations toward being truly intelligent—knowing their customers, using technology most effectively, and infusing cognitive technology into the fiber of their business operations.

Table of Contents

  • Smart investments in cognitive tech will help solve business problems and collapse internal barriers
  • C-Suite executives seek to align operations with business outcomes
  • Cognitive Agents are at the Forefront of Investments
  • Cognitive Tech is Driving Intelligent, Self-Learning Business Operations
  • Intelligent operations of the future: cognitive is a lever for the OneOffice core
  • OneOffice by Definition
  • Impediments to OneOffice: The Challenges of Aligning the Enterprise
  • How to track the impact?
  • Using cognitive glue to construct OneOffice


Source: HfS – Using cognitive tech to connect customers to business operations


New Independent Study Reveals Why Not All Software Robots Are Created Equally

By Leslie Willcocks

Robotic Process Automation (RPA) continues to be a growing success story. In 2016, RPA alone experienced a 68 percent growth rate in the global market, with 2017 maintaining this momentum. Some reports have even predicted a US$ 8.75 billion market by 2024. However, merely investing in RPA is not an instant recipe for growth.

In “Service Automation Robots and The Future of Work” (2016), my colleague Mary Lacity and I highlighted successful RPA deployments and how organizations were achieving triple wins for their shareholders, customers, and employees alike. We continued tracking these developments in 2017 and also noticed something different — many less successful journeys. In practice, it appears that automation success is far from guaranteed. Wider reports provide anecdotal evidence of between 30 and 50 percent of initial projects stalling, failing to scale, being abandoned or moving to other solutions. Our most recent research has examined in detail both successful and more challenged automation deployments. It turns out that service automation — like all organizational initiatives that try to scale — can be fraught with risk. We’re seeing 41 specific risks that need to be managed in eight areas: strategy, sourcing, tool selection, stakeholder buy-in, project execution, change management, business maturity and an automation center of excellence.

One of the key risk areas is tool/platform selection. Because of the hype and confusion in the RPA marketplace, clients risk choosing the wrong tool(s), too many tools, or bad tool(s). By early 2018, over 45 tools or platforms were being sold as “RPA” and over 120 tools were being sold as some form of cognitive automation. Because the space is relatively new to many clients, it’s difficult to assess the actual capabilities and suitability of these tools. Clients must be wary of hype and “RPA washing”.

In our new report on Benchmarking the Client Experience, we extensively polled Blue Prism clients on the results they’ve been getting by integrating RPA into existing business processes. In order to get the most valuable feedback, we set the bar high in requesting client assessments of the Blue Prism RPA platform on the following criteria: scalability, adaptability, security, service quality, employee satisfaction, ease of learning, deployment speed and overall satisfaction. From our qualitative research into process automation, these emerged as the most critical characteristics and requirements for a successful enterprise-grade RPA implementation.

The overall level of satisfaction with the Blue Prism platform was extremely high in our survey. Respondents reported a 96 percent overall satisfaction rate, with 79 percent of respondents ranking Blue Prism’s platform a six or seven on a seven-point Likert scale. Based on our 25-year research history into process improvement initiatives (BPM, shared services, outsourcing, six sigma, etc.), these are extremely high RPA satisfaction levels. Our research into IT and business services outsourcing finds only 20 percent of vendors getting “world class” performance, 25 percent getting good performance, 40 percent “doing OK”, while 15 percent experience poor outcomes. The record on IT projects also continues to frustrate. The most recent (2017) Standish Group CHAOS report found only a third of IT projects were successfully completed on time and on budget over the past year – the worst failure rate the Standish Group has recorded.

What, then, accounts for the impressive 96 percent overall satisfaction rate with Blue Prism?

Our observation is that not all RPA offerings are the same. The capability of RPA software depends greatly on the origins and orientations of the supplier. Many RPA tools designed as desktop assistants experience problems with scaling, security and integration with other information systems. Other RPA vendors offer RPA which is effectively a disguised form of what we have described as a “software-development kit,” needing a lot more IT development by the in-house team or the RPA vendor than first imagined, and incurring unanticipated expense, time and resources. True enterprise RPA, however, is designed from the start with a platform approach, to fit with wider enterprise systems. This might make it more expensive initially, and require more attention in the first few months of trial, but true enterprise RPA platforms have proven to be an investment in success later in the deployment cycle, when compared to other RPA software that tends to run into real problems.

Our qualitative research also suggests that some RPA tools are not easily scalable, especially those based on a recording capability, or requiring a lot of IT development. This occurs because some RPA tools are not designed as configurable service delivery platforms that can be integrated with other existing systems. These also need a lot more management involvement than clients and their vendors often expect. Many clients, moreover, do not put in place the necessary IT, project and program governance (rules and constitution, who does what, roles and responsibilities), and often do not use built-in tools that contain technical governance.

This, of course, is not the whole story. An RPA and cognitive skills shortage is already upon us. This means that retained capability and in-house teams are sometimes not strong enough – a situation not helped by sometimes skeptical senior management under-resourcing automation initiatives and not taking a strategic approach. Consultants are also hit by skills shortages and cannot always provide the support necessary — this is also true of business services outsourcing providers. We are also finding that clients often do not give enough attention to stakeholder buy-in and change management. Given these emerging challenges, the Blue Prism client satisfaction levels are very notable indeed.

To download the report, click here.

Leslie Willcocks is Professor in the Department of Management at the London School of Economics, and co-author, with Mary Lacity, John Hindle and Shaji Khan, of Robotic Process Automation: Benchmarking the Client Experience (Knowledge Capital Partners, London).

Source: blog.blueprism.com – New Independent Study Reveals Why Not All Software Robots Are Created Equally

The Future of Human Work Is Imagination, Creativity, and Strategy

It seems beyond debate: Technology is going to replace jobs, or, more precisely, the people holding those jobs. Few industries, if any, will be untouched.

Knowledge workers will not escape. Recently, the CEO of Deutsche Bank predicted that half of its 97,000 employees could be replaced by robots. One survey revealed that “39% of jobs in the legal sector could be automated in the next 10 years. Separate research has concluded that accountants have a 95% chance of losing their jobs to automation in the future.”

And for those in manufacturing or production companies, the future may arrive even sooner. That same report mentioned the advent of “robotic bricklayers.” Machine learning algorithms are also predicted to replace people responsible for “optical part sorting, automated quality control, failure detection, and improved productivity and efficiency.” Quite simply, machines are better at the job: The National Institute of Standards predicts that “machine learning can improve production capacity by up to 20%” and reduce raw materials waste by 4%.

It is easy to find reports that predict the loss of between 5 and 10 million jobs by 2020. Recently, space and automotive titan Elon Musk said the machine-over-mankind threat was humanity’s “biggest existential threat.” Perhaps that is too dire a reading of the future, but what is important for corporate leaders right now is to avoid the catastrophic mistake of ignoring how people will be affected. Here are four ways to think about the people left behind after the trucks bring in all the new technology.

The Wizard of Oz Is the Wrong Model

In Oz, the wizard is shown to run the kingdom through some complex machine hidden behind a curtain. Many executives may think themselves the wizard; enthralled by the idea that AI technology will allow them to shed millions of dollars in labor costs, they could come to believe that the best company is the one with the fewest people aside from the CEO.

Yet the CEO and founder of Fetch Robotics, Melonee Wise, cautions against that way of thinking: “For every robot we put in the world, you have to have someone maintaining it or servicing it or taking care of it.” The point of technology, she argues, is to boost productivity, not cut the workforce.

Humans Are Strategic; Machines Are Tactical

McKinsey has been studying what kind of work is most adaptable to automation. Their findings so far seem to conclude that the more technical the work, the more technology can accomplish it. In other words, machines skew toward tactical applications.

On the other hand, work that requires a high degree of imagination, creative analysis, and strategic thinking is harder to automate. As McKinsey put it in a recent report: “The hardest activities to automate with currently available technologies are those that involve managing and developing people (9 percent automation potential) or that apply expertise to decision making, planning, or creative work (18 percent).” Computers are great at optimizing, but not so great at goal-setting. Or even using common sense.

Integrating New Technology Is About Emotions

When technology comes in, and some workers go away, there is a residual fear among those still in place at the company. It’s only natural for them to ask, “Am I next? How many more days will I be employed here?” Venture capitalist Bruce Gibney explains it this way: “Jobs may not seem like ‘existential’ problems, but they are: When people cannot support themselves with work at all — let alone with work they find meaningful — they clamor for sharp changes. Not every revolution is a good revolution, as Europe has discovered several times. Jobs provide both material comfort and psychological gratification, and when these goods disappear, people understandably become very upset.”

The wise corporate leader will realize that post-technology trauma falls along two lines: (1) how to integrate the new technology into the work flow, and (2) how to cope with feelings that the new technology is somehow “the enemy.” Without dealing with both, even the most automated workplace could easily have undercurrents of anxiety, if not anger.

Rethink What Your Workforce Can Do

Technology will replace some work, but it doesn’t have to replace the people who have done that work. Economist James Bessen notes, “The problem is people are losing jobs and we’re not doing a good job of getting them the skills and knowledge they need to work for the new jobs.”

For example, a study in Australia found a silver lining in the automation of bank tellers’ work: “While ATMs took over a lot of the tasks these tellers were doing, it gave existing workers the opportunity to upskill and sell a wider range of financial services.”

Moreover, the report found that there is a growing range of new job opportunities in the fields of big data analysis, decision support analysts, remote-control vehicle operators, customer experience experts, personalized preventative health helpers, and online chaperones (“managing online risks such as identity theft, reputational damage, social media bullying and harassment, and internet fraud”). Such jobs may not be in your current industrial domain. But there may be other ways for you to view this moment as the perfect time to rethink the shape and character of your workforce. Such new thinking will generate a whole new human resource development agenda, one quite probably emphasizing those innate human capacities that can provide a renewed strategy for success that is both technological and human.

As Wise, the roboticist, emphasized, the technology itself is just a tool, one that leaders can use how they see fit. We can choose to use AI and other emerging technologies to replace human work, or we can choose to use them to augment it. “Your computer doesn’t unemploy you, your robot doesn’t unemploy you,” she said. “The companies that have those technologies make the social policies and set those social policies that change the workforce.”

Source: HBR – The Future of Human Work Is Imagination, Creativity, and Strategy

These 100 Companies Are Leading the Way in A.I.

Whether you fear it or embrace it, the A.I. revolution is coming—and it promises to have an enormous impact on the world economy. PwC estimates that artificial intelligence could add $15.7 trillion to global GDP by 2030. That’s a gargantuan opportunity. To identify which private companies are set to make the most of it, research firm CB Insights recently released its 2018 “A.I. 100,” a list of the most promising A.I. startups globally (listed in the table below). They were chosen, from a pool of over 1,000 candidates, by CB Insights’ Mosaic algorithm, based on factors like investor quality and momentum. China’s Bytedance leads in funding with $3.1 billion, but 76 of the 100 startups are U.S.-based.

COMPANY | COUNTRY | SECTOR | FUNDING ($ MIL.)
AEYE | U.S. | AUTO TECH | 16.27
Affirm | U.S. | FINTECH & INSURANCE | 525
Afiniti | U.S. | MARKETING, SALES, CRM | 80
AiCure | U.S. | HEALTHCARE | 30.74
Algolia | U.S. | ENTERPRISE AI | 74.02
Amplero | U.S. | MARKETING, SALES, CRM | 25.5
Anki | U.S. | ROBOTICS | 182
Appier | Taiwan | COMMERCE | 81.5
Applitools | U.S. | SOFTWARE DEVELOPMENT & DEBUGGING | 10.5
Appthority | U.S. | CYBERSECURITY | 23.25
Aquifi | U.S. | COMMERCE | 32.76
Arterys | U.S. | HEALTHCARE | 42
babylon | U.K. | HEALTHCARE | 85
Benson Hill Biosystems | U.S. | AGRICULTURE | 34.21
Brain corporation | U.S. | ROBOTICS | 114
Bytedance | China | NEWS & MEDIA | 3,100
C3 IoT | U.S. | IOT | 130.94
Cambricon | China | HARDWARE FOR AI | 101.4
Cape Analytics | U.S. | FINTECH & INSURANCE | 14
Captricity | U.S. | CROSS-INDUSTRY | 49.02
Casetext | U.S. | LEGAL TECH | 24.28
Cerebras Systems | U.S. | HARDWARE FOR AI | 85
CloudMinds | U.S. | ROBOTICS | 130
CognitiveScale | U.S. | CROSS-INDUSTRY | 40
Conversica | U.S. | MARKETING, SALES, CRM | 56
CrowdFlower | U.S. | ENTERPRISE AI | 55.95
CrowdStrike | U.S. | CYBERSECURITY | 281
Cybereason | U.S. | CYBERSECURITY | 188.62
Darktrace | U.K. | CYBERSECURITY | 182.3
DataRobot | U.S. | ENTERPRISE AI | 124.61
Deep Sentinel | U.S. | PHYSICAL SECURITY | 7.4
Descartes Labs | U.S. | GEOSPATIAL ANALYTICS | 38.46
Drive.ai | U.S. | AUTO TECH | 77
Dynamic Yield | U.S. | COMMERCE | 45.25
Element AI | Canada | ENTERPRISE AI | 102
Endgame | U.S. | CYBERSECURITY | 96.05
Face++ | China | CROSS-INDUSTRY | 608
Flatiron Health | U.S. | HEALTHCARE | 313
FLYR | U.S. | TRAVEL | 14.25
Foghorn Systems | U.S. | IOT | 47.5
Freenome | U.S. | HEALTHCARE | 79
Gong | U.S. | MARKETING, SALES, CRM | 26
Graphcore | U.S. | HARDWARE FOR AI | 110
InsideSales.com | U.S. | MARKETING, SALES, CRM | 264.3
Insight Engines | U.S. | CROSS-INDUSTRY | 15.8
Insilico Medicine | U.S. | HEALTHCARE | 8.26
Invoca | U.S. | MARKETING, SALES, CRM | 60.75
Kindred Systems | Canada | ROBOTICS | 43
KYNDI | U.S. | CROSS-INDUSTRY | 9.6
LeapMind | Japan | ENTERPRISE AI | 13.4
Liulishuo | China | EDUCATION | 100
MAANA | U.S. | IOT | 40.14
Merlon Intelligence | U.S. | RISK & REGULATORY COMPLIANCE | 7.65
Mighty AI | U.S. | AUTO TECH | 27.25
Mobalytics | U.S. | E-SPORTS | 2.65
Mobvoi | China | CROSS-INDUSTRY | 257
MOOGsoft | U.S. | IT & NETWORKS | 52.93
Mya Systems | U.S. | HR TECH | 29.5
Mythic | U.S. | HARDWARE FOR AI | 19.42
Narrative Science | U.S. | CROSS-INDUSTRY | 47.87
NAUTO | U.S. | AUTO TECH | 182.6
Neurala | U.S. | ROBOTICS | 15.95
Numerai | U.S. | FINTECH & INSURANCE | 7.5
Obsidian Security | U.S. | CYBERSECURITY | 9.5
Onfido | U.K. | RISK & REGULATORY COMPLIANCE | 59.53
Orbital Insight | U.S. | GEOSPATIAL ANALYTICS | 78.7
OrCam Technologies | Israel | IOT | 47
Osmo | U.S. | EDUCATION | 38.5
PerimeterX | U.S. | CYBERSECURITY | 35
Petuum | U.S. | ENTERPRISE AI | 108
Preferred Networks | Japan | IOT | 112.8
Primer | U.S. | CROSS-INDUSTRY | 14.7
Prospera | Israel | AGRICULTURE | 22
Recursion Pharmaceuticals | U.S. | HEALTHCARE | 118.62
Reflektion | U.S. | COMMERCE | 45.91
SenseTime | China | CROSS-INDUSTRY | 637
Shape Security | U.S. | CYBERSECURITY | 106
Sher.pa | Spain | PERSONAL ASSISTANTS | 8.2
Shield AI | U.S. | PHYSICAL SECURITY | 13.15
Shift Technology | France | CYBERSECURITY | 39.72
Socure | U.S. | RISK & REGULATORY COMPLIANCE | 33.25
SoundHound | U.S. | NEWS & MEDIA | 114.1
SparkCognition | U.S. | CYBERSECURITY | 43.88
Sportlogiq | Canada | SPORTS | 7.2
Tamr | U.S. | ENTERPRISE AI | 41.2
Tempus Labs | U.S. | HEALTHCARE | 70
Text IQ | U.S. | RISK & REGULATORY COMPLIANCE | 3.34
Textio | U.S. | HR TECH | 29.5
Tractable | U.K. | CROSS-INDUSTRY | 9.82
Trifacta | U.S. | ENTERPRISE AI | 76.3
Twiggle | Israel | COMMERCE | 35
UBTECH Robotics | China | ROBOTICS | 521.39
Upstart | U.S. | FINTECH & INSURANCE | 584.73
Versive | U.S. | CYBERSECURITY | 57
Vicarious Systems | U.S. | ROBOTICS | 118.03
Workey | Israel | HR TECH | 9.6
WorkFusion | U.S. | RISK & REGULATORY COMPLIANCE | 71.3
ZestFinance | U.S. | FINTECH & INSURANCE | 268
Zoox | U.S. | AUTO TECH | 290
Zymergen | U.S. | LIFE SCIENCE | 174

Source: Fortune – These 100 Companies Are Leading the Way in A.I.

9 AI trends to look for in 2018 RPA 2.0 initiatives

WorkFusion’s president Alex Lyashok originally wrote this post for his own feed, but we thought it was so fascinating that we decided to publish it here.

We all know that Artificial Intelligence is developing at breakneck speed. But what you may not know is how these AI technology advancements will benefit your Intelligent Automation or RPA 2.0 programs, making them more powerful and manageable.

Here are nine trending solutions for common automation issues that will help your digital workforce improve and evolve in 2018.

Theme: AI in a Dynamic Enterprise

1. Continual Learning

Most machine learning (ML) models are trained very infrequently (maaaybe hourly). At the same time, they are making decisions on inputs very frequently (in seconds or less). This can cause models to make incorrect decisions when they operate in a dynamic, quickly shifting environment. If a model was trained on a set of documents from one market and documents from another market start coming in rapidly, this can cause problems.

Look for:

  • Online Learning, where models are trained in real time as new data arrives (a minimal sketch follows this list).
  • Ensembles that combine frequently trained models with infrequently trained ones (think long-term/short-term memory).
  • Reinforcement Learning that focuses on learning the policies (not the inputs) and updating the policies based on feedback.
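
To make the Online Learning bullet concrete, here is a minimal sketch using scikit-learn’s SGDClassifier and partial_fit (the library choice and the toy, drifting data stream are illustrative assumptions, not anything the post prescribes): the model is updated example by example as labeled data arrives, so it can track a shifting environment.

```python
# Minimal online-learning sketch: the model is updated incrementally as new
# labeled examples stream in, instead of waiting for an infrequent batch retrain.
import numpy as np
from sklearn.linear_model import SGDClassifier

classes = np.array([0, 1])           # the full set of labels, declared up front
model = SGDClassifier()              # linear model fit by stochastic gradient descent

def on_new_example(features, label):
    """Called whenever a freshly labeled example arrives from the stream."""
    model.partial_fit(features.reshape(1, -1), [label], classes=classes)

# Simulated stream whose distribution drifts over time; the model keeps adapting.
rng = np.random.default_rng(0)
for step in range(1000):
    x = rng.normal(loc=step / 1000.0, size=4)   # inputs shift slowly as the "market" changes
    y = int(x.sum() > 2.0)
    on_new_example(x, y)

print(model.predict(rng.normal(loc=1.0, size=(1, 4))))
```

An ensemble in the sense of the second bullet would pair an incrementally updated model like this with a periodically retrained batch model and blend their outputs.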

2. Robust Decisions

Making important decisions with machine learning means being able to deal with noisy or even adversarial inputs. For example, decisions made on inputs that the model has never seen before can be hard to evaluate.

Look for:

  • Data Provenance to track and understand where exactly the training inputs came from.
  • Confidence Management to develop more nuanced understanding of the ML model output (confidence intervals) and manage and detect unforeseen inputs.
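
One way to read the Confidence Management bullet in code is the following hedged sketch (scikit-learn, the toy training data, and the 0.85 threshold are illustrative assumptions): any prediction whose top-class probability falls below the threshold is routed to a human reviewer instead of being processed straight through.

```python
# Minimal confidence-management sketch: low-confidence predictions are routed
# to a human-in-the-loop queue rather than auto-processed.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy training data standing in for features extracted from documents.
rng = np.random.default_rng(1)
X_train = rng.normal(size=(200, 5))
y_train = (X_train[:, 0] + X_train[:, 1] > 0).astype(int)
model = LogisticRegression().fit(X_train, y_train)

CONFIDENCE_THRESHOLD = 0.85   # tuned per process; purely illustrative here

def decide(features):
    proba = model.predict_proba(features.reshape(1, -1))[0]
    confidence = float(proba.max())
    if confidence >= CONFIDENCE_THRESHOLD:
        return {"route": "straight-through", "label": int(proba.argmax()), "confidence": confidence}
    # Unfamiliar or ambiguous input: hand off to a human reviewer.
    return {"route": "human-review", "label": None, "confidence": confidence}

print(decide(rng.normal(size=5)))
```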

3. Explainable Decisions

Decisions that are made in regulated or sensitive industries, such as banking or healthcare, need to be explained to people in the context of the regulatory or legal framework in which they were made. This means that you need to establish a set of preventive controls to support audits for an automated process.

Look for:

  • Interpretation to be able to interactively review a model in terms and concepts that a subject-matter expert (SME) is familiar with.
  • “What-if” Simulation to understand what other inputs could have led to the same decision.
  • Record and Replay to be able to trace, repeat and analyze computations that led to the decision.
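
A minimal sketch of the Record and Replay idea (the in-memory log, the hash chaining, and the model registry keyed by version are all assumptions made for illustration): every automated decision is stored with its inputs, model version, and output, so an auditor can trace it or re-run it later.

```python
# Minimal record-and-replay sketch: decisions are logged with enough context to
# reproduce them, and the log is hash-chained so after-the-fact edits are detectable.
import hashlib
import json
import time

DECISION_LOG = []   # in practice, an append-only, tamper-evident store

def record_decision(model_version, inputs, output):
    entry = {"timestamp": time.time(), "model_version": model_version,
             "inputs": inputs, "output": output}
    previous = DECISION_LOG[-1]["hash"] if DECISION_LOG else ""
    entry["hash"] = hashlib.sha256(
        (previous + json.dumps(entry, sort_keys=True)).encode()).hexdigest()
    DECISION_LOG.append(entry)
    return entry

def replay(entry, model_registry):
    """Re-run the recorded inputs through the recorded model version."""
    return model_registry[entry["model_version"]](entry["inputs"])

# Usage with a stand-in "model": approve small claims, escalate the rest.
def claims_v1(inputs):
    return "approve" if inputs["amount"] < 1000 else "escalate"

registry = {"claims-v1": claims_v1}
e = record_decision("claims-v1", {"amount": 420}, claims_v1({"amount": 420}))
assert replay(e, registry) == e["output"]   # the decision is reproducible
```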

Theme: Secure AI

4. Secure Enclaves

Securing AI means being able to run models in an isolated (software or even hardware) environment.

Look for:

  • Enclaves to run code in a secure environment that protects data, privacy, and decision integrity.
  • Secure Modularization to split AI code into parts where the smaller, sensitive part can be run in an enclave and the larger unprotected part can be run in an untrusted environment.

5. Adversarial Learning

The adaptive nature of ML systems makes them vulnerable to new types of attacks. Broadly, they can be classified into evasion and data poisoning attacks. Evasion attacks target the inference stage: the attacker crafts data that a person would process correctly but that an ML model processes incorrectly. For example, two documents may look the same to a person, but get classified differently by a computer. Data poisoning attacks target the training stage, where data is injected into the training set to cause the ML model to behave incorrectly in the future.

Look for the Data Provenance and Explainable Decisions capabilities described above. They can mitigate these risks and can be combined effectively with human-in-the-loop (HITL) capabilities to create reliable preventive controls in ML-based automation.
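
As a rough illustration of such a preventive control against data poisoning (the source names and record fields below are hypothetical), a training pipeline might accept only records whose provenance is on a trusted allow-list and that have passed human verification, quarantining everything else instead of silently training on it.

```python
# Minimal poisoning-control sketch: only records with trusted provenance and a
# human-verified flag are admitted into the training set; the rest are quarantined.
TRUSTED_SOURCES = {"core-banking-export", "verified-hitl-queue"}   # hypothetical names

def filter_training_batch(batch):
    accepted, quarantined = [], []
    for record in batch:
        if record.get("source") in TRUSTED_SOURCES and record.get("human_verified", False):
            accepted.append(record)
        else:
            quarantined.append(record)   # held for review, never silently trained on
    return accepted, quarantined

batch = [
    {"source": "core-banking-export", "human_verified": True,  "features": [1, 2], "label": 0},
    {"source": "unknown-upload",      "human_verified": False, "features": [9, 9], "label": 1},
]
ok, held = filter_training_batch(batch)
print(len(ok), "accepted,", len(held), "quarantined")
```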

6. Shared Learning on Confidential Data

When you conduct learning across data that belongs to multiple organizations, you will derive much better results. However, when you train models on sensitive data, preventing leaks of confidential information can be a challenge. Hence, exploring secure multi-party learning is increasingly becoming a priority for many organizations.

Look for:

  • Differential Privacy to mix noise into data to securely obscure sensitive inputs (a minimal sketch follows this list).
  • Multi-party Computation (MPC) to allow the parties to compute jointly over their private inputs without revealing those inputs to one another.
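
To illustrate the Differential Privacy bullet, here is a minimal sketch of the Laplace mechanism using NumPy (an assumed implementation choice, not a production-grade DP library): each organization releases only a noisy count, so no individual record can be inferred from the shared statistic.

```python
# Minimal differential-privacy sketch: Laplace noise is added to an aggregate
# statistic so that the presence or absence of any single record is obscured.
import numpy as np

def laplace_count(values, epsilon=0.5, sensitivity=1.0):
    """Noisy count; sensitivity is 1 because adding or removing one record changes the count by 1."""
    noise = np.random.default_rng().laplace(loc=0.0, scale=sensitivity / epsilon)
    return len(values) + noise

# Two organizations share only noisy counts, never the underlying records.
org_a_flagged = ["txn-17", "txn-42", "txn-99"]
org_b_flagged = ["txn-03", "txn-42"]
shared_estimate = laplace_count(org_a_flagged) + laplace_count(org_b_flagged)
print(round(shared_estimate, 2))
```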

Theme: AI-specific Enterprise Architecture

7. Composable AI systems

Modularity is essential to scaling systems. Breaking down complex software into modules helps reduce cost and improve manageability at scale.

Look for:

  • Model Composition to be able to assemble smaller models into ensembles where each individual model can be added or removed to improve overall output and manageability (see the sketch after this list).
  • Action Composition to combine model outputs into options, thereby shifting decision making to a higher level of abstraction. For example, options to decline or accept a claim vs. data on specific document fields.
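
A minimal sketch of Model Composition (the scikit-learn member models and plain probability averaging are illustrative assumptions): members can be added to or removed from the ensemble at runtime without retraining the rest.

```python
# Minimal model-composition sketch: small, independently trained models are
# combined into an ensemble whose membership can change at runtime.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

class Ensemble:
    def __init__(self):
        self.members = {}                              # name -> fitted model with predict_proba

    def add(self, name, model):
        self.members[name] = model

    def remove(self, name):
        self.members.pop(name, None)

    def predict_proba(self, X):
        # Plain averaging of member probabilities; weighted blending is an easy extension.
        return np.mean([m.predict_proba(X) for m in self.members.values()], axis=0)

rng = np.random.default_rng(2)
X = rng.normal(size=(300, 4))
y = (X[:, 0] > 0).astype(int)

ens = Ensemble()
ens.add("linear", LogisticRegression().fit(X, y))
ens.add("tree", DecisionTreeClassifier(max_depth=3).fit(X, y))
print(ens.predict_proba(X[:2]))

ens.remove("tree")                                     # drop one member; the rest keep serving
print(ens.predict_proba(X[:2]))
```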

8. Cloud-edge systems

Cloud systems are used extensively today to run and manage AI systems. At the same time, most enterprises operate systems in their data centers or on the edge of the cloud. Systems that combine the cost benefits of the cloud with the control advantages of edge systems can bring the best of both worlds together.

Look for Model Composition and Action Composition described above to be able to take advantage of secure, fast learning on the edge with the power of centralized cloud systems.
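
A hedged sketch of that cloud-edge split (both “models” below are stand-ins, not real services or APIs): a small edge model answers quickly and locally, and only low-confidence cases are escalated to a heavier, centrally trained cloud model.

```python
# Minimal cloud-edge routing sketch: decide locally when confident, escalate
# to the (stand-in) cloud model only when the edge model is unsure.
import numpy as np

def edge_model(features):
    """Tiny local model: cheap and fast, with limited accuracy."""
    score = 1 / (1 + np.exp(-features.sum()))          # toy logistic score
    return ("accept" if score > 0.5 else "reject"), max(score, 1 - score)

def cloud_model(features):
    """Stand-in for a call to a heavier, centrally trained model."""
    return "accept" if features.mean() > 0 else "reject"

def classify(features, edge_confidence_floor=0.8):
    label, confidence = edge_model(features)
    if confidence >= edge_confidence_floor:
        return label, "edge"
    return cloud_model(features), "cloud"              # escalate only when unsure

print(classify(np.array([2.0, 1.5, 0.3])))     # confident: handled on the edge
print(classify(np.array([0.1, -0.05, 0.02])))  # borderline: escalated to the cloud
```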

9. Domain-specific hardware

Many enterprises increasingly adopt hardware architectures that increase performance, reduce cost, or improve security of AI systems.

Look for:

  • GPU and FPGA Support to reduce cost and improve scalability in data centers or public/private clouds
  • Google Tensor Processing Unit (TPU) Compatibility to accelerate certain compatible AI payloads
  • Enclave Support to take advantage of secure computation such as Intel’s SGX and ARM’s TrustZone

Visit workfusion.com to learn more about AI, Cognitive Automation, and how to expand your RPA program to take advantage of all our Intelligent Automation techniques.

Source: blog.workfusion.com – 9 AI trends to look for in 2018 RPA 2.0 initiatives

Making AI and robotics work for your business

The use of robotics and artificial intelligence in businesses is on the rise, but there are still significant challenges for organisations adopting the technologies. Two executives from global IT consulting and outsourcing group Capgemini spoke to IoT Hub about how best to meet these challenges and why the returns make the effort worthwhile.

“The amount of data that’s available now in places like social media and enterprises means it is becoming more efficient for machines to make decisions rather than humans, taking the human bias out of it and making decisions objectively,” said Saugata Ghosh, senior manager of digital services at Capgemini.

This trend, together with the maturity of robotic process automation (RPA) technologies over the last three to five years, has contributed to the growth in adoption of robotics and AI, Ghosh said.

“If you look at the spectrum of robotic automation, at one end you have simple rules-based automation where the economics of those are such that they are quite easy to implement and have strong returns on investment,” he explained.

“At the other end, towards the cognitive and artificial intelligence side, you’re also seeing accelerated maturity, with things such as driverless vehicles making it possible to automate tasks that we wouldn’t have previously thought of automating a few years ago.”

Ghosh is also observing convergence between both ends of the automation spectrum.

“In real life, many processes have an element of both. For example, in the case of email feedback analysis, the interpretation of the body of the email is within the realms of cognitive or pattern recognition, while the processing of the email once it has been analysed could be rules-based,” he said.

Ghosh has noticed a shift in the motivations for deploying RPA technologies, from cost-saving to improved accuracy and customer experience.

“Initially, everybody was after headcount reduction. Most people are telling us now that their focus is on reducing errors, improving compliance, or improving the customer experience,” he explained.

“We’re certainly seeing maturity in this area and the focus has shifted from the tactical to more strategic and sustainable objectives.”

Hilda Carmichael, director of digital program delivery for digital services at Capgemini, added: “The ambition particularly around more traditional finance, HR and IT functions is to have better business partnering capabilities by eliminating more of those manual tasks, freeing capacity to properly engage with customers instead of being distracted with repeated, administrative tasks.”

How to meet the challenges

Despite the benefits that automation technologies can provide, Ghosh said that there are a number of challenges that businesses face when adopting AI.

“All organisations recognise the potential for RPA to significantly transform their business, but they have questions as to how they get started,” he said.

“These organisations may also have a good sense of what it takes financially to do a pilot or a proof-of-concept, but are aware that just because the entry barrier to adoption is low, they must also prevent uncontrolled proliferation of these technologies across the enterprise.”

“It all comes down to scope,” Carmichael added. “Companies need to pick a candidate set of processes by which they have a span of control that they can deploy initially.”

“Processes that cut across multiple functions within an organisation will require a greater set of engaged stakeholders.

“So start small, start with a set of high-volume, manual, repetitive processes that’s within your span of control, and go away and prototype that.”

Carmichael also said that business units should work together to build the business case and realise the potential of RPA.

“It doesn’t matter who leads the charge, whether it’s the business or IT, but there has to be a partnering component to it,” she explained.

“The business needs to determine and help codify the business rules, and IT needs to determine the infrastructure and scalability of the solution.”

Source: iothub.com.au  – Making AI and robotics work for your business


Automation Will Make Us Rethink What a “Job” Really Is

As businesses enter the uncharted waters of machine intelligence – where machines learn by experience and improve their performance over time – researchers are trying to predict its impact on jobs and work. Optimists suggest that by taking over cognitive but labor-intensive chores the intelligent machines will free human workers to do more “creative” tasks, and that by working side by side with us they will boost our imagination to achieve more. Experience with Robotic Process Automation (RPA) seems to confirm this prediction. Pessimists predict huge levels of unemployment, as nearly half of existing jobs appear prone to automation and, therefore, extinction.

More nuanced analysis points to a less dystopian future where a great number of activities within jobs will be undertaken by intelligent systems rather than humans. This view, in effect, calls for a re-examination of what a “job” actually is: how it is structured, and how it should be reconfigured, or perhaps redefined, in the age of intelligent automation. How should companies rethink the value of a job, in terms of increased performance through machine intelligence? What set of skills should companies invest in? Which jobs should remain within the company, and which should be accessed via talent platforms, or perhaps shared with peers, or even competitors?

Conventional wisdom has long suggested that, as job performance increases, so does the value or return to the company. This myth of a consistent relationship between job performance and value across all jobs within a company has since been debunked, most recently in Transformative HR, which illustrates the variance in roles where great talent makes a difference and where good enough suffices.

However, with technology, digitalization, and artificial intelligence accelerating changes to jobs, the relationships between performance and value become even more complex and yield potentially exponential opportunities for value creation. Return on Improved Performance (ROIP) – similar to Return on Investment – measures the value of improved performance in a given position (i.e., not just the value of average performance in a job). Let’s look at an example that most of us directly interact with for hundreds, if not thousands, of hours annually: the airline industry.

Pilots are a critical pool of talent for an airline; there must be a sufficient supply with appropriate skills to operate the airline. But this is a segment where “good enough” suffices. As the chart below illustrates, beyond a certain standard, having higher performing airline pilots will not yield additional business value (defined as customer loyalty) to the organization, although having even one pilot “below minimum standards” can have a significantly negative impact on the performance and reputation of the organization as well as compromise the integrity of the business model.

This is the reason airlines invest in elongated career paths for pilots. For instance, it takes 20 years to move from the “right seat” of an Embraer 175 doing a short haul flight to the “left seat” of a Boeing 747 going across the Pacific Ocean. Significant investment also takes place in cockpit technology as well as in training and development (e.g., minimum simulator hours required) among other things, in order to take the left side of the curve out of play. This is a classic proficiency role: though the skills are high level, beyond a certain standard, higher performance won’t yield more value.

Nevertheless, as airlines increasingly pursue competitive advantage by differentiating the customer experience – particularly for premium passengers – flight attendants become a pivotal workforce segment. Often they are the only “face of the organization” to most passengers – which suggests that higher levels of performance, particularly when it comes to delivering an experience that truly delights a passenger, can yield significantly greater customer loyalty, as the work of the flight attendant steadily shifts from the transactional to the relational. This is a classic pivotal role: higher performance yields more value.
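
To make the contrast between the two role types concrete, here is a small illustrative sketch (the functions and numbers are arbitrary stand-ins, not values from the article or its chart): value in a proficiency role collapses below the minimum standard and flattens above it, while value in a pivotal role keeps rising with performance.

```python
# Illustrative sketch of the two performance-to-value curves described above.
def proficiency_value(performance, minimum=0.6):
    """e.g., pilots: falling below standard is very costly; exceeding it adds little."""
    return -100.0 if performance < minimum else 10.0

def pivotal_value(performance):
    """e.g., flight attendants: each increment of performance adds customer-loyalty value."""
    return 40.0 * performance

for p in (0.5, 0.7, 0.9):
    print(f"performance={p:.1f}  proficiency={proficiency_value(p):>6.1f}  pivotal={pivotal_value(p):>5.1f}")
```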

So armed with this insight about the differential relationship between employee performance and value to the company, how can we apply the rapid advances in artificial intelligence to further enhance the impact of these roles? Indeed, how can we ensure that task automation does not merely reduce labor cost but also delivers increased performance for the human workers? To answer these questions, we need to begin disaggregating work and understanding how automation and AI can differentially handle various aspects of work.

Let’s go back to our flight attendants and think specifically about how cognitive automation might enable them to take the work of delivering the optimal customer experience to a whole new level – in this case with augmented reality powered by cognitive computing to deliver an unprecedented level of insight. If we deconstruct the job into the three categories defined in the chart above, we would ensure that the legally required and airline minimum elements of work were highly standardized and performed to the minimum acceptable standard, while empowering and enabling the flight attendant to unleash all his or her discretionary effort on a highly personalized level of service. Imagine flight attendants wearing a version of Google Glass, through which they can access customer data and personalized preferences. No nut dishes served to Charles in 3C given his allergy, but black coffee and a predisposition for onboard duty free. Early seating meal for Sarah in 2A so she can get to sleep quickly. And so on.
In a scenario such as this, machine intelligence overlaid on augmented reality further increases the steepness of the curve for the discretionary portion of this pivotal role’s work. For the flight attendant using this technology a unit improvement in individual performance provides even greater increases in organizational value, as premium passengers are treated with a level of personalized service where it matters that would be otherwise unfathomable.

Conversely, consider how robotic process automation can change the left side of the curve for a pilot (i.e., the legally required element). Instead of investing the aforementioned resources to minimize the possibility of human error, AI (in this case, robot pilots or autonomous airplanes) can replace the routine and repetitive elements of the pilot role, flattening that portion of the curve. The emphasis could shift to having highly skilled pilots act as overseers from a distance for multiple flights, intervening when an unforeseen event moves the work beyond the routine. This would allow airlines to leverage the experience and insight of skilled pilots in a much more efficient way. The net effect is both a reduction in labor cost (as fewer pilots are required) and a reduction in the risk of an accident.

And yet… as we have seen countless times, the very idea of a robot making a mistake is terrifying to humanity. Consider the difference in the public reaction to the recent news of the Tesla autopilot accident versus the statistics about the countless lives lost every day due to human drivers’ texting while driving. It doesn’t matter that we know that IBM Watson’s success rate in diagnosing lung cancer is close to 90% while our human oncologists average 50%. We trust humans and expect robots to be infallible. Will we as a society be willing to allow the robots to learn? How long will it take the flying public to get comfortable putting their lives in the hands of a robot?

Given these challenges, here are five steps we recommend companies take to rethink work in light of automation and AI:

  • Gain clarity on pivotal vs. proficiency roles in your organization
  • Understand the specific nature of the relationship between performance and value for your pivotal and proficiency roles
  • Disaggregate the different parts of the curve shown in the chart above and determine how AI can play a role
  • Determine the specific activities that these different forms of AI might transform, and the relevant cost, capability, and risk implications
  • Plan for how stakeholders can be engaged in understanding and embracing the potential changes to work, recognizing the aforementioned biases and resistance factors

Recognizing how technology and AI can transform the performance and value equation provides a significant competitive advantage. Successful leaders will translate the evolving pivot points in their business models into specific implications for work, looking beyond jobs, and understand the transformative role AI can play in redefining the performance curve for the work of the future.

Source: Harvard Business Review – Automation Will Make Us Rethink What a “Job” Really Is

The current state of machine intelligence 2.0

Autonomous systems and focused startups among major changes seen in past year.

A year ago today, I published my original attempt at mapping the machine intelligence ecosystem. So much has happened since. I spent the last 12 months geeking out on every company and nibble of information I could find, chatting with hundreds of academics, entrepreneurs, and investors about machine intelligence. This year, given the explosion of activity, my focus is on highlighting areas of innovation, rather than on trying to be comprehensive. Figure 1 showcases the new landscape of machine intelligence as we enter 2016:

[Figure 1: The machine intelligence landscape 2.0]

Despite the noisy hype, which sometimes distracts, machine intelligence is already being used in several valuable ways. Machine intelligence already helps us get the important business information we need more quickly, monitors critical systems, feeds our population more efficiently, reduces the cost of health care, detects disease earlier, and so on.


The two biggest changes I’ve noted since I did this analysis last year are (1) the emergence of autonomous systems in both the physical and virtual world and (2) startups shifting away from building broad technology platforms to focusing on solving specific business problems.

Reflections on the landscape

With the focus moving from “machine intelligence as magic box” to delivering real value immediately, there are more ways to bring a machine intelligence startup to market.  Most of these machine intelligence startups take well-worn machine intelligence techniques, some more than a decade old, and apply them to new data sets and workflows. It’s still true that big companies, with their massive data sets and contact with their customers, have inherent advantages—though startups are finding a way to enter.

Achieving autonomy

In last year’s roundup, the focus was almost exclusively on machine intelligence in the virtual world. This time we’re seeing it in the physical world, in the many flavors of autonomous systems: self-driving cars, autopilot drones, robots that can perform dynamic tasks without every action being hard coded. It’s still very early days—most of these systems are just barely useful, though we expect that to change quickly.

These physical systems are emerging because they meld many now-maturing research avenues in machine intelligence. Computer vision, the combination of deep learning and reinforcement learning, natural language interfaces, and question-answering systems are all building blocks to make a physical system autonomous and interactive. Building these autonomous systems today is as much about integrating these methods as inventing new ones.

The new (in)human touch

The virtual world is becoming more autonomous, too. Virtual agents, sometimes called bots, use conversational interfaces (think of Her, without the charm). Some of these virtual agents are entirely automated, others are “human-in-the-loop” systems, where algorithms take “machine-like” subtasks and a human adds creativity or execution. (In some, the human is training the bot while she or he works.) The user interacts with the system by either typing in natural language or speaking, and the agent responds in kind.

These services sometimes give customers confusing experiences, like mine the other day when I needed to contact customer service about my cell phone. I didn’t want to talk to anyone, so I opted for online chat. It was the most “human” customer service experience of my life, so weirdly perfect I found myself wondering whether I was chatting with a person, a bot, or some hybrid. Then I wondered if it even mattered. I had a fantastic experience and my issue was resolved. I felt gratitude to whatever it was on the other end, even if it was a bot.

On one hand, these agents can act utterly professional, helping us with customer support, research, project management, scheduling, and e-commerce transactions. On the other hand, they can be quite personal and maybe we are getting closer to Her — with Microsoft’s romantic chatbot Xiaoice, automated emotional support is already here.


As these technologies warm up, they could transform new areas like education, psychiatry, and elder care, working alongside human beings to close the gap in care for students, patients, and the elderly.

50 shades of grey markets

At least I make myself laugh.

Many machine intelligence technologies will transform the business world by starting in regulatory grey areas. On the short list: health care (automated diagnostics, early disease detection based on genomics, algorithmic drug discovery); agriculture (sensor- and vision-based intelligence systems, autonomous farming vehicles); transportation and logistics (self-driving cars, drone systems, sensor-based fleet management); and financial services (advanced credit decisioning).

To overcome the difficulties of entering grey markets, we’re seeing some unusual strategies:

  • Startups are making a global arbitrage (e.g., health care companies going to market in emerging markets, drone companies experimenting in the least regulated countries).
  • The “fly under the radar” strategy. Some startups are being very careful to stay on the safest side of the grey area, keep a low profile, and avoid the regulatory discussion as long as possible.
  • Big companies like Google, Apple, and IBM are seeking out these opportunities because they have the resources to be patient and are the most likely to be able to effect regulatory change.
  • Startups are considering beefing up funding earlier than they would have, to fight inevitable legal battles and face regulatory hurdles sooner.

What’s your (business) problem?

A year ago, enterprises were struggling to make heads or tails of machine intelligence services (some of the most confusing were in the “platform” section of this landscape). When I spoke to potential enterprise customers, I often heard things like, “these companies are trying to sell me snake oil” or, “they can’t even explain to me what they do.”

The corporates wanted to know what current business problems these technologies could solve. They didn’t care about the technology itself. The machine intelligence companies, on the other hand, just wanted to talk about their algorithms and how their platform could solve hundreds of problems (this was often true, but that’s not the point!).

Two things have happened that are helping to create a more productive middle ground:

  1. Enterprises have invested heavily in becoming “machine intelligence literate.” I’ve had roughly 100 companies reach out to get perspective on how they should think about machine intelligence. Their questions have been thoughtful, they’ve been changing their organizations to make use of these new technologies, and many different roles across the organization care about this topic (from CEOs to technical leads to product managers).
  2. Many machine intelligence companies have figured out that they need to speak the language of solving a business problem. They are packaging solutions to specific business problems as separate products and branding them that way. They often work alongside a company to create a unique solution instead of just selling the technology itself, being one part educator and one part executor. Once businesses learn what new questions can be answered with machine intelligence, these startups may make a more traditional technology sale.

The great verticalization


I remember reading Who Says Elephants Can’t Dance and being blown away by the ability of a technology icon like IBM to risk it all. (This was one of the reasons I went to work for them out of college.) Now IBM seems poised to try another risk-it-all transformation—moving from a horizontal technology provider to directly transforming a vertical. And why shouldn’t Watson try to be a doctor or a concierge? It’s a brave attempt.

It’s not just IBM: you could probably make an entire machine intelligence landscape just of Google projects. (If anyone takes a stab, I’d love to see it!)

Your money is nice, but tell me more about your data

In the machine intelligence world, founders are selling their companies, as I suggested last year—but it’s about more than just money. I’ve heard from founders that they are only interested in an acquisition if the acquirer has the right data set to make their product work. We’re hearing things like, “I’m not taking conversations but, given our product, if X came calling it’d be hard to turn down.” “X” is most often Slack (!), Google, Facebook, Twitter in these conversations—the companies that have the data.

(Eh)-I

Until recently, there’s been one secret in machine intelligence talent: Canada! During the “AI winter,” when this technology fell out of favor in the 80s and 90s, the Canadian government was one of a few entities funding AI research. This support sustained the formidable trio of Geoffrey Hinton, Yoshua Bengio, and Yann LeCun, the godfathers of deep learning.

Canada continues to be central to the machine intelligence frontier. As an unapologetically proud Canadian, it’s been a pleasure to work with groups like AICML to commercialize advanced research, the Machine Learning Creative Destruction Lab to support startups, and to bring the machine intelligence world together at events like this one.

So what now?

Machine intelligence is even more of a story than last year, in large companies as well as startups. In the next year, the practical side of these technologies will flourish. Most new entrants will avoid generic technology solutions, and instead have a specific business purpose to which to put machine intelligence.

I can’t wait to see more combinations of the practical and eccentric. A few years ago, a company like Orbital Insight would have seemed farfetched—wait, you’re going to use satellites and computer vision algorithms to tell me what the construction growth rate is in China!?—and now it feels familiar.

Similarly, researchers are doing things that make us stop and say, “Wait, really?” They are tackling important problems we may not have imagined were possible, like creating fairy godmother drones to help the elderly, computer vision that detects the subtle signs of PTSD, autonomous surgical robots that remove cancerous lesions, and fixing airplane WiFi (just kidding, not even machine intelligence can do that).

Overall, agents will become more eloquent, autonomous systems more pervasive, machine intelligence more…intelligent. I expect more magic in the years to come.

 

Source: O’Reilly – The current state of machine intelligence 2.0, by Shivon Zilis

A Learning Advance in Artificial Intelligence Rivals Human Abilities

Computer researchers reported artificial-intelligence advances on Thursday that surpassed human capabilities for a narrow set of vision-related tasks.

The improvements are noteworthy because so-called machine-vision systems are becoming commonplace in many aspects of life, including car-safety systems that detect pedestrians and bicyclists, as well as in video game controls, Internet search and factory robots.

Researchers at the Massachusetts Institute of Technology, New York University and the University of Toronto reported a new type of “one shot” machine learning on Thursday in the journal Science, in which a computer vision program outperformed a group of humans in identifying handwritten characters based on a single example.

The program is capable of quickly learning the characters in a range of languages and generalizing from what it has learned. The authors suggest this capability is similar to the way humans learn and understand concepts.

The new approach, known as Bayesian Program Learning, or B.P.L., is different from current machine learning technologies known as deep neural networks.

Neural networks can be trained to recognize human speech, detect objects in images or identify kinds of behavior by being exposed to large sets of examples.

Although such networks are modeled after the behavior of biological neurons, they do not yet learn the way humans do — acquiring new concepts quickly. By contrast, the new software program described in the Science article is able to learn to recognize handwritten characters after “seeing” only a few or even a single example.

The researchers compared the capabilities of their Bayesian approach and other programming models using five separate learning tasks that involved a set of characters from a research data set known as Omniglot, which includes 1,623 handwritten character sets from 50 languages. Both images and pen strokes needed to create characters were captured.
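
For contrast, here is a deliberately naive one-shot baseline, not the Bayesian Program Learning method described in the article (the toy “images” below are made-up binary vectors): it stores a single example per character class and classifies a new image by nearest neighbor. B.P.L. goes well beyond this by modeling the pen strokes that generate each character (the Omniglot set captures strokes as well as images).

```python
# Naive one-shot classification baseline: one stored example per class,
# nearest neighbor in raw feature space. Shown only to illustrate the task.
import numpy as np

def one_shot_nearest_neighbor(support_set, query):
    """support_set: {class_label: single example vector}; query: vector to classify."""
    best_label, best_dist = None, float("inf")
    for label, example in support_set.items():
        dist = np.linalg.norm(query - example)
        if dist < best_dist:
            best_label, best_dist = label, dist
    return best_label

# Toy "images" as flattened binary vectors, one per character class.
support = {
    "alpha": np.array([1, 0, 1, 0, 1, 1]),
    "beta":  np.array([0, 1, 0, 1, 1, 0]),
}
print(one_shot_nearest_neighbor(support, np.array([1, 0, 1, 1, 1, 1])))   # -> "alpha"
```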

“With all the progress in machine learning, it’s amazing what you can do with lots of data and faster computers,” said Joshua B. Tenenbaum, a professor of cognitive science and computation at M.I.T. and one of the authors of the Science paper. “But when you look at children, it’s amazing what they can learn from very little data. Some comes from prior knowledge and some is built into our brain.”

Also on Thursday, organizers of an annual academic machine vision competition reported gains in lowering the error rate in software for finding and classifying objects in digital images.

[Photo: Ruslan Salakhutdinov, Brenden M. Lake and Joshua B. Tenenbaum, the three researchers who created a computer model that captures humans’ unique ability to learn new concepts from a single example. Credit: Alain Decarie for The New York Times]

“I’m constantly amazed by the rate of progress in the field,” said Alexander Berg, an assistant professor of computer science at the University of North Carolina, Chapel Hill.

The competition, known as the Imagenet Large Scale Visual Recognition Challenge, pits teams of researchers at academic, government and corporate laboratories against one another to design programs to both classify and detect objects. It was won this year by a group of researchers at the Microsoft Research laboratory in Beijing.

 

The Microsoft team was able to cut the number of errors in half in a task that required their program to classify objects from a set of 1,000 categories. The team also won a second competition by accurately detecting all instances of objects in 200 categories.

The contest requires the programs to examine a large number of digital images, and either label or find objects in the images. For example, they may need to distinguish between objects such as bicycles and cars, both of which might appear to have two wheels from a certain perspective.

In both the handwriting recognition task described in Science and in the visual classification and detection competition, researchers made efforts to compare their progress to human abilities. In both cases, the software advances now appear to surpass human abilities.

However, computer scientists cautioned against drawing conclusions about “thinking” machines or making direct comparisons to human intelligence.

“I would be very careful with terms like ‘superhuman performance,’ ” said Oren Etzioni, chief executive of the Allen Institute for Artificial Intelligence in Seattle. “Of course the calculator exhibits superhuman performance, with the possible exception of Dustin Hoffman,” he added, in reference to the actor’s portrayal of an autistic savant with extraordinary math skills in the movie “Rain Man.”

The advances reflect the intensifying focus in Silicon Valley and elsewhere on artificial intelligence.

Last month, the Toyota Motor Corporation announced a five-year, billion-dollar investment to create a research center based next to Stanford University to focus on artificial intelligence and robotics.

Also, a formerly obscure academic conference, Neural Information Processing Systems, underway this week in Montreal, has doubled in size since the previous year and has attracted a growing list of brand-name corporate sponsors, including Apple for the first time.

“There is a sellers’ market right now — not enough talent to fill the demand from companies who need them,” said Terrence Sejnowski, the director of the Computational Neurobiology Laboratory at the Salk Institute for Biological Studies in San Diego. “Ph.D. students are getting hired out of graduate schools for salaries that are higher than faculty members who are teaching them.”

Source: NYTimes – A Learning Advance in Artificial Intelligence Rivals Human Abilities, by John Markoff