Artificial intelligence is redefining corporate finance

Sven Denecken, SVP and Head of Product Management and Co-Innovation at SAP, discusses how AI is changing finance functions

Artificial intelligence (AI) and its potential to transform business processes across industries has become a central focus for organizations across the globe. Whether it's conversations in the boardroom, sessions at an industry conference or a small-scale team meeting of accountants, companies today are buzzing about AI and the opportunity it presents to help usher in digital transformation.

While many still speculate that AI is more hype than reality, AI is already deeply ingrained in many organisations, driving automation that simplifies business processes.

This is especially true in corporate finance, with a recent study from Oxford Economics and SAP finding that 73% of finance executives agree that automation is improving finance efficiency at their company.

What is AI?


Defining artificial intelligence is perhaps the biggest initial hurdle that many finance stakeholders face in evaluating these technologies and weighing their potential impact in the enterprise. So, to start with the basics, AI can be broadly defined to include any simulation of human intelligence exhibited by machines.

One well-established application that many organizations are using today is robotic process automation (RPA): rule-based automation that can be extremely beneficial to companies in automating routine tasks. But beyond RPA, AI technology is a huge growth area that is branching into a multitude of directions in research, development and investment.

Other examples of AI include autonomous robotics, natural language processing or NLP (think of virtual assistants such as Apple’s Siri or Amazon’s Alexa), knowledge representation techniques (knowledge graphs) and more.

Machine learning is one specific subset of AI that has been gaining buzz in the industry. It aims to teach computers how to accomplish tasks from data inputs, without the explicit rule-based programming that has historically defined RPA.

Drive efficiency in finance with AI

RPA is increasingly common within finance departments today, to help automate routine finance responsibilities, including streamlining transactional tasks and reporting. However, advanced AI technologies, like machine learning, have the power to take this a step further, removing the need for rule-based machines by implementing learning technology.

For instance, invoicing is a finance responsibility that can be a nightmare for accounts receivable or treasury clerks. A customer might pay the incorrect amount for an invoice, combine several invoices into one check, or forget to include their invoice reference number. Rectifying this can eat up hours of sifting through invoices or tracking down the customer.

This is an area where machine learning could support finance teams in real time by suggesting how to match payments to invoices. Finance teams can not only better ensure that payments are applied accurately, but also massively cut the time spent manually tracking down the relevant information, freeing them to address other needs within the business.
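
A minimal sketch of that kind of payment-to-invoice matching. In practice the scoring weights would be learned from historically confirmed matches; here they are hand-set to keep the example self-contained, and all invoice and payment data are invented:

```python
# Sketch: suggest which open invoices a customer payment most likely settles.
# Weights stand in for coefficients a model would learn from confirmed matches.
from difflib import SequenceMatcher
from datetime import date

invoices = [
    {"id": "INV-1001", "amount": 1200.00, "due": date(2018, 3, 1)},
    {"id": "INV-1002", "amount": 450.50,  "due": date(2018, 3, 15)},
    {"id": "INV-1003", "amount": 1650.50, "due": date(2018, 3, 15)},
]

payment = {"amount": 1650.50, "reference": "inv 1003 / 1002", "received": date(2018, 3, 14)}

def match_score(payment, invoice):
    """Combine amount closeness, reference-text similarity and date proximity."""
    amount_diff = abs(payment["amount"] - invoice["amount"]) / invoice["amount"]
    amount_score = max(0.0, 1.0 - amount_diff)
    text_score = SequenceMatcher(None, payment["reference"].lower(),
                                 invoice["id"].lower()).ratio()
    days_apart = abs((payment["received"] - invoice["due"]).days)
    date_score = max(0.0, 1.0 - days_apart / 30.0)
    return 0.5 * amount_score + 0.3 * text_score + 0.2 * date_score

suggestions = sorted(invoices, key=lambda inv: match_score(payment, inv), reverse=True)
for inv in suggestions:
    print(f"{inv['id']}: score {match_score(payment, inv):.2f}")
```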

Let AI have a seat at the table

The potential for AI doesn’t just lie in efficiency. As these machines get smarter, there is enormous potential for AI to support CFOs and finance directors in informing strategy and driving action.

In the consumer technology space, NLP applications like Siri and Alexa have helped to “humanize” technology and information for individuals, answering questions about the weather and news headlines – even occasionally entertaining the user with a bad joke. The use of these voice-enabled devices isn’t limited to the consumer setting, and in the coming years we will likely see an increase of NLP technology being applied in the B2B enterprise setting.

For instance, CFOs and other finance executives often receive questions in boardroom meetings about revenue forecasts and a myriad of other topics. The executive often needs to spend countless hours prepping and pulling these figures to anticipate what information might be needed, or alternatively, halt an in-progress meeting to pull up the latest numbers.

These digital assistant devices could be used in the enterprise setting to let the CFO easily ask questions of his or her data analytics system in real time. This technology would not only enable uninterrupted meetings, but also allow the CFO and other company stakeholders to make informed decisions that drive action quickly and with confidence.
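
A minimal sketch of that interaction, assuming the spoken question has already been transcribed and the finance figures live in a small in-memory table; a real assistant would sit on speech recognition and an actual analytics system, and every figure below is invented:

```python
# Sketch: answer a boardroom-style question against a small in-memory dataset.
quarterly_revenue = {"Q1": 4.2, "Q2": 4.6, "Q3": 5.1, "Q4": 5.4}  # in $ millions

def answer(question: str) -> str:
    q = question.lower()
    if "revenue" in q and "forecast" in q:
        growth = quarterly_revenue["Q4"] / quarterly_revenue["Q3"] - 1
        projected = quarterly_revenue["Q4"] * (1 + growth)
        return f"Next quarter's revenue is projected at ${projected:.1f}M (+{growth:.0%})."
    if "revenue" in q:
        for quarter in quarterly_revenue:
            if quarter.lower() in q:
                return f"{quarter} revenue was ${quarterly_revenue[quarter]:.1f}M."
        return f"Full-year revenue was ${sum(quarterly_revenue.values()):.1f}M."
    return "I don't have that figure yet."

print(answer("What was revenue in Q3?"))
print(answer("What is the revenue forecast for next quarter?"))
```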

Smart technologies will change the talent landscape

AI offers exciting promise for innovation as companies look to stay ahead in today's fast-paced, globalised business landscape, but as its popularity continues to grow, conversations have begun about the possible negative implications for workers.

For finance teams, while AI can have a measurable impact on efficiency, it cannot replace the human element. Human review and monitoring are still required when technology like machine learning streamlines manual tasks, especially in cases that may be too complex for the machine to rectify.

Additionally, there is an opportunity for finance executives to build their teams by hiring people who are familiar with advanced technologies and can help support, improve and innovate their use within the finance function, ensuring human workers are equipped to excel in their roles.

Eighty-four percent of global companies cite digital transformation as an important factor for survival in the next five years, but to date, only 3% of organisations have completed a company-wide digital transformation, according to another recent survey by Oxford Economics and SAP.

Finance executives in particular believe that investment in digital skills and technology will have the greatest impact on company revenue in the next two years.

By exploring how AI technology can be implemented, not only in streamlining processes, but also as a valuable resource in informing strategy and driving action in finance, CFOs and other finance stakeholders can ensure their workforce is best armed to drive success in the digital economy.

Source: Financial Director - Artificial intelligence is redefining corporate finance


Automation, robotics, and the factory of the future

Cheaper, more capable, and more flexible technologies are accelerating the growth of fully automated production facilities. The key challenge for companies will be deciding how best to harness their power.

At one Fanuc plant in Oshino, Japan, industrial robots produce industrial robots, supervised by a staff of only four workers per shift. In a Philips plant producing electric razors in the Netherlands, robots outnumber the nine production workers by more than 14 to 1. Camera maker Canon began phasing out human labor at several of its factories in 2013.

This “lights out” production concept—where manufacturing activities and material flows are handled entirely automatically—is becoming an increasingly common attribute of modern manufacturing. In part, the new wave of automation will be driven by the same things that first brought robotics and automation into the workplace: to free human workers from dirty, dull, or dangerous jobs; to improve quality by eliminating errors and reducing variability; and to cut manufacturing costs by replacing increasingly expensive people with ever-cheaper machines. Today’s most advanced automation systems have additional capabilities, however, enabling their use in environments that have not been suitable for automation up to now and allowing the capture of entirely new sources of value in manufacturing.

Falling robot prices

As robot production has increased, costs have gone down. Over the past 30 years, the average robot price has fallen by half in real terms, and even further relative to labor costs (Exhibit 1). As demand from emerging economies encourages the production of robots to shift to lower-cost regions, they are likely to become cheaper still.

Exhibit 1 (chart not reproduced here)

Accessible talent

People with the skills required to design, install, operate, and maintain robotic production systems are becoming more widely available, too. Robotics engineers were once rare and expensive specialists. Today, these subjects are widely taught in schools and colleges around the world, either in dedicated courses or as part of more general education on manufacturing technologies or engineering design for manufacture. The availability of software, such as simulation packages and offline programming systems that can test robotic applications, has reduced engineering time and risk. It’s also made the task of programming robots easier and cheaper.

Ease of integration

Advances in computing power, software-development techniques, and networking technologies have made assembling, installing, and maintaining robots faster and less costly than before. For example, while sensors and actuators once had to be individually connected to robot controllers with dedicated wiring through terminal racks, connectors, and junction boxes, they now use plug-and-play technologies in which components can be connected using simpler network wiring. The components will identify themselves automatically to the control system, greatly reducing setup time. These sensors and actuators can also monitor themselves and report their status to the control system, to aid process control and collect data for maintenance, and for continuous improvement and troubleshooting purposes. Other standards and network technologies make it similarly straightforward to link robots to wider production systems.
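
A minimal sketch of that plug-and-play self-identification pattern, assuming a toy in-process registry rather than a real industrial fieldbus; the device classes and status fields are hypothetical:

```python
# Sketch: devices announce themselves to a control system and report their own
# status for maintenance, continuous improvement and troubleshooting.
class Device:
    def __init__(self, device_id: str, kind: str):
        self.device_id = device_id
        self.kind = kind
        self.cycles = 0

    def describe(self) -> dict:
        """What the device tells the controller when it is plugged in."""
        return {"id": self.device_id, "kind": self.kind}

    def status(self) -> dict:
        """Self-monitoring data the controller collects for maintenance."""
        return {"id": self.device_id, "cycles": self.cycles,
                "needs_service": self.cycles > 100_000}

class Controller:
    def __init__(self):
        self.devices = {}

    def plug_in(self, device: Device):
        info = device.describe()            # device identifies itself
        self.devices[info["id"]] = device
        print(f"registered {info['kind']} {info['id']}")

    def health_report(self):
        return [d.status() for d in self.devices.values()]

controller = Controller()
controller.plug_in(Device("S-01", "proximity sensor"))
controller.plug_in(Device("A-07", "gripper actuator"))
print(controller.health_report())
```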

New capabilities

Robots are getting smarter, too. Where early robots blindly followed the same path, and later iterations used lasers or vision systems to detect the orientation of parts and materials, the latest generations of robots can integrate information from multiple sensors and adapt their movements in real time. This allows them, for example, to use force feedback to mimic the skill of a craftsman in grinding, deburring, or polishing applications. They can also make use of more powerful computer technology and big data–style analysis. For instance, they can use spectral analysis to check the quality of a weld as it is being made, dramatically reducing the amount of postmanufacture inspection required.

Robots take on new roles

Today, these factors are helping to boost robot adoption in the kinds of applications robots already excel at: repetitive, high-volume production activities. As the cost and complexity of automating tasks with robots go down, it is likely that the kinds of companies already using robots will use even more of them. In the next five to ten years, however, we expect a more fundamental change in the kinds of tasks for which robots become both technically and economically viable (Exhibit 2). Here are some examples.

Exhibit 2 (chart not reproduced here)

Low-volume production

The inherent flexibility of a device that can be programmed quickly and easily will greatly reduce the number of times a robot needs to repeat a given task to justify the cost of buying and commissioning it. This will lower the threshold of volume and make robots an economical choice for niche tasks, where annual volumes are measured in the tens or hundreds rather than in the thousands or hundreds of thousands. It will also make them viable for companies working with small batch sizes and significant product variety. For example, flex track products now used in aerospace can “crawl” on a fuselage using vision to direct their work. The cost savings offered by this kind of low-volume automation will benefit many different kinds of organizations: small companies will be able to access robot technology for the first time, and larger ones could increase the variety of their product offerings.

Emerging technologies are likely to simplify robot programming even further. While it is already common to teach robots by leading them through a series of movements, for example, rapidly improving voice-recognition technology means it may soon be possible to give them verbal instructions, too.

Highly variable tasks

Advances in artificial intelligence and sensor technologies will allow robots to cope with a far greater degree of task-to-task variability. The ability to adapt their actions in response to changes in their environment will create opportunities for automation in areas such as the processing of agricultural products, where there is significant part-to-part variability. In Japan, trials have already demonstrated that robots can cut the time required to harvest strawberries by up to 40 percent, using a stereoscopic imaging system to identify the location of fruit and evaluate its ripeness.

These same capabilities will also drive quality improvements in all sectors. Robots will be able to compensate for potential quality issues during manufacturing. Examples here include altering the force used to assemble two parts based on the dimensional differences between them, or selecting and combining different sized components to achieve the right final dimensions.
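
A minimal sketch of that compensation idea, assuming measured part dimensions are already available; the tolerances, target values and force rule are all invented:

```python
# Sketch: compensate for part-to-part variation by pairing components so their
# combined size lands on target, and by scaling assembly force accordingly.
TARGET_STACK = 20.00          # mm, desired combined height of two parts
parts_a = [10.02, 9.97, 10.05]
parts_b = [9.95, 10.01, 10.03]

# Pick the pairing whose combined dimension is closest to the target.
best = min(((a, b) for a in parts_a for b in parts_b),
           key=lambda pair: abs(sum(pair) - TARGET_STACK))
print(f"selected pair: {best[0]:.2f} + {best[1]:.2f} = {sum(best):.2f} mm")

# Scale the press force with the measured interference between the two parts.
def assembly_force(interference_mm: float, base_n: float = 200.0) -> float:
    return base_n + 5_000.0 * max(0.0, interference_mm)

print(f"press force: {assembly_force(sum(best) - TARGET_STACK):.0f} N")
```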

Robot-generated data, and the advanced analysis techniques to make better use of them, will also be useful in understanding the underlying drivers of quality. If higher-than-normal torque requirements during assembly turn out to be associated with premature product failures in the field, for example, manufacturing processes can be adapted to detect and fix such issues during production.
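
A minimal sketch of that kind of analysis, assuming assembly torque readings have already been joined to later field-failure records; all numbers are invented:

```python
# Sketch: check whether higher-than-normal assembly torque is associated with
# premature field failures, so the line can flag such units during production.
from statistics import mean, stdev

records = [  # (torque in Nm, failed_in_field)
    (2.1, False), (2.0, False), (2.2, False), (2.1, False), (2.3, False),
    (2.9, True),  (3.0, True),  (2.8, False), (3.1, True),  (2.2, False),
]

failed   = [t for t, f in records if f]
survived = [t for t, f in records if not f]
print(f"mean torque, failed units:  {mean(failed):.2f} Nm")
print(f"mean torque, healthy units: {mean(survived):.2f} Nm")

# If the gap is meaningful, set an in-process alert threshold.
threshold = mean(survived) + 2 * stdev(survived)
flagged = [t for t, _ in records if t > threshold]
print(f"units to inspect before shipping: {len(flagged)} (torque > {threshold:.2f} Nm)")
```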

Complex tasks

While today’s general-purpose robots can control their movement to within 0.10 millimeters, some current configurations of robots have repeatable accuracy of 0.02 millimeters. Future generations are likely to offer even higher levels of precision. Such capabilities will allow them to participate in increasingly delicate tasks, such as threading needles or assembling highly sophisticated electronic devices. Robots are also becoming better coordinated, with the availability of controllers that can simultaneously drive dozens of axes, allowing multiple robots to work together on the same task.

Finally, advanced sensor technologies, and the computer power needed to analyze the data from those sensors, will allow robots to take on tasks like cutting gemstones that previously required highly skilled craftspeople. The same technologies may even permit activities that cannot be done at all today: for example, adjusting the thickness or composition of coatings in real time as they are applied to compensate for deviations in the underlying material, or “painting” electronic circuits on the surface of structures.

Working alongside people

Companies will also have far more freedom to decide which tasks to automate with robots and which to conduct manually. Advanced safety systems mean robots can take up new positions next to their human colleagues. If sensors indicate the risk of a collision with an operator, the robot will automatically slow down or alter its path to avoid it. This technology permits the use of robots for individual tasks on otherwise manual assembly lines. And the removal of safety fences and interlocks mean lower costs—a boon for smaller companies. The ability to put robots and people side by side and to reallocate tasks between them also helps productivity, since it allows companies to rebalance production lines as demand fluctuates.
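
A minimal sketch of that speed-and-separation logic, assuming the controller already receives a distance estimate from its safety sensors; real collaborative robots implement this in certified safety controllers, and the bands and speeds here are made up:

```python
# Sketch: slow down or stop a robot based on how close the nearest person is.
def allowed_speed(distance_to_person_m: float, nominal_speed: float = 1.0) -> float:
    if distance_to_person_m > 2.0:      # nobody nearby: full speed
        return nominal_speed
    if distance_to_person_m > 1.0:      # person approaching: reduced speed
        return nominal_speed * 0.3
    return 0.0                          # person inside the safety zone: stop

for d in (3.0, 1.5, 0.6):
    print(f"{d:.1f} m -> {allowed_speed(d):.2f} m/s")
```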

Robots that can operate safely in proximity to people will also pave the way for applications away from the tightly controlled environment of the factory floor. Internet retailers and logistics companies are already adopting forms of robotic automation in their warehouses. Imagine the productivity benefits available to a parcel courier, though, if an onboard robot could presort packages in the delivery vehicle between drops.

Agile production systems

Automation systems are becoming increasingly flexible and intelligent, adapting their behavior automatically to maximize output or minimize cost per unit. Expert systems used in beverage filling and packing lines can automatically adjust the speed of the whole production line to suit whichever activity is the critical constraint for a given batch. In automotive production, expert systems can automatically make tiny adjustments in line speed to improve the overall balance of individual lines and maximize the effectiveness of the whole manufacturing system.
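
A minimal sketch of that constraint-following behaviour, assuming each station reports the rate it can sustain for the current batch; station names and rates are hypothetical:

```python
# Sketch: pace a filling/packing line to its slowest station (the current
# constraint) so upstream buffers neither starve nor overflow.
station_rates = {          # units per minute each station can sustain right now
    "filler": 220.0,
    "capper": 240.0,
    "labeller": 185.0,     # the bottleneck for this batch
    "packer": 210.0,
}

bottleneck = min(station_rates, key=station_rates.get)
line_speed = station_rates[bottleneck] * 0.98   # run just under the constraint

print(f"constraint: {bottleneck} at {station_rates[bottleneck]:.0f} units/min")
print(f"set line speed to {line_speed:.0f} units/min")
```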

While the vast majority of robots in use today still operate in high-speed, high-volume production applications, the most advanced systems can make adjustments on the fly, switching seamlessly between product types without the need to stop the line to change programs or reconfigure tooling. Many current and emerging production technologies, from computerized-numerical-control (CNC) cutting to 3-D printing, allow component geometry to be adjusted without any need for tool changes, making it possible to produce in batch sizes of one. One manufacturer of industrial components, for example, uses real-time communication from radio-frequency identification (RFID) tags to adjust components’ shapes to suit the requirements of different models.

The replacement of fixed conveyor systems with automated guided vehicles (AGVs) even lets plants reconfigure the flow of products and components seamlessly between different workstations, allowing manufacturing sequences with entirely different process steps to be completed in a fully automated fashion. This kind of flexibility delivers a host of benefits: facilitating shorter lead times and a tighter link between supply and demand, accelerating new product introduction, and simplifying the manufacture of highly customized products.

Making the right automation decisions

With so much technological potential at their fingertips, how do companies decide on the best automation strategy? It can be all too easy to get carried away with automation for its own sake, but the result of this approach is almost always projects that cost too much, take too long to implement, and fail to deliver against their business objectives.

A successful automation strategy requires good decisions on multiple levels. Companies must choose which activities to automate, what level of automation to use (from simple programmable-logic controllers to highly sophisticated robots guided by sensors and smart adaptive algorithms), and which technologies to adopt. At each of these levels, companies should ensure that their plans meet the following criteria.

Automation strategy must align with business and operations strategy. As we have noted above, automation can achieve four key objectives: improving worker safety, reducing costs, improving quality, and increasing flexibility. Done well, automation may deliver improvements in all these areas, but the balance of benefits may vary with different technologies and approaches. The right balance for any organization will depend on its overall operations strategy and its business goals.

Automation programs must start with a clear articulation of the problem. It’s also important that this includes the reasons automation is the right solution. Every project should be able to identify where and how automation can offer improvements and show how these improvements link to the company’s overall strategy.

Automation must show a clear return on investment. Companies, especially large ones, should take care not to overspecify, overcomplicate, or overspend on their automation investments. Choosing the right level of complexity to meet current and foreseeable future needs requires a deep understanding of the organization’s processes and manufacturing systems.

Platforming and integration

Companies face increasing pressure to maximize the return on their capital investments and to reduce the time required to take new products from design to full-scale production. Building automation systems that are suitable only for a single line of products runs counter to both those aims, requiring repeated, lengthy, and expensive cycles of equipment design, procurement, and commissioning. A better approach is the use of production systems, cells, lines, and factories that can be easily modified and adapted.

Just as platforming and modularization strategies have simplified and reduced the cost of managing complex product portfolios, so a platform approach will become increasingly important for manufacturers seeking to maximize flexibility and economies of scale in their automation strategies.

Process platforms, such as a robot arm equipped with a weld gun, power supply, and control electronics, can be standardized, applied, and reused in multiple applications, simplifying programming, maintenance, and product support.

Automation systems will also need to be highly integrated into the organization’s other systems. That integration starts with communication between machines on the factory floor, something that is made more straightforward by modern industrial-networking technologies. But it should also extend into the wider organization. Direct integration with computer-aided design, computer-integrated engineering, and enterprise-resource-planning systems will accelerate the design and deployment of new manufacturing configurations and allow flexible systems to respond in near real time to changes in demand or material availability. Data on process variables and manufacturing performance flowing the other way will be recorded for quality-assurance purposes and used to inform design improvements and future product generations.

Integration will also extend beyond the walls of the plant. Companies won’t just require close collaboration and seamless exchange of information with customers and suppliers; they will also need to build such relationships with the manufacturers of processing equipment, who will increasingly hold much of the know-how and intellectual property required to make automation systems perform optimally. The technology required to permit this integration is becoming increasingly accessible, thanks to the availability of open architectures and networking protocols, but changes in culture, management processes, and mind-sets will be needed in order to balance the costs, benefits, and risks.

Cheaper, smarter, and more adaptable automation systems are already transforming manufacturing in a host of different ways. While the technology will become more straightforward to implement, the business decisions will not. To capture the full value of the opportunities presented by these new systems, companies will need to take a holistic and systematic approach, aligning their automation strategy closely with the current and future needs of the business.

Source: McKinsey - Automation, robotics, and the factory of the future

Competing in the Age of Artificial Intelligence

Until recently, artificial intelligence (AI) was similar to nuclear fusion in unfulfilled promise. It had been around a long time but had not reached the spectacular heights foreseen in its infancy. Now, however, AI is realizing its potential in achieving human-like capabilities, so it is time to ask: How can business leaders harness AI to take advantage of the specific strengths of man and machine?

AI is swiftly becoming the foundational technology in areas as diverse as self-driving cars and financial trading. Self-learning algorithms are now routinely embedded in mobile and online services. Researchers have leveraged massive gains in processing power and the data streaming from digital devices and connected sensors to improve AI performance. And machines have essentially cracked speech and vision specifically and human communication generally. The implications are profound:

  • Because they know how to speak, read text, and absorb and retain encyclopedic knowledge, machines can interact with people intuitively and naturally on a wide range of topics at considerable depth.
  • Because they can identify objects and recognize optical patterns, machines can leave the virtual and join the real world.

A field that once disappointed its proponents is now striking remarkably close to home as it expands into activities commonly performed by humans. (See the exhibit and the sidebar.) AI programs, for example, have diagnosed specific cancers more accurately than radiologists. No wonder that traditional companies in finance, retail, health care, and other industries have started to pour billions of dollars into the field.


Three milestone events made the general public aware of AI. Each one illustrates key aspects of the technology.

Deep Blue’s Defeat of World Chess Champion Garry Kasparov in 1997. Chess was originally considered an exercise that captures the essential tactical and strategic elements of human intelligence, and so it became the standard by which new AI algorithms were tested. For decades, programmers made little progress in defeating human players. But in 1997, Deep Blue, a computer developed by IBM, won the match against the world champion. Still, many people were disappointed when they realized that solving chess was not the same as solving artificial general intelligence. They did not like that Deep Blue relied heavily on brute force and memory. The program did not learn and certainly did not excel at any task but chess.

The event, however, revealed two important lessons. First, machines solve problems differently than people do. Second, many “intelligent” tasks are ultimately narrow and so can be solved by specialized programs.

With AlphaGo’s 2016 victory over Lee Sedol in Go, computer dominance of board games was complete. AlphaGo, developed by DeepMind Technologies, relied on deep learning—a neural network, or computational brain, with multiple layers—to beat a Go world champion. An intriguing fact about this match was how the machine prepared: having run out of human games to study, it spent the final months before the match playing against itself.

Watson’s Victory over Top Jeopardy Champs in 2011. By winning this challenging game show, IBM’s Watson effectively passed a Turing test of human-like intelligence. The performance showcased state-of-the-art speech recognition, natural-language processing, and search. The victory, however, was clinched by a different skill: Watson outperformed the other contestants in the “Daily Doubles,” in which players can wager all or part of their current winnings to secure a decisive lead. Making the best bet requires fast sequential reasoning, knowledge of game theory, and an ability to calculate probabilities and outcomes correctly. All these are areas in which humans are notoriously weak, as the Nobel laureate Daniel Kahneman observed in his famous book Thinking, Fast and Slow. Machines, on the other hand, think fast and fast in making data-heavy decisions.

Google’s Demonstration of a Self-Driving Car in 2012. Google is not the pioneer of self-driving cars. That distinction arguably goes to Ernst Dickmanns, a German computer vision expert who rode 1,785 kilometers in autonomous mode on a German autobahn in 1995, reaching speeds above 175 kilometers an hour.

Dickmanns, however, never had to turn left. In their 2004 book The New Division of Labor, Frank Levy and Richard Murnane argue that “executing a left turn against oncoming traffic involves so many factors that it is hard to imagine discovering the set of rules that can replicate a driver’s behavior.” Google’s self-driving car, however, routinely managed this exercise without incident. The car combined robots, computer vision, and real-time data processing to produce the ultimate intelligent agent that was capable of both exploring and learning from the real world.

Because AI systems think and interact, they are invariably compared to people. But while humans are fast at parallel processing (pattern recognition) and slow at sequential processing (logical reasoning), computers have mastered the former in narrow fields and are superfast in the latter. Just as submarines don’t swim, machines solve problems and accomplish tasks in their own way.


Without further quantum leaps in processing power, machines will not reach artificial general intelligence (AGI): the combination of vastly different types of problem-solving capabilities—the hallmark of human intelligence. Today’s robo-car, for example, doesn’t exhibit what we would consider common sense, such as abandoning an excursion to assist a child who has fallen off her bicycle. But when properly applied, AI excels at performing many business tasks quickly, intelligently, and thoroughly.

Artificial intelligence is no longer an elective. It is critical for companies to figure out how humans and computers can play off each other’s strengths as intertwined actors to create competitive advantage.

The Evolution of Competitive Advantage

In simpler times, a technology tool, such as Walmart’s logistics tracking system in the 1980s, could serve as a source of advantage. AI is different. The naked algorithms themselves are unlikely to provide an edge. Many of them are in the public domain, and businesses can access open-source software platforms, such as Google’s TensorFlow. OpenAI, a nonprofit organization started by Elon Musk and others, is making AI tools and research widely available. And many prominent AI researchers have insisted on retaining the right to publish their results when joining companies such as Baidu, Facebook, and Google.

Rather than scrap traditional sources of competitive advantage, such as position and capability, AI reframes them. (See the exhibit.) Companies, then, need a fluid and dynamic view of their strengths. Positional advantage, for example, generally focuses on relatively static aspects that allow a company to win market share: proprietary assets, distribution networks, access to customers, and scale. These articles of faith have to be reimagined in the AI world.

Let’s look at three examples of how AI shifts traditional notions of competitive advantage.

  • Data. AI’s strongest applications are data-hungry. Pioneers in the field, such as Facebook, Google, and Uber, have each secured a “privileged zone” by gaining access to current and future data, the raw material of AI, from their users and others in ways that go far beyond traditional data harvesting. Their scale gives them the ability to run more training data through their algorithms and thus improve performance. In the race to leverage fully functional self-driving cars, for example, Uber has the advantage of collecting 100 million miles of fleet data daily from its drivers. This data will eventually inform the company’s mobility services. Facebook and Google take advantage of their scale and depth to hone their ad targeting.

    Not all companies can realistically aspire to be Facebook, Google, or Uber. But they do not need to. By building, accessing, and leveraging shared, rented, or complementary data sets, even if that means collaborating with competitors, companies can complement their proprietary assets to create their own privileged zone. Sharing is not a dirty word. The key is to build an unassailable and advantaged collection of open and closed data sources.

  • Customer Access. AI also changes the parameters of customer access. Well-placed physical stores and high-traffic online outlets give way to customer insights generated through AI. Major retailers, for example, can run loyalty, point-of-sale, weather, and location data through their AI engines to create personalized marketing and promotion offers. They can predict your route and appetite—before you are aware of them—and conveniently provide familiar, complementary, or entirely new purchasing options. The suggestive power of many of these offers has generated fresh revenue at negligible marginal cost.
  • Capabilities. Capabilities traditionally have been segmented into discrete sources of advantage, such as knowledge, skills, and processes. AI-driven automation merges these areas in a continual cycle of execution, exploration, and learning. As an algorithm incorporates more data, the quality of its output improves. Similarly, on the human side, agile ways of working blur distinctions between traditional capabilities as cross-functional teams build quick prototypes and improve them on the basis of fast feedback from customers and end users.

    AI and agile are inherently iterative. In both, offerings and processes become continuous cycles. Algorithms learn from experience, allowing companies to merge the broad and fast exploration of new opportunities with the exploitation of known ones. This helps companies thrive under conditions of high uncertainty and rapid change.

In addition to reframing specific sources of competitive advantage, AI helps increase the rate and quality of decision making. For specific tasks, the number of inputs and the speed of processing for machines can be millions of times higher than they are for humans. Predictive analytics and objective data replace gut feel and experience as a central driver of many decisions. Stock trading, online advertising, and supply chain management and pricing in retail have all moved sharply in this direction.

To be clear, humans will not become obsolete, even if there will be dislocations similar to (but arguably more rapid than) those during the Industrial Revolution. First, you need people to build the systems. Uber, for instance, has hired hundreds of self-driving vehicle experts, about 50 of whom are from Carnegie Mellon University’s Robotics Institute. And AI experts are the most in-demand hires on Wall Street. Second, humans can provide the common sense, social skills, and intuition that machines currently lack. Even if routine tasks are delegated to computers, people will stay in the loop for a long time to ensure quality.

In this new AI-inspired world, where the sources of advantage have been transformed, strategic issues morph into organizational, technological, and knowledge issues, and vice versa. Structural flexibility and agility—for both man and machine—become imperative to address the rate and degree of change.

Scalable hardware and adaptive software provide the foundation for AI systems to take advantage of scale and flexibility. One common approach is to build a central intelligence engine and decentralized semiautonomous agents. Tesla’s self-driving cars, for example, feed data into a central unit that periodically updates the decentralized software.
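
A minimal sketch of that central-engine-plus-edge-agents pattern, assuming agents report observations and periodically pull a new model version; the "model" here is a single threshold so the loop stays readable, and none of this reflects any particular vendor's pipeline:

```python
# Sketch: decentralized agents send data to a central engine, which periodically
# publishes an updated model that the agents then pull.
class CentralEngine:
    def __init__(self):
        self.observations = []
        self.model_version = 0
        self.threshold = 0.5

    def ingest(self, values):
        self.observations.extend(values)

    def retrain(self):
        # Stand-in for real training: move the threshold toward the data mean.
        if self.observations:
            self.threshold = sum(self.observations) / len(self.observations)
            self.model_version += 1

class EdgeAgent:
    def __init__(self, name, engine):
        self.name, self.engine = name, engine
        self.threshold, self.model_version = engine.threshold, engine.model_version

    def observe_and_report(self, values):
        self.engine.ingest(values)

    def sync(self):
        if self.engine.model_version > self.model_version:
            self.threshold = self.engine.threshold
            self.model_version = self.engine.model_version
            print(f"{self.name} updated to model v{self.model_version}")

engine = CentralEngine()
agents = [EdgeAgent("car-1", engine), EdgeAgent("car-2", engine)]
agents[0].observe_and_report([0.6, 0.7, 0.8])
agents[1].observe_and_report([0.5, 0.9])
engine.retrain()
for agent in agents:
    agent.sync()
```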

Winning strategies put a premium on agility, flexible employment, and continual training and education. AI-focused companies rarely have an army of traditional employees on their payroll. Open innovation and contracting agreements proliferate. As the chief operating officer of an innovative mobile bank admitted, his biggest struggle was to transform members of his leadership team into skilled managers of both people and robots.


Getting Started


 Companies looking to achieve a competitive edge through AI need to work through the implications of machines that can learn, conduct human interactions, and engage in other high-level functions—at unmatched scale and speed. They need to identify what machines do better than humans and vice versa, develop complementary roles and responsibilities for each, and redesign processes accordingly. AI often requires, for example, a new structure, of both centralized and decentralized activities, that can be challenging to implement. Finally, companies need to embrace the adaptive and agile ways of working and setting strategy that are common at startups and AI pioneers. All companies might benefit from this approach, but it is mandatory for AI-enabled processes, which undergo constant learning and adaptation for both man and machine.

Executives need to identify where AI can create the most significant and durable advantage. At the highest level, AI is well suited to areas with huge amounts of data, such as retail, and to routine tasks, such as pricing. But that heuristic oversimplifies the playing field. Increasingly, all corporate activities are awash in data and capable of being broken down into simple tasks. (See the exhibit.) We advocate looking at AI through four lenses:

  • Customer needs
  • Technological advances
  • Data sources
  • Decomposition of processes

First, define the needs of your customers. AI may be a sexy field, but it always makes sense to return to the basics in building a business. Where do your current or potential customers have explicit or implicit unmet needs? Even the most disruptive recent business ideas, such as Uber and Airbnb, address people’s fundamental requirements.

Second, incorporate technological advances. The most significant developments in AI generally involve assembling and processing new sources of data and making partially autonomous decisions. Numerous services and platforms can capture incoming data from databases, optical signals, text, and speech. You will probably not have to build such systems yourself. The same is true on the back end as a result of the increasing availability of output technologies such as digital agents and robots. Consider how you can use such technologies to transform your processes and offerings.

Third, create a holistic architecture that combines existing data with new or novel sources, even if they come from outside. The stack of AI services has become reasonably standardized and is increasingly accessible through intuitive tools. Even nonexperts can use large data sets.

Finally, break down processes and offerings into relatively routinized and isolated elements that can be automated, taking advantage of technological advances and data sources. Then, reassemble them to better meet your customers’ needs.

For many organizations, these steps can be challenging. To apply the four lenses systematically, companies need to be familiar with the current and emerging capabilities of the technology and the required infrastructure. A center for excellence can serve as a place to incubate technical and business acumen and disseminate AI expertise throughout the organization. But ultimately, AI belongs in and belongs to the businesses and functions that must put it to use.

Only when humans and machines solve problems together—and learn from each other—can the full potential of AI be achieved.

 

Source: bcg.com - Competing in the Age of Artificial Intelligence

The Future of Robotic Process Automation

Artificial intelligence. Is there any term that’s more used in tech these days or that has a wider range of meanings? Any one that conjures up more excitement, hyperbole and fear? In this episode Jon Prial talks with Adam Devine, the CMO of WorkFusion, one of Georgian Partners’ newest portfolio companies, about a very practical application of the technology: Using AI to improve and even automate what have traditionally been human-driven processes in the workplace. You’ll hear about robotic process automation, an emerging field that is bringing AI-powered software robots into the workplace to help make companies more efficient and effective.

Jon Prial: Artificial intelligence. Is there any term that’s more used in tech these days or that has a wider range of meanings? Is there any one that conjures up more excitement, hyperbole and fear? Today, we’re going to focus on a very practical and a real application of this technology, using AI to improve and automate what have traditionally been human-driven processes.

We’ll take a journey, looking at how technology has evolved to help automate the work of traditional back-office business processes. The latest step in the evolution has been the development of robotic process automation, an emerging field that’s bringing AI-powered software robots into the workplace to help make companies more efficient and effective.

We’ll find out how on today’s episode, when I talk to my guest, Adam Devine, head of marketing at WorkFusion. WorkFusion is one of the newest members to our portfolio, and it’s using AI to help large companies use intelligent automation to work more efficiently.

I’m Jon Prial, and welcome to the Impact Podcast.

Jon: At one point, I was looking at a survey. I'm not sure if it was on your website or something I found, but McKinsey had said that 49 percent of the activities that people are doing today in the global economy can be automated with currently demonstrated technology. Can you take me through your view of what you think of when you think of automation?

Adam Devine: Sure. First, I would invite everyone listening to close their eyes and imagine the huge expanse of a back office of a large financial institution or insurance company. Hundreds if not thousands of super-smart, capable people spending 30, 40, 50, 60 percent of their day doing things like operating the UIs of SAP or Oracle, super-repetitive swivel chair work, or looking at a PDF on one monitor and an Excel sheet on another monitor and simply, routinely transferring the information from that PDF, which you can’t manipulate, to an Excel sheet.

I think McKinsey is very much right. A high percentage of the work done by the average so-called knowledge worker, people who work with information all day long, can be automated.

Jon: The thought of taking the data from the PDF to the Excel spreadsheet has to get codified somewhere. How do you approach that in terms of that’s something that could be done more efficiently, that needs to be automated?

How do you figure that out? How do you get the algorithms behind all these changes, perhaps?

Adam: There is this notion of writing rules or having rules learned. In the old days, like two years ago, there was scripting — if/then/else automation. You’d have teams of engineers and maybe some data scientists writing rules for scripts to follow, and that meant, as you say, codifying each and every action that a machine would take so that there is absolutely no ambiguity about how the work is done.

This, today, is an old-fashioned way of automating a process. What we can do today with machine learning — and it’s not just our business, this is a growing trend — is having machines that learn. Learn is the key word.

Rather than writing the rules, people do as they do. They open up an Excel sheet. They open up a document. They click here. They click there, and over the course of time, machines can detect patterns that people can't. This is what I mean by learning: where someone clicks on a document once it's been digitized, and what the context of that information is.

With enough repetition — typically 400, 500 repetitions — the software is able to identify a pattern and train an algorithm to do what a person had been doing.
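
A minimal sketch of the learn-from-repetition idea Adam describes, assuming the only pattern to learn is which label precedes the value workers keep selecting; the documents, clicks and repetition threshold are invented, and WorkFusion's actual approach is far richer:

```python
# Sketch: instead of hand-writing an extraction rule, observe where people click
# on each document and, after enough repetitions, infer the pattern ourselves.
from collections import Counter

observations = []   # (document_text, value_the_worker_clicked)

def record_click(document: str, clicked_value: str):
    observations.append((document, clicked_value))

def learn_rule(min_repetitions: int = 5):
    """After enough examples, find the word that most often precedes the value."""
    if len(observations) < min_repetitions:
        return None
    preceding = Counter()
    for doc, value in observations:
        words = doc.split()
        if value in words:
            idx = words.index(value)
            if idx > 0:
                preceding[words[idx - 1]] += 1
    anchor, _ = preceding.most_common(1)[0]
    return anchor

def extract(document: str, anchor: str) -> str:
    words = document.split()
    return words[words.index(anchor) + 1]

# Workers process a few documents by hand; the software just watches.
for i, total in enumerate(["118.40", "92.00", "240.10", "17.65", "530.00"]):
    record_click(f"Invoice {1000+i} Date 2018-03-0{i+1} Total: {total} USD", total)

anchor = learn_rule()
print("learned anchor word:", anchor)
print("automated extraction:", extract("Invoice 2001 Date 2018-04-02 Total: 77.30 USD", anchor))
```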

Jon: I started, one of my early careers we did a lot in the world of workflow and image processing, taking electronic versions of paper and moving it through a process, maybe reading the paper, managing workflow. That evolved from paper-based processes to human-based processes.

Can you talk to me more, then, about robotic process automation, what that market is and what it was a few years ago and what it’s evolving to?

Adam: Sure. If workflow yesterday was the movement of paper, the movement of information, RPA is one level above that, or one step up the ladder, in that it doesn’t just move the information, it can transform it and transfer it. A good example would be moving structured information from SAP to Oracle or from Oracle to Workday.

These are systems that don’t inherently talk to one another. They’re different formats, and they require what you’d call human handoffs between these applications. RPA can operate these systems at either a UI level, meaning at a virtual desktop a bot will enter credentials automagically and run an operation to do a transaction or to move the information, or it can operate at an API level where — I guess you could call it— diplomatic code serves as an intermediary between these two applications.

I would say that RPA is the next level up above old‑fashioned BPM or workflow.

Jon: Does that involve AI, or does AI then come to the next level?

Adam: It can involve AI. One of the problems with scripting and with RPA is exceptions. What happens when something changes about the process or the content and the bot, which has been programmed to do a very defined task, says I don’t know how to do this? That means the process breaks. That means the bot breaks.

What happens, with just RPA, is that a person discovers that a bot has broken, because the business process has failed, and has to go in and manually retrain that bot and fix the business process. When you add AI to RPA, you have automated exceptions handling. You have an intelligent agent identify that the bot doesn’t know what to do and route that work to a person.

The person handles the change if the bot can’t figure it out, and that creates a contribution to the knowledge base. It teaches the bot what has gone wrong so that the same mistake or a similar mistake doesn’t happen in the future. What AI does for RPA is business continuity.
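
A minimal sketch of the exception-handling loop Adam describes, assuming a deliberately trivial normalization task and a plain dictionary standing in for the knowledge base; all values are invented:

```python
# Sketch: when a bot hits input it can't handle, route the item to a person and
# remember the correction so the same exception doesn't recur.
knowledge_base = {"usd": "USD", "us dollar": "USD", "eur": "EUR"}

def bot_normalize_currency(raw: str):
    key = raw.strip().lower()
    return knowledge_base.get(key)          # None means the bot doesn't know

def human_review(raw: str) -> str:
    # Stand-in for a real person handling the exception queue.
    corrections = {"u.s. dollars": "USD", "euro": "EUR"}
    return corrections[raw.strip().lower()]

def process(raw: str) -> str:
    result = bot_normalize_currency(raw)
    if result is None:                       # exception: escalate to a person
        result = human_review(raw)
        knowledge_base[raw.strip().lower()] = result   # teach the bot
        print(f"exception handled by human, learned: {raw!r} -> {result}")
    return result

print(process("USD"))            # bot handles it
print(process("U.S. dollars"))   # first time: human handles it and the bot learns
print(process("U.S. dollars"))   # second time: the bot handles it alone
```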

Jon: When you talk about RPA getting improved by managing the exceptions, and you’re managing the exceptions because you’re learning things — it’s a learning opportunity. Obviously, you’re learning from data. What new type of data is being brought in to a system to allow that learning to take place?

Adam: There is a lot of new learning that takes place when AI assists RPA. One of the more interesting things is workforce analytics. Rather than having opacity around who your best human performers are, around what their capacity is, what their capability is, what their aptitude is, when AI gets involved and can monitor the actions of a person that’s intervening in a process, you very quickly figure out who your star performers are and what they’re good at. You very quickly have transparency on what the capacity is of a workforce and how work should be routed.

A good example would be the back offices of a large bank. Most of these back offices are highly distributed across Latin America, India, the US and Europe, so when workforce A in Costa Rica blows out of capacity or doesn't have the capability, AI can look at that workforce and say, "OK, I'm going to move this task, this business process, to a supplementary workforce in the Philippines or in Omaha."

The number one set of data you get when AI is involved in a business process is not just the automation of the work, but an understanding of how people are performing it and how best to perform the work in the future.

Jon: As you get started, as you do an implementation, I assume the first focus area is how to make a process better and focus on that data. I know you even do some crowdsourcing of data around that. Let’s talk about making a process better, and then we’ll take a step back and do a little more about the people.

Adam: We get this question a lot, about how our software enables transformation. I was talking to an executive from a shared services organization just yesterday at a big conference down in Orlando, and I used the word transformation, and he flinched. Apparently, transformation is a four-letter word in a lot of these big organizations.

They’re not necessarily trying to transform. They are truly trying to automate. What we see is that by using software such as ours, there needn’t be a focus on transformation for the sake of transformation. When you allow an intelligent automation to do its thing by automating — for example, import payments in trade finance, or claims processing in insurance — the byproduct of automating that work, by letting algorithms see how data is handled, see what the sources are, see how people extract and categorize and remediate information and thus automate it, the process, the byproduct of this automation is transformation. Does that make sense?

Jon: The transformation, it still involves automation. I’m talking about the conflict you had with the customer you were talking to. Doesn’t that transformation get them to automation, or not? I’m trying to think what the end goal might be here. They’re not mutually exclusive, are they?

Adam: They’re definitely not mutually exclusive. Most businesses simply have a remit to either cut costs or improve service and capacity. It’s one of those two things, and in these days, it’s both. Most shared services, product lines, operations, wherever the genesis of automation is, wherever the genesis of these initiatives are, they’re starting with their KPIs.

Their KPIs are not impacted by simply transforming work. Their KPIs are impacted by eliminating the amount of manual work done in the operation. That elimination of manual work and the freeing up of human intelligence to focus on higher-value work is, in effect, transformation.

Jon: The results, you’re looking at the KPIs and you’re getting better business results, then everybody should be happy, because the topline numbers matter the most.

Adam: Exactly.

Jon: In terms of industry, you’re mostly in fintech, but what do you see is the opportunities for the automation of these types of processes against different industries? What’s your take on that?

Adam: That’s a great question. Martin Ford, who’s the author of “Rise of the Robots” — we’re actually featured prominently in that book — super-smart guy, true futurist, he spoke at this conference I was at yesterday in Orlando, the Shared Services and Outsourcing Week, and he said that to ask what the impact of AI will be on different industries would be like asking the impact of electricity on different industries.

His perspective, and I share it, is that AI will have a ubiquitous impact across every industry. It’s going to touch everything that we do. We’re not going to feel it until it’s ubiquitous and we stand back and say, wow, it really has transformed everything.

To drill into it specifically, we at WorkFusion have made a strategic choice to focus on banking, financial services, and insurance. We’re now getting into health care very quickly. We have a lot of interest from utilities, from telecom. I don’t think there’s any one industry that we won’t touch. It’s just a question of sequence, and it’s also a question of internal drive.

Banking and financial services, because of regulatory compliance, have had an unusually high amount of pressure to digitize, to automate. I think health care is very close behind, and then for the general Fortune 1000, I don't think there's been quite as much pressure, but some of the things that are happening with the optics of offshoring and outsourcing will probably catalyze automation efforts even faster.

Jon: Let’s talk a little bit about the analytics. A lot of it’s rooted in basic machine learning. There are semantical challenges sometimes for people understanding the difference. I see a lot of you saying this is AI, and it’s really just machine learning. Give us your thoughts on how a technology like machine learning can evolve into something a little richer in terms of a solution set with AI.

Adam: As I understand it, machine learning is really the only practical application of what we refer to as artificial intelligence right now — algorithms that take in massive amounts of data and whose feature sets are configured to do something like extract data from an invoice.

Another subset of that question is what’s the difference between analytics and AI. A lot of businesses, as they get into their AI journey, confuse the two. Machine learning automates the manual work in business processes. Analytics, that may or may not be powered by AI, tells you something about the way a business is performing. These are two very different things.

Automation replaces manual effort. Analytics tells you something about the way things are happening. That’s the simplest definition I could give.

Jon: You mentioned training earlier. What's your approach to training and making sure you're learning the best algorithms and actually codifying the right actions — avoiding biases, avoiding codifying bad behaviors?

Adam: There are two things to remember about how intelligent automation and machine learning learns to do the work of people and automates the work of people. The first problem is how do you get lots and lots of good quality data. This was a problem that we solved back in 2010 at MIT’s Computer Science and Artificial Intelligence Lab, which is where the company was born.

The researchers back then used the same approach to identify human quality that banks use to identify fraud in financial transactions, and that's called outlier detection. If there are 100 workers doing a specific task and 90 of them are performing the same keystrokes, at the same speed, on the same content, but 5 of them are going really fast and using only numeric characters, whereas the vast majority are using alphanumeric, and 5 of them are going really slow and using just alpha, then those 10 on either end are outliers, and that means they get an increased level of scrutiny.

Maybe there is adjudication between two workers where two workers do the same thing and the results are compared. That first problem of how do you get quality data was solved by using machines to perform outlier detection and statistical quality control on workers, the same way that assembly lines ensure quality.
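
A minimal sketch of the outlier-detection step Adam describes, assuming task time is the only signal and using a simple z-score; the worker data is invented:

```python
# Sketch: flag workers whose speed deviates sharply from the group, so their
# output gets extra review or adjudication before it is used as training data.
from statistics import mean, stdev

task_seconds = {
    "w01": 41, "w02": 44, "w03": 39, "w04": 43, "w05": 42,
    "w06": 40, "w07": 45, "w08": 12,   # suspiciously fast
    "w09": 43, "w10": 95,              # suspiciously slow
}

mu = mean(task_seconds.values())
sigma = stdev(task_seconds.values())

for worker, seconds in task_seconds.items():
    z = (seconds - mu) / sigma
    if abs(z) > 1.5:
        print(f"{worker}: {seconds}s (z = {z:+.1f}) -> route for extra review")
```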

The second challenge is how do you take that quality data and train machine learning models. In the old days, you'd need data scientists and engineers to perform countless experiments on Markov models, conditional random fields, deep learning neural nets, and run all of this sequentially to figure out which model and which combination of models and features was going to perform best relative to human quality.

We solved that second problem about three or four years ago, which was to do with software what data scientists had done. We call it the virtual data scientist, where once the good-quality data is generated, the software automatically performs experiments with different models and feature sets and then compares them all in real time.

Once confidence is high enough, that means there’s been a winning algorithm. You can think of it like an algorithm beauty pageant where they’re all competing to do the best work and the software chooses the best model to deploy as automation.
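
A minimal sketch of that model "beauty pageant", assuming scikit-learn, a synthetic dataset, and cross-validated accuracy as the confidence measure; the candidate models and threshold are illustrative only:

```python
# Sketch: run several candidate models over the same labelled data, score them,
# and deploy the winner once confidence is high enough.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.naive_bayes import GaussianNB

X, y = make_classification(n_samples=500, n_features=20, random_state=0)

candidates = {
    "logistic regression": LogisticRegression(max_iter=1000),
    "random forest": RandomForestClassifier(n_estimators=100, random_state=0),
    "naive bayes": GaussianNB(),
}

CONFIDENCE_THRESHOLD = 0.85   # stand-in for "human-quality" accuracy

scores = {name: cross_val_score(model, X, y, cv=5).mean()
          for name, model in candidates.items()}
for name, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {score:.3f}")

winner, best = max(scores.items(), key=lambda kv: kv[1])
if best >= CONFIDENCE_THRESHOLD:
    print(f"deploying {winner} as the automation model")
else:
    print("no model is confident enough yet; keep routing work to people")
```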

Jon: Does this go across both broad processes, long-running processes across a day or multiple days, as well as some of the small micro tasks that individuals might do? What’s the difference?

Adam: The question is around what’s the level of granularity and application of this process of learning and training. It happens at an individual worker level in a matter of seconds, where maybe a worker handles a task and that makes a small adjustment to the way an algorithm performs, and it happens across entire lines of business where the data generated by hundreds of workers impacts the way automation performs.

That’s the beauty of machine learning is that it isn’t a blunt object. It’s an incredibly specific and — I guess you could say — perceptive capability in that the slightest adjustment by a person can change the way a machine performs.

Jon: You really are sourcing the logic from the data that you're collecting, across multiple companies or individual companies? What's your view of data rights, and of normalizing and learning from different companies together?

Adam: Jon, this could be a whole other podcast. We get this question a lot from our customers. Dealing with some of the most data-rigid security concerns, compliance-ridden businesses in the world in these giant banks, one of the very first questions we get from C-level stakeholders is, what are you doing with my data, and do I own it?

There are a couple of different answers within that question. The first is that our customers own their data. You could think of us as a car wash, the car being the data. The data comes into our business, into our software. It is manipulated. It is improved. It is stored into a place in a customer environment, and we no longer touch it. Our software does not store our customers’ data.

The second part of that question is do our algorithms retain the intelligence created by one customer, and can we take the intelligence from one customer and apply it to the next? There is some level of retention, but if, for example, we’re talking about something like KYC — know your customer in banking — we do not and will not take the insights generated by one customer’s very specific, proprietary business process and apply it to another customer’s.

We consider that proprietary to the customer. Would it be nice if we were like Google and my search history could be applied to your search history to improve the search results for all of mankind? Sure, but that’s not what our customers want.

Jon: You mentioned that you might move into other industries over time and grow. That absolutely makes sense in fintech, and potentially in health care, although in health care, where the value lies in learning how procedures work across different hospitals and different solutions, organizations may be willing to share some of the data rights and allow you to aggregate.

Today, the answer is that you sit within fintech. Could that change in other industries?

Adam: It really could, even within fintech. There’s another process called anti-money laundering, and this is essentially massive-scale fraud detection for banks. Banks don’t really consider the way they execute compliance a competitive differentiator.

There is a stream of thought among our customers to pool their intellectual resources on our software to create a ubiquitous software utility to solve non-differentiating processes like anti-money laundering, like KYC to an extent, like CCAR and BCBS 239. There are 80,000 regulations out there or something like that.

Industries and our customers may use us as a forum to decide where they want to compete and where they want to collaborate. In healthcare this is particularly true, given that it is outcome-based. There’s actually a really cool company in San Francisco called Kalix Health, and their remit is to democratize outcomes by democratizing good-quality data.

I think we’re going to see a big trend in healthcare to do more sharing than siloing.

Jon: Let’s put some CEO hats on as we get toward the end of this thing and think about CEOs. I’m going to make the assumption, the mental leap, that they have already figured out what we’ve been calling applied analytics. They’ve got analytics. They’re beginning to inject insights into processes, but they haven’t really taken the next step yet, in terms of degrees of automation and efficiencies that they could get.

They’ve got better outcomes, but they really haven’t thought about where these efficiencies are. If you’re talking to a CEO about managing and improving processes and leveraging AI, where would you have the CEO start? Where should he or she start?

Adam: There’s a strategic answer and a tactical answer. Sometimes the tactical one is more insightful. In terms of the geography of a business, we see a lot of companies beginning in what’s called shared services, where a large aggregated workforce serves as an internal service provider to the business, handling processes like employee onboarding or accounts payable, the high-volume, common processes that different lines of business all employ.

Shared services is a great place to start, because it’s where outsourcing typically happens, and where there is outsourcing or offshoring, there is a large amount of work that can be automated. So, tactically, I would say CEOs should look toward their global business services or shared services to start their automation journey.

Strategically, I think every business needs to decide: do I want to do the same with less, do I want to do more with the same, or do I want to be extreme and do even more with fewer people? That’s the existential question of a business. If a business is healthy, it is going to want to continue to grow its headcount but exponentially grow its productivity.

That is the power of our software, of other software like it, that can do with machines what had been done before with people. The other thing, too, is how do you elevate the application of human intelligence? You do that by automation. You hire the same great people, but you expect them to perform at a higher level because machines are doing for them what they had done before with their hands and their minds.

Jon: As I’m building my team and I’m thinking about this and I’m going to embrace this model of being more efficient, you mentioned that you deliver a virtual data scientist. How much do you look for your customers to help, in terms of them owning and building a data science team? What are you looking for these companies to provide as they get started?

Companies are going to embrace this; these aren’t necessarily your customers, but companies in general are going to embrace AI. We’ve talked about data quite a bit, and it does get rooted in that. What kind of skills should they be looking for?

Adam: The honest answer is that the only skills that a business truly needs to be effective with WorkFusion are subject matter experts on the process — people who understand the progression of work. Any big, successful company is going to have people within the organization that understand the methodical flow of work — first do this, then do that.

The problem with AI in the past, and actually some other vendors that are out there, is that they’re black boxes, and these black boxes require data scientists and engineers to build — you could say to fill the gap between the black box and the practical business process.

The reason why WorkFusion has been so successful, and why we’re going to be such a big, important company to C-level executives across all industries, is that there is no need for them to fill the gap between our capabilities and the practical business process, because we are rooted in the practical business process, and we do not require teams and teams of data scientists and engineers.

Sure, if you’re a power user and you want to radically automate across an entire business, you’re going to need some level of technical capability. You’re going to need some Java engineers. IT is certainly going to need to provision environments, just like they do with any software, but the beauty of this next generation of practical, machine-learning-powered software is that it can do the dirty, highly complex work of teams of data scientists automatically.

Jon: I like the thought of making sure the CEOs stay focused on what they do well — know their business process. Stay in your swim lane. Keep going and the application of AI might get you into deeper water, but you’ll keep going. That’s the key.

Adam: That’s the key. There is the D word — disruption. I don’t see this as a disruption to the way a business works. I think a disruption would be expecting a business to go out and hire 500 data scientists and use some magical black box that has no transparency. That’s disruptive.

What is non-disruptive, what is purely evolutionary but exponential in its benefit, is a software that can seamlessly integrate into the way a business is doing their processes now and make those processes slicker and faster and more automated. That’s what every CEO wants.

[background music]

Jon: I can’t think of a better way to end it than that. That’s a perfect message. Adam, thank you so much for being with us today.

Adam: Jon, my pleasure. Thank you.

Source: georgianpartners.com-The Future of Robotic Process Automation

Companies are using AI to screen candidates now

Human job recruiters can only physically juggle so many candidates at once. HireVue, a company with a “video interview intelligence platform,” wants to make that easier by using artificial intelligence to do the heavy lifting for you and screen multiple candidates at once.

Candidates can use HireVue’s mobile or desktop app to set up a video interview with an employer and record answers to interview questions at their convenience.

When I tested the demo on my smartphone, I was asked to answer what my ideal career would be. It was slightly awkward seeing my own face staring back at me, but HireVue gives you an unlimited number of chances until you can record without saying “ummm.” So far, it seems like any normal video application.

Then the A.I. kicks in.

Using voice and face recognition software, HireVue lets employers compare a candidate’s word choice, tone, and facial movements with the body language and vocabularies of their best hires. The algorithm can analyze all of these candidates’ responses and rank them, so that recruiters can spend more time looking at the top performing answers.
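
HireVue has not published how its model works, but the ranking step can be illustrated with a toy sketch: score each candidate’s response features against an averaged profile of past top hires and sort by similarity. The feature names and numbers below are entirely hypothetical.

```python
import numpy as np

# Hypothetical feature vectors: [positive word ratio, speaking pace, smile frequency]
top_hire_profile = np.array([0.62, 1.10, 0.35])  # average of past best hires

candidates = {
    "candidate_a": np.array([0.60, 1.05, 0.30]),
    "candidate_b": np.array([0.20, 0.70, 0.10]),
    "candidate_c": np.array([0.55, 1.20, 0.40]),
}

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Rank candidates so recruiters review the closest matches first.
ranking = sorted(
    candidates.items(),
    key=lambda item: cosine_similarity(item[1], top_hire_profile),
    reverse=True,
)
for name, features in ranking:
    print(name, round(cosine_similarity(features, top_hire_profile), 3))
```

This toy version also makes the article’s later point about bias tangible: whatever is baked into the “top hire” profile is exactly what the ranking rewards.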

HireVue’s AI can judge your tone and vocabulary for employers

HireVue said it doesn’t want to replace recruiters; instead, it wants to make the job interview process more efficient. At its best, it can serve as an initial screener before job seekers can get to the promised land of interviewing with a human.

Among its success stories, HireVue said SHIPT, a grocery delivery service, tripled its recruitment rate because recruiters no longer had to deal with technical difficulties and coordinating video times. Goldman Sachs, Under Armour, Unilever, and Vodafone are also among the companies that have used the platform.

By having each candidate answer the same questions, HireVue said it makes its process more structured, which can help eliminate biases.

“Structured interviews are much better and subject to less bias than unstructured interviews,” HireVue founder Mark Newman told Fast Company. “But many hiring managers still inject personal bias into structured interviews due to human nature.”

In other words, the algorithm is only as objective as the human minds that guide it. So if the employer’s ideal candidate is already biased against certain characteristics, HireVue’s platform would only embed these biases further, potentially making discriminatory practices a part of the process. Human recruiters would need to recognize their own personal biases before they could stop feeding them into HireVue. It’s one more reminder that behind each robot lies a human who engineered it.

Source: theladders.com-Companies are using AI to screen candidates now

Robots will not lead to fewer jobs – but the hollowing out of the middle class

Moravec’s paradox says that robots find difficult things easy and easy things difficult, which might lead to humans taking lower-paid manual work.

Throughout modern history there has been a recurrent fear that jobs will be destroyed by technology. Everybody knows the story of the Luddites, bands of workers who smashed up machinery in the textile industry in the second decade of the 19th century.

The Luddites were wrong. There has been wave after wave of technological advance since the first Industrial Revolution, and yet more people are working than ever before. Jobs have certainly been destroyed. Banks, for example, no longer employ clerks to log every transaction in ledgers with quill pens. At this time of year, 150 years ago, the fields would have been full of people with scythes and pitchforks bringing in the harvest. That work is now done by motorised harvesters.

The reason new technology has not been the cause of mass unemployment is that new kit will only be used when it makes the productive process more profitable. Higher productivity frees up the resources to buy other goods and services. The rural workers that Thomas Hardy described in Tess of the D’Urbervilles found work in factories and offices. What’s more, it was better paid work, and so the upshot was an increase in living standards.

Similarly, the age of robots will lead to more jobs. Kallum Pickering, analyst with Berenberg, says there is a big hole in the argument that artificial intelligence (AI) will lead to vast numbers of workers joining the dole queue.

“Producers will only automate if doing so is profitable. For profit to occur, producers need a market to sell to in the first place. Keeping this in mind helps to highlight the critical flaw of the argument: if robots replaced all workers, thereby creating mass unemployment, to whom would the producers sell? Because demand is infinite whereas supply is scarce, the displaced workers always have the opportunity to find fresh employment to produce something that satisfies demand elsewhere.”

That, though, is not the end of the story. Robots will create more jobs, but what if these jobs are less good and less well paid than the jobs that automation kills off? Perhaps the weak wage growth of recent years is telling us something, namely that technology is hollowing out the middle class and creating a bifurcated economy in which a small number of very rich people employ armies of poor people to cater for their every whim.

This is certainly a much more likely threat than mass job destruction. What’s more, it fits with the history of the recent past, the theory of automation, and recent trends in the labour market.

Christian Siegel from the University of Kent’s school of economics has found that labour markets in the advanced countries of the west started to polarise as far back as the 1950s as they became more dominated by the service sector. Growth was strong during this period, but the job creation tended to be either at the top end of the pay scale or at the bottom end, while employment opportunities in traditional middle-class sectors of the economy declined. The arrival of IT in the 1980s merely accentuated a process already underway.

Robots are likely to result in a further hollowing out of middle-class jobs, and the reason is something known as Moravec’s paradox. This was a discovery by AI experts in the 1980s that robots find the difficult things easy and the easy things difficult. Hans Moravec, one of the researchers, said: “It is comparatively easy to make computers exhibit adult-level performance on intelligence tests or playing checkers, and difficult or impossible to give them the skills of a one-year-old when it comes to perception and mobility.” Put another way, if you wanted to beat Magnus Carlsen, the world chess champion, you would choose a computer. If you wanted to clean the chess pieces after the game, you would choose a human being.

Dhaval Joshi, economist at BCA research, believes Moravec’s paradox will have a big impact on the labour market. He considers two scenarios for a stylised economy with three jobs: a high-income innovator, a middle-income manufacturer and a low-income animal tender.

In scenario one, the innovator comes up with a machine that dispenses with the need for the animal tender. The machine is more productive than the animal tender and so the innovator uses his extra income to buy manufactured goods. That provides the opportunity for the animal tender to retrain as a more highly paid manufacturer.

In scenario two, the innovator invents a machine that makes the middle-income manufacturer obsolete. Again, the innovator has more disposable income and uses it to purchase animal tending services. The middle-income manufacturer now has to make a living as a more lowly paid animal tender.

In the modern economy, the jobs that are prized tend to be the ones that involve skills such as logic. Those that are less well-rewarded tend to involve mobility and perception. Robots find logic easy but mobility and perception difficult.

“It follows,” says Joshi, “that the jobs that AI can easily replicate and replace are those that require recently evolved skills like logic and algebra. They tend to be middle-income jobs. Conversely, the jobs that AI cannot easily replicate are those that rely on the deeply evolved skills like mobility and perception. They tend to be lower-income jobs. Hence, the current wave of technological progress is following scenario 2. AI is hollowing out middle-income jobs and creating lots of lower-income jobs.”

Recent developments in the labour market suggest this process is already well under way. In both Britain and the US, economists have been trying to explain why it has been possible for jobs to be created without wage inflation picking up. Britain has an unemployment rate of 4.4% but average earnings are rising by just 2.1%. Something similar has happened in the US. The relationship between unemployment and pay – the Phillips curve – appears to have broken down.

But things become a bit easier to understand if the former analysts and machine operators are now being employed as dog walkers and waiting staff. Employment in total might be going up, but with higher-paid jobs being replaced by lower-paid jobs. Is there any hard evidence for this?

Well, Joshi says it is worth looking at the employment data for the US, which tends to be more granular than in Europe. For many years in America, the fastest-growing employment subsector has been food services and drinking places: bar tenders and waiters, in other words.

AI is still in its infancy, so the assumption has to be that this process has a lot further to run. Wage inflation is going to remain weak by historic standards, leading to debt-fuelled consumption with all its attendant risks. Interest rates will remain low. Inequality, without a sustained attempt at the redistribution of income, wealth and opportunity, will increase. And so will social tension and political discontent.

Source: The Guardian-Robots will not lead to fewer jobs – but the hollowing out of the middle class

AI Demystified: Shaping the future for positive change

The debate between Mark Zuckerberg and Elon Musk on the misunderstandings of artificial intelligence (AI) has brought to the forefront concerns and dangers of a robot takeover.

Often misrepresented and misunderstood, AI continues to serve as a source of significant intrigue. It has long been lauded as the future of work, but according to notable Hollywood movies, is also a harbinger of a robot takeover.

Futuristic movies like I, Robot and Avengers: Age of Ultron portray AI as the precursor to a robot revolution, wherein a seemingly innocuous use of the tool devolves into dystopia. In many cases, despite being effective money-makers, these films mischaracterize AI. Still, the portrayal is believable because of the lack of public fluency on the issue.

As the idiom states: “We fear what we don’t understand.”

With anxieties abounding, it is important to understand that every technology shift has its own set of winners and losers. The advent of the car was initially rejected by the public, and even ridiculed by horse owners. The only difference is that the pace of advancing technology is now much quicker than it was in the past. When we do not understand a technology, we automatically tend to demonize it.

Similarly with AI, being able to define it and to recognize how it is changing various industries for the better will offer a more profound understanding that may ease concerns and lead to wider acceptance.

What is AI?

The first step in busting AI myths is to arrive at a reasonable, inclusive and thoughtful definition of the term.

Oxford Dictionary defines AI as, “The theory and development of computer systems able to perform tasks that normally require human intelligence. Examples include tasks such as visual perception, speech recognition, decision making under uncertainty, learning, and translation between languages.”

This definition is effective because it makes clear that AI is, in many instances, simply streamlining a process to make it more efficient. And while it is executing tasks that “require human intelligence,” the tasks themselves – like mass data analysis or translation, complex calculations or immediate responsiveness – are rarely those which people are otherwise capable of or willing to perform.

The Robot Workforce

There are concerns about how AI will impact the workforce and the global economy. For example, some fear that the rise of AI will lead to the replacement of jobs. In fact, researchers at Oxford University projected that 47 percent of U.S. employment may end up “at risk” with the expansion of AI.

However, it is important to keep an open mind on the opportunities it presents.

AI alone is not enough. It requires humans to help it understand language and make subjective decisions for a business. With the availability of online education, workers are able to receive the training and schooling that will open up new employment opportunities. There are tailored courses designed specifically to prepare data scientists and machine learning engineers to work with AI.

While the concern exists that a sizeable number of jobs across all levels will be displaced by AI, a new study from Forrester Research argued that the development of AI and automation will actually transform and advance current jobs as humans get familiar working alongside their machine counterparts. Furthermore, Forrester estimated that in the next decade, 15 million new jobs will be created in the US as a result of AI and automation technology.

As the workforce modernizes, the door will open for new, previously unexplored jobs.

Change for Good

Healthcare is seen as one of the industries which will see tremendous benefits from AI-powered tools.

At a recent Stanford University conference, Andy Slavitt, former acting administrator of the Centers for Medicare and Medicaid Services (CMS), said that the expansion of AI in healthcare is designed to address productivity concerns. Specifically: “We need to be taking care of more people with less resources, but if we chase too many problems and business models or try to invent new gadgets, that’s not going to change productivity. That’s where data and machine learning capabilities will come in.”

CB Insights reported that there are now over 100 AI-based healthcare startups. The companies have wide-ranging aims, from aiding oncology treatment to reducing administrative responsibilities for doctors and nurses to powering digital journaling tools. In each case, AI is enhancing productivity through machine learning and deep data analysis.

This is precisely why AI anxiety is misguided. These are still largely unexplored tools, each of which has the potential to revolutionize healthcare delivery and improve outcomes for the issues it is intended to address.

As the healthcare industry undergoes several paradigm shifts – from fee-for-service to value-based care, impersonal to precision medicine, traditional to digital healthcare delivery – AI is becoming essential. There are, for example, an overwhelming number of cancer variations that depend upon one’s family history, upbringing, DNA, environment, work and medical history. Coordinating care delivery and analyzing treatments and outcomes is essential, but with finite manpower and resources, impossible without the use of AI.

AI is a catch-all and a flashpoint, a source of concern and of intrigue. But it does not need to be. Instead, it should be recognized for what it is – a state-of-the-art way to utilize limited resources to advance an industry. It is not without its share of concerns, like other innovation groundswells before it. But it is also not an issue to be feared.

Eliezer Yudkowsky, an American AI researcher and writer who champions friendly AI, wrote in Singularity Hypotheses: A Scientific and Philosophical Assessment, “By far the greatest danger of Artificial Intelligence is that people conclude too early that they understand it.” Misinformed generalizations that lead to stigmatizing attitudes, fears and misconceptions limit the public’s scope to further the conversation surrounding the benefits of AI development.

With a broader and deeper understanding of the technology and the opportunities it presents, I am hopeful that fear and hesitation will turn into excitement. Contrary to its early reputation, AI is not all doom and gloom.

Source: itproportal-AI Demystified: Shaping the future for positive change

A Survey of 3,000 Executives Reveals How Businesses Succeed with AI

 

The buzz over artificial intelligence (AI) has grown loud enough to penetrate the C-suites of organizations around the world, and for good reason. Investment in AI is growing and is increasingly coming from organizations outside the tech space. And AI success stories are becoming more numerous and diverse, from Amazon reaping operational efficiencies using its AI-powered Kiva warehouse robots, to GE keeping its industrial equipment running by leveraging AI for predictive maintenance.

While it’s clear that CEOs need to consider AI’s business implications, the technology’s nascence in business settings makes it less clear how to profitably employ it. Through a study of AI that included a survey of 3,073 executives and 160 case studies across 14 sectors and 10 countries, and through a separate digital research program, we have identified 10 key insights CEOs need to know to embark on a successful AI journey.

Don’t believe the hype: Not every business is using AI… yet. While investment in AI is heating up, corporate adoption of AI technologies is still lagging. Total investment (internal and external) in AI reached somewhere in the range of $26 billion to $39 billion in 2016, with external investment tripling since 2013. Despite this level of investment, however, AI adoption is in its infancy, with just 20% of our survey respondents using one or more AI technologies at scale or in a core part of their business, and only half of those using three or more. (Our results are weighted to reflect the relative economic importance of firms of different sizes. We include five categories of AI technology systems: robotics and autonomous vehicles, computer vision, language, virtual agents, and machine learning.)

For the moment, this is good news for those companies still experimenting or piloting AI (41%). Our results suggest there’s still time to climb the learning curve and compete using AI.

However, we are likely at a key inflection point of AI adoption. AI technologies like neural-based machine learning and natural language processing are beginning to mature and prove their value, quickly becoming centerpieces of AI technology suites among adopters. And we expect at least a portion of current AI piloters to fully integrate AI in the near term. Finally, adoption appears poised to spread, albeit at different rates, across sectors and domains. Telecom and financial services are poised to lead the way, with respondents in these sectors planning to increase their AI tech spend by more than 15% a year — seven percentage points higher than the cross-industry average — in the next three years.

Believe the hype that AI can potentially boost your top and bottom line. Thirty percent of early AI adopters in our survey — those using AI at scale or in core processes — say they’ve achieved revenue increases, leveraging AI in efforts to gain market share or expand their products and services. Furthermore, early AI adopters are 3.5 times more likely than others to say they expect to grow their profit margin by up to five points more than industry peers. While the question of correlation versus causation can be legitimately raised, a separate analysis uncovered some evidence that AI is already directly improving profits, with ROI on AI investment in the same range as associated digital technologies such as big data and advanced analytics.

Without support from leadership, your AI transformation might not succeed. Successful AI adopters have strong executive leadership support for the new technology. Survey respondents from firms that have successfully deployed an AI technology at scale tend to rate C-suite support as being nearly twice as high as those companies that have not adopted any AI technology. They add that strong support comes not only from the CEO and IT executives but also from all other C-level officers and the board of directors.

You don’t have to go it alone on AI — partner for capability and capacity. With the AI field recently picking up its pace of innovation after the decades-long “AI winter,” technical expertise and capabilities are in short supply. Even large digital natives such as Amazon and Google have turned to companies and talent outside their confines to beef up their AI skills. Consider, for example, Google’s acquisition of DeepMind, which is using its machine learning chops to help the tech giant improve even core businesses like search optimization. Our survey, in fact, showed that early AI adopters have primarily bought the right fit-for-purpose technology solutions, with only a minority of respondents both developing and implementing all AI solutions in-house.

Resist the temptation to put technology teams solely in charge of AI initiatives. Compartmentalizing accountability for AI with functional leaders in IT, digital, or innovation can result in a hammer-in-search-of-a-nail outcome: technologies being launched without compelling use cases. To ensure a focus on the most valuable use cases, AI initiatives should be assessed and co-led by both business and technical leaders, an approach that has proved successful in the adoption of other digital technologies.

Take a portfolio approach to accelerate your AI journey. AI tools today vary along a spectrum ranging from tools that have been proven to solve business problems (for example, pattern detection for predictive maintenance) to those with low awareness and currently-limited-but-high-potential utility (for example, application of AI to developing competitive strategy). This distribution suggests that organizations could consider a portfolio-based approach to AI adoption across three time horizons:

Short-term: Focus on use cases where there are proven technology solutions today, and scale them across the organization to drive meaningful bottom-line value.

Medium-term: Experiment with technology that’s emerging but still relatively immature (deep learning video recognition, for example) to prove its value in key business use cases before scaling.

Long-term: Work with academia or a third party to solve a high-impact use case (augmented human decision making in a key knowledge worker role, for example) with bleeding-edge AI technology to potentially capture a sizable first-mover advantage.

Machine learning is a powerful tool, but it’s not right for everything. Machine learning and its most prominent subfield, deep learning, have attracted a lot of media attention and received a significant share of the financing that has been pouring into the AI universe, garnering nearly 60% of all investments from outside the industry in 2016.

But while machine learning has many applications, it is just one of many AI-related technologies capable of solving business problems. There’s no one-size-fits-all AI solution. For example, the AI techniques implemented to improve customer call center performance could be very different from the technology used to identify credit card payments fraud. It’s critical to look for the right tool to solve each value-creating business problem at a particular stage in an organization’s digital and AI journey.

Digital capabilities come before AI. We found that industries leading in AI adoption — such as high-tech, telecom, and automotive — are also the ones that are the most digitized. Likewise, within any industry the companies that are early adopters of AI have already invested in digital capabilities, including cloud infrastructure and big data. In fact, it appears that companies can’t easily leapfrog to AI without digital transformation experience. Using a battery of statistics, we found that the odds of generating profit from using AI are 50% higher for companies that have strong experience in digitization.

Be bold. In a separate study on digital disruption, we found that adopting an offensive digital strategy was the most important factor in enabling incumbent companies to reverse the curse of digital disruption. An organization with an offensive strategy radically adapts its portfolio of businesses, developing new business models to build a growth path that is more robust than before digitization. So far, the same seems to hold true for AI: Early AI adopters with a very proactive, strictly offensive strategy report a much better profit outlook than those without one.

The biggest challenges are people and processes. In many cases, the change-management challenges of incorporating AI into employee processes and decision making far outweigh technical AI implementation challenges. As leaders determine the tasks machines should handle, versus those that humans perform, both new and traditional, it will be critical to implement programs that allow for constant reskilling of the workforce. And as AI continues to converge with advanced visualization, collaboration, and design thinking, businesses will need to shift from a primary focus on process efficiency to a focus on decision management effectiveness, which will further require leaders to create a culture of continuous improvement and learning.

Make no mistake: The next digital frontier is here, and it’s AI. While some firms are still reeling from previous digital disruptions, a new one is taking shape. But it’s early days. There’s still time to make AI a competitive advantage.

Source: HBR-A Survey of 3,000 Executives Reveals How Businesses Succeed with AI

AI projects are taking off: What does this mean for the future of work

The robots are coming and the world of work is set to change forever: recent research from consultants PwC estimates that a third of existing jobs are susceptible to automation, due to the use of robots and artificial intelligence (AI), by 2030.

The survey adds more weight to a fast-growing body of work on the impact of AI. Take KPMG’s recent global CIO survey in conjunction with recruiter Harvey Nash, which found almost two-thirds of CIOs are investing or planning to invest in digital labour, which broadly covers robotics, automation, and artificial intelligence.

A quarter of these technology chiefs have already seen very effective results. The survey suggests digital leaders are investing in digital labour at four times the rate of other executives. These CIOs are also implementing digital labour solutions across the enterprise, in some cases at twice the rate of their less pioneering peers.

Understanding the scale of change

 


However, it is important not to overemphasise the pace of change. Lisa Heneghan, global head of KPMG’s CIO advisory practice, says we are still in the early days of AI. Digital leaders are using robotic process automation to deal with repetitive manual processes, such as claims processing and data entry. Heneghan says spending decisions around more advanced digital capabilities, such as machine learning, are still to be made.

“We’re seeing pilots and small amounts of investment, but that’s not where the money is now,” she says, referring to the CIO’s role in assessing AI. “We’re seeing CIOs start to focus on building centres of excellence around digital labour. When they build this centre, it enables CIOs to look at the opportunities from digital across their business.”

First Utility CIO Bill Wilkins, for example, has created a customer-driven approach to data analytics and continues to look for new ways to help his business grow through technology, including via AI and automation. Wilkins says that, to remain competitive, his organisation must continue to innovate through information.

It is a sentiment that chimes with Brian Franz, chief productivity officer at Diageo, who has responsibility for the firm’s shared services around the world. His chief priority centres on driving efficiency and effectiveness in a sustainable manner, and that work aims to make the most of advanced technology, including AI.

“We’ve started using robotics in some of the processes that we run in shared services where we have a level of confidence that we’ll be successful,” he says, referring to the firm’s initial forays into automation. “We’re looking at AI in some experimental ways in terms of how we interact with consumers and how they interact with our brands.”

Mark Ridley, group technology officer at venture builder Blenheim Chalcot Accelerate, is another CIO who is keen to establish how machine learning and automation might be used within the IT department and across the wider business. For now, Ridley says AI is a great way of making many decisions quickly.

He expects that focus to remain until businesses start seeing the benefits of developments in deep learning. “That will obviously take a few years to become pervasive. But when it does, AI will start making trivial decisions instead of humans and that will be very interesting,” says Ridley.

“That level of development will have a huge impact on work. How do we, as senior managers, deal with a society where human input isn’t necessary for simple tasks? The combination of AI and robotics represents a very interesting area to consider when it comes to the future of humanity.”

Preparing for the transformation

What is true, therefore, is that the various flavours of AI will have an impact on the way that businesses and their employees work. Interim CIO Christian McMahon, who is managing director at transformation specialist three25, also believes the biggest short-term impact of AI is likely to be confined to the rapid completion of routine tasks.

Yet there is no room for complacency. McMahon says AI will eventually create a seismic shift in business operations and CIOs must build awareness. “I cannot stress enough the importance of putting the effort in now to give your organisation tangible competitive advantage in how it can use or acquire these technologies,” he says.

Other lines of business executives have a role to play, too. Peter Markey, chief marketing officer at TSB, is at the vanguard of digital marketing and has spent his career melding business data with customer requirements to help create innovative services. He expects AI to offer similar opportunities for CMOs, but agrees there is much work to be done.

“I love the idea but I don’t think we’ve really explored its full potential or limitation. Marketeers need to strike the balance between business and personal. Do you lose something by automating a marketing programme within an inch of its life?” says Markey.

“Machines will get better at learning but I don’t see a day where all interactions are replaced by AI. We could get close, of course. The key to how far AI develops in terms of marketing is understanding your customers’ demands and how they want to interact with your business.”

Scope CDO Mark Foulsham is similarly customer-focused and believes AI could help his charity make smarter decisions. He believes developments in machine learning and automation run alongside attempts to exploit big data. The charity is not using AI currently but Foulsham expects the organisation to take advantage of the technology as it develops its data insight strategy.

“There’s huge potential,” he says. “All charities are speaking to customers and donors across several channels. What they need to provide is a seamless and transparent experience across those channels.”

AI, says Foulsham, can help charities to take a more proactive approach. He anticipates a situation where AI technologies work in the background and help senior managers at charities to make timely interventions and to boost the level of service provided.

“As executives, we need to know how customers are acting,” says Foulsham. “We need to know what their needs might be and we need to make sure that they have a seamless experience when they connect to us, be that through web, mobile or whatever platform they choose to use. AI provides another potential means to that end.”

Source: ZDNet-AI projects are taking off: What does this mean for the future of work

10 Artificial Intelligence (AI) Technologies that will rule 2018

Artificial Intelligence is changing the way we think of technology. It is radically changing the various aspects of our daily life. Companies are now significantly making investments in AI to boost their future businesses.

According to a Narrative Science report, just 38% of the companies surveyed used artificial intelligence in 2016, but by 2018 this percentage is expected to increase to 62%.

Another study, by Forrester Research, predicted a 300% increase in investment in AI this year (2017) compared with last year.

IDC estimated that the AI market will grow from $8 billion in 2016 to more than $47 billion in 2020. “Artificial Intelligence” today includes a variety of technologies and tools, some time-tested, others relatively new.

Here are the 10 most revolutionary artificial intelligence technologies that will rule 2018.

1. Natural Language Generation

Saying (or writing) the right words in the right sequence to convey a clear message that can be easily understood by the listener (or reader) can be a tricky business. For a machine, which processes information in an entirely different way than the human brain does, it can be trickier still.

Solving this issue has been the key focus of the burgeoning field of Natural Language Generation (NLG) for years beyond counting. Natural language generation, a field which has made great strides of late, has begun to manifest in many areas of our lives. It is currently being used in customer service to generate reports and market summaries.
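
At the simplest end of the spectrum, report generation can be little more than filling templates from structured data; the commercial NLG systems offered by the vendors below are far more sophisticated, but a rough Python sketch (with hypothetical figures) conveys the idea:

```python
def market_summary(company, quarter, revenue, prior_revenue):
    """Generate a one-sentence summary from structured figures (toy template-based NLG)."""
    change = (revenue - prior_revenue) / prior_revenue * 100
    direction = "rose" if change >= 0 else "fell"
    return (
        f"{company}'s revenue {direction} {abs(change):.1f}% in {quarter}, "
        f"reaching ${revenue:,.0f}."
    )

print(market_summary("ExampleCo", "Q3 2017", 1_250_000, 1_100_000))
# ExampleCo's revenue rose 13.6% in Q3 2017, reaching $1,250,000.
```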

Sample vendors: Attivio, Automated Insights, Cambridge Semantics, Digital Reasoning, Lucidworks, Narrative Science, SAS, Yseop.

2. Speech recognition

Speech recognition transcribes and transforms human speech into a format useful for computer applications. It is currently used in interactive voice response systems and mobile applications, and every day more and more systems incorporate this transcription capability.
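
As a small illustration, the open-source SpeechRecognition package wraps several engines behind one interface. The file name below is hypothetical, and production systems would typically use a commercial vendor’s engine:

```python
import speech_recognition as sr  # third-party package: SpeechRecognition

recognizer = sr.Recognizer()
with sr.AudioFile("customer_call.wav") as source:  # hypothetical recording
    audio = recognizer.record(source)

# Transcribe with Google's free web API; production systems would use a vendor engine.
try:
    print(recognizer.recognize_google(audio))
except sr.UnknownValueError:
    print("Speech was unintelligible")
```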

Companies offering speech recognition services include NICE, Nuance Communications, OpenText and Verint Systems.

3. Machine Learning Platforms

These days, computers can also learn, and they can be incredibly intelligent!

Machine learning is a subdiscipline of computer science and a branch of artificial intelligence. Its goal is to develop techniques that allow computers to learn.

By providing algorithms, APIs (application programming interface), development and training tools, big data, applications and other machines, machine learning platforms are gaining more and more traction every day.

They’re currently being used in diverse business activities, mainly for prediction or classification. Companies focused in machine learning include Amazon, Fractal Analytics, Google, H2O.ai, Microsoft, SAS, Skytree.
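
A toy example of the “prediction or classification” use mentioned above, using scikit-learn and made-up churn data (features, labels and thresholds are all hypothetical):

```python
from sklearn.tree import DecisionTreeClassifier

# Toy churn-prediction data: [monthly_spend, support_tickets, months_as_customer]
X = [
    [20, 5, 3], [90, 0, 40], [35, 4, 6], [120, 1, 55],
    [25, 6, 2], [80, 0, 30], [30, 3, 8], [100, 1, 48],
]
y = [1, 0, 1, 0, 1, 0, 1, 0]  # 1 = churned, 0 = stayed (hypothetical labels)

model = DecisionTreeClassifier(max_depth=2).fit(X, y)
print(model.predict([[28, 4, 5], [110, 0, 50]]))  # likely churner vs. likely stayer
```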

4. Virtual Agents

There’s no denying that virtual agents – or “chat bots” (or simply, bots) – are experiencing a tremendous resurgence in interest, and along with that, a rapid advance in innovation and technology.

Currently used in customer service and support and as a smart home manager. Some of the companies that provide virtual agents include Amazon, Apple, Artificial Solutions, Assist AI, Creative Virtual, Google, IBM, IPsoft, Microsoft, Satisfi.
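
Real virtual agents rely on NLP and dialogue management, but a keyword-matching toy sketch shows the basic request-response loop (the rules and answers are invented for illustration):

```python
def answer(user_message):
    """A tiny keyword-matching agent; real virtual agents use NLP and dialogue state."""
    rules = {
        "opening hours": "We are open 9am-5pm, Monday to Friday.",
        "reset password": "You can reset your password at example.com/reset.",
        "refund": "Refunds are processed within 5 working days.",
    }
    text = user_message.lower()
    for keyword, reply in rules.items():
        if keyword in text:
            return reply
    return "Let me connect you to a human colleague."

print(answer("How do I reset my password?"))
print(answer("Can you explain your pricing tiers?"))
```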

5. Decision Management

Intelligent machines are capable of introducing rules and logic to artificial intelligence systems and can be used for initial setup/training, ongoing maintenance and tuning.

It is used in a wide variety of enterprise applications, assisting in or performing automated decision-making. Some of the companies that provide this are Advanced Systems Concepts, Informatica, Maana, Pegasystems, UiPath.
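
Decision management engines are considerably richer than this, but a short sketch of ordered business rules (with invented thresholds) captures the core idea of encoding rules and logic that drive automated decisions:

```python
def review_claim(claim):
    """Apply ordered business rules to a claim; first matching rule wins (illustrative)."""
    rules = [
        (lambda c: c["amount"] > 10_000, "route to senior adjuster"),
        (lambda c: c["customer_years"] >= 5 and c["amount"] < 500, "auto-approve"),
        (lambda c: c["previous_claims"] > 3, "flag for fraud review"),
    ]
    for condition, action in rules:
        if condition(claim):
            return action
    return "standard processing"

print(review_claim({"amount": 300, "customer_years": 7, "previous_claims": 0}))
# auto-approve
```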

6. AI-Optimized Hardware

Companies are investing heavily in ML/AI hardware, with designs intended to greatly accelerate the next generation of applications: graphics processing units (GPUs) and appliances specifically designed and architected to run AI-oriented computational jobs efficiently.
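
A quick way to see why this hardware matters is to run the same workload on a GPU when one is available. A minimal sketch assuming the PyTorch framework is installed:

```python
import torch

# Check whether an AI-optimized accelerator (GPU) is available and move work onto it.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
x = torch.randn(4096, 4096, device=device)
y = x @ x  # the same matrix multiply runs far faster on a GPU than on a CPU
print(device, y.shape)
```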

Some of the companies focused on AI-optimized hardware include Alluviate, Cray, Google, IBM, Intel and Nvidia.

7. Deep Learning Platforms

Deep learning is the fastest-growing field and the new big trend in machine learning: a set of algorithms that use artificial neural networks to learn at multiple levels, corresponding to different levels of abstraction.

Some of the applications of deep learning are automatic speech recognition, image recognition and optical character recognition, NLP, and the classification, clustering and prediction of almost any entity that can be sensed and digitized.
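
For a sense of what sits underneath these platforms, here is a minimal multi-layer network and a few training steps in PyTorch. The data is random and purely illustrative; the platforms listed below add data pipelines, tooling and deployment around models like this.

```python
import torch
from torch import nn

# A tiny multi-layer network for classification.
model = nn.Sequential(
    nn.Linear(64, 32),   # e.g. an 8x8 image flattened to 64 input features
    nn.ReLU(),
    nn.Linear(32, 10),   # 10 output classes
)

x = torch.randn(16, 64)             # a hypothetical batch of 16 examples
labels = torch.randint(0, 10, (16,))
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

for _ in range(5):                  # a few illustrative training steps
    optimizer.zero_grad()
    loss = loss_fn(model(x), labels)
    loss.backward()
    optimizer.step()
print(loss.item())
```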

Deep learning platform services providers and suppliers include Deep Instinct, Ersatz Labs, Fluid AI, MathWorks, Peltarion, Saffron Technology, Sentient Technologies.

8. Robotic Process Automation

Robotic process automation is possible thanks to scripts and methods that mimic and automate human tasks to support corporate processes. It is now being used in situations where it is too expensive or inefficient to have humans perform a specific job or task.
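
A trivial sketch of the script-based flavour of RPA: apply the rule a clerk would apply and move the matching records along. The file names and the “approved” rule are assumptions made for illustration:

```python
import csv

def copy_approved_invoices(source_path, target_path):
    """Mimic a routine clerical task: copy approved rows from one file to another."""
    with open(source_path, newline="") as src, open(target_path, "w", newline="") as dst:
        reader = csv.DictReader(src)
        writer = csv.DictWriter(dst, fieldnames=reader.fieldnames)
        writer.writeheader()
        for row in reader:
            if row["status"] == "approved":   # the rule a human clerk would apply
                writer.writerow(row)

copy_approved_invoices("invoices.csv", "approved_invoices.csv")  # hypothetical files
```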

We need to remember artificial intelligence is not meant to replace humans, but to complement their abilities and reinforce human talent.

Some of the companies focused on this include Advanced Systems Concepts, Automation Anywhere, Blue Prism, UiPath, WorkFusion.

9. Text Analytics and NLP

Natural language processing (NLP) is concerned with the interactions between computers and human (natural) languages. This technology uses text analytics to understand the structure of sentences, as well as their meaning and intention, through statistical methods and machine learning.

These technologies are also used by a huge array of automated assistants and apps to extract meaning from unstructured data.
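
One common first step is turning free text into numeric features that downstream models can analyse. A minimal sketch using scikit-learn’s TF-IDF vectorizer on invented snippets:

```python
from sklearn.feature_extraction.text import TfidfVectorizer

# Hypothetical unstructured text, e.g. snippets pulled from customer emails.
documents = [
    "Please cancel my subscription, the service is too slow.",
    "The new dashboard is great, thanks for the quick support.",
    "Billing charged me twice this month, please refund the duplicate.",
]

# TF-IDF turns free text into numeric features that downstream models can use.
vectorizer = TfidfVectorizer(stop_words="english")
features = vectorizer.fit_transform(documents)
print(features.shape)                     # documents x vocabulary terms
print(sorted(vectorizer.vocabulary_)[:10])  # a peek at the extracted vocabulary
```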

Some of the services providers and suppliers of these technologies include Basis Technology, Coveo, Expert System, Indico, Knime, Lexalytics, Linguamatics, Mindbreeze, Sinequa, Stratifyd and Synapsify.

10. Biometrics

This technology deals with the identification, measurement and analysis of physical aspects of the body’s structure and form and human behavior.

It allows more natural interactions between humans and machines, including interactions related to touch, image, speech and body language recognition.

This technology is currently used mostly for market research. Suppliers of these technologies include 3VR, Affectiva, Agnitio, FaceFirst, Sensory, Synqera and Tahzoo.

Source: knowstartup.com-10 Artificial Intelligence (AI) Technologies that will rule 2018