Google’s AI guru predicts humans and machines will merge within 20 years

Public perception of artificial intelligence technology seems to lie somewhere at the intersection of existential fear and cautious optimism. Yet there’s a growing movement of people who believe AI is crucial to the evolution of our species. These people aren’t outsiders or outliers — they’re directing research on the cutting edge at companies like Google.

Ray Kurzweil, Google’s guru of AI and futurism, spoke last week at the Council on Foreign Relations in an intimate Q&A session. His views on the future of humanity might seem radical to a public that’s been cutting its teeth on doomsayer headlines featuring Elon Musk and Stephen Hawking warning about World War III.

He’s quick to point out that today, right now, is the best our species has ever had it. According to him, most people don’t know that the world we live in currently has less hunger and poverty than ever before. “Three billion people have smartphones, I thought it was two but I just found out it was three. In a few years that’ll be six billion,” he says.

The deadliest war in recorded human history, World War II, ended just 72 years ago. In the time since, humanity has engaged in what feels like countless skirmishes, police actions, and outright wars. And while the US remains engaged in the longest war in its history – with no end in sight – the human species is currently enjoying the most peaceful period in the history of our civilization.

The existential fear is that AI will somehow compromise this progress and send us careening into the next extinction-level event. If technology like the atom bomb made World War II so much worse than everything before it, doesn’t it follow that WWIII will be even more devastating?

It’s more complex than that, according to Kurzweil. He believes part of the reason we’re able to coexist so wonderfully (in the grand historical scheme) for so long is because democracy has begun to take hold globally. He also believes the rise of democracy is the direct result of advances made in communication technology. According to him:

You can count the number of democracies a century ago on the fingers of one hand, you can count the number of democracies two centuries ago on one finger. The world has become more peaceful. That doesn’t appear to be the case, because our information about what’s wrong with the world is getting exponentially better.

So what’s next? He believes we’ll all be less biological, because humans are always evolving, and the next step of our evolution will be the internal implementation of technology. The human-robot hybrid won’t be a monstrosity of metal. It’ll just be a chip in your brain instead of an iPhone in your hand.

In the future it’ll be no more shocking to think about the weather in Hong Kong and get an answer than it is to say “Hey Google, what’s the weather in China?” and receive accurate information from a glowing rectangle with a speaker inside of it.

Kurzweil believes “medical robots will go inside our brain and connect our neo-cortex to the smart cloud” by the year 2029.

That’s a jaw-dropper, even for a technology journalist who writes about AI regularly. It’s pretty hard to imagine people walking around with their brains connected to the cloud before Justin Bieber turns 35.

But dismiss Ray Kurzweil’s predictions at your own peril: he’s seldom wrong. When it comes to technology he’s gone on the record with hundreds of predictions, which is what futurists do, and he’s correct over 90 percent of the time.

According to Kurzweil the future is incredible, but it’s also worth mentioning that his view of the present is pretty fantastic as well. He reminds us that “just a few years ago we had these devices that looked like smartphones but they didn’t work very well,” and he’s right.

Today’s smartphones know how to respond to complex voice commands like “find all the pictures from my trip to San Francisco” and “play Star Trek The Next Generation season three, episode 16.” Today’s phones can recognize who is talking, pick out your voice even when music is playing, and execute the command without a hitch.

But just a few years back, most of us quickly gave up on using voice control regularly, because we were sick of repeating ourselves. We figured we’d wait until the technology got better. Tada! It’s better now.

The truth about AI, according to experts such as Ray Kurzweil, is that there’s no part of our lives that won’t be directly affected by it. As individuals we probably won’t notice the changes in real-time, but our dependence on machine learning will increase at exponential rates.

The law of accelerating returns is behind the artificial intelligence revolution — and Ray Kurzweil’s predictions. The very limits of what is “possible” concerning machine learning are going to require reevaluation on a daily basis going forward.

Source: The Next Web-Google’s AI guru predicts humans and machines will merge within 20 years


10 Companies Using Machine Learning in Cool Ways

If science-fiction movies have taught us anything, it’s that the future is a bleak and terrifying dystopia ruled by murderous sentient robots.

Fortunately, only one of these things is true — but that could soon change, as the doomsayers are so fond of telling us.

Artificial intelligence and machine learning are among the most significant technological developments in recent history. Few fields promise to “disrupt” (to borrow a favored term) life as we know it quite like machine learning, but many of the applications of machine learning technology go unseen.

Want to see some real examples of machine learning in action? Here are 10 companies that are using the power of machine learning in new and exciting ways (plus a glimpse into the future of machine learning).

1. Yelp — Image Curation at Scale

Few things compare to trying out a new restaurant and then going online to complain about it afterwards. This is among the many reasons why Yelp is so popular (and useful).

While Yelp might not seem to be a tech company at first glance, Yelp is leveraging machine learning to improve users’ experience.

Classifying images into simple exterior/interior categories is easy for humans, but surprisingly difficult for computers

Since images are almost as vital to Yelp as user reviews themselves, it should come as little surprise that Yelp is always trying to improve how it handles image processing.

This is why Yelp turned to machine learning a couple of years ago when it first implemented its picture classification technology. Yelp’s machine learning algorithms help the company’s human staff to compile, categorize, and label images more efficiently — no small feat when you’re dealing with tens of millions of photos.
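To get a feel for the idea, here’s a deliberately tiny sketch of image classification: a nearest-centroid classifier over hand-picked features (mean brightness and a “sky pixel” fraction, both invented for this example). Yelp’s actual system is a deep neural network trained on millions of photos, not anything this simple, but the shape of the problem is the same: turn each photo into numbers, then assign the closest label.

```python
# A toy nearest-centroid classifier for interior vs. exterior photos.
# Feature vectors here are hypothetical (mean brightness, fraction of
# sky-blue pixels) -- a real system like Yelp's uses a deep convolutional
# network trained on millions of labeled photos.

def centroid(vectors):
    """Component-wise mean of a list of feature vectors."""
    n = len(vectors)
    return tuple(sum(v[i] for v in vectors) / n for i in range(len(vectors[0])))

def classify(photo, centroids):
    """Assign the label whose centroid is nearest (squared Euclidean)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist(photo, centroids[label]))

# Hand-labeled training examples: (brightness, sky_fraction)
training = {
    "exterior": [(0.8, 0.6), (0.7, 0.5), (0.9, 0.7)],
    "interior": [(0.4, 0.0), (0.5, 0.1), (0.3, 0.05)],
}
centroids = {label: centroid(vecs) for label, vecs in training.items()}

print(classify((0.85, 0.55), centroids))  # a bright, sky-heavy photo
```

The bright, sky-heavy photo lands closest to the “exterior” centroid. The hard part in practice is the step this sketch skips entirely: learning good features from raw pixels.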

2. Pinterest — Improved Content Discovery

Whether you’re a hardcore pinner or have never used the site before, Pinterest occupies a curious place in the social media ecosystem. Since Pinterest’s primary function is to curate existing content, it makes sense that investing in technologies that can make this process more effective would be a priority — and that’s definitely the case at Pinterest.

In 2015, Pinterest acquired Kosei, a machine learning company that specialized in the commercial applications of machine learning tech (specifically, content discovery and recommendation algorithms).

Today, machine learning touches virtually every aspect of Pinterest’s business operations, from spam moderation and content discovery to advertising monetization and reducing churn of email newsletter subscribers. Pretty cool.

3. Facebook — Chatbot Army

Although Facebook’s Messenger service is still a little…contentious (people have very strong feelings about messaging apps, it seems), it’s one of the most exciting aspects of the world’s largest social media platform. That’s because Messenger has become something of an experimental testing laboratory for chatbots.

Some chatbots are virtually indistinguishable from humans when conversing via text

Any developer can create and submit a chatbot for inclusion in Facebook Messenger. This means that companies with a strong emphasis on customer service and retention can leverage chatbots, even if they’re a tiny startup with limited engineering resources.

Of course, that’s not the only application of machine learning that Facebook is interested in. AI applications are being used at Facebook to filter out spam and poor-quality content, and the company is also researching computer vision algorithms that can “read” images aloud to visually impaired people.

4. Twitter — Curated Timelines

Twitter has been at the center of numerous controversies of late (not least of which were the much-derided decisions to round out everyone’s avatars and changes to the way people are tagged in @ replies), but one of the more contentious changes we’ve seen on Twitter was the move toward an algorithmic feed.

Rob Lowe was particularly upset by the introduction of algorithmically curated Twitter timelines

Whether you prefer to have Twitter show you “the best tweets first” (whatever that means) or as a reasonably chronological timeline, these changes are being driven by Twitter’s machine learning technology. Twitter’s AI evaluates each tweet in real time and “scores” them according to various metrics.

Ultimately, Twitter’s algorithms then display the tweets that are likely to drive the most engagement. This is determined on an individual basis: Twitter’s machine learning tech makes those decisions based on your personal preferences, resulting in the algorithmically curated feed, which kinda sucks if we’re being completely honest. (Does anybody actually prefer the algorithmic feed? Tell me why in the comments, you lovely weirdos.)
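As a rough sketch of what “scoring” a tweet might look like, here’s a toy ranking function. The signals and weights below are invented purely for illustration; Twitter’s real models are personalized per user and vastly more complex, but the basic move is the same: compute a score per tweet, then sort by score instead of by time.

```python
# A minimal sketch of engagement-based timeline ranking. The features and
# weights are invented for illustration; a real ranking model learns them
# per user from behavioral data.

WEIGHTS = {"likes": 1.0, "retweets": 2.0, "replies": 1.5, "recency": 3.0}

def score(tweet):
    """Weighted sum of engagement signals, plus a recency boost in [0, 1]."""
    return (WEIGHTS["likes"] * tweet["likes"]
            + WEIGHTS["retweets"] * tweet["retweets"]
            + WEIGHTS["replies"] * tweet["replies"]
            + WEIGHTS["recency"] * tweet["recency"])

def curated_timeline(tweets):
    """'Best tweets first': sort by descending score instead of by time."""
    return sorted(tweets, key=score, reverse=True)

tweets = [
    {"id": 1, "likes": 10, "retweets": 1, "replies": 0, "recency": 0.9},
    {"id": 2, "likes": 300, "retweets": 40, "replies": 25, "recency": 0.2},
    {"id": 3, "likes": 5, "retweets": 0, "replies": 2, "recency": 1.0},
]
print([t["id"] for t in curated_timeline(tweets)])
```

Note how the older-but-viral tweet (id 2) jumps ahead of the fresher ones — exactly the behavior that makes algorithmic feeds contentious.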

5. Google — Neural Networks and ‘Machines That Dream’

These days, it’s probably easier to list areas of scientific R&D that Google — or, rather, parent company Alphabet — isn’t working on, rather than trying to summarize Google’s technological ambition.

Needless to say, Google has been very busy in recent years, having diversified into such fields as anti-aging technology, medical devices, and — perhaps most exciting for tech nerds — neural networks.

A selection of images created by Google’s neural network

The most visible product of Google’s neural network research has been DeepDream, the “machine that dreams.” It’s the same network that produced those psychedelic images everybody was talking about a while back.

According to Google, the company is researching “virtually all aspects of machine learning,” which will lead to exciting developments in what Google calls “classical algorithms” as well as other applications including natural language processing, speech translation, and search ranking and prediction systems.

6. Edgecase — Improving Ecommerce Conversion Rates

For years, retailers have struggled to overcome the mighty disconnect between shopping in stores and shopping online. For all the talk of how online retail will be the death-knell of traditional shopping, many ecommerce sites still suck.

Edgecase, formerly known as Compare Metrics, hopes to change that.

Edgecase hopes its machine learning technology can help ecommerce retailers improve the experience for users. In addition to streamlining the ecommerce experience in order to improve conversion rates, Edgecase plans to leverage its tech to provide a better experience for shoppers who may only have a vague idea of what they’re looking for, by analyzing certain behaviors and actions that signify commercial intent — an attempt to make casual browsing online more rewarding and closer to the traditional retail experience.

7. Baidu — The Future of Voice Search

Google isn’t the only search giant that’s branching out into machine learning. Chinese search engine Baidu is also investing heavily in the applications of AI.

A simplified five-step diagram illustrating the key stages of a natural language processing system

One of the most interesting (and disconcerting) developments at Baidu’s R&D lab is what the company calls Deep Voice, a deep neural network that can generate entirely synthetic human voices that are very difficult to distinguish from genuine human speech. The network can “learn” the unique subtleties of a speaker’s cadence, accent, pronunciation, and pitch to create eerily accurate recreations of their voice.

Far from an idle experiment, Deep Voice 2 — the latest iteration of the Deep Voice technology — promises to have a lasting impact on natural language processing, the underlying technology behind voice search and voice pattern recognition systems. This could have major implications for voice search applications, as well as dozens of other potential uses, such as real-time translation and biometric security.

8. HubSpot — Smarter Sales

Anyone who is familiar with HubSpot probably already knows that the company has long been an early adopter of emerging technologies, and the company proved this again earlier this month when it announced the acquisition of machine learning firm Kemvi.

Predictive lead scoring is just one of the many potential applications of AI and machine learning

HubSpot plans to use Kemvi’s technology in a range of applications — most notably, integrating Kemvi’s DeepGraph machine learning and natural language processing tech in its internal content management system.

This, according to HubSpot’s Chief Strategy Officer Bradford Coffey, will allow HubSpot to better identify “trigger events” — changes to a company’s structure, management, or anything else that affects day-to-day operations — and so pitch prospective clients and serve existing customers more effectively.

9. IBM — Better Healthcare

The inclusion of IBM might seem a little strange, given that IBM is one of the largest and oldest of the legacy technology companies, but IBM has managed to transition from older business models to newer revenue streams remarkably well. None of IBM’s products demonstrate this better than its renowned AI, Watson.

An example of how IBM’s Watson can be used to test and validate self-learning behavioral models

Watson may be a Jeopardy! champion, but it boasts a considerably more impressive track record than besting human contestants in televised game shows. Watson has been deployed in several hospitals and medical centers in recent years, where it demonstrated its aptitude for making highly accurate recommendations in the treatment of certain types of cancers.

Watson also shows significant potential in the retail sector, where it could be used as an assistant to help shoppers, as well as the hospitality industry. As such, IBM is now offering its Watson machine learning technology on a license basis — one of the first examples of an AI application being packaged in such a manner.

10. Salesforce — Intelligent CRMs

Salesforce is a titan of the tech world, with strong market share in the customer relationship management (CRM) space and the resources to match. Lead prediction and scoring are among the greatest challenges for even the savviest digital marketer, which is why Salesforce is betting big on its proprietary Einstein machine learning technology.

Salesforce Einstein allows businesses that use Salesforce’s CRM software to analyze every aspect of a customer’s relationship — from initial contact to ongoing engagement touch points — to build much more detailed profiles of customers and identify crucial moments in the sales process. This means much more comprehensive lead scoring, more effective customer service (and happier customers), and more opportunities.
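To make “lead scoring” concrete, here’s an illustrative sketch that weights each touch point in a customer’s history and flags leads above a threshold. The event names, weights, and threshold are all hypothetical; a system like Einstein learns such weights from CRM data rather than reading them from a hand-written table.

```python
# An illustrative lead-scoring sketch: weight each touch point in a lead's
# history, then surface the leads whose score crosses a threshold.
# Event names and weights are made up for this example.

EVENT_WEIGHTS = {
    "email_open": 1,
    "demo_request": 20,
    "pricing_page_visit": 10,
    "support_ticket": -5,
}

def lead_score(events):
    """Sum the weights of all recorded touch points for one lead."""
    return sum(EVENT_WEIGHTS.get(e, 0) for e in events)

def hot_leads(leads, threshold=25):
    """Return lead names at or above the threshold, highest score first."""
    scored = {name: lead_score(events) for name, events in leads.items()}
    return sorted((n for n, s in scored.items() if s >= threshold),
                  key=lambda n: -scored[n])

leads = {
    "acme":    ["email_open", "pricing_page_visit", "demo_request"],
    "globex":  ["email_open", "email_open"],
    "initech": ["demo_request", "pricing_page_visit"],
}
print(hot_leads(leads))
```

The interesting machine learning problem is the part this sketch hard-codes: inferring which touch points actually predict a sale, and how strongly.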

The Future of Machine Learning

One of the main problems with rapid technological advancement is that, for whatever reason, we end up taking these leaps for granted. Some of the applications of machine learning listed above would have been almost unthinkable as recently as a decade ago, and yet the pace at which scientists and researchers are advancing is nothing short of amazing.

So, what’s next in machine learning trends?

Machines That Learn More Effectively

Before long, we’ll see artificial intelligences that can learn much more effectively. This will lead to developments in how algorithms are treated, such as AI deployments that can recognize, alter, and improve upon their own internal architecture with minimal human supervision.

Automation of Cyberattack Countermeasures

The rise of cybercrime and ransomware has forced companies of all sizes to reevaluate how they respond to systemic online attacks. We’ll soon see AI take a much greater role in monitoring, preventing, and responding to cyberattacks like database breaches, DDoS attacks, and other threats.

Convincing Generative Models

Generative models, such as the ones used by Baidu in our example above, are already incredibly convincing. Soon, we won’t be able to tell the difference at all. Improvements to generative modeling will result in increasingly sophisticated images, voices, and even entire identities generated entirely by algorithms.

Better Machine Learning Training

Even the most sophisticated AI can only learn as effectively as the training it receives; oftentimes, machine learning systems require enormous volumes of data to be trained. In the future, machine learning systems will require less and less data to “learn,” resulting in systems that can learn much faster with significantly smaller data sets.

Source: Medium-10 Companies Using Machine Learning in Cool Ways 

The Complete Beginners’ Guide to Artificial Intelligence

Ten years ago, if you mentioned the term “artificial intelligence” in a boardroom there’s a good chance you would have been laughed at. For most people it would bring to mind sentient, sci-fi machines such as 2001: A Space Odyssey’s HAL or Star Trek’s Data.

Today it is one of the hottest buzzwords in business and industry. AI technology is a crucial lynchpin of much of the digital transformation taking place today as organizations position themselves to capitalize on the ever-growing amount of data being generated and collected.

So how has this change come about? Well partly it is due to the Big Data revolution itself. The glut of data has led to intensified research into ways it can be processed, analyzed and acted upon. Machines being far better suited than humans to this work, the focus was on training machines to do this in as “smart” a way as is possible.

This increased interest in research in the field – in academia, industry and among the open source community which sits in the middle – has led to breakthroughs and advances that are showing their potential to generate tremendous change. From healthcare to self-driving cars to predicting the outcome of legal cases, no one is laughing now!

What is Artificial Intelligence? 

The concept of what defines AI has changed over time, but at the core there has always been the idea of building machines which are capable of thinking like humans.

After all, human beings have proven uniquely capable of interpreting the world around us and using the information we pick up to effect change. If we want to build machines to help us do this more efficiently, then it makes sense to use ourselves as a blueprint.


AI, then, can be thought of as simulating the capacity for abstract, creative, deductive thought – and particularly the ability to learn which this gives rise to – using the digital, binary logic of computers.

Research and development work in AI is split between two branches. One is labelled “applied AI” which uses these principles of simulating human thought to carry out one specific task. The other is known as “generalized AI” – which seeks to develop machine intelligences that can turn their hands to any task, much like a person.


Source: Forbes-The Complete Beginners’ Guide to Artificial Intelligence

7 Ways in which Artificial Intelligence is Redefining Customer Experience in Contact Centers

A lot has already been said about artificial intelligence (AI). Some love it, while others hate it. But the truth is that today you can’t ignore AI: it has well and truly arrived. The advent of Big Data has given AI a further boost, helping companies exploit its power to provide the best possible customer experience.

According to research by Gartner, by 2020, AI will be a top-five investment priority for more than 30 percent of CIOs. But how exactly will it redefine or revolutionize the customer experience? Let us look at some of the ways artificial intelligence will improve the customer experience in contact centers:

1. All-time Customer Service
Providing convenient customer service is the need of the hour. When a customer has an issue, they want it resolved immediately. They don’t care what time of the day it is. Therefore, companies aim to provide round the clock customer support. Chatbots make this an efficient and seamless process. These are AI enabled devices which can be operational 24X7, unlike their human counterparts.

2. Omnichannel Integration
Customers today reach out to companies through various channels – social media, phones, mobile apps, emails, etc. Therefore, it becomes important to integrate data from all these channels to provide a complete customer experience. For instance, a customer might first reach out by phone and then follow up with an email; the agent should see data from all the previous touch points in one integrated view. AI helps provide that omnichannel support to agents.

3. Reduction in Waiting Time
AI streamlines the calling process. It helps prioritize customers and route them to the best-suited agent for specific issues, while a bot can route general queries to any available agent. This way, the customer doesn’t have to wait long, which leads to greater satisfaction.
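To illustrate the routing logic, here’s a minimal sketch using a heap keyed on how long each agent has been idle: specific issues go to the longest-idle agent with the matching skill, and general queries go to whoever has been waiting longest. The skill tags and matching rule are assumptions for the example, not a description of any vendor’s system.

```python
# A sketch of skill-based call routing: pop the longest-idle agent who can
# handle the issue; 'general' queries match anyone. Skill tags are invented.
import heapq

class Router:
    def __init__(self):
        self._idle = []  # min-heap of (idle_since, name, skills)

    def add_agent(self, idle_since, name, skills):
        heapq.heappush(self._idle, (idle_since, name, skills))

    def route(self, issue):
        """Return the longest-idle agent who can handle `issue`, or None."""
        skipped, match = [], None
        while self._idle:
            entry = heapq.heappop(self._idle)
            if issue == "general" or issue in entry[2]:
                match = entry[1]
                break
            skipped.append(entry)
        for entry in skipped:      # put non-matching agents back in the pool
            heapq.heappush(self._idle, entry)
        return match               # None means: keep the caller queued

router = Router()
router.add_agent(1, "ana", {"billing"})
router.add_agent(2, "ben", {"tech"})
print(router.route("tech"))     # ben handles it, though ana was idle longer
print(router.route("general"))  # ana, the longest-idle remaining agent
```

In a real contact center the “AI” part is predicting which agent is best suited, not just matching static tags, but the queueing skeleton looks much like this.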

4. Repurposing Historical Data
By now we are accustomed to collecting Big Data. Companies try to gather data across all possible domains and aspects – customer journey, operations, marketing, customer behavior and much more. Earlier, a lot of that data was simply discarded. But with AI, we can use this data to build a 360-degree view of the customer, which helps us improve CX.

5. Personalised Customer Interactions
Stemming from the previous point, chatbots acting as virtual agents use historical data to provide real-time information to human agents. This information empowers the agent to be spontaneous and provide a customized experience to the customer. In addition, the customer will be happy that the company is sensitive to his or her issues, which may actually convert into brand loyalty.

6. Building Customer Relationships
Building strong customer relationships is the first step to brand loyalty. Unfortunately, humans have their limitations, and AI can be effective here: bots can send an email to get timely feedback, or an SMS on special occasions to make customers feel valued.

7. Providing Future Opportunities
All the data crunching yields long-term results. Companies can analyze historical trends to predict future ones. Using sentiment analysis, machine learning, and natural language processing, companies can improve their products and target the right buyer persona.
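Here’s the simplest possible sketch of the sentiment-analysis step: a lexicon-based scorer with tiny, made-up word lists. Production systems use trained models over far richer features than raw word counts, but the input/output shape is the same: text in, sentiment label out.

```python
# A bare-bones lexicon sentiment scorer. The word lists are tiny and
# hypothetical; real contact-center tools use trained models.

POSITIVE = {"great", "love", "helpful", "fast", "thanks"}
NEGATIVE = {"broken", "slow", "terrible", "refund", "worst"}

def sentiment(text):
    """Return 'positive', 'negative', or 'neutral' by word-count majority."""
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(sentiment("the agent was great and so fast thanks"))
print(sentiment("app is broken and support is terrible"))
```

Run over thousands of call transcripts and chat logs, even a crude signal like this starts to show which products, and which parts of the customer journey, are generating frustration.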

Conclusion
Artificial Intelligence has proved its mettle, but there is still a long way to go before it gains acceptance from the majority. There always are people who are skeptical of new technology. AI has also not been left untouched by that. Some customers still feel more comfortable in interacting with live agents than virtual ones. Having said all that, AI is here to stay. We can expect an increase in its buy-in amongst companies in the near future.

Source: CustomerThink-7 Ways in which Artificial Intelligence is Redefining Customer Experience in Contact Centers

Here’s How Your Company Needs To Prepare For AI

In the late 1960s and early 70s, the first computer-aided design (CAD) software packages began to appear. Initially, they were mostly used for high-end engineering tasks, but as they got cheaper and simpler to use, they became a basic tool to automate the work of engineers and architects.

According to a certain logic, with so much of the heavy work being shifted to machines, a lot of engineers and architects must have been put out of work. In fact, just the opposite happened: there are far more of them today than 20 years ago, and employment in the sector is projected to grow another 7% by 2024.

Still, while the dystopian visions of robots taking our jobs are almost certainly overblown, Josh Sutton, Global Head, Data & Artificial Intelligence at Publicis.Sapient, sees no small amount of disruption ahead. Unlike the fairly narrow effect of CAD software, AI will transform every industry and not every organization will be able to make the shift. The time to prepare is now.

Shifting Value To Different Tasks

One of the most important distinctions Sutton makes is between jobs and tasks. Just as CAD software replaced the drudgery of drafting, which allowed architects to spend more time with clients and come up with creative solutions to their needs, automation from AI is shifting work to more of what humans excel at.

For example, in the financial industry, many of what were once considered core functions, such as trading, portfolio allocation and research, have been automated to a large extent. These were once considered high-level tasks that paid well, but computers do them much better and more cheaply.

However, the resources that are saved by automating those tasks are being shifted to ones that humans excel at, like long-term forecasting. “Humans are much better at that sort of thing,” Sutton says. He also points out that the time and effort being saved with basic functions frees up a lot of time to focus on customers and has opened up a new market in “mass affluent” wealth management.

Finally, humans need to keep an eye on the machines, which for all of their massive computational prowess, still lack basic common sense. Earlier this year, when Dow Jones erroneously reported that Google was buying Apple for $9 billion — a report no thinking person would take seriously — the algorithms bought it and moved markets until humans stepped in.

Human-Machine Collaboration

Another aspect of the AI-driven world that’s emerging is the opportunity for machine learning to extend the capabilities of humans. For example, when a freestyle chess tournament that included both humans and machines was organized, the winner was not a chess master nor a supercomputer, but two amateurs running three simple programs in parallel.

In a similar way, Google Health, IBM’s Watson division and many others as well are using machine learning to partner with humans to achieve results that neither could achieve alone. One study cited by a White House report during the Obama Administration found that while machines had a 7.5 percent error rate in reading radiology images and humans had a 3.5 percent error rate, when humans combined their work with machines the error rate dropped to 0.5 percent.
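As a back-of-envelope sanity check, assume for a moment that human and machine errors were statistically independent (an assumption the report does not make). Then the chance that both misread the same image would simply be the product of the two error rates:

```python
# Back-of-envelope check of the radiology numbers, under the strong,
# unstated assumption that human and machine errors are independent:
# the chance both misread the same image is the product of the error rates.
machine_error = 0.075
human_error = 0.035

both_wrong = machine_error * human_error
print(f"{both_wrong:.4f}")  # roughly 0.26 percent
```

The idealized figure (about 0.26 percent) is in the same ballpark as the reported 0.5 percent; since humans and machines tend to stumble on some of the same hard cases, their errors are correlated, so the observed combined rate sits a bit above what pure independence would predict.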

There is also evidence that machine learning can vastly improve research. Back in 2005, when The Cancer Genome Atlas first began sequencing thousands of tumors, no one knew what to expect. But using artificial intelligence researchers have been able to identify specific patterns in that huge mountain of data that humans would have never been able to identify alone.

Sutton points out that we will never run out of problems to solve, especially when it comes to health, so increasing efficiency does not reduce the work for humans as much as it increases their potential to make a positive impact.

Making New Jobs Possible

A third aspect of the AI-driven world is that it is making it possible to do work that people couldn’t do without help from machines. Much like earlier machines extended our physical capabilities and allowed us to tunnel through mountains and build enormous skyscrapers, today’s cognitive systems are enabling us to extend our minds.

Sutton points to the work of his own agency as an example. In a campaign for Dove covering sporting events, algorithms scoured thousands of articles and highlighted coverage that focused on the appearance of female athletes rather than their performance. It sent a powerful message about the double standard that women are subjected to.

Sutton estimates that it would have taken a staff of hundreds of people reading articles every day to manage the campaign in real time, which wouldn’t have been feasible. However, with the help of sophisticated algorithms his firm designed, the same work was able to be done with just a few staffers.

Increasing efficiency through automation doesn’t necessarily mean jobs disappear. In fact, over the past eight years, as automation has increased, unemployment in the US has fallen from 10% to 4.2%, a rate associated with full employment. In manufacturing, where you would expect machines to replace humans at the fastest rate, there is actually a significant labor shortage.

The Lump Of Labor Fallacy

The fear that robots will take our jobs is rooted in what economists call the lump of labor fallacy, the false notion that there is a fixed amount of work to do in an economy. Value rarely, if ever, disappears, it just moves to a new place. Automation, by shifting jobs, increases our effectiveness and creates the capacity to do new work, which increases our capacity for prosperity.

However, while machines will not replace humans, it’s become fairly clear that they can disrupt businesses. For example, one thing we are seeing is a shift from cognitive skills to social skills, in which machines take over rote tasks and value shifts to human-centered activity. So it is imperative that every enterprise adapt to a new mode of value creation.

“The first step is understanding how leveraging cognitive capabilities will create changes in your industry,” Sutton says, “and that will help you understand the data and technologies you need to move forward. Then you have to look at how that can not only improve present operations, but open up new opportunities that will become feasible in an AI driven world.”

Today, an architect needs to be far more than a draftsman, a waiter needs to do more than place orders and a travel agent needs to do more than book flights. Automation has commoditized those tasks, but opened up possibilities to do far more. We need to focus less on where value is shifting from and more on where value is shifting to.

Source: Inc.-Here’s How Your Company Needs To Prepare For AI

6 Hot AI Automation Technologies Destroying And Creating Jobs

Forrester

Physical and software robots rise

Nothing gets the Silicon Valley-obsessed media more excited than watching the online mud-wrestling of two tech titans, especially when the fight is over the hottest topic of the day: Will AI destroy our jobs or will it be a force for good?

It all started with Elon Musk declaring that “robots will be able to do everything better than us,” creating the “biggest risk that we face as a civilization.” To which Mark Zuckerberg responded that the “naysayers” drumming up “doomsday scenarios” are “pretty irresponsible.” Musk retorted on Twitter (where else?) “I’ve talked to Mark about this. His understanding of the subject is limited,” and Zuckerberg blogged on Facebook (where else?) that he is “excited about all the progress [in AI] and it’s [sic] potential to make the world better.”

And so it goes. I don’t agree with the notion that only people who are actually doing AI can comment on AI, and I’m sure neither Musk’s nor Zuckerberg’s understanding of AI is limited. Like the rest of us, however, they inject into the debate their own biases, perspectives, and ambitions. It may help anyone interested in the question of what AI will do or not do to our jobs and civilization to study its history (you may want to start here), to look for evidence refuting what we believe, and to seek out assessments of the current and future impact of AI technologies that are based on relevant data analyzed with minimal assumptions.

Surveys, interviews, and conversations with the people who actually make decisions about creating or eliminating jobs are examples of the latter category, and they often serve as the basis for market landscape descriptions and better-informed speculation from industry analysts. A recent case in point—and recommended reading—is “Automation technologies, Robotics, and AI in the Workplace, Q2 2017” from Forrester’s J.P. Gownder (his blog post on the report is here).

Gownder and his Forrester colleagues discuss in detail (33 dense pages instead of 140 characters) a dozen “automation technologies”—all based on what we now generally refer to as “artificial intelligence”—that were selected because they play a role in either eliminating or augmenting jobs, require long-term planning for maximum impact, and (most importantly, in my opinion), generate questions from Forrester’s clients. In addition to assessing the developmental stage and long-term impact on jobs and businesses, Forrester provides definitions of the AI technologies/categories they discuss, valuable simply because definitions are often sorely missing from discussions of “artificial intelligence.”

Here is my summary of the 6 AI technologies that will have the most impact on jobs—positive and negative—in the near future:

  1. Customer Self-Service: Customer-facing physical solutions such as kiosks, interactive digital signage, and self-checkout. Improved by recent innovations (better touchscreens, faster processors, improved connectivity and sensors), it is also entering new markets and applications—a prime example being the experimental Amazon Go convenience store. Example vendors: ECRS, Four Winds Interactive, Fujitsu, Kiosks Information Systems, NCR, Olea Kiosks, Panasonic, Protouch Manufacturing, Samsung, and Stratacache.
  2. AI-Assisted Robotic Process Automation: Automating organizational workflows and processes using software bots. Analyzing 160 AI-related Deloitte consulting projects, Tom Davenport found it to be one of the fastest growing AI applications, an observation confirmed by Forrester. Example vendors: Automation Anywhere, Blue Prism, Contextor, EdgeVerve Systems, Kofax, Kryon Systems, NICE, Pegasystems, Redwood Software, Softomotive, Symphony Ventures, UiPath, and WorkFusion.
  3. Industrial Robots: Physical robots that execute tasks in manufacturing, agriculture, construction, and similar verticals with heavy, industrial-scale workloads. The Internet of Things, improved software and algorithms, data analytics, and advanced electronics have contributed to a wider array of form factors, ability to perform in semi- and unstructured environments, and the “intelligence” to learn and operate autonomously. A rising sub-category is collaborative robots (cobots), working safely alongside humans. Example vendors: ABB, Aethon, Blue River Technology (agriculture), Clearpath Robotics (autonomous, multiterrain), Denso, FANUC (traditional robots and cobots), Kawasaki, Kuka, Mitsubishi, Nachi Robotics, OptoFidelity, RB3D (cobots), Rethink Robotics (cobots), and Yaskawa.
  4. Retail and Warehouse Robots: Physical robots with autonomous movement capabilities used in retailing and/or warehousing. Picking up objects is still the biggest challenge, but retailers such as Hudson’s Bay and JD.com, and of course Amazon, are investing in potential solutions. Example vendors: Amazon Kiva Systems (structured environments), Fetch Robotics (unstructured), Locus Robotics (unstructured), and Simbe Robotics (retail scanning robots for product restocking).
  5. Virtual Assistants: Personal digital concierges that know users and their data and are discerning enough to interpret their needs and make decisions on their behalf. Developed for the consumer market just a few years ago, these assistants can be used by companies in a business-to-consumer setting (e.g., answer questions at home or augment the work of call center employees) or inside the business organization (e.g., serve as subject matter experts or support business processes). Example vendors: Amazon Alexa, Apple Siri, Dynatrace for ITSM, Google Now and Google Assistant, IBM Watson conversational interface, IBM Watson Virtual Agent, IPsoft Amelia, Microsoft Cortana, Nuance Communications Nina, and Samsung Bixby.
  6. Sensory AI: Improving computers’ ability to identify, “understand,” and even express human sensory faculties and emotions via image and video analysis, facial recognition, speech analytics, and/or text analytics. Example vendors: Affectiva, Amazon Lex, Amazon Rekognition, Aurora Computer Services, Caffe, Clarifai, Deepomatic, Ditto, Equals 3 Lucy, FaceFirst, Google Cloud Platform APIs, HyperVerge, IBM Watson Developer Cloud, KeyLemon, Linkface, Microsoft Cognitive Services, Microsoft Cortana Intelligence Suite, ModiFace, Nuance Communications, OpenText, Revuze, Talkwalker, and Verint Systems.

The first 4 categories have been around for a while (Forrester calls them “mature”) but have recently been energized by hardware and software innovations. It is interesting to note that the key reason for the recent excitement about and fear of AI—the rapid advancement in a number of narrow AI tasks (e.g., object identification) due to improvements in deep learning techniques—has not contributed greatly to the newfound sexiness of these 4 categories. But deep learning has been a key contributor to the nascent success of the other 2 hot categories—virtual assistants and sensory AI. My general conclusion from these observations is that the excitement (and fear) generated by specific “triumphs” of AI technologies can obscure a very fundamental fact of technology adoption throughout history, including recent history: it takes a very long time. This has important implications for our assumptions and projections about when AI will eliminate (lots of) jobs.

It’s tough to make predictions about the timeframe and magnitude of job elimination, especially when we consider the future of employment (to paraphrase a very wise man). But the difficulties inherent in saying anything about the future, especially the future of jobs in a dynamic, constantly evolving, and multi-faceted economy (e.g., persistent low wages may postpone the adoption of robots), have never stood in the way of people writing and/or analyzing and/or speaking for fame and fortune (or more simply, for continuous employment).

The current cycle of here-are-authoritative-numbers-on-how-many-jobs-will-be-eliminated-by-AI was started 4 years ago by two Oxford academics (47% of jobs in the US are at risk of being automated in the next 20 years). Forrester’s analysts could not resist the much in-demand forecasting exercise and, in what became “one of the five best-read among all reports at Forrester,” estimated that automation will destroy 17% of US jobs by 2027. But, unlike many other commentators on the subject, they also looked at the glass half-full and estimated that automation will add 10% of new jobs to the US economy by 2027, for a net loss of 7%.
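Forrester’s headline figures combine by simple subtraction; the check below uses an illustrative job base purely to make the arithmetic explicit:

```python
# Forrester's 2027 forecast reduced to arithmetic (job base is illustrative).
destroyed_pct = 17   # share of existing US jobs destroyed by automation by 2027
created_pct = 10     # share of new jobs added by automation by 2027
net_pct = created_pct - destroyed_pct
print(f"net change: {net_pct}%")  # a net loss of 7%
```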

Whether it will be 7% or 47% or any other quantitative or qualitative speculation about the future impact of AI on employment, the debate over when and how much does not even take into consideration the question of if. Will robots really “be able to do everything better than us,” as Musk believes, not just in 20 or 100 years but anytime in the future? I know, it’s tough to make predictions, especially about the future of technology. What is certain is that inquiring minds steeped in the scientific ethos, such as Musk’s, should consider all possibilities and avoid making dogmatic statements, whether of the AI-will-destroy-civilization type or the AI-will-cure-all-diseases kind. Why not consider the possibility that intelligent machines will not take over because they will never be human, and that the futile quest for “human-level intelligence” has actually slowed down progress in AI research?

There is no question that we will continue to see the same disruption in the job market that we have witnessed over the last sixty-plus years of computer technology creating and destroying jobs (like other technologies that preceded it): the type of disruption that has created Facebook and Tesla. Facebook had a handful of employees in 2004 and today employs 20,000. Tesla was founded in 2003 and today has 33,000 employees. Whether AI technologies progress fast or slow, and whether AI continues to excel only at narrow tasks or succeeds in performing multi-dimensional activities, entrepreneurs like Zuckerberg and Musk (and Jack Ma and Vijay Shekhar Sharma and Masayoshi Son) will seize new business opportunities to both destroy and create jobs. Humans, unlike bots and robots (now and possibly forever), adapt to changing circumstances.

Source: Forbes-6 Hot AI Automation Technologies Destroying And Creating Jobs

Predictions, Redactions

At their annual Gartner Symposium event in Orlando, Peter Sondergaard predicted:

“AI will be a net job creator starting in 2020”

To that, Jason Hiner at TechRepublic snarkily commented:

“Gartner’s research chief couldn’t have opened the company’s flagship conference with a more astounding proclamation if he had claimed that next year’s event would be held on the International Space Station and Gartner was offering free rides.”

Actually, I agree with Peter – I wrote a whole book, Silicon Collar, which looks at a century of automation and at how humans go through panic attacks every couple of decades about automation and its impact on jobs. Automation tends to target tasks, not complete jobs. In general, it transforms jobs rather than destroying them. And societies have “circuit breakers” which slow down rapid mass adoption of automation technology, as I wrote here.

What I would have liked to hear from Peter was “we were too pessimistic just 3 years ago,” since he said from the same podium:

“Gartner predicts one in three jobs will be converted to software, robots and smart machines by 2025…New digital businesses require less labor; machines will make sense of data faster than humans can. By 2018, digital business will require 50% fewer business process workers.”

And I would have liked him to say “we really fxxked up” about the prediction that by next year (2018):

  • 20 percent of business content will be authored by machines.
  • more than 3 million workers globally will be supervised by a “robo-boss.”
  • 45 percent of the fastest-growing companies will have fewer employees than instances of smart machines.

In contrast, Oracle Co-CEO Mark Hurd shared with the OpenWorld audience a few hours later some of the “mean tweets,” as he called them, about some of the predictions he has been making about the cloud market.

Later, in a Q&A, I joked with Mark that as an industry analyst he would have had the luxury of hedging and assigning a probability to his predictions, and of never publicly having to audit or redact them.

Source: enterpriseirregulars.com-Predictions, Redactions

The impact of cobots on workers’ wellbeing

“Cobots”, or collaborative robots, are making inroads into work previously considered too difficult to automate. But as cobots get better at performing tasks such as material handling or packaging, their designers are having to consider the effects on their colleagues of the machines’ improved ability to interact with humans.

In its early stages, this new technology has been safe if underwhelming, says David Mindell, a professor at Massachusetts Institute of Technology. Of the cobots, he says: “They don’t do much collaboration, but at least they won’t cut your head off.”

Small, light and slow moving, cobots are generally harmless — the sensors and machine-learning software that enable them to “understand” their environment have a simple override: if a human gets too close, they are programmed to shut down.
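Reduced to a sketch, the override described here is a distance check in the cobot’s control loop. The 0.5 m radius and the single-sensor model below are assumptions for illustration, not taken from any real safety standard:

```python
SAFETY_RADIUS_M = 0.5  # assumed shutdown threshold, not a real standard

class Cobot:
    """Minimal model of a cobot that halts when a human enters its safety zone."""

    def __init__(self) -> None:
        self.running = True

    def on_sensor_reading(self, human_distance_m: float) -> None:
        # Programmed override: if a human gets too close, shut down.
        if human_distance_m < SAFETY_RADIUS_M:
            self.running = False
```

Certified safety systems layer many redundant sensors and fail-safes on top of this basic idea, but the principle is the same: proximity trumps the task at hand.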

The first job has been to design the software models to allow robots to operate in the human world, says Manuela Veloso, head of machine learning at Carnegie Mellon’s School of Computer Science. “It’s very important to be able to envision a mobile creature moving around in our space,” she says. For instance, getting machines to work alongside people will require an understanding of “safety zones” of the body: “We’re trying to model a person. You don’t want to hit an eye — an elbow is less important.”

As the software becomes more sophisticated, it promises more flexible machines that can be released from their cages. “We’ve got people doing jobs today because the regular robots can’t do it,” says Jim Lawton, head of product and marketing at Rethink Robotics, a Boston-based maker of cobots. These often involve repetitive actions that strain human limbs, are mind-numbingly dull and consign workers to jobs with no chance of career advancement, he says.

Mindell, author of Our Robots, Ourselves, a 2015 book about human-robot interaction, agrees there is much to be gained in the way of worker wellbeing: “If your work is truly about to be augmented, or made less dangerous or less straining, these are good things.” But he says that limits in both the technology and imagination on how to apply it have made this more promise than reality.

A Rethink Robotics machine, with ‘eyes’ displayed on a tablet to indicate its intended movements © AP

Designing complex interactions between robots and people will take a change in mindset, he says, adding that the history of automation has largely been about treating humans like robots, to fit into automated processes. “The computer science world still has a long way to go before it has a clue about how to deal with people,” he says.

At a simple level, makers of cobots are working to reduce the sense of weirdness for people working alongside machines whose level of intelligence they find hard to judge. Rethink, for instance, experimented with putting smiling mouths on its robots to make them seem more “human”. The result was the opposite, says Lawton: people thought the machines were smirking at them, and found them “arrogant and condescending”. Moving into the “uncanny valley” where robots start to copy humans too closely “spooked people”, he says.

Veloso says there are hurdles that will have to be overcome to improve the human experience of working with the machines. One is that the machines have to be more understandable. “The more humans infer what a robot will do next, the safer it will be,” she says.

Rethink’s answer has been to give its robots “eyes” (an image on a tablet computer) that indicate the direction the machine is about to move in — a simple way to prepare people around them that they are about to do something, says Lawton.

Another key is to design a form of robot-human symbiosis in which each helps the other achieve its goal, says Veloso. That will mean teaching people to respond to requests from the robots, or to anticipate their needs, as much as the other way around. As interactions like this become more subtle and machines take over more work alongside people, the long-term impact on the wellbeing of human workers is hard to predict. Against the obvious benefits of taking dangerous or tedious work away from people, there may be unexpected side-effects. “When people invented keyboards, they weren’t imagining carpal tunnel syndrome,” Veloso points out.

As more automation creeps in, there may be subtle but far-reaching effects on the way work is designed. There is a fear that the iterative process improvements that are a product of lean manufacturing — constantly learning and implementing better ways of working — may be threatened, says Lawton. If existing work processes are automated, the result could be an ossification that prevents this steady improvement.

Like much technology whose benefits are clear in the short term, even if their long-term effects on human wellbeing are hard to judge, the advance of the cobots is unlikely to be slowed. People are likely to take to their new robot colleagues as enthusiastically as they took to their smartphones, says Mindell. “People have their fears — in some ways, they are legitimate fears,” he says. “At the same time, they are addicted to their technology.”

‘Algorithms took our jobs’

Tom Gordon was 45 when his lucrative career as an oil trader suddenly faced a new threat. Electronic trading, which originally had been introduced to expand trading capacity overnight, was now operating head-to-head with Gordon and his colleagues on the floor of the exchange during the day.

Gordon says he used to handle between 500 and 750 trades a day. In his nearly 25 years as a trader he recalls recording only two months of losses. But even the high volumes that a successful trader like Gordon could handle were quickly overshadowed by the volumes electronic systems were capable of processing.

For Gordon, working alongside the electronic market was like being hit by a truck. “I saw the transition was coming and knew [traders] were going to get run over,” he says. He eventually left and retrained as a social worker.

He was wise to do so, because a few years later, in 2016, CME Group, which owns the New York Mercantile Exchange (Nymex), closed the last of its remaining commodity-trading pits.

Gordon says some of his former colleagues have struggled to cope in their new lives. “Some have done quite well, but for many of the people it really broke their lives and their spirit.”

Losing a job to a machine or algorithm carries a unique psychological burden, says Marty Nemko, a psychologist and career counsellor.

No training exists that can help a human match the speed and efficiency of artificial intelligence. “There is an inevitability of [one’s] inferior ability that accrues,” Nemko says.

Tim Leberecht, a consultant on business leadership, agrees: “If we lose our jobs due to automation and can’t get back into the workforce, then there is this huge void of purpose and meaning.”

“The big issue with this fourth industrial revolution is that we don’t have the social institutions that are facilitating and enabling the transition,” says Ravin Jesuthasan, managing director at Willis Towers Watson, and leader of the consulting group’s research area, “Future of Work”.

Research on the threat of automation paints a complicated picture. A 2016 OECD report found an average of 9 per cent of all jobs across the 21 countries the research covered could be automated, given current technology. A report by consultants McKinsey puts the global figure at less than 5 per cent.

Many researchers suggest the more nuanced effect of this transition will be on the handful of tasks across all sectors that are routine and repetitive.

According to another McKinsey report, more than 70 per cent of tasks performed by workers in the food service and hospitality sector could be carried out by machines. In manufacturing, nearly 60 per cent of tasks in jobs such as welding and maintaining equipment are at risk.

Higher-paying jobs are not immune from the disruption. McKinsey found that up to 50 per cent of tasks in the financial services industry could be automated, as could about a third of jobs in healthcare.

Jesuthasan says this refocusing of tasks can give people the space to do more meaningful work. “Leaving behind all of those routine things [creates] a huge emphasis on creativity and empathy and care,” he says.

After witnessing his original job as a trader vanish, it is perhaps no surprise that Gordon has found himself engrossed in work requiring these human characteristics. “I want to do my part,” he says. “Will I make a difference? I don’t know, but I’m going to give it a shot.”

Source: Financial Times-The impact of cobots on workers’ wellbeing

Robots Won’t Steal Our Jobs if We Put Workers at Center of AI Revolution

Future robots will work side by side with humans, just as they do today. Ronny Hartmann/AFP/Getty Images

The technologies driving artificial intelligence are expanding exponentially, leading many technology experts and futurists to predict machines will soon be doing many of the jobs that humans do today. Some even predict humans could lose control over their future.

While we agree about the seismic changes afoot, we don’t believe this is the right way to think about it. Approaching the challenge this way assumes society has to be passive about how tomorrow’s technologies are designed and implemented. The truth is there is no absolute law that determines the shape and consequences of innovation. We can all influence where it takes us.

Thus, the question society should be asking is: “How can we direct the development of future technologies so that robots complement rather than replace us?”

The Japanese have an apt phrase for this: “giving wisdom to the machines.” And the wisdom comes from workers and an integrated approach to technology design, as our research shows.

Lessons from history

There is no question coming technologies like AI will eliminate some jobs, as did those of the past.

More than half of the American workforce was involved in farming in the 1890s, back when it was a physically demanding, labor-intensive industry. Today, thanks to mechanization and the use of sophisticated data analytics to handle the operation of crops and cattle, fewer than 2 percent are in agriculture, yet their output is significantly higher.

But new technologies will also create new jobs. After steam engines replaced water wheels as the source of power in manufacturing in the 1800s, the sector expanded sevenfold, from 1.2 million jobs in 1830 to 8.3 million by 1910. Similarly, many feared that the ATM’s emergence in the early 1970s would replace bank tellers. Yet even though the machines are now ubiquitous, there are actually more tellers today doing a wider variety of customer service tasks.

So trying to predict whether a new wave of technologies will create more jobs than it will destroy is not worth the effort, and even the experts are split 50-50.

It’s particularly pointless given that perhaps fewer than 5 percent of current occupations are likely to disappear entirely in the next decade, according to a detailed study by McKinsey.

Instead, let’s focus on the changes they’ll make to how people work.

The invention of the automated teller machine didn’t kill off the bank teller. It simply altered what tasks the human teller performs. Justin Sullivan/Getty Images

It’s about tasks, not jobs

To understand why, it’s helpful to think of a job as made up of a collection of tasks that can be carried out in different ways when supported by new technologies.

And in turn, the tasks performed by different workers – colleagues, managers and many others – can also be rearranged in ways that make the best use of technologies to get the work accomplished. Job design specialists call these “work systems.”

One of the McKinsey study’s key findings was that about a third of the tasks performed in 60 percent of today’s jobs are likely to be eliminated or altered significantly by coming technologies. In other words, the vast majority of our jobs will still be there, but what we do on a daily basis will change drastically.

To date, robotics and other digital technologies have had their biggest effects on mostly routine tasks like spell-checking and those that are dangerous, dirty or hard, such as lifting heavy tires onto a wheel on an assembly line. Advances in AI and machine learning will significantly expand the array of tasks and occupations affected.

Creating an integrated strategy

We have been exploring these issues for years as part of our ongoing discussions on how to remake labor for the 21st century. In our recently published book, “Shaping the Future of Work: A Handbook for Change and a New Social Contract,” we describe why society needs an integrated strategy to gain control over how future technologies will affect work.

And that strategy starts with helping define the problems humans want new technologies to solve. We shouldn’t be leaving this solely to their inventors.

Fortunately, some engineers and AI experts are recognizing that the end users of a new technology must have a central role in guiding its design to specify which problems they’re trying to solve.

The second step is ensuring that these technologies are designed alongside the work systems with which they will be paired. A so-called simultaneous design process produces better results for both the companies and their workers compared with a sequential strategy – typical today – which involves designing a technology and only later considering the impact on a workforce.

An excellent illustration of simultaneous design is how Toyota handled the introduction of robotics onto its assembly lines in the 1980s. Unlike rivals such as General Motors that followed a sequential strategy, the Japanese automaker redesigned its work systems at the same time, which allowed it to get the most out of the new technologies and its employees. Importantly, Toyota solicited ideas for improving operations directly from workers.

In doing so, Toyota achieved higher productivity and quality in its plants than competitors like GM that invested heavily in stand-alone automation before they began to alter work systems.

Similarly, businesses that tweaked their work systems in concert with investing in IT in the 1990s outperformed those that didn’t. And health care companies like Kaiser Permanente and others learned the same lesson as they introduced electronic medical records over the past decade.

Each example demonstrates that the introduction of a new technology does more than just eliminate jobs. If managed well, it can change how work is done in ways that can both increase productivity and the level of service by augmenting the tasks humans do.

Worker wisdom

But the process doesn’t end there. Companies need to invest in continuous training so their workers are ready to help influence, use and adapt to technological changes. That’s the third step in getting the most out of new technologies.

And it needs to begin before they are introduced. The important part of this is that workers need to learn what some are calling “hybrid” skills: a combination of technical knowledge of the new technology with aptitudes for communications and problem-solving.

Companies whose workers have these skills will have the best chance of getting the biggest return on their technology investments. It is not surprising that these hybrid skills are now in high and growing demand and command good salaries.

None of this is to deny that some jobs will be eliminated and some workers will be displaced. So the final element of an integrated strategy must be to help those displaced find new jobs and compensate those unable to do so for the losses endured. Ford and the United Auto Workers, for example, offered generous early retirement benefits and cash severance payments in addition to retraining assistance when the company downsized from 2007 to 2010.

Examples like this will need to become the norm in the years ahead. Failure to treat displaced workers equitably will only widen the gaps between winners and losers in the future economy that are now already all too apparent.

In sum, companies that engage their workforce when they design and implement new technologies will be best-positioned to manage the coming AI revolution. By respecting the fact that today’s workers, like those before them, understand their jobs better than anyone and the many tasks they entail, they will be better able to “give wisdom to the machines.”

Source: Observer-Robots Won’t Steal Our Jobs if We Put Workers at Center of AI Revolution

Artificial intelligence is redefining corporate finance

Sven Denecken, SVP and Head of Product Management and Co-Innovation at SAP, discusses how AI is changing finance functions

Artificial intelligence (AI) and its potential to transform business processes across industries has become a central focus for organizations across the globe. Whether it’s conversations in the boardroom, sessions at an industry conference or a small-scale team meeting of accountants, companies today are buzzing about AI and the opportunity it poses to help usher in digital transformation.

While many still speculate that AI is more hype than reality, it is already deeply ingrained in many organizations, driving automation that simplifies business processes.

This is especially true in corporate finance, with a recent study from Oxford Economics and SAP finding that 73% of finance executives agree that automation is improving finance efficiency at their company.

What is AI?

Defining artificial intelligence is perhaps the biggest initial hurdle that many finance stakeholders face in evaluating these technologies and weighing their potential impact in the enterprise. So, to start with the basics, AI can be broadly defined to include any simulation of human intelligence exhibited by machines.

One historical application that many organizations today are using is robotic process automation (RPA), which is rule-based robotic automation that can be extremely beneficial to companies in automating routine tasks. But beyond RPA, AI technology is a huge growth area that is branching into a multitude of areas when it comes to research, development and investment.

Other examples of AI include autonomous robotics, natural language processing or NLP (think of virtual assistants such as Apple’s Siri or Amazon’s Alexa), knowledge representation techniques (knowledge graphs) and more.

Machine learning is one specific subset of AI that has been gaining buzz in the industry today. Machine learning is learning-based AI: it aims to teach computers how to accomplish tasks using data inputs, but without the explicit rule-based programming that has historically characterized RPA.
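The RPA-versus-machine-learning distinction can be made concrete with a toy example (the names, thresholds, and labels below are entirely hypothetical): RPA encodes the rule by hand, while a learning system derives it from historical data.

```python
# RPA-style: the rule is written explicitly by a person.
def route_invoice_rpa(amount: float) -> str:
    return "manager_approval" if amount > 10_000 else "auto_approve"

# ML-style: a deliberately simplistic "model" that infers the cutoff from
# labeled history instead of being told the rule.
def learn_auto_approve_cutoff(history: list[tuple[float, str]]) -> float:
    approved = [amt for amt, label in history if label == "auto_approve"]
    return max(approved)  # largest amount ever auto-approved
```

Real machine learning replaces the one-line "model" with statistical fitting, but the contrast holds: the rule comes from data, not from a programmer.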

Drive efficiency in finance with AI

RPA is increasingly common within finance departments, helping to automate routine responsibilities such as transactional tasks and reporting. More advanced AI technologies, such as machine learning, can take this a step further, replacing rule-based machines with systems that learn from data.

For instance, invoicing is a finance responsibility that can often be a nightmare for accounts receivable or treasury clerks. A customer might pay the incorrect amount for an invoice, combine several invoices into one check, or forget to include their invoice reference number altogether. Rectifying this can consume enormous amounts of time sifting through invoices or tracking down the customer.

This is an area where machine learning could support finance teams in real time, learning from historical matches to suggest how payments should be paired with invoices. With this, finance teams can not only better ensure accuracy in allocating payments, but also massively cut down the time spent manually tracking down the relevant information, freeing them for other needs within the business.
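A minimal sketch of what such a suggestion engine does is shown below. The scoring heuristic, the weights and the example invoices are all invented for illustration; a real machine-learning system would learn its matching criteria from historical payment data rather than use fixed weights.

```python
import difflib

def suggest_match(payment, open_invoices):
    """Suggest which open invoice a payment most likely belongs to."""
    def score(inv):
        # Amount closeness: 1.0 for an exact match, falling off as the
        # difference grows relative to the invoice amount.
        amount_score = max(
            0.0, 1 - abs(payment["amount"] - inv["amount"]) / inv["amount"]
        )
        # Fuzzy similarity of whatever reference text the customer supplied.
        ref_score = difflib.SequenceMatcher(
            None, payment.get("reference", ""), inv["number"]
        ).ratio()
        # Illustrative fixed weights; a learned system would tune these.
        return 0.6 * amount_score + 0.4 * ref_score
    return max(open_invoices, key=score)

invoices = [{"number": "INV-1041", "amount": 1200.00},
            {"number": "INV-1042", "amount": 860.50}]
payment = {"amount": 860.50, "reference": "inv 1042"}
best = suggest_match(payment, invoices)
```

Here the exact amount and the fuzzy reference both point at INV-1042, so that is the suggestion surfaced to the clerk, who retains the final say.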

Let AI have a seat at the table

The potential for AI doesn’t just lie in efficiency. As these machines get smarter, there is enormous potential for AI to support CFOs and finance directors in informing strategy and driving action.

In the consumer technology space, NLP applications like Siri and Alexa have helped to "humanize" technology and information for individuals, answering questions about the weather and news headlines, and even occasionally entertaining the user with a bad joke. The use of these voice-enabled devices isn't limited to the consumer setting, and in the coming years we will likely see an increase in NLP technology being applied in B2B enterprise settings.

For instance, CFOs and other finance executives often face questions in board meetings about revenue forecasts and myriad other topics. Often the executive must spend countless hours preparing and pulling figures to anticipate what might be needed, or alternatively halt an in-progress meeting to pull up the latest numbers.

These digital assistant devices could be used in the enterprise setting to let the CFO easily ask questions of his or her data analytics system in real time. This technology would not only enable uninterrupted meetings, but also allow the CFO and other company stakeholders to make informed decisions that drive action quickly and with confidence.
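The question-answering layer such an assistant puts over a finance data store can be sketched very simply. Everything here is hypothetical: the figures, the keyword-matching rules and the data structure are invented for illustration, and a production assistant would use far richer NLP than keyword spotting.

```python
import re

# Invented sample data standing in for a finance analytics system.
FINANCIALS = {
    ("revenue", "Q3"): "4.2m",
    ("revenue", "Q4"): "4.9m",
    ("headcount", "Q4"): "312",
}

def answer(question):
    """Map a natural-language question to a stored figure, if one exists."""
    q = question.lower()
    # Crude keyword spotting in place of real natural language understanding.
    metric = ("revenue" if "revenue" in q
              else "headcount" if "headcount" in q
              else None)
    quarter = re.search(r"q[1-4]", q)
    if metric and quarter:
        key = (metric, quarter.group(0).upper())
        if key in FINANCIALS:
            return f"{key[1]} {metric}: {FINANCIALS[key]}"
    return "I don't have that figure."
```

Asking `answer("What was revenue in Q3?")` returns the stored Q3 revenue figure without anyone leaving the meeting to pull a report, which is the workflow the paragraph above describes.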

Smart technologies will change the talent landscape

AI offers exciting promise for innovation as companies look to stay ahead in today's fast-paced, globalised business landscape, but as its popularity continues to grow, conversations have begun about the possible negative implications for workers.

For finance teams, while AI can have a measurable impact on efficiency, it cannot replace the human element. Human review and monitoring are still required when technology such as machine learning streamlines manual tasks, especially in cases too complex for the machine to resolve.

Additionally, finance executives have an opportunity to build their teams by hiring people who are familiar with advanced technologies and can help support, improve and innovate their use within the finance function, ensuring human workers are equipped to excel in their roles.

Eighty-four percent of global companies cite digital transformation as an important factor for survival in the next five years, but to date only 3% of organisations have completed a company-wide digital transformation, according to another recent survey by Oxford Economics and SAP.

Accordingly, finance executives in particular believe that investment in digital skills and technology will have the greatest impact on company revenue over the next two years.

By exploring how AI technology can be implemented, not only to streamline processes but also as a valuable resource for informing strategy and driving action in finance, CFOs and other finance stakeholders can ensure their workforce is best armed to drive success in the digital economy.

Source: Financial Director, "Artificial intelligence is redefining corporate finance"