The rise of the machines: not so doom and gloom

As technology permeates every aspect of our professional and personal lives, it is clear that a robotic revolution is upon us. While robots have been used in industries such as manufacturing and automotive for years, today’s systems are ever-improving and, in some cases, now exceed human limitations.

For instance, Google DeepMind’s artificial intelligence (AI) program AlphaGo has beaten the world’s number one player in tournaments of Go (an incredibly complex Chinese board game). While this represents the higher end of current machine capabilities, there is a growing, but unfounded, fear that at the current rate of development it’s only a matter of time before all jobs are completed by machines.

Losing jobs to technology is not simply a modern-day worry. In the early days of the industrial revolution, for example, many workers believed that their jobs were at risk. In the 1930s, the economist John Maynard Keynes coined the term ‘technological unemployment’, believing that the rise of technology would lead to a permanent decline in the number of jobs. In both cases, the impact was not as dramatic as feared: technology helped employees do their jobs better rather than replacing them, and it even created new positions.

It’s a similar story today. An accountant, for instance, is likely to use tools that automate certain aspects of their role. Data collection and report creation are both arduous, time-consuming activities to complete manually, and the ever-growing deluge of information only increases the chances that something vital will be omitted. Automating these processes provides accountants with the data they need to analyse, enabling them to increase accuracy and productivity.

Many will argue that automated production lines are a testament to the fact that machines can do some jobs better, and businesses do indeed invest heavily in order to have work floors almost free of humans.


But the belief that robots will take all jobs is wrong. Semi-skilled positions will bear the brunt, while low- and high-skilled jobs (such as caretakers and data scientists respectively) will be affected less, due to the cost or complexity of automating them. In fact, the rise of machines will also help stimulate employment by creating new roles.

Despite having all the ‘bells and whistles’, robots can’t design and service themselves. Humans can try to create another robot to do the job, but then what happens if that one breaks down too?

The truth is, the cost of developing a machine to maintain other machines will always outweigh paying a few humans a salary, meaning the number of such positions will increase in proportion to the number of robots. Furthermore, as machines become even more intelligent there are simply more things that can go wrong, increasing the need for skilled workers.

Some humans will be working for or following orders from robots. In warehouses and other logistics operations, for example, machines already tell humans what to do, including which items to pick from shelves and where to process them. Robots can analyse orders more quickly and delegate the relevant responsibilities to ensure that they are fulfilled in almost real time.

While it may sound like an ‘overlord’ scenario, it’s worth noting that people already work for machines in some capacity. Every time someone uses Facebook or Google, they provide AI systems with data and are paid back in services. People also take their driving directions from apps on their smartphones. So, just as an employee generates value for an employer, humans are all worth something to those machines.

There will also be humans who own the machines, a role that will not be filled only by the billionaires developing machines today. The independent lorry driver who can operate just one lorry today will, in just a few years, be able to invest in and operate several autonomous lorries.

Ultimately, while reports will continue to conclude that robots are coming for people’s jobs, it’s often forgotten just how intensely competitive humans are. Elon Musk argues that humans will have to become cyborgs to beat machines, but people compete with other people, not with machines. Humans will always find a way to win, even if they adopt technology from machines in order to compete with each other. Even if some jobs are replaced by robots, those affected will simply re-skill themselves, and perhaps upgrade themselves, to find another profession.

Source: Information Age-The rise of the machines: not so doom and gloom


RPA and AI – the same but different

For a conference run by the Institute of Robotic Process Automation (IRPA), there sure was a lot of talk about Artificial Intelligence (AI). Unfortunately, most of that talk only seemed to confuse people about this latest, and most hyped, of technologies. There were frameworks that showed RPA and AI as a ‘continuum’, there were models that suggested a natural ‘journey’ from RPA to AI, and others talked about AI being a ‘must have’ if RPA was to realise its full value. Some presenters even talked about a ‘choice’ between RPA and AI. None of this really helped educate the conference attendees on the benefits of either technology. Let’s unravel each of these points so that everyone can be clear on the relationship between RPA and AI.

The RPA/AI Continuum – whilst it can be argued that RPA is the simpler of the two technologies, they are very different beasts indeed. The key difference is that the robots of RPA are ‘dumb’ whilst AI is ‘self-learning’. The robots will do exactly what you tell them to do, and they will do it exactly the same way again and again. This is perfect when you have rules-based processes where compliance and accuracy are critical. However, where there is any ambiguity – usually when the inputs into a process are unstructured (such as customer emails) or where there are very large amounts of data – AI is the appropriate technology to use, because it can manage that variability and, most importantly, get better over time through its own experiences. So, if you do want to think of a technology continuum, make sure you put a large gap between RPA and AI.

The RPA to AI Journey – there are a number of case studies where companies have implemented RPA and then implemented AI, but only because RPA is a more mature technology than AI. There are far more examples of companies implementing RPA and not implementing AI at all because they actually don’t need the AI. RPA does a fantastic job of delivering labour arbitrage, accuracy and compliance without AI coming anywhere near it. And, of course, some companies implement AI without RPA. It’s not a journey, just a set of choices based on specific demands.

The RPA Dependency on AI – another view that was put forward was that RPA is only valuable when it has AI in support. This is clearly a self-fulfilling view put forward by the vendors that are able to offer both technologies, but it is simply not correct. As mentioned above, many (in fact, most) companies implement RPA without any consideration or need for AI. If you want compliant, repeatable processes, and can feed the robots with structured data, then why complicate and confuse matters by introducing AI?

The RPA/AI Choice – there was yet another view put forward (which actually conflicts with much of the above) that companies need to make a choice between RPA and AI – in other words, which is the best one for them to implement to deliver their objectives? As should be clear by now, the two technologies actually complement each other very well: for example, by using AI to structure unstructured data at the beginning of the process, by using the robots to process the transactions, and then potentially using AI for decision making and/or data analytics at the end.

So, why all this confusion and misinformation? Part of it is obviously self-interest from vendors and providers, who create frameworks and models that align with their own capabilities and marketing messages. And, although RPA is now pretty well defined (with that badge of maturity: its own acronym), some of the confusion surely arises from the multiple terms used to describe artificial intelligence: AI, cognitive computing, machine learning, NLP, etc. For now, the best approach is to think of AI in terms of how it can help your business, without worrying about what to call it. As the technology develops, though, a more robust approach is required, which is why at Symphony Ventures we are working on an ‘AI taxonomy’ that will clarify the different types of AI, and therefore help to explain the practical opportunities and uses of AI for our clients. We look forward to sharing this with you and de-bunking much of the confusion around RPA and AI that we have seen over the past few months.

Source: symphonyhq.com-RPA and AI – the same but different 

Making the Case for Employing Software Robots

One of the main tenets of advancing technology is to free up the time and effort workers are often required to put into relatively mundane tasks. Automating processes that once took hours for a person to complete has been a boon to a business’ bottom line while allowing IT workers to focus on tasks more central to advancing a company’s strategic initiatives. When it comes to Robotic Process Automation (RPA), Rod Dunlap, a director at Alsbridge, a global sourcing advisory and consulting firm, understands how RPA tools can positively impact workflow in industries such as health care and insurance. In this interview with CIO Insight, Dunlap expands on the RPA ecosystem, when it makes sense to employ RPA tools—and when it doesn’t.

For those unfamiliar, please describe Robotic Process Automation and explain a basic example of it in use.

RPA tools are software “robots” that use business-rules logic to execute specifically defined and repeatable functions and work processes in the same way that a person would. These include applying various criteria to determine whether, for example, a healthcare claim should be accepted or rejected, whether an invoice should be paid, or whether a loan application should be approved.
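The kind of business-rules logic described here can be sketched in a few lines. This is a minimal illustration only: the field names, rules and thresholds below are invented for the example, not taken from any real claims system.

```python
# Illustrative rules-based decision logic of the kind an RPA robot applies.
# Every field name and rule here is a hypothetical example.

def decide_claim(claim):
    """Accept or reject a healthcare claim by fixed business rules."""
    # Rule 1: the policy must be active on the date of service.
    if not claim["policy_active"]:
        return "rejected: inactive policy"
    # Rule 2: the claimed amount must not exceed the coverage limit.
    if claim["amount"] > claim["coverage_limit"]:
        return "rejected: amount exceeds coverage"
    # Rule 3: the procedure code must be on the covered list.
    if claim["procedure_code"] not in claim["covered_procedures"]:
        return "rejected: procedure not covered"
    return "accepted"

claim = {
    "policy_active": True,
    "amount": 450.0,
    "coverage_limit": 1000.0,
    "procedure_code": "X101",
    "covered_procedures": {"X101", "X205"},
}
print(decide_claim(claim))  # accepted
```

Because each rule is explicit, the robot applies them identically on every claim, which is exactly the compliance property the interview highlights.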

What makes RPA attractive to businesses?

For one thing, RPA tools are low in cost – a robot that takes on the mundane work of a human healthcare claims administrator, for example, costs between $5K and $15K a year to implement and administer. Another advantage is ease of implementation. Unlike traditional automation tools, RPA systems don’t require extensive coding or programming. In fact, it’s more accurate to say that the tools are “taught” rather than “programmed.” Relatedly, the tools can be quickly and easily adapted to new conditions and requirements. This is critical in, for example, the healthcare space, where insurance regulations are constantly changing. And while the tools require some level of IT support, they don’t have a significant impact on IT infrastructure or resources, or require changes to any of the client’s existing applications.

What are the drawbacks of RPA?

RPA tools are limited in terms of their learning capabilities. They can do only what they have been taught to do, and can’t reason or draw conclusions based on what they encounter. RPA tools typically cannot read paper documents or free-form text, or make phone calls. The data for the robots must be structured.

In what industries does RPA make the most sense?

They make sense in any situation that has a high volume of repeatable or predictable outcomes – in other words, where the same task is repeated over and over. We’ve seen a lot of adoption in the Insurance, Financial, Healthcare, Media, Services and Distribution industries.

Where does it make the least sense?

They don’t make sense in situations that have a high volume of one-off or unusual situations. To take the healthcare claims processing example, RPA is ideal for processing up to 90 percent of claims that an insurer receives. The remaining 10 percent of claims are for unusual situations. In these cases, while you could teach the robots the rules to process these claims, it’s more cost-effective to have a human administrator do the review.

If you automate a process once done by humans, and have it perfected by a robot, is it possible for the robot to determine a better way to accomplish the task?

Not with RPA. As mentioned, these tools will execute tasks only in the way in which they were taught. They can’t observe and suggest a different way to do things based on their experience, but what you are suggesting is indeed where the industry is heading.

What sort of data can be learned from RPA?

RPA tools can’t really provide insight from data on their own. They can log detailed data about every transaction they process. This can then be fed into a number of tools that will provide operation statistics. Also, they can work in tandem with more sophisticated cognitive tools that use pattern recognition capabilities to process unstructured data. For example, insurance companies have huge volumes of data sitting on legacy systems in a wide range of formats. Insurers are looking at ways to apply the cognitive tools to process and categorize this data and then use RPA tools to feed the data into new systems. Retailers are looking to apply the tools in similar ways to gain insight from customer data.

How much human oversight is needed to ensure mistakes are avoided?

The robots won’t make “mistakes” per se, but oversight is necessary to make sure that the robots are updated to reflect changes in business conditions and rules. An operator, similar to a human supervisor, can start and stop robots, change the tasks they perform and increase throughput all without worrying about who gets the window office.

Source: cioinsight.com-Making the Case for Employing Software Robots

Robotic Process Automation & Artificial Intelligence – Disruption

Robotic Process Automation & Artificial Intelligence. Two technologies for your business – great alone, better combined

Right now there is plenty of excitement around the huge potential of automation in businesses, particularly regarding Robotic Process Automation (RPA) and Artificial Intelligence (AI). These two technologies have the capability to drive significant, step-change efficiencies as well as generating completely new sources of value for organisations.

But, as businesses look to adopt RPA and AI, and seek to get the most value from these disruptive technologies, they need to have a clear picture as to what they do and don’t do, and how they can work together to deliver even more value.

The first thing to understand is that RPA and AI are very different types of technology, but they complement each other very well. One can use RPA without AI, and AI without RPA, but the combination of the two together is extremely powerful.

So, first to explain what RPA and AI actually are, starting with RPA as it is the easier to define. Robotic Process Automation is a class of software that replicates the actions of humans operating computer systems in order to run business processes. Because the software ‘robots’ mimic exactly what the human operators do (logging into a system, entering data, clicking ‘OK’, copying and pasting data between systems, etc.), the underlying systems – such as ERP systems, CRM systems and Office applications – work exactly as they always have done, without any changes required. And because licenses for the robots cost a fraction of the price of employing someone, and the robots can work 24×7 if need be, the business case on cost grounds alone is very strong.

As well as cost savings, RPA also delivers other important benefits such as accuracy and compliance (the robots will always carry out the process in exactly the same way every time) and improved responsiveness (they are generally faster than humans, and can work all hours). They are also very agile – a single robot can do any rules-based process that you train it on, whether it is in finance, customer services or operations.

Processes that can be automated through RPA need to be rules-based and repetitive, and will generally involve processes running across a number of different systems. Customer on-boarding is a good example of a candidate RPA process since it involves a number of different steps and systems, but all can be defined and mapped. High volume processes are preferable as the business case will be stronger, but low volume processes can be automated if accuracy is crucial.

The important thing to remember though is that RPA robots are dumb. The software may be really clever in terms of what it can achieve, but the robots will do exactly what you have trained them to do each time, every time. This is both their greatest strength and their greatest weakness. A strength because you need to be sure that the robot will carry out the process compliantly and accurately, but a weakness because it precludes any self-learning capability.

This inability to self-learn leads to two distinct constraints for RPA, both of which, luckily, can be addressed by AI capabilities. The first is that the robots require structured data as their input, whether this be from a spreadsheet, a database, a webform or an API. When the input data is unstructured, such as a customer email, or semi-structured where there is generally the same information available but in variable formats (such as invoices) then artificial intelligence can be introduced to turn it into a structured format.

This type of AI capability uses a number of different technologies, including Natural Language Processing, to extract the relevant data from the available text, even if the text is written in free-form language, or if the information on a form looks quite different each time. For example, if you wrote an email to an online retailer complaining that the dress that was delivered was the wrong colour from the one you ordered, the AI would be able to tell that this was a complaint, that the complaint concerned a dress, and that the problem was the wrong colour. If the order information was not included in the original email, the AI could potentially work out which order it related to by triangulating the information it already has. Once it has gathered everything together, it can then route the query to the right person within the organisation, along with all of the supporting data. Of course, the ‘right person’ could actually be a robot, which could reorder the correct colour of dress and send an appropriate email to the customer.
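The end product of this step is structured data a robot can consume. As a toy illustration of that output shape only – a real system would use trained NLP models rather than the keyword matching sketched here, and all the field names and keyword lists are invented:

```python
# Toy illustration of turning an unstructured email into structured fields.
# A production system would use trained NLP models; this keyword version
# only shows the shape of the record a downstream robot would consume.

def structure_email(text):
    text_l = text.lower()
    record = {"intent": "other", "product": None, "issue": None}
    # crude intent detection via complaint-related keywords
    if any(word in text_l for word in ("complaint", "wrong", "disappointed")):
        record["intent"] = "complaint"
    # pick out the first product mentioned from a known catalogue
    for product in ("dress", "shirt", "shoes"):
        if product in text_l:
            record["product"] = product
            break
    # identify the nature of the problem
    if "colour" in text_l or "color" in text_l:
        record["issue"] = "wrong colour"
    return record

email = "I'm disappointed: the dress you delivered is the wrong colour."
print(structure_email(email))
# {'intent': 'complaint', 'product': 'dress', 'issue': 'wrong colour'}
```

Once the record is structured like this, routing it – whether to a human or to another robot – becomes a simple rules-based step of the kind RPA already handles well.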

For semi-structured data, the AI is able to extract the data from a form, even when that data is in different places on the document, in a different format or only appears sometimes. For an invoice, for example, the date might be in the top left hand corner sometimes, and other times in the top right. It might also be written longhand, or shortened. The invoice may or may not include a VAT amount, and this may be written above the Total Value or below it. Once trained, the AI is able to cope with all of this variability to a high degree of confidence. If it doesn’t know (i.e. its confidence level is below a certain threshold) then it can escalate to a human being, who can answer the question, and the AI will then learn from that interaction in order to do its job better in the future.
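The escalate-then-learn loop described above can be sketched as follows. This is a schematic under stated assumptions: the 0.8 threshold, the field names and the simple lookup table standing in for model retraining are all illustrative.

```python
# Sketch of the confidence-threshold pattern: extracted fields below a set
# confidence are escalated to a human reviewer, and the reviewer's answer
# is stored so the same case is handled automatically next time.
# The threshold and field names are illustrative assumptions.

CONFIDENCE_THRESHOLD = 0.8
learned_answers = {}  # simple feedback store standing in for retraining

def resolve_field(doc_id, field, value, confidence, ask_human):
    if confidence >= CONFIDENCE_THRESHOLD:
        return value                      # confident: pass straight through
    if (doc_id, field) in learned_answers:
        return learned_answers[(doc_id, field)]
    answer = ask_human(doc_id, field, value)   # escalate the uncertain field
    learned_answers[(doc_id, field)] = answer  # learn from the interaction
    return answer

# A reviewer corrects a low-confidence (and invalid) date reading.
reviewer = lambda doc, field, guess: "2017-03-14"
print(resolve_field("INV-001", "invoice_date", "2017-03-44", 0.55, reviewer))
# 2017-03-14
print(resolve_field("INV-002", "invoice_date", "2017-03-15", 0.97, reviewer))
# 2017-03-15
```

The key design point is that the human is only consulted below the threshold, so the share of documents processed fully automatically grows as the feedback accumulates.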

The second constraint for RPA is that it can’t make complex decisions, i.e. it can’t use judgement in a process. Some decisions are relatively straightforward and can certainly be handled by RPA, especially if they involve applying rules-based scores to a small number of specific criteria. For example, you may only offer a loan to someone who is over 18, is employed and owns a house – if they satisfy all of these criteria (the data for which would be available on your internal or external systems) then they pass the test. You could even apply some weightings, for example, so that they score better as they get older and earn more money. A simple calculation could decide whether the customer scores over a certain threshold or not.
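The simple weighted test just described might look like this in code. The criteria, weights and pass threshold are invented purely to mirror the example in the text, not drawn from any real lending policy.

```python
# Minimal sketch of the rules-based loan scoring described above.
# Criteria, weights and the pass threshold are illustrative inventions.

def loan_score(applicant):
    score = 0.0
    if applicant["age"] >= 18:       # baseline eligibility criteria
        score += 1.0
    if applicant["employed"]:
        score += 2.0
    if applicant["owns_house"]:
        score += 2.0
    # weightings: older applicants and higher earners score slightly better
    score += min(applicant["age"] / 100, 0.5)
    score += min(applicant["income"] / 100_000, 1.0)
    return score

THRESHOLD = 4.0  # the applicant must score over this to pass

applicant = {"age": 35, "employed": True, "owns_house": True, "income": 40_000}
print(loan_score(applicant) >= THRESHOLD)  # True
```

With only a handful of fixed criteria like this, plain RPA suffices; it is when the criteria multiply and their relevance varies per customer, as the next paragraph describes, that cognitive reasoning earns its keep.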

But what about when the judgement required is more complex? There might be 20, or 50, different criteria to consider, all with different weightings. Some could be more relevant for some customers, and for others certain criteria could be completely irrelevant. This is where another type of AI, usually called ‘cognitive reasoning’, can be used to support and augment the RPA process.

Cognitive reasoning engines work by mapping all of the knowledge and experience that a subject matter expert may have about a process into a model. That model, a knowledge map, can then be interrogated by other humans or by robots, to find the optimal answer. In my earlier loan example, a cognitive reasoning engine would be able to consider many different variables, each with its own influence, or weighting, in order to decide whether the loan should be approved or not. This ‘decision’ would be expressed as a confidence level; if it was not confident enough it could request additional information (through a chatbot interface if dealing with a human, or by using RPA to access other systems where the data might be held) to help it increase its confidence level.

Of course, AI does many more things than the two capabilities I have described here. I’ve already mentioned chatbots, which can be used to interface between humans and other systems by typing in natural language, but there is also speech recognition, which serves similar purposes over the telephone. As well as understanding natural language, AI can also generate it, creating coherent passages of text from data and information it is given. Through ‘predictive analytics’, data created and collated by RPA can be used to help predict future behaviours. AI can also recognise images, such as faces, and can learn and plan scenarios for new problems that it encounters.

The crucial thing to remember about AI capabilities is that they are very narrow in what they can do. Each of the examples I have given is very distinct, so an AI that can recognise faces, for example, can’t generate text. The AI system that DeepMind created last year to beat the best player in the world at the Chinese game of Go would lose to you at a simple game of noughts and crosses. AI therefore needs to be considered in terms of its specific capabilities and how these might be combined to create a full solution.

As we have seen, RPA can deliver some significant benefits all by itself, but the real magic comes when the two work together. AI opens up many more processes for robotic process automation, and allows much more of the process to be automated, including where decisions have to be made.

And it goes beyond simply automating processes. Using RPA and AI, the whole process can be re-engineered. Parts of the process that may originally have been expensive to execute suddenly become much easier and cheaper to run – they could therefore potentially be done right at the start, rather than waiting until the end. Credit checks, for example, are usually carried out only once other steps and checks in a process are completed, so as to minimise the number of times they have to be done. But if a check is automated, and therefore costs only a marginal amount, why not do it straight away at the beginning for every case?

Some existing processes are held until late in the day, because it is easier for the staff to process them in bulk, especially if it means logging into multiple systems to extract information from them for each case. This means that turnaround times for cases that arrive in the morning are longer than they need to be. An automated solution on the other hand can log into the relevant systems many times a day to extract the information as soon as it is available. The relevant decisions, made through AI, can then be made sooner and more effectively, improving turnaround times and customer satisfaction.

As I mentioned at the beginning of this piece, there is certainly plenty of excitement around automation right now, but it is very important to have a solid and sober understanding of what the different automation capabilities are. As you start your automation journey it is therefore crucial to consider all types of automation in your strategy, and how they can support and augment each other to achieve your business objectives.

Source: disruptionhub.com-Robotic Process Automation & Artificial Intelligence – Disruption

How to Capitalize on Robotics: Savings Drivers with Digital Labor

For many of today’s organizations, moving forward with digital labor is no longer a question of if, but when.

Companies know they need to jump on this trend as a differentiator. Digital labor, which encompasses robotic process automation, is the application of software technology to automate business processes ranging from transactional swivel-chair activities to more complex strategic undertakings.

However, like any other business decision, a business case needs to be made for digital labor and robotics efforts, which is built first by understanding the investment and financial savings opportunities, says David B. Kirk, PhD, Managing Director, Digital Labor / Robotic Process Automation – Shared Services and Outsourcing Advisory at KPMG. A case can’t be created, he explains, without understanding both the “cost to achieve” and the anticipated benefits – which includes direct cost savings as well as more qualitative benefits, such as improved customer satisfaction.

Digital labor: A financial puzzle

Understanding the investments and expected returns for digital labor is complicated by the fact that no two automation opportunities are the same – that is, your mileage will vary. In addition, digital labor can be categorized into three classes that require different investments and provide returns varying not only in magnitude, but also in the drivers that impact those savings. Basic Robotic Process Automation (RPA) leverages capabilities such as workflow, rules engines, and screen scraping/data capture to automate existing manual processes. Enhanced Process Automation leverages additional capabilities to address automation of processes that are less structured and often more specialized. Finally, Cognitive Automation combines advanced technologies such as natural language processing, artificial intelligence, machine learning, and data analytics to mimic human activities.

There are challenges in several of these areas, says Kirk. On the robotic process automation (RPA) end of the spectrum, one is “the simplicity of the deployment which can result in its adoption across the enterprise being explosive and disjointed, resulting in unnecessary expense and missed opportunities,” he says. On the cognitive side, the journey to get there is more complex, requiring proper guidance. “Predicting both the investment and anticipated outcomes is more of an art than a science,” he adds.

In order to solve the digital labor puzzle and glean the right understanding, organizations need to have a plan. “Understand that you need alignment between your opportunity, your appetite for both change and technology, and the skillsets you either have internally or are willing to purchase,” says Kirk. Also, organizations must recognize from the very beginning that digital labor is an enterprise-wide opportunity and is worth an enterprise-wide strategy.

Opportunities and capabilities of digital labor and automation

One of the biggest opportunities for RPA, which automates repetitive, routine explicit steps, is providing a “quick hit” automation fix for connecting disparate legacy systems together, where a human takes data from one system and then uses that data to perform activities in another system.

Enhanced process automation is similar, but it adds on other capabilities such as the ability to handle unstructured data, or built-in automations (such as an out-of-the-box knowledge library), as well as capabilities to assist in capturing new knowledge to add to the knowledge base (such as watch and record).  It is most applicable in automating activities in a specific functional area, in which the built-in knowledge can be leveraged, such as in finance or IT.

Cognitive tools are substantially different, he adds. “Those need to be taught about the work they will do, as opposed to programmed, and their future success depends greatly on the success of this training,” he explains.

Foundational and specific savings drivers

There are certainly some common foundational drivers that will impact the overall success and financial returns of digital labor investment. An important one is executive support, Kirk points out, in order to build an enterprise-wide plan that avoids duplication of investments and promotes best practices to maximize savings.

In addition, governance is critical. “Governance ensures participants deliver on the business case and associated savings, leverage the agreed-upon tools and methodologies, and follow the risk, compliance and security policies to avoid unnecessary risk and expense downstream,” says Kirk.

There are also specific savings drivers for each class of digital labor, with “triggers” that identify opportunities for automation and the degree of the associated savings impact. For example, in the RPA space, processes that follow well-defined steps, are prone to human error, suffer from inconsistent execution, have a high execution frequency and require significant human effort are likely to provide the most significant impact when automated.
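One practical way to use triggers like these is as a rough screening score when ranking candidate processes. The sketch below is a hedged illustration only: the trigger names and weights are invented, not part of any formal KPMG methodology.

```python
# A rough candidate-screening sketch built from RPA triggers like those
# listed above; the trigger names and weights are invented for illustration.

TRIGGERS = {
    "well_defined_steps": 3,
    "prone_to_human_error": 2,
    "inconsistent_execution": 2,
    "high_frequency": 3,
    "high_human_effort": 3,
}

def rpa_candidate_score(process_traits):
    """Sum the weights of the triggers a candidate process exhibits."""
    return sum(w for trigger, w in TRIGGERS.items() if process_traits.get(trigger))

# A hypothetical invoice-matching process hitting four of the five triggers.
invoice_matching = {
    "well_defined_steps": True,
    "prone_to_human_error": True,
    "high_frequency": True,
    "high_human_effort": True,
}
print(rpa_candidate_score(invoice_matching))  # 11
```

Ranking processes by a score like this gives a shortlist for the small proof-of-concept start that Kirk recommends later in the piece.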

Next, enhanced process automation tends to be more expensive than basic process automation, but as a result of built-in learning support, savings also tend to increase more substantially over time. Its biggest savings drivers are the availability of industry- or process-specific starting knowledge, complex processes, automation expertise, and rapidly evolving processes.

Cognitive process automation also is more expensive, but also provides enhanced savings capabilities and can be truly transformative, with savings drivers such as natural language; automation experience; highly regulated domains; and quality source documents.

Preparing for the digital labor journey to capitalize on savings

How can organizations best prepare for their digital labor journey in terms of capitalizing on savings?  Starting small is key, says Kirk. “We advise our clients to identify an executive sponsor and understand what that means from an enterprise deployment perspective,” he explains, “as that helps define required roadmap activities and challenges.” It can help to pinpoint a handful, or fewer, of processes that are good RPA candidates and canvass the associated automation tools for a good fit for your opportunity and your organization.

Also, companies should understand, and document, the mission statement for the automation of each process – “It’s not always about pure cost savings,” explains Kirk  – and use these processes as a proof of concept with a well-defined business case.  “As business units see the results of the proof of concept automations, prepare for the onslaught of requests by leveraging a well-defined intake process and centralized governance,” he says.

Source: cio.com-How to Capitalize on Robotics: Savings Drivers with Digital Labor

Rise of the machines – The future of robotics and automation

So many of the tasks that we now take for granted once had to be done manually. Washing a load of laundry no longer takes all day; our phone calls are directed to the correct departments by automated recordings; and many of our online orders are now selected and packed by robots.

Developments in this area are accelerating at an incredible rate. But as exciting as these new discoveries may be, they raise question after question about whether the research needed to deliver such innovations is viable, both from an economic and an ethical point of view.

As expert manufacturers of engineering parts that help to keep hundreds of different automated processes up and running, electronic repair specialists Neutronic Technologies are understandably very interested in where the future is going to take us. Is it going to take hundreds, if not thousands, of years for us to reach the kinds of automation that are lodged in the imaginations of sci-fi enthusiasts? Or are we a great deal closer to a machine takeover than we think?

According to the International Federation of Robotics, there are five countries in the developed world that manufacture at least 70 per cent of our entire robotics supply: Germany, the United States, South Korea, China and Japan.

By 2018, the Federation of Robotics predicts that there will be approximately 1.3 million industrial robots working in factories around the world. That’s less than two years away.

The development of automation has received a great deal more attention over the past few years. Undoubtedly, what has brought it to people’s attention is the popularisation of the subject through science fiction books and films such as Isaac Asimov’s ‘I, Robot’ and ‘The Bicentennial Man’. The genre has continued to flourish through the decades and has likely only heightened our curiosity about the world of robots.

Why are we even exploring robotics?

Developing robotics is the next stage in our search for automation. We already have automation integrated into so many aspects of our daily lives, from doors that open via motion sensors to assembly lines and automobile production; robotics is simply the next step along that path.

I predict that the biggest developments in the automation world will come from the automobile industry – so the likes of self-driving cars that are already being tested – and the internet.

Another area of development within automation is likely to come from the growth of the internet. The concept of the ‘Internet of Things’ has been gaining momentum for years now, even decades among technology companies, but the idea has only recently started to break into mainstream conversation.

We have already seen glimpses of the future starting to creep into reality, most notably with the introduction of Amazon Dash. Each Dash button is linked to the customer’s account and programmed for a certain item; all you have to do is press the button and an order is placed and delivered. Of course, this process is currently only half automated – a button still has to be manually pressed and Amazon shippers still post and deliver the item – but it certainly shows the direction in which we are headed.

But ultimately the Internet of Things can go even further than creating smart homes. The term ‘smart cities’ has been coined for urban environments that could theoretically include connected traffic lights to control vehicle flow, smart bins that inform the right people when they need to be emptied, and even the monitoring of crops growing in fields.

How do we reach these automation goals?

Ultimately, the end goal of any research into robotics or automation is to emulate the actions of humans. People across the world engage in heated debates about whether machines will ever have the ability to think like people – a subject known as AI, or artificial intelligence, which is worthy of its own exploration. Whether that will become a reality we cannot currently tell for sure, but researchers across the world are hard at work trying to inch us closer.

There are, of course, issues that arise when we try to develop machines to take over certain tasks from humans, most notably to do with quality control and the increased margin for error. Some question whether a machine that doesn’t necessarily have the capacity to consider extenuating circumstances, raise concerns, or react appropriately to the unexpected would be able to perform these tasks.

Let’s look at self-driving cars, for example. So much of driving depends on the person behind the wheel being able to react in seconds to any changes around them. It is, therefore, essential that machines are able to “think” as closely as possible to the way humans do. If artificial intelligence and technology alone cannot achieve this, it would be very difficult for such vehicles to become road legal. However, experts in the industry have suggested a very clever solution.

Are there any disadvantages to the research?

As with any major development, there are always going to be people who oppose it, or at the very least point out reasons why we should proceed with caution – and with good reason.

One of the biggest, and indeed most realistic, fears that many people express concerns economics and jobs. It’s no secret that the UK’s economy, and indeed the world’s, has been somewhat shaky over the past few years. This has led many people to worry that the development of automated processes, which can perform certain tasks faster and with precision and accuracy surpassing humans, will make many people’s jobs redundant.

Where are we headed?

It is unlikely that we are going to see any robot uprisings anytime soon. But the potential threats that an increase in automation brings to our society should not be underestimated. With the economic state of the world already so fragile, any attempts to research areas that could result in unemployment should be very carefully considered before implementation.

That being said, we are living in exciting times, able to witness such developments taking place. So much has already happened over the past few years that many people may not be aware of. We may not have reached the level of development seen in the movies – not yet anyway – but with the number of ideas and the amount of research taking place around the world, the sky really is the limit.

Source: itproportal.com – Rise of the machines – The future of robotics and automation

The Countries Most (and Least) Likely to be Affected by Automation

Around the world, automation is transforming work, business, and the economy. China is already the largest market for robots in the world, based on volume. All economies, from Brazil and Germany to India and Saudi Arabia, stand to gain from the hefty productivity boosts that robotics and artificial intelligence will bring. The pace and extent of adoption will vary from country to country, depending on factors including wage levels. But no geography and no sector will remain untouched.

In our research we took a detailed look at 46 countries, representing about 80% of the global workforce. We examined their automation potential today — what’s possible by adapting demonstrated technologies — as well as the potential similarities and differences in how automation could take hold in the future.

Today, about half the activities that people are paid to do in the global economy have the potential to be automated by adapting demonstrated technology. As we’ve described previously, our focus is on individual work activities, which we believe to be a more useful way to examine automation potential than looking at entire jobs, since most occupations consist of a number of activities with differing potential to be automated.

In all, 1.2 billion full-time equivalents and $14.6 trillion in wages are associated with activities that are automatable with current technology. This automation potential differs among countries, ranging from 40% to 55%.

The differences reflect variations in sector mix and, within sectors, the mix of jobs with larger or smaller automation potential. Sector differences among economies sometimes lead to striking variations, as is the case with Japan and the United States, two advanced economies. Japan has an overall automation potential of 55% of hours worked, compared with 46% in the United States. Much of the difference is due to Japan’s manufacturing sector, which has a particularly high automation potential, at 71% (versus 60% in the United States). Japanese manufacturing has a slightly larger concentration of work hours in production jobs (54% of hours versus the U.S.’s 50%) and office and administrative support jobs (16% versus 9%). Both of these job titles comprise activities with a relatively high automation potential. By comparison, the United States has a higher proportion of work hours in management, architecture, and engineering jobs, which have a lower automation potential since they require application of specific expertise such as high-value engineering, which computers and robots currently are not able to do.

On a global level, four economies — China, India, Japan, and the United States — dominate the total, accounting for just over half of the wages and almost two-thirds the number of employees associated with activities that are technically automatable by adapting demonstrated technologies. Together, China and India may account for the largest potential employment impact — more than 700 million workers between them — because of the relative size of their labor forces. Technical automation potential is also large in Europe: According to our analysis, more than 60 million full-time employee equivalents and more than $1.9 trillion in wages are associated with automatable activities in the five largest economies (France, Germany, Italy, Spain, and the United Kingdom).

We also expect to see large differences among countries in the pace and extent of automation adoption. Numerous factors will determine automation adoption, of which technical feasibility is only one. Many of the other factors are economic and social, and include the cost of hardware or software solutions needed to integrate technologies into the workplace, labor supply and demand dynamics, and regulatory and social acceptance. Some hardware solutions require significant capital expenditures and could be adopted faster in advanced economies than in emerging ones, where lower wages make the business case for adoption harder. But software solutions could be adopted rapidly around the world, particularly those deployed through the cloud, reducing the lag in adoption time. The pace of adoption will also depend on the benefits that countries expect automation to bring beyond labor substitution, such as the potential to enhance productivity, raise throughput, and improve accuracy.

Regardless of the timing, automation could be the shot in the arm that the global economy sorely needs in the decades ahead. Declining birthrates and the trend toward aging in countries from China to Germany mean that peak employment will occur in most countries within 50 years. The expected decline in the share of the working-age population will open an economic growth gap that automation could potentially fill. We estimate that automation could increase global GDP growth by 0.8% to 1.4% annually, assuming that people replaced by automation rejoin the workforce and remain as productive as they were in 2014. Considering the labor substitution effect alone, we calculate that, by 2065, the productivity growth that automation could add to the largest economies in the world (G19 plus Nigeria) is the equivalent of an additional 1.1 billion to 2.2 billion full-time workers.

The productivity growth enabled by automation can ensure continued prosperity in aging nations and could provide an additional boost to fast-growing ones. However, automation on its own will not be sufficient to achieve long-term economic growth aspirations across the world. For that, additional productivity-boosting measures will be needed, including reworking business processes or developing new products, services, and business models.

How could automation play out among countries? We have divided our 46 focus nations into three groups, each of which could use automation to further national economic growth objectives, depending on its demographic trends and growth aspirations. The three groups are:

  • Advanced economies. These include Australia, Canada, France, Germany, Italy, Japan, South Korea, the United Kingdom, and the United States. They typically face an aging workforce, though the decline in working-age population growth is more immediate in some (Germany, Italy, and Japan) than in others. Automation can provide the productivity boost required to meet economic growth projections that they otherwise would struggle to attain. These economies thus have a major interest in pursuing rapid automation development and adoption.
  • Emerging economies with aging populations. This category includes Argentina, Brazil, China, and Russia, which face economic growth gaps as a result of projected declines in the growth of their working population. For these economies, automation can provide the productivity injection needed to maintain current GDP per capita. To achieve a faster growth trajectory that is more commensurate with their developmental aspirations, these countries would need to supplement automation with additional sources of productivity, such as process transformations, and would benefit from rapid adoption of automation.
  • Emerging economies with younger populations. These include India, Indonesia, Mexico, Nigeria, Saudi Arabia, South Africa, and Turkey. The continued growth of the working-age population in these countries could support maintaining current GDP per capita. However, given their high growth aspirations, and in order to remain competitive globally, automation plus additional productivity-raising measures will be necessary to sustain their economic development.

For all the differences between countries, many of automation’s challenges are universal. For business, the performance benefits are relatively clear, but the issues are more complicated for policy makers. They will need to find ways to embrace the opportunity for their economies to benefit from the productivity growth potential that automation offers, putting in place policies to encourage investment and market incentives to encourage innovation. At the same time, all countries will need to evolve and create policies that help workers and institutions adapt to the impact on employment.

 

Source: Harvard Business Review – The Countries Most (and Least) Likely to be Affected by Automation

Bill Gates Is Wrong: The Solution to AI Taking Jobs Is Training, Not Taxes

Let’s take a breath: Robots and artificial intelligence systems are nowhere near displacing the human workforce. Nevertheless, no less a voice than Bill Gates has asserted just the opposite and called for a counterintuitive, preemptive strike on these innovations. His proposed weapon of choice? Taxes on technology to compensate for losses that haven’t happened.

AI has massive potential. Taxing this promising field of innovation is not only reactionary and antithetical to progress, it would discourage the development of technologies and systems that can improve everyday life.

Imagine where we would be today if policy makers, fearing the unknown, had feverishly taxed personal computer software to protect the typewriter industry, or slapped imposts on digital cameras to preserve jobs for darkroom technicians. Taxes to insulate telephone switchboard operators from the march of progress could have trapped our ever-present mobile devices on a piece of paper in an inventor’s filing cabinet.

There simply is no proof that levying taxes on technology protects workers. In fact, as former US treasury secretary Lawrence Summers recently wrote, “Taxes on technology are more likely to drive production offshore than create jobs at home.”

Calls to tax AI are even more stunning because they represent a fundamental abandonment of any responsibility to prepare employees to work with AI systems. Those of us fortunate enough to influence policy in this space should demonstrate real faith in the ability of people to embrace and prepare for change. The right approach is to focus on training workers in the right skills, not taxing robots.

There are more than half a million open technology jobs in the United States, according to the Department of Labor, but our schools and universities are not producing enough graduates with the right skills to fill them. In many cases, these are “new collar jobs” that, rather than calling for a four-year college degree, require sought-after skills that can be learned through 21st century vocational training, innovative public education models like P-TECH (which IBM pioneered), coding camps, professional certification programs and more. These programs can prepare both students and mid-career professionals for new collar roles ranging from cybersecurity analyst to cloud infrastructure engineer.

At IBM, we have seen countless stories of motivated new collar professionals who have learned the skills to thrive in the digital economy. They are former teachers, fast food workers, and rappers who now fight cyber threats, operate cloud platforms and design digital experiences for mobile applications. WIRED has even reported how, with access to the right training, former coal miners have transitioned into new collar coding careers.

The nation needs a massive expansion of the number and reach of programs students and workers can access to build new skills. Closing the skills gap could fill an estimated 1 million US jobs by 2020, but only if large-scale public private partnerships can better connect many more workers to the training they need. This must be a national priority.

First, Congress should update and expand career-focused education to help more people, especially women and underrepresented minorities, learn in-demand skills at every stage. This should include programs to promote STEM careers among elementary students, which increase interest and enrollment in skills-focused courses later in their educational careers. Next, high-school vocational training programs should be reoriented around the skills needed in the labor market. And updating the Federal Work-Study program, something long overdue, would give college students meaningful, career-focused internships at companies rather than jobs in the school cafeteria or library. Together, high-school career training programs and college work study receive just over $2 billion in federal funding. At around 3 percent of total federal education spending, that’s a pittance. We can and must do more.

Second, Congress should create and fund a 21st century apprenticeship program to recruit and train or retrain workers to fill critical skills gaps in federal agencies and the private sector. Allowing block grants to fund these programs at the state level would boost their effectiveness and impact.

Third, Congress should support standards and certifications for new collar skills, just as it has done for other technical skills, from automotive technicians to welders. Formalizing these national credentials and accreditation programs will help employers recognize that candidates are sufficiently qualified, benefiting workers and employers alike.

Taking these steps now will establish a robust skills-training infrastructure that can address America’s immediate shortage of high-tech talent. Once this foundation is in place, it can evolve to focus on new categories of skills that will grow in priority as the deployment of AI moves forward.

AI should stand for augmented—not artificial—intelligence. It will help us make digital networks more secure, allow people to lead healthier lives, better protect our environment, and more. Like steam power, electricity, computers, and the internet before it, AI will create more jobs than it displaces. What workers really need in the era of AI are the skills to compete and win. Providing the architecture for 21st century skills training requires public policies based on confidence, not taxes based on fear.

Source: Wired – Bill Gates Is Wrong: The Solution to AI Taking Jobs Is Training, Not Taxes

Don’t Confuse Uncertainty with Risk

We are living in a digital era increasingly dominated by uncertainty, driven in part by the rise of exponential change. The problem is, we are generally clogging up the gears of progress and growth in our companies by treating that uncertainty as risk and by trying to address it with traditional mitigation strategies. The economist Frank Knight first popularized the differentiation between risk and uncertainty almost a century ago. Though it is a dramatic oversimplification, one critical difference is that risk is – by definition – measurable while uncertainty is not. [1]

 

Proof and Confidence. One way to parse uncertainty from risk, and in turn to assess differing levels of risk, is to consider what it should take for your organization to make a certain strategic move. One dimension of this is the “level of evidence required” in order to make the move. In other words, what amount of data and supporting information is necessary to understand the contours of the unknown and to shift from inaction to action?

The first dimension is the “level of evidence required” in order to make the move. The second dimension is the “level of confidence” that we have in making the move in the first place.

Risk in the Known or Knowable. Since anything that can be called risky is measurable (e.g. via scenario modeling, financial forecasting, sensitivity analysis, etc.), it is by definition close enough to the standard and “knowable” business of today. Uncertainty is the realm outside of that: it’s “unknowable” and not measurable.

In risky areas, the level of analysis we do – and how much time we take to try to understand the risk and make a decision – should vary. The graphic above frames the three levels of risk described in more detail below, along with examples from some of our clients of the types of projects that we see falling into these categories:

 

1. Risky, Without Precursor – These are moves for which there is no “precursor” or analog that we have seen from elsewhere. We really want to do our homework when opportunities fall here, as exposure (e.g. financial or reputational) is high, and we have very little experience with the move and/or supporting data in the form of others’ success stories, analogs, etc.

Typical initiative: Collaborative and/or ecosystem-driven solution development – The City of Columbus was awarded the U.S. Department of Transportation’s $40MM Smart City Challenge in June of last year. The competition involved submissions from 78 cities “to develop ideas for an integrated, first-of-its-kind smart transportation system that would use data, applications, and technology to help people and goods move more quickly, cheaply, and efficiently.”[2] The solutions that were envisioned as part of the challenge were generally known (or at least had an identifiable development path) but required a complex ecosystem to deliver them. Columbus was awarded the prize because they created a compelling vision and because they were able to bring the right “burden of proof” to the USDOT that they would be able to pull it off – i.e. that they had ways to manage down the execution risk.

 

2. Risky, With Precursor – Exposure may be high, but we are highly confident about making the move. The argument for why the move makes sense should be reasonably straightforward.

Typical initiative: Sensor-based business models and data monetization – A major aerospace sub-system provider had long been an industry leader in developing high-tech industrial parts and products. In recent years, new competitors had been coming online, and the company knew they needed to innovate to stay ahead of the game. In one initiative, they began adding sensors to their aircraft and aerospace products, initially for predictive maintenance needs. As they began rolling this out, they realized the data could be valuable in many other ways and actually create a whole new source of revenue from a whole new customer: pilots. Using this data, they decided to build a mobile platform that would allow pilots to view operating information from the parts and understand better ways to fly from point A to point B. The level of evidence they had was high – it was clear from many other industries that data could be used in this way to produce business value, but the confidence that it was the right decision for the brand was low at the beginning. They had to test it to find out. In this case, it was enormously successful, opening up a new business model and customer set that the company had never served before.

 

3. Low Risk, No Brainer – This is the domain of “just go do it,” perhaps because lots of solutions exist already and the opportunity for immediate economic value is high. There isn’t much reason to go study this to death.

Typical initiative: Robotic Process Automation (RPA) – RPA technology is essentially a software robot that has been coded to do repetitive, highly logical and structured tasks. It has been around for a while, and there are extensive examples and case studies across industries, especially in banking. So, when JP Morgan decided to look into using softbots to automate higher-order processes within investment banking, the right decision seemed obvious.[3] With growing pressure on margins, and with the industry’s success in automating structured tasks, extending automation technology to more demanding work seemed like a logical next step. It was clear this was where the industry was going, and it was just a matter of time before all competitors would be doing it. Choosing not to innovate seemed like the bigger risk in this situation.
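Under the hood, a “softbot” of this kind is just deterministic code following explicit rules over structured records. A minimal sketch in Python – the invoice/payment fields and the tolerance rule here are hypothetical, not taken from any vendor’s RPA tooling:

```python
# Minimal sketch of a rule-based "softbot": deterministic matching of
# structured records. Fields and tolerance rule are hypothetical examples.

def reconcile(invoices, payments, tolerance=0.01):
    """Match each invoice to a payment by reference; flag exceptions."""
    payments_by_ref = {p["ref"]: p for p in payments}
    matched, exceptions = [], []
    for inv in invoices:
        pay = payments_by_ref.get(inv["ref"])
        if pay is None:
            exceptions.append((inv["ref"], "no payment found"))
        elif abs(pay["amount"] - inv["amount"]) > tolerance:
            exceptions.append((inv["ref"], "amount mismatch"))
        else:
            matched.append(inv["ref"])
    return matched, exceptions

invoices = [{"ref": "A1", "amount": 100.0}, {"ref": "A2", "amount": 250.0}]
payments = [{"ref": "A1", "amount": 100.0}, {"ref": "A2", "amount": 200.0}]
matched, exceptions = reconcile(invoices, payments)
print(matched)     # ['A1']
print(exceptions)  # [('A2', 'amount mismatch')]
```

The point is that the logic is fixed and auditable – exactly the kind of repetitive, structured task described above – rather than learned from data.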

 

Dealing with the Uncertainty Quadrant. This is the domain of the “unknowable.” Operating in this space, many companies spend lots of time running around collecting data to reduce risk, in the attempt to make it more knowable. But if the action is truly uncertain, extensive research to lower your risk is just a waste of time.

The only way to consider a highly uncertain action is to “just go do it” – usually through prototyping and market testing – but in a way that minimizes financial or reputational exposure. Consider an old story about Palm Computing, a favorite of my friend Larry Keeley’s. As I have heard Larry tell it, the genesis story of Palm is rooted in a condition we are all too familiar with today: a low-level hypothesis that digital would matter when it came to being organized and connected, but with a high degree of uncertainty about how that would (and should) play out. This was a time of “spontaneous simultaneity” as various players worked with designs and technological solutions. The one who got it right (for a time) was the one who just did it.

Jeff Hawkins, one of the founders of Palm, epitomized the activity of prototyping. The (perhaps apocryphal) story is that in the very early days, Jeff would work in his garage to cut multiple pieces of balsa wood into organizer-shaped rectangles. He would load a bunch of those into his shirt pocket and carry them to meetings, sketching on each one in the moment something that occurred to him as being particularly helpful at the time. Contact entry, instant contact sharing, notes, calendar access, etc.: all started to appear on pieces of wood and craft an overall vision for the most important functionality to be built into the Palm. And unlike computers of the era, he discovered the criticality of instant-on functionality. To steal a phrase from the design world, the device ended up being “well-behaved” from the beginning because it was founded upon how people actually interacted. The rise and fall of Palm is a much longer story. But in the early days, Hawkins demonstrated the handling of uncertainty while minimizing exposure exquisitely.

As we carry these principles back into our organizations, discussion of whether something is risky or simply uncertain is almost “certainly” going to drift quickly towards the semantic. We should start training ourselves and our organizations to talk more about the level of evidence required (not to mention whether proof is even attainable) and level of confidence, and less about how risky something seems. With this approach, we might actually be able to start thriving in a world that is increasingly uncertain.

 

Source: Huffington Post – Don’t Confuse Uncertainty with Risk

Please Don’t Hire a Chief Artificial Intelligence Officer

Every serious technology company now has an Artificial Intelligence team in place. These companies are investing millions into intelligent systems for situation assessment, prediction analysis, learning-based recognition systems, conversational interfaces, and recommendation engines. Companies such as Google, Facebook, and Amazon aren’t just employing AI, but have made it a central part of their core intellectual property.

As the market has matured, AI is beginning to move into enterprises that will use it but not develop it on their own. They see intelligent systems as solutions for sales, logistics, manufacturing, and business intelligence challenges. They hope AI can improve productivity, automate existing processes, provide predictive analysis, and extract meaning from massive data sets. For them, AI is a competitive advantage, but not part of their core product. For these companies, investment in AI may help solve real business problems but will not become part of customer-facing products. Pepsi, Wal-Mart and McDonald’s might be interested in AI to help with marketing, logistics or even flipping burgers, but that doesn’t mean we should expect to see intelligent sodas, snow shovels, or Big Macs showing up anytime soon.

As with earlier technologies, we are now hearing advice about “AI strategies” and how companies should hire Chief AI Officers. In much the same way that the rise of Big Data led to the Data Scientist craze, the argument is that every organization now needs to hire a C-Level officer who will drive the company’s AI strategy.

I am here to ask you not to do this. Really, don’t do this.

It’s not that I doubt AI’s usefulness. I have spent my entire professional life working in the field. Far from being a skeptic, I am a rabid true believer.

However, I also believe that the effective deployment of AI in the enterprise requires a focus on achieving business goals. Rushing towards an “AI strategy” and hiring someone with technical skills in AI to lead the charge might seem in tune with the current trends, but it ignores the reality that innovation initiatives only succeed when there is a solid understanding of actual business problems and goals. For AI to work in the enterprise, the goals of the enterprise must be the driving force.

This is not what you’ll get if you hire a Chief AI Officer. The very nature of the role aims at bringing the hammer of AI to the nails of whatever problems are lying around. This well-educated, well-paid, and highly motivated individual will comb your organization looking for places to apply AI technologies, effectively making the goal to use AI rather than to solve real problems.

This is not to say that you don’t need people who understand AI technologies. Of course you do. But understanding the technologies and understanding what they can do for your enterprise strategically are completely different. And hiring a Chief of AI is no substitute for effective communication between the people in your organization with technical chops and those with strategic savvy.

One alternative to hiring a Chief AI Officer is to start with the problems. Move consideration of AI solutions into the hands of the people who are addressing the problems directly. If these people are equipped with a framework for thinking about when AI solutions might be applicable, they can suggest where those solutions are actually applicable. Fortunately, the framework for this flows directly from the nature of the technologies themselves. We have already seen where AI works and where its application might be premature.

The question comes down to the data and the task.

For example, highly structured data found in conventional databases with well-understood schemata tends to support traditional, highly analytical machine learning approaches. If you have 10 years of transactional data, machine learning can find correlations between customer demographics and the products they buy.
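As a toy illustration of what "finding correlations" in structured transactional data can mean, the sketch below groups hypothetical purchase records by customer age bracket and computes each product's share of purchases within the bracket. The records, function name, and bracket size are all invented for illustration; a real pipeline would work against the database itself and use proper statistical or machine learning tooling.

```python
from collections import defaultdict

# Hypothetical transactional records: (customer_age, product) pairs.
transactions = [
    (23, "headphones"), (25, "headphones"), (31, "blender"),
    (45, "blender"), (52, "blender"), (19, "headphones"),
    (38, "blender"), (27, "headphones"), (60, "blender"),
]

def purchase_share_by_bracket(records, bracket_size=20):
    """Group purchases into age brackets and compute each product's
    share of purchases within each bracket."""
    counts = defaultdict(lambda: defaultdict(int))
    for age, product in records:
        bracket = (age // bracket_size) * bracket_size
        counts[bracket][product] += 1
    shares = {}
    for bracket, products in counts.items():
        total = sum(products.values())
        shares[bracket] = {p: n / total for p, n in products.items()}
    return shares

shares = purchase_share_by_bracket(transactions)
for bracket in sorted(shares):
    print(f"ages {bracket}-{bracket + 19}: {shares[bracket]}")
```

Even this crude aggregation surfaces the kind of demographic-to-product pattern a real model would quantify more rigorously.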

In cases where you have high volume, low feature data sets (such as images or audio), deep learning technologies are most applicable. So a deep learning approach that uses equipment sounds to anticipate failures on your factory floor might make sense.

If all you have is text, then data extraction, sentiment analysis, and Watson-like approaches to evidence-based reasoning will be useful. Automating intelligent advice based on HR best-practice manuals could fit this model.
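To make "sentiment analysis" concrete, here is a deliberately crude lexicon-based scorer; the word lists and function name are hypothetical, and production systems use far richer models than word counting.

```python
# Tiny illustrative sentiment lexicons (real systems use large,
# weighted lexicons or trained models).
POSITIVE = {"good", "great", "excellent", "satisfied", "helpful"}
NEGATIVE = {"bad", "poor", "late", "unhappy", "broken"}

def sentiment_score(text):
    """Score text as (+1 per positive word) - (+1 per negative word)."""
    words = text.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

print(sentiment_score("great product but delivery was late"))  # positives and negatives cancel: 0
```

The point is only that text carries extractable signal; which technique fits depends on the business question being asked of it.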

And if you have data that is used to support reporting on the status or performance of your business, then natural language generation is the best option. It makes no sense to have an analyst’s valuable time dedicated to analyzing and summarizing all your sales data when you can have perfectly readable English language reports automatically generated by a machine and delivered by email.
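A minimal sketch of what template-based natural language generation looks like: turning two sales figures into a readable English sentence. The function name, region, and figures are invented for illustration; commercial NLG systems vary the phrasing and handle far more complex narratives.

```python
def summarize_sales(region, current, previous):
    """Render a one-sentence English summary of period-over-period sales."""
    change = (current - previous) / previous * 100
    direction = "rose" if change >= 0 else "fell"
    return (f"Sales in {region} {direction} {abs(change):.1f}% "
            f"to ${current:,.0f} from ${previous:,.0f}.")

print(summarize_sales("the Northeast", 135500, 120000))
```

A report built from sentences like this can be generated and emailed on a schedule, freeing the analyst to act on the numbers rather than narrate them.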

If decision-makers throughout your organization understand this, they can look at the business problems they have and the data they’re collecting and recognize the types of cognitive technologies that might be most applicable.

The point here is simple. AI isn’t magic. Specific technologies provide specific functions and have specific data requirements. Understanding them does not require that you hire a wizard or unicorn to deal with them. It does not require a Chief of AI. It requires teams that know how to communicate the reality of business problems with those who understand the details of technical solutions.

The AI technologies of today are astoundingly powerful. As they enter the enterprise, they will change everything. If we focus on applying them to solve real, pervasive problems, we will build a new kind of man-machine partnership that empowers us all to work at the top of our game and realize our greatest potential.

Source: Harvard Business Review, “Please Don’t Hire a Chief Artificial Intelligence Officer”