Making the Case for Employing Software Robots

One of the main tenets of advancing technology is to free up the time and effort workers are often required to put into relatively mundane tasks. Automating processes that once took hours for a person to complete has been a boon to a business’ bottom line while allowing IT workers to focus on tasks more central to advancing a company’s strategic initiatives. When it comes to Robotic Process Automation (RPA), Rod Dunlap, a director at Alsbridge, a global sourcing advisory and consulting firm, understands how RPA tools can positively impact workflow in industries such as health care and insurance. In this interview with CIO Insight, Dunlap expands on the RPA ecosystem, when it makes sense to employ RPA tools—and when it doesn’t.

For those unfamiliar, please describe Robotic Process Automation and explain a basic example of it in use.

RPA tools are software “robots” that use business-rules logic to execute specifically defined, repeatable functions and work processes in the same way that a person would. These include applying various criteria to determine whether, for example, a healthcare claim should be accepted or rejected, whether an invoice should be paid, or whether a loan application should be approved.
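As a rough sketch of the business-rules logic described here (the field names, rules and thresholds below are hypothetical illustrations, not taken from the interview), a claims robot's decision might look like:

```python
# Minimal sketch of rules-based claim triage. All fields, rules and
# dollar thresholds are invented examples of "business rules logic".

def review_claim(claim: dict) -> str:
    """Apply fixed business rules; return 'accept', 'reject' or 'refer'."""
    if not claim.get("policy_active"):
        return "reject"                       # no active policy, no payout
    if claim["amount"] > claim["coverage_limit"]:
        return "reject"                       # exceeds the policy limit
    if claim["amount"] > 10_000:
        return "refer"                        # large claims go to a human
    return "accept"

claim = {"policy_active": True, "amount": 450.0, "coverage_limit": 5_000}
print(review_claim(claim))  # accept
```

The point of the sketch is that every path is an explicit, taught rule; nothing is inferred, which matches the "taught rather than programmed" description later in the interview.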

What makes RPA attractive to businesses?

For one thing, RPA tools are low in cost – a robot that takes on the mundane work of a human healthcare claims administrator, for example, costs between $5K and $15K a year to implement and administer. Another advantage is ease of implementation. Unlike traditional automation tools, RPA systems don’t require extensive coding or programming. In fact, it’s more accurate to say that the tools are “taught” rather than “programmed.” Relatedly, the tools can be quickly and easily adapted to new conditions and requirements. This is critical in, for example, the healthcare space, where insurance regulations are constantly changing. And while the tools require some level of IT support, they don’t have a significant impact on IT infrastructure or resources or require changes to any of the client’s existing applications.

What are the drawbacks of RPA?

RPA tools are limited in terms of their learning capabilities. They can do only what they have been taught to do, and can’t reason or draw conclusions based on what they encounter. RPA tools typically cannot read paper documents or free-form text, or make phone calls. The data for the robots must be structured.

In what industries does RPA make the most sense?

They make sense in any situation that has a high volume of repeatable or predictable outcomes; in other words, where the same task is repeated over and over. We’ve seen a lot of adoption in the Insurance, Financial, Healthcare, Media, Services and Distribution industries.

Where does it make the least sense?

They don’t make sense in situations that have a high volume of one-off or unusual situations. To take the healthcare claims processing example, RPA is ideal for processing up to 90 percent of claims that an insurer receives. The remaining 10 percent of claims are for unusual situations. In these cases, while you could teach the robots the rules to process these claims, it’s more cost-effective to have a human administrator do the review.

If you automate a process once done by humans, and have it perfected by a robot, is it possible for the robot to determine a better way to accomplish the task?

Not with RPA. As mentioned, these tools will execute tasks only in the way in which they were taught. They can’t observe and suggest a different way to do things based on their experience, but what you are suggesting is indeed where the industry is heading.

What sort of data can be learned from RPA?

RPA tools can’t really provide insight from data on their own. They can log detailed data about every transaction they process. This can then be fed into a number of tools that will provide operation statistics. Also, they can work in tandem with more sophisticated cognitive tools that use pattern recognition capabilities to process unstructured data. For example, insurance companies have huge volumes of data sitting on legacy systems in a wide range of formats. Insurers are looking at ways to apply the cognitive tools to process and categorize this data and then use RPA tools to feed the data into new systems. Retailers are looking to apply the tools in similar ways to gain insight from customer data.
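The logging-then-reporting pattern described here can be sketched as follows; the log schema and field names are invented for illustration:

```python
import csv
import io
from collections import Counter

# The robot appends one row per processed transaction (hypothetical schema).
log = io.StringIO()
writer = csv.writer(log)
writer.writerow(["claim_id", "outcome", "seconds"])
for row in [("C1", "accepted", 2.1), ("C2", "rejected", 1.8), ("C3", "accepted", 2.4)]:
    writer.writerow(row)

# A downstream reporting tool reads the log and produces operational statistics.
log.seek(0)
rows = list(csv.DictReader(log))
outcomes = Counter(r["outcome"] for r in rows)
avg = sum(float(r["seconds"]) for r in rows) / len(rows)
print(outcomes, round(avg, 2))
```

The robot itself only records; the insight comes from whatever tool aggregates the log, which is the division of labour the answer describes.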

How much human oversight is needed to ensure mistakes are avoided?

The robots won’t make “mistakes” per se, but oversight is necessary to make sure that the robots are updated to reflect changes in business conditions and rules. An operator, similar to a human supervisor, can start and stop robots, change the tasks they perform and increase throughput all without worrying about who gets the window office.

Source: Making the Case for Employing Software Robots


This is what 9 big names in tech think about the rise of robots

In fulfillment centers around the US, thousands of tiny orange robots sort packages for Amazon. In a California factory, red, multi-armed machines assemble Tesla’s electric vehicles of the future.


This is the world the tech industry is creating.

According to most available data, the next 20 years will involve rapid automation of manual labor and customer service jobs. Millions of employees could be forced to learn new skills or change roles entirely.

Here’s how tech executives are responding to the threat of a robot takeover.


Bill Gates

The Microsoft co-founder believes so strongly in the idea of robots coming for people’s jobs that he’s already begun thinking about how companies ought to pay tax on those robots to make up for lost income tax.

“You cross the threshold of job-replacement of certain activities all sort of at once,” Gates told Quartz recently. “So, you know, warehouse work, driving, room cleanup, there’s quite a few things that are meaningful job categories that, certainly in the next 20 years [will go away].”


Mark Cuban

The “Shark Tank” investor and Dallas Mavericks owner has remarked on several occasions that artificially intelligent robots will kill off jobs in droves in the coming years.

In February, Cuban criticized President Trump’s plans to bring back American factory jobs as a sign of the president’s poor understanding of technology and business.

“People aren’t going to have jobs,” Cuban said. “How does [Trump] deal with displaced workers?”


Vinod Khosla

Khosla, a Sun Microsystems co-founder and prominent venture capitalist, has stated that 80% of IT jobs are at risk of automation in the coming decades.

Many of the jobs Khosla envisions involve rote, repetitive data entry or simple troubleshooting.

“I think that’s exciting,” he said at a November 2016 conference of the impending robot takeover.


Devin Wenig

The eBay president and CEO has said artificial intelligence could eliminate entire industries within the next decade. But he remains optimistic, so long as employers recognize their role in training workers who may get displaced.

“As AI evolves, job training must evolve with it,” he wrote earlier this January. “There are already big shortages in fields closely related to AI, such as data science, engineering and operations.”


Elon Musk

The Tesla CEO told CNBC in a November 2016 interview that he believes robots will take so many jobs by the mid-21st century, the government will start paying people salaries even if they don’t work.

The idea is called universal basic income, and Musk is the latest tech entrepreneur to support the idea as a solution to robotic automation.

“I am not sure what else one would do,” he said. “I think that is what would happen.”


Sam Altman

Another basic income advocate, the Y Combinator president is almost positive robots will dominate industrialized economies in 100 years, and pretty sure they’ll create a big dent within the next 20.

“The question I find myself struggling with the most is what will happen to the economy and to jobs as automation becomes more and more of a powerful force,” Altman said in a recent video chat.


Jeff Bezos

Amazon’s chief executive has embraced the power of AI for years. In the company’s fulfillment centers, more than 45,000 robots ferry packages from one spot to another.

Amazon has also announced plans to build employee-free grocery stores.

“It’s probably hard to overstate how big of an impact it’s going to have on society over the next twenty years,” he said at a recent Code Conference.


Chris Hughes

Hughes, a Facebook co-founder, says a future filled with automated work is inescapable.

“The reality is that work has changed,” he told NPR. Many of the jobs once held by humans are now driven by computers, and increasingly so.

Hughes himself has supported basic income as a solution to the growing inconsistencies (and insecurity) of jobs.


Ray Kurzweil

Google’s engineering director isn’t exactly panicking about the future of work.

Kurzweil sees robots as a force for good, at least in terms of freeing people up to do what it is they love. He has predicted that by the 2030s, AI will outpace biological intelligence and self-driving cars will be everywhere.

“We are going to have new types of jobs creating new types of dollars that don’t exist yet and that has been the trend,” he told Entrepreneur.


Source: This is what 9 big names in tech think about the rise of robots

Robotic Process Automation & Artificial Intelligence – Disruption

Robotic Process Automation & Artificial Intelligence. Two technologies for your business – great alone, better combined

Right now there is plenty of excitement around the huge potential of automation in businesses, particularly regarding Robotic Process Automation (RPA) and Artificial Intelligence (AI). These two technologies have the capability to drive significant, step-change efficiencies as well as generating completely new sources of value for organisations.

But, as businesses look to adopt RPA and AI, and seek to get the most value from these disruptive technologies, they need to have a clear picture as to what they do and don’t do, and how they can work together to deliver even more value.

The first thing to understand is that RPA and AI are very different types of technology, but they complement each other very well. One can use RPA without AI, and AI without RPA, but the combination of the two together is extremely powerful.

So, first to explain what RPA and AI actually are, starting with RPA as it is the easiest to define. Robotic Process Automation is a class of software that replicates the actions of humans operating computer systems in order to run business processes. Because the software ‘robots’ mimic exactly what the human operators do (by logging into a system, entering data, clicking on ‘OK’, copying and pasting data between systems, etc.) the underlying systems, such as ERP systems, CRM systems and Office applications, work exactly as they always have done without any changes required. And because the licenses for the robots are a fraction of the price of employing someone, as well as being able to work 24×7 if need be, the business case, from a cost point of view alone, is very strong.

As well as cost savings, RPA also delivers other important benefits such as accuracy and compliance (the robots will always carry out the process in exactly the same way every time) and improved responsiveness (they are generally faster than humans, and can work all hours). They are also very agile – a single robot can do any rules-based process that you train it on, whether it is in finance, customer services or operations.

Processes that can be automated through RPA need to be rules-based and repetitive, and will generally involve processes running across a number of different systems. Customer on-boarding is a good example of a candidate RPA process since it involves a number of different steps and systems, but all can be defined and mapped. High volume processes are preferable as the business case will be stronger, but low volume processes can be automated if accuracy is crucial.

The important thing to remember though is that RPA robots are dumb. The software may be really clever in terms of what it can achieve, but the robots will do exactly what you have trained them to do each time, every time. This is both their greatest strength and their greatest weakness. A strength because you need to be sure that the robot will carry out the process compliantly and accurately, but a weakness because it precludes any self-learning capability.

This inability to self-learn leads to two distinct constraints for RPA, both of which, luckily, can be addressed by AI capabilities. The first is that the robots require structured data as their input, whether this be from a spreadsheet, a database, a webform or an API. When the input data is unstructured, such as a customer email, or semi-structured where there is generally the same information available but in variable formats (such as invoices) then artificial intelligence can be introduced to turn it into a structured format.

This type of AI capability uses a number of different AI technologies, including Natural Language Processing, to extract the relevant data from the available text, even if the text is written in free-form language, or if the information on a form looks quite different each time. For example, if you wrote an email to an online retailer complaining that the dress that was delivered was a different colour to the one you ordered, then the AI would be able to tell that this was a complaint, that the complaint concerned a dress, and that the problem was the wrong colour. If the order information was not included in the original email then the AI could potentially work out which order it related to by triangulating the information it already has. Once it has gathered everything together, it can then route that query to the right person within the organisation, along with all of the supporting data. Of course, the ‘right person’ could actually be a robot that could reorder the correct colour dress and send an appropriate email to the customer.
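A deliberately naive sketch of the routing idea, using keyword matching rather than real Natural Language Processing (all keywords and categories here are assumptions for illustration):

```python
# Toy intent-and-entity extraction from a customer email.
# Real NLP systems are far more sophisticated; the keyword lists are invented.

def classify_email(text: str):
    """Return (intent, product) inferred from the email text."""
    text = text.lower()
    intent = "complaint" if any(w in text for w in ("wrong", "broken", "refund")) else "query"
    product = next((p for p in ("dress", "shoes", "jacket") if p in text), None)
    return intent, product

print(classify_email("The dress you delivered was the wrong colour."))
# ('complaint', 'dress')
```

Once the unstructured email has been reduced to structured fields like these, a rules-based robot can take over the rest of the process.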

For semi-structured data, the AI is able to extract the data from a form, even when that data is in different places on the document, in a different format or only appears sometimes. For an invoice, for example, the date might be in the top left hand corner sometimes, and other times in the top right. It might also be written longhand, or shortened. The invoice may or may not include a VAT amount, and this may be written above the Total Value or below it. Once trained, the AI is able to cope with all of this variability to a high degree of confidence. If it doesn’t know (i.e. its confidence level is below a certain threshold) then it can escalate to a human being, who can answer the question, and the AI will then learn from that interaction in order to do its job better in the future.
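A toy sketch of variable-format extraction with a confidence threshold and human escalation, as described above; the date patterns, confidence values and threshold are illustrative assumptions:

```python
import re

# Patterns the extractor has been "trained" on, each with an assumed confidence.
DATE_PATTERNS = [
    (r"\b(\d{2}/\d{2}/\d{4})\b", 0.9),   # e.g. 03/04/2017
    (r"\b(\d{1,2} \w+ \d{4})\b", 0.7),   # e.g. 3 April 2017
]
THRESHOLD = 0.6

def extract_date(text: str):
    """Return (date, confidence), or (None, 0.0) to escalate to a human."""
    for pattern, confidence in DATE_PATTERNS:
        match = re.search(pattern, text)
        if match and confidence >= THRESHOLD:
            return match.group(1), confidence
    return None, 0.0  # below threshold: route to a human reviewer

print(extract_date("Invoice date: 03/04/2017  Total: 120.00"))
```

In a learning system, the human's answer on an escalated document would be fed back to add or reweight patterns; this sketch only shows the extract-or-escalate decision.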

The second constraint for RPA is that it can’t make complex decisions, i.e. it can’t use judgement in a process. Some decisions are relatively straightforward and can certainly be handled by RPA, especially if they involve applying rules-based scores to a small number of specific criteria. For example, you may only offer a loan to someone who is over 18, is employed and owns a house – if they satisfy all of these criteria (the data for which would be available on your internal or external systems) then they pass the test. You could even apply some weightings, for example, so that they score better as they get older and earn more money. A simple calculation could decide whether the customer scores over a certain threshold or not.
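The simple weighted-scoring check described in this example can be sketched as follows (all criteria, weights and the threshold are made up for illustration):

```python
# Rules-based loan scoring with simple weightings, as in the example above.

def loan_score(applicant: dict) -> float:
    """Score an applicant against fixed criteria plus age/income weightings."""
    score = 0.0
    if applicant["age"] >= 18:
        score += 1.0
    if applicant["employed"]:
        score += 1.0
    if applicant["owns_house"]:
        score += 1.0
    # Weightings: older and higher-earning applicants score slightly better.
    score += min(applicant["age"] / 100, 0.5)
    score += min(applicant["income"] / 100_000, 0.5)
    return score

APPROVAL_THRESHOLD = 3.0
applicant = {"age": 35, "employed": True, "owns_house": True, "income": 40_000}
print(loan_score(applicant) >= APPROVAL_THRESHOLD)  # True
```

This is exactly the kind of small, explicit calculation that plain RPA can handle without any AI.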

But what about when the judgement required is more complex? There might be 20, or 50, different criteria to consider, all with different weightings. Some could be more relevant for some customers, and for others certain criteria could be completely irrelevant. This is where another type of AI, usually called ‘cognitive reasoning’, can be used to support and augment the RPA process.

Cognitive reasoning engines work by mapping all of the knowledge and experience that a subject matter expert may have about a process into a model. That model, a knowledge map, can then be interrogated by other humans or by robots, to find the optimal answer. In my earlier loan example, a cognitive reasoning engine would be able to consider many different variables, each with its own influence, or weighting, in order to decide whether the loan should be approved or not. This ‘decision’ would be expressed as a confidence level; if it was not confident enough it could request additional information (through a chatbot interface if dealing with a human, or by using RPA to access other systems where the data might be held) to help it increase its confidence level.
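A toy version of the confidence mechanic described here, in which a decision is returned with a confidence level and more information is requested when confidence falls below a threshold (the weights, threshold and criteria names are invented):

```python
# Sketch of a confidence-based decision over weighted criteria.
# Unknown evidence lowers confidence; below the threshold, more data is requested.

def assess(evidence: dict, weights: dict, threshold: float = 0.75):
    """Return (decision, confidence, missing_criteria)."""
    known = {k: v for k, v in evidence.items() if v is not None}
    if not known:
        return "need_more_info", 0.0, list(weights)
    raw = sum(weights[k] * known[k] for k in known)
    confidence = sum(weights[k] for k in known) / sum(weights.values())
    if confidence < threshold:
        missing = [k for k in weights if k not in known]
        return "need_more_info", confidence, missing
    return ("approve" if raw > 0 else "decline"), confidence, []

weights = {"income_ratio": 0.4, "credit_history": 0.4, "employment": 0.2}
decision, confidence, missing = assess(
    {"income_ratio": 1, "credit_history": None, "employment": 1}, weights)
print(decision, missing)
```

Here the missing criteria returned by the engine are exactly what it would go and fetch, via a chatbot question to a human or an RPA lookup in another system.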

Of course AI does many more things than the two capabilities I have described here. I’ve already mentioned chatbots which can be used to interface between humans and other systems through typing in natural language, but there is also speech recognition which is used for similar purposes through the telephone. As well as understanding natural language, AI can also generate it, creating coherent passages of text from data and information that it is given. Through ’predictive analytics’ data created and collated by RPA can be used to help predict future behaviours. AI can also recognise images, such as faces, and can learn and plan scenarios for new problems that it encounters.

The crucial thing to remember about AI capabilities is that they are very narrow in what they can do. Each of the examples I have given is very distinct, so an AI that can recognise faces, for example, can’t generate text. The AI system that DeepMind created last year to beat the best player in the world at the Chinese game of Go would lose to you at a simple game of noughts and crosses. Therefore, AI needs to be considered in terms of its specific capabilities and how these might be combined to create a full solution.

As we have seen, RPA can deliver some significant benefits all by itself, but the real magic comes when the two work together. AI opens up many more processes for robotic process automation, and allows much more of the process to be automated, including where decisions have to be made.

And it goes beyond simply automating processes. Using RPA and AI, the whole process can be re-engineered. Parts of the process that may originally have been expensive to execute suddenly become much easier and cheaper to run; they could therefore potentially be done right at the start, rather than waiting until the end. Credit checks, for example, are usually carried out only once other steps and checks in a process are completed, so as to minimise the number of times they have to be done. But if a check is automated, and therefore carries only a marginal cost, why not do it straight away at the beginning for every case?

Some existing processes are held until late in the day, because it is easier for the staff to process them in bulk, especially if it means logging into multiple systems to extract information from them for each case. This means that turnaround times for cases that arrive in the morning are longer than they need to be. An automated solution on the other hand can log into the relevant systems many times a day to extract the information as soon as it is available. The relevant decisions, made through AI, can then be made sooner and more effectively, improving turnaround times and customer satisfaction.

As I mentioned at the beginning of this piece, there is certainly plenty of excitement around automation right now, but it is very important to have a solid and sober understanding of what the different automation capabilities are. As you start your automation journey it is therefore crucial to consider all types of automation in your strategy, and how they can support and augment each other to achieve your business objectives.

Source: Robotic Process Automation & Artificial Intelligence – Disruption

How to Capitalize on Robotics: Savings Drivers with Digital Labor

For many of today’s organizations, moving forward with digital labor is no longer a question of if, but when.

Companies know they need to jump on this trend as a differentiator. Digital labor encompasses robotic process automation and is the application of software technology to automate business processes, ranging from transactional swivel-chair activities to more complex strategic undertakings.

However, like any other business decision, a business case needs to be made for digital labor and robotics efforts, which is built first by understanding the investment and financial savings opportunities, says David B. Kirk, PhD, Managing Director, Digital Labor / Robotic Process Automation – Shared Services and Outsourcing Advisory at KPMG. A case can’t be created, he explains, without understanding both the “cost to achieve” and the anticipated benefits – which includes direct cost savings as well as more qualitative benefits, such as improved customer satisfaction.

Digital labor: A financial puzzle

Understanding the investments and expected returns for digital labor is complicated by the fact that no two automation opportunities are the same – that is, your mileage will vary. In addition, digital labor can be categorized into three different classes that require different investments and that provide returns varying not only in magnitude, but also in the drivers that impact those savings. Basic Robotic Process Automation (RPA) leverages capabilities such as workflow, rules engines, and screen scraping/data capture to automate existing manual processes. Enhanced Process Automation leverages additional capabilities to address automation of processes that are less structured and often more specialized. Finally, Cognitive Automation combines advanced technologies such as natural language processing, artificial intelligence, machine learning, and data analytics to mimic human activities.

There are challenges in several of these areas, says Kirk. On the robotic process automation (RPA) end of the spectrum, one is “the simplicity of the deployment which can result in its adoption across the enterprise being explosive and disjointed, resulting in unnecessary expense and missed opportunities,” he says. On the cognitive side, the journey to get there is more complex, requiring proper guidance. “Predicting both the investment and anticipated outcomes is more of an art than a science,” he adds.

In order to solve the digital labor puzzle and glean the right understanding, organizations need to have a plan. “Understand that you need alignment between your opportunity, your appetite for both change and technology, and the skillsets you either have internally or are willing to purchase,” says Kirk. Also, organizations must recognize from the very beginning that digital labor is an enterprise-wide opportunity and is worth an enterprise-wide strategy.

Opportunities and capabilities of digital labor and automation

One of the biggest opportunities for RPA, which automates repetitive, routine explicit steps, is providing a “quick hit” automation fix for connecting disparate legacy systems together, where a human takes data from one system and then uses that data to perform activities in another system.

Enhanced process automation is similar, but it adds on other capabilities such as the ability to handle unstructured data, or built-in automations (such as an out-of-the-box knowledge library), as well as capabilities to assist in capturing new knowledge to add to the knowledge base (such as watch and record).  It is most applicable in automating activities in a specific functional area, in which the built-in knowledge can be leveraged, such as in finance or IT.

Cognitive tools are substantially different, he adds. “Those need to be taught about the work they will do, as opposed to programmed, and their future success depends greatly on the success of this training,” he explains.

Foundational and specific savings drivers

There are certainly some common foundational drivers that will impact the overall success and financial returns of digital labor investment. An important one is executive support, Kirk points out, in order to build an enterprise-wide plan that avoids duplication of investments and promotes best practices to maximize savings.

In addition, governance is critical. “Governance ensures participants deliver on the business case and associated savings, leverage the agreed-upon tools and methodologies, and follow the risk, compliance and security policies to avoid unnecessary risk and expense downstream,” says Kirk.

There are also specific savings drivers for each class of digital labor, which have “triggers” that identify opportunities for automation and the degree of the associated savings impact. For example, in the RPA space, processes that follow well-defined steps, are prone to human error, suffer from inconsistent execution, have a high execution frequency and require significant human effort are likely to provide the most significant impact when automated.
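Read as a prioritisation checklist, the triggers named here can be scored to rank candidate processes; a minimal sketch with assumed weights:

```python
# Score candidate processes against the RPA triggers described above.
# The trigger names mirror the article; the weights are invented.
TRIGGERS = {
    "well_defined_steps": 3,
    "error_prone": 2,
    "inconsistent_execution": 2,
    "high_frequency": 3,
    "high_human_effort": 3,
}

def automation_score(process: dict) -> int:
    """Sum the weights of the triggers a process exhibits."""
    return sum(w for trigger, w in TRIGGERS.items() if process.get(trigger))

candidates = {
    "invoice entry":  {"well_defined_steps": True, "high_frequency": True,
                       "error_prone": True, "high_human_effort": True},
    "vendor dispute": {"error_prone": True},
}
ranked = sorted(candidates, key=lambda name: automation_score(candidates[name]),
                reverse=True)
print(ranked)  # ['invoice entry', 'vendor dispute']
```

A real business case would of course weigh cost-to-achieve against these scores, but even a crude ranking like this helps an intake process triage the "onslaught of requests" Kirk mentions later.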

Next, enhanced process automation tends to be more expensive than basic process automation, but as a result of built-in learning support, savings also tend to grow more over time. Its biggest savings drivers are the availability of industry- or process-specific starting knowledge; complex processes; automation expertise; and rapidly evolving processes.

Cognitive process automation also is more expensive, but also provides enhanced savings capabilities and can be truly transformative, with savings drivers such as natural language; automation experience; highly regulated domains; and quality source documents.

Preparing for the digital labor journey to capitalize on savings

How can organizations best prepare for their digital labor journey in terms of capitalizing on savings?  Starting small is key, says Kirk. “We advise our clients to identify an executive sponsor and understand what that means from an enterprise deployment perspective,” he explains, “as that helps define required roadmap activities and challenges.” It can help to pinpoint a handful, or fewer, of processes that are good RPA candidates and canvass the associated automation tools for a good fit for your opportunity and your organization.

Also, companies should understand, and document, the mission statement for the automation of each process – “It’s not always about pure cost savings,” explains Kirk – and use these processes as a proof of concept with a well-defined business case. “As business units see the results of the proof-of-concept automations, prepare for the onslaught of requests by leveraging a well-defined intake process and centralized governance,” he says.

Source: How to Capitalize on Robotics: Savings Drivers with Digital Labor

Rise of the machines – The future of robotics and automation

So many of the tasks that we now take for granted once had to be done manually. Washing a load of laundry no longer takes all day; our phone calls are directed to the correct departments by automated recordings; and many of our online orders are now selected and packed by robots.

Developments in this area are accelerating at an incredible rate. But as exciting as these new discoveries may be, they raise question after question about whether the research needed to deliver such innovations is viable, from both an economic and an ethical point of view.

As expert manufacturers of engineering parts that help to keep hundreds of different automated processes up and running, electronic repair specialists Neutronic Technologies are understandably very interested in where the future is going to take us. Is it going to take hundreds, if not thousands, of years for us to reach the kinds of automation that are lodged in the imaginations of sci-fi enthusiasts? Or are we a great deal closer to a machine takeover than we think?

According to the International Federation of Robotics, there are five countries in the developed world that manufacture at least 70 per cent of our entire robotics supply: Germany, the United States, South Korea, China and Japan.

By 2018, the International Federation of Robotics predicts that there will be approximately 1.3 million industrial robots working in factories around the world. That’s less than two years away.

The development of automation has received a great deal more attention over the past few years. Undoubtedly, what has brought it to people’s attention is the popularisation of the subject following the explosion of science-fiction books and movies such as Isaac Asimov’s ‘I, Robot’ and ‘The Bicentennial Man’. This fascination has continued to grow throughout the decades and has likely only heightened our curiosity about the world of robots.

Why are we even exploring robotics?

Developing robotics is the next stage in our search for automation. We already have automation integrated into so many aspects of our daily lives, from doors that open via motion sensors to assembly lines and automobile production; robotics is simply the next step along that path.

I predict that the biggest developments in the automation world will come from the automobile industry – such as the self-driving cars that are already being tested – and from the internet.

Another area of development within automation is likely to come from the growth of the internet. The concept of the ‘Internet of Things’ has been gaining momentum for some years now – even decades among technology companies – but the idea has only recently started to break into mainstream conversation.

We have already seen glimpses of the future starting to creep into reality, most notably with the introduction of Amazon Dash. Each Dash button is linked to the person’s account and programmed to order a certain item; all you have to do is press the button and an order is placed and delivered. Of course, this process is currently only half automated – a button still has to be manually pressed and Amazon shippers still post and deliver the item – but it certainly shows the direction in which we are headed.

But ultimately the Internet of Things can go even further than creating smart homes. The term ‘smart cities’ has been coined for urban environments that could theoretically include connected traffic lights that control vehicle flow, smart bins that inform the right people when they need to be emptied, and even the monitoring of crops growing in fields.

How do we reach these automation goals?

Ultimately, the end goal of any research into robotics or automation is to emulate the actions of humans. People across the world engage in heated debates about whether machines will ever have the ability to think like people – a subject known as A.I., or Artificial Intelligence, which is worthy of its own exploration. Whether that will become a reality in the future we cannot currently tell for sure, but researchers are hard at work across the world trying to inch our way closer.

There are, of course, issues that arise when we try to develop machines to take over certain tasks from humans, most notably to do with quality control and the increased margin for error. Some question whether a machine that doesn’t necessarily have the capacity to consider extenuating circumstances, raise concerns or react intuitively would be able to perform these tasks.

Let’s look at self-driving cars for example. So much of driving depends on the person behind the wheel being able to react in seconds to any changes around them. It is, therefore, essential that machines are able to “think” as close to humans as possible. If artificial intelligence and technology alone cannot achieve this, it would be very difficult for such vehicles to become road legal. However, experts in the industry have suggested a very clever solution.

Are there any disadvantages to the research?

As with any major development, there are always going to be people who oppose it, or at the very least point out reasons why we should proceed with caution – and with good reason.

One of the biggest, and indeed most realistic, fears that many people express concerns economics and jobs. It’s no secret that the UK’s economy, and indeed the world’s, has been somewhat shaky over the past few years. Many people worry that automated processes, which can perform certain tasks faster and with greater precision and accuracy than humans, will make many people’s jobs redundant.

Where are we headed?

It is unlikely that we are going to see any robot uprisings anytime soon. But the potential threats that an increase in automation brings to our society should not be underestimated. With the economic state of the world already so fragile, any attempts to research areas that could result in unemployment should be very carefully considered before implementation.

That being said, we are living in exciting times, able to witness such developments taking place. Much has already happened over the past few years that many people may not be aware of. We may not have reached the level of development seen in the movies – not yet, anyway – but with the number of ideas and the amount of research taking place around the world, the sky really is the limit.

Source: Rise of the machines – The future of robotics and automation

What’s Missing From Machine Learning

Machine learning is everywhere. It’s being used to optimize complex chips, balance power and performance inside data centers, program robots, and keep expensive electronics updated and operating. What’s less obvious, though, is that there are no commercially available tools to validate, verify and debug these systems once machines evolve beyond the final specification.

The expectation is that devices will continue to work as designed, like a cell phone or a computer that has been updated with over-the-air software patches. But machine learning is different. It involves changing the interaction between the hardware and software and, in some cases, the physical world. In effect, it modifies the rules for how a device operates based upon previous interactions, as well as software updates, setting the stage for much wider and potentially unexpected deviations from that specification.
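The drift described above can be illustrated with a tiny online learner whose parameters move away from their shipped values as field interactions arrive. This is a hypothetical sketch, not any vendor’s actual update mechanism; the model, data, and learning rate are all invented for illustration.

```python
# Minimal sketch: an online learner whose behavior drifts from its shipped
# specification as it adapts to field data (illustrative only).

def sgd_update(w, x, y, lr=0.1):
    """One stochastic-gradient step for a 1-D linear model y ~ w * x."""
    pred = w * x
    return w - lr * (pred - y) * x

shipped_w = 1.0          # parameter fixed at the final specification
w = shipped_w

# Field data generated by a slightly different relationship (y = 1.5 * x),
# so every interaction nudges the deployed model away from the spec.
field_data = [(x, 1.5 * x) for x in [0.5, 1.0, 1.5, 2.0] * 25]
for x, y in field_data:
    w = sgd_update(w, x, y)

drift = abs(w - shipped_w)
print(f"shipped w = {shipped_w}, deployed w = {w:.3f}, drift = {drift:.3f}")
```

After enough interactions the deployed parameter settles near 1.5, far from the shipped 1.0, which is exactly the kind of deviation no point-in-time specification captures.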

In most instances, these deviations will go unnoticed. In others, such as safety-critical systems, changing how systems perform can have far-reaching consequences. But tools have not been developed yet that reach beyond the algorithms used for teaching machines how to behave. When it comes to understanding machine learning’s impact on a system over time, this is a brave new world.

“The specification may capture requirements of the infrastructure for machine learning, as well as some hidden layers and the training data set, but it cannot predict what will happen in the future,” said Achim Nohl, technical marketing manager for high-performance ASIC prototyping systems at Synopsys. “That’s all heuristics. It cannot be proven wrong or right. It involves supervised versus unsupervised learning, and nobody has answers to signing off on this system. This is all about good enough. But what is good enough?”

Most companies that employ machine learning point to the ability to update and debug software as their safety net. But drill down further into system behavior and modifications and that safety net vanishes. There are no clear answers about how machines will function once they evolve or are modified by other machines.

“You’re stressing things that were unforeseen, which is the whole purpose of machine learning,” said Bill Neifert, director of models technology at ARM. “If you could see all of the eventualities, you wouldn’t need machine learning. But validation could be a problem because you may end up down a path where adaptive learning changes the system.”

Normally this is where the tech industry looks for tools to help automate solutions and anticipate problems. With machine learning those tools don’t exist yet.

“We definitely need to go way beyond where we are today,” said Harry Foster, chief verification scientist at Mentor Graphics. “Today, you have finite state machines and methods that are fixed. Here, we are dealing with systems that are dynamic. Everything needs to be extended or rethought. There are no commercial solutions in this space.”

Foster said some pioneering work in this area is being done by England’s University of Bristol in the area of validating systems that are constantly being updated. “With machine learning, you’re creating a predictive model and you want to make sure it stays within legal bounds. That’s fundamental. But if you have a car and it’s communicating with other cars, you need to make sure you’re not doing something harmful. That involves two machine learnings. How do you test one system against the other system?”

Today, understanding of these systems is relegated to a single point in time, based upon the final system specification and whatever updates have been added via over-the-air software. But machine learning uses an evolutionary teaching approach. With cars, it can depend upon how many miles a vehicle has been driven, where it was driven, by whom, and how it was driven. With a robot, it may depend upon what that robot encounters on a daily basis, whether that includes flat terrain, steps, extreme temperatures or weather. And while some of that will be shared among other devices via the cloud, the basic concept is that the machine itself adapts and learns. So rather than programming a device with software, it is programmed to learn on its own.

Predicting how even one system will behave under this model, coupled with periodic updates, is a problem of probability distributions. Predicting how thousands of these systems will change, particularly if they interact with each other or with other devices, involves a series of probabilities that are in constant flux over time.
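One way to picture that flux is a toy Monte Carlo run in which many identical units each take a small random walk away from a shared starting point. The numbers here are arbitrary; the point is only that a fleet that starts identical develops a spread of behaviors over time.

```python
import random

random.seed(42)  # fixed seed so the sketch is reproducible

def simulate_fleet(n_units=1000, n_steps=100, step=0.01):
    """Each unit's behavior parameter random-walks from a common spec of 0."""
    states = [0.0] * n_units
    for _ in range(n_steps):
        states = [s + random.uniform(-step, step) for s in states]
    return states

states = simulate_fleet()
mean = sum(states) / len(states)
spread = max(states) - min(states)
print(f"fleet mean {mean:.4f}, spread {spread:.4f}")
```

The fleet mean stays near the original specification, but the spread between the most-drifted units keeps growing, which is why per-unit validation becomes harder than validating the original design.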

What is machine learning?
The idea that machines can be taught dates back almost two decades before the introduction of Moore’s Law. Work in this area began in the late 1940s, based on early computer work in identifying patterns in data and then making predictions from that data.

Machine learning applies to a wide spectrum of applications. At the lowest level are mundane tasks such as spam filtering. But machine learning also includes more complex programming of known use cases in a variety of industrial applications, as well as highly sophisticated image recognition systems that can distinguish between one object and another.
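At the mundane end of that spectrum, a spam filter can be sketched as a tiny Naive Bayes classifier over word counts. The training corpus below is made up for illustration; real filters use far larger data and more features.

```python
from collections import Counter
import math

# Tiny labeled corpus (hypothetical data for illustration).
spam = ["win money now", "free money offer", "win a free prize"]
ham = ["meeting at noon", "project status update", "lunch at noon"]

def train(docs):
    counts = Counter(w for d in docs for w in d.split())
    return counts, sum(counts.values())

spam_counts, spam_total = train(spam)
ham_counts, ham_total = train(ham)
vocab = set(spam_counts) | set(ham_counts)

def log_prob(msg, counts, total):
    """Log-likelihood of the message with add-one (Laplace) smoothing."""
    return sum(math.log((counts[w] + 1) / (total + len(vocab)))
               for w in msg.split())

def classify(msg):
    spam_score = log_prob(msg, spam_counts, spam_total)
    ham_score = log_prob(msg, ham_counts, ham_total)
    return "spam" if spam_score > ham_score else "ham"

print(classify("free money"))    # → "spam"
print(classify("status update"))  # → "ham"
```

Even this toy version shows the pattern common to the whole spectrum: the program’s behavior comes from the data it was trained on, not from explicitly coded rules.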

Arthur Samuel, one of the pioneers in machine learning, began experimenting with the possibility of making machines learn from experience back in the late 1940s—creating devices that can do things beyond what they were explicitly programmed to do. His best-known work was a checkers game program, which he developed while working at IBM. It is widely credited as the first implementation of machine learning.

Fig. 1: Samuel at his checkerboard using an IBM 701 in 1956. Six years later, the program beat checkers master Robert Nealey. Source: IBM

Machine learning has advanced significantly since then. Checkers has been supplanted by more difficult games such as chess, Jeopardy, and Go.

In a presentation at the Hot Chips 2016 conference in Cupertino last month, Google engineer Daniel Rosenband cited four parameters for autonomous vehicles—knowing where a car is, understanding what’s going on around it, identifying the objects around a car, and determining the best options for how a car should proceed through all of that to its destination.

This requires more than driving across a simple grid or pattern recognition. It involves some complex reasoning about what a confusing sign means, how to read a traffic light if it is obscured by an object such as a red balloon, and what to do if sensors are blinded by the sun’s glare. It also includes an understanding of the effects of temperature, shock and vibration on sensors and other electronics.

Google uses a combination of sensors, radar and lidar to pull together a cohesive picture, which requires a massive amount of processing in a very short time frame. “We want to jam as much compute as possible into a car,” Rosenband said. “The primary objective is maximum performance, and that requires innovation in how to architect everything to get more performance than you could from general-purpose processing.”

Fig. 2: Google’s autonomous vehicle prototype. Source: Google.

Programming all of this by hand into every new car is unrealistic. Database management is difficult enough with a small data set. Adding in all of the data necessary to keep an autonomous vehicle on the road, and fully updated with new information about potential dangerous behavior, is impossible without machine learning.

“We’re seeing two applications in this space,” said Charlie Janac, chairman and CEO of Arteris. “The first is in the data center, which is a machine-learning application. The second is ADAS, where you decide on what the image is. This gets into the world of convolutional neural networking algorithms, and a really good implementation of this would include tightly coupled hardware and software. These are mission-critical systems, and they need to continually update software over the air with a capability to visualize what’s in the hardware.”

How it’s being used
Machine learning comes in many flavors, and often means different things to different people. In general, the idea is that algorithms can be used to change the functionality of a system to either improve performance, lower power, or simply to update it with new use cases. That learning can be applied to software, firmware, an IP block, a full SoC, or an integrated device with multiple SoCs.

Microsoft is using machine learning for its “mixed reality” HoloLens device, according to Nick Baker, distinguished engineer in the company’s Technology and Silicon Group. “We run changes to the algorithm and get feedback as quickly as possible, which allows us to scale as quickly as possible from as many test cases as possible,” he said.

The HoloLens is still just a prototype, but like the Google self-driving car it processes so much information, and reacts so quickly to the external world, that there is no way to program the device without machine learning.

Machine learning can be used to optimize hardware and software in everything from IP to complex systems, based upon a knowledge base of what works best for which conditions.

“We use machine learning to improve our internal algorithms,” said Anush Mohandass, vice president of marketing at NetSpeed Systems. “Without machine learning, if you don’t have an intelligent human to set it up, you get garbage back. You may start off and experiment with 15 things on the ‘x’ axis and 1,000 things on the ‘y’ axis, and set up an algorithm based on that. But there is a potential for infinite data.”

Machine learning assures a certain level of results, no matter how many possibilities are involved. That approach also can help if there are abnormalities that do not fit into a pattern because machine learning systems can ignore those aberrations. “This way you also can debug what you care about,” Mohandass said. “The classic case is a car on auto pilot that crashes because a chip did not recognize a full spectrum of things. At some point we will need to understand every data point and why something behaves the way it does. This isn’t the 80/20 rule anymore. It’s probably closer to 99.9% and 0.1%, so the distribution becomes thinner and taller.”

eSilicon uses a version of machine learning in its online quoting tools, as well. “We have an IP marketplace where we can compile memories, try them for free, and use them until you put them into production,” said Jack Harding, eSilicon’s president and CEO. “We have a test chip capability for free, fully integrated and perfectly functional. We have a GDSII capability. We have WIP (work-in-process) tracking, manufacturing online order entry system—all fully integrated. If I can get strangers on the other side of the world to send me purchase orders after eight lines of chat and build sets of chips successfully, there is no doubt in my mind that the bottoms-up Internet of Everything crowd will be interested.”

Where it fits
In the general scheme of things, machine learning is what makes artificial intelligence possible. There is ongoing debate about which is a superset of the other, but suffice it to say that an artificially intelligent machine must utilize machine-learning algorithms to make choices based upon previous experience and data. The terms are often confusing, in part because they are blanket terms that cover a lot of ground, and in part because the terminology is evolving with technology. But no matter how those arguments progress, machine learning is critical to AI and its more recent offshoot, deep learning.

“Deep learning, as a subset of machine learning, is the most potent disruptive force we have seen because it has the ability to change what the hardware looks like,” said Chris Rowen, Cadence fellow and CTO of the company’s IP Group. “In mission-critical situations, it can have a profound effect on the hardware. Deep learning is all about making better guesses, but the nature of correctness is difficult to define. There is no way you get that right 100% of the time.”

But it is possible, at least in theory, to push closer to 100% correctness over time as more data is included in machine-learning algorithms.

“The more data you have, the better off you are,” said Microsoft’s Baker. “If you look at test images, the more tests you can provide the better.”

There is plenty of agreement on that, particularly among companies developing complex SoCs, which have quickly spiraled beyond the capabilities of engineering teams.

“I’ve never seen this fast an innovation of algorithms that are really effective at solving problems,” said Mark Papermaster, CTO of Advanced Micro Devices. “One of the things about these algorithms that is particularly exciting to us is that a lot of it is based around the pioneering work in AI, leveraging what is called a gradient-descent analysis. This algorithm is very parallel in nature, and you can take advantage of the parallelism. We’ve been doing this and opening up our GPUs, our discrete graphics, to be tremendous engines to accelerate the machine learning. But different than our competitors, we are doing it in an open source environment, looking at all the common APIs and software requirements to accelerate machine learning on our CPUs and GPUs and putting all that enablement out there in an open source world.”
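The gradient-descent analysis Papermaster mentions is parallel because the gradient over a dataset is a sum of independent per-sample terms. A minimal serial sketch of that structure (toy data, least-squares fit of y ~ w*x):

```python
# Gradient descent for least-squares y ~ w*x. The per-sample gradient terms
# inside grad() are independent, which is what makes the algorithm easy to
# spread across GPU lanes or cores before a final summing (reduce) step.

data = [(1.0, 2.0), (2.0, 4.1), (3.0, 5.9), (4.0, 8.2)]  # roughly y = 2x

def grad(w):
    # Each term could be computed by a separate worker, then summed.
    terms = [(w * x - y) * x for x, y in data]
    return 2.0 * sum(terms) / len(data)

w = 0.0
for _ in range(200):
    w -= 0.05 * grad(w)

print(f"fitted w = {w:.3f}")  # converges close to 2.0
```

On a GPU, the list comprehension becomes one thread per sample and the `sum` becomes a parallel reduction, which is the parallelism AMD is exploiting.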

Sizing up the problems
Still, algorithms are only part of the machine-learning picture. A system that can optimize hardware as well as software over time is, by definition, evolving from the original system spec. How that affects reliability is unknown, because at this point there is no way to simulate or test that.

“If you implement deep learning, you’ve got a lot of similar elements,” said Raik Brinkmann, president and CEO of OneSpin Solutions. “But the complete function of the system is unknown. So if you’re looking at machine learning error rates and conversion rates, there is no way to make sure you’ve got them right. The systems learn from experience, but it depends on what you give them. And it’s a tough problem to generalize how they’re going to work based on the data.”

Brinkmann said there are a number of approaches in EDA today that may apply, particularly with big data analytics. “That’s an additional skill set—how to deal with big data questions. It’s more computerized and IT-like. But parallelization and cloud computing will be needed in the future. A single computer is not enough. You need something to manage and break down the data.”

Brinkmann noted that North Carolina State University and the Georgia Institute of Technology will begin working on these problems this fall. “But the bigger question is, ‘Once you have that data, what do you do with it?’ It’s a system without testbenches, where you have to generalize behavior and verify it. But the way chips are built is changing because of machine learning.”

ARM’s Neifert considers this a general-purpose compute problem. “You could make the argument in first-generation designs that different hardware isn’t necessary. But as we’ve seen with the evolution of any technology, you start with a general-purpose version and then demand customized hardware. With something like advanced driver assistance systems (ADAS), you can envision a step where a computer is defining the next-generation implementation because it requires higher-level functionality.”

That quickly turns troubleshooting into an unbounded problem, however. “Debug is a whole different world,” said Jim McGregor, principal analyst at Tirias Research. “Now you need a feedback loop. If you think about medical imaging, 10 years ago 5% of the medical records were digitized. Now, 95% of the records are digitized. So you combine scans with diagnoses and information about whether it’s correct or not, and then you have feedback points. With machine learning, you can design feedback loops to modify those algorithms, but it’s so complex that no human can possibly debug that code. And that code develops over time. If you’re doing medical research about a major outbreak, humans can only run so many algorithms. So how do you debug it if it’s not correct? We’re starting to see new processes for deep learning modules that are different than in the past.”

Source: What’s Missing From Machine Learning

4 Unique Challenges Of Industrial Artificial Intelligence

Robots are probably the first thing you think of when asked to imagine AI applied to industrials and manufacturing. Indeed, many innovative companies like Rodney Brooks’ Rethink Robotics have developed friendly-looking robot factory workers who hustle alongside their human colleagues. Industrial robots have historically been designed to perform specific niche tasks, but modern-day robots can be taught new tasks and make real-time decisions.

As sexy and shiny as robots are, the bulk of the value of AI in industrials lies in transforming data from sensors and routine hardware into intelligent predictions for better and faster decision-making. Some 15 billion machines are currently connected to the Internet, and Cisco predicts the number will surpass 50 billion by 2020. Connecting machines together into intelligent automated systems in the cloud is the next major step in the evolution of manufacturing and industry.

In 2015, General Electric launched GE Digital to drive software innovation and cloud connectivity across all departments. Harel Kodesh, CTO of GE Digital, shares the unique challenges of applying AI to industrials that differ from consumer applications.

1. Industrial Data Is Often Inaccurate

“For machine learning to work properly, you need lots of data. Consumer data is harder to misunderstand, for example when you buy a pizza or click on an ad,” says Kodesh. “When looking at the industrial internet, however, 40% of the data coming in is spurious and isn’t useful.”

Let’s say you need to calculate how far a combine needs to drill and you stick a moisture sensor into the ground to take important measurements. The readings can be skewed by extreme temperatures, accidental man-handling, hardware malfunctions, or even a worm that’s been accidentally skewered by the device. “We are not generating data from the comfort and safety of a computer in your den,” Kodesh emphasizes.
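A first line of defense against that spurious 40% is simple plausibility filtering before readings ever reach a model. A hedged sketch, with a made-up valid range for a moisture sensor:

```python
# Illustrative sketch: drop physically implausible moisture readings before
# they reach any learning pipeline. The valid range is a made-up example.

VALID_RANGE = (0.0, 100.0)  # hypothetical volumetric moisture, in percent

def clean(readings, lo=VALID_RANGE[0], hi=VALID_RANGE[1]):
    """Split raw sensor readings into plausible values and rejected ones."""
    good = [r for r in readings if lo <= r <= hi]
    bad = [r for r in readings if not (lo <= r <= hi)]
    return good, bad

raw = [12.5, 13.1, -40.0, 14.0, 999.9, 12.8]  # two spurious values
good, bad = clean(raw)
print(f"kept {len(good)} readings, rejected {len(bad)}")
```

Range checks obviously cannot catch every failure mode Kodesh lists (a skewered worm can still produce an in-range but wrong value), which is why industrial pipelines layer statistical checks on top of them.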

2. AI Runs On The Edge, Not On The Cloud

Consumer data is processed in the cloud on computing clusters with seemingly infinite capacity. Amazon can luxuriously take their time to crunch your browsing and purchase history and show you new recommendations. “In consumer predictions, there’s low value to false negatives and to false positives. You’ll forget that Amazon recommended you a crappy book,” Kodesh points out.

On a deep sea oil rig, a riser is a conduit which transports oil from subsea wells to a surface facility. If a problem arises, several clamps must respond immediately to shut the valve. The sophisticated software that manages the actuators on those clamps tracks minute details in temperature and pressure. Any mistake could mean disaster.

The stakes and responsiveness are much higher for industrial applications where millions of dollars and human lives can be on the line. In these cases, industrial features cannot be trusted to run on the cloud and must be implemented on location, also known as “the edge.”

Industrial AI is built as an end-to-end system, described by Kodesh as a “round-trip ticket”, where data is generated by sensors on the edge, served to algorithms, modeled on the cloud, and then moved back to the edge for implementation. Between the edge and the cloud are supervisor gateways and multiple nodes of computer storage, since the entire system must be able to run the right load at the right places.
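The “round-trip ticket” can be sketched as a three-stage pipeline: the edge collects readings, the cloud fits a model on pooled data, and the fitted parameters are pushed back to the edge for local, low-latency decisions. All names, data, and thresholds below are hypothetical.

```python
# Hedged sketch of the edge -> cloud -> edge round trip described above.

def edge_collect():
    """Edge: sensors emit raw pressure readings (hypothetical data)."""
    return [101.2, 101.5, 101.4, 101.6, 101.3]

def cloud_train(readings):
    """Cloud: fit a trivial model (mean plus tolerance band) on pooled data."""
    mean = sum(readings) / len(readings)
    return {"mean": mean, "tolerance": 2.0}  # tolerance is illustrative

def edge_decide(model, reading):
    """Edge: apply the pushed-down model locally, with no cloud round trip."""
    deviation = abs(reading - model["mean"])
    return "shut_valve" if deviation > model["tolerance"] else "ok"

model = cloud_train(edge_collect())   # training happens centrally
print(edge_decide(model, 101.4))      # → "ok"
print(edge_decide(model, 110.0))      # → "shut_valve"
```

The design point is that `edge_decide` involves no network call: once the model comes back from the cloud, the safety-critical decision is purely local, matching the clamp-actuator scenario above.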

Source: 4 Unique Challenges Of Industrial Artificial Intelligence

Semiconductor Engineering: The Great Machine Learning Race

Processor makers, tools vendors, and packaging houses are racing to position themselves for a role in machine learning, despite the fact that no one is quite sure which architecture is best for this technology or what ultimately will be successful.

Rather than dampen investments, the uncertainty is fueling a frenzy. Money is pouring in from all sides. According to a new Moor Insights report, as of February 2017 there were more than 1,700 machine learning startups and 2,300 investors. The focus ranges from relatively simple dynamic network optimization to military drones using real-time information to avoid fire and adjust their targets.

Fig. 1: The machine learning landscape. Source: Moor Insights

While the general concepts involved in machine learning—doing things that a device was not explicitly programmed to do—date back to the late 1940s, machine learning has progressed in fits and starts since then. Stymied initially by crude software (1950s through 1970s), then by insufficient processing power, memory and bandwidth (1980s through 1990s), and finally by deep market downturns in electronics (2001 and 2008), it has taken nearly 70 years for machine learning to advance to the point where it is commercially useful.

Several things have changed since then:

  • The technology performance hurdles of the 1980s and 1990s are now gone. There is almost unlimited processing power, with more on the way using new chip architectures, as well as packaging approaches such as 2.5D and fan-out wafer-level packaging. Very fast memory is already available, with more types on the way, and advances in silicon photonics can speed up storage and retrieval of large blocks of data as needed.
  • There are ready markets for machine learning in the data center and in the autonomous vehicle market, where the central logic of these devices will need regular updates to improve safety and reliability. Companies involved in these markets have deep pockets or strong backing, and they are investing heavily in machine learning.
  • The pendulum is swinging back to hardware, or at the very least, a combination of hardware and software, because it’s faster, uses less power, and it’s more secure than putting everything into software. That bodes well for machine learning because of the enormous processing requirements, and it changes the economics for semiconductor investments.

Nevertheless, this is a technology approach riddled with uncertainty about what works best and why.

“If there was a winner, we would have seen that already,” said Randy Allen, director of advanced research at Mentor Graphics. “A lot of companies are using GPUs, because they’re easier to program. But with GPUs the big problem is determinism. If you send a signal to an FPGA, you get a response in a given amount of time. With a GPU, that’s not certain. A custom ASIC is even better if you know exactly what you’re going to do, but there is no slam-dunk algorithm that everyone is going to use.”

ASICs are the fastest, cheapest and lowest-power solution for crunching numbers. But they also are the most expensive to develop, and they are unforgiving if changes are required. Changes are almost guaranteed with machine learning because the field is still evolving, so relying on ASICs—or at least relying only on ASICs—is a gamble.

This is one of the reasons that GPUs have emerged as the primary option, at least in the short-term. They are inexpensive, highly parallel, and there are enough programming tools available to test and optimize these systems. The downside is they are less power efficient than a mix of processors, which can include CPUs, GPUs, DSPs and FPGAs.

FPGAs add the additional element of future-proofing and lower power, and they can be used to accelerate other operations. But in highly parallel architectures, they also are more expensive, which has renewed attention on embedded FPGAs.

“This is going to take 5 to 10 years to settle out,” said Robert Blake, president and CEO of Achronix. “Right now there is no agreed upon math for machine learning. This will be the Wild West for the next decade. Before you can get a better Siri or Alexa interface, you need algorithms that are optimized to do this. Workloads are very diverse and changing rapidly.”

Massive parallelism is a requirement. There is also some need for floating point calculations. But beyond that, it could be 1-bit or 8-bit math.

“A lot of this is pattern matching of text-based strings,” said Blake. “You don’t need floating point for that. You can implement the logic in an FPGA to make the comparisons.”
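Blake’s point that text-string pattern matching needs no floating point can be shown with a purely integer similarity measure. This is a hedged sketch of the idea only; a real FPGA implementation would use banks of parallel XOR comparators, not Python.

```python
# Integer-only pattern matching: Hamming distance over bytes, no floats.

def hamming_bits(a: bytes, b: bytes) -> int:
    """Count differing bits between two equal-length byte strings."""
    assert len(a) == len(b)
    return sum(bin(x ^ y).count("1") for x, y in zip(a, b))

def best_match(pattern: bytes, candidates):
    """Pick the candidate with the fewest differing bits (all integer math)."""
    return min(candidates, key=lambda c: hamming_bits(pattern, c))

candidates = [b"machine1", b"matchine", b"learning"]
print(best_match(b"machines", candidates))  # → b'machine1'
```

Every operation here (XOR, popcount, compare) maps directly onto cheap FPGA logic, which is exactly why such workloads don’t justify floating-point hardware.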

Learning vs. interpretation
One of the reasons why this becomes so complex is there are two main components to machine learning. One is the “learning” phase, which is a set of correlations or pattern matches. In machine vision, for example, it allows a device to determine whether an image is a dog or a person. That started out as 2D comparisons, but databases have grown in complexity. They now include everything from emotions to movement. They can discern different breeds of dogs, and whether a person is crawling or walking.

The harder mathematics problem is the interpretation phase. That can involve inferencing—drawing conclusions based on a set of data and then extrapolating from those conclusions to develop presumptions. It also can include estimation, which is how economics utilizes machine learning.

At this point, much of the inferencing is being done in the cloud because of the massive amount of compute power required. But at least some of that will be required to be on-board in autonomous vehicles. For one thing, it’s faster to do at least some of that locally. For another, connectivity isn’t always consistent, and in some locations it might not be available at all.

“You need real-time cores working in lock step with other cores, and you could have three or four levels of redundancy,” said Steve Glaser, senior vice president of corporate strategy and marketing at Xilinx. “You want an immediate response. You want it to be deterministic. And you want it to be flexible, which means that to create an optimized data flow you need software plus hardware plus I/O programmability for different layers of a neural network. This is any-to-any connectivity.”

How best to achieve that, however, isn’t entirely clear. The result is a scramble for market position unlike anything the chip industry has seen since the introduction of the personal computer. Chipmakers are building solutions that include everything from development software to libraries and frameworks, with built-in flexibility to guard against sudden obsolescence because the market is still in flux.

What makes this so compelling for chipmakers is that the machine learning opportunity is unfolding at a time when the market for smart phone chips is flattening. But unlike phones or PCs, machine learning cuts across multiple market segments, each with the potential for significant growth (see fig. 2 below).

Fig. 2: Machine learning opportunities.

Rethinking architectures
All of this needs to be put in context of two important architectural changes that are beginning to unfold in machine learning. The first is a shift away from trying to do everything in software to doing much more in hardware. Software is easier to program, but it’s far less efficient from a power/performance standpoint and much more vulnerable from a security standpoint. The solution here, according to Xilinx’s Glaser, is leveraging the best of both worlds by using software-defined programming. “We’re showing 6X better efficiency in images per second per watt,” he said.

A second change is the emphasis on more processors—and more types of processors—rather than fewer, highly integrated custom processors. This reverses a trend that has been underway since the start of the PC era, namely that putting everything on a single die improves performance per watt and reduces the overall bill of materials cost.

“There is much more interest in larger numbers of small processors than big ones,” said Bill Neifert, director of models technology at ARM. “We’re seeing that in the number of small processors being modeled. We’re also seeing more FPGAs and ASICs being modeled than in the past.”

Because a large portion of the growth in machine learning is tied to safety-critical systems in autonomous vehicles, better modeling and better verification of those systems are required.

“One of the benefits of creating a model as early as possible is that you can inject faults for all possible safety requirements, so that when something fails—which it will—it can fail gracefully,” said Neifert. “And if you change your architecture, you want to be able to route data differently so there are no bottlenecks. This is why we’re also seeing so much concurrency in high-performance computing.”

Measuring performance and cost with machine learning isn’t a simple formula, though. Performance can be achieved in a variety of ways, such as better throughput to memory, faster and more narrowly written algorithms for specific jobs, or highly parallel computing with acceleration. Likewise, cost can be measured in multiple ways, such as total system cost, power consumption, and sometimes the impact of slow results, such as a piece of military equipment or an autonomous vehicle not making decisions quickly enough.

Beyond that, there are challenges involving the programming environment, which is part algorithmic and part intuition. “What you’re doing is trying to figure out how humans think without language,” said Mentor’s Allen. “Machine learning is that to the nth degree. It’s how humans recognize patterns, and for that you need the right development environment. Sooner or later we will find the right level of abstraction for this. The first languages are interpreters. If you look at most languages today, they’re largely library calls. Ultimately we may need language to tie this together, either pipelining or overlapping computations. That will have a lot better chance of success than high-level functionality without a way of combining the results.”

Kurt Shuler, vice president of marketing at Arteris, agrees. He said the majority of systems developed so far are being used to jump-start research and algorithm development. The next phase will focus on more heterogeneous computing, which creates a challenge for cache coherency.

“There is a balance between computational efficiency and programming efficiency,” Shuler said. “You can make it simpler for the programmer. An early option has been to use an ‘open’ machine learning system that consists of a mix of ARM clusters and some dedicated AI processing elements like SIMD engines or DSPs. There’s a software library, which people can license. The chip company owns the software algorithms, and you can buy the chips and board and get this up and running early. You can do this with Intel Xeon chips too, and build in your or another company’s IP using FPGAs. But these initial approaches do not slice the problem finely enough, so basically you’re working with a generic platform, and that’s not the most efficient. To increase machine learning efficiency, the industry is moving toward using multiple types of heterogeneous processing elements in these SoCs.”

In effect, this is a series of multiply and accumulate steps that need to be parsed at the beginning of an operation and recombined at the end. That has long been one of the biggest hurdles in parallel operations. The new wrinkle is that there is more data to process, and movement across skinny wires that are subject to RC delay can affect both performance and power.
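That parse-then-recombine pattern can be illustrated with a toy example: the sketch below splits a multiply-accumulate (a dot product) into chunks, computes partial sums in parallel, and recombines them at the end. It is a software analogy for the hardware scheduling problem described above, not an implementation of any particular accelerator.

```python
from concurrent.futures import ThreadPoolExecutor

def partial_mac(a, b):
    """Multiply-accumulate over one chunk of the operands."""
    acc = 0
    for x, y in zip(a, b):
        acc += x * y
    return acc

def parallel_dot(a, b, workers=4):
    """Parse the operands into chunks at the start, run the
    multiply-accumulates in parallel, recombine partial sums at the end."""
    n = len(a)
    step = max(1, (n + workers - 1) // workers)
    chunks = [(a[i:i + step], b[i:i + step]) for i in range(0, n, step)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        partials = pool.map(lambda ab: partial_mac(*ab), chunks)
    return sum(partials)  # the recombination step

assert parallel_dot(list(range(8)), [1] * 8) == 28
```

The recombination at the end is the serial bottleneck the article alludes to; in silicon, the added cost is moving the partial results across wires subject to RC delay.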

“There is a multidimensional constraint to moving data,” said Raik Brinkmann, CEO of OneSpin Solutions. “In addition, power is dominated by data movement. So you need to localize processing, which is why there are DSP blocks in FPGAs today.”

This gets even more complicated with deep neural networks (DNNs) because there are multiple layers of networks, Brinkmann said.

And that creates other issues. “Uncertainty in verification becomes a huge issue,” said Achim Nohl, technical marketing manager for high-performance ASIC prototyping systems at Synopsys. “Nobody has an answer to signing off on these systems. It’s all about good enough, but what is good enough? So it becomes more and more of a requirement to do real-world testing where hardware and software is used. You have to expand from design verification to system validation in the real world.”

Internal applications
Not all machine learning is about autonomous vehicles or cloud-based artificial intelligence. Wherever there is too much complexity combined with too many choices, machine learning can play a role. There are numerous cases where this is already happening.

NetSpeed Systems, for example, is using machine learning to develop network-on-chip topologies for customers. eSilicon is using it to choose the best IP for specific parameters involving power, performance and cost. And ASML is using it to optimize computational lithography, basically filling in the dots on a distribution model to provide a more accurate picture than a higher level of abstraction can intrinsically provide.

“There is a lot of variety in terms of routing,” said Sailesh Kumar, CTO at NetSpeed Systems. “There are different channel sizes, different flows, and how that gets integrated has an impact on quality of service. Decisions in each of those areas lead to different NoC designs. So from an architectural perspective, you need to decide on one topology, which could be a mesh, ring or tree. The simpler the architecture, the fewer potential deadlocks. But if you do all of this manually, it’s difficult to come up with multiple design possibilities. If you automate it, you can use formal techniques and data analysis to connect all of the pieces.”
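Kumar's point that topology choice matters can be made concrete with a toy metric. The sketch below compares the average hop count of a ring versus a 2D mesh with the same node count, using shortest-path distances; real NoC tools weigh far more factors (channel sizes, flows, deadlock freedom), so this is only an illustration.

```python
from itertools import product

def ring_hops(n):
    """Average hop count between distinct node pairs on an n-node ring."""
    total = pairs = 0
    for i in range(n):
        for j in range(n):
            if i != j:
                d = abs(i - j)
                total += min(d, n - d)  # shorter way around the ring
                pairs += 1
    return total / pairs

def mesh_hops(rows, cols):
    """Average hop count (Manhattan distance) on a rows x cols mesh."""
    nodes = list(product(range(rows), range(cols)))
    total = pairs = 0
    for r1, c1 in nodes:
        for r2, c2 in nodes:
            if (r1, c1) != (r2, c2):
                total += abs(r1 - r2) + abs(c1 - c2)
                pairs += 1
    return total / pairs

# For 16 nodes, a 4x4 mesh needs fewer average hops than a 16-node ring,
# but the ring is simpler and has fewer potential deadlock scenarios.
assert mesh_hops(4, 4) < ring_hops(16)
```

Automating this kind of comparison across many candidate topologies, and then applying formal techniques to the winner, is the approach Kumar describes.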

The machine-learning component in this case is a combination of training data and deductions based upon that data.

“The real driver here is fewer design rules,” Kumar said. “Generally you will hard-code the logic in software to make decisions. As you scale, you have more design rules, which makes updating the design rules an intractable problem. You have hundreds of design rules just for the architecture. What you really need to do is extract the features so you can capture every detail for the user.”

NetSpeed has been leveraging commercially available machine learning tools. eSilicon, in contrast, built its own custom platform based upon its experience with both internally developed and commercial third-party IP.

“The fundamental interaction between supplier and customer is changing,” said Mike Gianfagna, eSilicon‘s vice president of marketing. “It’s not working anymore because it’s too complex. There needs to be more collaboration between the system vendor, the IP supplier, the end user and the ASIC supplier. There are multiple dimensions to every architecture and physical design.”

ASML, meanwhile, is working with Cadence and Lam Research to more accurately model optical proximity correction and to minimize edge placement error. Utilizing machine learning models has allowed ASML to improve the accuracy of its mask, optics, resist and etch models to less than 2nm, said Henk Niesing, director of applications product management at ASML. “We’ve been able to improve patterning through collaboration on design and patterning equipment.”

Machine learning is gaining ground as the best way of dealing with rising complexity, but ironically there is no clear approach to the best architectures, languages or methodologies for developing these machine learning systems. There are success stories in limited applications of this technology, but looked at as a whole, the problems that need to be solved are daunting.

“If you look at embedded vision, that is inherently so noisy and ambiguous that it needs help,” said Cadence Fellow Chris Rowen. “And it’s not just vision. Audio and natural languages have problems, too. But 99% of captured raw data is pixels, and most pixels will not be seen or interpreted by humans. The real value is when you don’t have humans involved, but that requires the development of human cognition technology.”

And how to best achieve that is still a work in progress—a huge project with lots of progress, and still a very long way to go. But as investment continues to pour into this field, both from startups and collaboration among large companies across a wide spectrum of industries, that progress is beginning to accelerate.

Source: Semiconductor Engineering-The Great Machine Learning Race

The Countries Most (and Least) Likely to be Affected by Automation

Around the world, automation is transforming work, business, and the economy. China is already the largest market for robots in the world, based on volume. All economies, from Brazil and Germany to India and Saudi Arabia, stand to gain from the hefty productivity boosts that robotics and artificial intelligence will bring. The pace and extent of adoption will vary from country to country, depending on factors including wage levels. But no geography and no sector will remain untouched.

In our research we took a detailed look at 46 countries, representing about 80% of the global workforce. We examined their automation potential today — what’s possible by adapting demonstrated technologies — as well as the potential similarities and differences in how automation could take hold in the future.

Today, about half the activities that people are paid to do in the global economy have the potential to be automated by adapting demonstrated technology. As we’ve described previously, our focus is on individual work activities, which we believe to be a more useful way to examine automation potential than looking at entire jobs, since most occupations consist of a number of activities with differing potential to be automated.

In all, 1.2 billion full-time equivalents and $14.6 trillion in wages are associated with activities that are automatable with current technology. This automation potential differs among countries, ranging from 40% to 55%.

The differences reflect variations in sector mix and, within sectors, the mix of jobs with larger or smaller automation potential. Sector differences among economies sometimes lead to striking variations, as is the case with Japan and the United States, two advanced economies. Japan has an overall automation potential of 55% of hours worked, compared with 46% in the United States. Much of the difference is due to Japan’s manufacturing sector, which has a particularly high automation potential, at 71% (versus 60% in the United States). Japanese manufacturing has a slightly larger concentration of work hours in production jobs (54% of hours versus the U.S.’s 50%) and office and administrative support jobs (16% versus 9%). Both of these job categories consist of activities with a relatively high automation potential. By comparison, the United States has a higher proportion of work hours in management, architecture, and engineering jobs, which have a lower automation potential since they require application of specific expertise such as high-value engineering, which computers and robots currently are not able to do.
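The mechanism behind these country differences, an overall potential that is an hours-weighted average of sector potentials, can be sketched with simple arithmetic. The sector shares and the 40% services figure below are illustrative stand-ins, not the article's data; only the 71% manufacturing potential is taken from the text.

```python
def overall_potential(sector_shares, sector_potentials):
    """Overall automation potential as an hours-weighted average
    of per-sector automation potentials."""
    assert abs(sum(sector_shares.values()) - 1.0) < 1e-9  # shares must sum to 1
    return sum(share * sector_potentials[s] for s, share in sector_shares.items())

# Illustrative two-sector economies with different mixes of work hours.
japan_like = {"manufacturing": 0.5, "services": 0.5}
us_like = {"manufacturing": 0.3, "services": 0.7}
potentials = {"manufacturing": 0.71, "services": 0.40}  # services figure assumed

# A larger share of hours in high-potential sectors raises the overall figure.
assert overall_potential(japan_like, potentials) > overall_potential(us_like, potentials)
```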

On a global level, four economies — China, India, Japan, and the United States — dominate the total, accounting for just over half of the wages and almost two-thirds the number of employees associated with activities that are technically automatable by adapting demonstrated technologies. Together, China and India may account for the largest potential employment impact — more than 700 million workers between them — because of the relative size of their labor forces. Technical automation potential is also large in Europe: According to our analysis, more than 60 million full-time employee equivalents and more than $1.9 trillion in wages are associated with automatable activities in the five largest economies (France, Germany, Italy, Spain, and the United Kingdom).

We also expect to see large differences among countries in the pace and extent of automation adoption. Numerous factors will determine automation adoption, of which technical feasibility is only one. Many of the other factors are economic and social, and include the cost of hardware or software solutions needed to integrate technologies into the workplace, labor supply and demand dynamics, and regulatory and social acceptance. Some hardware solutions require significant capital expenditures and could be adopted faster in advanced economies than in emerging ones with lower wage levels, where it will be harder to make a business case for adoption. But software solutions could be adopted rapidly around the world, particularly those deployed through the cloud, reducing the lag in adoption time. The pace of adoption will also depend on the benefits that countries expect automation to bring beyond labor substitution, such as the potential to enhance productivity, raise throughput, and improve accuracy.

Regardless of the timing, automation could be the shot in the arm that the global economy sorely needs in the decades ahead. Declining birthrates and the trend toward aging in countries from China to Germany mean that peak employment will occur in most countries within 50 years. The expected decline in the share of the working-age population will open an economic growth gap that automation could potentially fill. We estimate that automation could increase global GDP growth by 0.8% to 1.4% annually, assuming that people replaced by automation rejoin the workforce and remain as productive as they were in 2014. Considering the labor substitution effect alone, we calculate that, by 2065, the productivity growth that automation could add to the largest economies in the world (G19 plus Nigeria) is the equivalent of an additional 1.1 billion to 2.2 billion full-time workers.
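The compounding effect of that 0.8% to 1.4% annual boost can be checked with simple arithmetic: sustained over 50 years, it multiplies output by roughly 1.5x to 2x. A minimal sketch:

```python
def cumulative_boost(annual_rate, years):
    """Cumulative output multiplier from a constant annual growth boost."""
    return (1 + annual_rate) ** years

low = cumulative_boost(0.008, 50)   # 0.8% per year for 50 years
high = cumulative_boost(0.014, 50)  # 1.4% per year for 50 years

# The low and high estimates compound to roughly 1.5x and 2x respectively.
assert 1.4 < low < 1.6
assert 1.9 < high < 2.1
```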

The productivity growth enabled by automation can ensure continued prosperity in aging nations and could provide an additional boost to fast-growing ones. However, automation on its own will not be sufficient to achieve long-term economic growth aspirations across the world. For that, additional productivity-boosting measures will be needed, including reworking business processes or developing new products, services, and business models.

How could automation play out among countries? We have divided our 46 focus nations into three groups, each of which could use automation to further national economic growth objectives, depending on its demographic trends and growth aspirations. The three groups are:

  • Advanced economies. These include Australia, Canada, France, Germany, Italy, Japan, South Korea, the United Kingdom, and the United States. They typically face an aging workforce, though the decline in working-age population growth is more immediate in some (Germany, Italy, and Japan) than in others. Automation can provide the productivity boost required to meet economic growth projections that they otherwise would struggle to attain. These economies thus have a major interest in pursuing rapid automation development and adoption.
  • Emerging economies with aging populations. This category includes Argentina, Brazil, China, and Russia, which face economic growth gaps as a result of projected declines in the growth of their working population. For these economies, automation can provide the productivity injection needed to maintain current GDP per capita. To achieve a faster growth trajectory that is more commensurate with their developmental aspirations, these countries would need to supplement automation with additional sources of productivity, such as process transformations, and would benefit from rapid adoption of automation.
  • Emerging economies with younger populations. These include India, Indonesia, Mexico, Nigeria, Saudi Arabia, South Africa, and Turkey. The continued growth of the working-age population in these countries could support maintaining current GDP per capita. However, given their high growth aspirations, and in order to remain competitive globally, automation plus additional productivity-raising measures will be necessary to sustain their economic development.

For all the differences between countries, many of automation’s challenges are universal. For business, the performance benefits are relatively clear, but the issues are more complicated for policy makers. They will need to find ways to embrace the opportunity for their economies to benefit from the productivity growth potential that automation offers, putting in place policies to encourage investment and market incentives to encourage innovation. At the same time, all countries will need to evolve and create policies that help workers and institutions adapt to the impact on employment.


Source: Harvard Business Review-The Countries Most (and Least) Likely to be Affected by Automation

Bill Gates Is Wrong: The Solution to AI Taking Jobs Is Training, Not Taxes

Let’s take a breath: Robots and artificial intelligence systems are nowhere near displacing the human workforce. Nevertheless, no less a voice than Bill Gates has asserted just the opposite and called for a counterintuitive, preemptive strike on these innovations. His proposed weapon of choice? Taxes on technology to compensate for losses that haven’t happened.

AI has massive potential. Taxing this promising field of innovation is not only reactionary and antithetical to progress, it would discourage the development of technologies and systems that can improve everyday life.

Imagine where we would be today if policy makers, fearing the unknown, had feverishly taxed personal computer software to protect the typewriter industry, or slapped imposts on digital cameras to preserve jobs for darkroom technicians. Taxes to insulate telephone switchboard operators from the march of progress could have trapped our ever-present mobile devices on a piece of paper in an inventor’s filing cabinet.

There simply is no proof that levying taxes on technology protects workers. In fact, as former US treasury secretary Lawrence Summers recently wrote, “Taxes on technology are more likely to drive production offshore than create jobs at home.”

Calls to tax AI are even more stunning because they represent a fundamental abandonment of any responsibility to prepare employees to work with AI systems. Those of us fortunate enough to influence policy in this space should demonstrate real faith in the ability of people to embrace and prepare for change. The right approach is to focus on training workers in the right skills, not taxing robots.

There are more than half a million open technology jobs in the United States, according to the Department of Labor, but our schools and universities are not producing enough graduates with the right skills to fill them. In many cases, these are “new collar jobs” that, rather than calling for a four-year college degree, require sought-after skills that can be learned through 21st century vocational training, innovative public education models like P-TECH (which IBM pioneered), coding camps, professional certification programs and more. These programs can prepare both students and mid-career professionals for new collar roles ranging from cybersecurity analyst to cloud infrastructure engineer.

At IBM, we have seen countless stories of motivated new collar professionals who have learned the skills to thrive in the digital economy. They are former teachers, fast food workers, and rappers who now fight cyber threats, operate cloud platforms and design digital experiences for mobile applications. WIRED has even reported how, with access to the right training, former coal miners have transitioned into new collar coding careers.

The nation needs a massive expansion of the number and reach of programs students and workers can access to build new skills. Closing the skills gap could fill an estimated 1 million US jobs by 2020, but only if large-scale public-private partnerships can better connect many more workers to the training they need. This must be a national priority.

First, Congress should update and expand career-focused education to help more people, especially women and underrepresented minorities, learn in-demand skills at every stage. This should include programs to promote STEM careers among elementary students, which increase interest and enrollment in skills-focused courses later in their educational careers. Next, high-school vocational training programs should be reoriented around the skills needed in the labor market. And updating the Federal Work-Study program, something long overdue, would give college students meaningful, career-focused internships at companies rather than jobs in the school cafeteria or library. Together, high-school career training programs and college work study receive just over $2 billion in federal funding. At around 3 percent of total federal education spending, that’s a pittance. We can and must do more.

Second, Congress should create and fund a 21st century apprenticeship program to recruit and train or retrain workers to fill critical skills gaps in federal agencies and the private sector. Allowing block grants to fund these programs at the state level would boost their effectiveness and impact.

Third, Congress should support standards and certifications for new collar skills, just as it has done for other technical skills, from automotive technicians to welders. Formalizing these national credentials and accreditation programs will help employers recognize that candidates are sufficiently qualified, benefiting workers and employers alike.

Taking these steps now will establish a robust skills-training infrastructure that can address America’s immediate shortage of high-tech talent. Once this foundation is in place, it can evolve to focus on new categories of skills that will grow in priority as the deployment of AI moves forward.

AI should stand for augmented—not artificial—intelligence. It will help us make digital networks more secure, allow people to lead healthier lives, better protect our environment, and more. Like steam power, electricity, computers, and the internet before it, AI will create more jobs than it displaces. What workers really need in the era of AI are the skills to compete and win. Providing the architecture for 21st century skills training requires public policies based on confidence, not taxes based on fear.

Source: Wired-Bill Gates Is Wrong: The Solution to AI Taking Jobs Is Training, Not Taxes