Rise of the machines – The future of robotics and automation

So many of the tasks that we now take for granted once had to be done manually. Washing a load of laundry no longer takes all day; our phone calls are directed to the correct departments by automated recordings; and many of our online orders are now selected and packed by robots.

Developments in this area are accelerating at an incredible rate. But as exciting as these new discoveries may be, they raise question after question about whether the research needed to deliver such innovations is viable, from both an economic and an ethical point of view.

As expert manufacturers of engineering parts that help to keep hundreds of different automated processes up and running, electronic repair specialists Neutronic Technologies are understandably very interested in where the future is going to take us. Is it going to take hundreds, if not thousands, of years for us to reach the kinds of automation that are lodged in the imaginations of sci-fi enthusiasts? Or are we a great deal closer to a machine takeover than we think?

According to the International Federation of Robotics, there are five countries in the developed world that manufacture at least 70 per cent of our entire robotics supply: Germany, the United States, South Korea, China and Japan.

The International Federation of Robotics predicts that by 2018 there will be approximately 1.3 million industrial robots working in factories around the world. That’s less than two years away.

The development of automation has received a great deal more attention over the past few years. What undoubtedly brought it to people’s attention is the popularisation of the subject in science fiction books and films such as Isaac Asimov’s ‘I, Robot’ and ‘The Bicentennial Man’. The genre has continued to flourish throughout the decades, and has likely only heightened our curiosity about the world of robots.

Why are we even exploring robotics?

Developing robotics is the next stage in our search for automation. We already have automation integrated into so many aspects of our daily lives, from doors that open via motion sensors to automobile assembly lines; robotics is simply the next step along that path.

I predict that the biggest developments in the automation world will come from the automobile industry – the likes of self-driving cars that are already being tested – and from the internet.

The concept of the ‘Internet of Things’ has been gaining momentum among technology companies for years, even decades, but the idea has only recently started to break into mainstream conversation.

We have already seen glimpses of the future creeping into reality, most notably with the introduction of Amazon Dash. Linked to a customer’s account and programmed for a specific item, the Dash button places an order with a single press. Of course, this process is currently only half automated: a button still has to be pressed manually, and Amazon’s shippers still post and deliver the item, but it certainly shows the direction in which we are headed.

But ultimately the Internet of Things can go even further than creating smart homes. The term ‘smart cities’ has been coined to describe environments that could theoretically include connected traffic lights that control vehicle flow, smart bins that notify collection services when they need emptying, and even sensors that monitor crops growing in fields.

How do we reach these automation goals?

Ultimately, the end goal of any research into robotics or automation is to emulate the actions of humans. People across the world engage in heated debates about whether machines will ever have the ability to think like people – a subject known as artificial intelligence (AI), and one worthy of its own exploration. Whether that will become a reality we cannot currently say for sure, but researchers around the world are hard at work trying to inch us closer.

There are, of course, issues that arise when we try to develop machines to take over certain tasks from humans, most notably to do with quality control and the margin for error. Some question whether a machine that lacks the capacity to consider extenuating circumstances, raise concerns, or react intuitively would be able to perform these tasks reliably.

Let’s look at self-driving cars for example. So much of driving depends on the person behind the wheel being able to react in seconds to any changes around them. It is, therefore, essential that machines are able to “think” as close to humans as possible. If artificial intelligence and technology alone cannot achieve this, it would be very difficult for such vehicles to become road legal. However, experts in the industry have suggested a very clever solution.

Are there any disadvantages to the research?

As with any major development, there are always going to be people who oppose it, or at the very least point out reasons why we should proceed with caution – and with good reason.

One of the biggest, and indeed most realistic, fears that many people express concerns economics and jobs. It’s no secret that the UK’s economy, and indeed the world’s, has been somewhat shaky over the past few years. This has led many to worry that the development of automated processes, which can perform certain tasks faster and with greater precision and accuracy than humans, will make many people’s jobs redundant.

Where are we headed?

It is unlikely that we are going to see any robot uprisings anytime soon. But the potential threats that an increase in automation brings to our society should not be underestimated. With the economic state of the world already so fragile, any attempts to research areas that could result in unemployment should be very carefully considered before implementation.

That being said, we are living in exciting times, able to witness such developments taking place. So much has already happened over the past few years that many people may not be aware of it all. We may not have reached the level of development seen in the movies – not yet, anyway – but with the number of ideas and the amount of research taking place around the world, the sky really is the limit.

Source: itproportal.com – Rise of the machines – The future of robotics and automation


What’s Missing From Machine Learning

Machine learning is everywhere. It’s being used to optimize complex chips, balance power and performance inside data centers, program robots, and keep expensive electronics updated and operating. What’s less obvious, though, is that there are no commercially available tools to validate, verify, and debug these systems once machines evolve beyond their final specification.

The expectation is that devices will continue to work as designed, like a cell phone or a computer that has been updated with over-the-air software patches. But machine learning is different. It involves changing the interaction between the hardware and software and, in some cases, the physical world. In effect, it modifies the rules for how a device operates based upon previous interactions, as well as software updates, setting the stage for much wider and potentially unexpected deviations from that specification.

In most instances, these deviations will go unnoticed. In others, such as safety-critical systems, changing how systems perform can have far-reaching consequences. But tools have not been developed yet that reach beyond the algorithms used for teaching machines how to behave. When it comes to understanding machine learning’s impact on a system over time, this is a brave new world.

“The specification may capture requirements of the infrastructure for machine learning, as well as some hidden layers and the training data set, but it cannot predict what will happen in the future,” said Achim Nohl, technical marketing manager for high-performance ASIC prototyping systems at Synopsys. “That’s all heuristics. It cannot be proven wrong or right. It involves supervised versus unsupervised learning, and nobody has answers to signing off on this system. This is all about good enough. But what is good enough?”

Most companies that employ machine learning point to the ability to update and debug software as their safety net. But drill down further into system behavior and modifications and that safety net vanishes. There are no clear answers about how machines will function once they evolve or are modified by other machines.

“You’re stressing things that were unforeseen, which is the whole purpose of machine learning,” said Bill Neifert, director of models technology at ARM. “If you could see all of the eventualities, you wouldn’t need machine learning. But validation could be a problem because you may end up down a path where adaptive learning changes the system.”

Normally this is where the tech industry looks for tools to help automate solutions and anticipate problems. With machine learning those tools don’t exist yet.

“We definitely need to go way beyond where we are today,” said Harry Foster, chief verification scientist at Mentor Graphics. “Today, you have finite state machines and methods that are fixed. Here, we are dealing with systems that are dynamic. Everything needs to be extended or rethought. There are no commercial solutions in this space.”

Foster said some pioneering work in this area is being done by England’s University of Bristol in the area of validating systems that are constantly being updated. “With machine learning, you’re creating a predictive model and you want to make sure it stays within legal bounds. That’s fundamental. But if you have a car and it’s communicating with other cars, you need to make sure you’re not doing something harmful. That involves two machine learnings. How do you test one system against the other system?”
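A runtime monitor of the kind Foster alludes to can be sketched in a few lines. The braking model, bounds, and numbers below are hypothetical stand-ins for illustration, not anything from Bristol's work:

```python
# Sketch of a runtime "legal bounds" monitor wrapped around a learned
# predictor. The model, bounds, and inputs are illustrative assumptions.

LEGAL_DECEL_RANGE = (0.0, 9.81)  # assumed physical/legal envelope, m/s^2

def braking_model(speed_mps, gap_m):
    """Stand-in for a learned predictor: recommended deceleration."""
    return speed_mps ** 2 / (2 * max(gap_m, 0.1))

def monitored_predict(speed_mps, gap_m):
    """Check the prediction against fixed bounds; fall back if it strays."""
    decel = braking_model(speed_mps, gap_m)
    lo, hi = LEGAL_DECEL_RANGE
    if not (lo <= decel <= hi):
        return hi, True   # clamp to a safe default and flag for analysis
    return decel, False
```

However the learned model evolves, the monitor's bounds stay fixed, which is the "stays within legal bounds" property the predictive model has to preserve.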

Today, understanding of these systems is relegated to a single point in time, based upon the final system specification and whatever updates have been added via over-the-air software. But machine learning uses an evolutionary teaching approach. With cars, it can depend upon how many miles a vehicle has been driven, where it was driven, by whom, and how it was driven. With a robot, it may depend upon what that robot encounters on a daily basis, whether that includes flat terrain, steps, extreme temperatures or weather. And while some of that will be shared among other devices via the cloud, the basic concept is that the machine itself adapts and learns. So rather than programming a device with software, it is programmed to learn on its own.

Predicting how even one system will behave under this model, coupled with periodic updates, yields a mathematical distribution rather than a single answer. Predicting how thousands of these systems will change, particularly if they interact with each other or with other devices, involves a series of probabilities that are in constant flux over time.

What is machine learning?
The idea that machines can be taught dates back almost two decades before the introduction of Moore’s Law. Work in this area began in the late 1940s, based on early computer work in identifying patterns in data and then making predictions from that data.

Machine learning applies to a wide spectrum of applications. At the lowest level are mundane tasks such as spam filtering. But machine learning also includes more complex programming of known use cases in a variety of industrial applications, as well as highly sophisticated image recognition systems that can distinguish between one object and another.

Arthur Samuel, one of the pioneers in machine learning, began experimenting with the possibility of making machines learn from experience back in the late 1940s—creating devices that can do things beyond what they were explicitly programmed to do. His best-known work was a checkers game program, which he developed while working at IBM. It is widely credited as the first implementation of machine learning.


Fig. 1: Samuel at his checkerboard using an IBM 701 in 1956. Six years later, the program beat checkers master Robert Nealey. Source: IBM

Machine learning has advanced significantly since then. Checkers has been supplanted by more difficult games such as chess, Jeopardy, and Go.

In a presentation at the Hot Chips 2016 conference in Cupertino last month, Google engineer Daniel Rosenband cited four parameters for autonomous vehicles—knowing where a car is, understanding what’s going on around it, identifying the objects around a car, and determining the best options for how a car should proceed through all of that to its destination.

This requires more than driving across a simple grid or pattern recognition. It involves some complex reasoning about what a confusing sign means, how to read a traffic light if it is obscured by an object such as a red balloon, and what to do if sensors are blinded by the sun’s glare. It also includes an understanding of the effects of temperature, shock and vibration on sensors and other electronics.

Google uses a combination of sensors, radar and lidar to pull together a cohesive picture, which requires a massive amount of processing in a very short time frame. “We want to jam as much compute as possible into a car,” Rosenband said. “The primary objective is maximum performance, and that requires innovation in how to architect everything to get more performance than you could from general-purpose processing.”


Fig. 2: Google’s autonomous vehicle prototype. Source: Google.

Programming all of this by hand into every new car is unrealistic. Database management is difficult enough with a small data set. Adding in all of the data necessary to keep an autonomous vehicle on the road, and fully updated with new information about potential dangerous behavior, is impossible without machine learning.

“We’re seeing two applications in this space,” said Charlie Janac, chairman and CEO of Arteris. “The first is in the data center, which is a machine-learning application. The second is ADAS, where you decide on what the image is. This gets into the world of convolutional neural networking algorithms, and a really good implementation of this would include tightly coupled hardware and software. These are mission-critical systems, and they need to continually update software over the air with a capability to visualize what’s in the hardware.”

How it’s being used
Machine learning comes in many flavors, and often means different things to different people. In general, the idea is that algorithms can be used to change the functionality of a system to either improve performance, lower power, or simply to update it with new use cases. That learning can be applied to software, firmware, an IP block, a full SoC, or an integrated device with multiple SoCs.

Microsoft is using machine learning for its “mixed reality” HoloLens device, according to Nick Baker, distinguished engineer in the company’s Technology and Silicon Group. “We run changes to the algorithm and get feedback as quickly as possible, which allows us to scale as quickly as possible from as many test cases as possible,” he said.

The HoloLens is still just a prototype, but like the Google self-driving car it is processing so much information, and reacting so quickly to the external world, that there is no way to program the device without machine learning.

Machine learning can be used to optimize hardware and software in everything from IP to complex systems, based upon a knowledge base of what works best for which conditions.

“We use machine learning to improve our internal algorithms,” said Anush Mohandass, vice president of marketing at NetSpeed Systems. “Without machine learning, if you don’t have an intelligent human to set it up, you get garbage back. You may start off and experiment with 15 things on the ‘x’ axis and 1,000 things on the ‘y’ axis, and set up an algorithm based on that. But there is a potential for infinite data.”

Machine learning assures a certain level of results, no matter how many possibilities are involved. That approach also can help if there are abnormalities that do not fit into a pattern because machine learning systems can ignore those aberrations. “This way you also can debug what you care about,” Mohandass said. “The classic case is a car on auto pilot that crashes because a chip did not recognize a full spectrum of things. At some point we will need to understand every data point and why something behaves the way it does. This isn’t the 80/20 rule anymore. It’s probably closer to 99.9% and 0.1%, so the distribution becomes thinner and taller.”
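Mohandass's "thinner and taller" distribution has a concrete testing cost: the rarer the failure, the more cases you have to run before you can expect to see even one. A back-of-envelope sketch (the failure rates are illustrative):

```python
import math

# Expected test volume needed to observe at least one failure with a given
# confidence, at a given failure rate. Rates below are illustrative.
def cases_to_observe(failure_rate, confidence=0.99):
    # P(no failure in n cases) = (1 - p)^n; solve (1 - p)^n <= 1 - confidence
    return math.ceil(math.log(1 - confidence) / math.log(1 - failure_rate))

n_easy = cases_to_observe(0.20)    # an "80/20" failure rate: ~21 cases
n_rare = cases_to_observe(0.001)   # a "99.9% / 0.1%" rate: thousands
```

Moving the failure rate from 20% to 0.1% multiplies the required test volume by more than two hundred, which is why debugging the tail dominates the effort.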

eSilicon uses a version of machine learning in its online quoting tools, as well. “We have an IP marketplace where we can compile memories, try them for free, and use them until you put them into production,” said Jack Harding, eSilicon’s president and CEO. “We have a test chip capability for free, fully integrated and perfectly functional. We have a GDSII capability. We have WIP (work-in-process) tracking and an online manufacturing order-entry system—all fully integrated. If I can get strangers on the other side of the world to send me purchase orders after eight lines of chat and build sets of chips successfully, there is no doubt in my mind that the bottom-up Internet of Everything crowd will be interested.”

Where it fits
In the general scheme of things, machine learning is what makes artificial intelligence possible. There is ongoing debate about which is a superset of the other, but suffice it to say that an artificially intelligent machine must utilize machine-learning algorithms to make choices based upon previous experience and data. The terms are often confusing, in part because they are blanket terms that cover a lot of ground, and in part because the terminology is evolving with technology. But no matter how those arguments progress, machine learning is critical to AI and its more recent offshoot, deep learning.

“Deep learning, as a subset of machine learning, is the most potent disruptive force we have seen because it has the ability to change what the hardware looks like,” said Chris Rowen, Cadence fellow and CTO of the company’s IP Group. “In mission-critical situations, it can have a profound effect on the hardware. Deep learning is all about making better guesses, but the nature of correctness is difficult to define. There is no way you get that right 100% of the time.”

But it is possible, at least in theory, to push closer to 100% correctness over time as more data is included in machine-learning algorithms.

“The more data you have, the better off you are,” said Microsoft’s Baker. “If you look at test images, the more tests you can provide the better.”

There is plenty of agreement on that, particularly among companies developing complex SoCs, which have quickly spiraled beyond the capabilities of engineering teams.

“I’ve never seen this fast an innovation of algorithms that are really effective at solving problems,” said Mark Papermaster, CTO of Advanced Micro Devices. “One of the things about these algorithms that is particularly exciting to us is that a lot of it is based around the pioneering work in AI, leveraging what is called a gradient-descent analysis. This algorithm is very parallel in nature, and you can take advantage of the parallelism. We’ve been doing this and opening up our GPUs, our discrete graphics, to be tremendous engines to accelerate machine learning. But unlike our competitors, we are doing it in an open source environment, looking at all the common APIs and software requirements to accelerate machine learning on our CPUs and GPUs and putting all that enablement out there in an open source world.”
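Gradient descent itself is easy to sketch. The toy least-squares fit below uses made-up data; the point is that each update is computed over the whole batch in one vectorised step, which is exactly the parallelism Papermaster describes GPUs exploiting:

```python
import numpy as np

# Toy batch gradient descent fitting y ~ w*x + b on synthetic data.
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 256)
y = 3.0 * x + 0.5 + rng.normal(0, 0.01, 256)

w, b, lr = 0.0, 0.0, 0.1
for _ in range(500):
    err = w * x + b - y             # all 256 residuals in one vector op
    w -= lr * 2 * np.mean(err * x)  # gradient of mean squared error w.r.t. w
    b -= lr * 2 * np.mean(err)      # ... and w.r.t. b
```

After 500 iterations w and b land near the true values 3.0 and 0.5. On a GPU the per-element multiplies and the reductions in each step run across thousands of lanes at once.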

Sizing up the problems
Still, algorithms are only part of the machine-learning picture. A system that can optimize hardware as well as software over time is, by definition, evolving from the original system spec. How that affects reliability is unknown, because at this point there is no way to simulate or test that.

“If you implement deep learning, you’ve got a lot of similar elements,” said Raik Brinkmann, president and CEO of OneSpin Solutions. “But the complete function of the system is unknown. So if you’re looking at machine learning error rates and conversion rates, there is no way to make sure you’ve got them right. The systems learn from experience, but it depends on what you give them. And it’s a tough problem to generalize how they’re going to work based on the data.”

Brinkmann said there are a number of approaches in EDA today that may apply, particularly with big data analytics. “That’s an additional skill set—how to deal with big data questions. It’s more computerized and IT-like. But parallelization and cloud computing will be needed in the future. A single computer is not enough. You need something to manage and break down the data.”

Brinkmann noted that North Carolina State University and the Georgia Institute of Technology will begin working on these problems this fall. “But the bigger question is, ‘Once you have that data, what do you do with it?’ It’s a system without testbenches, where you have to generalize behavior and verify it. But the way chips are built is changing because of machine learning.”

ARM’s Neifert considers this a general-purpose compute problem. “You could make the argument in first-generation designs that different hardware isn’t necessary. But as we’ve seen with the evolution of any technology, you start with a general-purpose version and then demand customized hardware. With something like advanced driver assistance systems (ADAS), you can envision a step where a computer is defining the next-generation implementation because it requires higher-level functionality.”

That quickly turns troubleshooting into an unbounded problem, however. “Debug is a whole different world,” said Jim McGregor, principal analyst at Tirias Research. “Now you need a feedback loop. If you think about medical imaging, 10 years ago 5% of the medical records were digitized. Now, 95% of the records are digitized. So you combine scans with diagnoses and information about whether it’s correct or not, and then you have feedback points. With machine learning, you can design feedback loops to modify those algorithms, but it’s so complex that no human can possibly debug that code. And that code develops over time. If you’re doing medical research about a major outbreak, humans can only run so many algorithms. So how do you debug it if it’s not correct? We’re starting to see new processes for deep learning modules that are different than in the past.”
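The feedback loop McGregor describes can be caricatured as a classifier that is nudged every time a diagnosis comes back wrong. The threshold model and the score stream below are invented for illustration:

```python
# Sketch of a diagnostic feedback loop: each (score, true_label) pair is
# fed back to nudge a threshold classifier. All values are illustrative.

class ThresholdClassifier:
    def __init__(self, threshold=0.5, step=0.05):
        self.threshold = threshold
        self.step = step

    def predict(self, score):
        return score >= self.threshold

    def feedback(self, score, true_label):
        """Nudge the threshold whenever a prediction was wrong."""
        if self.predict(score) != true_label:
            self.threshold += -self.step if true_label else self.step

clf = ThresholdClassifier()
# Stream of (scan score, confirmed diagnosis) pairs from the field.
for score, label in [(0.55, False), (0.58, False), (0.62, True), (0.40, False)]:
    clf.feedback(score, label)
```

Even in this toy, the deployed model after a few corrections is no longer the model that shipped, which is precisely what makes the evolved code hard to debug by hand.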

Source: semiengineering.com – What’s Missing From Machine Learning

4 Unique Challenges Of Industrial Artificial Intelligence

Robots are probably the first thing you think of when asked to imagine AI applied to industrials and manufacturing. Indeed, many innovative companies like Rodney Brooks’ Rethink Robotics have developed friendly-looking robot factory workers who hustle alongside their human colleagues. Industrial robots have historically been designed to perform specific niche tasks, but modern-day robots can be taught new tasks and make real-time decisions.

As sexy and shiny as robots are, the bulk of the value of AI in industrials lies in transforming data from sensors and routine hardware into intelligent predictions for better and faster decision-making. Some 15 billion machines are currently connected to the Internet, and Cisco predicts the number will surpass 50 billion by 2020. Connecting machines together into intelligent automated systems in the cloud is the next major step in the evolution of manufacturing and industry.

In 2015, General Electric launched GE Digital to drive software innovation and cloud connectivity across all departments. Harel Kodesh, CTO of GE Digital, shares the unique challenges of applying AI to industrials that differ from consumer applications.

1. Industrial Data Is Often Inaccurate

“For machine learning to work properly, you need lots of data. Consumer data is harder to misunderstand, for example when you buy a pizza or click on an ad,” says Kodesh. “When looking at the industrial internet, however, 40% of the data coming in is spurious and isn’t useful.”

Let’s say you need to calculate how far a combine needs to drill and you stick a moisture sensor into the ground to take important measurements. The readings can be skewed by extreme temperatures, accidental man-handling, hardware malfunctions, or even a worm that’s been accidentally skewered by the device. “We are not generating data from the comfort and safety of a computer in your den,” Kodesh emphasizes.
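A first line of defence against the spurious readings Kodesh describes is a robust outlier filter. The sketch below uses a median-absolute-deviation cutoff; the soil-moisture figures and the cutoff value are invented:

```python
import statistics

def reject_spurious(readings, k=3.0):
    """Drop readings more than k median-absolute-deviations from the median.

    A crude robust filter of the kind used to screen field-sensor data;
    the cutoff k and the sample values are illustrative assumptions.
    """
    med = statistics.median(readings)
    mad = statistics.median(abs(r - med) for r in readings) or 1e-9
    return [r for r in readings if abs(r - med) / mad <= k]

# Soil-moisture percentages, with two obviously bad samples mixed in.
raw = [31.2, 30.8, 31.5, 98.7, 30.9, -4.0, 31.1]
clean = reject_spurious(raw)  # the 98.7 and -4.0 readings are dropped
```

Median-based statistics are used rather than the mean precisely because a single skewered-worm reading can drag a mean arbitrarily far off.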

2. AI Runs On The Edge, Not On The Cloud

Consumer data is processed in the cloud on computing clusters with seemingly infinite capacity. Amazon can luxuriously take their time to crunch your browsing and purchase history and show you new recommendations. “In consumer predictions, there’s low value to false negatives and to false positives. You’ll forget that Amazon recommended you a crappy book,” Kodesh points out.

On a deep sea oil rig, a riser is a conduit which transports oil from subsea wells to a surface facility. If a problem arises, several clamps must respond immediately to shut the valve. The sophisticated software that manages the actuators on those clamps tracks minute details in temperature and pressure. Any mistake could mean disaster.

The stakes and responsiveness are much higher for industrial applications where millions of dollars and human lives can be on the line. In these cases, industrial features cannot be trusted to run on the cloud and must be implemented on location, also known as “the edge.”

Industrial AI is built as an end-to-end system, described by Kodesh as a “round-trip ticket”: data is generated by sensors on the edge, served to algorithms, modeled in the cloud, and then moved back to the edge for implementation. Between the edge and the cloud sit supervisor gateways and multiple nodes of computer storage, since the entire system must be able to run the right load in the right places.
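The "round-trip ticket" can be caricatured in a few lines: the edge makes immediate local decisions and buffers data, the cloud refits the model, and the result is pushed back down. Every component below is a hypothetical stand-in:

```python
# Caricature of the edge-to-cloud round trip: the edge decides locally
# and buffers samples; the cloud refits the model; the updated model is
# pushed back to the edge. All components are illustrative stand-ins.

class Edge:
    def __init__(self):
        self.limit = 100.0            # current alarm threshold
        self.buffer = []

    def sense(self, pressure):
        self.buffer.append(pressure)
        return pressure > self.limit  # immediate local decision, no cloud

class Cloud:
    @staticmethod
    def refit(samples):
        """'Train' a new threshold: mean of observed pressures plus a margin."""
        return sum(samples) / len(samples) + 20.0

edge = Edge()
for p in [90.0, 95.0, 88.0, 87.0]:
    edge.sense(p)                     # edge keeps operating throughout

edge.limit = Cloud.refit(edge.buffer)  # the model returns to the edge
```

The decision path never leaves `Edge.sense`, so an alarm fires in microseconds even if the cloud link is down; only the slow retraining loop crosses the network.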

Source: forbes.com – 4 Unique Challenges Of Industrial Artificial Intelligence

The Great Machine Learning Race

Processor makers, tools vendors, and packaging houses are racing to position themselves for a role in machine learning, despite the fact that no one is quite sure which architecture is best for this technology or what ultimately will be successful.

Rather than dampen investments, the uncertainty is fueling a frenzy. Money is pouring in from all sides. According to a new Moor Insights report, as of February 2017 there were more than 1,700 machine learning startups and 2,300 investors. The focus ranges from relatively simple dynamic network optimization to military drones using real-time information to avoid fire and adjust their targets.


Fig. 1: The machine learning landscape. Source: Moor Insights

While the general concepts involved in machine learning—doing things that a device was not explicitly programmed to do—date back to the late 1940s, machine learning has progressed in fits and starts since then. Stymied first by crude software (1950s through 1970s), then by insufficient processing power, memory, and bandwidth (1980s through 1990s), and finally by deep market downturns in electronics (2001 and 2008), it has taken nearly 70 years for machine learning to advance to the point where it is commercially useful.

Several things have changed since then:

  • The technology performance hurdles of the 1980s and 1990s are now gone. There is almost unlimited processing power, with more on the way using new chip architectures, as well as packaging approaches such as 2.5D and fan-out wafer-level packaging. Very fast memory is already available, with more types on the way, and advances in silicon photonics can speed up storage and retrieval of large blocks of data as needed.
  • There are ready markets for machine learning in the data center and in the autonomous vehicle market, where the central logic of these devices will need regular updates to improve safety and reliability. Companies involved in these markets have deep pockets or strong backing, and they are investing heavily in machine learning.
  • The pendulum is swinging back to hardware, or at the very least, a combination of hardware and software, because it’s faster, uses less power, and it’s more secure than putting everything into software. That bodes well for machine learning because of the enormous processing requirements, and it changes the economics for semiconductor investments.

Nevertheless, this is a technology approach riddled with uncertainty about what works best and why.

“If there was a winner, we would have seen that already,” said Randy Allen, director of advanced research at Mentor Graphics. “A lot of companies are using GPUs, because they’re easier to program. But with GPUs the big problem is determinism. If you send a signal to an FPGA, you get a response in a given amount of time. With a GPU, that’s not certain. A custom ASIC is even better if you know exactly what you’re going to do, but there is no slam-dunk algorithm that everyone is going to use.”

ASICs are the fastest, cheapest, and lowest-power solution for crunching numbers. But they are also the most expensive to develop, and they are unforgiving if changes are required. Changes are almost guaranteed with machine learning because the field is still evolving, so relying on ASICs—or at least relying only on ASICs—is a gamble.

This is one of the reasons that GPUs have emerged as the primary option, at least in the short-term. They are inexpensive, highly parallel, and there are enough programming tools available to test and optimize these systems. The downside is they are less power efficient than a mix of processors, which can include CPUs, GPUs, DSPs and FPGAs.

FPGAs add the additional element of future-proofing and lower power, and they can be used to accelerate other operations. But in highly parallel architectures, they also are more expensive, which has renewed attention on embedded FPGAs.

“This is going to take 5 to 10 years to settle out,” said Robert Blake, president and CEO of Achronix. “Right now there is no agreed upon math for machine learning. This will be the Wild West for the next decade. Before you can get a better Siri or Alexa interface, you need algorithms that are optimized to do this. Workloads are very diverse and changing rapidly.”

Massive parallelism is a requirement. There is also some need for floating point calculations. But beyond that, it could be 1-bit or 8-bit math.

“A lot of this is pattern matching of text-based strings,” said Blake. “You don’t need floating point for that. You can implement the logic in an FPGA to make the comparisons.”
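Blake's point, that string matching needs only integer comparisons, is easy to illustrate. The naive matcher below uses nothing but 8-bit equality checks, the kind of operation that maps directly onto FPGA logic (the byte patterns are invented):

```python
# Pattern matching with nothing but 8-bit integer comparisons; no
# floating point anywhere. The stream and pattern are invented examples.

def match_count(stream: bytes, pattern: bytes) -> int:
    """Count occurrences of `pattern` in `stream` using byte equality only."""
    n, m = len(stream), len(pattern)
    count = 0
    for i in range(n - m + 1):
        # Each comparison is a single 8-bit equality check.
        if all(stream[i + j] == pattern[j] for j in range(m)):
            count += 1
    return count

hits = match_count(b"GET /a GET /b POST /c GET /d", b"GET ")
```

In hardware, the inner loop becomes m parallel byte comparators feeding an AND gate, one result per clock, which is why FPGAs handle this class of workload so cheaply.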

Learning vs. interpretation
One of the reasons this becomes so complex is that there are two main components to machine learning. One is the “learning” phase, which is a set of correlations or pattern matches. In machine vision, for example, it allows a device to determine whether an image is a dog or a person. That started out as 2D comparisons, but databases have grown in complexity. They now include everything from emotions to movement. They can discern different breeds of dogs, and whether a person is crawling or walking.

The harder mathematics problem is the interpretation phase. That can involve inferencing—drawing conclusions based on a set of data and then extrapolating from those conclusions to develop presumptions. It also can include estimation, which is how economics utilizes machine learning.

At this point, much of the inferencing is being done in the cloud because of the massive amount of compute power required. But at least some of that will be required to be on-board in autonomous vehicles. For one thing, it’s faster to do at least some of that locally. For another, connectivity isn’t always consistent, and in some locations it might not be available at all.

“You need real-time cores working in lock step with other cores, and you could have three or four levels of redundancy,” said Steve Glaser, senior vice president of corporate strategy and marketing at Xilinx. “You want an immediate response. You want it to be deterministic. And you want it to be flexible, which means that to create an optimized data flow you need software plus hardware plus I/O programmability for different layers of a neural network. This is any-to-any connectivity.”

How to best achieve that, however, isn’t entirely clear. The result is a scramble for market position unlike anything that has been seen in the chip industry since the introduction of the personal computer. Chipmakers are building solutions that span development software, libraries and frameworks, with built-in flexibility to guard against sudden obsolescence because the market is still in flux.

What makes this so compelling for chipmakers is that the machine learning opportunity is unfolding at a time when the market for smartphone chips is flattening. But unlike phones or PCs, machine learning cuts across multiple market segments, each with the potential for significant growth (see fig. 2 below).


Fig. 2: Machine learning opportunities.

Rethinking architectures
All of this needs to be put in context of two important architectural changes that are beginning to unfold in machine learning. The first is a shift away from trying to do everything in software to doing much more in hardware. Software is easier to program, but it’s far less efficient from a power/performance standpoint and much more vulnerable from a security standpoint. The solution here, according to Xilinx’s Glaser, is leveraging the best of both worlds by using software-defined programming. “We’re showing 6X better efficiency in images per second per watt,” he said.

A second change is the emphasis on more processors—and more types of processors—rather than fewer, highly integrated custom processors. This reverses a trend that has been underway since the start of the PC era, namely that putting everything on a single die improves performance per watt and reduces the overall bill of materials cost.

“There is much more interest in larger numbers of small processors than big ones,” said Bill Neifert, director of models technology at ARM. “We’re seeing that in the number of small processors being modeled. We’re also seeing more FPGAs and ASICs being modeled than in the past.”

Because a large portion of the growth in machine learning is tied to safety-critical systems in autonomous vehicles, those systems require better modeling and better verification.

“One of the benefits of creating a model as early as possible is that you can inject faults for all possible safety requirements, so that when something fails—which it will—it can fail gracefully,” said Neifert. “And if you change your architecture, you want to be able to route data differently so there are no bottlenecks. This is why we’re also seeing so much concurrency in high-performance computing.”

Measuring performance and cost with machine learning isn’t a simple formula, though. Performance can be achieved in a variety of ways, such as better throughput to memory, faster and more narrowly written algorithms for specific jobs, or highly parallel computing with acceleration. Likewise, cost can be measured in multiple ways, such as total system cost, power consumption, and sometimes the impact of slow results, such as military equipment or an autonomous vehicle failing to make decisions quickly enough.

Beyond that, there are challenges involving the programming environment, which is part algorithmic and part intuition. “What you’re doing is trying to figure out how humans think without language,” said Mentor’s Allen. “Machine learning is that to the nth degree. It’s how humans recognize patterns, and for that you need the right development environment. Sooner or later we will find the right level of abstraction for this. The first languages are interpreters. If you look at most languages today, they’re largely library calls. Ultimately we may need language to tie this together, either pipelining or overlapping computations. That will have a lot better chance of success than high-level functionality without a way of combining the results.”

Kurt Shuler, vice president of marketing at Arteris, agrees. He said the majority of systems developed so far are being used to jump-start research and algorithm development. The next phase will focus on more heterogeneous computing, which creates a challenge for cache coherency.

“There is a balance between computational efficiency and programming efficiency,” Shuler said. “You can make it simpler for the programmer. An early option has been to use an “open” machine learning system that consists of a mix of ARM clusters and some dedicated AI processing elements like SIMD engines or DSPs. There’s a software library, which people can license. The chip company owns the software algorithms, and you can buy the chips and board and get this up and running early. You can do this with Intel Xeon chips too, and build in your or another company’s IP using FPGAs. But these initial approaches do not slice the problem finely enough, so basically you’re working with a generic platform, and that’s not the most efficient. To increase machine learning efficiency, the industry is moving toward using multiple types of heterogeneous processing elements in these SoCs.”

In effect, this is a series of multiply and accumulate steps that need to be parsed at the beginning of an operation and recombined at the end. That has long been one of the biggest hurdles in parallel operations. The new wrinkle is that there is more data to process, and movement across skinny wires that are subject to RC delay can affect both performance and power.
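The parse-then-recombine dataflow described above can be sketched as a chunked dot product: each chunk's multiply-accumulate runs independently, and the partial sums are combined at the end. This is purely an illustration of the pattern; threads stand in for hardware processing elements, and the data sizes are invented.

```python
# Sketch of parallel multiply-accumulate: parse the operation into chunks
# at the start, run each chunk independently, recombine at the end.

from concurrent.futures import ThreadPoolExecutor

def mac_chunk(pair):
    """Local multiply-accumulate over one chunk of the operands."""
    a, b = pair
    return sum(x * y for x, y in zip(a, b))

def parallel_dot(a, b, chunks=4):
    """Split, fan out, then recombine the partial sums."""
    step = (len(a) + chunks - 1) // chunks
    pairs = [(a[i:i + step], b[i:i + step]) for i in range(0, len(a), step)]
    with ThreadPoolExecutor(max_workers=chunks) as pool:
        partials = list(pool.map(mac_chunk, pairs))  # independent MAC units
    return sum(partials)  # recombine at the end

a = list(range(1, 101))
b = list(range(101, 201))
print(parallel_dot(a, b))  # matches the sequential dot product
```

In hardware the costly step is not the arithmetic but moving the chunks to the processing elements and the partials back, which is exactly the data-movement constraint Brinkmann raises next.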

“There is a multidimensional constraint to moving data,” said Raik Brinkmann, CEO of OneSpin Solutions. “In addition, power is dominated by data movement. So you need to localize processing, which is why there are DSP blocks in FPGAs today.”

This gets even more complicated with deep neural networks (DNNs) because there are multiple layers of networks, Brinkmann said.

And that creates other issues. “Uncertainty in verification becomes a huge issue,” said Achim Nohl, technical marketing manager for high-performance ASIC prototyping systems at Synopsys. “Nobody has an answer to signing off on these systems. It’s all about good enough, but what is good enough? So it becomes more and more of a requirement to do real-world testing where hardware and software is used. You have to expand from design verification to system validation in the real world.”

Internal applications
Not all machine learning is about autonomous vehicles or cloud-based artificial intelligence. Wherever there is too much complexity combined with too many choices, machine learning can play a role. There are numerous cases where this is already happening.

NetSpeed Systems, for example, is using machine learning to develop network-on-chip topologies for customers. eSilicon is using it to choose the best IP for specific parameters involving power, performance and cost. And ASML is using it to optimize computational lithography, basically filling in the dots on a distribution model to provide a more accurate picture than a higher level of abstraction can intrinsically provide.

“There is a lot of variety in terms of routing,” said Sailesh Kumar, CTO at NetSpeed Systems. “There are different channel sizes, different flows, and how that gets integrated has an impact on quality of service. Decisions in each of those areas lead to different NoC designs. So from an architectural perspective, you need to decide on one topology, which could be a mesh, ring or tree. The simpler the architecture, the fewer potential deadlocks. But if you do all of this manually, it’s difficult to come up with multiple design possibilities. If you automate it, you can use formal techniques and data analysis to connect all of the pieces.”

The machine-learning component in this case is a combination of training data and deductions based upon that data.

“The real driver here is fewer design rules,” Kumar said. “Generally you will hard-code the logic in software to make decisions. As you scale, you have more design rules, which makes updating the design rules an intractable problem. You have hundreds of design rules just for the architecture. What you really need to do is extract the features so you can capture every detail for the user.”

NetSpeed has been leveraging commercially available tools for machine learning. eSilicon, in contrast, built its own custom platform based upon its experience with both internally developed and commercial third-party IP.

“The fundamental interaction between supplier and customer is changing,” said Mike Gianfagna, eSilicon‘s vice president of marketing. “It’s not working anymore because it’s too complex. There needs to be more collaboration between the system vendor, the IP supplier, the end user and the ASIC supplier. There are multiple dimensions to every architecture and physical design.”

ASML, meanwhile, is working with Cadence and Lam Research to more accurately model optical proximity correction and to minimize edge placement error. Utilizing machine learning models allowed ASML to improve the accuracy of mask, optics, resist and etch models to less than 2nm, said Henk Niesing, director of applications product management at ASML. “We’ve been able to improve patterning through collaboration on design and patterning equipment.”

Conclusion
Machine learning is gaining ground as the best way of dealing with rising complexity, but ironically there is no clear approach to the best architectures, languages or methodologies for developing these machine learning systems. There are success stories in limited applications of this technology, but looked at as a whole, the problems that need to be solved are daunting.

“If you look at embedded vision, that is inherently so noisy and ambiguous that it needs help,” said Cadence Fellow Chris Rowen. “And it’s not just vision. Audio and natural languages have problems, too. But 99% of captured raw data is pixels, and most pixels will not be seen or interpreted by humans. The real value is when you don’t have humans involved, but that requires the development of human cognition technology.”

And how to best achieve that is still a work in progress—a huge project with lots of progress, and still a very long way to go. But as investment continues to pour into this field, both from startups and collaboration among large companies across a wide spectrum of industries, that progress is beginning to accelerate.

Source: Semiconductor Engineering – The Great Machine Learning Race

Bill Gates Is Wrong: The Solution to AI Taking Jobs Is Training, Not Taxes

Let’s take a breath: Robots and artificial intelligence systems are nowhere near displacing the human workforce. Nevertheless, no less a voice than Bill Gates has asserted just the opposite and called for a counterintuitive, preemptive strike on these innovations. His proposed weapon of choice? Taxes on technology to compensate for losses that haven’t happened.

AI has massive potential. Taxing this promising field of innovation is not only reactionary and antithetical to progress, it would discourage the development of technologies and systems that can improve everyday life.

Imagine where we would be today if policy makers, fearing the unknown, had feverishly taxed personal computer software to protect the typewriter industry, or slapped imposts on digital cameras to preserve jobs for darkroom technicians. Taxes to insulate telephone switchboard operators from the march of progress could have trapped our ever-present mobile devices on a piece of paper in an inventor’s filing cabinet.

There simply is no proof that levying taxes on technology protects workers. In fact, as former US treasury secretary Lawrence Summers recently wrote, “Taxes on technology are more likely to drive production offshore than create jobs at home.”

Calls to tax AI are even more stunning because they represent a fundamental abandonment of any responsibility to prepare employees to work with AI systems. Those of us fortunate enough to influence policy in this space should demonstrate real faith in the ability of people to embrace and prepare for change. The right approach is to focus on training workers in the right skills, not taxing robots.

There are more than half a million open technology jobs in the United States, according to the Department of Labor, but our schools and universities are not producing enough graduates with the right skills to fill them. In many cases, these are “new collar jobs” that, rather than calling for a four-year college degree, require sought-after skills that can be learned through 21st century vocational training, innovative public education models like P-TECH (which IBM pioneered), coding camps, professional certification programs and more. These programs can prepare both students and mid-career professionals for new collar roles ranging from cybersecurity analyst to cloud infrastructure engineer.

At IBM, we have seen countless stories of motivated new collar professionals who have learned the skills to thrive in the digital economy. They are former teachers, fast food workers, and rappers who now fight cyber threats, operate cloud platforms and design digital experiences for mobile applications. WIRED has even reported how, with access to the right training, former coal miners have transitioned into new collar coding careers.

The nation needs a massive expansion of the number and reach of programs students and workers can access to build new skills. Closing the skills gap could fill an estimated 1 million US jobs by 2020, but only if large-scale public-private partnerships can better connect many more workers to the training they need. This must be a national priority.

First, Congress should update and expand career-focused education to help more people, especially women and underrepresented minorities, learn in-demand skills at every stage. This should include programs to promote STEM careers among elementary students, which increase interest and enrollment in skills-focused courses later in their educational careers. Next, high-school vocational training programs should be reoriented around the skills needed in the labor market. And updating the Federal Work-Study program, something long overdue, would give college students meaningful, career-focused internships at companies rather than jobs in the school cafeteria or library. Together, high-school career training programs and college work study receive just over $2 billion in federal funding. At around 3 percent of total federal education spending, that’s a pittance. We can and must do more.

Second, Congress should create and fund a 21st century apprenticeship program to recruit and train or retrain workers to fill critical skills gaps in federal agencies and the private sector. Allowing block grants to fund these programs at the state level would boost their effectiveness and impact.

Third, Congress should support standards and certifications for new collar skills, just as it has done for other technical skills, from automotive technicians to welders. Formalizing these national credentials and accreditation programs will help employers recognize that candidates are sufficiently qualified, benefiting workers and employers alike.

Taking these steps now will establish a robust skills-training infrastructure that can address America’s immediate shortage of high-tech talent. Once this foundation is in place, it can evolve to focus on new categories of skills that will grow in priority as the deployment of AI moves forward.

AI should stand for augmented—not artificial—intelligence. It will help us make digital networks more secure, allow people to lead healthier lives, better protect our environment, and more. Like steam power, electricity, computers, and the internet before it, AI will create more jobs than it displaces. What workers really need in the era of AI are the skills to compete and win. Providing the architecture for 21st century skills training requires public policies based on confidence, not taxes based on fear.

Source: Wired – Bill Gates Is Wrong: The Solution to AI Taking Jobs Is Training, Not Taxes

Don’t Confuse Uncertainty with Risk

We are living in a digital era increasingly dominated by uncertainty, driven in part by the rise of exponential change. The problem is, we are generally clogging up the gears of progress and growth in our companies by treating that uncertainty as risk and by trying to address it with traditional mitigation strategies. The economist Frank Knight first popularized the differentiation between risk and uncertainty almost a century ago. Though it is a dramatic oversimplification, one critical difference is that risk is – by definition – measurable while uncertainty is not. [1]

 

Proof and Confidence. One way to parse uncertainty from risk, and in turn to assess differing levels of risk, is to consider what it should take for your organization to make a certain strategic move. The first dimension of this is the “level of evidence required” in order to make the move. In other words, what amount of data and supporting information is necessary to understand the contours of the unknown and to shift from inaction to action?

The second dimension is the “level of confidence” that we have in making the move in the first place.

Risk in the Known or Knowable. Since anything that can be called risky is measurable (e.g. via scenario modeling, financial forecasting, sensitivity analysis, etc.), it is by definition close enough to the standard and “knowable” business of today. Uncertainty is the realm outside of that: it’s “unknowable” and not measurable.

In risky areas, the level of analysis we do – and how much time we take to try to understand the risk and make a decision – should vary. The graphic above frames the three levels of risk described in more detail below, along with examples from some of our clients of the types of projects that we see falling into these categories:

 

1. Risky, Without Precursor – These are moves for which there is no “precursor” or analog that we have seen from elsewhere. We really want to do our homework when opportunities fall here, as exposure (e.g. financial or reputational) is high, and we have very little experience with the move and/or supporting data in the form of others’ success stories, analogs, etc.

Typical initiative: Collaborative and/or ecosystem-driven solution development – The City of Columbus was awarded the U.S. Department of Transportation’s $40MM Smart City Challenge in June of last year. The competition involved submissions from 78 cities “to develop ideas for an integrated, first-of-its-kind smart transportation system that would use data, applications, and technology to help people and goods move more quickly, cheaply, and efficiently.”[2] The solutions that were envisioned as part of the challenge were generally known (or at least had an identifiable development path) but required a complex ecosystem to deliver them. Columbus was awarded the prize because they created a compelling vision and because they were able to bring the right “burden of proof” to the USDOT that they would be able to pull it off – i.e. that they had ways to manage down the execution risk.

 

2. Risky, With Precursor – Exposure may be high, but we are highly confident about making the move. The argument for why the move makes sense should be reasonably straightforward.

Typical initiative: Sensor-based business models and data monetization – A major aerospace sub-system provider had long been an industry leader in developing high-tech industrial parts and products. In recent years, new competitors had been coming online, and the company knew they needed to innovate to stay ahead of the game. In one initiative, they began adding sensors to their aircraft and aerospace products, initially for predictive maintenance needs. As they began rolling this out, they realized the data could be valuable in many other ways and actually create a whole new source of revenue from a whole new customer: pilots. Using this data, they decided to build a mobile platform that would allow pilots to view operating information from the parts and understand better ways to fly from point A to point B. The level of evidence they had was high – it was clear from many other industries that data could be used in this way to produce business value, but the confidence that it was the right decision for the brand was low at the beginning. They had to test it to find out. In this case, it was enormously successful, opening up a new business model and customer set that the company had never served before.

 

3. Low Risk, No Brainer – This is the domain of “just go do it,” perhaps because lots of solutions exist already and the opportunity for immediate economic value is high. There isn’t much reason to go study this to death.

Typical initiative: Robotic Process Automation (RPA) – RPA technology is essentially a software robot that has been coded to be able to do repetitive, highly logical and structured tasks. It has been around for a while, and there are extensive examples and case studies across industries, especially in banking. So, when JP Morgan decided to look into using softbots to automate higher order processes with investment banking, the right decision seemed obvious.[3] With growing pressure on margins, and with the success within the industry in automating structured tasks, raising demands on automation technology seemed like a logical next step. It was clear this was where the industry is going, and it was just a matter of time before all competitors would be doing it. Choosing not to innovate seemed like the bigger risk in this situation.
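The kind of task an RPA softbot handles is, at its core, a fixed rule applied over and over to structured input. A minimal, hedged sketch of that idea follows; the invoice format and field names are invented for illustration and bear no relation to any real deployment.

```python
# Illustrative RPA-style rule: extract the same structured fields from
# every record in a uniform text feed.

import re

RECORD = re.compile(
    r"Invoice (?P<id>\d+): \$(?P<amount>[\d.]+) due (?P<date>\d{4}-\d{2}-\d{2})"
)

def extract(lines):
    """Apply one fixed, repetitive rule to each record, as a softbot would."""
    out = []
    for line in lines:
        m = RECORD.search(line)
        if m:
            out.append((m["id"], float(m["amount"]), m["date"]))
    return out

inbox = [
    "Invoice 1042: $250.00 due 2017-06-01",
    "Invoice 1043: $99.50 due 2017-06-15",
]
print(extract(inbox))
```

The appeal, and the limit, of RPA is visible here: the rule never tires, but it also never generalizes beyond the format it was coded for.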

 

Dealing with the Uncertainty Quadrant. This is the domain of the “unknowable.” Operating in this space, many companies spend lots of time running around collecting data to reduce risk, in the attempt to make it more knowable. But if the action is truly uncertain, extensive research to lower your risk is just a waste of time.

The only way to consider a highly uncertain action is to “just go do it” – usually through prototyping and market testing – but in a way that minimizes financial or reputational exposure. Consider an old story about Palm Computing, a favorite of my friend Larry Keeley’s. As I have heard Larry tell it, the genesis story of Palm is rooted in a condition we are all too familiar with today: a low-level hypothesis that digital would matter when it came to being organized and connected, but with a high degree of uncertainty about how that would (and should) play out. This was a time of “spontaneous simultaneity” as various players worked with designs and technological solutions. The one who got it right (for a time) was the one who just did it.

Jeff Hawkins, one of the founders of Palm, epitomized the activity of prototyping. The (perhaps apocryphal) story is that in the very early days, Jeff would work in his garage to cut multiple pieces of balsa wood into organizer-shaped rectangles. He would load a bunch of those into his shirt pocket and carry them to meetings, sketching on each one in the moment something that occurred to him as being particularly helpful at the time. Contact entry, instant contact sharing, notes, calendar access, etc.: all started to appear on pieces of wood and craft an overall vision for the most important functionality to be built into the Palm. And unlike computers of the era, he discovered the criticality of instant-on functionality. To steal a phrase from the design world, the device ended up being “well-behaved” from the beginning because it was founded upon how people actually interacted. The rise and fall of Palm is a much longer story. But in the early days, Hawkins demonstrated the handling of uncertainty while minimizing exposure exquisitely.

As we carry these principles back into our organizations, discussion of whether something is risky or simply uncertain is almost “certainly” going to drift quickly towards the semantic. We should start training ourselves and our organizations to talk more about the level of evidence required (not to mention whether proof is even attainable) and level of confidence, and less about how risky something seems. With this approach, we might actually be able to start thriving in a world that is increasingly uncertain.

 

Source: Huffington Post – Don’t Confuse Uncertainty with Risk

How to Win with Automation (Hint: It’s Not Chasing Efficiency)

In 1900, 30 million people in the United States were farmers. By 1990 that number had fallen to under 3 million even as the population more than tripled. So, in a manner of speaking, 90% of American agriculture workers lost their jobs, mostly due to automation. Yet somehow, the 20th century was still seen as an era of unprecedented prosperity.

In the decades to come, we are likely to see similar shifts. Today, just like then, many people’s jobs will be taken over by machines and many of the jobs of the future haven’t been invented yet. That inspires fear in some, excitement in others, but everybody will need to plan for a future that we can barely comprehend today.

This creates a dilemma for leaders. Clearly, any enterprise that doesn’t embrace automation won’t be able to survive any better than a farmer with a horse-drawn plow. At the same time, managers need to continue to motivate employees who fear their jobs being replaced by robots. In this new era of automation, leaders will need to identify new sources of value creation.

Identify Value At A Higher Level

It’s fun to make lists of things we thought machines could never do. It was said that only humans could recognize faces, play chess, drive a car, and do many other things that are automated today. Yet while machines have taken over tasks, they haven’t actually replaced humans. Although the workforce has doubled since 1970, unemployment remains fairly low, especially among those who have more than a high school education. In fact, overall labor force participation for working-age adults has risen from around 70% in 1970 to over 80% today.

Once a task becomes automated, it also becomes largely commoditized. Value is then created on a higher level than when people were busy doing more basic things. The value of bank branches, for example, is no longer to manually process deposits, but to solve more complex customer problems like providing mortgages. In much the same way, nobody calls a travel agency to book a simple flight anymore. They expect something more, like designing a dream vacation. Administrative assistants aren’t valuable because they take dictation and type it up on a typewriter, but because they serve as gatekeepers who prioritize tasks in an era of information overload.

So the first challenge for business leaders facing a new age of automation is not simply to cut costs, but to identify the next big area of value creation. How can we use technology to extend the skills of humans in ways that aren’t immediately clear, but will seem obvious a decade from now? Whoever identifies those areas of value first will have a leg up on the competition.

Innovate Business Models

Amazon may be the most successfully automated company in the world. Everything from its supply chain to its customer relationship management is optimized through its use of big data and artificial intelligence. Its dominance online has become so complete that during the most recent Christmas season it achieved a whopping 36.9% market share in online sales.

So a lot of people were surprised when it launched a brick-and-mortar book store, but as Apple has shown with its highly successful retail operation, there’s a big advantage to having stores staffed with well-trained people. They can answer questions, give advice, and interact with customers in ways that a machine never could.

Notice as well that the Apple and Amazon stores are not your typical mom-and-pop shops, but are largely automated themselves, with industrial age conventions like cash registers and shopping aisles disappearing altogether. That allows the sales associates to focus on serving customers rather than wasting time and energy managing transactions.

Redesign Jobs

When Xerox executives first got a glimpse of the Alto, the early personal computer that inspired Steve Jobs to create the Macintosh, they weren’t impressed. To them, it looked more like a machine that automated secretarial work than something that would be valuable to executives. Today, of course, few professionals could function without word processing or spreadsheets.

We’re already seeing a similar process of redesign with artificially intelligent technologies. Scott Eckert, CEO of Rethink Robotics, which makes the popular Baxter and Sawyer robots, told me, “We have seen in many cases that not only does throughput improve significantly, but jobs are redesigned in a way that makes them more interesting and rewarding for the employee.” Factory jobs are shifting from manual tasks to designing the work of robots.

Lynda Chin, who co-developed the Oncology Expert Advisor at MD Anderson, powered by IBM’s Watson, believes that automating cognitive tasks in medicine can help physicians focus more on patients. “Instead of spending 12 minutes searching for information and three with the patient, imagine the doctor getting prepared in three minutes and spending 12 with the patient,” she says.

“This will change how doctors will interact with patients,” she continues. “When doctors have the world’s medical knowledge at their fingertips, they can devote more of their mental energy to understanding the patient as a person, not just a medical diagnosis. This will help them take lifestyle, family situation and other factors into account when prescribing care.”

Humanity Is Becoming The Scarce Resource

Before the industrial revolution, most people earned their living through physical labor. Much like today, many tradesmen saw mechanization as a threat — and indeed it was. There’s not much work for blacksmiths or loom weavers these days. What wasn’t clear at the time was that industrialization would create a knowledge economy and demand for higher paid cognitive work.

Today we’re seeing a similar shift from cognitive skills to social skills. When we all carry supercomputers in our pocket that can access the collective knowledge of the world in an instant, skills like being able to retain information or manipulate numbers are in less demand, while the ability to collaborate, with humans and machines, is rising to the fore.

There are, quite clearly, some things machines will never do. They will never strike out in Little League, get their heart broken, or worry about how their kids are doing in school. These limitations mean that they will never be able to share human experiences or show genuine empathy. We will always need humans to collaborate with other humans.

As the futurist Dr. James Canton put it to me, “It is largely a matter of coevolution. With automation driving down value in some activities and increasing the value of others, we redesign our work processes so that people are focused on the areas where they can deliver the most value by partnering with machines to become more productive.”

So the key to winning in the era of automation, where robots do jobs formerly performed by humans, is not simply to pursue greater efficiency, but to identify how that efficiency creates demand for new jobs to be done.

 

Source: Harvard Business Review – How to Win with Automation (Hint: It’s Not Chasing Efficiency)

Please Don’t Hire a Chief Artificial Intelligence Officer

Every serious technology company now has an Artificial Intelligence team in place. These companies are investing millions into intelligent systems for situation assessment, prediction analysis, learning-based recognition systems, conversational interfaces, and recommendation engines. Companies such as Google, Facebook, and Amazon aren’t just employing AI, but have made it a central part of their core intellectual property.

As the market has matured, AI is beginning to move into enterprises that will use it but not develop it on their own. They see intelligent systems as solutions for sales, logistics, manufacturing, and business intelligence challenges. They hope AI can improve productivity, automate existing processes, provide predictive analysis, and extract meaning from massive data sets. For them, AI is a competitive advantage, but not part of their core product. For these companies, investment in AI may help solve real business problems but will not become part of customer-facing products. Pepsi, Wal-Mart and McDonald’s might be interested in AI to help with marketing, logistics or even flipping burgers, but that doesn’t mean we should expect to see intelligent sodas, snow shovels, or Big Macs showing up anytime soon.

As with earlier technologies, we are now hearing advice about “AI strategies” and how companies should hire Chief AI Officers. In much the same way that the rise of Big Data led to the Data Scientist craze, the argument is that every organization now needs to hire a C-Level officer who will drive the company’s AI strategy.

I am here to ask you not to do this. Really, don’t do this.

It’s not that I doubt AI’s usefulness. I have spent my entire professional life working in the field. Far from being a skeptic, I am a rabid true believer.

However, I also believe that the effective deployment of AI in the enterprise requires a focus on achieving business goals. Rushing towards an “AI strategy” and hiring someone with technical skills in AI to lead the charge might seem in tune with the current trends, but it ignores the reality that innovation initiatives only succeed when there is a solid understanding of actual business problems and goals. For AI to work in the enterprise, the goals of the enterprise must be the driving force.

This is not what you’ll get if you hire a Chief AI Officer. The very nature of the role aims at bringing the hammer of AI to the nails of whatever problems are lying around. This well-educated, well-paid, and highly motivated individual will comb your organization looking for places to apply AI technologies, effectively making the goal to use AI rather than to solve real problems.

This is not to say that you don’t need people who understand AI technologies. Of course you do. But understanding the technologies and understanding what they can do for your enterprise strategically are completely different. And hiring a Chief of AI is no substitute for effective communication between the people in your organization with technical chops and those with strategic savvy.

One alternative to hiring a Chief AI Officer is to start with the problems. Move consideration of AI solutions into the hands of the people who are addressing the problems directly. If these people are equipped with a framework for thinking about when AI solutions might be applicable, they can suggest where those solutions are actually applicable. Fortunately, the framework for this flows directly from the nature of the technologies themselves. We have already seen where AI works and where its application might be premature.

The question comes down to data and the task.

For example, highly structured data found in conventional databases with well-understood schemata tend to support traditional, highly analytical machine learning approaches. If you have 10 years of transactional data, then you should use machine learning to find correlations between customer demographics and products.
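The tabular case can be made concrete with a toy sketch. The transaction records and the demographic field below are invented for illustration (a real pipeline would run a library such as scikit-learn over the full transaction history), but the shape of the task is the same: measure how a demographic moves with purchases.

```python
# Hypothetical transaction history: (customer_age, units_of_product_bought).
# Structured rows like these are what traditional, highly analytical
# machine learning consumes.
transactions = [
    (22, 0), (25, 1), (31, 1), (38, 2), (45, 2),
    (52, 3), (58, 3), (63, 4), (67, 4), (71, 5),
]

def pearson(xs, ys):
    """Pearson correlation coefficient, dependency-free."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    var_y = sum((y - mean_y) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

ages = [age for age, _ in transactions]
units = [bought for _, bought in transactions]

# Values near +1 suggest the demographic and product purchases move together.
r = pearson(ages, units)
print(f"age vs. purchases: r = {r:.2f}")
```

On this made-up data the correlation is strongly positive; in practice the same question is asked of every demographic/product pair, which is exactly the kind of search machine learning automates.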

In cases where you have high-volume, low-feature data sets (such as images or audio), deep learning technologies are most applicable. So a deep learning approach that uses equipment sounds to anticipate failures on your factory floor might make sense.

If all you have is text, the technologies of data extraction, sentiment analysis and Watson-like approaches to evidence-based reasoning will be useful. Automating intelligent advice based on HR best practice manuals could fit into this model.
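A minimal sketch of the text case, using a hand-built lexicon invented for this example; production sentiment analysis relies on trained models rather than word lists, but the basic extraction idea is the same:

```python
# Tiny illustrative sentiment lexicon (invented for this sketch, not from
# any real product).
POSITIVE = {"great", "helpful", "fast", "excellent", "resolved"}
NEGATIVE = {"slow", "broken", "rude", "useless", "unresolved"}

def sentiment(text):
    """Classify a snippet of customer text by counting lexicon hits."""
    words = set(text.lower().split())
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(sentiment("Support was fast and helpful"))    # -> positive
print(sentiment("The portal is slow and useless"))  # -> negative
```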

And if you have data that is used to support reporting on the status or performance of your business, then natural language generation is the best option. It makes no sense to have an analyst’s valuable time dedicated to analyzing and summarizing all your sales data when you can have perfectly readable English language reports automatically generated by a machine and delivered by email.
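To make the reporting case concrete, here is a toy template-based sketch with invented sales figures; commercial natural language generation platforms are far more sophisticated, but the shape of the task, numbers in, readable English out, is the same:

```python
# Hypothetical quarterly sales figures (illustrative only).
monthly_sales = {"January": 118_400, "February": 97_250, "March": 132_900}

best = max(monthly_sales, key=monthly_sales.get)
worst = min(monthly_sales, key=monthly_sales.get)
total = sum(monthly_sales.values())

# Fill an English template from the computed figures; this is the report
# a machine could generate and deliver by email.
report = (
    f"Q1 sales totalled ${total:,}. "
    f"{best} was the strongest month at ${monthly_sales[best]:,}, "
    f"while {worst} lagged at ${monthly_sales[worst]:,}."
)
print(report)
```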

If decision-makers throughout your organization understand this, they can look at the business problems they have and the data they’re collecting and recognize the types of cognitive technologies that might be most applicable.

The point here is simple. AI isn’t magic. Specific technologies provide specific functions and have specific data requirements. Understanding them does not require that you hire a wizard or unicorn to deal with them. It does not require a Chief of AI. It requires teams that know how to communicate the reality of business problems with those who understand the details of technical solutions.

The AI technologies of today are astoundingly powerful. As they enter the enterprise, they will change everything. If we focus on applying them to solve real, pervasive problems, we will build a new kind of man-machine partnership that empowers us all to work at the top of our game and realize our greatest potential.

Source: Harvard Business Review – Please Don’t Hire a Chief Artificial Intelligence Officer

From Bot Hype To Reality: 3 Keys to Success For Intelligent Automation by Enterprises

With the rapid adoption of messaging and artificial intelligence hitting the mainstream, it is ‘go’ time for enterprises to modernize and meet their customers where they want to be met: in mobile chat. Remember what email did to the fax machine? It won’t take long for email to suffer a similar fate, with messaging usurping its pole position in B2C communications.

In 2016, we saw the rise of chatbots. You couldn’t read a reputable editorial outlet without the term ‘chatbot’ appearing somewhere on the first page. But the hype quickly turned to a sad reality as many bots on Facebook, Kik, WeChat and other platforms failed to deliver on their promise. But then again, what was their promise? Do consumers really want to ‘chat’ with brands and have relatively meaningless ‘conversations’? I say no, and as a result, pragmatic AI is winning the day.

Pragmatic AI is the key to enterprise transformation in 2017 and beyond. It is the idea that machines can interact with humans through messaging conversations to resolve an issue quickly, efficiently and securely. Consumers are busy people. When they need something from a business, they want it immediately. Pragmatic AI doesn’t put you on hold, it doesn’t give you the wrong answer and it is always available – 24/7/365.

So, with this in mind, here are 3 ways enterprises can cut through the hype and modernize for the next generation of consumers:

 

1. Choosing the right AI

 

There are two flavors of AI: Open and Pragmatic.

Open AI – like the large-scale cognitive services with high-end AI capabilities – is the kind we’re accustomed to seeing in the headlines. But for the enterprise, this type of AI is often too ambitious to be put to any good use beyond data analytics. It lacks the performance-based capabilities and transactional components that are needed for day-to-day enterprise applications. It is extremely costly and requires a small army of system integrators to install and operate it.

Pragmatic AI, as defined above, works on a functional level. It takes IVR, call center and other scripts to create decision trees, and plugs into various backend APIs to execute a myriad of business processes. From changing passwords, to canceling accounts, binding policies and tracking claims, if a human can do it, Pragmatic AI can do it too.
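The decision-tree pattern described here can be sketched in a few lines. The script and the backend action names below are hypothetical; a real deployment would call authenticated backend APIs rather than return strings:

```python
# A call-center script encoded as a nested dict: inner nodes ask the next
# question, leaves name a (hypothetical) backend action to execute.
script = {
    "prompt": "What do you need help with? (card / password)",
    "card": {
        "prompt": "Report it lost, or request a replacement? (lost / replace)",
        "lost": "action:cancel_card",
        "replace": "action:issue_card",
    },
    "password": "action:reset_password",
}

def route(answers):
    """Walk the script with the caller's answers so far; return either the
    backend action to run or the next question to ask."""
    node = script
    for answer in answers:
        node = node[answer]
        if isinstance(node, str):      # reached a leaf: a backend API call
            return node
    return node["prompt"]              # still mid-dialogue: ask next question

print(route(["card", "lost"]))  # -> action:cancel_card
```

The point of the pattern is that every path ends in a transaction, not in open-ended conversation, which is what distinguishes Pragmatic AI from a chit-chat bot.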

We see the fallacy around deep learning and open AI catch up with many enterprises that are six to 12 months into a deployment (after feeling the pressure to adopt AI) and see no real solution in sight. Roughly 80% of call center inquiries don’t require cognitive services and deep learning. You have to start small, be practical and use bots that are nimble and functional. If you do this properly, your bots can also proactively engage consumers and replace email and social media as the primary channel for revenue-driving promotions and marketing initiatives.

 

2. Increasing loyalty by enabling transactions through automation

 

Enterprises exist to serve and deliver on consumer demands. Consumers are transaction-driven – when they want something, they want it instantaneously. So, when enterprises expand their communication strategies to explore new channels – such as chatbot-powered messaging – they need to ensure the new channels support an even greater level of functionality than their existing ones.

A major problem we’re seeing in the industry is enterprises deploying bots on third-party channels that lack basic transactional functionality – whether that be payment processing, scheduling, file transfer and storage, or authentication. The resulting experience is usually a negative one for both the customer and the enterprise.

The technology exists to support rich customer interactions over messaging. After all, it is the next frontier for enterprise communication. Enterprise platforms are meant for enterprises. Social platforms are meant for socializing. Let’s keep business with business and pleasure with pleasure; mixing the two can result in major repudiation and fraud issues through identity theft.

 

3. Protecting customer data through an end-to-end solution

 

Right up there on the ‘mission critical list’ of every CIO is data privacy and protection. Mobile messaging is generating newfound challenges for businesses as consumers flock to apps that are insecure and can’t support the needs of enterprise communication. This means when businesses add social messaging apps into their communication mix, they can’t provide the functionality for customers to do anything more than merely ‘chat’. The result is poor customer experiences and lost revenue. The same is true for bots. To avoid potential security risks and wasted investment, businesses need to ensure the platform they intend to use meets the desired requirements so they can adequately serve their customers.

Enterprises in the healthcare, financial services and insurance industries face significant challenges in this respect. Whether it is HIPAA, FISMA, FINRA or another regime, these enterprises need to meet various state, federal and international regulatory criteria. A poorly devised automation and bot strategy, where one vendor’s bots are bolted onto another vendor’s messaging system, almost guarantees compliance failure and legal exposure.

Find an end-to-end solution where the automation, messaging, transactions and consumer experience are all one and the same, built around compliance, privacy, scalability and security.

 

Driving customer satisfaction and cost savings for the enterprise

 

There’s been enough hype about chatbots and AI to make a portion of consumers and enterprises a little disillusioned with the technology’s promise. Skeptics begin to question the practicality of bots. But it’s more a case of a tradesman blaming his tools than the tools letting him down. With a strategic and carefully planned approach to bots and automation, the results can transform any enterprise, driving up NPS and dramatically reducing costs. These are just three examples of how enterprises can launch their own thorough and ROI-driven automation strategies to connect with consumers in new and engaging ways.

 

Source: Huffington Post – From Bot Hype To Reality: 3 Keys to Success For Intelligent Automation by Enterprises

Blue Prism Software Which Makes Bots Productive Will Now Run in the Cloud

The tech world is besotted with bots. This technology, also known as chatbots, provides automated but theoretically human-like responses to a user request. If you’ve clicked on a customer service button on your bank’s site, you’ve dealt with a bot. Businesses see bots as a great way to help people buy products or receive support on products they’ve already bought.

But the bots themselves typically act as the front door to a world of processes happening behind the scenes. If you ask a bot for help, your request kicks off activities to get the job done.

“We sit between the chat bots at the front end and the infrastructure and business processes on the back end. We view ourselves as the operating system for this digital workforce,” Blue Prism chief executive Alistair Bathgate told Fortune.

On Wednesday, the company is announcing a new version of its software that will run on public clouds like Amazon Web Services, Microsoft Azure, and Google Cloud Platform. Blue Prism’s software until now has typically run on customers’ own servers.

“If you call your bank to report a lost or stolen credit card, that starts as a five-minute conversation followed by 25 minutes of administrative processes, where someone has to cancel the card, add a new card and initiate anti-fraud procedures. We automate that 25-minute piece,” he said.

By opening up its technology to run on massive public cloud data centers, Blue Prism can also take advantage of all the artificial intelligence and other services those clouds offer, Bathgate said.

In this arena, which techies call robotic process automation or RPA, Blue Prism competes with companies like UiPath and Automation Anywhere. The big cloud providers, which are pouring billions into AI and other services, could enter the fray on their own as well.

Source: Fortune – Blue Prism Software Which Makes Bots Productive Will Now Run in the Cloud