Public perception of artificial intelligence seems to lie somewhere at the intersection of existential fear and cautious optimism. Yet there’s a growing movement of people who believe AI is crucial to the evolution of our species. These people aren’t outsiders or outliers; they’re directing research on the cutting edge at companies like Google.
Ray Kurzweil, Google’s guru of AI and futurism, spoke last week at the Council on Foreign Relations in an intimate Q&A session. His views on the future of humanity might seem radical to a public that’s been cutting its teeth on doomsayer headlines featuring Elon Musk and Stephen Hawking warning about World War III.
He’s quick to point out that today, right now, is the best our species has ever had it. According to him, most people don’t know that the world we live in currently has less hunger and poverty than ever before. “Three billion people have smartphones. I thought it was two but I just found out it was three. In a few years that’ll be six billion,” he says.
The deadliest war in recorded human history, World War II, ended just 72 years ago. In the time since, humanity has engaged in what feels like countless skirmishes, police actions, and outright wars. And while the US remains engaged in the longest war in its history – with no end in sight – the human species is currently enjoying the most peaceful period in the history of our civilization.
The existential fear is that AI will somehow compromise this progress and send us careening into the next extinction-level event. If technology like the atom bomb made World War II so much worse than everything before it, doesn’t it follow that WWIII will be even more devastating?
It’s more complex than that, according to Kurzweil. He believes part of the reason we’ve been able to coexist so well (in the grand historical scheme) for so long is that democracy has begun to take hold globally. He also believes the rise of democracy is the direct result of advances in communication technology. According to him:
You can count the number of democracies a century ago on the fingers of one hand, you can count the number of democracies two centuries ago on one finger. The world has become more peaceful. That doesn’t appear to be the case, because our information about what’s wrong with the world is getting exponentially better.
So what’s next? He believes we’ll all be less biological, because humans are always evolving, and the next step of our evolution will be the internal implementation of technology. The human-robot hybrid won’t be a monstrosity of metal. It’ll just be a chip in your brain instead of an iPhone in your hand.
In the future it’ll be no more shocking to think about the weather in Hong Kong and get an answer than it is to say “Hey Google, what’s the weather in China?” and receive accurate information from a glowing rectangle with a speaker inside of it.
Kurzweil believes “medical robots will go inside our brain and connect our neo-cortex to the smart cloud” by the year 2029.
That’s a jaw-dropper, even for a technology journalist who writes about AI regularly. It’s pretty hard to imagine people walking around with their brains connected to the cloud before Justin Bieber turns 35.
But dismiss Ray Kurzweil’s predictions at your own peril: he’s seldom wrong. When it comes to technology he’s gone on the record with hundreds of predictions, which is what futurists do, and he’s correct over 90 percent of the time.
According to Kurzweil the future is incredible, but it’s also worth mentioning that his view of the present is pretty fantastic as well. He reminds us that “just a few years ago we had these devices that looked like smartphones but they didn’t work very well,” and he’s right.
Today’s smartphones know how to respond to complex voice commands like “find all the pictures from my trip to San Francisco” and “play Star Trek: The Next Generation season three, episode 16.” Today’s phones can recognize who is talking, pick out your voice even when music is playing, and execute the command without a hitch.
But just a few years back, most of us quickly gave up on using voice control regularly, because we were sick of repeating ourselves. We figured we’d wait until the technology got better. Tada! It’s better now.
The truth about AI, according to experts such as Ray Kurzweil, is that there’s no part of our lives that won’t be directly affected by it. As individuals we probably won’t notice the changes in real-time, but our dependence on machine learning will increase at exponential rates.
The law of accelerating returns is behind the artificial intelligence revolution — and Ray Kurzweil’s predictions. The very limits of what is “possible” concerning machine learning are going to require reevaluation on a daily basis going forward.
Robotic Process Automation (RPA) is on the rise. This 7-step introduction to RPA is designed to dispel some urban myths. It’s the start of a journey in which I will share some of the challenges and benefits of RPA, best practices and success stories.
1. There are no physical robots involved!
RPA is an entirely software based technology, so don’t expect to see tiny robots coming to your office anytime soon! The robots are installed on normal computers such as your laptop or workstation. At a larger scale, they can also be deployed in server environments and/or virtual machines (including in the Cloud) – making flexibility one of RPA’s greatest strengths.
2. RPA emulates human execution
The core thing a robot does is emulate human execution. That is, it performs tasks on a computer the way a human does: launching applications, surfing the web, copying and pasting information, filling in forms, you name it. In other words, imagine one of those self-playing pianos, but for your computer.
As such, you can configure a robot to execute a typical business process, or parts of it. Of course, there are some limitations to what a robot can do, both technically and cognitively, which means that not everything can be automated. At Accenture, we have common suitability and eligibility criteria to tackle these questions.
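To make the “self-playing piano” idea concrete, here is a minimal sketch of a robot as a scripted sequence of human-like UI actions. The `Robot` class, the step names, and the invoice process are all illustrative assumptions, not any RPA vendor’s actual API; a real robot would drive the application windows instead of just logging.

```python
# Minimal sketch of a software robot as a scripted sequence of UI actions.
# The Robot class and step names are illustrative, not a vendor API.
from dataclasses import dataclass, field


@dataclass
class Robot:
    name: str
    log: list = field(default_factory=list)

    def perform(self, action: str, target: str) -> None:
        # A real robot would click, type, or read the screen here;
        # this sketch only records what it would do.
        self.log.append(f"{action}: {target}")


def invoice_entry(bot: Robot, invoice: dict) -> list:
    """A typical business process expressed as human-like UI steps."""
    bot.perform("launch", "ERP client")
    bot.perform("open", "New Invoice form")
    for field_name, value in invoice.items():
        bot.perform("type", f"{field_name}={value}")
    bot.perform("click", "Submit")
    return bot.log


bot = Robot("invoice-bot")
steps = invoice_entry(bot, {"vendor": "ACME", "amount": "120.00"})
print(len(steps))  # launch + open + 2 fields + submit = 5
```

The point of the sketch is that a robot is just a precise, repeatable recording of the same steps a person would perform by hand.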
3. Robots use regular user interfaces
The way a robot manages to emulate human execution is by using the regular user interfaces on your computer (note to non-techies: the “regular user interface” is the normal application windows you have on your PC, including the buttons, fields, etc. you use to interact with them). This is incredibly powerful because it means that you do not require privileged or backdoor access to an application to be able to interact with it. The robot can interact with almost any type of application such as web apps, Java, win32 or even mainframes (and there are far more lying around than you would expect).
The downside of this is that the execution speed of the robot is limited by the application because it is subject to the same loading times and latencies as regular human users. Nevertheless, a robot is still generally much faster than its human counterpart.
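Because the robot is subject to the same loading times as a human user, robot configurations typically rely on a wait-and-retry pattern rather than fixed delays. Below is a hedged, self-contained sketch of that pattern; `is_ready` stands in for a real check such as “is this button visible yet?”.

```python
# Sketch of the wait-and-retry pattern robots use to cope with
# application loading times. `is_ready` is a stand-in for a real UI check.
import time


def wait_for(condition, timeout=5.0, poll=0.1):
    """Poll `condition()` until it returns True or `timeout` elapses."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(poll)
    return False


# Simulate a screen that only becomes ready on the third poll.
state = {"polls": 0}


def is_ready():
    state["polls"] += 1
    return state["polls"] >= 3


ok = wait_for(is_ready, timeout=2.0, poll=0.01)
print(ok)  # True: the condition became ready before the timeout
```

Waiting on a condition rather than sleeping a fixed interval is what lets the robot run as fast as the application allows, but no faster.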
4. RPA is a non-invasive technology
You should think of RPA as an extra layer on top of your existing technology stack. A robot is like any other worker in your organization, carrying out tasks using the same applications you do. There is no need to rewrite legacy software or integrate RPA with business-critical systems, since existing applications are most likely supported out of the box. As such, it is a non-invasive technology.
“An end-to-end delivery of RPA, from analysis to deployment, can be as short as two weeks”
5. There is no need for complex coding
When meeting with clients, I like to use the analogy of a student worker to describe how to configure a robot: assume they know nothing about your job and have never heard of the software you use. These are the two main things you need to build a robot: (1) the logic and flow of the task, and (2) the applications needed to carry out that task. Robot configuration is object-oriented, which means you can define the use of a certain application once and reuse that object across other robots as many times as you want, significantly reducing complexity.
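The reuse idea can be sketched in a few lines. The `AppObject` class and its methods are made-up names for illustration: an application is defined once, and several robots then share that definition.

```python
# Sketch of object-oriented robot configuration: define an application
# once, reuse it across robots. Names are illustrative assumptions.
class AppObject:
    """Encapsulates how to drive one application (defined once)."""

    def __init__(self, name, actions):
        self.name = name
        self.actions = actions  # e.g. {"login": ["open", ...]}

    def run(self, action):
        return [f"{self.name}:{step}" for step in self.actions[action]]


# Defined once...
crm = AppObject("CRM", {"login": ["open", "enter credentials", "submit"]})

# ...reused by two different robots with different goals.
payroll_bot_steps = crm.run("login") + ["CRM:export payroll report"]
onboarding_bot_steps = crm.run("login") + ["CRM:create employee record"]

print(payroll_bot_steps[:3] == onboarding_bot_steps[:3])  # True: shared object
```

Change the shared `crm` object once (say, a new login screen) and every robot that uses it is updated, which is where the complexity reduction comes from.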
6. RPA can be rapidly deployed
Depending on the complexity of the process (number of steps, decisions and application screens) and organizational maturity (existence of an RPA objects library, a strong Center of Excellence/governance model), an end-to-end delivery of RPA from analysis to deployment can be as short as two weeks. These short deployment times mean that Agile is particularly well suited to this technology.
7. Robots are managed by business users
Once approval has been reached to put a robot into production, it is transferred from the delivery team to the execution team. Typically, the latter is composed of business users who have a good understanding of what the robot does. Their job is mainly to manage the robot, make sure everything is running smoothly and take care of exceptions and incidents.
Nothing gets the Silicon Valley-obsessed media more excited than watching the online mud-wrestling of two tech titans, especially when the fight is over the hottest topic of the day: Will AI destroy our jobs or will it be a force for good?
It all started with Elon Musk declaring that “robots will be able to do everything better than us,” creating the “biggest risk that we face as a civilization.” To which Mark Zuckerberg responded that the “naysayers” drumming up “doomsday scenarios” are “pretty irresponsible.” Musk retorted on Twitter (where else?) “I’ve talked to Mark about this. His understanding of the subject is limited,” and Zuckerberg blogged on Facebook (where else?) that he is “excited about all the progress [in AI] and it’s [sic] potential to make the world better.”
And so it goes. I don’t agree with the notion that only people who are actually doing AI can comment on AI, and I’m sure neither Musk’s nor Zuckerberg’s understanding of AI is limited. Like the rest of us, however, they inject into the debate their own biases, perspectives, and ambitions. It may help anyone interested in the question of what AI will or won’t do to our jobs and civilization to study its history (you may want to start here), to look for evidence refuting what we believe in, and to seek assessments of the current and future impact of AI technologies that are based on relevant data analyzed with minimal assumptions.
Surveys, interviews and conversations with the people who actually make decisions about creating or eliminating jobs are an example of the latter category, and they often serve as the basis for market landscape descriptions and better-informed speculation from industry analysts. A recent case in point—and recommended reading—is “Automation technologies, Robotics, and AI in the Workplace, Q2 2017” from Forrester’s J.P. Gownder (his blog post on the report is here).
Gownder and his Forrester colleagues discuss in detail (33 dense pages instead of 140 characters) a dozen “automation technologies”—all based on what we now generally refer to as “artificial intelligence”—that were selected because they play a role in either eliminating or augmenting jobs, require long-term planning for maximum impact, and (most importantly, in my opinion), generate questions from Forrester’s clients. In addition to assessing the developmental stage and long-term impact on jobs and businesses, Forrester provides definitions of the AI technologies/categories they discuss, valuable simply because definitions are often sorely missing from discussions of “artificial intelligence.”
Here is my summary of the 6 AI technologies that will have the most impact on jobs—positive and negative—in the near future:
Customer Self-Service: Customer-facing physical solutions such as kiosks, interactive digital signage, and self-checkout. Improved by recent innovations (better touchscreens, faster processors, improved connectivity and sensors), it is also entering new markets and applications—a prime example being the experimental Amazon Go convenience store. Example vendors: ECRS, Four Winds Interactive, Fujitsu, Kiosks Information Systems, NCR, Olea Kiosks, Panasonic, Protouch Manufacturing, Samsung, and Stratacache.
AI-Assisted Robotic Process Automation: Automating organizational workflows and processes using software bots. Analyzing 160 AI-related Deloitte consulting projects, Tom Davenport found it to be one of the fastest growing AI applications, an observation confirmed by Forrester. Example vendors: Automation Anywhere, Blue Prism, Contextor, EdgeVerve Systems, Kofax, Kryon Systems, NICE, Pegasystems, Redwood Software, Softomotive, Symphony Ventures, UiPath, and WorkFusion.
Industrial Robots: Physical robots that execute tasks in manufacturing, agriculture, construction, and similar verticals with heavy, industrial-scale workloads. The Internet of Things, improved software and algorithms, data analytics, and advanced electronics have contributed to a wider array of form factors, ability to perform in semi- and unstructured environments, and the “intelligence” to learn and operate autonomously. A rising sub-category is collaborative robots (cobots), working safely alongside humans. Example vendors: ABB, Aethon, Blue River Technology (agriculture), Clearpath Robotics (autonomous, multiterrain), Denso, FANUC (traditional robots and cobots), Kawasaki, Kuka, Mitsubishi, Nachi Robotics, OptoFidelity, RB3D (cobots), Rethink Robotics (cobots), and Yaskawa.
Retail and Warehouse Robots: Physical robots with autonomous movement capabilities used in retailing and/or warehousing. Picking up objects is still the biggest challenge, but retailers such as Hudson’s Bay and JD.com, and of course Amazon, are investing in potential solutions. Example vendors: Amazon Kiva Systems (structured environments), Fetch Robotics (unstructured), Locus Robotics (unstructured), and Simbe Robotics (retail scanning robots for product restocking).
Virtual Assistants: Personal digital concierges that know users and their data and are discerning enough to interpret their needs and make decisions on their behalf. Developed for the consumer market just a few years ago, these assistants can be used by companies in a business-to-consumer setting (e.g., answer questions at home or augment the work of call center employees) or inside the business organization (e.g., serve as subject matter experts or support business processes). Example vendors: Amazon Alexa, Apple Siri, Dynatrace for ITSM, Google Now and Google Assistant, IBM Watson conversational interface, IBM Watson Virtual Agent, IPsoft Amelia, Microsoft Cortana, Nuance Communications Nina, and Samsung Bixby.
Sensory AI: Improving computers’ ability to identify, “understand,” and even express human sensory faculties and emotions via image and video analysis, facial recognition, speech analytics, and/or text analytics. Example vendors: Affectiva, Amazon Lex, Amazon Rekognition, Aurora Computer Services, Caffe, Clarifai, Deepomatic, Ditto, Equals 3 Lucy, FaceFirst, Google Cloud Platform APIs, HyperVerge, IBM Watson Developer Cloud, KeyLemon, Linkface, Microsoft Cognitive Services, Microsoft Cortana Intelligence Suite, ModiFace, Nuance Communications, OpenText, Revuze, Talkwalker, and Verint Systems.
The first 4 categories have been around for a while (Forrester calls them “mature”) but have recently become energized by hardware and software innovations. It is interesting to note that the key reason for the recent excitement about and fear of AI—the rapid advancement in a number of narrow AI tasks (e.g., object identification) due to improvements in deep learning techniques—has not contributed greatly to the newly-found sexiness of these 4 categories. But deep learning has been a key contributor to the nascent success of the other 2 hot categories—virtual assistants and sensory AI. My general conclusion from these observations is that the excitement (and fear) generated by specific “triumphs” of AI technologies can obscure a very fundamental fact of technology adoption throughout history, including recent history—it takes a very long time. This has important implications for our assumptions and projections regarding the question of when AI will eliminate (lots of) jobs.
It’s tough to make predictions, especially about the future (to paraphrase a very wise man), and especially about the timeframe and magnitude of job elimination. But the difficulties inherent in saying anything about the future of jobs in a dynamic, constantly evolving, and multi-faceted economy (e.g., persistent low wages may postpone the adoption of robots) have never stood in the way of people writing and/or analyzing and/or speaking for fame and fortune (or, more simply, for continued employment).
The current cycle of here-are-authoritative-numbers-on-how-many-jobs-will-be-eliminated-by-AI was started 4 years ago by two Oxford academics (47 percent of jobs in the US are at risk of being automated in the next 20 years). Forrester’s analysts could not resist the much in-demand forecasting exercise and, in what became “one of the five best-read among all reports at Forrester,” estimated that automation will destroy 17% of US jobs by 2027. But, unlike many other commentators on the subject, they also looked at the glass half full and estimated that automation will add 10% in new jobs to the US economy by 2027, for a net loss of 7%.
Whether it will be 7% or 47% or any other quantitative or qualitative speculation about the future impact of AI on employment, the debate over when and how much does not even take into consideration the question of if. Will robots really “be able to do everything better than us,” as Musk believes, and not just in 20 or 100 years, but anytime in the future? I know, it’s tough to make predictions, especially about the future of technology. What is certain is that inquiring minds steeped in the scientific ethos, such as Musk’s, should consider all possibilities and avoid making dogmatic statements, either of the AI-will-destroy-civilization type or the AI-will-cure-all-diseases kind. Why not consider the possibility that intelligent machines will not take over because they will never be human, and that the futile quest for “human-level intelligence” has actually slowed down progress in AI research?
There is no question that we will continue to see the same disruption in the job market that we have witnessed over the last sixty-plus years of computer technology creating and destroying jobs (like other technologies before it). The type of disruption that has created Facebook and Tesla. Facebook had a handful of employees in 2004 and today employs 20,000. Tesla was founded in 2003 and today has 33,000 employees. Whether AI technologies progress fast or slow, and whether AI continues to excel only at narrow tasks or succeeds in performing multi-dimensional activities, entrepreneurs like Zuckerberg and Musk (and Jack Ma and Vijay Shekhar Sharma and Masayoshi Son) will seize new business opportunities to both destroy and create jobs. Humans, unlike bots and robots (now and possibly forever), adapt to changing circumstances.
The market for robotic process automation (RPA) and Intelligent Automation continues to be obfuscated by smoke and mirrors. If you listen to our friends at Gartner, satisfaction levels are allegedly at an unprecedented 96%, while our own data suggest that only about half of deployments led to satisfactory results. So where is the market really at, what needs to be done to accelerate the journey, and, more importantly, what can be learned from the early deployments? Our recent Summit in Chicago was a welcome opportunity to check what is really on buyers’ minds.
Buyers struggle to scale RPA projects
When we asked buyers in Chicago how satisfied they are with their RPA projects, the results, shown in Exhibit 1, were astounding. The surprise was less about the low scores on whether expectations were fully achieved, and more that many are struggling to scale projects and didn’t anticipate the impact on adjacent workflows and processes. Those findings are food for thought and build on HfS’ much more detailed research on RPA satisfaction levels.
Exhibit 1: Polling question from HfS Summit in Chicago “Buyers, how satisfied are you with your RPA projects?”
Source: HfS Research 2017, n=36
One buyer succinctly articulated the implications of those concerns: “You don’t buy RPA, AI or Blockchain, you buy an outcome, yet providers’ organizational issues are pulling us back to technology.” It is here where the overselling by the supply side cuts in. Until compensation schemes and organizational issues change, we have to cope with an enormous amount of smoke and mirrors. Another buyer built on these issues in progressing on the automation journey: “AI is nothing you take off the shelf, it is a disparate set of capabilities, it is about orchestration, ecosystem, data.” We had heard similar concerns at our last Summit in New York: “We need to move beyond technology by being specific, in particular, specific about the use cases. And we have to move from bots to data.” This raises a plethora of questions, from compliance to governance.
So what does the future hold for RPA? When we asked the audience in Chicago where they see RPA in 12 months’ time, we got clear answers. As Exhibit 2 highlights, 34% reinforced the message that RPA will be all about transformation, not products. Somewhat surprisingly, 20% suggested that Google, Microsoft, or AWS will enter and disrupt the market. While we have argued that AI will see a shift toward mega ISVs, direct involvement in RPA would certainly come as a surprise to us.
Exhibit 2: Polling question from HfS Summit in Chicago “Where do you see RPA in 12 months’ time?”
Source: HfS Research 2017, n=59
To get more nuanced feedback on the issues surrounding RPA deployments, HfS ran two breakout sessions titled “The raw truth about RPA”. In those sessions we leveraged a simplified Design Thinking method that facilitates constructive feedback on any given topic, using simple statements that convey feelings – I Like, I Wish, and What If? In both sessions, we saw a surprising convergence of thoughts, experiences and ideas from a wide range of RPA stakeholders – services buyers (both new to RPA and experienced practitioners), RPA vendors, and sourcing and automation advisors. We present the synthesized RPA experiences in the same design format below.
Constructive feedback on RPA to the services industry
I like…

That RPA works! It brings us efficiency and quality, increased throughput, and helps us in managing volatility.
That RPA has a low entry barrier. It has a relatively low entry cost, allows us to test quickly and fail fast, is relatively easy to use, and has a good time to value.
That RPA solves stubborn business problems. Things we couldn’t address or bring up with IT before can now be dealt with by ourselves.
That RPA brings operations and IT together. Without change management, projects are likely to fail.
That RPA creates a new source of value to clients. It brings us (service providers) closer to clients. The branding alone is valuable – robotics sells.
That RPA documents undocumented processes! We can derive intelligence from automation and standardization, creating new levels of transparency.
I wish…

We could stop calling RPA new. The basic concepts have been around for a decade.
We had more realistic expectations for all stakeholders – and definitions that offer insight into what is reality and what is hype.
We had more RPA maturity overall. Maturity around change management and more education on the realities, use cases, examples of failures and successes.
We wouldn’t see Machine Learning and RPA as silver bullets or a cure. We need broader education around this. Machine Learning is not about having the best algorithm, it’s about the best integration into the fabric of the process.
We could get more agile. That is, experiment faster and leverage bot libraries; furthermore, that we had reusable business knowledge and central business rules engines.
We could rethink our RPA business case. Take a more strategic, longer-term view – albeit with softer criteria. We would move beyond narrow notions of cost. Fundamentally, it is not about FTE reduction; we need more clarity on the underlying costs, including attrition.
We would look at the investments as a strategic opportunity, in particular, that at least half of the cost savings would be used for transformational projects.
What if…

RPA was free? It is already commoditizing; RPA costs get reduced year-on-year.
We had common RPA standards? Furthermore, a knowledge hub for the adjacent knowledge of FTEs who get freed up?
There were people in the industry with functional expertise who were also experts in RPA? The talent that understands the impact of RPA on process chains and workflows is scarce.
We could share data across industries? The same applies for benchmarks and metrics.
We could measure less tangible benefits! Thus, the business case and communication internally would become easier.
Risk education was more advanced? We could better address data breaches and security.
We don’t need bots in the future? Because of the speed of digital transformation, we could leapfrog legacy and have native automation.
We could look at this as a continuum? RPA/RDA could be part of something bigger (reengineering), thus RPA is a wake-up call!
We truly transformed companies with legacy processes into customer-centric platforms? OneOffice in all but name!
Bottom-line: RPA needs to support outcomes through orchestration of disparate sets of technology and data
The voices of the RPA community in Chicago were loud and clear: on a basic level, RPA works and yields results. However, buyers are struggling to scale projects and often lack an understanding of how they can advance to more data-centric models. On this journey, standards that help with communication, and case studies that convey lessons learned, would go a long way. While they acknowledge that RPA could evolve into a crucial lever for progressing toward OneOffice, the buyers criticize that cost savings are not being reinvested in transformational projects. They agree that in order to support outcomes, RPA needs to be integrated with other disparate sets of technologies as well as data. To succeed with those projects, the industry urgently needs a new breed of talent that blends functional experience with a practical understanding of those innovative technologies.
“Gartner’s research chief couldn’t have opened the company’s flagship conference with a more astounding proclamation if he had claimed that next year’s event would be held on the International Space Station and Gartner was offering free rides.”
Actually, I agree with Peter – I wrote a whole book, Silicon Collar, which looks at a century of automation and how humans go through panic attacks about automation and its impact on jobs every couple of decades. Automation tends to target tasks, not complete jobs. In general, it transforms jobs rather than destroying them. And societies have “circuit breakers” which slow down rapid mass adoption of automation technology, as I wrote here.
What I would have liked to hear from Peter was “we were too pessimistic just 3 years ago,” when he said from the same podium:
“Gartner predicts one in three jobs will be converted to software, robots and smart machines by 2025…New digital businesses require less labor; machines will make sense of data faster than humans can. By 2018, digital business will require 50% fewer business process workers.”
And I would have liked him to say “we really fxxked up” about Gartner’s predictions that by next year (2018):
20 percent of business content will be authored by machines.
more than 3 million workers globally will be supervised by a “robo-boss.”
45 percent of the fastest-growing companies will have fewer employees than instances of smart machines.
In contrast, Oracle Co-CEO Mark Hurd shared with the OpenWorld audience a few hours later some of the “mean tweets,” as he called them, about some of the predictions he has been making about the cloud market.
Later, in a Q&A, I joked with Mark that as an industry analyst he would have had the luxury of hedging and assigning a probability to his predictions, and then never publicly having to audit or redact them.
When it comes to beginning the robotic process automation (RPA) journey, many organisations are preconditioned to believe that the first step is to purchase and install the software to ensure it works. Time and time again, the same outcome is achieved: the software installation appears to be successful.
However, the process of proving the software works only delays the organisation from using the technology to achieve key business goals, streamline processes and alleviate employees of mundane, rules-based work.
Rather than waste time with a “proof of concept,” organisations should begin by performing a “proof of value.” This prompts the business to identify where automation will drive the most value and how it can be configured to solve critical problems and set the business up for greater success (and competitive advantage).
Here are five steps every organization must follow to implement RPA and prove its value based on unique, individual business needs:
1. Look for people, not processes
RPA is not built to address the same business problems across varying organisations; what may be used as a payroll tool for one business can also be used to streamline HR on-boarding processes for another.
In many scenarios, organisations will first identify various processes that they’d like to automate based on the success cases of other businesses. Not only does this complicate RPA implementation, but it also masks the true value of automation, forcing it to be applied to areas that aren’t necessarily organisational pain points.
Perhaps ironically, a business should first think about its human employees when considering a new software deployment. Where are people being forced to perform non-value-adding work (e.g., structured data entry or invoice processing)?
What needs to change so employees can pursue the judgement-based roles they’re more suited for – like creative problem solving, strategising and critical thinking? These are the processes that will benefit the most from automation and will help solve the challenges the organisation is facing.
2. Ask the experts
Once businesses identify the processes to be automated, the next step is to sit down with experts who can determine the areas where configuration will be complex. The experts will work with the employees who perform the tasks that will be automated to discuss how the process works and where the tool will be deployed. The experts will also train employees to manage the tool and ensure it remains consistent in performing the task at hand, making future deployments more efficient.
3. Map the impact
Using the experts’ advice and insight, organisations should design a model that depicts the business structure and processes that will be most affected by RPA. It’s important to identify the ripple effects – starting from the internal resources, applications and systems, and ending with the organisation’s external stakeholders.
This process accounts for the time required to implement RPA and the benefits of doing so – specifically in allowing employees to pursue more value-adding roles. Having the impact mapped will also identify areas within the organisation that can use the additional resources that will be available thanks to automation.
4. Calculate the cost (and savings!)
RPA implementation doesn’t happen overnight; it’s important that businesses set expectations on the time required to both get the tool up and running and start seeing benefits and change.
However, it is possible to calculate and forecast the financial model attributed to RPA, including the cost to maintain and update the solution, as well as the savings that the transformation will bring.
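The forecast can be as simple as comparing cumulative savings against implementation and maintenance costs over the planning horizon. The sketch below shows the shape of such a model; every figure in it is a made-up assumption for illustration, not a benchmark.

```python
# Illustrative proof-of-value arithmetic. All inputs below are assumed
# example figures, not benchmarks.
def rpa_business_case(implementation_cost, annual_maintenance,
                      hours_saved_per_year, hourly_cost, years=3):
    """Return cumulative net savings over the forecast horizon."""
    annual_savings = hours_saved_per_year * hourly_cost
    total_cost = implementation_cost + annual_maintenance * years
    return annual_savings * years - total_cost


net = rpa_business_case(
    implementation_cost=50_000,   # assumed one-off delivery cost
    annual_maintenance=10_000,    # assumed licence + upkeep per year
    hours_saved_per_year=2_000,   # assumed hours of manual work removed
    hourly_cost=30,               # assumed fully loaded hourly rate
)
print(net)  # 3 * 60_000 - (50_000 + 3 * 10_000) = 100_000
```

A real business case would add items such as attrition, exception handling and the value of redeployed staff, but the structure stays the same: forecast the savings, subtract the full cost of ownership.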
5. Present the proposal
With a clear strategy, expert assistance and financial business case in hand, the final step for organisations is to present the benefits of RPA implementation so it will be prioritised over other projects.
With the monetary savings, increased employee satisfaction and scoped transformation, the benefits of automation in helping the business solve organisational challenges and drive value will be clear. Once the project is approved, organisations can move forward with implementing the solution and be prepared to see the ROI.
While it’s a simple task to prove that RPA works, identifying how it can bring transformational benefits to an organisation is the key to successful deployment. It doesn’t have to take long – the assessment of suitable processes, the design and forecasting of the implementation plan, and the production of a business case can be completed in less than six weeks.
Using these five steps will provide organisations with the strategy, sponsorship and access to resources to prove RPA’s value and ensure it gets successfully implemented.
“Cobots”, or collaborative robots, are making inroads into work previously considered too difficult to automate. But as cobots get better at performing tasks such as material handling or packaging, their designers are having to consider the effects on their colleagues of the machines’ improved ability to interact with humans.
In its early stages, this new technology has been safe if underwhelming, says David Mindell, a professor at Massachusetts Institute of Technology. Of the cobots, he says: “They don’t do much collaboration, but at least they won’t cut your head off.”
Small, light and slow moving, cobots are generally harmless — the sensors and machine-learning software that enable them to “understand” their environment have a simple override: if a human gets too close, they are programmed to shut down.
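In control terms, that override is simple. The sketch below shows its shape — one tick of a control loop that stops the machine whenever a person enters the safety zone. The `Robot` class and the 0.5 m threshold are hypothetical stand-ins, not any real controller's API:

```python
# Minimal sketch of a cobot proximity override. The Robot class and the
# safety distance are hypothetical stand-ins, not a real controller API.

SAFETY_DISTANCE_M = 0.5  # assumed safety-zone radius

class Robot:
    def __init__(self):
        self.state = "running"

    def stop(self):
        self.state = "stopped"

    def resume(self):
        self.state = "running"

def control_step(robot, human_distance_m):
    """One control-loop tick: shut down if a human is too close."""
    if human_distance_m < SAFETY_DISTANCE_M:
        robot.stop()
    else:
        robot.resume()
    return robot.state
```

A production controller would fuse several sensors and ramp speed down before a full stop, but the contract is the same: proximity wins over productivity.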
The first job has been to design the software models to allow robots to operate in the human world, says Manuela Veloso, head of machine learning at Carnegie Mellon’s School of Computer Science. “It’s very important to be able to envision a mobile creature moving around in our space,” she says. For instance, getting machines to work alongside people will require an understanding of “safety zones” of the body: “We’re trying to model a person. You don’t want to hit an eye — an elbow is less important.”
As the software becomes more sophisticated, it promises more flexible machines that can be released from their cages. “We’ve got people doing jobs today because the regular robots can’t do it,” says Jim Lawton, head of product and marketing at Rethink Robotics, a Boston-based maker of cobots. These often involve repetitive actions that strain human limbs, are mind-numbingly dull and consign workers to jobs with no chance of career advancement, he says.
Mindell, author of Our Robots, Ourselves, a 2015 book about human-robot interaction, agrees there is much to be gained in the way of worker wellbeing: “If your work is truly about to be augmented, or made less dangerous or less straining, these are good things.” But he says that limits in both the technology and imagination on how to apply it have made this more promise than reality.
Designing complex interactions between robots and people will take a change in mindset, he says, adding that the history of automation has largely been about treating humans like robots, to fit into automated processes. “The computer science world still has a long way to go before it has a clue about how to deal with people,” he says.
At a simple level, makers of cobots are working to reduce the sense of weirdness for people working alongside machines whose level of intelligence they find hard to judge. Rethink, for instance, experimented with putting smiling mouths on its robots to make them seem more “human”. The result was the opposite, says Lawton: people thought the machines were smirking at them, and found them “arrogant and condescending”. Moving into the “uncanny valley” where robots start to copy humans too closely “spooked people”, he says.
Veloso says there are hurdles that will have to be overcome to improve the human experience of working with the machines. One is that the machines have to be more understandable. “The more humans infer what a robot will do next, the safer it will be,” she says.
Rethink’s answer has been to give its robots “eyes” (an image on a tablet computer) that indicate the direction the machine is about to move in — a simple way to prepare people around them that they are about to do something, says Lawton.
Another key is to design a form of robot-human symbiosis in which each helps the other achieve its goal, says Veloso. That will mean teaching people to respond to requests from the robots, or to anticipate their needs, as much as the other way around. As interactions like this become more subtle and machines take over more work alongside people, the long-term impact on the wellbeing of human workers is hard to predict. Against the obvious benefits of taking dangerous or tedious work away from people, there may be unexpected side-effects. “When people invented keyboards, they weren’t imagining carpal tunnel syndrome,” Veloso points out.
As more automation creeps in, there may be subtle but far-reaching effects on the way work is designed. There is a fear that the iterative process improvements that are a product of lean manufacturing — constantly learning and implementing better ways of working — may be threatened, says Lawton. If existing work processes are automated, the result could be an ossification that prevents this steady improvement.
Like much technology whose benefits are clear in the short term, even if their long-term effects on human wellbeing are hard to judge, the advance of the cobots is unlikely to be slowed. People are likely to take to their new robot colleagues as enthusiastically as they took to their smartphones, says Mindell. “People have their fears — in some ways, they are legitimate fears,” he says. “At the same time, they are addicted to their technology.”
‘Algorithms took our jobs’
Tom Gordon was 45 when his lucrative career as an oil trader suddenly faced a new threat. Electronic trading, which originally had been introduced to expand trading capacity overnight, was now operating head-to-head with Gordon and his colleagues on the floor of the exchange during the day.
Gordon says he used to handle between 500 and 750 trades a day. In his nearly 25 years as a trader he recalls recording only two months of losses. But even the high volumes that a successful trader like Gordon could handle were quickly overshadowed by the volumes electronic systems were capable of processing.
For Gordon, working alongside the electronic market was like being hit by a truck. “I saw the transition was coming and knew [traders] were going to get run over,” he says. He eventually left and retrained as a social worker.
He was wise to do so, because a few years later, in 2016, CME Group, which owns the New York Mercantile Exchange (Nymex), closed the last of its remaining commodity-trading pits.
Gordon says some of his former colleagues have struggled to cope in their new lives. “Some have done quite well, but for many of the people it really broke their lives and their spirit.”
Losing a job to a machine or algorithm carries a unique psychological burden, says Marty Nemko, a psychologist and career counsellor.
No training exists that can help a human match the speed and efficiency of artificial intelligence. “There is an inevitability of [one’s] inferior ability that accrues,” Nemko says.
Tim Leberecht, a consultant on business leadership, agrees: “If we lose our jobs due to automation and can’t get back into the workforce, then there is this huge void of purpose and meaning.”
“The big issue with this fourth industrial revolution is that we don’t have the social institutions that are facilitating and enabling the transition,” says Ravin Jesuthasan, managing director at Willis Towers Watson, and leader of the consulting group’s research area, “Future of Work”.
Research on the threat of automation paints a complicated picture. A 2016 OECD report found an average of 9 per cent of all jobs across the 21 countries the research covered could be automated, given current technology. A report by consultants McKinsey puts the global figure at less than 5 per cent.
Many researchers suggest the more nuanced effect of this transition will be on the handful of tasks across all sectors that are routine and repetitive.
According to another McKinsey report, more than 70 per cent of tasks performed by workers in the food service and hospitality sector could be carried out by machines. In manufacturing, nearly 60 per cent of tasks in jobs such as welding and maintaining equipment are at risk.
Higher-paying jobs are not immune from the disruption. McKinsey found that up to 50 per cent of tasks in the financial services industry could be automated, as could about a third of jobs in healthcare.
Jesuthasan says this refocusing of tasks can give people the space to do more meaningful work. “Leaving behind all of those routine things [creates] a huge emphasis on creativity and empathy and care,” he says.
After witnessing his original job as a trader vanish, it is perhaps no surprise that Gordon has found himself engrossed in work requiring these human characteristics. “I want to do my part,” he says. “Will I make a difference? I don’t know, but I’m going to give it a shot.”
Future robots will work side by side with humans, just as they do today. Ronny Hartmann/AFP/Getty Images
The technologies driving artificial intelligence are expanding exponentially, leading many technology experts and futurists to predict machines will soon be doing many of the jobs that humans do today. Some even predict humans could lose control over their future.
While we agree about the seismic changes afoot, we don’t believe this is the right way to think about it. Approaching the challenge this way assumes society has to be passive about how tomorrow’s technologies are designed and implemented. The truth is there is no absolute law that determines the shape and consequences of innovation. We can all influence where it takes us.
Thus, the question society should be asking is: “How can we direct the development of future technologies so that robots complement rather than replace us?”
The Japanese have an apt phrase for this: “giving wisdom to the machines.” And the wisdom comes from workers and an integrated approach to technology design, as our research shows.
Lessons from history
There is no question coming technologies like AI will eliminate some jobs, as did those of the past.
But new technologies will also create new jobs. After steam engines replaced water wheels as the source of power in manufacturing in the 1800s, the sector expanded sevenfold, from 1.2 million jobs in 1830 to 8.3 million by 1910. Similarly, many feared that the ATM’s emergence in the early 1970s would replace bank tellers. Yet even though the machines are now ubiquitous, there are actually more tellers today doing a wider variety of customer service tasks.
So trying to predict whether a new wave of technologies will create more jobs than it will destroy is not worth the effort, and even the experts are split 50-50.
It’s particularly pointless given that perhaps fewer than 5 percent of current occupations are likely to disappear entirely in the next decade, according to a detailed study by McKinsey.
Instead, let’s focus on the changes they’ll make to how people work.
The invention of the automated teller machine didn’t kill off the bank teller. It simply altered what tasks the human teller performs. Justin Sullivan/Getty Images
It’s about tasks, not jobs
To understand why, it’s helpful to think of a job as made up of a collection of tasks that can be carried out in different ways when supported by new technologies.
And in turn, the tasks performed by different workers – colleagues, managers and many others – can also be rearranged in ways that make the best use of technologies to get the work accomplished. Job design specialists call these “work systems.”
One of the McKinsey study’s key findings was that about a third of the tasks performed in 60 percent of today’s jobs are likely to be eliminated or altered significantly by coming technologies. In other words, the vast majority of our jobs will still be there, but what we do on a daily basis will change drastically.
To date, robotics and other digital technologies have had their biggest effects on mostly routine tasks like spell-checking and those that are dangerous, dirty or hard, such as lifting heavy tires onto a wheel on an assembly line. Advances in AI and machine learning will significantly expand the array of tasks and occupations affected.
Managing that expansion well takes an integrated strategy, and that strategy starts with helping define the problems humans want new technologies to solve. We shouldn’t be leaving this solely to their inventors.
Fortunately, some engineers and AI experts are recognizing that the end users of a new technology must have a central role in guiding its design to specify which problems they’re trying to solve.
The second step is ensuring that these technologies are designed alongside the work systems with which they will be paired. A so-called simultaneous design process produces better results for both the companies and their workers compared with a sequential strategy – typical today – which involves designing a technology and only later considering the impact on a workforce.
An excellent illustration of simultaneous design is how Toyota handled the introduction of robotics onto its assembly lines in the 1980s. Unlike rivals such as General Motors that followed a sequential strategy, the Japanese automaker redesigned its work systems at the same time, which allowed it to get the most out of the new technologies and its employees. Importantly, Toyota solicited ideas for improving operations directly from workers.
In doing so, Toyota achieved higher productivity and quality in its plants than competitors like GM that invested heavily in stand-alone automation before they began to alter work systems.
Similarly, businesses that tweaked their work systems in concert with investing in IT in the 1990s outperformed those that didn’t. And health care companies like Kaiser Permanente and others learned the same lesson as they introduced electronic medical records over the past decade.
Each example demonstrates that the introduction of a new technology does more than just eliminate jobs. If managed well, it can change how work is done in ways that can both increase productivity and the level of service by augmenting the tasks humans do.
But the process doesn’t end there. Companies need to invest in continuous training so their workers are ready to help influence, use and adapt to technological changes. That’s the third step in getting the most out of new technologies.
And it needs to begin before they are introduced. The important part of this is that workers need to learn what some are calling “hybrid” skills: a combination of technical knowledge of the new technology with aptitudes for communications and problem-solving.
Companies whose workers have these skills will have the best chance of getting the biggest return on their technology investments. It is not surprising that these hybrid skills are now in high and growing demand and command good salaries.
None of this is to deny that some jobs will be eliminated and some workers will be displaced. So the final element of an integrated strategy must be to help those displaced find new jobs and compensate those unable to do so for the losses endured. Ford and the United Auto Workers, for example, offered generous early retirement benefits and cash severance payments in addition to retraining assistance when the company downsized from 2007 to 2010.
Examples like this will need to become the norm in the years ahead. Failure to treat displaced workers equitably will only widen the gaps between winners and losers in the future economy that are now already all too apparent.
In sum, companies that engage their workforce when they design and implement new technologies will be best-positioned to manage the coming AI revolution. By respecting the fact that today’s workers, like those before them, understand their jobs better than anyone and the many tasks they entail, they will be better able to “give wisdom to the machines.”
Robotic automation is nothing new. The science of robotics is often perceived as a very recent development, but the truth is that the history of artificial intelligence stretches back centuries, into ancient times.
Ideas of artificial people date back to around 2000 BC, appearing in early legends such as that of Cadmus. Chinese legend reaches back to the 10th century BC and Yan Shi, who is said to have built a life-size mechanical man.
Many other such stories can be found in Greek mythology, Christian legend and Indian history. One of the most famous Christian legends even includes plans for the construction of an entire android.
First Constructions of Artificial Intelligence
The first real concepts of AI are recorded from the 4th century BC onward. The Greek mathematician Archytas of Tarentum constructed “The Pigeon”, a mechanical bird driven by steam. The ancient Greek philosopher Aristotle speculated that automation could one day end slavery by bringing human equality. Another influential figure was Al-Jazari, who lived in the 12th century and constructed a variety of machines, from kitchen appliances to water-powered automata.
Even Leonardo da Vinci (1452-1519) engaged with the idea, producing detailed designs for a humanoid robot. Equally interesting were the concepts of the Japanese engineer Hisashige Tanaka, who developed mechanical toys for the purpose of serving tea.
Over the centuries robotic concepts gained popularity and came to the attention of national leaders like Frederick the Great and Napoleon Bonaparte.
Although there were already some remarkably functional examples of automation before it, the Industrial Revolution and the progress of engineering and science gave robotics a major boost in the years to come. Charles Babbage (1791-1871) was an influential figure who worked to develop the foundations of computer science in the early 19th century. In the same century, factories began to use automated machinery to improve efficiency, machine utilization and production volumes.
The 20th century to Today: The Modern Era of Robotics
Isaac Asimov came up with the concept of the “Three Laws of Robotics”. He wrote science fiction and robot stories that inspired other writers and filmmakers; his ideas and concepts influenced the 2004 film I, Robot, starring Will Smith.
In the 1950s George Devol designed a robotic device that was used in General Motors plants in the United States. Other companies followed, successively implementing new technological advancements into their production and assembly lines. People began specializing in robotics and turning ideas into reality. Robotic inventions became more and more popular, even making an appearance on The Tonight Show in 1966.
Over the decades, the robotics industry has grown massively, and AI has found its place in many industries, such as sales, retail, engineering, finance, construction, e-commerce and real estate. Companies spend billions on research and development to use AI to their advantage and to keep pace with trends and developments.
If you want to find out about the latest technological trends in robotic automation, then check out Thoughtonomy.
The AI and robotic automation industry is expected to grow exponentially in the years ahead, and a large number of jobs and industries will potentially be affected by it. However, studies show that more robots don’t necessarily lead to a reduction in the number of jobs. In fact, the opposite effect has been observed in some instances, because automation opens many doors for human creativity in other jobs. One such instance is online logo maker Logojoy and the way it has taken graphic design and made it more accessible.
Cheaper, more capable, and more flexible technologies are accelerating the growth of fully automated production facilities. The key challenge for companies will be deciding how best to harness their power.
At one Fanuc plant in Oshino, Japan, industrial robots produce industrial robots, supervised by a staff of only four workers per shift. In a Philips plant producing electric razors in the Netherlands, robots outnumber the nine production workers by more than 14 to 1. Camera maker Canon began phasing out human labor at several of its factories in 2013.
This “lights out” production concept—where manufacturing activities and material flows are handled entirely automatically—is becoming an increasingly common attribute of modern manufacturing. In part, the new wave of automation will be driven by the same things that first brought robotics and automation into the workplace: to free human workers from dirty, dull, or dangerous jobs; to improve quality by eliminating errors and reducing variability; and to cut manufacturing costs by replacing increasingly expensive people with ever-cheaper machines. Today’s most advanced automation systems have additional capabilities, however, enabling their use in environments that have not been suitable for automation up to now and allowing the capture of entirely new sources of value in manufacturing.
Falling robot prices
As robot production has increased, costs have gone down. Over the past 30 years, the average robot price has fallen by half in real terms, and even further relative to labor costs (Exhibit 1). As demand from emerging economies encourages the production of robots to shift to lower-cost regions, they are likely to become cheaper still.
People with the skills required to design, install, operate, and maintain robotic production systems are becoming more widely available, too. Robotics engineers were once rare and expensive specialists. Today, these subjects are widely taught in schools and colleges around the world, either in dedicated courses or as part of more general education on manufacturing technologies or engineering design for manufacture. The availability of software, such as simulation packages and offline programming systems that can test robotic applications, has reduced engineering time and risk. It’s also made the task of programming robots easier and cheaper.
Ease of integration
Advances in computing power, software-development techniques, and networking technologies have made assembling, installing, and maintaining robots faster and less costly than before. For example, while sensors and actuators once had to be individually connected to robot controllers with dedicated wiring through terminal racks, connectors, and junction boxes, they now use plug-and-play technologies in which components can be connected using simpler network wiring. The components will identify themselves automatically to the control system, greatly reducing setup time. These sensors and actuators can also monitor themselves and report their status to the control system, to aid process control and collect data for maintenance, and for continuous improvement and troubleshooting purposes. Other standards and network technologies make it similarly straightforward to link robots to wider production systems.
Robots are getting smarter, too. Where early robots blindly followed the same path, and later iterations used lasers or vision systems to detect the orientation of parts and materials, the latest generations of robots can integrate information from multiple sensors and adapt their movements in real time. This allows them, for example, to use force feedback to mimic the skill of a craftsman in grinding, deburring, or polishing applications. They can also make use of more powerful computer technology and big data–style analysis. For instance, they can use spectral analysis to check the quality of a weld as it is being made, dramatically reducing the amount of postmanufacture inspection required.
Robots take on new roles
Today, these factors are helping to boost robot adoption in the kinds of application they already excel at: repetitive, high-volume production activities. As the cost and complexity of automating tasks with robots go down, it is likely that the kinds of companies already using robots will use even more of them. In the next five to ten years, however, we expect a more fundamental change in the kinds of tasks for which robots become both technically and economically viable (Exhibit 2). Here are some examples.
The inherent flexibility of a device that can be programmed quickly and easily will greatly reduce the number of times a robot needs to repeat a given task to justify the cost of buying and commissioning it. This will lower the threshold of volume and make robots an economical choice for niche tasks, where annual volumes are measured in the tens or hundreds rather than in the thousands or hundreds of thousands. It will also make them viable for companies working with small batch sizes and significant product variety. For example, flex track products now used in aerospace can “crawl” on a fuselage using vision to direct their work. The cost savings offered by this kind of low-volume automation will benefit many different kinds of organizations: small companies will be able to access robot technology for the first time, and larger ones could increase the variety of their product offerings.
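The economics above reduce to a simple break-even calculation: divide the cost of buying and commissioning the robot by the saving it delivers per unit. The figures in this sketch are invented; the point is that a cheaper commissioning cycle shrinks the volume at which the robot pays for itself.

```python
import math

def break_even_units(commissioning_cost, manual_cost_per_unit,
                     robot_cost_per_unit):
    """Units a robot must produce before its purchase cost is recovered."""
    saving_per_unit = manual_cost_per_unit - robot_cost_per_unit
    if saving_per_unit <= 0:
        return None  # the robot never pays back
    return math.ceil(commissioning_cost / saving_per_unit)

# As commissioning gets cheaper, niche volumes become viable
# (all costs are hypothetical, in currency units per unit produced):
print(break_even_units(50_000, 4.00, 1.50))  # 20000 units
print(break_even_units(10_000, 4.00, 1.50))  # 4000 units
```

At a 20,000-unit threshold the robot only suits mass production; at 4,000 it becomes viable for exactly the small-batch, high-variety work the paragraph describes.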
Emerging technologies are likely to simplify robot programming even further. While it is already common to teach robots by leading them through a series of movements, for example, rapidly improving voice-recognition technology means it may soon be possible to give them verbal instructions, too.
Highly variable tasks
Advances in artificial intelligence and sensor technologies will allow robots to cope with a far greater degree of task-to-task variability. The ability to adapt their actions in response to changes in their environment will create opportunities for automation in areas such as the processing of agricultural products, where there is significant part-to-part variability. In Japan, trials have already demonstrated that robots can cut the time required to harvest strawberries by up to 40 percent, using a stereoscopic imaging system to identify the location of fruit and evaluate its ripeness.
These same capabilities will also drive quality improvements in all sectors. Robots will be able to compensate for potential quality issues during manufacturing. Examples here include altering the force used to assemble two parts based on the dimensional differences between them, or selecting and combining different sized components to achieve the right final dimensions.
Robot-generated data, and the advanced analysis techniques to make better use of them, will also be useful in understanding the underlying drivers of quality. If higher-than-normal torque requirements during assembly turn out to be associated with premature product failures in the field, for example, manufacturing processes can be adapted to detect and fix such issues during production.
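A minimal version of that torque check might look like the sketch below: flag any assembly whose fastening torque sits well above the batch norm so it can be inspected before it ships. The readings and the two-sigma threshold are assumptions for illustration, not a recommended control limit.

```python
# Sketch: flag assemblies with anomalously high fastening torque.
# Readings and threshold are hypothetical illustrations.

from statistics import mean, stdev

def flag_high_torque(torques_nm, sigmas=2.0):
    """Return indices of assemblies whose torque exceeds mean + sigmas*stdev."""
    mu, sd = mean(torques_nm), stdev(torques_nm)
    threshold = mu + sigmas * sd
    return [i for i, t in enumerate(torques_nm) if t > threshold]

batch = [10.1, 10.0, 9.9, 10.2, 10.1, 14.5]  # Nm, hypothetical readings
print(flag_high_torque(batch))  # [5]
```

The real value, as the paragraph notes, comes from joining these flags to field-failure data, so the threshold itself can be learned rather than assumed.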
While today’s general-purpose robots can control their movement to within 0.10 millimeters, some current configurations of robots have repeatable accuracy of 0.02 millimeters. Future generations are likely to offer even higher levels of precision. Such capabilities will allow them to participate in increasingly delicate tasks, such as threading needles or assembling highly sophisticated electronic devices. Robots are also becoming better coordinated, with the availability of controllers that can simultaneously drive dozens of axes, allowing multiple robots to work together on the same task.
Finally, advanced sensor technologies, and the computer power needed to analyze the data from those sensors, will allow robots to take on tasks like cutting gemstones that previously required highly skilled craftspeople. The same technologies may even permit activities that cannot be done at all today: for example, adjusting the thickness or composition of coatings in real time as they are applied to compensate for deviations in the underlying material, or “painting” electronic circuits on the surface of structures.
Working alongside people
Companies will also have far more freedom to decide which tasks to automate with robots and which to conduct manually. Advanced safety systems mean robots can take up new positions next to their human colleagues. If sensors indicate the risk of a collision with an operator, the robot will automatically slow down or alter its path to avoid it. This technology permits the use of robots for individual tasks on otherwise manual assembly lines. And the removal of safety fences and interlocks mean lower costs—a boon for smaller companies. The ability to put robots and people side by side and to reallocate tasks between them also helps productivity, since it allows companies to rebalance production lines as demand fluctuates.
Robots that can operate safely in proximity to people will also pave the way for applications away from the tightly controlled environment of the factory floor. Internet retailers and logistics companies are already adopting forms of robotic automation in their warehouses. Imagine the productivity benefits available to a parcel courier, though, if an onboard robot could presort packages in the delivery vehicle between drops.
Agile production systems
Automation systems are becoming increasingly flexible and intelligent, adapting their behavior automatically to maximize output or minimize cost per unit. Expert systems used in beverage filling and packing lines can automatically adjust the speed of the whole production line to suit whichever activity is the critical constraint for a given batch. In automotive production, expert systems can automatically make tiny adjustments in line speed to improve the overall balance of individual lines and maximize the effectiveness of the whole manufacturing system.
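The pacing rule those expert systems apply can be stated in one line: the critical constraint sets the speed of the whole line. A toy sketch, with hypothetical station rates:

```python
# Toy sketch of constraint-based line pacing. Station names and rates
# (units per minute) are hypothetical.

def line_speed(station_rates):
    """The slowest activity for the current batch paces the entire line."""
    constraint = min(station_rates, key=station_rates.get)
    return constraint, station_rates[constraint]

rates = {"filling": 120, "capping": 110, "labelling": 95, "packing": 130}
print(line_speed(rates))  # ('labelling', 95)
```

A production expert system re-evaluates this continuously as batches change and makes the tiny speed adjustments the paragraph describes, rather than computing a single static minimum.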
While the vast majority of robots in use today still operate in high-speed, high-volume production applications, the most advanced systems can make adjustments on the fly, switching seamlessly between product types without the need to stop the line to change programs or reconfigure tooling. Many current and emerging production technologies, from computerized-numerical-control (CNC) cutting to 3-D printing, allow component geometry to be adjusted without any need for tool changes, making it possible to produce in batch sizes of one. One manufacturer of industrial components, for example, uses real-time communication from radio-frequency identification (RFID) tags to adjust components’ shapes to suit the requirements of different models.
The replacement of fixed conveyor systems with automated guided vehicles (AGVs) even lets plants reconfigure the flow of products and components seamlessly between different workstations, allowing manufacturing sequences with entirely different process steps to be completed in a fully automated fashion. This kind of flexibility delivers a host of benefits: facilitating shorter lead times and a tighter link between supply and demand, accelerating new product introduction, and simplifying the manufacture of highly customized products.
Making the right automation decisions
With so much technological potential at their fingertips, how do companies decide on the best automation strategy? It can be all too easy to get carried away with automation for its own sake, but the result of this approach is almost always projects that cost too much, take too long to implement, and fail to deliver against their business objectives.
A successful automation strategy requires good decisions on multiple levels. Companies must choose which activities to automate, what level of automation to use (from simple programmable-logic controllers to highly sophisticated robots guided by sensors and smart adaptive algorithms), and which technologies to adopt. At each of these levels, companies should ensure that their plans meet the following criteria.
Automation strategy must align with business and operations strategy. As we have noted above, automation can achieve four key objectives: improving worker safety, reducing costs, improving quality, and increasing flexibility. Done well, automation may deliver improvements in all these areas, but the balance of benefits may vary with different technologies and approaches. The right balance for any organization will depend on its overall operations strategy and its business goals.
Automation programs must start with a clear articulation of the problem, including why automation is the right solution. Every project should be able to identify where and how automation can offer improvements and show how these improvements link to the company's overall strategy.
Automation must show a clear return on investment. Companies, especially large ones, should take care not to overspecify, overcomplicate, or overspend on their automation investments. Choosing the right level of complexity to meet current and foreseeable future needs requires a deep understanding of the organization’s processes and manufacturing systems.
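A first-order return check can be as simple as a payback calculation: capital outlay against annual savings net of recurring costs. The figures below are purely illustrative, not benchmarks, and a real business case would also discount cash flows.

```python
# Back-of-envelope payback sketch for an automation investment: capital
# cost against annual savings net of recurring costs. All figures are
# illustrative; a real case would use discounted cash flows (NPV/IRR).

def payback_years(capex: float, annual_savings: float, annual_opex: float) -> float:
    """Simple (undiscounted) payback period in years."""
    net_annual = annual_savings - annual_opex
    if net_annual <= 0:
        raise ValueError("investment never pays back")
    return capex / net_annual

# Hypothetical robot cell: $500k installed, $220k/yr savings, $20k/yr upkeep
print(payback_years(capex=500_000, annual_savings=220_000, annual_opex=20_000))  # 2.5
```

Overspecifying the system raises capex and opex, which is exactly why the payback test punishes complexity that current and foreseeable needs do not justify.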
Platforming and integration
Companies face increasing pressure to maximize the return on their capital investments and to reduce the time required to take new products from design to full-scale production. Building automation systems that are suitable only for a single line of products runs counter to both those aims, requiring repeated, lengthy, and expensive cycles of equipment design, procurement, and commissioning. A better approach is the use of production systems, cells, lines, and factories that can be easily modified and adapted.
Just as platforming and modularization strategies have simplified and reduced the cost of managing complex product portfolios, so a platform approach will become increasingly important for manufacturers seeking to maximize flexibility and economies of scale in their automation strategies.
Process platforms, such as a robot arm equipped with a weld gun, power supply, and control electronics, can be standardized, applied, and reused in multiple applications, simplifying programming, maintenance, and product support.
Automation systems will also need to be highly integrated into the organization’s other systems. That integration starts with communication between machines on the factory floor, something that is made more straightforward by modern industrial-networking technologies. But it should also extend into the wider organization. Direct integration with computer-aided design, computer-integrated engineering, and enterprise-resource-planning systems will accelerate the design and deployment of new manufacturing configurations and allow flexible systems to respond in near real time to changes in demand or material availability. Data on process variables and manufacturing performance flowing the other way will be recorded for quality-assurance purposes and used to inform design improvements and future product generations.
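The "data flowing the other way" can be pictured as each station emitting a quality-assurance record, keyed to the unit's serial number, that upstream systems store and analyze. The sketch below assumes a hypothetical record format and field names.

```python
# Illustrative sketch of process data flowing back for quality assurance:
# a station records its process variables against the unit's serial
# number. The record format and field names are hypothetical.
import json
import datetime

def qa_record(serial: str, station: str, variables: dict) -> str:
    """Serialize one station's process variables as a timestamped QA record."""
    record = {
        "serial": serial,
        "station": station,
        "recorded_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "process_variables": variables,
    }
    return json.dumps(record)

print(qa_record("SN-0042", "weld_cell_3", {"current_a": 180.5, "dwell_ms": 250}))
```

In practice this traffic would ride on an industrial networking stack (for example OPC UA) rather than ad hoc JSON, but the shape of the data flow is the same.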
Integration will also extend beyond the walls of the plant. Companies won’t just require close collaboration and seamless exchange of information with customers and suppliers; they will also need to build such relationships with the manufacturers of processing equipment, who will increasingly hold much of the know-how and intellectual property required to make automation systems perform optimally. The technology required to permit this integration is becoming increasingly accessible, thanks to the availability of open architectures and networking protocols, but changes in culture, management processes, and mind-sets will be needed in order to balance the costs, benefits, and risks.
Cheaper, smarter, and more adaptable automation systems are already transforming manufacturing in a host of different ways. While the technology will become more straightforward to implement, the business decisions will not. To capture the full value of the opportunities presented by these new systems, companies will need to take a holistic and systematic approach, aligning their automation strategy closely with the current and future needs of the business.