A new survey shows the dramatic impact artificial intelligence technologies will make on business processes
With the announcement that the UK government is spending an estimated £327 million on research into robotics and autonomous systems, and as businesses begin to realise the economic benefits of using artificial intelligence, the stage is set for Intelligent Process Automation (IPA) technologies to have a real impact on the future of work.
Senior executives across multiple industries think new software ‘robots’ – utilising attributes such as machine learning, artificial intelligence, and effective use of big data – are about to unlock significant value within the next three to five years, according to a new study released by IT consulting firm Cognizant.
The 537 senior business and technology decision-makers surveyed believe that the benefits of intelligent process automation – and of mining the resulting big data with automation-enabled analytics – will bring money and meaning to their businesses: faster processing with fewer errors, unlimited scalability and lower cost of ownership, along with the ability to make more timely business decisions.
Respondents estimate they are already automating, on average, 25–40% of their workflow today – mostly workflows that follow rote procedures and manual inputs – paving the way for next-generation IPA technologies to drive greater cost savings and efficiency, as well as richer business insights, when applied to more complex workflows.
About half of the respondents saw automation as significantly improving their business processes within three to five years. However, most are still in the early stages of using process automation – the study concludes there is a long tail of process systems yet to be automated, as machine learning and artificial intelligence enable a new generation of knowledge ‘robots’ that can mimic human actions whilst interacting with multiple applications.
‘The future of process work includes connecting skilled people to increasingly powerful technologies such as autonomic computing – including artificial intelligence, machine learning and deep learning – that can increase savings, enhance insights, and accelerate business. This shift is playing out in just about every industry,’ said Gajen Kandiah, executive VP, Business Process Services, Cognizant. ‘Our new study findings show that this trend will only accelerate over coming years as business leaders seek agility, better customer understanding, and cost savings.’
And businesses are taking a new approach to their organisational and business process models using automation as a key delivery model to digitise and analyse.
Charles Sutherland, Executive Vice President of Research at HfS Research, who has been closely researching developments in technology and process automation, said that by implementing software robots, service providers can ensure that work is done around the clock, eliminate human error, and ensure scalability as they save costs and drive revenue.
‘Process automation also allows clients and service providers to share benefits including enhanced compliance, reduced risk and improved job satisfaction of staff,’ said Sutherland.
In the third of our series on the important Intelligent Automation Skills of the future Digital Worker, we’ll focus on Planning and Sequencing – a complex yet crucial skill set needed for any software robot to be productive, efficient, and effective in the Digitally Transformed workplace.
Planning and Sequencing, which is sometimes simply referred to as “Cognitive Planning”, is classed as one of the “executive functions” of the brain. It is the skill that is involved in the formulation, evaluation, and selection of a sequence of thoughts and actions to achieve a desired goal. As humans, we do this without conscious thought every day. Even the simplest task you perform – such as making a cup of coffee – requires a carefully orchestrated sequence of tasks to achieve the end goal.
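The coffee-making example can be sketched in code: if you express the task as a dependency graph, planning becomes the job of finding a valid execution order. This is a generic illustration using Python’s standard-library `graphlib`, not any vendor’s planning engine, and the task names are invented for the example:

```python
from graphlib import TopologicalSorter

# Hypothetical dependency graph for "make a cup of coffee":
# each task maps to the set of tasks that must finish first.
coffee_plan = {
    "boil_water": set(),
    "get_mug": set(),
    "add_coffee": {"get_mug"},
    "pour_water": {"boil_water", "add_coffee"},
    "stir": {"pour_water"},
}

# Planning = turning the dependency graph into a valid execution sequence.
order = list(TopologicalSorter(coffee_plan).static_order())
print(order)
```

The point of the sketch is that the sequence itself is derived, not hand-written: add or reorder dependencies and the plan adapts, which is the essence of the planning-and-sequencing skill described above.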
Digital Workers must be able to work within the same operational parameters as their human counterparts, which means that an Intelligent Digital Workforce should be able to cope with complex orchestration and sequencing of its tasks, with as little human intervention as possible.
Careful Planning = High Performance and Reliability
To automate a process effectively, the process needs to be captured and optimized for execution by a Digital Worker. Recording and automating broken and inefficient processes – the “Digital Duct Tape” approach – will not generate the results you’re looking for. Nor will it move you closer to a truly digitally transformed process, because the result will be sub-optimal and guaranteed to require unnecessary human intervention.
To achieve true digital transformation of a process, you must understand that process in its current form – including its inefficiencies – and attempt to repair them before automating. Yes, this takes longer than simply hitting a record button (as with up-in-no-time Desktop Automation technologies), but the difference is stark when you look at the ROI delivered by some of our customers over long periods of time. A Blue Prism Digital Worker is designed to require as close to zero intervention as possible.
Take our customer Npower as an example. Two million hours of work are currently delivered back to the business annually by 400 robots – and those robots are managed by just two people. This efficiency and reliability is simply not possible without a methodical approach to planning.
Of course, now there are technological advances, like Process Mining – a feature brought to the RPA industry over a year ago, through a first-of-its-kind Technology Partnership between Blue Prism and our partner MINIT – that are allowing us to simplify this process capture and planning stage.
Process Mining allows you to take unstructured data from logs and other “dark data” and build a picture of your process as it is today. This gives you some valuable indications of where you may have inefficiencies – and therefore opportunities to automate.
However, Process Mining alone is not enough. Dark data will not give you insights into the human-driven decision-making or intuition that went into the process. Nor will it provide you with the “WHY?” behind highlighted inefficiencies. For that, you need to dig deeper, and Blue Prism is diligently researching ways to bring together dark data and human insight in a way that addresses both aspects of this problem. Our partner DXC is beginning to tackle this problem in their APA platform by combining customised process mining with human process annotation. Armed with this level of insight and process transparency, business leaders can automate processes in their fully optimised forms.
The Importance of Orders of Operation
While it’s important for you to properly plan the process and program your Digital Worker, all that planning would be futile if proper sequencing and orchestration weren’t in place.
Any process will involve a sequence of steps that are likely to have a very critical order of execution, but will also involve many decisions and diverging paths along the way. Many of these will involve re-usable and repeated steps. The Blue Prism platform was designed from the ground up to encourage modularity and reusability. The Blue Prism designer and release manager enable a “build once, use many times” approach to process automation. Once you have programmed a resilient integration with an application, it can easily be reused across many business processes. Everything is designed to be parameterized, to maximise reusability and minimise process-specific customization.
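As a rough, vendor-neutral illustration of the “build once, use many times” idea (this is plain Python, not Blue Prism object code, and the function and process names are invented), a reusable action is simply a fully parameterised unit of work that several processes can call:

```python
# A reusable, parameterised action: nothing is hard-coded to one process,
# so the same integration serves any workflow that touches invoices.
def enter_invoice(app: str, invoice_id: str, amount: float) -> dict:
    """Pretend integration step; a real one would drive the target app."""
    return {"app": app, "invoice": invoice_id, "amount": amount,
            "status": "entered"}

# Two different business processes reuse the same action object:
def accounts_payable_process(invoice_id: str, amount: float) -> dict:
    return enter_invoice("ERP", invoice_id, amount)

def audit_sampling_process(invoice_id: str) -> dict:
    # Same action, different parameters - no copy-and-paste of the integration.
    return enter_invoice("ERP", invoice_id, amount=0.0)

result = accounts_payable_process("INV-1001", 249.99)
print(result["status"])
```

The design choice being illustrated is that all process-specific detail travels through parameters, so a change to the underlying integration is made once and inherited everywhere.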
Thanks to our advanced Control Room and Queue management system, Digital Workers can work in tightly co-ordinated harmony across one or many different tasks. Building a Control Room was not an afterthought for us – it is a core component of the software that has been embedded and refined for almost 10 years.
This efficient sequencing and reusability means we are able to deliver at much higher scale than other products.
Take our customer Western Union. In just six months, they automated more than 21 processes, realised £1m in savings and reassigned 48 FTEs to more valuable work.
Where next? Our current research and product development will focus on making this orchestration fully autonomous – the Digital Workers of the future will be fully self-managing.
Data is the New Gold
It’s important to be sure that all the steps performed by your Digital Workers can be audited and explained along the way – especially because of regulatory requirements and compliance audits, but also because a strong, built-in audit trail and metrics from your processes and Digital Workers enable you to identify areas for greater cost savings and process efficiency. The Blue Prism platform offers best-in-class, built-in analytics and supports feeding data into an external analytics engine. Ultimately, this data will support greater insights and Machine Learning to further improve your efficiency.
Blue Prism is also the only platform that offers true, enterprise-grade audit and non-repudiation. Our platform automatically logs and records every action taken and every change made, giving you 100% visibility into your process workflows. The data collected is centrally stored in a tamper-resistant environment, making it an irrefutable piece of a compliance audit, if needs be.
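One common way to make a log tamper-evident – shown here as a generic sketch using Python’s standard library, not a description of Blue Prism’s actual storage format – is to chain each entry to the hash of the previous one, so any later edit to an earlier record breaks verification:

```python
import hashlib
import json

def append_entry(log: list, action: str) -> None:
    """Append an audit entry chained to the previous entry's hash."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    payload = {"action": action, "prev": prev_hash}
    digest = hashlib.sha256(
        json.dumps(payload, sort_keys=True).encode()
    ).hexdigest()
    log.append({"action": action, "prev": prev_hash, "hash": digest})

def verify(log: list) -> bool:
    """Recompute the chain; any edited or reordered entry fails the check."""
    prev = "0" * 64
    for entry in log:
        expected = hashlib.sha256(
            json.dumps({"action": entry["action"], "prev": prev},
                       sort_keys=True).encode()
        ).hexdigest()
        if entry["hash"] != expected or entry["prev"] != prev:
            return False
        prev = entry["hash"]
    return True

log = []
append_entry(log, "open CRM")
append_entry(log, "update record 42")
print(verify(log))  # True; editing any earlier entry makes this False
```

The same hash-chaining idea underlies many tamper-evident audit stores; the point is that non-repudiation comes from the structure of the log, not from trusting whoever wrote it.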
Final Thought – Planning and Sequencing in an AI Enabled Future
In the world of GDPR and the digitally transformed workplace, non-repudiation and audit will become more important than ever. Consider how you will explain to your auditors how you arrived at a decision if that decision was fully or partially automated. Do you understand which parts of your process are driven by deterministic, rules-based decisions and which are based on non-deterministic, machine learning driven decisions? Can you guarantee that all these steps are audited BY DEFAULT, without relying on a developer to remember to program them into the process?
The Blue Prism platform is already the most prepared to support these new regulatory requirements, and our latest research is focused on making it even easier for you to extract the data and create the specialized reports you need to protect your business.
While we were discussing the confusing realities of the RPA hype at the HfS FORA Summit, we got a sneak preview of the interim data from the 2018 State of Operations and Outsourcing Study, conducted in conjunction with KPMG, where 250 interviews with Global 2000 operations leaders have now been completed.
We asked them where their investment priorities currently lay when it comes to 2018 cost reduction:
So it’s abundantly clear all the hype about rampant adoption has been warranted, and we can hang our hats on our recent enterprise robotics software and services forecast, which now appears conservative at 47% growth to $1.46bn this year (click here for full forecast):
The Bottom-line: RPA has succeeded in being positioned as the “easiest silver bullet to target that next wave of cost take-out”. Now let the real fun and games begin…
We have discussed, argued and deliberated the true value, impact and effective ways to run RPA software for many, many hours here on HfS… for over five and a half years. And you only need to read our recent work to conclude that “RPA often starts out like a teenage romance, with a lot of enthusiastic fumbling around that ends quickly, frequently leading to disappointment“. And you can also read the RPA Bible, which preaches best and worst RPA practices to such an extent, you’ll need to visit your local RPA Rabbi, Bhikkhu, Priest or Mullah to find your soul again.
The real issue, here, is that the majority of enterprises are taking the plunge and investing the dollars, with 81% actually taking RPA seriously, and 53% very seriously. So what’s going to happen in a few months when those ambitious CIOs and CFOs ask to see real, tangible demonstrations of the resultant cost takeout? Can C-Suite leaders quickly learn to love metrics that are tied to growth, value and effectiveness, as opposed to a simple reduction in operating expenses to feel rewarded for those expensive bot licenses? Are operations leaders generally going to be ready to quantify the value effectively? Can they really convince their superiors that there is true value impact beyond merely offering up headcount elimination?
What’s more, what if headcount reductions were promised to offset investments, and adopters have failed to free up the workload that can enable them? And can they reward the staff, who cooperated in the automation work, by getting them “retrained”? Is there really a plan? While the “one human to oversee every 10 bots” is becoming the latest robo-governance rule-of-thumb, how real is this? Or are we just all bull*****g ourselves about the future, and merely circling the hype to stay relevant today? Do we really care about our companies anymore, or are we more obsessed with adding big sexy initiatives to our CVs? Is this really anything different to yesteryear, where you needed to have an SAP rollout on your CV to be a credible CIO, or oversaw a 1000 FTE outsourcing deal to prove you were worth that $1.2m/ year GBS salary (yes, that’s what some get…). In this world of #fakenews, does anything really matter anymore, when we can spin our realities into whatever shiny new thing is out there?
One thing is clear: the back office needs to be submerged into the value end of the organization. There is little more headcount elimination to be had for most companies – sure, there are still many areas that have too many people working on too few valuable tasks, and technologies like RPA are terrific tools for breathing new life into legacy systems and creating digital process flows, where before there was only spaghetti code, manual workarounds and swamps of data polluting the corporate underbelly.
One thing is clear: it’s very murky out there, and all we can really do is hatch a semi-realistic plan and try to stay on top of it as the future unravels in front of us…
Artificial intelligence (AI) is delivering new insights – previously hidden in vast pools of data – to add intelligence to many products and services, ultimately transforming the way organizations and machines interact. While human-like intelligence will remain the stuff of fantasy novels and movies for the near future, most organizations should explore incorporating AI into their business, products, and IT projects. Our firm’s research concludes that AI can improve the productivity of internal applications, increase revenue, reduce costs, and improve products and services with added functionality or communication modes.
You can download the paper here.
With every new software release from RPA sector leaders, there is always much to be excited about as vendors continue to push the technological boundaries of workplace automation. Whether those new capabilities focus on cognition, or security, or scalability, the technology available to us continues to be a source of inspiration and innovative thinking in how those new capabilities can be applied.
But success in an RPA deployment is not entirely dependent just on the technology involved. In fact, the implementation design framework for RPA is often just as important – if not more so – in determining whether a deployment is successful. Install the most cutting-edge platform available into a subpar implementation design framework, and no amount of technological innovation can overcome that hindrance.
With this in mind, here are seven tasks that should be part of any RPA implementation plan before organizations put pen to paper to sign up with an RPA platform vendor.
Create a cohesive vision of what automation will achieve
Automation is the ultimate strict-interpretation code: it does precisely as it’s told, at speed, and in volume. But it must be pointed at the right corporate challenges, with a long-term vision for what it is (and is not) expected to do in order to be successful in that mission. That process involves asking some broad-ranging questions up-front:
- What stakeholders are involved – internally and externally – in the automation initiative?
- What are our organization’s expectations of the initiative?
- How will we know if we succeed or fail?
- What metrics will drive those assessments?
- Where will this initiative go next within our organization?
- Will we involve our supply chain partners or technology allies in this process?
Ensure a staff model that can scale at the speed of enterprise automation
We tend to spend so much time talking about FTE reduction in the automation sector that we overlook the very real issue of FTE sourcing (in volume!) in relation to the implementation of automation at enterprise scale. Automation needs designers, coders, project managers, and support personnel, all familiar with the platform and able to contribute new code and thoughtware assets at speed.
Some vendors are addressing this issue head-on with initiatives like Automation Anywhere University, UiPath Academy, and Blue Prism Learning and Accreditation, and others have similar initiatives in the works. It is also important that organizational HR professionals be briefed on the specific skillsets necessary for automation-related hires; this is a relatively new field, and partnering up-front on talent acquisition can yield meaningful benefits down the road.
Plan in detail for a labor outage
The RPA sector is rife with reassurances about digital workers: they never go on strike; they don’t sleep or require breaks; they don’t call in sick. But things do go wrong. And while the RPA vendors offer impressive SLAs with respect to getting clients back online quickly, sometimes it’s necessary to handle hours, or even days, of automated work manually. Having mature high-availability and disaster recovery capability built into the platform – as Automation Anywhere included in Enterprise Release 11 – mitigates these concerns to a certain degree, but planning for the worst means just that.
Connect with the press and the labor community
Don’t skip this section because it sounds like organized labor management only, although that’s a factor too. Automation stories get out, and local and national press alike are eager to cover RPA initiatives at large organizations. It’s a hot-button topic and an easily accessible story.
Unfortunately, it’s also all too easy to take an automation story and run with the sensationalist aspects of FTE displacement and cost reduction. By interacting with journalists and labor leaders in advance of launching an automation initiative, you’re owning the story before it can be owned elsewhere in the content chain.
Have a retraining and upskilling initiative parallel to your automation COE
Automation can quickly reduce the number of humans necessary in a work area by half or even more. What is your organization’s plan for redeployment of that human capital to other, higher-value tasks? Who occupies those task chairs now – and what will they be doing?
Once the task of automation deployment is complete, there is still process work to be done in finding value-added work for humans who have a reduced workload due to automation. Some organizations are finding and unlocking new sources of enterprise value in doing so – for example, front-line workers who have their workloads reduced through automation can often ‘see the forest’ better and can advise their superiors on ways to streamline and improve processes.
Similarly, automation can bring together working groups on tasks that have connected automations between departments, allowing for new conversations, strategies, and processes to take shape.
Have an articulation plan for RPA and other advanced technologies
RPA and cognitive automation do more than improve the quality and consistency of work – they also improve the quality and consistency of task-related data. That is an invaluable characteristic of RPA from the organizational data and analytics perspective, and one that is often overlooked in the planning process.
While it might take days for a service center to spot a trend in common product complaints, RPA platforms could see the same trend in hours, combine that data in an organizational data discovery environment with IoT data from the production line, and identify a product fault faster and more efficiently than a traditional workforce might. When designing an automation initiative, it is vital to take these opportunities into account and plan for them.
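As a toy illustration of that trend-spotting idea (hypothetical data and threshold, not a real RPA platform API), an automated workflow that tallies complaint categories as it processes each transaction can flag a spike the moment it crosses a limit, rather than waiting for a periodic manual review:

```python
from collections import Counter

# Hypothetical complaint categories captured by an automated intake process.
complaints = ["seal leak", "late delivery", "seal leak", "seal leak",
              "wrong item", "seal leak"]

counts = Counter(complaints)
threshold = 3  # invented alerting threshold for the example
spikes = [category for category, n in counts.items() if n >= threshold]
print(spikes)  # → ['seal leak']
```

In a real deployment the counts would feed an organisational data-discovery environment (alongside IoT data, as the paragraph above suggests), but the structured, machine-readable nature of RPA task data is what makes this kind of early detection cheap.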
Create a roadmap to cognitive automation and beyond
RPA is no more a destination than business rules engines were, or CRM, or ERP. These were all enabling technologies that oriented and guided organizations towards greater levels of agility, awareness and capability. Similarly, deploying RPA provides organizations with insight into the complexity, structure and dependencies of specific tasks. Working towards task automation yields real clarity, on a workflow-by-workflow basis, of what level of cognition will be necessary to achieve meaningful automation levels.
While many tasks can be achieved by current levels of vendor RPA capability, others will require more evolved cognitive automation, and some will be reserved for the future, when new AI capabilities become available. By designating relevant work processes to their automation ‘containers’, an enterprise roadmap to cognitive automation and AI begins to take shape.
By Leslie Willcocks
Robotic Process Automation (RPA) continues to be a growing success story. In 2016, RPA alone experienced a 68 percent growth rate in the global market, with 2017 maintaining this momentum. Some reports have even predicted a US$ 8.75 billion market by 2024. However, merely investing in RPA is not an instant recipe for growth.
In “Service Automation Robots and The Future of Work” (2016), my colleague Mary Lacity and I highlighted successful RPA deployments and how organizations were achieving triple wins for their shareholders, customers, and employees alike. We continued tracking these developments in 2017 and also noticed something different — many less successful journeys. In practice, it appears that automation success is far from guaranteed. Wider reports provide anecdotal evidence of between 30 to 50 percent of initial projects stalling, failing to scale, being abandoned or moving to other solutions. Our most recent research has examined in detail both successful and more challenged automation deployments. It turns out that service automation — like all organizational initiatives that try to scale — can be fraught with risk. We’re seeing 41 specific risks that need to be managed in eight areas: strategy, sourcing, tool selection, stakeholder buy-in, project execution, change management, business maturity and an automation center of excellence.
One of the key risk areas is tool/platform selection. Because of the hype and confusion in the RPA marketplace, clients risk choosing the wrong tool(s), too many tools, or bad tool(s). By early 2018, over 45 tools or platforms were being sold as “RPA” and over 120 tools were being sold as some form of cognitive automation. Because the space is relatively new to many clients, it’s difficult to assess the actual capabilities and suitability of these tools. Clients must be wary of hype and “RPA washing”.
In our new report on Benchmarking the Client Experience, we extensively polled Blue Prism clients on the results they’ve been getting by integrating RPA into existing business processes. In order to get the most valuable feedback, we set the bar high in requesting client assessments of the Blue Prism RPA platform on the following criteria: scalability, adaptability, security, service quality, employee satisfaction, ease of learning, deployment speed and overall satisfaction. From our qualitative research into process automation, these emerged as the most critical and essential characteristics and requirements for a successful enterprise-grade RPA implementation.
The overall level of satisfaction with the Blue Prism platform was extremely high in our survey. Respondents reported a 96 percent overall satisfaction rate, with 79 percent of respondents ranking Blue Prism’s platform a six or seven on a seven-point Likert Scale. Based on our 25-year research history into process improvement initiatives (BPM, shared services, outsourcing, six sigma, etc.), these are extremely high RPA satisfaction levels. Our research into IT and Business Services outsourcing finds only 20 percent of vendors getting “world class” performance, 25 percent getting good performance, 40 percent “doing OK”, while 15 percent experience poor outcomes. The record on IT projects also continues to frustrate. The most recent (2017) Standish Group CHAOS report found only a third of IT projects were successfully completed on time and on budget over the past year – the worst failure rate the Standish Group has recorded.
What, then, accounts for the impressive 96 percent overall satisfaction rate with Blue Prism?
Our observation is that not all RPA offerings are the same. The capability of RPA software depends greatly on the origins and orientations of the supplier. If designed as a desktop assistant, many RPA tools experience problems with scaling, security and integration with other information systems. Other RPA vendors offer RPA which is effectively a disguised form of what we have described as a “software-development kit,” needing a lot more IT development by the in-house team or the RPA vendor than first imagined, and incurring unanticipated expense, time and resources. True enterprise RPA, however, is designed from the start with a platform approach, to fit with wider enterprise systems. This might make it more expensive initially, and require more attention in the first few months of trial, but true enterprise RPA platforms have proven to be an investment in success later in the deployment cycle, when compared to other RPA software that tends to run into real problems.
Our qualitative research also suggests that some RPA tools are not easily scalable, especially those based on a recording capability, or requiring a lot of IT development. This occurs because some RPA tools are not designed as configurable service delivery platforms that can be integrated with other existing systems. These also need a lot more management involvement than clients and their vendors often expect. Many clients, moreover, do not put in place the necessary IT, project and program governance (rules and constitution, who does what, roles and responsibilities), and often do not use built-in tools that contain technical governance.
This, of course, is not the whole story. An RPA and cognitive skills shortage is already upon us. This means that retained capability and in-house teams are sometimes not strong enough – a situation not helped by sometimes skeptical senior management under-resourcing automation initiatives and not taking a strategic approach. Consultants are also hit by skills shortages and cannot always provide the support necessary — this is also true with business services outsourcing providers. We are also finding that clients often do not give enough attention to stakeholder buy-in and change management. Given these emerging challenges, the Blue Prism client satisfaction levels are very notable indeed.
To download the report, click here.
Leslie Willcocks is Professor in the Department of Management at the London School of Economics, and co-author, with Mary Lacity, John Hindle and Shaji Khan, of the Robotic Process Automation: Benchmarking The Client Experience (Knowledge Capital Partners, London).
Most successful RPA projects emanate from a good design. Regardless of one’s preferred method for arriving at that design (e.g. Agile, Waterfall, etc.), the process should include the development of a formalized specification document (spec) that details the road ahead and builds project team consensus. While this might seem obvious to the experienced developer, RPA’s new-found celebrity has attracted a lot of new and eager practitioners whose enthusiastic desire to produce quick wins might also encourage some short-cutting. The purpose of this article is to articulate some of the best practices our professional services team has identified over the years when it comes to building an RPA specification document.
Just the facts. The most important aspect of constructing the spec document is that it precisely captures just the process being automated. While “color commentary” can be helpful to the PD (Process Designer) when trying to understand the process and assessing potential improvements, such commentary should not be included in the spec. The goal of the document is to focus the AA (Automation Architect) only on the aspects of the process that are required to complete the transaction. All information beyond what is needed to complete the process is often considered noise. In other words, the AA cares about the “who, what, which and where”, and not much about the “why”. While the spec must contain as many details as possible regarding the transaction, the details should focus on:
- Where to start?
- Which screens to navigate?
- Which user interface controls to manipulate?
- What data should be extracted and/or pasted?
- How should data be manipulated?
- What application and screen states to look for?
- How to handle error conditions and exceptions?
- What state to leave the application in when the transaction completes?
- How should logging be performed?
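One way to make those questions concrete – purely illustrative, not a prescribed spec format, and with all field names invented – is to give each spec step a structured shape that forces the author to answer them:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SpecStep:
    number: str                      # unique step number, e.g. "4.2"
    screen: str                      # which screen to navigate
    control: str                     # which UI control to manipulate
    action: str                      # what to do: click, paste, extract...
    data: Optional[str] = None       # what data is pasted or extracted
    expected_state: str = ""         # application/screen state to look for
    on_error: str = "log and raise"  # how to handle the negative path

step = SpecStep(number="4.2",
                screen="Invoice Entry",
                control="Amount field",
                action="paste",
                data="invoice.amount",
                expected_state="Invoice form loaded")
print(step.number)
```

Whether the spec lives in a document or a tool, giving every step the same set of slots makes omissions (no expected state, no error handling) visible at a glance.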
Minimize jargon. The AA is usually not intimately familiar with the user’s business nor conversant in the specific language of the business. Therefore, keeping jargon down to a minimum is important. The best method is to try to relate jargon to standardized business terms most AAs do understand such as: invoices, purchase orders, inventory item, price, etc. If it is required to include jargon to help the customer team understand the spec, make sure you include a glossary of terms early in the spec.
Make each step as atomic as reasonably possible. Each individual function the automation must perform is called a “step”. In the spec, each step is uniquely numbered so it can be cross-referenced, tested individually, and linked back to when viewing logs. For this reason, it is important to define each discrete function the automation performs (e.g. the pasting of data to one field, or the clicking of a button) as its own step, and not lump multiple functions together. Break each process down to its most reasonable atomic function. When I say “reasonable”, I mean the step that represents a logical unit of work that you would want to link back to from a log. Let us review a couple of simple examples.
Example 1: If a step calls for pressing the key combination, “Alt+K”, this should be expressed as one step, not two.
Example 2: What if you want to select a sub-navigation menu item that requires two sets of key presses (e.g. “Alt+F” then “O” to pop a File Open dialog)? Should this be represented as one step or two? In this case, it makes sense to combine the key presses into one step, since the logical unit of work is the popping of the File Open dialog. However, it would not be wrong to break the process down into two steps. It ultimately comes down to style, and it is probably more important to represent these steps consistently within a given spec than to definitively handle them one way or the other.
Steps should be numbered and sub-numbered. Well-designed automations can cross reference their functions back to the steps defined in a spec when a user is viewing an action execution log. This allows the user to easily figure out, using the language of the user, what the action was supposed to be doing at a specific point during the execution. However, this cross-referencing can only happen if the steps defined in the spec are uniquely numbered.
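A minimal sketch of this cross-referencing (illustrative Python, not any specific RPA product’s logging API; the step numbers and descriptions are invented): every log line carries the spec step number, so the log reads in the language of the spec:

```python
# Collected execution log; a real platform would write to a log store.
execution_log = []

def run_step(number: str, description: str, action) -> None:
    """Record the spec step number alongside what is being done, then do it."""
    execution_log.append(f"[step {number}] {description}")
    action()  # the actual automation work would happen here

run_step("3.1", "Press Alt+F then O to open the File Open dialog", lambda: None)
run_step("3.2", "Select part number from list", lambda: None)
print(execution_log[0])
```

With this convention, a user reading the log can jump from any line straight to the uniquely numbered step in the spec, which is exactly the traceability the paragraph above describes.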
A picture is worth a thousand words. While narrative descriptions are important, nothing informs the AA more about the task at hand than a picture (see Figure 1).
A spec should make heavy use of screen shots to communicate information such as:
- What should the state of the screen look like at this step?
- Via highlights, which user interface (UI) elements are to be manipulated in the step(s).
Other points to consider when working with screen shots:
- Screen highlights should use colors that are not contained within the screen upon which they are overlaid and those colors should be used consistently throughout the spec.
- It is a common practice to include the step number in a screen shot highlight for each UI element highlighted.
- A single screen shot usually embodies references to multiple steps; multiple screen shots of the same screen should be avoided.
Spec the negative condition. Documenting a process where everything goes according to plan is easy. However, accounting for error or unexpected conditions can be more of a challenge. For example, consider a step that states: “4. Select part number from list.” What should the automation do if the part number is not in the list? This is the kind of “negative” condition that should be accounted for in your spec. The more negative conditions you can capture in the spec (again, within reason), the less back and forth the AA will have with the user team for clarifications.
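The part-number example above can be sketched in code, with the negative condition made an explicit branch rather than an unhandled failure. The exception class, step numbers, and recovery choice (skip and report the record) are illustrative assumptions.

```python
# Sketch of spec'ing the negative condition for step 4 ("Select part number
# from list"). PartNotInListError and the sub-step numbering are illustrative.
class PartNotInListError(Exception):
    pass

def select_part_number(part_number: str, listed_parts: list) -> str:
    # Step 4: select the part number from the list.
    if part_number in listed_parts:
        return part_number
    # Step 4.1 (negative condition): the part number is not in the list.
    # Raise a named error so the automation can skip the record and report
    # it, rather than failing in an undefined way.
    raise PartNotInListError(f"part {part_number!r} not found in list")
```

Writing the negative branch into the spec as its own sub-step (4.1) gives the AA a definitive answer before the question ever comes up during development.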
Seek out keyboard shortcuts and mnemonics where possible. While most RPA tools support drag/drop and icon clicking, it is always faster and less error-prone when a keyboard shortcut or menu mnemonic is used in an automation. Although the AA will ultimately decide the best method for automating the user interface, calling out such shortcuts is always helpful.
Demarcate commentary via notes. Even though sticking to the facts is paramount when building a spec, there are times when some commentary is required (e.g. how something is calculated or the conditions under which certain states arise). In these cases, it is a best practice to not include the commentary in a step, but rather, break it out as its own “section note”.
Include an automation start state & preparation section if applicable (usually applicable to attended bot automations). If access to the development environment is proctored, then in most cases, the proctor should be able to navigate the AA to the application screens from which the automation is initiated. However, if access is not proctored, it is important to include in the spec (prior to the step definition), an automation start state section that includes the following:
- Application load methods.
- Login credentials for the applications.
- Navigation path to get to the automation start screen.
- Any data required to support the navigation path.
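One simple way to make the start-state section unambiguous is to capture the four items above as structured data rather than loose prose. The field names and values below are hypothetical examples, not a required schema.

```python
# Illustrative start-state section for an attended-bot spec, captured as
# structured data. All field names and values are hypothetical examples.
START_STATE = {
    "application_load": "Launch the ERP client from the desktop shortcut",
    "credentials": "Stored in the team password vault (never inline in the spec)",
    "navigation_path": ["Main Menu", "Orders", "Order Entry"],
    "navigation_data": {"company_code": "001"},
}
```

Keeping the start state in one structured block also makes it easy to verify during testing that the AA and the user team agree on where the automation begins.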
Include incomplete transaction rollback instructions. Some transactions commit data at specific steps prior to the completion of the process. If this is the case, the spec should include the process for rolling back the transaction to its pre-processing state. If the rollback conditions are only applicable to the testing of the action, it should be demarcated as a note. If the rollback must happen in production as well, it should be documented as its own set of steps.
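The rollback idea can be sketched as a small pattern: each step that commits data registers its undo action, and an incomplete transaction is unwound in reverse order. This is a minimal illustration of the concept, not the mechanism any particular RPA tool uses.

```python
# Minimal sketch of incomplete-transaction rollback: each committing step
# registers an undo action; on failure, completed steps are unwound in
# reverse order to restore the pre-processing state.
def run_with_rollback(steps):
    """steps: list of (do, undo) callable pairs."""
    undo_stack = []
    try:
        for do, undo in steps:
            do()
            undo_stack.append(undo)
    except Exception:
        # Roll back, most recently committed step first.
        for undo in reversed(undo_stack):
            undo()
        raise
```

Documenting each step’s undo action in the spec, alongside the step itself, is what makes this kind of rollback possible for the AA to implement.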
Define the skills required to handle a prompt. This point applies exclusively to unattended bot projects. When an unattended bot encounters a processing condition that requires assistance from a human, it can raise a “prompt” and send a notification to one or more authorized users. When the prompt notification is received, the user can either provide the information requested by the prompt or click through the notification and take control of the unattended bot’s desktop. This is a powerful RPA feature that helps reduce job rejections and speeds up transaction processing. However, not all users may be able to handle all raised prompts. Most RPA tools that support prompting allow you to associate a “skill” with a prompt so that only users possessing that skill will be sent that prompt. This being the case, it is important that the spec define where prompts are raised and what specific skills are required to address them.
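Skill-gated prompt routing can be sketched as follows. The user names and skill labels are illustrative; real RPA tools expose this association through their own administration interfaces.

```python
# Sketch of skill-gated prompt routing: a prompt carries a required skill,
# and only users holding that skill are candidates for the notification.
# User names and skill labels are illustrative examples.
def users_for_prompt(required_skill, user_skills):
    """Return the sorted list of users who may receive a prompt
    requiring `required_skill`. `user_skills` maps user -> set of skills."""
    return sorted(u for u, skills in user_skills.items()
                  if required_skill in skills)
```

The spec’s job is to supply the inputs to this kind of routing: at which steps a prompt may be raised, and which skill each prompt requires.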
Building and getting the spec approved is an iterative process. Though things seem clear upon the first pass of a design, there are always clarifications and modifications that take place as people give the process more scrutiny. All changes should be incorporated into a new version of the spec. The initial version of the spec draft should be versioned “v1.0”, with subsequent versions incrementing the sub number (e.g. v1.1, v1.2, v1.3, etc.), assuming the modifications and clarifications do not change the scope of the project. Most specs do not require the major number to be incremented, but it does happen. This is usually the case when a project has major functional changes added to it during the design phase.
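The versioning scheme described above can be sketched as a simple rule: clarifications bump the minor number, while a scope change bumps the major number and resets the minor number to zero. The function below is an illustrative sketch of that convention.

```python
# Sketch of the spec versioning convention: "v1.0" -> "v1.1" for ordinary
# clarifications; "v1.3" -> "v2.0" when the project's scope changes.
def next_version(version, scope_changed=False):
    major, minor = (int(n) for n in version.lstrip("v").split("."))
    if scope_changed:
        return f"v{major + 1}.0"
    return f"v{major}.{minor + 1}"
```

Recording which rule was applied (and why) in the spec’s change history keeps the version numbers meaningful to both the AA and the user team.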
Once the specification is approved by the user, that version is considered the “build draft”. I call it a build draft because, undoubtedly, the development process will uncover issues that were not captured properly in the original spec, thus requiring final modifications. This is a normal part of the process.
Finally, one of the most important aspects of the spec document is that it be kept current during the development and testing phases. It is critical that any modifications made to the process to accommodate variances uncovered downstream of the build draft get incorporated back into the spec. Otherwise, the spec will have little use when users try to use it to understand exceptions or use it as the basis for a phase II project.