CROWDSOURCING HAS REAL POTENTIAL IN INTELLIGENT AUTOMATION

In discussions with stakeholders in the evolution of Intelligent Automation (IA), HfS is constantly on the lookout for new technology providers that could add innovative and complementary approaches to automating service delivery. As part of these discussions, we were briefed by executives of WorkFusion on how combining crowdsourcing and machine learning is offering new capabilities to the continuum of Intelligent Automation solutions.

While the key value proposition of IA is to achieve higher levels of automation by shifting service delivery from labor-arbitrage, people-based solutions to delivery through innovative automation tools, WorkFusion’s approach (at least in part) is to break processes into microtasks that can be completed by automated systems and crowdsourced staff. It achieves this through a platform approach that combines business process microtasking and machine learning (ML) with BPM functionality.

Looking at the value proposition in more detail, WorkFusion breaks complex processes into microtasks and delegates each task to either a machine tool (e.g., RPA, scrapers, OCR, text analytics) or a person, depending on the complexity of the work. For tasks that require human judgment, the platform integrates internal employees, contingent workers, outsourced contract workers, and cloud workers. WorkFusion has APIs into global on-demand cloud talent markets, which is useful for increasing scale and language coverage for human tasks. The platform provides interfaces that guide analysts through the day-to-day work of unstructured data categorization and extraction. As analysts perform these tasks, WorkFusion applies statistical quality control to ensure data accuracy and uses this high-quality data to train ML to automate predictable work. The result is a hybrid workforce of robotic process automation (RPA), learning algorithms, and human talent for exceptions, which together represent the continuum of process automation, including RPA and cognitive capabilities.
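
As a rough illustration of that machine-or-human routing pattern, the sketch below sends a microtask to an analyst only when a classifier's confidence falls below a threshold, and keeps the analyst's answer as future training data. This is a hypothetical outline of the pattern described above, not WorkFusion's actual platform; the model, threshold and ask_analyst interface are all assumptions.

```python
# Hypothetical sketch of the "machine first, human for exceptions" routing
# pattern; names and thresholds are illustrative, not WorkFusion's API.

from dataclasses import dataclass, field
from typing import Callable, List, Tuple

@dataclass
class MicrotaskRouter:
    model: Callable[[str], Tuple[str, float]]   # returns (label, confidence)
    threshold: float = 0.90                      # below this, escalate to a person
    training_pool: List[Tuple[str, str]] = field(default_factory=list)

    def route(self, document: str) -> str:
        label, confidence = self.model(document)
        if confidence >= self.threshold:
            return label                         # automated path (RPA / ML)
        # Human path: ask an analyst, keep the answer as new training data.
        human_label = ask_analyst(document)      # hypothetical crowd/BPO interface
        self.training_pool.append((document, human_label))
        return human_label

def ask_analyst(document: str) -> str:
    # Placeholder for a crowdsourced or in-house analyst interface.
    return input(f"Categorise: {document[:60]}... ")
```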

The provider’s sweet spot is processes that are both high volume and high complexity. Examples are reference data, know your customer (KYC) processes, gaps in anti-money laundering (AML) standards, trade operations, supply chain and marketing research. Consequently, the vertical focus is on financial services and, in particular, on information providers. In KYC and AML, WorkFusion’s approach competes with other IA propositions, while in other processes it could sit on top of RPA or other automation tools.

WorkFusion provides an additional innovative instrument for the IA toolbox available to enterprises and service providers. It is another example of the interdependent and often overlapping approaches on the continuum of IA. By breaking processes into microtasks, the approach augments automation with labor; by expanding lower-level RPA capabilities with machine learning, it augments business analysts with deeper insights.

Many organizations as well as service providers are building out a portfolio of IA tools. Therefore, given the nascent, albeit maturing state of the market, there are many opportunities for technology startups. HfS would welcome discussions with other organizations that can add distinctive approaches on the journey toward the As-a-Service Economy.

 

Source: HFSresearch-CROWDSOURCING HAS REAL POTENTIAL IN INTELLIGENT AUTOMATION

Can managed services be the key to bimodal IT success for mid-market businesses?

IT professionals are buzzing about Gartner’s newly coined term “bimodal IT”, a method of service delivery that allows IT teams to split their focus into two separate, coherent modes: stability and agility. The former encompasses day-to-day IT operations, which are essential to the safety and reliability of an organisation’s IT environment. The latter is centred on innovation that allows the IT team to experiment and identify new ways of using technology to meet the fast-evolving demands of the business.

Today, according to Gartner, 45 per cent of CIOs use a bimodal IT service management strategy. By 2017, Gartner predicts that 75 per cent of organisations will have implemented bimodal IT strategies. In a different survey by a separate organisation, however, the majority of CIOs – 63 per cent, to be exact – said their IT spending was weighted too heavily toward the maintenance side of IT, leaving little room for new projects.

Lack of resources to dedicate to innovation is a challenge for mid-market businesses especially, as they typically have limited IT staff, and the staff they do have are too consumed with troubleshooting and helpdesk issues to drive new developments.

So how are the majority of businesses going to adopt a bimodal approach to IT service delivery within the next couple of years if the modes are imbalanced? For some businesses, the solution lies with managed services.

Bimodal IT and managed services

Some organisations are choosing to outsource basic IT functions such as helpdesk, which allows the IT unit to take a more holistic role in the business. With the changes in the nature of IT sourcing, Gartner believes that smaller IT suppliers will be able to respond rapidly to requirements, while also scaling solutions more quickly by utilising cloud capabilities.

By outsourcing IT functions to a managed service provider (MSP), internal staff are then free to invest more time in IT innovation that allows the business to remain agile in a competitive marketplace. But while innovation plays a key role in advancing the business, organisations can’t afford for their service delivery times to suffer. For a business to have a successful MSP relationship that doesn’t detract from the organisation’s future progress, the business should ask the MSP the following questions:

Are you able to accommodate our current and future IT needs?

It may seem obvious, but it’s crucial for an MSP to have experience supporting the specific functions that will be outsourced. If a business is outsourcing the helpdesk, for example, any IT vendor that will be working on hardware should have engineers that possess a variety of skills and are fully qualified to repair equipment from an array of manufacturers (having someone without the proper qualifications work on equipment could void manufacturer warranties). The MSP will also need to be equipped to support any new technologies the business plans to implement in the future.

What are your guaranteed service levels?

The MSP’s promised service levels must be backed by service level agreements (SLAs), and the business must carefully examine these SLAs to ensure they align with the organisation’s goals and expectations. If possible, the business should also evaluate the MSP’s service quality by having the provider demonstrate typical response times and provide quantifiable metrics of success.

What degrees of support do you offer?

To prevent operational IT issues from distracting the IT team from new projects, the business needs to contract for a level of coverage that guarantees the MSP will resolve IT issues satisfactorily. MSPs typically offer varying degrees of support, such as 24/7, remote or on-site support. If the MSP doesn’t offer a package that meets the organisation’s needs in terms of both budget and support coverage, the business might need to request that the MSP create a customised solution.

For businesses to stay afloat in a competitive marketplace, technological innovation is a must. Although mid-market businesses might struggle to implement bimodal IT strategies with dedicated in-house staff, working with an MSP can free up the time and skill necessary to focus on progress and ensure their spot among Gartner’s projected 75 per cent of businesses that have implemented a bimodal strategy.

Source: itproportal – Can managed services be the key to bimodal IT success for mid-market businesses?

Six CIO tips for business innovation with data

First Utility CIO Bill Wilkins has a job that relies on data. The company started as a small, entrepreneurial business in 2008 and experienced rapid growth as a pioneer in smart meters. By 2014, the firm had become the seventh-largest energy supplier in the UK, with over a million customers and a market share of 2%.

“Because of the company’s rapid growth, every year has been different,” says Wilkins.

The firm is now scaling up for further growth through a focus on its digital platform. Wilkins, who joined First Utility full-time in 2010 after spells with Sun Microsystems and SeeBeyond, is drawing on his experience to push a data-led process of innovation.

Wilkins offers six best-practice tips for other CIOs from his experiences of running data projects, covering areas such as organisational culture, external partnership and continuous innovation.

1. Create a customer-driven approach to data analytics

Wilkins says First Utility benefits from access to a huge data asset that it uses in three key ways. First, the business runs a number of information-led initiatives to boost customer engagement. “We are, essentially, a retailer and we want to have long-term, valuable relationships with our clients,” he says.

The second way First Utility uses data is for optimisation. “Information helps us to understand what processes work, which processes are causing us problems and how we can use our experience around those processes to make the business better,” says Wilkins.

The third way the firm uses information is strategically, says Wilkins. “Now we’ve built a platform, we want to know our technology is working and where the business can use systems and services to develop and grow,” he adds. “It’s all about making the most of data to find new opportunities and to market to new sets of customers.”

Wilkins says the firm’s customer-driven approach goes further – other external stakeholders are included, too. “From a very early stage of operation, the firm – because of its strong focus on data – has had access to detailed market information,” he says.

“It was not until the IT team built an application on top of that data that a wide base of users started making the most of our knowledge. The awareness within our organisation about how competitive we are in the marketplace is now much clearer because we created a visual representation of data for our employees.”

2. Get your organisational structure right

Wilkins says that, while his firm’s use of data is very broad, he can benefit from a tight organisational set-up. “We have the advantage of still being focused as an organisation, despite our rapid growth,” he says.

Take information management, for example. Here Wilkins benefits from access to a single data team. “If you go into many other billion-pound businesses, you’d have a much more established set of functions with their own silos of data,” he says.

“We’ve managed to retain a coherent organisational structure. We also have a central repository for data and that represents a huge advantage, because it means we can look at information and synthesise it in many different ways.”

Wilkins says he has strived to achieve an integrated approach to data, both in terms of human skills and technical resources. One key factor is that he combined the role of head of enterprise architecture with that of data delivery at a very early stage of his tenure as CIO.

This single manager has design authority for data inputs, but also needs to drive insight from the information. “In a period of rapid growth and innovation, we can use this integrated approach to make sure our aims and objectives are still as aligned as they possibly can be,” he says.

3. Look to evolve, rather than to keep starting afresh

Wilkins says that from the start, the investors at First Utility recognised that the company would use technology to create a competitive differentiation. The firm wanted to deliver smart, rather than standard, energy and it spent a lot of time building an end-to-end infrastructure.

“At the time, you couldn’t get a smart gas meter,” he says. “The business ended up building its own hardware to measure volume and send the data back to the office. The senior executives got involved in a lot of low-level, but clever, technology to get their smart proposition set up.”

“Work with a partner, learn from their experience, innovate for your customers and differentiate from your competitors”

Bill Wilkins, First Utility

On joining the business, Wilkins was able to inherit this foundation work. The company had already solved the problem of taking heterogeneous data and creating a normalised, standard view of information that worked in a billing system. The problem, however, was that the system did not scale.

The answer to that challenge, says Wilkins, was to modify the foundation, which he says represents key advice to other CIOs facing a similar data conundrum. “What you have to do is to take what’s already there and look at ways to evolve that approach,” he says.

4. Partner with external specialists to build engagement

With the platform in place, Wilkins started to look at other ways to help First Utility develop its smart approach to energy provision. He realised there was a huge opportunity for using the firm’s half-hourly collected smart meter data to create a new form of engagement with customers.

“The call to action was that we realised we had this rich data set which contained lots of interesting information. What we had to do was to turn it into knowledge that could inform our customers about their energy use,” says Wilkins, who explains how the firm partnered initially with external specialist Opower to create its My Energy programme.

Read more about innovating with data

  • Driving innovation with big data: Rather than refusing the wider use of data outside the organisation, only by opening it up to third parties can we realise its true value.
  • An innovation and collaboration centre has been launched in London to develop ideas relating to data and the UK economy.
  • The Digital Catapult in London’s King’s Cross is home to ambitious technology startups innovating around big data.

“That’s another tip from me – don’t try to build it all yourself,” he says. “Innovation happens in many places, so look for partners that can complement what you do. We worked with Opower, as the leading US player in energy analytics, and learnt from their knowledge and experience.”

First Utility then decided to bring the My Energy initiative back in-house, because it thought the US-centric platform was not necessarily the best way to serve its UK customer base. “Partnering allowed us to get a product out there and to learn from its use in the real world very quickly,” says Wilkins.

5. Use experience to build your own data platform

Knowledge from the first, externally developed iteration of My Energy proved essential as First Utility created its own version of the platform, which was launched at the end of 2014. “For other CIOs, I would say the lesson is to work with a partner, learn from their experience, innovate for your customers and differentiate from your competitors,” says Wilkins.

To achieve this level of differentiation, Wilkins built a dedicated team of internal specialists. He initially thought First Utility would need to employ a broad range of analytical specialists, but quickly discovered that customer experience expertise would be more useful.

“As we started building out the My Energy platform, we realised we needed people who could translate the data into areas that customers would be keen to investigate and use,” says Wilkins. “We inevitably spent much more time and money on the look and feel of the service and less on the data side.”

“We’ve learnt that the way you present information to different stakeholders is very important. Half-hourly updates have a very low value for consumers, but we’ve used My Energy to take that information and present it in a more informative manner for customers.”

6. Think in an entrepreneurial fashion and continue to innovate

As a smaller utility firm, First Utility must try to keep pace with larger competitors, but often with fewer resources, says Wilkins. He points to the firm’s mobile programme, which – when compared to the big budget spend of some competitors – was launched with the help of just four engineers in under a year.

One area of pioneering development is the firm’s partnership with Cosy, a Cambridge firm specialising in the development of smart heating systems. Wilkins says First Utility’s aim is to be in a position to offer the Cosy technology to all its customers by the end of the year.

“Cosy is all about bringing in a new data set concerning the heating characteristics of a house,” he says. “Customers get to control their heating from an app, and we get fine-grain information on their requirements, and the efficiency of their boiler and insulation. That information is then fed back into the My Energy platform.”

Wilkins says data becomes more valuable when it is woven together and used for cross-purposes. As well as Cosy, First Utility is also set to launch a new Auto Read feature as part of its mobile app for customers who do not yet have smart meters. A UK first, the app uses the phone’s camera to take a snapshot of the meter and helps to create more accurate readings.

“Both innovations – Cosy and Auto Read – are concerned with how we can get better-quality data into our analytics engine,” he says. “Getting an accurate view of energy consumption is a challenge for utility firms. Unless you get it right, you start billing estimates, which isn’t great for customers and doesn’t help create certainty in terms of revenue for the business.”

 

Source: computerweekly.com-Six CIO tips for business innovation with data

Time to cut IT costs again, predicts Gartner

IT spending is set to rise slightly in 2016, with increases in datacentres, services and software, but spending on devices and telecommunications is set to fall.

According to Gartner’s latest spending forecast, spending on datacentre systems is projected to reach $175bn in 2016, a 2.1% increase from 2015. Global enterprise software spending is on pace to total $321bn, a 4.2% increase from 2015. Spending on IT services is expected to reach $921bn, a 2.1% rise from 2015.

Gartner research vice-president John-David Lovelock said the top-line growth of 0.5% in 2016 follows two years of decline due to the strength of the US dollar.

“When we look at western Europe, its growth will be 0.2%, but there was 0.7% growth in 2015,” said Lovelock.

He expects spending in Europe will rise in 2017. Companies have continued to spend when necessary, such as replacing aging servers, but he said there was a retraction in the phones and devices markets.

The analyst firm predicted the device market (PCs, ultramobiles, mobile phones, tablets and printers) would decline 3.7% in 2016. The smartphone market is approaching global saturation, which is slowing growth, said Gartner.

The main factor limiting IT spending, according to Lovelock, is worsening economic conditions.

Read more about digital transformation

  • Computer Weekly looks at the key characteristics of successful leaders as digital transformation becomes a business priority.
  • In this presentation from Computer Weekly’s CW500 event, digital transformation programme director Richard Philips explains how the AA tackled the challenges of delivering people, cultural and organisational change.

When asked about Europe, he said: “There is a shift from growth to cost optimisation.” But this is not like the stagnant market in 2001, when overspending on IT led to massive cost cuts and redundancies.

“No one has revenue growth to transform to a digital business. CIOs must now optimise IT and business to fund spending on digital projects,” said Lovelock. As an example, he said the savings from legacy system optimisation and enhancements can be redirected to fund digital initiatives.

It is necessary to reduce costs to become a digital business, he added. One of the approaches Gartner promotes is so-called Mode 2 development, which Lovelock said costs less than traditional, or Mode 1 IT.

Businesses then need to move away from owning assets to using services such as software as a service (SaaS) and infrastructure as a service (IaaS). “Instead of buying IT, businesses will buy services,” Lovelock predicted.

“Things that once had to be purchased as an asset can now be delivered as a service. Most digital service twin offerings change the spending pattern from a large upfront payment to a smaller recurring monthly amount. This means the same level of activity has a very different annual spend,” he said.

Source: computerweekly.com-Time to cut IT costs again, predicts Gartner

Google cloud outage highlights more than just networking failure

Google Cloud Platform went dark some weeks ago in one of the most widespread outages to ever hit a major public cloud, but the lack of outcry illustrates one of the constant knocks on the platform.

Users in all regions lost connection to Google Compute Engine for 18 minutes shortly after 7 p.m. PT on Monday, April 11. The Google cloud outage was tied to a networking failure and resulted in a black eye for a vendor trying to shed an image that it can’t compete for enterprise customers.

Networking appears to be the Achilles’ heel for Google, as problems with that layer have been a common theme in most of its cloud outages, said Lydia Leong, vice president and distinguished analyst at Gartner. What’s different this time is that it didn’t just affect one availability zone, but all regions.

“What’s important is customers expect multiple availability zones as reasonable protection from failure,” Leong said.

Amazon has suffered regional outages but has avoided its entire platform going down. Microsoft Azure has seen several global outages, including a major one in late 2014, but hasn’t had a repeat scenario over the past year.

This was the first time in memory a major public cloud vendor had an outage affect every region, said Jason Read, founder of CloudHarmony (now owned by Gartner), which has monitored cloud uptime since 2010.

Based on the postmortem Google released, it appears a number of safeguards were in place, but perhaps they should have been tested more prior to this incident to ensure this type of failure could have been prevented, Read said.

“It sounds like, theoretically, they had measures in place to prevent this type of thing from happening, but those measures failed,” he said.

Google declined to comment beyond the postmortem.

Google and Microsoft both worked at massive scale before starting their public clouds, but they’ve had to learn there is a difference between running a data center for your own needs and building one used by others, Leong said.

“You need a different level of redundancy, a different level of attention to detail, and that takes time to work through,” she said.

With a relatively small market share and number of production applications, the Google cloud outage probably isn’t a major concern for the company, Leong said. It also may have gone unnoticed by Google customers, unless they were doing data transfers during those 18 minutes, because many are doing batch computing that doesn’t require a lot of interactive traffic with the broader world.

“Frankly, this is the type of thing that industry observers notice, but it’s not the type of thing customers notice because you don’t see a lot of customers with a big public impact,” Leong said. By comparison, “when Amazon goes down, the world notices,” she said.

Measures have already been taken to prevent a recurrence, review existing systems and add new safeguards, according to a message on the cloud status website from Benjamin Treynor Sloss, a Google executive. All impacted customers will receive Google Compute Engine and VPN service credits of 10% and 25% of their monthly charges, respectively. Google’s service-level agreement calls for at least 99.95% monthly uptime for Compute Engine.
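
As a back-of-the-envelope check on those figures, a 99.95% monthly uptime commitment allows roughly 21 to 22 minutes of downtime in a month; the sketch below works that out, assuming a 30-day month.

```python
# Downtime budget implied by a 99.95% monthly uptime SLA,
# assuming a 30-day month (43,200 minutes).

minutes_in_month = 30 * 24 * 60            # 43,200
sla_uptime = 0.9995
allowed_downtime = minutes_in_month * (1 - sla_uptime)

print(f"Allowed downtime: {allowed_downtime:.1f} minutes")   # ~21.6 minutes
print("Observed outage:  18 minutes")
```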

Networking failure takes down Google’s cloud

The incident was initially caused by dropped connections when inbound Compute Engine traffic was not routed correctly, because a configuration change around an unused IP block did not propagate as it should have. Services also dropped for VPNs and L3 network load balancers. Management software’s attempt to revert to the previous configuration as a failsafe triggered a previously unknown bug, which removed all IP blocks from the configuration and pushed a new, incomplete configuration.

A second bug prevented a canary step from correcting the push process, so more IP blocks began dropping. Eventually, more than 95% of inbound traffic was lost, which resulted in the 18-minute Google cloud outage that was finally corrected when engineers reverted to the most recent configuration change.
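
The canary-gated rollout idea referred to in the postmortem can be sketched generically as follows. This is an illustration of the general pattern, not Google's tooling; the apply, healthy and rollback hooks are hypothetical.

```python
# Generic sketch of a canary-gated configuration push: apply the change to one
# site, verify traffic still routes, and only then propagate everywhere else.
# Hook functions are hypothetical placeholders, not any real system's API.

from typing import Callable, Dict, List

def push_config(config: Dict, sites: List[str],
                apply: Callable, healthy: Callable, rollback: Callable) -> bool:
    canary, rest = sites[0], sites[1:]
    apply(canary, config)
    if not healthy(canary):            # e.g. probe that inbound traffic is routed
        rollback(canary)               # revert to the last known-good config
        return False
    for site in rest:                  # canary passed: propagate to remaining sites
        apply(site, config)
        if not healthy(site):
            rollback(site)
            return False
    return True
```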

The outage didn’t affect Google App Engine, Google Cloud Storage, internal connections between Compute Engine services and VMs, outbound internet traffic, or HTTP and HTTPS load balancers.

SearchCloudComputing reached out to a dozen Google cloud customers to see how the outage may have affected them. Several high-profile users who rely heavily on its resources declined to comment or did not respond, while some smaller users said the outage had minimal impact because of how they use Google’s cloud.

Vendasta Technologies, which builds sales and marketing software for media companies, didn’t even notice the Google cloud outage. Vendasta has built-in retry mechanisms and most system usage for the company based in Saskatoon, Sask., happens during normal business hours, said Dale Hopkins, chief architect. In addition, most of Vendasta’s front-end traffic is served through App Engine.

In the five years Vendasta has been using Google’s cloud products, on only one occasion did an outage reach the point where the company had to call customers about it. That high uptime means the company doesn’t spend a lot of time worrying about outages and isn’t too concerned about this latest incident.

“If it’s down, it sucks and it’s a hard thing to explain to customers, but it happens so infrequently that we don’t consider it to be one of our top priorities,” Hopkins said.

For less risk-tolerant enterprises, reticence in trusting the cloud would be more understandable, but most operations teams aren’t able to achieve the level of uptime Google promises inside their own data center, Hopkins said.

Vendasta uses multiple clouds for specific services because they’re cheaper or better, but it hasn’t considered using another cloud platform for redundancy because of the cost and skill sets required to do so, as well as the limitations that come with not being able to take advantage of some of the specific platform optimizations.

All public cloud platforms fail, and it appears Google has learned a lesson on network configuration change testing, said Dave Bartoletti, principal analyst at Forrester Research, in Cambridge, Mass. But this was particularly unfortunate timing, on the heels of last month’s coming-out party for the new enterprise-focused management team at Google Cloud.

“GCP is just now beginning to win over enterprise customers, and while these big firms will certainly love the low-cost approach at the heart of GCP, reliability will matter more in the long run,” Bartoletti said.

 

Source: searchcloudcomputing.techtarget.com- Google cloud outage highlights more than just networking failure

Hilton hotel chain powers robot concierge with IBM Watson

The Hilton Worldwide hotel chain is trialling a robotic concierge service – powered by IBM’s Watson cognitive computing technology – to assist front desk staff in its hotels.

Named after its founder Conrad Hilton, “Connie” – who is being tested out at a hotel in McLean, Virginia, near Washington DC – uses domain knowledge from Watson and WayBlazer, a cognitive travel recommendation engine, to provide guests with information on tourist attractions, local restaurants and hotel amenities.

“We’re focused on re-imagining the entire travel experience to make it smarter, easier and more enjoyable for guests,” said Jonathan Wilson, vice-president of product innovation and brand services at Hilton Worldwide.

“By tapping into innovative partners like IBM Watson, we’re wowing our guests in the most unpredictable ways.”

Connie will work alongside Hilton staff to assist with guest requests and personalise guests’ stay. The robot uses a number of Watson application programming interfaces (APIs) – including dialogue, speech-to-text, text-to-speech and natural language classification – to greet arrivals and answer their questions by tapping into WayBlazer’s database of travel domain knowledge.

Hilton hopes Connie will be able to learn, adapt and improve its recommendations over time as more guests interact with it. Staff, meanwhile, will be able to access a log of what guests ask it, and the answers they receive, to help them improve the service they provide.

Read more about Watson and the internet of things (IoT)

  • IBM cuts the ribbon on a global headquarters for its Watson IoT business unit in Munich.
  • At the IBM PartnerWorld Leadership Conference, distributor Avnet and IBM announced partnerships focused on the IoT and security as a service.
  • Finnish lift manufacturer Kone is using IBM Watson as part of its IoT strategy after a multi-year agreement with the IT giant.

Robots promote brand loyalty

Rob High, vice-president and CTO of IBM Watson, said the test represented an important shift in human-machine interaction.

“Watson helps Connie understand and respond naturally to the needs and interests of Hilton’s guests – which is an experience that’s particularly powerful in a hospitality setting, where it can lead to deeper guest engagement,” he said.

WayBlazer CEO Felix Laboy added: “We believe providing personalised and relevant insights and recommendations – specifically through a new form factor such as a robot – can transform brand engagement and loyalty at the Hilton.”

Hilton has been testing out a number of other digital innovations in recent years to enhance its guest experience, including digital check-in services and keys, and partnerships with taxi service Uber.

Source: computerweekly.com-Hilton hotel chain powers robot concierge with IBM Watson

Automation, Robots and Autonomics – Know Your Terminology – Thoughtonomy

Some key concepts relating to Robotic Process Automation and Artificial Intelligence.

Automation Software

Automation is the use of machinery, control systems or technology to manage the execution of activity which would otherwise require human input and/or intervention. While it is arguable, given this classification, that all computer software is delivering automation, the term automation software typically refers to solutions designed specifically for the purpose of automating a defined task, activity or process. In its simplest form automation includes techniques such as macro-routines and scripting, while in other cases automation software is designed to automate a highly specific task, activity or function. The most advanced and flexible manifestations of automation software will include those which deliver the orchestration and execution of a variety of activities and the management of their relationships and inter-dependencies.
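
As a trivial example of the scripting end of that spectrum, the snippet below automates one narrowly defined task (archiving old log files); the paths and retention period are illustrative only.

```python
# Trivial example of scripted automation of a single defined task:
# archive log files older than 30 days. Paths are illustrative.

import shutil
import time
from pathlib import Path

ARCHIVE = Path("/var/archive")
ARCHIVE.mkdir(parents=True, exist_ok=True)

cutoff = time.time() - 30 * 24 * 3600   # 30 days ago, in seconds

for log in Path("/var/log/myapp").glob("*.log"):
    if log.stat().st_mtime < cutoff:
        shutil.move(str(log), str(ARCHIVE / log.name))
```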

Robotic Process Automation

Robotic Process Automation (RPA) refers to an approach to removing human activity whereby automation software carries out tasks and activities in other applications and systems by interacting with them in the same way as a human – hence the use of the term “Robotic”. Typically this involves the use of automation routines or “software robots” interacting with these applications via an application GUI (graphical user interface) or CLI (command line interface), though it can also include other methods of “driving” an application, such as calling web services or scripted routines.

The key difference between RPA and other automation methods is that due to the approach of emulating humans in utilising other applications via a standard interface, the software can be deployed without modification to the applications or systems being automated.
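
A minimal illustration of that "emulate the human at the interface" idea is sketched below using the open-source pyautogui library. The screen coordinates and field values are placeholders for a real application's layout, and production RPA tools add far more robust element identification, error handling and orchestration.

```python
# Minimal illustration of GUI-level "software robot" automation using the
# open-source pyautogui library: the robot drives an existing application
# through its interface, with no changes to the application being automated.
# Coordinates and field values are placeholders for a real screen layout.

import time
import pyautogui

pyautogui.click(420, 310)                    # focus the "Customer ID" field
pyautogui.write("C-104233", interval=0.05)   # type as a human operator would
pyautogui.press("tab")
pyautogui.write("Priority renewal")
pyautogui.hotkey("ctrl", "s")                # save the record
time.sleep(1)                                # allow the target application to respond
```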

Desktop Automation

Desktop Automation is a form of RPA software deployed locally on a user’s desktop or laptop machine whereby the software is initiated on demand or against a schedule to carry out an automated action. The software executes tasks by emulating the human user, and by having the software execute the “grunt work” within a task or process, operators can manage a significantly increased workload. Desktop Automation is simple to deploy at relatively low cost, and can be a very simple way to deliver efficiency improvements where human workers can call automated routines on demand. However, given the distributed nature of a desktop RPA deployment, attention should be given to the implications on security and management control, on the change and release management of automated processes, and the auditability and reporting of activities.

Enterprise RPA

Unlike desktop automation, Enterprise RPA is not installed locally in a user’s environment. Instead, virtual environments are created where an automated process is executed by a pseudo-user (the “robot”) emulating the human worker, in a completely hands-off fashion. The virtualised user environment is typically implemented in a datacentre environment, with consideration given to factors such as availability, security, management and control that are not addressed in desktop automation. Typical deployments are for business users with high-volume, transaction-based activities and processes, and execution is triggered against a defined schedule or through existing task queues and case management applications, rather than initiated locally by an operator. An Enterprise RPA deployment is generally configured to operate 24×7, as it does not rely on the presence of a user or their desktop environment in order to execute.
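
A schematic of that unattended, queue-driven execution model might look like the following; the work queue and process_case stand in for a real case management system and are assumptions, not any vendor's product.

```python
# Schematic of unattended ("enterprise") RPA: a pseudo-user process polls a
# work queue around the clock and executes each case hands-off. The queue and
# process_case() are hypothetical stand-ins for a real case-management system.

import queue

work_queue: "queue.Queue[dict]" = queue.Queue()

def process_case(case: dict) -> None:
    # Stand-in for an automated end-to-end transaction (data entry, lookups, ...).
    print(f"Processed case {case['id']}")

def robot_worker() -> None:
    while True:                          # runs 24x7; no human desktop session needed
        try:
            case = work_queue.get(timeout=30)
        except queue.Empty:
            continue                     # idle until new work arrives
        process_case(case)
        work_queue.task_done()
```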

Intelligent Process Automation

Intelligent Process Automation (IPA) is becoming an increasingly common phrase, and attempts to draw a distinction between the more static, rules based approaches of a typical RPA use case, and the use of similar approaches coupled with a level of machine learning or artificial intelligence (see below), such that the automation is operating in a more dynamic environment where multiple factors, data sources and contextual differences might define the action to be taken.

As with much of the current terminology, there is no clear definition of when a process is “robotic” versus “intelligent” and some implementations of RPA technology are in fact using multiple, complex and dynamic sources of information to define the execution of activities. (See Adaptive Automation)

Software Robot

There is no standard definition of what entity constitutes a “robot”. Some providers use the term to describe each time an automated process runs, others refer to each unique automated procedure or scripted action as an individual ‘bot, some consider each desktop agent a robot, and yet more (such as Enterprise RPA vendors) use the same term to describe a runtime resource capable of operating many different processes as a pseudo FTE – the software equivalent of a human operator and their computer virtualised as a single entity.

While there are arguments for each classification, and standardisation of taxonomy is unlikely, the differences can lead to some considerable confusion in pricing and scoping against RPA requirements. Prospective buyers should avoid inaccurate comparisons on a “per-robot” basis and instead seek to relate the costs of an RPA solution to a business case based on the scope of automation possible and the scale or volume of work a solution can deliver.

Autonomics

Autonomics in IT refers to a self-managing computing model named after, and patterned on, the human body’s autonomic nervous system. An autonomic computing system is designed to control the functioning of applications and systems without input from the user, in the same way that the autonomic nervous system regulates body systems without conscious input from the individual. The goal of autonomic computing is to create systems that run themselves, capable of high-level functioning while keeping the system’s complexity invisible to the user.

The term is often used to describe the deployment of automation into IT management scenarios, whereby the automated management and resolution of conditions, events and failures, and/or automated response to demand or context based conditions (e.g. auto-regulating performance by scaling and adapting available resources based on demand) effectively delivers self-managing – or autonomic – systems capable of operating and adapting to circumstances independently of human input.
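
A toy version of the demand-driven self-regulation described above could look like this; the thresholds and the scale-by-one-instance policy are arbitrary assumptions.

```python
# Toy illustration of autonomic, demand-driven scaling: the system adjusts its
# own capacity from observed load without operator input. Thresholds and the
# one-instance-at-a-time policy are illustrative assumptions.

def autoscale(current_instances: int, avg_cpu_percent: float,
              low: float = 30.0, high: float = 75.0) -> int:
    """Return the new instance count for the observed average CPU load."""
    if avg_cpu_percent > high:
        return current_instances + 1          # scale out under heavy demand
    if avg_cpu_percent < low and current_instances > 1:
        return current_instances - 1          # scale in when idle
    return current_instances                  # steady state

print(autoscale(4, 82.0))   # -> 5
print(autoscale(4, 22.0))   # -> 3
```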

Heuristics

Heuristics is the application of experience-derived knowledge to a problem or task. Using a basic form of machine learning, heuristics will use historical data (experience) to inform an action or activity. One example of heuristic software is a mail quarantine application, which screens and filters out messages likely to contain a computer virus or other undesirable content, based on data from previous activity. Heuristics can be very effective at filtering or processing information based on probability as defined by previous experience, and by definition should become increasingly accurate over time, though it is unlikely to be 100% accurate and can result in “false positives”, such as legitimate messages being incorrectly filtered.
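
A toy version of the mail-quarantine example might score an incoming message by how often its words have appeared in previously quarantined mail; the corpus and the decision threshold are illustrative.

```python
# Toy version of the mail-quarantine example: score an incoming message by how
# often its words appeared in previously quarantined mail (experience-derived
# knowledge). The history and any quarantine threshold are illustrative.

from collections import Counter

quarantined_history = ["win a free prize now", "free money click now"]
bad_tokens = Counter(word for msg in quarantined_history for word in msg.split())

def suspicion_score(message: str) -> float:
    words = message.lower().split()
    return sum(bad_tokens[w] for w in words) / max(len(words), 1)

print(suspicion_score("claim your free prize now"))     # high  -> quarantine
print(suspicion_score("minutes of the board meeting"))  # low   -> deliver
```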

Adaptive Automation

The term Adaptive Automation is used to describe the use of Heuristics in an automated process such that the automation routine or process will be defined based on previous experiences and executions. Examples of adaptive automation are event management processes in system and application support, or automated security management systems which, over time, learn an ever more accurate pattern of “normal” behaviour and will deliver a different automated response based on deviations from that normal pattern.

Unlike AI (see Artificial Intelligence), Adaptive and Heuristic automation remains rules based and, within those rules, actions and outcomes can be modeled and/or predicted.
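
A small sketch of that "learn the normal pattern, respond to deviations" behaviour, using a rolling baseline of a hypothetical metric (failed logins per minute), might look like this; the metric, thresholds and responses are assumptions.

```python
# Small sketch of adaptive automation in the event-management sense: the
# baseline of "normal" is learned from past observations, and the automated
# response depends on how far the latest reading deviates from it.
# The metric (failed logins per minute) and responses are assumptions.

import statistics
from typing import List

def respond(history: List[float], latest: float) -> str:
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history) or 1.0
    deviation = (latest - mean) / stdev
    if deviation > 3:
        return "lock accounts and page security"   # far outside learned normal
    if deviation > 1.5:
        return "raise a ticket for review"
    return "no action"                             # within the learned pattern

print(respond([4, 5, 6, 5, 4], 40))   # -> lock accounts and page security
```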

Artificial Intelligence

In its pure sense, artificial intelligence (AI) refers to systems which are self-aware, and capable of rational thought. However in recent years, the term has been used more broadly to encapsulate the simulation of human intelligence processes by machines, especially IT systems. These processes include learning (the acquisition of information and rules for using that information), reasoning (using the rules to reach approximate or definite conclusions), and self-correction (identifying that a course of action is proving or likely to prove unsuccessful and modifying that course). Particular applications of AI include expert systems, speech recognition, and machine vision.

Considerations in deploying truly artificially intelligent systems to automate work include the potential inability of a user to completely and accurately predict how the system will respond to a situation or given set of circumstances.

Machine Learning

Machine learning is a type of artificial intelligence (AI) that provides computers with the ability to learn without being explicitly programmed. This area of AI focuses on the development of computer programs that can teach themselves to adapt and change when exposed to new data. Unlike heuristics, which uses historical data to inform decisions, machine learning can include experimentation – testing various approaches via trial and error in order to “learn” what will deliver a successful outcome or the timeliest solution to a problem.
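
The "learning by experimentation" idea can be illustrated with a simple epsilon-greedy loop that tries alternative remediation approaches and gradually favours the one that succeeds most often; the approaches and their simulated success rates are made up for the example.

```python
# Compact example of learning by trial and error: an epsilon-greedy agent tries
# alternative approaches, keeps a running success estimate for each, and
# increasingly favours the one that works best. Rewards here are simulated.

import random

approaches = ["retry", "reroute", "escalate"]
estimates = {a: 0.0 for a in approaches}
counts = {a: 0 for a in approaches}
true_success = {"retry": 0.3, "reroute": 0.7, "escalate": 0.5}  # unknown to the agent

for step in range(1000):
    if random.random() < 0.1:                    # explore occasionally
        choice = random.choice(approaches)
    else:                                        # otherwise exploit the best estimate
        choice = max(estimates, key=estimates.get)
    reward = 1.0 if random.random() < true_success[choice] else 0.0
    counts[choice] += 1
    estimates[choice] += (reward - estimates[choice]) / counts[choice]

print(estimates)   # "reroute" should end up with the highest estimated success
```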

Virtual Workforce

The Thoughtonomy Virtual Workforce® is an Enterprise automation solution encompassing many of the principles covered in this overview. It is an as-a-service software solution which provides a platform for clients to automate a wide variety of IT and business support processes and activities. It is focused on delivering high levels of resilience, security and scalability and a commercial approach which allows users to relate the cost of the solution directly to the benefits being realised. The Virtual Workforce utilises RPA approaches, adding advanced load balancing, workload management, multi-tasking and auto-scaling algorithms to provide a highly flexible platform which can deliver rapid and non-disruptive automation.

Its integrated web portal provides a custom interface to allow users to interact with automated processes and vice versa, providing a single platform for both back-office and front-office or self-service automation. Thus a single solution can offer both the zero-touch automation more typically targeted by RPA and the self-service automation more usually delivered with desktop automation, but with the security and management controls not possible with distributed desktop alternatives.

Typical deployments are into service providers, IT and business process outsourcers and Enterprise IT functions for use against a wide range of both high-volume/low-complexity and low-volume/high-complexity IT and business support processes.

 

Source: thoughtonomy.com-Automation, Robots and Autonomics – Know Your Terminology – Thoughtonomy

Five things to know to land a cloud architect job

 

Demand for cloud architects is growing in the enterprise, but competition for jobs is tough. Here are five questions to help you ace a cloud architect interview.

Cloud computing is becoming a key way for businesses to deploy new applications, which is rapidly changing the IT job market. And demand for cloud architects is especially high.

In fact, roughly 11,100 cloud architect jobs are currently listed on career website Indeed.com, with salaries ranging from $75,000 to more than $150,000 annually. But before landing that dream cloud architect job, you have to wow potential employers during the interview process.

Here are five key questions you can expect an employer to ask during a cloud architect interview, along with advice for how to respond.

1. How do a cloud architect’s responsibilities differ from those of other data center professionals?

A cloud architect focuses more on the meta, or big-picture, view of the data center and less on an individual server’s configuration and throughput. For instance, cloud architects examine how an organization’s central authentication system ensures that only authorized employees access system resources. By comparison, a system analyst is tasked with tying the authentication system to a specific application, such as Salesforce.com.

2. Where do you see technology in one year? How about three years?

Rather than get caught up in the daily grind of data center and cloud operations, cloud architects must think ahead. They need to be blue-sky, big-picture thinkers. Cloud architects determine how emerging technologies, like biometrics and the Internet of Things, will impact enterprise systems and cloud infrastructure. They also need to craft a roadmap that shows where the business’ systems are today and where they need to be in a few years.

3. How do containers fit into a company’s cloud architecture?

Businesses are constantly trying to make software more portable, and containers are the latest variation on that theme — which makes them a critical technology for cloud architects to know. First, cloud architects must understand the capabilities containers offer. Containers work at a layer above the OS and virtualization software. Theoretically, they offer more portability, but they pay a price for that easy movement: decreased security. The software running in the containers does not include the inherent security checks found at the OS or virtualization layer. Consequently, running containers within a firm’s data center and behind its security perimeter makes sense, while putting the software onto a public cloud is a bit risky.

4. What standard interfaces should a company use?

OpenStack has emerged as a key platform, enabling companies to tie different cloud applications together. Businesses primarily use the free, open source software as an IaaS platform. OpenStack, which is available under an Apache license, consists of a group of interrelated components that control pools of processing, storage and networking resources.
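
As a minimal illustration of driving those pooled resources programmatically, the sketch below uses the openstacksdk Python library to list compute and network resources; it assumes a cloud named "mycloud" is already configured in clouds.yaml, and resource names will differ per deployment.

```python
# Minimal example of querying OpenStack's interrelated services from Python via
# the openstacksdk library. Assumes a cloud named "mycloud" is configured in
# clouds.yaml; this only lists resources, it does not change anything.

import openstack

conn = openstack.connect(cloud="mycloud")

for server in conn.compute.servers():       # compute pool (Nova)
    print("server :", server.name, server.status)

for network in conn.network.networks():     # networking pool (Neutron)
    print("network:", network.name)
```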

5. What cloud architect certifications do you have or are you pursuing?

Cloud architect certification programs come from two different sources. Independent training and certification companies, like Arcitura Education, CompTIA and EXIN, offer vendor-neutral certifications. In addition, the industry’s biggest vendors, such as EMC, Hewlett-Packard, IBM and Microsoft, have cloud architect certifications geared toward their particular products.

Cloud architects are becoming more popular. By knowing answers to key questions, IT pros can position themselves to land a high-paying cloud architect job.

Source: Techtarget-Five things to know to land a cloud architect job by Paul Korzeniowski

Top 5 predictions for project management in 2016

As a discipline, I see project management as being fairly static. Still, there are changes and movements happening. Here are my top five for 2016.

What’s going to happen with project management in 2016? Since project management as a discipline is fairly static, I liken this concept of predicting changes in project management to a conversation two fictional characters had on one of my favorite shows, “The Big Bang Theory,” a few years ago. Leonard Hofstadter is an experimental physicist and his future girlfriend and wife, Penny, is asking questions about his job while they are out to dinner together.

Penny: “So, what’s new in the world of physics?”

Leonard: “Nothing.”

Penny: “Really, nothing?”

Leonard: “Well, with the exception of string theory, not much has happened since the 1930’s, and you can’t prove string theory, at best you can say “hey, look, my idea has an internal logical consistency.”

Penny: “Ah. Well I’m sure things will pick up.”

I think of project management changes when I think of this conversation about experimental physics. Still, I believe there are slow changes happening and some shifts in focus and management about to happen.
Here are my top five predictions for project management in 2016.

1. Emergence of CPOs. I think 2016 is the year that the CPO position – or Chief Project Officer – begins to get real traction. In the late 1980s, many technical experts and business leaders were suggesting that Chief Information Officers (CIOs) would be the next critical C-level position in organizations. It happened. We’ve also seen the emergence of CFOs in the last decade and now CMOs (Chief Marketing Officers). My prediction for the next big C-level position to emerge is the CPO. It may mean the end of PMO directors and/or centralized project management offices (PMOs)…we will have to see how that plays out.

2. Decrease in PMOs. Project management offices are still failing or at least not serving many organizations very well. Sometimes it’s due to a lack of strong leadership at the top of the PMO, sometimes it’s putting a great project manager in charge who ends up spending too much of their time managing projects rather than managing the PMO, and sometimes it’s just a disorganized mess led by whatever resource manager needs a position of responsibility this week. Not enough are formed around the principles of strong leadership, executive buy-in, and established practices, policies and templates. Executives in the organization can only stand so many restarts before they move in the direction of a decentralized project management infrastructure.

3. Shift away from PM certification focus. While many organizations and job postings will still list certification as a “nice to have” or “preferred”, fewer will focus on that aspect of a candidate’s background or experience. In 2015, I consulted with two organizations that were looking for an experienced project manager; the consulting search listed several key responsibilities and qualifications, and PMP certification was listed as “preferred.” However, it was never even discussed during any of the proceedings leading up to the engagements. I’m seeing it listed, but I’m not hearing it discussed.

4. Decentralized project management in all but the largest of organizations. I realize this may seem to contradict the “emergence of CPOs” point that I discussed above, but not really. I think we will still see the CPO position start to mean something in the PM community, but we will also see the increased use of project managers and consultants throughout the organization in individual business units and departments, or simply more of an independent pool of professionals.

5. Increasing reliance on remote project managers and consultants – growth of virtual teams. It only makes sense. Professional services organizations that base most of their business on seeking out and providing project solutions are moving more and more to geographically dispersed teams, project managers and teams that may never meet face to face, and offshore development teams that provide great development services at a fraction of the price of co-located project teams. Let’s face it: project teams rarely need to sit at the same table, and allowing your PMs and project teams to work remotely means you can always find and obtain – at least on a contractual basis – the best of the best, without making them relocate just for the privilege of taking up space at your company headquarters.

Summary / call for input

I see project management, in general, as being fairly static. It is important – often critical – in organizations that rely on steady and strong project management to bring home profitable and successful customer implementations. But still, fairly static. There are always new project management software tools and templates available for organizations looking for a change or improvement, but many offer fairly similar capabilities.

But for 2016, I’m going to predict the five things I’ve mentioned above. How about our readers – what changes do you see coming for project management and PM infrastructures or methodologies in the coming year? Please share and let’s discuss.

Source: CIO-Top 5 predictions for project management in 2016 By Brad Egeland

CIOs: Acquire new skills for technology management

CIOs should be seen as enablers and leaders of change in their business. In this buyer’s guide, Computer Weekly looks at how the age of the customer requires IT leaders to focus on both the business technology and IT agendas; why IT leaders need to use blogging and social media to raise their profile and build influence in their organisations; and how IT leaders can roll out projects ever more quickly without running unacceptable risks.

In this 14-page buyer’s guide, Computer Weekly explores:

  • Ways of acquiring new technology management skills
  • Lessons and literature for CIOs
  • How to roll out IT projects faster, without running unacceptable risks
  • Case Study: How Atkins ramps up IT speed

Download the guide from: Computerweekly-Acquire new skills for technology management