Which Team Member Are You? Identifying Your Project Management Style

As with many tasks in life, most companies, organizations, and individuals begin laying the groundwork for a project's success well before the work itself starts.

Selecting an effective framework for your project is essential in helping it to move smoothly—yet many struggle when it comes to choosing the most suitable management methodology.

Each manager has their own approach when it comes to organizing tasks and setting the wheels in motion to achieve a particular goal. Knowing exactly which format you are going to follow will help you to interact efficiently with other members of your team—thereby delivering better results overall.

So how do you know where your project management style falls among the more common options such as waterfall, scrum, agile, and lean? You start by getting to know each concept better.

The Waterfall Project Management Method
First and foremost, the waterfall project management method is the most traditional option and requires extensive and detailed planning at the start of a project. All of your future steps need to be mapped out so that you can move on to the next stage only after successfully completing the previous one.

The name “waterfall” refers to the fact that each phase of the project takes place in sequence—ensuring that progress flows downwards towards the final goal.

Winston W. Royce established the first formal description of the waterfall method, and since then numerous well-known and highly commended waterfall methodologies have been used throughout various industries. As with most business approaches, there are both positive and negative aspects to consider.

On the positive side, the waterfall development process is well documented, since the method places a strong emphasis on documentation during planning. What's more, customers and developers can agree on the desired product early in the development lifecycle, which makes design more straightforward.

However, the waterfall approach relies on long-term planning and requires a great deal of time to complete effectively. That is one of the reasons people began turning to agile project management, especially for projects dealing with non-physical deliverables and services such as code, design, and copywriting.

The Agile Project Management Method
Unlike the waterfall system for project management, the agile method is a short-term value-driven approach that aims to help project managers deliver high-quality, high-priority work as quickly as possible.

The agile approach to project management is a fast and flexible option based on principles of adaptability, collaboration, and continuous improvement. Far removed from the ordered stages of the waterfall approach, agile project management is organized into quick, iterative release cycles, which often means it is less suitable for projects with strictly defined requirements.

With agile project management, your delivery process should keep improving, increasing your value to consumers and clients. It allows for quick course correction based on stakeholder feedback and for adaptation throughout the process, even in the later stages of development.

It can be the perfect option for teams that need to work efficiently and creatively, as it includes collaboration and engagement from all members of the team at all times.

Perhaps the most significant benefit of this methodology is that it treats cost and time as primary constraints. Continuous adaptation and rapid feedback make up a significant portion of the team's schedule, ensuring demonstrable progress and high-quality output. The net result is often a working product delivered in a matter of weeks instead of months, and fewer costly surprises at the end of the project.

The Scrum Project Management Method
The scrum method for project management is another fast-paced option and part of the agile movement. Scrum uses three primary roles: the scrum master, the team, and the product owner.

The process works as follows: the product owner writes up a backlog of project requirements ordered by priority. From there, the team holds sprint planning and begins work on the highest-priority backlog items against a fixed deadline.

During the sprint, daily meetings let the scrum master keep the team on track and monitor progress, and a review of the completed sprint takes place before the team begins work on the next one.
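
For readers who want a more concrete picture of that flow, here is a minimal Python sketch of a prioritized backlog and a sprint-planning step. The backlog items, story-point estimates, and capacity figure are invented for illustration; they are not part of any formal scrum definition.

```python
from dataclasses import dataclass

@dataclass
class BacklogItem:
    title: str
    priority: int   # lower number = higher priority
    points: int     # estimated effort in story points

def plan_sprint(backlog, capacity):
    """Pick the highest-priority items that fit within the sprint's capacity."""
    selected, remaining = [], capacity
    for item in sorted(backlog, key=lambda i: i.priority):
        if item.points <= remaining:
            selected.append(item)
            remaining -= item.points
    return selected

backlog = [
    BacklogItem("User login", priority=1, points=5),
    BacklogItem("Password reset", priority=2, points=3),
    BacklogItem("Admin dashboard", priority=3, points=8),
]

sprint = plan_sprint(backlog, capacity=10)
print([item.title for item in sprint])  # the highest-priority items that fit
```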

The advantage of the scrum sprint process is that it produces a saleable product even while the project is still underway. Delivery works on an incremental basis to shorten the time between creation and market, resulting in higher revenue.

Considered an “inspect and adapt” framework for project development, the scrum methodology can apply to a number of different people and industries—particularly those managing large to-do lists and complex problems.

The Lean Project Management Method
Finally, the lean method for project management is particularly useful within the IT industry. Lean concepts focus on eliminating waste, and in the context of project management they aim to deliver more value with fewer resources.

The term “waste” in lean project management can mean extra labor, time, and materials that don’t provide any extra value to the project itself. For example, this could refer to unproductive status meetings or lengthy documentation for a project that is not used. The steps utilized in Lean project management are as follows:

Identify value by breaking down the project and examining the elements within.
Map plans essential to the success of the project.
Break out small manageable tasks and measure productivity along the way to enhance the flow of the project. This will help you to assign tasks based on the strengths of different staff members.
Ensure that all participants in the project agree to a list of desired goals.
Empower your teams to seek the best possible results and promote improvement through communication.
Finding The Right Project Management Style
The ideal management method for your specific circumstances depends on your team, your project, and your goals. Once you have selected a planning style, make sure you have project management software at hand so that you and your team stay as organized as possible.

What are your thoughts on the different project management methods available? Have you tried multiple different solutions and found one that works best for you, or do you stick to a single option?

Source: Business.com-Which Team Member Are You? Identifying Your Project Management Style by Aaron Continelli

What’s really required to make application modernization work

Government agencies with aging application inventories must modernize in order to improve agility and efficiency, but application modernization goes well beyond rewriting code. It involves adopting several organizational processes that can play key roles in helping agencies achieve their agility and efficiency goals.

Here are eight principles agency developers should embrace as they undertake application modernization efforts:

1. Focus on developing new capabilities

Support innovation efforts by prioritizing the development of new capabilities. Avoid redesigning or recoding an application without implementing new features. Exceptions can be made for software that is no longer supported, fails to meet requirements or has become too costly. In general, however, focus on adding new capabilities that will propel the agency forward.

2. Consider prior investments

Replacing software components can be a costly undertaking. Evaluate the cost of old and new components relative to their respective features and consider any potential cost or schedule impacts to re-establishing integration with other systems.

3. Recognize there will be future change

When implementing software changes, understand why the old software is being replaced. This insight can help in developing new components that are more adaptable to changes in requirements for internal functions and external interfaces.

4. Do not promulgate vendor-specific APIs

Implement standard application programming interfaces rather than enabling applications to use a vendor API directly. Hiding the vendor-specific details can allow for easier replacement. Think of a mobile phone that works on only one carrier’s network, preventing users from easily switching between cellular providers. Just as it’s best to choose an unlocked phone for greater portability, it’s better to select APIs that work across vendors for maximum customer choice.
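
As a rough illustration of this principle, the sketch below hides a hypothetical vendor SDK behind a small, vendor-neutral interface. The AcmeCloudStore class and its upload_blob/download_blob calls are invented stand-ins for a vendor client, not a real SDK.

```python
from abc import ABC, abstractmethod

class ObjectStore(ABC):
    """Vendor-neutral interface that the rest of the application codes against."""

    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...

    @abstractmethod
    def get(self, key: str) -> bytes: ...

class AcmeCloudStore(ObjectStore):
    """Adapter that keeps the hypothetical vendor SDK in one place."""

    def __init__(self, acme_client):
        self._client = acme_client  # invented vendor-specific client object

    def put(self, key: str, data: bytes) -> None:
        self._client.upload_blob(bucket="agency-data", name=key, payload=data)

    def get(self, key: str) -> bytes:
        return self._client.download_blob(bucket="agency-data", name=key)

# Swapping vendors then means writing one new adapter,
# not touching every caller that depends on ObjectStore.
```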

5. Use microservices rather than a “big bang” approach

Massive changes tend to be more disruptive to users and make it harder to determine the root cause of issues. Incremental changes are preferable, as they enable “quick wins,” a controlled introduction to end users and more agility when confronted with potential project reprioritization. So consider a microservices architecture where appropriate. These are collections of small services that communicate using lightweight mechanisms and can be deployed, scaled, upgraded or replaced independently. They make application management far simpler and less disruptive.
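
A minimal sketch of what one such small service can look like, assuming nothing beyond the Python standard library; the asset data, route, and port are invented for illustration.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# A deliberately tiny "asset lookup" microservice: one narrow capability,
# exposed over a lightweight mechanism (HTTP + JSON), deployable on its own.
ASSETS = {"A-100": {"id": "A-100", "status": "in service"}}

class AssetHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        asset = ASSETS.get(self.path.strip("/"))
        body = json.dumps(asset or {"error": "not found"}).encode()
        self.send_response(200 if asset else 404)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # Each microservice runs, scales, and is upgraded or replaced independently.
    HTTPServer(("localhost", 8080), AssetHandler).serve_forever()
```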

6. Support multiple API versions concurrently

Some of the most difficult aspects of modern interconnected application architectures are support for heterogeneous environments and change management. To address these issues, use technologies that enable access through different programming languages, transport over different protocols and have the ability to encode data using different formats. This broadens the value to external users and helps prepare the agency for inevitable changes to internal software.
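
One way to picture concurrent version support, sketched here without any particular framework: handlers for each API version are registered side by side and requests are dispatched by the version the caller asks for. The handlers and payloads are invented for illustration.

```python
from typing import Callable, Dict

# Handlers for each supported API version live side by side.
def get_asset_v1(asset_id: str) -> dict:
    return {"id": asset_id, "location": "Bldg 4"}

def get_asset_v2(asset_id: str) -> dict:
    # v2 adds a structured location without breaking v1 callers.
    return {"id": asset_id, "location": {"building": "4", "room": "112"}}

HANDLERS: Dict[str, Callable[[str], dict]] = {
    "v1": get_asset_v1,
    "v2": get_asset_v2,
}

def handle_request(version: str, asset_id: str) -> dict:
    handler = HANDLERS.get(version)
    if handler is None:
        raise ValueError(f"Unsupported API version: {version}")
    return handler(asset_id)

print(handle_request("v1", "A-100"))  # old clients keep working
print(handle_request("v2", "A-100"))  # new clients get the richer payload
```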

7. Modernize development processes to improve software quality

Introduce agile development practices such as Continuous Integration and Delivery and strengthen collaboration within an agency by promoting a DevOps culture. Both of these approaches can help increase flexibility and collaboration and allow for more rapid response to changing needs.

8. Use integration middleware for application modernization

Integration middleware connects applications, data and devices to create efficient and agile information systems. It also allows for rapid refactoring to ease the introduction of new capabilities while preserving prior investment, loose coupling to reduce vendor dependency and agile configuration to accommodate future change. Use well-known Enterprise Integration Patterns, along with prebuilt technology connectors for the most popular protocols and APIs, to deliver solutions that meet emerging agency requirements more efficiently and reliably.

Consider a military logistics application that must be modernized to handle an upgrade to a dependent enterprise resource planning system and new requirements to support asset tracking with radio frequency ID technology. Integration middleware and a dynamic router pattern can be used to easily manage the switchover to the new system. Similarly, out-of-the-box connectors for messaging protocols commonly used by RFID vendors can help save development effort. All of this can result in cost savings, faster introduction of new capabilities and more time for developers to focus on additional requirements.
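
As a hedged sketch of the dynamic-router idea in this logistics example, the snippet below routes each message to either a legacy or a new ERP endpoint based on a runtime rule, so a switchover can proceed depot by depot. The endpoints, depot names, and message fields are invented; real integration middleware expresses this pattern as configuration rather than hand-written code.

```python
# Hypothetical endpoints standing in for the legacy and modernized ERP systems.
def send_to_legacy_erp(message: dict) -> None:
    print("legacy ERP <-", message)

def send_to_new_erp(message: dict) -> None:
    print("new ERP    <-", message)

# Dynamic router: the routing decision is data-driven and can change at runtime.
MIGRATED_DEPOTS = {"DEPOT-7", "DEPOT-9"}  # depots already cut over to the new system

def route(message: dict) -> None:
    target = send_to_new_erp if message["depot"] in MIGRATED_DEPOTS else send_to_legacy_erp
    target(message)

route({"depot": "DEPOT-7", "rfid_tag": "0xA1B2", "event": "asset-scanned"})
route({"depot": "DEPOT-3", "rfid_tag": "0xC3D4", "event": "asset-scanned"})
```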

While it’s unrealistic to expect government agencies to abandon all current baselines and existing supporting systems, they must still prepare for application modernization. Understanding the eight organizational principles is key to making a successful transition from legacy systems to a more modern, agile and cost-effective IT environment.

Source: GNC.com-What’s really required to make application modernization work By Matthew Zager

How to Avoid Pitfalls in Project Management

As AV technology plays a more central role in communications, the need for competent project management has become paramount to an integrator’s success.
 At least, that’s the way Travis Deatherage, president at Linx Multimedia, sees it. “We’re now a very integral part of the construction process, and our infrastructure needs often need to be communicated early in the project because those things are happening right away, when the project kicks off,” he said. And, because AV integrates with so many different trades, someone must be charged with communicating with all of the project stakeholders throughout. “There’s a need for more project management time in a project to ensure it’s going to be successful at the end, because of that integration with all of the other trades.”
Headquartered in Denver, CO, Linx Multimedia is the full-service audiovisual design and integration division of Linx, which also specializes in structured cabling and security. The multimedia group has 90 employees on staff, seven of whom are project managers. Deatherage explains that the PMs at Linx Multimedia oversee projects from contract to closeout, both in terms of execution––getting the job done––and dollars. “They’re financially responsible for the success of the job, its profitability, how things get billed, when they get billed, and ensuring change orders get processed and managed appropriately,” he said. To streamline this process, the firm’s foremen manage the actual job sites, so project managers may focus on client communications, coordination with other trades, resource planning, product procurement, and, inevitably, solving problems. “All of those things take a lot of time, and without managing them properly, it’s very difficult to have a successful integration project.”
Christopher Maione, president of Christopher Maione Associates in Northport, NY, a firm that provides business and technical consulting and training to the AV industry, notes that these days, with many of the larger RFPs calling for AV, IT, and security services, it’s not sustainable to dedicate three PMs, one for each category. Instead, a single project manager oversees the entire job, with access to in-house experts, or lead consultants, with deeper technical knowledge in each of the three disciplines.
“The project manager is facing the client, talking to them and interpreting their different needs, but they don’t need that same level of technical expertise [as they would on smaller jobs] because the lead consultants are supporting them in that role,” Maione said.
 Each summer, Linx hosts Linx Academy, which offers employees workshops on leadership, planning, and communication. For project managers, these sessions have focused on what goes into creating estimates and writing project plans, as well as role-playing exercises to demonstrate what goes on during client meetings. “We’ve also had project planning exercises on creating, writing, and managing project plans, and then having change introduced––whether that’s the schedule, or financial or resource challenges, and then teaching them how to deal with those challenges, because jobs never go exactly as planned,” Deatherage said.
However, he emphasized that if you have a solid plan in the first place, managing change is a lot easier. “When you have that plan––that baseline––and you’ve communicated that clearly with the customer, and you’ve set expectations up front, and when that change occurs it’s a lot easier to have that conversation with the customer about the impact of that change because you had an agreed upon set of expectations up front. And that’s probably one of the most important steps in project management—to clarify the schedule and expectations early in the process.”
One issue that project managers often face is that although they may be responsible for a project’s success, they don’t necessarily have full control from the outset. “[Oftentimes] sales set up the project in such a way that it may ultimately lose money, and when a project loses money, the project manager is blamed, even though sales set up a bad job,” said Bradley Malone, PMP, partner at Navigate Management Consulting (formerly Twin Star Consulting) in Hinsdale, IL, and an InfoComm senior instructor. By “bad job,” Malone means that the sales department may not have accounted for any number of variables that could have a considerable impact on a project’s profitability. “[For example,] sales wrote up a scope of work that was more bill of materials-based versus project execution-based. They did a site survey that talked about the size and shape of the room and where the equipment was to be placed, but didn’t show the numerous obstructions in the ceiling, nor that the technicians had to go through security every time they entered the site, the loading dock was another building away, the elevator isn’t large enough to hold the large screens, and several of the rooms were only available in evenings [so prevailing wages applied], and you couldn’t make noise during the day.” Not only does this cost AV firms more labor time (and therefore, more money), the PM in a scenario like this has little hope of managing the project proactively (profitably) rather than reactively.
Resource planning is another big challenge for project managers, Malone notes. After all, AV firms usually have any number of projects taking place all at once, which means any number of PMs are vying for the same resources in the same timeframe. “What often happens is that there’s a lack of capacity planning for resources––the sales pipeline is talking in terms of revenue, but not in terms of the number of labor hours needed and when those labor hours are needed,” he illustrated. Six different salespeople may have sold six different projects with January deadlines, and yet there may be only 10 technicians available to perform the installations when in reality, the workload requires 20. “So a lot of an AV integration company’s problems are really from resource management, not project management.”
To avoid falling into this trap, Malone encourages companies not only to ask themselves: can we execute this project? But instead: can we execute all the projects we sell? “Project management is eight to 15 percent of the labor hours on a job, typically, and that needs to be based on complexity, not size,” he said. The way AV firms can hone their project management skills is to apply them to all projects, regardless of scope. “I find a lot of companies will add project management to a large job, but not a small job. We need to learn our fundamentals of project management on small jobs. What we tend to do is wait until we win the big jobs, and then think that we can manage them well. But again, we didn’t master the skills and processes on the small jobs as a company.”
Accountability: Measuring a PM’s Success
At Linx Multimedia, project managers are accountable for two metrics upon project completion: its financial success and customer satisfaction. Travis Deatherage, the firm’s president, explained that throughout projects, PMs are required to generate monthly financial reports that detail how they expect a project to perform financially, and this data is assessed once the project is done. The company’s marketing department also generates customer surveys to measure the client’s experience.
“The mindset that we build into our project managers is that financial performance and customer satisfaction are not mutually exclusive,” Deatherage said. “I think many people believe that you can have one without the other, or that you can’t have both. I actually believe that if you have both, they feed on each other: if you have a happy customer, you’re usually more successful financially, and if you’re financially successful over the long term, you probably have happier customers. Because customers want somebody who’s going to be responsible, who’s going to be honest, who’s going to be truthful and direct, and who’s going to be there for the long haul. And that requires a level of financial responsibility.”
 Techs vs. True PMs
It’s not uncommon for AV firms to promote their lead technicians into project management positions––something that Bradley Malone, PMP, warns companies to be careful of. “It’s a different mindset,” he said, and it really depends on whether the technician wishes to assume the other non-technical responsibilities that are part of project management, such as communications and administration. “I like to take the middle-of-the-road technician who likes to talk to people, likes to see how things work together, and will do paperwork. I’m very careful in the project management selection process to take generalists, not technicians.”

 

Source: AVnetwork-How to Avoid Pitfalls in Project Management By Carolyn Heinze

IBM Opens Watson IoT Global Headquarters, Extends Power of Cognitive Computing to a Connected World

IBM announced the opening of its global headquarters for Watson Internet of Things (IoT), launching a series of new offerings, capabilities and ecosystem partners designed to extend the power of cognitive computing to the billions of connected devices, sensors and systems that comprise the IoT. These new offerings will be available through the IBM Watson IoT Cloud, the company’s global platform for IoT business and developers.

As part of today’s launch, the company announced that Munich, Germany will serve as the global headquarters for its new Watson IoT unit, as well as its first European Watson innovation center. The campus environment will bring together 1000 IBM developers, consultants, researchers and designers to drive deeper engagement with clients and partners, and will also serve as an innovation lab for data scientists, engineers and programmers building a new class of connected solutions at the intersection of cognitive computing and the IoT. The center will drive collaborative innovation with clients, partners and IBM researchers and data scientists to create new opportunities for growth in IoT. It represents IBM’s largest investment in Europe in more than two decades.

IBM also will deliver Watson APIs and services on the Watson IoT Cloud Platform to accelerate the development of cognitive IoT solutions and services, helping clients and partners make sense of the growing volume and variety of data in a physical world that is rapidly becoming digitized.

With these moves, clients, start-ups, academia and a robust ecosystem of IoT partners, from silicon and device manufacturers to industry-oriented solution providers, will have direct access to IBM’s open, cloud-based IoT platform to test, develop and create the next generation of cognitive IoT apps, services and solutions. Leading automotive, electronics, healthcare, insurance and industrial manufacturers at the forefront of the region’s Industry 4.0 efforts are among those expected to benefit most.

“The Internet of Things will soon be the largest single source of data on the planet, yet almost 90 percent of that data is never acted upon,” said Harriet Green, general manager, Watson IoT and Education. “With its unique abilities to sense, reason and learn, Watson opens the door for enterprises, governments and individuals to finally harness this real-time data, compare it with historical data sets and deep reservoirs of accumulated knowledge, and then find unexpected correlations that generate new insights to benefit business and society alike.”

The company also announced that it has opened eight new Watson IoT Client Experience Centers across Asia, Europe and the Americas. Locations include Beijing, China; Boeblingen, Germany; Sao Paulo, Brazil; Seoul, Korea; Tokyo, Japan; and Massachusetts, North Carolina, and Texas in the United States. These centers provide clients and partners access to the technology, tools and talent needed to develop and create new products and services using cognitive intelligence delivered through the Watson IoT Cloud Platform.

Siemens Building Technologies, the market leader in safe, energy-efficient and environmentally friendly buildings and infrastructures, announced that it is teaming with IBM to bring innovation to the digitalization of buildings. Siemens is working to bring advanced analytics capabilities together with IBM’s IoT solutions to advance their Navigator platform for energy management and sustainability.

“By bringing asset management and analytics together with a deep technical understanding of how buildings perform, Siemens will make customers’ building operations more reliable, cost-optimized and sustainable,” said Matthias Rebellius, CEO of Siemens Building Technologies. “We are excited to stretch the envelope of what is possible in optimizing building performance by combining the asset management and database technologies from IBM’s Watson IoT business unit with our market leading building automation domain know-how.”

New Watson IoT Services Accelerate Cognitive IoT

IBM is bringing the power of cognitive analytics to the IoT by making four families of Watson API services available as part of a new IBM Watson IoT Analytics offering. As the physical world of devices and systems becomes highly digitized, these capabilities will allow clients, partners and developers to make greater sense of this data through machine learning and correlation with unstructured data.

The four new API services include:

The Natural Language Processing (NLP) API Family enables users to interact with systems and devices using simple, human language. Natural Language Processing helps solutions understand the intent of human language by correlating it with other sources of data to put it into context in specific situations. For example, a technician working on a machine might notice an unusual vibration. He can ask the system “What is causing that vibration?” Using NLP and other sensor data, the system will automatically link words to meaning and intent, determine the machine he is referencing, and correlate recent maintenance to identify the most likely source of the vibration and then recommend an action to reduce it.
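
The snippet below is a toy illustration of that correlation flow, not the Watson NLP APIs: a crude keyword check stands in for intent detection, the technician’s location stands in for resolving which machine is meant, and a small maintenance log supplies the correlation. All names and data are invented.

```python
# Invented maintenance history and sensor context for the example.
MAINTENANCE_LOG = {
    "press-12": ["2016-01-28: replaced bearing on drive shaft"],
}
SENSOR_CONTEXT = {"technician_location": "press-12", "vibration_hz": 42.0}

def answer(question: str) -> str:
    if "vibration" in question.lower():                    # crude intent detection
        machine = SENSOR_CONTEXT["technician_location"]    # resolve "that" machine
        history = MAINTENANCE_LOG.get(machine, [])
        if history:
            return (f"{machine}: vibration at {SENSOR_CONTEXT['vibration_hz']} Hz; "
                    f"recent work: {history[-1]}. Check the new bearing's alignment.")
    return "No likely cause found."

print(answer("What is causing that vibration?"))
```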

The Machine Learning Watson API Family automates data processing and continuously monitors new data and user interactions to rank data and results based on learned priorities. Machine Learning can be applied to any data coming from devices and sensors to automatically understand the current conditions, what’s normal, expected trends, properties to monitor, and suggested actions when an issue arises. For example, the platform can monitor incoming data from fleet equipment to learn both normal and abnormal conditions, including environment and production processes, which are often unique to each piece of equipment. Machine Learning helps understand these differences and configures the system to monitor the unique conditions of each asset.
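
As a rough sketch of the underlying idea, learning what is normal per asset and flagging deviations, the example below trains a separate scikit-learn IsolationForest for each piece of equipment. The sensor readings are made up, and this stands in for, rather than reproduces, the Watson Machine Learning APIs.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Made-up historical sensor readings (temperature, vibration) per asset.
history = {
    "pump-A": rng.normal(loc=[60.0, 2.0], scale=[2.0, 0.2], size=(500, 2)),
    "pump-B": rng.normal(loc=[75.0, 3.5], scale=[3.0, 0.4], size=(500, 2)),
}

# Each asset gets its own model, since "normal" differs per piece of equipment.
models = {asset: IsolationForest(random_state=0).fit(data)
          for asset, data in history.items()}

new_readings = {"pump-A": [[61.0, 2.1]], "pump-B": [[75.5, 9.0]]}  # B looks abnormal
for asset, reading in new_readings.items():
    label = models[asset].predict(reading)[0]   # 1 = normal, -1 = anomaly
    print(asset, "anomaly" if label == -1 else "normal")
```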

The Video and Image Analytics API Family enables monitoring of unstructured data from video feeds and image snapshots to identify scenes and patterns. This knowledge can be combined with machine data to gain a greater understanding of past events and emerging situations. For example, video analytics monitoring security cameras might note the presence of a forklift infringing on a restricted area, creating a minor alert in the system; three days later, an asset in that area begins to exhibit decreased performance. The two incidents can be correlated to identify a collision between the forklift and asset that might not have been readily apparent from the video or the data from the machine.
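
A minimal sketch of that kind of correlation, assuming two invented event streams (video detections and machine alerts) and a simple time-and-area window rather than any IBM service:

```python
from datetime import datetime, timedelta

# Invented event streams: one from video analytics, one from machine sensors.
video_events = [
    {"time": datetime(2016, 2, 1, 9, 30), "area": "bay-3",
     "event": "forklift in restricted area"},
]
machine_events = [
    {"time": datetime(2016, 2, 4, 14, 0), "area": "bay-3",
     "event": "conveyor throughput down 20%"},
]

def correlate(video, machines, window=timedelta(days=7)):
    """Pair video incidents with later machine issues in the same area."""
    pairs = []
    for v in video:
        for m in machines:
            if v["area"] == m["area"] and timedelta(0) <= m["time"] - v["time"] <= window:
                pairs.append((v, m))
    return pairs

for v, m in correlate(video_events, machine_events):
    print(f"Possible link: '{v['event']}' on {v['time']:%b %d} "
          f"-> '{m['event']}' on {m['time']:%b %d}")
```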

The Text Analytics API Family enables mining of unstructured textual data including transcripts from customer call centers, maintenance technician logs, blog comments, and tweets to find correlations and patterns in these vast amounts of data. For example, phrases reported through unstructured channels — such as “my brakes make a noise,” “my car seems slow to stop,” and “the pedal feels mushy” — can be linked and correlated to identify potential field issues in a particular make and model of car.
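
A hedged sketch of how such phrases can be grouped, using plain TF-IDF vectors and k-means from scikit-learn rather than the Watson Text Analytics APIs; the complaint strings are invented and, on a corpus this tiny, the grouping is only indicative.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

complaints = [
    "my brakes make a noise",
    "my car seems slow to stop",
    "the pedal feels mushy",
    "the infotainment screen keeps freezing",
    "radio display goes blank randomly",
]

# Vectorize the free text, then cluster to surface recurring themes.
vectors = TfidfVectorizer().fit_transform(complaints)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)

for label, text in sorted(zip(labels, complaints)):
    print(label, text)   # inspect which phrases were grouped together
```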

The Intersection of Cognitive and Internet of Things

Cognitive computing represents a new class of systems that learn at scale, reason with purpose and interact with humans naturally. Rather than being explicitly programmed, they learn and reason from their interactions with us and from their experiences with their environment, enabling them to keep pace with the volume, complexity, and unpredictability of information generated by the IoT. Cognitive systems can make sense of the 80 percent of the world’s data that computer scientists call “unstructured,” which means they can illuminate aspects of the world that were previously invisible, allowing users to gain greater insight and to make more informed decisions.

Source: IBM-IBM Opens Watson IoT Global Headquarters, Extends Power of Cognitive Computing to a Connected World

EU-US Privacy Shield: Can written assurances adequately protect EU data from US snoops?

Privacy campaigners have been quick to question whether Safe Harbour’s replacement will be looked on favourably by the European Court of Justice.

Safe Harbour’s successor, the EU-US Privacy Shield, has been weighed up and found wanting by privacy campaigners, who fear the proposed data-transfer agreement may not stand up to legal scrutiny by the European Court of Justice (CJEU).

The European Commission (EC) has been working with US lawmakers to develop a replacement for the Safe Harbour transatlantic data-transfer agreement since it was ruled invalid by the CJEU in October 2015.
The result of these discussions is the EU-US Privacy Shield, which is expected to come into force in three months’ time, the EC said.

For that to happen, the agreement’s content has to pass muster with the Article 29 Working Party, an affiliation of the data protection authorities of all 28 EU member states.

The working party has given the EC and the US until the end of February 2016 to provide a complete breakdown of how the Privacy Shield will work, and stated formally that anyone attempting to use Safe Harbour to transfer data back to the US is now breaking the law.

It also warned organisations using alternative data-transfer mechanisms – including standard contractual clauses and binding corporate rules – that permission to use these could be revoked by the end of February.

Apart from a new name, a logo and some lofty declarations about how the EU-US Privacy Shield is a “significant improvement” on Safe Harbour, only scant details about how it will work were outlined at the launch of the new-look data-transfer regime on 2 February.

These include the fact that the agreement will be subject to annual reviews – unlike Safe Harbour – and be supported by the work of a “functionally independent” ombudsman for European citizens who fear their data has been accessed unlawfully by US authorities.

Safe Harbour 2 and its shortcomings

Given how short on detail the announcement was, many industry watchers have described it as a ruse by the EC and the US to buy more time to flesh out the details of the Safe Harbour alternative, as the Article 29 Working Party initially gave the pair until 31 January to do so.

Frank Jennings, a partner specialising in cloud and technology commercial contracts at legal firm Wallace, told Computer Weekly he shares this view.

“The main driver over the timing of the announcement was the enforcement deadline set by the Article 29 Working Party,” he said. “This has bought some time while the detail is finalised.

“The European Commission has to prepare a draft adequacy decision for consideration by the Article 29 Working Party and the US still needs to set up the monitoring mechanisms and an ombudsman.”

During the 2 February press conference, Andrus Ansip, EC vice-president in charge of the Digital Single Market, promised European citizens that the EU-US Privacy Shield would protect them from “indiscriminate mass surveillance” by the US government.

He said the EC has received “written assurances” from the US government to this effect, but concerns are already mounting about how watertight these written declarations are likely to be.

A history of Safe Harbour

The Safe Harbour agreement was the legal mechanism previously used by thousands of US companies to transfer data belonging to European citizens to the US, before it was struck down by the CJEU last October following a legal challenge by Austrian law student Max Schrems.

The CJEU backed Schrems’ assertion that Safe Harbour did not adequately protect the data of European citizens from the mass surveillance activities of the US government, which, in turn, were uncovered by NSA whistleblower Edward Snowden in 2013.

In this context, the problem that many people have with the EU-US Privacy Shield’s “written assurances” is whether or not these would be considered “adequate protection” from the US government’s mass surveillance activities.

Former EC vice-president Viviane Reding, who previously spearheaded a review of Safe Harbour in response to Snowden’s 2013 revelations, has already aired concerns about the shape its replacement is taking.

“The new text is disappointing,” she said. “The commitment to limit mass surveillance of EU citizens is ensured only by a written letter from US authorities.

“Is this sufficient to limit oversight and prevent generalised access to the data of EU citizens? I have serious doubts if this commitment will withstand a possible new examination by the European Court of Justice.”

Alexander Hanff, CEO of civil liberties advisory group Think Privacy, shares Reding’s misgivings, saying that as long as the US government’s Foreign Intelligence Surveillance Act (FISA) remains in place, these written declarations are “not worth the paper they are written on”.

FISA is a piece of federal legislation that allows the US government to covertly keep tabs on people suspected of spying on the US for overseas governments or intelligence agencies, as long as the Foreign Intelligence Surveillance Court (FISC) gives it permission to do so.

“We are supposed to believe that the very same agencies and the very same political machine that has been spying on the world’s digital communications for over a decade will now suddenly stop spying on Europeans because the European Commission has asked them to?” said Hanff. “It is preposterous to even suggest such a thing, let alone do so with a straight face.

“It doesn’t matter how many ‘assurances’ the US gives the EC, the very fact that the FISC exists and issues secret orders under FISA renders them into nothing but fantasy.”

Hanff has already written to the Article 29 Working Party outlining his concerns about the Privacy Shield’s reliance on written assurances over mass surveillance. He calls on the working party not to “entertain the notion that such an agreement is either legally secure or honest”.

He then signs off by asking Isabelle Falque-Pierrotin, chair of the Article 29 Working Party, to make sure the existence of FISA and FISC are communicated to other members of the party, along with the risk they pose to ensuring that the EU-US Privacy Shield can make good on its promise of protecting citizens from snooping.

“We simply must not allow a lie (for this Privacy Shield is exactly that) to replace a lie (which Safe Harbour was) in order to maintain the status quo and pander to the economic interests of the US technology sector,” Hanff wrote.

“The deal is bad for EU citizens and it is bad for the EU economy. It must not be accepted.”

Written assurances vs legal protections

Max Schrems released a statement following the EU-US Privacy Shield announcement, also focusing on whether a written declaration would be enough to satisfy the CJEU.

“A couple of letters by the outgoing Obama administration is by no means a legal basis to guarantee the fundamental rights of 500 million European users in the long run, when there is explicit US law allowing mass surveillance,” said Schrems.

“We don’t know the exact legal structure yet, but this could amount to disregarding the CJEU’s judgment. The court has clearly stated that the US has to ‘ensure’ proper protection by means of ‘domestic law or international commitments’.”

However, Daniel Hedley, an associate at legal firm Thomas Eggar, said that until the full details of EU-US Privacy Shield are made public, it is difficult to decide exactly how the CJEU will view the finished article.

“The CJEU’s judgment was based in large part on a finding that the US did not provide equivalent protections in law,” Hedley told Computer Weekly. “So I think we can at least say that the Privacy Shield’s legal status and enforceability are going to be critical to its success or failure.

“That is, whether or not these ‘written assurances’ provided by the US government amount to real binding rights and obligations giving European equivalent data rights, and whether the proposed enforcement mechanisms have real teeth. At the moment, with the information we have, we just can’t tell if that is the case or not.”

Until the EU and US lawmakers present the EU-US Privacy Shield proposition in full to the Article 29 Working Party at the end of February, it is difficult to say with any degree of certainty whether the CJEU would uphold any legal challenges against it, said Hedley.

And, it seems, there will be no shortage of candidates willing to put it to the test once the full details are known.

“I am not sure if this system will stand the test before the Court of Justice,” Schrems said, in his post-announcement statement. “There will clearly be people who will challenge this; depending on the final text, I may well be one of them.”

Source: CIO.com- EU-US Privacy Shield: Can written assurances adequately protect EU data from US snoops? by Caroline Donnelly

Making bimodal IT a reality

Matt Kingswood of ITS UK examines the challenges and opportunities around this new style of IT service delivery

A recent survey by Gartner found that 45 per cent of CIOs currently use a bimodal IT service management strategy. This method of service delivery allows IT teams to split their focus into two separate, coherent modes: stability and agility. The first mode involves the operational side of IT, in which the team focuses on the safety and reliability of an organisation’s IT environment. This includes daily tasks such as troubleshooting issues and helpdesk functions. The second mode, which is the crux of bimodal IT, is centred on innovation and allows the IT team to experiment and identify new ways of using technology to meet the fast-evolving demands of the business.

By 2017, Gartner predicts that 75 per cent of organisations will have implemented bimodal IT strategies. Why the sudden interest in this new approach to IT service delivery?

In short, by separating IT functions, businesses are able to continually adapt to rapid business growth, develop products and services driven by new technologies, and identify different applications of current technologies.

Bimodal IT challenges
For mid-market businesses, which typically have limited IT staff, ensuring that technology keeps pace with business demands is a challenge. Many CIOs may feel that their IT spending is weighted too heavily toward the maintenance side of IT, leaving little room for new projects.

The trouble is that, if IT staff are too consumed with troubleshooting and helpdesk issues and do not have the time to accommodate new developments, users tend to resort to shadow IT. As other business units take it upon themselves to implement IT solutions that meet their specific needs, the IT team’s role of maintaining the organisation’s infrastructure is threatened, as they are not able to support new technologies they are not aware are being used internally.

Innovation plays a key role in advancing business, but organisations cannot afford for their service delivery times to suffer. A business that outsources its helpdesk, for instance, needs to ensure that its partner is equipped to troubleshoot issues quickly and efficiently. Otherwise the organisation’s ability to continue business as usual is inhibited, which could compromise future progress.

How businesses are achieving bimodal IT
To adopt a bimodal IT service delivery strategy that allows IT to take a more holistic role in the business, some organisations are choosing to outsource basic IT functions such as helpdesk. Gartner believes that with the changes in the nature of IT sourcing, smaller IT suppliers will be able to respond quickly to requirements, while also scaling solutions more quickly by utilising cloud capabilities.

By outsourcing IT functions to a managed service provider (MSP), internal staff are then free to invest more time in IT innovation, perhaps even developing their own skunkworks that allows the business to remain agile in a competitive marketplace.

Although it might not be feasible for a mid-market business to implement a bimodal IT strategy with dedicated internal staff, that does not mean innovation is out of the question. Outsourcing basic maintenance functions such as helpdesk can help the business prioritise progress.

Source: channelnomics-Making bimodal IT a reality by Matt Kingswood 

VMware Project Enzo takes EUC management to the cloud

Following in the footsteps of Citrix Workspace Cloud, VMware is readying a cloud-based platform of its own, which aims to make workspace design, delivery and management easier than ever.

VMware’s Project Enzo is a cloud-based platform designed to make it faster and less complex to implement and manage virtual desktops and applications on hyper-converged appliances.
Project Enzo works with VMware Horizon Air, Horizon 6 and partner cloud platforms to deploy, control, move, and scale desktops and apps. As of this writing, Enzo is still in beta, but it provides a control center that orchestrates operations across cloud-based and on-premises data centers.

VMware Project Enzo is aimed squarely at hyper-converged appliances, with the company’s EVO:RAIL hyper-converged infrastructure (HCI) product as its primary target. Appliances such as EVO:RAIL and EVO SDDC combine compute, storage and networking resources into a unified infrastructure to help ease the burden of deploying and managing virtual desktops and apps.

Using Enzo with VMware’s HCI appliances streamlines desktop and application management to plug-and-play simplicity, removing much of the complexity that is often associated with deployment and maintenance operations. According to VMware, virtual desktop deployments that historically took days to implement will be possible in less than an hour with Enzo.

Smart Node unites Project Enzo with HCI

A key component of the Project Enzo platform is VMware’s Smart Node technology, which is an orchestration layer installed on the appliance that communicates with the Enzo control center. Smart Node is tightly integrated with the company’s hyper-converged infrastructure, and is responsible for delivering and managing the appliance workloads. It is the Smart Node technology that makes it possible to deliver just-in-time provisioning for virtual desktops and their applications through an intuitive, easy-to-use interface.

Smart Node will be available for use with VMware Horizon Air services and other desktop-as-a-service providers, enabling organizations to manage virtual desktops and applications in an assortment of environments. VMware is also planning to release an adapter for Horizon 6 that will allow customers to use Project Enzo with their existing systems.

Project Enzo Cloud-Control Plane

After an organization has implemented the necessary Enzo-enabled infrastructures, administrators can use the Project Enzo Cloud-Control Plane to deploy and manage their virtual desktops and applications on those infrastructures. The Cloud-Control Plane runs on VMware’s vCloud Air and provides a single platform to configure desktops, applications and policies, without needing to install additional software or manage each component separately.
The Cloud-Control Plane should make implementing virtual desktops easier and faster. According to VMware, you can deploy 100 virtual desktops on your hyper-converged appliance in less than a minute, or scale up to 2,000 virtual desktops in less than 20 minutes. To get started, you only need to name the pool, specify the number of desktops and select a Windows image.
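
To show how small that input surface is, here is a purely hypothetical sketch of those three inputs; the field names and the deploy function are invented for illustration and are not VMware’s Enzo API.

```python
# Hypothetical illustration only; not VMware's Enzo API.
desktop_pool = {
    "name": "finance-pool",
    "desktop_count": 100,
    "windows_image": "win10-base-2016-01",
}

def deploy_pool(pool: dict) -> None:
    """Stand-in for whatever the control plane does with these three inputs."""
    print(f"Deploying {pool['desktop_count']} desktops "
          f"from image '{pool['windows_image']}' as pool '{pool['name']}'")

deploy_pool(desktop_pool)
```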

Project Enzo also provides what VMware refers to as hybrid cloud flexibility — letting you deploy virtual desktops and apps to the public cloud or to on-premises data centers, with the ability to move them back and forth between the two. You can use one platform as your production environment and another platform for seasonal workloads, desktop bursting, disaster recovery or other secondary use cases.

With VMware Project Enzo, you can also apply updates to applications and desktop images with zero downtime, eliminating the maintenance windows traditionally required to apply updates. In addition, the Enzo platform itself is made up of a set of microservices that you can update individually, allowing VMware to introduce new technologies and fixes with little impact on the overall system.

Technologies that integrate with Project Enzo

The Project Enzo platform uses three VMware technologies to support its deployment and management capabilities: Instant Clone, App Volumes and User Environment Management (UEM).

First introduced in VMware vSphere 6, Instant Clone — formerly Project Fargo — is now also integrated into Project Enzo and allows you to provision virtual desktops within seconds. Instant Clone uses a rapid in-memory technique to generate child clones from a running parent virtual machine. The child clones use the parent’s disk and memory resources, with writes saved to delta disks. The parent is then placed in an inactive state and the child clones are put into production.

To handle the application side of the virtual environment, Project Enzo uses App Volumes, a scalable, real-time delivery technology for deploying virtual applications and data in seconds. App Volumes lets you manage an application’s entire lifecycle, providing the tools necessary to provision, deliver, maintain and retire the application. Through the Enzo Cloud-Control Plane, you can make applications available immediately upon login or at system boot-up. The applications are stored in read-only virtual disks, and can follow users from one virtual desktop to another.

The final piece of the Enzo puzzle is User Environment Manager, a service that makes application settings available to a user on any type of device. With UEM, you can configure contextual user policies that map policy settings to the user’s device and location. When the user logs into a virtual desktop or other environment, UEM automatically applies policy settings to that environment, providing a personalized and consistent experience across devices, with the ability to support over 100,000 users.
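
The contextual-policy idea can be sketched roughly as a lookup from device type and location to the settings applied at login. The policy table and setting names below are invented illustrations of the concept, not VMware’s UEM implementation.

```python
# Invented policy table: (device_type, location) -> settings applied at login.
POLICIES = {
    ("corporate-laptop", "office"): {"clipboard": True,  "usb_redirect": True},
    ("corporate-laptop", "remote"): {"clipboard": True,  "usb_redirect": False},
    ("personal-tablet",  "remote"): {"clipboard": False, "usb_redirect": False},
}
DEFAULT_POLICY = {"clipboard": False, "usb_redirect": False}

def settings_for(device_type: str, location: str) -> dict:
    """Return the settings that UEM-style logic would apply at login."""
    return POLICIES.get((device_type, location), DEFAULT_POLICY)

print(settings_for("corporate-laptop", "remote"))
print(settings_for("unknown-device", "cafe"))   # falls back to the default policy
```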

Question marks over Project Enzo

Project Enzo has the potential to decrease the complexity of deploying VMware virtual desktops and applications. Enzo aims to help organizations maximize their resources, without over-provisioning virtual desktops and applications.

That said, VMware hasn’t disclosed what the final product will cost, saying only that the service will cost less than a cup of coffee per day, per user. So far, the only people to test Enzo are qualified customers and partners that have signed a nondisclosure agreement. VMware recently announced that it plans to accept customer nominations for beta participants, who will be able to provide a better sense of how well the platform is put together. Project Enzo could prove a promising technology on the virtualization front, as long as VMware can live up to all its promises.

Source: TechTarget-VMware Project Enzo takes EUC management to the cloud by Robert Sheldon

The current state of machine intelligence 2.0

Autonomous systems and focused startups among major changes seen in past year.

A year ago today, I published my original attempt at mapping the machine intelligence ecosystem. So much has happened since. I spent the last 12 months geeking out on every company and nibble of information I could find, chatting with hundreds of academics, entrepreneurs, and investors about machine intelligence. This year, given the explosion of activity, my focus is on highlighting areas of innovation, rather than on trying to be comprehensive. Figure 1 showcases the new landscape of machine intelligence as we enter 2016:

[Figure 1: The machine intelligence landscape 2.0]

Despite the noisy hype, which sometimes distracts, machine intelligence is already being used in several valuable ways. Machine intelligence already helps us get the important business information we need more quickly, monitors critical systems, feeds our population more efficiently, reduces the cost of health care, detects disease earlier, and so on.

The two biggest changes I’ve noted since I did this analysis last year are (1) the emergence of autonomous systems in both the physical and virtual world and (2) startups shifting away from building broad technology platforms to focusing on solving specific business problems.

Reflections on the landscape

With the focus moving from “machine intelligence as magic box” to delivering real value immediately, there are more ways to bring a machine intelligence startup to market.  Most of these machine intelligence startups take well-worn machine intelligence techniques, some more than a decade old, and apply them to new data sets and workflows. It’s still true that big companies, with their massive data sets and contact with their customers, have inherent advantages—though startups are finding a way to enter.

Achieving autonomy

In last year’s roundup, the focus was almost exclusively on machine intelligence in the virtual world. This time we’re seeing it in the physical world, in the many flavors of autonomous systems: self-driving cars, autopilot drones, robots that can perform dynamic tasks without every action being hard coded. It’s still very early days—most of these systems are just barely useful, though we expect that to change quickly.

These physical systems are emerging because they meld many now-maturing research avenues in machine intelligence. Computer vision, the combination of deep learning and reinforcement learning, natural language interfaces, and question-answering systems are all building blocks to make a physical system autonomous and interactive. Building these autonomous systems today is as much about integrating these methods as inventing new ones.

The new (in)human touch

The virtual world is becoming more autonomous, too. Virtual agents, sometimes called bots, use conversational interfaces (think of Her, without the charm). Some of these virtual agents are entirely automated, others are a “human-in-the-loop” system, where algorithms take “machine-like” subtasks and a human adds creativity or execution. (In some, the human is training the bot while she or he works.) The user interacts with the system by either typing in natural language or speaking, and the agent responds in kind.

These services sometimes give customers confusing experiences, like mine the other day when I needed to contact customer service about my cell phone. I didn’t want to talk to anyone, so I opted for online chat. It was the most “human” customer service experience of my life, so weirdly perfect I found myself wondering whether I was chatting with a person, a bot, or some hybrid. Then I wondered if it even mattered. I had a fantastic experience and my issue was resolved. I felt gratitude to whatever it was on the other end, even if it was a bot.

On one hand, these agents can act utterly professional, helping us with customer support, research, project management, scheduling, and e-commerce transactions. On the other hand, they can be quite personal and maybe we are getting closer to Her — with Microsoft’s romantic chatbot Xiaoice, automated emotional support is already here.

As these technologies warm up, they could transform new areas like education, psychiatry, and elder care, working alongside human beings to close the gap in care for students, patients, and the elderly.

50 shades of grey markets

At least I make myself laugh.

Many machine intelligence technologies will transform the business world by starting in regulatory grey areas. On the short list: health care (automated diagnostics, early disease detection based on genomics, algorithmic drug discovery); agriculture (sensor- and vision-based intelligence systems, autonomous farming vehicles); transportation and logistics (self-driving cars, drone systems, sensor-based fleet management); and financial services (advanced credit decisioning).

To overcome the difficulties of entering grey markets, we’re seeing some unusual strategies:

  • Startups are making a global arbitrage (e.g., health care companies going to market in emerging markets, drone companies experimenting in the least regulated countries).
  • The “fly under the radar” strategy. Some startups are being very careful to stay on the safest side of the grey area, keep a low profile, and avoid the regulatory discussion as long as possible.
  • Big companies like Google, Apple, and IBM are seeking out these opportunities because they have the resources to be patient and are the most likely to be able to effect regulatory change—their ability to affect regulation is one of their advantages.
  • Startups are considering beefing up funding earlier than they would have, to fight inevitable legal battles and face regulatory hurdles sooner.

What’s your (business) problem?

A year ago, enterprises were struggling to make heads or tails of machine intelligence services (some of the most confusing were in the “platform” section of this landscape). When I spoke to potential enterprise customers, I often heard things like, “these companies are trying to sell me snake oil” or, “they can’t even explain to me what they do.”

The corporates wanted to know what current business problems these technologies could solve. They didn’t care about the technology itself. The machine intelligence companies, on the other hand, just wanted to talk about their algorithms and how their platform could solve hundreds of problems (this was often true, but that’s not the point!).

Two things have happened that are helping to create a more productive middle ground:

  1. Enterprises have invested heavily in becoming “machine intelligence literate.” I’ve had roughly 100 companies reach out to get perspective on how they should think about machine intelligence. Their questions have been thoughtful, they’ve been changing their organizations to make use of these new technologies, and many different roles across the organization care about this topic (from CEOs to technical leads to product managers).
  2. Many machine intelligence companies have figured out that they need to speak the language of solving a business problem. They are packaging solutions to specific business problems as separate products and branding them that way. They often work alongside a company to create a unique solution instead of just selling the technology itself, being one part educator and one part executor. Once businesses learn what new questions can be answered with machine intelligence, these startups may make a more traditional technology sale.

The great verticalization

I remember reading Who Says Elephants Can’t Dance and being blown away by the ability of a technology icon like IBM to risk it all. (This was one of the reasons I went to work for them out of college.) Now IBM seems poised to try another risk-it-all transformation—moving from a horizontal technology provider to directly transforming a vertical. And why shouldn’t Watson try to be a doctor or a concierge? It’s a brave attempt.

It’s not just IBM: you could probably make an entire machine intelligence landscape just of Google projects. (If anyone takes a stab, I’d love to see it!)

Your money is nice, but tell me more about your data

In the machine intelligence world, founders are selling their companies, as I suggested last year—but it’s about more than just money. I’ve heard from founders that they are only interested in an acquisition if the acquirer has the right data set to make their product work. We’re hearing things like, “I’m not taking conversations but, given our product, if X came calling it’d be hard to turn down.” “X” is most often Slack (!), Google, Facebook, Twitter in these conversations—the companies that have the data.

(Eh)-I

Until recently, there’s been one secret in machine intelligence talent: Canada! During the “AI winter,” when this technology fell out of favor in the ’80s and ’90s, the Canadian government was one of a few entities funding AI research. This support sustained the formidable trio of Geoffrey Hinton, Yoshua Bengio, and Yann LeCun, the godfathers of deep learning.

Canada continues to be central to the machine intelligence frontier. As an unapologetically proud Canadian, I’ve found it a pleasure to work with groups like AICML to commercialize advanced research and with the Machine Learning Creative Destruction Lab to support startups, and to help bring the machine intelligence world together at events like this one.

So what now?

Machine intelligence is even more of a story than last year, in large companies as well as startups. In the next year, the practical side of these technologies will flourish. Most new entrants will avoid generic technology solutions and instead apply machine intelligence to a specific business purpose.

I can’t wait to see more combinations of the practical and eccentric. A few years ago, a company like Orbital Insight would have seemed farfetched—wait, you’re going to use satellites and computer vision algorithms to tell me what the construction growth rate is in China!?—and now it feels familiar.

Similarly, researchers are doing things that make us stop and say, “Wait, really?” They are tackling important problems we may not have imagined were possible, like creating fairy godmother drones to help the elderly, computer vision that detects the subtle signs of PTSD, autonomous surgical robots that remove cancerous lesions, and fixing airplane WiFi (just kidding, not even machine intelligence can do that).

Overall, agents will become more eloquent, autonomous systems more pervasive, machine intelligence more…intelligent. I expect more magic in the years to come.

 

Source: O’Reilly, “The current state of machine intelligence 2.0” by Shivon Zilis

A Learning Advance in Artificial Intelligence Rivals Human Abilities

Computer researchers reported artificial-intelligence advances on Thursday that surpassed human capabilities for a narrow set of vision-related tasks.

The improvements are noteworthy because so-called machine-vision systems are becoming commonplace in many aspects of life, including car-safety systems that detect pedestrians and bicyclists, as well as in video game controls, Internet search and factory robots.

Researchers at the Massachusetts Institute of Technology, New York University and the University of Toronto reported a new type of “one shot” machine learning on Thursday in the journal Science, in which a computer vision program outperformed a group of humans in identifying handwritten characters based on a single example.

The program is capable of quickly learning the characters in a range of languages and generalizing from what it has learned. The authors suggest this capability is similar to the way humans learn and understand concepts.

The new approach, known as Bayesian Program Learning, or B.P.L., is different from current machine learning technologies known as deep neural networks.

Neural networks can be trained to recognize human speech, detect objects in images or identify kinds of behavior by being exposed to large sets of examples.

Although such networks are modeled after the behavior of biological neurons, they do not yet learn the way humans do — acquiring new concepts quickly. By contrast, the new software program described in the Science article is able to learn to recognize handwritten characters after “seeing” only a few or even a single example.

The researchers compared the capabilities of their Bayesian approach and other programming models using five separate learning tasks that involved a set of characters from a research data set known as Omniglot, which includes 1,623 handwritten characters from 50 writing systems. Both the images and the pen strokes needed to create each character were captured.
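
To make the benchmark concrete, here is a minimal sketch of what a single one-shot classification episode looks like, using fabricated stand-in data rather than the real Omniglot images. The nearest-neighbor classifier below is only a toy baseline that shows the shape of the task; it is not Bayesian Program Learning, which instead models the pen strokes used to generate each character.

```python
import numpy as np

# Toy stand-in for the Omniglot benchmark: each "character" is a small binary
# image. The real data set has 1,623 characters from 50 writing systems, each
# drawn by several people; the data here is fabricated purely to show the setup.
rng = np.random.default_rng(0)

def make_character_class(size=28):
    """Create a prototype glyph and a function that draws noisy copies of it."""
    prototype = (rng.random((size, size)) < 0.15).astype(float)
    def draw():
        noise = rng.random((size, size)) < 0.02
        return np.clip(prototype + noise, 0.0, 1.0)
    return draw

def one_shot_episode(n_classes=20):
    """One evaluation episode: one labeled example per class, one query per class."""
    classes = [make_character_class() for _ in range(n_classes)]
    support = np.stack([draw() for draw in classes])   # the single example of each class
    queries = np.stack([draw() for draw in classes])   # new drawings to classify
    labels = np.arange(n_classes)
    return support, queries, labels

def nearest_neighbor_accuracy(support, queries, labels):
    """Classify each query by pixel distance to the closest support example."""
    correct = 0
    for query, true_label in zip(queries, labels):
        distances = np.linalg.norm(support - query, axis=(1, 2))
        correct += int(labels[np.argmin(distances)] == true_label)
    return correct / len(labels)

support, queries, labels = one_shot_episode()
print(f"toy one-shot accuracy: {nearest_neighbor_accuracy(support, queries, labels):.2f}")
```

A real evaluation runs many such episodes and averages the results; the point of the sketch is only what “learning from a single example” means operationally.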

“With all the progress in machine learning, it’s amazing what you can do with lots of data and faster computers,” said Joshua B. Tenenbaum, a professor of cognitive science and computation at M.I.T. and one of the authors of the Science paper. “But when you look at children, it’s amazing what they can learn from very little data. Some comes from prior knowledge and some is built into our brain.”

Also on Thursday, organizers of an annual academic machine vision competition reported gains in lowering the error rate in software for finding and classifying objects in digital images.

[Photo caption: Three researchers who have created a computer model that captures humans’ unique ability to learn new concepts from a single example: from left, Ruslan Salakhutdinov, Brenden M. Lake and Joshua B. Tenenbaum. Credit: Alain Decarie for The New York Times]

“I’m constantly amazed by the rate of progress in the field,” said Alexander Berg, an assistant professor of computer science at the University of North Carolina, Chapel Hill.

The competition, known as the ImageNet Large Scale Visual Recognition Challenge, pits teams of researchers at academic, government and corporate laboratories against one another to design programs to both classify and detect objects. It was won this year by a group of researchers at the Microsoft Research laboratory in Beijing.

 

The Microsoft team was able to cut the number of errors in half in a task that required their program to classify objects from a set of 1,000 categories. The team also won a second competition by accurately detecting all instances of objects in 200 categories.

The contest requires the programs to examine a large number of digital images, and either label or find objects in the images. For example, they may need to distinguish between objects such as bicycles and cars, both of which might appear to have two wheels from a certain perspective.
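
The classification half of the contest is typically scored with a top-five error rate: a program’s prediction counts as correct if the true category appears among its five highest-scoring guesses. The sketch below is a minimal illustration of that metric on fabricated scores; only the 1,000-category figure comes from the contest described above.

```python
import numpy as np

rng = np.random.default_rng(1)

def top5_error(scores, true_labels):
    """Fraction of images whose true category is NOT among the five highest-scoring guesses."""
    top5 = np.argsort(scores, axis=1)[:, -5:]          # indices of the five best guesses per image
    hits = (top5 == true_labels[:, None]).any(axis=1)
    return 1.0 - hits.mean()

# Synthetic stand-in: 10,000 "images" scored over the contest's 1,000 categories.
n_images, n_categories = 10_000, 1_000
true_labels = rng.integers(0, n_categories, size=n_images)
scores = rng.random((n_images, n_categories))
# Nudge the true category's score upward so this toy "model" beats random guessing.
scores[np.arange(n_images), true_labels] += 0.8

print(f"toy top-5 error: {top5_error(scores, true_labels):.3f}")
```

The detection task is scored differently, since a program must also say where in the image each object appears.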

In both the handwriting recognition task described in Science and in the visual classification and detection competition, researchers made efforts to compare their progress to human abilities. In both cases, the software advances now appear to surpass human abilities.

However, computer scientists cautioned against drawing conclusions about “thinking” machines or making direct comparisons to human intelligence.

“I would be very careful with terms like ‘superhuman performance,’ ” said Oren Etzioni, chief executive of the Allen Institute for Artificial Intelligence in Seattle. “Of course the calculator exhibits superhuman performance, with the possible exception of Dustin Hoffman,” he added, in reference to the actor’s portrayal of an autistic savant with extraordinary math skills in the movie “Rain Man.”

The advances reflect the intensifying focus in Silicon Valley and elsewhere on artificial intelligence.

Last month, the Toyota Motor Corporation announced a five-year, billion-dollar investment to create a research center based next to Stanford University to focus on artificial intelligence and robotics.

Also, a formerly obscure academic conference, Neural Information Processing Systems, underway this week in Montreal, has doubled in size since the previous year and has attracted a growing list of brand-name corporate sponsors, including Apple for the first time.

“There is a sellers’ market right now — not enough talent to fill the demand from companies who need them,” said Terrence Sejnowski, the director of the Computational Neurobiology Laboratory at the Salk Institute for Biological Studies in San Diego. “Ph.D. students are getting hired out of graduate schools for salaries that are higher than faculty members who are teaching them.”

Source: The New York Times, “A Learning Advance in Artificial Intelligence Rivals Human Abilities” by John Markoff