The most common software project management mistakes

Here are the four big mistakes that get in the way of good software project management for nursing homes and what to do about them.

Mistake #1: Thinking having a “project manager” is project management. It’s not.

A project manager is a professional who has studied the tools and procedures of the project management discipline and often holds a certification such as a PMP. Despite the all-encompassing connotation of the title, many people are surprised to find out the PM’s role is rather limited. On a well-run software project the PM is tracking tasks and budgets and timelines. For that reason, I like to use the term “literal project manager.” Here’s a diagram showing the literal project manager’s place on a typical project team.

[Diagram: the literal project manager's place on a typical project team]

Project management, in contrast, involves many more people. There are those who will discover and understand the business requirements, who will make strategic decisions about technology choice, who will set the project’s goals and establish metrics. In short, good project management is holistic, not limited.

[Diagram: the many roles involved in holistic project management]

What you can do: Learn all the business roles involved in holistic project management.
Many business people are not aware of the roles or “hats” necessary in a software project to achieve good governance and management, such as a project sponsor, program manager, and business analyst. Taken together, all these roles achieve good project management.

Mistake #2: Not understanding the word “risk.”

In regular life, when you hear the word “risk,” you probably translate it to the following statement: “There is a chance that I will endure some harm, but in all likelihood, I probably will not.”
Software development risks are different: they have a high likelihood of impacting budget and timeline. Too often, risks go unidentified or are accepted heedlessly, and timelines and budgets are blown.
What you can do: Understand the most common project risks; account for them in timeline and budget.
Every software project has risks. Getting the definition straight—that a risk is likely to have consequences—is the first step.
Next you must identify your risks. Most businesspeople do not know how to identify risks in a project. These five items always go into my risk column until proven otherwise:

1. Integration—The need for one system to exchange data with another
2. Data migration—Porting data from an older system into the new one
3. Customization—Inventing a novel feature or function
4. Unproven technology—Employing a new/hot technology just introduced
5. Too-large project—Tackling a massive project in one fell swoop rather than breaking it up into parts

Sometimes risks can be avoided. If a business leader is aware of risky items and their likely consequences, she can eliminate features in a project or put the kibosh on some sexy new technology an influential cadre of people wants. When risks are identified and accepted, contingency budget and timeline should be set aside to deal with them.
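One common way to turn accepted risks into a contingency number is an expected-value calculation over the risk list. The sketch below illustrates the idea only; every risk name, probability and dollar figure in it is hypothetical, not from this article.

```python
# A minimal sketch of a risk register with an expected-value contingency.
# All probabilities and impact figures are illustrative assumptions.

RISKS = [
    # (risk, probability of occurring, impact in dollars if it does)
    ("Integration with billing system", 0.6, 40_000),
    ("Data migration from legacy system", 0.7, 55_000),
    ("Custom reporting feature", 0.5, 30_000),
]

def contingency_reserve(risks):
    """Sum of probability x impact across all accepted risks."""
    return sum(prob * impact for _, prob, impact in risks)

if __name__ == "__main__":
    print(f"Suggested contingency reserve: ${contingency_reserve(RISKS):,.0f}")
```

A real register would also track owners and mitigation plans; the point here is simply that an accepted risk should translate into a concrete reserve rather than a hope.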

Mistake #3: Not understanding the accuracy of the budget

There are four methods of budgeting in software development.

  1. Comparative budgeting—When a vendor or internal team uses a recently completed equivalent project to estimate a new project.
  2. Bottom-up budgeting—When a detailed list of tasks exists and can be estimated one by one.
  3. Top-down budgeting—Used on large or innovative projects where few equivalencies can be identified and it is not practical to develop a detailed task list. The team estimates big “buckets” and how long they will take.
  4. Blends—A budget that combines two or more of the above.

As you can probably tell from reading the previous list, the accuracy of the budget will vary according to the method used. Comparative budgeting and bottom-up budgeting are generally more accurate than top-down budgeting.

A businessperson may not understand why a current software development project is going over budget. He may say, “I did a project last year that seemed very complicated and it came in on time and on budget! What is wrong with you people?”
He may not understand that his previous project was comparison-budgeted against a very similar project. The current project required top-down budgeting based on its degree of innovation. Top-down budgeting is almost always less accurate than comparative budgeting, where a good recent equivalency exists.
What you can do: Understand how the budgeting methods work.
Understand the different software budgeting methods and know which one(s) was used in your project. Set aside contingencies for the methods that are less precise.
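One hedged way to act on that advice is to attach an uncertainty range to each work package based on the method that estimated it. The sketch below is illustrative only; the work packages and accuracy percentages are hypothetical assumptions, not industry standards.

```python
# A minimal sketch of a blended budget with per-method uncertainty ranges.
# Base estimates and uncertainty fractions are illustrative assumptions.

ESTIMATES = [
    # (work package, method, base estimate in dollars, uncertainty fraction)
    ("Reporting module", "comparative", 80_000, 0.10),
    ("Core workflows",   "bottom-up",  150_000, 0.15),
    ("Novel scheduling", "top-down",   120_000, 0.40),  # least precise method
]

def budget_range(estimates):
    """Return the (low, high) total given each estimate's uncertainty."""
    low = sum(base * (1 - u) for _, _, base, u in estimates)
    high = sum(base * (1 + u) for _, _, base, u in estimates)
    return low, high

if __name__ == "__main__":
    low, high = budget_range(ESTIMATES)
    print(f"Blended budget range: ${low:,.0f} to ${high:,.0f}")
```

Presenting the budget as a range, with the widest band around the top-down pieces, makes the conversation about contingency far easier than defending a single number.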

Mistake #4: Business leaders shirk their responsibilities in project management

Business leaders with little experience in technology often raise their palms to the sky and say, “I don’t know. What should we do?”
After a few interactions like this, a group of programmers may think, “It’s my responsibility to decide.” Many organizations get into enormous amounts of trouble and expense because all strategic technology decisions are being made by mid-level programmers who have no insight into any of the organization’s actual strategy. Business leaders end up discovering too late that their customer database is a mess or that they are in violation of federal data-protection guidelines.
What you can do: Step up!
It’s critical for organizational leadership to accept that however unprepared they may feel, they are in charge of making strategic technology decisions. This means involvement in software project management – in the holistic sense.

Source: McKnight's-The most common software project management mistakes

How internal IT can become a cloud services broker

Evidence suggests that cloud projects are much more likely to fail without IT support. Expert Tom Nolle explains how organizations benefit when IT becomes a cloud services broker.

There is strong evidence that without professional IT support, cloud projects are much more likely to fail, create security and compliance problems, and divide IT activities into uncontrollable silos. For some enterprises, including many of the largest, the best approach to the cloud is to let internal IT act as a public cloud services broker. IT management and planners need to prepare for such a role by creating an IT resource plan that accepts the public cloud as an equal partner, addressing line department fears of data center lock-in or loss of agility, and defining tools and practices to embrace cloud services from any source.
Even line departments, empowered by as-a-service cloud application choices, are concerned that exercising IT procurement on a per-organization level will lead to disintegration of unified workflows and loss of productivity. They’re also concerned about security, compliance and cost overruns, and in many cases, line departments have sought internal IT advice on cloud selection. This approach is helpful, but it’s hard to structure to avoid wasteful duplication of work when multiple line departments are pursuing cloud goals independently.

Think of IT as a broker of both cloud and internal services

All these concerns naturally lead to the question of whether internal IT should act as a “fair broker” of both public cloud and internal IT services. Line departments would go to the IT organization with a business plan, which IT would then translate into a commitment to the cloud or to internally hosted technology. Given that the cloud is a reality, such a role could be a benefit both to the IT organization and the company at large. But few internal IT organizations are prepared to play the role, because their application plans all discriminate against the cloud in some way.

Most large enterprises probably can’t coordinate cloud adoption through any means other than using internal IT as a cloud services broker. Line departments that refuse to accept the concept, or an internal IT organization that refuses to accept the role of cloud services broker, eventually will create such discord, as well as workflow, security and compliance confusion, that it might even threaten company operations and profits. It is simply not logical to presume an IT future without the cloud, so it’s critical that the future be managed based on the same IT principles used to guide operations today. Only the IT organization can do that.

As logical as the role of fair broker is for internal IT, few organizations are prepared to play it. Because line department-driven cloud adoption is largely spurred by suspicion of internal IT, it’s important that a cloud brokerage plan be developed from a cloud perspective and not be seen as a defense of current data center practices.

Steps IT can take to become a cloud services broker

The first step in becoming a cloud broker is to build a unified resource plan for application hosting in “as-a-service” form. The plan should start by identifying platforms that are needed to host applications, and move to how those platforms are then realized on public cloud resources, as well as in the data center. The platform specifications must permit the platform to be priced out in any environment where it's realized, so organizations can compare costs. The goal is to create abstract views of application resources that can be applied both to internal and cloud deployments.
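As a rough illustration of what "priced out in any environment" could look like, here is a minimal sketch of an abstract platform spec costed against two environments. The platform name, resource shape and rates are entirely hypothetical assumptions, not figures from Nolle's article.

```python
# A minimal sketch: one abstract platform spec, priced per environment.
# All names and monthly rates below are illustrative assumptions.

PLATFORM = {
    "name": "java-web-tier",
    "requirements": {"vcpus": 4, "ram_gb": 16, "storage_gb": 200},
}

# Hypothetical monthly cost models per hosting environment.
RATES = {
    "public_cloud": {"per_vcpu": 30.0, "per_gb_ram": 4.0, "per_gb_storage": 0.10},
    "data_center":  {"per_vcpu": 22.0, "per_gb_ram": 3.0, "per_gb_storage": 0.05},
}

def monthly_cost(req, rate):
    """Price the abstract platform spec under one environment's rates."""
    return (req["vcpus"] * rate["per_vcpu"]
            + req["ram_gb"] * rate["per_gb_ram"]
            + req["storage_gb"] * rate["per_gb_storage"])

for env, rate in RATES.items():
    cost = monthly_cost(PLATFORM["requirements"], rate)
    print(f"{PLATFORM['name']} on {env}: ${cost:,.2f}/month")
```

The value of the abstraction is that line departments argue about one spec, not two vendor quotes in incompatible units.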

The next step in the resource plan is to define an integration, compliance and security plan that frames mechanisms that will be provided on each supported cloud or internal platform. This is a critical step because it’s the one that assures the individual platform choices can be harmonized with baseline IT processes to ensure information exchange and security. A map of current applications, components and workflows is helpful for this, and where a formal enterprise architecture model has been adopted, business process flows are of great value. The current practices can serve as a starting point, but be sure to adapt them to be cloud compatible.

Selling this approach to line departments can be challenging. And one tempting way to reduce the pushback is to offer the plan, and let line organizations suggest cloud applications and platforms, which IT would then certify and help integrate. This approach is most likely to win line department support, but it can introduce significant delays and chaos in later phases. It’s best to pick three top cloud provider candidates as part of the program, but then let line organizations go outside this group — accepting delay and additional costs in certification — if it’s necessary.

Selling a cloud services broker role requires planning

Experience shows that selling a cloud services broker role to line departments is easier if your resource plan includes specific tools and procedures for integrating cloud applications in any form — i.e., software as a service (SaaS), platform as a service (PaaS) or infrastructure as a service (IaaS) — and addressing the full spectrum of security, compliance and application lifecycle management (ALM). Such a plan not only provides confidence that the broker role you propose has substance and value, but also makes it clear that there are major tasks to be completed that line organizations will inherit if the IT broker doesn’t fill the requirements through the resource plan.
The first thing to look for in your tools and practices assessment is any vendor- or platform-specific technique. These will be at risk if line organizations can acquire SaaS or PaaS cloud elements, so look for generalized and open architectures for integration, deployment and change management.

When any vendor-specific issues have been resolved, look at ALM policies that validate information integrity, security and compliance goals to ensure that they are flexible enough to incorporate cloud-hosted elements.

Both the cloud and the data center will be part of virtually every enterprise IT application plan, so the unification of these critical resources into a single accountable whole is critical for long-term IT success. With a plan to do that, enterprises can navigate not only the technology shift the cloud generates, but the internal political shifts as well.

Source: TechTarget-How internal IT can become a cloud services broker by Tom Nolle

Why should I consider a microservices architecture?

Microservices offer a new way to architect and deploy enterprise applications. In microservices architecture, modular applications consist of small components or services that communicate and scale as needed. Microservices provide better scalability and use compute resources more efficiently than traditional software designs.
To understand microservices, it’s important to first understand traditional software models. Developers write traditional applications as a single, cohesive unit, where every feature and function is contained within the same executable, dynamic link library or other active codebase. They then provision a virtual machine or cloud instance with enough CPU and memory to operate the application, load the application onto the instance and start it. This process works fine for many simple tasks.

But the problem — especially with enterprise-class, client-server applications — is that traditional application capabilities are finite. Developers design that software to support only a limited number of resources, such as simultaneous client connections. As the application encounters a processing, memory, I/O or other limit, its performance starts to decline and, in extreme cases, it may crash. To work around bottlenecks, organizations need to provision and install another iteration of the entire application, which reduces the data center resources available for other tasks.

Think of traditional software design like building a house. The house is designed to hold a limited number of people and provide a finite number of services. Once you encounter a persistent bottleneck, you need a bigger house.

Software design has evolved to eliminate the bottleneck problem. Developers are increasingly segregating modern applications into separate components that use a common protocol, such as an API, to communicate and interoperate. Organizations deploy a microservices-based application as a series of interrelated components. The real power of this approach is that components are scalable; if a bottleneck occurs in one component, organizations can deploy more iterations of that component to handle the load.
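To make the component idea concrete, here is a minimal sketch of what one such component could look like, using only Python's standard library. The "inventory" service, its port and its data are hypothetical; a production service would add a real framework, health checks, logging and authentication.

```python
# A minimal sketch of one independently deployable microservice exposing a
# JSON API over HTTP. Endpoint, port and data are illustrative assumptions.

import json
from http.server import BaseHTTPRequestHandler, HTTPServer

INVENTORY = {"widget": 42, "gadget": 7}  # stand-in for this service's own data

class InventoryHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Each service owns one narrow capability; others call it over HTTP.
        item = self.path.strip("/")
        body = json.dumps({"item": item, "count": INVENTORY.get(item, 0)})
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body.encode())

if __name__ == "__main__":
    # Scale this component alone by running more copies behind a load balancer;
    # the rest of the application is untouched.
    HTTPServer(("0.0.0.0", 8080), InventoryHandler).serve_forever()
```

The scaling story in the paragraph above falls out of this shape: if inventory lookups become the bottleneck, you run more copies of this one process, not another copy of the whole application.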

A microservices architecture offers a scalable application design that uses resources more efficiently, and requires less overall compute power. In addition, microservices are an ideal complement to cloud services, such as event-driven and containerized computing.

Source: TechTarget-Why should I consider a microservices architecture? by Stephen J. Bigelow

How current industry mega trends tangibly affect the EUC industry. Part 4: Security

This is the fourth (and final) article in a series detailing several industry “mega” trends that I see in EUC today. The first article was about Hyper-Convergence, the second was about Application Management (Layering), and the most recent was about Cloud.

In this article I want to discuss another trend, ‘Security’, and how it will have an impact on our industry today and tomorrow for the bulk of our use cases across the world.

Boring and Invisible – Yet Important

Security is a difficult subject. If there’s too much of it then it is annoying, but if there is too little then bad things happen. Let’s face it: traditionally our End User Computing industry has had relatively little to do with security (*ducks*). I mean, outside of the virus scanners on PCs it really was not a big part of our [EUC] life. Of course there are the brave souls who dare to run antivirus on shared hosted desktop platforms or even hypervisors, but for the most part the job of security was left for the ‘firewall guy’. Well, you and the firewall guy need to have lunch together (often) because the world is changing rapidly.

Omni-Connected

One important factor is that the Enterprise IT world is becoming more and more connected. Where the firewall used to be the boundary of the Enterprise perimeter, this is no longer strictly the case. Think about it: with the ever-increasing consumption of cloud services / SaaS applications in enterprises, a larger portion of the stuff that IT is tasked to protect moves out of their network.

Don’t take my word for it. The segment called CASBs (Cloud Access Security Brokers) focuses exactly on this problem and has been exploding (in a good way) recently. Next to CASBs, there’s also the segment of more ‘traditional’ security vendors, which have all been trying to grow beyond firewalls for a while now. Much of this revolves around the fact that all malware or other malicious ‘stuff’ in your network has one thing in common: at one point or another this malicious content will attempt to communicate outside of your network – either to phone home, spread, talk to other ‘members,’ or whatever. That’s where the prime detection possibility is and that is where a lot of the new focus will be.
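As a deliberately simplified illustration of that detection idea, the sketch below flags outbound connections to destinations that are not on an allowlist. The log format, hosts and domains are all hypothetical assumptions; real CASBs and next-generation firewalls work on far richer signals than this.

```python
# A minimal sketch of egress monitoring: flag outbound traffic to unknown
# destinations. Log entries, hosts and domains are illustrative assumptions.

ALLOWED_DOMAINS = {"office365.com", "salesforce.com", "internal.corp"}

# Stand-in for parsed firewall/proxy logs: (source host, destination domain).
OUTBOUND = [
    ("pc-0142", "office365.com"),
    ("pc-0142", "xk3-beacon.example"),   # unknown destination -> suspicious
    ("srv-db01", "internal.corp"),
]

def suspicious(connections, allowlist):
    """Return connections whose destination is not on the allowlist."""
    return [(src, dst) for src, dst in connections if dst not in allowlist]

for src, dst in suspicious(OUTBOUND, ALLOWED_DOMAINS):
    print(f"ALERT: {src} contacted unrecognized destination {dst}")
```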

Cybercrime

Another important factor is the rise of Cybercrime. Cybercrime is growing fast and getting more and more organized, both for pure monetary reasons and for political and religious reasons. Whatever the reason, the effect was already witnessed in 2015: an unprecedented number of high-profile attacks have occurred and the year is not over yet (plus a lot of hacks are going on right now that have not been discovered yet, I am sure). Hacks ranged from high-profile financial services firms to prisoner records. Ransomware and Cryptoware are no longer just a problem for individual users. Companies are being targeted more and more, which is costing enterprises a ton of money. One survey showed that the average annual cost incurred by affected enterprises globally now stands at $7.7 million.

Budgets to fight cybercrime are also among the few budget categories that are increasing. For example, in 2016 the cybersecurity (CDM) budget for the US government alone is $14 billion. In a similar fashion, the UK plans to double its cybersecurity budget over the next 5 years. Finally, an additional important accelerator will be that legislation, especially in EMEA, will become even stricter in terms of who is held liable when a hack occurs. The simple fact is that a lot of organizations are not well equipped (yet) to deal with this new world, and that’s why we will see security have a big impact on End User Computing in 2016.

Security at the EUC vendors

When it comes to security, I think there are a couple of types of vendors in our End User Computing market that you will see creating or expanding their offerings. For the EUC Big 3, Microsoft kind of already made the first move when they acquired Adallom for $320M in September. I say “kind of” because while you may not directly work with this technology, you probably will indirectly. Security is also part of the bigger VMware proposition–it’s actually one of the five imperatives the CEO has for the company. I have not seen a specific security product (capability) from the EUC group at VMware, but I am sure we will in the next year (NSX is an example that is very close to VMware EUC already). As for Citrix, it would be no surprise to me if Citrix jumps on the security bandwagon as well (outside of the classic security benefits that ‘centralized computing’ offers, which aren’t unique to their products). Still, I have seen no major initiatives there yet, which kind of makes sense since they are rationalizing their product portfolio.

Another category is the User Environment Management (UEM) vendors. Two that come to mind for me are AppSense and RES Software. They’ve had some security capabilities in their products for a while now, and seem to be adding to them a lot more as time goes on (a trend that I think will continue).

I also think there’s great potential here for the more traditional End User Computing monitoring and analytics products to help their customers with these problems. That is actually quite important to realize–to be able to protect and secure the workspace you need to have detailed insights into that workspace. Since most of the workspace today is still Windows based, the current End User Computing monitoring and analytics products are in a great position to start providing these security services. Lakeside Software, for example, recently added a specific security capability in their Systrack product, and I am confident we will see some more security related developments from the End User Computing monitoring and analytics vendors in the next year.

Source: brianmadden.com-How current industry mega trends tangibly affect the EUC industry. Part 4: Security by Michel Roth

Business partner agreement: Protect your investments

When starting a channel business, making the wrong decisions about the business partner agreement and entity selection can have severe consequences down the line.

Starting your own channel business may be a journey best not gone on alone, as business partnerships can greatly increase your chances of financial survival. However, entrepreneurs can make several mistakes at this early stage of the process, such as failing to set down the necessary ground rules in a business partner agreement.
“If you’re an entrepreneur starting your own business, doing it on your own will put you at a disadvantage,” said Michael Corey, president of Ntirety Inc., a database services company based in Dedham, Mass., and a division of cloud company Hosting Inc. “Every study that I’ve seen has shown that a business is more likely to succeed if there are partners involved.”

The advantages of going into business with partners are numerous, but Corey cited one of the most important as having someone there to give you a break. “The first thing anyone who goes into entrepreneurship is going to learn is it’s not a 40-hours-a-week job. It’s an 80-hours-a-week job. So, at least with a partner, you have that benefit of taking a break … and knowing that when you come back, the business is still going to be there, and, more importantly, there’s somebody in the business that cares about it as much as you do,” Corey said.

Another benefit is that partnerships can broaden your business’s reach in terms of business contacts, skillsets and capabilities, he added.

Partnerships can have their disadvantages, of course, if you’re unprepared for scenarios that can crop up later on. For example, your business partner’s attitude about the work can change after the business begins to thrive. “Now all of a sudden the business is doing well and one of the partners says, ‘You know, I don’t have to work so hard anymore,'” Corey said.

“I think a common mistake is [to say,] ‘We’ll just split the profits,'” he added. “Well, that sounds good and you finally have those profits and you split them, but four years later you’re still splitting the profits, but you’re the guy who’s working 80 hours a week and the other guy’s working 40 hours a week. You’re not going to feel too good about that.”

Prepare for the worst in a business partner agreement

David Streit, principal and owner of Stephill Associates LLC, an IT services company based in New Jersey, has had a successful solo career as a managed services provider (MSP), opting to work with only the support of independent contractors and through collaborations with peers. He also relies on the support he receives from industry organizations like the ASCII Group and its resources like ASCII Link, an open forum for MSPs to share industry insight and advice. He said he’s in the process of forming an accountability peer group with another ASCII Group member who is located in Virginia.

Even though Streit has managed to work successfully on his own for 14 years, he admitted he can’t help but sometimes wish he had a partner to have grown with. At the same time, “the problem with partnerships is that most of them break up,” he said.

Many life situations could potentially break up a partnership, so at the outset partners must consider and discuss various hypothetical events that could occur and what should happen as a result. These events include the death of a partner, a partner becoming disabled, and a partner getting a divorce or deciding to leave the business, Corey said, or what he calls “The Four Ds.” “I think, too often people don’t have these conversations and then they get burned by it in a big way,” he said.

In Corey’s opinion, partners should first address these scenarios and set expectations in an informal “kitchen table” discussion before launching the business.
Dan Liutikas, managing attorney at InfoTech Law Advocates P.C. and chief legal officer and secretary at CompTIA, said in this early stage partners should determine the management structure of the company and equity of each partner. Management authority and company ownership are not the same thing, so these need to be treated separately. “Once you’ve identified those areas in terms of authority, then you get into areas” like deciding how an LLC’s membership units or a corporation’s shares of stock get transferred, Liutikas said. “There are a number of areas related to contingencies on the transfer of ownership that you really want to provide for in … a buy/sell agreement.”

Business partners must also select the entity of their company — a decision that many channel partners fail to give serious thought to, in Liutikas’ opinion. “Unfortunately, many of these [entity] selections become a little bit too commoditized and people don’t select the correct entity for what they’re actually trying to accomplish, which produces poor tax consequences for them,” he said. “It really doesn’t lend itself to templating and that sort of thing.”

The selection of an entity, whether it is a corporation, LLC or limited partnership, will affect how the partnership is documented, he said. For example, corporations are far more statutory than LLCs, which tend to allow for more flexible definitions of the partnership relationship.

“A lot of folks pick an entity because they know somebody else who has it, or because somebody said, ‘LLCs are a good way to go.’ But oftentimes there isn’t a full analysis done as to why you are choosing a corporation versus an LLC. … It’s entirely fact-specific in terms of what kind of business you’re going to run, as to why you choose one over another. And then of course there are tax consequences that run through those,” he said.

Other important decisions partners need to make include whether their business is going to be a lifestyle business or built to sell later on. “You don’t want one partner seeing all this profit and wanting to pull it out and the other partner wanting to reinvest it. You have to have a common vision of what your end game is,” Corey said.

Partners should also figure out a plan for compensation and at what point they can start taking money out of the business.

The value of a good consultation

With so many contingencies and aspects to consider, how are partners to cover all their bases when making decisions about their business and developing the business partner agreement? For Liutikas, it “starts with a good [legal] consultation on what entity would make the most sense for you and then preparing the appropriate documentation for it.”

In the lean, early days of a partnership and channel business, it may be hard for partners to take the appropriate measures to protect their investments. In Liutikas’ experience working with entrepreneurs, he’s observed a lot of people will bootstrap their way until they can get their business going. But trying to cut costs on initial consultations could have regrettable consequences in the long term.

Liutikas said it’s uncommon for solution providers to have received appropriate consultation on entity selection. “I think a lot of folks view entity as one of those simple legal items, and they just hop on the state website and register an entity, or they use one of the online services that are out there — ‘$99 gets you an entity.'”

Choosing the wrong entity can cost companies thousands of dollars in additional taxes later on — “all for saving potentially an extra $500 to $1,000 at the outset,” he said.

Source: TechTarget-Business partner agreement: Protect your investments by Spencer Smith

How current industry mega trends tangibly affect the EUC industry. Part 3: Cloud

This article is part of a series where we talk about the impact some of the megatrends will have on your End User Computing work. In this article I want to discuss another trend, Cloud, and how it will have an impact on our industry today or tomorrow for the bulk of our use cases across the world.

Cloud has been the buzzword for a while now, and it’s something that every vendor and technology tries (sometimes desperately) to attach itself to. That’s also the reason why it can be a little difficult to see through the proverbial smoke that is being blown. It’s also another great example of a consumer trend that has forced its way into Enterprise IT. This year we’ve seen some pretty tangible examples of ‘Cloud’ becoming a part of day-to-day End User Computing from the three big vendors we deal with the most in End User Computing (on the desktop virtualization side).

The Cloud timeline at the EUC Big 3

As a company, VMware is all over Cloud, but this really didn’t materialize in VMware End User Computing product offerings until about a year ago. It could be said that VMware started it with VMware Horizon Air late in 2014 but that really was only a variation on Horizon DaaS (from their Desktone acquisition).

Going on a tangent for a bit: personally, I think that (pure) DaaS isn’t going to have a big impact on the End User Computing market in the near future because of the problems of application and data locality for most current enterprise End User Computing environments. As the application landscape changes, this ‘problem’ will gradually fix itself around the world over the next 3-10 years, but for the time being we’re still faced with the fact that many Enterprises great and small still have to deal with applications and data that are bound to their local data centers.

Given Microsoft’s strategic bet on Azure, it was only a matter of time before this would have a large effect on the part of Microsoft that we deal with a lot: Remote Desktop Services. In late 2014 / early 2015 Microsoft made available Azure RemoteApp (ARA), which focuses more on an “application” play (vs. a full “Desktop” as-a-Service), but under the covers is RDSH in Azure, managed by Microsoft, with cloud scale and the Azure management capabilities and ease of use. Over the year we’ve seen Microsoft make many incremental improvements to the service (one of the key Cloud benefits), making it more and more valuable to their customers. In some ways you could think of ARA as the future of RDSH, especially when you consider Microsoft’s drive for “Windows as a Service” and its continuous delivery direction. One thing that ARA doesn’t deliver, however, is hybrid ability, meaning the ability to manage your on-premises RDSH deployment as well as your Cloud (Azure) hosted RDSHs.

This hybrid approach is something that both Citrix and VMware are very serious about. Citrix was the first to market when they released Citrix Workspace Cloud (CWC) a couple of months ago. In many ways Citrix Workspace Cloud represents the future of the core Citrix products (which has become even more apparent with the big changes in their business strategy Citrix recently announced). VMware is not far behind, because they are getting ready to release Project Enzo early next year.

Since all of these Cloud products from the main vendors are relatively new, I think next year we will start to see the big uptake and quick incremental updates and changes (to pricing for example) as Enterprises start to use these new technologies.

Hybrid capabilities

I think that a key factor for success for these vendors is just how good their hybrid capabilities will be–combining cloud-based desktop virtualization and on-premises desktop virtualization, but also, maybe even especially, good old physical desktops (you know, the other 90% of the End User Computing market). Success and adoption will also depend on how well these offerings integrate with existing deployments (equaling dollars spent), which is especially important for Citrix, who have deployments in almost every enterprise. It will also be important to see how the different vendors support the major public clouds in these offerings. It seems VMware and Microsoft will focus on their own public cloud(s) first, where Citrix is more focused on “any cloud” (their words, not mine).

Overall, the hybrid capabilities of these offerings are going to be key, especially in the first year(s) of these products entering the market. Getting the right blend is not going to be easy but whoever gets it right will be in the fast lane.

Source: brianmadden.com-How current industry mega trends tangibly affect the EUC industry. Part 3: Cloud by Michel Roth

Microsoft Azure partnership with HPE: Channel reaction

News analysis: The recent alliance between Microsoft Azure and HPE could eventually open opportunities for the channel, but appears to have minimal short-term impact.

The Microsoft Azure partnership with Hewlett Packard Enterprise promises to give channel partners greater opportunities and more tools to win over customers to the companies’ newly released hybrid cloud products and services.
Of interest to channel partners are plans to have Microsoft join the HPE Composable Infrastructure partner program to improve the automation and integration of Microsoft System Center and HPE OneView orchestration tools with today’s IT infrastructure, while also ramping up plans for next-generation infrastructure.

To help customers’ hybrid cloud migration projects, Hewlett Packard Enterprise (HPE) has joined Microsoft programs to assist with the adoption of mobility, identity and productivity features. For example, HPE will sell Microsoft cloud offerings across Azure, the Microsoft Enterprise Mobility Suite and Office 365 through its participation in Microsoft’s Cloud Solution Provider program.

The partnership faces stiff competition from Amazon Web Services (AWS), and Dell’s acquisition of EMC will impact the range of choices partners and customers will have in the year ahead. In a cloud services market where consolidation is rampant and new offerings are introduced frequently, companies’ positive projections for their cloud products and services can quickly evaporate, leaving channel partners in the lurch.

Microsoft Azure partnership: Background

The extended Microsoft Azure partnership, in which Microsoft’s offering emerges as HPE’s preferred public cloud partner, follows HPE’s decision to exit the public cloud. Almost two years into its Helion Public Cloud initiative, the company announced that it will sunset the HPE Helion Public Cloud on January 31, 2016 and terminate customer agreements for public cloud services.

The company’s public cloud uptake appears to have been limited. Research firm IDC was unable to quantify Helion’s adoption among enterprise customers as a definitive percentage.

“We primarily measure public cloud and HP does not even come on the public cloud radar,” said Larry Carvalho, IDC analyst covering platform as a service (PaaS).

Carvalho noted that HPE and Microsoft Azure are playing catch-up to AWS.

“Azure is behind AWS right now in IaaS (Infrastructure as a Service) and PaaS functionality. However, in some ways HPE gets the benefit of bringing Azure to the enterprise so HPE gains a lot out of this partnership,” Carvalho said.

Partner reaction

One Microsoft Azure partner seeking to find the best cloud offerings for its clients is Syntel Inc., a cloud provider headquartered in Troy, Mich. Reacting to news of the partnership, Ashok Balasubramanian, head of the Services Transformation Group at Syntel, would only say that finding the right products and services from multiple cloud vendors is a practical approach to providing cloud offerings.

“Syntel is ‘product agnostic,’ so we do not endorse one product over another. Our goal is to choose the most suitable tool to meet our clients’ business needs,” Balasubramanian said.

Joe Brown, president and co-founder of Accelera Solutions, a cloud solutions provider based in Fairfax, Va., said he is encouraged by the alignment between HPE and Microsoft but noted that he isn’t expecting miracles. Accelera partners with Microsoft.

“The HPE-Microsoft Azure alignment will certainly legitimize and support the Azure message that partners like Accelera are taking to market,” Brown said. “I don’t expect it to create any real revenue contribution or change the way we’re selling or marketing.”

Source: TechTarget-Microsoft Azure partnership with HPE: Channel reaction  by Nicole Lewis

How current industry mega trends tangibly affect the EUC industry. Part 2: Layering

To me the rise of layering was initially a bit of a surprise. After all, wasn’t there already application virtualization in products like App-V and ThinApp? The key thing to understand is that layering has a different goal than traditional application virtualization. Where application virtualization (App-V for example) had the goal of isolating applications from the OS (and other applications), layering is much more about improvements in application management.

Layering is becoming more and more popular because it makes it a lot easier to manage many applications across different platforms. This is important because in the IT departments of today more will need to be done with less. And layering is not limited to applications–a lot of vendors also offer the same management benefits for other layers of the workspace, like personalization or the OS.
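To make the concept concrete, here is a minimal sketch, in Python, of the layering idea itself: a workspace is composed by stacking independently managed layers, with later layers winning on conflicts. The layer contents are purely hypothetical and this is an analogy, not how any particular vendor's product is implemented.

```python
# A minimal sketch of layer composition: each layer is managed on its own,
# and the workspace is assembled by stacking them. Contents are illustrative.

OS_LAYER       = {"os": "Windows 10", "browser": "Edge"}
APP_LAYER      = {"office": "2016", "browser": "Chrome"}   # overrides Edge
PERSONAL_LAYER = {"wallpaper": "beach.jpg", "signature": "M. Roth"}

def compose(*layers):
    """Apply layers bottom-up; keys in later layers win on conflict."""
    workspace = {}
    for layer in layers:
        workspace.update(layer)
    return workspace

print(compose(OS_LAYER, APP_LAYER, PERSONAL_LAYER))
```

The management win is exactly this separation: update the app layer once and every composed workspace picks it up, regardless of which OS or personalization layers it is stacked with.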

The ‘EUC Big 3’ (Microsoft, Citrix and VMware) have had layering in some shape or form for a while now. Citrix got layering technology from their Ringcube acquisition in 2011, but in my mind never really positioned it as layering until they introduced AppDisk.

When VMware acquired Wanova in 2012 they also got layering technology. For some reason, the Wanova acquisition hasn’t yet come to fruition at VMware. I am sure there are many factors to this, but Wanova was originally positioned and designed as a backup and recovery solution for physical PCs using something that resembled layering, rather than as a desktop virtualization solution, which is what most of EUC at VMware is about. I still think the Wanova technology has great potential and should be a great asset for VMware. Maybe for those reasons, VMware acquired another layering solution, CloudVolumes, late in the summer of 2014, which they renamed AppVolumes. AppVolumes is much more focused on putting layering capabilities to work by adding value to the desktop virtualization piece of End User Computing. They also acquired Immidio to add a personalization element.

Finally, Microsoft also has a layering technology. They might not position it that way, but they do have it. I’m not talking about Server App-V (or even traditional App-V), but rather about the User Profile Disk in Windows Server 2012 R2, which effectively mounts a VHD containing the user profile at session logon.

Layering is not new at all: as far back as 2002 a company was doing this. Actually, it is quite an interesting story. A company called FSlogic (not a typo) was doing layering back then. This company was acquired by Altiris, and Altiris was later acquired by Symantec (which we all know is where great products go to die). I’m kidding–it is now part of the Symantec Endpoint Virtualization Suite.

I can hear you think, “FSLogic? Isn’t there a company called FSLogix?”, and you’d be right. In fact, many of the same people from FSLogic are part of FSLogix, as well as Kevin Goodman (who you might know as the founder of RTO Software, which is now part of VMware). FSLogix has been in business since 2013 and is all about application layering with a very lightweight approach.

Another one that sticks out for me is Unidesk, who’s been shipping their product since 2010. They employ some of the heavy hitters from our industry, including Ron Oglesby. Some people I know refer to Unidesk as the Mercedes of layering. I guess it depends on how much you like Mercedes (and no, they are not a part of the Volkswagen group AFAIK 😉 ). There are many others too, including (but not limited to) Liquidware Labs, Dell (vWorkspace with WSM) and Ceedo. (Full disclosure: I was in charge of vWorkspace for a long time in the past.)

For me, two others are worth mentioning separately because they take a slightly different approach. According to Gartner they form a separate category called “Cloud Application Virtualization”. I would propose to change that to Cloud Application Layering. The two I mean are Cloudhouse and Numecent, because they add the ability to distribute their application layers via the cloud. This makes it very easy to have access to the applications in these layers anywhere, which, for example, would be useful in BYOD scenarios.

Anyway, no matter how you slice it or dice it, the sheer fact that this many vendors are able to make a living on layering proves there’s a problem in the market that needs to be solved, so I think that in the short term layering is going to have a big impact on the way End User Computing is done.

Source: brianmadden.com-How current industry mega trends tangibly affect the EUC industry. Part 2: Layering by Michel Roth

6 tips to identify project management red flags

Here’s how to tell if you’re headed for project management success or failure.

Robert Burns famously wrote, “The best laid plans of mice and men often go awry,” and while he was addressing a mouse in his poem, his words sum up the day-to-day struggles of IT project managers. The key to successful project management is being able not only to balance the “triple constraints” of time, resources and quality, but also to identify red flags that could signal an impending project disaster.

“The most important thing to remember is that regardless of how well you plan, how much you build in contingencies for all the expected ways things could go sideways, something else, something you didn’t expect, will always happen. Something is always going to go wrong. If you can acknowledge and accept that, and then understand that sometimes these things aren’t in your control, you’re in a better position,” says Tushar Patel, senior vice president of marketing for project and portfolio management solutions company Innotas.

Red flag: Focus on output rather than outcome

There are signs and signals, though, that can indicate when a project’s in trouble. One of the easiest to see is a focus on output rather than outcome, Patel says. Project managers must first determine what the desired outcome of a project is, and what value that project will bring to the business, and then make sure that all the steps along the way — the output — are contributing to that larger goal the organization wants to accomplish.

“PMs are supposed to look at output: project completion schedules, budgets, resources, but if you’re completely focused on those, you’ll miss the bigger picture of how your project fits into the larger business strategy. It’s like if you’re taking a road trip and, at the end you say, ‘Great! We only had one gas stop and one food stop, and we made fantastic time,’ except you ended up in Southern California when you were trying to get to Seattle,” he says.

Red flag: Focus on cost instead of value

Project managers bear the burden of proving to the larger business and the C-suite that they’re not just a cost center, but provide value to the organization, says Patel. An undue focus on IT project management as a major cost center is another red flag, and one that shouldn’t be taken lightly.

“You have to focus on showing your business leadership that the costs you’re incurring are directly in alignment with and furthering the business’ goals,” he says. Let’s say, for example, you’re a project manager tasked with helping the company expand internationally and open an office in Brazil. You first have to think of everything that entails: renting or purchasing office space, purchasing and deploying infrastructure, setting up internet connections, hiring talent and making connections with customers. Then, you need to draw direct correlations between what you’re spending and how that aligns with the business’ goals of succeeding in a new, international market.

“In this example, if your project goes over budget on infrastructure, you have to be able to argue that, say, shipping and deployment costs are higher in Brazil, or that you’re purchasing higher quality networking equipment so that you’re directly in line with the organization’s strategy,” he says. Proving that project management isn’t just a cost center but a critical, strategic value-add is incredibly important.

Source: CIO.com-6 tips to identify project management red flags By Sharon Florentine

How current industry mega trends tangibly affect the EUC industry. Part 1: Hyper-convergence

Every year or so the leading analysts of our industries (I’m looking at you, Gartner, Forrester and IDC) come up with new (mega)trends that will disrupt our industry. Since I’m a no-nonsense kind of guy (being Dutch will do that to you), it often feels like hot air. These trends usually start in the consumer space and later bleed over into the enterprise space, in this case End User Computing – the textbook example of the effects of Consumerization of IT. Smartphones are an often-used example, but AppStores are a good one too. Consider that the Apple AppStore launched in 2008 and today, 7 years later, we are slowly seeing AppStores being adopted more and more in Enterprises.

This year and next year, however, a lot of these (sub)trends will have a tangible impact on your End User Computing work and in these articles I want to share some of that impact with you.

In this set of articles (in no particular order) we’re going to talk about some of the trends that tangibly impact our industry today or tomorrow for the bulk of our use cases across the world. I specifically say across the world because there are still very different new-tech adoption cycles across the world. Typically, the US is the first to adopt new(er) technologies, followed by Northern Europe about 1-2 years later. APACJ usually follows 1-2 years after that, followed by the rest of the civilized world (I’ll leave it to you to define “civilized”). Also know that this article is not about the Workspace of 2020 and beyond, where you might talk to Siri/Cortana/Echo/GoogleNow to get your work done and have your lunch in virtual reality, so keep that in mind.

The first trend I want to explore is hyper-convergence and how it will affect our industry in the coming years for the bulk of the use cases around the world.

It’s not about the hardware

I’ll start off by saying that I am not a hardware guy. Trust me, I know. I worked (full disclosure) at a company that is big on hardware, and I can decisively say I am not a hardware guy. Ironically, the fact that I am not a hardware guy actually proves the impact that hyper-convergence has on the End User Computing industry. Let’s face it, the important parts of End User Computing are about software, not about hardware.

That is, except for desktop virtualization.

Pretty soon after VDI started to become popular, many admins would wake up screaming four letters in the middle of the night. You guessed it: IOPS! The storage wars ensued and all of a sudden one had to be a storage expert to be able to host virtual desktops successfully at any scale. That’s just silly. It would be the same if you were expected to be able to piece together your own engine if you wanted to drive a car. IT departments want to focus on delivering the perfect workspace with the right applications, data, security, etc., not on what the write penalty will be with Office 2016 and RAID 10+11-5. As it turns out, many of the vendors in the hyper-converged space feel the same way.
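For readers who do want a taste of the math the author is happy to leave behind, here is a minimal back-of-envelope sketch. The RAID write penalties (2 for RAID 10, 4 for RAID 5, 6 for RAID 6) are the standard rules of thumb; the desktop count, per-desktop IOPS and write ratio are hypothetical assumptions.

```python
# A minimal sketch of VDI storage sizing: backend IOPS required for a given
# frontend workload under different RAID write penalties. Workload numbers
# are illustrative assumptions; the penalties are standard rules of thumb.

RAID_WRITE_PENALTY = {"RAID 10": 2, "RAID 5": 4, "RAID 6": 6}

def backend_iops(frontend_iops, write_ratio, raid_level):
    """Reads pass through 1:1; each write costs the RAID penalty in backend I/Os."""
    reads = frontend_iops * (1 - write_ratio)
    writes = frontend_iops * write_ratio * RAID_WRITE_PENALTY[raid_level]
    return reads + writes

# Example: 500 desktops at 20 IOPS each, 80% writes (steady-state VDI skews
# heavily toward writes).
for raid in RAID_WRITE_PENALTY:
    print(f"{raid}: {backend_iops(500 * 20, 0.8, raid):,.0f} backend IOPS")
```

This is exactly the kind of arithmetic that hyper-converged vendors aim to make invisible with deduplication, caching and write optimization.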

VDI leads the way

Many of the vendors in the hyper-converged space have had their early success with desktop virtualization. That makes sense, because desktop virtualization, or more specifically VDI, provided unique challenges for the storage that already existed in most datacenters at the time. VDI requires more storage and more IOPS, with very high peaks and deep valleys.

Since the good hyper-converged vendors (in my mind) are really about software, they’ve been in a good position to solve the ‘VDI storage problem’ with software-based features like (inline) deduplication, storage tiering, compression, read caching and write optimizations. Additionally, these hyper-converged vendors often offer easy ways to get high availability and DR, which makes the whole persistent vs. non-persistent VDI debate a lot less hard from a cost perspective (the management part of it is a separate story).

Another great benefit of hyper-convergence is how easy it is to scale up (often referred to as Web-scale IT). All you need to do is add another unit/node to get X% more capacity while still managing it all from the same interface. According to the hyper-converged vendors this can be done at a lower cost than ‘traditional’ solutions.

Players

If you decide that desktop virtualization is part of your End User Computing strategy, then you should not have to worry about the physical hardware. This is what hyper-convergence in End User Computing is all about to me. Two of my personal favorites are Atlantis Computing and Nutanix.

Atlantis Computing is certainly a great example of a company that had their early success in VDI, and they most certainly have the right DNA. I remember back in 2008, when I was at Provision, we talked to Atlantis about integrating with ILIO (which actually had layering capabilities as well!). Actually, it is only fairly recently that Atlantis entered the hyper-converged space, with the announcement of their HyperScale hardware platform earlier in 2015.

While Nutanix is more of a new player compared to the old guard of storage (they shipped their first hyper-converged product in 2011), they have seen some amazing growth (they are one of those ‘Unicorns’) and have a valuation of $2B+. What I like about Nutanix are their management capabilities and their drive to make both the datacenter infrastructure and the virtualization infrastructure ‘invisible.’ Of course, keep in mind that these are just two companies of dozens more.

I think we will see hyper-converged hardware having a big impact in End User Computing projects because of the increased mindshare, commoditization, and lower prices due to increasing competition in this space. Additionally, many people are learning that other workloads in the datacenter can also benefit from hyper-convergence, and as such you will see (even) more of it in your datacenters.

Source: brianmadden.com-How current industry mega trends tangibly affect the EUC industry. Part 1: Hyper-convergence by Michel Roth