Showing posts with label Project Success. Show all posts

Tuesday, August 13, 2024

A Step-by-Step Guide to Business Case Validation


Creating a business case is a systematic process designed to justify a proposed project, investment, or decision within a business context. 

A strong business case typically includes an introduction with background information, a clear problem or opportunity statement, a detailed analysis of options, a risk assessment, a financial analysis, a proposed solution, and a high-level implementation plan.

But validating your business case is just as important as creating it. 

The validation process is essential for confirming that the proposed initiative is likely to achieve its intended outcomes and align with organizational goals.

I have validated many business cases, both for my clients and as an active angel investor, and if there is one thing I have learned, it is the critical importance of ensuring that a business case is both robust and realistic before committing significant resources. 

Over the years I have developed a structured approach that I want to share with you.

1) Review the Problem Statement or Opportunity

Clarity and Accuracy: Ensure the problem or opportunity is clearly articulated and well understood. Question whether the impact of not addressing the problem or missing the opportunity is accurately presented.

See my article "Understanding Your Problem Is Half the Solution (Actually the Most Important Half)" for some further reading on this topic.

2) Scrutinize Assumptions

Identify and Test Assumptions: List and validate assumptions related to market conditions, customer behavior, cost estimates, and revenue projections. Compare them with historical data and industry benchmarks to ensure they are realistic.

Scenario Analysis: Conduct best-case, worst-case, and most likely scenarios to test the sensitivity of the business case to changes in key assumptions.
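As a quick illustration of scenario analysis, the same net-benefit calculation can be rerun under best-case, worst-case, and most-likely assumptions to see how sensitive the case is to changes in them. All figures below are hypothetical, not taken from any real business case.

```python
# Hypothetical scenario analysis: rerun the business case's net benefit
# under best-, worst-, and most-likely assumptions. All figures are
# illustrative.

def net_benefit(annual_revenue, annual_cost, years):
    """Undiscounted net benefit over the project's life."""
    return (annual_revenue - annual_cost) * years

scenarios = {
    "best":        net_benefit(annual_revenue=500_000, annual_cost=300_000, years=5),
    "most_likely": net_benefit(annual_revenue=400_000, annual_cost=320_000, years=5),
    "worst":       net_benefit(annual_revenue=300_000, annual_cost=350_000, years=5),
}

for name, value in scenarios.items():
    print(f"{name:>12}: {value:>10,.0f}")
```

If the worst case turns the net benefit negative, as it does here, the business case should explain which assumptions drive that swing and how they will be monitored.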

3) Evaluate the Analysis of Options

Comprehensive Consideration: Ensure all reasonable options, including doing nothing, have been considered. 

Verify Estimates and Projections: Ensure cost estimates are accurate and comprehensive, and validate revenue projections against market data and trends. Recalculate ROI and perform sensitivity analyses to assess the impact of changes in key variables.

Focus on Economic Benefits: In my opinion, ALL benefits of a technology project should be expressed in dollars (or any other currency). To make estimating the benefits of a project easier and more realistic, I use a simple model to assess the economic benefits of a project. It consists of five benefit types (or buckets): Increased Revenue, Protected Revenue, Reduced Costs, Avoided Costs, and Positive Impacts.
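The five-bucket model can be sketched as a simple tally. The bucket names come from the model described above; the function and project figures are hypothetical.

```python
# Sketch of the five-bucket economic benefits model. Bucket names follow
# the model above; the project figures below are purely illustrative.

BENEFIT_TYPES = (
    "increased_revenue",
    "protected_revenue",
    "reduced_costs",
    "avoided_costs",
    "positive_impacts",
)

def total_annual_benefit(estimates):
    """Sum the dollar value estimated for each benefit bucket.

    Rejects anything that doesn't fit one of the five buckets, which
    forces every claimed benefit to be expressed in currency terms.
    """
    unknown = set(estimates) - set(BENEFIT_TYPES)
    if unknown:
        raise ValueError(f"Benefit(s) not in the model: {sorted(unknown)}")
    return sum(estimates.values())

project = {
    "increased_revenue": 250_000,
    "reduced_costs": 120_000,
    "avoided_costs": 40_000,
}
print(total_annual_benefit(project))  # 410000
```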

Total Cost of Ownership (TCO): TCO is an analysis meant to uncover all the lifetime costs that follow from owning a solution. As a result, TCO is sometimes called 'life cycle cost analysis.' Never just look at the implementation or acquisition costs. Always consider TCO when looking at the costs of a solution. 
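A minimal sketch of why TCO matters: the solution that is cheaper to acquire can be more expensive to own. All numbers are hypothetical.

```python
# Hypothetical TCO comparison: acquisition cost alone can be misleading.

def tco(acquisition, annual_costs, years):
    """Lifetime cost: one-off acquisition plus recurring costs
    (licences, hosting, support, training, upgrades)."""
    return acquisition + annual_costs * years

# Solution A is cheaper to buy; Solution B is cheaper to own over 5 years.
tco_a = tco(acquisition=100_000, annual_costs=60_000, years=5)  # 400,000
tco_b = tco(acquisition=180_000, annual_costs=35_000, years=5)  # 355,000
print(tco_a, tco_b)
```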

Time Value of Money: Time to value (TTV) measures the length of time necessary to finish a project and start realizing its benefits. One project valuation method incorporating this concept is the payback period (PB). There is one problem with the payback period: it ignores the time value of money (TVM). That is why some project valuation methods, such as the internal rate of return (IRR) and net present value (NPV), include the TVM aspect.
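The difference between the payback period and TVM-aware methods can be shown in a few lines. The cash flows are hypothetical, and IRR is omitted because it needs a root-finder.

```python
# Hypothetical cash flows: year 0 is the investment, years 1-4 the returns.

def payback_period(cash_flows):
    """Years until cumulative (undiscounted) cash flow turns non-negative;
    returns None if the project never pays back."""
    cumulative = 0.0
    for year, cf in enumerate(cash_flows):
        cumulative += cf
        if cumulative >= 0:
            return year
    return None

def npv(rate, cash_flows):
    """Net present value: discount each year's cash flow back to today."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

flows = [-300_000, 100_000, 100_000, 100_000, 100_000]
print(payback_period(flows))    # 3 years, regardless of discount rate
print(round(npv(0.10, flows)))  # positive at a 10% discount rate
print(round(npv(0.20, flows)))  # negative at 20%: same project, different verdict
```

The payback period gives the same answer at any discount rate, while NPV can flip the investment decision as the rate changes, which is exactly the TVM effect described above.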

Unbiased Evaluation: Check if the criteria for evaluating options are relevant and unbiased, and consider whether alternative criteria might lead to different recommendations.

For more details on the financial valuation of your options, have a look at my eBook The Project Valuation Model™. You can download it for free here.

4) Examine the Proposed Solution

Feasibility: Assess whether the proposed solution is technically, financially, and operationally feasible, with realistic timelines.

Strategic Alignment: Verify that the solution aligns with the organization's broader strategic goals and represents the best value. 

See my article "Do Your Projects and Initiatives Support Your Strategy?" for some further reading on the topic.

5) Engage Stakeholders

Involvement and Feedback: Engage key stakeholders, including executives and subject matter experts, to gather feedback and address concerns. Their support is critical to the project's success.

See my article "10 Principles of Stakeholder Engagement" for some further reading on the topic.

6) Perform a Risk Assessment

Comprehensive Risk Analysis: Review the risk assessment to ensure all significant risks are identified and properly analyzed. Evaluate the feasibility of risk mitigation strategies and ensure contingency plans are in place.

See my article "Risk Management Is Project Management for Adults" for some further reading on the topic.

7) Review Legal, Regulatory, and Ethical Considerations

Compliance and Ethics: Ensure the project complies with all relevant laws, regulations, and industry standards. Consider any environmental, social, and ethical implications.

8) Assess Market and Competitive Analysis

Market and Competitive Validation: Reassess market conditions and competitive responses to ensure the business case remains relevant and viable in the current environment.

9) Evaluate Implementation Feasibility

Resource and Timeline Viability: Confirm that the necessary resources are available and that the proposed timeline is realistic. Consider conducting a pilot to validate key aspects of the business case.

Opportunity Cost: If you implement the proposed solution, what other initiatives can't you do? Is it still worth it?

Cost of Delay: What does it cost you if you do the project slower or later? Is there urgency?

For more details on the opportunity cost and cost of delay of your initiative, have a look at my eBook The Project Valuation Model™. You can download it for free here.

10) Seek Third-Party Review

External Validation: Consider an independent review by a third-party expert to provide objective insights and increase the credibility of the business case. 

See for example my Independent Business Case Review service.

11) Final Review

Final Review: Ensure all sections of the business case are complete, coherent, and consistent. Revise as necessary based on the validation process.

Best Practices

Documentation: Keep a detailed record of validation steps, findings, and any revisions made to create a clear audit trail.

Stakeholder Engagement: Maintain clarity and avoid jargon to ensure understanding and buy-in from all stakeholders.

Data-Driven Analysis: Base your analysis and recommendations on solid data and evidence.

Constructive Approach: Focus on strengthening the business case rather than undermining it, using challenges to ensure the best possible outcome.

In a nutshell: Effective validation ensures that any weaknesses in the business case are addressed before committing significant resources, thereby reducing the risk of failure and increasing the likelihood of success.

Are you an executive sponsor, a steering committee member, or a non-executive board member who wants an unbiased expert view of your business case? Then my Independent Business Case Review is what you are looking for.


Monday, July 22, 2024

The Most Important Role on Any Large Transformation Project

Change Management and Your CAST Of Characters

The most important role on a large transformation project is the project sponsor. 

Not the project manager. 

According to the Project Management Institute (PMI)'s 2018 Pulse of the Profession In-Depth Report, "1 in 4 organisations (26%) report that the primary cause of failed projects is inadequate sponsor support". 

By contrast, "organisations with a higher percentage of projects that include actively engaged executive sponsors, report 40% more successful projects than those with a lower percentage of projects with actively engaged sponsors".

And according to the 2015 Annual Review of Projects of the UK's National Audit Office, “the effectiveness of the project sponsor is the best single predictor of project success or failure”.

Project sponsors on large and complex multi-million-dollar transformation projects are often senior executives, and most are not trained in any way to be successful in their executive sponsor role.

Nor do they take the time that is needed to execute this role.

Often the same is the case for the project steering committee members.

Guess what happens with these projects?


Monday, November 14, 2022

How Your Rollout in Waves Can End in a Tsunami


Many multinational organizations are bringing larger system implementations to a screeching halt because they misunderstand what it means to do a rollout in waves. 

We’re probably all familiar with the “phased rollout”. A phased rollout means you roll a project out to all targeted users at once but don’t deploy all of its planned functionality.

A good example of this would be rolling out a new CRM system to your organization. You go live in the first phase with Contact, Client, and Opportunity Management, and Account Management and Pipeline Management follow in the second phase.

Another popular type of rollout is the so-called staged rollout (also known as a rollout in waves). A rollout in waves or stages means that all the planned functionalities will be rolled out at once, but not for all users.

A rollout in waves gives you time to analyze the system’s quality, stability, and performance against your business goals. You can then decide if you want to roll out the system to more users, wait for more data, or stop the rollout. 

A rollout in waves is one of the core building blocks of making continuous delivery a reality. Facebook, Netflix, Microsoft, Google, and similar companies all rely heavily on staged rollouts.
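In software terms, the deterministic bucketing behind such staged rollouts can be sketched like this. It is a generic illustration, not any specific vendor's mechanism.

```python
# Generic staged-rollout gate: hash each user into a stable bucket in
# [0, 100) and widen the admitted percentage wave by wave. Illustrative
# only; real feature-flag systems add targeting rules and kill switches.

import hashlib

def in_rollout(user_id, percent):
    """True if this user falls inside the current rollout percentage.
    The hash makes the decision stable: the same user gets the same
    answer until the percentage itself changes."""
    digest = hashlib.sha256(user_id.encode("utf-8")).hexdigest()
    return int(digest, 16) % 100 < percent

users = ["anna", "ben", "carla", "dev", "erin"]
wave1 = {u for u in users if in_rollout(u, 10)}   # first wave: 10%
wave2 = {u for u in users if in_rollout(u, 50)}   # second wave: 50%
assert wave1 <= wave2  # widening the wave never rolls anyone back
```

Because each wave only widens the percentage, every earlier wave is a subset of the next, which is what lets you pause, analyze, and then decide whether to continue.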

One wave rollout method frequently used by multinational companies for new system implementation is the rollout by country or geographical territory. 

This is the approach preferred by companies implementing a new CRM, ERP, HCM, or some other key business application. Sometimes it's combined with a phased approach.

Rolling out in waves is usually a good idea, especially compared to a “big bang” rollout. 

But before undertaking a rollout in waves, you have to carefully consider the following three realities:

1) The moment you switch on a new system in one country, you'll need to address a bunch of Business As Usual (BAU) activities, including Release Management, Change Management, New User Training … you name it. Your users will also discover bugs in the system and/or interfaces that weren't found during testing. Many of them will be critical and need to be fixed ASAP. You'll probably find that performance issues are more common than not. Some companies call the first few months "Hyper Care" or some equivalent, but it is nothing other than BAU.

2) As is always the case with a new system, it won’t work completely as expected. In addition to the bugs that need to be addressed within the BAU process, you’ll have a high number of Change Requests, because only now will users realize they need additional or different functionality to do their work. Again, a number of these requests will be critical and/or urgent. Users will probably ask for many additional reports because they don’t understand the data they see in the new system. If you combine your rollout in waves with a phased rollout, you’ll need to build and test the functionalities for the next phase.

3) At the same time, you’ll want to proceed with the next waves of your rollout, and you’ll need people to work on this. Think about discovery, migration, configuration, training, etc. for each new country that needs to be onboarded. The big idea is always to have one system for everyone, but local legislation and regulations and differences in how business is done in each country will force you to implement additional Change Requests in the system.

The number-one mistake I see is that organizations allocate a single team to accomplish all of the above tasks after the first-wave rollout. This approach always fails miserably and will bring the rollout to a screeching halt.

For a successful rollout in waves, you’ll need three different teams after the first wave:  one for BAU activities, one to deliver Change Requests, and one to onboard additional waves. Some people may work on more than one team, but this really should be the exception.

You’ll need to plan and budget for these teams, hire and train people for them, and define their organizational setup. 

And you’ll need to do all of this before you go live with the first wave – not after!

In a nutshell: You will need three teams for a successful system rollout in waves.


Sunday, October 16, 2022

Case Study 16: Nike’s 100 Million Dollar Supply Chain "Speed bump"


“This is what you get for 400 million, huh?” 

Nike President and CEO Phil Knight famously raised the question in a conference call days before announcing the company would miss its third-quarter earnings by at least 28% due to a glitch in its new supply chain management software. The announcement would then send Nike's stock down 19.8%. In addition, Dallas-based supply-chain vendor i2 Technologies, on which Nike placed the blame, would suffer a 22.4% drop in stock price.

The relationship would ultimately cost Nike an estimated $100 million. Each company blamed the other for the failure, but the damage could have been dramatically reduced if realistic expectations had been set early on and a proper software implementation plan had been put in place. Most companies wouldn’t overcome such a disastrous supply chain glitch or “speed bump,” as Knight would call it, but Nike would recover due to its dominant position in the retail footwear and apparel market.

In 1999, two years before Knight’s famous outburst, Nike paid i2 $10 million to centralize its supply, demand, and collaboration planning system with a total estimated implementation cost of $40 million. Initially, i2 was the first phase of The Nike Supply Chain (NSC) project. The plan was to implement i2 to replace the existing system and introduce enterprise resource planning (ERP) software from SAP and customer relationship management (CRM) software from Siebel Systems.  

The goal of the NSC project was to improve Nike's existing 9-month product cycle and fractured supply chain. As the brand experienced rapid growth and market dominance in the 1990s, it accumulated 27 separate order management systems around the globe, each entirely different from the next and poorly linked to Nike's headquarters in Beaverton, Oregon.

At the time, there wasn’t a model to follow at the scale Nike required. Competitors like Reebok struggled to find a functional supply chain solution specific to the retail footwear and apparel industry. In an effort to solidify its position as the leader in sportswear, Nike decided to move forward quickly with i2’s predictive demand application and its supply chain planner software.

"Once we got into this, we quickly realized that what we originally thought was going to be a two-to-three-year effort would be more like five to seven," - Roland Wolfram, Nike’s vice president of global operations and technology.

The NSC project would be a success, and Nike would eventually accomplish all its supply chain goals. However, the process took much longer than expected and cost the company an additional $100 million, which could have been avoided had the operators of both companies taken a different approach to implementation.

"I think it will, in the long run, be a competitive advantage." – Phil Knight

In the end, Knight was right, but there are many valuable lessons to learn from the Nike i2 failure.

Don’t let your project fail like this one!

Discover here how I can help you turn it into a success.

For a list of all my project failure case studies just click here.

So, before we get into the case study, let’s look at precisely what happened...

Timeline of Events

1996 - 1999

Nike experienced incredible growth during this period but was at a crossroads. Strategic endorsement deals and groundbreaking marketing campaigns gave the company a clear edge over Adidas and Reebok, its two most substantial competitors in the 80s and 90s. However, as Nike became a world-renowned athletics brand, its supply chain became more complex and challenging to manage.

Part of the strategy that separated Nike from its competitors was its centralized approach. Product design, factory contracting, and order fulfillment were coordinated from headquarters in Oregon. The process resulted in some of the most iconic designs and athlete partnerships in sports history. However, manufacturing was much more disorganized.

During the 1970s and 80s, Nike battled to develop and control the emerging Asian sneaker supply chain. Eventually, the brand won the market but struggled to expand because of the nine-month manufacturing cycle.

At the time, there wasn’t an established method to outsource manufacturing from Asia, making the ordering process disorganized and inefficient across the industry. In addition, Nike’s fractured order management system contained tens of millions of product numbers with different business rules and data formats. The brand needed a new way to measure consumer demand and manage purchasing orders, but the state of the legacy system would make implementing new software difficult.

1999

At the beginning of 1999, Nike decided to implement the first stage of its NSC project with the existing system. i2 cost the company $10 million, and Nike estimated the entire project would cost upwards of $400 million. The project would be one of the most ambitious supply chain overhauls by a company of Nike's size.

i2 Technologies is a Dallas, Texas-based software company specializing in designing solutions that simplify supply and demand chain management while maximizing efficiency and minimizing cost. Before the Nike relationship, i2 was an emerging player in logistics software with year-over-year growth. Involvement in the Nike project would position the company as the leading name in supply chain management software.

Nike’s vision for the i2 phase of NSC was “achieving greater flexibility in planning execution and delivery processes…looking for better forecasting and more profitable order fulfillment." When successfully implemented, the manufacturing cycle would be reduced from nine months to six. This would convert the supply chain to make-to-order rather than make-to-sell, an accomplishment not yet achieved in the footwear and apparel industry.

Predicting demand required inputting historical sales numbers into i2’s software. “Crystal balling” the market had substantial support among SCM companies at the time. While the belief that you could enter numbers into an algorithm and have it spit out a magical prediction didn't age well, the methodology required reliable, uniform data sets to function.

Nike decided to implement the “Big Bang” ERP approach when switching to i2 for supply chain management: going live across the whole business at once rather than phasing out the old system gradually. Nike also opted for a single-instance implementation strategy. The CIO at the time, Gordon Steele, is quoted as saying, “single instance is a decision, not a discussion.” Typically, global corporations choose a multi-instance ERP solution, using separate instances in various regions or for different product categories.

2000

By June of 2000, various problems with the new system had already become apparent. According to documents filed by Nike and i2 shareholders in class-action suits, the system used different business rules and stored data in various formats, making integration difficult. In addition, the software needed customization beyond the 10-15% limit recommended by i2. Heavy customization slowed down the software. For example, entries were reportedly taking over a minute to be recorded. In addition, the SCM system frequently crashed as it struggled to handle Nike’s tens of millions of product numbers.

The issues persisted but were fixable. Unfortunately, the software was linked to core business processes, specifically factory orders, and the errors rippled outward, resulting in over- and under-purchasing of critical products. The demand planner would also delete ordering data six to eight weeks after it was entered, so planners couldn't access purchasing orders that had already been sent to factories.

Problems in the system caused far too many factory orders for the less popular shoes like the Air Garnett IIIs and not enough popular shoes like the Air Jordan to meet the market's demand. Foot Locker was forced to reduce prices for the Air Garnett to $90 instead of the projected retail price of $140 to move the product. Many shoes were also delivered late due to late production. As a result, Nike had to ship the shoes by plane at $4-$8 a pair compared to sending them across the Pacific by boat for $0.75.   

November 2000

According to Nike, all the problems with i2’s supply chain management system were resolved by the fall. Once the issues were identified, Nike built manual workarounds. For example, programmers had to download data from i2’s demand predictor and reload it into the supply chain planner on a weekly basis. While the software glitches were fixed and orders weren’t being duplicated or disappearing, the damage was done. Sales for the following quarter were dramatically affected by the purchasing order errors, resulting in a loss of over $100 million in sales.

2001

Nike made the problem public on February 27, 2001. The company was forced to report quarterly earnings to stakeholders to avoid repercussions from the SEC. As a result, the stock price dove 20%, numerous class-action lawsuits were filed, and Phil Knight famously voiced his opinion on the implementation, "This is what you get for $400 million, huh?"

In the meeting, Nike told shareholders they expected profits from the quarter to decline from around $0.50 a share to about $0.35. In addition, the inventory problems would persist for the next six to nine months as the overproduced products were sold off.

As for the future of NSC, the company, including its CEO and President, expressed optimism. Knight said, "We believe that we have addressed the issues around this implementation and that over the long term, we will achieve significant financial and organizational benefit from our global supply-chain initiative."

A spokeswoman from Nike also assured stakeholders that the problems would be resolved; she said that they were working closely with i2 to solve the problems by creating “some technical and operational workarounds” and that the supply chain software was now stable.

While Nike was positive about the implementation process moving forward, they placed full blame on the SCM software and i2 Technologies.

Nike stopped using i2’s demand-planning software for short- and medium-range sneaker planning; however, it still used the application for its emerging apparel business. By the spring of 2001, Nike had integrated i2 into its more extensive SAP ERP system, focusing more on orders and invoices than on predictive modeling.

What Went Wrong?

While the failures damaged each company’s reputation in the IT industry, both companies would go on to recover from the poorly executed software implementation. Each side has assigned blame outward, but after reviewing all the events, it's safe to say each had a role in the breakdown of the supply chain management system.

Underestimating Complexity

Implementing software at this scale always has risks. Tom Harwick, Giga Information Group’s research director for supply chain management, said, “Implementing a supply-chain management solution is like crossing a street: high risk if you don't look both ways, but if you do it right, low risk.”

One of Nike's most significant mistakes was underestimating the complexity of implementing software at such a large scale. According to Roland Wolfram, Nike’s operators had a false sense of security regarding the i2 installation because it was small compared to the larger NSC project. "This felt like something we could do a little easier since it wasn’t changing everything else [in the business]," he says. "But it turned out it was very complicated."

Part of the reason why the project was so complicated was because of Nike’s fractured legacy supply chain system and disoriented data sets. i2’s software wasn’t designed for the footwear and apparel industry, let alone Nike’s unique position in the market.  

Data Quality

Execution by both parties was also to blame. i2 Technologies is on record recommending that customization not exceed 10-15%. Nike and i2 should have recognized early on that staying within this range would be impossible given the state of the existing SCM system.

Choosing a Big Bang implementation strategy didn’t make sense in this scenario. Nike’s legacy data was too disorganized to be integrated into i2’s software without dramatic changes before a full-on launch.

Poor Communication

Communication between Nike and i2 from 1999 to the summer of 2000 was poor. i2 claimed not to be aware of problems until Knight assigned blame publicly. Greg Brady, the President of i2 Technologies, who was directly involved with the project, reacted to the finger-pointing by saying, "If our deployment was creating a business problem for them, why were we never informed?" Brady also claimed, "There is no way that software is responsible for Nike's earnings problem." i2, in turn, blamed Nike’s failure to follow the customization limitations, which was driven by the link to Nike’s back-end.

Rush to Market

At the time, Nike was on the verge of solidifying its position as the leader in footwear and sports apparel for decades to come. Building a solid supply chain that could adapt to market trends and reduce the manufacturing cycle was the last step toward complete market dominance. In addition, the existing supply chain solutions built for the footwear and apparel industry weren’t ready to deploy on a large scale. This gave Nike the opportunity to develop its own SCM system, putting the company years ahead of competitors. Implementing functional demand-planning software would be highly valuable for Nike and its retail clients.

i2 was also experiencing market pressure to deploy a major project. Had the implementation gone smoothly, i2 would have had a massive competitive advantage. The desire to please Nike likely played a factor in i2’s missteps. Failing to provide clear expectations and communication throughout the process might not have happened with a less prominent client.

Failure to Train

After the problems became apparent in the summer of 2000, Nike had to hire consultants to create workarounds to make the SCM system operational. This clearly indicates that Nike’s internal team wasn’t trained adequately to handle the complexity of the new ERP software.

Nike’s CIO at the time reflected on the situation. "Could we have taken more time with the rollout?" he asked. "Probably. Could we have done a better job with software quality? Sure. Could the planners have been better prepared to use the system before it went live? You can never train enough."

How Nike Could Have Done Things Differently

While Nike and i2 attempted to implement software that had never been successfully deployed in the global footwear and apparel industry, many problems could have been avoided. We can learn from the mistakes and from how Nike overcame its challenges with i2 to build a functioning ERP system.

Understanding and Managing Complexity

Nike’s failure to assess the complexity of the problem is at the root of the situation. Regardless of whether the i2 implementation was just the beginning of a larger project, it represented a significant transition from the legacy system. Nike’s leadership should have realized the scale of the project and the importance of starting NSC off on the right foot.

i2 is also to blame for not providing its client with realistic expectations. As a software vendor, i2 was responsible for giving its client clear limitations and the potential risks of failing to deploy successfully.

See "Understanding and Managing Your Project’s Complexity" for more insights on this topic.

Collaborate with i2 Technologies

Both companies should have realized that Nike required more than 10-15% customization. Working together during the implementation process could have prevented the ordering issues that were the reason for the lost revenue.

Collaboration before deployment and at the early stages of implementation is critical when integrating a new system with fractured data. Nike and i2 should have coordinated throughout the process to ensure a smooth rollout; instead, both parties executed poor project management resulting in significant financial and reputational blows.  

See "Solving Your Between Problems" for more insights on this topic.

Hire a 3rd Party Integration Company

Nike’s lack of understanding of the complexity of SCM implementation is difficult to comprehend. If i2 was truthful in saying it did not know about the problems with its software, then Nike must have made a conscious decision not to involve the software company during the process.

Assuming that is the case, Nike should have hired a 3rd party to help with the integration process. Unfortunately, Nike’s internal team was not ready for the project. Outside integrators could have prevented the problems before the damage was done.

Not seeking outside help may be the most significant aspect of Nike’s failure to implement a new SCM system.   

See "Be a Responsible Buyer of Technology" for more insights on this topic.

Deploy in Stages

A “Big Bang” implementation strategy was a massive mistake by Nike. While i2 should have made it clear this was not the logical path considering the capabilities of their software and Nike’s legacy system, this was Nike’s decision.

Ego, rush to market, or failure to understand the complexities of the project could all have been factors in the decision. Lee Geishecker, a Gartner analyst, stated that Nike chose to go live a little over a year after starting the project, while projects of this scale should take two years before deployment and should be rolled out in stages, not all at once.

Brent Thrill, an analyst at Credit Suisse First Boston, is on record saying he would have kept the old system running for three years while testing i2’s software. In another analysis, Larry Lapide commented on the i2 project by saying, "Whenever you put software in, you don't go big bang, and you don't go into production right away. Usually, you get these bugs worked out . . . before it goes live across the whole business."

Train Employees Sufficiently

At the time, Nike’s planners weren’t prepared for the project. While we will never know what would have happened if the team had been adequately trained, proper preparation would have put Nike in a much better position to handle the glitches and required customizations.

See "User Enablement is Critical for Project Success" for more insights on this topic.

Practice Patience in Software Implementation

At the time, a software glitch causing a ripple effect across the entire supply chain was a novel idea. Nike likely chose to risk the “Big Bang” strategy, deploy within a year without phases or proper testing, and forgo outside help because it assumed the repercussions of a glitch wouldn't be as catastrophic as they turned out to be.

Impatience resulted in avoidable errors. A more conservative implementation strategy with adequate testing would have likely caught the mistakes.

See "Going Live Too Early Can Be Worse Than Going Late" for more insights on this topic.

Closing Thoughts

One of the most incredible aspects of Nike’s implementation failure is how quickly the company bounced back. While Nike undoubtedly made numerous mistakes during the process, NSC was 80% operational in 2004.

Nike turned the project around by making adjustments and learning patience. Few companies can suffer a $100 million “speed bump” without filing for bankruptcy, but Nike is in that position because of its resilience. The SAP installation wasn’t rushed and retained many aspects of the original strategy. In addition, a training culture was established as a result of the i2 failures. Customer service representatives receive 140 to 180 hours of training from highly skilled “super users,” and all employees are locked out of the system until they complete their required training courses.

Aside from the $100 million loss, the NSC project was successful. Lead times were reduced from nine months to six (the initial goal), and Nike’s factory inventory levels were reduced from a month to a week in some cases. Implementing the new SCM system also improved integration between departments, provided better visibility of customer orders, and increased gross margins.

While Nike could have executed far more efficiently, Phil Knight’s early assessment of the i2 failure turned out to be true. In the long run, the process gave Nike a competitive advantage and was instrumental in building an effective SCM system. 

In a nutshell: A failure to demonstrate patience, seek outside help, and rush software implementation can have drastic consequences. 

Don’t let your project fail like this one!

Discover here how I can help you turn it into a success.

For a list of all my project failure case studies just click here.

Sources

> Nike says i2 hurt its profits

> I2 Technologies, Inc.

> How Not to Spend $400 Million

> i2-Nike fallout a cautionary tale

> Nike rebounds: How Nike recovered from its supply chain disaster

> Scm and Erp Software Implementation at Nike – from Failure to Success 

> I2 Says: "You Too, Nike"

Read more…

Sunday, August 07, 2022

Why Are Your Best People Not Working on This?

User Enablement is Critical for Project Success
A phenomenon I see happening again and again in larger organizations is destroying employee morale and economic value.

There’s that big internal project (think ERP, CRM, HCM/Payroll, Core Banking/Insurance) that has the potential to stop the whole organization in its tracks and will cost the organization millions of hard-earned cash to implement.

And somehow, the best people in the organization aren’t involved in the project. 

How stupid is that?

Depending on your margins, you may have to earn many more millions to pay for the project. So, if you can prevent costs from doubling and extract some benefits from it, this would seem a smart investment, right?

Profit margins vary considerably by industry, but as a general rule of thumb, a 10% net profit margin is considered average, a 20% margin is considered high (or “good”), and a 5% margin is low.

Let’s stick with the middle and take 10%. This means that for every million you spend on your internal project, you have to generate 10 million in additional revenue to make up for it.

Considering that most technology projects will cost twice what is budgeted – without taking losses in revenue and productivity into account – I think it’s worthwhile to have your best people spend some time on the project.
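To make that arithmetic concrete, here is a back-of-the-envelope sketch. It uses the illustrative 10% margin from above, not figures from any specific project:

```python
def required_revenue(project_cost: float, net_margin: float) -> float:
    """Revenue needed to recoup a project cost at a given net profit margin."""
    return project_cost / net_margin

# A $1M project at a 10% net margin must be paid for
# by $10M in additional revenue.
print(required_revenue(1_000_000, 0.10))  # → 10000000.0

# If the project overruns to double its budget (a common outcome),
# the required revenue doubles with it.
print(required_revenue(2_000_000, 0.10))  # → 20000000.0
```

The same one-liner also shows why thin-margin businesses should be the most careful buyers: at a 5% margin, every project dollar must be covered by twenty dollars of revenue.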

But you know what?

Nobody wants that. Not your executives. Not your line managers. And not your staff. 

Why is that? 

The answer is simple. Nobody makes a career with such a project. You can only lose.

If you’re staffed on the project, nine out of ten times you’ll still have to do your regular job as well. This means double the work and more stress. Your goals, bonus, and promotion probabilities are tied to your normal job, not to the project. But you have less time to achieve them. This is not conducive to career advancement.

Your line managers can try to fill the gaps in their teams, but they will have additional work because their people are working on the project. This will affect their goals, chances for promotion, and bonus payments. So they will do anything to prevent their best people from working on that critical project.

And you, as an executive involved in the project, can only work on spending and damage control, as everybody expects the project will be a success and within budget. The probabilities that one or the other will happen are very small. And the chances of them both happening at the same time are close to zero. 

And when you’re not a COO, CFO, or CIO, you’re expected to work on client-facing tasks, not internal projects, even when you and your teams will have to work every day with the result of that project. 

There’s a massive misalignment of incentives here. 

You have to fix this misalignment, because with critical projects it is very simple: If you want to have a higher probability of success, you need to involve your best people.

If possible, put them on it full time, and relieve them from their day jobs. It should be made clear across the organization that the project has priority and that day-to-day things may take a little longer.

Bonus payments and possible promotions should be tied to the project goals. Career goals and project goals should be aligned. If this doesn’t happen, your best people will focus on the things that improve their careers, which may not always be in your organization’s best interests.

In a nutshell: If you want to have a higher probability of success with your critical internal project, involve your best people.

Read more…

Thursday, September 02, 2021

Changing Technology Is Easy; Changing Behaviour Is Not.

User Enablement is Critical for Project Success
This is an article I have written for one of my clients. You can find the original article here.

Creating a modern workplace is key to digital transformation. If you’re embarking on such an initiative, remember that changing technology is easy, but changing behaviour is not. Begin with the end in mind, ask what you want to achieve with your new modern workplace, and address user behaviour, user adoption and usage from the outset. 

Digital transformation is nothing new. It’s a daily reality for any company. Some are disruptors, and others are disrupted. Covid-19 has made this even clearer. Everyone understands that digital transformation - not evolution - is required to maintain a competitive edge. That’s why so many digital transformation initiatives have been started around the world. 

Modern workplace projects are a key part of such digital transformation initiatives. After all, it’s people who make companies successful, and they need to be able to do their work as efficiently and effectively as possible. So when you start thinking about a modern workplace project, it’s essential to start with the end in mind. 

Your project shouldn’t be about technology. Technology by itself is worthless. It’s what you and your users do with it that (potentially) creates value. So before we defined the scope of our own modern workplace project, we asked ourselves some hard questions. You might want to use them as a checklist when embarking on your own initiative. 

Questions to ask before a modern workplace initiative 

What do we want to achieve? 
And how is this project going to execute on this? 

What user behaviour do we want to see and support, and what behaviour do we want to stop or reduce? 
For example, do we want to facilitate more remote work? Or better quality online meetings? Do we want to start supporting hybrid meetings? Or stop sending documents as attachments and work with links and single versions of truth? 

How about BYOD (bring your own device)? 
Should employees be able to access company documents on their private laptops and mobile phones? How do we want to support working on tablets? 

What current set of applications do we want to stop using and replace? 
Is this realistic? We made sure that decommissioning them was part of the project scope to avoid simply adding a number of new applications to the landscape and increasing complexity instead of reducing it. Paying twice for the same capabilities has never been a smart thing to do for a CIO. 

How important is user experience? How important is security? 
And of course our legal and compliance requirements need to be addressed. Balancing these is one of the most difficult challenges in a modern workplace project. 

Is a SaaS solution like Microsoft 365 or Google Workspace the answer? 
Maybe. But certainly not the whole suite at once. It would be almost impossible to handle the change. So what applications should be part of this project, and who will support these applications after go-live? 

Get user buy-in

After you’ve answered these and other relevant questions, you have to think about your users. Because in the end, any technology is only as good as how well it is used. 

If users don’t know how to use applications effectively, the benefits will be small, or even negative. This means user enablement is critical to the success of any modern workplace project. Include user pilots in your project. Learn from your users. And make sure your users understand the following: 

1) Why you are implementing the new solution. 
When it comes to a new modern workplace, many employees will be instrumental in the change you’re promoting. It’s important to tend to their needs throughout the change journey. Transparent communication is key. 

2) How existing processes will change. 
Your leaders, colleagues, customers and suppliers all live in their own reality – and it’s likely to be different from yours. So invest in understanding their way of working before you force them to do something that doesn’t work for them. 

3) How to use the solution. 
Your user base can range from people who started their careers on terminals to millennials who are so accustomed to touchscreens that they don’t know what a keyboard is. Your ability to transform is only as good as the average skill level shared by the majority of your employees. And don’t forget about the new joiners. Training is never done. 

4) Who to contact if they require support with any problems or questions. 
If you’re working with an internal service desk, make sure that they’re equipped with standardised scripts. Alternatively, if you’re working with an external service provider, make sure you’ve selected a partner who can provide the highest level of support required for the new applications. 

5) Whether they can offer feedback and make suggestions to improve the solution. 
One of the best ways to build support organisation-wide is to give everyone a voice and a platform to share their views throughout the transition. Active user communities are pure gold. 

A modern workplace project is all about changing and supporting the right behaviours, supporting your business model and regulatory requirements, keeping your data secure, and implementing the capabilities you need most. It’s not about implementing new technology. Remember that people aren’t able to change their behaviour overnight, so you need to plan for a journey, not a weekend trip – which of course is true for any digital transformation initiative.

Read more…

Sunday, January 31, 2021

A Great Leading Indicator for Future Trouble - Missing Milestones

A Great Leading Indicator for Future Trouble - Missing Milestones

I have done quite a number of inflight reviews and post-mortems of troubled and failed large system implementation projects. 

One pattern that emerges very clearly is the one of missing milestones whilst keeping the go-live date the same. 

It rarely ends well.

I see it again and again. Multiple important milestones are missed. Sometimes by months. And the ones that are marked as completed have their original scope reduced.

For example, system integration tests (SIT) without all interfaces completed and without production-like data.

Or user acceptance testing (UAT) with systems that are not ready or contain so many bugs that end-to-end testing is not possible.

What is astonishing is that in most cases both the project sponsor and the project manager seem convinced that everything is “green” and will work out, right up until the project folds like a house of cards.

When you look at a typical large system implementation project it is still largely implemented like a waterfall. This includes ERP systems, CRM systems, Core Banking, etc. 

And this has not changed with the rise of software as a service (SaaS) offerings like Salesforce, SAP S/4HANA, Workday, etc.

Yes, the design and build phases are now iterative, but at a certain point your full solution needs to be tested end-to-end. This means one or more SIT phases and a UAT phase that includes all upstream and downstream systems and processes. 

You also need time to fix all the findings of your testing, and to do re-testing. If you are lucky one cycle is enough. Usually it is not.

You also need to train all your users and your support teams on the new solution and processes. Ideally on a solution that actually works. 

And when you are ready to go, you have a cutover phase from your old solution to your new solution. 

So yes, you design and build iteratively, but the rest is still shaped like a waterfall.

And this means that if you miss important milestones and you don’t change the go-live date you will steal time from the very important phases that come at the end of such a project. 

Starting these late phases without having completed the previous phase just does not make sense and will drive your test team and end users crazy.

Missing milestones does not mean your project team is doing a bad job, but they obviously underestimated the time it takes to do certain things.

Chances are this is a pattern that is repeated for the later phases of the project. 

So you will probably need more time for these phases than planned. Not less. 

In my experience there are only two probable outcomes of such projects:

1) They never go live

2) They go live too early 

The latter can be even worse than the first. 

See here and here for some prominent examples from multi-million projects that never went live, and here and here for projects that went live too early. 

You will find many more examples among my project failure case studies.

In a nutshell: missing milestones and not changing your go-live date is a great leading indicator for trouble in the future.

Read more…

Thursday, November 26, 2020

Project Failure Is Largely Misunderstood

Project Failure Is Largely Misunderstood

For many years, organizations, researchers, and practitioners have analyzed how to manage technology projects successfully. 

Among them is the Standish Group, which regularly publishes its findings in its Chaos reports. In 1994, Standish reported a shocking project success rate of only 16 percent; another 53 percent of the projects were challenged, and 31 percent failed outright. In subsequent reports, Standish updated its findings, yet the numbers remained troublesome.

The numbers indicate large problems with technology projects and have had an enormous impact on software development and project management. 

They suggest that the many efforts and best practices put forward to improve how companies manage and deliver technology projects are hardly successful. 

Scientific articles and media reports widely cited these numbers. Many authors used the numbers to show that technology project management is in a crisis. The numbers even found their way to a report for the president of the United States to substantiate the claim that U.S. software products and processes are inadequate.

The numbers’ impact and their widespread use indicate that thousands of authors have accepted the Standish findings. They’re perceived as “truth” and unquestionable. 

However, the Standish definitions of successful and challenged projects are problematic.

They defined software project success and failure by creating three project categories:

> Resolution Type 1, or project success. The project is completed on time and on budget, offering all features and functions as initially specified.

> Resolution Type 2, or project challenged. The project is completed and operational but over budget and over the time estimate, and offers fewer features and functions than originally specified.

> Resolution Type 3, or project impaired. The project is canceled at some point during the development cycle.

Standish defines a successful project solely by adherence to an initial forecast of cost, time, and functionality. The latter is defined only by the number of features and functions, not the functionality itself. 

Standish states the following in their report: “For challenged projects, more than a quarter were completed with only 25 percent to 49 percent of originally specified features and functions.”

This means a project that’s within budget and on time but delivers less functionality doesn’t fit any category. I assume that Standish puts such projects, which fail one or more of the success criteria, into the challenged-project category.

So, Standish defines a project as a success based on how well it did with respect to its original estimates of the amount of cost, time, and functionality.

But in reality, the part of a project’s success that’s related to estimation deviation is highly context-dependent. In some contexts, 30 percent estimation error does no harm and doesn’t impact what the organization would consider project success. 

In other contexts, only 5 percent overrun would cause much harm and make the project challenged. In that sense, there’s no way around including more context (or totally different definitions) when assessing successful and challenged projects.

The above illustrates some problems with the definitions. They’re misleading because they’re solely based on estimation accuracy of cost, time, and functionality. 

The Standish definitions don’t consider a project’s context, such as benefits, value, and user satisfaction. 

Starting with the Chaos Report in 2015, the Standish Group seem to have discovered their own mistake. They changed how they define success. This new method coded the dataset with six individual attributes of success: OnTime, OnBudget, OnTarget, OnGoal, Value, and Satisfaction. 

The new definition of successful projects is OnTime, OnBudget, and with a satisfactory result. This means the project was delivered within a reasonable estimated time, stayed within budget, and delivered customer and user satisfaction regardless of the original scope. 

This definition encompasses both a success rate for the project management of a project and for the project itself. In my opinion, this improves the definition, since we probably all have seen projects that meet the triple constraints of OnTime, OnBudget, and OnTarget, but the customer was not satisfied with the outcome. 

In changing from the OnTarget constraint to Satisfaction, they avoid penalizing a project for having an evolving target, which all projects have, even the very small ones. Customers have a clear opinion on the satisfaction level, whether or not all the features and functions they asked for at the beginning of the project are realized. 

They support these changes with their own data. They found that both satisfaction and value are greater when the features and functions delivered are far fewer than originally specified and only meet obvious needs. They also found that most features and functions of software are never used. These additional features increase cost and risk, and can even reduce quality, but do not necessarily provide value.

But in my opinion, these definitions still have a serious flaw. The Chaos Report, and numerous articles citing it, label canceled projects as “failed” and imply that all of them were canceled because of poor project management. 

This implication is both false and dangerous. 

It is false because, particularly in an era of rapid change, a lot of technology projects are properly started, well managed, and properly terminated before completion because their original assumptions have changed. 

It is dangerous because it often leaves project managers with the following thoughts: “It’s becoming clear that continuing this project will waste company resources. I should probably have the project canceled now, but that would make me the manager of a failed project and wreck my career. I’ll be better off if I say nothing, keep the project going, and look for a new project to transfer to.” See “Why Killing Projects Is so Hard (And How to Do It Anyway).”

So what is project success? 

Simply put, project success occurs when the outcome of the project adds value to the organization. And the value of a project is defined by subtracting all of the costs from all of the benefits the project delivers. 

Value = Benefits - Costs

This can be roughly translated to three levels of project success:

1) Project delivery success: Will the project delivery be successful? Essentially, this assesses the classic triangle of scope, time, and budget. These are your costs. 

2) Product or service success: This refers to when the product or service is deemed successful (e.g., the system is used by all users in scope, up-time is 99.99 percent, customer satisfaction has increased by 25 percent, and operational costs have decreased by 15 percent). These are your benefits.

3) Business success: This has to do with whether the product or service brings value to the overall organization, and how it contributes financially and/or strategically to the business’s success. This is your value.

Overall, a successful project depends on the combination of these criteria. Some argue that product/service success is the same as business success, or argue that product/service success automatically means business success. But this is not true. See “Product or Service Success Does Not Automatically Mean Business Success” for more on this topic.

When you look at project financials and translate the above to dollars, you can fairly say that a project is a failure if it has a negative value. This means the total of all the benefits is lower than the total costs. See “What Are the Real Costs of Your Technology Project?” for more on this topic. 

You can even argue that a project is a failure if the targeted return on investment (ROI) is not achieved. This is because you could have done another project with a higher ROI in the same time with the used resources. See “What Are the Real Opportunity Costs of Your Project?” for more on this topic.
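The value and ROI arithmetic behind these definitions can be sketched in a few lines. All figures below are hypothetical:

```python
def project_value(benefits: list, costs: list) -> float:
    """Value = Benefits - Costs, summed over all benefit and cost items."""
    return sum(benefits) - sum(costs)

def roi(benefits: list, costs: list) -> float:
    """Return on investment: net value relative to total costs."""
    return project_value(benefits, costs) / sum(costs)

# Hypothetical project: $1.2M in total benefits against $1.0M in total costs.
benefits = [700_000, 500_000]   # e.g. cost savings, extra revenue
costs = [800_000, 200_000]      # e.g. licenses, implementation effort

print(project_value(benefits, costs))  # → 200000
print(roi(benefits, costs))            # → 0.2

# A negative value means the project is a failure by this definition.
print(project_value([300_000], [500_000]))  # → -200000
```

Note that a positive value with an ROI below your hurdle rate can still count as a failure in the opportunity-cost sense described above.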

But what you should not forget is that project success and project failure are NOT absolutes. It may not be possible to be a little bit pregnant, but you can be a little bit successful.

Every project has multiple success criteria related to business results, product/service results, and project delivery results (cost, schedule, scope, and quality).

Some criteria are absolute, meaning they must be completed on or before the original planned date, and some are relative, meaning they must be completed by a date acceptable to the client.

Project success is determined by how many of your success criteria are satisfied, and how well.
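One way to make “how many criteria, and how well” measurable is a simple weighted scorecard, with absolute criteria treated as hard constraints. The criteria, weights, and scores below are purely illustrative:

```python
from dataclasses import dataclass

@dataclass
class Criterion:
    name: str
    weight: float           # relative importance
    score: float            # degree satisfied, 0.0 to 1.0
    absolute: bool = False  # must be fully met, no partial credit

def success_score(criteria):
    """Weighted degree of success; None if any absolute criterion is missed."""
    if any(c.absolute and c.score < 1.0 for c in criteria):
        return None  # a hard constraint was missed
    total = sum(c.weight for c in criteria)
    return sum(c.weight * c.score for c in criteria) / total

criteria = [
    Criterion("delivered within budget", weight=0.3, score=0.8),
    Criterion("go-live before year end", weight=0.3, score=1.0, absolute=True),
    Criterion("user satisfaction target", weight=0.4, score=0.6),
]

# A partially successful project: not fully successful, but not a zero either.
print(success_score(criteria))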

Whether or not a project is successful also depends on who you ask: 

> the very happy project manager who implemented the SAP project as scoped, on time, and below budget (I know, this will NEVER happen), 

> the end-users, who absolutely hate the complexity and slowness of the new system,

> or the COO who has seen IT costs double whilst none of the expected savings materialized.

These stakeholders may all have very different opinions on the success of the project.

Project success also depends on when you ask. 

Twelve months after the go-live, the users will have a better grasp of the system and initial performance problems will have been solved. And slowly but steadily, the expected savings will often start to materialize as well.

So in order to determine the success or failure of your project, you should define all the criteria relevant to your project, define how you will measure them, and define when you will measure them.

In a nutshell: In order to determine project success (and, as a consequence, project failure), you must define all the criteria relevant to your project, define how you will measure them, and define when you will measure them.

When you need some guidance on how to define and measure project success have a look at my Project Success Model here or by clicking on the image.

The Project Success Model

Read more…

Saturday, November 14, 2020

User Enablement is Critical for Project Success

User Enablement is Critical for Project Success

Any system is only as good as how well it is used. 

If it's a CRM, ERP, or any other system, when users don’t know how to use the system effectively the benefits of the new system for your organization will be small, or even negative. 

This means user enablement is critical to the success of a project. 

It is not enough to simply have your new system in place two weeks before your go-live date. 

Your users need to know: 

1) Why you’re implementing the new system. When it comes to organizational changes and operational logistics, many employees will be instrumental in the change you’re promoting. It’s important to tend to their needs throughout the change journey.

2) How existing processes will change and which new processes will be introduced with the new system. 

3) How to use the system. Your user base can range from people who have spent their entire careers on the “green screen” (yet still don’t know how to use a mouse to copy and paste content) to millennials, who are so accustomed to touchscreens that they don’t know that it’s possible to strike the arrow icon on the keyboard to move an object. 

4) Who to contact in case they require support with any problems or questions. If you’re working with an internal service desk, make sure that they’re equipped with standardized scripts. Alternatively, if you’re working with an external service provider, make sure you’ve selected a partner who can provide the highest level of support required for the new system. 

5) Whether they can offer feedback and make suggestions to improve the system. One of the best ways to build support organization-wide is to give everyone a voice and a platform to share their views throughout the transition.

Of course, transitions and change management require not only time, but also a stable and working system that can be used to create training, user guides, videos, and user acceptance tests. 

So if you are still building and testing your system shortly before going live your user enablement will suffer greatly. 

You will have neither the necessary time, nor the necessary trust in the system, you need to achieve user enablement and acceptance. 

And with that your benefits will be limited.

In a nutshell: when users don’t know how to use your new system effectively the benefits of the new system for your organization will be limited.

Read more…

Sunday, September 27, 2020

Project Inputs, Activities, Outputs, Outcomes, Impact and Results

Project Inputs, Activities, Outputs, Outcomes and Impact
Many people and organizations have serious trouble distinguishing between the inputs, activities, outputs, outcomes, impact, and results of a project. 

This leads to a lot of confusion, poor communication, disappointed project teams, and disappointed stakeholders.

Below you will find my take on these terms and their relevance for your project.

Inputs

Inputs are very often assumed to be synonymous with activities. However, these terms are not interchangeable. 

Inputs, in simple terms, are those things that we use in the project to implement it. 

For example, in any project, inputs would include things like time of internal and/or external employees, finances in the form of money, hardware and/or software, office space, and so on. 

Inputs ensure that it is possible to deliver the intended results of a project.

Activities

Activities on the other hand are actions associated with delivering project goals. In other words, they are what your people do in order to achieve the aims of the project. 

In a software development project, for example, activities would include things such as designing, building, testing, deploying, etc. And in an upskilling initiative the training of employees would be an activity.

Outputs

These are the first level of results associated with a project. Often confused with “activities”, outputs are the direct, immediate results of a project. 

In other words, they are the delivered scope. The tangible and intangible products that result from project activities. Outputs may include a new product or service, a new ERP system replacing the old one, or employees being trained as part of a digital upskilling initiative.

Success on this first level of results is what I call “Project Delivery Success”. It is about defining the criteria by which the process of delivering the project is successful.

Essentially this addresses the classic triangle "scope, time, budget". 

It is limited to the duration of the project, and success can be measured as soon as the project is officially completed (with intermediary measures being taken, of course, as part of project control processes). 

It is always a combination of measurements on inputs and outputs.

Outcomes

This is the second level of results associated with a project and refers to the medium term consequences of the project. Outcomes usually relate to the project goal(s).  

For example, the new ERP system is used by all users in scope, uptime is 99.99%, customer satisfaction has increased by 25%, operational costs have decreased by 15%, and so on.

These criteria need to be measured once the product/service is implemented and over a defined period of time. This means it cannot be measured immediately at the end of the project itself.


Success on this second level of results is what I often refer to as “Product or Service Success”. It is about defining the criteria by which the product or service delivered is deemed successful.

Impact

This is the third level of project results: the long-term consequences of a project. More often than not, it is very difficult to ascertain the exclusive impact of a project, since several other projects, not similar in nature, can lead to the same impact. 

For example, financial value contribution (increased turnover, profit, etc.) or competitive advantage (market share won, technology advantage).

Success on this third level of results is what I call “Business Success”. Business success is about defining the criteria by which the product or service delivered brings value to the overall organization, and how it contributes financially and/or strategically to the business.

Results

Project results are the combination of outputs (level 1), outcomes (level 2), and impact (level 3). These levels combined will determine your overall project success. You can be successful on one level but not others.

Project success and project failure are NOT absolutes. It may not be possible to be a little bit pregnant, but you can be a little bit successful.

Every project has multiple success criteria related to business results, product/service results, and project delivery results (cost, schedule, scope, and quality).

Some criteria are absolute, meaning they must be completed on or before the original planned date, and some are relative, meaning they must be completed by a date acceptable to the client.

Project success is determined by how many of your success criteria are satisfied, and how well.

In a nutshell: You need to be able to distinguish between the inputs, activities, outputs, outcomes, and the impact of your project.

Read more…

Sunday, February 16, 2020

Project Success is a Self-Fulfilling Prophecy

Project success is a self-fulfilling prophecy.

Go around admitting doubt, and your project will fail.

Tell everyone that it will succeed and they will believe it too, and you have every chance of getting there.

You’re thinking it cannot be that simple?

That it just sounds like self-help psycho-babble?

All I can say is that I have never once in my life seen a doubting project manager succeed.

The opposite is of course not necessarily true.

Believing in project success is necessary.

Believing alone is not sufficient to get the job done.

In a nutshell: Go around admitting doubt, and your project will fail.

Read more…

Thursday, October 10, 2019

Be a Responsible Buyer of Technology

Being a responsible buyer of technology and outsourced software development services, and working well with suppliers during projects are crucial skills for any organization.

Yet, the absence of those skills explains more project failures in third-party projects than any other factor. You will find some prominent examples of these among my project failure case studies.

Some may argue that suppliers should have all the skills required to make their projects a success, but any company relying completely on the skills of a supplier is making itself dependent on good luck.

If you are not a ‘responsible buyer’ then you risk not spotting when the supplier and/or the project is failing.

A responsible buyer of third-party systems and systems development will have excellent knowledge, understanding and experience in defining, planning, directing, tracking, controlling and reporting systems development projects. They will know what should be done, when, why and how.

In many projects the supplier should be running the above-mentioned processes as part of helping a buyer achieve their target business outcome (after all, the supplier is expected to have done a great many projects of this type). However, this does not mean that the supplier will, in fact, be doing all of those things.

That's why it is vital that the buyer themselves knows what needs to be done.

In most large technology projects, it is excellence in program and project management that is the crucial factor in determining success — not knowledge of technology. This is often true in situations when, for example, a project is being carried out across an organization (especially a global organization); across a group of companies in collaboration; or on behalf of a central marketplace and its participants (such as a stock exchange).

In large business-critical projects neither the supplier nor the buyer should be doing any aspect of the project in isolation, as doing so will increase the risk of failure. This isn’t just a need for transparency, it is a need for active communication plus active confirmation and verification that messages have been received and understood.

The following three excuses for total project failure will never work in court:

1) "I was drunk,"
2) "I thought the buyer or supplier knew what they were doing," and
3) "I thought the buyer or supplier was doing it, not me."

If you are the buyer and you do not have all the necessary skills and experience to be able to define and control important projects (which is perfectly understandable as in most companies they don’t happen very often), there is an easy fix for this problem: Hire a very experienced interim executive to act on your behalf, even if the supplier will still do most of the project management and other work. You can delegate authority for doing the project management to the supplier but you cannot delegate responsibility.

Responsibility for the project — including responsibility for it failing — always rests ultimately with you, the buyer.

Your highly experienced interim executive can assume delegated responsibility on your behalf. However, that means that he or she becomes your authorized representative and therefore you can never blame that person for anything (e.g., in the way you might blame the supplier).

The supplier will thank you for this clarity of thinking around responsibility and authority. Be a responsible buyer of technology — there is nothing worse for a supplier than a buyer who is unable or unwilling to fulfill their responsibilities during an important engagement.

In a nutshell: Responsibility for the project — including responsibility for it failing — always rests ultimately with you, the buyer.

Read more…

Monday, August 26, 2019

Project Success Criteria (OKRs) vs. Operations (KPIs)

Project success criteria in the form of OKRs (Objectives and Key Results) should be the driving force behind your project and product direction. They boldly state where you’re going and they give you metrics to judge when you’ve arrived.

Project success criteria should be fail-by-default. To succeed in an OKR you shouldn’t be able to sit on your ass and play defense.

Objectives like “don’t release any new bugs” make terrible OKRs. A guaranteed way to achieve that objective is to stop releasing software. But despite “no new bugs” making an awful OKR, it’s still an important measure of business health. It’s worth keeping an eye on.

There are plenty of metrics like bugs released (a proxy for code quality) which are important to watch but don’t fit well in OKRs. Rather than trying to wedge them into a container where they don’t belong, consider adding a second tool to your toolkit — KPIs (key performance indicators).

If OKRs give your teams direction, KPIs make sure nothing is going off the rails. Practically any metric — site uptime, conversion rates, user retention — can be used as a KPI. A KPI is a metric that's important to watch, but not something you're trying to change right now.

Keeping track of relevant KPIs will help you uncover problems as they emerge. If you decide a KPI is out of line enough to justify investing in a fix, then it simply becomes part of an OKR. The passive KPI “Conversion rate — 5%” becomes the active objective “Double conversion rate by September”.
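As an illustration of that workflow, a KPI watchlist can be as simple as a set of thresholds that flags any out-of-line metric as a candidate for an active OKR. This is a minimal sketch; the metric names and thresholds are hypothetical:

```python
# Hypothetical KPI watchlist: each KPI has a current value and an
# acceptable minimum. A KPI that falls out of line is flagged as a
# candidate to be promoted into an active OKR.
kpis = {
    "site_uptime_pct": {"value": 99.95, "minimum": 99.9},
    "conversion_rate": {"value": 0.05, "minimum": 0.08},
    "user_retention": {"value": 0.42, "minimum": 0.40},
}

def needs_okr(kpis):
    """Return the KPIs that are out of line and may justify investing in a fix."""
    return [name for name, kpi in kpis.items() if kpi["value"] < kpi["minimum"]]

print(needs_okr(kpis))  # ['conversion_rate']
```

Here only the conversion rate is out of line, so it is the one passive KPI that would graduate into an active objective.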

In a nutshell: Use KPIs to keep an eye on things. Use OKRs when you want to make a change.

Read more…

Monday, August 05, 2019

Why Your Organization Doesn't Learn From Its Lessons Learned

Lessons Learned or Lessons Learnt are experiences distilled from a project that should be actively taken into account in future projects.

There are several definitions of the concept. The one used by NASA is as follows: “A lesson learned is knowledge or understanding gained by experience. The experience may be positive, as in a successful test or mission, or negative, as in a mishap or failure. A lesson must be significant in that it has a real or assumed impact on operations; valid in that it is factually and technically correct; and applicable in that it identifies a specific design, process, or decision that reduces or eliminates the potential for failures and mishaps, or reinforces a positive result.”

Personally I like the following definition: “Generalizations based on evaluation experiences with projects, programs, or policies that abstract from the specific circumstances to broader situations. Frequently, lessons highlight strengths or weaknesses in preparation, design, and implementation that affect performance, outcome, and impact.”

I think most organizations feel that project sponsors, managers, and teams can reduce project costs and duration by learning from past projects, by implementing past successes, and by avoiding past failures.

But at the same time, many organizations have no standards for collecting, analyzing, storing, disseminating, and reusing Lessons Learned. Consequently, they are losing valuable knowledge gained during projects and between projects.

They seem to be able to learn the little lessons, like improving small aspects of projects, but the big lessons seem to be relearned time and time again. Here is why:

> Projects are not making sufficient time for a Lessons Learned session.

> Key people (like the sponsor or main stakeholders) are not available for a Lessons Learned session.

> Organizations have an ineffective lessons capture process. Lesson learning crucially needs a standard lessons reporting format and structure, an effective approach to root cause analysis, a focus on lesson quality, openness and honesty, and a validation process.

> Project teams do not see the benefit of a Lessons Learned session. Lessons Learned captured on a project seldom benefit that project. They benefit future projects. Often, the project sponsor and manager see capturing Lessons Learned as simply another chore that provides their project with little value, especially if the Lessons Learned procedure is complex, takes a fair amount of resources and time to implement, and management has not provided adequate resources to perform the work. The solution here is to have a simple procedure, ensure projects have the resources and time to implement the procedure, and hold project managers accountable for following the procedure.

> An ineffective lessons dissemination process. The value of even well-crafted reports is often undermined because they are not distributed effectively. Most dissemination is informal, and as a result development and adoption of new practices is haphazard. Generally, project teams must actively seek reports in order to obtain them. There is no trusted, accessible repository that provides Lessons Learned information to project teams company-wide, although some departments do have lessons repositories.

> Lack of motivation to fix the issues. There is a reluctance to make big fixes if it's not what you are being rewarded for, a reluctance to learn from other parts of the organization, and difficulties in deciding which actions are valid.

> A lack of dedicated resources. Commitment to learning is wasted if resources are not available to support the process. Unfortunately, funds available to sustain corrective action, training, and exercise programs are even leaner than those available for staff and equipment. Lesson-learning and lesson management need to be resourced. Roles are needed to support the process, such as those seen in the US Army and the RCAF, or in Shell. Under-resourcing lesson-learning is a major reason why the process so often fails.

> A lack of leadership involvement in and commitment to the learning process. This is the most critical barrier. An effective Lessons Learned process means having a disciplined procedure that people are held accountable to follow. It means encouraging openness about making mistakes or errors in judgment. It often means cultural or organizational change, which does not come easily in most organizations. It means leading by example. If management is unwilling to learn from their mistakes, it is unlikely that the rest of the organization will be willing to admit to mistakes either. In fact, management must reward people for being open and admitting to making mistakes, bad decisions, judgment errors, etc. This, of course, flies in the face of many corporate cultures.

> Process change versus accountability. When something goes wrong on a project, there is someone accountable. One of the biggest problems in implementing an effective Lessons Learned process is to separate the “accountability” issue from the “process” issue. Accountability is important, but is something to be dealt with by management. Lessons Learned must deal with the process deficiency that caused the problem (e.g., inadequate procedure, too much of a rush, inadequate training, poor communications, etc.). Once a Lessons Learned process focuses on blame or finger-pointing, the process will soon fade into oblivion.

> Not using Lessons Learned in the initiation and planning phases of new projects. You should ensure that projects in these stages incorporate Lessons Learned from prior projects by making a Lessons Learned session mandatory.

Closing Thoughts

Instead of only learning the little lessons, like improving small aspects of projects, I think it would be far more valuable to learn the big lessons, rather than relearning them time and time again by making the same mistakes on similar projects.

There are many reasons why lesson learning is not working for most organizations. Perhaps the underlying causes include organizations treating lesson learning as a product (i.e., a report with documented lessons) rather than as a system, and a failure to treat lesson learning with the urgency and importance that it deserves.

In a nutshell: If learning lessons is important (and it usually is), then the process needs proper attention, not lip service.

Read more…

Tuesday, July 23, 2019

What Is the Real Budget of Your Project?

Your real project budget should always be expressed in terms of the expected project benefits.

Simply put, project success occurs when outcomes add value to the business. The value of a project is defined by subtracting all costs from all benefits the project delivers.

Using this logic, when your expected project benefits are $3 million and your company wants a return on investment (ROI) on each invested dollar of 50%, your project budget is $2 million. In other words, project budget = project benefits / 1.5.

If the expected benefits of your project go down, your project budget should go down. It is that simple.
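The arithmetic generalizes beyond a 50% ROI target: real budget = expected benefits / (1 + target ROI). A minimal sketch, using the hypothetical figures above:

```python
def real_budget(expected_benefits: float, target_roi: float) -> float:
    """Real project budget derived from expected benefits and a target ROI.

    With a 50% ROI target, every dollar spent must return $1.50 in
    benefits, so the most you can spend is benefits / 1.5.
    """
    return expected_benefits / (1 + target_roi)

# $3M in expected benefits at a 50% ROI target -> $2M real budget.
print(real_budget(3_000_000, 0.50))  # 2000000.0

# If expected benefits drop, the real budget drops with them.
print(real_budget(2_400_000, 0.50))  # 1600000.0
```

The second call shows the point of the post: a 20% drop in expected benefits shrinks the real budget by the same 20%, regardless of what was originally authorized.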

Many people confuse the real project budget with the authorized project budget. The authorized project budget is the total amount of authorized financial resources allocated for the particular purpose(s) of the sponsored project for a specific period of time. It is usually based on a mixture of project cost estimations, department budgets, free cash flow, and other factors.

But as soon as your costs exceed the authorized project budget (which is highly likely for technology projects), or the expected benefits turn out smaller than planned (highly likely as well), you should ask yourself what the real budget of your project is, and whether you are willing to spend it.

How do you know whether you’re looking at the right factors when it comes to determining the real budget?

Whether or not your company can spend this money is a financing and risk question, not a budget question. You could even secure a loan to do certain projects. This increases risk and reduces ROI (because of paid interest) but can be a valid option.

Whether or not this budget is enough to realize the project is a cost estimation and risk question, not a budget question. You should never confuse your cost estimations with your budget. Budget is what you can spend, while cost estimation is what you think you will spend. Ideally, the latter is less than the former.

And whether or not your organization is willing to spend their money on this project is a prioritization question, not a budget question.

In a nutshell: Always express your project budget in terms of expected benefits delivered and you’ll have a better idea of the real budget of your project.

Read more…