Tuesday, August 20, 2024

Case Study 19: The $20 Billion Boeing 737 Max Disaster That Shook Aviation

Case Study 19: The $20 Billion Boeing 737 Max Disaster That Shook Aviation

The Boeing 737 Max, once heralded as a triumph in aviation technology and efficiency, has since become synonymous with one of the most catastrophic failures in modern corporate history. 

This case study delves deep into the intricacies of the Boeing 737 Max program—a project that was initially designed to sustain Boeing's dominance in the narrow-body aircraft market but instead resulted in two fatal crashes, the loss of 346 lives, and an unprecedented global grounding of an entire fleet. 

Boeing's 737 series has long been a cornerstone of the company's commercial aircraft offerings. Since its inception in the late 1960s, the 737 has undergone numerous iterations, each improving upon its predecessor while maintaining the model's reputation for reliability and efficiency. 

By the 2000s, the 737 had become the best-selling commercial aircraft in history, with airlines around the world relying on its performance for short and medium-haul flights.

However, by the early 2010s, Boeing faced significant competition from Airbus, particularly with the introduction of the Airbus A320neo. The A320neo offered superior fuel efficiency and lower operating costs, thanks to its state-of-the-art engines and aerodynamic enhancements. 

In response, Boeing made the strategic decision to develop the 737 Max, an upgrade of the existing 737 platform that would incorporate similar fuel-efficient engines and other improvements to match the A320neo without necessitating extensive retraining of pilots.

Boeing's leadership was acutely aware that any requirement for significant additional training would increase costs for airlines and potentially drive them to choose Airbus instead.

The company selected the CFM International LEAP-1B engines for the 737 Max, which were larger and more fuel-efficient than those on previous 737 models. 

However, this choice introduced significant engineering challenges, particularly related to the aircraft's aerodynamics and balance.

The Maneuvering Characteristics Augmentation System (MCAS) was developed as a solution to these challenges. 

The system was designed to automatically adjust the aircraft's angle of attack in certain conditions to prevent stalling, thereby making the 737 Max handle similarly to older 737 models. This was intended to reassure airlines that their pilots could transition to the new model with minimal additional training. 

As Dennis Muilenburg, Boeing’s CEO at the time, stated, "Our goal with the 737 Max was to offer a seamless transition for our customers, ensuring they could benefit from improved efficiency without significant operational disruptions". 

The MCAS would later become central to the 737 Max's tragic failures.

Is your project headed for trouble? Find out! Just answer the 27 questions of my Project Trouble Assessment, which will take you less than 10 minutes, and you will know.

If you just want to read more project failure case studies, have a look at the overview of all case studies I have written here.

Timeline of Events

2011-2013: Project Inception and Initial Development

The 737 Max project was officially launched in 2011, with Boeing announcing that the aircraft would feature new engines, improved aerodynamics, and advanced avionics. The design and development process was marked by intense pressure to meet tight deadlines and to deliver a product that could quickly enter the market. By 2013, Boeing had completed the design phase, and the first test flights were scheduled for early 2016.

2016-2017: Certification and Commercial Launch

The first test flight of the 737 Max took place in January 2016, and the aircraft performed as expected under controlled conditions. The Federal Aviation Administration (FAA) granted the 737 Max its certification in March 2017, allowing it to enter commercial service. The aircraft was initially well-received by airlines, with thousands of orders placed within the first year of its launch.

October 29, 2018: Lion Air Flight JT610 Crash

Lion Air Flight JT610, a Boeing 737 Max flying from Jakarta to Pangkal Pinang in Indonesia, crashes, killing all 189 passengers and crew on board. Questions quickly emerge over previous control problems related to the aircraft’s MCAS. This marks the first major incident involving the 737 Max, and it raises significant concerns about the safety of the aircraft.

March 1, 2019: Boeing’s Share Price Peaks

Boeing’s share price reaches $446, an all-time record, after the company reports $100 billion in annual revenues for the first time. This reflects investor confidence in Boeing’s financial performance, despite the recent Lion Air crash.

March 10, 2019: Ethiopian Airlines Flight ET302 Crash

Ethiopian Airlines Flight ET302, another Boeing 737 Max, crashes shortly after takeoff from Addis Ababa, Ethiopia, killing all 157 people on board. The circumstances of this crash are eerily similar to the Lion Air disaster, with the MCAS system again suspected to be a contributing factor. The crash leads to global scrutiny of the 737 Max’s safety.

March 14, 2019: Global Grounding of the 737 Max

U.S. President Donald Trump announces the grounding of the entire 737 Max fleet in the United States, following the lead of regulators in several other countries. This grounding is unprecedented in its scope, affecting airlines worldwide and marking a significant turning point in the crisis surrounding the 737 Max.

October 29, 2019: Muilenburg Testifies Before Congress

Boeing CEO Dennis Muilenburg is accused of supplying “flying coffins” to airlines during angry questioning by U.S. senators. His testimony is widely criticized, and his handling of the crisis further erodes confidence in Boeing’s leadership.

December 23, 2019: Muilenburg Fired

Boeing fires Dennis Muilenburg, appointing Chairman Dave Calhoun as the new Chief Executive Officer. This leadership change is seen as an attempt to restore confidence in Boeing and address the mounting crisis.

March 6, 2020: U.S. Congressional Report

A U.S. congressional report blames Boeing and regulators for the “tragic and avoidable” 737 Max crashes. The report highlights numerous failures in the design, certification, and regulatory oversight processes, and it calls for significant reforms in the aviation industry.

March 11, 2020: Boeing Borrows $14 Billion

Boeing borrows $14 billion from U.S. banks to navigate the financial strain caused by the grounding of the 737 Max and the emerging COVID-19 pandemic. This loan is later supplemented by another $25 billion in debt, underscoring the financial challenges Boeing faces.

March 18, 2020: Boeing Shares Plummet

Boeing shares hit $89, the lowest since early 2013, reflecting investor concerns about the company’s future amid the 737 Max crisis and the impact of the COVID-19 pandemic on global air travel.

April 29, 2020: Job Cuts Announced

Boeing announces the first wave of job cuts, planning to reduce its workforce by 10% in response to the pandemic-induced drop in air travel. This move is part of broader efforts to cut costs and stabilize the company’s finances.

September 2020: Manufacturing Flaws in the 787 Dreamliner

Manufacturing flaws are discovered in Boeing’s 787 Dreamliner, leading to the grounding of some jets. This adds to Boeing’s mounting challenges and further complicates its efforts to recover from the 737 Max crisis.

November 18, 2020: U.S. Regulator Approves 737 Max for Flight

The U.S. Federal Aviation Administration approves some 737 Max planes to fly again after Boeing implements necessary design and software changes. This marks a significant step in Boeing’s efforts to return the 737 Max to service.

January 8, 2021: Boeing Pays $2.5 Billion Settlement

Boeing agrees to pay $2.5 billion to resolve a criminal charge of misleading federal aviation regulators over the 737 Max. This settlement includes compensation for victims’ families, penalties, and payments to airlines affected by the grounding.

November 11, 2021: Boeing Admits Responsibility

Boeing admits full responsibility for the second Max crash in a legal agreement with victims’ families. This admission marks a significant acknowledgment of the company’s failures in the development and certification of the 737 Max.

What Went Wrong?

Flawed Engineering and Design Decisions

One of the most significant factors contributing to the failure of the 737 Max was the flawed design of the MCAS system. Boeing engineers decided to rely on a single angle-of-attack (AOA) sensor to provide input to the MCAS, despite the known risks of sensor failure.

Traditionally, critical systems in aircraft design incorporate redundancy to ensure that a single point of failure does not lead to catastrophic consequences. 

Boeing's decision to omit this redundancy was driven by the desire to avoid triggering additional pilot training requirements, which would have undermined the 737 Max's cost advantage.

The placement of the new, larger engines also altered the aircraft's aerodynamic profile, making it more prone to nose-up tendencies during certain flight conditions. 

Instead of addressing this issue through structural changes to the aircraft, Boeing chose to implement the MCAS as a software solution. This decision, while expedient, introduced new risks that were not fully appreciated at the time. 

"We were under immense pressure to deliver the Max on time and under budget, and this led to some compromises that, in hindsight, were catastrophic," admitted a senior Boeing engineer involved in the project

Inadequate Regulatory Oversight

The FAA's role in the 737 Max disaster has been widely criticized. The agency allowed Boeing to conduct much of the certification process itself, including the evaluation of the MCAS system. This arrangement, known as Organization Designation Authorization (ODA), was intended to streamline the certification process, but it also created a conflict of interest. 

Boeing's engineers were under pressure to downplay the significance of the MCAS in order to avoid additional scrutiny from regulators. 

"The relationship between the FAA and Boeing became too cozy, and this eroded the regulatory oversight that is supposed to keep the public safe," said Peter DeFazio, Chairman of the House Transportation and Infrastructure Committee

Corporate Culture and Leadership Failures

At the heart of the 737 Max crisis was a corporate culture that prioritized profitability and market share over safety and transparency. 

Under the leadership of Dennis Muilenburg, Boeing was focused on delivering shareholder value, often at the expense of other considerations. This led to a culture where concerns about safety were dismissed or ignored, and where employees felt pressured to meet unrealistic deadlines. 

Muilenburg's public statements after the crashes, where he repeatedly defended the safety of the 737 Max despite mounting evidence to the contrary, only further eroded trust in Boeing. 

"There was a disconnect between the engineers on the ground and the executives in the boardroom, and this disconnect had tragic consequences," said John Hamilton, Boeing's former chief engineer for commercial airplanes

Communication Failures

Boeing's failure to adequately communicate the existence and functionality of the MCAS system to airlines and pilots was a critical factor in the two crashes. Pilots were not informed about the system or its potential impact on flight dynamics, which left them unprepared to handle a malfunction. 

After the Lion Air crash, Boeing issued a bulletin to airlines outlining procedures for dealing with erroneous MCAS activation, but this was seen as too little, too late. 

"It’s pretty asinine for them to put a system on an airplane and not tell the pilots who are operating it," said Captain Dennis Tajer of the Allied Pilots Association

Supply Chain and Production Pressures

The aggressive production schedule for the 737 Max also contributed to the project's failure. Boeing's management was determined to deliver the aircraft to customers as quickly as possible to fend off competition from Airbus. 

This led to a "go, go, go" mentality, where deadlines were prioritized over safety considerations. Engineers were pushed to their limits, with some reporting that they were working at double the normal pace to meet production targets. This rush to market meant that there was less time for thorough testing and validation of the MCAS system and other critical components

Moreover, Boeing's decision to keep the 737 Max's design as similar as possible to previous 737 models was driven by the desire to reduce production costs and speed up certification. This decision, however, meant that the aircraft's design was pushed to its limits, resulting in an aircraft that was more prone to instability than previous models. 

"We were trying to do too much with too little, and in the end, it cost lives," said an unnamed Boeing engineer involved in the project

Cost-Cutting Measures

Boeing's relentless focus on cost-cutting also played a significant role in the 737 Max's failure. The company made several decisions that compromised safety in order to keep costs down, such as relying on a single AOA sensor and not including an MCAS indicator light in the cockpit. 

These decisions were made in the name of reducing the cost of the aircraft and avoiding additional pilot training, which would have increased costs for airlines. However, these cost-cutting measures ultimately made the aircraft less safe and contributed to the crashes of Lion Air Flight 610 and Ethiopian Airlines Flight 302.

Organizational Failures

Boeing's organizational structure also contributed to the 737 Max's failure. The company's decision to move its headquarters from Seattle to Chicago in 2001 created a physical and cultural distance between the company's leadership and its engineers. 

This move, coupled with the increasing focus on financial performance over engineering excellence, led to a breakdown in communication and decision-making within the company. Engineers felt that their concerns were not being heard by management, and decisions were made without a full understanding of the technical challenges involved. 

"There was a sense that the leadership was more focused on the stock price than on building safe airplanes," said a former Boeing engineer

How Could Boeing Have Done Things Differently?

Prioritizing Safety Over Speed

One of the most significant ways Boeing could have avoided the 737 Max disaster was by prioritizing safety over speed. The company was under intense pressure to deliver the aircraft quickly to compete with Airbus, but this focus on speed led to critical safety oversights. 

By taking more time to thoroughly test and validate the MCAS system and other components, Boeing could have identified and addressed the issues that ultimately led to the crashes. 

"In hindsight, we should have taken more time to get it right, rather than rushing to meet deadlines," said Greg Smith, Boeing's Chief Financial Officer at the time

Incorporating Redundancy in Critical Systems

Another key change Boeing could have made was to incorporate redundancy in critical systems like the MCAS. Aviation safety protocols typically require multiple layers of redundancy to ensure that a single point of failure does not lead to catastrophe. 

By relying on a single AOA sensor, Boeing violated this principle and left the aircraft vulnerable to sensor malfunctions. Including a second AOA sensor and ensuring that both sensors had to agree before the MCAS system activated could have prevented the erroneous activation of the system that caused the crashes. 

"Redundancy is a fundamental principle of aviation safety, and it's one that we should have adhered to in the design of the 737 Max," said John Hamilton, Boeing's former chief engineer for commercial airplanes

Improving Communication and Transparency

Boeing could have also improved its communication and transparency with both regulators and airlines. The company's decision to downplay the significance of the MCAS system and not include it in the aircraft's flight manuals left pilots unprepared to deal with its activation. 

By fully disclosing the system's capabilities and risks to the FAA and airlines, Boeing could have ensured that pilots were adequately trained to handle the system in the event of a malfunction. 

"Transparency is key to building trust, and we failed in that regard with the 737 Max," said Dennis Muilenburg, Boeing's CEO at the time

Strengthening Regulatory Oversight

The FAA's delegation of much of the certification process to Boeing created a conflict of interest that contributed to the 737 Max's failure. By strengthening regulatory oversight and ensuring that the FAA maintained its independence in the certification process, the agency could have identified the risks associated with the MCAS system and required Boeing to address them before the aircraft entered service. 

This would have provided an additional layer of scrutiny and ensured that safety was prioritized over speed and cost. 

"The FAA's role is to be the independent watchdog of aviation safety, and we need to ensure that it has the resources and authority to fulfill that role effectively," said Peter DeFazio, Chairman of the House Transportation and Infrastructure Committee

Fostering a Safety-First Corporate Culture

Finally, Boeing could have fostered a corporate culture that prioritized safety over profitability. The company's increasing focus on financial performance and shareholder value led to a culture where safety concerns were often dismissed or ignored. 

By emphasizing the importance of safety in its corporate values and decision-making processes, Boeing could have created an environment where engineers felt empowered to raise concerns and where those concerns were taken seriously by management. 

"Safety needs to be the top priority in everything we do, and we lost sight of that with the 737 Max," said David Calhoun, who succeeded Dennis Muilenburg as Boeing's CEO in 2020

Closing Thoughts

The Boeing 737 Max disaster is a stark reminder of the consequences of prioritizing speed and cost over safety in the aviation industry. The two crashes that claimed the lives of 346 people were not the result of a single failure but rather a series of systemic issues, including flawed engineering decisions, inadequate regulatory oversight, and a corporate culture that valued profitability over safety. 

These failures have had far-reaching consequences for Boeing, resulting in billions of dollars in losses, a damaged reputation, and a loss of trust among airlines, regulators, and the flying public.

Moving forward, it is crucial that both Boeing and the wider aviation industry learn from these mistakes. 

This means prioritizing safety above all else, ensuring that critical systems are designed with redundancy, and maintaining transparency and communication with regulators and customers. 

It also means fostering a corporate culture that values safety and empowers employees to speak up when they see potential risks.  

If I look at the "accidents" that happened to Boeing employees who have spoken up, it seems to be the opposite...

Is your project headed for trouble? Find out! Just answer the 27 questions of my Project Trouble Assessment, which will take you less than 10 minutes, and you will know.

If you just want to read more project failure case studies, have a look at the overview of all case studies I have written here.

Sources

> Cannon-Patron, S., Gourdet, S., Haneen, F., Medina, C., & Thompson, S. (2021). A Case Study of Management Shortcomings: Lessons from the B737-Max Aviation Accidents. 

> Larcker, D. F., & Tayan, B. (2024). Boeing 737 MAX: Organizational Failures and Competitive Pressures. Stanford Graduate School of Business. 

> Boeing Co. (2019). Investigation Report: The Design and Certification of the Boeing 737 Max. 

> FAA. (2023). Examining Risk Management Failures: The Case of the Boeing 737 MAX Program. 

> Enders, T. (2024). Airbus Approach to Safety and Innovation: A Response to the Boeing 737 MAX. 

> Muilenburg, D. (2019). Boeing’s Commitment to Safety: A Public Statement. 

> Gates, D., & Baker, M. (2019). The Inside Story of MCAS: How Boeing’s 737 MAX System Gained Power and Lost Safeguards. The Seattle Times. 

> Tajer, D. (2019). Statement on MCAS and Pilot Awareness. Allied Pilots Association.

Read more…

Thursday, August 15, 2024

Lies, Damned Lies, and Statistics

Lies, damned lies, and statistics.

"Lies, damned lies, and statistics" is a phrase describing the persuasive power of statistics to bolster weak arguments. 

It is also sometimes used to doubt the statistics used to prove an opponent's point.

Last night I watched a startup pitching, and they presented a slide with some statistics showing the effectiveness of their solution to a particular problem. The first thing that came to my mind was exactly this phrase.

In statistics, there are several techniques (sometimes referred to as "tricks") that can be used to manipulate data or present results in a way that supports a particular point of view. 

While these methods can be used for legitimate analysis, they can also be misused to mislead or deceive.  

When you validate a business case or investment opportunity, you should be aware of these tricks, and that is why I have collected the most common ones for you.

1. Cherry-Picking Data

Selecting only the data that supports a particular conclusion while ignoring data that contradicts it.

Example: A study might report only the time periods where a particular stock performed well, ignoring periods of poor performance.

2. P-Hacking

Manipulating data or testing multiple hypotheses until a statistically significant result is found, often by increasing the number of tests without proper correction.

Example: Running many different statistical tests on a dataset and only reporting the ones that give a p-value below 0.05.
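To make the trick tangible, here is a minimal simulation (my illustration, not from the original list): it runs 100 t-tests on pure noise, so every "significant" result is a false positive. At a 5% significance level you should expect roughly five of them.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Run 100 independent t-tests on pure noise: both groups are drawn
# from the SAME distribution, so any "significant" result is spurious.
false_positives = 0
for _ in range(100):
    group_a = rng.normal(loc=0.0, scale=1.0, size=30)
    group_b = rng.normal(loc=0.0, scale=1.0, size=30)
    _, p_value = stats.ttest_ind(group_a, group_b)
    if p_value < 0.05:
        false_positives += 1

# Expect ~5 "hits" out of 100. Reporting only those, without
# disclosing the other ~95 tests, is p-hacking.
print(f"'Significant' results in pure noise: {false_positives}/100")
```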

3. Misleading Graphs

Presenting data in a graph with a misleading scale, axis manipulation, or selective data points to exaggerate or downplay trends.

Example: Using a y-axis that starts at a non-zero value to exaggerate differences between groups.
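The truncated-axis trick is easy to demonstrate. The sketch below (invented numbers, matplotlib for illustration) plots the same two values twice: with the y-axis starting at zero they look nearly equal; starting at 97, one bar appears three times taller than the other.

```python
import matplotlib.pyplot as plt

labels = ["Product A", "Product B"]
values = [98, 100]  # nearly identical, made-up numbers

fig, (honest, misleading) = plt.subplots(1, 2, figsize=(8, 3))

# Honest version: y-axis starts at zero, the bars look almost equal.
honest.bar(labels, values)
honest.set_ylim(0, 110)
honest.set_title("Y-axis from 0")

# Misleading version: y-axis starts at 97, B towers over A.
misleading.bar(labels, values)
misleading.set_ylim(97, 101)
misleading.set_title("Truncated y-axis")

plt.tight_layout()
plt.show()
```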

4. Overgeneralization

Drawing broad conclusions from a small or unrepresentative sample.

Example: Conducting a survey in one city and generalizing the results to the entire country.

5. Omitting the Baseline

Failing to provide a baseline or control group for comparison, making the results seem more significant than they are.

Example: Reporting that a treatment led to a 50% improvement without mentioning that a placebo led to a 45% improvement.

6. Selective Reporting of Outcomes

Reporting only positive outcomes while ignoring negative or neutral results.

Example: A drug trial that only reports the successful outcomes while ignoring cases where the drug had no effect or caused harm.

7. Data Dredging

Analyzing large volumes of data in search of any statistically significant relationship, often without a prior hypothesis.

Example: Examining multiple variables in a dataset until any two variables show a correlation, then presenting this as meaningful without further validation.

8. Ignoring Confounding Variables

Failing to account for variables that could influence the results, leading to spurious conclusions.

Example: Claiming that ice cream sales cause drowning deaths without accounting for the confounding variable of temperature (both increase during summer).

9. Manipulating the Sample Size

Choosing a sample size that is too small to detect a real effect, or so large that it exaggerates the significance of minor effects.

Example: Conducting a survey with only a few participants and claiming the results are representative of the entire population.

10. Misinterpreting Statistical Significance

Confusing statistical significance with practical significance or misrepresenting what a p-value actually indicates.

Example: Claiming that a treatment is effective based on a p-value below 0.05 without discussing the actual effect size or its practical implications.

11. Simpson's Paradox

Aggregating data without considering subgroups, which can lead to contradictory conclusions when the data is disaggregated.

Example: A treatment might seem effective in the overall population but ineffective or even harmful when broken down by specific demographic groups.
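The classic kidney-stone treatment numbers (Charig et al., widely used to illustrate this paradox) make it concrete; here is a short pandas sketch:

```python
import pandas as pd

# Treatment A beats B in BOTH subgroups, yet B looks better pooled,
# because B was mostly given the easier (small-stone) cases.
data = pd.DataFrame({
    "treatment": ["A", "A", "B", "B"],
    "subgroup":  ["small stones", "large stones"] * 2,
    "successes": [81, 192, 234, 55],
    "patients":  [87, 263, 270, 80],
})

# Per-subgroup rates: A wins both (93% vs 87%, 73% vs 69%).
data["rate"] = data["successes"] / data["patients"]
print(data)

# Pooled rates: B suddenly "wins" (83% vs 78%). Aggregation hides
# the confounder (case difficulty).
pooled = data.groupby("treatment")[["successes", "patients"]].sum()
pooled["rate"] = pooled["successes"] / pooled["patients"]
print(pooled)
```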

12. Non-Comparative Metrics

Presenting data without proper context, such as not comparing it to a relevant benchmark.

Example: Reporting that a company’s profits increased by 20% without mentioning that its competitors increased by 50%.

13. Double Dipping

Using the same data twice in a way that inflates the significance of the findings.

Example: Reporting an outcome as both a primary and secondary result, thus artificially increasing the perceived importance of the data.

14. Using Relative vs. Absolute Risk

Emphasizing relative risk instead of absolute risk to make a finding seem more significant.

Example: Saying a drug reduces the risk of disease by 50% (relative risk) when the absolute risk reduction is from 2% to 1%.

These techniques can be powerful when used correctly, but they can also be deceptive if not used with care and transparency. 

Ethical statistical practice involves full disclosure of methods, careful interpretation of results, and avoiding the intentional misuse of these tricks.


Read more…

Tuesday, August 13, 2024

A Step-by-Step Guide to Business Case Validation

A Step-by-Step Guide to Business Case Validation

Creating a business case is a systematic process designed to justify a proposed project, investment, or decision within a business context. 

A strong business case typically includes an introduction with background information, a clear problem or opportunity statement, a detailed analysis of options, a risk assessment, a financial analysis, a proposed solution, and a high-level implementation plan.

But validating your business case is just as important as creating it. 

The validation process is essential for confirming that the proposed initiative is likely to achieve its intended outcomes and align with organizational goals.

I have validated many business cases, both for my clients and as an active angel investor, and if there is one thing I have learned, it is the critical importance of ensuring that a business case is both robust and realistic before committing significant resources. 

Over the years I have developed a structured approach that I want to share with you.

1) Review the Problem Statement or Opportunity

Clarity and Accuracy: Ensure the problem or opportunity is clearly articulated and well understood. Question whether the impact of not addressing the problem or missing the opportunity is accurately presented.

See my article "Understanding Your Problem Is Half the Solution (Actually the Most Important Half)" for some further reading on this topic.

2) Scrutinize Assumptions

Identify and Test Assumptions: List and validate assumptions related to market conditions, customer behavior, cost estimates, and revenue projections. Compare them with historical data and industry benchmarks to ensure they are realistic.

Scenario Analysis: Conduct best-case, worst-case, and most likely scenarios to test the sensitivity of the business case to changes in key assumptions.

3) Evaluate the Analysis of Options

Comprehensive Consideration: Ensure all reasonable options, including doing nothing, have been considered. 

Verify Estimates and Projections: Ensure cost estimates are accurate and comprehensive, and validate revenue projections against market data and trends. Recalculate ROI and perform sensitivity analyses to assess the impact of changes in key variables.

Focus on Economic Benefits: In my opinion ALL benefits of a technology project should be expressed in dollars (or any other currency). To make estimating the benefits of a project easier and more realistic, I use a simple model to assess the economic benefits of a project. It consists of five benefit types (or buckets): Increased Revenue, Protected Revenue, Reduced Costs, Avoided Costs, and Positive Impacts.

Total Cost of Ownership (TCO): TCO is an analysis meant to uncover all the lifetime costs that follow from owning a solution. As a result, TCO is sometimes called 'life cycle cost analysis.' Never just look at the implementation or acquisition costs. Always consider TCO when looking at the costs of a solution. 

Time Value of Money: The time to value (TTV) measures the length of time necessary to finish a project and start realizing its benefits. One project valuation method incorporating this concept is the payback period (PB). There is one problem with the payback period: it ignores the time value of money (TVM). That is why some project valuation methods include the TVM aspect, for example internal rate of return (IRR) and net present value (NPV). See the sketch after this list for how payback and NPV differ.

Unbiased Evaluation: Check if the criteria for evaluating options are relevant and unbiased, and consider whether alternative criteria might lead to different recommendations.
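Here is the promised sketch of how payback and NPV differ, with invented numbers (it is not taken from the eBook): a project costing $1 million up front that returns $300,000 per year for five years.

```python
# Made-up cash flows: $1.0M out in year 0, $300K back in years 1-5.
cash_flows = [-1_000_000, 300_000, 300_000, 300_000, 300_000, 300_000]

# Payback period: years until cumulative cash flow turns positive.
# It ignores the time value of money entirely.
cumulative = 0
for year, flow in enumerate(cash_flows):
    cumulative += flow
    if cumulative >= 0:
        print(f"Payback reached in year {year}")  # -> year 4
        break

# Net present value: the same cash flows discounted at 10% per year,
# because a dollar in year 5 is worth less than a dollar today.
discount_rate = 0.10
npv = sum(flow / (1 + discount_rate) ** year
          for year, flow in enumerate(cash_flows))
print(f"NPV at {discount_rate:.0%}: ${npv:,.0f}")  # ~ $137,000
```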

For more details on the financial valuations of your options, have a look at my eBook The Project Valuation Model ™. You can download it for free here.

4) Examine the Proposed Solution

Feasibility: Assess whether the proposed solution is technically, financially, and operationally feasible, with realistic timelines.

Strategic Alignment: Verify that the solution aligns with the organization's broader strategic goals and represents the best value. 

See my article "Do Your Projects and Initiatives Support Your Strategy?" for some further reading on the topic.

5) Engage Stakeholders

Involvement and Feedback: Engage key stakeholders, including executives and subject matter experts, to gather feedback and address concerns. Their support is critical to the project's success.

See my article "10 Principles of Stakeholder Engagement" for some further reading on the topic.

6) Perform a Risk Assessment

Comprehensive Risk Analysis: Review the risk assessment to ensure all significant risks are identified and properly analyzed. Evaluate the feasibility of risk mitigation strategies and ensure contingency plans are in place.

See my article "Risk Management Is Project Management for Adults" for some further reading on the topic.

7) Review Legal, Regulatory, and Ethical Considerations

Compliance and Ethics: Ensure the project complies with all relevant laws, regulations, and industry standards. Consider any environmental, social, and ethical implications.

8) Assess Market and Competitive Analysis

Market and Competitive Validation: Reassess market conditions and competitive responses to ensure the business case remains relevant and viable in the current environment.

9) Evaluate Implementation Feasibility

Resource and Timeline Viability: Confirm that the necessary resources are available and that the proposed timeline is realistic. Consider conducting a pilot to validate key aspects of the business case.

Opportunity Cost: If you implement the proposed solution, what other initiatives can't you do? Is it still worth it?

Cost of Delay: What does it cost me if I do the project slower or later? Is there urgency?

For more details on the opportunity costs and cost of delay of your initiative, have a look at my eBook The Project Valuation Model ™. You can download it for free here.

10) Seek Third-Party Review

External Validation: Consider an independent review by a third-party expert to provide objective insights and increase the credibility of the business case. 

See for example my Independent Business Case Review service.

11) Final Review

Final Review: Ensure all sections of the business case are complete, coherent, and consistent. Revise as necessary based on the validation process.

Best Practices

Documentation: Keep a detailed record of validation steps, findings, and any revisions made to create a clear audit trail.

Stakeholder Engagement: Maintain clarity and avoid jargon to ensure understanding and buy-in from all stakeholders.

Data-Driven Analysis: Base your analysis and recommendations on solid data and evidence.

Constructive Approach: Focus on strengthening the business case rather than undermining it, using challenges to ensure the best possible outcome.

In a nutshell: Effective validation ensures that any weaknesses in the business case are addressed before committing significant resources, thereby reducing the risk of failure and increasing the likelihood of success.

If you are an executive sponsor, steering committee member, or a non-executive board member and want an unbiased expert view on your business case, then my Independent Business Case Review is what you are looking for.

Read more…

Monday, August 05, 2024

Top Ten Leading Indicators of Troubled Projects for Executives

Top Ten Leading Indicators of Troubled Projects for Executives

If you are a senior executive or a board member in the role of executive sponsor, project sponsor, or steering committee member, it is key to recognize potential issues before they become critical.

Recognizing early warning signs can make the difference between a project’s success and failure. 

Whilst lagging indicators are metrics that reflect past performance, leading indicators are metrics that predict future performance. 

They provide early signs of what is likely to happen, helping you and your organization to make proactive decisions.

Here are the top ten leading indicators of project trouble that every executive should be aware of:

#1 Definition of Done

Project failure starts when you can’t tell what “done” looks like in any meaningful way. Without some agreement on your vision of “done,” you’ll never recognize it when it arrives, except when you’ve run out of time or money or both.

Constant changes in project requirements are a red flag and a leading indicator for trouble. 

While some changes are inevitable, frequent and significant alterations indicate poor initial planning or external pressures. 

Both will disrupt your timelines and budgets.

And if your scope gets reduced to meet budgets and timelines, you can be sure your business case will be impacted as there will be fewer benefits.

#2 Definition of Success

A project can only be successful if its success criteria are defined and agreed upon. Therefore, the lack of clear objectives is one of the earliest signs of trouble.

Every project has multiple success criteria related to business results, product/service results, and project delivery results (cost, schedule, scope, and quality).

Some criteria are absolute, meaning they must be completed on or before the original planned date, and some are relative, meaning they must be completed by a date acceptable to the client.

Project success is determined by how many of your success criteria are satisfied, and how well.

Whether or not a project is successful also depends on who you ask:

> The very happy project manager who implemented the SAP project as scoped, on time, and below budget (I know, this will NEVER happen).

> The end users who absolutely hate the complexity and slowness of the new system.

> The COO who has seen IT costs double whilst none of the expected savings materialized.

They may all have very different opinions on the success of the project. 

Project success also depends on when you ask. 

Twelve months after the go-live, the users will have a better grasp of the system and initial performance problems will have been solved. And slowly but steadily, the expected savings will often start to materialize as well.

So in order to define the success of your project you should:

1) Define all the criteria relevant to your project. 

2) Define how you will measure them.

3) Define when you will measure them.

The lack of these definitions is a great leading indicator for trouble.

#3 Financial Runway

The burn rate of a project is a lagging indicator, as it describes how much money is spent (or lost) over a given period of time.

The financial runway of a project refers to the length of time a project can continue to operate before it runs out of funding, based on its current expenditure rate. 

It is a crucial leading indicator for executives, as it helps ensure that the project remains financially viable.

An example: 

If a project has a total budget of $10 million and has already spent $4 million, the remaining Current Funding is $6 million.

If this project spends $500,000 per month, the monthly Burn Rate is $500,000.

The Financial Runway (in months) = Current Funding / Burn Rate = $6 million / $0.5 million = 12 months.

This means the project can continue to operate for 12 months before it runs out of funds, assuming the burn rate remains constant and no additional funding is received.

If the runway is shorter than your planned duration, you know you are in for some trouble.
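The same calculation as a small sketch (the function name and the 16-month planned duration are mine, for illustration):

```python
def financial_runway_months(total_budget, spent_to_date, monthly_burn_rate):
    """Months of runway left at the current burn rate."""
    current_funding = total_budget - spent_to_date
    return current_funding / monthly_burn_rate

# The example from above: $10M budget, $4M spent, $500K/month burn.
runway = financial_runway_months(10_000_000, 4_000_000, 500_000)
print(f"Runway: {runway:.0f} months")  # -> Runway: 12 months

# The leading-indicator check: compare runway with the remaining
# planned duration of the project.
remaining_planned_months = 16  # hypothetical plan
if runway < remaining_planned_months:
    print("Warning: funding runs out before the planned end date.")
```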

#4 Missing Milestones

Missing or hitting a milestone deadline is a lagging indicator. But it is also a leading indicator of future trouble.

Missing initial milestones or deadlines signals deeper issues. It might indicate that the project plan was unrealistic, that there are problems with team productivity, or that unforeseen obstacles are impacting progress.

And if missing milestones is combined with keeping the go-live date the same, it rarely ends well for your project.

Unfortunately, I see it again and again. Multiple important milestones are missed. Sometimes by months. And the ones that are marked as completed have their original scope reduced.

For example, system integration tests (SIT) run without all interfaces completed and without production-like data.

Or user acceptance testing (UAT) with systems that are not ready or contain so many bugs that end-to-end testing is not possible.

Astonishingly, in most cases both the project sponsor and the project manager seem convinced that all is “green” and it will work out, until the project folds like a house of cards.

When you look at a typical large system implementation project, it is still largely implemented like a waterfall. This includes ERP systems, CRM systems, core banking systems, etc.

And this has not changed with the rise of software as a service (SaaS) offerings like Salesforce, SAP S/4HANA, Workday, etc.

Yes, the design and build phases are now iterative, but at a certain point your full solution needs to be tested end-to-end. This means one or more SIT phases and a UAT phase that includes all upstream and downstream systems and processes.

You also need time to fix all the findings of your testing, and to do re-testing. If you are lucky one cycle is enough. Usually it is not.

You also need to train all your users and your support teams on the new solution and processes. Ideally on a solution that actually works. 

And when you are ready to go, you have a cutover phase from your old solution to your new solution. 

So yes, you design and build iteratively, but the rest is still shaped like a waterfall.

And this means that if you miss important milestones and you don’t change the go-live date, you will steal time from the very important phases that come at the end of such a project.

Starting these late phases without having completed the previous phase just does not make sense and will drive your test team and end users crazy.

Missing milestones does not mean your project team is doing a bad job, but they obviously underestimated the time it takes to do certain things.

Chances are this is a pattern that is repeated for the later phases of the project. 

So you will probably need more time for these phases than planned. Not less.

In my experience there are only two probable outcomes of such projects:

1) They never go live

2) They go live too early 

The latter can be even worse than the first.

#5 Issue Resolution Time

A very simple metric for determining the health of your project is the age of issues. 

Issues are like fish; when they get old, they stink. 

A sure sign of a lack of leadership and upcoming trouble is old issues or issues that take longer than necessary to resolve. 

Issues are obstacles that get in the way of execution. It is the project manager’s role to resolve and eliminate these issues as quickly as possible regardless of the owner or the cause. 

If the project manager cannot solve it (or doesn’t try), it is up to the executive sponsor and the steering committee.

If an issue stands in the way of executing project objectives or makes it difficult for project managers to perform, it’s your responsibility as an executive to resolve it. 

If the project manager is the problem, it is also your responsibility to solve this.

Fix the root causes of problems and fix them early. 

What you want to avoid is a form of collective project amnesia where issues come up and never get resolved. 

Issues have a funny way of resurfacing when they don’t get resolved.

Old issues and/or issues that take too long to resolve are an indicator of poor leadership and a great leading indicator for trouble in the future.
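A trivial way to operationalize this metric as a sketch (issue names, dates, and the 30-day threshold are invented for illustration):

```python
from datetime import date

# Hypothetical open-issue log: (issue, date opened).
open_issues = [
    ("Interface spec not signed off", date(2024, 3, 1)),
    ("Test environment unstable", date(2024, 8, 5)),
]

# Flag issues older than a threshold: a simple project-health metric.
MAX_AGE_DAYS = 30
today = date(2024, 8, 20)
for title, opened in open_issues:
    age_days = (today - opened).days
    status = "STALE" if age_days > MAX_AGE_DAYS else "ok"
    print(f"{status:5} {age_days:4}d  {title}")
```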

#6 Risk Management

The presence or lack of risk management is a great leading indicator for the impact of negative surprises. Or, as Tim Lister puts it: “Risk management is project management for adults.”

Effective risk management involves identifying, assessing, and mitigating risks. 

Signs of trouble include a lack of a risk management plan, unidentified risks emerging frequently, or failure to address known risks adequately. 

Poor risk management can, and usually will, lead to significant setbacks.

#7 Team Situation

The stability, quality, availability, and collaboration of your project team are good leading indicators for trouble.

Stability: a high turnover rate among project team members will disrupt continuity and momentum. It often reflects broader issues such as poor project management, low morale, or inadequate support, which will compromise project success.

Availability: are all approved team positions staffed? If not, for how long have unstaffed positions been empty? Is the project manager constantly looking for new people to fill empty positions? Are the right people in the right roles? Are positions part-time staffed that should be full time staffed?

Quality: look at the skill set necessary to deliver the project: Does the team have these skills? Does the team have the necessary experience or are they learning on the job? Of course you should use your project to let juniors grow and learn, but you will need experience as well.

Collaboration: look at the project team’s relationships with other external groups and teams. How well do they work with the other teams and stakeholders? Are there any internal or external team conflicts or tensions that could disrupt the project? How long have they existed? How severe are they? Have there been attempts to resolve them? How did this go? Are there issues/opportunities the team isn't discussing because they're too uncomfortable?

#8 Stakeholder and Leadership Engagement

Successful projects depend on the active involvement of stakeholders and leadership. 

Signs of trouble include stakeholders and leaders who are disengaged, unavailable for key meetings, or not providing necessary feedback. 

Poor engagement, whether it’s informal or formal, is a red flag.

When your stakeholders and leaders do not care about your project, then why would anybody else?

#9 Progress Reports

Or better said, non-progress reports are a leading indicator for trouble.

Watermelon projects, which come across as green on our project dashboards but are deep red on the inside, seem to have one thing in common:

There is minimal activity associated with them.

You don’t see tangible results, you don’t hear the organization talking about the project, the reports state the same activities over and over, the only meeting you are invited to is the monthly SteerCo meeting, and your project manager is avoiding you.

Without hard work a large project will not come to fruition. And hard work leaves trails. 

The absence of these trails predicts trouble.

#10 Gut Feeling

Trust your gut.

If your intuition is telling you that something is wrong, ask questions. Don’t stop until you get an answer that makes sense to you.

Ask the same question to multiple people close to the project. If you get conflicting answers you know something is not right.

If your project team, project manager, and your stakeholders all have different opinions on the status and chances of success of your project, order an independent project review.

It is your best insurance against a multi-million-dollar failure.

If you are an executive sponsor, steering committee member, or a non-executive board member and want to learn what you need to do so that your project does not land on my list of project failures, then my (Non)-Executive Crash Course is what you are looking for.

If you want to know where you stand with that large, multi-year, strategic project, or you think one of your key projects is in trouble, then an Independent Project Review is what you are looking for.

Read more…

Sunday, July 28, 2024

Why Does My Project Need a Steering Committee?

Why Does My Project Need a Steering Committee?

"Why does my project need a Steering Committee?

I get asked this question by senior executives more often than I would like.

A Steering Committee (StC) is an essential element of any large transformation project organisation. 

Here are my top 10 reasons why.

1) Strategic Direction and Oversight: Your StC provides strategic guidance and oversight for projects or organisational initiatives, ensuring alignment with broader goals and objectives.

2) Decision-Making Authority: Your StC has the authority to make critical decisions that significantly impact the direction and success of your project. This includes approving budgets, resources, and major changes. 

3) Accountability: Your StC provides a structured governance framework that ensures accountability. Your StC monitors progress, sets performance metrics, and holds the project team accountable for meeting their objectives.

4) Risk Management: Your StC helps identify potential risks and develop mitigation strategies. Your StC’s oversight ensures that risks are managed proactively and that the project remains on track.

5) Resource Allocation: Your StC ensures that resources (financial, human, and technological) are allocated efficiently and effectively, prioritising initiatives that provide the most value to the organisation.

6) Stakeholder Representation and Engagement: Your StC includes representatives from various stakeholder groups, ensuring that diverse perspectives and interests are considered in decision-making processes. Your StC members engage with other senior level stakeholders to remove barriers and get necessary support. 

7) Change Management: Your StC plays a crucial role in managing change, helping to navigate the complexities of organisational transitions and ensuring that changes are implemented smoothly.

8) Conflict Resolution: Your StC acts as an arbitrator for resolving conflicts and differences that arise during a project. Your StC’s authority and strategic perspective enable them to mediate effectively.

9) Policy and Compliance: Your StC ensures that projects comply with organisational policies, industry standards, and regulatory requirements, thereby safeguarding the organisation from legal and reputational risks.

10) Facilitating Communication: Your StC facilitates communication between the project team and senior management, ensuring that critical information flows effectively and decisions are well-informed.

The importance of a StC lies in its ability to provide structured governance, strategic oversight, and effective decision-making, all of which are crucial for the successful delivery and alignment of the project with organisational goals.

If you read this and think “My project has a StC, but it is not even coming close to doing all of this.” or “Damn, we have been doing it completely wrong.”, then have a look at my executive crash course on how to navigate large and complex transformation programs.

(Non)-Executive Crash Course - How to navigate large and complex transformation projects

Read more…

Monday, July 22, 2024

The Most Important Role on Any Large Transformation Project

Change Management and Your CAST Of Characters

The most important role on a large transformation project is the project sponsor. 

Not the project manager. 

According to the Project Management Institute (PMI)'s 2018 Pulse of the Profession In-Depth Report, "1 in 4 organisations (26%) report that the primary cause of failed projects is inadequate sponsor support". 

By contrast, "organisations with a higher percentage of projects that include actively engaged executive sponsors, report 40% more successful projects than those with a lower percentage of projects with actively engaged sponsors".

And according to the 2015 Annual Review of Projects of the UK’s National Audit Office, “the effectiveness of the project sponsor is the best single predictor of project success or failure”.

Project sponsors on large and complex multi-million-dollar transformation projects are often senior executives, and most are not trained in any way to be successful in their executive sponsor role.

Nor do they take the time that is needed to execute this role.

Often the same is the case for the project steering committee members.

Guess what happens with these projects?

If you are in need of training for executive sponsors and steering committee members, have a look at my half-day training for senior executives:

(Non)-Executive Crash Course - How to navigate large and complex transformation projects

Read more…

Tuesday, July 02, 2024

Case Study 18: How Excel Errors and Risk Oversights Cost JP Morgan $6 Billion

Case Study 18: How Excel Errors and Risk Oversights Cost JP Morgan $6 Billion

In the spring of 2012, JP Morgan Chase & Co. faced one of the most significant financial debacles in recent history, known as the "London Whale" incident. The debacle resulted in losses amounting to approximately $6 billion, fundamentally shaking confidence in the bank's risk management practices.

At the core of this catastrophe was the failure of the Synthetic Credit Portfolio Value at Risk (VaR) Model, a sophisticated financial tool intended to manage the risk associated with the bank's trading strategies. 

The failure of the VaR model not only had severe financial repercussions but also led to intense scrutiny from regulators and the public. It highlighted the vulnerabilities within JP Morgan's risk management framework and underscored the potential dangers of relying heavily on quantitative models without adequate oversight. 

This case study explores the intricacies of what went wrong and how such failures can be prevented in the future. By analyzing this incident, I seek to understand the systemic issues that contributed to the failure and to identify strategies that can mitigate similar risks in other financial institutions. The insights gleaned from this case are not just relevant to JP Morgan but to the broader financial industry, which increasingly depends on complex models to manage risk.

Background

The Synthetic Credit Portfolio (SCP) at JP Morgan was a part of the bank's Chief Investment Office (CIO), which managed the company's excess deposits through various investments, including credit derivatives. The SCP was specifically designed to hedge against credit risk by trading credit default swaps and other credit derivatives. The portfolio aimed to offset potential losses from the bank's other exposures, thereby stabilizing overall performance.

In 2011, JP Morgan developed the Synthetic Credit VaR Model to quantify and manage the risk associated with the SCP. The model was intended to provide a comprehensive measure of the potential losses the bank could face under various market conditions. This would enable the bank to make informed decisions about its trading strategies and risk exposures. The VaR model was implemented using a series of Excel spreadsheets, which were manually updated and managed.
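For readers unfamiliar with VaR, the sketch below shows the general idea behind a simple historical-simulation VaR on made-up data. It is emphatically not JP Morgan's model, which was far more complex, but it illustrates the question such a model is supposed to answer: how much could we lose on a bad day?

```python
import numpy as np

def historical_var(daily_pnl, confidence=0.95):
    """One-day Value at Risk via historical simulation: the loss
    threshold exceeded on only (1 - confidence) of past days."""
    return -np.percentile(daily_pnl, (1 - confidence) * 100)

# Made-up daily profit-and-loss history for a portfolio.
rng = np.random.default_rng(7)
daily_pnl = rng.normal(loc=0, scale=1_000_000, size=500)

var_95 = historical_var(daily_pnl, confidence=0.95)
print(f"95% one-day VaR: ${var_95:,.0f}")
# Read as: "on 95% of days we expect to lose less than this amount."
# The figure is only as good as the data and process feeding it.
```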

Despite the sophistication of the model, its development was plagued by several critical issues. The model's architect lacked prior experience in developing VaR models, and the resources allocated to the project were inadequate. This led to a reliance on manual processes, increasing the risk of errors and inaccuracies. Furthermore, the model's implementation and monitoring were insufficiently rigorous, contributing to the eventual failure that led to massive financial losses.

The primary objective of JP Morgan's Synthetic Credit VaR Model was to provide an accurate and reliable measure of the risk associated with the bank's credit derivatives portfolio. This would enable the bank to manage its risk exposures effectively, ensuring that its trading strategies remained within acceptable limits. The model aimed to capture the potential losses under various market conditions, allowing the bank to make informed decisions about its investments.

In addition to the primary objective, the Synthetic Credit VaR Model was expected to provide a foundation for further advancements in the bank's risk management practices. By leveraging the insights gained from the model, JP Morgan hoped to develop more sophisticated tools and techniques for managing risk. This would enable the bank to stay ahead of emerging threats and maintain a competitive edge in the financial industry.

Is your project headed for trouble? Find out! Just answer the 27 questions of my Project Trouble Assessment, which will take you less than 10 minutes, and you will know.

If you just want to read more project failure case studies, have a look at the overview of all case studies I have written here.

Timeline of Events

Early 2011: Development of the Synthetic Credit VaR Model begins. The project is led by an individual with limited experience in developing VaR models. The model is built using Excel spreadsheets, which are manually updated and managed.

September 2011: The Synthetic Credit VaR Model is completed and implemented within the CIO. The model is intended to provide a comprehensive measure of the potential losses the bank could face under various market conditions.

January 2012: Increased trading activity in the SCP causes the CIO to exceed its stress loss risk limits. This breach continues for seven weeks. The bank informs the Office of the Comptroller of the Currency (OCC) of the ongoing breach, but no additional details are provided, and the matter is dropped.

March 23, 2012: Ina Drew, head of the CIO, orders a halt to SCP trading due to mounting concerns about the portfolio's risk exposure.

April 6, 2012: Bloomberg and the Wall Street Journal publish reports on the London Whale, revealing massive positions in credit derivatives held by Bruno Iksil and his team.

April 9, 2012: Thomas Curry becomes the 30th Comptroller of the Currency. Instead of planning for the upcoming 150th anniversary of the OCC, Mr. Curry is confronted with the outbreak of news reports about the London Whale incident.

April 16, 2012: JP Morgan provides regulators with a presentation on SCP. The presentation states that the objective of the "Core Credit Book" since its inception in 2007 was to protect against a significant downturn in credit. However, internal reports indicate growing losses in the SCP.

May 4, 2012: JP Morgan reports SCP losses of $1.6 billion for the second quarter. The losses continue to grow rapidly even though active trading has stopped.

December 31, 2012: Total SCP losses reach $6.2 billion, marking one of the most significant financial debacles in the bank's history.

January 2013: The OCC issues a Cease and Desist Order against JP Morgan, directing the bank to correct deficiencies in its derivatives trading activity. The Federal Reserve issues a related Cease and Desist Order against JP Morgan's holding company.

September - October 2013: JP Morgan settles with regulators, paying $1.020 billion in penalties. The OCC levies a $300 million fine for inadequate oversight and governance, insufficient risk management processes, and other deficiencies.

What Went Wrong?

Model Development and Implementation Failures

The development of JP Morgan's Synthetic Credit VaR Model was marred by several critical issues. The model was built using Excel spreadsheets, which involved manual data entry and copying and pasting of data. This approach introduced significant potential for errors and inaccuracies. As noted in JP Morgan's internal report, "the spreadsheets ‘had to be completed manually, by a process of copying and pasting data from one spreadsheet to another’". This manual process was inherently risky, as even a minor error in data entry or formula could lead to significant discrepancies in the model's output.

Furthermore, the individual responsible for developing the model lacked prior experience in creating VaR models. This lack of expertise, combined with inadequate resources, resulted in a model that was not robust enough to handle the complexities of the bank's trading strategies. The internal report highlighted this issue: "The individual who was responsible for the model’s development had not previously developed or implemented a VaR model, and was also not provided sufficient support". This lack of support and expertise significantly compromised the quality and reliability of the model.

Insufficient Testing and Monitoring

The Model Review Group (MRG) did not conduct thorough testing of the new model. They relied on limited back-testing and did not compare results with the existing model. This lack of rigorous testing meant that potential issues and discrepancies were not identified and addressed before the model was implemented. The internal report criticized this approach: "The Model Review Group’s review of the new model was not as rigorous as it should have been". Without comprehensive testing, the model was not validated adequately, leading to unreliable risk assessments.
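
By contrast, a meaningful back-test need not be elaborate. The sketch below, using fabricated data rather than anything the MRG actually ran, counts the days on which realized losses exceeded the model's VaR and compares that count with what the confidence level implies; a materially different breach rate signals a mis-calibrated model.

```python
import numpy as np

def backtest_var(daily_pnl, daily_var, confidence=0.99):
    """Count days where the realized loss exceeded the reported VaR.

    At 99% confidence, breaches are expected on ~1% of days; far more
    (or far fewer) breaches than that is a red flag for the model.
    """
    daily_pnl = np.asarray(daily_pnl)
    daily_var = np.asarray(daily_var)
    breaches = int((daily_pnl < -daily_var).sum())
    expected = (1.0 - confidence) * len(daily_pnl)
    return breaches, expected

# Fabricated example: 250 trading days, unit-normal P&L, constant VaR.
rng = np.random.default_rng(0)
pnl = rng.normal(0.0, 1.0, 250)
var = np.full(250, 2.33)  # ~99th percentile loss of a unit normal
observed, expected = backtest_var(pnl, var)
print(f"breaches observed: {observed}, expected: ~{expected:.1f}")
```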

Moreover, monitoring and oversight of the model's implementation were insufficient. The CIO risk management team played a passive role in the model's development, approval, implementation, and monitoring, viewing themselves as consumers of the model rather than as owners responsible for its operation. The result was inadequate quality control and frequent formula and code changes in the spreadsheets. The internal report noted, "Data were uploaded manually without sufficient quality control. Spreadsheet-based calculations were conducted with insufficient controls and frequent formula and code changes were made". This lack of oversight and quality control further compromised the reliability of the model.

Regulatory Oversight Failures

Regulatory oversight was inadequate throughout the development and implementation of the Synthetic Credit VaR Model. The OCC, JP Morgan's primary regulator, did not request critical performance data and failed to act on risk limit breaches. As highlighted in the Journal of Financial Crises, "JPM did not provide the OCC with required monthly reports... yet the OCC did not request the missing data". This lack of proactive oversight allowed significant issues to go unnoticed and unaddressed.

Additionally, the OCC was informed of risk limit breaches but did not investigate the causes or implications of these breaches. For instance, the OCC was contemporaneously notified in January 2012 that the CIO exceeded its VaR limit and the higher bank-wide VaR limit for four consecutive days. However, the OCC did not investigate why the breach happened or ask why a new model would cause such a large reduction in VaR. This failure to follow up on critical risk indicators exemplified the shortcomings in regulatory oversight.
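
A simple automated monitor would have made a multi-day breach hard to ignore. The sketch below is an illustration of the idea, not the OCC's or the bank's actual tooling; the dates, the limit, and the escalation threshold are all assumptions.

```python
from datetime import date

def persistent_breaches(daily_usage, limit, max_run=3):
    """Yield (first_day, last_day) for every stretch of more than
    `max_run` consecutive days above the limit -- the pattern that
    should trigger an escalation rather than a dropped notice."""
    run = []
    for day in sorted(daily_usage):
        if daily_usage[day] > limit:
            run.append(day)
        else:
            if len(run) > max_run:
                yield run[0], run[-1]
            run = []
    if len(run) > max_run:
        yield run[0], run[-1]

# Assumed figures: four consecutive days above a $95M VaR limit.
usage = {
    date(2012, 1, 16): 93_000_000,
    date(2012, 1, 17): 96_000_000,
    date(2012, 1, 18): 98_000_000,
    date(2012, 1, 19): 97_000_000,
    date(2012, 1, 20): 99_000_000,
}
for first, last in persistent_breaches(usage, limit=95_000_000):
    print(f"Escalate: VaR limit breached from {first} to {last}")
```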

How Could JP Morgan Have Done Things Differently?

Improved Model Development Processes

One of the primary ways JP Morgan could have avoided the failure of the Synthetic Credit VaR Model was by improving the model development processes. Implementing automated systems for data management could have significantly reduced the risk of human error and improved accuracy. Manual data entry and copying and pasting of data in Excel spreadsheets were inherently risky practices. By automating these processes, the bank could have ensured more reliable and consistent data management.
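
As an illustration of what that automation could look like, the sketch below replaces copy-and-paste between spreadsheets with a single scripted load that fails loudly on bad input. The file name and column names are assumptions made for the example, not the CIO's actual data layout.

```python
import pandas as pd

def load_positions(path: str) -> pd.DataFrame:
    """Load position data with basic validation instead of manual
    copy-and-paste. Column names are illustrative assumptions."""
    df = pd.read_csv(path)

    required = {"trade_id", "notional", "mid_price"}
    missing = required - set(df.columns)
    if missing:
        raise ValueError(f"missing columns: {sorted(missing)}")
    if df["trade_id"].duplicated().any():
        raise ValueError("duplicate trade IDs -- possible double paste")
    if df[["notional", "mid_price"]].isna().any().any():
        raise ValueError("blank cells in notional or mid_price")
    return df

# Hypothetical usage:
# positions = load_positions("cio_positions_2012-01-16.csv")
```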

Moreover, allocating experienced personnel and adequate resources for model development and testing would have ensured more robust results. The individual responsible for developing the model lacked prior experience in VaR models, and the resources allocated to the project were inadequate. By involving experts in the field and providing sufficient support, the bank could have developed a more sophisticated and reliable model. As highlighted in the internal report, "Inadequate resources were dedicated to the development of the model".

Conducting extensive back-testing and validation against existing models could have identified potential discrepancies and flaws. The Model Review Group did not conduct thorough testing of the new model, relying on limited back-testing. By implementing a more rigorous testing process, the bank could have validated the model's accuracy and reliability before its implementation.
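
One concrete form such validation can take is a parallel run: compute both models' VaR on the same days and flag material divergence before go-live. The sketch below is illustrative; the series and the 20% tolerance are assumptions, though a new model sharply lowering reported VaR is exactly the kind of shift that should have had to be explained.

```python
def parallel_run(var_old, var_new, tolerance=0.20):
    """Compare old- and new-model VaR day by day and flag any date where
    the new figure diverges from the old by more than `tolerance`."""
    flags = []
    for day, (old, new) in enumerate(zip(var_old, var_new)):
        if abs(new - old) / old > tolerance:
            flags.append((day, old, new))
    return flags

# Fabricated series ($ millions): the new model reports about half the VaR.
old_series = [95.0, 97.0, 102.0, 99.0]
new_series = [51.0, 50.0, 55.0, 52.0]
for day, old, new in parallel_run(old_series, new_series):
    print(f"day {day}: VaR fell from ${old}M to ${new}M -- investigate before go-live")
```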

Enhanced Oversight and Governance

Enhanced oversight and governance could have prevented the failure of the Synthetic Credit VaR Model. Ensuring regular, detailed reporting to regulators and internal oversight bodies would have maintained transparency and accountability. JP Morgan failed to provide the OCC with required monthly reports, and the OCC did not request the missing data. By establishing regular reporting protocols and ensuring compliance, the bank could have maintained better oversight of the model's performance.

Addressing risk limit breaches promptly and thoroughly would have mitigated escalating risks. The OCC was informed of risk limit breaches but did not investigate the causes or implications of these breaches. By taking immediate action to address and rectify risk limit breaches, the bank could have prevented further escalation of risks. Proactive risk management is crucial in identifying and mitigating potential issues before they lead to significant losses.

Implementing continuous monitoring and review processes for all models and strategies could have identified issues before they led to significant losses. The CIO risk management team played a passive role in the model's development, approval, implementation, and monitoring. By adopting a more proactive approach to monitoring and reviewing the model, the bank could have ensured that potential issues were identified and addressed promptly. Continuous monitoring and review processes are essential in maintaining the accuracy and reliability of risk management models.

Comprehensive Risk Management Framework

Developing a comprehensive risk management framework could have further strengthened JP Morgan's ability to manage risks effectively. This framework should have included clear policies and procedures for model development, implementation, and monitoring. By establishing a robust risk management framework, the bank could have ensured that all aspects of the model's lifecycle were adequately managed.

Additionally, enhancing collaboration and communication between different teams involved in risk management could have improved the model's reliability. The CIO risk management team viewed themselves more as consumers of the model rather than as responsible for its development and operation. By fostering collaboration and communication between different teams, the bank could have ensured that all stakeholders were actively involved in the model's development and monitoring.

Closing Thoughts

The failure of JP Morgan's Synthetic Credit VaR Model underscores the critical importance of rigorous development, testing, and oversight in financial risk management. This incident serves as a cautionary tale for financial institutions relying on complex models and emphasizes the need for robust governance and proactive risk management strategies. By learning from this failure, financial institutions can develop more reliable and effective risk management frameworks.

The insights gleaned from this case study are not just relevant to JP Morgan but to the broader financial industry, which increasingly depends on complex models to manage risk. By addressing the systemic issues that contributed to the failure and implementing the strategies outlined in this case study, financial institutions can mitigate similar risks in the future.

In conclusion, the London Whale incident highlights the vulnerabilities within JP Morgan's risk management framework and underscores the potential dangers of relying heavily on quantitative models without adequate oversight. By enhancing model development processes, improving oversight and governance, and developing a comprehensive risk management framework, financial institutions can ensure more reliable and effective risk management practices.

Is your project headed for trouble? Find out! Just answer the 27 questions of my Project Trouble Assessment, which will take you less than 10 minutes, and you will know.

If you just want to read more project failure case studies, have a look at the overview of all case studies I have written here.

Sources

1) Internal Report of JPMorgan Chase & Co. Management Task Force Regarding 2012 CIO Losses, January 16, 2013

2) A whale in shallow waters: JPMorgan Chase, the “London Whale” and the organisational catastrophe of 2012, François Valérian, November 2017

3) JPMorgan Chase London Whale E: Supervisory Oversight, Arwin G. Zeissler and Andrew Metrick, Journal of Financial Crises, 2019

4) JPMorgan Chase London Whale C: Risk Limits, Metrics, and Models, Arwin G. Zeissler and Andrew Metrick, Journal of Financial Crises, 2019

5) JPMorgan Chase Whale Trades: A Case History of Derivatives Risks and Abuses, Permanent Subcommittee on Investigations United States Senate, 2013

Monday, July 01, 2024

Boards Must Understand Technology. Period.

Reflecting on the 2024 Swiss Board Day in Bern, it has become even clearer to me that understanding the current technological landscape and its associated opportunities, challenges, and risks is now essential for both executive and non-executive board members.

Equally important is staying informed about governance issues related to these technologies, including regulatory challenges and potential pitfalls. 

There is no way around it anymore: in order to set the company's vision and strategy, the board must understand how technology impacts the business and its future value creation.

Consider the narratives surrounding artificial intelligence (AI). While ChatGPT brought large language models into the spotlight, various AI applications like face ID, image recognition, customer service chatbots, and expert systems for tasks such as chess and self-driving cars have been in use or development for decades.

Despite media focus on the risks of AI, such as deep fakes and cyber threats, there are significant defensive benefits, including enhancing cybersecurity and verification processes. Boards need to understand AI’s role within their organizations, lead the way in defining “responsible AI,” and ensure issues like privacy, bias, and equity are addressed in AI development and deployment.

Clients, regulators, and markets now expect rapid and effective integration of new business drivers into strategies. Building trust around new technologies with internal and external stakeholders is crucial. 

Cybersecurity, augmented reality (AR), robotics, and AI are just a few areas where companies must identify, measure, disclose, and adapt to strategic opportunities and risks. Not every technology is relevant for your company, but the ones that are should be evaluated in detail.

How can a board effectively oversee the long-term growth and evolution of their company amidst ongoing new opportunities and challenges, especially if they lack specific knowledge of existing and emerging technologies and their risks?

Boards should start by leveraging internal company resources. Seek out knowledge by visiting your company's offices, attending small group sessions, taking production tours, and joining town halls to witness new developments firsthand and understand their strategic alignment.

Dedicated training and workshops with relevant experts can help you grasp the business implications of key technologies. Your trainer(s) should have experience implementing technology in your industry.

Even more important is that your trainer(s) can explain technology in a way that non-technical people can understand, so they are able to apply their newly gained knowledge to their business.

The aim isn’t to create a board of tech experts but to shift mindsets, open new possibilities, evaluate risks, and enhance the board’s ability to challenge management in business development.

In a nutshell: In order to set the company's vision and strategy, the board must understand how technology impacts the business and its future value creation.

If your board is in need of such a training or workshop, have a look at my offerings:

> (Non)-Executive Crash Course - Technology Trends Shaping Our Future

> (Non)-Executive Workshop - Technology Vision Definition

Friday, June 14, 2024

How To Select a Good Project Manager for Your Large and Complex Transformation Project

One of your most important jobs as a project sponsor is to select a good project manager for your project. 

Selecting the right project manager is crucial for the success of your project. 

Here are the five key factors to consider when choosing the right person for the role:

1) Experience

Nothing beats relevant experience when it comes to managing large and complex transformation projects. On smaller and less complex projects you can give people a chance. On your business-critical projects you should not.

You will need to look for project managers who have managed projects that were:

> in the same industry. Bonus points if it was at your own company or a direct competitor.

> of a similar objective and scope. After a full-cycle SAP implementation at three different companies, you understand a thing or two, unless the projects involved completely different modules and products.

> of a similar size and complexity. Rolling out new software in one country is different from doing it in twelve, and managing hundreds of products and thousands of clients is different from managing a handful.

Your project manager should have a track record. Check references and past project outcomes. 

A project gone belly up is not necessarily the fault of the project manager, but you should look for successful project completions and satisfied clients or employers.

2) Leadership and Communication Skills

A good project manager should be able to lead a team, make decisions, and motivate team members. Effective communication is critical for ensuring that all stakeholders are on the same page. You will get a feeling for this during your interviews, but the easiest way to verify it is by checking references and calling your own contacts who might have worked with them.

3) Problem Understanding and Solving Skills

They should be able to analyse and understand problems and come up with effective solutions quickly. Understanding your problem is half the solution. You can assess this by presenting a number of the problems you want to address with your project to the project manager in an interview and asking them to come up with a solution on the fly.

4) Team Dynamics

They should be able to work well with you and your existing team. Ensure the project manager’s work style and values align with your team and company’s culture. Micromanagement sucks for everybody. Involve key team members in the interview process to get their input on potential candidates.

5) Gut Feeling

If your intuition about a candidate's fit is good, but one or more of the four factors above looks weak, then look for a better candidate. Don't rely only on your intuition in this case.

If your intuition about a candidate's fit is bad, but all of the four factors above look good, then still look for a better candidate. Trust your intuition in this case.

If your candidate scores well on these five factors there is a high probability they are the right candidate for the job!

PS: What is absolutely not important are certifications. Possessing the PMP shouts to the world that they have passed a comprehensive exam and confirmed that they are aware of and understand the processes, terms, tools, and techniques as represented in the PMI's Guide to the Project Management Body of Knowledge. That's it! The same goes for PRINCE2, SAFe, IPMA, and others.

Passing these exams does not confirm that they are an accomplished project manager with a long history of leading successful projects. To claim or even imply that earning such a certification is any more than an indicator of general knowledge in the field is questionable.

In a nutshell: Nothing beats relevant experience when it comes to managing large and complex transformation projects. On smaller and less complex projects you can give people a chance. On your business-critical projects you should not.

If you are a senior (non)-executive in the role of a project sponsor or steering committee member in a large and complex transformation project, and you are confronted with topics like the above, have a look at this training:

(Non)-Executive Crash Course - How to navigate large and complex transformation projects.

I will teach you the most relevant things you need to know in half a day.
