Saturday, November 14, 2020

User Enablement is Critical for Project Success

Any system is only as good as how well it is used. 

Whether it's a CRM, an ERP, or any other system, when users don’t know how to use it effectively, the benefits for your organization will be small, or even negative. 

This means user enablement is critical to the success of a project. 

It is not enough to simply have your new system in place two weeks before your go-live date. 

Your users need to know: 

1) Why you’re implementing the new system. When it comes to organizational changes and operational logistics, many employees will be instrumental in the change you’re promoting. It’s important to tend to their needs throughout the change journey.

2) How existing processes will change and which new processes will be introduced with the new system. 

3) How to use the system. Your user base can range from people who have spent their entire careers on the “green screen” (yet still don’t know how to use a mouse to copy and paste content) to millennials, who are so accustomed to touchscreens that they don’t realize they can use the arrow keys on the keyboard to move an object. 

4) Who to contact in case they require support with any problems or questions. If you’re working with an internal service desk, make sure that they’re equipped with standardized scripts. Alternatively, if you’re working with an external service provider, make sure you’ve selected a partner who can provide the highest level of support required for the new system. 

5) Whether they can offer feedback and make suggestions to improve the system. One of the best ways to build support organization-wide is to give everyone a voice and a platform to share their views throughout the transition.

Of course, transitions and change management require not only time, but also a stable and working system that can be used to create training, user guides, videos, and user acceptance tests. 

So if you are still building and testing your system shortly before going live, your user enablement will suffer greatly. 

You will have neither the time nor the trust in the system that you need to achieve user enablement and acceptance. 

And with that your benefits will be limited.

In a nutshell: when users don’t know how to use your new system effectively the benefits of the new system for your organization will be limited.

Saturday, October 24, 2020

Case Study 14: How Texas Wasted $367 Million on an Unusable Child Support Enforcement System

After investing $367.5 million in a child support enforcement system, the only thing the state of Texas has to show for it is some hard-won lessons. 

Initiated by the Office of the Attorney General (OAG) in 2007, “T2” aimed to deliver a secure, web-based system to automate manual functions, streamline daily operations, enable staff to manage case information online, and offer multiple platforms for parents to communicate with the Child Support Division (CSD). Other planned improvements included a comprehensive electronic case file system, standardized forms, an integrated solution for reporting systems, automated generation of child support case documents, and enhanced automation to efficiently establish and enforce child support orders.

By May 2019, the project was fifteen months past its original completion date of December 2017, its budget had ballooned from $223.6 million to $419.6 million, and state workers were still without a workable system to standardize and simplify child support applications. This angered federal backers to the point where they decided to absorb the loss of the 66% of the funding they had contributed to T2. 

Senator Jane Nelson, a Flower Mound-area Republican who co-chairs the House-Senate budget conference committee, summed it up with: “Stop the bleeding.” This was echoed by Rep. Giovanni Capriglione, a GOP budget writer, who pointed out: “This was a $60 million idea — $340 million ago.”

Perhaps not surprisingly, the project was abandoned four months later.

To be clear, this decision has not affected payments from non-custodial parents to their children and former spouses. After all, a clunky “T1” mainframe system of record keeping – complete with glowing green computer screens that date to the mid-1990s – is still in use by state workers. As they continue to use the system, designed by Accenture under an earlier contract, most workers would be lost without their “quick reference card”, which is crammed with acronyms they need to enter before inputting a client's personal information.

Even with their antiquated system, Texas child support workers managed to collect $4.4 billion in the last state fiscal year — the largest amount collected by any state.

Before we continue with this case study...

> For an overview of all case studies I have written please click here.

> To download 10 of my Project Failure Case Studies in a single eBook and be notified about new Project Failure Case Studies just subscribe to my weekly newsletter here or click on the image.

Timeline of Events

2007

In 2007, talks commenced on the need to update the child support enforcement system to establish orders, enforce compliance, and collect and disburse payments.

Soon after, Deloitte was hired to make tech recommendations and create a roadmap to implement the new child support enforcement system.

2009

In 2009, the OAG estimated that the system would cost $223.6 million to develop and would be completed in 2017.

2010

In August 2010, the T2 project was placed under federal independent verification and validation (IV&V) by the Office of Child Support Enforcement. 

That October, Accenture was awarded the contract to develop the system, which was valued at $69.8 million.

2011

In July 2011, a research team at the University of Texas’ Center for Advanced Research in Software Engineering (ARiSE) was contracted to complete semi-annual reviews of quality and progress.

2012 - 2014

Deloitte delivered its final blueprint in 2012 and exited the project.

From March 2012 to November 2014, there were 27 change orders initiated by the CSD that inflated the value of Accenture’s contract to $98.3 million.

2015

Ken Paxton took over as the attorney general in 2015, succeeding Greg Abbott (who became governor).

In an excerpt from the October 2015 IV&V report, it was noted that: (1) the T2 project was being driven by an “unrealistic schedule”; (2) the quality of the code components Accenture delivered was “below the expectations” of the OAG; and (3) uncertainty about the development strategy would likely have a “negative effect” on the work environment (including cost increases, staff turnover, and lower productivity).

On November 30, 2015, the OAG was notified that federal funds for the T2 system development contract were frozen pending the approval of an updated project schedule and corrective action plan. The following month, legislators were given a rundown of how the project went off the rails. Their reactions ranged from stunned and confused to frustrated.

"I am kind of speechless," said Rep. Helen Giddings, D-DeSoto.

"I’m just going down a rabbit trail to Wonderland," explained Rep. Dawnna Dukes, D-Austin.

When a reporter referred to the project as "a challenge," Rep. Borris Miles, D-Houston, had this to say: "I’m not going to call this a challenge. There are some other words I’d like to call it, but we’re being videotaped."

While Accenture took responsibility for some of the failure, its spokesperson seized on University of Texas software expert Herbert Krasner's testimony that Deloitte's $46 million system blueprint was "not worth the paper it was printed on." Deloitte’s media representative responded that when the firm exited the project in 2012, neither Abbott's office nor Accenture voiced any concerns over the blueprint.

2016

The OAG's office and Accenture agreed to a major contract amendment in 2016. It called for increased reporting to a new state executive steering committee, payments that were commensurate with the quality of the work, and a $20 million "hold" on Accenture's final check until it was clear that the federal Administration for Children and Families would sign off on the work.

Along with the new governance structure, Amendment No. 1 reset the T2 delivery date to December 2018 and increased the contract value to $150.1 million. 

2018

Due to changing federal form requirements, Amendment No. 2 (issued in January 2018) reset the T2 delivery date to March 2019 and increased the Accenture contract to $156.9 million. 

2019

In September 2019, Ken Paxton’s office confirmed that the (much-maligned) T2 software project had been abandoned, and that they were seeking a cheaper alternative: 

The costs of moving forward with those challenges, when coupled with the ongoing costs to maintain the system upon completion, can no longer be justified when newer technologies exist that are capable of providing the necessary functionality at a lower cost to build and maintain, thus providing a better value to Texas taxpayers.

What Went Wrong

Exploding Costs

In January 2007, Deloitte’s contract to update the model for the OAG’s child support services had an initial value of $1.8 million. After the OAG exercised its five renewal options, the final contract was valued at $46 million. In the same vein, the system development contract (awarded to Accenture in October 2010) was valued at $69.8 million. After the OAG issued 30 change orders, the value increased to $156.9 million.

According to the Legislative Budget Board, from the idea phase through to January 2019, it cost $367.5 million — $124.9 million of which was state funding. 

Outsourcing Challenges

In order to turn a profit, Accenture had to outsource much of their custom development work to 165 programmers in India. Despite security concerns, the Indian programmers were given access to state data and worked on code remotely. 

Reviewers repeatedly noted they were “concerned with the low level of quality for work products and deliverables submitted by Accenture.”

Performance Issues

On multiple occasions, the IV&V team reported that T2 had resulted in sub-standard processing speeds, and that switching on key security software only made the issue worse. Although improvements were made over time, the performance was never deemed satisfactory by IV&V. 

Defects and Integration Issues

In the summer of 2018, OAG staff ordered additional joint system testing of T2, which revealed over 1000 defects – ranging from minor typos to severe security issues that had to be resolved before proceeding.

Compounding the issue, the process to pull financial data from the original T1 system into T2 was not working correctly.

Infrastructure Updates

Due to delays, the core security software had lost vendor support and needed to be upgraded before T2 could be deployed. This upgrade could not begin until all system defects were addressed.

How OAG Could Have Done Things Differently

Better Contracts

During a recent hearing, Accenture's T2 project executive sponsor, Ben Foster, testified to Capriglione's sub-committee that the company "did not deliver the value you expected of us." However, since the 2016 contract amendment gave the company financial "skin in the game," Foster said, "We've made tremendous progress."

In response, Capriglione claimed that, while researching Accenture's work in other states, he spoke with people who'd tell him "it's really not a software company, it's a contracting company. They make very good contracts. It's very difficult to get out of [them]."

Capriglione, who spent two sessions as head of the appropriations sub-committee probing the state's contracting woes, said of Accenture: "I'm totally torqued at them. Now, if there's any good that can come of this, it is that we are now learning all of the things we should never do when we write contracts."

On a related note, I recommend reading "10 Important Questions to Ask before Signing your Cloud Computing Contract."

Being Transparent and Realistic

As early as 2011, ARiSE researchers noted significant problems that only worsened over the years. The records detail how state officials – under Abbott’s chief of child support, Charles Smith – failed to hold Accenture accountable as the project missed major deadlines and morphed into an overly complicated tangle of hundreds of software bundles.

Records show that each time a deadline was about to be missed, state officials simply extended the timeframe (this occurred on at least seven occasions). Officials in the Child Support Division referred to this as a “re-baseline” – a maneuver that obscured the project’s failings, increased costs, and delayed completion. 

For more insights on this topic, please see "8 Signs of Troubled Projects for Project Sponsors."

Executive Sponsorship

In many companies, these projects tend to fall under the category of “group responsibility.” Extending this thinking to, say, an incoming Attorney General, it is easy to pass the buck ("Oh right, that project that was led by my predecessor"). 

Every software project must have a competent director who owns it and is responsible for both the successes and failures from the idea phase through to completion. Without end-to-end managerial involvement, today's business software projects are doomed to full or partial failure. Anyone who does not understand this would do well to postpone new projects.

See "Successful Projects Need Executive Champions" for more on this topic.

Be a Responsible Buyer of Technology

In any organization, it’s crucial to buy technology and implement services responsibly, not to mention work well with suppliers. More than any other factor, projects fail because these skills are lacking. 

While some people may argue that suppliers should have all the skills that are required to make a company’s project a success, that’s a matter of wishful thinking or good luck. Responsible buyers are capable of spotting when a supplier and/or the product is failing, and can mitigate risks by taking decisive actions.

For more insights on this topic, I invite you to read "Be a Responsible Buyer of Technology."

Closing Thoughts

At the time of writing, state workers are still clinging to their "quick reference cards" to input acronyms and fill in their client's personal information. They wait in limbo for the system that was supposed to help them get rid of paper files, access services remotely, and benefit from automated prompts and the ability to generate drafts of court filings. These employees and American taxpayers are the real losers in this debacle.

Free Project Complexity Assessment

This assessment will guide you through the 3 dimensions (structural, sociopolitical, and emergent) of project complexity by asking you 38 questions.

At the end of the assessment you will get a score between 0 and 38. The higher your score, the better your grip on the complexity of your project. Most questions include detailed feedback with links to more insights on how to handle that aspect of project complexity.

Other Project Failure Case Studies

> For an overview of all case studies I have written please click here.

> To download 10 of my Project Failure Case Studies in a single eBook and be notified about new Project Failure Case Studies just subscribe to my weekly newsletter here or click on the image.

References

> Texas Child Support Enforcement System 2.0, Overview of System Development and Project Monitoring & Oversight, December 2015

> Overview of the Texas Child Support Enforcement System 2.0, February 2019

> An Audit Report on The Development of the Texas Child Support Enforcement System 2.0 at the Office of the Attorney General, July 2011

> Legislative Appropriations Request for Fiscal Years 2018 and 2019

> Legislative Appropriations Request for Fiscal Years 2020 and 2021

> Paxton drops contractors on tech project that's $107 million over estimated cost

> After $367.5 million, Texas gets no new child support computer software

Sunday, October 18, 2020

The True Cost of Excluding Executives from the IT Decision Making Process

Throughout the past 15 years that I’ve been working as an independent project recovery consultant and interim CIO, I have observed executives’ frustration – even exasperation – with information technology and their IT departments generally. Some of the more common refrains are: 

“I don’t understand IT well enough to manage it.” 

“Although they work hard, my IT people don’t seem to understand the very real business problems we’re facing.”

In fact, the complaint I hear most often from CEOs, COOs, CFOs, and other high-ranking officers is that they haven’t reaped the business value of their high-priced technology. Meanwhile, the list of seemingly necessary IT capabilities continues to grow, and IT spending consumes an increasing percentage of their budgets. 

So why is this happening, and what can you do to prevent it?

Though it may come as a surprise, one of the most effective measures you can take is to ensure a senior business executive plays a leadership role in a handful of key IT decisions. I say this because when business executives hand over their responsibility for these decisions to IT executives, disaster often ensues. You need look no further than my project failure case studies to see the sheer number of botched adoptions of large-scale customer relationship management (CRM) and enterprise resource planning (ERP) systems. 

It would be easy to assume that the CRM and ERP disasters resulted from technological glitches. However, the problems generally occurred because senior executives failed to realize that adopting the systems would create business challenges – not just technological ones. 

To be clear, IT executives are the go-to people for numerous managerial decisions, including choosing technology standards, advising on the design of the IT operations center, providing the technical expertise the organization needs, and developing the standard methodology for implementing new systems. But an IT department should not be left to their own devices to determine the impact these choices and processes will have on a company’s business strategy.

In an effort to help executives avoid IT disasters, and, more importantly, generate real value from their IT investments, I have made a list that outlines the measures they should take and the decisions they should oversee. Whereas the first three bear on strategy, the latter items relate to execution. At the risk of a spoiler: IT people should not be making any of these decisions, because, in the end, that’s not their job [or their area of expertise].

1) How much should we spend on IT?

Given the uncertain returns on IT spending, many executives wonder whether they will reap the benefits of their investment. So the thinking goes: If we can just get the dollar amount right, the other IT issues will take care of themselves. For this, they look to industry benchmarks to determine “appropriate” spending levels.

In my experience, they should be approaching the question very differently. First, executives should determine the strategic role that IT will play in their organization, and then establish an organization-wide funding level that will enable technology to fulfill their objectives. After all, IT goals vary considerably across organizations – from streamlining administrative processes to feeding a global supply chain, providing flawless customer service, or driving cutting-edge research and development. 

Clearly, fulfilling these objectives requires different levels of spending, planning, and administrative oversight. 

2) Which projects should be funded?

I’ve seen relatively small companies with 100 IT projects underway. Despite the fact that they are not equally important, executives are often reluctant to make choices between the projects that will likely have a significant impact on the company’s success, and those that will provide some benefits but aren’t essential.

Leaving such decisions in the hands of the IT department means that they will be the ones prioritizing business issues, or, just as troubling, they will attempt to deliver on every project a business manager regards as important. When presented with a list of approved and funded projects, most IT units will do their best to carry each of them out. But this typically leads to a backlog of delayed initiatives, not to mention overwhelmed and demoralized staff.

3) Which IT capabilities need to extend company-wide?

Business leaders are increasingly recognizing the significant cost savings and strategic benefits that come with centralizing IT capabilities and standardizing infrastructure throughout an organization. Leveraging technological expertise across a company enables cost-effective contracts with software suppliers, just as it facilitates global business processes. On the other hand, standards can restrict the flexibility of individual business units, limit the company’s responsiveness to diverse customer segments, and give rise to strong resistance from managers.

When IT executives are left to make decisions about what will and will not be centralized and standardized, they typically take one of two approaches. Depending on the company’s culture, they either insist on standardizing everything to reduce costs, or they grant exceptions to any business unit manager who raises a stink (recognizing the importance of business unit autonomy). Whereas the former approach restricts flexibility, the latter is expensive, limits business synergies, and drains human resources. 

It’s worth keeping in mind that, in some instances, using different standards can be counter-productive – resulting in a corporate IT infrastructure whose total value is less than the sum of its parts. Knowing this, executives must play a lead role in weighing these crucial trade-offs.

4) What IT services do we really need?

I’ll get right to the point: An IT system that doesn’t work is useless. Reliability, responsiveness, and data accessibility come at a cost, but that doesn’t mean every system must be wrapped in gold. Ultimately, executives must decide how much they are willing to spend on various features and services.

For some companies, top-of-the-line service is non-negotiable. As a case in point, investment banks cannot afford to engage in a debate over how much data they would be willing to lose if a trading system crashes. They require 100% recovery. Similarly, Cloud providers cannot compromise on response time or allow for any downtime, because their contracts penalize them when their system becomes unavailable. This not only incentivizes the provider to ensure that their services will continue to run despite floods, tornadoes, power outages, and telecommunications breakdowns, it gives the client peace of mind and justifies higher costs.

Granted, not every company is a Google or a Goldman Sachs. Most can tolerate limited downtime or occasionally slow response times, and they must weigh the problems this creates against the costs of preventing them. Once again, decisions concerning the appropriate levels of IT service need to be made by senior business managers. Left to their own devices, IT units are likely to opt for the highest levels – i.e., Ferrari service when that of a Ford will do – because the IT unit will be judged on such things as how often the system goes down.

5) What security and privacy risks are we willing to accept?

Like reliability and responsiveness, companies must weigh the level of security they want against the amount that they are willing to expend. In doing so, there’s another trade-off to consider: Increasing security involves not only higher costs but also inconveniences users. As global privacy protections are increasingly mandated by governments, security takes on a new level of importance because well-designed privacy protections can be compromised by inadequate system security. Executives must assess these trade-offs. 

Bear in mind that many IT units will adopt the philosophy that absolute security is their responsibility, and thus deny access anytime safety cannot be guaranteed. They would do well to float that idea by a bank’s marketing executives, who are counting on simplified online transactions to attract new customers.

6) Can we assign blame when an IT project fails?

Finger-pointing often ensues when teams fail to benefit from new systems. There must be something wrong with the IT function in our company, they presume. More often than not, there is something wrong with the way that non-IT executives are managing IT-enabled change in the organization.

I invite you to reflect on the well-publicized examples of ERP and CRM initiatives that failed to generate quantifiable value. Invariably, the failures resulted from assumptions that IT units or consultants could implement the systems while business managers went about their daily tasks. The bottom line is that new systems have no inherent value; they derive their value from new or redesigned business processes. 

Quite simply: To avoid disasters, executives must take responsibility for realizing the business benefits of an IT initiative. These “sponsors” need the authority to assign resources to projects and the time to oversee the creation and implementation of their projects. 

This includes scheduling regular meetings with IT personnel, organizing training sessions, and working with the IT department to establish clear metrics for determining the initiative’s success. Such sponsors can ensure that new IT systems deliver real business value.

In a nutshell: one of the most effective measures you can put in place is giving senior business executives a leadership role in a handful of key IT decisions. 

Sunday, October 11, 2020

What Executives Need to Know About Project Management

The Role of Executives in Projects
I work exclusively with executives, and if there is one thing I have learned over the years, it is that effective executives have at least a basic understanding of project management and their roles in it. 

When you look in a dictionary for the word "executive" you will find an entry similar to the one below. 

noun - a person with senior managerial responsibility in a business.

“a C-level executive”

adjective - relating to or having the power to put plans or actions into effect.

"an executive chairman"

An executive directs, plans, and coordinates operational activities for their organization and is normally responsible for devising policies and strategies to meet the organization's goals.

Executives hold executive powers delegated to them by the authority of a board of directors and/or the shareholders. 

Generally, higher levels of responsibility exist, such as a board of directors and those who own the company (shareholders), but they focus on managing the senior or executive management instead of on the day-to-day activities of the business. 

The executive management typically consists of the heads of a firm's product and/or geographic units and of functional executives such as the Chief Financial Officer (CFO), the Chief Operating Officer (COO), Chief Information Officer (CIO), and of course the Chief Executive Officer (CEO). 

Almost all organizations use projects to implement their strategy and drive change. So projects are an important tool for executives to do their job. 

And in an organization's most important projects and programs, it is executives who have the following roles and/or responsibilities.

Project support is priceless. Engaged executives help organizations to bridge the communications gap between influencers and implementers, thereby increasing collaboration and support, boosting project success rates, and reducing collective risk.

In a nutshell: In order to be an effective executive you should have a basic understanding about how project and project portfolio management works. You should also understand how to be a great Project Sponsor, Project Champion and/or Steering Committee Member.

Sunday, September 27, 2020

Project Inputs, Activities, Outputs, Outcomes, Impact and Results

Project Inputs, Activities, Outputs, Outcomes and Impact
Many people and organizations seem to have serious trouble distinguishing between the inputs, activities, outputs, outcomes, impact, and results of a project. 

This leads to lots of confusion, poor communication, disappointed project teams, and disappointed stakeholders.

Below you will find my take on these terms and their relevance for your project.

Inputs

Inputs are very often assumed to be synonymous with activities. However, these terms are not interchangeable. 

Inputs, in simple terms, are the things we use to implement the project. 

For example, in any project, inputs would include things like the time of internal and/or external employees, funding, hardware and/or software, office space, and so on. 

Inputs ensure that it is possible to deliver the intended results of a project.

Activities

Activities on the other hand are actions associated with delivering project goals. In other words, they are what your people do in order to achieve the aims of the project. 

In a software development project, for example, activities would include things such as designing, building, testing, deploying, etc. And in an upskilling initiative the training of employees would be an activity.

Outputs

These are the first level of results associated with a project. Often confused with “activities”, outputs are the direct, immediate results of the project. 

In other words, they are the delivered scope. The tangible and intangible products that result from project activities. Outputs may include a new product or service, a new ERP system replacing the old one, or employees being trained as part of a digital upskilling initiative.

Success on this first level of results is what I call “Project Delivery Success”. It is about defining the criteria by which the process of delivering the project is successful.

Essentially this addresses the classic triangle "scope, time, budget". 

It is limited to the duration of the project, and success can be measured as soon as the project is officially completed (with intermediary measures being taken, of course, as part of project control processes). It is always a combination of measurements on inputs and outputs.

Outcomes

This is the second level of results associated with a project and refers to the medium-term consequences of the project. Outcomes usually relate to the project goal(s).

For example, the new ERP system is used by all users in scope, uptime is 99.99%, customer satisfaction has increased by 25%, operational costs have decreased by 15%, and so on.

These criteria need to be measured once the product/service is implemented and over a defined period of time. This means they cannot be measured immediately at the end of the project itself.

Success on this second level of results is what I often refer to as “Product or Service Success”. It is about defining the criteria by which the product or service delivered is deemed successful.

Impact

This is the third level of project results: the long-term consequences of a project. More often than not, it is very difficult to ascertain the exclusive impact of a project, since several other projects, even dissimilar ones, can lead to the same impact. 

For example, financial value contribution (increased turnover, profit, etc.) or competitive advantage (market share won, technology advantage).

Success on this third level of results is what I call “Business Success”. Business success is about defining the criteria by which the product or service delivered brings value to the overall organization, and how it contributes financially and/or strategically to the business.

Results

Project results are the combination of outputs (level 1), outcomes (level 2), and impact (level 3). These levels combined will determine your overall project success. You can be successful on one level but not others.

Project success and project failure are NOT absolutes. It may not be possible to be a little bit pregnant, but you can be a little bit successful.

Every project has multiple success criteria related to business results, product/service results, and project delivery results (cost, schedule, scope, and quality).

Some criteria are absolute, meaning they must be completed on or before the original planned date, and some are relative, meaning they must be completed by a date acceptable to the client.

Project success is determined by how many of your success criteria are satisfied, and how well.
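The idea that success is a matter of degree can be made concrete with a small scoring sketch. All criteria, weights, and satisfaction values below are illustrative assumptions, not taken from any real project:

```python
# A weighted success score across the three result levels described above.
# All criteria, weights, and satisfaction values are illustrative assumptions.

criteria = [
    # (level, description, weight, degree satisfied 0.0-1.0)
    ("output",  "Delivered within budget",       0.2, 1.0),
    ("output",  "Delivered on schedule",         0.2, 0.5),
    ("outcome", "Uptime of at least 99.99%",     0.3, 1.0),
    ("impact",  "Market share gain of 2 points", 0.3, 0.0),
]

def success_score(criteria):
    """Weighted degree of success: 1.0 is fully successful, 0.0 total failure."""
    total_weight = sum(weight for (_, _, weight, _) in criteria)
    achieved = sum(weight * degree for (_, _, weight, degree) in criteria)
    return achieved / total_weight

print(f"Overall project success: {success_score(criteria):.0%}")  # 60% here
```

A project scoring 60% is neither a success nor a failure in absolute terms, which is exactly the point: you can be successful on one level but not on others.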

In a nutshell: You need to be able to distinguish between the inputs, activities, outputs, outcomes, and the impact of your project.

When you need some guidance on how to define and measure project success, have a look at my Project Success Model here or by clicking on the image.

The Project Success Model

Read more…

Sunday, August 30, 2020

Solving Your Between Problems

Change Management and Your CAST Of Characters
Most executives agree with me that the biggest problems are not within roles but between roles. 

Not within teams but between teams. 

Not within departments but between departments. 

And not within organizations but between organizations.  

Such “between problems” are not assigned to anyone. Why? Because they are between. 

And these “between problems” will remain unresolved until someone chooses to own the problem. 

The biggest problems – and thus the greatest opportunities to add value – are not in your job role, your department, your function, or your organization. 

The opportunities are in the gaps between your role, function, or organization.

No matter how much effort we put into trying to define roles and structures, there will still be gaps.

And it turns out that the highest-value work is done in closing those gaps. 

So to start working on such gaps think about which team or collaboration or partnership you are a part of that could benefit from asking the following question: 

“What is our task that’s bigger than you, bigger than me, requires both of us, yet neither of us can claim success until we get that done?”

Figure that out together and start working on that task, that gap.

That is how you add value.

In a nutshell: The biggest problems are not within roles but between roles. Solve these and you will add value fast.

Read more…

Sunday, July 12, 2020

The 17 Global Goals For Sustainable Development

The 17 Global Goals For Sustainable Development
Besides shedding some light on an initiative that is very important to me, this article will present a great example of how to define project success with help from objectives and key results (OKRs).

In September 2015, the leaders of all 193 member states of the United Nations (UN) adopted Agenda 2030, a universal agenda that contains the Global Goals for Sustainable Development. 

Sustainable development has been defined as development that meets the needs of the present without compromising the ability of future generations to meet their own needs. It calls for concerted efforts towards building an inclusive, sustainable and resilient future for people and the planet.

The 17 Global Goals (i.e. Objectives) in turn hold 169 targets (i.e. Key Results) and 230 indicators (i.e. Measurements).

From 2015 to 2030 all countries will mobilize efforts to end all forms of poverty, fight inequalities and tackle climate change, while ensuring that no one is left behind.
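This Objective, Key Result, and Indicator hierarchy can be sketched as a simple data model. The sample entries paraphrase Goal 1 and its first target and indicator; treat the exact wording as illustrative:

```python
from dataclasses import dataclass, field

@dataclass
class KeyResult:
    target: str
    indicators: list = field(default_factory=list)

@dataclass
class Objective:
    name: str
    key_results: list = field(default_factory=list)

# Sample entries paraphrase Goal 1 / target 1.1; the wording is illustrative.
goal_1 = Objective(
    name="No Poverty",
    key_results=[
        KeyResult(
            target="By 2030, eradicate extreme poverty for all people everywhere",
            indicators=["Proportion of population below the international poverty line"],
        ),
    ],
)

print(f"{goal_1.name}: {len(goal_1.key_results)} key result(s), "
      f"{len(goal_1.key_results[0].indicators)} indicator(s)")
```

The same three-level shape (objective, measurable key results, concrete indicators) reappears later in this blog's treatment of the Balanced Scorecard.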

The Global Goals are the most ambitious agreement for sustainable development that world leaders have ever made. They integrate all three aspects of sustainable development: social, economic and environmental.

The Global Goals and Agenda 2030 build on the success of the Millennium Development Goals and aim to go further to end all forms of poverty. The new goals are unique in that they call for action by all countries, poor, rich and middle-income, to promote prosperity while protecting the planet.

With the help of the Global Goals, we will be the first generation who can eradicate poverty and the last who can tackle climate change.

For the goals to be met, everyone needs to do their part: governments, the private sector, civil society and the general public.

While the Global Goals are not legally binding, governments are expected to take ownership and establish national frameworks for the achievement of the 17 Goals. 

Countries have the primary responsibility for follow-up and review of the progress made in implementing the Goals, which will require quality, accessible and timely data collection. Regional follow-up and review will be based on national-level analyses and contribute to follow-up and review at the global level.

The Objectives

The 17 defined objectives are:

1) No Poverty - End poverty in all its forms everywhere

2) Zero Hunger - End hunger, achieve food security and improved nutrition and promote sustainable agriculture

3) Good Health and Well-being - Ensure healthy lives and promote well-being for all at all ages

4) Quality Education - Ensure inclusive and equitable quality education and promote lifelong learning opportunities for all

5) Gender Equality - Achieve gender equality and empower all women and girls

6) Clean Water and Sanitation - Ensure availability and sustainable management of water and sanitation for all

7) Affordable and Clean Energy - Ensure access to affordable, reliable, sustainable and modern energy for all

8) Decent Work and Economic Growth - Promote sustained, inclusive and sustainable economic growth, full and productive employment and decent work for all

9) Industry, Innovation and Infrastructure - Build resilient infrastructure, promote inclusive and sustainable industrialization and foster innovation

10) Reduced Inequality - Reduce inequality within and among countries

11) Sustainable Cities and Communities - Make cities and human settlements inclusive, safe, resilient and sustainable

12) Responsible Consumption and Production - Ensure sustainable consumption and production patterns

13) Climate Action - Take urgent action to combat climate change and its impacts

14) Life Below Water - Conserve and sustainably use the oceans, seas and marine resources for sustainable development

15) Life on Land - Protect, restore and promote sustainable use of terrestrial ecosystems, sustainably manage forests, combat desertification, and halt and reverse land degradation and halt biodiversity loss

16) Peace, Justice and Strong Institutions - Promote peaceful and inclusive societies for sustainable development, provide access to justice for all and build effective, accountable and inclusive institutions at all levels

17) Partnerships for the Goals - Strengthen the means of implementation and revitalize the global partnership for sustainable development

How will Key Results be measured?

At the global level, the 17 Goals and 169 targets will be monitored and reviewed using a set of global indicators, agreed on by the UN Statistical Commission. 

The Economic and Social Council and the General Assembly will then adopt these indicators. Governments will also develop their own national indicators to assist in monitoring progress made on the goals and targets.

The follow-up and review process will be informed by an annual SDG Progress Report to be prepared by the Secretary-General. You will find the report for 2019 here.

The annual meetings of the High-level Political Forum on sustainable development will play a central role in reviewing progress towards the SDGs at the global level.

Taking action

Choosing one Goal to support is a good way to start, and to take specific action. However, all the Goals are interlinked, so by supporting one Goal your actions will have positive impacts on other Goals. For example, promoting gender equality (Goal 5) in your organization will help support a growing economy (Goal 8) and quality education for all (Goal 4).

There are so many things everyone can do to contribute. Here are a few good things to start you off:

> Spread the word about the Global Goals, so that more people can take action and contribute to meeting the Goals.

> Join an organization that actively contributes to meeting the Goals.

> Reduce your general waste and your environmental footprint. Avoid plastics, take the train instead of the airplane, the bike instead of the car.

> Make conscious choices in your consumption. Buy local and try to make sure what you buy is produced in fair and sustainable ways.

> Show compassion and stand up against racism, exclusion, discrimination and injustice.

Use your imagination. The future depends on our ability to imagine it.

Read more…

Monday, July 06, 2020

However Beautiful the Strategy, You Should Occasionally Look at the Results

However Beautiful the Strategy, You Should Occasionally Look at the Results
The title of this article is frequently credited to Winston Churchill (1874-1965), but he never said it. The saying first appears around 1981, many years after Churchill’s lifetime.

The saying is used to stress that one needs to look at results and shouldn’t fall in love with one’s designed strategy if it doesn’t work.

The 2007 Financial Times obituary for UK Conservative politician Ian Gilmour (1926-2007) stated that he had used the line in a cabinet meeting in 1981.

In the end it doesn't matter who came up with the line: whoever it was, they were absolutely right! 

But how do we actually measure the results of our strategy? 

One very good answer to this question is the Balanced Scorecard. This article will show you what this is and how you can use it to drive strategy execution.

What is a Balanced Scorecard?

The Balanced Scorecard (BSC) is a business framework used for tracking and managing an organization’s strategy.

The BSC framework is based on the balance between leading and lagging indicators, which can respectively be thought of as the drivers and outcomes of your organization’s goals. When used in the BSC framework, these key results tell you whether or not you’re accomplishing your goals and whether you’re on the right track to accomplish future goals.

With a Balanced Scorecard, you have the capability to:

> Describe your strategy
> Measure your strategy
> Track the initiatives you're taking to improve upon your results

It was originally published by Dr Robert Kaplan and Dr David Norton as a paper in 1992, and then formally as a book in 1996. Both the paper and the book led to its widespread success. It is interesting to note that although Kaplan and Norton published the first paper, the underlying idea appeared earlier in the work of Art Schneiderman, who is believed to be the BSC's original creator.

BSC is more than just financial measures. The major difference that Kaplan and Norton introduced with this methodology is the ‘balance’ across all organisational functions. The problem back then, and still today, is that most companies focus on financial measures only, such as revenue growth and profitability. 

By looking at an organisation across four ‘Perspectives’ a causal relationship between investment and financial outcome can be defined, measured and managed.

The BSC is not just a scorecard, it is a methodology. It starts by identifying a small number of financial and non-financial objectives related to strategic priorities. It then looks at measures, setting targets for the measures and finally strategic projects (often called initiatives). It is in this latter stage where the approach differs from other strategic methodologies. 

It forces your organisation to think about how objectives can be measured and only then identifies projects to drive the objectives. This avoids creating costly projects that have no impact on the strategy.

Objectives
Objectives are high-level organizational goals. When you create an objective, you should focus on what your organization is trying to accomplish strategically. A very general example would be: “Become an internationally-recognized brand.” The typical BSC has 10-15 strategic objectives.

Key Results
Key results help you understand if you’re accomplishing your objectives strategically. They force you to question things like, “How do I know that I’m becoming an internationally-recognized brand?”. Over time your key results might change, but your objectives will remain the same. You might have 1-2 key results per objective, so you are aiming to come up with 15-25 measures at the enterprise level of your strategy.

Initiatives
Initiatives are key action programs developed to achieve your objectives. You’ll see initiatives referred to as “projects,” “actions,” or “activities outside of the Balanced Scorecard.” Most organizations will have 0-2 initiatives underway for every objective (with a total of 5-15 strategic initiatives).
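The relationship between objectives, key results, and initiatives can be sketched as data. All names, targets, and the 90%-of-target "on track" rule below are illustrative assumptions, not something the BSC methodology itself prescribes:

```python
# All objective names, measures, targets, and the 90% tolerance rule are
# illustrative assumptions; the BSC methodology does not prescribe them.

scorecard = {
    "Customer": [
        {
            "objective": "Become an internationally-recognized brand",
            "key_results": [
                {"measure": "Brand awareness outside home market",
                 "target": 0.40, "actual": 0.28},
            ],
            "initiatives": ["Launch localized campaigns in 3 new regions"],
        },
    ],
}

def on_track(key_result, tolerance=0.9):
    """Assumed rule: a key result is on track if actual reaches 90% of target."""
    return key_result["actual"] >= key_result["target"] * tolerance

for perspective, objectives in scorecard.items():
    for obj in objectives:
        status = "on track" if all(on_track(kr) for kr in obj["key_results"]) else "at risk"
        print(f"{perspective}: {obj['objective']} is {status}")
```

Structuring the scorecard this way makes the causal chain explicit: an objective that is "at risk" points directly at the key results that are lagging and the initiatives meant to move them.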

Balance
The ‘balance’ is brought about by a focus on financial and non-financial objectives that are attributed to four areas of an organisation. These are the Perspectives: Financial, Customer, Internal Processes and Organisational Capacity.

The four perspectives of a Balanced Scorecard

Questions often arise about the four perspectives described in the methodology. Why should we only look at Financial, Customer, Internal Processes and Organisational Capacity? Why not include Health and Safety? 

The answer is, of course, there is nothing stopping us. The four perspectives are simply a framework. However, over decades of use it has become clear that they work.

More importantly, there is a causal relationship between the perspectives. Working from the bottom to the top: Changes in Organisational Capacity will drive changes in Internal Processes that will impact Customers and improve Financial results. The causal relationship may not be guaranteed if a new perspective is added. The result might be a useful scorecard, but it would not, by definition, be a balanced scorecard.

In brief, the four scorecard perspectives are:

Financial
The high-level financial objectives and financial measures of the organisation that help answer the question: How do we look to our shareholders? Financial objectives are usually the easiest to define and measure. However, creating a financial objective, for example, Improve Profit, rarely provides a clue as to how to achieve it. By linking objectives from the lower levels in the model, we begin to see exactly where to define projects and make investments.

Customer
Objectives and measures that are directly related to the organisation’s customers, focusing on customer satisfaction, to answer the question: How do our customers see us? It is always important to take a step outside and view your company or organisation from your customers’ viewpoint. You need to understand what they want from you, not necessarily what you can do for them.

Internal Processes
Objectives and measures that determine how well the business is running and whether the products or services conform to what is required by the customers, in other words, what should we be best at? Some of the biggest cost items can be reduced by streamlining internal processes. This is also the best area to focus on new and creative ideas.

Organisational Capacity
Objectives and measures concerning how well our people perform, their skills, training, company culture, leadership and knowledge base. This area also includes infrastructure and technology. Organisational Capacity tends to be the area where most investment takes place. It answers the question: How can we improve and create value?

The real value of the perspective approach is that it provides a framework to describe a business strategy. It focuses on objectives and key results that both inform us about progress and allow us to influence activities to achieve the strategy.

Over time, the concept of a strategy map was created. A BSC Strategy Map is a one-page visual depiction of an organization’s scorecard. It has the ability to show the connections between all four perspectives in a one-page picture.

The four perspectives are in a specific order and contain strategic objectives that contribute to a Vision and Mission. The objectives are linked in a causal way from the bottom to the top. The Strategy Map provides a very powerful tool allowing the user to talk about the causal impact of investment at the bottom to improved financial results at the top.

The benefits of a Balanced Scorecard 

A Balanced Scorecard is most often used in three ways:

> To bring an organization’s strategy to life. Those in the company can then use this strategy to make decisions company-wide.

> To communicate the strategy across the organization. This is where the strategy map is critical. Organizations print it and include it in interoffice communications, put it on their intranet, communicate it with business partners, publish it on their website, and more.

> To track strategic performance. That’s typically done through monthly, quarterly, and annual reports.

Closing thoughts

There must be a direct relationship between what an organisation is trying to achieve (the strategic objectives) and what is being measured to determine progress towards the objective. 

Clearly, there will be a lot of operational measures and some of these may contribute data to the key results, but operational measures (KPIs) should be considered as ‘housekeeping’ and ‘good practice’ and should not be confused with key results.

The approach gives us the framework to take a ‘balanced’ view across an organisation and define strategic objectives in the four perspective areas together with the associated KPIs.  We must be careful not to define too many strategic objectives.

Read more…

Monday, June 15, 2020

Your Risk Matrix Is a Lie

Your Risk Matrix Is A Lie
Risk management is at the core of good project management. 

Or as Tim Lister says, “Risk management is project management for adults.”  

The standard approach is to use a risk matrix to classify project risks based on their probability and impact, then give each one a ‘risk score’ by multiplying the two numbers. Then you rank the risks by score and address the top ones first. 

Risk matrices have been widely praised and adopted as simple but effective approaches to risk management. 

And as many risk matrix practitioners and advocates have pointed out, constructing, using, and socializing risk matrices within an organization requires no special expertise in quantitative risk assessment methods or data analysis.

So in terms of “understanding and managing risk”, it seems to work.

Unfortunately it doesn’t.

It is unfit for purpose. It actually may even be doing more harm than good.

Sh!t in, sh!t out

Things go wrong from the very start. Namely with the probability estimates you put into your risk matrix.

Human beings are not very good with non-linear risks. Our instincts evolved to help us deal with immediate physical dangers in our environment. So we can tell whether an oncoming car is likely to hit us, for example. 

But the more complex the risk, and the more factors are involved, the less helpful our gut instinct is. And project management risks are some of the most complex risks in the world.

It’s extremely difficult to say how likely it is that an information breach or ransomware incident will actually occur. So most people rely on gut instinct, on the grounds that it’s better than nothing.

But if you ask someone to gauge the likelihood of a project risk — even someone with very deep knowledge — they will be hard pressed to give you an accurate answer. For instance, what’s the likelihood of a key supplier or system integrator going bust? Is it low, medium or high? Why do you say that? How do you know?

It’s a similar story with impact. In theory, it’s easier to get a reasonably good idea of financial impact by thinking about management time, developer hours, lost sales and reputation damage. But people rarely bother, because the risk matrix is only asking for a simple assessment anyway.

Enter the matrix

So the information you put into your risk matrix is hopelessly inaccurate. But then the matrix itself makes things even worse.

Because these matrices have such a low resolution, they make very different risks look alike. For example, in a 3x3 matrix (low, medium, high on both axes), risks with 67% probability and 99% probability are both “high”. 

Clearly, you’d want to address the 99% risk first. But when you come to rank your risks, you have no way of knowing which one is worse based on the matrix.

What’s more, the matrix gives equal weight to probability and impact, so an incident with 1% probability and $500,000 impact has the same priority as one with 0.2% probability and $2,500,000 impact.
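The low resolution is easy to demonstrate in code. The sketch below uses assumed band thresholds (real matrices vary): two risks land in the same top cell of a 3x3 matrix even though their expected losses differ by almost 50%:

```python
# Band thresholds below are assumptions for illustration; real matrices vary.

def band(value, low, high):
    """Map a continuous value to a 1/2/3 (low/medium/high) band."""
    if value < low:
        return 1
    if value < high:
        return 2
    return 3

def matrix_score(probability, impact):
    # Probability bands split at 33% / 66%; impact bands at $100k / $1M.
    return band(probability, 0.33, 0.66) * band(impact, 100_000, 1_000_000)

risks = {
    "A": (0.67, 1_500_000),  # 67% probability, $1.5M impact
    "B": (0.99, 1_500_000),  # 99% probability, same impact
}

for name, (probability, impact) in risks.items():
    # Both risks score 9 on the matrix, yet risk B's expected loss
    # ($1.485M) is almost 50% higher than risk A's ($1.005M).
    print(name, matrix_score(probability, impact), probability * impact)
```

Ranking by the matrix score treats A and B as interchangeable; ranking by expected loss correctly puts B first.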

In fact, in some fairly common situations (mathematically speaking, when probability and impact are negatively correlated), you’d actually be better off choosing the matrix square at random. 

Yes, you read that right — pin your matrix to the wall, throw a dart for each risk and you’ve got a better chance of picking up the most important ones. 

The risk matrix can be, quite literally, worse than useless.

Dangerous illusion of control

The problem with the risk matrix is that it feels scientific. It promises a quick, simple solution to a wicked problem without taking up loads of time, or asking you to do too many hard computations.

Before, you had no idea about risks. But now, you’ve put them in neat little boxes and given them solid-sounding scores. You “understand and manage your risks”, or so it seems.

But all you’ve really done is create a story that gives you a dangerous illusion of control.

Not only is there no proof that risk matrices work, there’s actually proof of the opposite. 

Using the matrix actively hampers firms’ efforts to deal with risk, absorbing time, money and effort for no benefit at all.

In a nutshell: Don't rely on your risk matrix to understand and manage your risk.

Read more…

Sunday, June 07, 2020

Most Good Strategies Are Not Planned

Most Good Strategies Are Not Planned
Many people are discussing strategy and strategizing as if they were the sole outcome of a rational, predictable, analytical process.

But reality is often the opposite: emotional, unpredictable, and chaotic.  

How organizations create and implement strategy is an area of intense debate within the strategy field.

Famous researcher on management and strategy Henry Mintzberg has a very clear position in this debate. He distinguishes between intended, deliberate, realized, and emergent strategies.

These four different kinds of strategy are summarized in the figure below. 
Emergent Strategy
Intended strategy is strategy as conceived by the top management team. Even here, rationality is limited and the intended strategy is the result of a process of negotiation, bargaining, and compromise, involving many individuals and groups within the organization. 

Realized strategy—the actual strategy that is implemented—is only partly related to that which was intended. Mintzberg suggests only 10%–30% of intended strategy is realized. This part is named deliberate strategy.

The primary driver of realized strategy is what Mintzberg terms emergent strategy—the decisions that emerge from the complex processes in which individual managers interpret the intended strategy and adapt to changing external circumstances. 

Emergent strategy is a set of actions, or behavior, consistent over time, “a realized pattern [that] was not expressly intended” in the original planning of strategy. The term “emergent strategy” implies that an organization is learning what works in practice.

Thus, the realized strategy is a consequence of both deliberate and emergent factors. 

The battle between those who view strategy making and implementation as primarily a rational, analytical process of deliberate planning (the design school) and those who envisage strategy as emerging from a complex process of organizational decision making (the emergence or learning school) is still very much ongoing.

But instead of joining this battle on one of the sides, the question you should ask yourself is:

 “How can the two views complement one another to give us a better understanding of strategy making and implementation?” 

Because in reality, both design and emergence occur at all levels of the organization. 

The strategic planning systems of large companies involve top management passing directives and guidelines down the organization and the businesses passing their draft plans up to corporate. 

Similarly, emergence occurs throughout the organization—opportunism by CEOs is probably the single most important reason why realized strategies deviate from intended strategies. 

What I think we can say for sure is that the role of emergence relative to design increases as the world and business environments become increasingly volatile and unpredictable.

The world events of the last few months make this pretty obvious.

In a nutshell: Many strategies emerge instead of being planned.

Read more…

Saturday, May 30, 2020

Case Study 13: Vodafone's £59 Million Customer Relationship Disaster

Case Study 13: Vodafone's £59 Million Customer Relationship Disaster
In October 2016 the British multinational telecommunications company Vodafone achieved an unwelcome milestone - the single biggest fine for “serious and sustained” breaches of consumer protection rules in the UK. 

It was the result of a troubled CRM and billing consolidation project.

UK telecoms regulator Ofcom slapped a £4.6 million fine on Vodafone, payable within 20 working days. The fine was made up of two chunks: £3.7 million for taking pay-as-you-go customers’ money and not delivering a service in return, and £925,000 for failures relating to the way the carrier handled complaints.

In a checklist of shame the regulator found that:

> 10,452 pay-as-you-go customers lost out when Vodafone failed to credit their accounts after they paid to ‘top-up’ their mobile phone credit. Those customers collectively lost £150,000 over a 17-month period.

> Vodafone failed to act quickly enough to identify or address these problems, only getting its act together after Ofcom intervened.

> Vodafone breached Ofcom’s billing rules, because the top-ups that consumers had bought in good faith were not reflected in their credit balances.

> Vodafone’s customer service agents were not given sufficiently clear guidance on what constituted a complaint, while its processes were insufficient to ensure that all complaints were appropriately escalated or dealt with in a fair, timely manner.

> Vodafone’s procedures failed to ensure that customers were told, in writing, of their right to take an unresolved complaint to a third-party resolution scheme after eight weeks.

For its part, Vodafone has admitted to the breaches. It has also reimbursed all customers who faced financial loss, except for 30 it could not identify, and made a donation of £100,000 to charity. 

The events led to a £54m crash in sales from April to June 2015, and Vodafone said that “continued operational challenges” with the mobile customers’ billing system introduced in 2015 had caused a 3.2% drop in sales to £1.55bn due to a customer exodus.

Adding the £4.6 million penalty on top of that, we are talking about a £59 million loss without taking the costs of the project itself into account.

Before we continue with this case study...

> For an overview of all case studies I have written please click here.

> To download 10 of my Project Failure Case Studies in a single eBook and be notified about new Project Failure Case Studies just subscribe to my weekly newsletter here or click on the image.

Timeline of Events

2012

Vodafone first selected the Siebel CRM system back in October 2012, an implementation which was intended not just to service mobile customers, but also customers for fixed-line telecoms, data networking, TV subscriptions and other services.

Siebel CRM is a product originally created by Siebel CRM Systems. The company was founded by Thomas Siebel and Patricia House in 1993. At first known mainly for its sales force automation products, the company expanded into the broader CRM market. 

By the late 1990s, Siebel Systems was the dominant CRM vendor, peaking at 45% market share in 2002. On September 12, 2005, Oracle Corporation announced it had agreed to buy Siebel Systems for $5.8 billion. "Siebel" is now a brand name owned by Oracle Corporation.

Vodafone planned to integrate Siebel CRM with Oracle BRM (Billing), Prepaid, Provisioning, ERP, DWH, etc. in order to cover the mission-critical Sales, Service and Marketing operations.

It was a hugely ambitious migration and consolidation of billing and CRM systems, involving moving more than 28.5 million customer accounts from seven billing platforms to the new system. It was the largest IT project that Vodafone had undertaken. 

The main business challenges addressed in the context of this project were:

> Create a single, centralised, 360 degree Customer View that can be accessed by the various front-end systems and channels.

> Achieve more efficient & effective Customer Service, minimising handling time, call transfers and logging incident tickets and service requests.

> Empower the Call Center Agent to become a Universal Agent, able to handle any Sales, Service or Marketing related issue.

> Use Siebel as the main front-end system at the Contact Center and drastically reduce the use of all other systems at the front-end.

> “Keep customers happy” while reducing time and cost to serve.

2013

The migration and consolidation program began in 2013. 

2015

In April 2015 the migrations to the new system were completed. But in addition to suffering from downtime, the system also led to a flood of customer complaints about bills, including some customers who continued to be billed even after contracts had been cancelled, others who had their direct debits mysteriously cancelled, or were shut out of online accounts.

Vodafone was the most complained-about telecoms provider in the three months ending in December 2015, due to network failures that meant many users could not make and receive calls or were billed incorrectly.

2016

Telecoms regulator Ofcom launched its own formal investigation into Vodafone in January following a spike in complaints during 2015 over the new system. 

Based on the results of this investigation the regulator slapped the £4.6 million fine on Vodafone in October.

What Went Wrong

In a statement, Vodafone said:

Despite multiple controls in place to reduce the risk of errors, at various points a small proportion of individual customer accounts were incorrectly migrated, leading to mistakes in the customer billing data and price plan records stored on the new system. Those errors led to a range of different problems for the customers affected which – in turn – led to a sharp increase in the volume of customer complaints.

The problems resulted in the pay-as-you-go issues:

From late 2013 until early 2015, a failure in our billing systems – linked to the migration challenges explained above – meant that customers who had topped up a PAYG mobile which had been dormant for nine months or more received a confirmation message that the credit had been added to their account; however, the mobile in question continued to be flagged as disconnected on our systems.

Although this impacted 10,452 customers, the situation caught Vodafone unaware:

Unfortunately, as the circumstances of the IT failure in question were very unusual (at the time, less than 0.01% of all Vodafone UK PAYG customers’ phones were inactive for more than nine months before being reactivated), the teams responsible for the day-to-day operation of the relevant areas were not fast enough in identifying the issue and did not fully appreciate its significance once they did so.

The migration and consolidation program began in 2013 and was completed in 2015. 

The IT failure involved was resolved by April 2015 – approximately 11 weeks after senior managers were finally alerted to it – with a system-wide change implemented in October 2015 that – as Ofcom acknowledges – means this error cannot be repeated in future.

More broadly, we have conducted a full internal review of this failure and, as a result, have overhauled our management control and escalation procedures. A failure of this kind, while rare, should have been identified and flagged to senior management for urgent resolution much earlier.

Our new billing and customer management system is designed to give our customers the best experience possible. It puts the customers in control of every aspect of the Vodafone products and services upon which they rely. It also enables our customer service and retail employees to respond quickly and efficiently to changing customer needs and swiftly put things right if they go wrong.

All of our consumer customer accounts have now been migrated successfully to the new system with a number of positive effects as a consequence. For example, there has been more than 50% reduction in customer complaints since November 2015 and our Net Promoter Score – which measures the extent to which our customers would recommend Vodafone to others – has increased by 50 points.

Vodafone also suffered commercially for its failings. In the three months to the end of June 2015, UK sales fell 11.4%. At the time, Vodafone CEO Vittorio Colao admitted that the IT program’s problems were having a wider impact:

The UK is more a mixed picture. On one hand, we have a very good performance of the network in London, where, actually, we have really 99.9% coverage and a very good performance on dropped calls and video speed. In the rest of the country we still have to do a little bit of work. There is still improvement but we have to do a little bit of work.

The real issue has been billing migration problems in the UK which has caused disruption to the customers and to our commercial operations. We still have reached 7 million 4G customers, we still have activated 20,000 new homes in fixed broadband, but, clearly, we have got more churn than what we wanted and less commercial push until we fix the problems.

The problems are being fixed. I would say 75% of them are out of the way. We have reduced the extra calls to the call centers by more or less 0.5 million but we still have a little bit to go. We believe we will have resolved everything by the summer and then we will resume full commercial strength in the second half of the year.

How Vodafone Could Have Done Things Differently

There are some good lessons to learn from Vodafone’s troubles.

Understanding Your Problem

Vodafone had had a lousy reputation for customer service for some time, coming out as easily the most complained-about UK mobile provider in Ofcom’s 2015 market survey. It received more than three times the industry average of 10 complaints per 100,000 customers in the last three months of 2015.

So Vodafone clearly had lessons to learn about the way it deals with customers before starting its CRM implementation. And if you start such a project with the mindset that customers are a pain in the ass, then all the CRM software in the world won't make things better; it'll just make it easier to anger your customers.

Internal Controls

Vodafone should have had better internal controls in place. Since these incidents Vodafone has conducted a full internal review and overhauled its management control and escalation procedures, noting that the problem should have been spotted and flagged much earlier than it was.

Employee Training

The best CRM system in the world will have no value if your employees are not willing and empowered to help your customers. Improving customer service teams’ ability to respond to questions and problems is key to great customer service.

“We fully appreciate the consequences for our customers of various failures in the migration process over the last three years,” it said. “We have sought to remedy these through an additional £30m investment this year in customer service and training, including hiring an additional 1,000 new UK-based call centre personnel and more than 190,000 hours of training to improve how we identify and resolve individual customer problems.”

Vodafone said that since doing this, it had seen a 50% reduction in complaint volumes and a significant improvement in its net promoter score.

Closing Thoughts

Ofcom Consumer Group director Lindsey Fussell said: 

“Vodafone’s failings were serious and unacceptable, and these fines send a clear warning to all telecoms companies. Phone services are a vital part of people’s lives, and we expect all customers to be treated fairly and in good faith.”

Vodafone replied with:

“Everyone who works for us is expected to do their utmost to meet our customers’ needs,” it said. “It is clear from Ofcom’s findings that we did not do that often enough or well enough on a number of occasions. We offer our profound apologies to anyone affected by these errors.”

It is a sad state of affairs that we need a regulator to make companies realize this.

In a nutshell: The best CRM system in the world will have no value if your employees are not willing and empowered to help your customers.

Free Project Complexity Assessment

This assessment will guide you through the 3 dimensions (structural, sociopolitical, and emergent) of project complexity by asking you 38 questions.

At the end of the assessment you will get a score between 0 and 38. The higher your score, the better grip you have on the complexity of your project. Most questions come with detailed feedback and links to more insights on how to handle that part of project complexity.

Other Project Failure Case Studies

For an overview of all case studies I have written please click here.

> To download 10 of my Project Failure Case Studies in a single eBook and be notified about new Project Failure Case Studies just subscribe to my weekly newsletter here or click on the image.


Tuesday, May 26, 2020

Is Your Strategy Bad? A Simple Checklist

Recognizing good strategy is hard. 

You need to understand the organization, the market(s) it is operating in, its competitors, its strengths, and its challenges. 

On the other hand, recognizing bad strategy is easy. 

Richard Rumelt coined the term “bad strategy” in 2007 at a short Washington, D.C., seminar on national security strategy. He later explained the concept in detail in his must-read book “Good Strategy Bad Strategy”. He is one of the world’s most influential thinkers on strategy and management and has always challenged dominant thinking.

Bad strategy is not the same thing as no strategy or strategy that fails rather than succeeds. It is an identifiable way of thinking and writing about strategy that is, unfortunately, still practised at many organizations. 

Bad strategy is long on goals and short on policy or action. It assumes that goals are all you need. It puts forward strategic objectives that are incoherent and, sometimes, totally impracticable. It uses buzzwords and phrases to hide these failings.

Once you develop the ability to detect bad strategy, you will dramatically improve your effectiveness at judging, influencing, and creating strategy. 

To detect a bad strategy, look for one or more of its four major signs:

1) Bullshit bingo

Rumelt calls it fluff, which is a nicer way of saying the same thing. Fluff is a restatement of the obvious, combined with a generous sprinkling of buzzwords that masquerade as expertise to create the illusion of high-level thinking. 

Guy Kawasaki has written extensively about this in his excellent book on startups, “The Art of the Start”, and gives the illustrative example of Wendy’s mission statement:

“The mission of Wendy’s is to deliver superior quality products and services for our customers and communities through leadership, innovation, and partnerships.”

Don’t get me wrong. I love Wendy’s, but I’ve never thought I was participating in “leadership, innovation, and partnerships” when I ordered a hamburger there.

2) Failure to face the challenge

A strategy is a way through a difficulty, an approach to overcoming an obstacle, a response to a challenge. If the challenge is not defined, it is difficult or impossible to assess the quality of the strategy. And, if you cannot assess that, you cannot reject a bad strategy or improve a good one.

For example, when a leader characterizes the challenge as underperformance, it sets the stage for bad strategy. Underperformance is a result; the true challenges are the reasons for the underperformance.

If you fail to identify and analyze the obstacles, you don’t have a strategy. Instead, you have either a stretch goal, a budget, or a list of things you wish would happen.

3) Mistaking goals for strategy

Many so-called strategies are in fact goals. “We want to be the number one or number two in all the markets in which we operate” is one of those. 

It does not tell you what you are going to do; all it does is tell you what you hope the outcome will be. But you’ll still need a strategy to achieve it.

Many bad strategies are just statements of desire rather than plans for overcoming obstacles.

4) Bad strategic objectives

A strategic objective is set by a leader as a means to an end. Strategic objectives are “bad” when they fail to address critical issues or when they are impracticable.

A long list of “things to do,” often mislabeled as “strategies” or “objectives,” is not a strategy. It is just a list of things to do. Such lists usually grow out of planning meetings in which a wide variety of stakeholders make suggestions as to things they would like to see done.

Rather than focus on a few important items, the group sweeps the whole day’s collection into the “strategic plan.” Then, in recognition that it is just a big pile of random objectives, the label “long-term” is added so that none of them need be done today.

Other lists may represent a couple of the firm’s priorities and choices, but taken together they do not form a coherent strategy. For example, consider “We want to increase operational efficiency; we will target Europe, the Middle East, and Africa; and we will divest business X.” These may be excellent decisions and priorities, but together they do not form a strategy.

Good strategy, in contrast, works by focusing energy and resources on one, or a very few, pivotal objectives whose accomplishment will lead to a cascade of favorable outcomes. It also builds a bridge between the critical challenge at the heart of the strategy and action—between desire and immediate objectives that lie within grasp. 

Thus, the objectives that a good strategy sets stand a good chance of being accomplished, given existing resources and competencies.

Why do we see so much bad strategy?

Bad strategy is all around us. Rumelt offers three reasons for this.

Unwillingness or inability to choose

Any strategy that has universal buy-in signals the absence of choice. Because strategy focuses resources, energy, and attention on some objectives rather than others, a change in strategy will make some people worse off and there will be powerful forces opposed to almost any change in strategy. 

For example, a department head who faces losing people, funding, or support as a result of a change in strategy will most likely oppose the change. 

Therefore, strategy that has universal buy-in often indicates a leader who was unwilling to make a difficult choice as to the guiding policy and actions to take to overcome the obstacles.

Template-style “Strategic Planning” 

Many strategies are developed by following a template of what a “strategy” should look like. Since strategy is somewhat nebulous, leaders are quick to adopt a template they can fill in since they have no other frame of reference for what goes into a strategy.

These templates usually take this form:

> The Vision: Fill in your vision of what the school/business/nation will be like in the future. Currently popular visions are to be the best or the leading or the best known.

> The Mission: Fill in a high-sounding, politically correct statement of the purpose of the school/business/nation. Innovation, human progress, and sustainable solutions are popular elements of a mission statement.

> The Values: Fill in a statement that describes the company’s values. Make sure they are noncontroversial. Keywords include “integrity,” “respect,” and “excellence.”

> The Strategies: Fill in some aspirations/goals but call them strategies. For example, “to invest in a portfolio of performance businesses that create value for our shareholders and growth for our customers.”

This template-style planning has been enthusiastically adopted by corporations, school boards, university presidents, and government agencies. Scan through such documents and you will find pious statements of the obvious presented as if they were decisive insights. The enormous problem all this creates is that someone who actually wishes to conceive and implement an effective strategy is surrounded by empty rhetoric and bad examples.

New Thought

New Thought is a movement that developed in the United States in the 19th century, considered by many to have derived from the unpublished writings of Phineas Quimby. 

It is the belief that you only need to envision success to achieve it, and that thinking about failure will lead to failure. The problem with this belief is that strategy requires you to analyze the situation to understand the problem to be solved, and to anticipate the actions and reactions of customers and competitors, which means considering both positive and negative outcomes. 

Ignoring negative outcomes does not set you up for success or prepare you for the unthinkable. It crowds out critical thinking.

In a nutshell: Bad strategy is not simply the absence of good strategy. 
