Sunday, August 20, 2017

14 Essential Software Engineering Practices for Your Agile Project

Agile initiatives are found in almost any company that builds software. Many of these initiatives have not brought the expected results. Nor are the people much happier in their work. There are many reasons for this, and a quick search on Google or LinkedIn will give you a plethora of them. But the one I am confronted with on almost every project I work on is a lack of experience with modern software engineering practices.

Flexibility (agility) can only happen when you are able to make changes to your product in an easy, fast, and flexible way. That is why:

Organizational Agility is constrained by Technical Agility

In other words, when you are slow in making changes to your product, then it doesn’t matter how you structure your teams, your organization or what framework you adopt, you will be slow to respond to changes. These changes include bug fixes, performance issues, feature requests, changed user behavior, new business opportunities, pivots, etc.

Luckily, there is a set of software engineering practices that will help your team deliver a high-quality product in a flexible state. These practices are not limited to building software; they cover the whole process up to the actual delivery of working software to the end user.

Unfortunately, many software engineers I have met in the financial services industry have had very limited exposure to these practices and their supporting tools. And don't get me started on the newly born army of agile coaches and Scrum Masters who have never produced a single line of working code in their lives. Besides hindering agility, this frustrates the teams, because without these practices it is almost impossible to deliver high-quality working software within one Sprint.

Your organization will need to invest in these software engineering practices through training, infrastructure, tools, coaching, and continuous improvement if you want your agile initiatives to actually deliver agility.

1. Unit Testing

The purpose of unit testing is not to find bugs. A unit test is a specification of the expected behavior of the code under test, and the code under test is the implementation of that expected behavior. Unit tests and the code under test thus check and protect each other. Later, when someone changes the code under test in a way that alters the behavior the original author expected, the test will fail. If your code is covered by a reasonable number of unit tests, you can maintain it without breaking existing features. That is why Michael Feathers, in his book Working Effectively with Legacy Code, defines legacy code as code without unit tests. Without unit tests, every refactoring effort is a major risk.
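
To make this concrete, here is a minimal sketch in Python using the standard library's `unittest`. The `apply_discount` function and its rules are hypothetical, invented purely for illustration:

```python
import unittest

def apply_discount(price, percent):
    """Apply a percentage discount; the percentage must be between 0 and 100."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (100 - percent) / 100, 2)

class ApplyDiscountTest(unittest.TestCase):
    """Each test name documents one expected behavior of the code under test."""

    def test_normal_discount(self):
        self.assertEqual(apply_discount(200.0, 25), 150.0)

    def test_zero_discount_returns_original_price(self):
        self.assertEqual(apply_discount(99.99, 0), 99.99)

    def test_invalid_percentage_is_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)
```

Run it with `python -m unittest`. Because each test name states an expectation, a failing test tells a future maintainer exactly which behavior their change broke.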

2. Continuous Integration

Martin Fowler defines Continuous Integration (CI) in his key article as follows: "Continuous Integration is a software development practice where members of a team integrate their work frequently, usually each person integrates at least daily - leading to multiple integrations per day. Each integration is verified by an automated build (including test) to detect integration errors as quickly as possible. Many teams find that this approach leads to significantly reduced integration problems and allows a team to develop cohesive software more rapidly." You see, without unit tests and test automation it is impossible to do CI right. And only when you do CI right can you succeed at Continuous Deployment.

3. Collective Code Ownership

Collective Ownership encourages everyone to contribute new ideas to all segments of the project. Any developer can change any line of code to add functionality, fix bugs, improve designs or refactor. No one person becomes a bottleneck for changes. This is easy to do when you have all your code covered with unit tests and automated acceptance tests.

4. Refactoring

Code should be written to solve the problem as it is known at the time. Often, teams become wiser about the problem they are solving, and continuously refactoring and changing code ensures the code base keeps meeting the most current needs of the business in the most efficient way. To guarantee that changes do not break existing functionality, your regression tests should be automated. In other words, unit tests are essential.
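
A small illustration of the idea, with hypothetical shipping rules: the implementation is restructured from nested conditionals into a data-driven table, while an automated regression test pins the behavior down:

```python
def shipping_cost_old(weight_kg):
    # Original implementation: nested conditionals, hard to extend.
    if weight_kg <= 1:
        return 5.0
    else:
        if weight_kg <= 5:
            return 9.0
        else:
            return 20.0

# Refactored implementation: a data-driven table of weight bands.
# The behavior is identical; only the structure changed.
BANDS = [(1, 5.0), (5, 9.0), (float("inf"), 20.0)]

def shipping_cost(weight_kg):
    for limit, cost in BANDS:
        if weight_kg <= limit:
            return cost

# The automated regression test that makes the refactoring safe:
for w in (0.5, 1, 3, 5, 7.5):
    assert shipping_cost(w) == shipping_cost_old(w)
```

Adding a new weight band now means editing one line of data instead of another level of nesting, and the test guarantees the old behavior survived the change.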

5. Test Driven Development

Test-driven development is a development style that drives the design by tests developed in short cycles of: 

1. Write one test,
2. Implement just enough code to make it pass,
3. Refactor the code so it is clean.
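
The three steps can be sketched in Python; `slugify` is a made-up example function, not from any particular code base:

```python
# Step 1 - write one failing test first:
def test_slugify_replaces_spaces_with_hyphens():
    assert slugify("Hello World") == "hello-world"

# Step 2 - implement just enough code to make it pass:
def slugify(text):
    return text.lower().replace(" ", "-")

# Step 3 - refactor: also strip surrounding whitespace and collapse
# runs of spaces, keeping the existing test green while the code
# becomes more robust.
def slugify(text):
    return "-".join(text.lower().split())

test_slugify_replaces_spaces_with_hyphens()
```

The cycle is deliberately short: each pass through it adds one small, tested behavior.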

Ward Cunningham argues that test-first coding is not testing. Test-first coding is not new. It is nearly as old as programming. It is an analysis technique. We decide what we are programming and what we are not programming, and we decide what answers we expect. Test-first is also a design technique.

6. Automated Acceptance Testing

Also known as Specification by Example. Specification by Example or Acceptance test-driven development (A-TDD) is a collaborative requirements discovery approach where examples and automatable tests are used for specifying requirements—creating executable specifications. These are created with the team, Product Owner, and other stakeholders in requirements workshops. I have written about a successful implementation of this technique within Actuarial Modeling.
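
As a sketch of what an executable specification can look like (the cart example is invented, not taken from an actual requirements workshop), a concrete example agreed with the Product Owner becomes an automatable test:

```python
# Executable specification, phrased as an example:
# "Given a cart with two items priced 10 and 15,
#  when the customer checks out,
#  then the total charged is 25."

def checkout_total(item_prices):
    """The implementation that must satisfy the specification."""
    return sum(item_prices)

def test_checkout_charges_the_sum_of_item_prices():
    # Given
    cart = [10, 15]
    # When
    total = checkout_total(cart)
    # Then
    assert total == 25

test_checkout_charges_the_sum_of_item_prices()
```

Tools such as Cucumber or FitNesse let stakeholders write the Given/When/Then part in plain language, but the essence is the same: requirements expressed as concrete, runnable examples.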

7. Chaos Engineering

Even when all of the individual services in a distributed system are functioning properly, the interactions between those services can cause unpredictable outcomes. Unpredictable outcomes, compounded by rare but disruptive real-world events that affect production environments, make these distributed systems inherently chaotic.

You need to identify weaknesses before they manifest in system-wide, aberrant behaviors. Systemic weaknesses can take the form of improper fallback settings when a service is unavailable, retry storms from improperly tuned timeouts, outages when a downstream dependency receives too much traffic, and cascading failures when a single point of failure crashes. You must address the most significant weaknesses proactively, before they affect your customers in production. You need a way to manage the chaos inherent in these systems, take advantage of increased flexibility and velocity, and have confidence in your production deployments despite the complexity that they represent.

An empirical, systems-based approach addresses the chaos in distributed systems at scale and builds confidence in the ability of those systems to withstand realistic conditions. We learn about the behavior of a distributed system by observing it during a controlled experiment. This is called Chaos Engineering. A good example of this would be the Chaos Monkey of Netflix. 
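
A toy illustration of such a controlled experiment in Python; the services, fallback, and failure rate are simulated, not a real production setup:

```python
import random

def recommendations_service(user_id, fail_rate=0.0):
    """A downstream dependency that we deliberately make unreliable."""
    if random.random() < fail_rate:
        raise TimeoutError("service unavailable")
    return ["personalised", "items", "for", str(user_id)]

def homepage(user_id, fail_rate=0.0):
    """The system under test: it must degrade gracefully, not crash."""
    try:
        return recommendations_service(user_id, fail_rate)
    except TimeoutError:
        return ["popular", "items"]  # fallback keeps the steady state

# The controlled experiment: inject a 30% failure rate and verify the
# steady-state hypothesis "every user always sees some recommendations".
random.seed(42)
for user in range(1000):
    assert len(homepage(user, fail_rate=0.3)) > 0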

8. Continuous Deployment

Continuous delivery is a set of practices designed to ensure that code can be rapidly and safely deployed to production, by delivering every change to a production-like environment and verifying through rigorous automated testing that business applications and services function as expected. Since every change is delivered to a staging environment using complete automation, you can be confident the application can be deployed to production with the push of a button when the business is ready. Continuous deployment is the next step beyond continuous delivery: every change that passes the automated tests is deployed to production automatically. Continuous deployment should be the goal of most companies that are not constrained by regulatory or other requirements.

9. Microservices

The microservices architecture structures a single application as a set of small services. These services are independent of each other and communicate through well-defined interfaces using a lightweight mechanism, for example a REST API. Each service has a single function, which maps microservices directly onto business needs. Different frameworks or programming languages can be used to write microservices, and each service can be deployed on its own or as part of a group.
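
A minimal sketch of such a single-function service, using only the Python standard library; the price-lookup function and catalog are hypothetical:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def price_in_cents(product_id):
    """The service's single business function: look up a price."""
    catalog = {"sku-1": 1999, "sku-2": 4950}
    return catalog.get(product_id)

class PriceHandler(BaseHTTPRequestHandler):
    """Exposes the function over a lightweight REST-style interface."""

    def do_GET(self):
        product_id = self.path.strip("/")
        price = price_in_cents(product_id)
        if price is None:
            self.send_response(404)
            self.end_headers()
            return
        body = json.dumps({"product": product_id, "cents": price}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

# To run the service:
#   HTTPServer(("", 8080), PriceHandler).serve_forever()
```

Because the service owns exactly one function behind one interface, it can be rewritten, scaled, or redeployed without touching the rest of the application.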

10. Infrastructure as Code

With this practice, infrastructure is provisioned and managed using code and software development techniques such as version control and continuous integration. Engineers interact with infrastructure programmatically and at scale, rather than manually setting up and configuring individual resources; the cloud's API-driven model makes this possible for system administrators and developers alike. Because infrastructure is defined in code, it is treated like application code: servers can be deployed quickly to a fixed standard, updated with the latest patches and versions, and duplicated repeatably.
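
The core idea can be sketched in a few lines of Python; the resource names and specs are invented, and a real tool such as Terraform does far more, but the principle of declaring a desired state and computing a plan is the same:

```python
# Desired infrastructure, declared as data and kept in version control:
desired = {
    "web-1": {"size": "small", "image": "app:2.3"},
    "web-2": {"size": "small", "image": "app:2.3"},
}

# Actual state, as it might be reported by a (hypothetical) cloud API:
actual = {
    "web-1": {"size": "small", "image": "app:2.2"},
    "web-3": {"size": "large", "image": "app:2.2"},
}

def plan(desired, actual):
    """Compute the changes needed to make reality match the declaration."""
    actions = []
    for name, spec in desired.items():
        if name not in actual:
            actions.append(("create", name))
        elif actual[name] != spec:
            actions.append(("update", name))
    for name in actual:
        if name not in desired:
            actions.append(("destroy", name))
    return sorted(actions)

print(plan(desired, actual))
```

The plan is repeatable: once reality matches the declaration, applying the same declaration again yields no actions.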

11. Configuration Management

Developers and system administrators automate the operating system, host configuration, operational tasks, and so on with code. Because code is used, configuration changes become standardized and repeatable. This relieves developers and system administrators of the burden of manually configuring the operating system, system applications, or server software.

12. Policy as Code

With the cloud, infrastructure and its configuration are codified. This makes it possible for organizations to monitor and enforce compliance dynamically, and enables the automatic tracking, validation, and reconfiguration of infrastructure. In that way, organizations can easily control changes to resources and ensure that security measures are properly enforced across the estate. Because non-compliant resources can be flagged automatically for further investigation, or even automatically brought back into compliance, teams within the organization can move faster.
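
A toy sketch of policies expressed as code; the policy names and resources are invented for illustration:

```python
# Policies codified as plain predicates over resource descriptions.
POLICIES = {
    "encryption-at-rest": lambda r: r.get("encrypted", False),
    "no-public-buckets": lambda r: not r.get("public", False),
}

def non_compliant(resources):
    """Flag every resource that violates at least one policy."""
    findings = []
    for name, resource in resources.items():
        for policy, check in POLICIES.items():
            if not check(resource):
                findings.append((name, policy))
    return sorted(findings)

resources = {
    "logs-bucket": {"encrypted": True, "public": False},
    "backup-bucket": {"encrypted": False, "public": True},
}
print(non_compliant(resources))
```

Because the policies are code, they can be version-controlled, reviewed, and run automatically against every change, just like any other test suite.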

13. Monitoring and Logging

To gauge how the performance of applications and infrastructure affects consumers, organizations monitor metrics and logs. The data and logs generated by applications and infrastructure are captured, categorized, and analyzed to understand how users are impacted by changes or updates. This makes it easier to detect the sources of unexpected problems or changes. Constant monitoring is necessary to ensure the steady availability of services and to increase the speed at which infrastructure is updated. When these data are analyzed in real time, organizations can monitor their services proficiently.
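
A minimal Python sketch of the idea: emit one structured log event per request, then derive a metric from the captured events (the event format and alert threshold are assumptions, not a standard):

```python
import json
import logging
import time

logging.basicConfig(level=logging.INFO, format="%(message)s")
log = logging.getLogger("checkout")

def record_request(path, duration_ms, status):
    """Emit one structured log line per request; a log aggregator can
    then categorize and analyze these events across all services."""
    log.info(json.dumps({
        "ts": time.time(), "path": path,
        "duration_ms": duration_ms, "status": status,
    }))

def error_rate(events):
    """A metric derived from the captured events."""
    errors = sum(1 for e in events if e["status"] >= 500)
    return errors / len(events)

record_request("/checkout", 87, 200)

events = [
    {"status": 200}, {"status": 200}, {"status": 503}, {"status": 200},
]
assert error_rate(events) == 0.25  # alert when this crosses a threshold
```

Structured (machine-readable) log lines are what make the categorize-and-analyze step cheap: the aggregator parses JSON instead of scraping free-form text.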

14. Communication and Collaboration

This is the key feature of any Agile and/or DevOps model: as development, test, security, and operations come together and share responsibilities, team spirit is enhanced and communication improves. Chat applications, project tracking systems, and wikis are some of the tools that can be used not just by developers and operations but also by other teams like marketing or sales. This brings all parts of the organization closer together as they cooperate to realize goals and projects.

Read more…

Monday, August 07, 2017

Would Your Team Work With the Chaos Monkey?

Advances in large-scale, distributed software systems are changing the game for software engineering. As an industry, we are quick to adopt practices that increase flexibility of development and velocity of deployment. An urgent question follows on the heels of these benefits: How much confidence can we have in the complex systems that we put into production?

Chaos Engineering

Even when all of the individual services in a distributed system are functioning properly, the interactions between those services can cause unpredictable outcomes. Unpredictable outcomes, compounded by rare but disruptive real-world events that affect production environments, make these distributed systems inherently chaotic.

We need to identify weaknesses before they manifest in system-wide, aberrant behaviors. Systemic weaknesses could take the form of: improper fallback settings when a service is unavailable; retry storms from improperly tuned timeouts; outages when a downstream dependency receives too much traffic; cascading failures when a single point of failure crashes; etc.  We must address the most significant weaknesses proactively, before they affect our customers in production.  We need a way to manage the chaos inherent in these systems, take advantage of increasing flexibility and velocity, and have confidence in our production deployments despite the complexity that they represent.

An empirical, systems-based approach addresses the chaos in distributed systems at scale and builds confidence in the ability of those systems to withstand realistic conditions. We learn about the behavior of a distributed system by observing it during a controlled experiment. This is called Chaos Engineering.

Chaos Monkey

Chaos Engineering was the philosophy when Netflix built Chaos Monkey, a tool that randomly disables Amazon Web Services (AWS) production instances to make sure you can survive this common type of failure without any customer impact. The name comes from the idea of unleashing a wild monkey with a weapon in your data center (or cloud region) to randomly shoot down instances and chew through cables — all the while you continue serving your customers without interruption. 

By running Chaos Monkey in the middle of a business day, in a carefully monitored environment with engineers standing by to address any problems, you can still learn the lessons about the weaknesses of your system, and build automatic recovery mechanisms to deal with them. So next time an instance fails at 3 am on a Sunday, you won’t even notice.

Chaos Monkey has a configurable schedule that allows simulated failures to occur at times when they can be closely monitored.  In this way, it’s possible to prepare for major unexpected errors rather than just waiting for catastrophe to strike and seeing how well you can manage.
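
As a toy sketch of such a schedule (the window, probability, and instance names here are invented, not Chaos Monkey's actual configuration):

```python
import random
from datetime import datetime, time

def in_window(now):
    """Only act on weekdays during business hours, when engineers
    are standing by to observe and respond."""
    return now.weekday() < 5 and time(9, 0) <= now.time() <= time(15, 0)

def pick_victim(instances, now, termination_probability=0.2, rng=random):
    """Randomly select one instance to terminate, or None."""
    if not in_window(now) or not instances:
        return None
    if rng.random() < termination_probability:
        return rng.choice(instances)
    return None

# Outside the window (3 am on a Sunday) the monkey never acts:
assert pick_victim(["i-1", "i-2"], datetime(2017, 8, 20, 3, 0)) is None
```

The real failures you are preparing for will not respect any schedule, but the simulated ones can, which is exactly what makes them safe to learn from.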

Chaos Monkey was the original member of Netflix’s Simian Army, a collection of software tools designed to test the AWS infrastructure. The software is open source (GitHub) to allow other cloud services users to adapt it for their use. Other Simian Army members have been added to create failures and check for abnormal conditions, configurations and security issues. 

An Agile and DevOps engineering culture doesn’t have a mechanism to force engineers to architect their code in any specific way. Instead, you can build strong alignment around resiliency by taking the pain of disappearing servers and bringing that pain forward. 

Most people think this is a crazy idea, but you can’t depend on the infrequent occurrence of outages to impact behavior. Knowing that this will happen on a frequent basis creates strong alignment among engineers to build in the redundancy and automation to survive this type of incident without any impact on your customers.

Would your team be willing to implement their own Chaos Monkey?

Read more…

Saturday, August 05, 2017

Do we need Service Portfolio Management?

I am not a big fan of ITIL. DevOps and ITIL are basically both IT Service Management (ITSM): the patterns are the same, though the execution is different. Service management is not one theory among many. Service management models reality, and ITSM is the description of IT reality: of all the activities that happen in the real world.

Everybody manages change; everybody releases into the world; everybody manages capacity and availability and continuity; everybody manages incidents and requests; everybody defines strategy, plans, designs, and builds services; everybody does improvement. The levels of maturity may vary, but these practices are present in every organization.

DevOps is one codification of that reality and ITIL is another. I am a DevOps guy.

Nevertheless, there is a concept in ITIL that I like very much: service portfolio management. A company's service portfolio can be defined as the group of services it offers, listed in terms of their value to the business. This portfolio can include everything from in-house services to outsourced ones. Project and service portfolio management obviously have many things in common. First of all, they share the same techniques and the same goal as general portfolio management: making the right decisions about investments, which is not that different from classical financial portfolio management. The main goal of service portfolio management is to maximize the value realized for the business while balancing the risks against the costs.

Too many organizations take a portfolio view of only programs and projects, whilst neglecting the operational systems. This is one order removed from a truly holistic view. Project portfolio management only looks at change, not the current state. Service portfolio management looks at the services in production as well as the proposed changes to service. It looks at the distribution of resources across both Build and Run, not just Build. It considers the impact on the existing services when we try to introduce new ones. Failure to do this is why so many IT departments are drowning in projects and have ever increasing operational costs.

Failure to manage across the whole portfolio of current and planned services - focusing instead on only the portfolio of change - means that there will be a continuous deluge of CapEx expenditure on new services with zero consideration of Run's ability to absorb them (and often zero budget to do so too). Notice that the differentiation between Build and Run is also valid for DevOps teams. When they need most of their time to Run things, there is no time to Build things. On the other hand, when they are expected to Build continuously, there is no time to Run things. We hollow out our capacity to deliver the services because we are so busy building and changing them.

Balancing priorities and resources across the portfolio of change, of projects, of programs, is not enough. We must balance across all activity, across Build and Run together. ITIL Service Strategy tells us exactly that by defining the concept of service portfolio management. IT management is a constant balancing act of Build and Run.

If you look at project portfolio management historically, it is clearly not about services, it’s about products. At the end of a project there is a result, after that is delivered, the project ends. There is a fundamental difference in the philosophies of project and service portfolio management. With our customers, we should talk about services. A service is a means of delivering value to customers by facilitating the outcome customers want to achieve, without the ownership of specific costs and risks. This is what your customers want.

When it comes to successful service portfolio management, Product Managers and Product Owners play an important role as they are expected to manage the services and products throughout their lifecycle. Project Managers manage only the project.

A typical service portfolio contains three subsets.

1. The service pipeline includes the services that are currently under development or under consideration for a specific market or demographic. This is your project portfolio. The service pipeline can also include priorities as well as the short-term and long-term goals of your business. 

2. The service catalog is the section of the portfolio that is visible to your customers and provides an insight into the services and products your business delivers. 

3. The last subset of the portfolio is retired services: products and services that are soon to be withdrawn or have already been phased out. 

Alex Lichtenberger created a nice overview of these subsets and the relation with project portfolio management with the diagram below.

You can buy my books The Art of Technology Project Portfolio Management and The Science of Technology Project Portfolio Management separately or as a bundle in my store. Just click here or on the image.

Read more…

Thursday, July 27, 2017

No Validation? No Project!

Project Business Case Validation
For large or high-risk projects (what counts as large depends on your organization), it should be mandatory to validate the business case before you dive headfirst into executing the project. In my Project Portfolio Funnel you will see a phase called "Validation" after the selection of a project has taken place. In this phase you typically do a business case validation and/or a technical validation in the form of a proof of concept. This article is about the project's business case validation.

Project portfolio management is about doing the right projects. And as Peter Drucker has said many times:

Nothing is less productive than to make more efficient what should not be done at all.

Jon Lay and Zsolt Kocsmárszky from Hanno have defined a process for lean product validation. I used their process as a basis for business case validation for projects.

We will distinguish between four different kinds of validations. Depending on the project whose business case you want to validate, you do only one, two, three, or all four of them. When your project is about launching a new product or service, I would advise doing all four validations before executing the project. The results of these validations form your business case validation. Of course, they can also show you that you should not do this project at all.

1. Validate the problem: Is this a problem worth solving? If users don’t think this is a major problem, your solution won’t be appealing.

2. Validate the market: Some users might agree that this is a problem worth solving. But are there enough of them to make up a market for your product or service?

3. Validate the product/service/solution: The problem might exist, but does your product/service/solution actually solve it?

4. Validate the willingness to pay: There might be market demand and a great product or service. But will people actually be willing to reach into their wallets and pay for it?

As you can see, these validations focus on the introduction of a new product or service for your clients. But you can easily reframe them for all of your projects. Your market can be your employees instead of customers. For example, when you think about implementing a new CRM, but only a very small (if vocal) number of users see the benefit of it and most are actually quite happy with what they currently have, your market is too small. Or how about the accounting department's willingness to pay for a new solution that triples the yearly operational costs?

You can decide per project which validations make sense, and in what order. For example, in some cases, it can be much harder to build a prototype (to validate the product or service) than it would be to build a landing page (to validate market demand).

But by working through these validations, you’re sure to receive heaps of feedback from potential users and stakeholders. With this deeper understanding, you might realize that it makes no sense to do this project, or you might realize there’s a bigger opportunity to tackle. In this case, you should feel free to pivot your project idea!

Validate the Problem

No matter how confident you are in the project idea, you first need to figure out whether the problem is a real one that needs solving. To do this, your best option is to talk to potential users and stakeholders directly. The focus here is on getting qualitative validation of the project idea.

First, you start with a small number of properly sampled, representative users and verify that the problem exists for them. Then, you optimize and verify this later on with more users and at greater scale. Have a look at our article about jobs to be done on how to frame and discuss the problem with users.

The learning you take from these interviews will give you the confidence you need to move the idea forward. Or it will show you that you need to pivot the idea as you learn more about the real underlying problems.

Validate the Market

Once you’ve spoken with users and validated that the problem exists, you need to make sure the market is large enough to justify taking the idea further. Where will your users come from, and how much potential revenue or savings lies in the market opportunity? In other words, what is the potential value of the project?

By gathering as much information as you can about the potential market, you’ll be able to make an educated guess as to the size of your target audience and the number of customers you can acquire. It will also help you to figure out a possible pricing structure for the product or service, which will be valuable as you start to make other financial projections.

Here are a couple of tools you might use to validate the market for your product or service:

> Google Trends allows you to compare the relative volume of search terms. It shows you how demand for each term has changed over the last 12 months.

> Google AdWords’ Keyword Planner reveals the average monthly searches for a given keyword. It also gives you an estimate for competitors and suggested bids. Using this data, you can estimate how much traffic you can drive to your website using Google AdWords paid ads or by reaching the top of search engine results.

> Deep competitor research enables you to understand how your future competitors are trying to solve the same problem. If you’re focusing on a small market, finding just one competitor might be enough. But if you’re focusing on a bigger market, search for more, at least three. Besides searching for direct competitors, look up those that offer similar products and solutions. Then, find out as much as possible about them. Crawl their profiles, read their blogs, look for media coverage. Learn about the size of their team, pricing structure, finances, and product features.

If you can’t find any competitors, then most likely you’ve either missed something in your research or you’re working on a problem that doesn’t really exist. There’s an exception, however. If you’re starting up a very small niche business, there’s a chance you don’t have any competition… yet.

Another area to look at when conducting competitor analysis is companies that might attempt to move quickly into a new product or service area when competition arises in their market. While this isn’t of immediate concern, it pays to roughly identify fast followers who might step up to offer a similar product to yours to win the market. This will help you to prepare and come up with a plan to handle the situation should it arise.

Validate the Product or Service

There’s only one way to make sure your product or service solves the problem you’re focusing on: get your hands dirty and build a prototype. It doesn’t matter if you don’t have much technical experience; you don’t need an engineer to build a great prototype. Instead of choosing expensive and time-consuming prototyping techniques, keep things lean.

Once you’ve built the prototype, you’ll begin to run tests with users to gather feedback and learn from them as much as possible. Finding people around you to be test subjects is a great starting point. Ideally, look for users in your target audience. However, in the very earliest stages of testing, you can get useful feedback just by grabbing people around you and running simple tests.

Once you learn more from these quick tests and potentially iterate on your product or service, taking into account the feedback you gather, you can move on to more structured tests with your target audience.

The goal of product/service validation, whether remote or in person, is to make sure that your product or service is solving the right problem in the most effective way. It’s highly unlikely that you’ll manage to do this perfectly the first time around — but that’s perfectly fine! Iterations, tweaks and pivots are a natural part of the product validation process and are one of the reasons why prototyping is such a valuable technique.

Validate the Willingness to Pay

There are many ways to validate willingness to pay, but one of the big challenges is that we can’t necessarily trust people’s words. We all know those people who say “Of course, I’d totally buy that!” but who then drop out when the time comes to commit. As far as possible, we want to get people to put money behind their words.

A single-page website is already adequate for validation, especially as a first step. But if you want to gradually iterate and add more pages and content to increase fidelity and learn more, that’s an option, too. If you have the technical resources, you could design and code this yourself, but most companies are better off using a template website builder. You’ll want to include the following elements in your simple website:

> Explain what your product or service does, clearly and concisely.

> Highlight the product or service’s unique selling points.

> Address any key concerns that users had during testing. If they were worried about security, performance or comfort, explain your solution.

> Add a direct call to action to bring users to the checkout and payment part of the website. This should be clear and unequivocal — something like “Buy now for $99!” or "Sign up for early access!".

> Make sure the payment page has an email collection form so that you can gather the addresses of prospective customers.

> Instead of confirming the user’s order, you should show them a “Sorry” page that explains why you’re conducting the experiment. This way, they won’t have false expectations about receiving the product.

> Run analytics on the website to track how people are interacting with it. Google Analytics is simple and free.

Once the validation website is up and running, you’ll need to drive traffic to it. Be a little cautious about doing this within your network of long-time clients and colleagues. You don’t want to fall into the trap of collecting false positives, people who sign up just because they know you personally. The best technique for bringing unbiased traffic to a validation website is to spend some money on Facebook Ads or on a simple Google AdWords campaign.

Facebook and Google AdWords have robust targeting possibilities that let you showcase your solution to the right audience, which you would have identified in the “Validate the problem” phase. You can also further optimize your ad by factoring in data from your initial ads, such as the number of clicks, likes, and shares received and the location of people engaging with it. Getting started with an ad campaign on Facebook or Google AdWords is pretty straightforward.

Closing Thoughts

Even if you validate your project's business case, your project isn’t guaranteed to succeed. The process of validation is just the start. But if you’ve worked through the relevant validations, you’ll be in a far better position to judge if you should stop, continue or change your project.

The goal of the validation phase is to postpone the expensive and time-consuming project work until as late as possible in the process. It’s the best way to keep yourself focused, to minimize costs, and to maximize your chance of a successful project.

You can buy my books The Art of Technology Project Portfolio Management and The Science of Technology Project Portfolio Management separately or as a bundle in my store. Just click here or on the image.

Read more…

Wednesday, July 26, 2017

When is your Project Backlog Item "Ready" for selection?

As defined in the Simple Portfolio Management Framework, the Project Backlog is an ordered list of all projects that might be needed in the organization and is the single source of projects for any changes to be made in the organization. The Portfolio Owner is responsible for the Project Backlog, including its content, availability, and ordering.

The Project Backlog lists all projects, ideas, proposals, enhancements, and fixes that constitute the changes to be made to the organization in the future. See my article on Demand Management for ideas on how to collect all these ideas.

A Project Backlog is never complete. The earliest development of it only lays out the initially known and best-understood projects. The Project Backlog evolves as the organization and the environment in which it operates evolve. The Project Backlog is dynamic; it constantly changes to identify what the organization needs to be appropriate, competitive, and useful. As long as an organization exists, its Project Backlog also exists.

Project Backlog refinement is the act of adding detail, estimates, and order to items in the Project Backlog. This is an ongoing process in which the Portfolio Owner and the PMO Team collaborate on the details of Project Backlog items. During Project Backlog refinement, items are reviewed and revised. The Portfolio Team decides how and when refinement is done. Refinement usually consumes 50% of the capacity of the Portfolio Owner and PMO Team. However, Project Backlog items can be updated at any time by the Portfolio Owner or at the Portfolio Owner’s discretion.

Higher ordered Project Backlog items are usually clearer and more detailed than lower ordered ones. More precise estimates are made based on the greater clarity and increased detail; the lower the order, the less detail. Project Backlog items that will occupy the Portfolio Team for the upcoming Portfolio Cycle are refined so that any one item can be decided on within the Portfolio Planning Meeting. Project Backlog items that can be decided on by the Portfolio Team within a Portfolio Planning Meeting are deemed “Ready” for selection. Project Backlog items usually acquire this degree of transparency through the above-described refining activities.

The Project Backlog Item

Depending on your organization and your portfolio you will need different information for your Project Backlog Items to make the right decisions. But my experience has shown that the list below will give you a good start.

Who would be the project sponsor?
What are the estimated costs? Estimate the following:  
1. Project costs: internal and cash-out
2. Yearly operational costs
3. Total Cost of Ownership for 5 years (or shorter when the life expectancy of the project result is shorter)

As explained in our article on Categorization, the most effective estimates for the Project Backlog are Range-Based Estimates.
What business problem does the project solve?
Strategy alignment
Which strategic goals does this project align with? See the article on Categorization on how to link project backlog items with strategy. Add regulatory/compliance and operational as options as well.
Describe the impact of not doing this project.
Describe the risk of doing this project.
Give one risk valuation (High, Medium, or Low) for use in the risk-value portfolio bubble chart.
What is the value of this project to the organization? Express it in CHF as well as describing it in text. List the expected benefits.
Success metrics
How will we measure success? For example:

- Stakeholder satisfaction
- Customer satisfaction
- Quality delivered to the customer
- Return on Investment
- Meeting business case objectives
- Customer adoption
- Meeting governance criteria
- Benefits realization

Full scope, on time and in budget has nothing to do with success! It just tells you something about the quality of your estimations, not the results of your project.
Is the organizational culture right for this kind of project? Describe and give a valuation (Yes or No).
Do we bring new technology in-house or do we build with and upon existing technology? Describe and give a valuation. A categorization that I have used many times is the following: 1) Technology exists in the organization, 2) Technology is easily acquired, 3) Technology is hard to acquire, 4) Technology needs to be created, 5) No technology needed.
Would we buy or make?
Time criticality
Should it be executed in a certain timeframe in order to deliver benefits? Describe and give a valuation (Yes or No).
Client Impact
Would there be a lot of change for our clients? Discontinuing a service could have a very high impact for some clients. Describe and give one valuation (High, Medium, or Low).
Employee Impact
Would there be a lot of change for our employees? For example, a new ERP system would have a very high impact. Describe and give one valuation (High, Medium, or Low).
Will staff training be required? Add to costs!
How does this project link to other projects in the portfolio? Dependencies? Shared platforms? Double work? Describe.

The moment you have collected the information above, the item is "Ready" for evaluation and possible selection into your portfolio.
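As a sketch only, the checklist above could be captured as a simple data structure; the class and field names below are my own illustration and not part of SPMF, and the Total Cost of Ownership helper just applies the "project costs plus five years of operational costs" rule from the cost section, using range-based estimates:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class RangeEstimate:
    """Range-based estimate (low/high), as recommended in the Categorization article."""
    low: float
    high: float

@dataclass
class ProjectBacklogItem:
    title: str
    sponsor: str
    business_problem: str
    project_costs: RangeEstimate               # internal plus cash-out, in CHF
    yearly_operational_costs: RangeEstimate    # in CHF
    strategic_goals: List[str] = field(default_factory=list)
    risk_valuation: str = "Medium"             # High / Medium / Low
    value_chf: Optional[RangeEstimate] = None
    time_critical: bool = False
    client_impact: str = "Low"                 # High / Medium / Low
    employee_impact: str = "Low"               # High / Medium / Low

    def tco(self, years: int = 5) -> RangeEstimate:
        """Total Cost of Ownership: project costs plus `years` of operational costs."""
        return RangeEstimate(
            self.project_costs.low + years * self.yearly_operational_costs.low,
            self.project_costs.high + years * self.yearly_operational_costs.high,
        )
```

Keeping the estimate as a range rather than a single number makes the uncertainty of early backlog items explicit when comparing them in a Portfolio Planning Meeting.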



Monday, July 24, 2017

The Project Portfolio Funnel

Project Portfolio Funnel
When you look at the project portfolio funnel of the Simple Portfolio Management Framework (SPMF), it looks like the diagram below. As I explained in previous articles, this is not greatly different from how PMI defines the project portfolio funnel. What is greatly different is how you manage your funnel. SPMF is a framework within which people can address complex portfolio management problems while productively and creatively delivering projects of the highest possible value. The SPMF works well with any project delivery method, from Scrum to Waterfall and anything in between.
So let's walk through this funnel step by step to understand the dynamics of a project portfolio funnel. For this example we will start with an empty funnel at time point X. We will follow the suggestion of the SPMF Guide and have monthly portfolio cycles. At time point X we have a bunch of project ideas on our Project Backlog. These ideas are collected by the demand management process.
These eight ideas will be categorized and afterward evaluated. During categorization or in the evaluation phase, ideas can be removed from the Project Backlog for reasons described in the linked articles. In short: when it is clear there is no value in the idea at this point in time, it will be marked as such and removed from the Project Backlog.
At a portfolio planning meeting, the remaining ideas have the chance of being selected for validation. When they are selected for validation the ideas are placed on the Portfolio Backlog and additional information will be gathered. This can be a business case validation or a technical proof of concept.
On successful validation, the execution phase of the project will start. Duration of the validation phase can be different for each project, depending on the needs and accepted risk. In the funnel below you see that two projects are still in the validation phase, and for one project the delivery phase has started.
Of course, one result of the validation phase can be that the project will not be executed. This is shown in the funnel below. One project continues to the delivery phase, joining the one that was already started, and one project is stopped.
When all goes well the project (or parts of it) will be delivered and you can start monitoring its benefits. When it does not go well or the priorities of the portfolio have changed the portfolio committee can decide to stop a project as well in a portfolio review meeting. Both are shown in the funnel below.
After a while of monitoring the benefits (or lack of them), you then do a project retrospective. This is essential for learning and improving within the organization. But as I said, this retrospective only makes sense after a phase of benefits monitoring, not directly after the closing of the project.
The above example of the project portfolio funnel is of course extremely simplified, because in reality new ideas will come in on an ongoing basis, and every portfolio cycle new ideas need to be evaluated and could be selected.
The duration of the phases after evaluation can span between one and many portfolio cycles.
Projects will be delivered, and new ideas keep on coming in.
Benefit monitoring starts, and still new ideas keep coming in, which is a very good thing. When you run out of ideas, your portfolio quality will suffer greatly.

So after a while of running SPMF, your funnel will look like the one below. And that is why transparency and backlog management are essential for managing your portfolio. Displaying this funnel in the form of a Kanban Board is a great start.
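Such a Kanban-style funnel view can start out very simply; here is a minimal sketch that renders each SPMF phase as a column with its current projects (the stage names and projects are purely illustrative):

```python
def funnel_board(stages):
    """Render portfolio funnel stages as a simple Kanban-style text view.

    `stages` maps a stage name to the list of project names currently in it.
    """
    width = max(len(name) for name in stages)
    return "\n".join(
        f"{name.ljust(width)} | {', '.join(items) if items else '-'}"
        for name, items in stages.items()
    )
```

Even this crude view makes the state of every idea transparent at a glance, which is the point of displaying the funnel as a board.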



Thursday, July 13, 2017

No More User Stories! There Are Jobs to Be Done

No more user stories! There are jobs to be done
Creating new and better products that thousands or millions of customers actually love is a top priority for almost every company. Agile frameworks and techniques have been hailed as a solution for exactly this problem. It started with eXtreme Programming (XP), where a conversation between the development team and the customer is the basis for all requirements.

And in theory this is the way to go, right? But there are two issues with this line of thinking. The first is that it does not work for start-ups and new products where it is not clear yet who the customer is - the Lean Startup methodology from Eric Ries is based around this fact. The second issue is something Clayton Christensen discovered.

Clayton Christensen, the famed Harvard Business School Professor known for coining the term “disruptive innovation”, believes that one of his most enduring legacies will be an idea he first put forward in his 2003 book "The Innovator’s Solution": don’t sell products and services to customers, but rather try to help people address their jobs to be done. This seemingly simple idea has profound implications for re-framing industries and products.

Never have companies known more about their customers. Thanks to the big data revolution, companies can now collect an enormous variety and volume of customer information, at unprecedented speed, and perform sophisticated analyses of it. Many firms have established structured, disciplined innovation processes and brought in highly skilled talent to run them. Direct access to customers is not a problem, and customers are more than willing to voice their opinions on social media or in interviews.

So why is it still so difficult to get products right, and actually have a market for them?

The fundamental problem is that most of the masses of customer data companies create are structured to show correlations: this customer looks like that one, or 68% of customers say they prefer version A to version B. While it’s exciting to find patterns in the numbers, they don’t mean that one thing actually caused another. And though it’s no surprise that correlation isn’t causality, Clayton Christensen and his team suspect that most managers have grown comfortable basing decisions on correlations.

Why is this misguided? Consider my own case. I am 36 years old. I am 1.74m tall. My shoe size is 43. My wife and I have one child. I drive a Volvo XC60 and go to work by train. I have a lot of characteristics, but none of them has caused me to go out and buy my new Salomon trail running shoes. My reasons for buying these shoes are much more specific. I bought them because I need shoes to run in the mountains, and my previous pair no longer gave enough support after too many kilometers on them. Marketers who collect demographic or psychographic information about me, and look for correlations with other buyer segments, are not going to capture those reasons.

After decades of watching great companies fail, Christensen and his team have come to the conclusion that the focus on correlation—and on knowing more and more about customers—is taking companies in the wrong direction. What they really need to home in on is the progress that the customer is trying to make in a given circumstance—what the customer hopes to accomplish. This is what he has come to call the job to be done.

We all have many jobs to be done in our lives. Some are little (pass the time while waiting in line); some are big (find a more fulfilling career). Some surface unpredictably (dress for an out-of-town business meeting after the airline lost my suitcase); some regularly (pack a healthful lunch for my son to take to school). When we buy a product, we essentially “hire” it to help us do a job. If it does the job well, the next time we’re confronted with the same job, we tend to hire that product again. And if it does a crummy job, we “fire” it and look for an alternative. (I am using the word “product” here as shorthand for any solution that companies can sell; of course, the full set of “candidates” we consider hiring can often go well beyond just offerings from companies.)

Jobs-to-be-done theory transforms our understanding of customer choice in a way that no amount of data ever could because it gets at the causal driver behind a purchase.

Related articles:
Stop Wasting Money on FOMO Technology Innovation Projects
Project ≠ Product ≠ Business ≠ Company

If you didn't have the opportunity to listen to Prof. Clayton Christensen present the jobs to be done concept, or to read his latest book "Competing Against Luck", please invest a few minutes in listening to his milkshake example story.

So what has this to do with user stories?

In the Agile world we make the same mistake! We focus on personas and/or roles instead of the job that needs to be done. Another problem with user stories is that they contain too many assumptions and don’t acknowledge causality. When a task is put in the format of a user story ( As a [persona/role], I want to [action], so that [outcome/benefit] ), there’s no room to ask ‘why’; you’re essentially locked into a particular sequence with no context. Alan Klement puts it very nicely in an image.

Job stories slightly revise the format to be less prescriptive of a user action, and thereby give more meaningful information for the designer and developer to build for the user’s expected outcome.

Here are a few examples of the same feature story in user story and job story format:

User Story: As a subject matter expert, I want a badge on my profile when I am a top poster, so people know I am a top poster.

Job Story: When I am one of the top posters for a topic I want it to show on my profile so that people know that I am an expert in a subject.

User Story: As a visitor, I want to see the number of posts in a topic for another user, so I can see where their experience lies.

Job Story: When I visit someone's profile page I want to see how many posts they have in each topic so that I have an understanding of where they have the most knowledge.

User Story: As a tax specialist who has used the application multiple times, I should get an alert to contribute.

Job Story: When I have used the application multiple times I get nudged to contribute so that I am encouraged to contribute.

There is a very slight but meaningful difference between the two. By removing “As a __ ” from the user story, we remove any sort of biases that the team might have for that persona. Personas create assumptions about the users that might not be validated.

In my experience, user stories have a tendency to be easily manipulated into proposing a solution rather than explaining an expected outcome for that particular user. In particular, I’ve found people leave off the “so that __” in a user story with the feeling that it is optional. This leaves out the benefit that the user would get from the new functionality.

In a job story we replace “As a __ ” with “When __ ”. This gives the team more context for the user’s situation and allows us to share his or her viewpoint. Next, the “I want to __” is transformed into situational motivation in the job story, as opposed to a prescriptive solution for a persona in the user story.
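The mechanical difference between the two templates can be made concrete with a pair of tiny formatting helpers (a sketch only; the function names are mine, not an established API):

```python
def user_story(persona: str, action: str, outcome: str) -> str:
    """Classic user story template: leads with a persona and a prescribed action."""
    return f"As a {persona}, I want to {action}, so that {outcome}."

def job_story(situation: str, motivation: str, outcome: str) -> str:
    """Job story template: leads with the situation, and replaces the
    prescribed action with the user's situational motivation."""
    return f"When {situation}, I want to {motivation}, so that {outcome}."
```

Both templates have three slots; only the first slot changes meaning, from who the user is to the circumstance the user is in, which is exactly where the extra context comes from.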

Because the differences in wording are negligible, it is an easy transition to shift from writing user stories to job stories. By placing the user’s situation upfront, you have a better understanding of how it feels to be in the user’s shoes, as opposed to thinking about a particular persona. This allows for more discussion of the expected outcome and how best to go about achieving that outcome for the user, which in the end will result in a better product.

In a nutshell: Jobs-to-be-done theory transforms our understanding of customer choice in a way that no amount of data ever could because it gets at the causal driver behind a purchase.


Sunday, July 02, 2017

Kanban - Card, Board, System or Method?

Kanban - Card, Board, System or Method?
Kanban is a term that can mean many things. Two people talking about Kanban usually have different levels of understanding and thinking about it. Is it a card, a board, a manufacturing system or a software development method? The short answer is – Kanban is all of them. Kanban means literally both card and board. The word ‘kanban’ has its origin in both Hiragana (Japanese language) and Kanji (Chinese language). In Hiragana, it means a ‘signal card’, while in Kanji it means a ‘sign’ or ‘large visual board’.

Kanban System

Beyond the etymology, ‘Kanban’ as a concept was popularized by Toyota in the 1940s, who took inspiration from how supermarkets stock their shelves and promoted the idea of Just-in-Time manufacturing – using ‘Kanban Cards’ as a signal between two dependent processes to facilitate a smoother, just-in-time flow of parts between them. With time, the idea of Kanban evolved to be more than just a signal card. First in the manufacturing world, and now in the IT industry, a ‘Kanban System’ is characterized by two key features:

1. Visualization of work items – using signal cards, or some other means.
2. A pull-based system, where work is pulled by the next process, based on available capacity, rather than pushed by the previous process.

A team that uses a Kanban System to track and manage the flow of work may often use a board to visualize the items that are in progress. Such a board is called a ‘Kanban Board’. Those practicing Scrum may think of a Scrum board as a simplified version of a Kanban Board. See an example of such a board below.

Kanban Board example
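The two key features above can be sketched in a few lines of code; the class and column names below are illustrative, not taken from any Kanban tool. Work is visualized as items in named columns, and an item only moves forward when the receiving column still has capacity under its WIP limit, i.e. the next step pulls work rather than having work pushed onto it:

```python
from collections import deque

class KanbanBoard:
    """Minimal pull-based board: items enter 'todo' and are pulled forward
    one column at a time, respecting the receiving column's WIP limit."""

    def __init__(self, wip_limits):
        self.wip_limits = wip_limits          # e.g. {"doing": 2}; missing = no limit
        self.columns = {"todo": deque(), "doing": deque(), "done": deque()}

    def add(self, item):
        self.columns["todo"].append(item)

    def pull(self, source, target):
        """Pull one item from source into target; return it, or None if the
        target is at its WIP limit or the source column is empty."""
        limit = self.wip_limits.get(target)
        if limit is not None and len(self.columns[target]) >= limit:
            return None                       # no capacity: never push work downstream
        if not self.columns[source]:
            return None
        item = self.columns[source].popleft()
        self.columns[target].append(item)
        return item
```

Notice that the WIP check happens in `pull` itself: the upstream column has no way to force an item forward, which is exactly the difference between a pull and a push system.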

Kanban Method

‘Kanban Method’ is a term coined and popularized by David J Anderson who, over the past ten years, has evolved the Kanban concept into a management method to improve service delivery and evolve the business to be ‘fit for purpose’. It is not a project management method or a process framework that tells you how to develop software, but a set of principles and practices that help you pursue incremental, evolutionary change in your organization.

In other words, it will not replace your existing process, but evolve it to be a better ‘fit for purpose’ – be it Scrum or waterfall. The six key practices outlined in the Kanban Method include:

1. Visualize your work
2. Limit work-in-progress
3. Measure and manage flow
4. Make policies explicit
5. Implement feedback loops
6. Improve collaboratively, evolve experimentally

While the idea of Kanban has evolved from a signal card to a management method, its emphasis on visualization and pull-based work management has remained intact.


Tuesday, June 27, 2017

Lean Software Development

Lean Software Development
Lean thinking is a business methodology that aims to provide a new way to think about how to organize human activities to deliver more benefits to society and value to individuals while eliminating waste. The term lean thinking was coined by James P. Womack and Daniel T. Jones, in their 1996 book with the same name, to capture the essence of their in-depth study of Toyota’s fabled Toyota Production System. Lean thinking is a new way of thinking about any activity and seeing the waste inadvertently generated by the way the process is organized, focusing on the concepts of:

1. Value,
2. Value streams,
3. Flow,
4. Pull,
5. Perfection.

The aim of lean thinking is to create a lean enterprise, one that sustains growth by aligning customer satisfaction with employee satisfaction, and that offers innovative products or services profitably while minimizing unnecessary over-costs to customers, suppliers and the environment. The basic insight of lean thinking is that if you train every person to identify wasted time and effort in their own job and to better work together to improve processes by eliminating such waste, the resulting enterprise will deliver more value at less expense while developing every employee’s confidence, competence and ability to work with others.

The idea of lean thinking gained popularity in the business world and has evolved in two different directions:

1. Lean thinking converts, who keep seeking to understand how to achieve dynamic gains rather than static efficiencies. For this group of thinkers, lean thinking continuously evolves as they seek to better understand the possibilities of the way opened up by Toyota, having grasped that the aim of continuous improvement is continuous improvement. Lean thinking as such is a movement of practitioners and writers who experiment and learn in different industries and conditions, applying lean thinking to any new activity.

2. Lean manufacturing adepts who have interpreted the term “lean” as a form of operational excellence and have turned to company programs aimed at taking costs out of processes. Lean activities are used to improve processes without ever challenging the underlying thinking, with powerful low-hanging fruit results but little hope of transforming the enterprise as a whole. This “corporate lean” approach is fundamentally opposed to the ideals of lean thinking, but has been taken up by a great number of large businesses seeking to cut their costs without challenging their fundamental management assumptions.

The essence of [the Toyota system] is that each individual employee is given the opportunity to find problems in his own way of working, to solve them, and to make improvements.
The above quote underscores a vital point because over the years there have been some ostensibly ‘lean’ promoters that reduced lean thinking to a mechanistic, superficial level of management tools such as Kanban and queue management. These derivative descriptions ignore the central message of the Toyota experts who stress that the essence of successful lean thinking is “building people, then building products” and a culture of “challenge the status quo” via continuous improvement.

Lean software development is a translation of lean thinking to the software development domain. The term lean software development originated in a book by the same name, written by Mary Poppendieck and Tom Poppendieck. The book restates traditional lean principles, as well as a set of 22 tools, and compares those tools to corresponding agile practices.

Lean software development can be summarized by the following nine principles, very close in concept to lean manufacturing principles:

1. Optimize the Whole
2. Eliminate Waste
3. Build Quality In
4. Deliver Fast
5. Defer Commitment
6. Create Knowledge
7. Empower People
8. Focus on Flow (of Value)
9. Continually Improve

Optimize the whole

Optimize the whole means:
- to focus on the realization of business value,
- to take a systemic thought process,
- to take a holistic view.

Taking a systemic view means that one must look to the system within which people operate to understand why people are achieving the results they are. Changing the system will help improve their results. Effective results require good people and good systems.

The larger the system, the more organizations that are involved in its development and the more parts are developed by different teams, the greater the importance of having well defined relationships between different vendors, in order to produce a system with smoothly interacting components. During a longer period of development, a stronger subcontractor network is far more beneficial than short-term profit optimizing, which does not enable win-win relationships.

Systems thinking implies taking a holistic view, since systems are more than the sum of their parts. This is one of the big differentiators of Lean and Agile. One must understand that the development team is only a part of the process. Many of their challenges are due to upstream problems. If we can see the whole, we can more easily change the behavior of those upstream of the development team.

Eliminate waste

Eliminate waste by removing delays, only working on things of value that you know how to achieve, and only starting work you know you can complete. We must remove delays in our workflow because these literally create additional work to be done: redoing requirements, finding errors late (while the cost of fixing a bug often stays the same, the cost of finding bugs late increases dramatically), and integration errors (caused by the delays between teams getting out of sync and the problems being discovered in the integration process).

Lean philosophy regards everything not adding value to the customer as waste (muda). Such waste may include:

1. Building the wrong feature or product
2. Mismanaging the backlog
3. Rework
4. Unnecessarily complex solutions
5. Extraneous cognitive load
6. Psychological distress
7. Waiting/multitasking
8. Knowledge loss
9. Ineffective communication.

In order to eliminate waste, one should be able to recognize it. If some activity could be bypassed or the result could be achieved without it, it is waste. Partially done coding eventually abandoned during the development process is waste. Extra processes like paperwork and features not often used by customers are waste. Switching people between tasks is waste. Waiting for other activities, teams, or processes is waste. Motion required to complete work is waste. Defects and lower quality are waste. Managerial overhead not producing real value is waste. A value stream mapping technique is used to identify waste. The second step is to point out the sources of waste and eliminate them. Waste removal should take place iteratively, until even seemingly essential processes and procedures are eliminated.
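Value stream mapping makes waste visible by separating value-adding time from waiting time for each step. A common summary metric is Process Cycle Efficiency, the fraction of total lead time that actually adds value; here is a minimal sketch (the step names and hours are made up for illustration):

```python
def process_cycle_efficiency(steps):
    """Process Cycle Efficiency from a value stream map:
    value-added time divided by total lead time (value-added + waiting).

    `steps` is a list of (name, value_added_hours, waiting_hours) tuples.
    """
    value_added = sum(v for _, v, _ in steps)
    lead_time = sum(v + w for _, v, w in steps)
    return value_added / lead_time if lead_time else 0.0

# Example value stream: most of the lead time is waiting, i.e. waste.
stream = [
    ("analysis", 4, 40),
    ("development", 16, 24),
    ("testing", 8, 48),
]
```

In this example only 20% of the elapsed time adds value; the remaining 80% is queueing between steps, which is exactly the waste that lean targets first.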

Removing delays

Removing delays requires not working beyond the capacity of your teams as this injects delays into the workflow because people have to switch between projects and are often waiting for other people working on other projects. People often point to the cost of task-switching here, which, while considerable, pales in comparison to the additional work created by the delays themselves.

Build quality in

Build quality in means to avoid errors, not to test them out. Shortening feedback loops is a big help here. In particular, the adoption of acceptance test-driven development is something virtually every team we’ve run across in the last dozen years would benefit from.

The customer needs to have an overall experience of the System. This is the so-called perceived integrity: how it is being advertised, delivered, deployed, accessed, how intuitive its use is, its price and how well it solves problems. Conceptual integrity means that the system’s separate components work well together as a whole with balance between flexibility, maintainability, efficiency, and responsiveness.

This could be achieved by understanding the problem domain and solving it at the same time, not sequentially. The needed information is received in small batches, not in one vast chunk, preferably by face-to-face communication rather than written documentation. The information flow should be constant in both directions, from customer to developers and back, thus avoiding the large, stressful dump of information after a long development in isolation.

One of the healthy ways towards an integral architecture is refactoring. As more features are added to the original code base, it becomes harder to add further improvements. Refactoring is about keeping simplicity, clarity, and a minimum number of features in the code. Repetitions in the code are signs of bad code design and should be avoided.

The complete and automated building process should be accompanied by a complete and automated suite of developer and customer tests, having the same versioning, synchronization and semantics as the current state of the System. At the end the integrity should be verified with thorough testing, thus ensuring the System does what the customer expects it to. Automated tests are also considered part of the production process, and therefore if they do not add value they should be considered waste. Automated testing should not be a goal, but rather a means to an end, specifically the reduction of defects.

Deliver fast

Deliver fast is essentially the same as time to market.  Not a new concept.  But Lean adds the insight that if we focus on shortening the time of delivery by removing delays and focusing on quality, our productivity goes up and costs come down.  Quick delivery also allows for quick feedback and can help avoid building features that aren’t needed.

In the era of rapid technology evolution, it is not the biggest that survives, but the fastest. The sooner the end product is delivered without major defects, the sooner feedback can be received and incorporated into the next iteration. The shorter the iterations, the better the learning and communication within the team. With speed, decisions can be delayed. Speed ensures fulfilling the customer's present needs, not what they required yesterday. This gives them the opportunity to delay making up their minds about what they really require until they gain better knowledge.

Customers value rapid delivery of a quality product. The just-in-time production ideology can be applied to software development, recognizing its specific requirements and environment. This is achieved by presenting the needed result and letting the team organize itself and divide the tasks for accomplishing that result for a specific iteration. At the beginning, the customer provides the needed input. This can be presented simply on small cards or stories; the developers estimate the time needed to implement each card. Thus the work organization changes into a self-pulling system: each morning during a stand-up meeting, each member of the team reviews what was done yesterday, what is to be done today and tomorrow, and prompts for any inputs needed from colleagues or the customer. This requires transparency of the process, which is also beneficial for team communication.

Defer commitment

Defer commitment means to make decisions when you have the right information yet always attend to the cost/risk of making decisions too early or too late.  Too many decisions are made too early. Deferring commitment requires taking steps so that delaying the commitment does not incur more risk while achieving better decisions when more information is available later.

As software development is always associated with some uncertainty, better results should be achieved with an options-based approach, delaying decisions as much as possible until they can be made based on facts and not on uncertain assumptions and predictions. The more complex a system is, the more capacity for change should be built into it, thus enabling the delay of important and crucial commitments.

The iterative approach promotes this principle – the ability to adapt to changes and correct mistakes, which might be very costly if discovered after the release of the system. An agile software development approach can move the building of options earlier for customers, thus delaying certain crucial decisions until customers have realized their needs better. This also allows later adaptation to changes and the prevention of costly earlier technology-bounded decisions. This does not mean that no planning should be involved – on the contrary, planning activities should be concentrated on the different options and adapting to the current situation, as well as clarifying confusing situations by establishing patterns for rapid action. Evaluating different options is effective as soon as it is realized that they are not free, but provide the needed flexibility for late decision making.

Create knowledge

Create knowledge means to create, capture, update, and put to use the knowledge learned while building the product. Measure cycle time, work in progress, and which work items actually deliver value.
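Cycle time, the first metric named above, is simply the elapsed time between starting and finishing a work item. A minimal sketch with made-up work items and dates:

```python
from datetime import date

# Hypothetical work items: (id, started, finished)
items = [
    ("PBI-1", date(2017, 8, 1), date(2017, 8, 4)),
    ("PBI-2", date(2017, 8, 2), date(2017, 8, 9)),
    ("PBI-3", date(2017, 8, 7), date(2017, 8, 8)),
]

# Cycle time per item: days from start to finish.
cycle_times = [(finished - started).days for _, started, finished in items]
avg_cycle_time = sum(cycle_times) / len(cycle_times)
print(f"average cycle time: {avg_cycle_time:.1f} days")
```

Tracking this number over time is one concrete way of capturing and using the knowledge the team generates.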

Empower people

Empower people to make the decisions they are best placed to make. Teams must, however, be given clear boundaries within which to work. Empowerment also means recognizing the psychology of people: people are tribal and sensitive (and often averse) to significant change. People also tend to identify with their work, so changing roles can be emotionally difficult for them.
The lean approach follows the Agile principle of "find good people and let them do their own job": encourage progress, catch errors, and remove impediments, but do not micro-manage.

Another mistaken belief is the treatment of people as resources. People may be resources on a statistical data sheet, but in software development, as in any organizational business, people need more than a list of tasks and the assurance that they will not be disturbed while completing them. People need motivation and a higher purpose to work for, a purpose within reachable reality, with the assurance that the team may choose its own commitments. Developers should be given access to the customer; the team leader should provide support and help in difficult situations, and ensure that skepticism does not ruin the team's spirit.

Focus on flow 

Focus on the flow of value. We accomplish this by managing our queues so that work flows smoothly from concept to consumption. Managing work in progress (WIP) is an effective way of doing this. In particular, we must pay attention to delays, which literally increase the amount of work to be done.
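The relationship between WIP and flow can be made concrete with Little's Law, which says that average time in the system equals average WIP divided by throughput. A tiny sketch with made-up numbers:

```python
def average_cycle_time(avg_wip: float, throughput: float) -> float:
    """Little's Law: average time in system = average WIP / throughput."""
    return avg_wip / throughput

# Hypothetical team: 12 items in progress, finishing 4 items per week,
# so each item spends about 3 weeks in the system on average.
weeks = average_cycle_time(avg_wip=12, throughput=4)
print(f"{weeks} weeks")
```

The law cuts both ways: with throughput fixed, every extra in-progress item directly lengthens how long everything takes, which is why limiting WIP is such an effective lever.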

Continually improve

Continually improve with a scientific method.  While software development appears complex, that is often because many of us have taken a simplistic approach.

Software development is a continuous learning process built on iterations of writing code. Software design is a problem-solving process involving the developers writing the code and what they learn from it. Software value is measured by fitness for use, not by conformance to requirements. Instead of adding more documentation or detailed planning, different ideas can be tried by writing code and building. Gathering user requirements can be simplified by presenting screens to the end users and getting their input. The accumulation of defects is prevented by running tests as soon as the code is written. The learning process is sped up by short iteration cycles, each coupled with refactoring and integration testing.
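"Running tests as soon as the code is written" can be as lightweight as a unit test living next to the function it covers. A minimal sketch using Python's standard `unittest` module; the `parse_amount` function is a hypothetical example:

```python
import unittest


def parse_amount(text: str) -> float:
    """Tiny function under test, written together with its tests."""
    return float(text.replace(",", ""))


class ParseAmountTest(unittest.TestCase):
    def test_plain_number(self):
        self.assertEqual(parse_amount("42"), 42.0)

    def test_thousands_separator(self):
        self.assertEqual(parse_amount("1,250.50"), 1250.5)
```

Tests written this close to the code catch defects in minutes rather than at the end of the iteration, which is exactly the feedback loop this principle asks for.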

Increasing feedback via short feedback sessions with customers helps when determining the current phase of development and adjusting efforts for future improvements. During those short sessions, both the customer representatives and the development team learn more about the domain problem and explore possible solutions for further development. The customers thus understand their needs better, based on the existing results of the development effort, and the developers learn how to satisfy those needs better. Another idea for communicating and learning with the customer is set-based development: instead of communicating possible solutions, you communicate the constraints the future solution must satisfy, letting the solution emerge through dialogue with the customer.
