Tuesday, January 26, 2016

Story Point Estimation Scale

Agile Estimating and Planning
Tomas Gutierrez, a partner at Scalable Path, gave a detailed description of the story point estimation scale used at his company while answering a question on Quora. I think their approach has great value for any team thinking about improving its own estimation process. It is fast, simple, and meaningful, and you can take it as a base for creating your own scale definition.

Like many teams, they use story (or agile) points to assign a common definition to the effort required to complete tasks/stories. Their exponential complexity scale is based on the modified Fibonacci sequence: 0, 1, 2, 3, 5, 8, 13, 21. This definition of complexity should be shared by the whole team, from developers and product owners to executives and anyone else who’d like to understand the nuances and complexities of creating something with this team. The scale gives you, your team, and your organization visibility into timelines, complexity, budget, and staffing.

Here is how they interpret story points in their projects and decouple effort from hours.

0 – Very quick to deliver and no complexity; on the order of minutes
- One should be able to deliver many 0’s in a day
- I know exactly what needs to be done, and it’s going to take me very little time
- Example: Change color in CSS, fix a simple query

1 – Quick to deliver and minimal complexity; on the order of an hour+
- One should be able to deliver a handful of 1’s in a day
- I know exactly what needs to be done, and it’s going to take me little time
- Example: add a field to a form

2 – Quick to deliver and some complexity; on the order of multiple hours/half-day+
- One should be able to deliver one 2 comfortably in a day
- I mostly know what needs to be done, where improvements/changes need to be implemented, and it’s going to take me some time
- Example: Add a parameter to form, validation, storage

3 – Moderate time to deliver, moderate complexity, and possibly some uncertainty/unknowns
- On the order of about a day or more to deliver
- I have a good idea what needs to be done, and it’s going to take me a bit of time
- Example: Migrate somewhat complex static CSS into a CSS pre-processor

5 – Longer time to deliver, high complexity, and likely unknowns
- On the order of about a week or more to deliver
- I know what needs to be done at a high level, but there is a good amount of work due to complexity/amount of development, and there are big unknowns we’ll discover as we get into the work.
- Example: Integrate with third-party API for pushing/pulling data, and link to user profiles in the platform

8 – Longer time to deliver, high complexity, and likely unknowns
- On the order of a couple weeks+
- I understand the concept and the goals, but it will take a while to deliver due to the amount of work, complexity, and unknowns
- If we have an 8, we should break it into smaller tasks/issues with smaller point values and minimize the complexity
- This might require a Spike to architect/remove uncertainty or be created as an epic with more granular stories within it
- Example: Overhaul the layout/HTML/CSS/JS of a web application

13 – Long time to deliver, high complexity, many critical unknowns
- On the order of many weeks/month
- Similar to an 8; this should definitely be an epic and requires discussions around how to accomplish
- Example: Migrate application from the outdated data store to new DB technology and ORM

21 – You’re doing this wrong…
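As an illustration of how such a shared scale might be made executable (this sketch is mine, not part of the Quora answer; all names are hypothetical), the definitions above can be captured as a lookup table with a helper that rejects estimates that are off the scale:

```python
# Hypothetical sketch of the modified Fibonacci scale as a lookup table.
SCALE = {
    0: "minutes; no complexity",
    1: "about an hour; minimal complexity",
    2: "hours to half a day; some complexity",
    3: "a day or more; moderate complexity, some unknowns",
    5: "a week or more; high complexity, likely unknowns",
    8: "a couple of weeks; break into smaller stories",
    13: "weeks to a month; should be an epic",
}

def validate_estimate(points: int) -> str:
    """Return the shared definition for an estimate, or fail loudly."""
    if points >= 21:
        raise ValueError("21+: you're doing this wrong - split the story")
    if points not in SCALE:
        raise ValueError(f"{points} is not on the scale {sorted(SCALE)}")
    return SCALE[points]
```

The point of the helper is the same as the point of the scale itself: everyone estimates against the same small, shared vocabulary, and anything outside it forces a conversation.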

As you can see, this is not clear-cut and leaves much room for interpretation. Estimating software development is difficult, and there are many factors to consider: complexity given the existing architecture, team availability, business priorities, unforeseen third-party complexity, use of CI (Continuous Integration), automated testing, and so on. What teams should strive for is a culture where there is a good grasp of the solution, and where everyone agrees on definitions for the level of effort required to deliver each piece of functionality, task, bug fix, etc.

Read more…

Monday, January 25, 2016

Enterprise Scrum and Improvement Cycles

As you have probably noticed in previous posts, one of my main interests is leveraging Agile and Scrum in enterprises to improve the way they work, letting teams have more fun and get better results at the same time.

No surprise then that I have a Google Alert on "Enterprise Scrum" and that the website of Mike Beedle pops up now and then. Mike Beedle is the author of the first Scrum book (with Ken Schwaber), the author of the first Scrum paper published (with Jeff Sutherland, Ken Schwaber, Martine Devos and Yonat Sharon), and co-author of the Agile Manifesto. He is also the author of the upcoming Enterprise Scrum book. It is not finished yet, but you can read the executive summary on his website.

I will read the whole book when it comes out, and until then I will restrain myself from giving my opinion. But one thing I picked up from the summary and I really liked is his implementation of the concept "Improvement Cycles".

Every Improvement Cycle (a Sprint in Scrum) has a PE3R structure: planning, execution, review, retrospective, and refinement.

Planning: Plan on starting or continuing with an activity (provided you passed a DOR - definition of ready for the activities)

Execution: Execute and get things DONE (according to a DOD - definition of done, for the activities)

Review: Inspect and Adapt the results obtained on the things DONE, making everything transparent

Retrospective: Inspect and Adapt the team and the process to improve it

Refinement: Refine the Value List (Product Backlog) to change/improve the efforts
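The PE3R structure can be sketched in a few lines of Python (my own illustration, not from the book; all names are hypothetical): each Improvement Cycle simply walks the same five phases for whatever activity it is applied to.

```python
# The five PE3R phases, in order.
PE3R = ["planning", "execution", "review", "retrospective", "refinement"]

def run_cycle(activity, phase_handlers):
    """Run one Improvement Cycle for an activity (e.g. 'recruiting'),
    calling one handler per phase and collecting the outcomes."""
    return {phase: phase_handlers[phase](activity) for phase in PE3R}
```

Running `run_cycle` on different schedules for different activities is exactly the overlapping weekly/quarterly cadence described below.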

You can apply these cycles to any kind of activity: recruiting, marketing, sales, etc. You can decide on the cycle length (weekly, monthly, quarterly) and can run overlapping cycles (weekly and quarterly, for example). I like this way of describing the essence of Scrum because it is easy to relate to for people who work in enterprises. It is that simple.

"Scrum management" brings several advantages:
- better teamwork, building a "cooperative culture"
- results-oriented: focus on getting things DONE
- deadlines: everything we humans do gets done through deadlines, so the Cycle structure helps
- inspects and adapts everything: 1) Review: the work, 2) Retro: the team and the process, 3) Refinement: the vision

Read more…

Monday, January 18, 2016

Top six reasons for failure of Agile projects

VersionOne does an annual "State of Agile Development" survey and publishes the results. You can get a copy by signing up with your email address.

I recently read through the latest (9th) annual survey results, and my interest was piqued by the data on reasons why agile projects have failed. I decided to look at the reports of the last five years and compile an average ranked list.

The top six reasons for failed agile projects according to the survey, averaged over the last five years (with the latest ranking in parentheses):

1. "Lack of experience with agile." (1)

2. "Company philosophy or culture at odds with agile core values." (2)

3. "A broader organizational or communications problem" (6)

4. "External pressure to follow traditional waterfall processes." (4)

5. "Lack of support for cultural transition." (5)

6. "Lack of management support." (3)

What is interesting is that not much in the list has changed over the years. Lack of experience with agile is cited almost every year as the number one or number two reason for failure, and the company philosophy or culture being at odds with agile core values is also mentioned almost every year as the number one or number two.

What I like about this list is that it shows very clearly that in order to succeed with agile you have to transform an organisation, not just a development team. This is especially true in larger organisations. You can compensate for a lack of experience with training, coaching, and mixing internal teams with external developers/testers who have experience with Agile projects. The rest has to come from within the organisation and will require organisational change and resisting pressure from external sources. And this is only possible when you have management support at the highest levels.

Read more…

Wednesday, January 13, 2016

Three must have Technical Competencies for Scrum Teams

One key element of working agile in any organization is technical competence. Why is that? Well, in my opinion, as an organization you can only be agile when you are able to make changes to the product in an easy, fast, and flexible way while maintaining the desired quality.

In that sense, your organizational agility is constrained by your technical agility. In other words, when you are slow in making changes to your product, it doesn’t matter how you structure your teams or your organization, or what framework you adopt: you will be slow to respond to change. Bas Vodde and Craig Larman from LeSS (Large Scale Scrum) wrote extensively about this in the context of the LeSS framework and technical excellence. I fully agree with them in a Large Scale Scrum environment, but I am also of the opinion that technical competence is essential for a single Scrum team to be good and to get the benefits of Scrum (or any other Agile framework).

Luckily, there are a few well-established agile engineering practices that can help a team keep its work at high quality and in a flexible state. The three practices that I think are essential for any agile team to master are the following:

- Continuous Integration
- Unit Tests
- Test Automation

Continuous Integration

In one of his key articles, Martin Fowler writes the following:
Continuous Integration is a software development practice where members of a team integrate their work frequently, usually each person integrates at least daily - leading to multiple integrations per day. Each integration is verified by an automated build (including test) to detect integration errors as quickly as possible. Many teams find that this approach leads to significantly reduced integration problems and allows a team to develop cohesive software more rapidly.
Based on the above definition and their experience, Bas Vodde and Craig Larman define Continuous Integration as:

- a developer practice…
- to keep a working system
- by small changes
- growing the system
- by integrating at least daily
- on the mainline
- supported by a CI system
- with lots of automated tests
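
The "verified by an automated build (including test)" part of the definition can be sketched as a minimal gate (hypothetical code of my own, not taken from any CI tool): an integration only reaches the mainline when the build succeeds and every automated test passes.

```python
def ci_gate(build, tests):
    """Accept an integration only if the build succeeds and every
    automated test passes; otherwise the mainline stays untouched.
    `build` is a callable returning True on success; `tests` is a
    list of callables, each returning True on pass."""
    if not build():
        return "rejected: build failed"
    failed = [t.__name__ for t in tests if not t()]
    if failed:
        return f"rejected: tests failed: {failed}"
    return "integrated"
```

The tooling around this gate can get arbitrarily sophisticated, but the discipline it encodes is the practice itself: small changes, integrated at least daily, each one verified before it lands.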

I do not want to go into detail about the items above, since the folks from LeSS have already done an excellent job of that. But one thing I would like to point out: discussions about CI are all too often about tools and automation. Tools are important, but CI is, in essence, a developer practice. Owen Rogers, one of the original creators of CruiseControl.NET, writes in one of his articles:
Continuous integration is a practice – it is about what people do, not about what tools they use. As a project starts to scale, it is easy to be deceived into thinking that the team is practicing continuous integration just because all of the tools are set up and running. If developers do not have the discipline to integrate their changes on a regular basis or to maintain the integration environment in a good working order they are not practicing continuous integration. Full stop.

Unit Tests

Unit tests are software programs written to exercise other software programs (called the Code Under Test, or production code) with specific preconditions, and to verify the expected behaviors of the code under test. Unit tests are usually written in the same programming language as their code under test.

Each unit test should be small and test only a limited piece of code functionality. Test cases are often grouped into test groups or test suites. There are many open source unit test frameworks (link). Unit tests should run very fast: typically, hundreds of unit test cases run within a few seconds.

The purpose of unit testing is not finding bugs. A unit test is a specification of the expected behaviors of the code under test, and the code under test is the implementation of those expected behaviors. So the unit tests and the code under test check the correctness of each other, and protect each other. Later, when someone changes the code under test in a way that alters the behavior expected by the original author, the test will fail. If your code is covered by reasonable unit tests, you can maintain it without breaking existing features. That’s why Michael Feathers defines legacy code in his book as code without unit tests.
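To make the specification idea concrete, here is a small illustrative example using Python's built-in unittest module (the code under test is hypothetical): the tests pin down the expected behaviors, so a later change that breaks one of them fails the suite.

```python
import unittest

# Code under test: a deliberately small piece of production code.
def apply_discount(price, percent):
    """Reduce price by percent; rejects percentages outside 0-100."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (100 - percent) / 100, 2)

# The unit tests act as a specification of the expected behaviors.
class ApplyDiscountTest(unittest.TestCase):
    def test_regular_discount(self):
        self.assertEqual(apply_discount(200.0, 25), 150.0)

    def test_zero_discount_keeps_price(self):
        self.assertEqual(apply_discount(99.99, 0), 99.99)

    def test_invalid_percent_is_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)
```

Run with `python -m unittest`. If someone later changes the rounding or the valid range, the suite fails immediately, which is exactly the protection described above.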

Test Automation

Agile developers emphasize the importance of automated tests. With short cycles, manual regression testing is nearly impossible. Does that mean there is no manual testing at all? No. Some manual testing is still recommended, though such testing differs from the traditional script-based manual testing. Elisabeth Hendrickson, the author of the mini-book Exploratory Testing in an Agile Context, dares to state that:
I do think that if you can write a manual script for a test, you can automate it.
Teams often claim “It is impossible to automate tests related to a lost network connection” or “You can’t automate tests related to hardware failure”. In most cases the answer is “No, it is not impossible” or “Yes, you can.”

It may be difficult to automate a test in exactly the same way as it would be carried out manually. For example, it is nearly impossible to remove the network cable automatically in a connection-lost test case. Therefore, the automated test is usually done in a different way. Instead of the cable being physically detached, the automated test instructs the driver to respond as if the cable were removed.
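That approach can be sketched as follows (illustrative Python of my own; all class and method names are hypothetical): instead of detaching a cable, the test substitutes a fake driver that reports the link as down, and verifies that the code under test reacts correctly.

```python
class FakeNetworkDriver:
    """Test double for the real driver: the test flips `connected`
    instead of physically detaching a cable."""
    def __init__(self):
        self.connected = True

    def send(self, payload):
        if not self.connected:
            raise ConnectionError("link down")
        return len(payload)

class Client:
    """Code under test: falls back to a local queue when the link drops."""
    def __init__(self, driver):
        self.driver = driver
        self.queue = []

    def publish(self, payload):
        try:
            self.driver.send(payload)
            return "sent"
        except ConnectionError:
            self.queue.append(payload)  # keep for a later retry
            return "queued"

def test_lost_connection_is_handled():
    driver = FakeNetworkDriver()
    client = Client(driver)
    assert client.publish(b"hello") == "sent"
    driver.connected = False  # simulate the cable being removed
    assert client.publish(b"world") == "queued"
    assert client.queue == [b"world"]
```

The test exercises exactly the behavior a manual tester would check by pulling the cable, but it runs in milliseconds and can run every iteration.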

Is automating all tests worth it? According to Hendrickson:
If it’s a test that’s important enough to script, and execute, it’s important enough to automate.
Why is this? Iterative and incremental development implies that code is not frozen at the end of an iteration; it has the potential to change every iteration. Manual regression testing would therefore mean rerunning most of the manual tests – every iteration. Automating the tests pays back quickly. Automating all tests might not be worthwhile, or even possible, but most of them are.

Read more…