Saturday, December 24, 2016

Essential rules for scaling Agile

Based on my experiences with scaling agile in different contexts, I am of the opinion that the following guiding rules serve me well.

1. Do not scale; in most cases it is not necessary. Create well-defined product boundaries with clear interfaces, and most scaling efforts become unnecessary.

2. Do not multi-site. It makes simple things complex, and hard things even harder.

3. You cannot scale what you do not have, i.e. when you have no well-functioning Scrum teams in your organization you should not talk about scaling Scrum.

4. Think products, not projects. This shift in thinking will help organizations a lot in making better decisions.

5. Continuous improvement. Never stop learning, and do not be afraid to stop what is not working.

6. Technical excellence is important for agile teams, but for scaling it is essential.

7. Take a modular approach, there is no one size fits all framework.

8. You need top-down support, i.e. management must be willing to change its own way of working too.

9. Change the system, the culture and people's behavior will follow.

10. Tackle one product (group) at a time.

11. Feature teams, not component teams.

12. Empower your Product Owner (or similar role) to make the decisions that role requires.

13. Building is the easy part. You should think about operating your products as well. DevOps is the way to go.

Read more…

Friday, December 02, 2016

Outsourcing Technical Competence Is a Very Bad Idea

I have written about technical competence in the context of Software Engineering Practices. This article sheds light on a different aspect of technical competence: outsourcing it.

A number of companies I have worked for have started a large project based on a technology they are not competent in, or are even completely unfamiliar with. This fits very nicely with the outsourcing strategy most of them have regarding IT. But does this make sense?

In my opinion, having technical competence (and excellence) in your organization has never been as crucial as it is today. Creating a technology strategy that allows you to not only deal with but take advantage of the increasingly rapid pace of change separates the successful organizations from the obsolete ones.

Here are some observations I have made when an organization is not competent in a technology that is used for a big initiative:

> Your IT architecture team is dependent on external consultants for creating an architecture. You may be lucky with your external consultants (honest, competent, independent), but I have seen the opposite more often (not competent, driven by sales, pushing vendor lock-in). Unfortunately, when you have no competence in your organization you have no way of judging this before it is too late.

> The cost of implementing new features (your business requirements) is determined by external consultants and/or vendors, and you have no way of judging whether it is reasonable or not.

> The decision whether certain new features are possible or not is made by external consultants and/or vendors, and you have no way of knowing whether these decisions are justified.

> You have no healthy team atmosphere during the project. Internal employees always have to ask external ones for their opinion and input. This creates a very strange team dynamic in which the externals make all the decisions, directly or indirectly.

> You are not able to support the implemented solution after go-live without help from external consultants and/or vendors. This makes you very dependent, even after the project is finished.

> It is very hard to switch vendor or implementation partner, because all the knowledge of the solution sits with them. When you want to switch because you are not happy, you have to work for a while with two different companies in parallel. Besides the cost, this creates an even worse team dynamic.

> Your best external people can leave the project on a whim because you have no influence over keeping them there. Your supplier's priorities may not be yours, or people may simply leave your supplier.

In general I am of the opinion that when a company decides on a certain technology for a big initiative, the first thing it should do is hire two or three of the best people it can get, make them internal employees, and give them key positions in the project setup (architect, QA, technical project lead).

Leverage their experience to avoid beginner mistakes, get second opinions on cost and feasibility estimations, and get a judgment of the skills of external consultants. This will put your organization in the driver's seat instead of being driven by others.

Yes, your headcount goes up, and yes, they do not work for free, but trust me on this one: this is money well invested and far cheaper than not doing it.

In a nutshell: Having technical competence (and excellence) in your organization has never been as crucial as it is today.

PS: Although this was written based on a new technology decision, all of the above is also true for starting a project with a technology you know. When your internal employees are not familiar with modern software engineering practices like Unit Testing, Test Automation, and Continuous Integration, get people on board who are. You will never make a lasting change in your organization by outsourcing these.

Read more…

Friday, September 30, 2016

Getting a Little Better Every Day @ Swisscom CX Day

Next month I am speaking at the Swisscom Customer eXperience Day. I am really looking forward to this event. Katja Leu and Christina Taylor have put together a fresh and interesting format on the topic "Agile".

"Agile" has long been the buzz word in software development for a new people-centered and efficient innovation culture that cuts across all methods. In four sprints we highlight "agile" insights as they relate to the organisation, leadership and culture, environment and collaboration, and measuring success. We will discover together how an agile corporate culture can contribute to lasting success.

The event is invite-only, but you will find the presentations from previous CX Day events in their archive. There are some really interesting talks lined up for this event.

"Work? Question. Think. Learn." by Bastiaan van Roden, Founder @ Nothing Interactive

The way we think about work is stuck in the age of the industrial revolution. Linear, recurring routines have become less suited to producing adequate answers to the challenges of today when we need them. Bastiaan prefers open, authentic participation when it comes to people creating meaningful experiences for other people. It's not about work anymore; it's about curiosity: only constantly asking questions, providing new answers and learning from them will ensure success.

"Culture has a stronger impact than strategy." by Franziska Stebler and Rudolf Gysi, Agile Coaches @ SBB

The people in a company shape its culture. The culture is the shadow of the system and carries more influence than any strategy. For a company to change, it needs to start with the people. So the Agile Coaches train SBB IT employees in agile practices, empowering entire teams to develop new solutions faster. Using the "Iteration Zero" example, Franziska and Rudolf show how they boost the groundwork for product development with their people-centered approach.

"Goodbye boss. Hello trainer." by Heinz Herren, Head of IT, Network & Innovation @ Swisscom

You can see the full force of the butterfly effect in the networked world. When something happens in one corner of the world it is transmitted in the shortest of times to other areas and can trigger huge movements. Agile, people-centered collaboration is the answer to the shortcomings of hierarchical organizations in reacting quickly, flexibly, and diversely to changes. When it comes to management that means moving from rigid control to vibrant, learning organizations. Top and bottom was yesterday. Today it's about being a trainer and making others successful.

"The setting is crucial." by Thomas Bickel (Head of Sport) and Uli Forte (Manager) @ FC Zürich

What do agile forms of collaboration and top football have in common? What is the secret of a top-performing team that works in perfect harmony? What can influence a team positively and negatively in terms of its success? We find out from the perspective of the Head of Sport and the Manager of FC Zürich how important environment and communication are for perfect teamwork.

"Getting a little better every day." by me

Delivery is more important than all processes, frameworks, methods, and tools. But it's not more important than the people involved in the project. Only the team knows what really works. So the team determines the success, not a book, a consultant or a manager. For every race, the team decides which activities from the backlog bring the greatest added value if they are pursued further. Too much guidance restricts any eagerness to experiment and stops you from making mistakes, reducing your ability to learn. Quality is important from the first iteration. But it's not possible without technical excellence. Agility means small steps: getting a little better every day.

"Less is almost always more." by Head of Business Development @ Digitec Galaxus

Agility only works when the entire ecosystem works agilely together – not just some parts of the value chain. It's only when the entire system is operating agilely that its full power is released. It is important that everyone actively participates and can play their part. Focus is vital for this: it is only when we are prepared to give up some things willingly and leave them aside that we can make everything faster and better. So less is almost always more.

CX-Day Panel

How does agile collaboration change management? Is management even necessary in an agile environment? If so, what type of management? How is the new management role defined and what values is it based on? Moderator Carsten Roetz will put these and other questions to the following panel of experts.

If you think such a talk could be of interest to your organization as well, have a look at my speaking page or just contact me.

Henrico was an invited speaker at our Swisscom Customer Experience Day event in October 2016. The theme of the event was agile insights, and how they relate to organization, leadership and culture, environment and collaboration, and measuring success. His talk "Getting a little bit better every day" was well prepared and well received by both organizers and attendees. I can highly recommend him as a speaker for any of your events on topics related to agile and software development in general. - Human Centered Design Consultant @ Swisscom

Read more…

Wednesday, June 08, 2016

Antifragile @ Global Scrum Gathering Munich

From October 17 to 19 I will be at the Global Scrum Gathering in Munich. The theme of this gathering is "Business Agility: How to Thrive in a Constantly Changing Environment".

Questions that will be addressed are:

> How can executives of large- and mid-sized organizations set up their businesses to adapt and react faster to technological changes and challenges?

> How can they not only enable their teams to build things the right way but, even more importantly, ensure that they build the right things?

> What kind of structures are needed to involve and enable “smart creatives” to develop innovative and valuable products?

> To what extent do classic organizational structures support or contradict business agility?

I will give a talk at the gathering titled "Dealing with Uncertainty: From Agile to Antifragile".

The purpose of my talk will be:

> Gain a better understanding of what uncertainty is

> Understand Black Swan theory

> Understand what antifragile means

> Gain insight into some strategies that can be applied to deal with uncertainty

I am really looking forward to the event. Many smart and experienced people, cool conversations, and new ideas are guaranteed. If you want to meet up at the gathering and have one of those amazing German beers, just contact me.

You can view and download my slide deck here at SlideShare.

If you think such a talk could be of interest to your organization as well, have a look at my speaking page or just contact me.

Read more…

Tuesday, April 12, 2016

Agile engineering practices

Just finished reading the 10th State of Agile Report from VersionOne. I have written before about data from this report. See "Top six reasons for failure of Agile projects".
This time one question caught my attention. Respondents were asked to state which agile techniques they use and could give multiple answers. Because of a project I am currently involved in, I was very curious about the answers regarding the Agile Engineering Practices in use.

The response was as follows:

- Unit Testing (63%)
- Continuous Integration (50%)
- Single Team (integrated dev and testing) (45%)
- Refactoring (37%)
- Test Driven Development (33%)
- Automated Acceptance Testing (28%)
- Continuous Deployment (27%)
- Collective Code Ownership (25%)
- Pair Programming (24%)

What can I say... This is a sad state of affairs. Or, looking at it from the positive side: there is a lot of potential for many agile development teams to step up their game and improve. Scrum is the most-used agile process method according to the survey, and Scrum itself says nothing about engineering practices. Scrum is not limited to software development. But as soon as you use Scrum for software development, solid Agile Engineering Practices are essential to do Scrum right. I have written before on this in my article "Three must have Technical Competencies for Scrum Teams". Some additional thoughts...

Unit Testing

The purpose of unit testing is not finding bugs. A unit test is a specification of the expected behavior of the code under test, and the code under test is the implementation of that expected behavior. So the unit tests and the code under test check each other's correctness and protect each other. Later, when someone changes the code under test in a way that alters the behavior the original author expected, the test will fail. If your code is covered by reasonable unit tests, you can maintain it without breaking existing features. That's why Michael Feathers defines legacy code in his book as code without unit tests. Without unit tests, your refactoring efforts will be a major risk every time you undertake them.
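As a minimal sketch of this idea (the premium-loading function and its rules are invented for illustration, not taken from any real project), a unit test that captures the expected behavior of a small piece of code could look like this:

```python
# Minimal sketch: unit tests as an executable specification.
# The premium-loading rules below are invented for illustration only.
import pytest


def gross_premium(net_premium: float, loading_rate: float) -> float:
    """Add a percentage loading to a net premium."""
    if net_premium < 0:
        raise ValueError("net premium cannot be negative")
    return net_premium * (1.0 + loading_rate)


def test_gross_premium_adds_loading():
    # Specification: a 25% loading on 100.0 yields 125.0.
    assert gross_premium(100.0, 0.25) == 125.0


def test_gross_premium_rejects_negative_input():
    # Specification: negative premiums are not allowed.
    with pytest.raises(ValueError):
        gross_premium(-1.0, 0.25)
```

If someone later changes `gross_premium` in a way that alters this behavior, the tests fail immediately and the unintended change is caught.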

Continuous Integration

Martin Fowler defines Continuous Integration (CI) in his key article as follows: "Continuous Integration is a software development practice where members of a team integrate their work frequently, usually each person integrates at least daily - leading to multiple integrations per day. Each integration is verified by an automated build (including test) to detect integration errors as quickly as possible. Many teams find that this approach leads to significantly reduced integration problems and allows a team to develop cohesive software more rapidly." You see, without Unit Tests and Test Automation it is impossible to do CI right. And only when you do CI right can you hope to succeed at Continuous Deployment.
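To make that concrete, here is a small sketch of the kind of verification step a CI server could run on every integration; the concrete commands and paths are assumptions for illustration, not a prescription for any particular CI product.

```python
# Sketch of an automated build-verification step for CI.
# The commands ("pytest", the "acceptance" folder) are placeholders; a real
# pipeline would run the project's own build and test commands.
import subprocess
import sys


def run_step(name: str, command: list[str]) -> None:
    print(f"--- {name} ---")
    if subprocess.run(command).returncode != 0:
        # Fail fast so integration errors surface within minutes, not weeks.
        sys.exit(f"{name} failed: the build is broken, fix it before integrating more work.")


if __name__ == "__main__":
    run_step("unit tests", ["pytest", "-q"])
    run_step("acceptance tests", ["pytest", "-q", "acceptance"])
    print("Build verified: this integration is good.")
```

Hooked into a CI server that triggers on every commit, a script like this gives each integration the automated build (including tests) that Fowler describes.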

Single Team

It is hard to grasp that so many agile teams still have not implemented this. In order to deliver high-quality software every iteration, your development and testing should be done by a single team. And not just that: your team should be a Feature Team.

Refactoring

Code should be written to solve the problem known at the time. Teams often become wiser about the problem they are solving, and continuously refactoring and changing the code ensures the code base keeps meeting the most current needs of the business in the most efficient way. In order to guarantee that changes do not break existing functionality, your regression tests should be automated, i.e. unit tests are essential.
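A tiny, invented illustration of that safety net: the regression test below is written against the current behavior, and the refactored version has to keep it green.

```python
# Invented illustration: an automated regression test lets you refactor
# without changing behavior.

def total_order_price(items):
    # Original version: one loop, accumulating by hand.
    total = 0.0
    for price, quantity in items:
        total += price * quantity
    return round(total, 2)


def total_order_price_refactored(items):
    # Refactored version: same behavior, expressed more clearly.
    return round(sum(price * quantity for price, quantity in items), 2)


def test_refactoring_did_not_change_behavior():
    items = [(20.0, 2), (5.0, 3)]
    assert total_order_price(items) == 55.0
    assert total_order_price_refactored(items) == 55.0
```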

Test Driven Development

Test-driven development is a development style that drives the design by tests developed in short cycles of:

1. Write one test,
2. Implement just enough code to make it pass,
3. Refactor the code so it is clean.

Ward Cunningham argues that test-first coding is not testing. Test-first coding is not new; it is nearly as old as programming. It is an analysis technique: we decide what we are programming and what we are not programming, and we decide what answers we expect. Test-first is also a design technique.
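A small invented walk-through of that cycle (the invoice-number rule is made up): the test is written first and states the expected answer, then just enough code is written to make it pass, then the code is cleaned up while the test keeps it honest.

```python
# Invented TDD walk-through: test first, then just enough code, then refactor.

# Step 1 (red): one test that states the next expected behavior.
def test_invoice_number_is_zero_padded_to_six_digits():
    assert format_invoice_number(42) == "INV-000042"


# Step 2 (green): implement just enough code to make the test pass.
def format_invoice_number(number: int) -> str:
    return f"INV-{number:06d}"

# Step 3 (refactor): clean up names and structure; the test above keeps
# guaranteeing that the behavior has not changed.
```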

Automated Acceptance Testing

Also known as Specification by Example. Specification by Example or Acceptance test-driven development (A-TDD) is a collaborative requirements discovery approach where examples and automatable tests are used for specifying requirements—creating executable specifications. These are created with the team, Product Owner, and other stakeholders in requirements workshops. I have written about a successful implementation of this technique within Actuarial Modeling.
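A minimal sketch of how such workshop examples can become executable (the discount rules and figures are invented, not from the article): each row of the agreed example table drives one automated acceptance test.

```python
# Invented sketch: an example table from a requirements workshop used
# directly as automated acceptance tests.
import pytest

# order_total, customer_type -> expected_discount (invented business rules)
EXAMPLES = [
    (100.0, "new", 0.0),
    (100.0, "returning", 5.0),
    (500.0, "returning", 50.0),
]


def discount(order_total: float, customer_type: str) -> float:
    """Toy implementation of the invented discount rules."""
    if customer_type != "returning":
        return 0.0
    rate = 0.10 if order_total >= 500 else 0.05
    return round(order_total * rate, 2)


@pytest.mark.parametrize("order_total,customer_type,expected", EXAMPLES)
def test_discount_matches_agreed_examples(order_total, customer_type, expected):
    assert discount(order_total, customer_type) == expected
```

The table stays readable for the Product Owner and stakeholders, while the same rows run as the automated acceptance suite.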

Continuous Deployment

Continuous delivery is a series of practices designed to ensure that code can be rapidly and safely deployed to production by delivering every change to a production-like environment and ensuring business applications and services function as expected through rigorous automated testing. Since every change is delivered to a staging environment using complete automation, you can have confidence the application can be deployed to production with a push of a button when the business is ready. Continuous deployment is the next step of continuous delivery: Every change that passes the automated tests is deployed to production automatically. Continuous deployment should be the goal of most companies that are not constrained by regulatory or other requirements.
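As a rough sketch of that difference (the commands and the deploy script are placeholders, not a real pipeline): continuous delivery proves every change deployable, continuous deployment removes the final button.

```python
# Placeholder sketch of a deployment gate; "pytest" and "./deploy.sh" stand in
# for whatever your real test suite and deployment mechanism are.
import subprocess
import sys


def pipeline() -> None:
    if subprocess.run(["pytest", "-q"]).returncode != 0:
        sys.exit("Automated tests failed: the change is not released.")
    # Continuous delivery stops here: the change is proven deployable and a
    # human presses the button when the business is ready.
    # Continuous deployment goes one step further and ships automatically:
    subprocess.run(["./deploy.sh", "production"], check=True)
    print("Change deployed to production.")


if __name__ == "__main__":
    pipeline()
```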

Collective Code Ownership

Collective ownership encourages everyone to contribute new ideas to all segments of the project. Any developer can change any line of code to add functionality, fix bugs, improve designs, or refactor. No one person becomes a bottleneck for changes. This is easy to do when you have all your code covered with unit tests and automated acceptance tests.

Pair Programming

Having two developers work on one piece of code, using one keyboard and one monitor. Pairing results in higher-quality output because it greatly reduces wasted time and defects, and it results in high collaboration. It is nothing other than continuous code review. Hence, when implemented, you do not need code reviews before merging your branches, and continuous integration can be done faster.

Those who are familiar with Extreme Programming (XP) will notice that many of these techniques originate from it. Ron Jeffries, in his (highly recommendable) book "The Nature of Software Development", even goes so far as to say that Scrum combined with Agile Engineering Practices is nothing other than XP. I see his point. It is about wording and labeling; the ideas and principles are the same.

I am a very strong believer that without successfully implementing Agile Engineering Practices it is impossible to be agile in software development.

Read more…

Wednesday, April 06, 2016

Questions to ask before developing your new product

A while ago I wrote an article titled "Why are we actually doing this project" in which I discussed a number of questions to ask about your project. They help determine whether a project is actually worth doing or continuing.

Peter Thiel, in his book "Zero to One", goes a step further and lays out seven questions that a startup must answer in order to be successful. When you think about it, you can use the same questions for developing a new product in an existing company. His seven questions are:

1. The Engineering Question: Can you create breakthrough technology instead of incremental improvements?

2. The Timing Question: Is now the right time to start your particular business?

3. The Monopoly Question: Are you starting with a big share of a small market?

4. The People Question: Do you have the right team?

5. The Distribution Question: Do you have a way to not just create but deliver your product?

6. The Durability Question: Will your market position be defensible 10 and 20 years into the future?

7. The Secret Question: Have you identified a unique opportunity that others don't see?

As Thiel puts it, if you don't have good answers to these questions, you will most likely have "bad luck", i.e. fail. "If you nail all seven, you'll master fortune and succeed. Even getting five or six correct might work."

When you need some guidance on how to define and measure project success, just download the Project Success Model by clicking on the image.


The Project Success Model

Read more…

Thursday, March 17, 2016

Next generation Scrum? Or just being Agile?

A while ago I read a blog post by Boris Gloger that interested me. It was titled "From Scrum 1.0 to Scrum 3.0". It made a few good points, and aside from the versioning of Scrum I found myself in agreement with what he wrote.

Today I decided to google Scrum 3.0 a little and found an article from Sebastian Radics on his blog "On the Agile Path". He had the opportunity to join a presentation by Boris Gloger about Scrum 3.0 and Organization 4.0 at an event organized by Immobilienscout24. His post provides a summary of his notes and insights about some of the topics Boris presented.

His summary got me thinking and I decided to add my own notes, comments, and insights to his. My additions appear below in square brackets.

A little bit on the history of Scrum. Scrum was first defined as "a flexible, holistic product development strategy where a development team works as a unit to reach a common goal" as opposed to a "traditional, sequential approach" in 1986 by Hirotaka Takeuchi and Ikujiro Nonaka in the New Product Development Game. The authors described a new approach to commercial product development that would increase speed and flexibility, based on case studies from manufacturing firms in the automotive, photocopier and printer industries. They called this the holistic or rugby approach, as the whole process is performed by one cross-functional team across multiple overlapping phases, where the team "tries to go the distance as a unit, passing the ball back and forth".

In the early 1990s, Ken Schwaber used what would become scrum at his company, Advanced Development Methods, and Jeff Sutherland, with John Scumniotales and Jeff McKenna, developed a similar approach at Easel Corporation, and were the first to refer to it using the single word scrum. In 1995, Sutherland and Schwaber jointly presented a paper describing the Scrum methodology at the Business Object Design and Implementation Workshop held as part of Object-Oriented Programming, Systems, Languages & Applications '95 (OOPSLA '95) in Austin, Texas, its first public presentation. Schwaber and Sutherland collaborated during the following years to merge the above writings, their experiences, and industry best practices into what is now known as Scrum. In 2001, Schwaber worked with Mike Beedle to describe the method in the book Agile Software Development with Scrum.

Let's call the version described in this book for argument's sake Scrum 1.0.

Scrum 1.0

- foundation by e.g. Agile Software Development with Scrum (Ken Schwaber)
- basic meeting artifacts, 3 roles (ScrumMaster as management role, Product Owner and team)
- retrospective was not yet part of it
- Backlog idea, but not yet that established
- focus on delivery
- sprint idea – a common way to think about what we would like to deliver together, but breaks in between sprints
- long Excel-lists with tasks and detailed task estimations

What did we learn?

- breaks between sprints don’t make sense
- role of PO was still a business analyst role
- why 30 days and what does it mean – is it calendar days, what about Christmas
- sprint planning and commitments did not work

Scrum 2.0

- roughly since 2004 – driving question, how could it really work?
- breakthrough for retrospectives at the Scrum Gathering in Vienna … shortly thereafter a commonly used practice
- more advanced ideas about the sprint planning
- PO has to prepare the backlog and user stories
- PO has to know what she wants
- PO became the single wringable neck
- Sprint review pattern … PO decides if the delivered is right or wrong
- created a difficult situation for the PO
- did the team fail when something did not get delivered? (based on waterfall-like thinking … for sure the team failed)
- followed by the PO shouting at the team

What did we learn?

- PO mega busy
- we created a really stressful environment
- things were not really clear
- but many best practices arose
- requirements were articulated using user stories
- dailies – post it moving sessions
- one can build a huge amount of trash following best practices
- Scrum and the process … Scrum as the Silver Bullet
- great selling argumentation for Scrum
- it worked somehow on a methodical level but did not address several problems, e.g. scaling
- approach to use Scrum of Scrum
- co-located teams, e.g. in 2010/11 – huge: 18 teams, 18 coaches, 18 POs … highly stressful, not that much fun … and the organization killed the initiative shortly after the project was delivered
- heavy meeting load for the PO – Daily, SOS, PO-Daily, Review … does not scale
- architecture … topic commonly shared infrastructure – addressed via communities of practice
- team delegation and architects, but slow and often no decisions, leading to the best people leaving the community
- today called guilds
- process, process, process

Scrum 3.0

- ideas collected from the last 2-3 years
- all methods elaborated
- new best practices


Product Owner

- it is not her duty to write stories, it is the team’s responsibility [I fully support the idea that the team should be responsible for creating Product Backlog Items. I am of the opinion, though, that User Stories are used too much and are often not the right format for Backlog Items. See for example my articles "Specification by Example in Actuarial Modelling" and "Product Backlog Stories…"]

- team has close contact with the customer … and developers write stories [100% agree, but why is this so hard? At the last Scrum Breakfast Club I asked the question "Why do Scrum teams have no contact with the customer on your project?" and the responses proved the question to be the right one, because in most projects this contact is indeed missing.]

- PO is responsible for creating and transporting the product vision … the WHY becomes the central question to answer [Why are we doing this? And why are we doing this NOW? are the two questions that should be answered for any Product Backlog Item. See my article "Why are we actually doing this project?" for some more ideas about this topic.]

- team – includes everyone necessary to really build the product/system [Essential for being agile. See my article "Agile Team Organization" for more ideas about this topic.]

PO should have a basic understanding of the architecture, components and technology of the product in order to communicate effectively with the team.

Dailies

- major goal: progress [This was always the goal, but somehow got forgotten. The Scrum Guide is even updated accordingly. "The importance of the Daily Scrum as a planning event is reinforced. Too often it is seen as a status event. Every day, the Development Team should understand how it intends to work together as a self‐organizing team to accomplish the Sprint Goal and create the anticipated Increment by the end of the Sprint. The input to the meeting should be how the team is doing toward meeting the Sprint Goal; the output should be a new or revised plan that optimizes the team’s efforts in meeting the Sprint Goal. To that end, the three questions have been reformulated to emphasize the team over the individual:

o What did I do yesterday that helped the Development Team meet the Sprint Goal?
o What will I do today to help the Development Team meet the Sprint Goal?
o Do I see any impediment that prevents me or the Development Team from meeting the Sprint Goal?]

- everyone shows their progress on the product instead of moving tickets around [I do not think this always makes sense but I will give it a shot and see what happens.]

- Mob programming – all work TOGETHER and show each other the results (pairing at the next level) [Very interesting idea. Hard to explain in a larger company that this could be effective. I have to admit I have no personal experience yet, so this one goes on the list of experiments I still have to try.]

- no more PO dailies

- distributed teams – reduce the amount of necessary communication through an intelligent architecture with clean interfaces and restructure your organization accordingly [Co-location is desirable, but not always realistic in the current world, so dealing with distributed teams is just a fact of life.]

- company example – one product, one team; teams build that product as they see fit, and it is OK if there are differences among products (you can drive some level of standardization using guilds if necessary) [I agree with this partly. You should still keep some things in mind, for example when deciding on the technology for your product. When your whole organization runs on Oracle DBs, it hardly makes sense for you to use MS SQL Server. When you have 30 expert Java developers in-house and most products are being developed in Java, you might not want to use C#. Most product "differences" are at the UI level and can be greatly reduced by making products look the same; UX, however, should be optimized for each product, so those differences between products should be expected and even desired.]

No Meetings

- reviews and dailies are removed or completely changed [This is an option to try in a team that has already worked together for a while; for a newly composed team I would start with doing dailies. To be honest, I actually like dailies when you keep in mind that they should focus on progress and impediments and are NOT a status meeting.]

- cancel all regularly planned meetings [Agree, except for Sprint Review meetings. In larger companies, on a project with stakeholders in upper management, you will need to send an invite rather early if you want them to show up. Keep in mind they are not managing their own calendars…]

- establish communication on different channels e.g. chat [I have learned to like chat again. I despised WhatsApp on my cellphone for private use and was driven nuts by group chats. But for a Scrum team this is a very functional way of communicating.]

- ad hoc sessions to discuss next steps in your product development [This works when you move more to a Kanban style of working and remove the concept of Sprints: just continuously work on, and deliver, new functionality. This is very high on my list of experiments to try.]

- optional meeting attendance [I have implemented this on a few projects already and it can work. It depends on the team and their cooperation/communication. But it is worth a shot in any project. When it works, it improves morale and reduces waste.]

- if someone does not attend, it is his duty to get up to date afterwards. It is a shift of responsibility back to the individual

- pair programming – (Menlo Innovations) – they really established pair programming in a rigorous way [One easy way to "enforce" pair programming is to allow only one story in progress for every two team members. I have found this to work like a charm, and it has the nice benefit of focus.]

One piece flow

- people just work on one story at a time – all together (e.g. using Mob programming) [Could work, but hard to explain to the people paying for the project. I have added it to my experiments list]

- differences really get transparent

No Estimates

- who still needs story point estimations? [I do not :-)]

- it is enough to count the things that get delivered in a given amount of time [Fully agree. Estimations are very rarely useful. For most projects I do, I have to make them at the beginning in order to get a budget (see my article "Agile Budgetting"). After that they do not help productivity.]

- story points were an interesting idea back in 2003, aiming to remove estimation in hours [But that did not really work out. People started translating them back into hours and still use velocity as a performance measure for teams.]
- using Kanban one tries to optimize flow and throughput [I am falling more and more in love with the idea of not using Sprints at all anymore and using Kanban-style throughput management.]

- reduce backlog size [You can start doing this today by building a filter and only displaying stories that are "Ready", i.e. discussed with and understood by the whole team. I do this on all my projects because it helps keep focus and overview. When you have too many backlog items, your priorities are unclear or you need too many Backlog Refinement meetings.]

- PO has to learn to say NO

- best backlog size is 1 [I disagree, because I am not convinced about the Mob programming thing yet. I would like to have enough stories so that each pair in a team has something meaningful to do.]

- communicate and establish that we do one thing at a time and not more … FOCUS [See above. The one thing in focus can consist of multiple Backlog Items.]

No releases

- get it live immediately and receive real customer feedback (it is not management or the PO who decides what works for the customer, it is the customer who decides)

- user stories are no laws but a way to foster communication [As said before, the 3Cs are essential, but they can apply to any Backlog Item, not necessarily a story]

- working with releases created delays – let's work on removing these delays [100% agree. Continuous Deployment is the ultimate goal. But it is really hard; your team needs the skills to do so. See my article "Three must have Technical Competencies for Scrum Teams" for more ideas on this topic]
- embed deployment in the team – DevOps – the team builds it, the team is shipping it


Product development

- no longer with backlogs but using design thinking approaches, hypotheses and data [I disagree here, because hypotheses, data, and design thinking all result in a Product Backlog Item… something that the team has to build and that will then be deployed so that feedback can be collected]

- driven by thinking … how do I get to the needed/right functionality

- design thinking … I don’t really know what to build

- based on assumption, mini prototypes and/or fast and cheap development

- to learn whether we’re moving in the right direction

- learn to think what the user is thinking

- important early link with the real user

- cost estimation? use probability approaches and forecasts based on delivery time and needed scope [See my articles "Agile Budgetting" and "Estimating with Wideband Delphi and Monte Carlo Simulation" for my ideas regarding cost estimations]

- ROI and budget responsibility move to the teams – and POs have to take on this new responsibility [100% agree. A PO cannot decide on priority when the PO does not own the budget]

- measure ROI increase [this is hard to measure, but it starts the right discussions. Besides this KPI I would measure a few other things as well. See my article "Scrum Project Success Metrics" for some more KPIs]

Check your level of agility by watching:

- politics in your company – how many discussions are inward-focused (between departments and hierarchies)

- it’s not about self-organization – that is just an instrument – the real goal is that people behave in a way that is useful for the product being developed. And therefore it is of high importance that it is voluntary.

- the main task for a ScrumMaster – how can I help and guide others to contribute and have fun working on it.

- focus not solely on the process but on the purpose of doing something

Maybe I missed some important points? Please share your thoughts and insights in a comment – they are highly welcome.

Read more…

Wednesday, March 09, 2016

Agile team organization

Most projects I am involved in are of such a size that multiple teams are involved. And even when the project itself consists of one team, many dependencies on other teams/departments exist. In theory this is not an issue, but in practice it usually is, because of how the teams are structured.

Some general patterns that I typically see are:

- Discipline Teams: Programmer Team, UI Designer Team, Tester Team, DBA Team, etc.
- Location Teams: Zurich, Bern, New York, London, etc.
- Architectural Layer Teams: GUI, Middle Tier, Database, Infrastructure, etc.
- Component Teams: Model Component, Computation Component, Configuration Component, etc.

What I unfortunately do not see a lot of are so-called Feature Teams. A feature team is a long-lived, cross-functional, cross-component team that completes many end-to-end customer features one by one (Larman/Vodde).


The characteristics of a feature team are:

- Long-lived: the team stays together so that they can grow into higher performance; they take on new features over time
- Cross-functional and cross-component
- Ideally, co-located
- Work on a complete customer-centric feature, across all components and disciplines (analysis, programming, testing, …)
- Composed of generalizing specialists
- in Scrum, typically 7 ± 2 people

Applying modern engineering practices, like continuous integration, is essential when adopting feature teams. Continuous integration facilitates shared code ownership, which is a necessity when multiple teams work at the same time on the same components.

One common misunderstanding of feature teams is that every member of a feature team needs to know the whole system. This is not the case because

- The team as a whole, not each individual member, requires the skills to implement the entire customer-centric feature. These include component knowledge and functional skills such as test, interaction design, or programming. But within the team, people still specialize… preferably in multiple areas.

- Features are not randomly distributed over the feature teams. The current knowledge and skills of a team are factored into the decision of which team works on which features.

Within a feature team organization, when specialization becomes a constraint… learning happens. Moving away from component and discipline teams is a difficult but necessary step for those who want to adopt an agile approach. In Scrum, for example, you have a team, a ScrumMaster, and a Product Owner. These teams work on customer-centric features that are developed iteratively, and in order to do so, each of them should be a feature team.

There are many advantages to organizing multi-team projects into feature teams:

Impact evaluation: At the end of a sprint, a feature team will have built end-to-end functionality, traversing all levels of the technology stack of the application. This maximizes members’ learning about the product design decisions they made (Do users like the functionality as developed?) and about technical design decisions (How well did this implementation approach work for us?)

Waste Reduction: Handing work from one group or individual to another is wasteful. In the case of a component team, there is the risk that too much or too little functionality will have been developed, that the wrong functionality has been developed, that some of the functionality is no longer needed, and so on.

Communication: Because a feature team includes all skills needed to go from idea to running, tested feature, it ensures that the individuals with those skills communicate at least daily.

Risk Mitigation: The work of a component team is valuable only after it has been integrated into the product by a feature team. The effort to integrate the component team’s work must be estimated by the feature team, whether it will occur in the same sprint during which it is developed (as is best) or in a later sprint. Estimating this type of effort is difficult because it requires the feature team to estimate the integration work without knowing the quality of the component.

Customer Focus: Organizing teams around the delivery of features, rather than around architectural elements or technologies, serves as a constant reminder of Scrum’s focus on delivering features in each sprint.

Of course, there will be occasions when creating a component team is still appropriate, for example when a new capability will be used by multiple teams or when the risk of multiple solutions being developed for the same problem is high. Overall, however, the vast majority of teams on a large project should be feature teams.

Read more…

Monday, February 08, 2016

Specification by Example in Actuarial Modelling

Specification by example is a collaborative approach to defining requirements and business-oriented functional tests for software products based on capturing and illustrating requirements using realistic examples instead of abstract statements. Specification by example is also known as example-driven development, executable requirements, acceptance test-driven development (A-TDD), Agile Acceptance Testing or Test-Driven Requirements. I prefer A-TDD but will use both specification by example and A-TDD.

Examples as a single source of truth

A key aspect of specification by example is creating a single source of truth about required changes or new functionality from all perspectives. When business analysts work on their own documents, software developers maintain their own documentation and testers maintain a separate set of functional tests, software delivery effectiveness is significantly reduced by the need to constantly coordinate and synchronize those different versions of the truth.

With Specification by Example, different roles participate in creating a single source of truth that captures everyone's understanding. Examples are used to provide clarity and precision, so that the same information can serve as a specification, as documentation, and as a business-oriented functional test. Any additional information discovered during development or delivery, such as clarification of functional gaps, missing or incomplete requirements or additional tests, is added to this single source of truth. As there is only one source of truth about the functionality, there is no need for coordination, translation, and interpretation of knowledge inside the delivery cycle.

When applied to required changes, a refined set of examples is effectively a specification and a business-oriented test for acceptance of software functionality. After the change is implemented, specification with examples becomes a document explaining existing functionality. As the validation of such documents is automated, when they are validated frequently, such documents are a reliable source of information on business functionality of underlying software.

Specification by example is very useful to projects with sufficient organizational and domain complexity to cause problems in understanding or communicating requirements from a business domain perspective. It does not apply to purely technical problems or where the key complexity is not in understanding or communicating knowledge. There are documented usages of this approach in domains including investment banking, financial trading, insurance, airline ticket reservation, online gaming and price comparison.

This brings me to my very successful experience with A-TDD within an actuarial modeling project, to be specific, the creation of a new MCEV model for a life insurance company.

MCEV Modelling

The Embedded Value (EV) of a life insurance company is the present value of future profits plus adjusted net asset value. It is a construct from the field of actuarial science which allows insurance companies to be valued. European embedded value (EEV) is a variation of EV which was set up by the CFO Forum which allows for a more formalized method of choosing the parameters and doing the calculations, to enable greater transparency and comparability. Market Consistent Embedded Value (MCEV) is a more generalized methodology, of which EEV is one example.

Depending on how you implement such a model one part of it is the future cash flow generation component. Input for this component is a coded feature vector of an insured person and his/her contract. Based on this information the component computes the expected cash flows for this insured person for t=0 (now) until t=40 (40 years from now). Very simplified you could say that the cashflow component computes premiums, benefits and total savings for each person in any given year. Afterward an ALS component computes Profit/Loss Statements and Balance sheets based on given Assets, Liabilities (the cashflows) and Scenarios (interest rates, market behavior, company behavior etc).

As you might already have noticed, the cashflow component seems to be representable by input data, a model, and expected output data. This is exactly what we did. We created one table containing input examples. We expressed the business rules and the model in tables to make them more comprehensible and to assist in finding missing cases. And we defined an expected output table with one row for each input example. Based on this, the model could be coded in the modeling software of choice and tested automatically.

PersonID | Date of Birth | Gender | Product | Start Date | Current Savings
(example rows 1–6)
Table with Input Examples

PersonID | Premium t0 | Benefits t0 | Savings t0 | ... | Premium t1
(expected values for rows 1–6)
Table with Expected Output

Implementation

So how did we get those tables? Since we used Scrum on this project, we mapped the A-TDD steps onto a Scrum iteration as described by the guys from LeSS.

Discuss in workshop - Before the detailed Sprint Planning, the team, Product Owner, and other stakeholders clarify the requirements collaboratively in a workshop.

Develop in concurrence - Tasks for implementing the tests/requirements are created in the detailed Sprint Planning and implemented during the iteration. All activities happen “at about the same time.”

Deliver for acceptance - The working product increment—the passing acceptance tests—is delivered for acceptance to stakeholders and discussed together in the Sprint Review.

Automation

Successful application of A-TDD on larger scale projects requires frequent validation of software functionality against a large set of examples (tests). In practice, this requires tests based on examples to be automated. A common approach is to automate the tests but keep examples in a form readable and accessible to non-technical and technical team members, keeping the examples as a single source of truth. We did this by using Excel for our Examples and SharePoint for versioning.

This process is supported by most test automation tools, which work with tests divided into two aspects: the specification and the automation layer. The specification of a test, in this case a CSV file, contains the examples, their expected results, and auxiliary descriptions. The automation layer connects the examples to the software system under test. When you combine this with a Continuous Integration tool that runs all tests with each build of your software/model, you will greatly improve the quality and potential delivery speed of your project.
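A minimal sketch of that split, with an invented CSV layout and a placeholder function standing in for the real cashflow model in the modeling software:

```python
# Sketch of the two layers: the CSV file is the specification (the examples),
# this script is the automation layer that feeds them to the model under test.
# The column names and the cashflow function are invented placeholders.
import csv


def projected_premium(gender: str, product: str, current_savings: float, t: int) -> float:
    """Placeholder for the real cashflow model implemented in the modeling software."""
    base = 1200.0 if product == "endowment" else 800.0
    return base if t < 40 else 0.0


def run_examples(path: str) -> list[str]:
    """Compare the model output against every example row; return the failures."""
    failures = []
    with open(path, newline="") as handle:
        for row in csv.DictReader(handle):
            actual = projected_premium(
                row["gender"], row["product"],
                float(row["current_savings"]), int(row["t"]),
            )
            expected = float(row["expected_premium"])
            if abs(actual - expected) > 0.01:
                failures.append(
                    f"Person {row['person_id']}, t={row['t']}: expected {expected}, got {actual}"
                )
    return failures


if __name__ == "__main__":
    for failure in run_examples("cashflow_examples.csv"):
        print(failure)
```

Run as part of the Continuous Integration build, a script along these lines turns the Excel/CSV example tables into a regression test for the whole model.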

Conclusion

At the time I moved on to another project we had not automated the whole process 100%, but we came close. The whole A-TDD process was received very positively by all team members. We got great cross-learning and understanding between model coders, business analysts, actuaries, and DB specialists. Besides that, the next audit will be much easier, since this kind of documentation is up-to-date, clear, and detailed. Regression testing of the model has also become far easier. So I can only recommend starting with Specification by Example when you can express (parts of) your business in tables. It is definitely worth the effort.

Read more…

Friday, February 05, 2016

Why are we actually doing this project?

In a number of projects I have been part of, one of the most important questions was either not asked at all or asked far too late.

Why are we doing this project?

When we got to the answer to this question, it was often incomplete or incorrect. The latter was sometimes even knowingly incorrect, i.e. an organizational/political/personal lie.

I admit my experience is biased, because I typically join projects at a point in time when the project already has some serious issues and my role is to help solve them. But even in well-running projects, the question of why we are doing this project seems rather difficult to answer.

That is why I use a set of eleven questions that together will help you answer the big one.

1) Exactly what problem will this project solve? (value proposition)

2) For whom do we solve that problem? (target market or target users)

3) How big is the opportunity? (market size, potential savings, risk reduction)

4) What alternatives are out there? (competitive landscape or alternative solutions/products/suppliers)

5) Why are we best suited to pursue this? (our differentiator as a company or project team)

6) Why now? (market window and urgency)

7) How will we get this project to go live? (implementation strategy)

8) How will we measure success/make money from this product? (metrics/revenue strategy)

9) What factors are critical to success? (solution requirements, skill requirements, budget)

10) What are the main cost drivers? (people, licenses, hardware, training, ...)

11) Given the above, what’s the recommendation? (go or no-go, continue or stop)

One of the first things I do when I join a project is to try to get answers to these questions.

They will help guide your project, give you and the team focus, and help you in stakeholder discussions.

When things change you will have to rethink your answers. And sometimes you will just have to stop the project.

Read more…

Tuesday, January 26, 2016

Storypoint Estimation Scale

Tomas Gutierrez, Partner at Scalable Path, gave a detailed description of the story point estimation scale they use at his company while answering a question on Quora. I think the way they use it has great value for any team that is thinking about improving its own estimation process. It is fast, simple, and meaningful, and you can take it as a base to create your own scale definition.

Like many teams, they are using story (or agile) points to assign a common definition to the effort required to complete tasks/stories. Their exponential complexity scale is based on the modified Fibonacci sequence – 0, 1, 2, 3, 5, 8, 13, 21. This definition of complexity should be shared by the whole team, from developers, product owners, and executives to anyone else who'd like to understand the nuances and complexities of creating something with this team. The scale will allow you, your team, and your organization to have visibility into timelines, complexity, budget, and staffing.

Here is how they interpret story points in their projects and de-couple effort from hours.

0 – Very quick to deliver and no complexity; on the order of minutes
- One should be able to deliver many 0’s in a day
- I know exactly what needs to be done, and it’s going to take me very little time
- Example: Change color in CSS, fix a simple query

1 – Quick to deliver and minimal complexity; on the order of an hour+
- One should be able to deliver a handful of 1’s in a day
- I know exactly what needs to be done, and it’s going to take me little time
- Example: add a field to a form

2 – Quick to deliver and some complexity; on the order of multiple hours/half-day+
- One should be able to deliver one 2 comfortably in a day
- I mostly know what needs to be done, where improvements/changes need to be implemented, and it’s going to take me some time
- Example: Add a parameter to form, validation, storage

3 – Moderate time to deliver, moderate complexity, and possibly some uncertainty/unknowns
- On the order of about a day or more to deliver
- I have a good idea what needs to be done, and it’s going to take me a bit of time
- Example: Migrate somewhat complex static CSS into a CSS pre-processor

5 – Longer time to deliver, high complexity, and likely unknowns
- On the order of about a week or more to deliver
- I know what needs to be done at a high level, but there is a good amount of work due to complexity/amount of development, and there are big unknowns we’ll discover as we get into the work.
- Example: Integrate with third-party API for pushing/pulling data, and link to user profiles in the platform

8 – Longer time to deliver, high complexity, and likely unknowns
- On the order of a couple weeks+
- I understand the concept and the goals, but it will take a while to deliver due to the amount of work, complexity, and unknowns
- If we have an 8, we should break them into smaller tasks/issues with smaller point values and minimize the complexity
- This might require a Spike to architect/remove uncertainty or be created as an epic with more granular stories within it
- Example: Overhaul the layout/HTML/CSS/JS of a web application

13 – Long time to deliver, high complexity, many critical unknowns
- On the order of many weeks/month
- Similar to an 8; this should definitely be an epic and requires discussion around how to accomplish it
- Example: Migrate application from the outdated data store to new DB technology and ORM

21 – You’re doing this wrong…

As you can see, this is not clear-cut and leaves much room for interpretation. Estimating software development is difficult, and there are many factors to consider, including complexity to develop given existing architecture, team availability, business priorities, unforeseen third party complexity, use of CI (Continuous Integration), automated testing, etc. What teams should strive to do is build a culture where there is a good grasp on the solution, and all agree on definitions for the level of effort required to deliver each piece of functionality, task, bug fix, etc.

Read more…

Monday, January 25, 2016

Enterprise Scrum and Improvement Cycles

As you have probably noticed in previous posts, one of my main interests is leveraging Agile and Scrum in enterprises to improve the way they work and, at the same time, let teams have more fun and get better results.

No surprise then that I have a Google Alert on "Enterprise Scrum" and that the website of Mike Beedle pops up now and then. Mike Beedle is the author of the first Scrum book (with Ken Schwaber), the author of the first Scrum paper published (with Jeff Sutherland, Ken Schwaber, Martine Devos and Yonat Sharon), and co-author of the Agile Manifesto. He is also the author of the upcoming Enterprise Scrum book. It is not finished yet, but you can read the executive summary on his website.

I will read the whole book when it comes out, and until then I will refrain from giving my opinion. But one thing I picked up from the summary and really liked is his implementation of the concept of "Improvement Cycles".

Every Improvement Cycle (a Sprint in Scrum) has a PE3R structure: planning, execution, review, retrospective, and refinement.

Planning: Plan on starting or continuing with an activity (provided you passed a DOR - definition of ready for the activities)

Execution: Execute and get things DONE (according to a DOD - definition of done, for the activities)

Review: Inspect and Adapt the results obtained on the things DONE, making everything transparent

Retrospective: Inspect and Adapt the team and the process to improve it

Refinement: Refine the Value List (Product Backlog), to change/improve the  efforts

You can apply these cycles to any kind of activity: recruiting, marketing, sales, etc. You can decide on the cycle length (weekly, monthly, quarterly) and can run overlapping cycles (weekly and quarterly, for example). I like this way of describing the essence of Scrum because it is easy to relate to for people who work in enterprises. It is that simple.

"Scrum management" brings several advantages:
- better teamwork, building a "cooperative culture"
- results-oriented – focus on getting things DONE
- deadlines – everything we humans do gets done through deadlines, so the Cycle structure helps
- it inspects and adapts everything: 1) Review: the work, 2) Retro: the team and the process, 3) Refinement: the vision

Read more…

Monday, January 18, 2016

Top six reasons for failure of Agile projects

VersionOne does an annual "State of Agile Development" survey and publishes the results. You can get a copy by signing up with your email address.

I recently read through the latest (9th) annual survey results and my interest was piqued by the data on reasons why agile projects have failed. I decided to have a look at the reports of the last five years and compiled an average ranked list.

The top 6 reasons of failed agile projects according to the survey averaged over the last 5 years (with the latest ranking between parentheses):

1. "Lack of experience with agile." (1)

2. "Company philosophy or culture at odds with agile core values." (2)

3. "A broader organizational or communications problem" (6)

4. "External pressure to follow traditional waterfall processes." (4)

5. "Lack of support for cultural transition." (5)

6. "Lack of management support." (3)

What is interesting to see is that not much has changed in the list over the years. Lack of experience with agile is cited almost every year as the number one or number two reason for failure. And the company philosophy or culture being at odds with agile core values is also mentioned almost every year as the number one or number two reason.

What I like about this list is that it shows very clearly that in order to succeed with agile you have to transform an organisation, not just a development team. This is especially true in larger organisations. A lack of experience can be compensated for with training, coaching, and mixing internal teams with external developers/testers who have experience with Agile projects. The rest has to come from within the organisation and will require organisational change and resisting pressure from external sources. And this is only possible when you have management support at the highest levels.

Read more…

Wednesday, January 13, 2016

Three must have Technical Competencies for Scrum Teams

One key element of working agile in any organization is technical competence. Why is that? Well, in my opinion, as an organization you can only be agile when you are able to make changes to your product in an easy, fast, and flexible way while maintaining the desired quality.

In that sense your organizational agility is constrained by your technical agility. In other words, when you are slow in making changes to your product, it does not matter how you structure your teams or your organization, or what framework you adopt: you will be slow to respond to changes. Bas Vodde and Craig Larman from LeSS (Large Scale Scrum) wrote extensively about this in the context of the LeSS Framework and Technical Excellence. I fully agree with them in a Large Scale Scrum environment, but I am also of the opinion that for a single Scrum team technical competence is essential in order to be good and to get the benefits of Scrum (or any other Agile framework).

Luckily, there are a few well-established agile engineering practices that can help the team keep their work at a high quality and in a flexible state. The three practices that I think are essential for any agile team to master are the following:

- Continuous Integration
- Unit Tests
- Test Automation

Continuous Integration

Martin Fowler writes in one of his key articles the following:
Continuous Integration is a software development practice where members of a team integrate their work frequently, usually each person integrates at least daily - leading to multiple integrations per day. Each integration is verified by an automated build (including test) to detect integration errors as quickly as possible. Many teams find that this approach leads to significantly reduced integration problems and allows a team to develop cohesive software more rapidly.
Based on the above definition and their experience Bas Vodde and Craig Larman define Continuous Integration as:

- a developer practice…
- to keep a working system
- by small changes
- growing the system
- by integrating at least daily
- on the mainline
- supported by a CI system
- with lots of automated tests

I do not want to go into detail about the items above, since the guys from LeSS have already done an excellent job doing so. But there is one thing I would like to point out: discussions about CI are all too often about tools and automation. Though these are important, CI in essence is a developer practice. Owen Rogers, one of the original creators of CruiseControl.NET, writes in one of his articles:
Continuous integration is a practice – it is about what people do, not about what tools they use. As a project starts to scale, it is easy to be deceived into thinking that the team is practicing continuous integration just because all of the tools are set up and running. If developers do not have the discipline to integrate their changes on a regular basis or to maintain the integration environment in a good working order they are not practicing continuous integration. Full stop. 
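That said, the supporting CI system does not have to be complicated. Below is a minimal sketch of the kind of verification step a CI server could trigger on every integration to the mainline; the script name, directory layout and commands are my own assumptions, not a prescription from Fowler, Rogers or the LeSS material.

    # build.py - a minimal sketch of the automated verification a CI system runs
    # on every integration to the mainline. Commands and paths are illustrative
    # assumptions (a "src" package and a pytest test suite under "tests").
    import subprocess
    import sys

    STEPS = [
        ["python", "-m", "compileall", "-q", "src"],   # does the code still build?
        ["python", "-m", "pytest", "-q", "tests"],     # do the automated tests pass?
    ]

    def main() -> int:
        for step in STEPS:
            result = subprocess.run(step)
            if result.returncode != 0:
                print("Integration broken by step: " + " ".join(step))
                return result.returncode
        print("Mainline is green - safe to keep integrating.")
        return 0

    if __name__ == "__main__":
        sys.exit(main())

The value is not in the script itself but in the discipline around it: every developer integrates small changes at least daily, and a red result is fixed immediately.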
Unit Tests

Unit Tests are software programs written to exercise other software programs (called Code Under Test, or Production Code) with specific preconditions and verify the expected behaviors of the CUT. Unit tests are usually written in the same programming language as their code under test.

Each unit test should be small and test only a limited piece of code functionality. Test cases are often grouped into Test Groups or Test Suites. There are many open source unit test frameworks (link). Unit tests should run very fast: typically hundreds of unit test cases run within a few seconds.

The purpose of unit testing is not finding bugs. A unit test is a specification of the expected behaviors of the code under test, and the code under test is the implementation of those expected behaviors. So the unit tests and the code under test are used to check the correctness of each other, and to protect each other. Later, when someone changes the code under test and alters the behavior expected by the original author, the test will fail. If your code is covered by reasonable unit tests, you can maintain it without breaking existing features. That is why Michael Feathers defines legacy code in his book as code without unit tests.
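As a small illustration of this "executable specification" idea, here is a sketch using Python's built-in unittest framework; the function under test (split_bill) is invented purely for this example.

    # A minimal sketch of a unit test as an executable specification.
    # The function under test is invented for illustration only.
    import unittest

    def split_bill(total_cents: int, people: int) -> list:
        """Split a bill as evenly as possible; the first shares carry the remainder."""
        if people < 1:
            raise ValueError("need at least one person")
        base, remainder = divmod(total_cents, people)
        return [base + (1 if i < remainder else 0) for i in range(people)]

    class SplitBillTest(unittest.TestCase):
        def test_splits_evenly_when_possible(self):
            self.assertEqual(split_bill(900, 3), [300, 300, 300])

        def test_spreads_remainder_over_first_shares(self):
            self.assertEqual(split_bill(1000, 3), [334, 333, 333])

        def test_rejects_zero_people(self):
            with self.assertRaises(ValueError):
                split_bill(1000, 0)

    if __name__ == "__main__":
        unittest.main()

If someone later changes split_bill in a way the original author did not expect, one of these tests fails and protects the specified behavior.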

Test Automation

Agile developers emphasize the importance of automated tests. With short cycles, manual regression testing is nearly impossible. Does that mean there is no manual testing at all? No. Some manual testing is still recommended, though such testing differs from the traditional script-based manual testing. Elisabeth Hendrickson, the author of the mini-book Exploratory Testing in an Agile Context, dares to state that:
I do think that if you can write a manual script for a test, you can automate it.
Teams often claim “It is impossible to automate tests related to a lost network connection” or “You can’t automate tests related to hardware failure”. In most cases the answer is “No, it is not impossible” or “Yes, you can.”

It may be difficult to automate a test in exactly the same way as it would be carried out manually. For example, it is nearly impossible to remove the network cable automatically in a connection-lost test case. Therefore, the automated test is usually done in a different way. Instead of the cable being physically detached, the automated test instructs the driver to respond as if the cable were removed.
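A sketch of what that can look like in code, using a test double for the network driver; the Uploader class and its driver interface are my own illustration, not an existing API.

    # A minimal sketch of automating a "lost connection" test with a test double.
    # The Uploader and its driver interface are invented for illustration only.
    import unittest
    from unittest.mock import Mock

    class ConnectionLost(Exception):
        pass

    class Uploader:
        """Uploads a report, and queues it locally when the connection drops."""
        def __init__(self, driver, queue):
            self.driver = driver
            self.queue = queue

        def upload(self, report):
            try:
                self.driver.send(report)
            except ConnectionLost:
                self.queue.append(report)  # expected behavior when the "cable" is pulled

    class LostConnectionTest(unittest.TestCase):
        def test_report_is_queued_when_connection_drops(self):
            # Instead of physically detaching the cable, instruct the fake driver
            # to behave as if the cable were removed.
            driver = Mock()
            driver.send.side_effect = ConnectionLost()
            queue = []

            Uploader(driver, queue).upload("daily-report")

            self.assertEqual(queue, ["daily-report"])

    if __name__ == "__main__":
        unittest.main()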

Is automating all tests worth it? According to Hendrickson:
If it’s a test that’s important enough to script, and execute, it’s important enough to automate.
Why is this? Iterative and incremental development implies that code is not frozen at the end of the iteration but instead has the potential to be changed every iteration. Therefore, manual regression testing would mean rerunning most of the manual tests - every iteration. Automating the tests therefore pays back quickly. Automating all tests might not be worthwhile or even possible, but for many of them it is.

Read more…