Tuesday, December 29, 2015

Operations First Delivery

I've been involved with many projects that attempt to increase their delivery cadence after the product has already been deployed to production. This is met with severe resistance for multiple reasons: the manual testing effort takes too long, the environment is unstable, or a release could result in downtime and customer impact. Since the business isn't asking for more frequent deliveries, how can we justify the expense?

These reasons feel similar to the resistance I receive when introducing test-driven development. This got me thinking of an idea I've coined "Operations First Delivery". The idea is that the first delivery of a product should be a deployment to the production environment. This may seem ridiculous since nobody is using a product that doesn't exist yet, but I see that as a risk-free opportunity. It is the ideal time to release to production.

If we start every product where every changeset is deployed to production, we start with Continuous Delivery. We will learn how to do zero-downtime deployments when they're appropriate. We will learn how to monitor our systems and resolve the issues that generate support phone alerts. We will learn to evaluate and reduce risks in order to continuously deliver. This learning will be incremental and iterative from the beginning of the product.

Thursday, August 1, 2013

Daily Tally

Have you attended (or participated in) this daily stand-up meeting?
Q. What did I do yesterday?
A. I worked on (pointing) this card yesterday. 
Q. What am I doing today?
A. I'll continue working on this card today. 
Q. What's in my way?
A. Nothing is in my way.

If this continues for an entire iteration, the team loses credibility and trust with itself and with other stakeholders and teams. One technique my teams use to provide visibility into potential risks is a "Daily Tally" on every user story/defect card.

5 Point User Story Card with Daily Tallies
  • Black - Developers
  • Orange - Testers
  • Red & Highlighter - Back Flow

Every day we place a tally mark on the card. Developers and testers have unique marker colors, and we tally from the left to the right of the card to represent a timeline. The colors tell us how long we've been working on a card and how the time has been split between the developers and testers.

We also add a red tally when a card is returned to the developers from the testers. A red tally indicates that we missed an acceptance test scenario or introduced a defect.

The team uses the information from the daily tally in two primary ways:
  1. Retrospective Item - A red tally mark shows when we have an opportunity to improve by reviewing the causes of the back flow and determining whether we want to prevent it in the future.
  2. Pushing help - When the number of tally marks equals the card's point estimate, it triggers the team to ask "How can we help?". This is not a failure of an individual or the team; it is a mechanism that makes the team aware of potential risks as early as possible.
We still use more traditional tracking methods, but we've found this technique effective for spotting trouble early and providing examples (with data) for retrospective meetings.

Thursday, February 21, 2013

Are all your changes included in the next release?

I'd like to share one technique I've used with teams to help error-proof their branching and verify that the next release will include all of their changes. This technique only applies to the "One Branch" strategy; with "No Branches", a single code line always contains all the changes.

Error-proof Branching Strategies

  • One Branch (2 code lines) - main (or default) and release (or stable)
  • No Branches (1 code line) - main (or default)

The main error that occurs with branches is that developers forget to apply their changes to all branches. For example, a hot fix applied to the release code line also needs to be applied to the development code line so that it makes it into the next release. As with most errors, the developers aren't being malicious, and telling them to "do a better job next time" won't improve their behavior.

One highly effective technique I use with teams is to run a script once a day to validate that all changes made to a release branch have been applied to the main (or default) code line. Since this happens daily, any merge conflicts are minimized.

In this example using Mercurial, a batch script is called from a Jenkins job that runs once a day. If a commit exists on the release branch that hasn't been merged into the default branch, the script fails the build. I hope you find it useful.
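The original script isn't reproduced here, but a minimal shell sketch of the same check might look like the following (the branch names "release" and "default", and a Jenkins workspace that already contains a clone, are assumptions):

#!/bin/sh
# Sketch of the daily check: fail the build if any changeset on the
# release branch has not been merged back to default.
hg pull

# Select changesets on the release branch that are not ancestors of the
# default branch head, i.e. release commits that were never merged back.
unmerged=$(hg log -r "branch('release') and not ancestors('default')" --template "{node|short} {desc|firstline}\n")

if [ -n "$unmerged" ]; then
    echo "Release changesets missing from default:"
    echo "$unmerged"
    exit 1   # a non-zero exit fails the Jenkins build
fi
echo "All release changesets have been merged to default."

Because the revset only selects release changesets that aren't ancestors of the default branch head, merging the missed commit clears the check the next time the job runs.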

Branching Strategies


I've stumbled across many creative and interesting branching strategies during my career. I'm going to sort them into two categories: Error-prone and Error-proof.

Error-prone Branching Strategies

  • Branch per release per environment based on previous release
  • Branch per release
  • Branch from a previous release
  • Branch per environment

Error-proof Branching Strategies

  • One Branch (2 code lines) - main (or default) and release (or stable)
  • No Branches (1 code line) - main (or default)

Saturday, July 14, 2012

My Mercurial Workflow (using Bookmarks)

The "master" branch...er...bookmark

I create a bookmark that always points to the latest changeset from the shared repository. I should call it "origin/master", but I have been able to manage with a single 'master' bookmark. I update to the master bookmark before making or pulling changes.

Any changesets that I want to push are ancestors of the 'master' bookmark. I put new commits on 'master' and defer the decision to create a branch until I have changes that need to be shared at different times; for example, when I start working on a new feature but have to go back and fix a quick defect.

Daily Flow:

hg update master
(edit, commit, edit, commit, edit, commit)

** determine I need to move previous commits to a feature branch
hg bookmark feature1

** move the 'master' bookmark back to the last shareable/pushable changeset (rev 1234)
hg bookmark -f master -r 1234
(edit, commit, edit, commit, edit, commit)

** Stop working on the non-shareable commits and push the shareable changes
hg update master
hg pull --rebase
hg push -r master

** Time goes by with dozens of changesets by others....
hg pull --update (--update will move the 'master' bookmark forward)
hg update feature1 (move my working directory to my feature1 branch)
hg rebase -d master (rebase my feature1 changeset on to the latest changes)

** when these are ready to be shared
hg bookmark master -f (move the master bookmark to the changesets we want to share)
hg push -r master

Pushing (The secret sauce)

If you have local changesets on bookmarked branches that haven't been merged into master and you attempt a push, you get the "abort: push creates new remote head" error.

Include the -r (or --rev) option to only push master changesets.

-> hg push -r master
pushing to origin
searching for changes
1 changesets found
adding changesets
adding manifests
adding file changes
added 1 changesets with 1 changes to 1 files

Tips for Bookmarks

Keep an eye on the active bookmark. The active bookmark is denoted with an asterisk (*). When you commit, the active bookmark will update to your new commit. This is most important when you're pulling in other changes. Switch the active bookmark by updating to it (hg update feature_1). This can be done even when the bookmarks point to the same revision.

-> hg bookmark
   feature_1                 3097:6757f51d7d1a
 * master                    3101:0b0834be4897


Thursday, May 3, 2012

Testing Boundaries

Many projects follow the same pattern for software testing. These projects test the entire system by running manual test cases through the user interface. When the number of manual test cases grows beyond the capacity of the current staff, they introduce automation tools to execute those manual test cases. This testing strategy leads to discussions about data management, test maintenance costs, and ever-increasing execution times.

While there are ways to improve the existing testing process, ultimately I think the strategy is wrong. More specifically, the test boundaries are wrong. We can't effectively test a system using end-to-end tests exclusively. Here is an example of an alternative testing strategy.

Imagine we have a system with three Modules (A, B, C).


For the sake of simplicity, let's assume that each module has 10 paths. Every path in Module A depends on every path in Module B, and every path in Module B depends on every path in Module C.

If we test the entire system across all the modules, we need 1000 tests to cover all the paths through the system.
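Spelled out, that count comes from multiplying the paths through each module:

10 paths in A × 10 paths in B × 10 paths in C = 1,000 end-to-end tests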


We can drastically reduce the testing effort by splitting our test boundaries.

When we unit test each module in isolation, we'll need 10 tests per module, one for each path in the module. However, these 30 tests don't cover the interactions between the modules, so we also need 200 integration tests: 10 × 10 = 100 tests to validate the interaction between A and B, plus another 100 to cover B and C.



So if we change our test boundaries, we can get the same coverage of the system using 30 + 200 = 230 test cases.

The most powerful benefit of smaller test boundaries is the ability to quickly localize failures. If a single path in Module B has a defect and we test at the full system boundary, we will have 100 failing test cases. That large number of failing tests makes it difficult to track down a single defect.


If we split our testing boundaries up, we have a single failing unit test for Module B, 10 broken integration tests between Module A and Module B, and 10 broken tests between Module B and Module C. Twenty-one failing tests is still a lot, but the single failing unit test tells us the exact location of the defect, so we can find it quickly.
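Counting it out under the same assumptions:

Full system boundary: 10 paths in A × 1 defective path in B × 10 paths in C = 100 failing tests
Split boundaries: 1 unit test + 10 A-B integration tests + 10 B-C integration tests = 21 failing tests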

        

I realize this is a simple example, but I hope it illustrates the effects and impact of using smaller testing boundaries.

Sunday, April 29, 2012

Observations from a 2 day Hack-a-thon

Last week I had the opportunity to participate in a two-day "hack-a-thon". This is an event where engineers get to work on any interesting project and present their results to a panel of judges. The judges determine which projects are the most innovative, and those projects are awarded and promoted across the organization.

In the past I've been skeptical of these types of events. Most of the proposed projects are engineering improvements that should be worked on as part of regular project deliverables, and the majority of the projects for this event fell into that category. However, I think the event gave the teams authority and permission to work on these projects, and I was impressed by the results the teams produced in two days (more like 1.5 days, to allow for demos).

This led me to ask a couple of questions: one to the teams and one to the leaders/managers. To the teams, I asked: how were you able to deliver such impressive results in two days? Some of the teams' reasons included:

  • Improved collaboration
  • Uninterrupted/focused work
  • Interesting projects
  • Competition with other teams
  • Scoping the work to two days
  • Food and snacks

The reason I think contributed the most to the success of the projects is scoping the work to the time available. Most teams had two or three options for what they envisioned for their demo. They delivered the simplest demo first, then worked on the next small improvement. Most demos were ready during the first day.

I saw another impressive behavior: the teams followed up on impediments instantly and would not take "no" or "tomorrow" as an answer. I don't know if I would call it Drive, Responsibility, or Ownership, but I was very happy to see it.


On Monday, I'll follow up with managers and leaders on how to replicate these outcomes on our real projects.