Story about User Stories

It’s amazing how people somehow manage to figure out ways to abuse systems that they once so passionately fought to bring in. I recently came across a user story card which had a whole sequence diagram scribbled on it in an extremely small font. Interesting work-around, isn’t it? For those who have not seen one, a user story card is typically a 3×5 index card used by many organizations to represent a user story. The small size has been chosen for a reason: the idea behind having a 3×5 index card is to constrain the user story to being short and simple.

But the user story card is just a part of the story. Let us try to look at the whole story itself. Please allow me to tell you the story of user stories.

A user story describes desired functionality from the business perspective; it is the user’s story. A good user story describes the desired functionality, who wants it, and how and why the functionality will be used. I completely agree with Mike Cohn’s favourite template for writing user stories, which is short and simple:

As a [user role]

I want [functionality]

So that [reason]
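For instance, a purely illustrative story following this template could read: As a frequent flyer, I want to rebook a cancelled flight online, So that I do not have to call the helpdesk.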

A user story is composed of:

1. Card – the written description of the story; it serves as an identification and a reminder, and also helps in planning.

2. Conversation – this is the meat of the story: the dialogue carried out with the users, the recorded notes, the mockups, and the documents exchanged.

3. Confirmation – the acceptance test criteria that the user will utilize to confirm that the story is completed.

A very good guideline for writing good user stories is the INVEST model:

Independent – One user story should be independent of another (as much as possible). Dependencies between stories make planning, prioritization, and estimation much more difficult. Often enough, dependencies can be reduced by either combining stories into one or by splitting the stories differently.

Negotiable – A user story is negotiable. The “Card” of the story is just a short description which does not include details. The details are worked out during the “Conversation” phase. A “Card” with too much detail on it actually limits conversation with the customer.

Valuable – Each story has to be of value to the customer (either the user or the purchaser). One very good way of making stories valuable is to get the customer to write them. Once a customer realizes that a user story is not a contract and is negotiable, they will be much more comfortable writing stories.

Estimable – The developers need to be able to estimate a user story (even a ballpark figure) to allow prioritization and planning of the story. Problems that can keep developers from estimating a story include a lack of domain knowledge (in which case there is a need for more Negotiation/Conversation) or a story that is too big (in which case it needs to be broken down into smaller stories).

Small – A good story should be small in effort, typically representing no more than 2–3 person-weeks of effort. A story larger than that can have more errors associated with scoping and estimation.

Testable – A story needs to be testable for the “Confirmation” to take place. Remember, we do not develop what we cannot test. If you can’t test it, you will never know when you are done. An example of a non-testable story: “the software should be easy to use”.
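A hypothetical testable reformulation of that last one could be: “a first-time user can place an order within two minutes without assistance” – now there is a concrete criterion to confirm the story against.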

So the moral of the story is this: user stories, being the cornerstones of Agile development, deserve their fair share of time, effort and, most importantly, prudence to be “invested” in them in order to lay a strong foundation for the project.


IOC Containers Demystified

When I first came across this big buzz called IOC Containers, I found them very close to a factory class in purpose and philosophy. As I dig deeper and deeper into the concept, I find myself more and more convinced that an IOC Container is nothing more than a fancy Factory.

Amid all the mumbo-jumbo around them, IOC Containers are actually quite simple at heart. IOC Containers are essentially a tool for Dependency Injection. The whole idea is to invert the flow of program control in such a manner that the maximum amount of loose coupling can be achieved. Hence the name IOC (Inversion Of Control) Container.

Sounds vague? Let us jump into some code in order to understand this. Allow me to use the example of my favorite ‘PizzaStore’ class.


public class PizzaStore
{
    public ChickenPizza Prepare()
    {
        ChickenPizza pizza = new ChickenPizza();
        return pizza.Prepare();
    }
}

In the light of the Dependency Inversion Principle (DIP), this is not the right way of doing it. For every new Pizza class that is created in the system, there needs to be a new PizzaStore class, because the PizzaStore class depends upon the concrete Pizza class. A desirable design goal (according to DIP) here would be for the Pizza and PizzaStore classes to be decoupled in such a manner that the same PizzaStore class is able to work with different types of Pizza. In other words, the dependency between Pizza and PizzaStore should be inverted so that PizzaStore does not directly depend on a concrete Pizza class. As a matter of fact, according to DIP, it should depend on an abstraction (the IPizza interface in this case).


public class PizzaStore
{
    public IPizza Prepare(IPizza pizza)
    {
        return pizza.Prepare();
    }
}

The client program must pass the right dependency (an instance of a pizza, as an IPizza) to the PizzaStore. This technique of injecting the dependency from outside is called Dependency Injection.

If we separate out the piece of code that assumes the responsibility of inverting the flow of control (in comparison to procedural programming) using the technique of Dependency Injection, we would call that piece of code an IOC Container.

An IOC container, at its most basic, should be able to do two things:
• Register a Dependency
• Resolve a Dependency

For the ease of understanding, let us start with the Resolve part. Take a look at the following code,


public class PizzaStore
{
    public IPizza Prepare()
    {
        var pizza = Container.Resolve<IPizza>();
        return pizza.Prepare();
    }
}

Here the Container determines and returns the right instance of the Pizza. Hence, whenever an instance of the type Pizza is required by the PizzaStore class, it will request the Container to resolve and return the right instance of the Pizza. This means that it is the responsibility of the Container to instantiate the class. It also means that the Container needs to have the information about what instance it should return for a given type. This is where the Register part comes into the picture. The Register method tells the Container what object should be returned for a given type.


ChickenPizza chkPizza = new ChickenPizza();
Container.Register<IPizza>(chkPizza);

Or sometimes,


Container.Register<IPizza, ChickenPizza>();

Most of the existing IOC containers also allow you to register through configuration. A typical full-blown IOC Container also provides other sophistications, like managing the scope and lifetime of the instance. For example, would you like your object to be instantiated on a per-call basis or a per-thread basis, or should your instance be a singleton?
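To make this concrete, here is a minimal, purely illustrative sketch of such a container, written only to show the Register/Resolve idea from the snippets above (real containers such as Unity, Castle Windsor or StructureMap do far more, including the lifetime management just mentioned):

using System;
using System.Collections.Generic;

// A hypothetical container sketch – not a real library.
// It maps an abstraction (e.g. IPizza) to a factory that produces the concrete instance.
public static class Container
{
    private static readonly Dictionary<Type, Func<object>> registrations =
        new Dictionary<Type, Func<object>>();

    // Register an already-created instance for a given abstraction (behaves like a singleton).
    public static void Register<T>(T instance)
    {
        registrations[typeof(T)] = () => instance;
    }

    // Register a concrete type to be instantiated on every Resolve call (per-call behaviour).
    public static void Register<TAbstraction, TConcrete>()
        where TConcrete : TAbstraction, new()
    {
        registrations[typeof(TAbstraction)] = () => new TConcrete();
    }

    // Resolve returns whatever was registered for the requested abstraction.
    public static T Resolve<T>()
    {
        return (T)registrations[typeof(T)]();
    }
}

With something like this in place, Container.Register<IPizza, ChickenPizza>() followed by Container.Resolve<IPizza>() behaves just as described above.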

In a nutshell, all that an IOC Container essentially does is know what instance is to be returned for a given type, and then return that instance through an abstraction (generally an interface) using Dependency Injection.

Dependency Injection Simplified

A lot is being said and written about Dependency Injection (DI) these days. It’s amazing how much of the current literature makes DI sound so complicated. In reality, though, the concept is really simple. Powerful, yet simple. DI is based on one of the SOLID principles, the Dependency Inversion Principle (DIP), advocated by Uncle Bob. The principle states:


HIGH LEVEL MODULES SHOULD NOT DEPEND UPON LOW LEVEL MODULES. BOTH SHOULD DEPEND UPON ABSTRACTIONS.
ABSTRACTION SHOULD NOT DEPEND UPON DETAILS. DETAILS SHOULD DEPEND UPON ABSTRACTION.

Let’s try to understand this with an example. As I write this post on a Sunday evening enjoying a four-cheese pizza slice (yummy!), all I can think of for an example is pizzas. Let’s assume a class called PizzaStore, which specializes in Grilled Chicken pizzas and hence sells only this one pizza. Let’s call the pizza class ChickenPizza.

public class PizzaStore
{
    public void CreatePizza()
    {
        ChickenPizza pizza = new ChickenPizza();
        pizza.Prepare();
    }
}

Here the PizzaStore class is said to be the higher-level class and the ChickenPizza class is the lower-level class. Hence we have a high-level class depending upon a low-level class. But so far, this is perfectly fine and acceptable, as there is just one type of pizza to prepare.

Time passes, and our little PizzaStore becomes more popular. The business grows and so do the demands. It’s time they added some more variety to the kind of pizzas they sell. The pizza store introduces a new pizza – the famous Chicago deep-dish pizza. Let this class be called DeepDishPizza.

This will require the high-level PizzaStore class to change a little to accommodate this new low-level DeepDishPizza class.

public class PizzaStore
{
   public void CreatePizza(string pizzaType)
   {
      if (pizzaType == "DeepDishPizza")
      {
          DeepDishPizza pizza = new DeepDishPizza();
          pizza.Prepare();
      }
      else
      {
          ChickenPizza pizza = new ChickenPizza();
          pizza.Prepare();
      }
   }
}

This means every time a new pizza type is introduced, the PizzaStore class will need to change. That is, the high-level class will need to change when a low-level class changes. As the number of these pizza classes increases, the coupling will keep on increasing. Plugging in a new type of pizza will not be very easy, even for this overly simplified PizzaStore.

Let’s try to analyze whether DI can come to the rescue as far as our little PizzaStore is concerned.

In order to implement DI here, we will need to abstract out the commonalities of these pizza classes into an abstract class called Pizza.

public abstract class Pizza
{
   public abstract void Prepare();
}

Let all the pizza classes inherit from this class and implement their own version of the Prepare() method.

public class ChickenPizza: Pizza
{
   public override void Prepare()
   {
      ...
   }
}

 


public class DeepDishPizza: Pizza
{
   public override void Prepare()
   {
      ...
   }
}

And the PizzaStore class will work only with the Pizza abstraction:

public class PizzaStore
{
   private Pizza _pizza;
   public PizzaStore(Pizza pizza)
   {
       _pizza = pizza;
   }
   public void CreatePizza()
   {
       _pizza.Prepare();
   }
}

This technique of passing (or injecting) the instance (the dependency) into the dependent class (PizzaStore) is called Dependency Injection.
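To see the injection happen, here is a rough usage sketch (hypothetical bootstrapping code, for example in Main); the client decides which concrete pizza to pass in, and PizzaStore itself never changes:

Pizza pizza = new DeepDishPizza();        // or new ChickenPizza()
PizzaStore store = new PizzaStore(pizza); // the dependency is injected from outside
store.CreatePizza();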

The technique is simple yet very powerful for achieving the ultimate design goal – “Loose Coupling”.

The S.O.L.I.D code

As they say, “A picture is worth a thousand words”. I just came across some motivational posters on Uncle Bob’s SOLID principles of software design. If you are a software developer who likes to keep an eye on what’s going on in the object-oriented (OO) development fraternity, chances are that you would have come across the famous clash of the OO titans (Joel vs Uncle Bob). Interesting perspectives, but that’s a different story altogether and I will get into that some other time.

Uncle Bob’s SOLID principles are a collection of five OO design principles. These are:

1. Single Responsibility Principle (SRP)
2. Open Closed Principle (OCP)
3. Liskov Substitution Principle (LSP)
4. Interface Segregation Principle (ISP)
5. Dependency Inversion Principle (DIP)

    SRP

THERE SHOULD NEVER BE MORE THAN ONE REASON FOR A CLASS TO CHANGE.
https://codingcraft.files.wordpress.com/2011/03/dip.jpg

    OCP

SOFTWARE ENTITIES (CLASSES, MODULES, FUNCTIONS, ETC.) SHOULD BE OPEN FOR EXTENSION BUT CLOSED FOR MODIFICATION.
https://codingcraft.files.wordpress.com/2011/03/ocp.jpg

    LSP

FUNCTIONS THAT USE REFERENCES TO BASE CLASSES MUST BE ABLE TO USE OBJECTS OF DERIVED CLASSES WITHOUT KNOWING IT.
https://codingcraft.files.wordpress.com/2011/03/lsp.jpg

    ISP

CLIENTS SHOULD NOT BE FORCED TO DEPEND UPON INTERFACES THAT THEY DO NOT USE.
https://codingcraft.files.wordpress.com/2011/03/isp.jpg

    DIP

HIGH LEVEL MODULES SHOULD NOT DEPEND UPON LOW LEVEL MODULES. BOTH SHOULD DEPEND UPON ABSTRACTIONS.
ABSTRACTION SHOULD NOT DEPEND UPON DETAILS. DETAILS SHOULD DEPEND UPON ABSTRACTION.
https://codingcraft.files.wordpress.com/2011/03/dip.jpg

Irrespective of whether you are on Uncle Bob’s side or Joel’s side, these principles, to me, are great pointers to good software design. Of course, like any other engineering principles, these should not be treated as commandments. Rather, the requirements of the project should dictate the degree of relevance of these principles in each case. That is where coding becomes a craft and not merely a discipline of science.

With that said, having the mind laden with these principles will definitely put one’s thinking in the right direction – the direction in which the fundamental requirement of all good software design can be achieved: “Loose Coupling”.

Code for Testability

Testability is the degree of ease with which a piece of code can be tested. Anything that makes a piece of code harder to test reduces Testability. This gives the bugs an opportunity to hide for longer and conceal themselves better. Hence, it is always advisable to design and code with Testability in mind.

I always used to wonder why somebody like me, who is more responsible for the coding and design part of the software development process, should be concerned about Testability. Why should I take the pain of coding for testability? What does it bring to the table for somebody like me?

Well! The answer is, “better design”.

In order to make code more testable, the classes must have the ability to be tested in isolation. This means more mockable classes. It is easier to mock classes which are interface-driven (yes, we are talking about Design by Contract here). Also, in order to make classes easily mockable, they must be as loosely coupled as possible; the dependencies must be abstracted out. Hence, good design principles like the Dependency Inversion Principle become obvious candidates for consideration.
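As a small, hypothetical illustration (the names below are mine, not from any particular project), an interface-driven dependency lets a test substitute a hand-rolled fake and exercise the class in complete isolation:

// The dependency is expressed as an abstraction...
public interface IEmployeeNotifier
{
    void Notify(string message);
}

// ...and injected into the class under test instead of being created internally.
public class PayrollProcessor
{
    private readonly IEmployeeNotifier _notifier;

    public PayrollProcessor(IEmployeeNotifier notifier)
    {
        _notifier = notifier;
    }

    public void Process()
    {
        // ... payroll logic ...
        _notifier.Notify("Payroll processed");
    }
}

// A fake used only by tests – no real e-mail is ever sent.
public class FakeNotifier : IEmployeeNotifier
{
    public string LastMessage;
    public void Notify(string message) { LastMessage = message; }
}

// A test can now verify the behaviour without touching any real infrastructure:
//   var fake = new FakeNotifier();
//   new PayrollProcessor(fake).Process();
//   // assert that fake.LastMessage == "Payroll processed"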

To achieve high testability, i.e. a high degree of ease with which a piece of code can be tested, the classes must be smaller, focused and cohesive in nature. That is, they should focus on one or a few related pieces of functionality. Such classes are easier to maintain and tend to have a neater Separation of Concerns. Such classes have the Single Responsibility Principle at their heart.

The more testable the code is, the more coverage it tends to have, for the simple reason that writing more and more tests for such code is easier. This gives the developers more cushion to refactor those pieces of code which have become obsolete and ugly over a period of time.

Hence, coding for Testability is all about writing more maintainable, flexible and robust code – code which has a solid foundation and which can withstand the only constant of software development life: change.

SOA – The Four Commandments

Recently, I got an opportunity to meet and chat with some very able and talented developers and architects here at our .NET user group – KolkataNet. As usual, we got into discussing technology. This time around the buzzword was SOA (Service Oriented Architecture). To my utter surprise, none of us (including me) was able to come up with a decently ‘complete’ definition for SOA. I must confess here that deep down inside I was not very unhappy about it. I was glad to find that when it comes to SOA, I am not the only confused bloke (lol).

This set me out to dig deeper into this highly hyped term. And really, the crux of my findings was that the highly hyped SOA is nothing but a natural growth of the craft of developing software. The craft has come a long way, from procedural to object-orientation to component-based and now to service-orientation. Of course, every stage has had its own compelling drivers to take it to the next one. Hence, SOA is no revolution; it is just an evolution. There are business (rather, marketing, lol) aspects to SOA too, but let us limit our focus to SOA as a software development approach.

So what is SOA? In order to qualify as SOA, an architecture must adhere to the famous Four Tenets of SOA, which are:

Boundaries are explicit. Developers should define explicitly what methods/properties are going to be exposed to the client (see the sketch after these four tenets).

Services are autonomous. Services and the consumer application are independent. So if, in the future, we need to modify or enhance a service’s features, we can take the service offline and work on it without affecting the consumer application.

Share schema/contract, not class implementation. We need to share only the schema with our clients. We should not share any implementation information with our clients. For example, we should not ask them to supply any connection string information at the attribute level, which would expose what database we are using for our service.

Compatibility is based on policy. The services should define all the requirements for using them. We should not need person-to-person communication about the services.
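As a rough illustration of the first and third tenets, this is roughly what an explicit, schema-based contract could look like in WCF (just one possible .NET stack; the IOrderService and Order names here are hypothetical). Only what is marked on the contract crosses the service boundary, and only the data schema is shared, never the implementing class:

using System.Runtime.Serialization;
using System.ServiceModel;

[ServiceContract]
public interface IOrderService
{
    [OperationContract]
    string PlaceOrder(Order order);   // explicitly exposed; nothing else leaks out
}

[DataContract]
public class Order
{
    [DataMember]
    public string ProductCode { get; set; }

    [DataMember]
    public int Quantity { get; set; }
}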

So next time you profess to have implemented SOA, check against these tenets to make sure that you are not exaggerating.

Cheers.

Get rid of the “Manager” syndrome.

Are you one of those coders who often find themselves stuck on deciding a meaningful, explanatory name for their newly created classes? Do you frequently end up creating some “Manager” or “Master” class? If so, then beware! Something is going wrong with the vision and perception of your classes. Perhaps you are asking your nice little classes to do too many things for you; they are probably losing their focus and are perplexed about their existence. Perhaps you burden these poor fellows with too many responsibilities.

Is there anything wrong about having a class do more than one thing? Well, the Single Responsibility Principle (SRP) says “Yes”, there is.

Consider the following class:

public class GiveMeName
{
   public void Save(Employee emp){…}
   public List<Employee> Get(){…}
   public double CalculateGross(double basic){…}
   public void NotifyEmployee(Employee emp){…}
}

This is what the class does for you:

1. Saves the newly created employee object to the database,

2. Gets the list of employees from the database,

3. Calculates the gross from the basic salary using the business rule,

4. Sends email notifications to the employees.

Now if you were to give a meaningful, unambiguous and self-explanatory name to the class GiveMeName, what would you name it? Well, confusing … isn’t it? The confusion is a very good indicator of the fact that we are doing something unfair to the class here. We are making the poor guy cater to too many responsibilities. I bet the best name you can think of (like me) would be Employee or EmployeeManager. This is where we lose the trick.

Try breaking the GiveMeName class into three small classes:

public class EmployeeDb
{
   public void Save(Employee emp){…}
   public List<Employee> Get(){…}
}

public class EmployeeGrossCalculator
{
   public double Calculate(double basic){…}
}

public class EmployeeNotifier
{
   public void Notify(){…}
}

These classes are much more focused, and their names are beautifully self-explanatory and suggestive of their purpose and the meaning of their existence. They tell you why they were born and why they exist. Thus there would be one and only one reason for any of these classes to change, for there is one and only one responsibility that each of these classes caters to. This will ensure orthogonal code and hence the maintainability of your code. It will help your code stay closer to the universal principle of programming – tight cohesion and loose coupling.
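For illustration, a hypothetical coordinating class (not part of the original example) could then simply wire these focused classes together, while each of them keeps its one and only reason to change:

public class EmployeePayrollRun
{
    private readonly EmployeeDb _db = new EmployeeDb();
    private readonly EmployeeGrossCalculator _calculator = new EmployeeGrossCalculator();
    private readonly EmployeeNotifier _notifier = new EmployeeNotifier();

    public void Run(Employee emp, double basic)
    {
        _calculator.Calculate(basic);   // the calculation rule lives only in the calculator
        _db.Save(emp);                  // persistence lives only in EmployeeDb
        _notifier.Notify();             // notification lives only in EmployeeNotifier
    }
}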

So next time you create a class and find that its name doesn’t come obviously to you, and you have to satisfy yourself with some “Manager” kind of name for your class, think twice; think about whether you have taken a leaf out of the book of SRP.