Application Service Design with Commands

While the design principles of Domain-Driven Design do not require you to use Commands in your Application Services, there are a few merits to doing so.

Temporal Decoupling and Autonomy

Commands lend themselves to being issued asynchronously, depending on your Service Level Agreement, and can therefore introduce temporal decoupling into your system if that is what you need. If a command must be processed reliably, it can also be serialized and stored in some kind of non-volatile storage (such as a message queue) until it is processed successfully. This autonomy becomes critically important when you are dealing with distributed systems.
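As a minimal sketch of this idea (the command, queue interface and service names below are illustrative assumptions, not taken from any specific framework), an application service might simply capture the user's request as a serializable command and hand it to a durable queue:

using System;

// Hypothetical command: a plain, serializable snapshot of the user's request.
[Serializable]
public class PlaceOrderCommand
{
    public Guid OrderId { get; set; }
    public Guid CustomerId { get; set; }
    public decimal Amount { get; set; }
}

// Hypothetical abstraction over a message queue or other non-volatile store.
public interface IMessageQueue
{
    void Enqueue(object command);
}

public class OrderApplicationService
{
    private readonly IMessageQueue _queue;

    public OrderApplicationService(IMessageQueue queue)
    {
        _queue = queue;
    }

    public void PlaceOrder(PlaceOrderCommand command)
    {
        // The command is persisted and processed later by a handler,
        // decoupling the caller from the time of processing.
        _queue.Enqueue(command);
    }
}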

Validation

Validations are context-independent rules that do not depend on the state of the domain objects. Business rules, on the other hand, are context-dependent (not very different from Martin Fowler’s idea of Contextual Validation); they check whether or not executing the command will result in a valid state transition. Carrying out the context-independent validation rules on the command itself takes load off the domain layer by pre-filtering commands that would be rejected by the domain objects anyway.

Many people like repeating these validation rules in both places. This adds value only if you ever need to use your domain layer independently of your application layer. Is that likely to happen in your case?
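A minimal sketch of such context-independent validation on the command itself might look like this (the command and its rules are hypothetical, purely for illustration):

using System.Collections.Generic;

public class AddCustomerCommand
{
    public string Name { get; set; }
    public string Email { get; set; }

    // Context-independent checks: only the shape of the command is inspected,
    // no domain state is consulted.
    public IEnumerable<string> Validate()
    {
        if (string.IsNullOrWhiteSpace(Name))
            yield return "Name is required.";

        if (string.IsNullOrWhiteSpace(Email) || !Email.Contains("@"))
            yield return "A valid email address is required.";
    }
}

Commands that fail these checks can be rejected at the application boundary without ever touching the domain objects.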

Authorization

Implementing authorization on the command side of the application essentially means asking yourself whether or not the user is allowed to issue the command. Dealing with authorization on queries is a different discussion.
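A sketch of what this could look like, assuming a hypothetical authorizer interface and reusing the AddCustomerCommand from the validation sketch above:

using System.Security.Principal;

// Hypothetical contract: is this user allowed to issue this command at all?
public interface ICommandAuthorizer<TCommand>
{
    bool IsAuthorized(IPrincipal user, TCommand command);
}

public class AddCustomerAuthorizer : ICommandAuthorizer<AddCustomerCommand>
{
    public bool IsAuthorized(IPrincipal user, AddCustomerCommand command)
    {
        // A question about the user and the command only; no query results involved.
        return user.IsInRole("CustomerAdmin");
    }
}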

Auditing

You can store the commands to capture the user’s actions, which can then be used for auditing purposes.

Logging

You can log commands and thereby log the user’s actions and intentions. These logs can make root-cause analysis of production issues much easier because you now have the end-to-end data needed to replicate the issue in your development environment.
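One way to get both auditing and logging almost for free is to wrap command handlers in a decorator. The handler interface and the console logging below are assumptions made purely for this sketch:

using System;

public interface ICommandHandler<TCommand>
{
    void Handle(TCommand command);
}

// Decorator that records every command before delegating to the real handler.
public class LoggingCommandHandler<TCommand> : ICommandHandler<TCommand>
{
    private readonly ICommandHandler<TCommand> _inner;

    public LoggingCommandHandler(ICommandHandler<TCommand> inner)
    {
        _inner = inner;
    }

    public void Handle(TCommand command)
    {
        // The logged (or persisted) command is the record of the user's intention.
        Console.WriteLine("{0:u} Handling {1}", DateTime.UtcNow, typeof(TCommand).Name);
        _inner.Handle(command);
    }
}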

Re-play and Undo

Commands lend themselves to being stored, which makes it possible to redo or undo a user action if the use case requires it.

In a nutshell, commands are a programmatic representation of the user’s actions and intentions. Thinking of commands as the building blocks of the application layer can open up a floodgate of interesting possibilities in your DDD architecture and bring a lot of power and flexibility to your design.


 


Command Query Separation

Have a look at your current application for a moment and try to split it into two parts: the ‘read-only’ side and the write side. Notice that I said ‘read-only’. As the name suggests, the read-only side is the part of the application that is responsible for reading data from the database and displaying it on the UI. Nowhere during such a request is it supposed to save any change back to the database. Displaying data in a pageable, sortable grid falls under the ‘read-only’ side of the application. Reports fall under the ‘read-only’ side. These are the requests that do not make any change to the database; in other words, they do not change the state of the system.

The other side of the application, i.e. the write side, persists the changes made by users to the database.

Let’s say you were to architect the read-only and write sides of the application separately, as though they were two different systems. What would some of your design goals be? Try this exercise with your current application and put down a few design goals, in order of priority, for each side.

As far as I am concerned, eighty percent of my users are going to use twenty percent of my application, and that twenty percent consists primarily of the read-only side. Hence, for me, some of the high-priority design goals for the read-only side of my applications would be:

1. Performance
2. Scalability

By no means am I attempting to suggest that the write side of the application is any less important. In my case, it is actually the foundation for the read side and hence is critically important to the success of the system. But it has different design priorities. Since it mostly deals with complex business rules and ensures that data does not get corrupted, the design goals that would top the list are as follows:

1. Data Integrity
2. Maintainability
3. Extensibility
4. Flexibility

Clearly, the architectural needs of the read-only and write sides of my application are different in nature. Is it the same for you? If it is, then the question we should be asking ourselves is whether it justifies applying, or rather imposing, the same architectural patterns on both sides of the application just for the sake of symmetry.

Rich object graphs are ideal for the write side of the application. They result in highly maintainable code. But they start to play nasty in situations where complex joins are needed and high performance is a priority, which is exactly what the read-only side of the application needs. And really, those old-fashioned stored procedures and inline queries work like a charm in this kind of situation. The problem with the stored-procedure and inline-query approach is that it does not provide the same kind of maintainability and data integrity that rich object graphs do. Hence, separating the read-only and write sides of the application and applying different architectural patterns to each can very well be the answer.

Bertrand Meyer in his book “Object Oriented Software Construction” separates an object’s methods into two categories:

Queries: Return a result and do not change the observable state of the system (are free of side effects). In other words, the read-only methods.

Commands: Change the state of a system but do not return a value. The write methods.

Meyer calls this principle Command Query Separation. Applied at an architectural level, this principle leads to a clear segregation of the commands (write operations) from the queries (read-only operations) and lends itself to the flexibility of applying different architectural patterns to the very different design needs of the command and query sides of the application.
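As a small, hypothetical illustration of the principle at the object level, before scaling it up to the architecture:

using System.Collections.Generic;

public class ShoppingCart
{
    private readonly List<string> _items = new List<string>();

    // Query: returns a result and does not change the observable state.
    public int ItemCount()
    {
        return _items.Count;
    }

    // Command: changes the state and returns nothing.
    public void AddItem(string item)
    {
        _items.Add(item);
    }
}

At the architectural level, the same split separates the read-only services (optimized for querying and display) from the write-side services (optimized for enforcing business rules).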

Simple Pattern for Arrange-Act-Assert style of Tests

I came up with a nice little pattern for implementing Arrange-Act-Assert (AAA) style tests using Visual Studio’s MSTest last night. The pattern is ridiculously simple and can be very useful for writing outside-in style tests (sometimes referred to as acceptance tests). By outside-in tests I am referring to those high-level tests that are meant to test, from top to bottom, a slice of business functionality.

At the heart of this simple pattern is an abstract base class that forces the concrete test classes to follow AAA. This base class declares abstract signatures for the Arrange(), Act() and Assert() methods and requires all concrete classes to implement them.

protected abstract void Arrange();
protected abstract void Act();
protected abstract void Assert();

Hence, any class that derives from this base class will have to implement the following:

Arrange – sets up the test data,
Act – performs the action being tested, and
Assert – checks the output of the action for correctness.

The base class also has a method called Run(), which calls Arrange(), Act() and Assert() in the right sequence. This is the actual TestMethod that the test framework will run.


[TestMethod]
public void Run()
{
    Arrange();
    Act();
    Assert();
}

Here is what our base Test-Class would look like,

public abstract class BaseTest
{
    [TestMethod]
    public void Run()
    {
        Arrange();
        Act();
        Assert();
    }

    protected abstract void Arrange();
    protected abstract void Act();
    protected abstract void Assert();
}

And a concrete implementation would look something like this,

[TestClass]
public class When_Customer_Buys_Items : BaseTest
{
    private Cart _cart;
    private Bill _bill;
    private Receipt _receipt;

    protected override void Arrange()
    {
        _cart = new Cart();
        _cart.AddItem("Book", 12.49M, ProductType.Exempted, 1);
        _cart.AddItem("music CD", 14.99M, ProductType.General, 1);
        _cart.AddItem("chocolate", 0.85M, ProductType.Exempted, 1);
    }

    protected override void Act()
    {
        _bill = new Bill(_cart);
        _receipt = _bill.GenerateReceipt();
    }

    protected override void Assert()
    {
        // Testing.Assert is the MSTest Assert class, aliased so that it does not
        // clash with the Assert() template method we are overriding.
        Testing.Assert.AreEqual(_receipt.Items.Count(), 3);
        AssertUtil.AssertLineItems(_receipt, _cart);
        Testing.Assert.AreEqual(_receipt.SalesTax, 1.50M);
        Testing.Assert.AreEqual(_receipt.Total, 29.83M);
    }
}

The neat thing about this pattern is that all the concrete test classes need to be concerned with is arranging the data, acting on that data and asserting the results. The base class takes care of the rest.

Try this out in your project; you would be pleasantly surprised with the structure and discipline this simple pattern brings to the way you write your tests.

Automated Deployment with TFS

One of the few things that can add true agility to the Agile practices you claim to do is automation – “AUTOMATE EVERYTHING YOU CAN!”

In my current project we have successfully automated our application deployment process. To us, deploying the ‘latest’ or an ‘already compiled revision’ of the build to the desired environment (Integration/QA/UAT/Prod etc) is just a click away. This encourages the team to deploy quickly, frequently and fearlessly.

This post takes a look at how to automate your application deployment using Microsoft Team Foundation Server (TFS). I am assuming that you are familiar with TFS and Team Builds. Needless to say, you should have a Build Server (Build Controller and Agents) up and running. We will extend the CI build capabilities of TFS to deploy our builds continuously to the integration environment. Assuming that you have the Build Controller and Agents in place, the first thing we want to do is set up a basic CI build on the Build Server, selecting Continuous Integration as the build trigger. This gives us the out-of-the-box CI build service in TFS.

The CI build definition we created is good enough to compile the code, run tests, deploy databases (if configured) and drop the builds to a specified location every time we check in code. What we want to do here is extend this build definition so that, in addition to all of the above, it also deploys the application to the web server.

Here is how we do it,

1. Right-click the CI build definition in Team Explorer.
2. Select “Process”.
3. Expand the “Advanced” node, if it is not already expanded.
4. In ‘MSBuild Arguments’, provide the following parameters:

/p:DeployOnBuild=True
/p:DeployTarget=MsDeployPublish
/p:CreatePackageOnPublish=False
/p:MSDeployPublishMethod=RemoteAgent
/p:AllowUntrustedCertificate=True
/p:MSDeployServiceUrl=<name of the Server>
/p:DeployIisAppPath="<IIS App>/<Virtual Dir>"
/p:UserName=<username>
/p:Password=<password>
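For illustration only, here is what the complete set of arguments might look like with hypothetical values filled in (the server name, site, application path and credentials below are placeholders, not the actual values from our project):

/p:DeployOnBuild=True
/p:DeployTarget=MsDeployPublish
/p:CreatePackageOnPublish=False
/p:MSDeployPublishMethod=RemoteAgent
/p:AllowUntrustedCertificate=True
/p:MSDeployServiceUrl=webserver01
/p:DeployIisAppPath="Default Web Site/MyApp"
/p:UserName=MYDOMAIN\deployuser
/p:Password=P@ssw0rd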

And, there you go – simple, yet powerful. Try it out in your projects; you might be pleasantly surprised with the results.

No discussion about Agile and build automation is complete without a mention of Continuous Deployment and Continuous Delivery. Continuous Deployment is about shipping every feature developed by the team through the deployment pipeline all the way to production in an automated fashion. Continuous Delivery keeps a human intervention somewhere in the deployment pipeline before making the feature available to the users.

These techniques are really about taking your agile practices to the extreme, both in terms of potential and performance. But even if you don’t intend to get there immediately, it is hard to see why you wouldn’t start automating the things you need to do as often and as accurately as possible – build deployment definitely being one of them.

BDD with SpecFlow

Behavior Driven Development (BDD) is about defining the system as a collection of behaviors and then letting these behaviors drive the rest of the development work. Building on my previous post, this post dives deeper into the concepts of BDD. In order to understand BDD better and see some of the concepts around it in action, we will use a popular tool called SpecFlow from TechTalk. Tools like SpecFlow facilitate communication between business experts and developers by giving them a common platform to define executable system behaviors.

Before we jump into SpecFlow, let us get some of the terms and nomenclature around this tool straightened out. We will be using some of these terms over and over again in the course of this post. Two such very important terms are Feature and Scenario.

Feature – a logical unit of functionality that has business value. For example, the ability to add a customer to the system is a feature, and the ability to send an email with the product catalog to a customer can also be thought of as a feature. Creation of the “CUSTOMER_MASTER” table in the database is not a feature.

Scenarios – the different conditions around a feature and the system’s expected behavior under those conditions. For example, if the customer is successfully added to the system, the user should be taken to the customer-list page. If the customer is not successfully added, the user should remain on the same page and the reason for the failure must be displayed.

In order to understand the idea of Feature and Scenario better, let us use an overly simplified feature as an example. Let us assume that we want to build a feature called “Customer Addition”. As the name suggests, we intend to develop a Feature that allows the user to add a customer to the system. Let’s say we come up with two scenarios: first, when the customer is successfully added to the system, and second, when the customer is not successfully added.

SpecFlow uses an English-like language called Gherkin to write these Features and Scenarios. Gherkin is a DSL (Domain Specific Language) and hence can be understood by business folks and SMEs quite easily. Gherkin uses a .feature file to specify a Feature. The Feature keyword marks the beginning of a feature. For more details on the Feature syntax, see this link. The text that follows the keyword is descriptive in nature and is used to describe what the feature is supposed to do. It need not necessarily follow a particular convention or pattern, though SpecFlow recommends the following template.

In order to [reason]
As a [role]
I want to [functionality]

A Feature typically consists of one or more Scenarios. The scenarios must follow the Given-When-Then syntax,

Given – precondition or the state of the System (Under Test) prior to the action
When – action taken
Then – result or the state of the System (Under Test) after the action

Okay! Back to our “Customer Addition” feature. SpecFlow installs a few templates in Visual Studio. One such template is called SpecFlow Feature File and has the .feature extension. Let us create our feature using the SpecFlow Feature File template and name it CustomerAddition.feature. In light of the conventions discussed earlier in the post, the feature should look something like this,

Feature: Add a Customer
In order to maintain the customer's information
As the authorized user of the system
I want to add a customer to the system

Scenario: Customer Added successfully
Given the following customer
| Name | Phone        | Address                         |
| Rob  | 678-826-6675 | 514 M Circle, Atlanta, GA 30328 |
When I click "Add"
Then go to the customer-list page and display the added customer

Scenario: Customer not Added successfully
Given the following customer
| Name | Phone        | Address               |
| Jane | 678-826-6675 | 120 Park Dr, GA 30300 |
When I click "Add"
Then stay in the same page and display the error

What is actually happening here is that every time the feature file is saved, SpecFlow parses it and generates a code-behind file for it. Under the hood, all this generated code essentially does is call a few methods (tests) for the Given, When and Then statements. If we go ahead and try to run our feature file (using MSTest or NUnit) at this point, we will see SpecFlow complaining that it cannot find a matching step definition for the Given, When or Then statements.

   
No matching step definition found for the step. Use the following code to create one:

[Binding]
public class StepDefinitions
{
    [Given(@"the following customer")]
    public void Given(Table table)
    {
        ScenarioContext.Current.Pending();
    }
}

Let’s go ahead and add a SpecFlow Step Definition file from Visual Studio. A step definition file tells the runtime which method (step) should be executed for a Given, When or Then statement. This wiring up is done by the [Binding] attribute that decorates the step definition class. If our step definition class were called CustomerAdditionStep, then using the generated code from the error message we can create a class that looks like the following,

   
[Binding]
public class CustomerAdditionStep
{
    [Given(@"the following customer")]
    public void Given(Table table)
    {
        // ...
        // Implement the tests and asserts here
    }
}
 

We can write tests and assertions inside these methods and automate a business scenario through these step definitions.
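For completeness, here is a rough sketch of what the full set of bindings for the first scenario might look like. The When step here only fakes a successful addition to keep the sketch self-contained; in a real test it would drive the actual application (a service call, a page object, and so on):

using Microsoft.VisualStudio.TestTools.UnitTesting;
using TechTalk.SpecFlow;

[Binding]
public class CustomerAdditionStep
{
    private string _name;
    private bool _added;

    [Given(@"the following customer")]
    public void GivenTheFollowingCustomer(Table table)
    {
        // Read the single row of the Gherkin data table.
        _name = table.Rows[0]["Name"];
    }

    [When(@"I click ""Add""")]
    public void WhenIClickAdd()
    {
        // Placeholder for the real application call being exercised by the scenario.
        _added = !string.IsNullOrEmpty(_name);
    }

    [Then(@"go to the customer-list page and display the added customer")]
    public void ThenDisplayTheAddedCustomer()
    {
        Assert.IsTrue(_added);
    }
}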

This is a very simple demonstration of how a tool (SpecFlow in this example) can be used to provide a common platform where both the business and the development folks can collaborate, develop a shared understanding of the system and then automate that shared understanding. In other words, SpecFlow allows the devs and the BAs to collaborate and define executable behaviors for the system (in terms of scenarios) and then let this set of behaviors drive the development of the rest of the system.

Get Your TDD Right with BDD

If you have been using TDD (Test Driven Development), or even maintaining unit tests at some level, then you have probably already been using these tests as a means to express business requirements. Writing unit tests before writing the actual implementation code is undoubtedly a very good exercise for brainstorming and understanding the problem before making an attempt to solve it. But for the developers who write these tests, there is always a chance of missing something of business importance or misinterpreting a business requirement. In other words, though these tests might be good for many things, they hardly make any attempt to bridge the age-old gap between business needs and code development.

Another ailing thing about TDD is the name itself. ‘Test’ driven development is hardly about testing; it has never been about doing the tester’s job. Many people would say TDD is, and has always been, about good design. I do not completely agree with that either. Good design is a very good by-product of TDD, but it is not TDD’s primary purpose. The very name and the nomenclature around it (words like Test, Assertion, etc.) influence the brain (Sapir–Whorf hypothesis) and compel it to think that TDD has something to do with testing.

TDD also requires a fundamental shift in focus as a developer. For decades we developers have been thinking about grand software designs up front: creating the databases, the data-access classes, the business facades, reusable components and what not, before addressing the actual business needs in small bits. TDD done right demands a drastic change in this mindset. It requires the developer to think about the business needs first, pick a very tiny slice that still has business value, and write only as much code as is required to implement that piece of business value. Really, this might not be the most intuitive thing to do for most developers. Moreover, this ‘no up-front design’ idea (though I do not completely agree with it) does not work in the best interest of someone like me who sells themselves as an ‘Architect’. Pun intended!

No wonder so many teams struggle to get TDD right.

Due to some of these inherent problems with TDD, and the huge gray area without sufficient guidelines on what and how much to test, people like Dan North came up with the idea of BDD, a.k.a. Behavior Driven Development. The idea behind BDD is pretty simple: provide a common tool and guideline to both the business and development folks. This tool/guideline allows the business analysts and SMEs to write the business requirements (in a specified format) using an English-like language. They write the different business scenarios around the requirement; in other words, they define the ‘behavior’ of the system. The same tool can then generate, or at least guide developers in writing, the test cases for these scenarios. The developers implement these test cases one by one and make all the tests go green. And TADA! In a perfect world, you have a business requirement translated into working code.

With BDD the point really is to be able to write tests that are more relevant. Tests that matter. Tests that are derived right out of the business requirements and scenarios, coming straight from the horse’s mouth. BDD is about putting some more structure and discipline into the Test Driven practices you already claim to do.

Tale of a Pragmatic Team

Take the world as it is, not as it ought to be
                                                                                                  — German Proverb

With every passing year in the world of software development, I find myself more and more convinced that every project and every team is unique. They have unique situations and requirements. One should accept and adopt tools and processes only to the extent that they fit the personality and culture of the project team and cater to the needs of the project. Let me share a tale of a team that I worked with in recent times.

The team was maintaining a suite of unit tests around the code. Though the code coverage was decent, the execution coverage was really poor: if you ran the test suite, at any given point, many if not most tests would fail. Not a happy situation to be in, especially if you want to leverage the benefits of unit tests. One of the most important benefits of having unit tests around your code is that they give you the courage and the confidence to refactor your code fearlessly. You should be able to make a change in the code and run the tests to make sure everything is alright. And this is what the team was badly missing with its tests. Any “red” test that we came across could not clearly tell us whether the test failed because of bad data or because the functionality was actually broken. We dug into each and every unit test. To our relief, we found that almost all the failing tests were failing because of bad test data and not broken logic. Believe it or not, this is the case with many teams claiming to maintain unit tests.

We fixed the test data, created dummies, mocks and stubs where required, and made the test suite pass one hundred percent. We would set up data before running the tests and undo the changes to the database (or other state) after the tests ran. Hence, the test suite always remained automated.

The next logical question was how we were going to make sure that the tests we had fixed would not run into the same issue in the future. The answer was obvious – Continuous Integration (CI). With Microsoft’s Team Foundation Server (TFS), it was fairly easy to set up the build for Continuous Integration. Once we configured the CI, we realized that the builds were heavy and slow, as many of these tests talked to the database. It was turning out to be overkill for the return we were getting. Again, we followed the “horses for courses” strategy. The pragmatic solution that we all agreed upon was that instead of queuing a build at every check-in, we would trigger the test run only in the nightly builds. That way we were sure that the code integrated at least once every day. Something we happily lived with.

The idea behind telling this story is that tools, processes and best practices are not meant to be followed like commandments. One should certainly look out for and be aware of what is working for most teams out there, and borrow the ideas as and when necessary, but should not impose them on the team. They should be accepted according to the team’s comfort and adapted to the team’s needs.

As you would have already understood, what we followed was not even close to being called TDD. We just maintained unit tests. But at the end of the day we delivered, and delivered well. Isn’t that all that matters? Isn’t it more important to do what works than what is “right”? After all, isn’t it true that there is no single key for every lock?