Messaging as a programming model Part 2

This topic has generated a lot of interest, more than I could have imagined beforehand. Thanks to everyone for all the feedback, good, bad and indifferent, it’s much appreciated.

This post is part two of two on using messaging techniques within your everyday in-process applications. You can find the first part here.

A Quick Recap

All object oriented applications have an object model of sorts, though in my experience it's usually an accidental model rather than one that was designed. Not many applications require a domain model, but those that do have Domain Driven Design and, more recently, CQRS and Event Sourcing as tools, techniques, and methodologies to help developers discover and build useful models of the business domain in which they work. Plain object models tend to have nothing but the standard four tenets of object orientation to fall back on, usually in the context of CRUD applications. Whilst in theory this should be enough, these types of applications are typically large and messy from the get go and steadily grow worse over time. It would be nice if there were a few more techniques that developers could reach for to help them produce better object models when working on these projects. Messaging is one such technique.

Messages are explicit concepts in code that can be acted upon in many ways, enabling useful functionality that is ordinarily quite difficult to implement, especially in those areas that have to span the entire application. These are known as cross-cutting concerns and include things such as logging, transaction handling, and retry logic around certain operations. The naive approach sees developers littering their methods with Log("whatever") statements and repeating them all over the codebase. There is, though, a better way once we've adopted the messaging mindset: Aspect Oriented Programming. Because we're using the Pipes and Filters pattern we're in a good position to tackle those concerns, but first I'm going to dig deeper into the PipeLine class mentioned in part one and later flesh out a few simple alternatives.

The PipeLine

At last we come to the implementation of the PipeLine class. I know you've been wondering how you'll cope with the sheer complexity of it; I bet you've even had a few sleepless nights waiting for this moment. Well, here it is in all its glory:


public class PipeLine<T>
{
    private readonly List<Action<T>> _actions = new List<Action<T>>();

    public void Execute(T input)
    {
        _actions.ForEach(ac => ac(input));
    }

    public PipeLine<T> Register(Action<T> action)
    {
        _actions.Add(action);
        return this;
    }
}

Wow, complicated right? er…no, actually. It’s probably as simple as one could hope for.

So, we have a private list of Action of T with T being the type of Message that you declare when creating an instance of the PipeLine.


var pipeline = new PipeLine<LogInMessage>();

The Register method simply adds the callback you supply to the internal list and then returns itself through the this reference allowing the pipeline to be built by chaining the Register methods together:


var loginPipeline = new PipeLine<LogInMessage>();

loginPipeline.Register(msg => new CheckUserSuppliedCredentials(msg))
             .Register(msg => new CheckApiKeyIsEnabledForClient(msg))
             .Register(msg => new IsUserLoginAllowed(msg))
             .Register(msg => new ValidateAgainstMembershipApi(msg))
             .Register(msg => new GetUserDetails(msg));

Each callback of course is now constrained, in this example, to work only with LogInMessage so it stands to reason that with this pipeline instance we can only use filters that take a LogInMessage parameter.

Having built the pipeline we call the Execute method to have the filters invoked one by one. As you can see, we simply iterate over the list with the List&lt;T&gt;.ForEach method, passing the supplied message (input) to each callback:


public void Execute(T input)
{
    _actions.ForEach(ac => ac(input));
}

But Wait..There’s More

Whilst that’s all you need at the most basic level there are other questions that arise. How can we stop the pipeline from processing without having to throw an exception? What do we do if an exception does occur in one of the filters, how do we handle it? How can we make our pipeline execute asynchronously or make it execute in the context of a transaction? Because we’ve implemented the Pipe and Filter pattern all these questions can be answered easily by implementing some simple Aspects that we can wrap around our pipeline and we don’t need any fancy frameworks to do it.

AOP

Wikipedia supplies the following definition for Aspect Oriented Programming:


"aspect-oriented programming (aop) is a programming paradigm that
aims to increase modularity by allowing the separation of 
cross-cutting concerns"

It’s amazing how often we come across code that mixes the responsibility of the object at hand with other concerns such as logging. It makes code harder to reason about, methods larger, and more likely to break when changes are made. Extracting out these behaviours can be a difficult task when code is poorly structured and more often than not no effort is made to extract them. Some inversion of control libraries such as Castle Windsor have support for AOP built in using reflection based libraries like Dynamic Proxy but with our pipeline we don’t need any of that. In fact, again like everything we’ve seen so far, it’s almost trivial to implement ourselves. So let’s start answering some of those questions and a few more besides.

Exception Logging

If we wanted to, we could simply add a try..catch block around the ForEach iteration in the PipeLine implementation above and bingo, exception handling implemented. Job done, right? Not quite. We don't always want to handle exceptions the same way. Sometimes we may want to only log an exception; other times we may want to log it and roll back a transaction. Maybe we want to retry a set number of times and then only log it if we still fail after all the retries have happened. Sounds like a difficult undertaking, right? Not at all, thanks to AOP.

Here’s an aspect we can use to log exceptions for us:


public class ExceptionLoggingAspect<T>
{
    public ExceptionLoggingAspect(Action<T> action)
    {
        Handle = action;
    }
	
    public void Execute(T input)
    {
        try
        {
            Handle(input);
        }
        catch (Exception ex)
        {
            if (Log.IsErrorEnabled)
                Log.Error("*** ERROR ***", ex);
        }
    }
	
    private Action<T> Handle { get; set; }

    private static readonly ILog Log = LogManager.GetLogger("MyLog");
}

Before explaining how this works let’s see it in use first. In part one I showed the Invoke method of a UserService class and said that I would flesh it out later. Well now we can do this using our aspect:


public void Invoke(LogInMessage input)
{
    var errorHandler = new ExceptionLoggingAspect<LogInMessage>(_loginPipeline.Execute);
    errorHandler.Execute(input);
}

Here we’ve wrapped the Execute method of our pipeline by passing it into the constructor of an ExceptionLoggingAspect which immediately assigns it to the private Handle property. Notice that the constructor and Handle property expect an Action of T, in other words a callback that uses our Message type, whatever that may be. When we invoke Execute on the aspect it in turn invokes the method assigned to the Handle property, again passing through our message, and does so wrapped inside a try..catch block. Should an exception occur now inside one of our filters the aspect will catch the exception and let us log it. In the example shown I’m assuming a dependency on Log4Net but it can be whatever you like.

The upshot of this approach is that we can now decide whether we want an exception to be caught and logged by using composition, rather than having it forced on us as it would have been had we baked it into the plain vanilla PipeLine implementation.

General Logging

Notice that the ExceptionLoggingAspect only logs errors according to the logging level as defined in our Log4Net configuration. It does so by testing the IsErrorEnabled property. What if we want to be able to turn on logging so that we get to write out the message name as and when the user executes various pipelines within our application? Maybe we want to see how the user is using the application by looking at what messages they execute. For that we just need to define an ordinary message logging aspect:


public class MessageLoggingAspect<T>
{
    public MessageLoggingAspect(Action<T> action)
    {
        Handle = action;
    }
	
    public void Execute(T input)
    {
        if(Log.IsDebugEnabled)
            Log.DebugFormat("Message Received: {0}", input.GetType().Name);

        Handle(input);               
    }
	
    private Action<T> Handle { get; set; }

    private static readonly ILog Log = LogManager.GetLogger("MyLog");
}

and use it like so:


public void Invoke(LogInMessage input)
{ 
    var logHandler = new MessageLoggingAspect<LogInMessage>(_loginPipeline.Execute);
    var errorHandler = new ExceptionLoggingAspect<LogInMessage>(logHandler.Execute);
    errorHandler.Execute(input);
}

Each time we compose the aspects we end up creating a chain of method calls, as each method we pass in is assigned to the current aspect's Handle property. So calling Execute on the errorHandler invokes Execute on the logHandler, which finally invokes Execute on the pipeline itself. As long as the DEBUG logging level is enabled in our Log4Net configuration file, each time we call the Invoke method our message's class name will be written to the log file. Should an error occur we will also capture the exception in the log file. If we were to keep the log level at ERROR in our Log4Net config file then our MessageLoggingAspect would still be invoked; it just wouldn't write anything to our file because we've told Log4Net to only log errors.
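Another way to picture this chain is as plain nesting. This is just an illustrative sketch of what the composition above boils down to, not an alternative API:


var chain = new ExceptionLoggingAspect<LogInMessage>(
                new MessageLoggingAspect<LogInMessage>(
                    _loginPipeline.Execute).Execute);

// error logging wraps message logging, which wraps the pipeline
chain.Execute(input);

Reading it inside out gives you the execution order: the outermost aspect runs first on the way in and is the last to see any exception on the way out.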

Okay, let's keep building this up to handle other cross-cutting concerns.

Automatic Retries

One of my favourite frameworks, NServiceBus, has a nice retry feature that kicks in whenever an exception is thrown whilst it is processing a message on a queue. It will, depending upon how you configure it, attempt to process the message over and over until a set number of retries have been attempted. Wouldn't it be cool if we could do that too?


public class RetryAspect<T>
{
    public RetryAspect(Action<T> action)
    {
        Handle = action;

        // ordinarily these would come from a config file
        _maxRetries = 3;
        _slideTime = 5000;
    }

    public void Execute(T input)
    {
        try
        {
            Handle(input);
        }
        catch (Exception)
        {
            _currentRetry++;
            if (_currentRetry <= _maxRetries)
            {
                Thread.Sleep(_slideTime * _currentRetry);
                Execute(input);
            }
            else
            {
                throw;
            }
        }
    }

    private Action<T> Handle { get; set; }

    private readonly int _maxRetries;
    private int _currentRetry = 0;
    private readonly int _slideTime;
}

At last, some code with a bit of meat on it. Here we have an aspect that will retry an operation if an exception is thrown, by recursively calling the Execute method on itself until the maximum number of attempts has been reached. Notice that the retry operation waits a little longer each time a retry is needed, to give any infrastructure problem, such as a momentary drop-out on the network, a chance to recover. In this case the first retry waits for 5 seconds, the second for 10 seconds, and the third for 15 seconds before finally re-raising the exception. I've hard-coded the values here but ordinarily you'd read these from a config file to allow changes without recompiling.

Notice too that I'm using Thread.Sleep here. This is definitely more controversial as, in general, there shouldn't really be a reason to use it, but I'm lazy! There are issues around the various .NET Timer classes, such as callbacks being executed on a different thread or exceptions being swallowed. As the whole point of this aspect is to retry on the same thread when an exception is thrown, I chose to implement it this way for now. Most of the time I use the retry aspect in conjunction with an asynchronous aspect anyway, so sleeping a ThreadPool thread shouldn't really be a problem. Feel free to argue or provide your own implementation though.

Let’s build up the Invoke method again:


public void Invoke(LogInMessage input)
{ 
    var retryHandler = new RetryAspect<LogInMessage>(_loginPipeline.Execute);
    var logHandler = new MessageLoggingAspect<LogInMessage>(retryHandler.Execute);
    var errorHandler = new ExceptionLoggingAspect<LogInMessage>(logHandler.Execute);
    errorHandler.Execute(input);
} 

Now we get automatic retries for little effort, along with message logging and exception logging. I've used this technique when integrating with a certain UK based delivery courier whose web service is, to put it politely, sometimes unreliable. However, a few seconds after a failed call I usually get success. This technique comes in very handy for that kind of scenario.

Transactions

I think you might be getting the idea by now so I’ll just show the code for transaction handling:


public class TransactionAspect<T>
{
    public TransactionAspect(Action<T> action)
    {
        Handle = action;
    }

    public void Execute(T input)
    {
        using(var scope = new TransactionScope())
        {
            Handle(input);
            scope.Complete();
        }		
    }
	
    private Action<T> Handle { get; set; }
}


public void Invoke(LogInMessage input)
{ 
    var retryHandler = new RetryAspect<LogInMessage>(_loginPipeline.Execute);
    var logHandler = new MessageLoggingAspect<LogInMessage>(retryHandler.Execute);
    var tranHandler = new TransactionAspect<LogInMessage>(logHandler.Execute);
    var errorHandler = new ExceptionLoggingAspect<LogInMessage>(tranHandler.Execute);
    errorHandler.Execute(input);
} 

As expected, it’s very similar to all the previous aspects. The only thing to take note of here is that I inserted the tranHandler instance between the logHandler and errorHandler. This is to ensure that should an exception occur the transaction won’t commit. If I had it ordered like so:


public void Invoke(LogInMessage input)
{ 
    var retryHandler = new RetryAspect<LogInMessage>(_loginPipeline.Execute);
    var logHandler = new MessageLoggingAspect<LogInMessage>(retryHandler.Execute);
    var errorHandler = new ExceptionLoggingAspect<LogInMessage>(logHandler.Execute);
    var tranHandler = new TransactionAspect<LogInMessage>(errorHandler.Execute);
    tranHandler.Execute(input);
} 

then a raised exception would be logged by the ExceptionLoggingAspect, but as it is not re-thrown it counts as handled, so the transaction would commit, which is not the behaviour we want. Sometimes the order of our aspects matters; other times it makes no difference.

Asynchronous Execution

Sometimes we want our pipeline to execute on a background thread. You may remember the example I gave for a WinForms batch import application in part one. Running long operations on the UI thread is expensive and blocks the UI from responding to a user’s input. Once again though providing an aspect to wrap our pipeline solves the problem easily:


public class AsyncAspect<T>
{
    public AsyncAspect(Action<T> action)
    {
        Handle = action;		
    }

    public void Execute(T input)
    {
        ThreadPool.QueueUserWorkItem(i => Handle(input));
    }

    private Action<T> Handle { get; set; }
}

and here is our Invoke method:


public void Invoke(LogInMessage input)
{ 
    var retryHandler = new RetryAspect<LogInMessage>(_loginPipeline.Execute);
    var logHandler = new MessageLoggingAspect<LogInMessage>(retryHandler.Execute);
    var tranHandler = new TransactionAspect<LogInMessage>(logHandler.Execute);
    var errorHandler = new ExceptionLoggingAspect<LogInMessage>(tranHandler.Execute);    
    var asyncHandler = new AsyncAspect<LogInMessage>(errorHandler.Execute);
    
    asyncHandler.Execute(input);
} 

Whilst the message itself is mutable, it is not shared. Each thread can safely modify its own message without affecting any others, but that doesn't stop you from accessing some shared resource in one of your filters, so you still need to be careful about how you write multi-threaded code.

Authorization

Authorization is another one of those pesky cross-cutting concerns, yet it's trivial to handle with aspects. This one checks the current principal assigned to the thread, which you may have set during a log in operation. If the returned GenericIdentity (or WindowsIdentity) has not been authenticated then we simply throw an exception and stop the pipeline from being processed.


public class AuthenticationAspect<T>
{
    public AuthenticationAspect(Action<T> action)
    {
        Handle = action;
    }

    public void Execute(T input)
    {
        var identity = Thread.CurrentPrincipal.Identity;

        if (identity.IsAuthenticated)
        {
            Handle(input);
        }
        else
        {
            throw new UnauthorizedAccessException(
                "Unable to authenticate. You are not authorised to perform this operation");
        }
    }

    private Action<T> Handle { get; set; }
}

and now our final Invoke method implementation:


public void Invoke(LogInMessage input)
{ 
    var retryHandler = new RetryAspect<LogInMessage>(_loginPipeline.Execute);
    var logHandler = new MessageLoggingAspect<LogInMessage>(retryHandler.Execute);
    var tranHandler = new TransactionAspect<LogInMessage>(logHandler.Execute);
    var errorHandler = new ExceptionLoggingAspect<LogInMessage>(tranHandler.Execute);    
    var asyncHandler = new AsyncAspect<LogInMessage>(errorHandler.Execute);
    var authHandler = new AuthenticationAspect<LogInMessage>(asyncHandler.Execute);

    authHandler.Execute(input);
} 

If we ignore the fact that this example happens to show all these aspects wrapping a login pipeline (no I wouldn’t do this, pretend it’s something else!) then we can see that a lot of potentially complicated requirements can be fulfilled in a trivial manner all because we adopted explicit messaging. Now our pipeline requires that the user must be authenticated before we start processing on a background thread, in the context of a transaction, with error logging, message logging, and automatic retries.

Okay, I know I keep banging on about messages, pipes, and filters, but let's nail it down because it's worth emphasising. The reason we get so many benefits out of relatively trivial code is that we only have one parameter: the message passed to the pipeline via the Execute method. In our usual OOP/procedural hybrid approach we tend to find lots of methods all requiring a different number of parameters, which makes it very difficult to provide a consistent, uniform way of handling them. When you adopt the messaging programming model, whether you use Pipes and Filters or Message Routers or what have you, the fact that you are passing around just one concept opens the door to numerous possibilities, like the various aspects shown here.
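As an aside, if the nested construction inside Invoke starts to feel noisy, that uniform single-parameter shape makes a small fluent helper possible. The WrapWith extension method below is my own sketch, not part of the code above, but it composes any of the aspects shown here because they all accept an Action of T and expose a matching Execute method:


public static class AspectExtensions
{
    // wraps an existing handler in the given aspect factory,
    // returning the new outermost handler
    public static Action<T> WrapWith<T>(this Action<T> inner,
                                        Func<Action<T>, Action<T>> aspect)
    {
        return aspect(inner);
    }
}

// usage: innermost handler first, outermost aspect last
Action<LogInMessage> handler = ((Action<LogInMessage>)_loginPipeline.Execute)
    .WrapWith(h => new RetryAspect<LogInMessage>(h).Execute)
    .WrapWith(h => new MessageLoggingAspect<LogInMessage>(h).Execute)
    .WrapWith(h => new ExceptionLoggingAspect<LogInMessage>(h).Execute);

handler(input);

Whether this reads better than the explicit variables is a matter of taste; the point is that the composition is mechanical precisely because every aspect has the same signature.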

Loose Ends

I’m going to conclude by showing one or two other techniques you can choose to adopt if you have a need. I don’t always use these techniques and there are no doubt numerous different ways of achieving the same results but they’re worth mentioning for reference.

Stop message processing without throwing an exception

The first is one of those questions I asked earlier: how do you stop the pipeline from continuing to process messages when you don't want to raise an exception? Maybe you consider throwing an exception bad form unless the circumstances are truly exceptional. Perhaps you just want to drop a message in a given scenario, in effect making the operation idempotent. One way would be to declare an interface along the following lines:


public interface IStopProcessing
{
    bool Stop { get; set; }
}

and modify the PipeLine implementation to only allow messages that implement it:


public class PipeLine<T> where T : IStopProcessing
{
    private readonly List<Action<T>> _actions = new List<Action<T>>();

    public void Execute(T input)
    {
        _actions.ForEach(ac =>
        {
            if (!input.Stop)
                ac(input);
        });
    }

    public PipeLine<T> Register(Action<T> action)
    {
        _actions.Add(action);
        return this;
    }
}

This extra constraint on the message means that at any point in the pipeline a filter can set the Stop property on the message to true and no other filters will ever be invoked from that point onward.

This example is the same as the one from part one, checking that a user has supplied a valid username and/or password, but this time we don't throw exceptions:


public class CheckUserSuppliedCredentials
{
    public CheckUserSuppliedCredentials(LogInMessage input)
    {
        Process(input);
    }

    private void Process(LogInMessage input)
    {
        if(string.IsNullOrEmpty(input.Username) || 
                  string.IsNullOrEmpty(input.Password))
        {
            input.Stop = true;
            input.Errors.Add("Invalid credentials");
        }
    }
}

The only other addition here is an “Errors” property on the message that you add to when stopping the pipeline so that the caller can interrogate it later.
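For completeness, here's a sketch of what the message might look like with both additions in place; the properties beyond Stop and Errors are hypothetical and would be whatever your filters actually need:


public class LogInMessage : IStopProcessing
{
    public LogInMessage()
    {
        Errors = new List<string>();
    }

    public string Username { get; set; }
    public string Password { get; set; }

    // set by a filter to short-circuit the rest of the pipeline
    public bool Stop { get; set; }

    // populated alongside Stop so the caller can see why it stopped
    public List<string> Errors { get; private set; }
}

After Execute returns, the caller simply checks msg.Stop and, if set, reports the contents of msg.Errors.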

Guarantee filter execution even when an exception is thrown

Sometimes when an exception is thrown we use the try..catch..finally construct or even just try..finally to ensure some resource or other is cleaned up. How can we do this so that some filters are guaranteed to run no matter what? Yet again, it’s just a variation of our standard pipeline implementation:


public class PipeLine<T>
{
    private readonly List<Action<T>> _actions = new List<Action<T>>();
    private readonly List<Action<T>> _finallyActions = new List<Action<T>>();

    public void Execute(T input)
    {
        try
        {
            _actions.ForEach(ac => ac(input));
        }
        finally
        {
            _finallyActions.ForEach(ac => ac(input));
        }
    }

    public PipeLine<T> Register(Action<T> action)
    {
        _actions.Add(action);
        return this;
    }

    public PipeLine<T> RegisterFinally(Action<T> action)
    {
        _finallyActions.Add(action);
        return this;
    }
}

Here we've added a second list to hold the callbacks that must be executed no matter what. You just register those that should always run using the new RegisterFinally method, and as you can see in the Execute method, when the finally block is entered those "finally" filters are executed one by one.
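As a hypothetical example (the filter names here are made up for illustration), a batch import pipeline could guarantee that a file lock is released even if one of the import steps throws:


var importPipeline = new PipeLine<BatchImportMessage>();

importPipeline.Register(msg => new ReadImportFile(msg))
              .Register(msg => new ValidateRows(msg))
              .Register(msg => new WriteRowsToDatabase(msg))
              .RegisterFinally(msg => new ReleaseFileLock(msg));  // always runs

Note that any exception still propagates to the caller (or to a wrapping aspect) after the finally filters have run, exactly as with an ordinary try..finally block.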

Debug the current state of the message as it passes through the pipeline

Looking at the message after it has been modified step by step (or filter by filter) is easy enough if so desired. Just create yourself a filter that will print out the current state of the message to the Console or wherever you like:


public class DebugFilter<T>
{
    public DebugFilter(T input)
    {
        Console.WriteLine("Message State: {0}", input);
    }
}

This filter is different from all the others we've seen so far: it's a generic filter, so we can reuse it with any message. The only other requirement is that your message overrides its ToString method to return its values as a formatted string. Now we can insert our filter at various points in the chain when registering our pipeline filters:
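For example, a ToString override on LogInMessage might look something like this sketch (I'm assuming the Errors property from the previous section, and deliberately leaving the password out of the output):


public override string ToString()
{
    // a readable one-line summary of the message's current state
    return string.Format("Username: {0}, Stop: {1}, Errors: [{2}]",
        Username, Stop, string.Join(", ", Errors.ToArray()));
}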


var loginPipeline = new PipeLine<LogInMessage>();

loginPipeline.Register(msg => new CheckUserSuppliedCredentials(msg))
             .Register(msg => new DebugFilter<LogInMessage>(msg))             
             .Register(msg => new CheckApiKeyIsEnabledForClient(msg))
             .Register(msg => new DebugFilter<LogInMessage>(msg))
             .Register(msg => new IsUserLoginAllowed(msg))
             .Register(msg => new DebugFilter<LogInMessage>(msg))             
             .Register(msg => new ValidateAgainstMembershipApi(msg))
             .Register(msg => new DebugFilter<LogInMessage>(msg))
             .Register(msg => new GetUserDetails(msg))
             .Register(msg => new DebugFilter<LogInMessage>(msg));

Now we get to see the state of the message at each point on its journey through the pipeline which can be a useful debugging aid though I wouldn’t necessarily insert one after each and every step – this is just for show.

Finally…

At last I’m done. To sum up, a lot of applications don’t need a domain model but care should still be taken to create a good object model. Unfortunately the majority of today’s applications don’t really have an explicit architecture or object model. The lack of a well-defined structure tends to make these applications hard to reason about, increases their complexity, and makes it hard to safely make changes.

Making a conscious decision to implement Messaging as a first-class concept in your application not only gives you the ability to easily implement some otherwise hard to achieve features but also provides a uniform and consistent structure that everyone on the team should be able to understand and work with. It’s easy to compose tasks or use cases by chaining together more fine grained responsibilities in the shape of filters, and as a consequence makes unit testing relatively easy too. It also reduces the proliferation of parameters that often have to be propagated down the layers each time some extra piece of data is required on existing methods.

The pipe and filter pattern has been around a long time but its use has often been limited to distributed messaging and message queues. By bringing the pattern into your normal in-process application code you open yourself up to learning to build your software in a more “functional” way whilst at the same time managing complexity and introducing some much needed structure. The best part is you don’t have to introduce any complex frameworks or dependencies to do it yet the benefits are real and tangible.

I’ve had a lot of success with this programming style in recent times and have applications in production using it. They are some of the easiest applications to work on, tend to be more reliable and, in my experience, less prone to introducing unexpected side effects due to their highly modular nature. It makes sense where there are a lot of procedural, sequential steps to invoke.

To re-iterate what I said at the beginning of part one, it’s not the only way to achieve modular code nor perhaps the best (no silver bullets, etc) but I do believe it produces a consistent, easy to read structure, something that is often lacking in modern enterprise applications. I also wouldn’t recommend building an entire application this way, but as I’ve said before, you can use it in tandem with normal OO techniques. Unfortunately in C# we have to use classes for the filters/steps of our pipeline. It would be nice if we could create functions without needing a class but I guess the answer to that would be to move to a functional language like F# which isn’t practical for most developers in the enterprise workplace.

All that said I think making concepts explicit in the form of messages has real merit and I’d encourage you to give it a go, play around with it and see if it’s something you could benefit from in your applications too.


AOP With Castle Windsor

Aspect Oriented Programming is a technique that, in my opinion, all developers should at least be aware of, yet in my experience most that I have spoken with have not heard of it, let alone used it. Maybe it gets glossed over because it seems like an alternative to OOP rather than a complement to it? Or perhaps it looks a little complex and it's hard to see where it could be useful? I don't know, but I do know it can greatly simplify a codebase by separating the business logic from the cross-cutting concerns. But what's a cross-cutting concern?

Most developers, even if not familiar with AOP, at least understand cross-cutting concerns. Logging is the traditional example. It’s generic code that every app needs and it doesn’t belong to one particular layer but is used in many different layers. It’s also code that we seem to duplicate over and over in method after method. It becomes tedious to write, a chore to maintain, and it gets in the way of the real intent of the method the reader is currently focused on. It’s also a prime candidate for AOP.

Other examples include transaction management and authorisation. Every time we communicate with our data layer we need to begin a transaction, call our data method then commit or rollback. AOP can help us here as it can with authorisation. Instead of tediously writing the same code to check that the current user is authorised to call the method we can use aspects to do the work for us.

So the next question is, what is an aspect?

Aspects are units of code that can be applied to other parts of our codebase in a non-intrusive manner. They give us the ability to write code once but have it applied in many places. The code that the aspect affects becomes a lot simpler as the “noise” has been moved to a more suitable place. Both the code and the aspect gain a clear separation of concerns. The aspect concentrates on providing the cross cutting concern whilst the method it affects concentrates on the business logic.

AOP comes in a couple of different flavours: IL code weaving is one (see PostSharp for this method and also a nice video presentation here), and interception is the other, which is the method used by Castle Windsor and what I'm going to show here. IL code weaving is a post-compilation process whereby the IL is altered to have the aspects "weaved" into the dll. It has the potential for more powerful use cases than interception, which is applied dynamically, but often interception gives us a great deal and Castle Windsor makes it easy.

When creating our own aspects we need to implement a single interface – IInterceptor. My own convention is to name my classes with a post-fix of Aspect so that anyone reading my code would immediately recognise the concept. So let’s take the Logging example:

public class LoggingAspect : IInterceptor
{
      public void Intercept(IInvocation invocation)
      {
            invocation.Proceed();
      }
}

Here we've created a class that will (but doesn't yet) do some logging for us. We've implemented the IInterceptor interface and so done the minimum possible to compile. The IInvocation parameter represents the method that does the real work, i.e. implements our business logic. Calling the Proceed method invokes it. We can do logging before or after our method is called depending on our needs. So if, for instance, we use Log4Net as our logging tool of choice we'd end up with something like this:

public class LoggingAspect : IInterceptor
{
      public void Intercept(IInvocation invocation)
      {
            Log.Debug(string.Format("Entered: {0}", invocation.Method.Name));
            invocation.Proceed();
            Log.Debug(string.Format("Exited: {0}", invocation.Method.Name));
      }
}

If our business logic method returns a result we can also use this in our aspect:

public class LoggingAspect : IInterceptor
{
      public void Intercept(IInvocation invocation)
      {
            Log.Debug(string.Format("Entered: {0}", invocation.Method.Name));
            invocation.Proceed();
            Log.Debug(string.Format("Exited: {0} with return value of {1}",
                  invocation.Method.Name,
                  invocation.ReturnValue));
      }
}

The transaction example is pretty simple:

public class TransactionAspect : IInterceptor
{
      public void Intercept(IInvocation invocation)
      {
            using(var scope = new TransactionScope())
            {
                  invocation.Proceed();
                  scope.Complete();
            }
      }
}

All we’ve done here is wrap a TransactionScope object around the call to invocation.Proceed. If that method throws an exception then the scope.Complete() call never happens and our changes are rolled back upon exiting the using block.

The final example, Authorisation, shows how we can make a decision on whether or not the current logged in user is allowed to call our method:

public class AuthorisationAspect : IInterceptor
{
      public void Intercept(IInvocation invocation)
      {
            var currentUser = System.Threading.Thread.CurrentPrincipal;
            if(currentUser.Identity.IsAuthenticated && currentUser.IsInRole("admin"))
                  invocation.Proceed();
            else
                  throw new UnauthorizedAccessException("For admin eyes only!");
      }
}

Once we’ve written our aspects we need to register them in our IoC container:

_container = new WindsorContainer();

// register the aspects
_container.Register(AllTypes.FromAssembly(Assembly.LoadFrom("mytypes.dll"))
      .Pick().If(t => t.Name.EndsWith("Aspect")));

// register the types we want aspects applied to
_container.Register(AllTypes.FromAssembly(Assembly.LoadFrom("mytypes.dll"))
      .Pick().If(t => t.Name.EndsWith("Command"))
      .Configure(c => c.LifeStyle.Transient
      .Interceptors(new[] { typeof(LoggingAspect) })));

Here we're applying aspects only to objects whose names end with the word Command. Every command object in the container will be decorated with the LoggingAspect. Alternatively, you can use attributes to selectively apply aspects to your objects. This example ensures that only admins (based on the AuthorisationAspect defined above) can execute this command:

[Interceptor(typeof(AuthorisationAspect))]
public class DeleteAllDataCommand : ICommand
{
      public virtual void Execute()
      {
            // delete everything!!
      }
}

Now whenever we resolve a component from the container, the object returned has the appropriate aspects automatically "wrapped" around it, and any call to the component's methods is "intercepted" so that the aspect's code runs before and after the invocation.
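So at the call site nothing changes. Assuming ICommand is your own interface registered as above, resolving and executing looks like any other container call, with the interception completely invisible:


// the object returned is actually a dynamic proxy wrapping the command
var command = _container.Resolve<ICommand>();

// the registered aspects run around this call automatically
command.Execute();

This is also why the Execute method on DeleteAllDataCommand was declared virtual: Castle's interception works by subclassing (or implementing the interface), so only virtual or interface members can be intercepted.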

There are potentially a lot of uses for aspects, which, due to their non-invasive nature, make the important code of our application a lot simpler to maintain and easier on the eye too. The drawback is that it isn't immediately obvious what other code runs when we invoke our method, but in my opinion this is far outweighed by the benefits. As usual, just because we can doesn't mean we always should, but adding AOP to your developer toolbox is definitely worthwhile as it gives you another option to consider when faced with a cross-cutting concern.
