C# is not F#

C# is a great imperative OO language on the .NET platform. It really is. I’ve used it since day one and I’ve seen it grow, with new features continually added, most of which brought great benefit. Now, though, I’m concerned. With functional programming, though not yet mainstream, entering the thoughts of more and more enterprise developers, there seems to be an obsession within the C# compiler team with adding more functional features to the language. I, for one, don’t understand why.

The application development community seems to understand reasonably well that there’s a fine line between simplicity and complexity, and that it pays to stay on the simple side where possible, because nothing is free and you will have to pay for that complexity one way or another. Sometimes, as the saying goes, less is more. From afar, though, that advice seems to be largely ignored by those responsible for the direction of C# within Microsoft.

Let’s be clear, despite functional-style additions such as LINQ which really did bring a lot of benefit, C# is not a functional language. It doesn’t really support immutability, is largely statement based rather than expression based, and whilst with LINQ you can do function currying and partial application of sorts if you try hard enough, would you really want to? It’s not exactly easy on the eye.

The big question for me is why try to bring more half-baked functional ideas to the language, as has been suggested for C# 6.0, at the risk of making it complex for all and, worse, giving more tools/ammunition to the plethora of “developers” who have no interest in programming as a craft but merely bang out one mess after another that some other poor soul will have to pick up and pick apart?

C# doesn’t need to change drastically in my view. It’s great at what it does now without needing to bastardize it and potentially ruin it. I’d even wager that most C# developers are not even aware of (never mind need) functional programming, so why try to force it on them? Those that do need or want functional features know who they are, and they’re perfectly aware that C# is not the right tool for the functional job. They also know that there’s a perfectly good language already available for functional programming, and it too comes from Microsoft. It’s called F#, and what a fantastic functional programming language it is too! Yes, it’s a hybrid imperative, OO and functional language, but at least it was designed that way from the ground up.

Instead of trying to blur the line and make a “one size fits all”, jack of all trades – master of none language, Microsoft should just leave C# alone and trust developers to make the right choice when the time comes. Functional isn’t for everybody – yet.


Branching messages with pipelines

If you’ve read my recent posts on Messaging as a programming model you might gather that I’m a fan of this approach. If nothing else, it’s a great way of breaking down a big use case into a small set of fine-grained, easily maintainable and replaceable steps that make the code very easy to reason about. However, one of the drawbacks at first glance is that there doesn’t appear to be a way to easily branch the message as it moves through the sequence of filters. You may recall that, using the essence of the Message Router pattern, we can change which filters are executed by modifying the pipeline whilst it is executing, registering new steps based on a particular value carried on the message, like so:


public class Filter3
{
    public Filter3(SomeMessage input, PipeLine<SomeMessage> pipeline)
    {
        Console.WriteLine("filter3");
   
        if(input.SomeCondition)
            pipeline.Register(msg => new Filter4(input));
        else
            pipeline.Register(msg => new Filter5(input));
    }
}

However, all this does is attach either Filter4 or Filter5 to the end of the existing list of registered filters. You can obviously register as many filters as you like under each branch of the conditional test, but even so they are still added at the end, with no means of then continuing back on the original main branch. A diagram might be best here to help explain what I mean:

Alternative Filter Paths

The old

The picture on the left depicts the structure of the filters as the original approach allows. The picture on the right shows a more useful alternative that we’d like as a minimum, ideally without having to pass the pipeline into the filter to change it. The question is: how can we achieve this?

The answer lies in the IFilter interface as described in the Revisited post of the original series. By adopting this simple interface for all our filters we can implement a variation of the composite design pattern where we have filters that contain logic (the leaf) and filters that themselves contain pipelines of filters (the composite) with the addition of a new filter whose sole purpose is to test a condition and determine what to execute next. I’ll use a modified (and contrived) version of the original LogIn example to illustrate the technique. The original example looked like this:

Simple pipeline for logging in a user

It’s a straightforward linear sequence of steps that modify and enrich the message as it passes through. What if, though, for argument’s sake, we want to execute a bunch of different steps depending on the result of the conditional test in the IsUserLoginAllowed filter?

The new

Branching pipeline for logging in a user

Here, with the addition of a few more filters, the message will always pass through the first three steps, but depending on the logic within the IsUserLoginAllowed filter it will then either pass through the two filters on the left branch or the three on the right branch before joining back up with the main branch to record the fact that a log in was attempted. This final filter will execute regardless of whether the message went left or right. This is a lot more useful, and it’s easily achievable through simple composition thanks to the use of our IFilter interface, so let’s look at the code, starting with the interface that all our filters will implement:


public interface IFilter<T>
{
    void Execute(T msg);
}

and the Pipeline class which also implements the interface:


public class PipeLine<T> : IFilter<T>
{
    public PipeLine<T> Register(IFilter<T> filter)
    {
        _filters.Add(filter);
        return this;
    }

    public void Execute(T input)
    {
        _filters.ForEach(f => f.Execute(input));
    }

    private List<IFilter<T>> _filters = new List<IFilter<T>>();
}

All straightforward and recognisable, as are the first two filters shown here:


public class CheckUserSuppliedCredentials : IFilter<LogInMessage>
{
   public void Execute(LogInMessage input)
   {
        Console.WriteLine("CheckUserSuppliedCredentials");
        if(string.IsNullOrEmpty(input.Username) || string.IsNullOrEmpty(input.Password))
        {
            input.Stop = true;
            input.Errors.Add("Invalid credentials");
        }
   }
}

public class CheckApiKeyIsEnabledForClient : IFilter<LogInMessage>
{
    public CheckApiKeyIsEnabledForClient(IClientDetailsDao dao)
    {
        _dao = dao;
    }

    public void Execute(LogInMessage input)
    {
        Console.WriteLine("CheckApiKeyIsEnabledForClient");
        var details = _dao.GetDetailsFrom(input.ApiKey);

        if(details == null)
        {
            input.Stop = true;
            input.Errors.Add("Client not found");
        }

        input.ClientDetails = details;
    }

    private IClientDetailsDao _dao;
}

Branch Filters

The next filter, IsUserLoginAllowed, is more interesting and has changed from previous implementations. I refer to it as a Branch filter because, well, it allows your message to branch!


public class IsUserLoginAllowed : IFilter<LogInMessage>
{
	public IsUserLoginAllowed(IFilter<LogInMessage> leftBranch, IFilter<LogInMessage> rightBranch)
	{
		_onDeny = leftBranch;
		_onAllow = rightBranch;
	}

	// only users of ClientX and ClientY can log in
	public void Execute(LogInMessage input)
	{
		Console.WriteLine("IsUserLoginAllowed");
		switch(input.ClientDetails.Name)
		{
			case "ClientX":
			case "ClientY":
				_onAllow.Execute(input);
				break;
			
			default:
				_onDeny.Execute(input);
				break;
		}
	}
	
	private IFilter<LogInMessage> _onAllow;
	private IFilter<LogInMessage> _onDeny;
}

Look at the constructor of this filter and notice that it takes two IFilter instances, one for each branch path. These can be normal, individual (leaf) filters but it gets more interesting and useful when they are filters that group other filters together, in other words, composite filters. A Branch filter’s only responsibility is to determine what to do next. If we look at how this is all wired up we’ll see that our Branch filter, IsUserLoginAllowed, takes two composite filters, OnDenyLogIn and OnAllowLogIn:


    var pipeline = new PipeLine<LogInMessage>();
    pipeline.Register(new CheckUserSuppliedCredentials());
    pipeline.Register(new CheckApiKeyIsEnabledForClient(new ClientDetailsDao()));
    pipeline.Register(new IsUserLoginAllowed(new OnDenyLogIn(), new OnAllowLogIn()));
    pipeline.Register(new RecordLogInAttempt());
		
    var msg = new LogInMessage { /* ... set values on the message */ };
    pipeline.Execute(msg);

Composite Filters

Composite filters contain their own pipelines, each registering its own related filters, and because all filters, whether leaf or composite, implement the IFilter interface they can all be treated the same via the Execute method:

composed pipeline for logging in a user

The code is simple, consisting of nothing more than registering filters in a pipeline.

OnDenyLogin:


public class OnDenyLogIn : IFilter<LogInMessage>
{
    public OnDenyLogIn()
    {
        _pipeline.Register(new LockAccountOnThirdAttempt());
        _pipeline.Register(new PublishLogInFailed());
    }

    public void Execute(LogInMessage input)
    {
         Console.WriteLine("OnDenyLogIn");
         _pipeline.Execute(input);
    }

    private PipeLine<LogInMessage> _pipeline = new PipeLine<LogInMessage>();
}

and OnAllowLogIn:


public class OnAllowLogIn : IFilter<LogInMessage>
{
    public OnAllowLogIn()
    {
        _pipeline.Register(new ValidateAgainstMembershipApi());
        _pipeline.Register(new GetUserData());	
        _pipeline.Register(new PublishLogInSucceeded());
    }
	
    public void Execute(LogInMessage input)
    {
        Console.WriteLine("OnAllowLogIn");
        _pipeline.Execute(input);
    }
	
    private PipeLine<LogInMessage> _pipeline = new PipeLine<LogInMessage>();
}

The fact that composites are self-contained means that their internal pipelines don’t even need to be for the same message type! Even though both of the example composite filters take a LogInMessage via their Execute method, there’s nothing to stop them sending a completely different message type through their internal pipeline and then assigning the result, whatever that may be, to some property on the LogInMessage type before it continues on its way through the rest of the sequence. None of the other filters ever need know that a different message type was created and processed within a composite filter.
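
To make that concrete, here’s a minimal sketch of such a composite. The ProfileMessage type, the LoadProfile filter, the EnrichUserProfile composite and the ProfileData property on LogInMessage are all hypothetical names invented for illustration; only IFilter<T>, PipeLine<T> and LogInMessage come from the example above:


// Hypothetical message type used only inside the composite.
public class ProfileMessage
{
    public string Username { get; set; }
    public string Result { get; set; }
}

// Hypothetical leaf filter operating on the internal message type.
public class LoadProfile : IFilter<ProfileMessage>
{
    public void Execute(ProfileMessage msg)
    {
        Console.WriteLine("LoadProfile");
        msg.Result = "profile for " + msg.Username;
    }
}

// The composite: from the outside it is just another IFilter<LogInMessage>,
// but internally it runs a PipeLine<ProfileMessage>.
public class EnrichUserProfile : IFilter<LogInMessage>
{
    public EnrichUserProfile()
    {
        _pipeline.Register(new LoadProfile());
    }

    public void Execute(LogInMessage input)
    {
        // Translate to the internal message type...
        var profileMsg = new ProfileMessage { Username = input.Username };

        // ...run it through the composite's own pipeline...
        _pipeline.Execute(profileMsg);

        // ...then copy the result back onto the outer message
        // (ProfileData is an assumed property on LogInMessage).
        input.ProfileData = profileMsg.Result;
    }

    private PipeLine<ProfileMessage> _pipeline = new PipeLine<ProfileMessage>();
}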

Benefits

Composing filters in this way has some important benefits. First, it makes the main pipeline easier to read: there are fewer steps at the top level, and you can see immediately that there are two courses of action that can be taken when the IsUserLoginAllowed branch filter executes, without having to know the lower-level details. Second, grouping related functionality into cohesive units like this not only adheres more closely to the Single Responsibility Principle but also allows some quite complex pipelines to be composed, as the following diagram demonstrates:

more complex pipeline

Unit testing is still easy whether we test individual filters or the whole pipeline, but now we have the added option of testing at a third level of granularity via a composite filter such as OnDenyLogIn or OnAllowLogIn; a small sketch of that appears after the example output below. When we’re happy these work as expected we can plug them in and test them in the context of the whole pipeline. We’re also still able to use aspects to wrap the entire execution path inside a transaction, for example:


var pipeline = new PipeLine<LogInMessage>();
pipeline.Register(new CheckUserSuppliedCredentials());
pipeline.Register(new CheckApiKeyIsEnabledForClient(new ClientDetailsDao()));
pipeline.Register(new IsUserLoginAllowed(new OnDenyLogIn(), new OnAllowLogIn()));
pipeline.Register(new RecordLogInAttempt());
		
var msg = new LogInMessage { /* ... set values on the message */ };
var tranAspect = new TransactionAspect<LogInMessage>(pipeline);
tranAspect.Execute(msg);
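
For reference, TransactionAspect itself was covered in the earlier post on aspects. A minimal sketch that would produce the “start tran”/“end tran” lines in the output below, simulating the transaction with Console.WriteLine rather than a real transaction, and not necessarily matching the original implementation, might look like this:


// Minimal sketch of an aspect that decorates a pipeline (or any other filter).
// The "transaction" here is simulated with Console.WriteLine to match the output below.
public class TransactionAspect<T> : IFilter<T>
{
    public TransactionAspect(IFilter<T> inner)
    {
        _inner = inner;
    }

    public void Execute(T msg)
    {
        Console.WriteLine("start tran");
        _inner.Execute(msg);
        Console.WriteLine("end tran");
    }

    private IFilter<T> _inner;
}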

Running the example above sends the message through either branch (depending on the client):


start tran
CheckUserSuppliedCredentials
CheckApiKeyIsEnabledForClient
IsUserLoginAllowed
OnAllowLogIn
ValidateAgainstMembershipApi
GetUserData
PublishLogInSucceeded
RecordLogInAttempt
end tran

or alternatively:


start tran
CheckUserSuppliedCredentials
CheckApiKeyIsEnabledForClient
IsUserLoginAllowed
OnDenyLogIn
LockAccountOnThirdAttempt
PublishLogInFailed
RecordLogInAttempt
end tran
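
Picking up the earlier point about testing, the branch decision itself is also easy to verify in isolation, precisely because IsUserLoginAllowed depends only on two IFilter<LogInMessage> instances. Here’s a minimal sketch using a hand-rolled spy filter; the SpyFilter class, the test method and the ClientDetails construction (assumed here to be a simple class with a settable Name property) are all illustrative rather than taken from the original example:


// Hypothetical spy filter that records whether it was executed.
public class SpyFilter : IFilter<LogInMessage>
{
    public bool WasExecuted { get; private set; }

    public void Execute(LogInMessage msg)
    {
        WasExecuted = true;
    }
}

// Sketch of a test exercising the branch decision in isolation.
public static void ClientXUsersTakeTheAllowBranch()
{
    var deny = new SpyFilter();
    var allow = new SpyFilter();
    var branch = new IsUserLoginAllowed(deny, allow);

    // ClientDetails is assumed to be a simple class with a settable Name property.
    var msg = new LogInMessage { ClientDetails = new ClientDetails { Name = "ClientX" } };
    branch.Execute(msg);

    // ClientX is one of the allowed clients, so only the allow branch should have run.
    Debug.Assert(allow.WasExecuted && !deny.WasExecuted);
}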

Summary

The adoption of the IFilter interface and the idea of building separate pipelines inside composite filters open up more possibilities for messaging as a programming model, most notably by giving us a way to model more complex business processes via branching paths whilst still retaining all the benefits outlined in previous messaging posts. It’s another useful variation to add to the pipe and filter toolbox, and it gives more weight to the argument for using messaging techniques in our applications.
