Why NoSql Databases Suck

Actually, they don’t except in one very important way. Because they’re so damn easy to work with, I now find it incredibly tedious to use Entity Framework or NHibernate at all! As I sit and build models and then go through the pain of (in the case of NHibernate) XML mapping files, or thinking about this property mapping to that column, specifying keys, lazy or eager fetching, code-based migrations and scripts, I just want to shout “WTF! End this madness!”.

If you’ve never played with one, I say go and do it now. Nothing highlights the object-relational impedance mismatch more. As a .Net dev I recommend RavenDB, for its transaction support alone, because in enterprise shops that will more than likely be very important, but I am a massive fan of MongoDB too (and it’s free!). Whatever you choose, go learn one and then go back to trying to make your OO model conform to a relational schema and tell me it’s better. If you take away scalability, the thing NoSql databases are primarily touted for, and just appreciate how simple one makes your life with regards to persistence, then I wouldn’t be surprised at all if you wanted to ditch your relational database too.
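
To make that concrete, here’s roughly what persistence looks like with RavenDB’s client API – a minimal sketch only, where the Order class, its properties and the server URL are placeholders of my own rather than anything from a real project:

// Sketch only: assumes a RavenDB server at the given URL and a plain Order POCO.
var store = new DocumentStore { Url = "http://localhost:8080" }.Initialize();

using (var session = store.OpenSession())
{
	// No mapping files, no property-to-column plumbing: the object graph is the document.
	session.Store(new Order { CustomerName = "Acme", Total = 42.50m });
	session.SaveChanges();
}

using (var session = store.OpenSession())
{
	var order = session.Load<Order>("orders/1"); // ids follow the "orders/1" convention by default
}

That really is the whole persistence story – compare it with the ceremony described above.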

In terms of domain models I see the likes of RavenDB and MongoDB as the natural choice, but at the same time I appreciate that SQL databases are well suited to reporting data, and there’s no reason why the two couldn’t co-exist. They each solve a different problem well. I have a feeling that as document databases gain more traction we might well see this as the preferred approach, because from a productivity point of view I cannot think of a faster way to build an application, and if we can build applications faster and remove pain and friction from our daily development lives then both we and the business are happy. It’s a win-win situation. I’m so looking forward to that day.


Integrating with External Clients via NServiceBus

I thought I’d write up how we’ve been using NServiceBus at work to handle a rather large integration project that we’re in the middle of, and how much simpler the task has been than it might otherwise have been without such an awesome tool.

We have a new client who are essentially outsourcing part of their business to us. That means we have to be able to receive orders from them and also be able to send them order dispatch notifications and stock updates via web services that they own. Their order data is fed into an existing system and eventually ends up populating a couple of tables that contain the outgoing data, one for dispatches and one for stock levels. This data needs to be read from these tables and sent to the client via calls to their web services.

Potentially this could be a tricky problem, as there’s no guarantee that the web service will actually be there at the time we make the call. Each call could hang around for a while before timing out, which would slow the system down too. If we had chosen to read this data and invoke the web service within the same process we would have introduced temporal coupling: packaging together two modules (data reads and web service calls) because we believe they need to happen at the same time, when this can and probably should be avoided. And even if we are able to read the data and make the call, it still doesn’t mean the request will succeed. What if there’s a validation issue? How do we handle a failed attempt to call them? Can we just throw the data away and wait until we have another request to make, or do we need some kind of error handling and retry mechanism?

Thankfully, in NServiceBus we have a tool at our disposal that, while it wasn’t necessarily designed for this scenario, actually handles it with ease.

My solution to this problem was to create a simple console application, run under the Windows Task Scheduler, that periodically reads data from these tables and creates messages that it Bus.Sends to a queue – two queues actually, one for dispatches and one for stock updates, so that each endpoint can be monitored individually to assess the volume of messages it has to work with (we expect that stock updates will outnumber dispatches by a large margin).

At a high level it looks like this:


The client process is essentially nothing more than the following snippet running every 10 minutes:

// Everything runs inside one ambient transaction (the database reads/updates and
// the queued sends), so a failure at any step rolls the whole batch back.
using (var scope = new TransactionScope())
{
	LoadMessages();                  // read pending rows from the stock and dispatch tables
	SendMostRecentStockMessages();   // Bus.Send the latest stock level per part
	SendAllDispatchMessages();       // Bus.Send every dispatch notification
	MarkAsSent();                    // stamp DateSent so the rows can be archived
	scope.Complete();
}

Digging deeper, I split the LoadMessages method into two more methods – ReadStockMessages() and ReadDispatchMessages(). Here I used Dapper (another favourite tool of mine) as a simple way to map table columns and rows into the actual message types I want to place on the queue:

stockMessages = cnn.Query<StockMessage>("SELECT * FROM StockTable").ToList();

dispatchMessages = cnn.Query<DispatchMessage>("SELECT * FROM DispatchTable").ToList();
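
For reference, the message types themselves are nothing special – just POCOs whose property names line up with the table columns so Dapper can hydrate them directly, marked with the NServiceBus IMessage interface so they can go on the bus. The properties below are illustrative rather than our actual schema:

// Illustrative message shapes – the property names simply mirror the table columns.
public class StockMessage : IMessage
{
	public string PartNumber { get; set; }
	public int QuantityInStock { get; set; }
	public DateTime DateCreated { get; set; }
}

public class DispatchMessage : IMessage
{
	public string OrderNumber { get; set; }
	public DateTime DispatchedOn { get; set; }
}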

The SendMostRecentStockMessages() and SendAllDispatchMessages() methods basically iterate over the messages, pushing them onto their respective queues:

dispatchMessages.ForEach(msg => Bus.Send(msg));

The logic for sending stock messages was a little more involved, as we didn’t want to send all messages for a given part number, only the most recent: the client is only interested in knowing what the current stock level is, not what it might have been at various points since we last polled the database. A little LINQ query helped here.
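
Something along these lines does the job, assuming the messages carry a part number and a created date (the property names here are illustrative):

// Keep only the latest stock message per part number before sending.
var latestPerPart = stockMessages
	.GroupBy(m => m.PartNumber)
	.Select(g => g.OrderByDescending(m => m.DateCreated).First())
	.ToList();

latestPerPart.ForEach(msg => Bus.Send(msg));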

Once the messages are on the queue, I update a DateSent field in each row so that the system can move them to an audit table. All this is done using the TransactionScope (and therefore the Microsoft DTC) so that if any problems arise with bad data then no messages will be sent. The nice thing about NServiceBus is that it takes care of configuring the DTC for you when you first set it up, meaning that you don’t really need to give it much thought at all.
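
The update itself is another Dapper one-liner executed inside the same TransactionScope – roughly the following, with the table and column names as illustrative as before:

// Stamp the rows we've just queued so a later job can move them to the audit table.
cnn.Execute("UPDATE StockTable SET DateSent = @sentAt WHERE DateSent IS NULL",
	new { sentAt = DateTime.UtcNow });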

With the transaction scope completed and messages safely on their respective queues, NServiceBus kicks in and invokes the appropriate handlers. Anyone familiar with NServiceBus will be aware of the Profile feature that allows us to do something different with the received message depending on the profile selected. We used the profiles mainly as a means to help us test the client’s web services, invoking their test URLs under the Integration profile and the live URLs under the Production profile. With profiles, testing becomes simple.
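
On the receiving side a handler is just an IHandleMessages<T> implementation, and in our case it simply forwards the message to the client’s web service, with the service URL swapped between test and live depending on the active profile. A rough sketch – the IClientWebService wrapper here is a placeholder of mine, not part of NServiceBus:

// Illustrative handler: IClientWebService is a hypothetical wrapper around the client's
// web service, wired up with the test or live URL depending on the selected profile.
public class DispatchMessageHandler : IHandleMessages<DispatchMessage>
{
	private readonly IClientWebService clientService;

	public DispatchMessageHandler(IClientWebService clientService)
	{
		this.clientService = clientService;
	}

	public void Handle(DispatchMessage message)
	{
		// If this throws, NServiceBus retries and eventually moves the message
		// to the error queue rather than losing it.
		clientService.SendDispatchNotification(message);
	}
}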

For me, though, the standout feature of NServiceBus is the error handling, which takes us back to the original point about temporal coupling. If we had not gone with NServiceBus and had attempted to read data and send it in one hit, we would have been faced with having to roll our own error handling and somehow ensure that any failures did not result in lost data. Once again, this is taken care of in NServiceBus automatically. Any failure that occurs now results in the message being moved to an error queue for later inspection by an admin who can, once the initial problem is resolved, move the message back to its original queue, at which point NServiceBus will attempt to send it again. If it goes through, great. If not, it’s back to the error queue and we try to resolve whatever the issue is this time. It’s reassuring to know that no matter what goes wrong we never lose data! It may be that the data is now too old and no longer worth sending; in that case we can just delete the message from the error queue, but at least we are in a position to make that decision.

It has to be said that NServiceBus, when used as intended, i.e. in a pub/sub environment, costs money, but in this scenario we’ve used it on a single machine (which is free under the Express Edition licence) with the sole purpose of providing a robust integration point, and it works very well. We don’t need publish and subscribe right now, but what business doesn’t need reliability and durability? The fact that NServiceBus gives us this in a simple manner makes it a great way to provide integration between disparate systems where fault tolerance is a must-have, and it is something I’d encourage anyone faced with a similar problem to consider.
