Steve Spencer's Blog

Blogging on Azure Stuff

How to emulate Azure Service Bus Topic Subscription Filtering in RabbitMQ

When creating a subscription to an Azure Service Bus Topic you can add a filter which will determine which messages to send to the subscription based upon the properties of the message.


This is done by passing a SqlFilter to the CreateSubscription method,

e.g.

if (!_NamespaceManager.SubscriptionExists(topic, subscription))
{
    if (!String.IsNullOrEmpty(filter))
    {
        SqlFilter strFilter = new SqlFilter(filter);
        await _NamespaceManager.CreateSubscriptionAsync(topic, subscription, strFilter);
        bSuccess = true;
    }
    else
    {
        await _NamespaceManager.CreateSubscriptionAsync(topic, subscription);
        bSuccess = true;
    }
}

where filter is a string containing the expression that you want to filter on, e.g.

// Create a "LowMessages" filtered subscription.

SqlFilter lowMessagesFilter = new SqlFilter("MessageNumber <= 3");

namespaceManager.CreateSubscription("TestTopic","LowMessages",lowMessagesFilter);

Applying properties to messages makes it easier to configure multiple subscribers to sets of messages rather than having multiple subscribers that receive all the messages, providing you with a flexible approach to building your messaging applications.
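For example, the sending side sets these properties on the BrokeredMessage before publishing it to the topic. A minimal sketch using the same Microsoft.ServiceBus.Messaging SDK as the examples here (the property value and payload are illustrative):

BrokeredMessage message = new BrokeredMessage("payload");
message.Properties["MessageNumber"] = 2; // would match the "MessageNumber <= 3" filter above

TopicClient topicClient = TopicClient.CreateFromConnectionString(_ConnectionString, "TestTopic");
await topicClient.SendAsync(message);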

Subscriptions are effectively individual queues, one per subscriber, that hold the messages relevant to that subscription.

When a message is pushed onto a Topic, the Service Bus looks at all the subscriptions for that Topic and determines whether the message is relevant to each one. If it is relevant, the subscription receives the message into its queue. If there are no subscriptions capable of receiving the message, the message is lost unless the topic is configured to throw an exception when there is no subscription to receive it.

This approach is useful if most of the message data is stored in the properties (which are subject to a size limit of 64KB) and the body content is serialised to the same object (or the body object types are known).

Receiving messages on a Service Bus Subscription is as follows:

MessagingFactory messageFactory = MessagingFactory.CreateFromConnectionString(_ConnectionString);
SubscriptionClient client = messageFactory.CreateSubscriptionClient(topic, subscription);
BrokeredMessage message = await client.ReceiveAsync(new TimeSpan(0, 5, 0));
if (message != null)
{
    IDictionary<string, object> properties = message.Properties;
    MyCustomBodyData body = message.GetBody<MyCustomBodyData>();
    // do some work with the properties and body here
    message.Complete();
}

Over the past few months I have been looking at RabbitMQ and trying to apply my Service Bus knowledge, as well as looking at the differences. Routing messages based upon the properties of the message, rather than a routing key defined in the message, is still applicable in the RabbitMQ world and RabbitMQ is configurable enough to work in this way. RabbitMQ requires more configuration than Service Bus, but there is a mechanism called a headers exchange which can be used to route messages based upon the properties (headers) of the message.

The first thing to do is to create the exchange and then bind a queue to it with a set of filter criteria. I’ve been creating my exchanges with an alternate exchange so that messages that are not handled end up in a default queue. The code below creates the exchange and a queue that subscribes to messages where the ClientId header is “Client1” and the FileType header is “transaction”:

// Create the headers exchange with an alternate exchange for unrouted messages
IDictionary<String, Object> args4 = new Dictionary<String, Object>();
args4.Add("alternate-exchange", alternateExchangeNameForHeaderExchange);
channel.ExchangeDeclare(HeaderExchangeName, "headers", true, false, args4);
channel.ExchangeDeclare(alternateExchangeNameForHeaderExchange, "fanout");

// Queue bound to the headers exchange for ClientId = Client1 and FileType = transaction
Dictionary<string, object> bindingArgs = new Dictionary<string, object>();
bindingArgs.Add("x-match", "all"); // "all" requires every header to match; "any" requires at least one
bindingArgs.Add("ClientId", "Client1");
bindingArgs.Add("FileType", "transaction");
channel.QueueDeclare(HeaderQueueName, true, false, false, null);
channel.QueueBind(HeaderQueueName, HeaderExchangeName, "", bindingArgs);

// Queue for the alternate exchange (all otherwise unrouted messages)
channel.QueueDeclare(unroutedMessagesQueueNameForHeaderExchange, true, false, false, null);
channel.QueueBind(unroutedMessagesQueueNameForHeaderExchange, alternateExchangeNameForHeaderExchange, "");

This sets up the exchange and queues in RabbitMQ, and you can now send a message to the exchange with the correct headers as follows:

IBasicProperties properties = channel.CreateBasicProperties();
properties.Headers = new Dictionary<string, object>();
properties.Headers.Add("ClientId", "Client1");
properties.Headers.Add("FileType", "transaction");

// Headers exchanges ignore the routing key; routing is driven entirely by the headers above
string routingkey = "header.key";
var message = "Hello World";
var body = Encoding.UTF8.GetBytes(message);

channel.BasicPublish(exchange: HeaderExchangeName,
                     routingKey: routingkey,
                     basicProperties: properties,
                     body: body);

Receiving messages from the queue is as follows:

var consumer = new EventingBasicConsumer(channel);
consumer.Received += (model, ea) =>
{
    var body = ea.Body;
    var message = Encoding.UTF8.GetString(body);
    var routingKey = ea.RoutingKey;
    Byte[] FileTypeBytes = (Byte[])ea.BasicProperties.Headers["FileType"];
    Byte[] ClientIDBytes = (Byte[])ea.BasicProperties.Headers["ClientId"];
    string FileType = System.Text.Encoding.ASCII.GetString(FileTypeBytes);
    string ClientID = System.Text.Encoding.ASCII.GetString(ClientIDBytes);
    Console.WriteLine(" [x] Received '{0}':'{1}' [{2}] [{3}]",
                        routingKey,
                        message,
                        ClientID,
                        FileType);
    EventingBasicConsumer c = model as EventingBasicConsumer;
    if (c != null)
    {
        c.Model.BasicAck(ea.DeliveryTag, false);
        Console.WriteLine(" [x] Received {0} rk {1} ex {2} ct {3}", message, ea.RoutingKey, ea.Exchange, ea.ConsumerTag);
    }
};
channel.BasicConsume(queue: queueProcessorBaseName + textBox1.Text,
                        noAck: false,
                        consumer: consumer);

Again, an out-of-the-box feature of Service Bus can also be implemented in RabbitMQ, but it is much simpler to use in Service Bus. Using properties to route messages offers a much more flexible approach, but it does require that the body of each message is either not used or understood by every consumer. Service Bus offers more flexibility here, as the filter expression can contain a variety of operators, whereas RabbitMQ matches all or any of the header values exactly and cannot match a range.
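To illustrate the difference, a Service Bus filter can combine several comparison and logical operators in one expression, which has no equivalent in the all/any header matching of a RabbitMQ headers exchange. A sketch (the property names and subscription name are illustrative):

SqlFilter rangeFilter = new SqlFilter("ClientId = 'Client1' AND MessageNumber > 3 AND MessageNumber <= 10");
await _NamespaceManager.CreateSubscriptionAsync(topic, "MidRangeMessages", rangeFilter);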

Dead Letters with Azure Service Bus and RabbitMQ

Firstly, what are dead letters?

When a message is received in a messaging system, something tries to process it. Normally the message is understood by the system and can be processed; sometimes, however, a message is not understood and can cause the receiving process to fail. The failure could be caught by the system and dealt with, but in extreme situations the message could cause the receiving process to crash. Messages that cannot be delivered, or that fail when processed, need to be removed from the queue and stored somewhere for later analysis. A message that fails in this way is called a dead letter, and the location where these dead letters reside is called a dead letter queue. Queuing systems such as Azure Service Bus, RabbitMQ and others have mechanisms to handle this type of failure. Some handle them automatically and others require configuration.

Dead letter queues are the same as any other queue except that they contain dead letters. As they are queues they can be processed in the same way as the normal queues except that they have a different address to the normal queue. I’ve already discussed Service Bus Dead Letter Queue addressing in a previous post and this is still relevant today.
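As a quick reminder of that addressing, the SDK can build the dead letter path for you. A minimal sketch, reusing the messageFactory from the receive example earlier (the timeout value is arbitrary):

// The dead letter queue is addressed via a path derived from the parent entity
string deadLetterPath = SubscriptionClient.FormatDeadLetterPath(topic, subscription);
MessageReceiver deadLetterReceiver = messageFactory.CreateMessageReceiver(deadLetterPath);
BrokeredMessage deadLetter = await deadLetterReceiver.ReceiveAsync(TimeSpan.FromSeconds(30));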

On RabbitMQ a dead letter queue is just another queue and is addressed in the same way as any other queue. The difference is in the way the dead letter queue is set up. First you create the dead letter exchange and queue, and then you attach them to the queue you want to use them with.

To set up the dead letter queue, declare a “direct” exchange and bind a queue to it:

channel.ExchangeDeclare(DeadLetterExchangeName, "direct");
channel.QueueDeclare(DeadLetterQueueName, true, false, false, null);
channel.QueueBind(DeadLetterQueueName, DeadLetterExchangeName, DeadLetterRoutingKey, null);

I’ve used a dead letter routing key that is related to the queue it serves, with an additional “DL” suffix. The routing key needs to be unique so that only the messages you want to go to this specific dead letter queue will be delivered to it, e.g. Payments.Received.DL

Now we need to attach the dead letter queue to the correct queue, so when I created my new queue I needed to add the dead letter exchange and routing key to it:

IDictionary<String, Object> args3 = new Dictionary<String, Object>();
args3.Add("x-dead-letter-exchange", DeadLetterExchangeName);
args3.Add("x-dead-letter-routing-key", DeadLetterRoutingKey);
channel.QueueDeclare(queueName, true, false, false, args3);
channel.QueueBind(queueName, TopicName, paymentsReceivedRoutingKey);

Whilst there is a lot of flexibility with RabbitMQ, dead letter queues come out of the box with Azure Service Bus: each queue and topic subscription has one, and it is enabled by default. RabbitMQ, however, allows each queue to have its own dead letter exchange and routing key, giving you finer grained control over what to do with each type of failed message.

Now we have these dead letter queues and we know how to access them, how do we get messages into them?

In Azure Service Bus, there is a mechanism that will automatically put a message in the dead letter queue if it fails to be delivered 10 times (the default). However, you may wish to handle bad messages yourself in code without relying upon the system to do this for you. If a message is delivered 10 times before being dead lettered, you are using system resources each time it is processed, resources that could be used to process valid messages. When a message is received and its validation has failed, or an error that you have caught occurs whilst processing, you can explicitly send the message to the dead letter queue by calling the DeadLetter method on the message object:

BrokeredMessage receivedMessage = subscriptionClient.EndReceive(result);

if (receivedMessage != null)
{
    Random rdm = new Random();
    int num = rdm.Next(100);
    Console.WriteLine("Random={0}", num);
    if (num < 10)
    {
        receivedMessage.DeadLetter("Randomly picked for deadletter", "error 123");
        Console.WriteLine("Deadlettered");
    }
    else
    {
        receivedMessage.Complete();
    }
}

My test code, above, randomly sends 10% of my messages to the dead letter queue.
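The automatic mechanism is also configurable: the delivery count threshold can be set when the queue or subscription is created. A sketch using the same SDK as above (the value of 5 is arbitrary):

SubscriptionDescription sd = new SubscriptionDescription(topic, subscription)
{
    MaxDeliveryCount = 5 // dead letter after 5 failed deliveries instead of the default 10
};
await _NamespaceManager.CreateSubscriptionAsync(sd);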

In RabbitMQ, a message will be published to the dead letter queue if one of the following occurs:

  1. The message is rejected by calling BasicNack or BasicReject with requeue set to false
  2. The TTL (Time to Live) of the message expires
  3. The queue length limit is exceeded (the TTL and length limits are configured when the queue is declared, as sketched below)
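The TTL and length limits are set as arguments when the queue is declared, alongside the dead letter exchange settings shown earlier. A sketch (the TTL and length values are arbitrary):

IDictionary<string, object> queueArgs = new Dictionary<string, object>();
queueArgs.Add("x-dead-letter-exchange", DeadLetterExchangeName);
queueArgs.Add("x-dead-letter-routing-key", DeadLetterRoutingKey);
queueArgs.Add("x-message-ttl", 60000); // milliseconds before an unconsumed message is dead lettered
queueArgs.Add("x-max-length", 1000);   // maximum number of messages held before overflow is dead lettered
channel.QueueDeclare(queueName, true, false, false, queueArgs);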

I’ve written a similar piece of test code for RabbitMQ:

var consumer = new EventingBasicConsumer(channel);
consumer.Received += (model, ea) =>
{
    var body = ea.Body;
    var message = Encoding.UTF8.GetString(body);
    Random random = new Random((int)DateTime.Now.Ticks);
    int randomNumber = random.Next(0, 100);
    if (randomNumber > 30)
    {
        channel.BasicAck(ea.DeliveryTag, false);
        Console.WriteLine(" [x] Received {0} rk {1} ex {2} ct {3}", message, ea.RoutingKey, ea.Exchange, ea.ConsumerTag);
    }
    else
    {
        if (randomNumber > 10)
        {
            channel.BasicNack(ea.DeliveryTag,false, true);
            Console.WriteLine(" [xxxxx] NAK {0} rk {1} ex {2} ct {3}", message, ea.RoutingKey, ea.Exchange, ea.ConsumerTag);
        }
        else
        {
            Console.WriteLine(" [xxxxx] DeadLetter {0} rk {1} ex {2} ct {3}", message, ea.RoutingKey, ea.Exchange, ea.ConsumerTag);
            channel.BasicNack(ea.DeliveryTag, false, false);
        }
    }
    Thread.Sleep(200);
};
channel.BasicConsume(queue: "hello",
                        noAck: false,
                        consumer: consumer);

If you look at the code you will see that there are two places where BasicNack is called and only one of them sends the message to the dead letter queue. BasicNack takes three parameters and the last one is “requeue”. Setting requeue to true will put the message back on the originating queue, whereas setting requeue to false will publish the message on the dead letter queue.

Both RabbitMQ and Service Bus have the dead letter queue concept and they can be used in a similar way. Service Bus has one configured by default and has both an automatic and a manual mechanism for publishing messages to the dead letter queue. RabbitMQ requires more configuration and does not have the same automation for dead lettering, but it can be configured with more flexibility.

Unhandled Messages with Azure Service Bus and RabbitMQ

One of the requirements for our messaging system is to be able to build a system to process messages and either

  1. Have a default handler and then add custom handlers as and when they are required without needing to recode the main system.
  2. Be notified if a message is put onto a topic and there isn’t a process to handle the message.

In RabbitMQ this is relatively straightforward: create an alternate exchange, add it as a property to your main exchange, and then create a queue to service the alternate exchange:

IDictionary<String, Object> args2 = new Dictionary<String, Object>();
args2.Add("alternate-exchange", alternateExchangeName);
channel.ExchangeDeclare(mainExchangeName, "direct", false, false, args2);
channel.ExchangeDeclare(alternateExchangeName, "fanout");

// Add a queue bound to the unhandled messages (alternate) exchange
channel.QueueDeclare(unroutedMessagesQueueName, true, false, false, null);
channel.QueueBind(unroutedMessagesQueueName, alternateExchangeName, "");

Now when a message is published on the main exchange and there is no queue bound to handle it, the message will automatically appear on the unrouted messages queue. This solves both of the scenarios we were looking for.

I was interested, however, in understanding how to do this in Azure Service Bus, and whilst it is possible, it isn’t as straightforward and requires some code to set up. Topics can be configured to throw an exception if there is no subscription available to process a message when it is sent, so when the topic is created it needs to be configured to enable this exception to be thrown:

NamespaceManager namespaceManager =
    NamespaceManager.CreateFromConnectionString(_ConnectionString);
TopicDescription td = new TopicDescription(topic)
{
    EnableFilteringMessagesBeforePublishing = true
};
await namespaceManager.CreateTopicAsync(td);

Now when a message is sent we need to handle the exception and do something with the message. This is the difference between RabbitMQ and Service Bus: in RabbitMQ the message will automatically end up in the unhandled messages queue; in Service Bus we need to add it to the unhandled messages queue ourselves when the message is sent. This means that the code at each message producer will need to handle the exception:

try
{
    client.Send(message);
}
catch (NoMatchingSubscriptionException ex)
{
    // Do something here to process the unhandled message,
    // probably put it on an unhandled messages queue
}

Note, however, that if you have a catch-all subscription (for example, one logging all the messages) then unhandled messages will not appear, as they are already being handled by the catch-all subscription.
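For completeness, such a catch-all subscription could be created with a TrueFilter, which matches every message; once it exists, NoMatchingSubscriptionException will no longer be thrown for that topic. A minimal sketch (the subscription name is illustrative):

await _NamespaceManager.CreateSubscriptionAsync(topic, "AuditAll", new TrueFilter());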


Why do I need Pre and Post Approval Steps in my Release Pipeline?

TFS Release Manager and Octopus Deploy both support the concept of approval steps, but why do you need both pre-release and post-release approval steps? When I first started to look at automated release tools such as TFS Release Manager I could understand the reason behind the pre-release approval. This step is your quality gate and adds some control into your process. When creating your release pipeline you will set up a number of environments (e.g. Test, UAT, Pre-production, Production) and at each stage you can make a different set of people responsible for allowing the deployment onto each environment.

When the developers have finished their new piece of code and it has been tested in their own environments, they will want to get it onto the servers so that the testers can test it in a more formal way. Currently this may involve the developer talking to the testers and hanging around whilst the testers finish what they are doing so that they can free up the servers ready for the deploy. The developers will then deploy the software to the test environment, hopefully using some method of automation. With Release Manager, the developers can kick off an automated build and, when it is completed, it can automatically create a release ready for deployment. This release can be configured to deploy automatically to the target environment.

The developer can force the software on to the test environment without the tester being ready. The testers for example may be finishing off testing a previous release and need some time before they are ready to accept the software. They may also have a set of criteria that need to be met before they will accept the software onto their environment.

Adding a pre-release approval step that allows the test team to “accept” the release gives control to the test team and allows them to accept, or indeed reject, a release. This pause in the process allows the testers to check that all the developer quality gates have been met, and therefore lets them push back to the developers if they are not happy. As the deployments can be automated, the testers can also use the approval process to control when the new software is deployed into their environment, allowing them to complete their current set of tests first. It also frees up the developers, so that they are not hanging around waiting to deploy. Similarly, moving on to UAT, Pre-Prod or Production, a pre-release approval step can be configured with different approvers who then become the gatekeepers for each environment.

A pre-release approval step makes a lot of sense; it brings order and control to a process and removes a lot of user error from it.

So what about a post-release approval step, why would you need one? It wasn’t until I started to use TFS Release Manager to automatically deploy my applications to Azure Websites that the need for a post-release approval step became clear. Once I had released my software onto the test environment, I needed a mechanism to allow the testers to reject a release if it failed testing for whatever reason. The post-release approval step allowed them to have this power. Adding both a pre and a post release approval step for each environment allows the environment owner to accept the release into the environment when they are ready for it and when they are satisfied that the developers have done their jobs correctly. They can also control when it is ready to move to the next stage in the process. If, after completing testing, the software is ready to release to UAT, then the tester can approve the release, which pushes it to the next environment. If the tester is not happy with the release then they can reject it and the release does not move forwards. The tester can comment on the reason for rejection and the release will show red for failure on the dashboard. Adding pre and post approval steps to each environment moves the control of software releases for each environment to the group of people who are responsible for what happens on it.

Using these approval steps can also act as a sanity check to ensure that software releases do not accidentally get pushed onto an environment if, for example, someone kicks off the wrong build.

I’ve created a release pipeline for my applications which uses pre and post approval steps for releases to Test and UAT. I don’t have a pre-production environment, but production utilises the staging slots feature of Azure Websites to allow me to deploy the release to staging prior to actually going live. The production environment only has a pre-release approval step, but as it is only going to staging, there is an additional safeguard to allow a coordinated live release when the business is ready.

Both Pre and Post release approval steps provide a useful feature to put the control of the release with the teams that are responsible for each environment. The outcome of each approval process can be visible, which also highlights if and when there are issues with the quality of the software being released.

Scrum Overview

Scrum is a process to help with the day to day running of projects. From my experience of running projects using Scrum I have written an overview document. This is probably not pure Scrum but it works and has practical advice. The overview is written from the team leader’s perspective and may be useful to new team leaders, or to existing team leaders who are new to Scrum.

overview - Scrum overview (pdf).

When is Complete Complete?

Developing software is complex, but breaking the problem down into smaller tasks makes the job easier. It is fine if you are the person who defines the work, splits it down and then completes it yourself, as you fully understand what the problem is and when the problem is solved. This is not normally the way it works in the real world. Here the problem is defined by the customer; there is generally someone, or a layer of people, defining what needs to be done to solve the problem, and then a group of developers who are actually implementing the solution. Once the problem and its solution are split between different people, the concept of complete starts to become an issue. There are three points of view of completed: the developers’ perspective, the team leaders’ perspective and the customers’ perspective.

The customers’ perspective of completed is when the customer gets what they want, which is not necessarily what is in the requirements. One of the biggest problems with requirements is that the people who write them understand what they want and often assume that something will be as they want it without actually specifying it. These are hidden or implicit requirements.

The team leaders’ perspective of completeness is when the developers have told the team leader that the task is completed and the software has been built, deployed and is in a fit state to test or demonstrate.

The developers’ perspective dictates that something is complete when the item of work given to the developer is completed as close to the specified design as possible. If you are lucky the developer has let the team leader know which parts of the design have not been implemented for whatever reason.

These three perspectives are often not compatible with each other, which leads to disappointment for the customer, as things do not appear to be going to plan even though the developers and team leaders think things are going well. If you are not careful this could lead to a demotivated workforce and an even more disappointed customer.

One of the secrets of a successful development is to align the three perspectives of completion. This generally lies with the team leaders or project managers. This layer of the development process interfaces with both the customers and the developers, and it is their actions that ultimately determine how successful the project is. For simplicity we shall assume that the team leader is handling this issue; it could easily be any one of the senior members of the development team: the consultant, designer, project manager, team leader etc. It is the team leader’s responsibility to fully understand what is actually expected to be delivered to the customer. If the team leader does not fully understand what is required, then how can anyone expect the customer to get what they want? The team leader must rely on fact and not hearsay or rumour. The team leader must understand the requirements, try to work out what the implicit requirements are and manage the delivery expectations of the customer. If the customer seems to be talking about features that are not explicitly specified in the requirements, then the team leader needs to get to grips with what the customer is expecting to be delivered. This will often conflict with what the development team has been contracted to do, and may require the project manager to ensure that the scope of the project is managed and that additional work is charged accordingly (assuming that the customer and developers work for different organisations). It is the team leader’s responsibility to make sure that the customer is aware of what is being delivered at each stage.

Once the team leader has aligned the customer’s expectations with the requirements, the development team can work to complete the tasks. To make sure that the team understands what they are expected to deliver, the team leader must specify the minimum criteria for completeness. This is effectively a set of tasks which explicitly defines what needs to be done, including testing, preparing for a demo (including which features are required for the demo) and building and deploying. The goal of the team needs to be specified so that all of the development team understands what is expected from them. The team needs to be made responsible for delivering the demonstration and the whole team needs to be involved in its preparation. This means that whilst the demonstration is being prepared and the software is being built and deployed, any problems are resolved as soon as possible by the development team. In addition, any shortfall in the specification needs to be identified as early as possible so that the customer can be notified and/or the problem resolved before the item is delivered as complete. It is the team leader’s responsibility to ensure that all this happens; if the delivery is not successful then it is not acceptable for the team leader to blame any member of the team. They are ultimately responsible for the successful delivery.

Aligning the customer’s, team leader’s and developers’ perspectives of completeness is one of the keys to successful software development. The team leader is the catalyst for this alignment and needs to be able to communicate effectively with both the customer and the developers: to understand what is required and to communicate this to the development team.

Clicks Cost Money

In this ever growing connected world of computer automation the user interface is becoming more and more critical to the success of a business and the success of a business computer system. The world is moving towards a service-based architecture with the web being the conduit to link all these services together. Ease of deployment is a factor in the move to web based user interfaces for business applications and this move poses challenges for software developers around the globe.

Whilst the underlying architectures are changing, the basic premise of a user interface is still the same: to make the business functionality easy and obvious to use. Vast quantities of time and money are spent developing the back end systems, and relatively little is spent designing the user interface. Most user interfaces that are developed are functional in behaviour, but they are not always usable and can appear “clunky”. The developer is not to blame for this, as the whole development process does not necessarily treat the user interface as being as important as the back end systems and architecture. No one spends real time analysing the way the data should be presented and the way the screens interact with the user; different teams of developers develop the screens, so there is no real consistency to the designs, and no time is spent assessing the needs of the user when designing the user interface. Quite often the user interface is too complex, trying to accommodate all users all of the time, and it often makes otherwise simple tasks hard to complete.

One client coined the phrase “Clicks Cost Money”. In an organisation where the throughput of customers is the main factor in profitability, the business needs to get the customer through the system as quickly as possible, so the more clicks the user has to make, the longer the process takes and the less money the company receives.

The user interface needs to be consistent throughout, so that when the user is presented with a screen it behaves the same way in all scenarios. The number of actions required to complete a task needs to be as small as possible. A few years back I read an article (I can’t remember where) that stated that a user should need a maximum of 3 clicks to complete a task and that each action should take no longer than 1 second.

The design of the user interface needs to be brought to the forefront of the development cycle. When I talk about design, I do not mean the bit where you make it look pretty and choose the right colours (i.e. styling); I mean the actual functionality and behaviour of the user interface. Thought needs to be given to this, and the developers need to be given a style guide to work with so that each of the screens is consistent, using the same styles and controls.

Designing the user interface has a lot to do with the way the business is run. Each screen needs to present enough information for the user to be able to make the right decision, ask the right questions or gather the correct information, so that the real business benefit of the system can be achieved. The design documentation needs to spend more time on the behaviour of the user interface and its interactions. We should be concentrating upon making the user interface easy to use and responsive.

During development it is not enough to say that the user interface performs the function it was designed to do and is fault free. It needs to adhere to some common sense values as well: it needs to be responsive, it needs to be easy to use, and it needs to behave as you would expect. Getting these ideas across to the developers is difficult. Time needs to be spent designing the user interfaces and documenting them in a way that is easy for developers to understand. Developers will build user interfaces that are easy to build, not necessarily easy to use, so we need to educate them in what makes a good user interface. We need to put guidelines in place against which the user interface can be evaluated, which should include response times for actions. We need user interface design guides that specify common actions to make the user interface consistent, and we need to understand the business requirements for the user interface, so that the information that is needed most is easy and obvious to get to and the day to day running of the business is as efficient as possible. After all, that is the reason for putting the computer systems in place.