Steve Spencer's Blog

Blogging on Azure Stuff

Azure Relay Hybrid Connections

If you are using the Azure App Service to host your web site and you want to connect to an on-premises server, there are a number of ways you can do this. One of the simplest is to use a hybrid connection. Hybrid connections have had a bit of a revamp lately: they used to require a BizTalk service to be created, but now you just need a Service Bus Relay. You can generally use a hybrid connection to communicate with your back end server over TCP, and you will need to install an agent called the Hybrid Connection Manager (HCM) on your server (or on a server that can reach the one you want to connect to). HCM makes an outbound connection to the Service Bus Relay over ports 80 and 443, so you are unlikely to need any firewall ports opening.

Hybrid connections are limited to a specific server name and port. Your code in the Azure App Service will address the service as if it were on your local network, but it will only be able to connect to the machine and port configured in the hybrid connection. Instructions for configuring your hybrid connection and HCM are here.
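As a simple illustration, code running in the App Service can then call the on-premises endpoint using the host name and port configured in the hybrid connection, just as it would on a local network. The server name, port and path below are placeholders rather than values from any real configuration:

using System;
using System.Net.Http;
using System.Threading.Tasks;

class HybridConnectionClientSketch
{
    static async Task Main()
    {
        // "onpremserver" and 8080 must match the host and port configured in the hybrid connection
        using (var client = new HttpClient())
        {
            HttpResponseMessage response = await client.GetAsync("http://onpremserver:8080/api/values");
            Console.WriteLine(response.StatusCode);
        }
    }
}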

I have set up a number of the old BizTalk style hybrid connections and the new way is a lot easier. I ran into a few connectivity issues when I first created the Relay hybrid connection, and there were a few things that helped me find out where the issues were. Firstly, the link I provided for configuring the hybrid connection has a troubleshooting section which talks about tcpping. You can run this in the debug console in Azure and it will check whether your HCM is talking to the same relay as the one in your app service. To get to the debug console, log in to the Azure portal and select the app service you want to diagnose, then scroll down to Advanced Tools and click Go.

This will take you to the Kudu dashboard, where you can do a lot of useful things such as the process explorer, diagnostic dumps, log streaming and the debug console.

The address will be https://[your app service name].scm.azurewebsites.net/

The debug console allows you to browse and edit files directly in your application without needing to use FTP. This is really useful when trying to check configuration issues.

If you want to check connectivity from your server machine to the Azure Relay then you can use telnet. You might need to add the telnet feature to Windows by using:

dism /online /Enable-Feature /FeatureName:TelnetClient (From https://www.rootusers.com/how-to-enable-the-telnet-client-in-windows-10/)

In a command prompt, type:

telnet [your relay namespace].servicebus.windows.net 80 or

telnet [your relay namespace].servicebus.windows.net 443

A blank screen denotes successful connectivity (from: https://social.technet.microsoft.com/wiki/contents/articles/2055.troubleshooting-connectivity-issues-in-the-azure-appfabric-service-bus.aspx)

You can also use PowerShell to check:

Test-NetConnection -ComputerName [your relay namespace].servicebus.windows.net -Port 443

This all checks that you are connected to the relay. The final thing you need to check is whether you can actually resolve the DNS name of the target service from the server where HCM is running. This needs to be the host name of the server and not the fully qualified name, and it also needs to match the machine name you configured in the hybrid connection. The easiest way for me to check this was to put the address of the WCF service I wanted to connect to into a browser on the machine running HCM.
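For example, assuming the hybrid connection is configured for a server called myserver listening on port 8080 (substitute your own host name and port), you can check both name resolution and connectivity from the machine running HCM with:

Test-NetConnection -ComputerName myserver -Port 8080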

Hopefully I’ve given you a few pointers to help identify why your hybrid connection does not connect.

Custom ASP.NET MVC app running in a Container on Service Fabric

In an earlier post, I talked about how to create a Docker container on Windows that housed a custom ASP.NET MVC app. What I want to show now is how you can get this container running in Service Fabric.

I created 3 identical virtual machines, all capable of running Docker as in my earlier post. Now I needed to make my three VMs into a Service Fabric cluster. These two posts explain how:

https://docs.microsoft.com/en-us/azure/service-fabric/service-fabric-get-started

https://docs.microsoft.com/en-us/azure/service-fabric/service-fabric-cluster-standalone-deployment-preparation

My 3 VMs are called sf0, sf1 & sf2 and I needed to put these into my cluster config. I picked the ClusterConfig.Unsecure.MultiMachine config file that comes with the Service Fabric files and changed it to include my 3 VMs, so my nodes look like this:

"nodes": [
{
      "nodeName": "sf0",
      "iPAddress": "sf0",
      "nodeTypeRef": "NodeType0",
      "faultDomain": "fd:/dc1/r0",
      "upgradeDomain": "UD0"
},
{
      "nodeName": "sf1",
      "iPAddress": "sf1",
      "nodeTypeRef": "NodeType0",
      "faultDomain": "fd:/dc2/r0",
      "upgradeDomain": "UD1"
},
{
      "nodeName": "sf2",
      "iPAddress": "sf2",
      "nodeTypeRef": "NodeType0",
      "faultDomain": "fd:/dc3/r0",
      "upgradeDomain": "UD2"
}
],

I then remoted onto one of the machines and ran the following PowerShell:

.\TestConfiguration.ps1 -ClusterConfigFilePath .\ClusterConfig.json

This will check all the machines in the ClusterConfig.json file to see if they are configured correctly and report any errors. I got the following error:

Machine 'sf2' is not reachable on port 445. Check connectivity/open ports. Error: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond 192.168.1.222:445

This meant I needed to open the correct firewall ports on my VMs (I got this error for all the machines in the cluster). Once I had fixed this and rerun the PowerShell, the tests passed, which meant I could install Service Fabric on each of the machines as follows:

.\CreateServiceFabricCluster.ps1 -ClusterConfigFilePath .\ClusterConfig.json –AcceptEULA

When this completes successfully you should see something like this:

Your cluster is successfully created! You can connect and manage your cluster using Microsoft Azure Service Fabric Explorer or Powershell. To connect through Powershell, run 'Connect-ServiceFabricCluster'.

I could connect to Service Fabric Explorer using: http://sf0:19080

Now that I had my cluster running, I needed to create a Service Fabric app and deploy it to the cluster. Make sure that you have installed the Service Fabric SDK, then run Visual Studio and create a new Service Fabric project. When the project is created, right click on the Services node and select Add -> New Service Fabric Service.

Then pick Guest Container and enter the name of the image in your Docker Hub repository where your Docker image resides.

This will add the necessary files to your Service Fabric project. If you remember from my earlier post, the website was hosted on port 8000 of the container. We need to tell Service Fabric about this, and we may also want to map it to a different port.

If you open the container's ServiceManifest.xml file, add an endpoint with the port that you want Service Fabric to use to publish the website.

In this example I’m using the same port. If you want to map the port to a different one then change this to something else. For example, if I wanted to use http://sf0:8080 as the website then I would change the Service Manifest accordingly.
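The endpoint section of ServiceManifest.xml would then look something like this (the endpoint name is a placeholder; 8080 is the port Service Fabric publishes on, and the container port mapping comes next):

<Resources>
  <Endpoints>
    <!-- Service Fabric publishes the website on this port -->
    <Endpoint Name="WebEndpoint" Protocol="http" UriScheme="http" Port="8080" />
  </Endpoints>
</Resources>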

You also need to tell Service Fabric about the container port that is published. This is done in the application manifest file.
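A sketch of the relevant section of ApplicationManifest.xml (the manifest and endpoint names here are placeholders):

<ServiceManifestImport>
  <ServiceManifestRef ServiceManifestName="GuestContainerPkg" ServiceManifestVersion="1.0.0" />
  <Policies>
    <ContainerHostPolicies CodePackageRef="Code">
      <!-- Map the container's port 8000 to the endpoint published by Service Fabric -->
      <PortBinding ContainerPort="8000" EndpointRef="WebEndpoint" />
    </ContainerHostPolicies>
  </Policies>
</ServiceManifestImport>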

The container port is set to 8000 as that is the port exposed by the Docker container.

Now deploy your application to Service Fabric. It may take a while to initialise your container as the image needs to be downloaded from Docker Hub before it will run. Once it is running you should see it as Ready in the Service Fabric Explorer.

Error updating SSL certificates in Azure App Services

I was asked to update the SSL certificates on a website that was hosted in Azure Web Apps. No problem, I thought.

Go to the Azure Portal and select the website you want to update. When the blade appears, scroll down the left panel and select SSL Certificates.

Remove the binding by clicking … at the end of the binding row and selecting Delete.

Now remove the certificate by clicking … at the end of the certificate row and selecting Delete.

This is where I got an error

It took a short while to resolve this.

I tried a few things like restarting the site and checking the staging slot, but I still got the error. Finally, I checked the other sites in the same App Service plan and found that the same certificate was used by another Web App (both using the same domain URL). Once I removed the binding from that site, I could delete the certificate and upload a new one. I then had to add the new bindings to both sites.

Processing a flat file with Azure Logic Apps

[Update: 5th Aug 2018 – This post is still relevant, especially the integration account, schemas and maps. I have written a new blog that builds on this one and integrates into SQL – Using Azure Logic Apps to Import CSV to SQL Server]

A lot of companies require the transfer of files in order to transact business and there is always a need to translate these files from one format to another. Logic Apps provides a straightforward way to build serverless components that provide the integration points into your systems. This post is going to look at Logic Apps enterprise integration to convert a multi-record CSV file into an XML format. Most of the understanding for this came from the following post:

https://seroter.wordpress.com/2016/09/09/trying-out-standard-and-enterprise-templates-in-azure-logic-apps/

Logic Apps can be created in Visual Studio or directly in the Azure Portal using the browser. Navigate to the Azure portal (https://portal.azure.com), click the plus button at the top of the right hand column, then Web + Mobile, then Logic App.

Complete the form and click Create

This will take a short while to complete. Once complete, you can select the logic app from your resource list to start to use it.

If you look at my recent resource list, you can see the logic app I’ve just created, but you will also see my previous logic app plus an integration account and an Azure Function. The integration account and the function are both required in order to create the necessary schemas and maps to translate the CSV file to XML.

The integration account stores the schemas and maps and the Azure function provides some code that is used to translate the CSV to XML.

An integration account is created the same way as a logic app. The easiest way is to click on the plus symbol and then search for integration

Click on Integration Account then Create

Complete the form

Then Create. Once created you can start to add your schemas and maps

You will now need to jump into Visual Studio to create your maps and schemas. You will need to install the Logic Apps Integration Tools for Visual Studio

You will need to create a schema for the CSV file and a schema for the XML file. These two blog posts walk you through creating a flat file schema for a CSV file and also a positional file

I created the following two schemas, one for the CSV input and one for the XML output.

Once you have created the two schemas, you will need to create a map which allows you to map the fields from one schema to the fields in the other schema.

In order to upload the map you will need to build the project in Visual Studio, which generates the XSLT file.

The schemas and map file project can be found in my repository on GitHub

To upload the files to the integration account, go back to the Azure portal where you previously selected the integration account, click Schemas then Add.

Complete the form, select the schema file from your Visual Studio project and click OK. Repeat this for both schema files. You do the same thing for the map file, although you will need to navigate to your bin/Debug (or Release) folder to find the XSLT file that was built. Your integration account should now show your schemas and maps as uploaded.

There’s one more thing to do before you can create your logic app. In order to process the transformation some code is required in an Azure Function. This is standard code and can be created by clicking the highlighted link on this page. Note: If you haven’t used Azure Functions before then you will also need to click the other link first.

This creates a function with the necessary code required to perform the transformation.
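The generated function is essentially an HTTP trigger that runs an XSLT transform over the XML it receives. The sketch below gives a flavour of that kind of code; it is not the exact code the template generates, and the map file name is a placeholder:

// run.csx - illustrative sketch only, not the exact code generated by the template
using System.IO;
using System.Net;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;
using System.Xml;
using System.Xml.Xsl;

public static async Task<HttpResponseMessage> Run(HttpRequestMessage req, TraceWriter log)
{
    string inputXml = await req.Content.ReadAsStringAsync();

    // Load the XSLT map (assumed here to be deployed alongside the function)
    var xslt = new XslCompiledTransform();
    xslt.Load("CsvToXml.xslt"); // placeholder path to the map built in Visual Studio

    var input = new XmlDocument();
    input.LoadXml(inputXml);

    using (var sw = new StringWriter())
    using (var xw = XmlWriter.Create(sw, xslt.OutputSettings))
    {
        xslt.Transform(input, xw);
        return new HttpResponseMessage(HttpStatusCode.OK)
        {
            Content = new StringContent(sw.ToString(), Encoding.UTF8, "application/xml")
        };
    }
}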

You are now ready to start your logic app. Click on the Logic App you created earlier. This will display a page where you can select a template with which to create your app.

Close this down as you need to link your integration account to your logic app.

Click on Settings, then Integration Account and pick the integration Account where you previously uploaded the Schemas and Map files. Save this and return to the logic app template screen.

Select VETER Pipeline

Then “Use This Template”. This is the basis for your transformation logic. All you need to do now is to complete each box.

In Flat File Decoding & XML Validation, pick the CSV schema

In the Transform XML step, select the function container, the function and the map file.

All we need to do now is to return the transformed xml in the response message. Click “Add an Action” on Transform XML and search for Response.

Pick the Transformed XML content as the body of the response. Click Save and the URL for the logic app will be populated in the Request step.

We now have a Request that takes the CSV in the body and returns the transformed XML in the body of the response. You can test this using a tool like Postman or Fiddler to send the request to the request URL above.

There is also a test CSV file in my repository on GitHub which can be used to test this.
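If you would rather test it from code than from Postman or Fiddler, a minimal sketch using HttpClient is below; the logic app URL and the CSV file name are placeholders:

using System;
using System.IO;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

class LogicAppTest
{
    static async Task Main()
    {
        string csv = File.ReadAllText("FlatFileTest.csv"); // path to your test CSV file
        using (var client = new HttpClient())
        {
            var content = new StringContent(csv, Encoding.UTF8, "text/plain");
            // Use the request URL that was populated when you saved the logic app
            HttpResponseMessage response = await client.PostAsync("https://<your-logic-app-request-url>", content);
            Console.WriteLine(await response.Content.ReadAsStringAsync());
        }
    }
}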

My next post covers how I diagnosed a fault with this Logic App

Migrating Azure WebJobs to Azure Service Fabric

As part of a proof of concept for Azure Service Fabric, one of the challenges was to migrate back end services from a variety of different places. I had a number of services running as Azure WebJobs on the same platform as my web site. The WebJobs were hosted as triggered services using the WebJobs SDK, which has the advantage that the WebJob will run as a console application outside of the Azure web site it is currently hosted in.

Azure Service Fabric has the capability to run any Windows application that can be run from a command line as a guest executable. This means that I could host my WebJob in Service Fabric as a guest executable.

Once I had Visual Studio set up with the Service Fabric SDK and tools, it was relatively straightforward to add the WebJob.

As an example, my WebJob is triggered when a message is placed onto an Azure Storage Queue and it then passes the message into an Azure Service Bus Topic. The WebJob project was added to my Service Fabric application

To add this as a Guest Executable, right click on your service node in the Service Fabric application and select “New Service Fabric Service”

When the “New Service Fabric Service” dialog appears, select “Guest Executable”

Click Browse and select your WebJob executable folder. The WebJob executable should now appear in the Program drop down. Select this, change the service name and click OK.

This should add your WebJob as a guest executable to your application package root

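For reference, the guest executable ends up registered in the service's ServiceManifest.xml via an ExeHost entry point; a sketch of what that looks like is below (the executable name is a placeholder for your WebJob's exe):

<CodePackage Name="Code" Version="1.0.0">
  <EntryPoint>
    <ExeHost>
      <!-- The WebJob console executable that was copied into the code package -->
      <Program>MyWebJob.exe</Program>
      <WorkingFolder>CodePackage</WorkingFolder>
    </ExeHost>
  </EntryPoint>
</CodePackage>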
Once deployed to a Service Fabric cluster, your WebJob should run as normal. If you leave the connection string settings the same as they are in the WebJob then your diagnostic traces will appear in the same blob container as they are now.

Service Fabric: Resolving External Service Address

I am using Azure Service Fabric to host my application, but I’ve deployed it on-premises using a three machine cluster (running version Microsoft.Azure.ServiceFabric.WindowsServer.5.1.156.9590). It was easy to deploy and I only needed to run PowerShell on one of the nodes to configure all three. I followed the instructions here.

From Visual Studio I deployed my application, which consisted of a number of stateless services and a WCF service. When everything is running in the cluster it all works fine, but I wanted to access the WCF service from outside of the cluster. The first issue was that the actual address of the service is not known, but you can see the address if you look at the Service Fabric Explorer for the cluster. Navigating through to the application on one of the nodes returns the URL of the service, e.g.

net.tcp://192.168.56.122:8081/4f341989-ec72-4cd5-8778-6e11e01dc727/968d5932-935a-4773-b83b-fa99f59d9073-131148669247596041

You don’t want to use this URL directly as it could change depending upon the configuration of your cluster and the health of each of the nodes. Service Fabric provides a mechanism for discovering the address of the endpoint using the service partition resolver. If you are running in the cluster then you can use the default resolver and this will return the URL of the endpoint which you can connect to. However, when you are outside of the cluster you need to tell the resolver where to look for the cluster.

Again if you look at the Service Fabric Explorer you can find out the ports used in the cluster e.g.

<ClientConnectionEndpoint Port="19000" />
<LeaseDriverEndpoint Port="9026" />
<ClusterConnectionEndpoint Port="19001" />
<HttpGatewayEndpoint Port="19080" Protocol="http" />
<ServiceConnectionEndpoint Port="9027" />
<ApplicationEndpoints StartPort="20001" EndPort="20031" />
<EphemeralEndpoints StartPort="20032" EndPort="20062" />

The example here shows how to connect to the resolver in an Azure hosted environment.

ServicePartitionResolver resolver = new ServicePartitionResolver("mycluster.cloudapp.azure.com:19000", "mycluster.cloudapp.azure.com:19001");

This example provides a list of endpoints to try on both ports 19000 and 19001. Mapping this to my environment, I used the IP address of the node on which I ran the PowerShell, which is also the node that displays the Service Fabric Explorer. I also needed to know the application name in order for the resolver to find the endpoint I was after. The code below is part of a console application that attempts to call a WCF service from outside of the cluster; the service name and addresses I used are shown in the code.

string uri = "fabric:/ServiceFabricApp/FileStoreServiceStateless";
Binding binding = WcfUtility.CreateTcpClientBinding();
// Create a partition resolver
var serviceResolver = new ServicePartitionResolver(new string[] { "192.168.56.122:19000" , "192.168.56.122:19001" });
 
// create a  WcfCommunicationClientFactory object.
var clientFactory = new WcfCommunicationClientFactory
                    (clientBinding: binding, servicePartitionResolver: serviceResolver);
 
var client = new ServicePartitionClient>(
                    clientFactory,
                    new Uri(uri), partitionKey: Microsoft.ServiceFabric.Services.Client.ServicePartitionKey.Singleton);
 
var result = client.InvokeWithRetry(svc => svc.Channel.GetDocuments("Document", "1000848776", null));
However, when the code ran it always locked up on the call to InvokeWithRetry. On further investigation, by calling ResolveAsync first, I determined that my application was locking up when trying to resolve the address of the service. It took me a long time to figure out what was wrong and I tried a number of different addresses and ports with no luck. It was only when I ran the code here, which lists all the services in a cluster, in the Visual Studio debugger that things started to work. This was confusing because I’d already tried loads of different things. The only difference was that the Development Service Fabric was running. I then ran my console app and no lock up occurred. Turning off the Development Service Fabric, the console app locked up again. I moved the console app onto another computer that didn’t have the Development Service Fabric installed and everything worked fine.

The good thing about this is that everything seems to be working and I’ve learnt more about Service Fabric now.

How to emulate Azure Service Bus Topic Subscription Filtering in RabbitMQ

When creating a subscription to an Azure Service Bus Topic you can add a filter which will determine which messages to send to the subscription based upon the properties of the message.

This is done by passing a SqlFilter to the CreateSubscription method, e.g.

if (!_NamespaceManager.SubscriptionExists(topic, subscription))
{
    if (!String.IsNullOrEmpty(filter))
    {
        SqlFilter strFilter = new SqlFilter(filter);
        await _NamespaceManager.CreateSubscriptionAsync(topic, subscription, strFilter);
        bSuccess = true;
    }
    else
    {
        await _NamespaceManager.CreateSubscriptionAsync(topic, subscription);
        bSuccess = true;
    }
}

Where filter is a string representing the properties that you want to filter on, e.g.

// Create a "LowMessages" filtered subscription.

SqlFilter lowMessagesFilter = new SqlFilter("MessageNumber <= 3");

namespaceManager.CreateSubscription("TestTopic","LowMessages",lowMessagesFilter);

Applying properties to messages makes it easier to configure multiple subscribers to sets of messages rather than having multiple subscribers that receive all the messages, providing you with a flexible approach to building your messaging applications.

Subscriptions are effectively individual queues that each subscriber uses to hold the messages that are relevant to the subscription.

When a message is pushed onto a Topic, the Service Bus will look at all the subscriptions for the Topic and determine which of them the message is relevant to. If it is relevant then the subscription will receive the message into its queue. If there are no subscriptions capable of receiving the message then the message will be lost, unless the topic is configured to throw an exception when there are no subscriptions to receive the message.

This approach is useful if most of the message data is stored in the properties (which are subject to a size limit of 64KB) and the body content is serialised to the same object (or the body object types are known).

Receiving messages on a Service Bus Subscription is as follows:

MessagingFactory messageFactory = MessagingFactory.CreateFromConnectionString(_ConnectionString);
SubscriptionClient client = messageFactory.CreateSubscriptionClient(topic, subscription);
BrokeredMessage message = await client.ReceiveAsync(new TimeSpan(0, 5, 0));
if (message != null)
{
    var properties = message.Properties;
    var body = message.GetBody<MyCustomBodyData>();
    if (processMessage != null) // optional callback used to process the message
    {
        // do some work
    }
    message.Complete();
}

Over the past few months I have been looking at RabbitMQ and trying to apply my Service Bus knowledge, as well as looking at the differences. Routing messages based upon the message properties rather than a routing key defined in the message is still applicable in the RabbitMQ world, and RabbitMQ is configurable enough to work in this way. RabbitMQ requires more configuration than Service Bus, but there is a mechanism called a headers exchange which can be used to route messages based upon the properties of the message.

The first thing to do is to create the exchange and then bind a queue to it based upon a set of filter criteria. I’ve been creating my exchanges with an alternate exchange so that messages that are not handled end up in a default queue. The code below creates the exchange and a queue that subscribes to messages where the ClientId property is “Client1” and the FileType property is “transaction”.

// Create Header Exchange with alternate-exchange
IDictionary<String, Object> args4 = new Dictionary<String, Object>();
args4.Add("alternate-exchange", alternateExchangeNameForHeaderExchange);

channel.ExchangeDeclare(HeaderExchangeName, "headers", true, false, args4);
channel.ExchangeDeclare(alternateExchangeNameForHeaderExchange, "fanout");

// Queue on the Header Exchange for ClientId = Client1 and FileType = transaction
Dictionary<string, object> bindingArgs = new Dictionary<string, object>();
bindingArgs.Add("x-match", "all"); // "all" requires every header to match, "any" requires at least one
bindingArgs.Add("ClientId", "Client1");
bindingArgs.Add("FileType", "transaction");

channel.QueueDeclare(HeaderQueueName, true, false, false, null);
channel.QueueBind(HeaderQueueName, HeaderExchangeName, "", bindingArgs);

// Queue for the Header Exchange's alternate exchange (all other messages)
channel.QueueDeclare(unroutedMessagesQueueNameForHeaderExchange, true, false, false, null);
channel.QueueBind(unroutedMessagesQueueNameForHeaderExchange, alternateExchangeNameForHeaderExchange, "");

This will set up the exchange and queue in RabbitMQ, and now you can send a message to the exchange with the correct properties as follows:

IBasicProperties properties = channel.CreateBasicProperties();
properties.Headers = new Dictionary<string, object>();
properties.Headers.Add("ClientId", "Client1");
properties.Headers.Add("FileType", "transaction");

// The routing key is ignored by a headers exchange; routing is based on the headers above
string routingkey = "header.key";
var message = "Hello World";
var body = Encoding.UTF8.GetBytes(message);

channel.BasicPublish(exchange: HeaderExchangeName,
                     routingKey: routingkey,
                     basicProperties: properties,
                     body: body);

Receiving messages from the queue is as follows:

var consumer = new EventingBasicConsumer(channel);
consumer.Received += (model, ea) =>
{
    var body = ea.Body;
    var message = Encoding.UTF8.GetString(body);
    var routingKey = ea.RoutingKey;
    Byte[] FileTypeBytes = (Byte[])ea.BasicProperties.Headers["FileType"];
    Byte[] ClientIDBytes = (Byte[])ea.BasicProperties.Headers["ClientId"];
    string FileType = System.Text.Encoding.ASCII.GetString(FileTypeBytes);
    string ClientID = System.Text.Encoding.ASCII.GetString(ClientIDBytes);
    Console.WriteLine(" [x] Received '{0}':'{1}' [{2}] [{3}]",
                        routingKey,
                        message,
                        ClientID,
                        FileType);
    EventingBasicConsumer c = model as EventingBasicConsumer;
    if (c != null)
    {
        c.Model.BasicAck(ea.DeliveryTag, false);
        Console.WriteLine(" [x] Received {0} rk {1} ex {2} ct {3}", message, ea.RoutingKey, ea.Exchange, ea.ConsumerTag);
    }
};
channel.BasicConsume(queue: queueProcessorBaseName + textBox1.Text,
                        noAck: false,
                        consumer: consumer);

Again, an out of the box feature of Service Bus can also be implemented in RabbitMQ, but it is much simpler to use in Service Bus. The use of properties to route messages offers a much more flexible approach, but it does require that the body of the messages is either not used or is understood by each consumer. Service Bus offers more flexibility in the filter itself, as the filter string can contain a variety of operators, whereas RabbitMQ matches all or any of the header values and cannot match on a range.

Dead Letters with Azure Service Bus and RabbitMQ

Firstly, what are dead letters?

When a message is received in a messaging system, something tries to process it. The message is normally understood by the system and can be processed; sometimes, however, messages are not understood and can cause the receiving process to fail. The failure could be caught by the system and dealt with, but in extreme situations the message could cause the receiving process to crash. Messages that cannot be delivered or that fail when processed need to be removed from the queue and stored somewhere for later analysis. A message that fails in this way is called a dead letter, and the location where these dead letters reside is called a dead letter queue. Queuing systems such as Azure Service Bus, RabbitMQ and others have mechanisms to handle this type of failure. Some systems handle them automatically and others require configuration.

Dead letter queues are the same as any other queue except that they contain dead letters. As they are queues, they can be processed in the same way as normal queues; they just have a different address. I’ve already discussed Service Bus dead letter queue addressing in a previous post and this is still relevant today.
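As a quick reminder of that addressing, with the Microsoft.ServiceBus.Messaging library you can build a subscription's dead letter queue path and receive from it like any other entity. A minimal sketch, using illustrative topic, subscription and connection string variables:

// The dead letter path has the form "<topic>/Subscriptions/<subscription>/$DeadLetterQueue"
string deadLetterPath = SubscriptionClient.FormatDeadLetterPath(topic, subscription);

MessagingFactory factory = MessagingFactory.CreateFromConnectionString(_ConnectionString);
MessageReceiver deadLetterReceiver = factory.CreateMessageReceiver(deadLetterPath);

BrokeredMessage deadLetter = deadLetterReceiver.Receive(TimeSpan.FromSeconds(30));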

On RabbitMQ a dead letter queue is just another queue and is addressed in the same way as any other queue. The difference is in the way the dead letter queue is set up. Firstly you create the dead letter queue and then you add it to the queue you want to use it with.

To set up the dead letter queue, declare a “direct” exchange and bind a queue to it:

channel.ExchangeDeclare(DeadLetterExchangeName, "direct");
channel.QueueDeclare(DeadLetterQueueName, true, false, false, null);
channel.QueueBind(DeadLetterQueueName, DeadLetterExchangeName, DeadLetterRoutingKey, null);

I’ve used a dead letter routing key that is related to the queue I want to use it with, with an additional “DL” suffix. The routing key needs to be unique so that only the messages you want to go to this specific dead letter queue will be delivered to it, e.g. Payments.Received.DL

Now we need to attach the dead letter queue to the correct queue, so when I created my new queue I needed to add the dead letter exchange and routing key to it:

IDictionary<String, Object> args3 = new Dictionary<String, Object>();
args3.Add("x-dead-letter-exchange", DeadLetterExchangeName);
args3.Add("x-dead-letter-routing-key", DeadLetterRoutingKey);
channel.QueueDeclare(queueName, true, false, false, args3);
channel.QueueBind(queueName, TopicName, paymentsReceivedRoutingKey);

Whilst there is a lot of flexibility with RabbitMQ, dead letter queues come out of the box with Azure Service Bus: each topic and queue has one and it is enabled by default. RabbitMQ, however, allows each topic subscription to have its own dead letter queue, which gives you finer grained control over what to do with each type of failed message.

Now that we have these dead letter queues and we know how to access them, how do we get messages into them?

In Azure Service Bus, there is a mechanism that will automatically put the message in the dead letter queue if the message fails to be delivered 10 times (the default). However, you may wish to handle bad messages yourself in code without relying upon the system to do this for you. If a message is delivered 10 times before being dead lettered, you are using system resources each time it is processed, resources that could be used to process valid messages. When the message is received and validation of the message has failed, or there is an error whilst processing that you have caught, you can explicitly send the message to the dead letter queue by calling the DeadLetter method on the message object.

BrokeredMessage receivedMessage = subscriptionClient.EndReceive(result);

if (receivedMessage != null)
{
    Random rdm = new Random();
    int num = rdm.Next(100);
    Console.WriteLine("Random={0}", num);
    if (num < 10)
    {
        receivedMessage.DeadLetter("Randomly picked for deadletter", "error 123");
        Console.WriteLine("Deadlettered");
    }
    else
    {
        receivedMessage.Complete();
    }
}

My test code, above, randomly sends 10% of my messages to the dead letter queue.

In RabbitMQ, a message will be published to the dead letter queue if one of the following occurs (the limits in points 2 and 3 are configured when the queue is declared, as sketched after the list):

  1. The message is rejected by calling BasicNack or BasicReject
  2. The TTL (Time to Live) expires
  3. The queue length limit is exceeded
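The TTL and queue length limits in points 2 and 3 are set as arguments when the queue is declared, alongside the dead letter arguments shown earlier. A minimal sketch, with example values only:

IDictionary<string, object> queueArgs = new Dictionary<string, object>();
queueArgs.Add("x-dead-letter-exchange", DeadLetterExchangeName);
queueArgs.Add("x-dead-letter-routing-key", DeadLetterRoutingKey);
queueArgs.Add("x-message-ttl", 60000); // messages expire after 60 seconds and are dead lettered
queueArgs.Add("x-max-length", 1000);   // messages beyond a queue length of 1000 are dead lettered
channel.QueueDeclare(queueName, true, false, false, queueArgs);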

I’ve written a similar piece of test code for RabbitMQ

var consumer = new EventingBasicConsumer(channel);
consumer.Received += (model, ea) =>
{
    var body = ea.Body;
                       
    var message = Encoding.UTF8.GetString(body);
    Random random = new Random((int)DateTime.Now.Ticks);
    int randomNumber = random.Next(0, 100);
    if (randomNumber > 30)
    {
        channel.BasicAck(ea.DeliveryTag, false);
        Console.WriteLine(" [x] Received {0} rk {1} ex {2} ct {3}", message, ea.RoutingKey, ea.Exchange, ea.ConsumerTag);
    }
    else
    {
        if (randomNumber > 10)
        {
            channel.BasicNack(ea.DeliveryTag,false, true);
            Console.WriteLine(" [xxxxx] NAK {0} rk {1} ex {2} ct {3}", message, ea.RoutingKey, ea.Exchange, ea.ConsumerTag);
        }
        else
        {
            Console.WriteLine(" [xxxxx] DeadLetter {0} rk {1} ex {2} ct {3}", message, ea.RoutingKey, ea.Exchange, ea.ConsumerTag);
            channel.BasicNack(ea.DeliveryTag, false, false);
        }
    }
    Thread.Sleep(200);
};
channel.BasicConsume(queue: "hello",
                        noAck: false,
                        consumer: consumer);

If you look at the code you will see that there are two places where BasicNack is called and only one of them sends the message to the dead letter queue. BasicNack takes three parameters and the last one is “requeue”. Setting requeue to true will put the message back on the originating queue, whereas setting requeue to false will publish the message to the dead letter queue.

Both RabbitMQ and Service Bus have the dead letter queue concept and they can be used in a similar way. Service Bus has one configured by default and has both an automatic and a manual mechanism for publishing messages to the dead letter queue. RabbitMQ requires more configuration and does not have the same automation for dead lettering, but it can be configured with more flexibility.

Unhandled Messages with Azure Service Bus and RabbitMQ

One of the requirements for our messaging system is to be able to build a system to process messages and either

  1. Have a default handler and then add custom handlers as and when they are required without needing to recode the main system.
  2. Be notified if a message is put onto a topic and there isn’t a process to handle the message.

In RabbitMQ this is relatively straightforward and requires creating an alternate exchange, adding it as a property to your main exchange and then creating a queue to service the alternate exchange.

 

IDictionary<String, Object> args2 = new Dictionary<String, Object>();
args2.Add("alternate-exchange", alternateExchangeName);

channel.ExchangeDeclare(mainExchangeName, "direct", false, false, args2);
channel.ExchangeDeclare(alternateExchangeName, "fanout");

// Adds a queue bound to the unhandled messages exchange
channel.QueueDeclare(unroutedMessagesQueueName, true, false, false, null);
channel.QueueBind(unroutedMessagesQueueName, alternateExchangeName, "");

Now, when a message is published on the main exchange and there is no subscription to handle it, the message will automatically appear on the unrouted messages queue. This solution solves both of the scenarios we were looking for.

I was interested, however, in understanding how to do this in Azure Service Bus and, whilst it is possible, it isn’t as straightforward and requires some code to set up. Topics can be configured to throw an exception if there is no subscription available to process the message when it is sent. So when the topic is created it needs to be configured to enable this exception to be thrown.

NamespaceManager namespaceManager = NamespaceManager.CreateFromConnectionString(_ConnectionString);

TopicDescription td = new TopicDescription(topic)
{
    EnableFilteringMessagesBeforePublishing = true
};

await namespaceManager.CreateTopicAsync(td);

Now when a message is sent we need to handle the exception and do something with the message. This is the difference between RabbitMQ and Service Bus: in RabbitMQ the message will automatically end up in the unhandled messages queue, whereas in Service Bus we need to explicitly add it to the unhandled messages queue when the message is sent. This means that each message producer will need code to handle the exception:

try
{
    client.Send(message);
}
catch (NoMatchingSubscriptionException ex)
{
    // Do something here to process the unhandled message
    // Probably put it on an unhandled message queue
}

Note, however, that if you had a subscription that was a catch all (for example logging all the messages) then unhandled messages would not appear as they are already being handled by the catch all subscription.
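A catch all subscription is simply one created with no filter, or with an always-true SqlFilter; for example (the subscription name here is illustrative):

// Receives every message published to the topic, so NoMatchingSubscriptionException is never thrown
await _NamespaceManager.CreateSubscriptionAsync(topic, "AuditAll", new SqlFilter("1=1"));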