Steve Spencer's Blog

Blogging on Azure Stuff

Azure Key Vault Logging and Events with Log Analytics

Following on from my previous blog post (http://blogs.recneps.net/post/Setting-up-Azure-Key-Vault-with-Audit-logging), which explains how to set up Azure Key Vault with logging enabled, this post explains how to access the details of those logs and how to create an alert so you can see, for example, if someone is accessing the key vault from an unknown IP address.

Open the Azure portal, navigate to the Resource Groups section and pick the resource group that we configured last time, which contains the key vault and Log Analytics resources.

image

Click your Log Analytics item to open Log Analytics.

You can then select Log Search.

image

This screen allows you to create your own query or select from existing ones.

image

Selecting “All Collected Logs” will show you the logs for the last day. I’ve highlighted the areas where you can change the time period, see the query, and click Advanced Analytics to open a richer environment for analysing your logs.

image

If you want to query just for the Key Vault Audit logs then you can use the following query:

search * | where Category=="AuditEvent"

image

This will default to a list view, but clicking the Table button will format the data in an easier to read table.

image

You can sort and filter on the column headers. This can also be achieved using the order by clause as follows:

search * | where Category=="AuditEvent" | order by TimeGenerated desc

A blog post discussing the query language can be found here.

We are interested in all calls where someone has tried to access a Secret from the key vault. For that we are looking for an AuditEvent with an OperationName of SecretGet. If we also want to restrict the columns we retrieve, we can use “project”, e.g.

search * | where Category=="AuditEvent"  and OperationName == "SecretGet"
| order by TimeGenerated desc
| project TimeGenerated, OperationName, CallerIPAddress, ResultSignature, requestUri_s

image

Now that we are familiar with writing queries, we can look at alerting. I’d like to set up an alert for when the key vault is accessed from an IP address other than the one where my application is running. This can be done as follows:

search * | where Category=="AuditEvent" and CallerIPAddress != "51.140.184.51"

This IP address is actually the Azure Portal and is shown when you view the resource group that contains the key vault. I’m using this IP address so that I will actually get an alert (albeit at the wrong time) when my application runs.

Click New Alert Rule

image

The following screen should appear

image

The Alert Target should be the Log Analytics workspace we’ve been using, and the Target Criteria (when clicked) should show the query we’ve just written.

image

We need to configure the rule for when this alert should be triggered. I’m interested in any case where at least one attempt has been made in the last 5 minutes to access the Key Vault from an unknown location, so I set the threshold to zero and click Done. We’ve now configured the logic that determines when the alert is fired; next we need to say what we want to happen when it fires. Firstly, we need to give the alert a name and description.

image

Now we need to configure how we are alerted. For this you need to create an action group. An action group allows you to define a collection of activities that will happen when the alert is fired. Click New Action Group

image

Action Types can be any of the following:

image

An action group can have multiple actions, and you can select both email and SMS in a single action. Once you have created your action group, select it and then click “Create alert rule”.

image

Your alert is now set up and running. You can view/edit alerts by selecting Monitor in the Azure Portal

image

then clicking Alerts (preview). Here you will be able to see the alerts that have fired.

image

Click Manage Rules to edit the alert.

When the alert is fired I will get an email containing the details of the alert.

Log Analytics is a powerful tool, and whilst this series of posts has been related to auditing of Key Vault, we can use Log Analytics for a wide variety of log sources such as Application Insights. We can also use the same mechanism for alerting on these other log sources.

Setting up Azure Key Vault with Audit logging

Azure Key Vault is a good way to share secrets with your partners in a way that allows you to have control over the access to each of the assets in Azure. We also need to know who is accessing the resources and from where so that we can monitor for suspicious activity. This post will talk through setting up the key vault and then configuring logging to keep track of the audit information for your certificates, keys and secrets. For each application that you want to access your resources you will need to create some credentials that the application can use.

To allow an application to access key vault an App Registration needs to be added to Azure Active Directory (AAD). This effectively sets up a username and password that the application can use for credentials.

Open the Azure portal (http://portal.azure.com) and navigate to Azure Active Directory.

Click "App registrations"

clip_image002

Then "New application registration"

clip_image001

The name needs to be unique within your AD. Select Web app / API and enter a sign-on URL. If you are not building a website then enter anything here; it might be useful to use a URL related to your existing domain with the application name appended, and it doesn’t need to be a valid URL. Then click “Create”.

Once created, copy the Application ID, as this is equivalent to a username to be used when calling the Key Vault in code. You now need to create the password.

Click Settings then Keys

clip_image002

clip_image004

clip_image006

Enter a name in the description field and select a duration, then click Save. The new key value will be displayed. You will need to copy this as it will not be visible again once you leave this page. This will be used as the password.

clip_image002[5]

Now create the Key Vault. To do that it is a good idea to put it in a specific resource group, especially if you are creating a set of resources that the key vault is going to access or if you are going to set up third-party access. Once the resource group has been created, select it and add a Key Vault. When the Create Key Vault panel appears, click Access Policies, then click "Add new"

clip_image004[5]

Pick the application you just created in AAD, select Get under Secret permissions and save, then go back to the main Key Vault pane and click Create.

You have just given the application we created earlier access only to retrieve secrets. As you can see from the access policy, you can give the application permissions to access a combination of Keys, Secrets and Certificates, with the minimum access being Get. Key Vault security is at the vault level; you cannot protect individual secrets at the user level. By granting only Get access on Secrets, the application will not be able to list the secrets available and will only be able to retrieve secrets it knows the names of.

Now that the Key Vault is set up and can be accessed, we want to know who is accessing the vault and from where. Out of the box this is not enabled; it requires additional configuration and resources to allow us to retrieve this audit information. This is achieved by enabling diagnostic logs in the Key Vault.

Before you can enable this you need to create a new storage account in this resource group to store the logs, then add Application Insights to the resource group

clip_image002[7]

Once these have been provisioned, navigate to the Key Vault you just created & click Diagnostic logs

clip_image004[7]

Click "Turn on diagnostics"

clip_image006[6]

Select “Archive to Storage Account” and pick the storage account you’ve just created.

Select “Send to Log Analytics” and create a new OMS workspace in your resource group.

clip_image008

Once created select this for Log Analytics

clip_image009

Select the AuditEvent log and click Save.

Now any changes to the Key Vault, plus any access from your application, will be logged and visible via Log Analytics. There’s a 10 – 15 minute delay between accessing the Key Vault and the log entry appearing.

To add a secret to the vault, navigate to the vault, click Secrets then Add.

clip_image010

Select Manual from the Upload options, enter a name and the secret

clip_image011

Remember the name you gave the Secret as you will need this in your code when accessing the key vault. This secret will now have a unique identifier that you will use. The one I’ve just created is:

https://recneps-vault.vault.azure.net/secrets/recnepssvsb-key

You should see in the logs this secret being created and also when it gets accessed.

An example of accessing the Key Vault in C# can be seen here: https://docs.microsoft.com/en-us/azure/key-vault/key-vault-use-from-web-application

The application in the example uses settings as defined below:

ClientID is the Application ID we created in the application registration in AD

ClientSecret is the key you created (that you had to save as it wasn’t visible again) as part of creating the application registration in AD.

Each key, secret and certificate has a unique URL which is used as the SecretURI, e.g. https://recneps-vault.vault.azure.net/secrets/recnepssvsb-key
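
To make this concrete, here is a minimal sketch of reading a secret using those settings, along the lines of the linked sample. It assumes the Microsoft.Azure.KeyVault and ADAL (Microsoft.IdentityModel.Clients.ActiveDirectory) NuGet packages, and the class name and placeholder values are illustrative:

using System.Threading.Tasks;
using Microsoft.Azure.KeyVault;
using Microsoft.IdentityModel.Clients.ActiveDirectory;

public static class KeyVaultReader
{
    // ClientID and ClientSecret come from the app registration created above (placeholders here)
    private const string ClientID = "<application id>";
    private const string ClientSecret = "<key saved earlier>";

    // Callback used by KeyVaultClient to obtain a token from Azure AD
    private static async Task<string> GetToken(string authority, string resource, string scope)
    {
        var authContext = new AuthenticationContext(authority);
        var credential = new ClientCredential(ClientID, ClientSecret);
        var result = await authContext.AcquireTokenAsync(resource, credential);
        return result.AccessToken;
    }

    public static async Task<string> GetSecret(string secretUri)
    {
        var keyVaultClient = new KeyVaultClient(new KeyVaultClient.AuthenticationCallback(GetToken));
        var secret = await keyVaultClient.GetSecretAsync(secretUri);
        return secret.Value;
    }
}

Calling GetSecret with the SecretURI above should then show up in the logs as a SecretGet AuditEvent after the 10 – 15 minute delay mentioned earlier.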

You now have your key vault set up with audit logging and are able to access it. My next blog post will talk you through how to access the logs and also how to set up alerting

Creating a Scheduled Web Job in Azure

It’s been a while since I’ve talked about web jobs, but they are still around. I needed to modify one of mine recently and configure it as a scheduled web job.

You can deploy your web job from the Azure Portal. Web jobs are part of App Services and are deployed by selecting the app service you want and clicking the WebJobs service.

image

Click the Add button.

image

Enter a name for your web job and browse to a zip or exe containing your web job.

Select Triggered from the Type drop down. This changes the UI to allow you to select Scheduled.

image

The Schedule is triggered using a CRON expression.
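
Azure web job CRON expressions have six fields (seconds, minutes, hours, day of month, month, day of week). For example, the following expression runs the job every 15 minutes:

0 */15 * * * *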

Alternatively, configure the web job as a manual trigger, then use the Azure Scheduler to trigger the web job. When you have configured your web job, click on its properties in the Azure Portal and you should see a webhook URL.

https://yourwebapp.scm.azurewebsites.net/api/triggeredwebjobs/yourwebjob/run
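
Before wiring this up to the scheduler you can check the webhook works by calling it yourself. A minimal sketch, assuming the deployment user name and password from the web app's publish profile (the values below are placeholders):

using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;

class TriggerWebJob
{
    static async Task Main()
    {
        // Deployment credentials taken from the web app's publish profile (placeholders)
        var userName = "$yourwebapp";
        var password = "<publish profile password>";
        var webhookUrl = "https://yourwebapp.scm.azurewebsites.net/api/triggeredwebjobs/yourwebjob/run";

        using (var client = new HttpClient())
        {
            var token = Convert.ToBase64String(Encoding.ASCII.GetBytes($"{userName}:{password}"));
            client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Basic", token);

            // An empty POST is enough to start the web job
            var response = await client.PostAsync(webhookUrl, new StringContent(string.Empty));
            Console.WriteLine(response.StatusCode);
        }
    }
}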

Now create a new scheduler Job

image

Select the method as POST and paste in the webhook URL. Once you've completed this configuration you can then configure the schedule.

image

Using the scheduler allows you to configure retry policies and also error actions

Custom ASP.NET MVC app running in a Container on Service Fabric

In an earlier post, I talked about how to create a Docker container on Windows that housed a custom ASP.Net MVC app. What I want to show now is how you can get this container running in Service Fabric.

I created 3 identical virtual machines, all capable of running Docker as in my earlier post. Now I needed to make my three VMs into a Service Fabric cluster. These two posts explain how:

https://docs.microsoft.com/en-us/azure/service-fabric/service-fabric-get-started

https://docs.microsoft.com/en-us/azure/service-fabric/service-fabric-cluster-standalone-deployment-preparation

My 3 VMs are called sf0, sf1 & sf2 and I needed to put these into my cluster config. I picked the ClusterConfig.Unsecure.MultiMachine config file that comes with the Service Fabric files and changed it to include my 3 VMs, so my nodes look like this:

"nodes": [
{
      "nodeName": "sf0",
      "iPAddress": "sf0",
      "nodeTypeRef": "NodeType0",
      "faultDomain": "fd:/dc1/r0",
      "upgradeDomain": "UD0"
},
{
      "nodeName": "sf1",
      "iPAddress": "sf1",
      "nodeTypeRef": "NodeType0",
      "faultDomain": "fd:/dc2/r0",
      "upgradeDomain": "UD1"
},
{
      "nodeName": "sf2",
      "iPAddress": "sf2",
      "nodeTypeRef": "NodeType0",
      "faultDomain": "fd:/dc3/r0",
      "upgradeDomain": "UD2"
}
],

I then remoted onto one of the machines and ran the following PowerShell:

.\TestConfiguration.ps1 -ClusterConfigFilePath .\ClusterConfig.json

This will check all the machines in the ClusterConfig.json file to see if they are configured correctly and report any errors. I got the following error:

Machine 'sf2' is not reachable on port 445. Check connectivity/open ports. Error: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond 192.168.1.222:445

This meant I needed to open the correct firewall ports on my VMs (I got this error for all the machines in the cluster). Once I fixed this and reran the PowerShell, the tests passed, which meant I could install Service Fabric on each of the machines as follows:

.\CreateServiceFabricCluster.ps1 -ClusterConfigFilePath .\ClusterConfig.json –AcceptEULA

When this completes successfully you should see something like this:

Your cluster is successfully created! You can connect and manage your cluster using Microsoft Azure Service Fabric Explorer or Powershell. To connect through Powershell, run 'Connect-ServiceFabricCluster

I could connect to Service Fabric Explorer using: http://sf0:19080

Now that I have my cluster running, I needed to create a Service Fabric app and deploy it to the cluster. Make sure that you have installed the Service Fabric SDK, then run Visual Studio and create a new Service Fabric project. When the project is created, right click on the Services node and select Add -> New Service Fabric Service.

clip_image001[4]

Then pick Guest Container and enter the name of the Docker Hub repository where your Docker image resides.

clip_image001

This will add the necessary files to your Service Fabric project. If you remember from my earlier post, the website was hosted on port 8000 of the container. We need to tell Service Fabric about this, and we may also want to map it to a different port.

If you open the container's ServiceManifest file

clip_image001[8]

Add an endpoint with the port you want Service Fabric to use to publish the website

clip_image001[10]

In this example I’m using the same port. If you want to map the port to a different one then change this to something else, e.g. if I wanted to use http://sf0:8080 as the website then I would change the ServiceManifest to this:

image

You also need to tell Service Fabric about the container port that is published. This is done in the ApplicationManifest file:

image

This is set to 8000 as that is the port exposed by the Docker container.
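
For reference, the two manifest fragments look roughly like this (the names and versions are illustrative); the Endpoint in the ServiceManifest is the port Service Fabric publishes and the PortBinding in the ApplicationManifest maps it onto the container's port 8000:

<!-- ServiceManifest.xml -->
<Resources>
  <Endpoints>
    <Endpoint Name="WebEndpoint" UriScheme="http" Port="8080" Protocol="http" />
  </Endpoints>
</Resources>

<!-- ApplicationManifest.xml -->
<ServiceManifestImport>
  <ServiceManifestRef ServiceManifestName="GuestContainerPkg" ServiceManifestVersion="1.0.0" />
  <Policies>
    <ContainerHostPolicies CodePackageRef="Code">
      <PortBinding ContainerPort="8000" EndpointRef="WebEndpoint" />
    </ContainerHostPolicies>
  </Policies>
</ServiceManifestImport>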

Now deploy your application to Service Fabric. It may take a while to initialise your container, as the image will need to be downloaded from Docker Hub before it will run. Once it is running you should see it as Ready in Service Fabric Explorer.

image

Error updating SSL certificates in Azure App Services

I was asked to update the SSL certificates on a website that was hosted in Azure Web Apps. No problem I thought.

Go to the Azure Portal,

Select the website you want to update.

When the blade appears, scroll down the left panel and select SSL Certificates

image

image

 

Remove the binding by clicking … at the end of the binding row and selecting Delete

image

Now remove the certificate by clicking … at the end of the certificate row and selecting Delete

This is where I got an error

image

It took a short while to resolve this.

I tried a few things like restarting the site and checking the staging slot, but I still got the error. Finally, I checked the other sites in the same App Service plan and found that the same certificate was used by another web app (both using the same domain URL). Once I removed the binding from that site, I could delete the certificate and upload a new one. I then had to add the new bindings to both sites.

Processing a flat file with Azure Logic Apps

A lot of companies require the transfer of files in order to transact business and there is always a need to translate these files from one format to another. Logic Apps provides a straightforward way to build serverless components that provide the integration points into your systems. This post is going to look at Logic Apps enterprise integration to convert a multi-record CSV file into an XML format. Most of the understanding for this came from the following post:

https://seroter.wordpress.com/2016/09/09/trying-out-standard-and-enterprise-templates-in-azure-logic-apps/

Logic Apps can be created in Visual Studio or directly in the Azure Portal using the browser. Navigate to the Azure portal (https://portal.azure.com), click the plus button at the top of the right hand column, then Web + Mobile, then Logic App

clip_image002

Complete the form and click Create

clip_image003

This will take a short while to complete. Once complete, you can select the logic app from your resource list to start to use it.

If you look at my recent list

clip_image005

You can see the logic app I’ve just created, but you will also notice my previous logic app, an integration account and an Azure function. The last two are both required in order to create the necessary schemas and maps to translate the CSV file to XML.

The integration account stores the schemas and maps and the Azure function provides some code that is used to translate the CSV to XML.

An integration account is created the same way as a logic app. The easiest way is to click on the plus symbol and then search for integration

clip_image007

Click on Integration Account then Create

clip_image009

Complete the form

clip_image010

Then click Create. Once created you can start to add your schemas and maps

clip_image012

You will now need to jump into Visual Studio to create your maps and schemas. You will need to install the Logic Apps Integration Tools for Visual Studio.

You will need to create a schema for the CSV file and a schema for the XML file. These two blog posts walk you through creating a flat file schema for a CSV file and also for a positional file.

I created the following two schemas

clip_image014

clip_image016

Once you have created the two schemas, you will need to create a map which allows you to map the fields from one schema to the fields in the other schema.

clip_image018

In order to upload the map you will need to build the project in Visual Studio to generate the XSLT file.

The schemas and map file project can be found in my repository on GitHub

To upload the files to the integration account, go back to the Azure portal where you previously selected the integration account, click Schemas then Add

clip_image020

Complete the form, select the schema file from your Visual Studio project and click OK. Repeat this for both schema files. You do the same thing for the map file, although you will need to navigate to your bin/Debug (or Release) folder to find the XSLT file that was built. Your integration account should now show your schemas and maps as uploaded

clip_image021

There’s one more thing to do before you can create your logic app. In order to process the transformation some code is required in an Azure Function. This is standard code and can be created by clicking the highlighted link on this page. Note: If you haven’t used Azure Functions before then you will also need to click the other link first.

clip_image023

clip_image025

This creates a function with the necessary code required to perform the transformation

clip_image027

You are now ready to start your logic app. Click on the Logic App you created earlier. This will display a page where you can select a template with which to create your app.

clip_image029

Close this down as you need to link your integration account to your logic app.

Click on Settings, then Integration Account and pick the integration Account where you previously uploaded the Schemas and Map files. Save this and return to the logic app template screen.

Select VETER Pipeline

clip_image031

Then “Use This Template”. This is the basis for your transformation logic. All you need to do now is to complete each box.

clip_image032

clip_image034

In Flat File Decoding & XML Validation, pick the CSV schema

clip_image035

In the transform XML

clip_image036

Select the function container, the function and the map file

All we need to do now is to return the transformed XML in the response message. Click “Add an Action” on Transform XML and search for Response.

clip_image038

Pick the Transformed XML content as the body of the response. Click Save and the URL for the logic app will be populated in the Request step

clip_image039

We now have a request that takes the CSV in the body and returns the transformed XML in the body of the response. You can test this using a tool like Postman or Fiddler to send the request to the request URL above.

There is also a test CSV file in my repository on GitHub which can be used to test this.
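
If you would rather test it from code than from Postman or Fiddler, a small console client along these lines will post the CSV and print the XML that comes back (the URL is a placeholder for the request URL copied from the designer, and the text/plain content type is an assumption):

using System;
using System.IO;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

class LogicAppTestClient
{
    static async Task Main()
    {
        // Placeholder for the request URL (including its SAS token) copied from the logic app designer
        var logicAppUrl = "https://<your-logic-app-request-url>";
        var csv = File.ReadAllText("test.csv");

        using (var client = new HttpClient())
        {
            var content = new StringContent(csv, Encoding.UTF8, "text/plain");
            var response = await client.PostAsync(logicAppUrl, content);
            Console.WriteLine(await response.Content.ReadAsStringAsync());
        }
    }
}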

My next post covers how I diagnosed a fault with this Logic App

Service Fabric: Resolving External Service Address

I am using Azure Service Fabric to host my application, but I’ve deployed it on premises using a 3 machine cluster (running version Microsoft.Azure.ServiceFabric.WindowsServer.5.1.156.9590). It was easy to deploy and I only needed to run PowerShell on 1 of the nodes to configure all 3. I followed the instructions here.

From Visual Studio I deployed my application which consisted of a number of stateless services and a WCF service. When everything is running in the cluster it all works fine but I wanted to access the WCF service from outside of the cluster. The first issue was that the actual address of the service is not known but you can see the address if you look at the Service Fabric Explorer of the cluster. Navigating through to the application on one of the nodes returns the url of the service e.g.

net.tcp://192.168.56.122:8081/4f341989-ec72-4cd5-8778-6e11e01dc727/968d5932-935a-4773-b83b-fa99f59d9073-131148669247596041

You don’t want to use this url directly as it could change depending upon the configuration of your cluster and the health of each of the nodes. Service Fabric provides a mechanism for discovering the address of the end point using the Service Resolver. If you are running in the cluster then you can use the default resolver and this will return the url of the end point which you can connect to. However, when you are outside of the cluster you need to tell the resolver where to look for the cluster.

Again if you look at the Service Fabric Explorer you can find out the ports used in the cluster e.g.

<ClientConnectionEndpoint Port="19000" />
<LeaseDriverEndpoint Port="9026" />
<ClusterConnectionEndpoint Port="19001" />
<HttpGatewayEndpoint Port="19080" Protocol="http" />
<ServiceConnectionEndpoint Port="9027" />
<ApplicationEndpoints StartPort="20001" EndPort="20031" />
<EphemeralEndpoints StartPort="20032" EndPort="20062" />

The example here shows how to connect to the resolver in an Azure hosted environment.

ServicePartitionResolver resolver = new ServicePartitionResolver("mycluster.cloudapp.azure.com:19000", "mycluster.cloudapp.azure.com:19001");

This example provides a list of endpoints to try on both ports 19000 & 19001. Mapping this to my environment, I used the IP address of the node on which I ran the PowerShell, which is also the node that serves the Service Fabric Explorer. I also needed to know the application and service name in order for the resolver to find the endpoint I was after. The code below is part of a console application that attempts to call a WCF service from outside of the cluster. I’ve highlighted the service name and addresses I’ve used.

string uri = "fabric:/ServiceFabricApp/FileStoreServiceStateless";
Binding binding = WcfUtility.CreateTcpClientBinding();
// Create a partition resolver
var serviceResolver = new ServicePartitionResolver(new string[] { "192.168.56.122:19000" , "192.168.56.122:19001" });
 
// create a  WcfCommunicationClientFactory object.
var clientFactory = new WcfCommunicationClientFactory
                    (clientBinding: binding, servicePartitionResolver: serviceResolver);
 
var client = new ServicePartitionClient>(
                    clientFactory,
                    new Uri(uri), partitionKey: Microsoft.ServiceFabric.Services.Client.ServicePartitionKey.Singleton);
 
var result = client.InvokeWithRetry(svc => svc.Channel.GetDocuments("Document", "1000848776", null));
However, when the code ran it always locked up on the call to InvokeWithRetry. On further investigation, by calling ResolveAsync first, I determined that my application was locking up when trying to resolve the address of the service. It took me a long time to figure out what was wrong and I tried a number of different addresses and ports with no luck. It was only after running the code here, which lists all the services in a cluster, in the Visual Studio debugger that things started to work. This was confusing because I’d already tried loads of different things. The only difference was that the Development Service Fabric was running. I then ran my console app and no lock up occurred. Turning off the Development Service Fabric made the console app lock up again. I moved the console app onto another computer that didn’t have the Development Service Fabric installed and everything worked fine.
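
For reference, calling the resolver directly looks something like this (a sketch using the same serviceResolver and uri as the code above):

// Resolve the service endpoints directly to see where the lock up happens
var resolved = await serviceResolver.ResolveAsync(
                   new Uri(uri),
                   Microsoft.ServiceFabric.Services.Client.ServicePartitionKey.Singleton,
                   CancellationToken.None);

foreach (var endpoint in resolved.Endpoints)
{
    Console.WriteLine(endpoint.Address);
}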

The good thing about this is that everything seems to be working and I’ve learnt more about Service Fabric now. :)

How to emulate Azure Service Bus Topic Subscription Filtering in RabbitMQ

When creating a subscription to an Azure Service Bus Topic you can add a filter which will determine which messages to send to the subscription based upon the properties of the message.

image

This is done by passing a SqlFilter to the Create Subscription method

e.g.

if (!_NamespaceManager.SubscriptionExists(topic, subscription))
{
    if (!String.IsNullOrEmpty(filter))
    {
        SqlFilter strFilter = new SqlFilter(filter);
        await _NamespaceManager.CreateSubscriptionAsync(topic, subscription, strFilter);
        bSuccess = true;
    }
    else
    {
        await _NamespaceManager.CreateSubscriptionAsync(topic, subscription);
        bSuccess = true;
    }
}

Where filter is a string representing the properties that you want to filter on, e.g.

// Create a "LowMessages" filtered subscription.

SqlFilter lowMessagesFilter = new SqlFilter("MessageNumber <= 3");

namespaceManager.CreateSubscription("TestTopic","LowMessages",lowMessagesFilter);

Applying properties to messages makes it easier to configure multiple subscribers to sets of messages rather than having multiple subscribers that receive all the messages, providing you with a flexible approach to building your messaging applications.
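
Setting the properties when sending is straightforward. Here is a minimal sketch using the same Microsoft.ServiceBus.Messaging API as the examples above (the topic name and property value are illustrative):

// Send a message to the topic with a property that the subscription filters can match on
var topicClient = TopicClient.CreateFromConnectionString(_ConnectionString, "TestTopic");

var message = new BrokeredMessage("Hello World");
message.Properties["MessageNumber"] = 1; // matched by the "MessageNumber <= 3" filter above

await topicClient.SendAsync(message);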

Subscriptions are effectively individual queues that each subscriber uses to hold the messages that are relevant to that subscription.

When a message is pushed onto a topic, the Service Bus will look at all the subscriptions for the topic and determine which of them the message is relevant to. If it is relevant, the subscription will receive the message into its queue. If there are no subscriptions capable of receiving the message then the message will be lost, unless the topic is configured to throw an exception when there are no subscriptions to receive the message.

This approach is useful if most of the message data is stored in the properties (which are subject to a size limit of 64KB) and the body content is serialised to the same object (or the body object types are known).

Receiving messages on a Service Bus Subscription is as follows:

MessagingFactory messageFactory = MessagingFactory.CreateFromConnectionString(_ConnectionString);
SubscriptionClient client = messageFactory.CreateSubscriptionClient(topic, subscription);
var message = await client.ReceiveAsync(new TimeSpan(0, 5, 0));
if (message != null)
{
    var properties = message.Properties;
    var body = message.GetBody<MyCustomBodyData>();
    if (processMessage != null)
    {
        // processMessage is the handler delegate supplied by the calling code (not shown)
        // do some work
    }
    message.Complete();
}

Over the past few months I have been looking at RabbitMQ and trying to apply my Service Bus knowledge, as well as looking at the differences. Routing messages based upon the message properties rather than a routing key defined in the message is still applicable in the RabbitMQ world and RabbitMQ is configurable enough to work in this way. RabbitMQ requires more configuration than Service Bus but there is a mechanism called Header Exchange which can be used to route messages based upon the properties of the message.

The first thing to do is to create the exchange, then bind a queue to it based upon a set of filter criteria. I’ve been creating my exchanges with an alternate exchange so that messages which are not matched are delivered to a default queue. The code below creates the exchange and a queue that subscribes to messages where the ClientId property is “Client1” and the FileType property is “transaction”:

// Create Header Exchange with alternate-exchange
IDictionary<String, Object> args4 = new Dictionary<String, Object>();
args4.Add("alternate-exchange", alternateExchangeNameForHeaderExchange);
channel.ExchangeDeclare(HeaderExchangeName, "headers", true, false, args4);
channel.ExchangeDeclare(alternateExchangeNameForHeaderExchange, "fanout");

// Queue for Header Exchange Client1 & transaction
Dictionary<string, object> bindingArgs = new Dictionary<string, object>();
bindingArgs.Add("x-match", "all"); // any or all
bindingArgs.Add("ClientId", "Client1");
bindingArgs.Add("FileType", "transaction");
channel.QueueDeclare(HeaderQueueName, true, false, false, null);
channel.QueueBind(HeaderQueueName, HeaderExchangeName, "", bindingArgs);

// Queue for the Header Exchange's alternate exchange (all other messages)
channel.QueueDeclare(unroutedMessagesQueueNameForHeaderExchange, true, false, false, null);
channel.QueueBind(unroutedMessagesQueueNameForHeaderExchange, alternateExchangeNameForHeaderExchange, "");

This will set up the exchange and queues in RabbitMQ, and now you can send a message to the exchange with the correct properties as follows:

IBasicProperties properties = channel.CreateBasicProperties();
properties.Headers = new Dictionary<string, object>();
properties.Headers.Add("ClientId", "Client1");
properties.Headers.Add("FileType", "transaction");

// The routing key is ignored by a headers exchange; routing is based on the headers above
string routingkey = "header.key";
var message = "Hello World";
var body = Encoding.UTF8.GetBytes(message);

// Publish to the header exchange declared above
channel.BasicPublish(exchange: HeaderExchangeName,
                     routingKey: routingkey,
                     basicProperties: properties,
                     body: body);

Receiving messages from the queue is as follows:

var consumer = new EventingBasicConsumer(channel);
consumer.Received += (model, ea) =>
{
    var body = ea.Body;
    var message = Encoding.UTF8.GetString(body);
    var routingKey = ea.RoutingKey;
    Byte[] FileTypeBytes = (Byte[])ea.BasicProperties.Headers["FileType"];
    Byte[] ClientIDBytes = (Byte[])ea.BasicProperties.Headers["ClientId"];
    string FileType = System.Text.Encoding.ASCII.GetString(FileTypeBytes);
    string ClientID = System.Text.Encoding.ASCII.GetString(ClientIDBytes);
    Console.WriteLine(" [x] Received '{0}':'{1}' [{2}] [{3}]",
                        routingKey,
                        message,
                        ClientID,
                        FileType);
    EventingBasicConsumer c = model as EventingBasicConsumer;
    if (c != null)
    {
        c.Model.BasicAck(ea.DeliveryTag, false);
        Console.WriteLine(" [x] Received {0} rk {1} ex {2} ct {3}", message, ea.RoutingKey, ea.Exchange, ea.ConsumerTag);
    }
};
channel.BasicConsume(queue: queueProcessorBaseName + textBox1.Text,
                        noAck: false,
                        consumer: consumer);

Again, an out-of-the-box feature of Service Bus can also be implemented in RabbitMQ, but it is much simpler to use in Service Bus. The use of properties to route messages offers a much more flexible approach, but it does require that the message bodies are either not used or are understood by each consumer. Service Bus offers more flexibility, as the filter string can contain a variety of operators, whereas RabbitMQ matches all or some of the header values exactly and cannot match a range.

Why do I need Pre and Post Approval Steps in my Release Pipeline?

TFS Release Manager and Octopus Deploy both support the concept of approval steps, but why do you need both pre-release and post-release approval steps? When I first started to look at automated release tools such as TFS Release Manager I could understand the reason behind the pre-release approval. This step is your quality gate and adds some control to your process. When creating your release pipeline you will set up a number of environments (e.g. Test, UAT, Pre-production, Production) and at each stage you can make a different set of people responsible for allowing the deployment onto that environment.

When the developers have finished their new piece of code and it has been tested in their own environments, they will want to get it onto the servers so that the testers can test it in a more formal way. Currently this may involve the developer talking to the testers and hanging around whilst the testers finish what they are doing so that they can free up the servers ready for the deploy. The developers will then deploy the software to the test environment, hopefully using some method of automation. With release manager, the developers can kick off an automated build, and when it has completed it can automatically create a release ready for deployment. This release can be configured to automatically deploy to the target environment.

The developer can force the software onto the test environment without the testers being ready. The testers, for example, may be finishing off testing a previous release and need some time before they are ready to accept the software. They may also have a set of criteria that need to be met before they will accept the software onto their environment.

Adding a pre-release approval step that allows the test team to “accept” the release gives control to the test team and allows them to accept, or indeed reject, a release. This “pause” in the process allows the testers to check that all the developer quality gates have been met, and lets them push back to the developers if they are not happy. As the deployments can be automated, the testers can also use the approval process to control when the new software is deployed into their environment, allowing them to complete their current set of tests first. It also frees up the developers, so that they are not hanging around waiting to deploy. Similarly, moving on to UAT, Pre-Prod or Production, a pre-release approval step can be configured with different approvers who then become the gatekeepers for each environment.

A pre-release approval step makes a lot of sense; it brings order and control to the process and removes a lot of user error from it.

So what about a post-release approval step, why would you need one? It wasn’t until I started to use TFS Release Manager to automatically deploy my applications to Azure Websites that the need for a post-release approval process became clear. Once I had released my software onto the test environment, I needed a mechanism to allow the testers to reject a release if it failed testing for whatever reason. The post-release approval step gives them this power. Adding both a pre and post release approval step for each environment allows the environment owner to accept the release into the environment when they are ready for it and when they are satisfied that the developers have done their jobs correctly. They can also control when it is ready to move to the next stage in the process. If, after completing testing, the software is ready to release to UAT then the tester can approve the release, which pushes it to the next environment. If the tester is not happy with the release then they can reject it and the release does not move forwards. The tester can comment on the reason for rejection and the release will show red for failure on the dashboard. Adding pre and post approval steps to each environment moves the control of software releases for each environment to the group of people who are responsible for what happens on it.

Using these approval steps also acts as a sanity check to ensure that software releases do not accidentally get pushed onto an environment if someone kicks off the wrong build, for example.

I’ve created a release pipeline for my applications which uses pre and post approval steps for releases to Test and UAT. I don’t have a pre-production environment, but production utilises the staging slots feature of Azure Websites to allow me to deploy the release to staging prior to actually going live. The production environment only has a pre-release approval step, but as it only deploys to staging, there is an additional safeguard allowing a coordinated live release when the business is ready.

Both pre and post release approval steps provide a useful way to put control of the release with the teams that are responsible for each environment. The outcome of each approval process is visible, which also highlights if and when there are issues with the quality of the software being released.