Steve Spencer's Blog

Blogging on Azure Stuff

Processing a flat file with Azure Logic Apps

[Update: 5th Aug 2018 – This post is still relevant, especially the integration account, schemas and maps, and I have written a new blog that builds on this one and integrates with SQL – Using Azure Logic Apps to Import CSV to SQL Server]

A lot of companies require the transfer of files in order to transact business and there is always a need to translate these files from one format to another. Logic Apps provides a straightforward way to build serverless components that provide the integration points into your systems. This post is going to look at Logic Apps enterprise integration to convert a multi-record CSV file into an XML format. Most of the understanding for this came from the following post:

https://seroter.wordpress.com/2016/09/09/trying-out-standard-and-enterprise-templates-in-azure-logic-apps/

Logic Apps can be created in Visual Studio or directly in the Azure Portal using the browser. Navigate to the Azure portal at https://portal.azure.com, click the plus button at the top of the right hand column, then Web + Mobile, then Logic App.

Complete the form and click Create

This will take a short while to complete. Once complete you can select the logic app from your resource list to start to use it.

If you look at my recent list

You can see the logic app I’ve just created, but you will also see my previous logic app, an integration account and an Azure function. The integration account and the function are both required in order to translate the CSV file to XML.

The integration account stores the schemas and maps and the Azure function provides some code that is used to translate the CSV to XML.

An integration account is created in the same way as a logic app. The easiest way is to click on the plus symbol and then search for “integration”.

Click on Integration Account then Create

Complete the form

Then click Create. Once created you can start to add your schemas and maps.

You will now need to jump into Visual Studio to create your maps and schemas. This requires the Logic Apps Integration Tools for Visual Studio to be installed.

You will need to create a schema for the CSV file and a schema for the XML file. These two blog posts walk you through creating a flat file schema for a CSV file and also a positional file.

I created two schemas, one for the CSV file and one for the XML output.

Once you have created the two schemas you will need to create a map, which allows you to map the fields from one schema to the fields in the other.

In order to upload the map you will need to build the project in Visual Studio to generate the XSLT file.

The schemas and map file project can be found in my repository on GitHub

To upload the files to the integration account, go back to the Azure portal where you previously selected the integration account, click Schemas then Add

Complete the form, select the schema file from your Visual Studio project and click OK. Repeat this for both schema files. You do the same thing for the map file, although you will need to navigate to your bin/Debug (or bin/Release) folder to find the XSLT file that was built. Your integration account should now show your schemas and maps as uploaded.

There’s one more thing to do before you can create your logic app. In order to process the transformation some code is required in an Azure Function. This is standard code and can be created by clicking the highlighted link on this page. Note: If you haven’t used Azure Functions before then you will also need to click the other link first.

This creates a function with the necessary code to perform the transformation.

You are now ready to start your logic app. Click on the Logic App you created earlier. This will display a page where you can select a template with which to create your app.

Close this down as you need to link your integration account to your logic app.

Click on Settings, then Integration Account, and pick the integration account where you previously uploaded the schemas and map files. Save this and return to the logic app template screen.

Select VETER Pipeline

Then click “Use This Template”. This is the basis for your transformation logic. All you need to do now is to complete each box.

In Flat File Decoding & XML Validation, pick the CSV schema

In the Transform XML action, select the function container, the function and the map file.

All we need to do now is to return the transformed XML in the response message. Click “Add an Action” on Transform XML and search for Response.

Pick the transformed XML content as the body of the response. Click Save and the URL for the logic app will be populated in the Request trigger.

We now have a request that takes the CSV in the body and returns the transformed XML in the body of the response. You can test this using a tool like Postman or Fiddler to send a request to the URL above.
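
If you prefer to script the test, a minimal sketch using HttpClient is below. The request URL and CSV file name are placeholders for your own values, and the content type may need adjusting to match what your Request trigger expects (text/plain is a reasonable starting point for a flat file):

// Sketch: POST the CSV content to the Logic App request URL and print the XML response
using System;
using System.IO;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

class LogicAppTest
{
    static async Task Main()
    {
        string logicAppUrl = "<request URL copied from the Logic App designer>"; // placeholder
        string csv = File.ReadAllText("test.csv"); // your test CSV file

        using (var client = new HttpClient())
        {
            var content = new StringContent(csv, Encoding.UTF8, "text/plain");
            HttpResponseMessage response = await client.PostAsync(logicAppUrl, content);
            string xml = await response.Content.ReadAsStringAsync();
            Console.WriteLine(xml); // the transformed XML returned by the Response action
        }
    }
}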

There is also a test CSV file in my repository on GitHub which can be used to test this.

My next post covers how I diagnosed a fault with this Logic App

Service Fabric: Resolving External Service Address

I am using Azure Service Fabric to host my application but I’ve deployed it on premises using a 3 machine cluster (running version Microsoft.Azure.ServiceFabric.WindowsServer.5.1.156.9590). It was easy to deploy and I only needed to run PowerShell on one of the nodes to configure all 3. I followed the instructions here.

From Visual Studio I deployed my application, which consisted of a number of stateless services and a WCF service. When everything is running in the cluster it all works fine, but I wanted to access the WCF service from outside of the cluster. The first issue was that the actual address of the service is not known, but you can see the address if you look at the Service Fabric Explorer for the cluster. Navigating through to the application on one of the nodes returns the URL of the service e.g.

net.tcp://192.168.56.122:8081/4f341989-ec72-4cd5-8778-6e11e01dc727/968d5932-935a-4773-b83b-fa99f59d9073-131148669247596041

You don’t want to use this URL directly as it could change depending upon the configuration of your cluster and the health of each of the nodes. Service Fabric provides a mechanism for discovering the address of the endpoint using the service partition resolver. If you are running in the cluster then you can use the default resolver and this will return the URL of the endpoint which you can connect to. However, when you are outside of the cluster you need to tell the resolver where to look for the cluster.

Again if you look at the Service Fabric Explorer you can find out the ports used in the cluster e.g.

<ClientConnectionEndpoint Port="19000" />
<LeaseDriverEndpoint Port="9026" />
<ClusterConnectionEndpoint Port="19001" />
<HttpGatewayEndpoint Port="19080" Protocol="http" />
<ServiceConnectionEndpoint Port="9027" />
<ApplicationEndpoints StartPort="20001" EndPort="20031" />
<EphemeralEndpoints StartPort="20032" EndPort="20062" />

The example here shows how to connect to the resolver in an Azure hosted environment.

ServicePartitionResolver resolver = new ServicePartitionResolver("mycluster.cloudapp.azure.com:19000", "mycluster.cloudapp.azure.com:19001");

This example provides a list of endpoints to try on both ports 19000 and 19001. Mapping this to my environment, I used the IP address of the node on which I ran the PowerShell, which is also the node that serves the Service Fabric Explorer. I also needed to know the application name in order for the resolver to find the endpoint I was after. The code below is part of a console application that attempts to call a WCF service from outside of the cluster, using the service name and addresses from my cluster.

string uri = "fabric:/ServiceFabricApp/FileStoreServiceStateless";
Binding binding = WcfUtility.CreateTcpClientBinding();

// Create a partition resolver pointing at the cluster's client connection endpoints
var serviceResolver = new ServicePartitionResolver(new string[] { "192.168.56.122:19000", "192.168.56.122:19001" });

// Create a WcfCommunicationClientFactory object
// (IFileStoreService is assumed to be the WCF service contract exposed by FileStoreServiceStateless)
var clientFactory = new WcfCommunicationClientFactory<IFileStoreService>(
                    clientBinding: binding, servicePartitionResolver: serviceResolver);

var client = new ServicePartitionClient<WcfCommunicationClient<IFileStoreService>>(
                    clientFactory,
                    new Uri(uri),
                    partitionKey: Microsoft.ServiceFabric.Services.Client.ServicePartitionKey.Singleton);

var result = client.InvokeWithRetry(svc => svc.Channel.GetDocuments("Document", "1000848776", null));

However, when the code ran it always locked up on the call to InvokeWithRetry. On further investigation, by calling ResolveAsync first, I determined that my application was locking up when trying to resolve the address of the service. It took me a long time to figure out what was wrong and I tried a number of different addresses and ports with no luck. It was only after running the code here, which lists all the services in a cluster, in the Visual Studio debugger that things started to work. This was confusing because I’d already tried loads of different things. The only difference was that the local development Service Fabric cluster was running. I then ran my console app and no lock up occurred. Turning the development service fabric off again made the console app lock up again. I moved the console app onto another computer that didn’t have the development service fabric installed and everything worked fine.
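
For reference, the diagnostic step of calling ResolveAsync directly looked roughly like the sketch below (it reuses the serviceResolver and uri variables from the console app above; the timeout is only there so a hung resolution fails fast instead of blocking forever):

// Sketch: resolve the service address explicitly to see whether it is the resolution,
// rather than the WCF call itself, that hangs
using (var cts = new CancellationTokenSource(TimeSpan.FromSeconds(30)))
{
    ResolvedServicePartition partition = await serviceResolver.ResolveAsync(
        new Uri(uri),
        Microsoft.ServiceFabric.Services.Client.ServicePartitionKey.Singleton,
        cts.Token);

    foreach (ResolvedServiceEndpoint endpoint in partition.Endpoints)
    {
        // prints a net.tcp address similar to the one shown in Service Fabric Explorer
        Console.WriteLine(endpoint.Address);
    }
}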

The good thing about this is that everything seems to be working and I’ve learnt more about Service Fabric along the way.

How to emulate Azure Service Bus Topic Subscription Filtering in RabbitMQ

When creating a subscription to an Azure Service Bus Topic you can add a filter which will determine which messages to send to the subscription based upon the properties of the message.

This is done by passing a SqlFilter to the CreateSubscription method, e.g.

if (!_NamespaceManager.SubscriptionExists(topic, subscription))
{
    if (!String.IsNullOrEmpty(filter))
    {
        SqlFilter strFilter = new SqlFilter(filter);
        await _NamespaceManager.CreateSubscriptionAsync(topic, subscription, strFilter);
        bSuccess = true;
    }
    else
    {
        await _NamespaceManager.CreateSubscriptionAsync(topic, subscription);
        bSuccess = true;
    }
}

Where filter is a string representing the properties that you want to filter on, e.g.

// Create a "LowMessages" filtered subscription.

SqlFilter lowMessagesFilter = new SqlFilter("MessageNumber <= 3");

namespaceManager.CreateSubscription("TestTopic","LowMessages",lowMessagesFilter);

Applying properties to messages makes it easier to configure multiple subscribers to sets of messages rather than having multiple subscribers that receive all the messages, providing you with a flexible approach to building your messaging applications.
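
For completeness, this is roughly what the sending side looks like; the properties set on the BrokeredMessage are what the SqlFilter above evaluates (the topic name and connection string are placeholders):

// Sketch: send a message whose MessageNumber property will be evaluated by the
// "MessageNumber <= 3" filter on the LowMessages subscription
TopicClient topicClient = TopicClient.CreateFromConnectionString(_ConnectionString, "TestTopic");

BrokeredMessage message = new BrokeredMessage("message body");
message.Properties["MessageNumber"] = 2; // matches the LowMessages filter
await topicClient.SendAsync(message);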

Subscriptions are effectively individual queues that each subscriber uses to hold the messages that are relevant to that subscription.

When a message is pushed onto a Topic, the Service Bus will look at all the subscriptions for the Topic and determine which of them the message is relevant to. If it is relevant then the subscription will receive the message into its queue. If there are no subscriptions capable of receiving the message then the message will be lost, unless the topic is configured to throw an exception when there are no subscriptions to receive the message.
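
If I remember the SDK correctly, the setting that turns “silently dropped” into an exception is a flag on the topic itself; a hedged sketch of creating a topic that way:

// Sketch: with EnableFilteringMessagesBeforePublishing set, sending a message that no
// subscription would receive throws a NoMatchingSubscriptionException instead of dropping it
if (!_NamespaceManager.TopicExists("TestTopic"))
{
    TopicDescription topicDescription = new TopicDescription("TestTopic")
    {
        EnableFilteringMessagesBeforePublishing = true
    };
    await _NamespaceManager.CreateTopicAsync(topicDescription);
}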

This approach is useful if most of the message data is stored in the properties (which are subject to a combined size limit of 64 KB) and the body content is serialised to the same object type (or the body object types are known).

Receiving messages on a Service Bus Subscription is as follows:

MessagingFactory messageFactory = MessagingFactory.CreateFromConnectionString(_ConnectionString);
SubscriptionClient client = messageFactory.CreateSubscriptionClient(topic, subscription);
BrokeredMessage message = await client.ReceiveAsync(new TimeSpan(0, 5, 0));
if (message != null)
{
    var properties = message.Properties;
    var body = message.GetBody<MyCustomBodyData>();
    if (processMessage != null)
    {
        // do some work with the properties and body
    }
    message.Complete();
}

Over the past few months I have been looking at RabbitMQ and trying to apply my Service Bus knowledge, as well as looking at the differences. Routing messages based upon the message properties rather than a routing key defined in the message is still applicable in the RabbitMQ world, and RabbitMQ is configurable enough to work in this way. RabbitMQ requires more configuration than Service Bus, but there is an exchange type called a headers exchange which can be used to route messages based upon the properties (headers) of the message.

The first thing to do is to create the exchange, then bind a queue to it based upon a set of filter criteria. I’ve been creating my exchanges with an alternate exchange so that messages that are not routed to any queue end up in a default queue rather than being lost. The code below creates the exchange and a queue that subscribes to messages where the ClientId header is “Client1” and the FileType header is “transaction”.

// Create a headers exchange with an alternate exchange for unrouted messages
IDictionary<String, Object> args4 = new Dictionary<String, Object>();
args4.Add("alternate-exchange", alternateExchangeNameForHeaderExchange);

channel.ExchangeDeclare(HeaderExchangeName, "headers", true, false, args4);
channel.ExchangeDeclare(alternateExchangeNameForHeaderExchange, "fanout");

// Queue bound to the headers exchange for ClientId = Client1 and FileType = transaction
Dictionary<string, object> bindingArgs = new Dictionary<string, object>();
bindingArgs.Add("x-match", "all"); // "all" = every header must match, "any" = at least one
bindingArgs.Add("ClientId", "Client1");
bindingArgs.Add("FileType", "transaction");

channel.QueueDeclare(HeaderQueueName, true, false, false, null);
channel.QueueBind(HeaderQueueName, HeaderExchangeName, "", bindingArgs);

// Queue for the alternate exchange (all messages that don't match any binding)
channel.QueueDeclare(unroutedMessagesQueueNameForHeaderExchange, true, false, false, null);
channel.QueueBind(unroutedMessagesQueueNameForHeaderExchange, alternateExchangeNameForHeaderExchange, "");

This will set up the exchange and queues in RabbitMQ, and now you can send a message to the headers exchange with the correct headers as follows:

IBasicProperties properties = channel.CreateBasicProperties();
properties.Headers = new Dictionary<string, object>();
properties.Headers.Add("ClientId", "Client1");
properties.Headers.Add("FileType", "transaction");

// The routing key is ignored by a headers exchange; routing is based on the headers above
string routingkey = "header.key";
var message = "Hello World";
var body = Encoding.UTF8.GetBytes(message);

channel.BasicPublish(exchange: HeaderExchangeName,
                     routingKey: routingkey,
                     basicProperties: properties,
                     body: body);

Receiving messages from the queue is as follows:

var consumer = new EventingBasicConsumer(channel);
consumer.Received += (model, ea) =>
{
    var body = ea.Body;
    var message = Encoding.UTF8.GetString(body);
    var routingKey = ea.RoutingKey;
    Byte[] FileTypeBytes = (Byte[])ea.BasicProperties.Headers["FileType"];
    Byte[] ClientIDBytes = (Byte[])ea.BasicProperties.Headers["ClientId"];
    string FileType = System.Text.Encoding.ASCII.GetString(FileTypeBytes);
    string ClientID = System.Text.Encoding.ASCII.GetString(ClientIDBytes);
    Console.WriteLine(" [x] Received '{0}':'{1}' [{2}] [{3}]",
                        routingKey,
                        message,
                        ClientID,
                        FileType);
    EventingBasicConsumer c = model as EventingBasicConsumer;
    if (c != null)
    {
        c.Model.BasicAck(ea.DeliveryTag, false);
        Console.WriteLine(" [x] Received {0} rk {1} ex {2} ct {3}", message, ea.RoutingKey, ea.Exchange, ea.ConsumerTag);
    }
};
channel.BasicConsume(queue: queueProcessorBaseName + textBox1.Text,
                        noAck: false,
                        consumer: consumer);

Again, an out of the box feature of Service Bus can also be implemented in RabbitMQ, but it is much simpler to use in Service Bus. The use of properties to route messages offers a flexible approach, but it does require that the body of the messages is either not used or is understood by each consumer. Service Bus offers more flexibility in the filters themselves, as the filter expression can contain a variety of operators, whereas RabbitMQ matches on all or any of the header values and cannot express ranges.

Why do I need Pre and Post Approval Steps in my Release Pipeline?

TFS Release Manager and Octopus Deploy both support the concept of approval steps, but why do you need both pre-release and post-release approval steps? When I first started to look at automated release tools such as TFS Release Manager I could understand the reason behind the pre-release approval. This step is your quality gate and adds some control into your process. When creating your release pipeline you will set up a number of environments (e.g. Test, UAT, Pre-production, Production) and at each stage you can make a different set of people responsible for allowing the deployment onto each environment.

When the developers have finished their new piece of code and it has been tested in their own environments, they will want to get it onto the servers so that the testers can test it in a more formal way. Currently this may involve the developer talking to the testers and hanging around whilst the testers finish what they are doing, so that they can free up the servers ready for the deployment. The developers will then deploy the software to the test environment, hopefully using some method of automation. With release manager, the developers can kick off an automated build and, when it has completed, it can automatically create a release ready for deployment. This release can be configured to automatically deploy to the target environment.

The developer can force the software onto the test environment without the tester being ready. The testers, for example, may be finishing off testing a previous release and need some time before they are ready to accept the software. They may also have a set of criteria that need to be met before they will accept the software onto their environment.

Adding a pre-release approval step that allows the test team to “accept” the release gives control to the test team and allows them to accept, or indeed reject, a release. This pause in the process allows the testers to check that all the developer quality gates have been met and to push back to the developers if they are not happy. As the deployments can be automated, the testers can also use the approval process to control when the new software is deployed into their environment, allowing them to complete their current set of tests first. It also frees up the developers, so that they are not hanging around waiting to deploy. Similarly, moving on to UAT, Pre-production or Production, a pre-release approval step can be configured with different approvers who then become the gatekeepers for each environment.

A pre-release approval step makes a lot of sense; it provides order and control to the process and removes a lot of user error from it.

So what about a post-release approval step, why would you need one? It wasn’t until I started to use TFS Release Manager to automatically deploy my applications to Azure Websites that the need for a post-release approval step became clear. Once I had released my software onto the test environment, I needed a mechanism to allow the testers to reject a release if it failed testing for whatever reason. The post-release approval step allows them to have this power. Adding both a pre and a post release approval step for each environment allows the environment owner to accept the release into the environment when they are ready for it and when they are satisfied that the developers have done their jobs correctly. They can also control when it is ready to move to the next stage in the process. If, after completing testing, the software is ready to release to UAT, then the tester can approve the release, which pushes it to the next environment. If the tester is not happy with the release then they can reject it and the release does not move forwards. The tester can comment on the reason for rejection and the release will show red for failure on the dashboard. Adding pre and post approval steps to each environment moves the control of software releases onto the group of people who are responsible for each environment.

Using these approval steps can also act as a sanity check to ensure that software releases do not accidentally get pushed onto an environment if someone kicks off the wrong build, for example.

I’ve created a release pipeline for my applications which uses pre and post approval steps for releases to Test and UAT. I don’t have a pre-production environment, but production utilises the staging slots feature of Azure Websites to allow me to deploy the release to staging prior to actually going live. The production environment only has a pre-release approval step, but as it is only going to staging, there is an additional safeguard that allows a coordinated live release when the business is ready.

Both pre and post release approval steps provide a useful way to put the control of a release with the teams that are responsible for each environment. The outcome of each approval process is visible, which also highlights if and when there are issues with the quality of the software being released.

Dead Letters with Azure Service Bus and RabbitMQ

Firstly, what are dead letters?

When a message is received in a messaging system, something tries to process it. The message is normally understood by the system and can be processed; sometimes, however, messages are not understood and can cause the receiving process to fail. The failure could be caught by the system and dealt with, but in extreme situations the message could cause the receiving process to crash. Messages that cannot be delivered, or that fail when processed, need to be removed from the queue and stored somewhere for later analysis. A message that fails in this way is called a dead letter, and the location where these dead letters reside is called a dead letter queue. Queuing systems such as Azure Service Bus, RabbitMQ and others have mechanisms to handle this type of failure. Some handle them automatically and others require configuration.

Dead letter queues are the same as any other queue except that they contain dead letters. As they are queues they can be processed in the same way as the normal queues except that they have a different address to the normal queue. I’ve already discussed Service Bus Dead Letter Queue addressing in a previous post and this is still relevant today.
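
As a reminder of how that addressing works, the SDK will build the dead letter path for you; a minimal sketch for a subscription (the names and connection string are illustrative):

// Sketch: the dead letter queue of a subscription is just another entity path that a
// normal MessageReceiver can read from
string connectionString = "<Service Bus connection string>";
string deadLetterPath = SubscriptionClient.FormatDeadLetterPath("TestTopic", "LowMessages");

MessagingFactory factory = MessagingFactory.CreateFromConnectionString(connectionString);
MessageReceiver deadLetterReceiver = factory.CreateMessageReceiver(deadLetterPath);

BrokeredMessage deadLetter = deadLetterReceiver.Receive(TimeSpan.FromSeconds(30));
if (deadLetter != null)
{
    // dead lettered messages carry the reason and description as properties
    Console.WriteLine(deadLetter.Properties["DeadLetterReason"]);
    deadLetter.Complete();
}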

On RabbitMQ a Dead Letter queue is just another queue and is addressed in the same way as any other queue. The difference is in the way the Dead Letter queue is setup. Firstly you create a dead letter queue and then you add it to the queue you want to use it with.

To set up the dead letter queue, declare a “direct” exchange and bind a queue to it:

channel.ExchangeDeclare(DeadLetterExchangeName, "direct");
channel.QueueDeclare(DeadLetterQueueName, true, false, false, null);
channel.QueueBind(DeadLetterQueueName, DeadLetterExchangeName, DeadLetterRoutingKey, null);

I’ve used a dead letter routing key that is related to the queue it serves, with an additional “DL” suffix. The routing key needs to be unique so that only the messages you want to go to this specific dead letter queue will be delivered to it, e.g. Payments.Received.DL

Now we need to attach the dead letter exchange and routing key to the correct queue, so when I created my new queue I added them as arguments:

IDictionary<String, Object> args3 = new Dictionary<String, Object>();
args3.Add("x-dead-letter-exchange", DeadLetterExchangeName);
args3.Add("x-dead-letter-routing-key", DeadLetterRoutingKey);
channel.QueueDeclare(queueName, true, false, false, args3);
channel.QueueBind(queueName, TopicName, paymentsReceivedRoutingKey);

Whilst there is a lot of flexibility with RabbitMQ, dead letter queues come out of the box with Azure Service Bus. Each queue and subscription has one and it is enabled by default. RabbitMQ, however, allows each queue (and therefore each topic subscription) to have its own dead letter queue, which gives you finer grained control over what to do with each type of failed message.

Now that we have these dead letter queues and we know how to access them, how do we get messages into them?

In Azure Service Bus there is a mechanism that will automatically put the message in the dead letter queue if the message fails to be delivered 10 times (the default). However, you may wish to handle bad messages yourself in code without relying upon the system to do this for you. If a message is delivered 10 times before failing, you are using system resources each time it is processed and these resources could be used to process valid messages. When the message is received and validation of the message has failed, or there is an error whilst processing that you have caught, then you can explicitly send the message to the dead letter queue by calling the DeadLetter method on the message object.

BrokeredMessage receivedMessage = subscriptionClient.EndReceive(result);

if (receivedMessage != null)
{
    Random rdm = new Random();
    int num = rdm.Next(100);
    Console.WriteLine("Random={0}", num);
    if (num < 10)
    {
        receivedMessage.DeadLetter("Randomly picked for deadletter", "error 123");
        Console.WriteLine("Deadlettered");
    }
    else
    {
        receivedMessage.Complete();
    }
}

My test code, above, randomly sends 10% of my messages to the dead letter queue.
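
The automatic threshold mentioned above is also configurable per queue or subscription. A sketch, assuming the NamespaceManager approach used when creating subscriptions earlier (the names are illustrative):

// Sketch: lower the delivery count so poison messages are dead lettered after 5 attempts
// instead of the default 10
var subscriptionDescription = new SubscriptionDescription("TestTopic", "LowMessages")
{
    MaxDeliveryCount = 5
};

if (!namespaceManager.SubscriptionExists("TestTopic", "LowMessages"))
{
    await namespaceManager.CreateSubscriptionAsync(subscriptionDescription);
}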

In RabbitMQ a message will be published to the dead letter queue if one of the following occurs:

  1. The message is rejected by calling BasicNack or BasicReject with requeue set to false
  2. The TTL (Time to Live) expires
  3. The queue length limit is exceeded (2 and 3 are configured on the queue – see the sketch after this list)
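
A sketch of a queue declaration that enables 2 and 3, building on the dead letter arguments shown earlier (the TTL and length values are illustrative):

// Sketch: expired or overflowing messages on this queue are routed to the dead letter exchange
IDictionary<string, object> queueArgs = new Dictionary<string, object>();
queueArgs.Add("x-dead-letter-exchange", DeadLetterExchangeName);
queueArgs.Add("x-dead-letter-routing-key", DeadLetterRoutingKey);
queueArgs.Add("x-message-ttl", 30000); // milliseconds a message can sit unconsumed (2)
queueArgs.Add("x-max-length", 1000);   // oldest messages beyond this count are dead lettered (3)

channel.QueueDeclare(queueName, true, false, false, queueArgs);
channel.QueueBind(queueName, TopicName, paymentsReceivedRoutingKey);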

I’ve written a similar piece of test code for RabbitMQ

var consumer = new EventingBasicConsumer(channel);
consumer.Received += (model, ea) =>
{
    var body = ea.Body;
                       
    var message = Encoding.UTF8.GetString(body);
    Random random = new Random((int)DateTime.Now.Ticks);
    int randomNumber = random.Next(0, 100);
    if (randomNumber > 30)
    {
        channel.BasicAck(ea.DeliveryTag, false);
        Console.WriteLine(" [x] Received {0} rk {1} ex {2} ct {3}", message, ea.RoutingKey, ea.Exchange, ea.ConsumerTag);
    }
    else
    {
        if (randomNumber > 10)
        {
            channel.BasicNack(ea.DeliveryTag,false, true);
            Console.WriteLine(" [xxxxx] NAK {0} rk {1} ex {2} ct {3}", message, ea.RoutingKey, ea.Exchange, ea.ConsumerTag);
        }
        else
        {
            Console.WriteLine(" [xxxxx] DeadLetter {0} rk {1} ex {2} ct {3}", message, ea.RoutingKey, ea.Exchange, ea.ConsumerTag);
            channel.BasicNack(ea.DeliveryTag, false, false);
        }
    }
    Thread.Sleep(200);
};
channel.BasicConsume(queue: "hello",
                        noAck: false,
                        consumer: consumer);

If you look at the code you will see that there are two places where BasicNack is called and only one of them sends the message to the dead letter queue. BasicNack takes three parameters and the last one is “requeue”. Setting requeue to true will put the message back on the originating queue, whereas setting requeue to false will publish the message to the dead letter queue.

Both RabbitMQ and Service Bus have the dead letter queue concept and they can be used in a similar way. Service Bus has one configured by default and has both an automatic and a manual mechanism for publishing messages to the dead letter queue. RabbitMQ requires more configuration and does not have the same delivery-count based automation for dead lettering, but it can be configured with more flexibility.

System.Web.Mvc not found after deploying to Azure Web Apps using Release Manager

I’m currently evaluating Release Manager in Visual Studio Team Services and I am using it to deploy websites to Azure Web Apps. I recently tried to deploy an ASP.NET MVC 4 application and ran into some issues.

I’ve created a build that packages and zips up my web application, which runs successfully. I’ve linked a release pipeline to this build and I can deploy to my test Azure site without any errors, but when I try to run the web application I get the following error:

Could not load file or assembly 'System.Web.Mvc, Version=4.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35' or one of its dependencies. The system cannot find the file specified.

I’m using Visual Studio 2013 with MVC as a NuGet package. Looking at the properties of System.Web.Mvc I can see that it is set to Copy Local = True.

I tried a few different things to get the assembly to be copied, like redoing the NuGet install, and eventually I toggled Copy Local to False, saved the project file and then set it back to True. When I looked at the diff of the project file I found an additional property.
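
Toggling Copy Local writes the setting into the project file as a <Private> element on the reference, so the additional property was most likely something like this (a sketch; the exact reference line will match your own project):

<Reference Include="System.Web.Mvc, Version=4.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35">
  <!-- Copy Local = True; without this element the assembly may not be copied to the output -->
  <Private>True</Private>
</Reference>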

This seems to fix the build. When I checked this in and rebuilt, System.Web.Mvc appeared in the zip file. The build was then released to Azure and the web app worked correctly.

Making My Azure ML Project Oxford Sample Application More Visual

Following on from my last post where I introduced Project Oxford, I’ve done a bit more work to take the project that was built and make it more visual. To summarise, Project Oxford is a set of APIs that build on top of Azure ML to provide Face, Speech, Computer Vision and Language Understanding Intelligent Service (LUIS) capabilities. There was a good video from Build 2015 that I watched to get an overview of each of the APIs.

I used the tutorials to build an application that would identify a number of people from a known list in a photograph and highlight the ones that were unknown. The Face API requires people to be trained with a set of photos first, before identification can be made. This was done using the code in the samples. I created a folder for each person that I wanted to be trained and added different photos of each person, with and without hats and sunglasses and also with different expressions. Then each set of folders was passed to the training API. Once trained, you can use the rest of the Face API to identify faces in a picture and then take each face that is found and see if it is known.

One useful tip I’ve found is to have Fiddler running whilst you are debugging as it is far easier to see any errors in the body of the response message than in the exceptions that are thrown. Details of the errors can be seen in the Face API documentation.

The process for training is as follows (Note the terminology is based around the SDK methods, but I’ve linked to the API page as this gives details about the errors etc):

  1. Create a Person Group
  2. Create a Face list for each person using Face Detect
  3. Create a Person, one for each person you want to identify, using the person group id and face list
  4. Train the Person Group

Note: The training does not last forever and you will need to redo it periodically. If you try and detect a person when training has expired then you will get an error response saying that the person group is unknown.

To identify each individual in a photograph:

  1. Stream the photograph into Detect. This will return a list of faces with face ids
  2. Iterate over each face and call Identify
  3. Use the Identify results to extract the names by calling Get Person (a sketch of this identification loop follows the list)
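
A hedged sketch of that identification loop, using the Microsoft.ProjectOxford.Face client SDK of the time (method names and signatures varied a little between SDK versions, so treat this as illustrative; personGroupId is the group trained earlier and the subscription key and file name are placeholders):

// Sketch: detect faces, identify them against the trained person group and build a
// face id -> name lookup for use later
var faceClient = new FaceServiceClient("<subscription key>");
var nameLookup = new Dictionary<Guid, string>();
Face[] faces;

using (Stream photo = File.OpenRead("group-photo.jpg"))
{
    faces = await faceClient.DetectAsync(photo);                                    // step 1
}

Guid[] faceIds = faces.Select(f => f.FaceId).ToArray();
IdentifyResult[] results = await faceClient.IdentifyAsync(personGroupId, faceIds);  // step 2

foreach (IdentifyResult result in results)
{
    if (result.Candidates.Length > 0)
    {
        // step 3: resolve the best candidate to a person and record their name
        var person = await faceClient.GetPersonAsync(personGroupId, result.Candidates[0].PersonId);
        nameLookup[result.FaceId] = person.Name;
    }
    else
    {
        nameLookup[result.FaceId] = "Unknown";
    }
}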

This is where I got to with the previous post, but it wasn’t very visual, and as I was working with photographs I thought it would be useful to use the data returned to draw a box around each face that was identified and add the name of the person underneath. This would also make it easy to see which people were identified incorrectly. The Project Oxford web site shows an example image with the detected faces highlighted in this way.

I wanted to emulate this and also take it one step further. The data returned from the face detection API provides details about gender, age, the area in the picture where the face was found (the face rectangle), face landmarks and head pose. What the detection API does not do is tie the name of the person to the face. We already have this information, as it was returned from the Identify API and Get Person; the attribute that links them is the face id. Using the results of the Identify API, I called Get Person for each face identified to return the person’s name and stored this in a Dictionary keyed by the face id. This then allowed me to load the original photograph into memory, draw the rectangle for each face and add the name below it, using the face id to find the rectangle and look up the name from the Dictionary. The result could then be scaled and shown in the app.
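
A sketch of the drawing step using System.Drawing, assuming the faces array and nameLookup dictionary from the identification sketch above (the file names, colour and font are arbitrary choices):

// Sketch: draw a rectangle around each detected face and write the identified name below it
using (Bitmap original = new Bitmap("group-photo.jpg"))
using (Graphics graphics = Graphics.FromImage(original))
using (Pen pen = new Pen(Color.Yellow, 4))
using (Font font = new Font("Arial", 24))
{
    foreach (Face face in faces)
    {
        FaceRectangle r = face.FaceRectangle;
        graphics.DrawRectangle(pen, r.Left, r.Top, r.Width, r.Height);
        graphics.DrawString(nameLookup[face.FaceId], font, Brushes.Yellow, r.Left, r.Top + r.Height + 5);
    }

    original.Save("group-photo-annotated.jpg");
}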

Setting Custom Domain for Traffic Manager and Azure Websites

Recently I’ve been looking at using Traffic Manager to front up websites hosted in Azure Websites. I needed to set up a custom domain name instead of using mydomain.trafficmanager.net.

In order to use Traffic Manager with an Azure website, the website needs to be set up using a Standard hosting plan.

Each website you want to be included in the traffic manager routing will need to be added as an endpoint in the traffic manager portal.

Once you have this set up you will need to add a DNS CNAME record for your domain. This needs to be configured at your domain provider. You set the CNAME to point to mydomain.trafficmanager.net.

In order for the traffic to be routed to your Azure hosted website(s), each website set up as an endpoint in Traffic Manager will need to have your mapped domain (e.g. www.mydomain.com) configured. This is done under Settings –> Custom Domains and SSL in the new portal, and under the Configure tab –> manage domains (or the Manage Domains button) in the old portal.

If you don’t add this then you will see a 404 error page whenever you try to navigate to the site through the Traffic Manager custom domain name.

Azure Websites: Blocking access to the azurewebsites.net url

I’ve been setting up one of our services as the backend service for Azure API Management. As part of this process we have mapped a DNS name to point to the service. As the service is hosted in Azure Websites, there are now two URLs that can be used to access the service. I wanted to stop a user from accessing the site using the azurewebsites.net URL and only allow access via the mapped domain. This is easy to achieve and can be configured in the web.config file of the service.

In the <system.webServer> section add the following configuration

<rewrite>
    <rules>
        <rule name="Block traffic to the raw azurewebsites url"  patternSyntax="Wildcard" stopProcessing="true">
          <match url="*" />
          <conditions>
            <add input="{HTTP_HOST}" pattern="*azurewebsites.net*" />
          </conditions>
          <action type="CustomResponse" statusCode="403" statusReason="Forbidden"
          statusDescription="Site is not accessible" />
        </rule>
    </rules>
</rewrite>

Now if I try to access my site through the azurewebsites.net URL I get a 403 error, but accessing it through the mapped domain is fine.

Azure Media Services Live Media Streaming General Availability

Yesterday Scott Guthrie announced a number of enhancements to Microsoft Azure. One of the enhancements is the General Availability of Azure Media Services Live Media Streaming. This gives us the ability to stream live events on a service that has already been used to deliver big events such as the 2014 Sochi Winter Olympics and the 2014 FIFA World Cup.

I’ve looked at this for a couple of our projects and found it relatively fast and easy to set up a live media event, even from my laptop using its built in camera. There’s a good blog post that walks you through the process of setting up the Live Streaming service. I used this post and was quickly streaming both audio and video from my laptop.

The main piece of software that you need to install is a video/audio encoder that supports Smooth Streaming or RTMP. I used the Wirecast encoder as specified in the post. You can try out the encoder for 2 weeks as long as you don’t mind seeing the Wirecast logo on your video (it is removed if you buy a license). Media Services pricing can be found here.

The Media Services team have provided a MPEG-DASH player to help you test your live streams.

It appears that once you have created a stream it is still accessible on demand after the event has completed. Also, there is around a 20 second delay when you receive the stream in your player.