Steve Spencer's Blog

Blogging on Azure Stuff

How to emulate Azure Service Bus Topic Subscription Filtering in RabbitMQ

When creating a subscription to an Azure Service Bus Topic you can add a filter which will determine which messages to send to the subscription based upon the properties of the message.


This is done by passing a SqlFilter to the CreateSubscription method, e.g.

if (!_NamespaceManager.SubscriptionExists(topic, subscription))
{
    if (!String.IsNullOrEmpty(filter))
    {
        SqlFilter strFilter = new SqlFilter(filter);
        await _NamespaceManager.CreateSubscriptionAsync(topic, subscription, strFilter);
        bSuccess = true;
    }
    else
    {
        await _NamespaceManager.CreateSubscriptionAsync(topic, subscription);
        bSuccess = true;
    }
}

where filter is a string expressing the property conditions that you want to filter on, e.g.

// Create a "LowMessages" filtered subscription.

SqlFilter lowMessagesFilter = new SqlFilter("MessageNumber <= 3");

namespaceManager.CreateSubscription("TestTopic","LowMessages",lowMessagesFilter);

Applying properties to messages makes it easier to configure multiple subscribers to sets of messages rather than having multiple subscribers that receive all the messages, providing you with a flexible approach to building your messaging applications.

Subscriptions are effectively individual queues that each subscriber uses to hold the messages that are relevant to that subscription.

When a message is pushed onto a Topic, the Service Bus looks at all the subscriptions for the Topic and determines whether the message is relevant to each one. If it is, the subscription receives the message into its queue. If no subscription is capable of receiving the message, the message is lost unless the topic is configured to throw an exception when there are no subscriptions to receive the message.

This approach is useful if most of the message data is stored in the properties (which are subject to a size limit of 64KB) and the body content is serialised to the same object type (or the body object types are known).
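
For completeness, here is a sketch of the sending side that sets the properties those filters act on. It assumes the older Microsoft.ServiceBus.Messaging SDK used throughout this post; the property value is illustrative:

// Send a message whose properties will be evaluated by the subscription filters
TopicClient topicClient = TopicClient.CreateFromConnectionString(_ConnectionString, "TestTopic");
BrokeredMessage message = new BrokeredMessage("message body");
message.Properties["MessageNumber"] = 2; // will match the "MessageNumber <= 3" filter above
await topicClient.SendAsync(message);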

Receiving messages on a Service Bus Subscription is as follows:

MessagingFactory messageFactory = MessagingFactory.CreateFromConnectionString(_ConnectionString);
SubscriptionClient client = messageFactory.CreateSubscriptionClient(topic, subscription);
BrokeredMessage message = await client.ReceiveAsync(new TimeSpan(0, 5, 0));
if (message != null)
{
    var properties = message.Properties;
    var body = message.GetBody<MyCustomBodyData>();
    if (processMessage != null)
    {
        // do some work
    }
    message.Complete();
}

Over the past few months I have been looking at RabbitMQ and trying to apply my Service Bus knowledge, as well as looking at the differences. Routing messages based upon the message properties rather than a routing key defined in the message is still applicable in the RabbitMQ world, and RabbitMQ is configurable enough to work in this way. RabbitMQ requires more configuration than Service Bus, but there is an exchange type called the headers exchange which can be used to route messages based upon the properties (headers) of the message.

The first thing to do is to create the exchange, then assign a queue to it based upon a set of filter criteria. I’ve been creating my exchanges with an alternate exchange so that messages that are not handled end up in a default queue. The following code creates the exchange, plus a queue that subscribes to messages where the ClientId property is “Client1” and the FileType property is “transaction”:

// Create the headers exchange with an alternate exchange
IDictionary<String, Object> args4 = new Dictionary<String, Object>();
args4.Add("alternate-exchange", alternateExchangeNameForHeaderExchange);
channel.ExchangeDeclare(HeaderExchangeName, "headers", true, false, args4);
channel.ExchangeDeclare(alternateExchangeNameForHeaderExchange, "fanout");

// Queue bound to the headers exchange for ClientId = Client1 and FileType = transaction
Dictionary<string, object> bindingArgs = new Dictionary<string, object>();
bindingArgs.Add("x-match", "all"); // "all" = every header must match, "any" = at least one
bindingArgs.Add("ClientId", "Client1");
bindingArgs.Add("FileType", "transaction");
channel.QueueDeclare(HeaderQueueName, true, false, false, null);
channel.QueueBind(HeaderQueueName, HeaderExchangeName, "", bindingArgs);

// Queue for the alternate exchange (all other messages)
channel.QueueDeclare(unroutedMessagesQueueNameForHeaderExchange, true, false, false, null);
channel.QueueBind(unroutedMessagesQueueNameForHeaderExchange, alternateExchangeNameForHeaderExchange, "");

This will set up the exchange and queues in RabbitMQ, and now you can send a message to the exchange with the correct headers as follows:

IBasicProperties properties = channel.CreateBasicProperties();
properties.Headers = new Dictionary<string, object>();
properties.Headers.Add("ClientId", "Client1");
properties.Headers.Add("FileType", "transaction");

// A headers exchange ignores the routing key; routing is on the headers above
string routingkey = "header.key";
var message = "Hello World";
var body = Encoding.UTF8.GetBytes(message);

channel.BasicPublish(exchange: HeaderExchangeName,
                     routingKey: routingkey,
                     basicProperties: properties,
                     body: body);

Receiving messages from the queue is as follows:

var consumer = new EventingBasicConsumer(channel);
consumer.Received += (model, ea) =>
{
    var body = ea.Body;
    var message = Encoding.UTF8.GetString(body);
    var routingKey = ea.RoutingKey;
    // Header values come back as byte arrays and need decoding
    Byte[] FileTypeBytes = (Byte[])ea.BasicProperties.Headers["FileType"];
    Byte[] ClientIDBytes = (Byte[])ea.BasicProperties.Headers["ClientId"];
    string FileType = System.Text.Encoding.ASCII.GetString(FileTypeBytes);
    string ClientID = System.Text.Encoding.ASCII.GetString(ClientIDBytes);
    Console.WriteLine(" [x] Received '{0}':'{1}' [{2}] [{3}]",
                        routingKey,
                        message,
                        ClientID,
                        FileType);
    EventingBasicConsumer c = model as EventingBasicConsumer;
    if (c != null)
    {
        // Acknowledge the message so it is removed from the queue
        c.Model.BasicAck(ea.DeliveryTag, false);
        Console.WriteLine(" [x] Received {0} rk {1} ex {2} ct {3}", message, ea.RoutingKey, ea.Exchange, ea.ConsumerTag);
    }
};
channel.BasicConsume(queue: queueProcessorBaseName + textBox1.Text,
                        noAck: false,
                        consumer: consumer);

Again, a feature that comes out of the box with Service Bus can also be implemented in RabbitMQ, but it is much simpler to use in Service Bus. Routing on properties offers a flexible approach, but it does require that the message bodies are either not used or are understood by each consumer. Service Bus offers more flexibility in the filters themselves, as the SQL filter string can contain a variety of operators, whereas RabbitMQ matches all or any of the header values exactly and cannot match a range.
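
For reference, the closest RabbitMQ gets to a richer filter is the "any" match mode, which ORs the bound headers instead of ANDing them. A sketch, reusing the exchange above (the queue name is illustrative):

Dictionary<string, object> anyMatchArgs = new Dictionary<string, object>();
anyMatchArgs.Add("x-match", "any"); // deliver if ANY of the headers below match
anyMatchArgs.Add("ClientId", "Client1");
anyMatchArgs.Add("FileType", "transaction");
channel.QueueDeclare("client1OrTransactionQueue", true, false, false, null);
channel.QueueBind("client1OrTransactionQueue", HeaderExchangeName, "", anyMatchArgs);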

Dead Letters with Azure Service Bus and RabbitMQ

Firstly, what are dead letters?

When a message is received in a messaging system, something tries to process it. The message is normally understood by the system and can be processed; sometimes, however, a message is not understood and can cause the receiving process to fail. The failure could be caught by the system and dealt with, but in extreme situations the message could cause the receiving process to crash. Messages that cannot be delivered, or that fail when processed, need to be removed from the queue and stored somewhere for later analysis. A message that fails in this way is called a dead letter, and the location where these dead letters reside is called a dead letter queue. Queuing systems such as Azure Service Bus, RabbitMQ and others have mechanisms to handle this type of failure. Some systems handle them automatically and others require configuration.

Dead letter queues are the same as any other queue except that they contain dead letters. As they are queues, they can be processed in the same way as normal queues except that they have a different address. I’ve already discussed Service Bus dead letter queue addressing in a previous post and this is still relevant today.
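
As a quick reminder of that addressing, the dead letter queue of a subscription has its own path, which the older SDK can build for you. A minimal sketch, assuming a MessagingFactory created as in the receive example earlier:

// The dead letter queue is read like any other queue, just at a derived path
string dlqPath = SubscriptionClient.FormatDeadLetterPath(topic, subscription);
MessageReceiver dlqReceiver = messageFactory.CreateMessageReceiver(dlqPath);
BrokeredMessage deadLetter = await dlqReceiver.ReceiveAsync(TimeSpan.FromSeconds(30));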

On RabbitMQ a dead letter queue is just another queue and is addressed in the same way as any other queue. The difference is in the way the dead letter queue is set up. Firstly you create a dead letter queue, and then you attach it to the queue you want to use it with.

To set up the dead letter queue, declare a “direct” exchange and bind a queue to it:

channel.ExchangeDeclare(DeadLetterExchangeName, "direct");
channel.QueueDeclare(DeadLetterQueueName, true, false, false, null);
channel.QueueBind(DeadLetterQueueName, DeadLetterExchangeName, DeadLetterRoutingKey, null);

I’ve used a dead letter routing key that is related to the queue it serves, with an additional “DL” suffix. The routing key needs to be unique so that only messages you want to go to this specific dead letter queue will be delivered to it, e.g. Payments.Received.DL

Now we need to attach the dead letter queue to the correct queue, so when I created my new queue I added the dead letter exchange and routing key to it:

IDictionary<String, Object> args3 = new Dictionary<String, Object>();
args3.Add("x-dead-letter-exchange", DeadLetterExchangeName);
args3.Add("x-dead-letter-routing-key", DeadLetterRoutingKey);
channel.QueueDeclare(queueName, true, false, false, args3);
channel.QueueBind(queueName, TopicName, paymentsReceivedRoutingKey);

Whilst there is a lot of flexibility with RabbitMQ, dead letter queues come out of the box with Azure Service Bus: each topic and queue has one, enabled by default. RabbitMQ, however, allows each topic subscription to have its own dead letter queue, which gives you finer-grained control over what to do with each type of failed message.

Now that we have these dead letter queues and we know how to access them, how do we get messages into them?

In Azure Service Bus, there is a mechanism that will automatically put the message in the dead letter queue if the message fails to be delivered 10 times (the default). However, you may wish to handle bad messages yourself in code without relying upon the system to do this for you. If a message is delivered 10 times before failure, you are using system resources each time the message is processed, and these resources could be used to process valid messages. When the message is received and validation has failed, or there is an error whilst processing that you have caught, you can explicitly send the message to the dead letter queue by calling the DeadLetter method on the message object.

BrokeredMessage receivedMessage = subscriptionClient.EndReceive(result);

if (receivedMessage != null)
{
    Random rdm = new Random();
    int num = rdm.Next(100);
    Console.WriteLine("Random={0}", num);
    if (num < 10)
    {
        receivedMessage.DeadLetter("Randomly picked for deadletter", "error 123");
        Console.WriteLine("Deadlettered");
    }
    else
    {
        receivedMessage.Complete();
    }
}

My test code, above, randomly sends 10% of my messages to the dead letter queue.

In RabbitMQ, a message will be published to the dead letter queue if one of the following occurs (the limits in points 2 and 3 are configured as queue arguments, sketched after the list):

  1. The message is rejected by calling BasicNack or BasicReject
  2. The TTL (Time to Live) expires
  3. The queue length limit is exceeded
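
A sketch of how the limits in points 2 and 3 are declared, building on the dead letter arguments shown earlier (the values are illustrative):

IDictionary<string, object> limitArgs = new Dictionary<string, object>();
limitArgs.Add("x-dead-letter-exchange", DeadLetterExchangeName);
limitArgs.Add("x-dead-letter-routing-key", DeadLetterRoutingKey);
limitArgs.Add("x-message-ttl", 60000); // dead letter messages not consumed within 60 seconds
limitArgs.Add("x-max-length", 1000);   // dead letter the oldest messages once 1000 are queued
channel.QueueDeclare(queueName, true, false, false, limitArgs);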

I’ve written a similar piece of test code for RabbitMQ:

var consumer = new EventingBasicConsumer(channel);
consumer.Received += (model, ea) =>
{
    var body = ea.Body;
    var message = Encoding.UTF8.GetString(body);
    Random random = new Random((int)DateTime.Now.Ticks);
    int randomNumber = random.Next(0, 100);
    if (randomNumber > 30)
    {
        channel.BasicAck(ea.DeliveryTag, false);
        Console.WriteLine(" [x] Received {0} rk {1} ex {2} ct {3}", message, ea.RoutingKey, ea.Exchange, ea.ConsumerTag);
    }
    else
    {
        if (randomNumber > 10)
        {
            // requeue = true: the message goes back onto the originating queue
            channel.BasicNack(ea.DeliveryTag, false, true);
            Console.WriteLine(" [xxxxx] NAK {0} rk {1} ex {2} ct {3}", message, ea.RoutingKey, ea.Exchange, ea.ConsumerTag);
        }
        else
        {
            // requeue = false: the message is routed to the dead letter exchange
            Console.WriteLine(" [xxxxx] DeadLetter {0} rk {1} ex {2} ct {3}", message, ea.RoutingKey, ea.Exchange, ea.ConsumerTag);
            channel.BasicNack(ea.DeliveryTag, false, false);
        }
    }
    Thread.Sleep(200);
};
channel.BasicConsume(queue: "hello",
                        noAck: false,
                        consumer: consumer);

If you look at the code you will see that there are two places where BasicNack is called, and only one of them sends the message to the dead letter queue. BasicNack takes three parameters and the last one is “requeue”. Setting requeue to true will put the message back on the originating queue, whereas setting requeue to false will publish the message to the dead letter exchange and hence the dead letter queue.

Both RabbitMQ and Service Bus have the dead letter queue concept and they can be used in a similar way. Service Bus has one configured by default and has both an automatic and a manual mechanism for publishing messages to the dead letter queue. RabbitMQ requires more configuration and does not have the same automation for dead lettering, but it can be configured with more flexibility.

Unhandled Messages with Azure Service Bus and RabbitMQ

One of the requirements for our messaging system is to be able to build a system to process messages and either

  1. Have a default handler and then add custom handlers as and when they are required without needing to recode the main system.
  2. Be notified if a message is put onto a topic and there isn’t a process to handle the message.

In RabbitMQ this is relatively straightforward and requires creating an alternate exchange, adding it as a property to your main exchange and then creating a queue to service the alternate exchange.

IDictionary<String, Object> args2 = new Dictionary<String, Object>();
args2.Add("alternate-exchange", alternateExchangeName);
channel.ExchangeDeclare(mainExchangeName, "direct", false, false, args2);
channel.ExchangeDeclare(alternateExchangeName, "fanout");

// Add a queue bound to the unhandled messages exchange
channel.QueueDeclare(unroutedMessagesQueueName, true, false, false, null);
channel.QueueBind(unroutedMessagesQueueName, alternateExchangeName, "");

Now when a message is published on the main exchange and there is no queue bound to handle it, the message will automatically appear on the unrouted messages queue. This solves both of the scenarios we were looking at.
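
To see it working, publish a message that nothing is bound to receive; a sketch using the names above:

var unhandledBody = Encoding.UTF8.GetBytes("nobody is listening");
// No queue is bound to mainExchangeName with this routing key, so the message
// flows through the alternate exchange into unroutedMessagesQueueName
channel.BasicPublish(mainExchangeName, "some.unhandled.key", null, unhandledBody);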

I was interested, however, in understanding how to do this with the Azure Service Bus, and whilst it is possible, it isn’t as straightforward and requires some code. Topics can be configured to throw an exception if there is no subscription available to process a message when it is sent, so when the topic is created it needs to be configured to enable this behaviour.

NamespaceManager namespaceManager =
    NamespaceManager.CreateFromConnectionString(_ConnectionString);
TopicDescription td = new TopicDescription(topic)
{
    EnableFilteringMessagesBeforePublishing = true
};
await namespaceManager.CreateTopicAsync(td);

Now when a message is sent we need to handle the exception and do something with the message. This is the difference between RabbitMQ and Service Bus: in RabbitMQ the message will automatically end up in the unhandled message queue; in Service Bus we need to add it to the unhandled message queue ourselves when the send fails. This means that each message producer needs to handle the exception:

try
{
    client.Send(message);
}
catch (NoMatchingSubscriptionException ex)
{
    // Do something here to process the unhandled message,
    // probably put it on an unhandled message queue
}

Note, however, that if you had a subscription that was a catch all (for example logging all the messages) then unhandled messages would not appear as they are already being handled by the catch all subscription.
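
For completeness, such a catch-all subscription is simply one whose filter matches everything; a minimal sketch (the subscription name is illustrative):

// This subscription receives every message published to the topic, so a
// NoMatchingSubscriptionException will never be thrown for it
await namespaceManager.CreateSubscriptionAsync(topic, "auditAll", new TrueFilter());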

Unlock The Door Demo Software on GitHub

If you attended my DDD East Anglia talk “A Raspberry Pi2, Azure ML and Project Oxford to unlock that door!”, where I integrate a Raspberry Pi running Windows 10 IoT Core with the Service Bus, Project Oxford for face recognition, and a Windows Store App to take my picture and hopefully unlock my door (yes, I did bring a door with me), thanks for attending and for your nice comments.

I have started to put my code up on GitHub. The code for the Raspberry Pi is already there - https://github.com/sdspencer-mvp/RaspberryPi2-UnlockTheDoor. More will appear later as I tidy it up and remove all my config secrets ;-)

I will be repeating this talk at Smart Devs in Hereford on 12 October 2015 and again at DDD North in Sunderland on 24 October 2015.

Raspberry Pi2, IoT Core and Azure Service Bus

Using a Raspberry Pi2 on Windows 10 IoT Core has a number of challenges, mainly due to the limitations of the universal app APIs and the lack of other APIs that currently run on the platform. I specifically wanted to use Azure Service Bus Topics to send and receive messages on my Raspberry Pi2. After a bit of searching around I decided that the easiest way to achieve this was to use the Service Bus REST API. There are a number of samples included in the documentation:

Receiving a message: https://msdn.microsoft.com/en-us/library/azure/hh690923.aspx

Sending a message: https://msdn.microsoft.com/en-us/library/azure/hh690922.aspx

The full code for the sample uses WebClient but I needed to use HttpClient so I converted the samples accordingly.

[EDIT] The above links don't work anymore so I've published my code on GitHub https://github.com/sdspencer-mvp/RaspberryPi2-UnlockTheDoor/blob/master/UnlockTheDoor/MainPage.xaml.cs 

Sending a message to the Service Bus requires an HTTP POST, and receive-and-delete requires an HTTP DELETE. The following code shows how this was achieved using HttpClient.

private async void SendMessage(string baseAddress, string queueTopicName, string token, string body, IDictionary<string, string> properties)
{
    string fullAddress = baseAddress + queueTopicName + "/messages" + "?timeout=60&api-version=2013-08";
    await SendViaHttp(token, body, properties, fullAddress, HttpMethod.Post);
}

// Receives and deletes the next message from the given resource (queue, topic, or subscription)
// using the resourceName and an HTTP DELETE request.
private static async System.Threading.Tasks.Task<string> ReceiveAndDeleteMessageFromSubscription(string baseAddress, string topic, string subscription, string token, IDictionary<string, string> properties)
{
    string fullAddress = baseAddress + topic + "/Subscriptions/" + subscription + "/messages/head" + "?timeout=60";
    HttpResponseMessage response = await SendViaHttp(token, "", properties, fullAddress, HttpMethod.Delete);
    string content = "";
    if (response.IsSuccessStatusCode)
    {
        // we should have retrieved a message
        content = await response.Content.ReadAsStringAsync();
    }
    return content;
}

private static async System.Threading.Tasks.Task<HttpResponseMessage> SendViaHttp(string token, string body, IDictionary<string, string> properties, string fullAddress, HttpMethod httpMethod)
{
    HttpClient webClient = new HttpClient();
    HttpRequestMessage request = new HttpRequestMessage()
    {
        RequestUri = new Uri(fullAddress),
        Method = httpMethod,
    };
    webClient.DefaultRequestHeaders.Add("Authorization", token);

    if (properties != null)
    {
        foreach (string property in properties.Keys)
        {
            request.Headers.Add(property, properties[property]);
        }
    }
    request.Content = new FormUrlEncodedContent(new[] { new KeyValuePair<string, string>("", body) });
    HttpResponseMessage response = await webClient.SendAsync(request);
    if (!response.IsSuccessStatusCode)
    {
        string error = string.Format("{0} : {1}", response.StatusCode, response.ReasonPhrase);
        throw new Exception(error);
    }
    return response;
}

There was an issue with the GetSASToken method as some of the encryption classes weren't supported on the Universal App so I converted it to the following:

private string GetSASToken(string baseAddress, string SASKeyName, string SASKeyValue)
{
    TimeSpan fromEpochStart = DateTime.UtcNow - new DateTime(1970, 1, 1);
    string expiry = Convert.ToString((int)fromEpochStart.TotalSeconds + 3600);
    string stringToSign = WebUtility.UrlEncode(baseAddress) + "\n" + expiry;
    string hash = HmacSha256(SASKeyValue, stringToSign);
    string sasToken = String.Format(CultureInfo.InvariantCulture, "SharedAccessSignature sr={0}&sig={1}&se={2}&skn={3}",
        WebUtility.UrlEncode(baseAddress), WebUtility.UrlEncode(hash), expiry, SASKeyName);
    return sasToken;
}

public string HmacSha256(string secretKey, string value)
{
    // Move strings to buffers.
    var key = CryptographicBuffer.ConvertStringToBinary(secretKey, BinaryStringEncoding.Utf8);
    var msg = CryptographicBuffer.ConvertStringToBinary(value, BinaryStringEncoding.Utf8);

    // Create HMAC.
    var objMacProv = MacAlgorithmProvider.OpenAlgorithm(MacAlgorithmNames.HmacSha256);
    var hash = objMacProv.CreateHash(key);
    hash.Append(msg);
    return CryptographicBuffer.EncodeToBase64String(hash.GetValueAndReset());
}

This allowed me to send and receive messages on my Raspberry Pi2 using IoT Core. I created the subscriptions for the topic using a separate app with the .NET SDK, which is cheating I guess, but I’ll get around to converting it at some point.

 

In order to use this, the following parameters are used:

SendMessage(BaseAddress, QueueTopicName, Token, MessageBody, MessageProperties)

BaseAddress is “https://<yournamespace>.servicebus.windows.net/”

Token is the return value from the GetSASToken method, using the same base address as above; the KeyName and Key are obtained from the Azure portal connection string, which is of the format:

Endpoint=sb://<yournamespace>.servicebus.windows.net/;SharedAccessKeyName=<KeyName>;SharedAccessKey=<Key>

MessageBody is the string value of the message body.

MessageProperties is a Dictionary containing name/value pairs that will get added to the request headers. For example, I set the following message properties when I press the door bell button on my Raspberry Pi2:

Dictionary<string, string> properties = new Dictionary<string, string>();
properties.Add("Priority", "High");
properties.Add("MessageType", "Command");
properties.Add("Command", "BingBong");

These are added to the Service Bus message and allow me to have subscriptions that filter on Command message types as well as the specific command of BingBong.
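
Those subscriptions can be created with a SqlFilter over the promoted properties, as shown earlier; a sketch (the subscription name is illustrative):

SqlFilter commandFilter = new SqlFilter("MessageType = 'Command' AND Command = 'BingBong'");
namespaceManager.CreateSubscription(_TopicName, "bingBongHandler", commandFilter);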

 

Receiving messages is a bit trickier as we need to create a separate task that is continually running. Once a message is received we need to get back to the main thread to execute the action for the message:

await Task.Run(async () =>
{
    ...
    string message = await ReceiveAndDeleteMessageFromSubscription(_BaseAddress,
        _TopicName,
        _SubscriptionName,
        token, null);
    if (message.Contains("Unlock"))
    {
        await Windows.ApplicationModel.Core.CoreApplication.MainView.CoreWindow.Dispatcher.RunAsync(
            CoreDispatcherPriority.Normal,
            () =>
            {
                SwitchLED(false);
            });
    }
    ...
});

You may want to put a delay in this loop if receiving the messages causes the app to slow down due to the message loop hogging all the resources. There’s a default timeout in the call to SendAsync which will automatically slow the thread down.
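
If you do add one, a minimal sketch inside the receive loop (the interval is illustrative):

// After processing each message in the Task.Run loop above:
await Task.Delay(TimeSpan.FromSeconds(1)); // simple back-off between polls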

 

I now have a working Raspberry Pi2 that can send and receive messages to and from the Azure Service Bus. I’ve created a test Windows Forms app that sends messages to the Service Bus and allows me to control the Raspberry Pi2 remotely. The next phase is to build a workflow engine that hooks up to the Service Bus and allows me to automatically control the Raspberry Pi.

Making My Azure ML Project Oxford Sample Application More Visual

Following on from my last post, where I introduced Project Oxford, I’ve done a bit more work to make the project more visual. To summarise, Project Oxford is a set of APIs that build on top of Azure ML to provide Face, Speech, Computer Vision and Language Understanding Intelligence Service (LUIS) APIs. There was a good video from Build 2015 that I watched to get an overview of each of the APIs.

I used the tutorials to build an application that would identify a number of people from a known list in a photograph and highlight the ones that were unknown. The Face API requires people to be trained with a set of photos before identification can be made, which was done using the code in the samples. I created a folder for each person that I wanted to be trained and added different photos of each person, with and without hats and sunglasses and with different expressions. Each set of folders was then passed to the training API. Once trained, you can use the rest of the Face API to firstly detect faces in a picture and then take each face that is found and see if it is known.

One useful tip I’ve found is to have Fiddler running whilst you are debugging as it is far easier to see any errors in the body of the response message than in the exceptions that are thrown. Details of the errors can be seen in the Face API documentation.

The process for training is as follows (Note the terminology is based around the SDK methods, but I’ve linked to the API page as this gives details about the errors etc):

  1. Create a Person Group
  2. Create a Face list for each person using Face Detect
  3. Create a Person, one for each person you want to identify, with the person group id and face list
  4. Train the Person Group

Note: The training does not last forever and you will need to redo it periodically. If you try and detect a person when training has expired then you will get an error response saying that the person group is unknown.

To Identify each individual in a photograph:

  1. Stream the photograph into Detect. This will return a list of faces with face ids
  2. Iterate around each Face and call Identify 
  3. Use the Identify Results to extract the names by calling Get Person.

This is where I got to with the previous post, but this wasn’t very visual, and as I was working with photographs I thought it would be useful to use the data returned to draw a box around the faces that were identified and add the name of the person underneath. This was also useful for knowing which person was identified incorrectly. On the Project Oxford web site there was an image demonstrating exactly this.

I wanted to emulate this and also take it one step further. The data returned from the face detection API provides details about gender, age, the area (face rectangle) in the picture where the face was found, face landmarks, and head pose. What the detection API does not do is tie the name of the person to the face. We already have this information, as it was returned from the Identify API and Get Person; the attribute that links them is the face id. Using the results of the Identify API, I called Get Person for each face identified to return the person’s name and stored this in a Dictionary along with the face id. This then allowed me to load the original photograph into memory, draw the rectangle for each face, and add the name below it, using the face id to extract the rectangle and match the name from the Dictionary. This could then be scaled and shown in the app.
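
A minimal sketch of that drawing step, using System.Drawing for illustration and assuming namesByFaceId is the Dictionary of face id to name and detectedFaces holds the results of the Detect call:

using (var photo = new System.Drawing.Bitmap(photoPath))
using (var g = System.Drawing.Graphics.FromImage(photo))
using (var pen = new System.Drawing.Pen(System.Drawing.Color.Red, 3))
using (var font = new System.Drawing.Font("Arial", 16))
{
    foreach (var face in detectedFaces)
    {
        var r = face.FaceRectangle; // returned by the Detect call
        g.DrawRectangle(pen, r.Left, r.Top, r.Width, r.Height);
        string name;
        if (namesByFaceId.TryGetValue(face.FaceId, out name))
        {
            // The name sits just below the face rectangle
            g.DrawString(name, font, System.Drawing.Brushes.Red, r.Left, r.Top + r.Height);
        }
    }
    photo.Save(outputPath);
}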

Face Recognition with Azure ML and Project Oxford

I’ve wanted to use Azure Machine Learning for a while but didn’t know where to start. Microsoft have released some gallery applications for Azure ML to take away some of the complexity and make it easy for developers to use the service. One item in the gallery that will be useful is Project Oxford. Project Oxford offers a number of features and the one I am going to talk about here is the Face API.

With the Face API you can train Azure ML with pictures of a number of people and then use the matching API to see whether any of the trained people appear in an image.

This is easy to setup and there is a good tutorial here: http://www.projectoxford.ai/doc/face/How-To/identifyperson

Firstly you will need to sign up and get a subscription key http://www.projectoxford.ai/doc/general/subscription-key-mgmt

Log in to the Azure portal with an Azure subscription; the link should open the marketplace. Scroll down to find Face APIs, then click through to the purchase button and purchase. This API is currently free.

Your Face API service will now be created. Once complete, you need to extract the keys for use in your app. Click on your Face API service, then click the Manage button.


Click on Show to view your key and copy it into your application.


Download the Face API SDK from https://www.projectoxford.ai/sdk, unzip it and add it to your project, then add a reference in your application.

Follow the code here: http://www.projectoxford.ai/doc/face/How-To/identifyperson

Be aware that when this is run you may get a Bad Request error when creating a Person Group (I used Fiddler to see the error). This seems to be due to case sensitivity, as when I made the parameters lower case it worked! The sample code above is mixed case but the service seems to want all lowercase. Details of the error messages can be found here: https://dev.projectoxford.ai/docs/services/54d85c1d5eefd00dc474a0ef/operations/54f0387249c3f70a50e79b84 The body of the response contains the exact details of the error.

There are limitations on file size, so I ended up editing my photos down to below 4MB.

Once trained, you can detect multiple people in one photograph and it will identify those that it knows.

I've trained it with a number of people, especially as my daughter was initially identified as her mum :-)

Now that I've added her to the training files she is no longer mistaken.

You might need to play around with the training files, especially to take into account hats and glasses.

Enjoy

Introducing the Azure App Service

Last month Microsoft announced the Azure App Service (http://azure.microsoft.com/blog/2015/03/24/announcing-azure-app-service/). The App Service incorporates Web Apps (previously Websites) and Mobile Apps, and introduces two new services: API Apps and Logic Apps.

API Apps allows you to build small RESTful services that can be combined together with Web, Mobile and/or Logic apps to build your application.

There is new tooling for Visual Studio (http://blogs.msdn.com/b/visualstudio/archive/2015/03/24/introducing-the-azure-api-apps-tools-for-visual-studio-2013.aspx) to help you build API apps, as well as providing the ability to debug your API App when it is deployed in Azure (http://azure.microsoft.com/en-us/documentation/articles/app-service-dotnet-remotely-debug-api-app/).

API Apps are documented using Swagger (http://swagger.io/) and there is a UI in the portal to allow you to run the app with sample data. To access the Swagger UI, click the API App URL in the portal and add /swagger to the end. Click on the API method you are interested in and then click the Action button (POST in my example below).


This expands out to allow you to exercise the API.

An example API is documented here: https://azure.microsoft.com/en-us/documentation/articles/app-service-dotnet-create-api-app/

There is also a marketplace for API Apps which includes API connectors for Office 365, Service Bus, OneDrive, Drop Box and various others. You need to install them as API Apps before they can be used in other apps. All authentication to the services is done in the API App creation process, which makes it easier to wire them together as the authentication is handled for you. Connectors can be used to trigger Logic Apps and also as Actions. Details of this, along with the list of available connectors, are here (http://azure.microsoft.com/en-us/documentation/articles/app-service-logic-use-biztalk-connectors/)

I'm going to blog in more detail about logic apps later, but for now here are a couple of tips for API Apps:

  1. In order to enable swagger and to ensure that your APIs that return data are documented correctly there is some additional code that needs to be added. This is documented here: http://blogs.msdn.com/b/hosamshobak/archive/2015/03/31/logic-app-with-simple-api-app-with-inputs-and-outputs.aspx (a sketch follows this list)
  2. When you create an API app, especially if you created it from the market place (e.g. Azure Storage Blob Connector, Service Bus Connector etc) you are asked for configuration at the time of creation. Once it is created, it is not obvious where to find the configuration. In the new Azure Portal, Browse to API Apps and click on the one you want to reconfigure. In the Essentials panel that appears click on the API app host link. Click the settings Icon followed by Application Settings. Scroll down and any settings for the API App will be visible and can be changed. This is useful if you need to remember which service bus topic and subscription are configured for example.
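
As a minimal sketch of the kind of annotation the post linked in point 1 describes (the Payment type and controller are hypothetical):

using System.Web.Http;
using System.Web.Http.Description;

public class Payment
{
    public int Id { get; set; }
}

public class PaymentsController : ApiController
{
    // Tells Swagger the shape of the data this action returns
    [ResponseType(typeof(Payment))]
    public IHttpActionResult Get(int id)
    {
        return Ok(new Payment { Id = id });
    }
}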

Azure Storage Version Changes

If you are unaware, older versions of the Azure Storage API will be turned off in December 2015. This means that any of your applications that use these older versions will stop working. If you are accessing the Storage API through an SDK then you most likely just need to rebuild with a newer supported version. If you are accessing the REST API directly then you will need to ensure that the code changes to support the newer API versions.
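
If you call the REST API directly, the service version is pinned per request with the x-ms-version header, so that is the value to check in your code. A sketch (the version shown is just an example of a supported version at the time of writing):

HttpClient storageClient = new HttpClient();
// Pin a supported Storage service version on every request
storageClient.DefaultRequestHeaders.Add("x-ms-version", "2014-02-14");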

Full details of the changes can be found at: http://azure.microsoft.com/blog/2014/08/04/microsoft-azure-storage-service-version-removal/

Changing Website Settings through the Azure Portal

When using configuration in Microsoft Azure websites, ensure that you put configuration that you are likely to change often in AppSettings. This allows you to make configuration changes in the management portal of Azure rather than having to edit the web.config file directly. Examples of where you might like to do this include settings that allow you to disable site features temporarily, such as during an upgrade or routine maintenance.

App settings in the web.config file are name/value pairs and are accessed as follows:

System.Configuration.ConfigurationManager.AppSettings["StevesSetting"]

Which can be seen in the web.config as follows:

<configuration>
  .
  <appSettings>
      <add key="StevesSetting" value="Webconfig setting" />
  </appSettings>
  .
</configuration>
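
For example, a hypothetical maintenance-mode toggle read from app settings (the setting name is illustrative):

bool maintenanceMode;
// Reads "MaintenanceMode" from appSettings; false if absent or unparsable
bool.TryParse(System.Configuration.ConfigurationManager.AppSettings["MaintenanceMode"], out maintenanceMode);
if (maintenanceMode)
{
    // Short-circuit to a "down for maintenance" page
}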

In order to manage this configuration in the portal you need to navigate to your website and click the configure tab (in the old portal)


and scroll down to app settings, then add in the setting you wish to change

image

Or in the new portal, navigate to the website, click Settings then Application settings, and scroll down to the app settings section
