Steve Spencer's Blog

Blogging on Azure Stuff

Adding Rigour to an AzureML Web Service Deployment

As a developer I want an automated mechanism to build, test and deploy everything I do, so when my team and I came to implementing something with AzureML it was the first thing I was challenged on. We have a group of analysts who want to use AzureML to build part of our system without having to translate their requirements into a language that we developers can use to build the system in code. I’d been playing around with AzureML and deployed a few services as part of a proof of concept, but hadn’t looked at how we could automate the deployment process and add some control. We didn’t want a mechanism that would allow the analysts to build and deploy a new model to production without some way to check whether what they had built was fit for purpose. After a bit of research I found that we could export both the experiment and the web service as JSON from AzureML. Exporting both the experiment and the web service definitions would allow us to version control the source. We could also import these definitions to allow us to move the experiment and web service to different subscriptions. Exporting and importing the experiment was relatively straightforward using PowerShell.

Export-AmlExperimentGraph & Import-AmlExperimentGraph

The full PowerShell to export and import the experiment is:

# Connect to the source workspace and find the experiment by name
Get-AmlWorkspace -ConfigFile .\config-source.json
$exp = Get-AmlExperiment -ConfigFile .\config-source.json | where Description -eq '<ExperimentName>'
Export-AmlExperimentGraph -ExperimentId $exp.ExperimentId -OutputFile 'C:\experiment.json' -ConfigFile .\config-source.json

# Connect to the destination workspace and import the exported experiment
Get-AmlWorkspace -ConfigFile .\config-dest.json
Import-AmlExperimentGraph -InputFile 'C:\experiment.json' -ConfigFile .\config-dest.json

Note: This relies on two config files, one to identify the source workspace and the other to identify the destination workspace. Both use the following format:

{
  "Location": "West Europe",
  "WorkspaceId": "<WorkspaceId>",
  "AuthorizationToken": "<AuthorizationToken>"
}

You can find the workspace id and authorization tokens by opening your AzureML workspace then clicking settings.

[Screenshots: the AzureML workspace Settings page showing the Workspace ID, and the Authorization Tokens tab]

Exporting the experiment as JSON allows you to add it to source control and version the experiment.

Importing the experiment into a new workspace will allow you to open the workspace, view or edit the experiment, and manually deploy the web service. On its own this gives you some control over how and when the web service gets deployed, and stops the analytics team from accidentally deploying something to production and potentially breaking the system.

It is also possible to export a deployed web service and then import it into a new subscription.

When we came to trying to export the web service we ran into a few issues. Firstly, there seemed to be a number of ways to export the web service definition, and they seemed to produce different JSON.

https://github.com/ritwik20/AzureML-WebServices

https://github.com/hning86/azuremlps#export-amlwebservicedefinitionfromexperiment

https://docs.microsoft.com/en-us/powershell/module/azurerm.machinelearning/export-azurermmlwebservice?view=azurermps-2.2.0

We settled on the last link, but also took some missing information from the first. The process for exporting is a little more complicated.

Firstly, the web service definition needs to be exported as a JSON file. The web service export/import uses Resource Manager and requires a different login mechanism from the experiment export/import. I used:

Login-AzureRmAccount -SubscriptionId <My Subscription ID>

This uses an interactive login, so if you want to automate the process you will need to use a service principal.
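
A minimal sketch of a non-interactive login with a service principal (the application id, tenant id and key below are placeholders for your own service principal):

# Hypothetical service principal details - replace with your own
$appId = "<ApplicationId>"
$tenantId = "<TenantId>"
$secret = ConvertTo-SecureString "<ServicePrincipalKey>" -AsPlainText -Force
$cred = New-Object System.Management.Automation.PSCredential ($appId, $secret)

# Non-interactive login suitable for an automated build/deploy pipeline
Login-AzureRmAccount -ServicePrincipal -TenantId $tenantId -Credential $cred
Select-AzureRmSubscription -SubscriptionId "<SubscriptionId>"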

To export the web service:

$webService = Get-AzureRmMlWebService -Name "Source Service Name" -ResourceGroupName "Source Resource Group Name" 
Export-AzureRmMlWebService -WebService $webService -OutputFile 'C:\wsexport.json' 

The JSON then needs editing to add in the new storage account and commitment plan.

The JSON can then be imported into the new subscription using New-AzureRmMlWebService.

Changing the JSON to add in the commitment plan id seemed to cause problems and I kept getting the error “Commitment Plan ID must be provided”. This error was confusing as I was including the commitment plan id in the configuration and I thought that I had it in the correct place. If you open the JSON that you export and find the storage account node then you will need to overwrite this with:

"storageAccount":{ 
     "name": "<StorageAccountName>",  
     "key": "<StorageAccountKey>" },  
"commitmentPlan": { 
      "id": "/subscriptions/0<Subscription-ID>/resourceGroups/<ResourceGroupName>/providers/Microsoft.MachineLearning/commitmentPlans/<CommitmentPlanName>"}, 

My issue was that I had the commitment plan id wrong, rather than in the wrong place, and I found the correct id using:

Get-AzureRmMlCommitmentPlan
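
For example, something like the following should list the plans in the destination resource group along with their full resource ids (the resource group name is a placeholder):

# List the commitment plans and their full resource ids
Get-AzureRmMlCommitmentPlan -ResourceGroupName "<ResourceGroupName>" | Select-Object Name, Id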

Once I had this configured correctly the import worked without any errors using:

New-AzureRmMlWebService -Force -ResourceGroupName "New Resource Group Name" -Name "Web Service Name" -Location "West Europe" -DefinitionFile "C:\wsexport.json" 

As we’ve used PowerShell to automate the export and import, we could then easily script the config file edits and wire this in to an automated test and deploy process. We can write tests that check the web service parameters have not been changed by the analytics team and that the return data is in the correct format, so we can ensure that the deployed service will at least function correctly. It’s more difficult to check whether the actual machine learning process is correct, but we will know when the interfaces are broken. We have also version controlled both the experiment and web service JSON, so we can easily roll back if necessary. We decided that we needed both: to automate a web service deployment the experiment didn’t need to be copied to a new workspace, we just needed the web service in the new subscription, but we wanted the experiment under version control too.
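
As a rough sketch of scripting those config file edits (the file paths, account details and plan id are placeholders, and it assumes the storageAccount and commitmentPlan nodes sit under a properties node in the exported file, which may differ in your export):

# Load the exported web service definition
$definition = Get-Content 'C:\wsexport.json' -Raw | ConvertFrom-Json

# Overwrite (or add) the storage account node for the destination subscription
$storage = [pscustomobject]@{ name = "<StorageAccountName>"; key = "<StorageAccountKey>" }
$definition.properties | Add-Member -NotePropertyName storageAccount -NotePropertyValue $storage -Force

# Overwrite (or add) the commitment plan node, using the id returned by Get-AzureRmMlCommitmentPlan
$planId = "/subscriptions/<SubscriptionId>/resourceGroups/<ResourceGroupName>/providers/Microsoft.MachineLearning/commitmentPlans/<CommitmentPlanName>"
$definition.properties | Add-Member -NotePropertyName commitmentPlan -NotePropertyValue ([pscustomobject]@{ id = $planId }) -Force

# Write the edited definition back out, ready for New-AzureRmMlWebService
$definition | ConvertTo-Json -Depth 20 | Set-Content 'C:\wsexport-edited.json'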

Making My Azure ML Project Oxford Sample Application More Visual

Following on from my last post where I introduced Project Oxford, I’ve done a bit more work to take the project that was built and make it more visual. To summarise, Project Oxford is a set of APIs that build on top of Azure ML to provide Face, Speech, Computer Vision and Language Understanding Intelligence Service (LUIS). There was a good video from Build 2015 that I watched to provide an overview of each of the APIs.

I used the tutorials to build an application that would identify a number of people from a known list in a photograph and highlight the ones that were unknown. The Face API requires people to be trained with a set of photos first, before identification can be made. This was done using the code in the samples. I created a folder for each person that I wanted to be trained and added different photos of each person, with and without hats and sunglasses, and with different expressions. Then each set of folders was passed to the training API. Once trained, you can use the rest of the Face API to first detect the faces in a picture and then take each face that is found and see whether it is known.

One useful tip I’ve found is to have Fiddler running whilst you are debugging as it is far easier to see any errors in the body of the response message than in the exceptions that are thrown. Details of the errors can be seen in the Face API documentation.

The process for training is as follows (Note the terminology is based around the SDK methods, but I’ve linked to the API page as this gives details about the errors etc):

  1. Create a Person Group
  2. Create a Face list for each person using Face Detect
  3. Create a Person, one for each person you want to identify, using the person group id and face list
  4. Train the Person Group

Note: The training does not last forever and you will need to redo it periodically. If you try and detect a person when training has expired then you will get an error response saying that the person group is unknown.
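
The SDK wraps the Face API REST endpoints, so as a minimal sketch of those four training steps called directly from PowerShell (the endpoint URLs and JSON shapes are assumptions taken from the Face API v1.0 documentation rather than the sample code, the group id "family", person name and folder path are just illustrative, and this flow adds faces to a person after it is created rather than building a face list first):

$key = "<SubscriptionKey>"
$base = "https://api.projectoxford.ai/face/v1.0"
$headers = @{ "Ocp-Apim-Subscription-Key" = $key }

# 1. Create a person group (the id must be all lower case)
Invoke-RestMethod -Method Put -Uri "$base/persongroups/family" -Headers $headers `
    -ContentType "application/json" -Body '{ "name": "family" }'

# 2/3. Create a person in the group, then add a face from each training photo
$person = Invoke-RestMethod -Method Post -Uri "$base/persongroups/family/persons" -Headers $headers `
    -ContentType "application/json" -Body '{ "name": "Steve" }'

Get-ChildItem 'C:\Training\Steve\*.jpg' | ForEach-Object {
    Invoke-RestMethod -Method Post -Headers $headers -ContentType "application/octet-stream" `
        -Uri "$base/persongroups/family/persons/$($person.personId)/persistedFaces" -InFile $_.FullName
}

# 4. Train the person group
Invoke-RestMethod -Method Post -Uri "$base/persongroups/family/train" -Headers $headers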

To Identify each individual in a photograph:

  1. Stream the photograph into Detect. This will return a list of faces with face ids
  2. Iterate over each face and call Identify 
  3. Use the Identify Results to extract the names by calling Get Person.
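
A minimal sketch of those three steps, again against the REST endpoints and reusing $base and $headers from the training sketch above (the paths, field names and photo location are assumptions):

# 1. Detect faces in the photograph - returns a list of faces with face ids
$faces = Invoke-RestMethod -Method Post -Uri "$base/detect" -Headers $headers `
    -ContentType "application/octet-stream" -InFile 'C:\Photos\group.jpg'

# 2. Identify the detected faces against the trained person group
$body = @{ personGroupId = "family"; faceIds = @($faces.faceId) } | ConvertTo-Json
$results = Invoke-RestMethod -Method Post -Uri "$base/identify" -Headers $headers `
    -ContentType "application/json" -Body $body

# 3. Extract the names by calling Get Person, keeping them against the face id
$names = @{}
foreach ($result in $results) {
    if ($result.candidates.Count -gt 0) {
        $person = Invoke-RestMethod -Method Get -Headers $headers `
            -Uri "$base/persongroups/family/persons/$($result.candidates[0].personId)"
        $names[$result.faceId] = $person.name
    }
}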

This is where I got to with the previous post, but this wasn’t very visual, and as I was working with photographs I thought it would be useful to use the data returned to draw a box around the faces that were identified and add the name of the person underneath. This was also useful to know which person was identified incorrectly. The Project Oxford web site has an image showing this kind of annotation.

I wanted to emulate this and also take it one step further. The data returned from the face detection API provides details about gender, age, the area (face rectangle) in the picture where the face was found, face landmarks, and head pose. What the detection API does not do is tie the name of the person to the face. We already have this information, as it was returned from the Identify API and Get Person; the attribute that links them is the face id. Using the results of the Identify API, I called Get Person for each face identified to return the person’s name and stored this in a Dictionary along with the face id. This then allowed me to load the original photograph into memory, draw the rectangles for each face and add the name below each one, using the face id to extract the rectangle and match the name from the Dictionary. This could then be scaled and shown in the app.
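
The app itself does its drawing in .NET, but as a rough sketch of the idea using System.Drawing from PowerShell (assuming $faces and the $names Dictionary from the identification sketch above, with the file paths as placeholders):

Add-Type -AssemblyName System.Drawing

# Load the original photograph and prepare to draw on it
$bitmap = New-Object System.Drawing.Bitmap -ArgumentList 'C:\Photos\group.jpg'
$graphics = [System.Drawing.Graphics]::FromImage($bitmap)
$pen = New-Object System.Drawing.Pen -ArgumentList ([System.Drawing.Color]::Red, 4)
$font = New-Object System.Drawing.Font -ArgumentList ('Arial', 24)

foreach ($face in $faces) {
    # Detect returns a faceRectangle (left, top, width, height) for each face
    $r = $face.faceRectangle
    $graphics.DrawRectangle($pen, $r.left, $r.top, $r.width, $r.height)

    # Match the name from the Dictionary using the face id and draw it under the box
    $name = $names[$face.faceId]
    if ($name) {
        $graphics.DrawString($name, $font, [System.Drawing.Brushes]::Red, $r.left, ($r.top + $r.height + 5))
    }
}

$graphics.Dispose()
$bitmap.Save('C:\Photos\group-annotated.jpg')
$bitmap.Dispose()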

Face Recognition with Azure ML and Project Oxford

I’ve wanted to use Azure Machine Learning for a while but didn’t know where to start. Microsoft have released some gallery applications for Azure ML to take away some of the complexity and make it easy for developers to use the service. One item in the gallery that will be useful is Project Oxford. Project Oxford offers a number of features and the one I am going to talk about here is the Face API.

With the Face API you can train Azure ML with pictures of a number of people and then use the matching api to see whether any of the trained people appear in the image.

This is easy to set up and there is a good tutorial here: http://www.projectoxford.ai/doc/face/How-To/identifyperson

Firstly, you will need to sign up and get a subscription key: http://www.projectoxford.ai/doc/general/subscription-key-mgmt

Log in to the Azure portal with an Azure subscription. The link should open the marketplace. Scroll down to find the Face APIs, then click through to the purchase button and purchase. This API is currently free.

Your Face API service will now be created. Once complete, you need to extract the keys for use in your app. Click on your Face API service, then click the Manage button.

[Screenshot: the Face API service in the portal with the Manage button]

Click Show to view your key and copy it into your application.

[Screenshot: the key management page with the Show option]

Download the Face API SDK from https://www.projectoxford.ai/sdk, unzip it and add it to your project, then add a reference in your application.

Follow the code here: http://www.projectoxford.ai/doc/face/How-To/identifyperson

Be aware that when this is run you may get a bad request error (I used Fiddler to see the error) when creating a Person Group. This seems to be due to case sensitivity, and when I made the parameters lower case it worked! The sample code above is mixed case but the service seems to want all lower case. Details of the error messages can be found here: https://dev.projectoxford.ai/docs/services/54d85c1d5eefd00dc474a0ef/operations/54f0387249c3f70a50e79b84. The body of the response contains the exact details of the error.
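
As an illustration of the lower-case requirement, creating the person group directly against the REST endpoint looks something like this (the endpoint URL and body are assumptions based on the Face API documentation, and the group id is just an example):

# The person group id must be all lower case, e.g. "family" rather than "Family"
$headers = @{ "Ocp-Apim-Subscription-Key" = "<SubscriptionKey>" }
Invoke-RestMethod -Method Put -Headers $headers `
    -Uri "https://api.projectoxford.ai/face/v1.0/persongroups/family" `
    -ContentType "application/json" -Body '{ "name": "family" }'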

There are limitations on file size, so I ended up editing my photos down to below 4 MB.

Once trained, you can detect multiple people in one photograph and it will identify those that it knows.

I've trained it with a number of people, especially as my daughter was initially identified as her mum :-)

Now that I've added her to the training files she is no longer mistaken.

You might need to play around with the training files, especially to take into account hats and glasses.

Enjoy