Steve Spencer's Blog

Blogging on Azure Stuff

Windows 10 IoT Core New Release

I’ve just upgraded my Raspberry Pi 2 to Windows 10 IoT Core build 10531.0 (download, release notes). It fixes an issue I’d been having with setting the application that runs when the Pi first starts up. Prior to this release my application would start up the first time, then shut down and be replaced by the default app, and it would not start at power-up again. Now my application starts up every time I power on my Raspberry Pi :-)

It is also now possible to set the computer name and the administrator password from the Raspberry Pi administration website. Previously this was done using PowerShell.

In order to navigate to the administration page you must first know either the machine name or the IP address of your Raspberry Pi. This can be found in the Windows 10 IoT Core Watcher application that runs after you have installed the IoT Core SDK. To access the admin website, either enter the address into a browser (http://<ipaddressornameofPi>:8080) or right click on the Pi in the IoT Core Watcher application and select “Web Browse Here”. You will need to enter the username Administrator plus your password to access the site.


Here you can enter a new device name (machine name) as well as change the password. A reboot will be required if you change the name.

In order to set the start-up app, click the Apps link on the menu panel.


You will need to ensure that you have first deployed your application to the Raspberry Pi. If you have debugged your application using Visual Studio then a debug version will already have been installed on the Raspberry Pi.

From the Installed Apps drop-down select your application and click the Set Default button. Your application should start and replace the Default App in the running apps list. You can check this by clicking reboot or by cycling the power to the Raspberry Pi; your app should start up after the Raspberry Pi has booted.

Adding Multi-Factor Authentication to ADFS

I’ve been investigating how to wire up AD to ADFS and, thanks to my friend James, was pointed in the direction of multi-factor authentication. The post here explains how to add multi-factor authentication (MFA) to ADFS. There were, however, a couple of areas that were not clear and needed additional research:

  1. How to get a config file into the MFA provider
  2. How to send additional claims from the MFA provider
  3. How to customise the ADFS MFA portal pages

Adding configuration into the MFA provider is handled in the OnAuthenticationPipelineLoad method of the AuthenticationAdapter class. The configData parameter contains a Data property, which is a file stream giving you access to the config file. The config file can be anything you want, but you need to add it when you register your plugin with ADFS. Plugin registration is done in PowerShell and you need to add the configuration as follows:

Register-AdfsAuthenticationProvider -TypeName $typeName -Name "MFA_MyProvider" -Verbose -ConfigurationFilePath c:\mfa\config.xml (see here)

In your OnAuthenticationPipelineLoad method you need to process the config file:

public void OnAuthenticationPipelineLoad(IAuthenticationMethodConfigData configData)
{
    if (configData != null)
    {
        if (configData.Data != null)
        {
            // load the config file
            using (StreamReader reader = new StreamReader(configData.Data, Encoding.UTF8))
            {
                try
                {
                    string config = reader.ReadToEnd();

                    // Read your config here
                }
                catch (IOException)
                {
                    // handle a failure to read the config stream
                }
            }
        }
    }
}

 

Sending additional claims is achieved in the TryEndAuthentication method of the AuthenticationAdapter class. It should already be returning an authentication method as an array of claims. You can add additional claims to this array and return them through the claims out parameter. You will need to add rules in ADFS to pass these claims through to the application if they are required.
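
As a rough sketch, the adapter method might look something like this (ValidateProofData and AdapterPresentation are placeholders for your own validation logic and pages; the authentication method URI is the standard OTP one used by MFA adapters, and the second claim type is made up for illustration):

public IAdapterPresentation TryEndAuthentication(IAuthenticationContext context, IProofData proofData, HttpListenerRequest request, out Claim[] claims)
{
    claims = null;
    if (ValidateProofData(proofData, context)) // placeholder for your own validation
    {
        claims = new Claim[]
        {
            // the authentication method claim that ADFS expects back from an MFA adapter
            new Claim("http://schemas.microsoft.com/ws/2008/06/identity/claims/authenticationmethod",
                "http://schemas.microsoft.com/ws/2012/12/authmethod/otp"),
            // an additional claim; ADFS needs a pass-through rule before the application sees it
            new Claim("http://schemas.yourcompany.com/claims/mfaprovider", "MFA_MyProvider")
        };
        return null; // authentication succeeded, nothing more to display
    }
    return new AdapterPresentation(); // placeholder presentation that re-displays the logon page with an error
}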

Customising the MFA portal is done through PowerShell; details are found here:

http://thinketg.com/adfs-3-0-logon-page-customization/ & https://technet.microsoft.com/en-us/library/dn280950.aspx

 

Windows 10 IoT Core project issues when upgrading to VS2015 RTM

I’ve just updated my VS2015 to RTM and tried to load in my Blinky IoT project for Raspberry Pi 2. It didn’t load and I was informed that the project required updating.


Right clicking on the project offers the option to download updates.

Selecting this takes you to:

https://msdn.microsoft.com/en-us/library/windows/apps/xaml/mt188198.aspx

It looks like all I can do is to create a new blank project and copy the existing project files over.

I created a new project and copied the contents of MainPage.xaml.cs and MainPage.xaml over the contents of the files created in the new project; I found this quicker than copying the files over manually. Also, change the namespace (if you created a project with a different name) in both MainPage.xaml.cs and MainPage.xaml. Add in all other files you are using by right clicking on the project and clicking Add Existing Items…

You will also need to add a reference to the Windows IoT Extensions for the UWP.


In the project properties I selected the remote debugger and entered the IP address of my Raspberry Pi.

When I tried to debug, the deployment failed because the version of the remote debugger on the Raspberry Pi 2 was out of date. In order to upgrade it I needed to also upgrade my Windows 10 to the latest version (https://ms-iot.github.io/content/en-US/win10/SetupPCRPI.htm) and then reflash my Raspberry Pi 2 SD card (https://ms-iot.github.io/content/en-US/win10/SetupRPI.htm).

I first updated my Windows 10 VM, but when I ran the WindowsIoTImageHelper it would not recognise the SD card of the host machine and I couldn’t seem to force it to use the host’s SD card. I then updated my Surface Pro to the latest Windows 10 and repeated the process to reflash my Pi.

With all the upgrades completed my project now deploys and runs fine on my updated Raspberry Pi 2.

Raspberry Pi 2, IoT Core and Azure Service Bus

Using a Raspberry Pi 2 on Windows 10 IoT Core presents a number of challenges, mainly due to the limitations of the universal app APIs and the lack of libraries that currently run on the platform. I specifically wanted to use Azure Service Bus Topics to send and receive messages on my Raspberry Pi 2. After a bit of searching around I decided that the easiest way to achieve this was to use the Service Bus REST API. There are a number of samples included in the documentation:

Receiving a message: https://msdn.microsoft.com/en-us/library/azure/hh690923.aspx

Sending a message: https://msdn.microsoft.com/en-us/library/azure/hh690922.aspx

The full code for the samples uses WebClient, but I needed to use HttpClient, so I converted the samples accordingly.

[EDIT] The above links don't work anymore so I've published my code on GitHub https://github.com/sdspencer-mvp/RaspberryPi2-UnlockTheDoor/blob/master/UnlockTheDoor/MainPage.xaml.cs 

Sending a message to the Service Bus requires an HTTP POST, and receive-and-delete requires an HTTP DELETE. The following code shows how this was achieved using HttpClient.

private async void SendMessage(string baseAddress, string queueTopicName, string token, string body, IDictionary<string, string> properties)
{
    string fullAddress = baseAddress + queueTopicName + "/messages" + "?timeout=60&api-version=2013-08";
    await SendViaHttp(token, body, properties, fullAddress, HttpMethod.Post);
}

// Receives and deletes the next message from the given resource (queue, topic, or subscription)
// using an HTTP DELETE request.
private static async System.Threading.Tasks.Task<string> ReceiveAndDeleteMessageFromSubscription(string baseAddress, string topic, string subscription, string token, IDictionary<string, string> properties)
{
    string fullAddress = baseAddress + topic + "/Subscriptions/" + subscription + "/messages/head" + "?timeout=60";
    HttpResponseMessage response = await SendViaHttp(token, "", properties, fullAddress, HttpMethod.Delete);
    string content = "";
    if (response.IsSuccessStatusCode)
    {
        // we should have retrieved a message
        content = await response.Content.ReadAsStringAsync();
    }
    return content;
}

private static async System.Threading.Tasks.Task<HttpResponseMessage> SendViaHttp(string token, string body, IDictionary<string, string> properties, string fullAddress, HttpMethod httpMethod)
{
    HttpClient webClient = new HttpClient();
    HttpRequestMessage request = new HttpRequestMessage()
    {
        RequestUri = new Uri(fullAddress),
        Method = httpMethod,
    };
    webClient.DefaultRequestHeaders.Add("Authorization", token);

    if (properties != null)
    {
        foreach (string property in properties.Keys)
        {
            request.Headers.Add(property, properties[property]);
        }
    }
    request.Content = new FormUrlEncodedContent(new[] { new KeyValuePair<string, string>("", body) });
    HttpResponseMessage response = await webClient.SendAsync(request);
    if (!response.IsSuccessStatusCode)
    {
        string error = string.Format("{0} : {1}", response.StatusCode, response.ReasonPhrase);
        throw new Exception(error);
    }
    return response;
}

 

There was an issue with the GetSASToken method, as some of the encryption classes weren’t supported in a universal app, so I converted it to the following:

private string GetSASToken(string baseAddress, string SASKeyName, string SASKeyValue)
{
    TimeSpan fromEpochStart = DateTime.UtcNow - new DateTime(1970, 1, 1);
    string expiry = Convert.ToString((int)fromEpochStart.TotalSeconds + 3600);
    string stringToSign = WebUtility.UrlEncode(baseAddress) + "\n" + expiry;
    string hash = HmacSha256(SASKeyValue, stringToSign);
    string sasToken = String.Format(CultureInfo.InvariantCulture, "SharedAccessSignature sr={0}&sig={1}&se={2}&skn={3}",
        WebUtility.UrlEncode(baseAddress), WebUtility.UrlEncode(hash), expiry, SASKeyName);
    return sasToken;
}

 

 

public string HmacSha256(string secretKey, string value)
{
    // Move strings to buffers.
    var key = CryptographicBuffer.ConvertStringToBinary(secretKey, BinaryStringEncoding.Utf8);
    var msg = CryptographicBuffer.ConvertStringToBinary(value, BinaryStringEncoding.Utf8);

    // Create HMAC.
    var objMacProv = MacAlgorithmProvider.OpenAlgorithm(MacAlgorithmNames.HmacSha256);
    var hash = objMacProv.CreateHash(key);
    hash.Append(msg);
    return CryptographicBuffer.EncodeToBase64String(hash.GetValueAndReset());
}

 

This allowed me to send and receive messages on my Raspberry Pi 2 using IoT Core. I created the subscriptions for the topic in a separate app using the .NET SDK, which is cheating I guess, but I’ll get around to converting it at some point.
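
For reference, the separate app creates the subscriptions with something along these lines (a sketch using the full .NET Microsoft.ServiceBus SDK; the topic and subscription names are placeholders):

NamespaceManager namespaceManager = NamespaceManager.CreateFromConnectionString(
    "Endpoint=sb://<yournamespace>.servicebus.windows.net/;SharedAccessKeyName=<KeyName>;SharedAccessKey=<Key>");

if (!namespaceManager.SubscriptionExists("doorbell", "BingBongSubscription"))
{
    // filter so that this subscription only receives BingBong command messages
    SqlFilter filter = new SqlFilter("MessageType = 'Command' AND Command = 'BingBong'");
    namespaceManager.CreateSubscription("doorbell", "BingBongSubscription", filter);
}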

 

In order to use this the following parameters are used:

 

SendMessage(BaseAddress, QueueOrTopicName, Token, MessageBody, MessageProperties)

 

BaseAddress is “https://<yournamespace>.servicebus.windows.net/”

 

Token is the return value from the GetSASToken method, using the same base address as above; the KeyName and Key are obtained from the connection string in the Azure portal, which is of the format:

 

Endpoint=sb://<yournamespace>.servicebus.windows.net/;SharedAccessKeyName=<KeyName>;SharedAccessKey=<Key>.

 

MessageBody – this is the string value of the message body.

 

MessageProperties – a Dictionary containing name/value pairs that will be added to the request headers. For example, I set the following message properties when the door bell button is pressed on my Raspberry Pi 2:

 

Dictionary<string, string> properties = new Dictionary<string, string>();
properties.Add("Priority", "High");
properties.Add("MessageType", "Command");
properties.Add("Command", "BingBong");

 

These are added to the Service Bus message and allow me to have subscriptions that filter on Command message types as well as on the specific command of BingBong.
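
Putting it together, a send looks something like this (the key name, key and topic name are placeholders for your own values):

string baseAddress = "https://<yournamespace>.servicebus.windows.net/";
string token = GetSASToken(baseAddress, "<KeyName>", "<Key>");

Dictionary<string, string> properties = new Dictionary<string, string>();
properties.Add("Priority", "High");
properties.Add("MessageType", "Command");
properties.Add("Command", "BingBong");

SendMessage(baseAddress, "doorbell", token, "Ding dong at the front door", properties);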

 

Receiving messages is a bit trickier, as we need to create a separate task that is continually running. Once a message is received we need to get back to the main thread to execute the action for the message.

await Task.Run(async () =>
{
    ...

    string message = await ReceiveAndDeleteMessageFromSubscription(_BaseAddress,
        _TopicName,
        _SubscriptionName,
        token, null);

    if (message.Contains("Unlock"))
    {
        await Windows.ApplicationModel.Core.CoreApplication.MainView.CoreWindow.Dispatcher.RunAsync(
            CoreDispatcherPriority.Normal,
            () =>
            {
                SwitchLED(false);
            });
    }

    ...
});

 

You may want to put a delay in this loop if receiving the messages causes the app to slow down due to the message loop hogging all the resources. There’s a default timeout in the call to SendAsync and this will automatically slow the thread down.
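
Something like this inside the receive task would do it (a sketch; the 500ms back-off is an arbitrary value):

while (true)
{
    string message = await ReceiveAndDeleteMessageFromSubscription(_BaseAddress,
        _TopicName, _SubscriptionName, token, null);

    if (string.IsNullOrEmpty(message))
    {
        // nothing was waiting; back off briefly so the loop doesn't hog the CPU
        await Task.Delay(500);
    }
}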

 

I now have a working Raspberry Pi 2 that can send and receive messages to the Azure Service Bus. I’ve created a test WinForms app that allows me to send messages to the Service Bus, which lets me control the Raspberry Pi 2 remotely. The next phase is to build a workflow engine that hooks up to the Service Bus and allows me to automatically control the Raspberry Pi.

Issues setting up Raspberry Pi, Windows 10 IoT core and Visual Studio on a Windows 10 VM

After setting up my Surface Pro with Windows 10 and IoT core I decided that in order to demo it all I needed a Windows 10 VM with it all on. I had a couple of issues that I didn’t get on my Surface Pro.

The first issue I had was that the Windows IoT Core Watcher application would not run properly and kept shutting down. This is a known bug and has a workaround:

Launch the "Developer Command Prompt for VS2015" as Administrator
change the working directory over to "C:\Program Files (x86)\Microsoft IoT"
sn -Vr WindowsIoTCoreWatcher.exe
corflags WindowsIoTCoreWatcher.exe /32BIT+ /FORCE

 

The second issue was that Visual Studio couldn’t connect to TFS online. When I tried to manage connections I got the following error:

SplitterDistance must be between Panel1MinSize and Width - Panel2MinSize.

This seems to happen on both VS 2015 Enterprise RC and Community RC editions. I found a workaround as follows:

Open up Team Foundation Server online at <youraccount>.visualstudio.com. Click Code, then navigate to the project you want to open and click on the solution file, which opens the solution in the web editor. Click the Visual Studio icon and VS opens with the team project now in Team Explorer. Close VS and open it again and your team project should still be connected in Team Explorer.

 

Now with Visual Studio working I needed to set Windows into developer mode. This can be done from Start -> Settings -> Update & Security -> For Developers. However, when I tried this the settings page kept closing. You can also use the Group Policy editor (gpedit.msc) as described here:

https://msdn.microsoft.com/en-us/library/windows/apps/dn706236.aspx


Raspberry Pi and Windows IoT Core – Push Buttons and Relays

In my previous Raspberry Pi post I talked about using the Raspberry Pi to turn an LED on and off. Now whilst this is pretty, it’s not really that useful. So I wanted to use the same program but to turn on something that needed a bit more power than an LED. I’d recently acquired a solenoid (a coil with a bolt that gets drawn towards the magnetised coil when 12V is applied to it). Now my Pi doesn’t have enough power on its own to drive the solenoid, so I needed a mechanism to apply 12V to the coil from the 3.3V output that the Pi delivers. This meant I had to think back to my school days, which in my case is a difficult task :-). I remembered that I could use a transistor to turn on something with a bigger current from a smaller one. I decided that as the Pi can supply both 3.3V and 5V I would use a 5V relay and a transistor to allow me to turn on a separate 12V supply to the solenoid. I tried to calculate the correct resistors for the circuit but failed miserably, so in the end I decided trial and error was my best plan. I used an NPN transistor and a resistor, and I also combined the LED and resistor from the previous post. The other change I wanted to make was to remove the timer that was being used to turn the LED on and off and replace it with a push button switch.
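
For anyone who wants to do the sums rather than resort to trial and error, here is a rough worked example (the figures are illustrative, not measured from my circuit): if the relay coil draws about 70mA and the transistor has a current gain (hFE) of around 100, the base needs roughly 70mA / 100 = 0.7mA to switch the transistor fully on. With the GPIO pin at 3.3V and about 0.7V dropped across the base-emitter junction, the base resistor should be no more than (3.3V - 0.7V) / 0.7mA ≈ 3.7kΩ, so something like 1kΩ leaves a comfortable margin.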

The following shows the circuit I used.

[Circuit diagram: a GPIO pin driving an NPN transistor via a resistor, switching a 5V relay that in turn switches the 12V supply to the solenoid]

I should really use a diode across the relay coil to protect the transistor, and I’ve even used my soldering iron without burning my fingers.

For information, the following image shows the assignment of pins for the Raspberry Pi 2:

[Diagram: Raspberry Pi 2 GPIO pin assignments]

Anyway, in order to change the code to use a push button I took the sample https://www.hackster.io/windowsiot/push-button-sample, added the push button code to my Blinky sample, and removed the timer turning the LED on and off.

In order to use the push button I needed to configure one of the GPIO pins for input, rather than for output as was used for turning on the LED. I still needed a timer, as I needed to read the push button pin on a regular basis to see when the input changed to low as the button was pressed. I set the timer to 250ms so that I didn’t have to hold the button down too long for it to register, but not so short that the timer would hog all the resources on the Pi.
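
The push button handling ended up looking roughly like this (a sketch using the Windows.Devices.Gpio and Windows.UI.Xaml namespaces; the pin numbers are examples and SwitchLED stands in for the LED/relay code from the earlier post):

private GpioPin buttonPin;
private DispatcherTimer timer;

private void InitGPIO()
{
    GpioController gpio = GpioController.GetDefault();

    // configure the push button pin for input (pin number is an example)
    buttonPin = gpio.OpenPin(5);
    buttonPin.SetDriveMode(GpioPinDriveMode.Input);

    // poll the button every 250ms - quick enough to catch a press without hogging the Pi
    timer = new DispatcherTimer { Interval = TimeSpan.FromMilliseconds(250) };
    timer.Tick += (s, e) =>
    {
        // the button pulls the pin low when pressed
        if (buttonPin.Read() == GpioPinValue.Low)
        {
            SwitchLED(true);
        }
    };
    timer.Start();
}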

Now when I press the button the LED turns on, the relay clicks and the solenoid pulls the bolt across. It made me jump when I first connected it up, as the solenoid made quite a loud bang and I thought I’d blown something up!!

I think I know enough now about how to use the GPIO on the Raspberry Pi, so I am looking at how I can connect the Pi up to Azure and make it part of a distributed system.

More on this to come……

Making My Azure ML Project Oxford Sample Application More Visual

Following on from my last post, where I introduced Project Oxford, I’ve done a bit more work to make the project more visual. To summarise, Project Oxford is a set of APIs that build on top of Azure ML to provide Face, Speech, Computer Vision and Language Understanding Intelligence Service (LUIS) capabilities. There was a good video from Build 2015 that I watched to get an overview of each of the APIs.

I used the tutorials to build an application that would identify a number of people from a known list in a photograph and highlight the ones that were unknown. The Face API needs to be trained with a set of photos of each person before identification can be made; this was done using the code in the samples. I created a folder for each person that I wanted trained and added different photos of each person, with and without hats and sunglasses, and with different expressions. Then each set of folders was passed to the training API. Once trained, you can use the rest of the Face API to first detect faces in a picture and then take each face that is found and see if it is known.

One useful tip I’ve found is to have Fiddler running whilst you are debugging as it is far easier to see any errors in the body of the response message than in the exceptions that are thrown. Details of the errors can be seen in the Face API documentation.

The process for training is as follows (note: the terminology is based around the SDK methods, but I’ve linked to the API pages as they give details about the errors etc); a rough code sketch follows the list:

  1. Create a Person Group
  2. Create a Face list for each person using Face Detect
  3. Create a Person (one for each person you want to identify) with the person group id and face list
  4. Train the Person Group

Note: The training does not last forever and you will need to redo it periodically. If you try to detect a person when training has expired you will get an error response saying that the person group is unknown.
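
As a rough sketch of those four steps using the Face SDK (the group id, training folder layout and subscription key are placeholders, and the signatures are from the SDK version current at the time of writing):

FaceServiceClient client = new FaceServiceClient("<subscription key>");
await client.CreatePersonGroupAsync("mygroup", "My Group");                // 1. create the person group

foreach (string personFolder in Directory.GetDirectories(@"C:\TrainingPhotos"))
{
    List<Guid> faceIds = new List<Guid>();
    foreach (string photo in Directory.GetFiles(personFolder, "*.jpg"))
    {
        using (Stream stream = File.OpenRead(photo))
        {
            Face[] faces = await client.DetectAsync(stream);               // 2. build the face list
            if (faces.Length > 0)
            {
                faceIds.Add(faces[0].FaceId);
            }
        }
    }

    // 3. create a person named after the folder, with their face list
    await client.CreatePersonAsync("mygroup", faceIds.ToArray(), Path.GetFileName(personFolder));
}

await client.TrainPersonGroupAsync("mygroup");                             // 4. train the person group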

To identify each individual in a photograph (a code sketch follows the list):

  1. Stream the photograph into Detect. This will return a list of faces with face ids
  2. Iterate around each Face and call Identify 
  3. Use the Identify Results to extract the names by calling Get Person.
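
Using the same client, the identification steps look something like this (again, the person group id is a placeholder):

Dictionary<Guid, string> names = new Dictionary<Guid, string>();
Face[] faces;

using (Stream photo = File.OpenRead("group-photo.jpg"))
{
    faces = await client.DetectAsync(photo);                               // 1. detect the faces
}

Guid[] faceIds = faces.Select(f => f.FaceId).ToArray();
IdentifyResult[] results = await client.IdentifyAsync("mygroup", faceIds); // 2. identify them

foreach (IdentifyResult result in results)
{
    if (result.Candidates.Length > 0)
    {
        // 3. resolve each face id to a person's name
        Person person = await client.GetPersonAsync("mygroup", result.Candidates[0].PersonId);
        names[result.FaceId] = person.Name;
    }
}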

This is where I got to with the previous post, but it wasn’t very visual, and as I was working with photographs I thought it would be useful to use the data returned to draw a box around each face that was identified and add the name of the person underneath. This was also useful for seeing which people were identified incorrectly. On the Project Oxford web site there was the following image:

[Image from the Project Oxford site showing detected faces outlined and labelled]

I wanted to emulate this and also take it one step further. The data returned from the face detection API provides details about gender, age, the area in the picture where the face was found (the face rectangle), face landmarks, and head pose. What the detection API does not do is tie the name of the person to the face. We already have this information, as it was returned from the Identify API and Get Person; the attribute that links them is the face id. Using the results of the Identify API I called Get Person for each face identified to return the person’s name and stored this in a Dictionary along with the face id. This then allowed me to load the original photograph into memory, draw the rectangle for each face and add the text below it, using the face id to extract the rectangle and match the name from the Dictionary. The result could then be scaled and shown in the app.
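
The drawing is along these lines (a sketch using System.Drawing from a desktop app; faces and names are the Face array and Dictionary built above, and the file names are placeholders):

using (Bitmap bitmap = new Bitmap("group-photo.jpg"))
using (Graphics graphics = Graphics.FromImage(bitmap))
using (Pen pen = new Pen(Color.Yellow, 3))
using (Font font = new Font("Segoe UI", 16))
{
    foreach (Face face in faces)
    {
        // box around the face
        FaceRectangle r = face.FaceRectangle;
        graphics.DrawRectangle(pen, r.Left, r.Top, r.Width, r.Height);

        // the person's name just below the rectangle
        string name;
        if (names.TryGetValue(face.FaceId, out name))
        {
            graphics.DrawString(name, font, Brushes.Yellow, r.Left, r.Top + r.Height + 4);
        }
    }
    bitmap.Save("group-photo-annotated.jpg");
}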

Getting Started with Raspberry Pi 2 and Windows 10 IoT Core

I got my Raspberry Pi 2 this week and promptly downloaded Windows 10 IoT Core for it.

Scott Hanselman's blog post covers most of what you need to get started

http://www.hanselman.com/blog/SettingUpWindows10ForIoTOnYourRaspberryPi2.aspx

I've summarised the bits that I either didn't read properly or had to go searching for :-)

Download the Windows 10 IoT core and follow the instructions here: http://ms-iot.github.io/content/win10/SetupRPI.htm

In the zip file that is downloaded there is also an MSI file. Install this on your dev machine and you will get an IoT Core Watcher application that shows all the IoT Core devices on your network, along with the details you need to remote debug them. If you right click on a device you can copy its IP address. This was really useful because the only display I could connect my Pi to was my TV (mainly due to having the wrong cables, or no HDMI port on my monitors). Although it was quite impressive to see such a small device on a big screen, it wasn't very practical, plus I kept getting kicked off as the family wanted to use it to actually watch TV! I'm going to get myself a cheap small monitor just for my Pi. The IoT Core Watcher application allowed me to check that the Pi was running and also to get its IP address.

In order to configure your device, including changing the password and setting the machine name, the following commands are useful:

http://ms-iot.github.io/content/win10/samples/PowerShell.htm

To get started developing for your device download the samples from here: https://github.com/ms-iot/samples

I started with the Blinky sample, and this can be the basis for your applications; I picked this one as it shows how to use the GPIO to control something. When it is loaded in Visual Studio 2015, the MainPage.xaml.cs file is where all the work is done. InitGPIO() sets up the pin that the LED is connected to, and there is a timer that ticks to turn the LED on and off.

Debugging the application can be done directly on the device, and this needs configuring. In order for the deployment to work you need to ensure that authentication is turned off in Visual Studio, as it won’t deploy otherwise. When setting the device in VS I could not get it to appear in the search tool, so I manually configured it with its IP address. This can be done (or the device changed) in the debug section of the project properties. Once deployed, you can set break points in the code running on the device and debug it remotely.

Now I've got that working I've dusted off my soldering iron and the rest of my electronics kit and I am off to play. More later.

Face Recognition with Azure ML and Project Oxford

I’ve wanted to use Azure Machine Learning for a while but didn’t know where to start. Microsoft have released some gallery applications for Azure ML to take away some of the complexity and make it easy for developers to use the service. One item in the gallery that will be useful is Project Oxford. Project Oxford offers a number of features and the one I am going to talk about here is the Face API.

With the Face API you can train Azure ML with pictures of a number of people and then use the matching API to see whether any of the trained people appear in an image.

This is easy to setup and there is a good tutorial here: http://www.projectoxford.ai/doc/face/How-To/identifyperson

Firstly you will need to sign up and get a subscription key http://www.projectoxford.ai/doc/general/subscription-key-mgmt

Login to the Azure portal with an Azure subscription; the link should open the Marketplace. Scroll down to find Face APIs, then click through to the purchase button and purchase. This API is currently free.

Your Face API service will now be created. Once complete, you need to extract the keys for use in your app. Click on your Face API service, then click the Manage button.


Click on Show to view your key and copy it into your application.


Download the Face SDK from https://www.projectoxford.ai/sdk, unzip it and add it to your project, then add a reference to it in your application.

Follow the code here: http://www.projectoxford.ai/doc/face/How-To/identifyperson

Be aware that when this is run you may get a Bad Request error when creating a Person Group (I used Fiddler to see the error). This seems to be due to case sensitivity: when I made the parameters lower case it worked! The sample code above is mixed case but the service seems to want all lowercase. Details of the error messages can be found here: https://dev.projectoxford.ai/docs/services/54d85c1d5eefd00dc474a0ef/operations/54f0387249c3f70a50e79b84 The body of the response contains the exact details of the error.

There are limitations on file size, so I ended up editing my photographs down to below 4MB.

Once trained, you can detect multiple people in one photograph and it will identify those that it knows.

I've trained it with a number of people, especially as my daughter was identified as her mum :-)

Now that I've added her into the training files she is no longer mistaken.

You might need to play around with the training files especially to take into account hats and glasses.

Enjoy

Introducing the Azure App Service

Last month Microsoft announced the Azure App Service (http://azure.microsoft.com/blog/2015/03/24/announcing-azure-app-service/). The App Service incorporates Web Apps (formerly Websites) and Mobile Apps, and introduces two new services: API Apps and Logic Apps.

API Apps allows you to build small RESTful services that can be combined together with Web, Mobile and/or Logic apps to build your application.

There is new tooling for Visual Studio (http://blogs.msdn.com/b/visualstudio/archive/2015/03/24/introducing-the-azure-api-apps-tools-for-visual-studio-2013.aspx) to help you build API apps, as well as providing the ability to debug your API App when it is deployed in Azure (http://azure.microsoft.com/en-us/documentation/articles/app-service-dotnet-remotely-debug-api-app/).

API Apps are documented using Swagger (http://swagger.io/) and there is a UI in the portal to allow you to run the app with sample data. To access the Swagger UI, click the API App URL in the portal and add /swagger to the end. Click on the API method you are interested in and then click the Action button (POST in my example below).

[Screenshot: Swagger UI listing the API methods]

This expands out to allow you to exercise the API.

An example API is documented here: https://azure.microsoft.com/en-us/documentation/articles/app-service-dotnet-create-api-app/

There is also a Marketplace for API Apps, which includes API connectors for Office 365, Service Bus, OneDrive, Dropbox and various others. You need to install them as API Apps before they can be used in other apps. All authentication to the underlying services is done in the API App creation process, which makes it easier to wire them together as the authentication is handled for you. Connectors can be used to trigger Logic Apps and also as Actions. Details of this, along with the list of available connectors, are here (http://azure.microsoft.com/en-us/documentation/articles/app-service-logic-use-biztalk-connectors/).

I'm going to blog in more detail about Logic Apps later, but for now here are a couple of tips for API Apps:

  1. In order to enable Swagger, and to ensure that your APIs that return data are documented correctly, there is some additional code that needs to be added. This is documented here: http://blogs.msdn.com/b/hosamshobak/archive/2015/03/31/logic-app-with-simple-api-app-with-inputs-and-outputs.aspx
  2. When you create an API App, especially if you created it from the Marketplace (e.g. Azure Storage Blob Connector, Service Bus Connector, etc.), you are asked for configuration at the time of creation. Once it is created, it is not obvious where to find that configuration. In the new Azure portal, browse to API Apps and click on the one you want to reconfigure. In the Essentials panel that appears, click on the API app host link. Click the Settings icon followed by Application Settings. Scroll down and any settings for the API App will be visible and can be changed. This is useful if you need to remember which Service Bus topic and subscription are configured, for example.