Steve Spencer's Blog

Blogging on Azure Stuff

Generating your IoT Hub Shared Access Signature for your ESP 8266 using Azure Functions

In my last two posts I showed how you can connect your ESP 8266 to the IoT hub to receive messages from the hub and also to send messages. One of the issues I had was generating the Shared Access Signature (SAS) which is required to connect to the IoT hub. I was unable to generate this on the device so I decided to use Azure Functions. The code required is straightforward and can be found here: https://docs.microsoft.com/en-us/azure/iot-hub/iot-hub-devguide-security#security-tokens

To create an Azure Function, go to the Azure management portal, click the menu icon in the top left and select “Create a Resource”

image

Search for “Function”

image

and select “Function App” and click Create

image

Complete the form

image

And click Review and Create to accept the defaults, or click Next and work through the wizard if you want to change any of the default values.

image

Click Create to kick off the deployment of your new Azure Function. Once the deployment is complete, navigate to the Function by clicking “Go To Resource”. You now need to create your function.

Click the + sign next to “Functions”. I used the in-portal editor as it was the easiest option at the time, since I already had most of the code copied from the page mentioned above.

image

Click In-Portal, then Continue, choose the Webhook + API template and click Create

image

Your function is now ready for editing. It will have some default code in there to give you an idea of how to start.

image


We’re going to use the previous SAS code in here and modify it to accept a JSON payload with the parameters needed to create the SAS.

The JSON we’ll use is as follows:

{
     "resourceUri":"[Your IOT Hub Name].azure-devices.net/devices/[Your DeviceId]",
     "expiryInSeconds":86400,
     "key":"[SAS Key from IoT hub]"
}

You can get your SAS key from the IoT hub in the Azure Portal in the devices section. Click on the device

image

Then copy the Primary or Secondary key.

Back to the function. In the editor, paste the following code:

C# function

#r "Newtonsoft.Json"

using System;

using System.Net;

using Microsoft.AspNetCore.Mvc;

using Microsoft.Extensions.Primitives;

using Newtonsoft.Json;

using System.Globalization;

using System.Net.Http;

using System.Security.Cryptography;

using System.Text;

public static async Task<IActionResult> Run(HttpRequest req, ILogger log)

{

     log.LogInformation("C# HTTP trigger function processed a request.");

     string token = "";

     try

     {

          string requestBody = await new StreamReader(req.Body).ReadToEndAsync();

          dynamic data = JsonConvert.DeserializeObject(requestBody);

          int expiryInSeconds = (int)data?.expiryInSeconds;

          string resourceUri = data?.resourceUri;

          string key = data?.key;

          string policyName = data?.policyName;

          TimeSpan fromEpochStart = DateTime.UtcNow - new DateTime(1970, 1, 1);

          string expiry = Convert.ToString((int)fromEpochStart.TotalSeconds + expiryInSeconds);

          string stringToSign = WebUtility.UrlEncode(resourceUri) + "\n" + expiry;

          HMACSHA256 hmac = new HMACSHA256(Convert.FromBase64String(key));

          string signature = Convert.ToBase64String(hmac.ComputeHash(Encoding.UTF8.GetBytes(stringToSign)));

          token = String.Format(CultureInfo.InvariantCulture, "SharedAccessSignature sr={0}&sig={1}&se={2}", WebUtility.UrlEncode(resourceUri), WebUtility.UrlEncode(signature), expiry);

          if (!String.IsNullOrEmpty(policyName))

          {

               token += "&skn=" + policyName;

          }

     }

     catch(Exception ex)

     {

          return (ActionResult)new OkObjectResult($"{ex.Message}");

     }

     return (ActionResult)new OkObjectResult($"{token}");

}

Click Save and Run and make sure that there are no compilation errors. To use the function you need to post the JSON payload to the following address:

https://[your Function Name].azurewebsites.net/api/HttpTrigger1?code=[your function access key]

To retrieve your function access key, click Manage and copy your key from the Function Keys section

image
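If you want to test the function from your PC first, you can post that payload with a short Python script; this is just a sketch using the requests library, with the URL, function key and device details as placeholders:

import requests

payload = {
    "resourceUri": "[Your IoT Hub Name].azure-devices.net/devices/[Your DeviceId]",
    "expiryInSeconds": 86400,
    "key": "[SAS key from IoT hub]"
}

# post the JSON payload to the function, passing the function access key as the code parameter
response = requests.post(
    "https://[your Function Name].azurewebsites.net/api/HttpTrigger1",
    params={"code": "[your function access key]"},
    json=payload)

print(response.text)  # should print a token starting with "SharedAccessSignature sr="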

We’re now ready to use this in MicroPython on your ESP 8266. I created a function to retrieve the SAS:

def getsas(hubname, deviceid, key):
    import urequests
    import ujson

    dict = {}
    dict["resourceUri"] = hubname+'.azure-devices.net/devices/'+deviceid
    dict["key"] = key
    dict["expiryInSeconds"] = 86400
    payload = ujson.dumps(dict)

    response = urequests.post('https://[your function name].azurewebsites.net/api/HttpTrigger1?code=[your function access key]', data=payload)
    return response.text

In my connectMQTT() function from the first post I replaced the hard-coded SAS string with a call to the getsas function. The function returns a SAS which is valid for 24 hours, so you will need to retrieve a new SAS once 24 hours have elapsed.
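A minimal sketch of that change inside connectMQTT(), reusing the CLIENT_ID, Username and sslparams variables from the first post, with the hub name and device key as placeholders:

    Password = getsas('[your IoT hub name]', 'esp8266', '[device primary key]')
    mqtt = MQTTClient(client_id=CLIENT_ID, server='[your IoT hub name].azure-devices.net', port=8883,
                      user=Username, password=Password, keepalive=4000, ssl=True, ssl_params=sslparams)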


I can now run my ESP 8266 code without modifying it to give it a new SAS each time I want to use it. I always forgot to do that and wondered why it never worked the next time I used it. I can now both send data to and receive data from the ESP 8266 and also generate a SAS to access the IoT hub. The next step is to use the data received by the hub in an application and send action messages back to the ESP 8266 if changes are made. I look forward to letting you know how I got on with that in a future post.

Sending data from the ESP 8266 to the Azure IoT hub using MQTT and MicroPython

In my previous post I showed you how to connect your ESP 8266 to the Azure IoT hub and receive messages from the IoT hub to turn on an LED. In this post I'll show you how to send data to the IoT hub. For this I need a sensor that I will read at regular intervals and then send the data back to the IoT hub. I picked a temperature and humidity sensor I had from the kit of sensors I bought.

image

This sensor is compatible with the DHT MicroPython library. In order to connect to the IoT hub, use the same connect code that is in my previous post. The difference with sending is that you need an endpoint for MQTT to send your temperature and humidity data to. The topic to send to is as follows:

devices/<your deviceId>/messages/events/

So, using the same device id as in the last post, my send topic would be devices/esp8266/messages/events/

To send a message to the IoT hub use the publish method. This needs the topic plus the message you want to send. I concatenated the temperature and humidity and separated them with a comma for simplicity:

import dht
import machine
import time

sensor = dht.DHT11(machine.Pin(16))
mqtt = connectMQTT()
sendTopic = 'devices/<your deviceId>/messages/events/'

while True:
    sensor.measure()
    mqtt.publish(sendTopic, str(sensor.temperature())+','+str(sensor.humidity()), True)
    time.sleep(1)

The code above is all that is required to read the sensor every second and send the data to the IoT hub.

In Visual Studio Code with the Azure IoT Hub Toolkit extension installed, you can monitor the messages that are sent to your IoT hub. In the devices view, right click on the device that has sent the data and select “Start Monitoring Built-in Event Endpoint”

image

This then displays the messages that are received by your IoT hub in the output window

image

You can see in the body of the received message the temperature and humidity values that were sent.

I still need to sort out generating the Shared Access Signature and also programmatically accessing the data I send to the IoT hub. I hope to have blog posts for these soon.

Connecting the ESP 8266 to Azure IoT Hub using MQTT and MicroPython

Recently I was introduced to the ESP 8266 processor, which is a low cost IoT device with built in Wi-Fi, costing around £3 - £4 for a development board. The thing that interested me (apart from the price) was that the device is Arduino compatible and will also run MicroPython. The version I purchased from Amazon was the NodeMcu variant with built in power and serial port via a micro USB port, so it makes an ideal board to start with as there are no additional components required.

clip_image001

This board however did not have MicroPython installed and that required a firmware change. The instructions were fairly straightforward and I followed this tutorial.

After installing MicroPython you can connect to the device using a terminal emulator via the USB serial port. Check in Device Manager to find the COM port number; the default baud rate is 115200. I used the Arduino Serial Monitor tool. In the terminal emulator you can press Enter and you should get back the Python REPL prompt. If not, then you have the COM port or baud rate wrong.

image

You can write your Python directly in here, but it's easier to write the Python on your PC and then run it on the device. For this I use ampy.

In a Command Prompt, install ampy using:

pip install adafruit-ampy

This allows you to connect to your device. Close the terminal emulator to free up the COM port, then type the following to list the files on your device:

ampy --port COM4 --baud 115200 ls

The MicroPython Quick Ref will summarise how to access the GPIO ports etc., but in order to connect to the IoT hub you will need to configure the Wi-Fi on the device. This can be done using the network module.

So create a new text file on your PC and write the code to connect to your Wi-Fi. To test this you can use ampy to run the Python on the device:

ampy --port COM4 --baud 115200 run networking.py

It's a good idea to use print statements to help debug, as once the run has completed the output will be reflected back in your Command Prompt.
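A minimal networking.py to get you started might look something like this sketch; the SSID and password are placeholders for your own network:

import network
import time

def connectWiFi(ssid, password):
    wlan = network.WLAN(network.STA_IF)  # station mode
    wlan.active(True)
    if not wlan.isconnected():
        print('connecting to network...')
        wlan.connect(ssid, password)
        while not wlan.isconnected():
            time.sleep(0.5)
    print('network config:', wlan.ifconfig())

connectWiFi('[your SSID]', '[your Wi-Fi password]')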

Now you are connected to Wi-Fi, we can start to look at connecting to the IoT hub. I am assuming that you already have your IoT hub set up. We now need to configure your new device. Navigate to the IoT hub in your Azure Portal. In Explorers click IoT Devices, then New.

image

Enter your device id, the name your device will be known as. All your devices need a name that is unique to your IoT hub. Then click Save. This will auto-generate the keys needed to create the Shared Access Signature used to access the IoT hub later.

image

Once created you may need to click Refresh in the devices list to see your new device. Click the device and copy the primary key; you will need this later to generate the Shared Access Signature used in the connection string. In order to generate a new Shared Access Token you can use Visual Studio Code with the Azure IoT Hub Toolkit extension installed. This puts a list of devices and endpoints in the explorer view and allows you to create a new Shared Access Token. Find your device in the Devices list, right-click and select Generate SAS Token For Device.

image

You will be prompted to enter the number of hours the token is valid for and the new SAS token will appear in the output window:

image

SharedAccessSignature sr=[your iothub name].azure-devices.net%2Fdevices%2Fesp8266&sig=bSpX6UMM5hdUKXHfTagZF7cNKDwKnp7I3Oi9LWTZpXI%3D&se=1574590568

The Shared Access Signature is made up of the full address of your device, a timestamp indicating how long the signature is valid for, and the whole thing is signed. You can take this and use it to test your access to the IoT hub, so make sure you make the time long enough to allow you to test. The ESP8266 doesn't have a clock that can be used to generate the correct time so you will need to create the SAS off board. I’m going to use an Azure Function with the code here to generate it.

Back to Python now. In order to connect to the IoT hub you will need to use the MQTT protocol. MicroPython uses umqtt.simple.

There are a few things required before you can connect.

Firstly the Shared Access Signature that you created above.

Next you will need to get the DigiCert Baltimore Root certificate that IoT Hub uses for SSL. This can be found here. Copy the text from -----BEGIN CERTIFICATE----- to -----END CERTIFICATE-----, including both the Begin and End lines. Remove the quotes and replace the \r\n with real new lines in your text editor, then save the file as something like baltimore.cer.
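If you would rather script that clean-up than do it by hand, a quick sketch like this will do it; it assumes you have pasted the copied text into a file called cert_raw.txt:

# strip the quotes and turn the literal \r\n escapes into real new lines
raw = open('cert_raw.txt').read()
cleaned = raw.replace('"', '').replace('\\r\\n', '\n')
open('baltimore.cer', 'w').write(cleaned)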

Next you will need a ClientId. For IoT hub the ClientId is the name of your device in IoT Hub. In this example it is esp8266.

Next you will need a Username. For IoT hub, this is the full CName of your IoT Hub with your client id and a version, e.g. [your iothub name].azure-devices.net/esp8266/?api-version=2018-06-30

The following code should allow you to connect to the IoT Hub:

def connectMQTT():
    from umqtt.simple import MQTTClient

    CERT_PATH = "baltimore.cer"
    print('getting cert')
    with open(CERT_PATH, 'r') as f:
        cert = f.read()
    print('got cert')
    sslparams = {'cert': cert}

    CLIENT_ID = 'esp8266'
    Username = 'yourIotHub.azure-devices.net/esp8266/?api-version=2018-06-30'
    Password = 'SharedAccessSignature sr=yourIotHub.azure-devices.net%2Fdevices%2Fesp8266&sig=bSpX6UMM5hdUKXHfTagZF7cNKDwKnp7I3Oi9LWTZpXI%3D&se=1574590568'

    mqtt = MQTTClient(client_id=CLIENT_ID, server='yourIotHub.azure-devices.net', port=8883,
                      user=Username, password=Password, keepalive=4000, ssl=True, ssl_params=sslparams)

    mqtt.set_callback(lightLed)
    mqtt.connect(False)

    mqtt.subscribe('devices/esp8266/messages/devicebound/#')
    flashled(4, 0.1, blueled)

    return mqtt

set_callback requires a function which will be called when there is a device message sent from the IoT Hub. Mine just turns an LED on or off:

def lightLed(topic, msg):
    if msg == b'on':
        statusled.on()
    else:
        statusled.off()

connect(False) means that the topic this device subscribes to will persist after the device disconnects.

I’ve also configured the device to connect to its bound topics so that any message sent to the device will call the callback function.

Now we need to have a process loop so that we can receive the messages. The ESP8266 does not seem to run async code so we need to call the wait_msg function to get any message back from the IoT hub:

mqtt = connectMQTT()
print('connected...')
while True:
    mqtt.wait_msg()

Save your Python as networking.py (and make sure that all the code you wrote initially to connect to Wi-Fi is included), then run ampy again:

ampy --port COM4 --baud 115200 run networking.py

Your device should run now. I’ve used LED flashes to show me progress for connecting to Wi-Fi, then connecting to IoT Hub, and through to receiving a message. There is a blue LED on the board which I’ve been using, as well as a standard LED which is turned on/off based upon the device message received from the IoT Hub. The blue LED is GPIO 2.
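For reference, blueled, statusled and flashled in the code above are not from a library; a sketch of how they could be defined is below. The pin for statusled is an assumption, so change it to match your wiring, and note that the on-board blue LED on GPIO 2 is active low on most boards, so on() and off() appear reversed:

import machine
import time

blueled = machine.Pin(2, machine.Pin.OUT)    # on-board blue LED (GPIO 2)
statusled = machine.Pin(5, machine.Pin.OUT)  # external LED - assumed pin, change to suit your wiring

def flashled(count, delay, led):
    # flash the given LED 'count' times with 'delay' seconds between changes
    for _ in range(count):
        led.on()
        time.sleep(delay)
        led.off()
        time.sleep(delay)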

To send a message from the IoT hub to your device you can do this from the Azure Portal in the devices view. Click on the device, then click Message To Device.

image

Enter the Message Body (on or off) and click Send Message

image

Alternatively you can do this in Visual Studio Code by right-clicking the device, selecting Send C2D Message To Device and entering the message in the box that pops up

image

In my example the LED lights when I enter on and turns off when I enter off. ampy is likely to time out during this process, but that’s OK as the board will still be running. As we’ve put the message retrieval inside a loop, the board will continue to run. To stop it running you will need to reset the board by pressing the reset button.

My next step is to sort out automatically generating the Shared Access Signature and then I’ll look at sending data to the IoT Hub.

Migrating Azure Scheduled Web Jobs to Logic Apps

If you have Scheduler jobs running in Azure you may have received an email recently stating that the Scheduler is being retired and that you need to move your schedules off it by 31st December 2019 at the latest. You also will not be able to view your schedules via the portal after 31st October.

This is all documented in the following post:  https://azure.microsoft.com/en-us/updates/extending-retirement-date-of-scheduler/

The alternative to the Scheduler is Logic Apps, and there is a link on that page showing you how to migrate.

I’m currently using the Scheduler to run my webjobs on various schedules, from daily and weekly to monthly. Webjobs are triggered by using an HTTP POST request and I showed how to set this up using the Scheduler in a previous post:

Creating a Scheduled Web Job in Azure

I will build on that post and show how you can achieve the same thing using Logic Apps. You will need the following information in order to configure the Logic App: Webhook URL, Username, Password

You can find these in the app service that is running your webjob. Click “Webjobs”, select the job you are interested in, then click “Properties”. This will display the properties panel where you can retrieve all these values.

image

Now you need to create a Logic App. In the Azure Portal dashboard screen click “Create a Resource” and enter Logic App in the search box, then click “Create”

image

Complete the form and hit Create

image

Once the resource has been created you can then start to build your schedule. Opening the Logic App for the first time should take you to the Logic App Designer. Logic Apps require a trigger to start them running and there are lots of different triggers, but the one we are interested in is the Recurrence trigger.

image

Click “Recurrence” and this will be added to the Logic App designer surface for you to configure

I want to set my schedule to run at 3am every day so I select frequency to be Day and interval to be 1, then click “Add New Parameter”

image

Select “At these hours” & “At these minutes”. Two edit boxes appear and you can add 3 in the hours box and 0 in the minutes box. You have now set up the schedule. We now need to configure the Logic App to trigger the webjob. As discussed above we can use a webhook.

All we have in the Logic App is a trigger that starts the Logic App at 3am UTC, we now need to add an Action step that starts the web job running.

Below the Recurrence box there is a box called “+ New Step”, click this and then search for “HTTP”

image

Select the top HTTP option

image

Select POST as the method and Basic as the Authentication, then enter your URL, username and password.
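For reference, the HTTP action is doing the equivalent of the following POST, shown here as a Python sketch with the three values from the webjob properties panel as placeholders:

import requests

# trigger the webjob in the same way the Logic App HTTP action does: a POST with basic authentication
response = requests.post("[Webhook URL from the webjob properties]",
                         auth=("[Username]", "[Password]"))
print(response.status_code)  # a 2xx response means the webjob was triggered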

The web job is now configured and the Logic App can be saved by clicking the Save button. If you want to rename each of the steps so you can easily see what you have configured then click “…” and select “Rename”

image

You can test the Logic App is configured correctly by triggering it to run. This will ignore the schedule and run the HTTP action immediately

image

If the request was successful you should see ticks appear on the two actions; if there are errors you will see a red cross and be able to see the error message.

image

If the web job successfully ran then open the web job portal via the app services section to see if your web job has started.

If you want to trigger a number of different web jobs on the same schedule then you can add more HTTP actions below the one you have just set up. If you want to delay running a job for a short while you can add a Delay task.

If you want to run on a weekly or monthly schedule then you will need to create a new Logic App with a Recurrence configured to the schedule you want and then add the HTTP actions as required.

The scheduler trigger on the Logic App will be enabled as soon as you click Save. To stop it triggering you can Disable the Logic App on the Overview screen once you exit the Designer

image

Hopefully this has given you an insight into how to get started with Logic Apps. Take a look at the different triggers and actions and see that you can do a lot more than just scheduling web jobs.

Adding Application Insights Logging to your code

This is the fourth of a series about Application Insights and Log Analytics. I’ve shown you how to add existing logs, how to use the Log Analytics query language to view your logs, and how to enhance your query to drill down and get to the logs you are interested in. This post is about how you can add logs from your code and provide the information that allows you to refine your queries and helps you to diagnose your faults more easily.

If you don’t already have Application Insights then you can create a new instance in the Azure portal (https://portal.azure.com/)

Get your Application Insights key from the Azure portal. Click on your Application Insights instance and navigate to the Overview section, then copy your instrumentation key. You will need this in your code.

image

In your project, add Application Insights via NuGet:

Install-Package Microsoft.ApplicationInsights -Version 2.10.0

In your code you need to assign the key to Application Insights as follows:

TelemetryConfiguration configuration = TelemetryConfiguration.CreateDefault();
configuration.InstrumentationKey = "put your key here";

To log details using application insights then you need a telemetry client.

TelemetryClient telemetry = new TelemetryClient(configuration);

The telemetry client has a large number of features that I am not going to talk about here, as I am just interested in logging today. There are three methods of interest: TrackEvent, TrackException and TrackTrace.

I use TrackEvent to log things like the start and end of methods, or if something specific occurs that I want to log; TrackException is for logging exception details and TrackTrace is for everything else.

telemetry.TrackEvent("Some Important Work Started");
try {
     telemetry.TrackTrace("I'm logging out the details of the work that is being done", SeverityLevel.Information); } catch(Exception ex) {
     telemetry.TrackException(ex); } telemetry.TrackEvent("Some Important Work Completed");

You now have the basics for logging. This will be useful to some extent, but it will be difficult to follow the traces when you have a busy system with lots of concurrent calls to the same methods. To assist you to filter your logs it would be useful to provide some identifying information that you can add to your logs to allow you to track and trace calls through your system. You could add these directly to your logs, but this then makes your logs bloated and difficult to read. Application Insights provides a mechanism to pass properties along with the logs which will appear in the Log Analytics data that is returned from your query. Along with each log you can pass a dictionary of properties. I add to the set of properties as the code progresses to provide identifying information to assist with filtering the logs. I generally add in each new identifier as it is created. I can then use these in my queries to track the calls through my system and remove the ones I am not interested in. Diagnosing faults then becomes a lot easier. To make this work you need to be consistent with the naming of the properties so that you always use the same name for the same property in different parts of the system. Also try and be consistent about when you use TrackEvent and TrackTrace. You can set levels for your traces based upon the severity level (Verbose, Information, Warning, Error, Critical).

TelemetryConfiguration.Active.InstrumentationKey = Key;
TelemetryClient telemetry = new TelemetryClient(); 
var logProperties = new Dictionary<string, string>();

logProperties.Add("CustomerID", "the customer id pass through from elsewhere");

telemetry.TrackEvent("Some Important Work Started", logProperties);
try
{
      var orderId = GenerateOrder();
      logProperties.Add("OrderID", orderId.ToString());
      telemetry.TrackTrace("I just created an order", logProperties);

      var invoiceId = GenerateInvoice();
      logProperties.Add("InvoiceID", invoiceId.ToString());
      telemetry.TrackTrace("I've just created an invoice", logProperties);

      SendInvoice(invoiceId);
}
catch (Exception ex)
{
      telemetry.TrackException(ex, logProperties);
}
telemetry.TrackEvent("Some Important Work Completed", logProperties);
telemetry.Flush();

Flush needs to be called at the end to ensure that the data is sent to Log Analytics. In the code above you can see that I’ve added a CustomerID, OrderID and InvoiceID to the log properties and passed the log properties to each of the telemetry calls. Each of the logs will contain the properties that were set at the time of logging. I generally wrap all this code so that I do not have to pass the log properties into each call. I can add to the log properties whenever I have new properties and then each of the telemetry calls will include them.

When we look at the logs via Log Analytics we can see the additional properties on the logs and then use them in our queries.

image

image

The log properties appear in customDimensions and you can see how the invoice log has the invoice id as well as the customer id and order id. The order log only has the customer id and order id.

You can add the custom dimensions to your queries as follows:

union traces, customEvents, exceptions

| order by timestamp asc

| where customDimensions.CustomerID == "e56e4baa-9e1d-4c3c-b498-365bf2807a5f"

You can also see in the logs the severity level which allows you to filter your logs to a sensible level. You need to plan your logs carefully and set an appropriate level to stop you flooding your logs with unnecessary data until you need it.

I’ve now shown you how to add logs to your application. You can find out more about the other methods available on the telemetry API here

Refining your Azure Log Analytics Queries

This is part 2 of a series of posts about Log Analytics in Azure. In Part 1 I discussed how to access Log Analytics and use it to query your exceptions. I also showed you how to display your output as a graph.

In this post we will look at some other tables, how we can view them and how we can refine the details we want to view.

I’ve been using Application Insights in my code to add my application logs and these are logged to a number of different tables depending upon which API call is used.

If you look at the tables we have with Application Insights, you can see that as well as exceptions there are a number of other tables

image

The ones I am interested in are traces, custom events and exceptions. Traces are used for general logging, custom events are used to indicate that something has happened, for example, the start and end of some activity. Exceptions are used when something has gone wrong. You can query each of these tables separately.

image

image

image

What you can see with these three logs is that we can easily retrieve the data but it would be useful if it could be done in one query. For that you need to use the “union” keyword as follows:

union traces, exceptions, customEvents

| where customDimensions.Source <> "ApplicationInsightsProfiler"

Note, I need to add in the where clause as the Application Insights Profiler is enabled on my site and I am not currently interested in those logs.

If you run this query you will get a snapshot of the data in each of the tables, which is not always that useful.

image

What would be useful is if I could order the logs by the timestamp.

To do this add another pipe and use the “order by” keywords and pick the “timestamp” column. I’ve added “asc” as I want to show my oldest log first. You can reverse it by using “desc” instead.

union traces, exceptions, customEvents

| where customDimensions.Source <> "ApplicationInsightsProfiler"

| order by timestamp asc

image

Now my logs are in a sensible order and I can see what is happening. The issue I have now is that I’ve got too much information on the screen to be able to view everything I need, plus the different tables have information in different columns. You can see with the events that the details do not appear in the message column, making it difficult to view the event details. In order to control what I see I can use projection. This is achieved using the “project” keyword. To make best use of “project” you need to identify the columns of interest in each of the tables we are using. Projection also allows you to order the columns. The order of the columns after “project” is the order they will appear in the results.

union traces, exceptions, customEvents

| where customDimensions.Source <> "ApplicationInsightsProfiler"

| project timestamp, itemType, name, message, problemId, customDimensions

| order by timestamp asc

“timestamp” is the date/time of the log

“itemType” will show trace, customEvent or exception

“Name” contains the name of the custom event

“message” contains the details of the trace

“problemId” shows the top-level details of the exception

“customDimensions” shows custom properties that have been attached to the log

This results in the following log output:

image

You can see now that the logs are in a more usable format and I can drill down a little by clicking on the > next to the log. By using projection, however, you will limit the columns that are returned. If you need to drill down to get information you have filtered out, then you will need to run a different query. One example of this is when you get an exception: this projection will only give you the problemId, so you will need to run a query on the exceptions table to bring back all the exception details.

In my next post I will show you how to use custom logging in your code with Application Insights.

Querying Exception Logs in Azure Log Analytics

In a previous post I talked about how you can add logs to Azure Log Analytics. This post is about how you can make use of that logging. The key to Log Analytics (once your log data is in) is its query language.

You can navigate to Log Analytics from the Azure Portal. I’m using Application Insights for the examples; you can get to Log Analytics from the menu bar or by clicking search in the left hand panel and then Log Analytics

image

image

Once in Log Analytics there will be an area for queries

image

An area for your data sources

image

and a query explorer where you can find queries that you or your team have saved previously.

The data sources section is a useful place to start because double-clicking a data source will add it to the query. So start by double-clicking “exceptions”, then press the Run button. This will query the exception logs and return all the exceptions that happened in the last 24 hours (as indicated by the time range next to the Run button). You may want to add a time period to your query so that you can use it in a dashboard, for example, and there are some date functions to help with this. If you are unsure about how to add query parameters then you can go to the data that is returned and click the plus button next to the item you want to add to your query, as below:

image

This will make the query look as follows:

exceptions

| where timestamp == todatetime('2019-06-26T18:21:49.1473946Z')

This is useful as you can add >= to the query to find all logs that happened after this time, but if you want to get all logs that happened over a specific period you can use the date/time functions. Typing a space after the greater than sign shows a list of the available functions.

image

I use the “ago” function which also has help tips once you select it

image

As you can see there are examples for minutes, hours and days.

Queries are also built up using the pipe symbol so you can easily append.

If you want to summarise your data so you can get a count of each of the exceptions, then you add a new pipe using the summarize keyword and the count function. You need to tell the query which property you wish to count. If you look at the “filter on” screenshot above you will see that there is a type property in the log record. If we summarize that property with count then the query will return all the exceptions in the timeframe and how often they have occurred.

image

The query language also has a useful “render” keyword that allows you to return the query results in a variety of graphs.

image

So the final query looks like this

exceptions

| where timestamp > ago(70d)

| summarize count() by type

| render piechart

image

Clicking the save button allows you to save your queries so that you can use them later or share them with other users who share the same Log Analytics instance.

image

In my next post I will show how you can use some of the other log tables, ordering and selecting the columns you wish to display

Using Azure DevOps to Restart a Web App

Recently I had an issue with one of the web sites I was supporting: it seemed to be falling over each night and it was difficult to work out what was wrong. Whilst I was working it out I needed a mechanism to restart the website overnight so that I could take the time to figure out exactly what was wrong. There were a number of ways to achieve this, but the simplest one for me was to use an Azure DevOps release pipeline triggered on a schedule. The Azure DevOps “Azure App Service Manage” task allows me to achieve this.

image

To get started create a new release pipeline with an Empty job

image

Click on the “1 job, 0 task” link and then click on the “+” in Agent Job.

image

Enter “App service” in the search box and select “Azure App Service Manage” from the list of tasks that appear and click “Add”.

image

The task will default to Swap Slots but you can change this. Select your Azure Subscription and click “Authorize” if you haven’t already authorised your Azure Subscription.

image

Clicking “Authorize” will take you through the sign in process where you will need to enter your username and password for the Azure subscription that contains your app service. Once authorised select the Action you want to perform. Currently the list contains:

image

Select “Restart App Service” then pick your app service from the “App Service name” list. Also change the display name, as it defaults to “Swap Slots:”.

image

Your pipeline is now configured to restart your app service.

image

You now need to trigger this. Click on the pipeline tab

image

Then click on the Schedule button

image

Enable the trigger and select your desired schedule, edit the release pipeline's title and click Save. Your web site will now restart based upon the schedule you picked.

You can restart multiple web apps with a single release pipeline

image

You are able to chain each website or do it in parallel by changing the pre-deployment conditions:

image

To chain them select “after stage” for each stage, and to run them in parallel select “after release”.

When the schedule is run a new release is created and the web apps will restart and you will be able to see the status of each attempt in the same way you do with your standard releases.

Adding Security Policies To Azure API Management

The Azure API Management service allows you to publish your APIs both internally and externally and to control who and what can access them. Out of the box you will get a standard API key for each of your users who sign up to the API, but this is often not enough to meet the security requirements for you or your partners. API Management allows you to add a more fine-grained security model to each of your APIs and this can be done using the policy feature. Policies are used for more than just security and there are numerous policies that allow you to change the behaviour of your API through configuration. Documentation for the types of policies can be found here. Sample policy examples can be found here.

Two policies that I am going to discuss here will allow you to restrict access to your API through IP Whitelisting and through validating JWT claims. I will also discuss how you can put different controls onto your API for different partners.

Policies can be set at different levels and the documentation will highlight the areas where they are applicable. For security policies I am going to talk about protecting at the API level and at the product level. Adding a policy at the API level will be applicable to all subscribers to the API, whereas adding the policy at the product level will be applicable to all subscribers to the product. A product can contain multiple APIs and an API can be in multiple products. So we can add in protection at either level depending upon what your exact requirements are. The policies are the same but their impact will depend upon where they are applied.

 

Let's start with API level policies. To add or edit policies you need to navigate to your API in the Azure Management portal. Click on the API option, then click on the API you wish to protect.

image

The easiest way to add a policy is to click the Add Policy link in the inbound section.

image

Click Filter IP Addresses and Add IP Filter

image

This form allows you to add ranges or single IP addresses to either allow or deny. When you have finished click Save.

You will now see the policy in the policy editor view. If you are happier to add this in manually or want to copy this and version control the config then you can access this via the Code Editor menu on the Inbound processing policies box

image

image

Applying this policy at the API level means that only IP addresses within this range can access this specific API, which can be useful to ensure that this specific API is blocked from being accessed regardless of which product has been subscribed to. It's also useful if you want to block access from specific IP addresses. However, you may have different partners who have different security arrangements or that you want to give different permissions to. To allow for this you will need to add the policy at the product level.

To edit the policy at the product level, click Products, then pick the product you want to secure.

image

In this example I have a new More Secure API that I’ve created and there’s an access control section which allows you to pick the users who have access to this API

image

So I’ve immediately blocked access to this API for guest users, and we can add user authentication to the API if we want, such as OAuth 2.0 and OpenID Connect.

However, this post is talking about adding security policies and if we want to allow only specific IP addresses to access this API we can edit the policy at the Product level. To access the policy definition click Policies

image

You’ll notice that this is just the editor view, and the easiest way is to add the policy at the API level using the wizard and copy the config to here. Products are a mechanism to allow you to group and protect APIs, which means that from a management point of view you could create a product for each of your partners, making it easier to maintain the security details for each, and easier to disable access and remove only the security policies that apply to a specific partner. Managing this at the API level means that you will end up with a large number of security policies relating to a large number of partners, making it difficult to manage. Security policies at the product level are more important when you want to do some specific protection like checking claims in a signed JWT. The product level policy allows you to have different signing keys for each product, meaning that you can have different signing keys for each of your partners (assuming one product per partner).

image

This policy requires a JWT signed with the key eW91ci0yNTYtYml0LXNlY3JldA== that also has the claim admin=true. If there is an error then 401 is returned with the message “You have failed the security checks please contact your administrator”.

To summarise, we can add policies at both API and product level. Product level policies allow us to create a new product for each of our partners and then add specific security policies to the product, tailored to that partner's needs. The product level policy makes it easier to manage the security policies at a partner level, but we can also add global security policies at the API level, such as blocking access from certain IP address ranges. Policies can do a lot more than security, so check out the links at the start of the post for further information.

My Video Library

Introduction to Azure Machine Learning Studio – Video walk through recorded March 2019

Introduction to Azure Log Analytics – Recorded at Dot Net Sheffield November 2018

Azure Machine Learning for Developers – Recorded at Dot Net Sheffield November 2018

Introduction to Microsoft Flow – Video walk through recorded September 2018

Easy Integration with Flow and Logic Apps – Recorded at Dot Net Sheffield August 2018

Add Existing Logs to Azure Log Analytics – Video walk through recorded June 2018

Visual Studio Team Service Load Testing – Video walk through recorded April 2018

Introduction to Azure Service Fabric – Recorded at Dot Net Sheffield March 2017

Building Scalable and Resilient Web Apps with Microsoft Azure – Recorded at Dot Net Sheffield March 2017

Service Fabric for the Microservices Win, baby! – Recorded at Microsoft’s UK TechDays Online Sept 2016