Steve Spencer's Blog

Blogging on Azure Stuff

Exporting Logs from Application Insights using Continuous Export

This is the fifth post in my series about Log Analytics and Application Insights. The previous post talked about adding custom logging to your code using Application Insights. Now you’ve got your logging into Application Insights you can run Log Analytics queries and build dashboards, alerts etc. Sometimes, though, you want to use this data in other systems, and it would be useful if you could export the data and use it elsewhere. This post will show you how you can regularly export the data from Application Insights into Azure Storage. Once it is in Storage it can easily be moved into other systems or used elsewhere, such as in Power BI. This can be achieved using the Continuous Export feature of Application Insights.

To enable Continuous Export, log in to the Azure management portal and navigate to Application Insights. Click on the instance you want Continuous Export enabled for, then scroll down the options on the left until you find the Configure section and click on Continuous Export.

image

To use Continuous Export you will need to configure a storage account. Click Add:

image

Click Data types to export:

image

I was only interested in the logs I emitted from my custom logging, so I selected Custom Event, Exception and Trace, then clicked OK.

image

Next pick a storage location. Make sure you use the same region as your Application Insights instance, otherwise you will be charged egress fees to move the data to a different datacentre.

image

You can now pick an existing storage account or create a new one. Upon selecting a storage account you can pick an existing container or create a new one. Once a blob container is selected, click OK.

Continuous Export is now configured. You will not see anything in the storage container until the next set of logs are sent to Application Insights.

My log analytics query shows the following logs have been generated:

image

If you look at the Continuous Export configuration page you will see that the last updated date has changed.

image

Now look in blob storage. You should see a folder named <ApplicationInsights_ServiceName>_<ApplicationInsights_InstrumentationKey>.

image

Click through and you will see a number of folders, one for each of the data types I enabled when setting up Continuous Export. Click through one of them and you will see a folder for the date, then a folder for the hour, and finally a file containing the logs for that hour.

image

You will get a row of JSON data for each row output by the log query. Note that the full set of emitted logs will be spread across these folders, one folder per data type.

e.g.

{
     "event": [
         {
             "name": "Some Important Work Completed",
             "count": 1
         }
     ],
     "internal": {
         "data": {
             "id": "a guid is here",
             "documentVersion": "1.61"
         }
     },
     "context": {
         "data": {
             "eventTime": "2020-03-22T18:12:30.2553417Z",
             "isSynthetic": false,
             "samplingRate": 100.0
         },
         "cloud": {},
         "device": {
             "type": "PC",
             "roleInstance": "yourcomputer",
             "screenResolution": {}
         },
         "session": {
             "isFirst": false
         },
         "operation": {},
         "location": {
             "clientip": "0.0.0.0",
             "continent": "Europe",
             "country": "United Kingdom",
             "province": "Nottinghamshire",
             "city": "Nottingham"
         },
         "custom": {
             "dimensions": [
                 {
                     "CustomerID": "4df16004-2f1b-48c0-87d3-c1251a5db3f6"
                 },
                 {
                     "OrderID": "5440d1cf-5d06-4b0e-bffb-fad522af4ad1"
                 },
                 {
                     "InvoiceID": "a7d5a8fb-2a2e-4697-8ab4-f7bf8b8dbe18"
                 }
             ]
         }
     }
}

As the data is now out of Application Insights you can move it wherever you need it. You will also need to manage the blob storage data, otherwise you will end up with the logs stored in two places and the storage costs will be doubled.

One example of subsequent usage is exporting the data to Event Hub. As the data is in blob storage you can use a function with a blob trigger to read the blob a row at a time and publish each row onto Event Hub:

using System.IO;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Host;

[FunctionName("ContinuousExport")]
public static async Task Run([BlobTrigger("logs/{name}", Connection = "ContinuousExportBlobSetting")] Stream myBlob, string name,
     [EventHub("logging", Connection = "EventHubConnectionAppSetting")] IAsyncCollector<string> outputEvents, TraceWriter log)
{
     log.Info($"C# Blob trigger function processed blob\n Name: {name} \n Size: {myBlob.Length} Bytes");

     // Each line of the exported blob is a single JSON log entry,
     // so forward the blob to the Event Hub one line at a time.
     using (var textReader = new StreamReader(myBlob))
     {
          while (!textReader.EndOfStream)
          {
               string line = await textReader.ReadLineAsync();
               log.Info(line);
               await outputEvents.AddAsync(line);
          }
     }
}

Note: this is an example, so it will need additional code to make sure that you don’t exceed the Event Hub maximum message size.
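As a rough illustration of what that guard might look like (a sketch only: the EventSizeGuard helper and the 250 KB budget are my own assumptions, not values from the Event Hubs documentation, so check the limit for your tier), you could check the UTF-8 size of each line before sending it:

using System.Text;

public static class EventSizeGuard
{
     // Assumed per-event budget in bytes; confirm against the limit for your Event Hub tier.
     private const int MaxEventBytes = 250 * 1024;

     // Returns true when a line of exported JSON is small enough to publish as a single event.
     public static bool FitsInSingleEvent(string line)
     {
          return Encoding.UTF8.GetByteCount(line) <= MaxEventBytes;
     }
}

Inside the while loop you could then skip, truncate or split any line for which EventSizeGuard.FitsInSingleEvent returns false.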

So with Continuous Export you can extract your log data from Application Insights and move it to other systems for processing.

Adding Application Insights Logging to your code

This is the fourth of a series about Application Insights and Log Analytics. I’ve shown you how to add existing logs, how to use the Log Analytics query language to view your logs and how to enhance your queries to drill down and get to the logs you are interested in. This post is about how you can add logs from your code and provide the information that allows you to refine your queries and diagnose your faults more easily.

If you don’t already have Application Insights then you can create a new instance in the Azure portal (https://portal.azure.com/).

Get your Application Insights key from the Azure portal. Click on your Application Insights instance and navigate to the Overview section, then copy your instrumentation key. You will need this in your code.

image

In your project, add Application Insights via NuGet:

Install-Package Microsoft.ApplicationInsights -Version 2.10.0

In your code you need to assign the key to Application Insights as follows:

TelemetryConfiguration configuration = TelemetryConfiguration.CreateDefault();
configuration.InstrumentationKey = "put your key here";

To log details using Application Insights you need a telemetry client.

TelemetryClient telemetry = new TelemetryClient(configuration);

The telemetry client has a large number of features that I am not going to talk about here, as I am just interested in logging today. There are three methods of interest: TrackEvent, TrackException and TrackTrace.

I use TrackEvent to log things like the start and end of methods, or when something specific occurs that I want to log; TrackException is for logging exception details and TrackTrace is for everything else.

telemetry.TrackEvent("Some Important Work Started");
try {
     telemetry.TrackTrace("I'm logging out the details of the work that is being done", SeverityLevel.Information); } catch(Exception ex) {
     telemetry.TrackException(ex); } telemetry.TrackEvent("Some Important Work Completed");

You now have the basics for logging. This will be useful to some extent, but it will be difficult to follow the traces when you have a busy system with lots of concurrent calls to the same methods. To help you filter your logs it is useful to attach identifying information that allows you to track and trace calls through your system. You could add these details directly to your log messages, but that makes your logs bloated and difficult to read. Application Insights provides a mechanism to pass properties along with the logs, and these appear in the Log Analytics data that is returned from your query. Along with each log you can pass a dictionary of properties.

I add to the set of properties as the code progresses to provide identifying information to assist with filtering the logs, generally adding each new identifier as it is created. I can then use these in my queries to track calls through my system and filter out the ones I am not interested in. Diagnosing faults then becomes a lot easier. To make this work you need to be consistent with the naming of the properties, so that you always use the same name for the same property in different parts of the system. Also try to be consistent about when you use TrackEvent and TrackTrace. You can set levels for your traces based upon the severity level (Verbose, Information, Warning, Error, Critical).

TelemetryConfiguration.Active.InstrumentationKey = Key;
TelemetryClient telemetry = new TelemetryClient(); 
var logProperties = new Dictionary<string, string>();

logProperties.Add("CustomerID", "the customer id pass through from elsewhere");

telemetry.TrackEvent("Some Important Work Started", logProperties);
try
{
      var orderId = GenerateOrder();
      logProperties.Add("OrderID", orderId.ToString());
      telemetry.TrackTrace("I just created an order", logProperties);

      var invoiceId = GenerateInvoice();
      logProperties.Add("InvoiceID", invoiceId.ToString());
      telemetry.TrackTrace("I've just created an invoice", logProperties);

      SendInvoice(invoiceId);
}
catch (Exception ex)
{
      telemetry.TrackException(ex, logProperties);
}
telemetry.TrackEvent("Some Important Work Completed", logProperties);
telemetry.Flush();

Flush needs to be called at the end to ensure that the data is sent to Log Analytics. In the code above you can see that I’ve added a CustomerID, OrderID and InvoiceID to the log properties and passed the log properties to each of the telemetry calls. Each of the logs will contain the properties that were set at the time of logging. I generally wrap all this code so that I do not have to pass the log properties into each call; I can add to the log properties whenever I have new identifiers and then each of the telemetry calls will include them.
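As a rough sketch of what I mean by wrapping (the class and method names here are invented for illustration, not from a library), you could hold the TelemetryClient and the property dictionary together and expose simple logging methods:

using System;
using System.Collections.Generic;
using Microsoft.ApplicationInsights;
using Microsoft.ApplicationInsights.DataContracts;

// Hypothetical wrapper: keeps a shared property dictionary so callers
// don't have to pass it into every telemetry call.
public class TrackingLogger
{
     private readonly TelemetryClient telemetry;
     private readonly Dictionary<string, string> properties = new Dictionary<string, string>();

     public TrackingLogger(TelemetryClient telemetry)
     {
          this.telemetry = telemetry;
     }

     // Add an identifier (e.g. CustomerID, OrderID) once; it is attached to all later logs.
     public void AddProperty(string name, string value) => properties[name] = value;

     public void Event(string name) => telemetry.TrackEvent(name, properties);

     public void Trace(string message, SeverityLevel level = SeverityLevel.Information)
          => telemetry.TrackTrace(message, level, properties);

     public void Error(Exception ex) => telemetry.TrackException(ex, properties);
}

With something like this, the example above becomes logger.AddProperty("OrderID", orderId.ToString()) followed by logger.Trace("I just created an order"), and the properties are carried along automatically.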

When we look at the logs via Log Analytics we can see the additional properties on the logs and then use them in our queries.

image

image

The log properties appear in customDimensions and you can see how the invoice log has the invoice id as well as the customer id and order id. The order log only has the customer id and order id.

You can add the custom dimensions to your queries as follows:

union traces, customEvents, exceptions

| order by timestamp asc

| where customDimensions.CustomerID == "e56e4baa-9e1d-4c3c-b498-365bf2807a5f"

You can also see the severity level in the logs, which allows you to filter your logs to a sensible level. You need to plan your logs carefully and set an appropriate level so that you don’t flood your logs with unnecessary data until you need it.
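For example, continuing the snippet above (the messages here are just invented examples), detailed diagnostics can be written as Verbose so they can be filtered out of day-to-day queries, while problems go out at Warning or Error:

// Routine detail: usually filtered out of everyday queries.
telemetry.TrackTrace("Calculated shipping costs for the order", SeverityLevel.Verbose, logProperties);

// Something unexpected but recoverable.
telemetry.TrackTrace("Stock level low, falling back to secondary warehouse", SeverityLevel.Warning, logProperties);

// A genuine failure that is worth alerting on.
telemetry.TrackTrace("Payment provider returned an error response", SeverityLevel.Error, logProperties);

A query can then filter on the severityLevel column to show only the traces at or above the level you care about.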

I’ve now shown you how to add logs to your application. You can find out more about the other methods available on the telemetry API here.

Refining your Azure Log Analytics Queries

This is part 2 of a series of posts about Log Analytics in Azure. In Part 1 I discussed how to access Log Analytics and use it to query your exceptions. I also showed you how to display your output as a graph.

In this post we will look at some other tables, how we can view them and how we can refine the details we want to view.

I’ve been using Application Insights in my code to add my application logs, and these go into a number of different tables depending upon which API call is used.

If you look at the tables we have with Application Insights, you can see that as well as exceptions there are a number of other tables

image

The ones I am interested in are traces, custom events and exceptions. Traces are used for general logging, custom events are used to indicate that something has happened, for example the start and end of some activity, and exceptions are used when something has gone wrong. You can query each of these tables separately.

image

image

image

What you can see with these three logs is that we can easily retrieve the data but it would be useful if it could be done in one query. For that you need to use the “union” keyword as follows:

union traces, exceptions, customEvents

| where customDimensions.Source <> "ApplicationInsightsProfiler"

Note, I need to add in the where clause as the Application Insights Profiler is enabled on my site and I am not currently interested in those logs.

If you run this query you will get a snapshot of the data in each of the tables, which is not always that useful.

image

What would be useful is if I could order the logs by the timestamp.

To do this add another pipe and use the “order by” keywords and pick the “timestamp” column. I’ve added “asc” as I want to show my oldest log first. You can reverse it by using “desc” instead.

union traces, exceptions, customEvents

| where customDimensions.Source <> "ApplicationInsightsProfiler"

| order by timestamp asc

image

Now my logs are in a sensible order and I can see what is happening. The issue I have now is that I’ve got too much information on the screen to be able to view everything I need, plus the different tables have information in different columns. You can see with the events that the details do not appear in the message column, making it difficult to view the event details. In order to control what I see I can use projection. This is achieved using the “project” keyword. To make best use of “project” you need to identify the columns of interest in each of the tables we are using. Projection also allows you to order the columns: the order of the columns after “project” is the order they will appear in the results.

union traces, exceptions, customEvents

| where customDimensions.Source <> "ApplicationInsightsProfiler"

| project timestamp, itemType, name, message, problemId, customDimensions

| order by timestamp asc

“timestamp” is the date/time of the log

“itemType” will show trace, customEvent or exception

“name” contains the name of the custom event

“message” contains the details of the trace

“problemId” shows the top level details of the exception

“customDimensions” shows custom properties that have been attached to the log

This results in the following log output:

image

You can see now that the logs are in a more usable format and I can drill down a little by clicking on the > next to the log. By using projection, however, you will limit the columns that are returned. If you need to drill down to get information you have filtered out, then you will need to run a different query. One example of this is when you get an exception: this projection will only give you the problemId, so you will need to run a query on the exceptions table to bring back the full exception details.

In my next post I will show you how to use custom logging in your code with Application Insights.

Querying Exception Logs in Azure Log Analytics

In a previous post I’ve talked about how you can add logs to Azure Log Analytics. This post is about how you can make use of that logging. The key to Log Analytics (once your log data is in) is its query language.

You can navigate to Log Analytics from the Azure portal. I’m using Application Insights for the examples, and you can get to Log Analytics from the menu bar or by clicking Search in the left hand panel and then Log Analytics.

image

image

Once in Log Analytics there will be an area for queries

image

An area for your data sources

image

and a query explorer where you can find queries that you or your team have saved previously.

The data sources section is a useful place to start because double clicking a data source will add it to the query. So start by double clicking “exceptions”, then press the Run button. This will query the exception logs and return all the exceptions that happened in the last 24 hours (as indicated by the time range next to the Run button). You may want to add a time period to your query so that you can use it in a dashboard, for example; there are some date functions to help. If you are unsure about how to add query parameters then you can go to the data that is returned and click the plus button next to the item you want to add to your query, as below:

image

This will make the query look as follows:

exceptions

| where timestamp == todatetime('2019-06-26T18:21:49.1473946Z')

This is useful as you can add >= to the query to find all logs that happened after this time, but if you want to get all logs that happened over a specific period you can use the DateTime functions: type a space after the greater than sign to see a list of the available functions.

image

I use the “ago” function which also has help tips once you select it

image

As you can see there are examples for minutes, hours and days.

Queries are also built up using the pipe symbol so you can easily append.

If you want to summarise your data so you can get a count of each of the exceptions, then you add a new pipe using the summarize keyword and the count function. You need to tell the query which property you wish to count. If you look at the “filter on” screenshot above you will see that there is a type property in the log record. If we summarize that property with count then the query will return all the exceptions in the timeframe and how often they have occurred.

image

The query language also has a useful “render” keyword that allows you to display the query results in a variety of graphs.

image

So the final query looks like this

exceptions

| where timestamp > ago(70d)

| summarize count() by type

| render piechart

image

Clicking the save button allows you to save your queries so that you can use them later or share them with other users who share the same Log Analytics instance.

image

In my next post I will show how you can use some of the other log tables, ordering and selecting the columns you wish to display

Adding existing logs to Log Analytics

I created a video to walk you through how to add existing logs to Log Analytics. There have been some changes to the way you do this.

The location of the settings to configure this has now moved to Log Analytics in the Azure portal. Previously, this was in the Operations Management Suite (OMS).

Log on to your Azure portal (https://portal.azure.com) and click through to your Log Analytics workspace, then click on Advanced Settings.

image

The Advanced Settings page will allow you to configure your data sources and where your logs will be pulled from. The rest of the video is the same.

image

Azure Key Vault Logging and Events with Log Analytics

Following on from my previous blog post (http://blogs.recneps.net/post/Setting-up-Azure-Key-Vault-with-Audit-logging), which explains how to set up Azure Key Vault with logging enabled, this post explains how to access the details of these logs and also how to create an alert so you can see if someone is accessing the key vault from an unknown IP address, for example.

Open the Azure portal and navigate to the Resource Groups section and pick the resource group that we configured last time, which contains the key vault and Log Analytics resources.

image

Click your Log Analytics item to open Log Analytics.

You can then select Log Search

image

This screen allows you to create your own query or select from existing ones.

image

Selecting “All Collected Logs” will show you the logs for the last day. I’ve highlighted the areas where you can change the time period, see the query and also click on Advanced Analytics to give a richer environment for analysing your logs.

image

If you want to query just for the Key Vault Audit logs then you can use the following query:

search * | where Category=="AuditEvent"

image

This will default to a list view, but clicking the Table button will format the data in an easier to read table.

image

You can sort and filter on the column headers. This can also be achieved using the order by clause as follows:

search * | where Category=="AuditEvent" | order by TimeGenerated desc

A blog post discussing the query language can be found here

We are interested in all calls where someone has tried to access a Secret from the key vault. For that we are looking for an AuditEvent with an OperationName of SecretGet. If we also want to restrict the columns we retrieve then we can use “project”, e.g.

search * | where Category=="AuditEvent"  and OperationName == "SecretGet"
| order by TimeGenerated desc
| project TimeGenerated, OperationName, CallerIPAddress, ResultSignature, requestUri_s

image

Now we are familiar with writing queries we can look at alerting. I’d like to set up an alert when the key vault is accessed from an IP address other than the one where my application is running. This can be done as follows:

search * | where Category=="AuditEvent" and CallerIPAddress != "51.140.184.51"

This IP address is actually the Azure portal and is shown when you view the resource group that contains the key vault. I’m using this IP address so that I will actually get an alert (at the wrong time) when my application runs.

Click New Alert Rule

image

The following screen should appear

image

The Alert Target should be the Log Analytics we’ve been using and the Target Criteria (when clicked) should show the query we’ve just written

image

We need to configure the rule for when this alert should be triggered. I’m interested when at least one attempt has been made in the last 5 minutes to access the Key Vault from an unknown location, so I set the threshold to be zero and click Done. We’ve now configured the logic to determine when the event is fired. Now we need to say what we want to happen when it fires. Firstly we need to give the alert a name and description.

image

Now we need to configure how we are alerted. For this you need to create an action group. An action group allows you to define a collection of activities that will happen when the alert is fired. Click New Action Group

image

Action Types can be any of the following:

image

An action group can have multiple actions and you can select both email and SMS in a single action. Once you have created your Action Group you need to select it and then click “Create alert rule”.

image

Your alert is now set up and running. You can view/edit alerts by selecting Monitor in the Azure Portal

image

then click Alerts (preview) and you will be able to see the alerts that have fired.

image

Click Manage Rules to edit the alert.

When the alert is fired I will get an email containing the details of the alert.

Log Analytics is a powerful tool and, whilst this series of posts has been related to auditing of Key Vault, we can use Log Analytics for a wide variety of log sources such as Application Insights. We can also use the same mechanism for alerting on these other log sources.

The next post is a video that shows you how to connect existing log files to log analytics

Setting up Azure Key Vault with Audit logging

Azure Key Vault is a good way to share secrets with your partners in a way that allows you to have control over the access to each of the assets in Azure. We also need to know who is accessing the resources and from where so that we can monitor for suspicious activity. This post will talk through setting up the key vault and then configuring logging to keep track of the audit information for your certificates, keys and secrets. For each application that you want to access your resources you will need to create some credentials that the application can use.

To allow an application to access key vault an App Registration needs to be added to Azure Active Directory (AAD). This effectively sets up a username and password that the application can use for credentials.

Open the Azure portal (http://portal.azure.com) and navigate to Active Directory.

Click "App registrations"

clip_image002

Then "New application registration"

clip_image001

The name needs to be unique within your AD. Select Web app/API and enter a sign-on URL. If you are not building a website then enter anything in here; it might be useful to use a URL related to your existing domain with the application name appended. It doesn’t need to be a valid URL. Then click “Create”.

Once created copy the Application ID as this is equivalent to a username to be used when calling the Key Vault in code. You now need to create the password.

Click Settings then Keys

clip_image002

clip_image004

clip_image006

Enter a name in the description field and select a duration, then click Save. The new key value will be displayed. You will need to copy this as it will not be visible again once you leave this page. This will be used as the password.

clip_image002[5]

Now create the Key Vault. To do that it is a good idea to put it in a specific resource group, especially if you are creating a set of resources that the key vault is going to access or if you are going to set up third party access. Once the Resource Group has been created, select it and add a Key Vault. When the Create Key Vault panel appears, click Access Policies, then click "Add new":

clip_image004[5]

Pick the application you just created in AAD and select Get in Secret permissions, click Save, then go back to the main Key Vault pane and click Create.

You have just given the application we created earlier access to just retrieving secrets. As you can see from the access policy you can give the application permissions to access a combination of Keys, Secrets and Certificates with the minimum access of Get. The Key Vault security is at the vault level and you cannot protect individual secrets at the user level. By granting only Get access on the Secret the application will not be able to list the Secrets available and will only be able to retrieve secrets it knows the names of.

Now the Key vault is set up and can be accessed, we want to know who is accessing the vault and from where. Out of the box this is not enabled and requires additional configuration and resources to allow us to be able to retrieve this audit information. This is achieved by enabling diagnostic logs in the Key Vault.

Before you can enable this you need to create a new storage account in this resource group to store the logs, then add Application Insights to the resource group

clip_image002[7]

Once these have been provisioned, navigate to the Key Vault you just created & click Diagnostic logs

clip_image004[7]

Click "Turn on diagnostics"

clip_image006[6]

Select “Archive to Storage Account” and pick the storage account you’ve just created.

Select “Send to Log Analytics” and create a new OMS workspace in your resource group.

clip_image008

Once created select this for Log Analytics

clip_image009

Select the AuditEvent log and click Save.

Now any changes to the Key Vault plus any access from your application will be logged and visible via log analytics. There’s a 10 – 15 minute delay between accessing the Key Vault and the log appearing.

To add a Secret to the vault, navigate to the vault, click Secrets then Add.

clip_image010

Select Manual from the Upload options, enter a name and the secret

clip_image011

Remember the name you gave the Secret as you will need this in your code when accessing the key vault. This secret will now have a unique identifier that you will use. The one I’ve just created is:

https://recneps-vault.vault.azure.net/secrets/recnepssvsb-key

You should see in the logs this secret being created and also when it gets accessed.

Accessing the KeyVault in C# can be seen here: https://docs.microsoft.com/en-us/azure/key-vault/key-vault-use-from-web-application

The application in the example uses settings as defined below:

ClientID is the Application ID we created in the application registration in AD

ClientSecret is the key you created (that you had to save as it wasn’t visible again) as part of creating the application registration in AD.

Each Key, Secret and Certificate has a unique url which is used as the SecretURI e.g. https://recneps-vault.vault.azure.net/secrets/recnepssvsb-key
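For reference, a minimal sketch of that access in C# might look something like the following. This is based on the older Microsoft.Azure.KeyVault and ADAL (Microsoft.IdentityModel.Clients.ActiveDirectory) NuGet packages that were current at the time; treat it as an illustration of how the ClientID, ClientSecret and SecretURI settings fit together rather than a definitive implementation, and refer to the linked article for the full sample.

using System.Threading.Tasks;
using Microsoft.Azure.KeyVault;
using Microsoft.IdentityModel.Clients.ActiveDirectory;

public static class SecretReader
{
     // ClientID and ClientSecret come from the app registration created earlier.
     private const string ClientID = "<application id>";
     private const string ClientSecret = "<key saved when it was created>";

     public static async Task<string> GetSecretAsync(string secretUri)
     {
          // The callback authenticates against AAD using the app registration credentials.
          var client = new KeyVaultClient(async (authority, resource, scope) =>
          {
               var context = new AuthenticationContext(authority);
               var credential = new ClientCredential(ClientID, ClientSecret);
               var token = await context.AcquireTokenAsync(resource, credential);
               return token.AccessToken;
          });

          // secretUri is the unique identifier of the secret, e.g. the recnepssvsb-key URL above.
          var secret = await client.GetSecretAsync(secretUri);
          return secret.Value;
     }
}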

You now have your key vault set up with audit logging and are able to access it. My next blog post will talk you through how to access the logs and also how to set up alerting

Windows Azure Roles Fail to run when deployed to Azure

Recently I was helping out at the Azure Bootcamp in London and during the labs a common theme kept occurring when the labs were deployed to a real Azure account: the roles failed to run and it appeared that the deployment was taking forever. This is something I experienced first hand when I was starting out with Azure. There is a way to diagnose these deployment errors and it is by using IntelliTrace. During deployment you can enable IntelliTrace as part of the publish dialog.

image

The IntelliTrace option is only available if you have Visual Studio 2010 Ultimate. Once deployed to Azure the Roles will attempt to start, and any errors during this phase will lead to the symptoms mentioned above. You can then connect to your Azure environment using the Server Explorer in Visual Studio to retrieve the IntelliTrace files, which can be opened in Visual Studio and show any exceptions that may have been thrown. Further information can be found here. Once you have diagnosed your issue please ensure that you then disable IntelliTrace by redeploying the fixed application, as it will have a negative impact on performance if left enabled.

 

Getting back to the problem we had at the Bootcamp, the issue was that the deployed application was trying to write information to Azure storage and the connection string was still pointing to development storage. This was strange because none of the deployed applications had got to the Azure storage part of the lab, so you would have thought that there was no need for a connection string. Luckily I had the exact same problem with one of my earlier deployments, and it turns out that when a project is created the Diagnostics plug-in is automatically enabled. The Diagnostics plug-in requires its own connection string to Azure storage so that the diagnostic information can be stored. Looking at the role configuration in Visual Studio you can see the Diagnostics plug-in configuration.

image

To fix the deployment issue click the button next to the connection string text box and enter the details of your Azure Storage account.

image

You will need to redeploy the application or upload the new ServiceConfiguration.cscfg to fix this issue. If this still does not resolve the issue then try disabling the Diagnostics plug-in and redeploy.

Logging with Log4Net

A while ago I was looking for an alternative to System.Diagnostics.Trace and found Log4Net. I know the patterns and practices group have created enterprise logging but I find Log4Net nice and easy to use. I created some documentation and samples on my website at http://www.recneps.co.uk/log4net.aspx.

The BlackMarble.Diagnostics.Logging libraries encapsulate both System.Diagnostics.Trace and Log4Net so that we can use a standard way of logging and then configure at runtime which one we want to use.
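For anyone unfamiliar with Log4Net, a minimal usage sketch looks something like this (assuming the log4net configuration section lives in the application's config file; the class here is just an invented example, and the samples linked above cover the configuration in detail):

using System;
using log4net;
using log4net.Config;

public class Worker
{
     // One logger per type is the usual log4net convention.
     private static readonly ILog Log = LogManager.GetLogger(typeof(Worker));

     public static void Main()
     {
          // Reads the log4net section from App.config / Web.config.
          XmlConfigurator.Configure();

          Log.Info("Some important work started");
          try
          {
               Log.Debug("Details of the work being done");
          }
          catch (Exception ex)
          {
               Log.Error("Something went wrong", ex);
          }
          Log.Info("Some important work completed");
     }
}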