Steve Spencer's Blog

Blogging on Azure Stuff

Adding Security Policies To Azure API Management

The Azure API Management service allows you to publish your APIs both internally and externally and to control who and what can access them. Out of the box you get a standard API key for each of your users who sign up to the API, but this is often not enough to meet the security requirements for you or your partners. API Management allows you to add a more fine-grained security model to each of your APIs, and this can be done using the policy feature. Policies are used for more than just security and there are numerous policies that allow you to change the behaviour of your API through configuration. Documentation for the types of policies can be found here. Sample policy examples can be found here.

The two policies I am going to discuss here allow you to restrict access to your API through IP whitelisting and through validating JWT claims. I will also discuss how you can put different controls onto your API for different partners.

Policies can be set at different levels and the documentation will highlight the areas where they are applicable. For security policies I am going to talk about protecting at the API level and at the product level. A policy added at the API level applies to all subscribers to that API, whereas a policy added at the product level applies to all subscribers to that product. A product can contain multiple APIs and an API can be in multiple products, so we can add protection at either level depending upon your exact requirements. The policies are the same but their impact will depend upon where they are applied.

 

Let's start with API level policies. To add or edit policies, navigate to your API Management service in the Azure portal, click the APIs option, then click on the API you wish to protect.

image

The easiest way to add a policy is to click the Add Policy link in the inbound section.

image

Click Filter IP Addresses, then Add IP Filter.

image

This form allows you to add ranges or single IP addresses to either allow or deny. When you have finished, click Save.

You will now see the policy in the policy editor view. If you are happier adding this manually, or want to copy the config so you can put it under version control, you can access it via the Code Editor menu on the Inbound processing policies box.

image

image
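
For reference, an ip-filter policy in the code editor looks something like the sketch below (the address range is just an example):

<inbound>
    <base />
    <!-- Only calls from this range are allowed; anything else is rejected -->
    <ip-filter action="allow">
        <address-range from="10.0.0.1" to="10.0.0.255" />
    </ip-filter>
</inbound>

A single address can be allowed or denied with an <address> element instead of <address-range>.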

Applying this policy at the API level means that only IP addresses within this range can access this specific API, which is useful to ensure that the API is protected regardless of which product has been subscribed to. It's also useful if you want to block access from specific IP addresses. However, you may have different partners who have different security arrangements, or that you want to give different permissions to. To allow for this you will need to add the policy at the product level.

To edit the policy at the product level, click Products, then pick the product you want to secure.

image

In this example I have a new More Secure API that I’ve created, and there’s an access control section which allows you to pick the users who have access to this API.

image

So I’ve immediately blocked access to this API for guest users, and we can add user authentication to the API if we want, such as OAuth 2.0 and OpenID Connect.

However, this post is about adding security policies, and if we want to allow only specific IP addresses to access this API we can edit the policy at the product level. To access the policy definition, click Policies.

image

You’ll notice that this is just the editor view, so the easiest way to build the policy is to use the wizard at the API level and copy the config here. Products are a mechanism for grouping and protecting APIs, which means that from a management point of view you could create a product for each of your partners. This makes it easier to maintain the security details for each partner, and to disable access or remove only the security policies that apply to that specific partner. Managing this at the API level means you end up with a large number of security policies relating to a large number of partners, which quickly becomes difficult to manage. Product level policies matter even more when you want to do specific protection such as checking claims in a signed JWT, because a product level policy allows you to have a different signing key for each product, and therefore for each of your partners (assuming one product per partner).

image

This policy requires a JWT signed with the key eW91ci0yNTYtYml0LXNlY3JldA== that also has the claim admin=true. If validation fails, a 401 is returned with the message “You have failed the security checks please contact your administrator”.
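
Reconstructing that from the description, a validate-jwt policy along these lines would do it (treat the exact attributes as a sketch rather than a copy of the screenshot):

<inbound>
    <base />
    <validate-jwt header-name="Authorization" failed-validation-httpcode="401" failed-validation-error-message="You have failed the security checks please contact your administrator">
        <!-- Symmetric signing key (base64 encoded) used to validate the token -->
        <issuer-signing-keys>
            <key>eW91ci0yNTYtYml0LXNlY3JldA==</key>
        </issuer-signing-keys>
        <!-- The token must also carry the claim admin=true -->
        <required-claims>
            <claim name="admin" match="all">
                <value>true</value>
            </claim>
        </required-claims>
    </validate-jwt>
</inbound>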

To summarise, we can add policies at both API and product level. Product level policies allow us to create a new product for each of our partners and then add security policies tailored to that partner's needs. The product level policy makes it easier to manage security at a partner level, but we can also add global security policies at the API level, such as blocking access from certain IP address ranges. Policies can do a lot more than security, so check out the links at the start of the post for further information.

Adding existing logs to Log Analytics

I created a video to walk you through how to add existing logs to Log Analytics. There have been some changes to the way you do this.

The location of the settings to configure this has now moved to Log Analytics in the Azure Portal. Previously, this was in the Operations Management Suite (OMS).

Log on to the Azure Portal (https://portal.azure.com) and click through to your Log Analytics workspace, then click on Advanced Settings.

image

The Advanced Settings page will allow you to configure your data sources and where your logs will be pulled from. The rest of the video is the same.

image

Using Azure Logic Apps to Import CSV to SQL Server

When Logic Apps first came out I wrote a blog post explaining how to convert a CSV file into XML. A lot of this is still relevant, especially the integration account and the schemas and maps that are in my github repo. This post will show how Logic Apps are now even simpler to use with flat file decoding and also show how to insert the CSV data into SQL Server. The SQL part of the blog was adapted from this post: https://pellitterisbiztalkblog.wordpress.com/2016/11/14/upload-flat-file-on-azure-sql-database-using-azure-logic-app/

Logic Apps have evolved since I last wrote about this topic and you no longer need to create a function to transform the CSV to XML.

clip_image001

The Transform XML connector is now used with the same maps we used in the previous post.

In order to add the individual rows to the database there are a number of things you need to do. We will use an XML schema mapping in a stored procedure to extract the data from the transformed xml.

In your SQL database you will need to add a stored procedure, a table and an XML schema. The SQL scripts to create the table, stored procedure and XML schema have been added to the github repo. The stored procedure takes the XML that has been transformed and uses the XML schema to extract the firstname, middlename and surname from the XML, then stores the data in the employees table. In the Logic App you need to add a SQL Server connector, configure the connection to your Azure SQL database, and add the stored procedure action with its parameter set to the output of the Transform XML step.
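
The real scripts are in the github repo, but as a rough sketch the table and stored procedure could look something like this (table, procedure and element names here are illustrative, and the XML schema collection is skipped for brevity):

CREATE TABLE Employees
(
    Id         INT IDENTITY(1,1) PRIMARY KEY,
    FirstName  NVARCHAR(100),
    MiddleName NVARCHAR(100),
    Surname    NVARCHAR(100)
);
GO

CREATE PROCEDURE InsertEmployees
    @employeeXml XML
AS
BEGIN
    SET NOCOUNT ON;

    -- Shred each Employee element from the transformed XML into a row
    INSERT INTO Employees (FirstName, MiddleName, Surname)
    SELECT
        e.value('(FirstName)[1]',  'NVARCHAR(100)'),
        e.value('(MiddleName)[1]', 'NVARCHAR(100)'),
        e.value('(Surname)[1]',    'NVARCHAR(100)')
    FROM @employeeXml.nodes('/Employees/Employee') AS x(e);
END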

image

The only other thing I needed to do to get this working was to remove the first row of the csv file as it contained the header fields and I didn’t want that inserted into the database.

image

The “length” expression is:  length(variables('csvdata'))

image

The “indexOf” expression is: indexOf(variables('csvdata'),'\r\n')

However, if you add this in the editor the backslash will be escaped and you will end up with \\r\\n, which will not work. To fix this you will need to click the View Code button, search for the \r\n and remove the extra backslash.

The “substring” expression is: substring(variables('csvdata'),add(variables('firstnewlineposition'),2),sub(variables('csvlength'),add(variables('firstnewlineposition'),2)))

The trigger for my Logic App was when a new file was added to OneDrive, so click the Run button, drop a file into the configured OneDrive location, and the CSV entries should be added to your database.

Cloud Load Testing Behind a Firewall with Visual Studio Team Services

Recently I’ve been looking at how we can load test one of our services so that we are able to understand the load our partners can put onto our systems before we start to have any issues. We used the Cloud-based Load Testing (CLT) service of Visual Studio Team Services (VSTS). I created a short video showing you how to easily set up a URL-based load test. The next stage of our load testing was to load test the service that our partners provide. Their service required IP whitelisting to connect, which meant the CLT service would not be able to reach it. Luckily for us the CLT service allows you to deploy agents into your own infrastructure to carry out the load testing, and they are controlled by the same CLT service that we used to load test our own service. This blog post will show you how to install the agent and configure the load test for this scenario.

The load test agent is installed using a PowerShell script which can be obtained from here. Open PowerShell as administrator and don’t forget to unblock the script.
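
For example, assuming the script has been downloaded to the current folder:

# Remove the "downloaded from the internet" block so the script can run
Unblock-File -Path .\ManageVSTSCloudLoadAgent.ps1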

clip_image001

To enable you to run the script and to configure the agents to talk to the CLT service, you will need to create a PAT token in VSTS.

Login and click on your user icon, then select Security

clip_image001[5]

Select Personal Access Tokens, then Add

clip_image001[7]

Fill in the form and click Create Token at the bottom of the page

clip_image001[9]

This creates your token and this is the only time you will be able to access the token

clip_image001[11]

You need to make sure that you copy it now, as you will not be able to access it after you have left the page.

Going back to PowerShell, run the following command:

.\ManageVSTSCloudLoadAgent.ps1 -TeamServicesAccountName StevesVSTS -PATToken 37abawavsmsgj6hpakltwhdjt4jrqsmup2jx62hlcxbju2l2tbja -ConfigureAgent -AgentGroupName StevesInternalTest

This will take a few minutes to run. If you didn’t run PowerShell as an Administrator you might see errors. When it has run, the VSTSLoadAgentService should be installed and running.

clip_image001[13]

Now we need to configure a load test. Following on from my video, you can create a URL test.

clip_image001[17]

I ran a test web app in Visual Studio using localhost as the address

clip_image001[19]

When the test has been created, select the Settings tab, click “Use self-provisioned agents”, select your agent from the list and add the number of agents you want to use. You could install the agents on a number of machines in your environment using the same script, and then you would be able to add more than one agent if required. As I only installed the one agent I can only select 1. You can see how many agents the CLT service can see.

image

If there are fewer than you expect, check that the service was installed without error and that it is running.
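
A quick way to check this on the agent machine is to query the service by the name shown earlier:

# Should report the agent service as Running
Get-Service -Name VSTSLoadAgentService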

Save the load test and run it.

clip_image001[23]

Whilst the test was running the performance meters in Visual Studio showed that the web page was being loaded.

When the test is complete you should see that it has not cost you any VUM (Virtual User Minutes), as the load tests are running on your own agents.

clip_image001[21]

The Cloud Load Test service allows you to load test both publicly accessible and private websites and services. As long as the servers running the load test agents have outbound HTTPS access to the internet, we are able to load test private sites and services, and the load test does not cost anything apart from the cost of the infrastructure that the agents are running on locally.

Azure Key Vault Logging and Events with Log Analytics

Following on from my previous blog post (http://blogs.recneps.net/post/Setting-up-Azure-Key-Vault-with-Audit-logging), which explains how to set up Azure Key Vault with logging enabled, this post explains how to access the details of these logs and also how to create an alert so you can see if, for example, someone is accessing the key vault from an unknown IP address.

Open the Azure portal, navigate to the Resource Groups section and pick the resource group that we configured last time, which contains the key vault and Log Analytics resources.

image

Click your Log Analytics item to open Log Analytics.

You can then select Log Search

image

This screen allows you to create your own query or select from existing ones.

image

Selecting “All Collected Logs” will show you the logs for the last day. I’ve highlighted the areas where you can change the time period, see the query and also click on Advanced Analytics to give a richer environment for analysing your logs.

image

If you want to query just for the Key Vault Audit logs then you can use the following query:

search * | where Category=="AuditEvent"

image

This will default to a list view, but clicking the Table button will format the data in an easier to read table.

image

You can sort and filter on the column headers. This can also be achieved using the order by clause as follows:

search * |where Category=="AuditEvent"  | order by TimeGenerated desc

A blog post discussing the query language can be found here

We are interested in all calls where someone has tried to access a Secret from the key vault. For that we are looking for an AuditEvent with an OperationName of SecretGet. If we also want to restrict the columns we retrieve then you can use “project” e.g.

search * | where Category=="AuditEvent"  and OperationName == "SecretGet"
| order by TimeGenerated desc
| project TimeGenerated, OperationName, CallerIPAddress, ResultSignature, requestUri_s

image

Now we are familiar with writing queries, we can look at alerting. I’d like to set up an alert when the key vault is accessed from an IP address other than the one where my application is running. This can be done as follows:

search * | where Category=="AuditEvent" and CallerIPAddress != "51.140.184.51"

This IP address is actually the Azure Portal and is shown when you view the resource group that contains the key vault. I’m using this IP address so that I will actually get an alert (at the wrong time) when my application runs.
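
If there is more than one known good address you can exclude a list instead (the second address below is just a placeholder):

search * | where Category=="AuditEvent" and CallerIPAddress !in ("51.140.184.51", "203.0.113.10")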

Click New Alert Rule

image

The following screen should appear

image

The Alert Target should be the Log Analytics workspace we’ve been using, and the Target Criteria (when clicked) should show the query we’ve just written.

image

We need to configure the rule for when this alert should be triggered. I’m interested when at least one attempt has been made in the last 5 minutes to access the Key Vault from an unknown location, so I set the threshold to be zero and click Done. We’ve now configured the logic that determines when the alert is fired. Now we need to say what we want to happen when it fires. Firstly, we need to give the alert a name and description.

image

Now we need to configure how we are alerted. For this you need to create an action group. An action group allows you to define a collection of activities that will happen when the alert is fired. Click New Action Group

image

Action Types can be any of the following:

image

An action group can have multiple actions and you can select both email and SMS in a single action. Once you have created your action group you need to select it, then click “Create alert rule”.

image

Your alert is now set up and running. You can view/edit alerts by selecting Monitor in the Azure Portal

image

Then click Alerts (preview) and you will be able to see the alerts that have fired.

image

Click Manage Rules to edit the alert.

When the alert is fired I will get an email containing the details of the alert.

Log Analytics is a powerful tool and, whilst this series of posts has been related to auditing of Key Vault, we can use Log Analytics for a wide variety of log sources such as Application Insights. We can also use the same mechanism for alerting on these other log sources.

The next post is a video that shows you how to connect existing log files to Log Analytics.

Setting up Azure Key Vault with Audit logging

Azure Key Vault is a good way to share secrets with your partners in a way that allows you to have control over the access to each of the assets in Azure. We also need to know who is accessing the resources and from where so that we can monitor for suspicious activity. This post will talk through setting up the key vault and then configuring logging to keep track of the audit information for your certificates, keys and secrets. For each application that you want to access your resources you will need to create some credentials that the application can use.

To allow an application to access key vault an App Registration needs to be added to Azure Active Directory (AAD). This effectively sets up a username and password that the application can use for credentials.

Open the azure portal (http://portal.azure.com) and navigate to Active Directory.

Click "App registrations"

clip_image002

Then "New application registration"

clip_image001

The name needs to be unique within your AD. Select Web app/API and enter a sign-on URL. If you're not building a website then enter anything here; it might be useful to use a URL related to your existing domain with the application name appended. It doesn’t need to be a valid URL. Then click “Create”.

Once created, copy the Application ID as this is the equivalent of a username, to be used when calling the Key Vault in code. You now need to create the password.

Click Settings then Keys

clip_image002

clip_image004

clip_image006

Enter a name in the description field and select a duration, then click Save. The new key value will be displayed. You will need to copy this as it will not be visible again once you leave this page. This will be used as the password.

clip_image002[5]

Now create the Key Vault. It is a good idea to put it in a specific resource group, especially if you are creating a set of resources that the key vault is going to access or if you are going to set up third party access. Once the resource group has been created, select it and add a Key Vault. When the Create Key Vault panel appears, click Access Policies, then click "Add new".

clip_image004[5]

Pick the application you just created in AAD, select Get in Secret permissions, click Save, then go back to the main Key Vault pane and click Create.

You have just given the application we created earlier access to just retrieving secrets. As you can see from the access policy you can give the application permissions to access a combination of Keys, Secrets and Certificates with the minimum access of Get. The Key Vault security is at the vault level and you cannot protect individual secrets at the user level. By granting only Get access on the Secret the application will not be able to list the Secrets available and will only be able to retrieve secrets it knows the names of.

Now the Key Vault is set up and can be accessed, we want to know who is accessing the vault and from where. Out of the box this is not enabled; it requires additional configuration and resources to allow us to retrieve this audit information. This is achieved by enabling diagnostic logs in the Key Vault.

Before you can enable this you need to create a new storage account in this resource group to store the logs, then add Application Insights to the resource group

clip_image002[7]

Once these have been provisioned, navigate to the Key Vault you just created & click Diagnostic logs

clip_image004[7]

Click "Turn on diagnostics"

clip_image006[6]

Select “Archive to Storage Account” and Pick the storage account you’ve just created

Select “Send to Log Analytics” and Create a new OMS workspace in your resource group

clip_image008

Once created select this for Log Analytics

clip_image009

Select the AuditEvent log and click Save.

Now any changes to the Key Vault plus any access from your application will be logged and visible via log analytics. There’s a 10 – 15 minute delay between accessing the Key Vault and the log appearing.

To add a secret to the vault, navigate to the vault, click Secrets, then Add.

clip_image010

Select Manual from the Upload options, enter a name and the secret

clip_image011

Remember the name you gave the Secret as you will need this in your code when accessing the key vault. This secret will now have a unique identifier that you will use. The one I’ve just created is:

https://recneps-vault.vault.azure.net/secrets/recnepssvsb-key

You should see in the logs this secret being created and also when it gets accessed.

Accessing the KeyVault in C# can be seen here: https://docs.microsoft.com/en-us/azure/key-vault/key-vault-use-from-web-application

The application in the example uses settings as defined below:

ClientID is the Application ID we created in the application registration in AD

ClientSecret is the key you created (that you had to save as it wasn’t visible again) as part of creating the application registration in AD.

Each Key, Secret and Certificate has a unique url which is used as the SecretURI e.g. https://recneps-vault.vault.azure.net/secrets/recnepssvsb-key
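
Putting those settings together, a minimal sketch of how they get used with the Microsoft.Azure.KeyVault and ADAL packages looks roughly like this (the helper class and appSettings keys are my own names, not from the linked example):

using System.Configuration;
using System.Threading.Tasks;
using Microsoft.Azure.KeyVault;
using Microsoft.IdentityModel.Clients.ActiveDirectory;

public static class KeyVaultHelper
{
    // ClientId and ClientSecret are the Application ID and key from the app registration above
    private static readonly string ClientId = ConfigurationManager.AppSettings["ClientId"];
    private static readonly string ClientSecret = ConfigurationManager.AppSettings["ClientSecret"];

    public static async Task<string> GetSecretAsync(string secretUri)
    {
        var client = new KeyVaultClient(async (authority, resource, scope) =>
        {
            // Authenticate against AAD using the app registration credentials
            var context = new AuthenticationContext(authority);
            var credential = new ClientCredential(ClientId, ClientSecret);
            var token = await context.AcquireTokenAsync(resource, credential);
            return token.AccessToken;
        });

        var secret = await client.GetSecretAsync(secretUri);
        return secret.Value;
    }
}

Calling KeyVaultHelper.GetSecretAsync("https://recneps-vault.vault.azure.net/secrets/recnepssvsb-key") would then return the secret value, assuming the app registration has Get permission on secrets.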

You now have your key vault set up with audit logging and are able to access it. My next blog post will talk you through how to access the logs and also how to set up alerting.

Creating a Scheduled Web Job in Azure

It’s been a while since I’ve talked about web jobs, but they are still around. I needed to modify one of mine recently and configure it as a scheduled web job.

You can deploy your web job from the Azure Portal. Web jobs are part of App Services and are deployed by selecting the app service you want and clicking the WebJobs option.

image

Click the Add button.

image

Enter a name for your web job, then browse to a zip or exe containing your web job.

Select Triggered from the Type drop down. This changes the UI to allow you to select Scheduled.

image

The Schedule is triggered using a CRON expression.
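
The CRON expression has six fields (seconds, minutes, hours, day, month, day of week). For example, the following (illustrative) expression runs the job at 8am every weekday:

0 0 8 * * 1-5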

Alternatively, configure the web job as a manual trigger, then use the Azure Scheduler to trigger the web job. When you have configured your web job, click on its properties in the Azure Portal and you should see a webhook URL.

https://yourwebapp.scm.azurewebsites.net/api/triggeredwebjobs/yourwebjob/run
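
Before wiring it into the scheduler you can check the webhook works by POSTing to it yourself, authenticating with the web app's deployment credentials (a sketch; the URL is the one shown above):

# Enter the web app's deployment credentials when prompted
$cred = Get-Credential
Invoke-RestMethod -Uri "https://yourwebapp.scm.azurewebsites.net/api/triggeredwebjobs/yourwebjob/run" -Method Post -Credential $cred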

Now create a new scheduler Job

image

Select the method as POST and paste in the webhook URL. Once you've completed this configuration you can then configure the schedule.

image

Using the scheduler also allows you to configure retry policies and error actions.

Custom ASP.NET MVC app running in a Container on Service Fabric

In an earlier post, I talked about how to create a Docker container on Windows that housed a custom ASP.Net MVC app. What I want to show now is how you can get this container running in Service Fabric.

I created 3 identical virtual machines, all capable of running Docker as in my earlier post. Now I needed to make my three VMs into a Service Fabric cluster. These two posts explain how:

https://docs.microsoft.com/en-us/azure/service-fabric/service-fabric-get-started

https://docs.microsoft.com/en-us/azure/service-fabric/service-fabric-cluster-standalone-deployment-preparation

My 3 VMs are called sf0, sf1 & sf2 and I needed to put these into my cluster config. I picked the ClusterConfig.Unsecure.MultiMachine config file that comes with the Service Fabric files and changed it to include my 3 VMs, so my nodes look like this:

"nodes": [
{
      "nodeName": "sf0",
      "iPAddress": "sf0",
      "nodeTypeRef": "NodeType0",
      "faultDomain": "fd:/dc1/r0",
      "upgradeDomain": "UD0"
},
{
      "nodeName": "sf1",
      "iPAddress": "sf1",
      "nodeTypeRef": "NodeType0",
      "faultDomain": "fd:/dc2/r0",
      "upgradeDomain": "UD1"
},
{
      "nodeName": "sf2",
      "iPAddress": "sf2",
      "nodeTypeRef": "NodeType0",
      "faultDomain": "fd:/dc3/r0",
      "upgradeDomain": "UD2"
}
],

I then remoted onto one of the machines and ran the following PowerShell:

.\TestConfiguration.ps1 -ClusterConfigFilePath .\ClusterConfig.json

This will check all the machines in the ClusterConfig.json file to see if they are configured correctly and report any errors. I got the following error:

Machine 'sf2' is not reachable on port 445. Check connectivity/open ports. Error: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond 192.168.1.222:445
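
This error is the setup script failing to reach the machine over SMB (port 445). A firewall rule along these lines on each VM opens the port (a sketch; adjust the display name and scope to suit your environment):

New-NetFirewallRule -DisplayName "Service Fabric setup (SMB)" -Direction Inbound -Protocol TCP -LocalPort 445 -Action Allow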

I got this error for all the machines in the cluster. Once I had opened the firewall ports (as above) and rerun the PowerShell, the tests passed, which meant I could install Service Fabric on each of the machines as follows:

.\CreateServiceFabricCluster.ps1 -ClusterConfigFilePath .\ClusterConfig.json –AcceptEULA

When this completes successfully you should see something like this:

Your cluster is successfully created! You can connect and manage your cluster using Microsoft Azure Service Fabric Explorer or Powershell. To connect through Powershell, run 'Connect-ServiceFabricCluster

I could connect to Service Fabric Explorer using: http://sf0:19080

Now I have my cluster running, I needed to create a Service Fabric app and deploy it to the cluster. Make sure that you have installed the Service Fabric SDK, then run Visual Studio. Create a new Service Fabric project. When the project is created, right click on the Services node and select Add -> New Service Fabric Service.

clip_image001[4]

Then pick Guest Container and enter the name of the Docker Hub repository where your Docker image resides.

clip_image001

This will add the necessary files to your Service Fabric project. If you remember from my earlier post, the website was hosted on port 8000 of the container. We need to tell Service Fabric about this, and we may also want to map it to a different port.

If you open the container's ServiceManifest file:

clip_image001[8]

Add an endpoint with the port you want Service Fabric to use to publish the website.

clip_image001[10]

In this example I’m using the same port. If you want to map the port to a different one then change this to something else, e.g. if I wanted to use http://sf0:8080 as the website then I would change the Service Manifest to this:

image
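
Reconstructing what the screenshot shows, the endpoint in ServiceManifest.xml would look something like this (the endpoint name is illustrative):

<Resources>
  <Endpoints>
    <!-- Service Fabric publishes the site on this port -->
    <Endpoint Name="WebEndpoint" Protocol="http" Port="8080" UriScheme="http" />
  </Endpoints>
</Resources>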

You also need to tell Service Fabric about the container port that is published. This is done in the application manifest file:

image

This is set to 8000 as that is the port exposed by the Docker container.
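
In ApplicationManifest.xml that mapping is a PortBinding inside the container host policies, roughly like this (the ServiceManifestRef values are placeholders, and the EndpointRef must match the endpoint name in your service manifest):

<ServiceManifestImport>
  <ServiceManifestRef ServiceManifestName="MyContainerPkg" ServiceManifestVersion="1.0.0" />
  <Policies>
    <ContainerHostPolicies CodePackageRef="Code">
      <!-- Map container port 8000 to the endpoint defined in the service manifest -->
      <PortBinding ContainerPort="8000" EndpointRef="WebEndpoint" />
    </ContainerHostPolicies>
  </Policies>
</ServiceManifestImport>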

Now deploy your application to Service Fabric. It may take a while to initialise your container as the image will need to be downloaded from Docker Hub before it will run. Once it is running you should see it as Ready in Service Fabric Explorer.

image

Error updating SSL certificates in Azure App Services

I was asked to update the SSL certificates on a website that was hosted in Azure Web Apps. No problem I thought.

Go to the Azure Portal.

Select the website you want to update.

When the blade appears, scroll down the left panel and select SSL Certificates.

image

image

 

Remove the binding by clicking … at the end of the binding row and selecting Delete.

image

Now remove the certificate by clicking … at the end of the certificate row and selecting Delete.

This is where I got an error

image

It took a short while to resolve this.

I tried a few things like restarting the site and checking the staging slot, but I still got the error. Finally, I checked the other sites in the same App Service plan and found the same certificate was used for another web app (both using the same domain URL). Once I removed the binding from that site, I could delete the certificate and upload a new one. I then had to add the new bindings to both sites.