Steve Spencer's Blog

Blogging on Azure Stuff

Azure Key Vault Logging and Events with Log Analytics

Following on from my previous blog post (http://blogs.recneps.net/post/Setting-up-Azure-Key-Vault-with-Audit-logging), which explains how to set up Azure Key Vault with logging enabled, this post explains how to access the details of those logs and how to create an alert so you can see, for example, if someone is accessing the key vault from an unknown IP address.

Open the Azure portal, navigate to the Resource Groups section and pick the resource group that we configured last time, which contains the Key Vault and Log Analytics resources.

image

Click your Log Analytics item to open Log Analytics.

You can then select Log Search

image

This screen allows you to create your own query or select from existing ones.

image

Selecting “All Collected Logs” will show you the logs for the last day. I’ve highlighted the areas where you can change the time period, see the query and also click on Advanced Analytics to give a richer environment for analysing your logs.

image

If you want to query just for the Key Vault Audit logs then you can use the following query:

search * | where Category=="AuditEvent"

image

This will default to a list view, but clicking the Table button will format the data in an easier to read table.

image

You can sort and filter on the column headers. This can also be achieved using the order by clause as follows:

search * |where Category=="AuditEvent"  | order by TimeGenerated desc

A blog post discussing the query language can be found here

We are interested in all calls where someone has tried to access a Secret from the key vault. For that we are looking for an AuditEvent with an OperationName of SecretGet. If we also want to restrict the columns we retrieve then we can use “project”, e.g.

search * | where Category=="AuditEvent"  and OperationName == "SecretGet"
| order by TimeGenerated desc
| project TimeGenerated, OperationName, CallerIPAddress, ResultSignature, requestUri_s

image

Now that we are familiar with writing queries we can look at alerting. I’d like to set up an alert for when the key vault is accessed from an IP address other than the one where my application is running. This can be done as follows:

search * | where Category=="AuditEvent" and CallerIPAddress != "51.140.184.51"

This IP address is actually the Azure Portal and is shown when you view the resource group that contains the key vault. I’m using this IP address so that I will actually get an alert (at the wrong time) when my application runs.

Click New Alert Rule

image

The following screen should appear

image

The Alert Target should be the Log Analytics we’ve been using and the Target Criteria (when clicked) should show the query we’ve just written

image

We need to configure the rule for when this alert should be triggered. I’m interested in when at least one attempt has been made in the last 5 minutes to access the Key Vault from an unknown location, so I set the threshold to zero and click Done. We’ve now configured the logic to determine when the event is fired. Now we need to say what we want to happen when it fires. Firstly we need to give the alert a name and description.

image

Now we need to configure how we are alerted. For this you need to create an action group. An action group allows you to define a collection of activities that will happen when the alert is fired. Click New Action Group

image

Action Types can be any of the following:

image

An action group can have multiple actions and you can select both email and SMS in a single action. Once you have created your Action Group you need to select it and then click “Create alert rule”.

image

Your alert is now set up and running. You can view/edit alerts by selecting Monitor in the Azure Portal

image

then click Alerts (preview) and you will be able to see the alerts that have fired.

image

Click Manage Rules to edit the alert.

When the alert is fired I will get an email containing the details of the alert.

Log Analytics is a powerful tool and whilst this series of posts has been related to auditing of Key Vault, we can use Log Analytics for a wide variety of log sources such as Application Insights. We can also use the same mechanism for alerting on these other log sources.

The next post is a video that shows you how to connect existing log files to log analytics

Setting up Azure Key Vault with Audit logging

Azure Key Vault is a good way to share secrets with your partners in a way that allows you to have control over the access to each of the assets in Azure. We also need to know who is accessing the resources and from where so that we can monitor for suspicious activity. This post will talk through setting up the key vault and then configuring logging to keep track of the audit information for your certificates, keys and secrets. For each application that you want to access your resources you will need to create some credentials that the application can use.

To allow an application to access key vault an App Registration needs to be added to Azure Active Directory (AAD). This effectively sets up a username and password that the application can use for credentials.

Open the Azure portal (http://portal.azure.com) and navigate to Azure Active Directory.

Click "App registrations"

clip_image002

Then "New application registration"

clip_image001

The name needs to be unique within your AD. Select Web app/API as the application type and enter a sign-on URL. If you are not building a website then enter anything in here; it might be useful to use a URL related to your existing domain with the application name appended. It doesn’t need to be a valid URL. Then click “Create”.

Once created copy the Application ID as this is equivalent to a username to be used when calling the Key Vault in code. You now need to create the password.

Click Settings then Keys

clip_image002

clip_image004

clip_image006

Enter a name in the description field and select a duration, then click Save. The new key value will be displayed. You will need to copy this as it will not be visible again once you leave this page. This will be used as the password.

clip_image002[5]

Now create the Key Vault. To do that it is a good idea to put it in a specific resource group, especially if you are creating a set of resources that the key vault is going to access or if you are going to set up third-party access. Once the Resource Group has been created, select it and add a Key Vault. When the Create Key Vault panel appears, click Access Policies, then click "Add new".

clip_image004[5]

Pick the application you just created in AAD, select Get in Secret permissions and click Save, then go back to the main Key Vault pane and click Create.

You have just given the application we created earlier access to retrieve secrets only. As you can see from the access policy, you can give the application permissions to access a combination of Keys, Secrets and Certificates, with the minimum access being Get. The Key Vault security is at the vault level and you cannot protect individual secrets at the user level. By granting only Get access on Secrets, the application will not be able to list the secrets available and will only be able to retrieve secrets it knows the names of.

Now that the Key Vault is set up and can be accessed, we want to know who is accessing the vault and from where. Out of the box this is not enabled and requires additional configuration and resources to allow us to retrieve this audit information. This is achieved by enabling diagnostic logs in the Key Vault.

Before you can enable this you need to create a new storage account in this resource group to store the logs, then add Application Insights to the resource group

clip_image002[7]

Once these have been provisioned, navigate to the Key Vault you just created & click Diagnostic logs

clip_image004[7]

Click "Turn on diagnostics"

clip_image006[6]

Select “Archive to Storage Account” and Pick the storage account you’ve just created

Select “Send to Log Analytics” and Create a new OMS workspace in your resource group

clip_image008

Once created select this for Log Analytics

clip_image009

Select the AuditEvent log and click Save.

Now any changes to the Key Vault plus any access from your application will be logged and visible via log analytics. There’s a 10 – 15 minute delay between accessing the Key Vault and the log appearing.

To add a Secret to the vault, navigate to the vault, click Secrets then Add.

clip_image010

Select Manual from the Upload options, enter a name and the secret

clip_image011

Remember the name you gave the Secret as you will need this in your code when accessing the key vault. This secret will now have a unique identifier that you will use. The one I’ve just created is:

https://recneps-vault.vault.azure.net/secrets/recnepssvsb-key

You should see in the logs this secret being created and also when it gets accessed.

Accessing the KeyVault in C# can be seen here: https://docs.microsoft.com/en-us/azure/key-vault/key-vault-use-from-web-application

The application in the example uses settings as defined below:

ClientID is the Application ID we created in the application registration in AD

ClientSecret is the key you created (that you had to save as it wasn’t visible again) as part of creating the application registration in AD.

Each Key, Secret and Certificate has a unique url which is used as the SecretURI e.g. https://recneps-vault.vault.azure.net/secrets/recnepssvsb-key
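
To show how those three settings fit together, here is a minimal sketch of reading the secret with the Microsoft.Azure.KeyVault and Microsoft.IdentityModel.Clients.ActiveDirectory (ADAL) NuGet packages. This is an illustration rather than the code from the linked article, and the class name and placeholder values are my own:

using System.Threading.Tasks;
using Microsoft.Azure.KeyVault;
using Microsoft.IdentityModel.Clients.ActiveDirectory;

public static class KeyVaultReader
{
    // ClientID from the app registration, the key you saved when creating it,
    // and the unique identifier of the secret shown above
    private const string ClientId = "<application-id>";
    private const string ClientSecret = "<application-key>";
    private const string SecretUri = "https://recneps-vault.vault.azure.net/secrets/recnepssvsb-key";

    public static async Task<string> GetSecretAsync()
    {
        var client = new KeyVaultClient(async (authority, resource, scope) =>
        {
            // Authenticate against AAD using the app registration credentials
            var context = new AuthenticationContext(authority);
            var credential = new ClientCredential(ClientId, ClientSecret);
            var token = await context.AcquireTokenAsync(resource, credential);
            return token.AccessToken;
        });

        var secret = await client.GetSecretAsync(SecretUri);
        return secret.Value;
    }
}

Because the access policy only grants Get on Secrets, this call will succeed but the same client would not be able to list the secrets in the vault.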

You now have your key vault set up with audit logging and are able to access it. My next blog post will talk you through how to access the logs and also how to set up alerting

Adding Rigour to an AzureML Web Service Deployment

As a developer I want an automated mechanism to build, test and deploy everything I do, so when my team and I came to implementing something with AzureML it was the first thing I was challenged on. We have a group of analysts who want to use AzureML to build part of our system without having to translate their requirements into a language that we developers can use to build a system in code. I’d been playing around with AzureML and deployed a few services as part of a proof of concept but hadn’t looked at how we could automate the deployment process and add some control. We didn’t want a mechanism that would allow the analysts to build and deploy a new model to production without some way to check whether what they had built was fit for purpose. After a bit of research I found that we could export both the experiment and the web service as JSON from AzureML. Exporting both the experiment and the web service definitions would allow us to version control the source. We could also import these definitions to allow us to move the experiment and web service to different subscriptions. Exporting and importing the experiment was relatively straightforward using PowerShell.

Export-AmlExperimentGraph & Import-AmlExperimentGraph

The full PowerShell to export and import the experiment is:

Get-AmlWorkspace -ConfigFile .\config-source.json
$exp = Get-AmlExperiment -ConfigFile .\config-source.json | where Description -eq '<ExperimentName>'
Export-AmlExperimentGraph -ExperimentId $exp.ExperimentId -OutputFile 'C:\experiment.json' -ConfigFile .\config-source.json

Get-AmlWorkspace -ConfigFile .\config-dest.json
Import-AmlExperimentGraph -InputFile 'C:\experiment.json' -ConfigFile .\config-dest.json

Note: This relies on two config files, one to identify the source workspace and the other to identify the destination workspace. These are in the following formats:

{
    "Location": "West Europe",
    "WorkspaceId": "<WorkspaceId>",
    "AuthorizationToken": "<AuthorizationToken>"
}

You can find the workspace id and authorization tokens by opening your AzureML workspace then clicking settings.

clip_image002

clip_image004

Exporting the experiment as JSON allows you to add it to source control and version the experiment.

Importing the experiment into a new Workspace will allow you to open the workspace and view/edit the experiment and also to manually deploy the web service. This on its own will give you some control over the web service and allow you to control how and when it gets deployed and will stop the analytics team from accidentally deploying something to production and potentially breaking the system.

It is also possible to export a deployed web service and then import it into a new subscription.

When we came to trying to export the web service we ran into a few issues. Firstly there seemed to be a number of ways to export the web service definition and they seemed to produce different JSON.

https://github.com/ritwik20/AzureML-WebServices

https://github.com/hning86/azuremlps#export-amlwebservicedefinitionfromexperiment

https://docs.microsoft.com/en-us/powershell/module/azurerm.machinelearning/export-azurermmlwebservice?view=azurermps-2.2.0

We settled on the last link but also took some missing information from the first link. The process for exporting is a little more complicated.

Firstly, the web service definition needs to be exported as a JSON file. The web service import uses Resource Manager and requires a different login mechanism from the experiment export/import. I used:

Login-AzureRmAccount -SubscriptionId <My Subscription ID>

This uses the interactive login, so if you want to automate this then you need to use a service principal.

To export the web service:

$webService = Get-AzureRmMlWebService -Name "Source Service Name" -ResourceGroupName "Source Resource Group Name" 
Export-AzureRmMlWebService -WebService $webService -OutputFile 'C:\wsexport.json' 

The JSON then needs editing to add in the new storage account and commitment plan.

The JSON can then be imported into the new subscription using New-AzureRmMlWebService.

Changing the JSON to add in the commitment plan id seemed to cause problems and I kept getting the error “Commitment Plan ID must be provided”. This error was confusing as I was including the commitment plan id in the configuration and I thought that I had it in the correct place. If you open the JSON that you export and find the storage account node then you will need to overwrite this with:

"storageAccount":{ 
     "name": "<StorageAccountName>",  
     "key": "<StorageAccountKey>" },  
"commitmentPlan": { 
      "id": "/subscriptions/0<Subscription-ID>/resourceGroups/<ResourceGroupName>/providers/Microsoft.MachineLearning/commitmentPlans/<CommitmentPlanName>"}, 

My issue was that I had the commitment id wrong rather than in the wrong place and I found the correct id using:

Get-AzureRmMlCommitmentPlan

Once I had this configured correctly the import worked without any errors using:

New-AzureRmMlWebService -Force -ResourceGroupName "New Resource Group Name" -Name "Web Service Name" -Location "West Europe" -DefinitionFile "C:\wsexport.json" 

As we’ve used PowerShell to automate the export and import, we could then easily script the config file edits and wire this into an automated test and deploy process. We can write tests that check the web service parameters have not been changed by the analytics team and that the return data is in the correct format, so we can ensure that the deployed service will at least function correctly. It’s more difficult to check whether the actual machine learning process is correct, but we will know when the interfaces are broken. We have also version controlled both the experiment and web service JSON so we can easily roll back if necessary. We decided that we needed both: to deploy, we only need the web service definition in the new subscription, so the experiment doesn’t have to be copied to a new workspace and deployed from there, but we wanted the experiment under version control too.

Using Bots For Form Filling

After watching James Mann’s talk on the bot framework at DDD I decided to look at whether the Bot framework can offer an alternative to web pages for filling in forms. The Microsoft Bot Framework is a framework that helps you build bots that can run in a variety of messaging apps such as Skype, Skype For Business, Facebook and Slack. There are a number of ways to help you build the bot and you can interface with the Cognitive Services to add intelligence to your bot using services such as LUIS.

The simplest mechanism for retrieving data from a user is to use FormFlow within the BotBuilder SDK. This streamlines the creation of a bot that collects data from a user. In order to start building Bots you will need Visual Studio 2017 installed along with the Bot Builder extensions. This quick start will help you get the prerequisites sorted. The FormFlow example creates a Bot that walks you through ordering a sandwich, and the advanced example adds optional ingredients along with some simple terms that make it easier to pick multiple items from a list.
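
To give a feel for how little code FormFlow needs, here is a cut-down sketch along the lines of the sandwich sample rather than the full example; the enums and message text are illustrative:

using System;
using Microsoft.Bot.Builder.FormFlow;

public enum SandwichOptions { BLT, BlackForestHam, BuffaloChicken, MeatballMarinara }
public enum LengthOptions { SixInch, FootLong }
public enum BreadOptions { NineGrainWheat, NineGrainHoneyOat, Italian }

[Serializable]
public class SandwichOrder
{
    // FormFlow generates the prompts, numbered choices and navigation from these fields
    public SandwichOptions? Sandwich;
    public LengthOptions? Length;
    public BreadOptions? Bread;

    public static IForm<SandwichOrder> BuildForm()
    {
        return new FormBuilder<SandwichOrder>()
            .Message("Welcome to the simple sandwich order bot!")
            .Build();
    }
}

The form is then wired up in the MessagesController by turning it into a dialog, e.g. FormDialog.FromForm(SandwichOrder.BuildForm, FormOptions.PromptInStart).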

In order to debug and test your Bot there is a local emulator that allows you to run your Bots in a local test messaging app. You will need to download and install this emulator prior to testing.

I followed the examples through and found that there is a compilation error with the sample code in the FormBuilder section:

image

This conversation on stackoverflow seems to resolve the issue.

image

When you run the Bot in the Visual Studio debugger, it creates a web service and you need to use the url of this service in the Bot emulator which is run separately.

image

I need to take the URL http://localhost:3979/, add /api/messages and use it to configure the Bot emulator

image

Now click Connect. Your Bot is now ready to test. To start the Bot you need to type a message. It doesn’t matter what you type, but “Hello” is probably a polite way to start.

image

To interact with the Bot you can either enter the number of the sandwich you want or type something and the Bot will try to make sense of what you type. If you type some text that does not appear in the list, like “chickn”, then the Bot won’t understand, but typing “chicken” will give you all the matching answers to reduce your options. You also need to type whole words, so “Chick” won’t match whereas “Chicken” will.

image

The FormBuilder example allows you to add validation and also custom code. The custom code allows you to ask for specific toppings and also say “everything except olives pickles”

image

There is also validation included in the FormBuilder example and it allows you to check to make sure that the address starts with a number
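
Continuing the sketch above, the validation looks something like this (assuming the SandwichOrder class also has a string DeliveryAddress field; this mirrors the idea in the sample rather than reproducing it exactly):

public static IForm<SandwichOrder> BuildForm()
{
    return new FormBuilder<SandwichOrder>()
        .Message("Welcome to the simple sandwich order bot!")
        .Field(nameof(SandwichOrder.DeliveryAddress),
            validate: async (state, value) =>
            {
                var result = new ValidateResult { IsValid = true, Value = value };
                var address = (value as string).Trim();
                if (address.Length > 0 && (address[0] < '0' || address[0] > '9'))
                {
                    // Reject the value and prompt the user again with this feedback
                    result.Feedback = "Address must start with a number.";
                    result.IsValid = false;
                }
                return result;
            })
        .AddRemainingFields()
        .Build();
}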

image

image

Optional stages can be added. For example if you pick Foot Long then you get a free cookie or drink

image

FormFlow is a good way to start with your Bot if you want to capture user input. The next stage for me is to work out how this can be applied to a more complex form filling scenario where there are dynamic lists which change based upon the input answers from previous questions.

Creating a Scheduled Web Job in Azure

It’s been a while since I’ve talked about web jobs, but they are still around. I needed to modify one of mine recently and configure it as a scheduled web job.

You can deploy your web job from the Azure Portal. Web jobs are part of App Services and are deployed by selecting your app service you want and clicking the web jobs service.

image

Click the Add button.

image

Enter a name for your web job and browse to a zip or exe containing your web job.

Select Triggered from the Type drop down. This changes the UI to allow you to select Scheduled.

image

The schedule is defined using a CRON expression. Azure web job CRON expressions have six fields (including seconds), so, for example, 0 */15 * * * * triggers the job every 15 minutes.

Alternatively, configure the web job as a manual trigger, then use the Azure Scheduler to trigger the web job. When you have configured your web job, click on its properties in the Azure Portal and you should see a webhook URL.

https://yourwebapp.scm.azurewebsites.net/api/triggeredwebjobs/yourwebjob/run

Now create a new scheduler Job

image

Select the method as POST and paste in the webhook URL. Once you've completed this configuration you can then configure the schedule.

image

Using the scheduler allows you to configure retry policies and also error actions

Azure Relay Hybrid Connections

If you are using the Azure App Service to host your web site and you want to connect to an on-premises server then there are a number of ways you can do this. One of the simplest is to use a hybrid connection. Hybrid connections have had a bit of a revamp lately; they used to require a BizTalk service to be created, but now you just need a Service Bus Relay. You can generally use the hybrid connection to communicate with your back-end server over TCP, and you will need to install an agent on your server (or a server that can reach the one you want to connect to) called the Hybrid Connection Manager (HCM). HCM will make an outbound connection to the Service Bus Relay over ports 80 and 443, so you are unlikely to need firewall ports changing.

Hybrid connections are limited to a specific server name and port and your code in the Azure App Service will address the service as if it was in your local network, but will only be able to connect to the machine and port configured in the Hybrid Connection. Instructions for configuring your hybrid connection and HCM are here.
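
For example, code running in the App Service addresses the on-premises endpoint by host name and port just as it would on the local network. This is only an illustrative sketch; the host name, port and path are made up and must match what you configured in the hybrid connection:

using System.Net.Http;
using System.Threading.Tasks;

public static class OnPremClient
{
    private static readonly HttpClient Client = new HttpClient();

    public static Task<string> GetStatusAsync()
    {
        // "myonpremserver" and 8080 must match the endpoint defined in the hybrid connection;
        // the request is relayed to the on-premises machine via the Service Bus Relay and HCM
        return Client.GetStringAsync("http://myonpremserver:8080/status");
    }
}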

I have set up a number of the old BizTalk-style hybrid connections and the new way is a lot easier to do. I ran into a few connectivity issues when I first created the Relay hybrid connection and there were a few things I found that helped me to work out where the issues were. Firstly, the link I provided to configure the hybrid connection has a troubleshooting section which talks about tcpping. You can run this in the debug console in Azure and it will check to see if your HCM is talking to the same relay as the one in your app service. To get to the debug console, log in to your Azure portal and select the app service you want to diagnose. Scroll down to Advanced Tools and click Go.

image

This will take you to your Kudu dashboard where you can do a lot of nice things, such as process explorer, diagnostic dumps, log streaming and debug console

The address will be https://[your web app name].scm.azurewebsites.net/

The debug console will allow you to browse and edit files directly in your application without the need to ftp. This is really useful when trying to check configuration issues.

If you want to check connectivity from your server machine to the Azure Relay then you can use telnet. You might need to add the telnet feature to Windows by using:

dism /online /Enable-Feature /FeatureName:TelnetClient (From https://www.rootusers.com/how-to-enable-the-telnet-client-in-windows-10/)

in a command prompt type

telnet [your relay namespace].servicebus.windows.net 80 or

telnet [your relay namespace].servicebus.windows.net 443

Then a blank screen denotes successful connectivity (from: https://social.technet.microsoft.com/wiki/contents/articles/2055.troubleshooting-connectivity-issues-in-the-azure-appfabric-service-bus.aspx)

You can also use PowerShell to check:

Test-Netconnection -ComputerName [your relay namespace].servicebus.windows.net -Port 443

This all checks that you are connected to the relay. The final thing you need to check is whether you can actually resolve the DNS name of the target service from the server where HCM is running. This needs to be the host name of the server and not the fully qualified name, and it also needs to match the machine name you configured in the hybrid connection. The easiest way to do this for me was to put the address of the WCF service I wanted to connect to into a browser on the machine running HCM.

Hopefully I’ve given you a few pointers to help identify why your hybrid connection does not connect.

Custom ASP.NET MVC app running in a Container on Service Fabric

In an earlier post, I talked about how to create a Docker container on Windows that housed a custom ASP.Net MVC app. What I want to show now is how you can get this container running in Service Fabric.

I created 3 identical virtual machines, all capable of running Docker as in my earlier post. Now I needed to make my three VMs into a Service Fabric cluster. These two posts explain how:

https://docs.microsoft.com/en-us/azure/service-fabric/service-fabric-get-started

https://docs.microsoft.com/en-us/azure/service-fabric/service-fabric-cluster-standalone-deployment-preparation

My 3 VMs are called sf0, sf1 & sf2 and I needed to  put these into my cluster config. I picked the ClusterConfig.Unsecure.MultiMachine config file that comes with the Service Fabric files and changed it to include my 3 VMs, so my nodes look like this:

"nodes": [
{
      "nodeName": "sf0",
      "iPAddress": "sf0",
      "nodeTypeRef": "NodeType0",
      "faultDomain": "fd:/dc1/r0",
      "upgradeDomain": "UD0"
},
{
      "nodeName": "sf1",
      "iPAddress": "sf1",
      "nodeTypeRef": "NodeType0",
      "faultDomain": "fd:/dc2/r0",
      "upgradeDomain": "UD1"
},
{
      "nodeName": "sf2",
      "iPAddress": "sf2",
      "nodeTypeRef": "NodeType0",
      "faultDomain": "fd:/dc3/r0",
      "upgradeDomain": "UD2"
}
],

I then remoted onto one of the machines and ran the following PowerShell:

.\TestConfiguration.ps1 -ClusterConfigFilePath .\ClusterConfig.json

This will check all the machines in the ClusterConfig.json file to see if they are configured correctly and report any errors. I got the following error:

Machine 'sf2' is not reachable on port 445. Check connectivity/open ports. Error: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond 192.168.1.222:445

This meant I needed to open the correct firewall ports on my VM. I got this error for all the machines in the cluster. Once I fixed this and reran the PowerShell, the tests passed which meant I could install Service Fabric on each of the machines as follows:

.\CreateServiceFabricCluster.ps1 -ClusterConfigFilePath .\ClusterConfig.json -AcceptEULA

When this completes successfully you should see something like this:

Your cluster is successfully created! You can connect and manage your cluster using Microsoft Azure Service Fabric Explorer or Powershell. To connect through Powershell, run 'Connect-ServiceFabricCluster

I could connect to Service Fabric Explorer using: http://sf0:19080

Now that I have my cluster running I needed to create a Service Fabric App and deploy it to the cluster. Make sure that you have installed the Service Fabric SDK, then run Visual Studio. Create a new Service Fabric project. When the project is created, right click on the Services node and select Add -> New Service Fabric Service.

clip_image001[4]

Then pick Guest Container and enter the name of your Docker Hub repository where your Docker image resides.

clip_image001

This will add in the necessary files to your service fabric project. If you remember from my earlier post, the website was hosted on port 8000 of the container. We need to tell service fabric about this and also we may want to map this to a different port.

If you open the container's ServiceManifest file

clip_image001[8]

Add an endpoint with the port you want Service Fabric to use to publish the website.

clip_image001[10]

In this example I’m using the same port. If you want to map the port to a different one then change this to something else, e.g. if I wanted to use http://sf0:8080 as the website then I would change the ServiceManifest to this:

image

You also need to tell service fabric about the Container port that is published. This is done in the application manifest file:

image

This is set to 8000 as that is the port exposed by the Docker container
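
As the screenshots aren't reproduced here, the relevant parts of the two manifests look roughly like this; the endpoint name is illustrative, 8080 is the port Service Fabric publishes and 8000 is the port the container exposes:

<!-- ServiceManifest.xml -->
<Resources>
  <Endpoints>
    <Endpoint Name="sdswebTypeEndpoint" Protocol="http" UriScheme="http" Port="8080" />
  </Endpoints>
</Resources>

<!-- ApplicationManifest.xml, inside the ServiceManifestImport element -->
<Policies>
  <ContainerHostPolicies CodePackageRef="Code">
    <PortBinding ContainerPort="8000" EndpointRef="sdswebTypeEndpoint" />
  </ContainerHostPolicies>
</Policies>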

Now deploy your application to service fabric. It may take a while to initialise your container as it will need to be downloaded from Docker Hub before it will run. Once it is running you should see it as Ready in the Service Fabric Explorer

image

Error updating SSL certificates in Azure App Services

I was asked to update the SSL certificates on a website that was hosted in Azure Web Apps. No problem I thought.

Go to the Azure Portal,

Select the website you want to update.

When the blade appears scroll down the left panel and select SSL Certificates

image

image

 

Remove the binding by clicking … at the end of the binding row and selecting Delete.

image

Now remove the certificate by clicking … at the end of the certificate row and selecting Delete.

This is where I got an error

image

It took a short while to resolve this.

I tried a few things like restarting the site and checking the staging slot, but I still got the error. Finally, I checked other sites in the same app service plan and found the same certificate used for another Web App (both using the same domain URL). Once I removed the binding from that site, I could delete the certificate and upload a new one. I then had to add the new bindings to both sites.

Custom ASP.NET MVC app running in a Windows Container

With the introduction of Windows Containers on Windows Server 2016 and the ability to run containers in Service Fabric, I thought it was time to investigate Windows Containers and I wanted to know how to build one that will run a web site using IIS.

As I’m new to containers, although I’ve done a very similar exercise with Docker on Linux, I decided to follow the Windows Quick Start Guide. I hit a few problems early on so I’ve put the steps I followed here:

After opening a PowerShell window as administrator I ran the following commands:

Install-Module -Name DockerMsftProvider -Repository PSGallery -Force    # ran OK
Install-Package -Name docker -ProviderName DockerMsftProvider           # had an error:
WARNING: Cannot verify the file SHA256. Deleting the file.
WARNING: C:\Users\ADMINI~1.DEV\AppData\Local\Temp\DockerMsftProvider\Docker-1-12-2-cs2-ws-beta.zip does not exist
Install-Package : Cannot find path 'C:\Users\ADMINI~1.DEV\AppData\Local\Temp\DockerMsftProvider\Docker-1-12-2-cs2-ws-beta.zip' because it does not exist.

 

Not sure what was causing this to fail but I followed the instructions to manually install (from https://github.com/OneGet/MicrosoftDockerProvider/issues/15)

Start-BitsTransfer -Source https://dockermsft.blob.core.windows.net/dockercontainer/docker-1-12-2-cs2-ws-beta.zip -Destination /docker.zip
Get-FileHash -Path /docker.zip -Algorithm SHA256
mkdir C:\Users\Administrator\AppData\Local\Temp\DockerMsftProvider\
cp .\docker.zip C:\Users\Administrator\AppData\Local\Temp\DockerMsftProvider\
cd C:\Users\Administrator\AppData\Local\Temp\DockerMsftProvider\
cp .\docker.zip Docker-1-12-2-cs2-ws-beta.zip
Install-Package -Name docker -ProviderName DockerMsftProvider -Verbose
Restart-Computer -Force

After Rebooting I tried to download and run a sample container

docker run microsoft/dotnet-samples:dotnetapp-nanoserver

but I got the following error

docker : C:\Program Files\Docker\docker.exe: error during connect: Post http://%2F%2F.%2Fpipe%2Fdocker_engine/v1.25/containers/create: open //./pipe/docker_engine: The system cannot find the file specified..
At line:1 char:1
+ docker run microsoft/dotnet-samples:dotnetapp-nanoserver
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : NotSpecified: (C:\Program File...ile specified..:String) [], RemoteException
+ FullyQualifiedErrorId : NativeCommandError

It turns out that the docker service wasn’t running after the reboot, so open services.msc and find the docker service to start it.

Running the same command again will download the image from Docker Hub, create a container from it and then run it. As this sample is a console application that just displays some output, the container runs once and then stops.

clip_image001

Every time I do docker run microsoft/dotnet-samples:dotnetapp-nanoserver it creates a new container. What I want to do is to run one that is stopped and view the output on the screen.

For this I needed to start the container I had already created. If you run

docker container ls -a

This will list all the containers that are both running and stopped and you can see from the image below that I had run docker run a number of times. Each time it tried to download the image (which was already downloaded) and then created a new container from it.

clip_image001[6]

docker container start -a 12d382ae0bd6 (-a attaches STDOUT so you can see the output)

clip_image001[8]

Now that I know how to create and start containers I wanted to build one of my own. This is easier than I first thought as there are lots of base templates stored on Docker Hub and GitHub.

https://docs.microsoft.com/en-us/virtualization/windowscontainers/samples#Application-Frameworks

https://hub.docker.com/r/microsoft/

I picked one on Docker Hub that has IIS and ASP.NET installed already, so all I needed to do afterwards was add my own website and configure IIS correctly. Using docker pull:

docker pull microsoft/aspnet (see https://hub.docker.com/r/microsoft/aspnet/)

This retrieves the template from Docker Hub and I want to use that template to install my ASP.NET MVC site and configure IIS to serve the pages on port 8000. Following the instructions here (https://docs.microsoft.com/en-us/dotnet/articles/framework/docker/aspnetmvc), I published my MVC site and copied the publish folder to my Docker machine. Then I needed to create a Dockerfile recipe to instruct Docker what to install in my image, so I created a folder that contained the Dockerfile and also the published website, as below:

clip_image001[10]

The contents of the Dockerfile are:

# The FROM instruction specifies the base image. You are
# extending the microsoft/aspnet image.
FROM microsoft/aspnet
# Next, this Dockerfile creates a directory for your application
RUN mkdir C:\sdsweb
# configure the new site in IIS.
RUN powershell -NoProfile -Command \
Import-module IISAdministration; \
New-IISSite -Name "sdsweb" -PhysicalPath C:\sdsweb -BindingInformation "*:8000:"
# This instruction tells the container to listen on port 8000.
EXPOSE 8000
# The final instruction copies the site you published earlier into the container.
ADD sdswebsource/ /sdsweb

Now I need to run this to create the image

In PowerShell, I changed directory to the folder containing my Dockerfile, then ran

docker build -t sdsweb .

This has created an image and we need to now get this running as a container

clip_image001[12]

using docker run again

docker run -d -p 8000:8000 --name sdsweb sdsweb

clip_image001[14]

My container is now running and I should be able to view the web pages in my browser on port 8000, but I need to know the IP address first

docker inspect -f "{{ .NetworkSettings.Networks.nat.IPAddress }}" sdsweb

clip_image001[16]

Now I can browse to http://172.17.97.235:8000

clip_image001[18]

I changed the default web page to show the machine name serving the pages under Getting Started. Listing the containers will show the container ID and this matches the machine name displayed on the web page

image

That’s it running in a container. There are a couple more things I’d like to do before I’ve finished. The first is to make sure that when my Windows Server restarts, my sdsweb container also starts. At the moment it will not start as I didn’t add a restart parameter when I called docker run. Adding --restart always will cause the container to restart when Windows restarts.

docker run -d -p 8000:8000 --name sdsweb --restart always sdsweb

The final thing I want to do is to be able to share this image so I’d like to push it up to docker hub

docker login - enter username and password

docker tag sdsweb recneps/sdsweb (tag the image with your Docker Hub repository name first)

docker push recneps/sdsweb

clip_image001[20]

Then to use it on another machine

docker pull recneps/sdsweb

clip_image002

In my next post I am going to look at how I can create a container that can be hosted in Service Fabric