Steve Spencer's Blog

Blogging on Azure Stuff

Service Fabric: Resolving External Service Address

I am using Azure Service Fabric to host my application, but I've deployed it on premises using a three machine cluster (running version Microsoft.Azure.ServiceFabric.WindowsServer.5.1.156.9590). It was easy to deploy: I only needed to run PowerShell on one of the nodes to configure all three. I followed the instructions here.

From Visual Studio I deployed my application, which consisted of a number of stateless services and a WCF service. When everything is running in the cluster it all works fine, but I wanted to access the WCF service from outside of the cluster. The first issue was that the actual address of the service is not known, but you can see the address if you look at the Service Fabric Explorer for the cluster. Navigating through to the application on one of the nodes returns the url of the service e.g.

net.tcp://192.168.56.122:8081/4f341989-ec72-4cd5-8778-6e11e01dc727/968d5932-935a-4773-b83b-fa99f59d9073-131148669247596041

You don't want to use this url directly as it could change depending upon the configuration of your cluster and the health of each of the nodes. Service Fabric provides a mechanism for discovering the address of the endpoint using the service partition resolver. If you are running inside the cluster then you can use the default resolver and this will return the url of the endpoint that you can connect to. However, when you are outside of the cluster you need to tell the resolver where to find the cluster.
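
For comparison, this is roughly what the in-cluster case looks like using the default resolver. It is a minimal sketch assuming the fabric uri of my WCF service and the Microsoft.ServiceFabric.Services.Client namespace:

ServicePartitionResolver resolver = ServicePartitionResolver.GetDefault();
ResolvedServicePartition partition = await resolver.ResolveAsync(
    new Uri("fabric:/ServiceFabricApp/FileStoreServiceStateless"),
    ServicePartitionKey.Singleton,
    CancellationToken.None);
// The endpoint address string contains the listener address(es) registered by the service
Console.WriteLine(partition.GetEndpoint().Address);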

Again if you look at the Service Fabric Explorer you can find out the ports used in the cluster e.g.

<ClientConnectionEndpoint Port="19000" />
<LeaseDriverEndpoint Port="9026" />
<ClusterConnectionEndpoint Port="19001" />
<HttpGatewayEndpoint Port="19080" Protocol="http" />
<ServiceConnectionEndpoint Port="9027" />
<ApplicationEndpoints StartPort="20001" EndPort="20031" />
<EphemeralEndpoints StartPort="20032" EndPort="20062" />

The example here shows how to connect to the resolver in an Azure hosted environment.

ServicePartitionResolver resolver = new ServicePartitionResolver("mycluster.cloudapp.azure.com:19000", "mycluster.cloudapp.azure.com:19001");

This example provides a list of endpoints to try on both ports 19000 and 19001. Mapping this to my environment, I used the IP address of the node on which I ran the PowerShell, which is also the node that serves the Service Fabric Explorer. I also needed to know the application name in order for the resolver to find the endpoint I was after. The code below is part of a console application that attempts to call a WCF service from outside of the cluster; the service name and the addresses I used are shown in the code.

string uri = "fabric:/ServiceFabricApp/FileStoreServiceStateless";
Binding binding = WcfUtility.CreateTcpClientBinding();
// Create a partition resolver
var serviceResolver = new ServicePartitionResolver(new string[] { "192.168.56.122:19000", "192.168.56.122:19001" });

// create a WcfCommunicationClientFactory object
// (IFileStoreService is the WCF service contract interface - substitute your own)
var clientFactory = new WcfCommunicationClientFactory<IFileStoreService>
                    (clientBinding: binding, servicePartitionResolver: serviceResolver);

var client = new ServicePartitionClient<WcfCommunicationClient<IFileStoreService>>(
                    clientFactory,
                    new Uri(uri),
                    partitionKey: Microsoft.ServiceFabric.Services.Client.ServicePartitionKey.Singleton);
 
var result = client.InvokeWithRetry(svc => svc.Channel.GetDocuments("Document", "1000848776", null));

However, when the code ran it always locked up on the call to InvokeWithRetry. On further investigation, by calling ResolveAsync first, I determined that my application was locking up when trying to resolve the address of the service. It took me a long time to figure out what was wrong, and I tried a number of different addresses and ports with no luck. It was only after running the code here, which lists all the services in a cluster, in the Visual Studio debugger that things started to work. This was confusing because I'd already tried loads of different things; the only difference was that the local development Service Fabric cluster was running. I then ran my console app and no lock up occurred. Turning the development cluster off made the console app lock up again. I moved the console app onto another computer that didn't have the development Service Fabric installed and everything worked fine.
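
For reference, the diagnostic step was simply to call ResolveAsync directly before creating the client. A rough sketch, with a cancellation token added so a hang surfaces as a cancellation rather than an indefinite lock up (the timeout value is illustrative):

using (var cts = new CancellationTokenSource(TimeSpan.FromSeconds(30)))
{
    ResolvedServicePartition partition = await serviceResolver.ResolveAsync(
        new Uri(uri),
        Microsoft.ServiceFabric.Services.Client.ServicePartitionKey.Singleton,
        cts.Token);
    Console.WriteLine("Resolved endpoint: " + partition.GetEndpoint().Address);
}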

The good thing about this is that everything seems to be working and I've learnt more about Service Fabric along the way.

How to emulate Azure Service Bus Topic Subscription Filtering in RabbitMQ

When creating a subscription to an Azure Service Bus Topic you can add a filter which will determine which messages to send to the subscription based upon the properties of the message.


This is done by passing a SqlFilter to the CreateSubscription method

e.g.

if (!_NamespaceManager.SubscriptionExists(topic, subscription))
{
    if (!String.IsNullOrEmpty(filter))
    {
        SqlFilter strFilter = new SqlFilter(filter);
        await _NamespaceManager.CreateSubscriptionAsync(topic, subscription, strFilter);
        bSuccess = true;
    }
    else
    {
        await _NamespaceManager.CreateSubscriptionAsync(topic, subscription);
        bSuccess = true;
    }
}

Where filter is a string expression over the properties that you want to filter on, e.g.

// Create a "LowMessages" filtered subscription.

SqlFilter lowMessagesFilter = new SqlFilter("MessageNumber <= 3");

namespaceManager.CreateSubscription("TestTopic","LowMessages",lowMessagesFilter);

Applying properties to messages makes it easier to configure multiple subscribers to sets of messages rather than having multiple subscribers that receive all the messages, providing you with a flexible approach to building your messaging applications.

Subscriptions are effectively individual queues that each subscriber uses to hold the messages that are relevant to that subscription.

When a message is pushed onto a Topic the Service Bus will look at all the subscriptions for the Topic and determine which messages are relevant to the subscription. If it is relevant then the subscription will receive the message into its queue. If there are no subscriptions capable of receiving the message then the message will be lost unless the topic is configured to throw an exception when there are no subscriptions to receive the message.

This approach is useful if most of the message data is stored in the properties (which are subject to a size limit of 64KB) and the body content is serialised to the same object (or the body object types are known).
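
For context, those properties are just key/value pairs set on the BrokeredMessage before it is sent to the topic. A small sketch of the sending side, assuming a MessagingFactory created from the connection string (as in the receiving code below) and the MyCustomBodyData type used there:

TopicClient topicClient = messageFactory.CreateTopicClient("TestTopic");
BrokeredMessage message = new BrokeredMessage(new MyCustomBodyData());
message.Properties["MessageNumber"] = 2; // matched by the "MessageNumber <= 3" filter above
await topicClient.SendAsync(message);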

Receiving messages on a Service Bus Subscription is as follows:

MessagingFactory messageFactory = MessagingFactory.CreateFromConnectionString(_ConnectionString);
SubscriptionClient client = messageFactory.CreateSubscriptionClient(topic, subscription);
BrokeredMessage message = await client.ReceiveAsync(new TimeSpan(0, 5, 0));
if (message != null)
{
    var properties = message.Properties;
    var body = message.GetBody<MyCustomBodyData>();
    if (processMessage != null)
    {
        // processMessage is the caller supplied handler delegate
        // do some work
    }
    message.Complete();
}

Over the past few months I have been looking at RabbitMQ and trying to apply my Service Bus knowledge, as well as looking at the differences. Routing messages based upon the message properties, rather than a routing key defined in the message, is still applicable in the RabbitMQ world and RabbitMQ is configurable enough to work in this way. RabbitMQ requires more configuration than Service Bus, but there is an exchange type called a headers exchange which can be used to route messages based upon the properties (headers) of the message.

The first thing to do is to create the exchange and then bind a queue to it using a set of filter criteria. I've been creating my exchanges with an alternate exchange so that messages that are not handled end up in a default queue. The code below creates the exchange and a queue that subscribes to messages where the ClientId property is "Client1" and the FileType property is "transaction".

// Create Header Exchange with alternate-exchange
IDictionary<String, Object> args4 = new Dictionary<String, Object>();
args4.Add("alternate-exchange", alternateExchangeNameForHeaderExchange);
channel.ExchangeDeclare(HeaderExchangeName, "headers", true, false, args4);
channel.ExchangeDeclare(alternateExchangeNameForHeaderExchange, "fanout");

// Queue for Header Exchange Client1 & transaction
Dictionary<string, object> bindingArgs = new Dictionary<string, object>();
bindingArgs.Add("x-match", "all"); // "all" = every header must match, "any" = at least one
bindingArgs.Add("ClientId", "Client1");
bindingArgs.Add("FileType", "transaction");
channel.QueueDeclare(HeaderQueueName, true, false, false, null);
channel.QueueBind(HeaderQueueName, HeaderExchangeName, "", bindingArgs);

// Queue for the Header Exchange alternate exchange (all other messages)
channel.QueueDeclare(unroutedMessagesQueueNameForHeaderExchange, true, false, false, null);
channel.QueueBind(unroutedMessagesQueueNameForHeaderExchange, alternateExchangeNameForHeaderExchange, "");

This will set up the exchange and queue in RabbitMQ, and now you can send a message to the exchange with the correct properties as follows:

IBasicProperties properties = channel.CreateBasicProperties();
properties.Headers = new Dictionary<string, object>();
properties.Headers.Add("ClientId", "Client1");
properties.Headers.Add("FileType", "transaction");

string routingkey = "header.key"; // ignored by a headers exchange; routing is driven by the headers above
var message = "Hello World";
var body = Encoding.UTF8.GetBytes(message);

channel.BasicPublish(exchange: HeaderExchangeName,
                     routingKey: routingkey,
                     basicProperties: properties,
                     body: body);

Receiving messages from the queue is as follows:

var consumer = new EventingBasicConsumer(channel);
consumer.Received += (model, ea) =>
{
    var body = ea.Body;
    var message = Encoding.UTF8.GetString(body);
    var routingKey = ea.RoutingKey;
    Byte[] FileTypeBytes = (Byte[])ea.BasicProperties.Headers["FileType"];
    Byte[] ClientIDBytes = (Byte[])ea.BasicProperties.Headers["ClientId"];
    string FileType = System.Text.Encoding.ASCII.GetString(FileTypeBytes);
    string ClientID = System.Text.Encoding.ASCII.GetString(ClientIDBytes);
    Console.WriteLine(" [x] Received '{0}':'{1}' [{2}] [{3}]",
                        routingKey,
                        message,
                        ClientID,
                        FileType);
    EventingBasicConsumer c = model as EventingBasicConsumer;
    if (c != null)
    {
        c.Model.BasicAck(ea.DeliveryTag, false);
        Console.WriteLine(" [x] Received {0} rk {1} ex {2} ct {3}", message, ea.RoutingKey, ea.Exchange, ea.ConsumerTag);
    }
};
channel.BasicConsume(queue: queueProcessorBaseName + textBox1.Text,
                        noAck: false,
                        consumer: consumer);

Again, an out of the box feature of Service Bus can also be implemented in RabbitMQ, but it is much simpler to use in Service Bus. The use of properties to route messages offers a much more flexible approach, but it does require that the body of the messages is either not used or is understood by each consumer. Service Bus offers more flexibility in the filters themselves: the filter expression can contain a variety of operators, whereas RabbitMQ matches all or any of the header values and cannot express a range.
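
To illustrate that last point, a Service Bus filter can combine comparisons into a range in a single subscription, which a headers binding cannot express. A hedged example reusing the namespaceManager from earlier:

// One subscription picks up a single client's messages within a numeric range
SqlFilter rangeFilter = new SqlFilter("ClientId = 'Client1' AND MessageNumber >= 1 AND MessageNumber <= 100");
namespaceManager.CreateSubscription("TestTopic", "Client1Range", rangeFilter);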

Why do I need Pre and Post Approval Steps in my Release Pipeline?

TFS Release Manager and Octopus Deploy both support the concept of approval steps, but why do you need both pre-release and post-release approval steps? When I first started to look at automated release tools such as TFS Release Manager I could understand the reason behind the pre-release approval. This step is your quality gate and adds some control to your process. When creating your release pipeline you will set up a number of environments (e.g. Test, UAT, Pre-production, Production) and at each stage you can make a different set of people responsible for allowing the deployment onto each environment.

When the developers have finished their new piece of code and it has been tested in their own environments, they will want to get it onto the servers so that the testers can test it in a more formal way. Currently this may involve the developers talking to the testers and then hanging around whilst the testers finish what they are doing so that they can free up the servers ready for the deployment. The developers will then deploy the software to the test environment, hopefully using some method of automation. With Release Manager, the developers can kick off an automated build and, when it has completed, it can automatically create a release ready for deployment. This release can be configured to deploy automatically to the target environment.

The developer can force the software onto the test environment without the testers being ready. The testers, for example, may be finishing off testing a previous release and need some time before they are ready to accept the software. They may also have a set of criteria that need to be met before they will accept the software onto their environment.

Adding a pre-release approval step that allows the test team to "accept" the release gives control to the test team and allows them to accept, or indeed reject, a release. This pause in the process lets the testers check that all the developer quality gates have been met and push back to the developers if they are not happy. As the deployments can be automated, the testers can also use the approval process to control when the new software is deployed into their environment, allowing them to complete their current set of tests first. It also frees up the developers, so that they are not hanging around waiting to deploy. Similarly, moving on to UAT, Pre-production or Production, a pre-release approval step can be configured with different approvers who then become the gatekeepers for each environment.

A pre-release approval step makes a lot of sense: it provides order and control to a process and removes a lot of user error from it.

So what about a post-release approval step, why would you need one? It wasn't until I started to use TFS Release Manager to automatically deploy my applications to Azure Websites that the need for a post-release approval process became clear. Once I had released my software onto the test environment, I needed a mechanism to allow the testers to reject a release if it failed testing for whatever reason. The post-release approval step gave them this power. Adding both a pre and a post release approval step for each environment allows the environment owner to accept the release into the environment when they are ready for it and when they are satisfied that the developers have done their jobs correctly. They can also control when it is ready to move to the next stage in the process. If, after completing testing, the software is ready to release to UAT, then the tester can approve the release, which pushes it to the next environment. If the tester is not happy with the release then they can reject it and the release does not move forwards. The tester can comment on the reason for rejection and the release will show red for failure on the dashboard. Adding pre and post approval steps to each environment moves the control of software releases onto the group of people who are responsible for each environment.

These approval steps can also act as a sanity check, ensuring that software releases do not accidentally get pushed onto an environment if, for example, someone kicks off the wrong build.

I've created a release pipeline for my applications which uses pre and post approval steps for releases to Test and UAT. I don't have a pre-production environment, but production utilises the staging slots feature of Azure Websites, allowing me to deploy the release to staging prior to actually going live. The production environment only has a pre-release approval step but, as it only deploys to staging, there is an additional safeguard that allows a coordinated live release when the business is ready.

Both Pre and Post release approval steps provide a useful feature to put the control of the release with the teams that are responsible for each environment. The outcome of each approval process can be visible, which also highlights if and when there are issues with the quality of the software being released.

Dead Letters with Azure Service Bus and RabbitMQ

Firstly, what are dead letters?

When a message is received in a messaging system, something tries to process it. The message is normally understood by the system and can be processed; sometimes, however, messages are not understood and can cause the receiving process to fail. The failure could be caught by the system and dealt with, but in extreme situations the message could cause the receiving process to crash. Messages that cannot be delivered, or that fail when processed, need to be removed from the queue and stored somewhere for later analysis. A message that fails in this way is called a dead letter, and the location where these dead letters reside is called a dead letter queue. Queuing systems such as Azure Service Bus, RabbitMQ and others have mechanisms to handle this type of failure. Some handle them automatically and others require configuration.

Dead letter queues are the same as any other queue except that they contain dead letters. As they are queues they can be processed in the same way as the normal queues except that they have a different address to the normal queue. I’ve already discussed Service Bus Dead Letter Queue addressing in a previous post and this is still relevant today.
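
As a quick reminder of that addressing, a subscription's dead letter queue is a sub-queue with a well known path, so it can be read with an ordinary receiver. A minimal sketch, assuming a MessagingFactory and the topic and subscription variables used elsewhere in these examples:

string deadLetterPath = SubscriptionClient.FormatDeadLetterPath(topic, subscription);
MessageReceiver deadLetterReceiver = messageFactory.CreateMessageReceiver(deadLetterPath);
BrokeredMessage deadMessage = deadLetterReceiver.Receive(TimeSpan.FromSeconds(30));
if (deadMessage != null)
{
    // DeadLetterReason and DeadLetterErrorDescription explain why the message ended up here
    Console.WriteLine(deadMessage.Properties["DeadLetterReason"]);
    deadMessage.Complete();
}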

On RabbitMQ a dead letter queue is just another queue and is addressed in the same way as any other queue. The difference is in the way the dead letter queue is set up. First you create the dead letter queue and then you attach it to the queue you want to use it with.

To set up the dead letter queue, declare a “direct” exchange and bind a queue to it:

channel.ExchangeDeclare(DeadLetterExchangeName, "direct");
channel.QueueDeclare(DeadLetterQueueName, true, false, false, null);
channel.QueueBind(DeadLetterQueueName, DeadLetterExchangeName, DeadLetterRoutingKey, null);

I've used a dead letter routing key that is related to the queue I want to use it with, plus a ".DL" suffix, e.g. Payments.Received.DL. The routing key needs to be unique so that only messages you want to go to this specific dead letter queue will be delivered to it.

Now we need to attach the dead letter queue to the correct queue, so when I created my new queue I needed to add the dead letter exchange and routing key to it:

IDictionary<String, Object> args3 = new Dictionary<String, Object>();
args3.Add("x-dead-letter-exchange", DeadLetterExchangeName);
args3.Add("x-dead-letter-routing-key", DeadLetterRoutingKey);
channel.QueueDeclare(queueName, true, false, false, args3);
channel.QueueBind(queueName, TopicName, paymentsReceivedRoutingKey);

Whilst there is a lot of flexibility with RabbitMQ, dead letter queues come out of the box with Azure Service Bus: each topic and queue has one and it is enabled by default. RabbitMQ, however, allows each topic subscription to have its own dead letter queue, which gives you finer grained control over what to do with each type of failed message.

Now we have these dead letter queues and we know how to access them, how do we get messages into them?

In Azure Service Bus there is a mechanism that will automatically put a message in the dead letter queue if it fails to be delivered 10 times (the default). However, you may wish to handle bad messages yourself in code without relying upon the system to do this for you. If a message is delivered 10 times before failure, you are using system resources every time it is processed, and those resources could be used to process valid messages. When the message is received and validation of the message has failed, or there is an error whilst processing that you have caught, then you can explicitly send the message to the dead letter queue by calling the DeadLetter method on the message object.

BrokeredMessage receivedMessage = subscriptionClient.EndReceive(result);

if (receivedMessage != null)
{
    Random rdm = new Random();
    int num = rdm.Next(100);
    Console.WriteLine("Random={0}", num);
    if (num < 10)
    {
        receivedMessage.DeadLetter("Randomly picked for deadletter", "error 123");
        Console.WriteLine("Deadlettered");
    }
    else
    {
        receivedMessage.Complete();
    }
}

My test code, above, randomly sends 10% of my messages to the dead letter queue.
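
The automatic route mentioned above is governed by the subscription's MaxDeliveryCount. A minimal sketch of setting it explicitly when creating the subscription, assuming a NamespaceManager created from the connection string:

SubscriptionDescription subscriptionDescription = new SubscriptionDescription(topic, subscription)
{
    MaxDeliveryCount = 10 // after 10 failed deliveries the message is dead lettered automatically
};
namespaceManager.CreateSubscription(subscriptionDescription);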

In RabbitMQ a message will be published to the dead letter queue if one of the following occurs (the sketch after this list shows how conditions 2 and 3 are configured):

  1. The message is rejected by calling BasicNack or BasicReject with requeue set to false
  2. The TTL (Time to Live) of the message expires
  3. The queue length limit is exceeded
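
A rough sketch of conditions 2 and 3, extending the queue declaration from earlier with illustrative values:

IDictionary<String, Object> queueArgs = new Dictionary<String, Object>();
queueArgs.Add("x-dead-letter-exchange", DeadLetterExchangeName);
queueArgs.Add("x-dead-letter-routing-key", DeadLetterRoutingKey);
queueArgs.Add("x-message-ttl", 60000); // messages older than 60 seconds are dead lettered
queueArgs.Add("x-max-length", 1000);   // when the queue exceeds 1000 messages the oldest are dead lettered
channel.QueueDeclare(queueName, true, false, false, queueArgs);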

I've written a similar piece of test code for RabbitMQ:

var consumer = new EventingBasicConsumer(channel);
consumer.Received += (model, ea) =>
{
    var body = ea.Body;
    var message = Encoding.UTF8.GetString(body);
    Random random = new Random((int)DateTime.Now.Ticks);
    int randomNumber = random.Next(0, 100);
    if (randomNumber > 30)
    {
        channel.BasicAck(ea.DeliveryTag, false);
        Console.WriteLine(" [x] Received {0} rk {1} ex {2} ct {3}", message, ea.RoutingKey, ea.Exchange, ea.ConsumerTag);
    }
    else
    {
        if (randomNumber > 10)
        {
            channel.BasicNack(ea.DeliveryTag,false, true);
            Console.WriteLine(" [xxxxx] NAK {0} rk {1} ex {2} ct {3}", message, ea.RoutingKey, ea.Exchange, ea.ConsumerTag);
        }
        else
        {
            Console.WriteLine(" [xxxxx] DeadLetter {0} rk {1} ex {2} ct {3}", message, ea.RoutingKey, ea.Exchange, ea.ConsumerTag);
            channel.BasicNack(ea.DeliveryTag, false, false);
        }
    }
    Thread.Sleep(200);
};
channel.BasicConsume(queue: "hello",
                        noAck: false,
                        consumer: consumer);

If you look at the code you will see that there are two places where BasicNack is called and only one of them sends the message to the dead letter queue. BasicNack takes three parameters and the last one is "requeue". Setting requeue to true will put the message back on the originating queue, whereas setting requeue to false will publish the message on the dead letter queue.

Both RabbitMQ and Service Bus have the dead letter queue concept and can be used in a similar way. Service Bus has one configured by default and has both an automatic and a manual mechanism for publishing messages to the dead letter queue. RabbitMQ requires more configuration and does not have the same automation for dead lettering, but it can be configured with more flexibility.

System.Web.Mvc not found after deploying to Azure Web Apps using Release Manager

I'm currently evaluating Release Manager in Visual Studio Team Services and I am using it to deploy websites to Azure Web Apps. I recently tried to deploy an ASP.NET MVC 4 application and ran into some issues.

I've created a build that packages and zips up my web application, which runs successfully. I've linked a release pipeline to this build and I can deploy to my test Azure site without any errors, but when I try and run the web application I get the following error:

Could not load file or assembly 'System.Web.Mvc, Version=4.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35' or one of its dependencies. The system cannot find the file specified.


I'm using Visual Studio 2013 with MVC as a NuGet package. Looking at the properties of System.Web.Mvc I can see that it is set to Copy Local = True.


I tried a few different things to get the assembly to be copied, such as redoing the NuGet install, and eventually I toggled Copy Local to False, saved the project file and then set it back to True. When I looked at the diff of the project file I found an additional property on the reference (the <Private> element, which is how Copy Local is persisted in the project file).


This seemed to fix the build. When I checked the change in and rebuilt, System.Web.Mvc appeared in the zip file. The build was then released to Azure and the web app worked correctly.

Unhandled Messages with Azure Service Bus and RabbitMQ

One of the requirements for our messaging system is to be able to build a system to process messages and either

  1. Have a default handler and then add custom handlers as and when they are required without needing to recode the main system.
  2. Be notified if a message is put onto a topic and there isn’t a process to handle the message.

In RabbitMQ this is relatively straightforward: it requires creating an alternate exchange, adding it as a property to your main exchange and then creating a queue to service the alternate exchange.

 

IDictionary<String, Object> args2 = new Dictionary<String, Object>();
args2.Add("alternate-exchange", alternateExchangeName);
channel.ExchangeDeclare(mainExchangeName, "direct", false, false, args2);
channel.ExchangeDeclare(alternateExchangeName, "fanout");

// Adds a queue bound to the unhandled messages exchange
channel.QueueDeclare(unroutedMessagesQueueName, true, false, false, null);
channel.QueueBind(unroutedMessagesQueueName, alternateExchangeName, "");

Now when a message is published on the main exchange and there is no subscription to handle it, the message will automatically appear on the unrouted messages queue. This solution solves both of the scenarios we were looking for.

I was interested, however, in understanding how to do this with Azure Service Bus and, whilst it is possible, it isn't as straightforward and requires some code to set up. Topics can be configured to throw an exception if there is no subscription available to process a message when it is sent, so when the topic is created it needs to be configured to enable this exception to be thrown:

NamespaceManager namespaceManager =
               NamespaceManager.CreateFromConnectionString(_ConnectionString);

TopicDescription td = new TopicDescription(topic)
{
    EnableFilteringMessagesBeforePublishing = true
};

await namespaceManager.CreateTopicAsync(td);

Now when a message is sent we need to handle the exception and do something with the message. This is the difference between RabbitMQ and Service Bus: in RabbitMQ the message will automatically end up in the unhandled messages queue; in Service Bus we need to add it to the unhandled messages queue ourselves when the message is sent. This means that each message producer will need to handle the exception:

try
{
    client.Send(message);
}
catch (NoMatchingSubscriptionException ex)
{
    // Do something here to process the unhandled message
    // Probably put it on an unhandled message queue
}

Note, however, that if you had a subscription that was a catch-all (for example, logging all the messages) then unhandled messages would not appear, as they are already being handled by the catch-all subscription.
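
For completeness, a catch-all subscription of that kind is just one created with a filter that matches everything. A minimal sketch, reusing the namespaceManager and topic from above (the subscription name is illustrative):

// A TrueFilter (equivalent to "1=1") matches every message, so nothing is ever left unrouted
await namespaceManager.CreateSubscriptionAsync(topic, "AuditAll", new TrueFilter());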

PowerShell DSC Composite Resources

When working with PowerShell DSC your scripts often get big and difficult to follow. If you are not careful you will end up copying and pasting configuration. I don't like copy/paste coding, so I was looking for a mechanism that would allow me to reuse my DSC scripts. I came across composite resources.

Composite resources look very similar to your main DSC configuration, but along with parameters they allow you to write reusable configuration. The following blog post has a good introduction to composite resources:

http://blogs.msdn.com/b/powershell/archive/2014/02/25/reusing-existing-configuration-scripts-in-powershell-desired-state-configuration.aspx

I followed this post but I had a few issues trying to get my composite resource to be recognised by my DSC configuration. The main issue is that the composite resource requires a very specific folder structure in order for it to be recognised. It then needs copying to somewhere on the DSC module path; on my computer this was C:\Program Files\WindowsPowerShell\Modules. The structure is as follows:

 

C:\Program Files\WindowsPowerShell\Modules
    MyModule
        MyModule.psd1
        DSCResources
            MyCompositeResource
                MyCompositeResource.psd1
                MyCompositeResource.schema.psm1

MyModule.psd1 and MyCompositeResource.psd1 are manifest files created using New-ModuleManifest. This effectively creates a GUID for the module and the resource. Once these files are created, open up MyCompositeResource.psd1 and edit the following line:

RootModule = 'MyCompositeResource.schema.psm1'

To check to see if the module is configured and structured correctly, go to PowerShell and type:

Get-DscResource -Name MyCompositeResource

If it is configured correctly then PowerShell will return details of the module

ImplementedAs   Name                  Module     Properties
-------------   ----                  ------     ----------
Composite       MyCompositeResource   MyModule   {ServiceName, exeFullPath, sourcePath, destinat...

If you want to use the composite resource on a pull server then the whole MyModule folder needs to be zipped and a checksum file created, then copied to the modules folder on your pull server where all the other modules reside.

This worked well until I added a Script resource to my composite resource. I was creating a composite resource to install a set of Windows services, and the process I was following required me to uninstall and then reinstall each service using a script. The script block worked fine until I had multiple services using the same composite resource; I then started getting errors when creating the MOF file, complaining that I had duplicate keys for my script blocks:

“Add-NodeKeys : The key properties combination 'some script' is duplicated for keys 'GetScript,SetScript,TestScript' of resource 'Script' in node 'nodename'. Please make sure key properties are unique for each resource in a node”

After a bit of searching I found this post: https://www.briantist.com/how-to/use-duplicate-dsc-script-resources-in-loop/ which explains how to resolve the problem. I copied the Replace-Using script to the top of my composite resource file and then piped the GetScript, TestScript and SetScript blocks to Replace-Using

e.g.

Script Service.UrlAcl
{
    GetScript = {$using:ServiceName + "UrlAclGet"} |Replace-Using
    TestScript =
    {
        if ([String]::IsNullOrEmpty($using:UrlAcl))
        {
            Write-Verbose -Message "urlacl not configured in parameters"
            return $true
        }
        else
        {
            [String] $resp = (netsh http show urlacl url=$using:UrlAcl | findstr -i $using:UrlAcl)
            Write-Verbose -Message "netsh returned $resp for url $using:UrlAcl"
            if ([string]::IsNullOrEmpty($resp) -OR ($resp.IndexOf($using:UrlAcl) -eq -1))
            {
                # The url is not registered
                Write-Verbose -Message "urlacl=$using:UrlAcl not registered"
                return $false
            }
            else
            {
                Write-Verbose -Message "urlacl=$using:UrlAcl IS registered"
                return $true
            }
        }
    } |Replace-Using
    SetScript =
    {
        $fullun = $using:un
        $domainpos = $fullun.IndexOf("\")
        if ($domainpos -ne -1)
        {
            $fullun = $fullun.Substring($domainpos + 1)
        }
        Write-Verbose -Message "Setting urlacl= $using:UrlAcl for $fullun"
        netsh http add urlacl url=$using:UrlAcl user=$fullun
    } |Replace-Using
    DependsOn = "[Script]Service.Install"
}

I now have a working composite resource that I can add to my pull server to configure Windows services.

Unlock The Door Demo Software on GitHub

If you attended my DDD East Anglia talk “A Raspberry Pi2, Azure ML and Project Oxford to unlock that door!”, where I integrate a Raspberry Pi running Windows 10 IoT Core with Service Bus, Project Oxford for face recognition and a Windows Store app that takes my picture and hopefully unlocks my door (yes, I did bring a door with me), then thanks for attending and for your nice comments.

I have started to put my code up on GitHub. The code for the Raspberry Pi is already there - https://github.com/sdspencer-mvp/RaspberryPi2-UnlockTheDoor. More will appear later as I tidy it up and remove all my config secrets.

I will be repeating this talk at Smart Devs in Hereford on 12 October 2015 and again at DDD North in Sunderland on 24 October 2015.

Windows 10 IoT Core New Release

I've just upgraded my Raspberry Pi 2 to Windows 10 IoT Core build 10531.0 (download, release notes). It fixes an issue I've been having with setting the application that runs when the Pi first starts up. Prior to this release my application would start up the first time, then shut down and be replaced by the default app, and it would then not start up at power up again. Now my application starts up every time I power on my Raspberry Pi.

It is also possible to set the computer name and set the administrator password from the Raspberry Pi administration website. Previously this was done using PowerShell.

In order to navigate to the administration page you must first know either the machine name or IP address of your Raspberry Pi. This can be found in the Windows 10 IoT Core Watcher application that runs after you have installed the IoT Core SDK. To access the admin website, either enter the address into a browser (http://<ipaddressornameofPi>:8080) or right click on the Pi in the IoT Core Watcher application and select "Web Browse Here". You will need to enter the username Administrator plus your password to access the site.


Here you can enter a new device name (machine name) as well as change the password. A reboot will be required if you change the name.

In order to set the start-up app, click the Apps link on the menu panel.


You will need to ensure that you have first deployed your application to the Raspberry Pi. If you have debugged your application using Visual Studio then a debug version will already have been installed on the Raspberry Pi.

From the Installed Apps drop down, select your application and click the Set Default button. Your application should start and replace the Default App in the running apps list. You can check this by clicking reboot or cycling the power to the Raspberry Pi; your app should start up after the Raspberry Pi has booted.

Adding Multi-Factor Authentication to ADFS

I've been investigating how to wire up AD to ADFS and, thanks to my friend James, I was pointed in the direction of multi-factor authentication. The post here explains how to add multi-factor authentication (MFA) to ADFS. There were, however, a couple of areas that were not clear and needed additional research:

  1. How to get a config file into the MFA provider
  2. How to send additional claims from the MFA provider
  3. How to customise the ADFS MFA portal pages

Adding configuration to the MFA provider is handled in the OnAuthenticationPipelineLoad method in the AuthenticationAdapter class. The configData parameter contains a Data property, which is a file stream that gives you access to the config file. The config file can be anything you want, but you need to add it when you register your plugin with ADFS. Plugin registration is done in PowerShell and you need to add the configuration as follows:

Register-AdfsAuthenticationProvider -TypeName $typeName -Name "MFA_MyProvider" -Verbose -ConfigurationFilePath c:\mfa\config.xml (see here)

In your OnAuthenticationPipelineLoad method you need to process the config file:

public void OnAuthenticationPipelineLoad(IAuthenticationMethodConfigData configData)
{
    if (configData != null)
    {
        if (configData.Data != null)
        {
            // load the config file
            using (StreamReader reader = new StreamReader(configData.Data, Encoding.UTF8))
            {
                try
                {
                    string config = reader.ReadToEnd();

                    // Read your config here
                }
                catch (IOException)
                {
                    // handle or log a failure to read the configuration
                }
            }
        }
    }
}

 

Sending additional claims is achieved in the TryEndAuthentication method of the AuthenticationAdapter class. It should already be returning the authentication method as an array of claims through the claims out parameter; you can add additional claims to this array. You will need to add rules in ADFS to pass these claims through to the application if they are required.
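
A hedged sketch of what that can look like (the second claim type is made up for illustration; the authentication method value shown is the OTP one commonly used in MFA adapter samples):

public IAdapterPresentation TryEndAuthentication(IAuthenticationContext context, IProofData proofData,
    HttpListenerRequest request, out Claim[] claims)
{
    claims = new[]
    {
        // the mandatory authentication method claim
        new Claim("http://schemas.microsoft.com/ws/2008/06/identity/claims/authenticationmethod",
                  "http://schemas.microsoft.com/ws/2012/12/authmethod/otp"),
        // an additional custom claim that ADFS can pass through to the application with a claim rule
        new Claim("http://schemas.example.com/claims/mfaresult", "approved")
    };
    return null; // returning null tells ADFS that authentication succeeded
}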

Customising the MFA portal is done through PowerShell; details are found here:

http://thinketg.com/adfs-3-0-logon-page-customization/ & https://technet.microsoft.com/en-us/library/dn280950.aspx