Steve Spencer's Blog

Blogging on Azure Stuff

How to emulate Azure Service Bus Topic Subscription Filtering in RabbitMQ

When creating a subscription to an Azure Service Bus Topic you can add a filter which will determine which messages to send to the subscription based upon the properties of the message.


This is done by passing a SqlFilter to the CreateSubscription method, e.g.

if (!_NamespaceManager.SubscriptionExists(topic, subscription))
{
    if (!String.IsNullOrEmpty(filter))
    {
        SqlFilter strFilter = new SqlFilter(filter);
        await _NamespaceManager.CreateSubscriptionAsync(topic, subscription, strFilter);
        bSuccess = true;
    }
    else
    {
        await _NamespaceManager.CreateSubscriptionAsync(topic, subscription);
        bSuccess = true;
    }
}

Where filter is a string containing the filter expression based on the message properties, e.g.

// Create a "LowMessages" filtered subscription.
SqlFilter lowMessagesFilter = new SqlFilter("MessageNumber <= 3");
namespaceManager.CreateSubscription("TestTopic", "LowMessages", lowMessagesFilter);

Applying properties to messages makes it easier to configure multiple subscribers to sets of messages rather than having multiple subscribers that receive all the messages, providing you with a flexible approach to building your messaging applications.

Subscriptions are effectively individual queues that each subscriber uses to hold the messages that are relevant to that subscription.

When a message is pushed onto a Topic, the Service Bus will look at all the subscriptions for the Topic and determine which of them the message is relevant to. If it is relevant then the subscription will receive the message into its queue. If no subscription matches the message then the message will be lost, unless the topic is configured to throw an exception when there are no subscriptions able to receive the message.

This approach is useful if most of the message data is stored in the properties (which are subject to a combined size limit of 64KB) and the body content is serialised to the same object type (or the body object types are known to each consumer).
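For context, this is roughly how a message with properties is sent to the topic. This is a minimal sketch; MyCustomBodyData is the body type used in the receive example below, and the MessageNumber value is just an example to match the filter above.

MessagingFactory messageFactory = MessagingFactory.CreateFromConnectionString(_ConnectionString);
TopicClient topicClient = messageFactory.CreateTopicClient(topic);
BrokeredMessage message = new BrokeredMessage(new MyCustomBodyData());
// These properties are what the subscription filters (e.g. "MessageNumber <= 3") act upon
message.Properties["MessageNumber"] = 2;
await topicClient.SendAsync(message);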

Receiving messages on a Service Bus Subscription is as follows:

MessagingFactory messageFactory = MessagingFactory.CreateFromConnectionString(_ConnectionString);
SubscriptionClient client = messageFactory.CreateSubscriptionClient(topic, subscription);
BrokeredMessage message = await client.ReceiveAsync(new TimeSpan(0, 5, 0));
if (message != null)
{
    IDictionary<string, object> properties = message.Properties;
    MyCustomBodyData body = message.GetBody<MyCustomBodyData>();
    if (processMessage != null)
    {
        // do some work with the message
    }
    message.Complete();
}

Over the past few months I have been looking at RabbitMQ and trying to apply my Service Bus knowledge, as well as looking at the differences. Routing messages based upon the message properties rather than a routing key defined in the message is still applicable in the RabbitMQ world, and RabbitMQ is configurable enough to work in this way. RabbitMQ requires more configuration than Service Bus, but it has an exchange type called a headers exchange which can be used to route messages based upon the properties (headers) of the message.

The first thing to do is to create the exchange and then bind a queue to it with a set of filter criteria. I’ve been creating my exchanges with an alternate exchange so that messages that are not matched by any binding end up in a default queue rather than being lost. The code below creates the exchange, plus a queue that subscribes to messages where the ClientId property is “Client1” and the FileType property is “transaction”:

// Create Header Exchange with alternate-exchange
IDictionary<String, Object> args4 = new Dictionary<String, Object>();
args4.Add("alternate-exchange", alternateExchangeNameForHeaderExchange);
channel.ExchangeDeclare(HeaderExchangeName, "headers", true, false, args4);
channel.ExchangeDeclare(alternateExchangeNameForHeaderExchange, "fanout");

// Queue for Header Exchange Client1 & transaction
Dictionary<string, object> bindingArgs = new Dictionary<string, object>();
bindingArgs.Add("x-match", "all"); // "all" = every header must match, "any" = at least one
bindingArgs.Add("ClientId", "Client1");
bindingArgs.Add("FileType", "transaction");
channel.QueueDeclare(HeaderQueueName, true, false, false, null);
channel.QueueBind(HeaderQueueName, HeaderExchangeName, "", bindingArgs);

// Queue for the alternate exchange (all other messages)
channel.QueueDeclare(unroutedMessagesQueueNameForHeaderExchange, true, false, false, null);
channel.QueueBind(unroutedMessagesQueueNameForHeaderExchange, alternateExchangeNameForHeaderExchange, "");

This will set up the exchange and queues in RabbitMQ, and you can now send a message to the exchange with the matching headers as follows:

IBasicProperties properties = channel.CreateBasicProperties();
properties.Headers = new Dictionary<string, object>();
properties.Headers.Add("ClientId", "Client1");
properties.Headers.Add("FileType", "transaction");

// The routing key is ignored by a headers exchange; routing is done on the headers above
string routingkey = "header.key";
var message = "Hello World";
var body = Encoding.UTF8.GetBytes(message);

channel.BasicPublish(exchange: HeaderExchangeName,
                     routingKey: routingkey,
                     basicProperties: properties,
                     body: body);

Receiving messages from the queue is as follows:

var consumer = new EventingBasicConsumer(channel);
consumer.Received += (model, ea) =>
{
    var body = ea.Body;
    var message = Encoding.UTF8.GetString(body);
    var routingKey = ea.RoutingKey;
    Byte[] FileTypeBytes = (Byte[])ea.BasicProperties.Headers["FileType"];
    Byte[] ClientIDBytes = (Byte[])ea.BasicProperties.Headers["ClientId"];
    string FileType = System.Text.Encoding.ASCII.GetString(FileTypeBytes);
    string ClientID = System.Text.Encoding.ASCII.GetString(ClientIDBytes);
    Console.WriteLine(" [x] Received '{0}':'{1}' [{2}] [{3}]",
                        routingKey,
                        message,
                        ClientID,
                        FileType);
    EventingBasicConsumer c = model as EventingBasicConsumer;
    if (c != null)
    {
        c.Model.BasicAck(ea.DeliveryTag, false);
        Console.WriteLine(" [x] Received {0} rk {1} ex {2} ct {3}", message, ea.RoutingKey, ea.Exchange, ea.ConsumerTag);
    }
};
channel.BasicConsume(queue: queueProcessorBaseName + textBox1.Text,
                        noAck: false,
                        consumer: consumer);

Again, a feature that comes out of the box with Service Bus can also be implemented in RabbitMQ, but it is much simpler to use in Service Bus. Routing on message properties offers a more flexible approach than routing keys, but it does require that the body of the messages is either not used or is understood by each consumer. Service Bus filters are more expressive, as the filter expression can contain a variety of operators, whereas RabbitMQ only matches all or any of the header values exactly and cannot match a range.
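For comparison, a Service Bus filter roughly equivalent to the RabbitMQ header binding above would look something like this (the subscription name is just an example):

SqlFilter clientTransactionFilter = new SqlFilter("ClientId = 'Client1' AND FileType = 'transaction'");
await _NamespaceManager.CreateSubscriptionAsync(topic, "Client1Transactions", clientTransactionFilter);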

Why do I need Pre and Post Approval Steps in my Release Pipeline?

TFS Release Manager and Octopus Deploy both support the concept of approval steps, but why do you need both pre-release and post-release approval steps? When I first started to look at automated release tools such as TFS Release Manager, I could understand the reason behind the pre-release approval. This step is your quality gate and adds some control into your process. When creating your release pipeline you will set up a number of environments (e.g. Test, UAT, Pre-production, Production) and at each stage you can make a different set of people responsible for allowing the deployment onto that environment.

When the developers have finished their new piece of code and it has been tested in their own environments, they will want to get it onto the servers so that the testers can test it in a more formal way. Currently this may involve the developer talking to the testers and hanging around whilst the testers finish what they are doing so that they can free up the servers ready for the deployment. The developers will then deploy the software to the test environment, hopefully using some method of automation. With release manager, the developers can kick off an automated build and when it is completed it can automatically create a release ready for deployment. This release can be configured to deploy automatically to the target environment.

The developer can force the software onto the test environment without the tester being ready. The testers, for example, may be finishing off testing a previous release and need some time before they are ready to accept the software. They may also have a set of criteria that need to be met before they will accept the software onto their environment.

Adding a pre-release approval step that allows the test team to “accept” the release gives control to the test team and allows them to accept, or indeed reject, a release. This pause in the process allows the testers to check that all the developer quality gates have been met and to push back to the developers if they are not happy. As the deployments can be automated, the testers can also use the approval process to control when the new software is deployed into their environment, allowing them to complete their current set of tests first. It also frees up the developers, so that they are not hanging around waiting to deploy. Similarly, moving on to UAT, Pre-production or Production, a pre-release approval step can be configured with different approvers who then become the gate keepers for each environment.

A pre-release approval step makes a lot of sense; it provides order and control to the process and removes a lot of user error from it.

So what about a post-release approval step, why would you need one? It wasn’t until I started to use TFS Release Manager to automatically deploy my applications to Azure Websites that the need for a post-release approval process became clear. Once I had released my software onto the test environment, I needed a mechanism to allow the testers to reject a release if it failed testing for whatever reason. The post-release approval step gives them this power. Adding both a pre and post release approval step for each environment allows the environment owner to accept the release into the environment when they are ready for it and when they are satisfied that the developers have done their jobs correctly. They can also control when it is ready to move to the next stage in the process. If, after completing testing, the software is ready to release to UAT then the tester can approve the release, which pushes it to the next environment. If the tester is not happy with the release then they can reject it and the release does not move forwards. The tester can comment on the reason for rejection and the release will show red for failure on the dashboard. Adding pre and post approval steps to each environment moves the control of software releases for each environment to the group of people who are responsible for what happens there.

Using these approval steps can also act as a sanity check to ensure that software releases do not accidentally get pushed onto an environment if, for example, someone kicks off the wrong build.

I’ve created a release pipeline for my applications which uses pre and post approval steps for releases to Test and UAT. I don’t have a pre-production environment, but production utilises the staging slots feature of Azure Websites to allow me to deploy the release to staging prior to actually going live. The production environment only has a pre-release approval step, but as it is only going to staging, there is an additional safeguard to allow a coordinated live release when the business is ready.

Both Pre and Post release approval steps provide a useful feature to put the control of the release with the teams that are responsible for each environment. The outcome of each approval process can be visible, which also highlights if and when there are issues with the quality of the software being released.

Dead Letters with Azure Service Bus and RabbitMQ

Firstly, what are dead letters?

When a message is received in a messaging system, something tries to process it. The message is normally understood by the system and can be processed, but sometimes messages are not understood and can cause the receiving process to fail. The failure could be caught by the system and dealt with, but in extreme situations the message could cause the receiving process to crash. Messages that cannot be delivered, or that fail when processed, need to be removed from the queue and stored somewhere for later analysis. A message that fails in this way is called a dead letter, and the location where these dead letters reside is called a dead letter queue. Queuing systems such as Azure Service Bus, RabbitMQ and others have mechanisms to handle this type of failure. Some handle them automatically and others require configuration.

Dead letter queues are the same as any other queue except that they contain dead letters. As they are queues, they can be processed in the same way as normal queues; they just have a different address. I’ve already discussed Service Bus dead letter queue addressing in a previous post and this is still relevant today.
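As a quick reminder, a subscription’s dead letter queue is addressed by formatting its path and creating a receiver on it. A minimal sketch using the same Microsoft.ServiceBus.Messaging SDK as the rest of this post (messageFactory, topic and subscription are as in the earlier examples):

string deadLetterPath = SubscriptionClient.FormatDeadLetterPath(topic, subscription);
MessageReceiver deadLetterReceiver = messageFactory.CreateMessageReceiver(deadLetterPath);
BrokeredMessage deadLetter = await deadLetterReceiver.ReceiveAsync(TimeSpan.FromSeconds(30));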

On RabbitMQ a dead letter queue is just another queue and is addressed in the same way as any other queue. The difference is in the way the dead letter queue is set up. First you create a dead letter queue and then you attach it to the queue you want to use it with.

To set up the dead letter queue, declare a “direct” exchange and bind a queue to it:

channel.ExchangeDeclare(DeadLetterExchangeName, "direct");
channel.QueueDeclare(DeadLetterQueueName, true, false, false, null);
channel.QueueBind(DeadLetterQueueName, DeadLetterExchangeName, DeadLetterRoutingKey, null);

I’ve used a dead letter routing key that is related to the queue I want to use it with, plus a “DL” suffix. The routing key needs to be unique so that only the messages you want to go to this specific dead letter queue are delivered to it, e.g. Payments.Received.DL

Now we need to attach the dead letter queue to the correct queue, so when I create my new queue I add the dead letter exchange and routing key as arguments:

IDictionary<String, Object> args3 = new Dictionary<String, Object>();
args3.Add("x-dead-letter-exchange", DeadLetterExchangeName);
args3.Add("x-dead-letter-routing-key", DeadLetterRoutingKey);
channel.QueueDeclare(queueName, true, false, false, args3);
channel.QueueBind(queueName, TopicName, paymentsReceivedRoutingKey);

Whilst there is a lot of flexibility with RabbitMQ, dead letter queues come out of the box with Azure Service Bus. Each queue and topic subscription has one, enabled by default. RabbitMQ, however, allows each queue to have its own dead letter exchange and queue, which gives you finer grained control over what to do with each type of failed message.

Now that we have these dead letter queues and we know how to access them, how do we get messages into them?

In Azure Service Bus there is a mechanism that will automatically put a message in the dead letter queue if it fails to be delivered 10 times (the default). However, you may wish to handle bad messages yourself in code without relying upon the system to do this for you. If a message is delivered 10 times before failing, you are using system resources each time it is processed, and those resources could be used to process valid messages. When the message is received and validation of the message has failed, or there is an error whilst processing that you have caught, you can explicitly send the message to the dead letter queue by calling the DeadLetter method on the message object.

BrokeredMessage receivedMessage = subscriptionClient.EndReceive(result);

if (receivedMessage != null)
{
    Random rdm = new Random();
    int num = rdm.Next(100);
    Console.WriteLine("Random={0}", num);
    if (num < 10)
    {
        receivedMessage.DeadLetter("Randomly picked for deadletter", "error 123");
        Console.WriteLine("Deadlettered");
    }
    else
    {
        receivedMessage.Complete();
    }
}

My test code, above, randomly sends 10% of my messages to the dead letter queue.

In RabbitMQ a message will be published to the dead letter queue if one of the following occurs (the TTL and queue length limit can be set as arguments when declaring the queue, as shown in the sketch after this list):

  1. The message is rejected by calling BasicNack or BasicReject
  2. The TTL (Time to Live) expires
  3. The queue length limit is exceeded
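A minimal sketch of declaring a queue that combines the dead letter arguments with a message TTL and a length limit (the values are just examples):

IDictionary<String, Object> queueArgs = new Dictionary<String, Object>();
queueArgs.Add("x-dead-letter-exchange", DeadLetterExchangeName);
queueArgs.Add("x-dead-letter-routing-key", DeadLetterRoutingKey);
queueArgs.Add("x-message-ttl", 30000); // messages expire and are dead lettered after 30 seconds
queueArgs.Add("x-max-length", 1000);   // oldest messages are dead lettered once the queue holds 1000
channel.QueueDeclare(queueName, true, false, false, queueArgs);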

I’ve written a similar piece of test code for RabbitMQ:

var consumer = new EventingBasicConsumer(channel);
consumer.Received += (model, ea) =>
{
    var body = ea.Body;
                       
    var message = Encoding.UTF8.GetString(body);
    Random random = new Random((int)DateTime.Now.Ticks);
    int randomNumber = random.Next(0, 100);
    if (randomNumber > 30)
    {
        channel.BasicAck(ea.DeliveryTag, false);
        Console.WriteLine(" [x] Received {0} rk {1} ex {2} ct {3}", message, ea.RoutingKey, ea.Exchange, ea.ConsumerTag);
    }
    else
    {
        if (randomNumber > 10)
        {
            channel.BasicNack(ea.DeliveryTag,false, true);
            Console.WriteLine(" [xxxxx] NAK {0} rk {1} ex {2} ct {3}", message, ea.RoutingKey, ea.Exchange, ea.ConsumerTag);
        }
        else
        {
            Console.WriteLine(" [xxxxx] DeadLetter {0} rk {1} ex {2} ct {3}", message, ea.RoutingKey, ea.Exchange, ea.ConsumerTag);
            channel.BasicNack(ea.DeliveryTag, false, false);
        }
    }
    Thread.Sleep(200);
};
channel.BasicConsume(queue: "hello",
                        noAck: false,
                        consumer: consumer);

If you look at the code you will see that there are two places where BasicNack is called and only one of them sends the message to the dead letter queue. BasicNack takes three parameters and the last one is “requeue”. Setting requeue to true puts the message back on the originating queue, whereas setting requeue to false publishes the message to the dead letter exchange.

Both RabbitMQ and Service Bus have the dead letter queue concept and they can be used in a similar way. Service Bus has one configured by default and has both an automatic and a manual mechanism for publishing messages to the dead letter queue. RabbitMQ requires more configuration and does not have the same automation for dead lettering, but it can be configured with more flexibility.

System.Web.Mvc not found after deploying to Azure Web Apps using Release Manager

I’m currently evaluating Release Manager in Visual Studio Team Services and I am using it to deploy websites to Azure Web Apps. I recently tried to deploy an ASP.NET MVC 4 application and ran into some issues.

I’ve created a build that packages and zips up my web application, which runs successfully. I’ve linked a release pipeline to this build and I can deploy to my test Azure site without any errors, but when I try and run the web application I get the following error:

Could not load file or assembly 'System.Web.Mvc, Version=4.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35' or one of its dependencies. The system cannot find the file specified.


I’m using Visual Studio 2013 with MVC as a NuGet package. Looking at the properties of System.Web.Mvc I can see that it is set to Copy Local = True.


I tried a few different things to try to get the assembly to be copied, such as redoing the NuGet install, and eventually I toggled Copy Local to False, saved the project file and then set it back to True. When I looked at the diff of the project file I found that an additional property had been added to the reference.
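I believe the additional property was the <Private> element on the reference, which is how Copy Local is represented in the project file; the reference ended up looking something like this (HintPath omitted):

<Reference Include="System.Web.Mvc, Version=4.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35">
  <!-- Copy Local = True maps to the Private element -->
  <Private>True</Private>
</Reference>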


This seems to fix the build. When I checked this in and rebuilt, System.Web.Mvc now appears in the zip file. The build was then released to Azure and the web app worked correctly.

Making My Azure ML Project Oxford Sample Application More Visual

Following on from my last post, where I introduced Project Oxford, I’ve done a bit more work to make the application I built more visual. To summarise, Project Oxford is a set of APIs that build on top of Azure ML to provide Face, Speech, Computer Vision and Language Understanding Intelligence Service (LUIS) capabilities. There is a good video from Build 2015 that provides an overview of each of the APIs.

I used the tutorials to build an application that would identify a number of people from a known list in a photograph and highlight the ones that were unknown. The Face API requires people to be trained with a set of photos before identification can be made. This was done using the code in the samples. I created a folder for each person that I wanted to be trained and added different photos of each person with and without hats and sunglasses, and also with different expressions. Each set of folders was then passed to the training API. Once trained, you can use the rest of the Face API to first detect the faces in a picture and then take each face that is found and see if it is a known person.

One useful tip I’ve found is to have Fiddler running whilst you are debugging as it is far easier to see any errors in the body of the response message than in the exceptions that are thrown. Details of the errors can be seen in the Face API documentation.

The process for training is as follows (note: the terminology is based around the SDK methods, but I’ve linked to the API pages as these give details about the errors etc.):

  1. Create a Person Group
  2. Create a Face list for each person using Face Detect
  3. Create a Person, one for each person you want to identify, using the person group id and the face list
  4. Train the Person Group

Note: the training does not last forever and you will need to redo it periodically. If you try to identify a person when training has expired then you will get an error response saying that the person group is unknown.

To Identify each individual in a photograph:

  1. Stream the photograph into Detect. This will return a list of faces with face ids
  2. Iterate around each Face and call Identify 
  3. Use the Identify Results to extract the names by calling Get Person.

This is where I got to with the previous post, but it wasn’t very visual, and as I was working with photographs I thought it would be useful to use the data returned to draw a box around each face that was identified and add the name of the person underneath. This also makes it easy to spot when a person has been identified incorrectly. The Project Oxford web site has an image showing exactly this, with each detected face outlined and labelled.

I wanted to emulate this and also to take it one step further. The data returned from the face detection API provides details about gender, age, the area in the picture where the face was found (the face rectangle), face landmarks, and head pose. What the detection API does not do is tie the name of the person to the face. We already have this information, as it was returned from the Identify API and Get Person; the attribute that links them is the face id. Using the results of the Identify API, I called Get Person for each face identified to return the person’s name and stored this in a Dictionary keyed on the face id. This allowed me to load the original photograph into memory, draw the rectangle for each face and add the person’s name below it, using the face id to look up the rectangle and match the name from the Dictionary. The annotated image could then be scaled and shown in the app.
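A minimal sketch of the drawing step using System.Drawing; the faceRectangles and names dictionaries are my own stand-ins for the data returned by Detect and Identify/Get Person:

// requires a reference to System.Drawing
// faceRectangles: face id -> face rectangle returned by Detect
// names: face id -> person name returned by Identify / Get Person
Bitmap AnnotatePhoto(string photoPath, IDictionary<Guid, Rectangle> faceRectangles, IDictionary<Guid, string> names)
{
    Bitmap bitmap = new Bitmap(photoPath);
    using (Graphics graphics = Graphics.FromImage(bitmap))
    using (Pen pen = new Pen(Color.Red, 3))
    using (Font font = new Font("Arial", 16))
    {
        foreach (var face in faceRectangles)
        {
            graphics.DrawRectangle(pen, face.Value);
            string name;
            if (names.TryGetValue(face.Key, out name))
            {
                // draw the person's name just below the face rectangle
                graphics.DrawString(name, font, Brushes.Red, face.Value.Left, face.Value.Bottom + 5);
            }
        }
    }
    return bitmap;
}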

Setting Custom Domain for Traffic Manager and Azure Websites

Recently I’ve been looking at using Traffic Manager to front up websites hosted in Azure Websites. I needed to set up a custom domain name instead of using mydomain.trafficmanager.net.

In order to use Traffic Manager with an Azure website, the website needs to be set up using a Standard hosting plan.

Each website you want to be included in the Traffic Manager routing will need to be added as an endpoint in the Traffic Manager portal.

Once you have this set up you will need to add a DNS CNAME record for your domain. This needs to be configured at your domain provider: you set the CNAME for your custom domain (e.g. www.mydomain.com) to point to mydomain.trafficmanager.net.

In order for the traffic to be routed to your Azure hosted website(s), each website set up as an endpoint in Traffic Manager will need to have your mapped domain (e.g. www.mydomain.com) configured. This is done under Settings -> Custom Domains and SSL in the new portal, or under the Configure tab -> manage domains (or the Manage Domains button) in the old portal.

If you don’t add this then you will see a 404 error page whenever you try to navigate to the site through the Traffic Manager custom domain name.


Azure Websites: Blocking access to the azurewebsites.net url

I’ve been setting up one of our services as the backend service for Azure API Management. As part of this process we have mapped a DNS entry to point to the service. As the service is hosted in Azure Websites, there are now two URLs that can be used to access the service. I wanted to stop users from accessing the site using the azurewebsites.net URL and only allow access via the mapped domain. This is easy to achieve and can be configured in the web.config file of the service.

In the <system.webServer> section add the following configuration

<rewrite>
    <rules>
        <rule name="Block traffic to the raw azurewebsites url"  patternSyntax="Wildcard" stopProcessing="true">
          <match url="*" />
          <conditions>
            <add input="{HTTP_HOST}" pattern="*azurewebsites.net*" />
          </conditions>
          <action type="CustomResponse" statusCode="403" statusReason="Forbidden"
          statusDescription="Site is not accessible" />
        </rule>
    </rules>
</rewrite>

Now if I try and access my site through the azurewebsites.net URL I get a 403 error, but accessing it through the mapped domain is fine.

Azure Media Services Live Media Streaming General Availability

Yesterday Scott Guthrie announced a number of enhancements to Microsoft Azure. One of the enhancements is the General Availability of Azure Media Services Live Media Streaming. This gives us the ability to stream live events on a service that has already been used to deliver big events such as the 2014 Sochi Winter Olympics and the 2014 FIFA World Cup.

I’ve looked at this for a couple of our projects and found it relatively fast and easy to set up a live media event, even from my laptop using its built-in camera. There’s a good blog post that walks you through the process of setting up the Live Streaming service. I used this post and was quickly streaming both audio and video from my laptop.

The main piece of software that you need to install is a video/audio encoder that supports Smooth Streaming or RTMP. I used the Wirecast encoder as specified in the post. You can try out the encoder for 2 weeks as long as you don’t mind seeing the Wirecast logo on your video (it is removed if you buy a license). Media Services pricing can be found here.

The Media Services team have provided a MPEG-DASH player to help you test your live streams.

It appears that once you have created a stream it is still accessible on demand after the event has completed. Also, there is around a 20 second delay when you receive the stream in your player.

Azure Websites Slots and Configuration

One of the conundrums we have with deploying sites to test is that there is often a lot of configuration needed on a test site that is different to the live site. There is also the time and risk involved in deploying a new instance to the production site once testing has completed.

Azure Websites has introduced deployment slots, which allow you to have multiple deployments and swap between them in a similar way to the production and staging slots in cloud services. Websites have the added advantage that you can have more than two slots and you can call them whatever you want.

One approach we are looking at to ensure consistency with what is deployed is to configure a number of slots on the website for a variety of uses, e.g. Production, Staging, UAT. The issue with having multiple slots is that there are often sets of configuration settings required to ensure that each slot works with the correct backend. By default, all configuration stored in the appSettings section of web.config, along with any portal overrides, moves with the deployment when slots are swapped. Details of the exact configuration settings that move with the deployment can be found here (http://azure.microsoft.com/en-gb/documentation/articles/web-sites-staged-publishing/)

For example, in my web.config file I have the following setting

<appSettings>
  <add key="about" value="This is the web.config text" />
</appSettings>

This setting can be overridden in the Azure portal(s), and by default the overridden value follows the deployment rather than staying with the slot.


So in this example the "about" setting in the staging slot is overridden to "This is Now the Staging slot", and when the staging slot is swapped into production, the production configuration will also be "This is Now the Staging slot".
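For reference, the site just reads the setting through the normal configuration API, so it picks up whichever value is currently in effect for the slot it is running in (a minimal sketch):

// using System.Configuration;
string about = ConfigurationManager.AppSettings["about"];
// Returns the value from web.config, or the portal override for the slot the site is running in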

This is not necessarily what you want in production. Websites has a feature, currently not exposed in the management portal(s), which allows specific configuration items to be made sticky, i.e. they stay with the slot. There is a PowerShell cmdlet which marks individual app settings as sticky so that they remain with the slot regardless of the deployment that is in it, including when the slots are swapped.

This can be set for both app settings and connection strings by running the following commands:

Set-AzureWebsite -Name somesite -SlotStickyAppSettingNames @("about", "another_config_key")

Set-AzureWebsite -Name somesite -SlotStickyConnectionStringNames @("a_connection_string", "some_other_connectionstring")

After running the commands the example above still has the same configuration setting, but once the deployment is swapped from the staging slot to production, the "about" setting now remains with the staging slot rather than following the deployment.

This approach should now allow us to deploy to a UAT slot with UAT configuration and allow the customer to test. When they are happy, we can move the same deployment that has just been tested to the staging slot, where it picks up production configuration and can be tested in isolation from live to ensure that it works. When you are happy that the staging slot is working, it can then be swapped out to production.

For a more detailed introduction to slots and configuration see:

http://azure.microsoft.com/en-gb/documentation/articles/web-sites-staged-publishing/

http://blog.amitapple.com/post/2014/11/azure-websites-slots/#.VG22ik1yaAg

Moving an Azure Website to a separate set of Virtual Machines

When an Azure Website is created and is in production it will most likely be running in a Standard or Basic configuration. These are both sets of virtual machines that can be shared across your websites. In the old portal you could only scale the group of websites together, but the new Azure Management portal now allows you to move your websites onto different virtual machines, so that if one site is more heavily loaded than the others it can be scaled out separately if required. The set of virtual machines is known as a web hosting plan. If you want to move one or more of your websites to a different set of virtual machines then you will need to create a new web hosting plan for them.

In the new portal click on “Browse” in the left hand bar


This brings up the Browse Menu.


Select “Web hosting plans”


You can see that I only have 1 web hosting plan and it is currently hosting two websites. I would like to move them onto separate virtual machines so that I can scale them out independently.

To do this I need to navigate to the web site I wish to move.


The top menu needs to be expanded by clicking the 3 dots on the right of the menu bar. This then displays the Web hosting plan button.


Clicking this displays the web hosting plan associated with this website.


Clicking on the new hosting plan option allows you to create a new plan.


I’ve selected a standard small instance to host my website.

After clicking OK the new hosting plan will be created and the website moved to it. After a short while you should see that the hosting plan has changed in this website as well.


Note: you now have two hosting plans, both of which are separate billing entities. I am also led to believe that if you move everything off a hosting plan you will still be charged for it. Hosting plans can be deleted once all the websites have been moved off them. This is done on the Web hosting plans page: right click on the plan you want to delete and select the Delete option.