Microsoft simplifies IoT and data analysis further

Next week is the yearly Hannover Messe in Hannover, Germany.

It is the main fair for manufacturing companies and their partners, and Microsoft will, of course, have a strong presence.

I’m actually going this year and must say that I’m quite excited. If a reader of this blog happens to be in Hannover next week, give me a shout; could be fun to meet in real life.

Microsoft has just announced a number of solutions that will greatly simplify IoT and help businesses speed up their digital transformation.

Microsoft IoT Central is a new software-as-a-service (SaaS) offering that reduces the complexity of building IoT solutions. It is fully managed and enables customers and partners to realize powerful IoT scenarios without requiring cloud-solution expertise.

A new preconfigured solution called Connected Factory also looks very promising.

If you are more into time series analysis, check out Azure Time Series Insights.

Time Series Insights gives you a near-real-time global view of your data across various event sources and lets you quickly validate IoT solutions and avoid costly downtime of mission-critical devices. It helps you discover hidden trends, spot anomalies and conduct root-cause analysis in near real time, all without writing a single line of code, through a simple and intuitive user experience. Additionally, it provides rich APIs so you can integrate its capabilities into your own existing workflows or applications.

See you in Hannover!


Simplifying IoT Architecture

I’ve been working with IoT projects for the last couple of years. A very common pattern is illustrated in the figure below:

Old architecture pattern

Your devices send in data. Azure IoT Hub is used as the cloud gateway or ingestion point. You persist all incoming messages so you can retrieve them later, and at the same time you forward them to an Event Hub for (near) real-time processing. It is not illustrated, but you can have a consumer picking messages off the Event Hub.

Until recently you had to create two consumer groups on your IoT hub and have Azure Stream Analytics do the forwarding. I have shown two jobs here, but depending on the load you might have been able to make do with just one job containing two SELECT statements.

With the introduction of Endpoints and Routes in the IoT hub and the Archive functionality in the Event Hub, this pattern can be simplified quite a lot, cutting out components and hence making the architecture simpler, more manageable and more robust.

The new pattern is illustrated below:

New architecture pattern

We now use endpoints and routes to forward the messages to the Event Hub. Routes support filtering, so if you previously did your filtering in Azure Stream Analytics, that is not a problem.

Messages are archived to Blob storage directly from the Event Hub. Note that archived data is written in the Apache Avro format.


Minecraft Management Code Example

A couple of years ago I wrote a blog post where I mentioned a small taskbar utility I had written to stop and start an Azure virtual machine.

In the post I promised to put the code on GitHub. Well, that never happened and today someone asked again, so here is a link to OneDrive and a Zip-file.

Knock yourself out, but please note that this is sample code, no guarantees, bla bla bla.


Azure Management Libraries

This is the second blog post in the small series on experiences and learnings gained while setting up a Minecraft server for the kids. The first spoke primarily about Azure Automation; this one touches upon the new .NET libraries for Azure management.

The challenge was the following: enable the kids to start the virtual machine running the Minecraft server without giving them access to the overall subscription.

We create a small app running in the taskbar. When the app starts, it shows a yellow triangle indicating that the status of the virtual machine is being established.

image

Depending on whether the instance status is StoppedDeallocated or ReadyRole, either a red cross

image

or a green check mark will be shown

image

Right-clicking will display the menu items (they should be self-explanatory).

image

For this to work a couple of setting values are required. They are the following:

  • Service Name: The name of the cloud service where your virtual machine is deployed.
  • Virtual Machine: This is the name of the virtual machine.
  • Management Certificate: The Base64-encoded management certificate for your subscription (the ManagementCertificate value from the publish settings file).
  • Subscription ID: The ID for your Azure subscription.

The easiest way to get the management certificate and subscription ID is to use the PowerShell cmdlet Get-AzurePublishSettingsFile. This will download a file containing both, as well as some other information.

<?xml version="1.0" encoding="utf-8"?>
<PublishData>
  <PublishProfile
    SchemaVersion="2.0"
    PublishMethod="AzureServiceManagementAPI">
    <Subscription
      ServiceManagementUrl="https://management.core.windows.net"
      Id="5fbxxxxxxxxxxxxxxxxxxxxxxxxxxxfe06e"
      Name="[Name of your Azure subscripton]"
      ManagementCertificate="MIIKPAIBAzCeI2S2N5Sbz4kAyL60DtKY=" />
  </PublishProfile>
</PublishData>
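If you prefer to pull the values out programmatically, the publish settings file is plain XML and can be parsed with a few lines of PowerShell. This is a sketch; the file path is an example:

```powershell
# Load the downloaded publish settings file (the path is an example)
[xml]$publishData = Get-Content 'C:\Temp\MySubscription.publishsettings'

$subscription = $publishData.PublishData.PublishProfile.Subscription

# The subscription ID
$subscription.Id

# Rehydrate the management certificate and display its thumbprint
$bytes = [Convert]::FromBase64String($subscription.ManagementCertificate)
$cert  = New-Object -TypeName System.Security.Cryptography.X509Certificates.X509Certificate2 `
                    -ArgumentList @(,$bytes)
$cert.Thumbprint
```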

The settings dialog can be seen below.

image

Note: Yes, I know. If you change the values of the service name and the virtual machine you could start and stop other VMs, so this is not something you would give to your evil nephew. However, in the case of my kids, with the fear of losing their pocket money for the next 200 years, I think we are OK.

So much for the app, but how does it work? How to communicate with Azure?

Create a new project in Visual Studio (I’m using 2013, so I don’t know if it will work in 2012).

Load the Microsoft Azure Management Libraries using NuGet. This package contains everything.

image

You could do with only the Microsoft Azure Compute Management Library if you want to minimize the footprint, but why settle for anything but the whole package?

Before we can do anything we need to authenticate towards Azure.

The way this is currently done is by using an X.509 certificate. So in my helper class I have created a small method returning a SubscriptionCloudCredentials. It can be seen below.

public SubscriptionCloudCredentials GetCredentials()
{
    return new CertificateCloudCredentials(this.subscriptionId, 
        new X509Certificate2(Convert.FromBase64String(this.base64EncodedCert)));
}

The subscriptionId and base64EncodedCert are two member variables containing the subscription ID and the Base64-encoded management certificate.

Using the CloudContext it is possible to create a ComputeManagementClient. I’ve defined a private member

private ComputeManagementClient computeManagement;

and create it like

computeManagement =
    CloudContext.Clients.CreateComputeManagementClient(GetCredentials());

To get the DeploymentStatus you can call the following:

var status = this.computeManagement
	    .Deployments
	    .GetByName(this.serviceName, this.virtualMachineName)
	    .Status;

Where this.serviceName and this.virtualMachineName are two private string members containing the two values respectively.

To start the virtual machine I’ve defined an async method

public async Task StartVMAsync(DeploymentStatus status)

The status is passed in so that the method can return immediately if the machine is already running, i.e. if status.Equals(DeploymentStatus.Running).

The actual call to start the virtual machine is

// Parameters: service name, deployment name, VM name.
// For these VMs the deployment name is the same as the VM name.
var task = await this.computeManagement
	.VirtualMachines
	.StartAsync(this.serviceName, this.virtualMachineName, this.virtualMachineName,
    new CancellationToken());

Likewise a StopVMAsync method is defined containing the call to stop the virtual machine:

var task = await this.computeManagement
	.VirtualMachines
	.ShutdownAsync(this.serviceName, 
				   this.virtualMachineName, 
				   this.virtualMachineName,
		new VirtualMachineShutdownParameters()
		{
			PostShutdownAction = PostShutdownAction.StoppedDeallocated
		},
		new CancellationToken());

And that is basically it. Of course the above should be packaged nicely and then called from the taskbar app.
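For reference, here are the two methods assembled from the fragments above. This is a sketch: the member names are as defined earlier in the post, and the early-return guard in StopVMAsync mirrors the one described for StartVMAsync (my assumption, not shown in the original):

```csharp
public async Task StartVMAsync(DeploymentStatus status)
{
    // Nothing to do if the machine is already running
    if (status.Equals(DeploymentStatus.Running))
    {
        return;
    }

    await this.computeManagement
        .VirtualMachines
        .StartAsync(this.serviceName, this.virtualMachineName, this.virtualMachineName,
            new CancellationToken());
}

public async Task StopVMAsync(DeploymentStatus status)
{
    // Nothing to do if the machine is not running
    if (!status.Equals(DeploymentStatus.Running))
    {
        return;
    }

    await this.computeManagement
        .VirtualMachines
        .ShutdownAsync(this.serviceName, this.virtualMachineName, this.virtualMachineName,
            new VirtualMachineShutdownParameters
            {
                // Deallocate so the VM stops accruing compute charges
                PostShutdownAction = PostShutdownAction.StoppedDeallocated
            },
            new CancellationToken());
}
```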

Time permitting I will push the code to GitHub, Codeplex or similar for people to download.

The official Service Management Client Library Reference can be found on MSDN.


Azure Automation

As indicated in my last blog post this is the first of two posts describing my experiences and learning in connection with setting up a Minecraft server.

In this one we will look at Azure Automation and create a small scheduled runbook or job that will ensure the server is closed down for the night (to save on the pocket money).

If you don’t have an Azure subscription, you can get a free trial.

I will not go into details of how to actually set up the Minecraft server as Jon Buckley has already created this excellent instruction video. If you know Azure and don’t want to see the whole video, the steps are the following:

  • Create a new VM using the Windows Server gallery image.
  • Create a new endpoint opening up port 25565.
  • Open up the Windows firewall on the VM to allow traffic to this port.
  • Download Minecraft.

To leverage Azure Automation, you’ll need to activate the preview feature. This can be done from the Preview Features page.

It may take a few minutes for the feature to be activated. Once available you will see a new menu item in the left navigation bar.

image

Select this and click the Create button at the bottom of the page to create a new Automation account.

I have created one called strobaek. Note that the Automation feature is currently only available in the East US region.

image

Azure Automation authenticates to Microsoft Azure subscriptions using certificate-based authentication. You can create a new management certificate in a number of ways. I usually open up a Visual Studio command-prompt and issue the following command

makecert -sky exchange -r -n CN=KarstenCert -pe -a sha1 -len 2048 -ss My "KarstenCert.cer"

This will install a new certificate into the Personal certificate store.

image

Export this certificate twice, both without the private key (as DER encoded binary X.509 .CER) and with the private key (as .pfx file).
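The two exports can also be scripted. A sketch; the paths and the password are examples:

```powershell
# Find the certificate created by makecert in the Personal store
$cert = Get-ChildItem Cert:\CurrentUser\My |
        Where-Object { $_.Subject -eq 'CN=KarstenCert' }

# Public part only, DER encoded (.cer) - this goes to the management portal
[IO.File]::WriteAllBytes('C:\Temp\KarstenCert.cer',
    $cert.Export([Security.Cryptography.X509Certificates.X509ContentType]::Cert))

# Including the private key (.pfx) - this goes to the Automation credential asset
[IO.File]::WriteAllBytes('C:\Temp\KarstenCert.pfx',
    $cert.Export([Security.Cryptography.X509Certificates.X509ContentType]::Pfx, 'ChangeMe123!'))
```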

You should end up with something like this:

image

Now that we have a management certificate we need to upload it to the Azure Management Portal.

Log in – if you are not already – and select Settings in the left navigation bar; it is the last menu item.

Select Management Certificates from the top menu and click Upload at the bottom of the screen. Browse for your CER-file and click OK.

Make a note of the Subscription and Subscription ID as we will need these later (I have blanked out some of my subscription ID in the figure below)

image

OK, that was the first part of the deal, using the .CER file. Now for the second part, using the .PFX file you also created. For your Azure Automation account to be able to authenticate to your Azure subscription, you need to upload the certificate .PFX file. You will create what is known as an Asset in the Azure Automation account; this way it can be consistently leveraged across multiple runbooks.

Click on the Automation item in the left navigation bar and enter the Azure Automation account you created earlier. Click on the Assets tab and then Add Setting at the bottom. When prompted, select Add Credential.

image

On the Define Credential page, select Certificate in the Credential Type list and enter a name

image

Click the Next button and browse to the .PFX file to upload the certificate. Enter the password used while exporting the certificate and press OK.

image

Your new asset has now been created

image

Next step is to create a connection asset. Doing so allows you to easily relate your Azure subscription name, subscription ID and management certificate together as a centralized definition for use in all of your runbooks.

Again click Add Setting, but this time select Add Connection

image

On the Configure connection page, select Azure as the Connection Type and enter a Name that matches your Azure subscription name recorded earlier.

image

Click Next

Enter the name of the management certificate asset previously uploaded/created and enter your Azure subscription ID (which you should also have recorded previously).

We are now ready to create the actual runbook.

There are a few lines of code used to connect a runbook to your Azure subscription using the management certificate asset and connection asset that were previously defined. To promote easy maintenance of runbooks, it is recommended to centralize this code into one runbook, called e.g. Connect-Azure, that other runbooks can reference.

The Azure Automation team has made this approach super-easy by providing us with a standard runbook template on the Azure Automation Script Center.

Go to the script center and download the Connect-Azure runbook template.

On the details page of your Azure Automation account, click the Runbooks tab.

At the bottom of the page click the Import button. Browse to the Connect-Azure.ps1 file just downloaded and click OK to import the template.

image

On the Runbooks tab click on Connect-Azure to drill into the details of the runbook.

Then click the Author tab and click the Publish button at the bottom of the page to publish the runbook. Until this is done the runbook is in “draft” mode and can be edited, but not used.

When prompted select Yes to confirm that you really want to publish the runbook.

We now have the fundamentals for creating our own runbook.

Click New | App Services | Automation | Runbook | Quick Create

image

Enter a name, e.g. Stop-VMs and a description, e.g. ‘Stop all VMs at night’. Select your automation account from the drop down and verify the subscription is correct. Then click Create.

Note that runbook automation scripts are defined using PowerShell workflows. As such, the recommended practice is to name runbooks using a PowerShell verb-noun cmdlet naming convention.

On the runbook page you should see the new runbook after creation is done.

image

Drill into the detailed property pages of the runbook.

Click the Author tab and then the Draft tab to begin editing the PowerShell code for the new runbook.

The first thing to do is leverage the Connect-Azure runbook to connect to your Azure subscription. Inside the Workflow code block enter the following:

workflow Stop-VMs
{
    # Specify Azure Subscription Name
    $subName = '[Enter your Azure subscription name]'
    
    # Connect to Azure Subscription
    Connect-Azure -AzureConnectionName $subName
        
    Select-AzureSubscription -SubscriptionName $subName
}
Remember to replace the value for $subName with the correct value (which you recorded earlier).

Now that we are connected to the subscription we can enter the code to actually stop and deallocate the VMs.

    $vmList = ('App1','App2','App3','DC')
    $svcName = 'mycloudservice'
    
    foreach($vm in $vmList)
    {
        $anon = Get-AzureVM -ServiceName $svcName -Name $vm
        Write-Output $anon.Name $anon.InstanceStatus
        
        if ($anon.InstanceStatus -eq 'ReadyRole')
        {
            Stop-AzureVM -ServiceName $svcName -Name $anon.Name -Force
        }
    }

Update the two variables $vmList and $svcName with the name of the virtual machines you wish to stop and the name of the cloud service they live in.

The whole script is shown below for your convenience.

workflow Stop-VMs
{
    # Specify Azure Subscription Name
    $subName = '[Enter your Azure subscription name]'
    
    # Connect to Azure Subscription
    Connect-Azure -AzureConnectionName $subName
        
    Select-AzureSubscription -SubscriptionName $subName

    $vmList = ('App1','App2','App3','DC')
    $svcName = 'mycloudservice'
    
    foreach($vm in $vmList)
    {
        $anon = Get-AzureVM -ServiceName $svcName -Name $vm
        Write-Output $anon.Name $anon.InstanceStatus
        
        if ($anon.InstanceStatus -eq 'ReadyRole')
        {
            Stop-AzureVM -ServiceName $svcName -Name $anon.Name -Force
        }
    }
}

Click the Save button at the bottom of the page.

Once the runbook is saved you can test it to confirm that it runs successfully.

Click the Test button next to the Save button. NOTE: When you test the runbook it is actually executed against your subscription, hence if you test the new Stop-VMs runbook, your virtual machines will be stopped.

When the runbook is tested and confirmed that it executes successfully, it can be published.

Click the Publish button on the bottom toolbar (and confirm when prompted) and then click the Published tab to confirm that it has been published successfully.

image

The final step is to create a schedule and attach it to the runbook. This is to make sure the Minecraft server is automatically stopped and deallocated when not being used (read: when the kids are supposed to sleep). To execute a runbook on a scheduled basis, we can link the runbook to a recurring schedule.

Next to the Author tab you can see the Schedule tab. Click this.

image

Click Link to a New Schedule and give the schedule a name

image

and click next.

On the Configure Schedule page, set the type to Daily and a start time, e.g. 21:00. Note that the time is not adjusted for daylight saving. However, the time entered seems to be based on the local time of the workstation creating the schedule: if I enter 21:00, the runbook is executed at 21:00 CET, which is my local, daylight-saving-adjusted time.

image

Click OK and you are done!

The next post will look at how to use the Azure Management Libraries from a small .NET library to start and stop our virtual machine.


Azure Automation, Azure Management Libraries and Minecraft

For a long time the “household trolls” have been bugging me about setting up their own Minecraft server. I finally got around to it the other day, using Microsoft Azure for the hosting. All is working great, but it left me with two challenges: making sure the server is turned off when not used, so nothing is charged for it, and making it possible for the kids to start it up themselves, without annoying me or giving them access to my whole Azure subscription.

The following couple of blog posts will address how we dealt with these two items.

The first will look at setting up and using Azure Automation to create a small runbook to turn off the VM each night, should it have been left on.

The second will show how I created a small utility running in the taskbar, showing the status of the VM and allowing the user to turn a given VM on and off, and only that VM. To do this we will use the Azure Management Libraries.

Stay tuned 🙂


Cleaning up an Azure Deployment

You most likely know the scenario: you have deployed a nice environment into Windows Azure. Just a few AD/DCs, a few front-end servers, a couple of application servers and, of course, two or three SQL Servers.

Now you no longer need the VMs and you want to remove everything. So you spend the rest of the afternoon clicking around in the Management Portal: first removing all the VMs, then waiting until the disks are no longer registered as attached to the VMs, and finally deleting the disks and the underlying VHD files.

Would it not be nice, if this could be done in a single command (or maybe two if you just wanted to remove the VMs)?

Look no further. I have created two small PowerShell scripts that will do exactly this: Remove VMs and remove Disks and underlying VHDs.

Let us first remove the VMs.

$remove = $false

if(-not $remove)
{
    Write-Host "Are you sure you want to do this?"
    Write-Host "Change bool to true"
    return
}

$serviceName = '<Service Name>'

# Select only the VM names so $azureVM is a string, not a whole VM object
$azureVMs = Get-AzureVM -ServiceName $serviceName | Select-Object -ExpandProperty Name

foreach($azureVM in $azureVMs)
{
    Remove-AzureVM -ServiceName $serviceName -Name $azureVM
}

I like to have a safeguard at the top of my scripts. Set the $serviceName variable to the name of your cloud service.

To remove all Disks and underlying VHD-files for VMs having belonged to a given Cloud Service run the following:

$remove = $false

if(-not $remove)
{
    Write-Host "Are you sure you want to do this?"
    Write-Host "Change bool to true"
    return
}

$serviceName = '<Service Name>'
$azureDisks = Get-AzureDisk | Select-Object DiskName, AttachedTo

foreach($azureDisk in $azureDisks)
{
    # AttachedTo is $null for disks that are no longer attached to a VM
    if($azureDisk.AttachedTo -and $azureDisk.AttachedTo.HostedServiceName -eq $serviceName)
    {
        Remove-AzureDisk -DiskName $azureDisk.DiskName -DeleteVHD
    }
}

If you have played around with the scripts for the automated SharePoint deployment found on GitHub: I have created a few scripts that, from the generated configuration files, will export all settings, remove the VMs and (re)deploy them. More about this in a later post.


Installing SQL Server 2012 in a Mirror Setup

In my series on creating a SharePoint farm in Windows Azure we last time created the virtual machines for the two front-end servers, the two application servers and the three servers to be used by SQL Server.

In this sixth post we will look at how to enable SQL Server for high availability by enabling them in a mirror setup.

First of all, this is not really specific to Windows Azure. The steps are the same as they would be if you were enabling a mirror on-premises, but as it is not something you do every day – at least I don’t – I thought it might be of interest.

Due to several factors, the structure on the underlying storage in Windows Azure being one of them, you cannot run a SQL Server Cluster in Windows Azure, so if you require redundancy and fast failover, you need something else, like a mirror.

There are two modes of database mirroring – synchronous and asynchronous. With synchronous mirroring, transactions cannot commit on the principal until all transaction log records have been successfully copied to the mirror (but not necessarily replayed yet). This guarantees that if a failure occurs on the principal and the principal and mirror are synchronized, committed transactions are present on the mirror when it comes online – in other words, it is possible to achieve zero data loss.

Synchronous mirroring can be configured to provide automatic failover, through the use of a third SQL Server instance called the witness server (usually hosted on another, physically separate server). The sole purpose of the witness is to agree (or not) with the mirror that the principal cannot be contacted. If the witness and mirror agree, the mirror can initiate failover automatically. If synchronous mirroring is configured with a witness, the operating mode is known as high-availability mode and provides a hot standby solution. When no witness is defined, the operating mode is known as high-safety mode, which provides a warm standby solution.

With asynchronous mirroring there is no such guarantee, because transactions can commit on the principal without having to wait for database mirroring to copy all the transaction’s log records. This configuration can offer higher performance because transactions do not have to wait, and it is often used when the principal and mirror servers are separated by large distances (that is, implying a large network latency and possible lower network bandwidth). Consequently, the operating mode is also known as high-performance mode and provides a warm standby solution.

If a failure occurs on the principal, a mirroring failover occurs, either manually (in the high-performance and high-safety modes) or automatically (only in the high-availability mode). The mirror database is brought online after all the transaction log records have been replayed (that is, after recovery has completed). The mirror becomes the new principal and the applications can reconnect to it. The amount of downtime required depends on how long it takes for the failure to be detected and how much transaction log needs to be replayed before the mirror database can be brought online.

In the previous post on the subject we created three VMs and attached an extra disk. As was the case for the two domain controllers, you need to log on to the SQL Servers and attach the disk.

Before we begin I must make a comment about the screen shots. If some of the text is missing it is because I have removed it due to confidentiality related issues. I apologize.

The first task is to install .NET Framework 3.5. If this is not done, the installation of SQL Server might hang. We do this by using Add Roles and Features.

Open the Server Manager and select Manage and then Add Roles and Features.

image

The Add Roles and Features Wizard will be displayed

image

Click Next.

In the Select installation type dialog ensure the correct option is selected.

image

Click Next.

Select the server in the Select destination server dialog. Then click Next.

image

Select the feature (.NET Framework 3.5 Features)

image

 

image

Click Next.

You get a chance to confirm the selections.

image

If you are satisfied click Install.

The installation will now begin. You can either Close the dialog right away or you can wait until the installation has completed.

image

We are now ready with the actual installation of SQL Server.

Download and attach media (here SQL Server 2012 SP1 Enterprise Edition)

image

Run the Setup.exe file.

Select Installation in the menu to the left.

image

On the installation page, select the New SQL Server stand-alone installation.

image

This will install the setup support files. Once that is done, click OK.

image

Accept the suggested product key or enter the correct one.

image

Click Next.

In the License Terms dialog accept the terms and click Next.

image

Ensure that all is green in the Setup Support Rules dialog. If the Windows Firewall rule is yellow it is most likely because port 1433 is not open. It will have no influence on the installation, but may be an issue later on.

Click Next.

image

In the dialog for Setup Role select the top option (SQL Server Feature Installation)

image

Click Next.

On the dialog for the Feature Selection select the required features. In my case I did not require Analysis Services nor Reporting Services, but I did select to install the management tools.

image

Click Next.

The setup process will now determine if any process will be blocked. If all is green click Next.

image

Accept the default setting for the instance configuration.

image

Click Next.

You should have enough space for the installation of the actual bits.

image

Click Next.

If you have installed SQL Server 2008 and 2008 R2 you will notice that the default values for the Server Configuration have changed.

You can keep the default settings, but if you plan to use this server in a mirror setup – which is the subject of this blog post – I recommend that you use a domain account. It will make setting up security during the mirror configuration much easier, the reason being that the local account on server 1 knows nothing about the local account on server 2.

image

If you just keep the default values you can always change them later using the SQL Server Configuration Manager.

Click Next.

In the Database Engine Configuration dialog on the Server Configuration tab keep the default value for the Authentication Mode.

image

Select the Data Directories tab.

Change the Data root directory to the additional disk we attached. Again, if this had been a production setup, you would spread your directories over a lot more drives.

image

Select the FILESTREAM tab.

Select both the Enable FILESTREAM for Transact-SQL access and the Enable FILESTREAM for file I/O access options. You do not need to enable the last option, which allows remote clients streaming access to the FILESTREAM data.

image

Click Next.

image

In the Error Reporting dialog, click Next.

Setup will run some additional checks. If all is green in the Installation Configuration Rules click Next.

image

Click Install.

image

The installation will begin and it is time to go and get a cup of coffee.

If all goes as expected, you should have a lot of green markers and you can close the dialog and exit the setup.

image

One down and two more to go. Repeat the above process for the other two SQL Servers. Once all are installed we will have the principal, mirror and witness servers, and we are ready to enable the mirror.

However, before actually doing this, we need to install SharePoint. The reason is that the mirror is enabled by backing up and restoring databases, so we need something “in” SQL Server, so to speak. As I am not a SharePoint person, I will refrain from trying to describe that process.

Therefore: fast-forward, and we have a working SharePoint installed into the primary SQL Server (in this case SP-SQL1).

The first step is to ensure that all SQL logins present on the principal also exist on the mirror server.
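A minimal sketch of recreating logins on the mirror; the login names, password and SID below are examples, not values from this setup. For SQL logins you would normally script them with matching SIDs (e.g. via Microsoft’s sp_help_revlogin script) to avoid orphaned database users after a failover:

```sql
-- On SP-SQL2: recreate the logins that exist on SP-SQL1 (names are examples)
CREATE LOGIN [CONTOSO\sp_farm] FROM WINDOWS;

-- For a SQL login, specify the same SID as on the principal so the
-- database users map correctly after failover
CREATE LOGIN sp_sql_login
WITH PASSWORD = 'ChangeMe123!',
     SID = 0x1A2B3C4D5E6F708192A3B4C5D6E7F809;
```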

Then we must ensure that all databases are running with the recovery model set to FULL. RDP into SP-SQL1 (the principal) and open up SQL Server Management Studio.

Ensure there are no data connections to the SQL Server (you may want to close down the SP site).

Execute the following T-SQL to set recovery mode:

USE master;
GO
ALTER DATABASE AdminContent SET RECOVERY FULL;

We then first backup the database

USE master;
GO

BACKUP DATABASE AdminContent
TO DISK = 'F:\BackUp\AdminContent.bak'
WITH FORMAT
GO

and afterwards the log:

USE master;
GO

BACKUP LOG AdminContent
TO DISK = 'F:\BackUp\AdminContent_log.bak'

GO

Copy the files to the mirror partner (SP-SQL2). Place them in the same location, e.g. F:\BackUp as in the example above. It is not a requirement, but the T-SQL syntax is slightly different if the location differs.

Connect to the mirror partner (SP-SQL2) from the open Management Studio or RDP into the server and open SSMS from here.

First we restore the database

USE master;
GO

RESTORE DATABASE AdminContent
FROM DISK = 'F:\BackUp\AdminContent.bak'
WITH NORECOVERY
GO

and then the log

USE master;
GO

RESTORE LOG AdminContent
FROM DISK = 'F:\BackUp\AdminContent_log.bak'
WITH FILE=1, NORECOVERY
GO
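If the data and log files do have to live in a different path on the mirror, the restore can relocate them with WITH MOVE. A sketch, where the logical file names and target paths are examples; check the real names with RESTORE FILELISTONLY first:

```sql
-- Inspect the logical file names contained in the backup
RESTORE FILELISTONLY FROM DISK = 'F:\BackUp\AdminContent.bak';

-- Restore while moving the files to new locations (names/paths are examples)
RESTORE DATABASE AdminContent
FROM DISK = 'F:\BackUp\AdminContent.bak'
WITH MOVE 'AdminContent' TO 'G:\Data\AdminContent.mdf',
     MOVE 'AdminContent_log' TO 'G:\Logs\AdminContent_log.ldf',
     NORECOVERY
GO
```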

We are now ready to enable mirroring. In SSMS, right-click one of the databases and select Tasks and then Mirror….

image

In the Database Properties dialog, click Configure Security.

image

The first step in the configuration wizard is to decide whether or not a witness server should be used. As we want automatic failover, I select the Yes option and click Next.

image

In the dialog to choose what servers to configure, ensure that all three are selected.

image

Click Next.

During the configuration you will have to connect to each SQL Server.

As I was working from SP-SQL1, and this is going to be my principal server, I am already logged in and can just click Next.

image

Next select the Mirror server. This is going to be SP-SQL2. Click Connect and enter your credentials. When done click Next.

image

Repeat the steps for the Witness server.

image

Click Next.

You now have to set up the service account information. If your SQL Servers are running under a domain account this is going to be easy. If not, you will afterwards have to enable the local accounts on each server. Doable, but a lot more hassle.

Enter the required information for each server.

image

Click Next.

You have made it to the end of the wizard and can review the information.

image

Click Danish. Sorry Finish. Bad joke.

If all goes well you should see something like the figure below. Click Close to close the dialog.

image

When you close the dialog you can either start the mirror right away or you can do so later.

I just hit the Start Mirroring button.

image

If the mirror process is able to start you will return to the initial dialog and the status should be Synchronizing.

image

Looking in SSMS at the databases you can also see that mirroring has been set up and is active.

image

This was a really long post for which I apologize.

There is a lot more to SQL Server mirroring than the above, but I hope it will serve as an introduction and maybe enable people not working with SQL on a daily basis to get up and running more quickly.
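For readers who prefer scripts over the wizard, the same mirroring session can be established with roughly the following T-SQL. This is a sketch based on the setup in this post: the fully qualified server names, the endpoint port 5022 and the database name are assumptions, so adjust them to your environment.

```sql
-- On SP-SQL2 (the mirror), point it at the principal:
ALTER DATABASE AdminContent
SET PARTNER = 'TCP://SP-SQL1.lab.azure:5022';

-- On SP-SQL1 (the principal), complete the session and add the witness:
ALTER DATABASE AdminContent
SET PARTNER = 'TCP://SP-SQL2.lab.azure:5022';
ALTER DATABASE AdminContent
SET WITNESS = 'TCP://SP-SQL3.lab.azure:5022';
```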

Posted in Azure | 20 Comments

Default size of OS-image in Windows Azure VMs

In my recent series of blog posts (first can be found here) on creating a SharePoint farm in Windows Azure, I showed how to extend the OS disk from the initial 30 GB.

With the general availability of virtual machines in Windows Azure the default size has been increased to 127 GB, so the trick is less of importance now.

Posted in Azure | Leave a comment

Rolling out Images for SharePoint farm in Windows Azure

In this fifth post in the series on creating a SharePoint farm in Windows Azure we will look at the main script used to create the VMs.

For those who might have missed the previous posts they are:

The script will automatically create and domain join the remaining 7 virtual machines required by our design: 2 web front-end servers, 2 application servers and 3 SQL Servers (one principal, one mirror and one witness).

An upcoming post will talk about the SQL Server installation, but just a few comments at this point. As described in the initial post, the SQL Servers are installed in high-safety mode, that is, the database mirroring session operates synchronously and uses a witness in addition to the principal server and mirror server. For better performance, mirroring can instead be enabled in high-performance mode, where the session operates asynchronously and uses only the principal server and mirror server. Note, however, that in high-performance mode the only form of role switching is forced service (with possible data loss).
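The two modes can be switched between on an existing mirroring session with a single statement. A minimal sketch, run on the principal (AdminContent is an example database name):

```sql
-- High-safety (synchronous) - supports automatic failover with a witness:
ALTER DATABASE AdminContent SET PARTNER SAFETY FULL;

-- High-performance (asynchronous) - better throughput, forced service only:
ALTER DATABASE AdminContent SET PARTNER SAFETY OFF;
```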

To ensure we target the correct account we first set the active subscription:

# your imported subscription name
$subscriptionName = "MySubscription"
$storageAccount = "mystorageaccount"
Select-AzureSubscription $subscriptionName
Set-AzureSubscription $subscriptionName -CurrentStorageAccount $storageAccount

Next we set the Cloud Service parameters. This is the "public" container holding all the VMs. It is also what allows us to load balance the two front-end servers, as they will share the same VIP. Remember that the service name must be unique, so SP-Service is most likely already taken.

# Cloud Service Parameters
$serviceName = "SP-Service"
$serviceLabel = "SP-Service"
$serviceDesc = "Cloud Service for SharePoint Farm"
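You can check up front whether the name is available. A small sketch using the classic module's Test-AzureName cmdlet, which returns $true when the name is already taken:

```powershell
# $true means the service name is in use and you need to pick another one
Test-AzureName -Service $serviceName
```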

Some more configuration options about base images and the virtual network.

# Images created in post: Creating a Base Image for use in Windows Azure
$spimage = 'spbase100gbws2008r2'
$sqlimage = 'base100gbsysprep'
$vnetname = 'SP-VNET'
$subnetNameWFE = 'SP-WFESubnet'
$subnetNameApp = 'SP-AppSubnet'
$subnetNameSql = 'SP-SqlSubnet'
$ag = 'SP-AG'
$primaryDNS = '10.1.1.4'

As shown in the first post we will place the three layers (front-end, application and database) in three different availability sets.

# Availability Sets
$avset1 = 'avset1'
$avset2 = 'avset2'
$avset3 = 'avset3'

The domain settings from when we configured the domain

# Domain Settings
$domain = 'lab'
$joindom = 'lab.azure'
$domuser = 'administrator'
$dompwd = 'P@ssw0rd'
$advmou = 'OU=AzureVMs,DC=lab,DC=azure'

The location of the VHD-files

# MediaLocation
$mediaLocation = 'http://mystorageaccount.blob.core.windows.net/vhds/'


Next we set the configuration for the different VMs. Note that I have just set the sizes to Small and Medium.

Also note that I have defined a probe port and path for the two front-end servers. This is what the load balancer (LB) uses to check whether traffic should be forwarded to the servers.

You will also notice that I have only created/attached one extra disk for each SQL Server. In a production setup you should not place data, log and temporary files on the same disk.
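For a production setup, the SQL Server configuration could instead attach one disk per workload. A sketch of what that would look like for the first SQL Server; the sizes, labels and LUN numbers are my own choices:

```powershell
# Sketch only: separate disks for data, log and tempdb instead of one 'datalog' disk
$spsql1 = New-AzureVMConfig -Name 'sp-sql1' -AvailabilitySetName $avset3 -ImageName $sqlimage -InstanceSize $size -MediaLocation $vmStorageLocation |
Add-AzureProvisioningConfig -WindowsDomain -Password $dompwd -Domain $domain -DomainUserName $domuser -DomainPassword $dompwd -MachineObjectOU $advmou -JoinDomain $joindom |
Add-AzureDataDisk -CreateNew -DiskSizeInGB 100 -DiskLabel 'data' -LUN 0 |
Add-AzureDataDisk -CreateNew -DiskSizeInGB 100 -DiskLabel 'log' -LUN 1 |
Add-AzureDataDisk -CreateNew -DiskSizeInGB 50 -DiskLabel 'tempdb' -LUN 2 |
Set-AzureSubnet $subnetNameSql
```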

# Create SP WFE1
$size = "Small"
$vmStorageLocation = $mediaLocation + "sp-wfe1.vhd"
$spwfe1 = New-AzureVMConfig -Name 'sp-wfe1' -AvailabilitySetName $avset1 -ImageName $spimage -InstanceSize $size -MediaLocation $vmStorageLocation |
Add-AzureProvisioningConfig -WindowsDomain -Password $dompwd -Domain $domain -DomainUserName $domuser -DomainPassword $dompwd -MachineObjectOU $advmou -JoinDomain $joindom |
Add-AzureEndpoint -Name 'http' -LBSetName 'lbhttp' -LocalPort 80 -PublicPort 80 -Protocol tcp -ProbeProtocol http -ProbePort 80 -ProbePath '/healthcheck/iisstart.htm' |
Set-AzureSubnet $subnetNameWFE

# Create SP WFE2
$size = "Small"
$vmStorageLocation = $mediaLocation + "sp-wfe2.vhd"
$spwfe2 = New-AzureVMConfig -Name 'sp-wfe2' -AvailabilitySetName $avset1 -ImageName $spimage -InstanceSize $size -MediaLocation $vmStorageLocation |
Add-AzureProvisioningConfig -WindowsDomain -Password $dompwd -Domain $domain -DomainUserName $domuser -DomainPassword $dompwd -MachineObjectOU $advmou -JoinDomain $joindom |
Add-AzureEndpoint -Name 'http' -LBSetName 'lbhttp' -LocalPort 80 -PublicPort 80 -Protocol tcp -ProbeProtocol http -ProbePort 80 -ProbePath '/healthcheck/iisstart.htm' |
Set-AzureSubnet $subnetNameWFE

# Create SP App1
$size = "Small"
$vmStorageLocation = $mediaLocation + "sp-app1.vhd"
$spapp1 = New-AzureVMConfig -Name 'sp-app1' -AvailabilitySetName $avset2 -ImageName $spimage -InstanceSize $size -MediaLocation $vmStorageLocation |
Add-AzureProvisioningConfig -WindowsDomain -Password $dompwd -Domain $domain -DomainUserName $domuser -DomainPassword $dompwd -MachineObjectOU $advmou -JoinDomain $joindom |
Set-AzureSubnet $subnetNameApp

# Create SP App2
$size = "Small"
$vmStorageLocation = $mediaLocation + "sp-app2.vhd"
$spapp2 = New-AzureVMConfig -Name 'sp-app2' -AvailabilitySetName $avset2 -ImageName $spimage -InstanceSize $size -MediaLocation $vmStorageLocation |
Add-AzureProvisioningConfig -WindowsDomain -Password $dompwd -Domain $domain -DomainUserName $domuser -DomainPassword $dompwd -MachineObjectOU $advmou -JoinDomain $joindom |
Set-AzureSubnet $subnetNameApp

# Create SQL Server1
$size = "Medium"
$vmStorageLocation = $mediaLocation + "sp-sql1.vhd"
$spsql1 = New-AzureVMConfig -Name 'sp-sql1' -AvailabilitySetName $avset3 -ImageName $sqlimage -InstanceSize $size -MediaLocation $vmStorageLocation |
Add-AzureProvisioningConfig -WindowsDomain -Password $dompwd -Domain $domain -DomainUserName $domuser -DomainPassword $dompwd -MachineObjectOU $advmou -JoinDomain $joindom |
Add-AzureDataDisk -CreateNew -DiskSizeInGB 100 -DiskLabel 'datalog' -LUN 0 |
Set-AzureSubnet $subnetNameSql

# Create SQL Server 2
$size = "Medium"
$vmStorageLocation = $mediaLocation + "sp-sql2.vhd"
$spsql2 = New-AzureVMConfig -Name 'sp-sql2' -AvailabilitySetName $avset3 -ImageName $sqlimage -InstanceSize $size -MediaLocation $vmStorageLocation |
Add-AzureProvisioningConfig -WindowsDomain -Password $dompwd -Domain $domain -DomainUserName $domuser -DomainPassword $dompwd -MachineObjectOU $advmou -JoinDomain $joindom |
Add-AzureDataDisk -CreateNew -DiskSizeInGB 100 -DiskLabel 'datalog' -LUN 0 |
Set-AzureSubnet $subnetNameSql

# Create SQL Server 3 (Witness)
$size = "Medium"
$vmStorageLocation = $mediaLocation + "sp-sql3.vhd"
$spsql3 = New-AzureVMConfig -Name 'sp-sql3' -ImageName $sqlimage -InstanceSize $size -MediaLocation $vmStorageLocation |
Add-AzureProvisioningConfig -WindowsDomain -Password $dompwd -Domain $domain -DomainUserName $domuser -DomainPassword $dompwd -MachineObjectOU $advmou -JoinDomain $joindom |
Add-AzureDataDisk -CreateNew -DiskSizeInGB 100 -DiskLabel 'datalog' -LUN 0 |
Set-AzureSubnet $subnetNameSql

$dns1 = New-AzureDns -Name 'dns1' -IPAddress $primaryDNS

Last thing is to call New-AzureVM to actually create the VMs.

New-AzureVM -ServiceName $serviceName -ServiceLabel $serviceLabel `
-ServiceDescription $serviceDesc `
-AffinityGroup $ag -VNetName $vnetname -DnsSettings $dns1 `
-VMs $spwfe1,$spwfe2,$spapp1,$spapp2,$spsql1,$spsql2,$spsql3


Now go grab a cup of coffee and wait for your VMs to be provisioned, domain joined and started.
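While you wait, you can poll the provisioning status from the same PowerShell session. A small sketch using the classic module; all the VMs should eventually report ReadyRole:

```powershell
# List the VMs in the cloud service with their current status
Get-AzureVM -ServiceName $serviceName |
Format-Table Name, InstanceStatus, PowerState
```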

When done you should see something like the following in the PowerShell window:

image

Looking in the portal:

image

In the next post we will look at how to set up the SQL Servers in a mirror. Not really an Azure subject, but still something you want to do to ensure redundancy.

Posted in Azure | Leave a comment