Creating a SharePoint Farm in Windows Azure

I have recently been involved in a (pilot) project moving a public-facing SharePoint farm to Windows Azure. This will be the first in a series of blog posts “from the trenches” about the different steps taken to achieve this goal.

First a few words on the architecture. For the pilot it was decided that a separate Active Directory forest would be created in Azure, and as no other on-premises resources were required, we avoided the task of setting up a VPN. Time permitting, I may describe these tasks in a future blog post.

A few words on availability sets in Windows Azure. To ensure the availability of an application, you use multiple Virtual Machines (VMs). By using multiple VMs, you can make sure that your application remains available during local network failures, local disk hardware failures, and any planned downtime that the platform may require.

You manage the availability of an application that uses multiple VMs by adding the machines to an availability set. Availability sets are directly related to fault domains and update domains. A fault domain in Windows Azure is defined by avoiding single points of failure, such as the network switch or the power unit of a rack of servers; in fact, a fault domain is roughly equivalent to a rack of physical servers. When multiple virtual machines are connected together in a cloud service, an availability set ensures that the machines are located in different fault domains. Placing the VMs in the same availability set also spreads them across update domains, which ensures that the fabric controller will never shut all of them down at the same time, e.g. during host OS maintenance.
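To make this concrete, here is a minimal sketch of how two frontend VMs could be placed in the same availability set using the Windows Azure PowerShell cmdlets. The image, service, credential and availability-set names below are illustrative placeholders, not values from our setup.

```powershell
# Minimal sketch: two frontend VMs in one availability set ("WFE-AS").
# The fabric controller then spreads them across fault and update domains.
# All names below (image, service, credentials) are placeholders.
$imageName = "<gallery-image-name>"
$user      = "spadmin"
$pwd       = "<password>"

$wfe1 = New-AzureVMConfig -Name "WFE1" -InstanceSize "Large" `
            -ImageName $imageName -AvailabilitySetName "WFE-AS" |
        Add-AzureProvisioningConfig -Windows -AdminUsername $user -Password $pwd

$wfe2 = New-AzureVMConfig -Name "WFE2" -InstanceSize "Large" `
            -ImageName $imageName -AvailabilitySetName "WFE-AS" |
        Add-AzureProvisioningConfig -Windows -AdminUsername $user -Password $pwd

# Creating both VMs in the same cloud service with the same availability
# set guarantees they never share a single fault domain.
New-AzureVM -ServiceName "spfarm-web" -Location "West Europe" -VMs $wfe1,$wfe2
```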

In all we defined four availability sets, one for the domain controllers and one for each tier of the application: the frontend servers, the application servers and the database servers.

To ensure redundancy and automatic failover, the SQL Servers are set up with database mirroring and a witness server.
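For reference, a rough sketch of what the classic mirroring-with-witness setup involves, driven from PowerShell via the SQLPS module’s Invoke-Sqlcmd. The database name, domain FQDNs and port are illustrative, and the content database is assumed to have been restored WITH NORECOVERY on the mirror beforehand.

```powershell
# Sketch: database mirroring with a witness (SQL Server 2012).
# Database name, FQDNs and port are placeholders.
# 1. Create a mirroring endpoint on each of the three SQL Servers.
$endpointSql = @"
CREATE ENDPOINT Mirroring
    STATE = STARTED
    AS TCP (LISTENER_PORT = 5022)
    FOR DATABASE_MIRRORING (ROLE = ALL);
"@
"SQL1","SQL2","SQL3" | ForEach-Object {
    Invoke-Sqlcmd -ServerInstance $_ -Query $endpointSql
}

# 2. Pair mirror -> principal first, then principal -> mirror and witness.
#    (Assumes WSS_Content was restored WITH NORECOVERY on SQL2.)
Invoke-Sqlcmd -ServerInstance "SQL2" -Query "ALTER DATABASE WSS_Content SET PARTNER = 'TCP://SQL1.corp.local:5022';"
Invoke-Sqlcmd -ServerInstance "SQL1" -Query "ALTER DATABASE WSS_Content SET PARTNER = 'TCP://SQL2.corp.local:5022';"
Invoke-Sqlcmd -ServerInstance "SQL1" -Query "ALTER DATABASE WSS_Content SET WITNESS = 'TCP://SQL3.corp.local:5022';"
```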

All VMs are placed in a Virtual Network (VNET), with each tier in its own subnet; the domain controllers also got their own. In the current release of Windows Azure all VMs can see each other by default, and this is not easily changed, but it is being worked on for future releases; hence the subnetting.

The address scope for the VNET is 10.1.0.0/16.

The table below gives the VNET configuration:

Name       Description                Address Scope
VNET       Main VNET                  10.1.0.0/16
ADSubnet   AD/DNS subnet              10.1.1.0/24
AppSubnet  Application server subnet  10.1.2.0/24
WFESubnet  Frontend server subnet     10.1.3.0/24
SQLSubnet  SQL Server subnet          10.1.4.0/24
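Expressed as the network configuration XML that Set-AzureVNetConfig consumes, the table above might look roughly like this. The affinity group name is a placeholder, DNS settings are omitted, and note that the cmdlet replaces the subscription’s entire network configuration.

```powershell
# Sketch of the VNET definition above in netcfg XML form.
# "spfarm-ag" is a placeholder affinity group; DNS settings are omitted.
$netcfg = @"
<NetworkConfiguration xmlns="http://schemas.microsoft.com/ServiceHosting/2011/07/NetworkConfiguration">
  <VirtualNetworkConfiguration>
    <VirtualNetworkSites>
      <VirtualNetworkSite name="VNET" AffinityGroup="spfarm-ag">
        <AddressSpace>
          <AddressPrefix>10.1.0.0/16</AddressPrefix>
        </AddressSpace>
        <Subnets>
          <Subnet name="ADSubnet"><AddressPrefix>10.1.1.0/24</AddressPrefix></Subnet>
          <Subnet name="AppSubnet"><AddressPrefix>10.1.2.0/24</AddressPrefix></Subnet>
          <Subnet name="WFESubnet"><AddressPrefix>10.1.3.0/24</AddressPrefix></Subnet>
          <Subnet name="SQLSubnet"><AddressPrefix>10.1.4.0/24</AddressPrefix></Subnet>
        </Subnets>
      </VirtualNetworkSite>
    </VirtualNetworkSites>
  </VirtualNetworkConfiguration>
</NetworkConfiguration>
"@
$path = Join-Path $env:TEMP "netcfg.xml"
$netcfg | Out-File -Encoding UTF8 $path
# Caution: this replaces the subscription's whole network configuration.
Set-AzureVNetConfig -ConfigurationPath $path
```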

The figure below shows the topology of the VMs in Windows Azure.

The load balancer built into Windows Azure will distribute the load between the frontend servers.
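Concretely, such a load-balanced endpoint could be created like this; the service name, endpoint name and probe settings are illustrative:

```powershell
# Sketch: a load-balanced HTTP endpoint shared by both frontends.
# "spfarm-web", "WebLB" and the probe path are placeholders.
foreach ($name in "WFE1","WFE2") {
    Get-AzureVM -ServiceName "spfarm-web" -Name $name |
        Add-AzureEndpoint -Name "HTTP" -Protocol tcp -LocalPort 80 -PublicPort 80 `
            -LBSetName "WebLB" -ProbeProtocol http -ProbePort 80 -ProbePath "/" |
        Update-AzureVM
}
```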

[Figure: topology of the VMs in Windows Azure]

The following table shows the configuration of the VMs.

Name  Description                OS                      Disks
DC1   Domain Controller          Windows Server 2012     1 x 30 GB (OS), 1 x 20 GB
DC2   Domain Controller          Windows Server 2012     1 x 30 GB (OS), 1 x 20 GB
WFE1  Frontend Server            Windows Server 2008 R2  1 x 100 GB
WFE2  Frontend Server            Windows Server 2008 R2  1 x 100 GB
App1  Application Server         Windows Server 2008 R2  1 x 100 GB
App2  Application Server         Windows Server 2008 R2  1 x 100 GB
SQL1  Primary Database Server    Windows Server 2012     2 x 100 GB
SQL2  Secondary Database Server  Windows Server 2012     2 x 100 GB
SQL3  Witness Database Server    Windows Server 2012     2 x 100 GB
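As a sketch, attaching an empty 100 GB data disk to one of the SQL Servers could be done like this. The service name and disk label are illustrative; host caching is disabled, which is generally recommended for SQL Server data disks.

```powershell
# Sketch: add an empty data disk to SQL1 (names are placeholders).
Get-AzureVM -ServiceName "spfarm-sql" -Name "SQL1" |
    Add-AzureDataDisk -CreateNew -DiskSizeInGB 100 -DiskLabel "SQL1-data" `
        -LUN 0 -HostCaching None |
    Update-AzureVM
```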

Due to business requirements, SharePoint 2010 is used. The database servers run SQL Server 2012 Enterprise Edition.

The OS drive on the images supplied by the Windows Azure gallery is only 30 GB. To ensure we had enough space “to play around”, I created two base images (one for Windows Server 2008 R2 and one for Windows Server 2012), each with a 100 GB OS drive. As I did not want to upload 100 GB, I used a small trick and a utility to change the size information recorded in the VHD file, thereby expanding the drive from the original 30 GB to 100 GB; more about this in a future post. All the required bits were placed on the base images, so we only had to download and distribute them once.
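The trick itself will be covered later, but as a rough sketch, uploading the finished base VHD and registering it as an image could look like this. Add-AzureVhd only transfers the non-empty ranges of the disk, so the expanded, mostly blank drive does not cost 100 GB of bandwidth. The storage account and names are placeholders.

```powershell
# Sketch: upload a prepared base VHD and register it as a reusable image.
# Storage account, container and image names are placeholders.
$dest = "https://spfarmstore.blob.core.windows.net/vhds/base-ws2008r2.vhd"
Add-AzureVhd -LocalFilePath "C:\VHDs\base-ws2008r2.vhd" -Destination $dest
Add-AzureVMImage -ImageName "base-ws2008r2" -MediaLocation $dest -OS Windows
```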

The two domain controllers were created from the images supplied by the gallery.

In the next post I will look at creating the VNET.
