Shared Storage Options in Azure: Part 1 – Azure Shared Disks


In an IaaS world, shared storage between virtual machines is a common ask. "What is the best way to configure shared storage?" and "What options do we have for sharing storage between these VMs?" are both questions I've answered several times, so let's go ahead and blog some of the options! The first part in this blog series, "Shared Storage Options in Azure", covers Azure Shared Disks.

As I write subsequent posts in this series, I will update this post with the links to each of them.

 

When shared disks were announced in July of 2020, there was quite a bit of excitement in the community. There are so many applications that still leverage shared storage for things like Windows Server Failover Clustering, on which many applications are built, like SQL Server Failover Cluster Instances. While I highly recommend using a Cloud Witness, many customers migrating workloads to Azure still rely on a shared disk for quorum as well. Additionally, many Linux applications that were previously configured to use a shared virtual disk, or even raw LUN mappings, leverage shared storage for clustered file systems such as GFS2 or OCFS2.

Additional sample workloads for Azure Shared Disks can be found here: Shared Disk Sample Workloads.

There are a few limitations of shared disks, the list of which is constantly getting smaller. For now, though, let’s just go ahead and jump into it and see how to deploy them. After which, we’ll do a quick “Pros” and “Cons” list before moving on to the other shared storage options. I deployed Shared Disks in my lab using the portal first (screenshots below), but also created a Github Repository (https://github.com/matthansen0/azure-shared-storage-options) with the Azure PowerShell script and an ARM template to deploy a similar environment – feel free to use those if you’d like!

As a prerequisite (not pictured below) I created the following resources:

  • A Resource Group in the West US region
  • A Virtual Network with a single subnet
  • 2x D2s v3, Windows Server 2016 Virtual Machines (VM001, VM002) each with a single OS disk

Now that those are created, I deployed a Managed Disk (named “sharedDisk001”) just like you would if you were deploying a typical data disk.

On the "Advanced" tab you will see the ability to configure the managed disk as a shared disk. Here is where you set the max shares value, which specifies the maximum number of VMs that can attach that particular disk.
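If you prefer to script that step, here is a minimal Az PowerShell sketch of creating the same disk (the resource group name and disk size are assumptions for this lab; the shared-disk piece is just the -MaxSharesCount parameter):

# Assumes the Az.Compute module; "shared-disk-rg" is a placeholder resource group name.
$diskConfig = New-AzDiskConfig -Location "westus" -CreateOption Empty -DiskSizeGB 256 `
    -SkuName "Premium_LRS" -MaxSharesCount 2   # the "max shares" value from the Advanced tab

New-AzDisk -ResourceGroupName "shared-disk-rg" -DiskName "sharedDisk001" -Disk $diskConfig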

After the disk is finished deploying, we head over to the first VM and attach an existing disk. You'll note that the disk shows up as a "shared disk" and displays the number of shares currently in use on that disk. Since this is the first time it's being attached, it shows 0.

After attaching the disk to the first VM, we head over and do the same thing on VM002. You’ll note that the number of shares has increased by 1 since we have now mounted the disk on VM001.
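For reference, attaching the shared disk from PowerShell uses the same Add-AzVMDataDisk workflow as any other data disk; a rough sketch (same placeholder resource group as above, and LUN 0 is assumed to be free on both VMs):

# Attach the shared disk to both lab VMs.
$disk = Get-AzDisk -ResourceGroupName "shared-disk-rg" -DiskName "sharedDisk001"

foreach ($vmName in @("VM001", "VM002")) {
    $vm = Get-AzVM -ResourceGroupName "shared-disk-rg" -Name $vmName
    $vm = Add-AzVMDataDisk -VM $vm -Name "sharedDisk001" -ManagedDiskId $disk.Id -Lun 0 -CreateOption Attach
    Update-AzVM -ResourceGroupName "shared-disk-rg" -VM $vm
}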

Great, now the disk is attached to both VMs! Heading over to the managed disk itself you’ll notice that the overview page looks a bit different from typical managed disks, showing information like “Managed by” and “Max Shares”.

In the properties of the disk, we can see the VM owners of that specific disk, which is exactly what we wanted to see after mounting it on each of the VMs.

Although I set up this configuration using Windows machines, you'll notice I didn't go into the OS. That's because the process, from an Azure perspective, is the same for Linux as it is for Windows VMs. Of course, it will be different within the OS, but there is nothing Azure-specific from that aspect.

Okay, here we go, the Pros and Cons:

Pros:

  • Azure Shared Disks allow for the use of what is considered to be "legacy" clustering technology in Azure.
  • Can be leveraged by familiar tools such as Windows Failover Cluster Manager, Scale-out File Server, and Linux Pacemaker/Corosync.
  • Premium and Ultra Disks are supported so performance shouldn’t be an issue in most cases.
  • Supports SCSI Persistent Reservations.
  • Fairly simple to set up.

Cons:

  • Does not scale well, similar to what would be expected with a SAN mapping.
  • Only certain disk types are supported.
  • ReadOnly host caching is not available for Premium SSDs with maxShares >1.
  • When using Availability Sets and Virtual Machine Scale Sets, storage fault domain alignment with the VMs is not enforced on the shared data disk.
  • Azure Backup not yet supported.
  • Azure Site Recovery not yet supported.

Alright, that’s it for Azure Shared Disks! Go take a look at my Github Repository and give shared disks a shot!

Please reach out to me in the comments, LinkedIn, or Twitter with any questions or comments about this blog post or this series.  

Delete Azure Recovery Vault Backups Immediately


If you're like many others, over the past few months you've noticed that if you configure Azure Backup, you can't delete the vault until 14 days after you stop backups. This is due to Soft Delete for Azure Backup. It doesn't cost anything to keep those backups during that time, and it's honestly a great safeguard against accidentally deleting backups, plus it gives you the option to "undelete". Though, in some cases (mostly in lab environments) you just want to clear it out (or, as was affectionately noted by a colleague of mine, "nuke it from orbit"). Let's walk through how to do that real quickly.

When you go to stop backups and delete the data, you'll get the warning "The restore points for this backup item have been deleted and retained in a soft delete state", and you have to wait 14 days to fully delete those backups. You'll also get an email alert letting you know.

To remove these backups immediately we need to disable soft delete, which is a configuration setting in the Recovery Services Vault. DO NOT DO THIS UNLESS YOU ABSOLUTELY MUST. As previously noted, this is a great safeguard to have in place, and I would also suggest using ARM Resource Locks in production environments in addition to soft delete. If you're sure though, we can go turn it off.

Alright, now that we’ve disabled Soft Delete for the vault, we have to commit the delete operation again. This means first we’ll need to “undelete” the backup, then delete it again which this time won’t be subject to the soft delete policy.
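If you'd rather do the whole thing from PowerShell, here is a rough sketch of the same sequence (the vault name, resource group, and AzureVM workload type are assumptions for a simple VM-backup lab):

# Assumes Az.RecoveryServices; "lab-rg" and "lab-vault" are placeholder names.
$vault = Get-AzRecoveryServicesVault -ResourceGroupName "lab-rg" -Name "lab-vault"

# 1. Disable soft delete on the vault (again, only if you absolutely must).
Set-AzRecoveryServicesVaultProperty -VaultId $vault.ID -SoftDeleteFeatureState Disable

# 2. "Undelete" the soft-deleted backup item.
$container = Get-AzRecoveryServicesBackupContainer -ContainerType AzureVM -Status Registered -VaultId $vault.ID
$item = Get-AzRecoveryServicesBackupItem -Container $container -WorkloadType AzureVM -VaultId $vault.ID
Undo-AzRecoveryServicesBackupItemDeletion -Item $item -VaultId $vault.ID -Force

# 3. Delete it again; this time the recovery points are removed immediately.
Disable-AzRecoveryServicesBackupProtection -Item $item -RemoveRecoveryPoints -VaultId $vault.ID -Force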

Now we can go delete it again, after which we can find that there are no backup items in the vault.

Success!!! The backup is fully deleted. So long as there are no other dependencies (policies, infrastructure, etc.) you can now delete the vault.

If you have any questions or suggestions for future blog posts feel free to comment below, or reach out to me via email, twitter, or LinkedIn.

Thanks!

Azure Web Apps with Cost Effective, Private and Hybrid Connectivity (The ASE Killer!)


Caution: this blog post may save you significant time and money, and has been affectionately dubbed "The ASE Killer!".

Note: If you prefer a video walk-through, you can now view my video building this solution on YouTube: https://www.youtube.com/watch?v=aeYwTLd_zB8

With the advent of the cloud came the ever so attractive PaaS service model. The first time I heard about this, I was sold. Host my application without having to manage the infrastructure, the OS, patching, scaling and all the other things that I really don't want to do anyway – sign me up! The "catch", though, is that (to misquote all those before me) "with less responsibility comes less power". When relinquishing responsibility (the positive side of PaaS), you also relinquish some control and ability for customization. This is a conversation I have with customers at least once a week: how do I control networking for my PaaS application?

One of my peers, Steve Loethen, who focuses more on the application development aspects of Azure, and I were speaking and noted that it would be great if we could leverage Azure Private Link Endpoints and Azure Application Gateway to get both private connectivity from the internal network and a cloud-native Web Application Firewall for internet traffic. Traditionally, using the "isolated" tier of Azure App Services, called an App Service Environment (which provides a completely isolated and dedicated app service environment), would be the way to go. Unfortunately, the nature of a private stamp of a PaaS service inherently comes with a pretty hefty price tag. There are still some great features to be had with the ASE, but using Private Link we can get to a point where all external access is blocked unless it's going through the WAF, while the app can still be accessed over the internal network. I'll document that configuration below.

To setup this environment we will need to:

  • Setup the Web App
  • Create a Virtual Network in Azure
  • Setup a Site-to-Site VPN
  • Setup a Private Link Endpoint for the Web App
  • Restrict Network Access to the Web App to only the Private Link Endpoint
  • Setup a Public Application Gateway

*Note* As of 3/12/20, Private Link Endpoints for App Service Web Apps were in Public Preview.

As of 10/6/20, Private Link Endpoints for App Service Web Apps are Generally Available.

Let’s get started! First, we’ll create the App Service. I’m going to be using the resource group named “pri-webapp-rg” and the app service name “private-webapp.azurewebsites.net” in the West US region.

*Note* At the time of writing this post (5/18/20), Private Endpoints require the App Service plan to be a Premium v2 (Pv2).

I am not going to use it in this lab, but I am going to leave App Insights enabled. If you’ve not used App Insights yet as an APM tool I highly suggest you look at instrumenting your app with this tooling.

Go ahead and add any tags you need for your environment, then create the App Service. Next we'll go and create our Azure Virtual Network, which will facilitate the private network connectivity. I'm going to create this in West US. I'll note that Private Link Endpoints do not need to be deployed in the same region as the resource, but I want to reduce latency here, so I'll deploy the vnet (and the Private Link Endpoint in the following steps) in the same region as the App Service.

I’m going to use the 10.2.0.0/16 address space, and initially create two subnets. The “WebApp” subnet will be for the Private Link Endpoint and the “AppGateway” subnet will be designated for the Application Gateway.

I’m going to leave the rest of the deployment settings with their default options selected for now and create the vnet.

Once the vnet has finished creating, we need to go create a “Gateway Subnet” which will be used for the VPN Gateway to be used for the Site-to-Site VPN for hybrid connectivity. By default, it will pre-select a subnet that is available in your address space. Once that subnet is created we have our finished vnet setup.
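For anyone scripting along, here is a condensed Az PowerShell sketch of the vnet above (the subnet prefixes and vnet name are my assumptions; the original build was done in the portal, and the WebApp prefix simply matches the 10.2.2.4 endpoint IP you'll see later):

$webAppSubnet  = New-AzVirtualNetworkSubnetConfig -Name "WebApp"        -AddressPrefix "10.2.2.0/24"
$appGwSubnet   = New-AzVirtualNetworkSubnetConfig -Name "AppGateway"    -AddressPrefix "10.2.1.0/24"
$gatewaySubnet = New-AzVirtualNetworkSubnetConfig -Name "GatewaySubnet" -AddressPrefix "10.2.0.0/27"

New-AzVirtualNetwork -Name "pri-webapp-vnet" -ResourceGroupName "pri-webapp-rg" -Location "westus" `
    -AddressPrefix "10.2.0.0/16" -Subnet $webAppSubnet, $appGwSubnet, $gatewaySubnet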

Now that the network in Azure is setup, we need to get the VPN Gateway configured. When the “West US” region is selected on the deployment screen you’ll get a list of available virtual networks in that region. I selected the one we just created, and it will validate that there is a “GatewaySubnet” created in that vnet and will select that as the deployment subnet for the VPN Gateway. I’m going to be using a Route-based VPN so I’ll select that as my VPN type then create a new public IP address and leave the Active-Active and BGP options disabled for this lab.

The VPN gateway does take 20-30 minutes to deploy, so go grab a cup of coffee. Once that’s done we’ll need to add the “connection” for the site-to-site VPN.

For this setup we’re using a Site-to-Site IPSec VPN Connection, and this is where you’ll also set your PSK and IKE protocol. You’ll also need to create a local network gateway, which is an ARM resource used for representing the on-premises network.

The "IP Address" field is where you'll enter the public IP address of the appliance that's going to terminate the VPN on-premises. You'll also need to add which networks are on-prem; this will add the local network address space to the route table of the VPN Gateway so it knows to use that link for traffic bound for those addresses.

Once that's created, we'll need to go to the overview page for the VPN Gateway to get its public IP address. That will be needed to set up the on-premises side of the VPN. I'm going to use a PFSense appliance in my home lab network to accomplish this setup. If you want to test this entirely in Azure, you can use a peered vnet and create an emulated "client" machine; alternatively, you could set up a point-to-site VPN for just your local machine. I won't be showing that process here, but I have another post that discusses the setup of a PFSense S2S VPN with an Azure VPN Gateway and another that uses Palo Alto for a S2S VPN to Azure.

Once the S2S VPN is setup, we can now go and setup the Private Link Endpoint for the App Service.

When choosing a private link endpoint you need to choose the resource type, so here I’m using the “Microsoft.Web/sites” resource type.
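If you want to see roughly what that looks like in Az PowerShell, here's a sketch (the endpoint and connection names are placeholders, and "sites" is the group ID used for Web Apps):

$webApp = Get-AzWebApp -ResourceGroupName "pri-webapp-rg" -Name "private-webapp"
$vnet   = Get-AzVirtualNetwork -ResourceGroupName "pri-webapp-rg" -Name "pri-webapp-vnet"
$subnet = Get-AzVirtualNetworkSubnetConfig -Name "WebApp" -VirtualNetwork $vnet

$plsConnection = New-AzPrivateLinkServiceConnection -Name "private-webapp-plsc" `
    -PrivateLinkServiceId $webApp.Id -GroupId "sites"

New-AzPrivateEndpoint -ResourceGroupName "pri-webapp-rg" -Name "private-webapp-pe" `
    -Location "westus" -Subnet $subnet -PrivateLinkServiceConnection $plsConnection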

As noted earlier on, we created the "WebApp" subnet to hold the private link endpoint, and we'll select that here. A very important component of private link is DNS. As is stated so eloquently in my favorite haiku:

"It's not DNS

There's no way it's DNS

It was DNS"

Before deploying private link you have to consider the DNS scenario. Clients cannot call private link endpoints by their IP addresses; it has to be by their DNS names. I highly recommend some of the Microsoft documentation on the topic, or Daniel Mauser has an amazing article on Private Link DNS Integration Scenarios. I'm going to go ahead and allow the private endpoint to create an Azure Private DNS Zone, but ultimately for this lab environment I'll just be using a hosts file entry.

*Note* You can also check out my post on automated IaaS DNS Load Balancing to help with the private, hybrid DNS integration scenarios that may be required for Private Link here: DNS Load Balancing in Azure.

Alright, the private link endpoint is all set up. Now we need to make sure we restrict network traffic to only what originates from the private link endpoint. App Services can quite easily use virtual network service endpoints to restrict traffic originating from vnets in Azure. In our case, though, we want to make sure that on-premises traffic can also access the app service, so rather than allowing access at the vnet level we'll just allow the single IP of the private endpoint.

After adding the access restriction rule, you’ll see that the “Allow All” rule switches to a “Deny All” rule and is given the lowest priority on the ACL. Above that we have the single IP of the Private Link Endpoint that is allowed. As a result, the only IP that’s allowed to call this app service is that of the private link endpoint.
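The same access restriction can be scripted; a minimal sketch, assuming 10.2.2.4 is your private endpoint IP and the rule name and priority are arbitrary:

Add-AzWebAppAccessRestrictionRule -ResourceGroupName "pri-webapp-rg" -WebAppName "private-webapp" `
    -Name "AllowPrivateEndpoint" -Priority 100 -Action Allow -IpAddress "10.2.2.4/32"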

Great! Okay, now we've set up the app service, the site-to-site VPN, private link, and access restrictions. We should be all set to test connectivity from on-premises, so let's go give that a shot. As noted above, since this is just a lab environment I'm going to use a hosts file entry on the test machine on my lab network, so let's set that up real quick.
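On the test machine, the hosts file entry can be added from an elevated PowerShell prompt; a one-liner sketch (10.2.2.4 being the private endpoint IP in this lab):

Add-Content -Path "$env:windir\System32\drivers\etc\hosts" -Value "10.2.2.4`tprivate-webapp.azurewebsites.net"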

Great, now let's validate this in a browser and with PowerShell, to both make sure the page loads and that it's using the private endpoint. We can see that the page loads using the "public" DNS name, but the address is the 10.2.2.4 address that is assigned to the private link endpoint. We can also see that a machine that's just using the typical internet path is given a 403, since its traffic is not coming from the private endpoint – perfect!
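A quick PowerShell sanity check from the on-premises test machine might look like this (the name should resolve to 10.2.2.4 via the hosts entry, and the request should return a 200; the same Invoke-WebRequest from an internet-only machine should fail with a 403):

[System.Net.Dns]::GetHostAddresses("private-webapp.azurewebsites.net")
Invoke-WebRequest "https://private-webapp.azurewebsites.net" -UseBasicParsing | Select-Object StatusCode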

If we go back and look at the site-to-site VPN configuration now we’ll also see that traffic has started passing over that link which further validates the fact that the traffic is remaining private.

Now that we have confirmed the hybrid traffic is passing through the private endpoint and that all internet traffic is denied, let's configure the Application Gateway with the Web Application Firewall SKU so we can facilitate external traffic communicating with our private Web App (forgive my ever so descriptive naming convention). We'll create a new public IP address to be associated with the Application Gateway, and lastly, we'll deploy it into the subnet that we designated earlier for the Application Gateway.

When creating the backend pool, we'll enter the IP address of the private link endpoint. For the routing rule, since this is a lab and I don't have a TLS certificate on hand, I'll use an HTTP listener on the App Gateway and the same HTTPS endpoint for the App Service that we've been using internally. In a production environment you'll want HTTPS on the public listener as well. When creating the HTTP setting for this routing rule, we want to override the host name to that of the App Service (in this case, "private-webapp.azurewebsites.net"), because remember that private link needs to be called using a DNS name; this way the host name is added to the request header.
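For reference, the host name override is just one property on the backend HTTP setting; here's a sketch of that single piece in Az PowerShell (the setting name and timeout are arbitrary, and the rest of the Application Gateway configuration is omitted):

$httpSetting = New-AzApplicationGatewayBackendHttpSetting -Name "webapp-https" -Port 443 -Protocol Https `
    -CookieBasedAffinity Disabled -RequestTimeout 30 -HostName "private-webapp.azurewebsites.net"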

*Note* Since we deployed a Private DNS Zone, which is automatically attached to our virtual network, and the application gateway is deployed into the virtual network, we can use the DNS name of the app here because it will resolve correctly. Though, for clarity in this lab environment I’m just using the IP address.

After the Application Gateway is finished deploying, I want to go add a DNS label to the associated public IP and confirm that it was applied, so I can use that DNS name rather than the IP itself.

Okay, moment of truth! Remember the machine we used earlier that took the public path to the app service but got a 403 error because of the access restrictions? Let's go ahead and hit the newly provisioned Application Gateway, which does have a public listener, and try the site now. We see that it's using the DNS label and the HTTP listener that we set up on the App Gateway, going over the public interface, and it routes us appropriately back to our private web app!

That’s all folks! As a retrospective, here’s what we’ve done:

  • Configured an App Service with Hybrid, Private Networking
  • Configured a scalable public endpoint that’s using a WAF in an IDS mode
  • Maintained the flexibility, scalability, and cost effectiveness of the non-ASE App Service

If you have any questions or suggestions for future blog posts feel free to comment below, or reach out to me via email, twitter, or LinkedIn.

Thanks!


Change Feed:

10/8/20: Private Endpoints for App Service Web Apps now GA

12/25/20: Updated with link to new blog post for "Azure Site-to-Site VPN with a Palo Alto Firewall"

12/27/20: Updated with link to new blog post for "DNS Load Balancing in Azure"

Azure Site-to-Site VPN with PFSense


(Edit: I've also now posted about how to do this with a Palo Alto Firewall as well; you can see that post here: https://thetechl33t.com/2020/11/18/azure-site-to-site-vpn-with-palo-alto-firewall/)

If you’re like me, you like to have a little bit more control over your network (home or business) than is available with the ISP-provided router – enter PFSense. The Netgate Appliances work very well and I’ve worked with plenty of home networks, as well as small and medium businesses that have used them as their cost-effective Router/Firewall/VPN appliance combination.

Now, how do we set up a Site-to-Site VPN between our infrastructure hosted in Azure and PFSense? It's actually pretty easy! Let's jump into it.

The first thing we need to do in Azure is set up a virtual network (or vnet). A virtual network is a regional construct, meaning that it cannot span multiple regions. I'm going to choose the "West US" region for now since that's where I'll be building my resources after this is configured.

In this particular setup I’m using 10.2.0.0/16 as my virtual network address space and have designated two subnets for a later project.

After the virtual network is configured, we’ll need to create a “Gateway Subnet” which is a specific prerequisite to deploying a VPN Gateway into the virtual network.

Alright, the network and subnets are all setup in Azure. Now let’s go create a Virtual Network Gateway to act as our PaaS VPN appliance. I’m going to be using a route-based VPN, so I’ll use that VPN type and choose the virtual network that we just created. You’ll notice that it also selects the special GatewaySubnet that was created for the VPN Gateway. I’m going to create a new public IP address to be associated with the VPN Gateway, and won’t be using active-active or BGP for this lab so I’ll leave those disabled.

The VPN Gateway takes about 20-30 minutes to deploy so go ahead and go stretch and grab a cup of coffee. Once it’s done we’ll go in and create a “connection”, which we’ll designate as “site-to-site IPSec”, then set the PSK that will be used here and on the on-premises appliance.

You'll also need to create a local network gateway, which is an Azure resource that represents the on-premises VPN appliance and its IP ranges. You'll want to enter the public IP of the PFSense appliance on-premises and whatever address space you'd like to add to the route table of the virtual network so it knows to route those addresses across the VPN through the gateway.
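If you prefer PowerShell for this part, here is a rough sketch of the local network gateway and connection (the resource group, names, on-prem prefix, and PSK are all placeholders):

$localGw = New-AzLocalNetworkGateway -Name "onprem-lng" -ResourceGroupName "vpn-rg" -Location "westus" `
    -GatewayIpAddress "<pfsense-public-ip>" -AddressPrefix "192.168.1.0/24"

$azureGw = Get-AzVirtualNetworkGateway -Name "azure-vpn-gw" -ResourceGroupName "vpn-rg"

New-AzVirtualNetworkGatewayConnection -Name "azure-to-onprem" -ResourceGroupName "vpn-rg" -Location "westus" `
    -VirtualNetworkGateway1 $azureGw -LocalNetworkGateway2 $localGw -ConnectionType IPsec -SharedKey "<your-psk>"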

Once that’s done we’ll go grab the public IP of the VPN Gateway from the overview page so we can go setup the PFSense side of the VPN.

Alright, now let’s go setup an IPSec VPN in PFSense. Open the IPSec VPN settings page and let’s create a Phase 1 configuration.

Select the Authentication Method of Mutual PSK and enter the PSK we set up on the Connection on the VPN Gateway in the "Pre-Shared Key" field. I'm going to be connecting to some other resources with this setting, so I am using both AES 128 and AES 256 with their corresponding SHA-256 hashes, and both DH groups 2 and 14. I recommend reviewing the documentation on cryptographic requirements for Azure VPN gateways for reference.

Next let’s create a Phase 2 configuration for the IPSec VPN. I’m designating 10.0.0.0/8 to the VPN assuming that I may expand my Azure environment at some later point. If we wanted to be exact this would be the 10.2.0.0/16 address space that we configured in our virtual network. For the Encryption Algorithms required, please see the cryptographic requirements documentation noted above. If you wanted to automatically ping a host in Azure to bring up/keep up the tunnel you can configure that here as well.

Now that we’ve created both the Phase 1 and Phase 2 configurations we can “apply changes” to add those changes to the running configuration.

After the settings have saved, the tunnel will take a minute to come up; you may take this time to spin up a quick VM in your Azure virtual network to use for testing connectivity. Once that tunnel comes up you can see the connection statistics on the IPSec Status page. Similarly, if you look at the overview page of the site-to-site connection we created on the VPN Gateway in Azure, you can see the tunnel status and connection statistics.

For troubleshooting purposes, there is a “VPN Troubleshoot” functionality that’s a part of Azure Network Watcher that’s built into the view of the VPN Gateway. You can select the gateway on which you’d like to run diagnostics, select a storage account where it will store the sampled data, and let it run. If there are any issues with the connection this will list them out for you. It will also list some specifics of the connection itself so if you want to dig into those you can go look at the files written to the blob storage account after the troubleshooting action is complete to get information like packets, bytes, current bandwidth, peak bandwidth, last connected time, and CPU utilization of the gateway. For further troubleshooting tips you can also visit the documentation on troubleshooting site-to-site VPNs with Azure VPN Gateways.

That’s it, all done! The site-to-site VPN is all setup. The VPN gateway in Azure really makes this process very easy, and the PFSense side is fairly easy to setup as well.

If you have any questions or suggestions for future blog posts feel free to comment below, or reach out to me via email, twitter, or LinkedIn. I hope I’ve made your day at least a little bit easier!

Thanks!

Custom “Virtual Network Operator” Role in Azure


*Edit* 6/9/20: At the time of writing this blog custom roles were not available in the Azure Portal. Since that feature now exists, I’ve added how to create this role through the portal to the end of this blog.

We all know that cloud environments are different than on-premises ones. Development environments can be even more difficult: finding the right balance between giving developers the freedom to produce business value and keeping enough security and controls in place to maintain a good security posture. Recently I came across a scenario where the design was that the networking components (Virtual Network, S2S VPN, Peerings, Service Endpoints, UDRs, Subnets, etc.) were all under the control of a Networking team so that the Azure environment would be an extension of their local network. From a permissions standpoint, though, this caused some problems. The goal was to empower developers to build whatever they need and just attach it to the vNet when connectivity was needed, with the caveat that they shouldn't be able to modify any of the network settings or configurations in the shared vNet. This, however, turned out not to be possible with the built-in IAM roles.

In my testing, even though you're not "modifying" anything per se in the vNet, attaching a network interface card to a subnet does require write access. Reader access will give the user this error.

After looking through the default IAM roles, there aren’t any that do what I need them to do. Alas, a custom role is needed. For reference, please checkout this docs page for custom roles – https://docs.microsoft.com/en-us/azure/role-based-access-control/custom-roles.

First, I need to see what actions are available to pack into the custom role, so I run the following command in Azure PowerShell and get these results.

Get-AzProviderOperation "Microsoft.Network/virtualNetworks/*" | FT OperationName, Operation, Description -AutoSize

Great, now I can see what is available. It looks like the actions that I need are subnets/join/action and subnets/joinViaServiceEndpoint/action. Based on the descriptions, these two will essentially give an "operator" role on the vnet: the assignee will be able to use the subnet(s) but not modify them.

Next, you can use one of the template references in the Microsoft Docs link (https://docs.microsoft.com/en-us/azure/role-based-access-control/custom-roles-powershell#create-a-custom-role-with-json-template) and modify the actions.

{
  "Name": "Custom – Network Operator Role",
  "Id": null,
  "IsCustom": true,
  "Description": "Allows for read access to Azure network and join actions for service endpoints and subnets.",
  "Actions": [
    "Microsoft.Network/virtualNetworks/subnets/join/action",
    "Microsoft.Network/virtualNetworks/subnets/joinViaServiceEndpoint/action",
    "Microsoft.Network/virtualNetworks/read"
  ],
  "NotActions": [],
  "AssignableScopes": [
    "/subscriptions/00000000-0000-0000-0000-000000000000",
    "/subscriptions/11111111-1111-1111-1111-111111111111"
  ]
}

After modifying the assignable subscription IDs, save that as a .json file and use it in the following command to import the custom role.

New-AzRoleDefinition -InputFile "C:\FileFolderLocationPath\CustomNetworkOperatorRole.json"

After a portal refresh you get the custom role available as an IAM role assignment.
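Assigning it from PowerShell works like any other role assignment; here's a quick sketch with a made-up user and a vnet-level scope:

New-AzRoleAssignment -SignInName "dev.user@contoso.com" `
    -RoleDefinitionName "Custom – Network Operator Role" `
    -Scope "/subscriptions/<subscription-id>/resourceGroups/<network-rg>/providers/Microsoft.Network/virtualNetworks/<vnet-name>"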

There you go, after assigning this role the user was able to create VMs and attach them to the Virtual Network while still leaving control of the Network configuration to the Networking Team.

————————————————————————-

EDIT 6/9/20: Now that custom roles are available in the portal I’ve added how to create this role that way below. 

Using the documentation on how to create custom IAM roles in the Azure portal you can clone any role, but I’m just going to use the “Network Contributor” role to clone. 

I am going to just "start from scratch", though you can easily use this page to modify the permission set of a role that already exists. Additionally, if you have the JSON (like we generated before with PowerShell) you can just upload that here.

Now go ahead and add new permissions by going to the "Microsoft Network" category and selecting the same permissions we wrote in the JSON and used with the PowerShell command above.

After adding those the next page will show you the permission that you’ve added, let’s double check those and make sure we have what we need.

Looks good, now we can set a scope where this role can be applied. I have this one scoped to the whole subscription, but you can use the interface to easily change that on this page.

Alright, that’s about it – let’s take a quick look at the summary pages. You can see that the portal actually generates the JSON, which is nice if you want to assign roles using Azure Policy or any other automation tooling.

Nice! Now we can see that users with this role can use the above actions, which will allow them to interact with and use a shared, hub-style virtual network without allowing full contributor access.

I hope I’ve made your day at least a little bit easier!

If you have any questions or suggestions for future blog posts feel free to comment below, or reach out to me via email, twitter, or LinkedIn.

Thanks!

Isolated Virtual Machines in Azure


In a recent design session, I had someone mention how they wish that Azure had dedicated and/or isolated IaaS VMs like some of the other cloud providers. They were shocked when I said – they do! Azure has a number of isolated VM types that you can provision and leverage just the same as any other IaaS VM.

The link to Microsoft documentation below states that “Azure Compute offers virtual machine sizes that are Isolated to a specific hardware type and dedicated to a single customer. These virtual machine sizes are best suited for workloads that require a high degree of isolation from other customers for workloads involving elements like compliance and regulatory requirements.”

https://docs.microsoft.com/en-us/azure/security/azure-isolation#isolated-virtual-machine-sizes

Yup, you read that right, dedicated hardware. This means that using one of the isolated VM types (listed below) will guarantee that your virtual machine will be the only one running on that server instance (as of Nov. 1, 2018).

  • Standard_E64is_v3
  • Standard_E64i_v3
  • Standard_M128ms
  • Standard_GS5
  • Standard_G5
  • Standard_DS15_v2
  • Standard_D15_v2
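Deploying one of these sizes works the same as deploying any other VM; here's a minimal Az PowerShell sketch (the resource group, VM name, and image alias are placeholders I'm using purely for illustration):

New-AzVM -ResourceGroupName "isolated-rg" -Name "isolated-vm01" -Location "westus" `
    -Size "Standard_DS15_v2" -Image "Win2016Datacenter" -Credential (Get-Credential)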

 

If VM isolation is what you’re after, you can also consider Nested Virtualization in Azure (link below). A few VM types in Azure support Nested Hyper-V (not suitable for production, mind you) and Hyper-V Containers using Docker.

https://azure.microsoft.com/en-us/blog/nested-virtualization-in-azure/

https://docs.microsoft.com/en-us/virtualization/windowscontainers/manage-containers/hyperv-container

 

Lastly, there is a new feature that recently went GA that's sort of in line with VM isolation – Confidential Computing. While this isn't VM isolation, it is an isolated enclave for executing code in TEEs (Trusted Execution Environments) using Intel's SGX capabilities. This allows code to be executed in an environment that, while it's being executed, cannot be accessed by any source outside of the enclave (see diagrams in the links below).

https://azure.microsoft.com/en-us/blog/introducing-azure-confidential-computing/

https://azure.microsoft.com/en-us/blog/azure-confidential-computing/

 

Using physical isolation techniques in the cloud can be very different than the way we’re used to doing it on-prem. Hopefully, though, this post and its associated links will get you thinking about different ways you can design for isolation in Azure. Feel free to comment, or reach out on Twitter @MattHansen0.

Thanks, and I hope I’ve made your day at least a little bit easier.

Azure Point-to-Site VPN with RADIUS Authentication


For the money, it's hard to beat the Azure VPN Gateway. Until recently though, Point-to-Site VPNs were a bit clunky because they needed mutual certificate authentication. It wasn't bad, but it certainly wasn't good. Thankfully, Microsoft now allows RADIUS-backed authentication. This post covers how to implement said configuration.

 

To start off, here is my environment information I’m using to setup this configuration.

Virtual Network: “raidus-vnet”

Virtual Network Address Space: 10.1.0.0/24

Virtual Network VM Subnet: 10.1.0.0/28

Virtual Network Gateway Subnet: 10.1.0.16/28

VPN Gateway SKU: VpnGw1

VPN Client Address Pool: 172.28.10.0/24

Domain Controller/NPS Server Static IP: 10.1.0.10


Virtual Network (VNET) Setup:

You most likely already have a VNET where you will be configuring this setup, but if you don't, you need to create one with two subnets: one subnet for infrastructure, and one "Gateway Subnet". The Gateway Subnet is required, and will be used automatically, when you configure the VPN Gateway.

 

VPN Gateway Setup:

The Azure VPN Gateway is just about as easy as it gets to configure and to manage (sometimes to a fault). The only caveat you need to be aware of in this scenario is that RADIUS Point-to-Site authentication is only available on the "VpnGw1" SKU and above. You'll then need to choose the vnet where the VPN Gateway will be deployed, and create a Public IP Address resource.

 

This will take anywhere from 20-45 minutes to provision, as noted. While that's running, you can provision your NPS (Network Policy Server) VM. This being a test environment, I provisioned a VM to be both the domain controller and the NPS box. Make sure to set a static IP on the NPS box's NIC in Azure; you'll need a static IP for your VPN configuration. I used 10.1.0.10.

Once that completes, you will need to configure the VPN Gateway's Point-to-Site configuration. Choose "RADIUS authentication", enter the static IP of the soon-to-be NPS server, and set a Server Secret. This being a test environment, my secret is obviously not as secure as I hope yours would be.

 

Configure NPS:

Now, go back into that VM that was created earlier and install the NPS role.
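If you'd rather skip the Server Manager wizard, the role can be installed from PowerShell on that box; a one-liner sketch:

Install-WindowsFeature NPAS -IncludeManagementTools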

 

After it's installed, you need to create a Network Policy with a conditional access clause (I used a group in AD) and tell it what security type you want to allow.


Next, you'll need to create a RADIUS client entry for the gateway. Here you need the IP of the VPN Gateway you created, and the shared secret. Here is the interesting bit: you can't view the IP of your VPN Gateway in the Gateway Subnet. If you look at "Connected Devices" in the VNET, the VPN Gateway doesn't show any IP. I know that the VPN Gateway is deployed (behind the scenes) as an H/A pair, but I would assume they're using a floating IP that they could surface. Anyway, there is no real way to find it – but it looks like (after testing with a dozen different deployments) it uses the 4th available IP in the subnet. This subnet being 10.1.0.16/28, the 4th IP is 10.1.0.21. This is the IP that goes in the address field of the RADIUS client.


Next, I configure NPS Accounting. You don't have to do this, but I think it helps for the sake of connection logging and for troubleshooting. You can log a few different ways; I chose here just to use a text file in a subfolder I created called "AzureVPN".


Generate VPN Client Package:

Now that everything is set, you need to generate a VPN Client Package to distribute to your users.
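This can also be generated from Az PowerShell; a sketch, assuming a gateway named "radius-vpn-gw" in a resource group named "radius-rg" (it returns a URL from which the client package can be downloaded):

New-AzVpnClientConfiguration -ResourceGroupName "radius-rg" -Name "radius-vpn-gw" -AuthenticationMethod "EapMSChapv2"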

 

After it is installed, you can see the VPN connection in the VPN list and users can log on using their domain credentials.


After logging in, we can go back and look at the accounting log, which shows us the successful authentication of that user.


There we go, connecting to an Azure VPN Gateway with RADIUS authentication using domain credentials. I think we can all thank Microsoft for this one, and not having to do cert management anymore.

I hope I’ve made your day at least a little bit easier.

Configure Azure Blob Archive Storage


Azure storage is great. Good thought to open on, right? Of course! This year Azure graced us with the ability to preview the new Azure Archive Storage tier. Obviously this is enticing, especially at its current $0.0018/GB price point. For more cost information on Azure Archive Storage you can visit the link below.

https://azure.microsoft.com/en-us/pricing/details/storage/blobs/

Now this is nice, but I found myself a bit perplexed. How do I configure a storage account as an "archive" storage account? As it turns out, you don't. Let's walk through configuring the archive blob tier.

First, obviously you need a storage account. The Archive access tier is currently available on either the "Blob" or "General Purpose v2" account kind. General Purpose v2 will work the same way; you'll just also have the ability to host non-blob storage (File, Queue, Table). I'm going to choose Blob for this purpose.

Account kind selected, I'll create the storage account. You can choose whatever Access Tier you'd like; that's the access tier all of your objects will inherit by default. I chose "Cool" here because you will have to upload data before you can archive it, and the Cool tier saves money initially.

Alright storage account created, let’s go open it up.

If you go to the "Configuration" tab you can see the default access tier you selected during creation. Here is where I was a bit confused: why don't I have the ability to select Archive? You'll see in a bit.

Go ahead and create a container, and upload a file. I created a container with the very complex name of “container1”, and have uploaded my very important image file that I want to archive.

You can see above that the inherited access tier is "Cool", which was set at the storage account level. If you go into the blob properties, you can see at the bottom there is an option to select the access tier for that specific file. Ah! There it is, Archive!

I'll go ahead and select Archive, and see the following message.

Please be cognizant of this; they aren't kidding when they say that rehydration can take a long time. We can now refresh and see that the file is set to an access tier of "Archive".

Fantastic, we've archived the file! Now here is where you have to be careful: while the file is in the Archive tier, the only data you're able to access is the file metadata. The file itself is NOT ACCESSIBLE until rehydrated. If you try to download the file while archived, you'll see the following message.

Archive storage is designed to be very long-term storage that you don't need to access immediately, thus the low price point. If you do need to access your file, you simply go back to that object and change its access tier to either Cool or Hot. It will then go through the "rehydration" process to move the file back into an accessible access tier.
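The same archive/rehydrate flip can be done from PowerShell; a sketch, assuming the Az.Storage module and using placeholder resource group, storage account, and blob names:

$ctx  = (Get-AzStorageAccount -ResourceGroupName "archive-rg" -Name "archivedemo001").Context
$blob = Get-AzStorageBlob -Container "container1" -Blob "very-important-image.png" -Context $ctx

$blob.ICloudBlob.SetStandardBlobTier("Archive")   # send it to Archive
$blob.ICloudBlob.SetStandardBlobTier("Cool")      # later: start rehydration back to Cool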

I urge you to take that message seriously; in this example it took about 8 hours for my 48 KB image file to be rehydrated. They say it takes longer for larger files, and I'm going to test that next. In the meantime, assume it will take quite some time for the file to be accessible again. After which time, WHEW! I recovered my very, very important file.

There you go, how to configure Azure Blob Archive Storage.

I hope I’ve made your day at least a little bit easier.

DFSR Failure After VM Restore (DFSR Error 2104)


I have an environment that heavily leverages DFS, and recently one of the replication member servers had to be restored from a Veeam backup. Typically Veeam is great and doesn't cause any issues; in this case though, DFS completely broke. I got a TON of SCOM alerts, and the event log was littered with them as well.

The DFS Replication service failed to recover from an internal database error on volume D:. Replication has been stopped for all replicated folders on this volume. Additional Information: Error: 9214 (Internal database error (-1605)) Volume: D: xxxxxx Database: D:\System Volume Information\DFSR

 

Event 2212, DFSR 

The DFS Replication service has detected an unexpected shutdown on volume D:. This can occur if the service terminated abnormally (due to a power loss, for example) or an error occurred on the volume. The service has automatically initiated a recovery process. The service will rebuild the database if it determines it cannot reliably recover. No user action is required.

Additional Information:
Volume: D:
GUID: xxxxxxxxxxxxxxx

 

Error 2104, DFSR 

The DFS Replication service failed to recover from an internal database error on volume D:. Replication has been stopped for all replicated folders on this volume.

Additional Information:
Error: 9214 (Internal database error (-1605))
Volume: xxxxxxxxxxxxxxxxxxxxxxxx
Database: D:\System Volume Information\DFSR

 

The important error here is 2104, noting the database issue. There are multiple topics out there that talk about this, but they all end up linking back to this support article.

 https://support.microsoft.com/en-us/help/2517913/distributed-file-system-replication-dfsr-no-longer-replicates-files-af

 

In the end, essentially the database that is used by DFS replication becomes corrupted. It is a system-generated database, so all you need to do is stop the replication service, delete the database, and start the replication service back up. Easy? No. There are a myriad of issues with doing this, mostly because the database is hosted in "System Volume Information" on the volume that hosts the DFS Root folder, or wherever you've placed the replication targets. Luckily for you, I hit my head against a wall for hours on end and figured out the solution.

Step 1: Stop DFSR service (stop-service DFSR)

Step 2: Grant yourself visibility to the “System Volume Information” folder. This entails flipping the radio button in explorer to “view hidden files”, as well as unchecking the box for “hide all system protected folders”.

Step 3: Grant yourself proper permissions to the "System Volume Information" folder. Go to the root of the volume that holds the replication targets, e.g. D:\. You will now see a grayed-out folder with a lock on it called "System Volume Information". Go through the normal rigmarole to grant "Administrators" full control over the folder. You should then be able to open it up; before, it would have said "Access Denied".
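If you'd rather not fight Explorer, taking ownership and granting access can also be done from an elevated prompt; a sketch (adjust the drive letter to your volume):

takeown /F "D:\System Volume Information" /A
icacls "D:\System Volume Information" /grant "Administrators:(OI)(CI)F"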

Step 4: Delete or rename the "DFSR" folder inside "System Volume Information". Unfortunately, that's not easy. Based on what I saw, it was because the file names in the database folder exceeded the path length limitations of Explorer ( https://thetechl33t.com/2014/04/22/varying-file-name-too-long-issues ). Here the easiest thing to use is the wonderful Robocopy /MIR! Create an empty folder in the root of the drive and copy it into the DFSR folder using the /mir flag in robocopy, as shown below. This will "mirror" the empty source folder into the destination folder, emptying it out.
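A sketch of that mirror trick, assuming D: is the affected volume and using a throwaway empty folder:

New-Item -ItemType Directory -Path "D:\Empty" | Out-Null
robocopy "D:\Empty" "D:\System Volume Information\DFSR" /MIR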

Now the DFSR folder should be completely empty.

 

Step 5: Start the DFS Replication service (start-service DFSR)

Step 6: Check the event logs for validation.

Event 4102, DFSR 

The DFS Replication service initialized the replicated folder at local path D:\xxxxxx and is waiting to perform initial replication. The replicated folder will remain in this state until it has received replicated data, directly or indirectly, from the designated primary member.

Additional Information:
Replicated Folder Name: XXXXXXX
Replicated Folder ID: XXXXXXXXXXXXXXXXXXXX 
Replication Group Name: XXXXX\XXXX 
Replication Group ID: XXXXXXXXXX
Member ID: XXXXXXXXXXXXX

 

Event 4412, DFSR

The DFS Replication service detected that a file was changed on multiple servers. A conflict resolution algorithm was used to determine the winning file. The losing file was moved to the Conflict and Deleted folder.

Additional Information:
Original File Path: D:\XXXXXXX
New Name in Conflict Folder: XXXXXXXXXXX
Replicated Folder Root: D:\XXXXXXXX
File ID: XXXXXXXXXXXXXXXX
Replicated Folder Name: XXXXXXXXXXXX
Replicated Folder ID: XXXXXXXXXXXXXXX
Replication Group Name: XXXXXXXXXXXXXX
Replication Group ID: XXXXXXXXXXXXXXXXX
Member ID: XXXXXXXXXXXXXXXXXXXX

 

 

There you go! You’ve done it! Microsoft said you had to contact their support to fix it, but you crafty devil – you’ve gone and done it yourself.

I hope I’ve made your day at least a little bit easier.

Configure Server Core for IIS Remote Management


Everyone's familiar by now with the reasons why you want to use Server Core Edition for things like IIS, DNS, etc. In a recent project I found an interesting scenario where my GUI management server couldn't connect remotely to the IIS instance that I was running on Server 2016 Core. There are a few oddities, so I decided to blog about it – let's get going.

TL;DR steps are as follows:

  • Install IIS Web Role
  • Install IIS Management Feature
  • Change Registry Setting for Remote Management
  • Set Management Service to start automatically
  • Connect
  • Work
  • Get a promotion
  • Get a raise
  • Get a boat

Maybe not the boat, but that’s the dream right? Anyways, here’s the nitty gritty.

 

First, we need to see if IIS is installed. Presumably, since you're already trying to figure out how to connect to it, you've already done this. It's good to check anyway, just to be sure. Note that Server Core first drops you into a cmd shell. It's 2017 and everything is done in PowerShell now, so go ahead and launch yourself into a PS shell. Then we'll check if the feature is installed by running the following command:

Get-WindowsFeature | Where-Object {$_.DisplayName -eq "Web Server (IIS)"}

 

Here we can see that IIS is in fact not installed, so let’s go ahead and fix that. While we install IIS, it’s also important to install the IIS Remote Management Feature as well. Otherwise, there will be no connecting remotely to the instance. I’m installing both on the same line, using the following command.

Install-WindowsFeature Web-Server, Web-Mgmt-Service

 

It shouldn’t take too long. When it’s done you’ll get your output showing it’s complete.

 

Now that everything is installed, there is actually a registry value that needs to be modified. RegEdit can be launched from the command line in Server Core, and you'll need to set the following value to "1" rather than the default of "0".

HKLM\SOFTWARE\Microsoft\WebManagement\Server\EnableRemoteManagement
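If you'd rather not open RegEdit at all, the same value can be set from PowerShell; a quick sketch:

Set-ItemProperty -Path "HKLM:\SOFTWARE\Microsoft\WebManagement\Server" -Name "EnableRemoteManagement" -Value 1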

 

Right, now we've got the settings in place. Unfortunately, things still don't work. That's because the IIS Remote Management Service is disabled by default. Let's go ahead and fix that by setting the service startup type to "automatic", starting the service, and querying its state to confirm. We will do that by using the following three commands.

Set-Service WMSVC -StartupType "Automatic"

Start-Service WMSVC

Get-Service WMSVC

 

The status is now running, so we should be good to go. Let’s give it a shot by going into the GUI management server, launching the IIS console, and connecting to the server core box.

 

It will prompt you for the server name, and a user/password combo. After which, everything should be all set!

 

 

So there you have it, we’ve configured all the required settings to remotely manage IIS on server core!

I hope this makes your day at least a little bit easier.

Thanks,