Azure Point-to-Site VPN with RADIUS Authentication


For the money, it’s hard to beat the Azure VPN Gateway. Until recently, though, Point-to-Site VPNs were a bit clunky because they required mutual certificate authentication. It wasn’t bad, but it certainly wasn’t good. Thankfully, Microsoft now allows RADIUS-backed authentication. This post shows how to implement that configuration.


To start off, here is the environment I’m using to set up this configuration:

Virtual Network: “raidus-vnet”

Virtual Network Address Space: 10.1.0.0/24

Virtual Network VM Subnet: 10.1.0.0/28

Virtual Network Gateway Subnet: 10.1.0.16/28

VPN Gateway SKU: VpnGw1

VPN Client Address Pool: 172.28.10.0/24

Domain Controller/NPS Server Static IP: 10.1.0.10


Virtual Network (VNET) Setup:

You most likely already have a VNET where you will be configuring this setup, but if you don’t, you need to create one with two subnets: one for infrastructure, and one “Gateway Subnet”. The Gateway Subnet is required, and is used automatically when you configure the VPN Gateway.
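If you’d rather script it than click through the portal, here’s a minimal sketch using the Az PowerShell module. The resource group “radius-rg” and the location are placeholders; note that the gateway subnet must be named exactly “GatewaySubnet”.

# Assumes an existing resource group; "radius-rg" and "eastus" are placeholders
$vmSubnet = New-AzVirtualNetworkSubnetConfig -Name "vm-subnet" -AddressPrefix "10.1.0.0/28"
$gwSubnet = New-AzVirtualNetworkSubnetConfig -Name "GatewaySubnet" -AddressPrefix "10.1.0.16/28"
New-AzVirtualNetwork -Name "raidus-vnet" -ResourceGroupName "radius-rg" -Location "eastus" -AddressPrefix "10.1.0.0/24" -Subnet $vmSubnet, $gwSubnet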


VPN Gateway Setup:

The Azure VPN Gateway is just about as easy as it gets to configure and manage (sometimes to a fault). The only caveat to be aware of in this scenario is that RADIUS Point-to-Site authentication is only available on the “VpnGw1” SKU and above. You’ll then need to choose the VNET you just configured, and create a Public IP Address resource for the gateway.
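For reference, a rough PowerShell equivalent of those portal steps; the gateway and public IP names are placeholders, and this kicks off the long-running deployment:

# The gateway needs its own public IP; dynamic allocation is fine here
$pip = New-AzPublicIpAddress -Name "radius-vpn-pip" -ResourceGroupName "radius-rg" -Location "eastus" -AllocationMethod Dynamic
$vnet = Get-AzVirtualNetwork -Name "raidus-vnet" -ResourceGroupName "radius-rg"
$subnet = Get-AzVirtualNetworkSubnetConfig -Name "GatewaySubnet" -VirtualNetwork $vnet
$ipconf = New-AzVirtualNetworkGatewayIpConfig -Name "gwipconfig" -SubnetId $subnet.Id -PublicIpAddressId $pip.Id
# RADIUS Point-to-Site requires the VpnGw1 SKU or higher
New-AzVirtualNetworkGateway -Name "radius-vpn-gw" -ResourceGroupName "radius-rg" -Location "eastus" -IpConfigurations $ipconf -GatewayType Vpn -VpnType RouteBased -GatewaySku VpnGw1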


This will take anywhere from 20-45 minutes to provision, as noted. While that’s running, you can provision your NPS (Network Policy Server) VM. This being a test environment, I provisioned a single VM to be both the domain controller and the NPS box. Make sure to set a static IP on the NPS box’s NIC in Azure; you’ll need a static address for your VPN configuration. I used 10.1.0.10.
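If you’d rather set the static IP from PowerShell than from the portal, something like this should do it; the NIC name “nps-vm-nic” is a placeholder:

$nic = Get-AzNetworkInterface -Name "nps-vm-nic" -ResourceGroupName "radius-rg"
$nic.IpConfigurations[0].PrivateIpAllocationMethod = "Static"
$nic.IpConfigurations[0].PrivateIpAddress = "10.1.0.10"
Set-AzNetworkInterface -NetworkInterface $nic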

After the gateway finishes provisioning, you will need to configure its Point-to-Site configuration. Choose “RADIUS authentication”, enter the static IP of the will-be NPS server, and set a Server Secret. This being a test environment, my secret is obviously not as secure as I hope yours would be.
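The same Point-to-Site settings can be applied in PowerShell. A sketch, assuming the gateway name from the earlier snippet; use your real secret, of course:

$gw = Get-AzVirtualNetworkGateway -Name "radius-vpn-gw" -ResourceGroupName "radius-rg"
$secret = ConvertTo-SecureString "YourServerSecret" -AsPlainText -Force
Set-AzVirtualNetworkGateway -VirtualNetworkGateway $gw -VpnClientAddressPool "172.28.10.0/24" -RadiusServerAddress "10.1.0.10" -RadiusServerSecret $secret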


Configure NPS:

Now, go back into the VM you created earlier and install the NPS role.
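You can do that through Server Manager, or with a single line of PowerShell:

# NPS is part of the Network Policy and Access Services (NPAS) role
Install-WindowsFeature NPAS -IncludeManagementTools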


After it’s installed, you need to create a Network Policy with a conditional access clause (I used a group in AD) and tell it which security types you want to allow.


Next, you’ll need to create a RADIUS Client entry for the gateway. Here you need the internal IP of the VPN Gateway you created, and the shared secret. Here is the interesting bit: you can’t view the IP of your VPN Gateway in the Gateway Subnet. If you look at “Connected Devices” in the VNET, the VPN Gateway doesn’t show any IP. I know that the VPN Gateway is deployed (behind the scenes) as an H/A pair, but I would assume they’re using a floating IP that they could surface. Anyway, there is no real way to find it, but after testing with a dozen different deployments, it consistently lands on the same address near the start of the subnet’s usable range. With this Gateway Subnet being 10.1.0.16/28, that IP is 10.1.0.21. This is the IP that goes in the address field of the RADIUS Client.
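If you prefer PowerShell over the NPS console, the built-in NPS module can register the RADIUS client; the entry name is arbitrary, and the secret must match the one set on the gateway:

Import-Module NPS
New-NpsRadiusClient -Name "AzureVpnGateway" -Address "10.1.0.21" -SharedSecret "YourServerSecret"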


Next, I configured NPS Accounting. You don’t have to do this, but I think it helps for connection logging and troubleshooting. You can log a few different ways; I chose here to just use a text file written to a subfolder I created called “AzureVPN”.


Generate VPN Client Package:

Now that everything is set, you need to generate a VPN Client Package to distribute to your users.
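You can download the package from the portal, or generate it with PowerShell. A sketch, using the gateway name assumed earlier; EapMschapv2 is the authentication method used for RADIUS with domain credentials:

$clientConfig = New-AzVpnClientConfiguration -ResourceGroupName "radius-rg" -Name "radius-vpn-gw" -AuthenticationMethod "EapMschapv2"
# The result contains a short-lived SAS URL pointing at the client package
Invoke-WebRequest -Uri $clientConfig.VpnProfileSASUrl -OutFile "$env:TEMP\vpnclient.zip"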


After it is installed, you can see the VPN connection in the VPN list, and users can log on using their domain credentials.


After logging in, we can go back and look at the accounting log, which shows us the successful authentication of that user.


There we go: connecting to an Azure VPN Gateway with RADIUS authentication using domain credentials. I think we can all thank Microsoft for this one, and for not having to do cert management anymore.

I hope I’ve made your day at least a little bit easier.


Configure Azure Blob Archive Storage


Azure storage is great. Good thought to open on, right? Of course! This year Azure graced us with the ability to preview the new Azure Archive Storage. Obviously this is enticing, especially at its (current) $0.0018/GB price point. For more cost information on Azure Archive Storage you can visit the link below.

https://azure.microsoft.com/en-us/pricing/details/storage/blobs/


Now this is nice, but I found myself a bit perplexed. How do I configure a storage account as an “archive” storage account? As it turns out, you don’t. Let’s walk through configuring the Archive blob tier.

First, obviously, you need a storage account. The Archive access tier is currently available on either the “Blob” or “General Purpose v2” account kind. General Purpose v2 works the same way; you’ll just also have the ability to host non-blob storage (File, Queue, Table). I’m going to choose Blob for this purpose.


Account kind selected, I’ll create the storage account. You can choose whatever Access Tier you’d like; that’s the access tier all of your objects will inherit by default. I chose “Cool” here because you have to upload data before you can archive it, and the Cool tier saves money initially.
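For the script-inclined, the portal steps above map to a single cmdlet. The resource group and account name are placeholders (storage account names must be globally unique):

New-AzStorageAccount -ResourceGroupName "storage-rg" -Name "archivedemo01" -Location "eastus" -SkuName Standard_LRS -Kind BlobStorage -AccessTier Cool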


Alright, storage account created. Let’s go open it up.


If you go to the “Configuration” tab, you can see the default access tier you selected during creation. Here is where I was a bit confused: why don’t I have the ability to select Archive? You’ll see in a bit.


Go ahead and create a container, and upload a file. I created a container with the very complex name of “container1”, and have uploaded my very important image file that I want to archive.
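The same container and upload in PowerShell, assuming the placeholder account from the previous snippet:

$ctx = (Get-AzStorageAccount -ResourceGroupName "storage-rg" -Name "archivedemo01").Context
New-AzStorageContainer -Name "container1" -Context $ctx
# The blob name defaults to the local file name
Set-AzStorageBlobContent -File ".\important-image.jpg" -Container "container1" -Context $ctx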


You can see above that the inherited access tier is “Cool”, which was set at the storage account level. If you go into the blob properties, you can see at the bottom there is an option to select the access tier for that specific file. Ah! There it is, Archive!


I’ll go ahead and select Archive, and the portal warns that an archived blob is inaccessible until it’s rehydrated, which can take a long time.
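Per-blob tiering is scriptable too. A sketch using the storage context from the earlier snippet; the blob name is a placeholder:

$blob = Get-AzStorageBlob -Container "container1" -Blob "important-image.jpg" -Context $ctx
$blob.ICloudBlob.SetStandardBlobTier("Archive")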


Please be cognizant of this; they aren’t kidding when they say that rehydration can take a long time. We can now refresh and see that the file is set to the “Archive” access tier.


Fantastic, we’ve archived the file! Now here is where you have to be careful: while the file is in Archive, the only data you’re able to access is the blob’s metadata. The file itself is NOT ACCESSIBLE until rehydrated. If you try to download the file while it’s archived, you’ll get an error.


Archive storage is designed for very long-term storage that you don’t need to access immediately, hence the low price point. If you do need to access your file, you simply go back to that object and change its access tier to either Cool or Hot. It will then go through the “rehydration” process to move the file back into an accessible access tier.
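In script form, rehydration is the same tier call with an online tier, again using the placeholder names from above:

$blob = Get-AzStorageBlob -Container "container1" -Blob "important-image.jpg" -Context $ctx
# Kicks off rehydration; expect hours, not minutes
$blob.ICloudBlob.SetStandardBlobTier("Cool")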


I urge you to take that message seriously; in this example it took about 8 hours for my 48 KB image file to be rehydrated. They say it takes longer for larger files, and I’m going to test that next. In the meantime, assume it will take quite some time for your data to be accessible again. After which time, WHEW! I recovered my very, very important file.


There you go, how to configure Azure Blob Archive Storage.

I hope I’ve made your day at least a little bit easier.

Changing Azure Recovery Services Vault to LRS Storage


Back in the classic portal with Backup services, this was an easy fix: simply change the storage replication type in the vault’s settings. I’ve recently started moving my workloads to Recovery Services vaults in ARM, and noticed something peculiar. By default, the storage replication type of the vault is GRS.


If your needs require geographically redundant storage, then that’s perfectly fine. I, however, don’t have such needs, and trust in Microsoft’s ability to keep data generally available in an LRS replication topology. It should be just like it was in classic, at least as an option, right? Strangely, the option to change the replication type in the vault’s storage configuration is grayed out.


Odd, right? I thought so, until I found the relevant note in Microsoft’s documentation.


Okay, well, it’s not optimal, but it looks like I just need to remove the backup data from the vault to change the storage replication type, right? Well, I gave that a shot and no go. I had the same issue; the option was still grayed out.

I ultimately had to delete the vault completely and create a new Recovery Services vault. Once it is freshly created, you can change the replication type.
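If you script your vaults, the redundancy change right after creation looks roughly like this; the vault and resource group names are placeholders:

$vault = Get-AzRecoveryServicesVault -Name "lrs-vault" -ResourceGroupName "backup-rg"
# This only works before any items are protected in the vault
Set-AzRecoveryServicesBackupProperty -Vault $vault -BackupStorageRedundancy LocallyRedundant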


Ah, finally! Then register the VM(s), run some backup jobs, and voila! Confirmation that the vault is using LRS storage.


I hope this makes your day at least a little bit easier.

Thanks,