
Azure Point-to-Site VPN with RADIUS Authentication


For the money, it’s hard to beat the Azure VPN Gateway. Until recently, though, Point-to-Site VPNs were a bit clunky because they required mutual certificate authentication. It wasn’t bad, but it certainly wasn’t good. Thankfully, Microsoft now allows RADIUS-backed authentication. This post walks through how to implement that configuration.

 

To start off, here is the environment I’m using to set up this configuration.

Virtual Network: “raidus-vnet”

Virtual Network Address Space: 10.1.0.0/24

Virtual Network VM Subnet: 10.1.0.0/28

Virtual Network Gateway Subnet: 10.1.0.16/28

VPN Gateway SKU: VpnGw1

VPN Client Address Pool: 172.28.10.0/24

Domain Controller/NPS Server Static IP: 10.1.0.10

 

 

Virtual Network (VNET) Setup:

You most likely already have a VNET where you will be configuring this setup, but if you don’t, you need to create one with two subnets: one subnet for infrastructure, and one “Gateway Subnet”. The Gateway Subnet is required and will be used automatically when you configure the VPN Gateway.
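If you’re building the VNET from scratch, here is a minimal sketch using the Az PowerShell module and the address ranges from the environment above (the resource group name and location are my assumptions, not from the original setup):

# Minimal sketch, assuming an existing resource group named "radius-rg" in East US
$vmSubnet = New-AzVirtualNetworkSubnetConfig -Name "VMSubnet" -AddressPrefix "10.1.0.0/28"
$gwSubnet = New-AzVirtualNetworkSubnetConfig -Name "GatewaySubnet" -AddressPrefix "10.1.0.16/28"
New-AzVirtualNetwork -Name "raidus-vnet" -ResourceGroupName "radius-rg" -Location "eastus" -AddressPrefix "10.1.0.0/24" -Subnet $vmSubnet, $gwSubnet

Note that the gateway subnet must literally be named “GatewaySubnet” for the VPN Gateway to use it.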

 

VPN Gateway Setup:

The Azure VPN Gateway is just about as easy as it gets to configure and to manage (sometimes to a fault). The only caveat you need to be aware of in this scenario is that RADIUS Point-to-Site authentication is only available on the VpnGw1 SKU and above. You’ll then need to choose the VNET you created above (the one containing the Gateway Subnet), and create a Public IP Address resource for the gateway.
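For reference, a hedged sketch of creating the gateway with the Az PowerShell module follows; the resource names and location are assumptions, and provisioning still takes the same 20-45 minutes either way.

# Sketch: route-based VpnGw1 gateway in the GatewaySubnet (resource names and location are assumed)
$vnet = Get-AzVirtualNetwork -Name "raidus-vnet" -ResourceGroupName "radius-rg"
$subnet = Get-AzVirtualNetworkSubnetConfig -Name "GatewaySubnet" -VirtualNetwork $vnet
$pip = New-AzPublicIpAddress -Name "radius-vpn-pip" -ResourceGroupName "radius-rg" -Location "eastus" -AllocationMethod Dynamic
$ipconf = New-AzVirtualNetworkGatewayIpConfig -Name "gwipconfig" -SubnetId $subnet.Id -PublicIpAddressId $pip.Id
New-AzVirtualNetworkGateway -Name "radius-vpn-gw" -ResourceGroupName "radius-rg" -Location "eastus" -IpConfigurations $ipconf -GatewayType Vpn -VpnType RouteBased -GatewaySku VpnGw1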

 

This will take anywhere from 20-45 minutes to provision, as noted. While that’s running, you can provision your NPS (Network Policy Server) VM. This being a test environment, I provisioned a single VM to be both the domain controller and the NPS box. Make sure to set a static private IP on the NPS box’s NIC in Azure; your VPN configuration will need a fixed address. I used 10.1.0.10.
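If you want to pin that address from PowerShell rather than the portal, a quick sketch looks like this (the NIC and resource group names are assumptions):

# Sketch: set the NPS VM's NIC to a static private IP of 10.1.0.10 (NIC/resource group names are assumed)
$nic = Get-AzNetworkInterface -Name "nps-vm-nic" -ResourceGroupName "radius-rg"
$nic.IpConfigurations[0].PrivateIpAllocationMethod = "Static"
$nic.IpConfigurations[0].PrivateIpAddress = "10.1.0.10"
Set-AzNetworkInterface -NetworkInterface $nic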

Once provisioning completes, you will need to configure the VPN Gateway’s Point-to-Site configuration. Choose “RADIUS authentication”, enter the static IP of the soon-to-be NPS server, and set a server secret. This being a test environment, my secret is obviously not as secure as I hope yours would be.
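The same Point-to-Site settings can also be applied with Az PowerShell; a hedged sketch (the gateway name and secret are placeholders) looks like this:

# Sketch: point the gateway's P2S config at the RADIUS server (secret is inline only because this is a lab)
$gw = Get-AzVirtualNetworkGateway -Name "radius-vpn-gw" -ResourceGroupName "radius-rg"
$secret = ConvertTo-SecureString "MySharedSecret" -AsPlainText -Force
Set-AzVirtualNetworkGateway -VirtualNetworkGateway $gw -VpnClientAddressPool "172.28.10.0/24" -RadiusServerAddress "10.1.0.10" -RadiusServerSecret $secret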

 

Configure NPS:

Now, go back into the VM you created earlier and install the NPS role.
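From an elevated PowerShell prompt on that VM, the role install is a one-liner:

# Installs the Network Policy and Access Services role, which includes NPS
Install-WindowsFeature NPAS -IncludeManagementTools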

 

After it’s installed, you need to create a Network Policy with a conditional access clause (I used a group in AD) and tell it what security types you want to allow.

 

 

 

 

Next, you’ll need to add the VPN Gateway as a RADIUS client. Here you need the IP of the VPN Gateway you created, and the shared secret. Here is the interesting bit: you can’t easily view the IP the VPN Gateway takes in the Gateway Subnet. If you look at “Connected devices” on the VNET, the VPN Gateway doesn’t show any IP. I know that the VPN Gateway is deployed (behind the scenes) as an H/A pair, but I would assume they’re using a floating IP that they could surface. Anyway, there is no obvious way to find it, but after testing with a dozen different deployments it looks like the gateway takes one of the first assignable addresses in the Gateway Subnet; with this subnet being 10.1.0.16/28, that was 10.1.0.21. This is the IP that goes in the Address field of the RADIUS client.
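If you’d rather script the RADIUS client than click through the NPS console, the NPS PowerShell module can register it; the client name and secret below are placeholders:

# Sketch: register the VPN Gateway's Gateway Subnet address as a RADIUS client on the NPS server
New-NpsRadiusClient -Name "AzureVPNGateway" -Address "10.1.0.21" -SharedSecret "MySharedSecret"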

 

 

Next, I configure NPS Accounting. You don’t have to do this, but I think it helps for the sake of connection logging and for troubleshooting. You can log a few different ways; I chose here just to use a text file in a subfolder I created called “AzureVPN”.

 

 

 

Generate VPN Client Package:

Now that everything is set, you need to generate a VPN Client Package to distribute to your users.
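The package can be generated from the portal’s Point-to-Site blade, or with Az PowerShell; a hedged sketch (gateway and resource group names are assumptions) returns a download URL:

# Sketch: generate the P2S client package; EapMSChapv2 is the username/password method that pairs with RADIUS
$profile = New-AzVpnClientConfiguration -ResourceGroupName "radius-rg" -Name "radius-vpn-gw" -AuthenticationMethod "EapMSChapv2"
$profile.VpnProfileSASUrl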

 

After it is installed, you can see the VPN connection in the VPN list, and users can log on using their domain credentials.

 

 

 

After logging in, we can go back and look at the accounting log, which shows us the successful authentication of that user.

 

 

There we go: connecting to an Azure VPN Gateway with RADIUS authentication using domain credentials. I think we can all thank Microsoft for this one, and for not having to do certificate management anymore.

I hope I’ve made your day at least a little bit easier.


Configure Azure Blob Archive Storage


Azure storage is great. Good thought to open on, right? Of course! This year Azure graced us with the ability to preview the new Azure Archive Storage. Obviously this is enticing, especially at its current $0.0018/GB price point. For more cost information on Azure Archive Storage you can visit the link below.

https://azure.microsoft.com/en-us/pricing/details/storage/blobs/

 

Now this is nice, but I found myself a bit perplexed. How do I configure a storage account as an “archive” storage account? As it turns out, you don’t. Let’s walk through configuring the Archive blob tier.

First, obviously you need a storage account. The Archive access tier is currently available on either “Blob” or “General Purpose v2” accounts. General Purpose v2 will work the same way; you’ll just also have the ability to host non-blob storage (File, Queue, Table). I’m going to choose Blob for this purpose.

 

Account kind selected, I’ll create the storage account. You can choose whatever Access Tier you’d like; that’s the access tier all of your objects will inherit by default. I chose “Cool” here because you will have to upload data before you can archive it, and the Cool tier saves money initially.
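For the script-minded, here’s a hedged sketch of the same thing with Az PowerShell (the account and resource group names are assumptions):

# Sketch: Blob-kind storage account with a Cool default access tier
New-AzStorageAccount -ResourceGroupName "archive-rg" -Name "archivedemostore01" -Location "eastus" -SkuName Standard_LRS -Kind BlobStorage -AccessTier Cool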

 

Alright, storage account created; let’s go open it up.

 

If you go to the “Configuration” tab you can see the default access tier you selected during creation. Here is where I was a bit confused: why don’t I have the ability to select Archive? You’ll see in a bit.

 

Go ahead and create a container, and upload a file. I created a container with the very complex name of “container1”, and have uploaded my very important image file that I want to archive.

 

You can see above that the inherited access tier is “Cool”, which was set at the storage account level. If you go into the blob properties you can see at the bottom there is an option to select the access tier for that specific file. Ah! There it is, Archive!

 

I’ll go ahead and select Archive, and see the following message.

 

 

Please be cognizant of this; they aren’t kidding when they say that rehydration can take a long time. We can now refresh and see that the file is set to an access tier of “Archive”.
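If you’d rather make the tier change from PowerShell than the portal, a hedged sketch with Az.Storage looks like this (the account key, container, and blob names are placeholders):

# Sketch: set a single blob's access tier to Archive
$ctx = New-AzStorageContext -StorageAccountName "archivedemostore01" -StorageAccountKey "<storage-account-key>"
$blob = Get-AzStorageBlob -Container "container1" -Blob "veryimportantfile.jpg" -Context $ctx
$blob.ICloudBlob.SetStandardBlobTier("Archive")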

 

Fantastic, we’ve archived the file! Now here is where you have to be careful: while the file is in Archive, the only data you’re able to access is the file metadata. The file itself is NOT ACCESSIBLE until rehydrated. If you try to download the file while it’s archived, you’ll see the following message.

 

Archive storage is designed to be very long-term storage that you don’t need to access immediately, thus the low price point. If you do need to access your file, you simply go back to that object and change its access tier to either Cool or Hot. It will then go through the “rehydration” process to move the file back into an accessible access tier.
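Scripted, rehydration is just the same tier call pointed back at Cool or Hot (reusing $blob from the earlier sketch):

# Sketch: start rehydration; the blob stays offline until the tier change completes
$blob.ICloudBlob.SetStandardBlobTier("Cool")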

 

I urge you to take that message seriously; in this example it took about 8 hours for my 48 KB image file to be rehydrated. They say it takes longer for larger files, and I’m going to test that next. In the meantime, assume it will take quite some time to be accessible again. After which time, WHEW! I recovered my very, very important file.

 

There you go, how to configure Azure Blob Archive Storage.

I hope I’ve made your day at least a little bit easier.

Changing Azure Recovery Services Vault to LRS Storage


Back in the classic portal with Backup services this was an easy fix: simply change the storage replication type setting. I’ve recently started moving my workloads to Recovery Services vaults in ARM, and noticed something peculiar. By default, the storage replication type of the vault is GRS.

 

If your needs require geographically redundant storage, then that’s perfectly fine. I, however, don’t have such needs, and trust in Microsoft’s ability to keep data generally available in an LRS replication topology. It should be an option just like it was in classic, right? Strangely, the option to change the replication type in the vault’s storage configuration is grayed out.

 

 

Odd, right? I thought so, until I found this.

 

Okay, well it’s not optimal, but it looks like I need to remove the backup data from the vault to change the storage replication type, right? Well, I gave that a shot and no go. I had the same issue; the option was still grayed out.

I ultimately had to completely delete the vault and create a new Recovery Services vault. Once it’s freshly created, before anything is protected in it, you can change the replication type.
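The same change can be made with Az PowerShell on a new, empty vault; a hedged sketch (the vault and resource group names are assumptions):

# Sketch: switch a freshly created vault's backup storage redundancy to LRS
$vault = Get-AzRecoveryServicesVault -Name "lrs-vault" -ResourceGroupName "backup-rg"
Set-AzRecoveryServicesBackupProperty -Vault $vault -BackupStorageRedundancy LocallyRedundant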

 

 

Ah, finally! Then register the VM(s), run some backup jobs and voila! Confirmation that the vault is using LRS storage.

 

I hope this makes your day at least a little bit easier.

Thanks,

How to move SCVMM VMs into a Cloud


If you’ve ever added hosts to an SCVMM instance, you’ll know that there’s seemingly no easy way to move the newly imported VMs from those hosts into SCVMM clouds. I’ve found the best way to do this is by using the SCVMM command-line interface, which unfortunately has a few quirks.

Set-SCVirtualMachine is the command you’ll need to use, with the "-Cloud" flag, as in the example below.

Set-SCVirtualMachine -VM "NewVM1" -Cloud "Cloud1"

Unfortunately, every time I’ve tried this I’ve gotten an error saying it can’t convert the value type correctly, as shown below.

[Screenshot: Set-SCVirtualMachine type conversion error]

 

For whatever reason, I’ve found that the workaround here is to set both the VM and the cloud as variables and run the command again.

$VM = Get-SCVirtualMachine -Name "NewVM1"

$Cloud = Get-SCCloud -Name "Cloud1"

Set-SCVirtualMachine -VM $VM -Cloud $Cloud

[Screenshot: Set-SCVirtualMachine completing successfully]

 

Then we have success!

 

[Screenshot: the VM listed in the target cloud]

 

I’ve yet to figure out why this is, but at least it works.

I hope this makes your day at least a little bit easier.

Thanks,

SCVMM Error 2912 “The configuration registry database is corrupt (0x800703F1)”


I recently spun up a new SCVMM environment, created my first VM, and attempted to create a template only to be faced with a job error.

Error (2912)
An internal error has occurred trying to contact the Host01 server: : .

WinRM: URL: [http://Host01.lab.local:5985], Verb: [INVOKE], Method: [LoadSubkey], Resource: [http://schemas.microsoft.com/wbem/wsman/1/wmi/root/scvmm/P2VSourceFixup?RegFileName=C:\Users\SVC_VMM\AppData\Local\Temp\tmp6AB5.tmp]

The configuration registry database is corrupt (0x800703F1)

Recommended Action
Check that WS-Management service is installed and running on server host01.lab.local. For more information use the command “winrm helpmsg hresult”. If host01.lab.local is a host/library/update server or a PXE server role then ensure that VMM agent is installed and running. Refer to http://support.microsoft.com/kb/2742275 for more details.

 

I’ve seen this issue before, and typically it’s because I go on auto-pilot and sysprep the VM by hand. That will cause this issue; instead, go ahead and start the VM, log in, shut it down, and let VMM do the sysprep.

Unfortunately, this time that wasn’t the problem, though it was similar. When I shut the VM down I accidentally hit “Turn Off”, which hard-powered the VM down. A simple boot, login, graceful shutdown, and retry fixed the problem here.

 

I hope this makes your day at least a little bit easier.

Thanks,

Domain Controllers “Grayed out” in SCOM 2012


I did a few new SCOM 2012 installs recently and noticed that after pushing the agent to the DCs, they showed up grayed out in Ops Manager. Here’s a quick tip on how to fix that.

Log on to the DC(s) and, from an administrative command prompt, run the HSLockdown tool to add the Local System account to the allowed group.

The tool is located in C:\Program Files\System Center Operations Manager\Agent.

*NOTE* In newer versions, this is now stored in “C:\Program Files\Microsoft Monitoring Agent\Agent”.

[Screenshot: HSLockdown.exe in the agent folder]

 

Run the command “HSLockdown /L” to show which accounts are being allowed or denied. In this case, Local System isn’t even listed.

 

[Screenshot: HSLockdown /L output]

 

 

Now run the HSLockdown tool again with the add switch to allow local system.

HSLockdown /A "NT AUTHORITY\SYSTEM"

[Screenshot: HSLockdown /A output]

 

Restart the agent with “net stop healthservice && net start healthservice” and give it 5 minutes or so; then it should be all green in your dashboard.
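If you prefer doing the whole thing from PowerShell, the sequence looks roughly like this (use the newer agent path from the note above if it applies to your version):

# Run from an elevated PowerShell prompt on the DC
cd "C:\Program Files\System Center Operations Manager\Agent"
.\HSLockdown.exe /L                          # list which accounts are allowed or denied
.\HSLockdown.exe /A "NT AUTHORITY\SYSTEM"    # allow the Local System account
Restart-Service HealthService                # restart the agent's health service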

 

Hope this made your day at least a little easier!

 

How to mount your OneDrive as a local mapped drive: Part 2


A while back I wrote a blog post about how to map your OneDrive as a local (network) drive, and it has been hugely popular (contrary to anything I could have imagined).

https://thetechl33t.com/2014/03/14/how-to-mount-your-onedrive-as-a-local-mapped-drive/

 

 

I’ve even seen it referred to in the Microsoft Community Forums. So I decided to share something that I’ve been playing with: the start of a tool to automate this otherwise lengthy process. Granted, at this point it’s still something of a version 0.1, but I’ll share it anyway.

 

There are three things you need to have to make this tool do its magic.

  1. Your Microsoft CID
  2. Your Email
  3. Your Password

As long as you have those things the tool will do the rest!

 

The only thing here that you need to find yourself is your Microsoft CID, which isn’t too hard to do. Let’s grab that real quick!

Browse to OneDrive on the web and copy the CID from the URL in the address bar.

Once you have this ID copied, you’re all set! You can download the PowerShell script here. Right-click it and run with PowerShell. *Note* Accessing OneDrive this way is NOT supported and may act sluggish at times.
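If you’re curious what the script does under the hood, it boils down to mapping the OneDrive WebDAV endpoint from the original post; a rough sketch (the CID, drive letter, email, and password below are all placeholders) looks like this:

# Sketch: map OneDrive as a network drive via the d.docs.live.net WebDAV endpoint (replace the placeholder values)
$cid = "0123456789abcdef"
net use O: "https://d.docs.live.net/$cid" YourPassword /user:you@example.com /persistent:yes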

 

In some free time I’ll be working on using the Windows Live APIs to automatically pull the CID in the next version of this application. I hope I’ve made your day a little bit easier!