Configure Azure Blob Archive Storage


Azure storage is great. A good thought to open on, right? Of course! This year Azure graced us with the ability to preview the new Azure Archive Storage. Obviously this is enticing, especially at its current $0.0018/GB price point. For more cost information on Azure Archive Storage, you can visit the link below.

https://azure.microsoft.com/en-us/pricing/details/storage/blobs/

 

Now this is nice, but I found myself a bit perplexed: how do I configure a storage account as an "archive" storage account? As it turns out, you don't. Let's walk through configuring an archive blob tier.

First, obviously you need a storage account. The Archive access tier is currently available on either the "Blob" or "General Purpose v2" account kind. General Purpose v2 works the same way; you'll just also have the ability to host non-blob storage (File, Queue, Table). I'm going to choose Blob for this purpose.

 

Account kind selected, I'll create the storage account. You can choose whatever access tier you'd like; that's the access tier all of your objects will inherit by default. I chose "Cool" here because you have to upload data before you can archive it, and the Cool tier saves money initially.
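If you prefer scripting it, a rough sketch of the account creation with the AzureRM-era cmdlets would look something like this (the resource group, account name, and location are placeholders I made up):

# All names below are placeholders; the BlobStorage (or StorageV2) kind supports the Archive tier,
# and -AccessTier only sets the default tier that uploaded blobs inherit.
New-AzureRmStorageAccount -ResourceGroupName "rg-archive-demo" `
    -Name "archivedemostorage" `
    -Location "eastus2" `
    -SkuName "Standard_LRS" `
    -Kind "BlobStorage" `
    -AccessTier "Cool"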

 

Alright, the storage account is created; let's go open it up.

 

If you go to the "Configuration" tab, you can see the default access tier you selected during creation. Here is where I was a bit confused: why don't I have the ability to select Archive? You'll see in a bit.

 

Go ahead and create a container and upload a file. I created a container with the very complex name of "container1" and uploaded my very important image file that I want to archive.

 

You can see above that the inherited access tier is "Cool", which was set at the storage account level. If you go into the blob properties, you can see at the bottom there is an option to select the access tier for that specific file. Ah! There it is, Archive!

 

I'll go ahead and select Archive, and see the following message.

 

 

Please be cognizant of this; they aren't kidding when they say that rehydration can take a long time. We can now refresh and see that the file is set to an access tier of "Archive".
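If you'd rather script the tier change than click through the portal, here's a minimal sketch using the AzureRM-era storage cmdlets (the account, container, and blob names are placeholders, and it assumes you've already logged in with Login-AzureRmAccount):

# Placeholder names throughout
$key  = (Get-AzureRmStorageAccountKey -ResourceGroupName "rg-archive-demo" -Name "archivedemostorage")[0].Value
$ctx  = New-AzureStorageContext -StorageAccountName "archivedemostorage" -StorageAccountKey $key
$blob = Get-AzureStorageBlob -Container "container1" -Blob "very-important-image.jpg" -Context $ctx

# Set the per-blob access tier; the valid standard tiers are Hot, Cool, and Archive
$blob.ICloudBlob.SetStandardBlobTier("Archive")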

 

Fantastic, we've archived the file! Now here is where you have to be careful: while the file is in the Archive tier, the only data you're able to access is the file metadata. The file itself is NOT ACCESSIBLE until it is rehydrated. If you try to download the file while it's archived, you'll see the following message.

 

Archive storage is designed for very long-term storage that you don't need to access immediately, hence the low price point. If you do need to access your file, you simply go back to that object and change its access tier to either Cool or Hot. It will then go through the "rehydration" process to move the file back into an accessible access tier.
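Scripted, rehydration is the same call pointed back at an online tier (reusing the $blob object from the sketch above):

# Moving the blob back to Cool (or Hot) kicks off the rehydration process
$blob.ICloudBlob.SetStandardBlobTier("Cool")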

 

I urge you to take that message seriously; in this example it took about 8 hours for my 48 KB image file to be rehydrated. They say it takes longer for larger files, and I'm going to test that next. In the meantime, assume it will take quite some time for a file to become accessible again. After which time, WHEW! I recovered my very, very important file.

 

There you go, how to configure Azure Blob Archive Storage.

I hope I’ve made your day at least a little bit easier.


DFSR Failure After VM Restore (DFSR Error 2104)


I have an environment that heavily leverages DFS, and recently one of the replication member servers had to be restored from a Veeam backup. Typically Veeam is great and doesn't cause any issues; in this case, though, DFS completely broke. I got a TON of SCOM alerts, and the event log was littered with them as well.

The DFS Replication service failed to recover from an internal database error on volume D:. Replication has been stopped for all replicated folders on this volume.

Additional Information:
Error: 9214 (Internal database error (-1605))
Volume: D: xxxxxx
Database: D:\System Volume Information\DFSR

 

Event 2212, DFSR 

The DFS Replication service has detected an unexpected shutdown on volume D:. This can occur if the service terminated abnormally (due to a power loss, for example) or an error occurred on the volume. The service has automatically initiated a recovery process. The service will rebuild the database if it determines it cannot reliably recover. No user action is required.

Additional Information:
Volume: D:
GUID: xxxxxxxxxxxxxxx

 

Error 2104, DFSR 

The DFS Replication service failed to recover from an internal database error on volume D:. Replication has been stopped for all replicated folders on this volume.

Additional Information:
Error: 9214 (Internal database error (-1605))
Volume: xxxxxxxxxxxxxxxxxxxxxxxx
Database: D:\System Volume Information\DFSR

 

The important error here is 2104, noting the database issue. There are multiple topics out there that talk about this, but they all end up linking back to this support article.

 https://support.microsoft.com/en-us/help/2517913/distributed-file-system-replication-dfsr-no-longer-replicates-files-af

 

In the end, essentially the database used by DFS Replication becomes corrupted. It is a system-generated database, so all you need to do is stop the replication service, delete the database, and start the replication service back up. Easy? No. There are a myriad of issues with doing this, mostly because the database is hosted in "System Volume Information" on the volume that hosts the DFS root folder, or wherever you've placed the replication targets. Luckily for you, I hit my head against a wall for hours on end and figured out the solution.

Step 1: Stop DFSR service (stop-service DFSR)

Step 2: Grant yourself visibility to the “System Volume Information” folder. This entails flipping the radio button in explorer to “view hidden files”, as well as unchecking the box for “hide all system protected folders”.

Step 3: Grant yourself proper permissions to the "System Volume Information" folder. Go to the root of the volume that holds the replication targets, e.g. D:\. You will now see a grayed-out folder with a lock on it called "System Volume Information". Go through the normal rigamarole to grant "Administrators" full control over the folder. You should then be able to open it; before, it would have said "Access Denied".

Step 4: Delete or rename the "DFSR" folder inside "System Volume Information". Unfortunately, that's not easy. Based on what I saw, the file names in the database folder exceeded Explorer's path length limitations ( https://thetechl33t.com/2014/04/22/varying-file-name-too-long-issues ). The easiest thing to use here is the wonderful Robocopy /MIR! Create an empty folder in the root of the drive and mirror it into the DFSR folder using Robocopy's /MIR flag. This "mirrors" the empty source folder into the destination folder, emptying it out.
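For reference, the empty-folder mirror trick looks something like this from an elevated PowerShell prompt (adjust the drive letter to whatever volume hosts your replication targets):

# Mirror an empty folder over the corrupt database folder. /MIR makes the destination
# match the (empty) source, which deletes its contents even when the paths are too
# long for Explorer to handle.
New-Item -ItemType Directory -Path "D:\empty" | Out-Null
robocopy "D:\empty" "D:\System Volume Information\DFSR" /MIR
Remove-Item "D:\empty"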

Now the DFSR folder should be completely empty.

 

Step 5: Start the DFS Replication service (start-service DFSR)

Step 6: Check the event log for validating events.

Event 4102, DFSR 

The DFS Replication service initialized the replicated folder at local path D:\xxxxxx and is waiting to perform initial replication. The replicated folder will remain in this state until it has received replicated data, directly or indirectly, from the designated primary member.

Additional Information:
Replicated Folder Name: XXXXXXX
Replicated Folder ID: XXXXXXXXXXXXXXXXXXXX 
Replication Group Name: XXXXX\XXXX 
Replication Group ID: XXXXXXXXXX
Member ID: XXXXXXXXXXXXX

 

Event 4412, DFSR

The DFS Replication service detected that a file was changed on multiple servers. A conflict resolution algorithm was used to determine the winning file. The losing file was moved to the Conflict and Deleted folder.

Additional Information:
Original File Path: D:\XXXXXXX
New Name in Conflict Folder: XXXXXXXXXXX
Replicated Folder Root: D:\XXXXXXXX
File ID: XXXXXXXXXXXXXXXX
Replicated Folder Name: XXXXXXXXXXXX
Replicated Folder ID: XXXXXXXXXXXXXXX
Replication Group Name: XXXXXXXXXXXXXX
Replication Group ID: XXXXXXXXXXXXXXXXX
Member ID: XXXXXXXXXXXXXXXXXXXX

 

 

There you go! You’ve done it! Microsoft said you had to contact their support to fix it, but you crafty devil – you’ve gone and done it yourself.

I hope I’ve made your day at least a little bit easier.

Configure Server Core for IIS Remote Management


Everyone's familiar by now with the reasons you'd want to use Server Core for things like IIS, DNS, etc. In a recent project I ran into an interesting scenario where my GUI management server couldn't connect remotely to the IIS instance I was running on Server 2016 Core. There are a few oddities, so I decided to blog about it. Let's get going.

TL;DR steps are as follows:

  • Install IIS Web Role
  • Install IIS Management Feature
  • Change Registry Setting for Remote Management
  • Set Management Service to start automatically
  • Connect
  • Work
  • Get a promotion
  • Get a raise
  • Get a boat

Maybe not the boat, but that’s the dream right? Anyways, here’s the nitty gritty.

 

First, we need to see if IIS is installed. Presumably, since you're already trying to figure out how to connect to it, you've already done this, but it's good to check anyway, just to be sure. Note that Server Core first drops you into a cmd shell. This is 2017 and everything is done in PowerShell now, so go ahead and launch yourself into a PS shell. Then we'll check whether the feature is installed by running the following command.

Get-WindowsFeature | Where-Object {$_.DisplayName -eq "Web Server (IIS)"}

 

Here we can see that IIS is in fact not installed, so let's go ahead and fix that. While we install IIS, it's important to install the IIS remote management feature as well; otherwise, there will be no connecting remotely to the instance. I'm installing both on the same line, using the following command.

Install-WindowsFeature Web-Server, Web-Mgmt-Service

 

It shouldn’t take too long. When it’s done you’ll get your output showing it’s complete.

 

Now that everything is installed, there is actually a registry key that needs to be modified. RegEdit can be launched from the Server Core command line, and you'll need to set the following value to "1" rather than the default of "0".

HKLM\SOFTWARE\Microsoft\WebManagement\Server\EnableRemoteManagement
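If you'd rather skip RegEdit, a quick PowerShell equivalent would be along these lines:

# Flip the remote management flag from the default 0 to 1
Set-ItemProperty -Path "HKLM:\SOFTWARE\Microsoft\WebManagement\Server" -Name "EnableRemoteManagement" -Value 1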

 

Right, now we've got the settings in place. Unfortunately, things still don't work. That's because the IIS remote management service is disabled by default. Let's go ahead and fix that by setting the service startup type to "Automatic", starting the service, and querying its state to confirm. We will do that using the following three commands.

Set-Service WMSVC -StartupType "Automatic"

Start-Service WMSVC

Get-Service WMSVC

 

The status is now running, so we should be good to go. Let's give it a shot by going to the GUI management server, launching the IIS console, and connecting to the Server Core box.

 

It will prompt you for the server name and a user/password combo, after which everything should be all set!

 

 

So there you have it, we’ve configured all the required settings to remotely manage IIS on server core!

I hope this makes your day at least a little bit easier.

Thanks,

Changing Azure Recovery Services Vault to LRS Storage


Back in the classic portal with Backup services, this was an easy fix: simply change the storage replication type in the vault's settings. I've recently started moving my workloads to Recovery Services vaults in ARM and noticed something peculiar: by default, the storage replication type of the vault is GRS.

 

If your needs require geographically redundant storage, then that's perfectly fine. I, however, don't have such needs, and I trust in Microsoft's ability to keep data generally available with an LRS replication topology. It should be just like it was in classic, at least as an option, right? Strangely, the option to change the replication type in the vault's storage configuration is grayed out.

 

 

Odd, right? I thought so, until I found this.

 

Okay, well, it's not optimal, but it looks like I just need to remove the backup data from the vault to change the storage replication type, right? Well, I gave that a shot: no go. I had the same issue; the option was still grayed out.

I ultimately had to completely delete the vault and create a new Recovery Services vault. Once it's initially created, you can change the replication type.
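For what it's worth, the same change can be scripted. Here's a rough sketch with the AzureRM Recovery Services cmdlets (vault and resource group names are placeholders), run right after creation and before anything is registered to the vault:

# Grab the freshly created vault and flip its backup storage redundancy to LRS.
# This only works while nothing is protected by the vault yet.
$vault = Get-AzureRmRecoveryServicesVault -ResourceGroupName "rg-backup" -Name "rsv-demo"
Set-AzureRmRecoveryServicesBackupProperties -Vault $vault -BackupStorageRedundancy LocallyRedundant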

 

 

Ah, finally! Then register the VM(s), run some backup jobs, and voila! Confirmation that the vault is using LRS storage.

 

I hope this makes your day at least a little bit easier.

Thanks,

Explanation: F5 LTM Full-Proxy Architecture && SSL Bridging


The concept of a full-proxy architecture, along with SSL bridging, has seemed to confuse a good majority of the people I've attempted to explain it to. In that light, here we go. I could write a long, drawn-out explanation of this process (and will, if requested), but most folks reading this want a quick answer. Let's proceed.

A few things to note:

  • "Full-proxy architecture" means that clients and servers on either side of the F5 never talk to each other directly. The client thinks the F5's endpoint (iApp) is the server, and the server thinks the F5 is the client.
  • "SSL bridging" means the client -> F5 leg is encrypted, the traffic is decrypted on the F5 for processing, then re-encrypted, and the F5 -> server leg is encrypted again.
  • "F5" is actually the company name; the product has many other names, such as F5 BIG-IP LTM ADC.
  • It is a networking device, not a server; you can't RDP to it like some people have assumed (although you can SSH into the management interface and use the TMOS shell, tmsh).

There is typically some confusion around which certs are on which box and whether or not they match. For anything that goes through the F5, the answer is: it doesn't matter. Clients ONLY need to care about, and trust, the cert applied by the SSL bridging profile attached to the iApp that corresponds with the endpoint for that app. In the example I've drawn below (thanks to a fancy BrightLink board), I show that the source client (which can be a server if you want), the F5, and the destination server all have different certs. Again, though, all that matters to anyone besides the F5 is the cert that the F5 presents. Note that the steps are numbered in green.

 

I hope this makes your day at least a little bit easier.

Thanks,

 

 

WSUS App Pool Crashes with SCCM Synchronization


I've seen this a few times now, sometimes with standalone WSUS but mostly with SCCM running a software update point. Every time SCCM does an update synchronization, the app pool crashes. If it runs again it will typically complete, but it's still rather annoying, especially if you have the SCOM management pack for IIS and/or SCCM. You'll see things like the following.


Alert: ConfigMgr Server Component Issue

Source: ConfigMgr WSUS Synchronization Manager

Last modified by: System

Last modified time: 3/27/2017 4:14:02 AM Alert description: Component ConfigMgr WSUS Synchronization Manager - SCCMServer.domain.local (SMS_WSUS_SYNC_MANAGER) on server SCCMServer.domain.local is not working properly.

 


Application Error:

Faulting application name: w3wp.exe, version: 7.5.7601.17514, time stamp: 0x4ce7afa2
Faulting module name: KERNELBASE.dll, version: 6.1.7601.17651, time stamp: 0x4e21213c
Exception code: 0xe0434352
Fault offset: 0x000000000000cacd
Faulting process id: 0x141c
Faulting application start time: 0x01cd64a70072cec1
Faulting application path: c:\windows\system32\inetsrv\w3wp.exe
Faulting module path: C:\Windows\system32\KERNELBASE.dll
Report Id: 3e5d5bdc-d09a-11e1-a2f5-00155d2c1824

 


Log Name: System
Source: Microsoft-Windows-WAS
Event ID: 5074
A worker process with process id of ‘%1’ serving application pool ‘%2’ has requested a recycle because the worker process reached its allowed processing time limit.
Log Name: Application
Source: Windows Server Update Services
Event ID: 12072
The WSUS content directory is not accessible. System.Net.WebException: The remote server returned an error: (503) Server Unavailable. at System.Net.HttpWebRequest.GetResponse() at Microsoft.UpdateServices.Internal.HealthMonitoring.HmtWebServices.CheckContentDirWebAccess(EventLoggingType type, HealthEventLogger logger)

Log Name: Application
Source: SMS Server
Event ID: 7000
On 8/13/2015 3:22:40 AM, component SMS_WSUS_CONTROL_MANAGER on computer WSUS.fqdn reported: WSUS Control Manager failed to configure proxy settings on WSUS Server "WSUS.fqdn".
Possible cause: WSUS Server version 3.0 SP2 or above is not installed or cannot be contacted.
Solution: Verify that the WSUS Server version 3.0 SP2 or greater is installed. Verify that the IIS ports configured in the site are same as those configured on the WSUS IIS website. You can receive failure because proxy is set but proxy name is not specified or proxy server port is invalid.
What it turns out to be is that the WSUS application pool has some "rapid-fail" settings in IIS itself, and they are being overrun by the overhead of the SCCM SUP sync, causing a pool recycle. It turns out this is actually a pretty easy fix; the steps are below, and a scripted equivalent follows the list.
  • Launch IIS Manager on the server that hosts WSUS
  • Open Application Pools
  • Right click “WSUSPool”, then “Advanced Settings”
  • Change ‘Queue Length’ from the default 1,000 to 25,000. You will note this number is also the same as the maximum number of clients supported per SUP in an SCCM architecture.

  • Locate "Private Memory Limit (KB)". The default is "1843200" (~1.8 GB); a good practice I've found is to set it to "7843200" (~7.8 GB). If for whatever reason you are still exceeding this limit, you can set it to "0", denoting an unlimited amount.

  • Restart the "WSUSPool" app pool.
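Here's a rough sketch of that scripted equivalent using the WebAdministration module (double-check the pool name in your environment; it's often "WsusPool"):

Import-Module WebAdministration

# Raise the queue length and the private memory recycle threshold on the WSUS pool
Set-ItemProperty "IIS:\AppPools\WsusPool" -Name queueLength -Value 25000
Set-ItemProperty "IIS:\AppPools\WsusPool" -Name recycling.periodicRestart.privateMemory -Value 7843200

# Recycle the pool so the new settings take effect
Restart-WebAppPool "WsusPool"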

 

If you run a standalone WSUS instance, you can now go do a manual synchronization in the WSUS management console to test the change.

 

Or, if you have an SCCM instance leveraging WSUS, you DO NOT DO ANYTHING IN THE WSUS CONSOLE (in case you didn't know). Go ahead and launch your SCCM console and do the sync from there.

 

 

These changes should have fixed your problems, and all should be running well! If not, I recommend you contact Microsoft (especially if you have a very large infrastructure), since there are a few more tweaks you can make in IIS.

I hope this makes your day at least a little bit easier.

Thanks,

SCOM Alert Severity and Priority Values


I'm not sure why I have a hard time remembering which way the numeric representations go, but here they are (with a quick query example after the lists).

Severity:

  • Informational = 0
  • Warning = 1
  • Critical = 2

Priority:

  • Low = 0
  • Medium = 1
  • High = 2
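And the quick query example mentioned above: with the Operations Manager cmdlets these numbers go straight into a criteria string, for example to pull open critical, high-priority alerts (a sketch; adjust the criteria to your needs):

# Severity 2 = Critical, Priority 2 = High, ResolutionState 0 = New
Get-SCOMAlert -Criteria "Severity = 2 AND Priority = 2 AND ResolutionState = 0"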

 

Thanks!