Storage

Shared Storage Options in Azure: Part 1 – Azure Shared Disks


In an IaaS world, shared storage between virtual machines is a common ask. “What is the best way to configure shared storage?”, “What options do we have for sharing storage between these VMs?”, both are questions I’ve answered several times, so let’s go ahead and blog some of the options! The first part in this blog series titled “Shared Storage Options in Azure”, will cover Azure Shared Disks.

As I write subsequent posts in this series, I will update this post with the links to each of them.

  • Part 1: Azure Shared Disks
  • Part 2: IaaS Storage Server (coming soon)
  • Part 3: Azure Files (coming soon)
  • Part 4: Blob Storage (coming soon)
  • Part 5: Azure NetApp Files (coming soon)

When shared disks were announced in July of 2020, there was quite a bit of excitement in the community. There are so many applications that still leverage shared storage for things like Windows Server Failover Clustering, on which many applications are built, like SQL Server Failover Cluster Instances. Also, while I highly recommend using a Cloud Witness, many customers migrating workloads to Azure still rely on a shared disk for quorum as well. Additionally, many Linux applications that were previously configured to use a shared virtual disk, or even raw LUN mappings, leverage shared storage for clustered file systems such as GFS2 or OCFS2.

Additional sample workloads for Azure Shared Disks can be found here: Shared Disk Sample Workloads.

There are a few limitations of shared disks, the list of which is constantly getting smaller. For now, though, let’s just go ahead and jump into it and see how to deploy them. After which, we’ll do a quick “Pros” and “Cons” list before moving on to the other shared storage options. I deployed Shared Disks in my lab using the portal first (screenshots below), but also created a Github Repository (https://github.com/matthansen0/azure-shared-storage-options) with the Azure PowerShell script and an ARM template to deploy a similar environment – feel free to use those if you’d like!

As a prerequisite (not pictured below) I created the following resources:

  • A Resource Group in the West US region
  • A Virtual Network with a single subnet
  • 2x D2s v3, Windows Server 2016 Virtual Machines (VM001, VM002) each with a single OS disk

Now that those are created, I deployed a Managed Disk (named “sharedDisk001”) just like you would if you were deploying a typical data disk.

On the "Advanced" tab you will see the ability to configure the managed disk as a "shared disk". Here is where you set the max shares, which specifies the maximum number of VMs that can attach that particular disk.
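
If you'd rather script that step, here's a minimal Azure PowerShell sketch (the resource group, size, and maxShares values below are just examples; the full script and ARM template are in the GitHub repo above):

    # Example values, adjust to match your environment
    $rg       = "rg-sharedstorage"
    $location = "westus"

    # MaxSharesCount is what the portal calls "Max shares" on the Advanced tab
    $diskConfig = New-AzDiskConfig -Location $location `
        -DiskSizeGB 256 `
        -SkuName Premium_LRS `
        -CreateOption Empty `
        -MaxSharesCount 2

    New-AzDisk -ResourceGroupName $rg -DiskName "sharedDisk001" -Disk $diskConfig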

After the disk is finished deploying, we head over to the first VM and attach an existing disk. You'll note that the disk shows up as a "shared disk" and shows the number of shares currently in use on that disk. Since this is the first time it's being attached, it shows 0.

After attaching the disk to the first VM, we head over and do the same thing on VM002. You’ll note that the number of shares has increased by 1 since we have now mounted the disk on VM001.
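
The attach step can be scripted the same way. Roughly, using the lab names from above:

    $rg   = "rg-sharedstorage"
    $disk = Get-AzDisk -ResourceGroupName $rg -DiskName "sharedDisk001"

    # Attach the same managed disk to both VMs on an open LUN
    foreach ($vmName in @("VM001", "VM002")) {
        $vm = Get-AzVM -ResourceGroupName $rg -Name $vmName
        $vm = Add-AzVMDataDisk -VM $vm -Name "sharedDisk001" -ManagedDiskId $disk.Id -Lun 0 -CreateOption Attach
        Update-AzVM -ResourceGroupName $rg -VM $vm
    }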

Great, now the disk is attached to both VMs! Heading over to the managed disk itself you’ll notice that the overview page looks a bit different from typical managed disks, showing information like “Managed by” and “Max Shares”.

In the properties of the disk, we can see the VM owners of that specific disk, which is exactly what we wanted to see after mounting it on each of the VMs.

Although I set up this configuration using Windows machines, you'll notice I didn't go into the OS. That's because the process, from an Azure perspective, is the same for Linux as it is for Windows VMs. Of course, it will be different within the OS, but there is nothing Azure-specific from that aspect.

Okay, here we go, the Pros and Cons:

Pros:

  • Azure Shared Disks allow for the use of what is considered to be "legacy clustering technology" in Azure.
  • Can be leveraged by familiar tools such as Windows Failover Cluster Manager, Scale-Out File Server, and Linux Pacemaker/Corosync.
  • Premium and Ultra Disks are supported, so performance shouldn't be an issue in most cases.
  • Supports SCSI Persistent Reservations.
  • Fairly simple to set up.

Cons:

  • Does not scale well, similar to what would be expected with a SAN mapping.
  • Only certain disk types are supported.
  • ReadOnly host caching is not available for Premium SSDs with maxShares >1.
  • When using Availability Sets and Virtual Machine Scale Sets, storage fault domain alignment with the VMs is not enforced on the shared data disk.
  • Azure Backup not yet supported.
  • Azure Site Recovery not yet supported.

Alright, that’s it for Azure Shared Disks! Go take a look at my Github Repository and give shared disks a shot!

Please reach out to me in the comments, LinkedIn, or Twitter with any questions or comments about this blog post or this series.  

  • Part 1: Azure Shared Disks
  • Part 2: IaaS Storage Server (coming soon)
  • Part 3: Azure Files (coming soon)
  • Part 4: Blob Storage (coming soon)
  • Part 5: Azure NetApp Files (coming soon)

Delete Azure Recovery Vault Backups Immediately


If you're like many others, over the past few months you've noticed that if you configure Azure Backup, you can't delete the vault until 14 days after you stop backups and delete the backup data. This is due to Soft Delete for Azure Backup. It doesn't cost anything to keep those backups during that time, and it's honestly a great safeguard against accidentally deleting backups, since it gives you the option to "undelete". Though, in some cases (mostly in lab environments) you just want to clear it out (or, as was affectionately noted by a colleague of mine, "nuke it from orbit"). Let's walk through how to do that real quick.

When you stop backups and delete the data, you'll get the warning "The restore points for this backup item have been deleted and retained in a soft delete state", and you have to wait 14 days to fully delete those backups. You'll also get an email alert letting you know.

To remove these backups immediately we need to disable soft delete, which is a configuration setting in the Recovery Services Vault. DO NOT DO THIS UNLESS YOU ABSOLUTELY MUST. As previously noted, this is a great safeguard to have in place, and I would also suggest using ARM Resource Locks in production environments in addition to soft delete. If you're sure though, we can go turn it off.

Alright, now that we've disabled Soft Delete for the vault, we have to commit the delete operation again. This means we'll first need to "undelete" the backup, then delete it again, which this time won't be subject to the soft delete policy.
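
If you'd rather not click through the portal, the same flow can be scripted. Here's a rough sketch using the Az.RecoveryServices module (the vault and resource group names are examples, and it assumes a single protected VM as in a lab):

    $vault = Get-AzRecoveryServicesVault -ResourceGroupName "rg-lab" -Name "labVault"

    # 1. Disable soft delete on the vault (again: only if you absolutely must)
    Set-AzRecoveryServicesVaultProperty -VaultId $vault.ID -SoftDeleteFeatureState Disable

    # 2. Undelete the soft-deleted backup item
    $item = Get-AzRecoveryServicesBackupItem -BackupManagementType AzureVM -WorkloadType AzureVM -VaultId $vault.ID
    Undo-AzRecoveryServicesBackupItemDeletion -Item $item -VaultId $vault.ID -Force

    # 3. Delete it again, this time removing the recovery points for good
    Disable-AzRecoveryServicesBackupProtection -Item $item -VaultId $vault.ID -RemoveRecoveryPoints -Force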

Now we can go delete it again, after which we can find that there are no backup items in the vault.

Success!!! The backup is fully deleted. So long as there are no other dependencies (policies, infrastructure, etc.) you can now delete the vault.

If you have any questions or suggestions for future blog posts feel free to comment below, or reach out to me via email, twitter, or LinkedIn.

Thanks!

Configure Azure Blob Archive Storage


Azure storage is great. Good thought to open on, right? Of course! This year Azure graced us with the ability to preview the new Azure Archive Storage. Obviously this is enticing, especially at its (current) $0.0018/GB price point. For more cost information on Azure Archive Storage you can visit the link below.

https://azure.microsoft.com/en-us/pricing/details/storage/blobs/

Now this is nice, but I found myself a bit perplexed. How do I configure a storage account as an "archive" storage account? As it turns out, you don't. Let's walk through configuring the archive blob tier.

First, obviously you need a storage account. The Archive access tier is currently available on either "Blob" or "General Purpose v2" accounts. General Purpose v2 works the same way; you'll just also have the ability to host non-blob storage (File, Queue, Table). I'm going to choose Blob for this purpose.

Account kind selected, I'll create the storage account. You can choose whatever Access Tier you'd like; that's the access tier all of your objects will inherit by default. I chose "Cool" here because you have to upload data before you can archive it, and the Cool tier saves money initially.
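
If you want to script that, creating the account is one PowerShell call (names and SKU here are examples):

    # Blob account with a default access tier of Cool
    New-AzStorageAccount -ResourceGroupName "rg-archive" `
        -Name "techl33tarchive001" `
        -Location "westus" `
        -SkuName Standard_LRS `
        -Kind BlobStorage `
        -AccessTier Cool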

Alright storage account created, let’s go open it up.

If you go to the "Configuration" tab you can see the default access tier you selected during creation. Here is where I was a bit confused: why don't I have the ability to select Archive? You'll see in a bit.

Go ahead and create a container, and upload a file. I created a container with the very complex name of “container1”, and have uploaded my very important image file that I want to archive.

You can see above that the inherited access tier is "Cool", which was set at the storage account level. If you go into the blob properties, you can see at the bottom there is an option to select the access tier for that specific file. Ah! There it is, Archive!

I'll go ahead and select Archive, and see the following message.

Please be cognizant of this; they aren't kidding when they say that rehydration can take a long time. We can now refresh and see that the file is set to the "Archive" access tier.

Fantastic, we've archived the file! Now here is where you have to be careful: while the file is in the Archive tier, the only data you're able to access is the file metadata. The file itself is NOT ACCESSIBLE until rehydrated. If you try to download the file while it's archived, you'll see the following message.

Archive storage is designed to be very long-term storage that you don't need to access immediately, thus the low price point. If you do need to access your file, you simply go back to that object and change its access tier to either Cool or Hot. It will then go through the "rehydration" process to move the file back into an accessible access tier.
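
The per-blob tier change can also be done with the Az.Storage module. A rough sketch (the account and blob names are examples; newer module versions also expose $blob.BlobClient.SetAccessTier()):

    $ctx  = (Get-AzStorageAccount -ResourceGroupName "rg-archive" -Name "techl33tarchive001").Context
    $blob = Get-AzStorageBlob -Container "container1" -Blob "veryimportant.jpg" -Context $ctx

    # Send the blob to the Archive tier
    $blob.ICloudBlob.SetStandardBlobTier("Archive")

    # Later: kick off rehydration by moving it back to Cool (or Hot); this can take hours
    $blob.ICloudBlob.SetStandardBlobTier("Cool")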

I urge you to take that message seriously; in this example it took about 8 hours for my 48 KB image file to be rehydrated. They say it takes longer for larger files, and I'm going to test that next. In the meantime, assume it will take quite some time for the file to become accessible again. After which time, WHEW! I recovered my very, very important file.

There you go, how to configure Azure Blob Archive Storage.

I hope I’ve made your day at least a little bit easier.

DFSR Failure After VM Restore (DFSR Error 2104)


I have an environment that heavily leverages DFS, and recently one of the replication member servers had to be restored from a VEEAM backup. Typically VEEAM is great and doesn't cause any issues; in this case, though, DFS completely broke. I got a TON of SCOM alerts, and the event log was littered with them as well.

The DFS Replication service failed to recover from an internal database error on volume D:. Replication has been stopped for all replicated folders on this volume. Additional Information: Error: 9214 (Internal database error (-1605)) Volume: D: xxxxxx Database: D:\System Volume Information\DFSR


Event 2212, DFSR 

The DFS Replication service has detected an unexpected shutdown on volume D:. This can occur if the service terminated abnormally (due to a power loss, for example) or an error occurred on the volume. The service has automatically initiated a recovery process. The service will rebuild the database if it determines it cannot reliably recover. No user action is required.

Additional Information:
Volume: D:
GUID: xxxxxxxxxxxxxxx


Error 2104, DFSR 

The DFS Replication service failed to recover from an internal database error on volume D:. Replication has been stopped for all replicated folders on this volume.

Additional Information:
Error: 9214 (Internal database error (-1605))
Volume: xxxxxxxxxxxxxxxxxxxxxxxx
Database: D:\System Volume Information\DFSR


The important error here is 2104, noting the database issue. There are multiple topics out there that talk about this, but they all end up linking back to this support article.

 https://support.microsoft.com/en-us/help/2517913/distributed-file-system-replication-dfsr-no-longer-replicates-files-af


In the end, essentially the database that is used by DFS replication becomes corrupted. It is a system-generated database so all you need to do is disable the replication service, delete the database, and start the replication service back up. Easy? No. There are a myriad of issues with doing this, mostly because the database is hosted in “System Volume Information” on the volume that hosts the DFS Root folder, or wherever you’ve placed the replication targets. Luckily for you, I hit my head against a wall for hours on end and figured out the solution.

Step 1: Stop the DFSR service (Stop-Service DFSR)

Step 2: Grant yourself visibility to the “System Volume Information” folder. This entails flipping the radio button in explorer to “view hidden files”, as well as unchecking the box for “hide all system protected folders”.

Step 3: Grant yourself proper permissions to the "System Volume Information" folder. Go to the root of the volume that holds the replication targets, e.g. D:\. You will now see a grayed-out folder with a lock on it called "System Volume Information". Go through the normal rigmarole to grant "Administrators" full control over the folder. You should then be able to open it up; before, it would have said "Access Denied".

Step 4: Delete or rename the "DFSR" folder inside "System Volume Information". Unfortunately, that's not easy. Based on what I saw, it was because the file names in the database folder exceeded the path length limitations of Explorer ( https://thetechl33t.com/2014/04/22/varying-file-name-too-long-issues ). Here the easiest thing to use is the wonderful Robocopy /MIR! Create an empty folder in the root of the drive and copy it into the DFSR folder using the /MIR flag in Robocopy. This will "mirror" the empty source folder into the destination folder, wiping out its contents.
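
In command form, steps 3 and 4 look roughly like this (the drive letter and folder names are examples; adjust them to the volume that holds your replication targets):

    # Step 3: give Administrators full control over the hidden system folder
    icacls "D:\System Volume Information" /grant "Administrators:(F)"

    # Step 4: mirror an empty folder over the DFSR database folder, which wipes its contents
    New-Item -ItemType Directory -Path "D:\Empty" -Force | Out-Null
    robocopy "D:\Empty" "D:\System Volume Information\DFSR" /MIR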

Now the DFSR folder should be completely empty.


Step 5: Start the DFS Replication service (Start-Service DFSR)

Step 6: Check the event log for events validating the recovery.

Event 4102, DFSR 

The DFS Replication service initialized the replicated folder at local path D:\xxxxxx and is waiting to perform initial replication. The replicated folder will remain in this state until it has received replicated data, directly or indirectly, from the designated primary member.

Additional Information:
Replicated Folder Name: XXXXXXX
Replicated Folder ID: XXXXXXXXXXXXXXXXXXXX 
Replication Group Name: XXXXX\XXXX 
Replication Group ID: XXXXXXXXXX
Member ID: XXXXXXXXXXXXX


Event 4412, DFSR

The DFS Replication service detected that a file was changed on multiple servers. A conflict resolution algorithm was used to determine the winning file. The losing file was moved to the Conflict and Deleted folder.

Additional Information:
Original File Path: D:\XXXXXXX
New Name in Conflict Folder: XXXXXXXXXXX
Replicated Folder Root: D:\XXXXXXXX
File ID: XXXXXXXXXXXXXXXX
Replicated Folder Name: XXXXXXXXXXXX
Replicated Folder ID: XXXXXXXXXXXXXXX
Replication Group Name: XXXXXXXXXXXXXX
Replication Group ID: XXXXXXXXXXXXXXXXX
Member ID: XXXXXXXXXXXXXXXXXXXX


There you go! You’ve done it! Microsoft said you had to contact their support to fix it, but you crafty devil – you’ve gone and done it yourself.

I hope I’ve made your day at least a little bit easier.

Changing Azure Recovery Services Vault to LRS Storage


Back in the classic portal with Backup services this was an easy fix: simply change the storage replication type in the vault's settings. I've recently started moving my workloads to Recovery Services vaults in ARM, and noticed something peculiar. By default, the storage replication type of the vault is GRS.


If your needs require geographically redundant storage, then that's perfectly fine. I, however, don't have such needs, and trust in Microsoft's ability to keep data generally available in an LRS replication topology. It should be just like it was in classic, at least as an option, right? Strangely, the option to change the replication type for the storage configuration on the vault is grayed out.


Odd, right? I thought so, until I found this.


Okay, well it's not optimal, but it looks like I need to remove the backup data from the vault to change the storage replication type, right? Well, I gave that a shot and no go. I had the same issue; the option was still grayed out.

I ultimately had to completely delete the vault and create a new Recovery Services vault. Once it's initially created, you can change the replication type.
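
For what it's worth, once the new, empty vault exists the change can also be made with PowerShell. A sketch using the current Az cmdlets (the vault name is an example; it only works while the vault has no protected items):

    $vault = Get-AzRecoveryServicesVault -ResourceGroupName "rg-backup" -Name "newVault"

    # Switch the vault's backup storage from GRS to LRS
    Set-AzRecoveryServicesBackupProperty -Vault $vault -BackupStorageRedundancy LocallyRedundant

    # Confirm the setting
    Get-AzRecoveryServicesBackupProperty -Vault $vault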


Ah, finally! Then register the VM(s), run some backup jobs, and voila! Confirmation that the vault is using LRS storage.


I hope this makes your day at least a little bit easier.

Thanks,

Disk Performance on Server 2012 Task Manager


Something that I've noticed not a lot of people know is how to get disk performance to show up in a server's Task Manager. Yes, yes, I know. With the new Microsoft model you shouldn't be RDP'ing to servers anyways, but I still get asked how to do this. For some reason this is enabled by default on Windows 8, but not on servers. So here's the one command you need to fix that.

1) The problem: no Disk section in Task Manager. 🙁

[Screenshot: Task Manager without a Disk section]

2) Run the following command to enable it.

“DiskPerf -Y”

This enables the physical and logical disk performance counters.

[Screenshot: running DiskPerf -Y]

3) Close and re-open Task Manager.

[Screenshot: Task Manager now showing the Disk section]

There ya go!

I hope I’ve made your day at least a little bit easier.

Config File Iteration Backup – Change Checking Config Files


In a lot of environments with developers that use a lot of config files, it would sometimes be nice to keep older versions of those files. Fortunately Microsoft has graced us with shadow copies, so we can have "Previous Versions". The only issue with that is that you can't (as far as I know) turn on shadow copies for specific files only. So what I did was write a PowerShell script to take care of that, in a roundabout way.

What this script does is wait until a file has been modified, then copy it to an "archive" location and time-stamp it so you can review older copies.

At the beginning of the script there are two arrays that hold the full paths to the files. "$OriginalPath" is the array that holds the full path to each file you want to watch. In the script here, the two files I'm watching are "C:\configs\config1.txt" and "C:\configs\config2.txt". The second array is where you want to archive the files to. In the script here it's "C:\archive_configs\config1.txt" and "C:\archive_configs\config2.txt".


After the arrays are initialized, the script takes the current time minus one minute and compares it to the last write time of the file in question. If the file has been modified, it copies it to the archive location and appends a time stamp to the name, then loops back through if there are more files in the array.
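
Here's a minimal sketch of that logic (the full script is on the TechNet Gallery link below; the paths are the examples from above):

    # Files to watch and where to archive them (index-matched arrays)
    $OriginalPath = @("C:\configs\config1.txt", "C:\configs\config2.txt")
    $ArchivePath  = @("C:\archive_configs\config1.txt", "C:\archive_configs\config2.txt")

    $1MinAgo = (Get-Date).AddMinutes(-1)

    for ($i = 0; $i -lt $OriginalPath.Count; $i++) {
        $file = Get-Item -Path $OriginalPath[$i]

        # Only archive the file if it changed within the last minute
        if ($file.LastWriteTime -gt $1MinAgo) {
            $stamp      = Get-Date -Format "yyyy-MM-dd_HH-mm-ss"
            $destFolder = Split-Path -Path $ArchivePath[$i] -Parent
            $destName   = "{0}_{1}{2}" -f $file.BaseName, $stamp, $file.Extension

            Copy-Item -Path $file.FullName -Destination (Join-Path $destFolder $destName)
        }
    }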

[Screenshot: time-stamped copies in the archive folder]

What I’ve done is put this in Task Scheduler to run every 1 minute. If you want to modify that, take the line:

$1MinAgo = (get-date).AddMinutes(-1)

and the "Minutes" portion and the "(-1)" value can both be modified.


https://gallery.technet.microsoft.com/Config-File-Iteration-ab2a69df


I hope I’ve made your day a little bit easier!


How to mount your OneDrive as a local mapped drive: Part 2


A while back I wrote a blog post about how to map your OneDrive as a local drive (network drive), and it has been hugely popular (beyond anything I could have imagined).

http://thetechl33t.com/2014/03/14/how-to-mount-your-onedrive-as-a-local-mapped-drive/


I've even seen it referred to in the Microsoft Community Forums. So I decided to share something I've been playing with: the start of a tool to automate this otherwise lengthy process. Granted, at this point it's still something of a version 0.1, but I'll share it anyways.


There are three things you need to have to make this tool do its magic.

  1. Your Microsoft CID
  2. Your Email
  3. Your Password

As long as you have those things the tool will do the rest!


The only thing here that you may need to track down is your Microsoft CID, which isn't too hard to do. Let's help you grab that real quick!

First: [Screenshot: browse to OneDrive on the web]

  • Copy the CID in the top URL bar

Second: [Screenshot: the CID in the address bar]

Once you have this ID copied, you're all set! You can download the PowerShell script here. Right-click, and run with PowerShell! *Note* Accessing OneDrive this way is NOT supported and may act sluggish at times.
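
Under the hood, the core of what the tool automates amounts to mapping OneDrive's CID-based WebDAV path as a network drive, roughly like this (the CID and drive letter are placeholders, and again, this approach is unsupported):

    # Placeholder values; substitute your own CID and preferred drive letter
    $cid = "1234567890ABCDEF"

    # Map the WebDAV endpoint; you'll be prompted for your Microsoft account credentials if needed
    net use O: "https://d.docs.live.net/$cid" /persistent:yes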


In some free time I’ll be working on using the Windows Live APIs to automatically pull the CID in the next version of this application. I hope I’ve made your day a little bit easier!


How to mount your OneDrive as a local mapped drive


EDIT: If you liked this post, I’ve updated my process a little bit and written a script to automate a good chunk of this! Go check out Part 2 of this blog! http://thetechl33t.com/2014/10/03/how-to-mount-your-onedrive-as-a-local-mapped-drive-part-2/


OneDrive is an online storage service from Microsoft that is included when you have an email account such as @live.com, @hotmail.com, etc. I use it fairly often, and I was curious if I could map it locally; it turns out that I can.

First of all, you need to go to https://onedrive.com and use your Windows Live account (the same one you use to access Hotmail, Messenger, Windows Live Mail, or MSN) to log in and create the folders you want to use via the New menu. You can create private and shared folders and customize the access for every one of them.

[Screenshot: OneDrive folders on the web]

After you have created your folders and customized them to your liking, you will need to link your computer to your online ID so it can access them without asking for credentials every time.

Click on the Start Menu button and select Control Panel.

[Screenshot: Control Panel]

Select User Accounts and Family Safety.

[Screenshot: User Accounts and Family Safety]

Select User Accounts

[Screenshot: User Accounts]

Select Link Online IDs, on the left side of the window.

[Screenshot: Link Online IDs]

Click on Link Online ID.

[Screenshot: Link Online ID]

If you haven't installed the Windows Live ID provider, you will be taken to a website to download it. If you aren't, click the "Add an online ID provider" link in the above photo and it will take you there.

[Screenshot: Online ID provider download page]

Now you will be taken back to the Online ID providers; click on Link Online ID to sign in.

[Screenshot: signing in with the Windows Live ID provider]

Now, to get the address to map your OneDrive's folders, you can open Excel, Word, PowerPoint, or OneNote, click on File, and then on Save & Send. Then click "Save to Web" and it will populate the OneDrive folders from the Online ID you just linked; select the folder you want and click "Save As".

[Screenshot: Save to Web in Office]

Double-click on the folder you want to map and copy the folder's address.

[Screenshot: the OneDrive folder's address]

Now that you have that link, go back to “Computer” and click “Map Network Drive”.

[Screenshot: the Computer window with Map Network Drive]

[Screenshot: Map Network Drive wizard]

Choose a drive letter, and paste in the URL that was copied a few steps back.
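
Equivalently, from a command prompt (the address below is a placeholder; use the one you copied):

    net use Z: "https://d.docs.live.net/YOUR-CID/Documents" /persistent:yes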

[Screenshot: drive letter and URL entered]

There ya go! You’ve now got your OneDrive linked locally!

[Screenshot: OneDrive mapped as a network drive]


EDIT: If you liked this post, I’ve updated my process a little bit and written a script to automate a good chunk of this! Go check out Part 2 of this blog! http://thetechl33t.com/2014/10/03/how-to-mount-your-onedrive-as-a-local-mapped-drive-part-2/