Azure Data Box Disk – Order, Usage, and Performance
Data Box Disk Overview
I have written in the past on the considerations of using Data Box for offline data transfers into Azure or using online methods, which was primarily focused on Data Box Heavy. Here I am going to walk through the process of obtaining a Data Box, specifically a Data Box Disk (see the Data Box Family of offerings here). The ordering process for all Data Box devices is largely the same, and this can be used as a reference for any of them. However, the primary focus of this post will be on the setup and usage of Data Box Disk.
If you've read my previous post, in which I questioned the merit of offline transfer methods in many cases, you may find it odd that I am now promoting the Data Box Disk, which is only suitable for transferring a few TB of data. I maintain my position that in most cases online transfer is optimal, especially for the type of data that would be in-scope for a Data Box Disk. However, as I have noted, there are some cases where offline data transfer is needed.
Order and Setup
Ordering a Data Box is straightforward through the Azure Portal.
After you’ve selected the initial configuration items, you will choose the device type.
You will name the order and select the destination storage in Azure.
After confirming whether you're using a Microsoft-Managed Key or a Customer-Managed Key (in this case I'm using a Microsoft-Managed Key), you will enter shipping information and submit the order. At each step of the process, you will receive an email with the status. For example, here is the notification that my order was created, and then again when it was delivered.
When you create the job in Azure, it creates a Data Box resource, which has all of the information about the device and order including a timeline showing where the device is in the process.
The Disk arrived with the SATA to USB cable, and I hooked it up to my Intel NUC (excuse the dust!).
Copying Data
Note in the image above that both the USB adapter and the ports on my device are denoted with "SS", meaning they're USB 3.0. This is important because the Data Box Disk is an SSD and is very performant. You will also note in the email stating the device was delivered that I have a certain period of time to get it shipped back before I start incurring additional cost.
Most enterprise servers only have USB ports to support peripherals, and thus do not invest in USB 3.0 or 3.1, leaving you with the 2.0 standard. The maximum theoretical throughput of USB 2.0 is 480 Mbps, or 60 MBps. The maximum theoretical throughput of USB 3.0, however, is 5 Gbps, or 625 MBps. This is important because in some cases it may actually be faster to attach the disk to a laptop that has Gigabit network connectivity to wherever the source data is held, rather than to a server that only has USB 2.0 ports.
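To put those numbers in perspective, here's a rough back-of-the-envelope sketch of how long it would take to move the 8 TB capacity of a Data Box Disk over each of those links, using theoretical maximums and ignoring protocol overhead and disk limits:

```powershell
# Rough fill-time estimate for an 8 TB Data Box Disk over different links.
# These are theoretical maximums - real-world numbers will be lower.
$dataTB = 8
$dataMB = $dataTB * 1000 * 1000   # 8 TB expressed in MB (decimal)

$links = [ordered]@{
    'USB 2.0 (60 MBps)'           = 60
    'Gigabit Ethernet (125 MBps)' = 125
    'USB 3.0 (625 MBps)'          = 625
}

foreach ($link in $links.GetEnumerator()) {
    $hours = [math]::Round($dataMB / $link.Value / 3600, 1)
    Write-Output ("{0}: ~{1} hours" -f $link.Key, $hours)
}
```

Roughly 37 hours over USB 2.0 versus under 4 hours over USB 3.0, which is why those "SS" ports matter.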
*Note:* I am doing this in Windows, but you can do all of the following in Linux as well.
If I look in Windows Explorer when I attach the drive I can see a volume, but it is encrypted and locked. That is intentional and a part of the security process with Azure Data Box.
The process for allowing access to each device in the Data Box family is different, but with Data Box Disk there is a utility to unlock the device, which in combination with the passkey available under the Data Box resource in Azure, will unlock the device.
At the root of the filesystem you will see a folder for each of the storage types (Table, Queue, File, Blob, and Managed Disk); what you copy into each folder will be copied to the corresponding storage type at the destination.
Performance
If you have a lot of small files, one thing to note is the impact of antivirus, especially if you're pulling TBs worth of small files across the network to a laptop where the drive is attached. Since it's writing those files locally, your antivirus will likely do in-line scanning. Depending on the data, and if your policies allow it, adding an exception on your antivirus for the folder where you're copying the data (e.g., "F:\BlockBlob") may speed up your copy performance.
To test performance, I devised two tests, one with large files and one with small files. For the large files, I copied a bit over 50 GB of .iso files of various Linux distributions. The copy below is simply a Ctrl+C, Ctrl+V of that folder from my machine's SSD to the Data Box Disk using Windows Explorer. In addition to the copy operation, I took a screenshot of the disk throughput and activity in Task Manager (the disk queuing metrics give a rough view of how much of the available performance is being utilized).
You can see with a single copy job I'm getting over 300 MBps for those large files. I then also wanted to try small files, which is a much more likely use case for Data Box Disk. For this I used a PowerShell script (part of another project I'm working on, which will be posted soon on my GitHub) to create 10,000 x 1 MB files. Again, I first copied them using Windows Explorer.
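That script isn't published yet, but if you want to build a similar small-file test dataset yourself, a minimal sketch along these lines will do the job (the output path, file count, and file size are parameters you'd adjust for your environment):

```powershell
# Generate a set of fixed-size test files filled with random data.
# Sketch only - the project referenced above does more than this.
param(
    [string]$OutputPath = 'C:\temp\testfiles',   # hypothetical output folder
    [int]$FileCount     = 10000,
    [int]$FileSizeMB    = 1
)

New-Item -ItemType Directory -Path $OutputPath -Force | Out-Null

$buffer = New-Object byte[] ($FileSizeMB * 1MB)
$rng    = [System.Random]::new()

for ($i = 1; $i -le $FileCount; $i++) {
    $rng.NextBytes($buffer)   # random content so the data isn't trivially compressible
    [System.IO.File]::WriteAllBytes((Join-Path $OutputPath ("file_{0:d5}.dat" -f $i)), $buffer)
}
```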
I was able to get just over 50MBps in write speeds, which is good considering the file sizes, but given there were no constraints on my source disk, destination disk, or CPU, this led me to believe that the bottleneck was with the copy operation itself. Next, I wanted to run a test with a multi-threaded copy operation, so I first set a baseline with a single-threaded robocopy job.
You can see this took about 3 and a half minutes and copied at roughly the same speed as Windows Explorer. Now that I have my baseline, here’s the real performance test using the multi-threading flag on robocopy.
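The two runs looked something like the following; treat the paths and thread count as placeholders rather than my exact commands (the /MT flag accepts 1-128 threads and defaults to 8 if you don't give it a value):

```powershell
# Baseline: single-threaded robocopy of the small-file dataset to the Data Box Disk
robocopy "C:\temp\testfiles" "F:\BlockBlob\testfiles" /E /MT:1 /R:0 /W:0 /NP /LOG:C:\temp\baseline.log

# Multi-threaded run: 32 copy threads
robocopy "C:\temp\testfiles" "F:\BlockBlob\testfiles" /E /MT:32 /R:0 /W:0 /NP /LOG:C:\temp\multithread.log
```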
With that flag I was able to more than triple the throughput, increasing from ~50 MBps to ~190 MBps and reducing the copy time from 3 minutes and 33 seconds to just 58 seconds, which fully utilized my hardware.
I also went back and tried the same multi-threaded copy operation with my large files and was able to increase the throughput from 334 MBps to 522 MBps, which fully utilized my hardware as well.
Wrap-up
I finished loading my data onto the disk and used the data validation utility (which comes in the same download as the tool that unlocks and decrypts the drive) to generate checksums of the data on the device, which I can use later to validate data integrity once it is copied into the Storage Account. After that I unmounted the device, packaged it back up, and dropped it off at my local UPS store – the box already had a return label on it.
Similar to when the device was being shipped to me, I got email notifications for each step of the way including when the data copy started, and when it finished. The process is then marked as complete and all of the details are available in the portal.
You can see the data is now loaded into the Storage Account, and you will see a "databoxcopylog" folder as well, which you can use to validate the copy jobs, including the final checksums of the files.
Lastly, you will see a one-time charge for the device on your invoice; here you can see the $90 fee for the Data Box Disk in Azure Cost Management.
*Note*: You will still be charged for any transactions that take place when loading the data into your storage account.
The data is now all loaded, and I get a confirmation via email (which is also shown in the portal screenshot above) that the device has been erased in accordance with NIST 800-88r1 standards. As I noted above, the process for ordering the device is largely similar for the Data Box or the Data Box Heavy.
If you have any questions, comments, or suggestions for future blog posts please feel free to comment below, or reach out on LinkedIn or Twitter. I hope I’ve made your day a little bit easier!
Migrating Data into Azure – Online vs Offline
I frequently work with organizations who are migrating data from an on-premises datacenter into Azure. Undoubtedly the question will come up: "should we use an Azure Data Box to ship that much data?", and most of the time the room echoes with a resounding "Yes!".
I've been working in Azure for many years and have seen a lot of data migrations. While Data Box is a wonderful service, and is yet another way that Microsoft enables and empowers customers to do what's best for them, it's just that: an option that might be the best fit. I write the same thing almost every month to various people, and figured it was time to post it to use as a reference.
Note: My thoughts here are in no way intended to conflict with the official product documentation; they are a more experience-based thought experiment to accelerate time-to-value in regard to data migration. At the bottom of this post there are links to two great official pieces of documentation that are more technically focused, please give those a read as well.
Most of the time when we think about uploading or downloading data to and from the internet, we think in terms of gigabytes – typically single-digit gigabytes at that. Even the services home ISPs like to call bandwidth-heavy, such as movie streaming, typically use less than 7 GB/hr. With that in mind, when we think of the amount of data that is used in an enterprise, we're typically talking terabytes, or for very large organizations, petabytes. When we talk about migrating that amount of data to a different physical location (for example, Azure), it seems outlandish to think about moving it online – or is it?
Azure Data Box
If you haven’t taken a look at the Azure Data Box Family of offerings, I highly suggest it. There are 4 different offerings of Data Box:
- Data Box Disk: 8 TB SSD for offline transfer
- Data Box: 100 TB appliance for offline transfer
- Data Box Heavy: 1 PB appliance for offline transfer
- Data Box Gateway: A VM appliance storage gateway used for managed online data transfer.
These devices ship to your location for a nominal fee, you load up the data and ship it back, and Microsoft loads the data into the destination you choose. The idea is that an up-to-40 Gbps connection on your local network is going to be much faster than sending this data over an Internet, VPN, or ExpressRoute connection, which makes it a great option.
Offline Transfer Considerations
I challenge everyone to think through this process though when considering an offline migration. Specifically, we need to think about how long it will take to get the process approved (among other factors) to move your company’s data using a shipping carrier. I’ve worked with organizations where the policy for this type of process requires a private courier, active GPS, and someone following the truck along the entire route (I’ve even seen requirements for armed guards or police escort), among many other requirements from various departments within the organization.
Let’s look at the most common components of this process that might influence the timeline of your data migration.
- Privacy & Legal Team Approvals: Depending on the data, privacy and legal may need to be involved to inspect the process for data device handling, determine who has visibility into the data, how it is destroyed upon completion of the ingestion, and potentially even determine insurance implications.
- Security Approval: From a technical controls perspective they will want to make sure proper encryption is used at the data level and hardware level, determine who controls the keys for encryption, ensure device attestation, and even certify these devices to be plugged into the datacenter based on the controls in place for certain hardware vendors.
- Ordering & Shipping: The process of receiving your Data Box takes up to 10 business days, depending on availability and other factors.
- Loading the Data: There are two points that are important here. The first is how fast the data can be retrieved (e.g., is the data passing through a source that only has a 1 Gb link, are there disk throughput limitations, do you need to limit the transfer rate to not impact other workloads, etc.). The second is write throughput on the Data Box itself; while there is ample network connectivity on each device, the larger devices are designed for capacity rather than performance. There is good throughput, but they are not designed for high I/O, which matters for datasets with smaller file sizes.
- Shipping to Microsoft: Standard shipping time applies to shipping the device back to Microsoft, typically a few days.
- Microsoft transferring the data: After the device is received it is inspected for damage, then set up to copy the data to the destination you selected when you requested the Data Box – this could be a few hours to a few days depending on availability, data size, I/O size, and both the type of Data Box itself and the target storage location.
(Time to Legal Approval) + (Time to Privacy Approval) + (Time to Security Approval) + (Ordering & Shipping Time) + (Time to Load the Data) + (Shipping Time) + (Time to Unload the Data)
When thinking about these lead times it's important to be honest with yourself. How long after you send the email, or meeting invite, will it take to get full approval from Legal, Security, and Privacy? In most cases this is a few weeks, depending on organizational processes and the sensitivity of the data, and sometimes it can be a few months.
For example, let’s say it takes 1 month for full approval to ship the data, which is certainly a reasonable timeframe. Let’s also assume it takes 2 days to get the Data Box hooked up in the datacenter, and that you’re copying 50 TB at 5 Gbps over the LAN. With a generalized timeline, this operation would roughly look like the following:
Example: 50 TB, 5 Gbps LAN Offline Transfer with Data Box
1 Month for approval + 8 Days for shipping + 2 Days for setup + 2 Days for data copy (~26 hr. for actual data movement) + 2 days to prep for shipping + 3 days for shipping + 1 day for receiving + 1 day for copying data (likely less)
30 days + 8 days + 2 days + 2 days + 2 days + 3 days + 1 day + 1 day = ~49 days
Now let’s assume that same data was copied “online” (Internet, ExpressRoute, VPN, etc.) at even just 100 Mbps averaged across the day. In most cases organizations would be able to leverage more bandwidth than this, but it makes for easy calculations. If you copied 50 TB online, at 100Mbps, it would take ~53.5 days. In this scenario the time to copy the data online vs offline is very close, and without any of the fuss of approvals and shipping. If you assume you can use 125 Mbps of bandwidth you’re looking at ~42.5 days which is even faster than the offline mode.
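If you want to sanity check numbers like these yourself, the math is easy to script. Here's a rough sketch of the calculation; the efficiency factor is an assumption of mine to approximate the protocol and scheduling overhead that file transfer calculators typically bake in, so the results land close to, but not exactly on, the figures above:

```powershell
# Estimate online transfer time for a dataset at a given sustained bandwidth.
function Get-TransferDays {
    param(
        [double]$DataTB,            # dataset size in TB (decimal)
        [double]$BandwidthMbps,     # sustained bandwidth in megabits per second
        [double]$Efficiency = 0.85  # assumed overhead factor - adjust to taste
    )
    $bits    = $DataTB * 1e12 * 8                       # total bits to move
    $seconds = $bits / ($BandwidthMbps * 1e6 * $Efficiency)
    [math]::Round($seconds / 86400, 1)                  # convert to days
}

Get-TransferDays -DataTB 50   -BandwidthMbps 100    # ~54.5 days
Get-TransferDays -DataTB 50   -BandwidthMbps 125    # ~43.6 days
Get-TransferDays -DataTB 2000 -BandwidthMbps 2000   # ~109 days
```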
At this point I'm sure there are a few people saying "yes, but what if I had a LOT of data, say 1 PB!". I've done many multi-PB data migrations to Azure and have seen them go both online and offline, so let's do the calculation and see how it looks. While it may not be the case for everyone, in my experience larger dataset sizes come with longer approval lead times, for various reasons. Additionally, these types of organizations typically also have more bandwidth capacity – again, these are generalized numbers, but in my personal experience they are realistic.
NOTE: Data Box Heavy requires a QSFP+ compatible cable, which I find is not as common in most datacenters, so make sure you have one on hand prior to receiving the device.
For this calculation let's assume 2 PB of data that can be copied on the LAN at 10 Gbps. Keep in mind that if there were actually 2 PB of data you'd need 3 Data Box Heavy devices, because you get 770 TB of usable space after overhead per device. Note, though, that I'm not taking the multiple devices into account in the calculation, which would realistically extend the timeline.
Example: 2 PB, 10 Gbps LAN Offline Transfer with Data Box Heavy
2.5 Months for approval + 8 Days for shipping + 2 Days for setup + 22 Days for data copy + 2 days to prep for shipping + 3 days for shipping + 1 day for receiving + 4 days for copying data
75 days + 8 days + 2 days + 22 days + 2 days + 3 days + 1 day + 4 days = ~117 days (~3.9 months)
Like I said earlier, typically if an organization has this much data they have much more bandwidth – 2 Gbps for this operation would not be unreasonable to assume as a generalization. Given 2 Gbps bandwidth, it would take ~107 days to copy this data online compared to ~117 days copying it offline.
However, I will say that I've been in situations where an organization had other limitations, such as the total available capacity on a firewall or edge router, and they would have had to upgrade at significant expense to handle an extra 2 Gbps, so they could only do something like 250 Mbps. At that speed it would take ~874 days to copy, and with that much data it certainly does not make sense to move it online; using a Data Box to copy the data offline would be much more efficient.
NOTE: Data Box will not ship across international borders (except countries within the European Union), please see the FAQ reference link if that is a requirement for your data transfer.
Online Transfer
If you are going to copy the data online, there are various ways to accomplish this task. In general, I see AzCopy, Azure Data Factory, Azure Data Box Gateway, or depending on the target storage location any number of other tools used for online data movement.
There are some considerations when choosing your tooling, such as cost (of the tool only, ingress bandwidth to Azure is free), performance, manageability, and whether there is data churn that needs to be continuously uploaded after the initial import. Keep in mind that you can also control your bandwidth with online copies, for example using less bandwidth during business hours and more at night, and some of these tools will help facilitate that for you.
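As an example of that kind of throttling, AzCopy v10 exposes a --cap-mbps flag, so you could schedule an aggressive overnight window and a polite daytime one. A quick sketch, with the storage account, container, and SAS token as placeholders:

```powershell
# Copy a local folder to Blob storage, capped at 500 Mbps so the daytime
# run doesn't saturate the link.
azcopy copy "D:\data" "https://<storageaccount>.blob.core.windows.net/<container>?<SAS-token>" `
    --recursive `
    --cap-mbps 500
```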
I won’t go into depth on this decision process but let me know if I should write another blog on that topic.
Additional references:
The two reference links below have wonderful information about choosing a data transfer solution, and as noted earlier I HIGHLY suggest reviewing them as well. The purpose of this blog was to talk about some of the processes and procedures that are typically not addressed when looking purely at the technology.
- Choose an Azure solution for data transfer
- Data transfer for large datasets with moderate to high network bandwidth
Conclusion
I hope going through these scenarios was helpful when considering methods for data transfer into Azure. My goal here was not to go in depth on anything in particular, but rather to think through the process. As takeaways, here are a few points to keep in mind about transferring large amounts of data into Azure.
- Be honest with yourself about approval timelines for shipping your company’s (and/or customer’s) data.
- Use a file transfer calculator to see how long it would actually take to transfer X data at Y speeds – it’s probably not as long as you think.
- For good reason, there will likely be a lot of meetings, documentation, email threads, and other time-consuming activities for shipping data physically – and that should also count for something in terms of cost.
- There will likely also be some of the aforementioned procedural work for online data migration, but in most cases not nearly as much.
- Online is not always going to work out, sometimes Data Box is going to be the best fit.
If you have any questions, comments, or suggestions for future blog posts please feel free to comment below or reach out on LinkedIn or Twitter. I hope I’ve made your day a little bit easier!
Azure Mask Browser Extension
I often find myself having to do a lot of editing of both video and screenshots when using the Azure Portal, and I wanted to write a short blog post about this very handy extension I've been using recently. The Azure Mask extension is an open-source tool written by Brian Clark; it works by masking sensitive data as you navigate the Azure Portal, which is helpful for presentations, screen sharing, and content creation. Take a look below at how to install and use it.
First, browse to the extension settings in your Chromium-based browser. Due to naming issues, the extension had to be renamed and has been "pending review" for over a year now (as of 3/16/21), as noted in the GitHub repo. In light of this, the extension can't currently be installed from the store and must be manually loaded. Once you're on this screen, you'll need to toggle "Developer Mode". Then, after downloading the extension from the GitHub repo, use the "Load Unpacked" button to load the extension files.
Once the extension is installed, you will see it show up on the extensions settings screen. You will then click the Azure Mask extension button in the toolbar and move the slider to "Toggle All Masks".
Next, head over to the Azure Portal and check out the masking features of this extension!
Short and sweet blog post, but this extension has saved me a lot of time when sharing my screen, presenting at conferences, recording video, and taking screenshots for blogs. If you have any questions, comments, or suggestions for future blog posts please feel free to comment below, or reach out on LinkedIn or Twitter. I hope I've made your day a little bit easier!
Azure App Service Private Link Integration with Azure Front Door Premium
Last week, Azure Front Door Premium went into Public Preview. While this brought some other cool features and integrations, the one I'm most excited about today is the integration with Azure Private Link. This now allows Azure Front Door to make use of Private Link Services (not endpoints, which is what most people think about when they hear Private Link). Private Link Services allow for resource communication between two tenants; some of the most common use cases are software providers allowing private access to a solution running in their environment. Today I'm going to walk through how to connect Azure Front Door, through Private Link, to an App Service – without an ASE, and without the need to manage Private Link endpoints, DNS, or anything of the sort. I believe this will become the new standard for hosting App Services.
With that, let’s get started! First, we need to create an Azure App Services Web App.
*Note* At the time of writing this post (03/01/2021), Private Link Service integration requires the App Service to be on a Premium V2 (Pv2) plan.
Once the Web App is deployed, grab the URL of the website and test it in a web browser. In this instance I'm not hosting anything in particular, simply the sample page to show that it's working.
At this point the web app is created, and you would expect to have to create a Private Link Endpoint now, but since Azure Front Door Premium uses the Private Link Service functionality, we can let Front Door do the work for us. With that said, let's now go create the Azure Front Door Premium service.
We need to make sure that the Tier is selected properly as the “Premium” SKU. After that radio button is selected, a section will populate below with different configuration options compared to the Standard Tier. The one we need to make sure to check is “Enable private link service”. After that’s selected, you will select the web app with which you want to establish Private Link connectivity from Front Door. If you would like, here you can also add a custom message. This will be what is displayed as a connection request in the Private Link Center in the next step.
On the review page, we can see that the endpoint created is a URL for Azure Front Door and this will be the public endpoint. The “Origin” is the web app to which Front Door will be establishing private connectivity.
Once Azure Front Door is done deploying, you will need to open up the Private Link Center. From there you will navigate to "pending connections", which is where you will see the connection request from Azure Front Door with the message you may or may not have customized. Remember that Azure Front Door uses Azure Private Link Service to connect its own managed Private Link Service to your Web App. You will need to "Authorize" the connection request in order for the connection to be created and allow Front Door to privately communicate with your Web App.
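If you'd rather not click through the portal, my understanding is that the same approval can be done with the generic private-endpoint-connection commands in the Azure CLI. Treat the following as a sketch with placeholder IDs; I only validated the portal flow for this post:

```powershell
# List pending private endpoint connections on the Web App (IDs are placeholders)
az network private-endpoint-connection list `
    --id "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Web/sites/<webapp-name>"

# Approve the connection request that Front Door created
az network private-endpoint-connection approve `
    --id "<connection-id-from-the-list-output>" `
    --description "Approved for Azure Front Door Premium"
```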
After the connection is approved you will notice that the "pending connection" has been moved to "active connections". At this point, you will also notice that access to the Web App through a browser returns an error message, the same way it would if you had added firewall rules on the Web App. This is because it's being configured to only allow inbound connections from Azure Front Door.
If you want to modify any of the configuration settings, you will go to the “Endpoint Manager” section of Azure Front Door, where you get the familiar interface used by both Azure Front Door and App Gateway.
In my testing, the time between clicking "Approve" in Private Link Center and the Web App being available through the Azure Front Door endpoint is anywhere from 15 to 30 minutes. I'm not quite sure why this is the case, though it is likely due to the service only being in preview. If you get an error message in the web browser using the Front Door URL, just grab a cup of coffee and give it some time to do its thing.
Once it’s all done though, you can use the Front Door URL in the web browser and see that it routes you to the App Service!
There we go, all set! This is really a dream configuration, and something a lot of us have been looking forward to for some time. In the past we've done something similar with App Gateways and Private Link Endpoints. The beauty of the solution with Front Door Premium is that there is no messing around with DNS or infrastructure whatsoever – you can deploy this entire solution in PaaS while taking advantage of Azure Front Door's global presence!
Click here to get started with Azure Front Door Premium.
If you have any questions, comments, or suggestions for future blog posts please feel free to comment below, or reach out on LinkedIn or Twitter. I hope I've made your day a little bit easier!
Shared Storage Options in Azure: Part 5 – Conclusion
Part 5, the end of the series! This has been a fun series to write, and I hope it was helpful to some of you. The impetus for this whole thing was the number of times I've been asked how to set up shared storage between systems (primarily VMs) in Azure. As we've covered, there are a handful of different strategies with pros and cons to each. I'm going to close this series with a final pros and cons list and a few general design pattern directions.
- Part 1: Azure Shared Disks
- Part 2: IaaS Storage Server
- Part 3: Azure Storage Services
- Part 4: Azure NetApp Files
- Part 5: Conclusion
Pros and Cons:
Azure Shared Disks:
Shared Storage Options in Azure: Part 1 – Azure Shared Disks
Pros:
- Azure Shared Disks allows for the use of what is considered "legacy clustering technology" in Azure.
- Can be leveraged by familiar tools such as Windows Failover Cluster Manager, Scale-out File Server, and Linux Pacemaker/Corosync.
- Premium and Ultra Disks are supported so performance shouldn’t be an issue in most cases.
- Supports SCSI Persistent Reservations
- Fairly simple to set up
Cons:
- Does not scale well, similar to what would be expected with a SAN mapping.
- Only certain disk types are supported.
- Read-Only host caching is not available for Premium SSDs with maxShares >1.
- When using Availability Sets and Virtual Machine Scale Sets, storage fault domain alignment with the VMs is not enforced on the shared data disk.
- Azure Backup is not yet supported.
- Azure Site Recovery is not yet supported.
Azure IaaS Storage:
Shared Storage Options in Azure: Part 2 – IaaS Storage Server
Pros:
- More control, greater flexibility of protocols and configuration.
- Ability to migrate many workloads as-is and use existing storage configurations.
- Ability to use older, or more “traditional” protocols and configurations.
- Allows for the use of Shared Disks.
- Integration with Azure Backup.
- Incredible storage performance if the data can be cached/ephemeral (up to 3.8 million IOPS on L80s_v2).
Cons:
- Significantly more management overhead as compared to PaaS.
- More complex configurations, and cost calculations compared to PaaS.
- Higher potential for operational failure with a higher number of components.
- Broader attack surface, and more security responsibilities.
- Maximum of 80,000 uncached IOPS on any VM SKU.
Azure Storage Services (Blob and File):
Shared Storage Options in Azure: Part 3 – Azure Storage Services
Pros:
- Both are PaaS and fully managed which greatly reduces operational overhead.
- Significantly higher capacity limits as compared to IaaS.
- Ability to migrate some workloads as-is and use existing storage configurations when using SMB or BlobFuse compared to using native API connections.
- Ability to use Active Directory Authentication in Azure Files, and Azure AD Authentication in Blob and Files.
- Both integrate with Azure Backup.
- Much easier to geo-replicate compared to IaaS.
- Azure File Sync makes distributed File Share services and DFS a much better experience with Backup, Administration, Synchronization, and Disaster Recovery.
Cons:
- BlobFuse (by default) stores credentials in a text file.
- Does not support older access protocols like iSCSI.
- NFS is not yet Generally Available.
- Azure Files is limited to 100,000 IOPS (per share).
Azure NetApp Files:
Shared Storage Options in Azure: Part 4 – Azure NetApp Files
Pros:
- Incredibly high performance, depending on configuration (up to ~3.2 million IOPS/volume).
- SMB and NFS Shares both supported, with Kerberos and AD integration.
- More performance and capacity than is available on any single IaaS VM.
Cons:
- While it is deployed in most major regions, it may not yet be available where you need it (submit feedback if this is the case).
- Does not yet support Availability Zones, Cross-Region Replication is in Preview.
There we have it, my final list of Pros and Cons between Azure Shared Disks, DIY IaaS Storage, Azure Blob/Files, and Azure NetApp Files. Lastly, I want to end with some notes on general patterns when considering shared storage like the ones discussed in this series.
Patterns by Workload Type:
Quorum:
- If the reason you need shared storage is for a quorum vote, look into using a Cloud Witness for Failover Clusters (including SQL AlwaysOn).
- If a Cloud Witness isn't an option, Shared Disks are easy to set up, and that's where I would go second.
Block Storage:
- If you need shared block storage (iSCSI) for more than just quorum, chances are you need a lot of it, so I'd first recommend running IaaS storage. Start planning a migration away from this pattern though; block Blob storage on Azure is amazing, and if you can port your application to use it, I would highly recommend doing so.
General File Share:
- For most generic file shares, Azure Files is going to be your best bet – with a potential use of Azure File Sync.
- Azure NetApp Files is also a strong option here since the Standard Tier is cost effective enough for it to be feasible, though ANF requires a bit more configuration than Azure Files.
- Lastly, you could always run your File Share in custom IaaS storage, but I would first look to a PaaS solution.
High-Performance File Storage:
- If your application doesn't support the use of Blob storage, as is the case with most commercial products, Azure NetApp Files is likely going to be your best bet.
- Once NFS becomes generally available, NFS on Azure Files and Blob store are going to be strong competitors – especially on Blob and ADLS.
- Depending on what “high-performance” means, and whether or not you use a scale-out software configuration, storage on IaaS could potentially be an option. This is a much more feasible option when the bulk of the data can be cached or ephemeral.
We’ve come to the end! I hope that was a useful blog series. As technologies and features advance, I’ll go back and update these, but please feel free to comment if I miss something. Please reach out to me in the comments, on LinkedIn, or Twitter with any questions about this post, the series, or anything else!
Shared Storage Options in Azure: Part 4 – Azure NetApp Files
Welcome to Part 4 of this 5-part series on Shared Storage Options in Azure. In this post I'll be covering Azure NetApp Files. We have talked about other file-based shared storage in Azure already, with SMB and NFS on IaaS VMs in Part 2, and again with Azure Files in Part 3. Today, I want to cover the last technology in this series – let's get into it!
- Part 1: Azure Shared Disks
- Part 2: IaaS Storage Server
- Part 3: Azure Storage Services
- Part 4: Azure NetApp Files
- Part 5: Conclusion
Azure NetApp Files:
Azure NetApp Files (ANF) is an interesting Azure service, unlike many others. ANF is actually first-party NetApp hardware running in Azure. This allows customers to use the enterprise-class, high-performance capabilities of NetApp directly integrated with their Azure workloads. I will note that you can also use NetApp's Cloud Volumes ONTAP appliance, a Virtual Machine that sits in front of Blob storage which you can also use for shared storage, but I won't be covering that here as the ONTAP volumes aren't first-party Azure. There are, however, a number of great partner products along with ONTAP that run in Azure for these types of storage solutions. Check with your preferred storage vendor, they likely have an offering.
Before we jump into it, I'll note that there are different configurations and operations you can use to tune the performance of your ANF setup. I won't be going into those here, but will be writing another post at a later time on performance benchmarking and tuning on ANF.
Initial Configuration:
Azure NetApp Files is a bit different from what you would expect with Azure Files, so I'm going to walk through a basic setup here. First of all, ANF currently requires your subscription to be whitelisted for ANF use; to submit your subscription you'll need to use this form.
After you’ve been whitelisted, head into the portal and create an Azure NetApp Files Account.
After it’s created, the first thing you will need to do is create a capacity pool. This is the storage from which you will create volumes later in the configuration. Note: 4TB is the smallest capacity pool that can be configured.
I'm using an automatic QoS type for this capacity pool, but you can read more about how to set up manual QoS. What is important to choose here is your service level, as this cannot be changed after creating the capacity pool. I will talk more about the service levels later in this post.
Later on I'm going to be using both an NFS and an SMB share. To use an SMB share with Kerberos authentication, you will need a virtual network with which to integrate ANF and your source of authentication. I'm going to create a virtual network with two subnets, one for my compute and one for ANF. The ANF subnet needs to be delegated to the Azure NetApp Files service so it can leverage that connection, so I'll configure that here as well.
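If you prefer to script that piece, the delegation can be set when the subnet is created with the Azure CLI. A quick sketch, with the names and address prefix as placeholders for my lab values:

```powershell
# Create the ANF subnet with a delegation to the Azure NetApp Files service
# (resource group, VNet, subnet name, and address prefix are placeholders)
az network vnet subnet create `
    --resource-group "anf-lab-rg" `
    --vnet-name "anf-lab-vnet" `
    --name "anf-subnet" `
    --address-prefixes "10.0.1.0/24" `
    --delegations "Microsoft.NetApp/volumes"
```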
Now that I've set up the network, I'm going to create the compute resources to use in my testing environment. This will be comprised of the following:
- Domain Controller
- Windows Client
- Linux Client
- Azure Bastion (used for connecting to those VMs)
I’ll use the Windows client to test the SMB share, and will test the NFS share with the Linux Client.
Domain Controller
Azure Bastion
Windows Client
Linux Client
Now that those compute hosts are all being created, I'm going to go create my NFS volume. I initially created a 4TB capacity pool, so I'll assign 2TB to this NFS volume for now. I'm going to use NFS 4.1, but won't be using Kerberos in the lab; my export policy is also set to allow anything within the virtual network to access it – this can be modified at any time.
Alright, the NFS volume is all set up now and we'll come back to that later to test on the Linux Client. Now I want to set up an SMB share, which first requires that I create a connection to Active Directory. I built mine manually in my lab, but you can also use this quickstart template to auto-deploy an Active Directory Domain for you. It's also good to know that this source can either be traditional Active Directory Domain Services or Azure AD Domain Services.
You will want to follow the instructions in the ANF documentation to make sure you have things set up correctly. I have my domain controller set to use a static IP of 10.0.0.4, named the domain "anf.lcl", and set up a user named "anf". Now that this is complete, I can create the Active Directory Connection.
Great! Now that we have that configured, we can use the connection in setting up the SMB share. I’ll use the rest of the 4TB capacity pool here and use the Active Directory connection we just finished to create the SMB share.
After this completes, you can jump into Active Directory and see that it creates a computer account in AD. This will be the “host” of the SMB share, and ANF will use this to verify credentials attempting to connect to the share.
Fantastic, now we have ANF created, with a 4TB capacity pool, a 2TB NFS share, a connection to Active Directory, and a 2TB SMB share. On the Volumes tab we can now see both of those shares are ready to go.
Each of the shares has a tab called "Mounting instructions". I'm going to test the SMB share first, so I'll go grab this information. You can see the UNC path looks like an SMB share hosted by the computer "anf-bdd8.anf.local"; this is how other machines will reference the share to map it. Permissions on this share can be controlled similarly to how you would control them on any other Windows share, take a look at the docs to read more on how to do this.
With this information we can go use the Azure Bastion connection to jump into our Windows Client and map the network drive.
Voila! The Azure NetApp Files SMB share is mounted on our Windows Client. Now let’s go do the same thing with the NFS share: grab the mounting instructions, use the Azure Bastion Connection to connect to the Linux Client, and mount the NFS share.
Cost, Performance, Availability, and Limitations:
Performance:
As noted earlier, there are three service level tiers in Azure NetApp Files: Ultra, Premium, and Standard.
- Ultra provides up to 128 MiB/s of throughput per 1 TiB of provisioned storage
- Premium: 64 MiB/s per 1 TiB
- Standard: 16 MiB/s per 1 TiB
Remember that earlier I selected Standard (the lowest performance tier) for my capacity pool; this tier is designed more for capacity situations than performance and is much more cost effective. With that said though, let's do a quick performance test.
- 2TB SMB share on the “Standard” tier
- D2s_v3 Windows Client
- IOMeter tool running 4 worker nodes, with a 50% read 4Kb test
The performance capabilities of ANF are a combination of 3 main things:
- Performance Tier
- Volume Capacity
- Client Network Throughput
As I’ve mentioned in part 2 of this blog series, similar to managed disks, the performance of an ANF volume increases with its provisioned capacity. Also remember that Azure VM SKUs have an expected network throughput and this is important here because the storage in question is over the network. If the VM is only capable of 1,000 Mbps then depending on your I/O size, regardless of the ANF configuration, your tests will only ever perform at up to 1,000 Mbps.
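As a quick sanity check of what to expect from my setup, the math looks something like the sketch below. The 1,000 Mbps figure is my assumption for the expected network bandwidth of a D2s_v3 based on the published VM sizes; the service level numbers are from the tier list above:

```powershell
# Expected throughput of an ANF volume = service level (MiB/s per TiB) x volume quota (TiB),
# capped by what the client VM's NIC can actually push.
$standardMiBsPerTiB = 16     # Standard service level, per the tier list above
$vmBandwidthMbps    = 1000   # assumed expected bandwidth for a D2s_v3

# Convert the NIC limit from megabits/s to MiB/s (~119 MiB/s for 1,000 Mbps)
$vmLimitMiBs = [math]::Round(($vmBandwidthMbps * 1e6 / 8) / 1MB)

foreach ($volumeTiB in 2, 4) {
    $volumeMiBs = $standardMiBsPerTiB * $volumeTiB
    $effective  = [math]::Min($volumeMiBs, $vmLimitMiBs)
    Write-Output ("{0} TiB Standard volume: ~{1} MiB/s effective (NIC cap ~{2} MiB/s)" -f $volumeTiB, $effective, $vmLimitMiBs)
}
```

At the Standard tier the volume quota, not the NIC, is clearly the limiting factor, which is worth keeping in mind for the next test.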
Just to verify that the performance is tied to capacity, I’m going to increase the capacity pool and then double the size of the SMB volume from 2TB to 4TB and run the test again.
We can see that the performance roughly doubled, with no change inside the VM (since we’re not yet hitting the Network Bandwidth limitations of that VM SKU).
Now let’s run the same test using the FIO tool on our Linux Client against the NFS share.
Again we’ll go ahead and increase the capacity pool then double the size of the NFS share and run the test again.
Similar to the SMB testing, after doubling the size of the NFS share it also doubled its performance. Increases in capacity on the pool or volume can happen live, while the systems are running, with no impact.
As I mentioned earlier, I will be writing another blog post at a later time on performance benchmarking and tuning on ANF. In the meantime I recommend reading the ANF documentation on performance, for example this one on Linux Performance Benchmarking.
Availability:
Similar to what you would expect with a traditional NetApp appliance, ANF does support the use of snapshots. Keep in mind that your snapshots will consume additional storage on your ANF volume.
As noted earlier, Azure NetApp Files is a true NetApp appliance running in an Azure datacenter and is therefore subject to the same appliance-level availability. In addition, there is a 99.99% financially backed SLA on Azure NetApp Files.
Note: Cross-Region replication is currently in Public Preview so I won’t note it as an option yet, but will edit this post once it becomes generally available.
Cost:
Pricing for Azure NetApp Files is incredibly straightforward – you pay for the capacity you provision, per GB, per hour.
Currently Pricing ranges from $0.14746/GB to $0.39274/GB based on performance tier. Please see the pricing page for the most up-to-date information.
You can also see this documentation on Cost Modeling for Azure NetApp Files for a deeper dive into modeling costs on ANF.
Limitations:
- While ANF is rolling out to more and more regions, since it is discrete physical hardware it doesn’t exist everywhere (yet) and may impact your deployment considerations.
- ANF does not (yet) support availability zones.
- Additional resource limitations can be found here: Resource Limits for Azure NetApp Files.
Typical Use Cases:
The most common use case for Azure NetApp Files is simple: you need more than 80k IOPS. Now, keep in mind that IOPS isn't always straightforward. IOPS (Input/Output Operations Per Second) can vary greatly based on the workload – data size and access patterns. For example, a machine is likely to have significantly higher IOPS if the I/O size is 4 KB rather than 64 KB; if all else is constant, that's 16x more IOPS. Similarly, throughput (e.g., MBps/GBps) will be higher with larger I/O sizes. With that said, if a workload requires incredibly high performance with an application that isn't designed to run on cloud-native platforms (e.g., Blob Storage APIs) – ANF is likely the place it will land. Remember that (as of the time of writing this, January 2021) the most uncached IOPS a machine can have in Azure is 80,000 (see Part 2 of this blog series).
This comes into play often with very large database systems such as Oracle.
- https://docs.microsoft.com/en-us/azure/azure-netapp-files/solutions-benefits-azure-netapp-files-oracle-database
- https://docs.microsoft.com/en-us/azure/azure-netapp-files/performance-oracle-single-volumes
Another typical use case is SAP HANA workloads.
- https://blog.netapp.com/azure-netapp-files-sap-shared-files/
- https://docs.microsoft.com/en-us/azure/virtual-machines/workloads/sap/sap-hana-scale-out-standby-netapp-files-rhel
The third most common workload for Azure NetApp Files that I've found is large Windows Virtual Desktop deployments, using ANF for storing user profile data.
Pros and Cons:
Okay, here we go with the Pros and Cons of using Azure NetApp Files for your shared storage configuration on Azure.
Pros:
- Incredibly high performance.
- SMB and NFS Shares both supported, with Kerberos and AD integration.
- More performance and capacity than is available on any single IaaS VM.
- ANF is a PaaS solution with no appliance maintenance overhead.
Cons:
- While it is deployed in most major regions, it may not be available where you need it yet (submit feedback if this is the case).
- Does not yet support Availability Zones, Cross-Region Replication is in Preview.
Alright, that's it for Part 4 of this blog series – Azure NetApp Files. Please reach out to me in the comments, on LinkedIn, or Twitter with any questions about this post, the series, or anything else!
Shared Storage Options in Azure: Part 3 – Azure Storage Services
Welcome to Part 3 of this 5-part series on Shared Storage Options in Azure. In this post I'll be covering Azure Storage Services. You may be thinking to yourself: wait a minute, what have we been talking about this whole time then? Azure Storage Services is most easily thought of as the term for the services offered under an Azure Storage Account (Blob, File, Queue, Table). Given the context of this series, I'll be discussing Azure Blob Storage and Azure File storage in this post. Though, I do want to add a disclaimer that technically Queue and Table Storage can be "shared" also, since multiple apps can call the same Queue or Table using the APIs. Since the focus here is more on the system level, I'm not going to cover those two, but I'll add some links to documentation where you can read more.
- Part 1: Azure Shared Disks
- Part 2: IaaS Storage Server
- Part 3: Azure Storage Services
- Part 4: Azure NetApp Files
- Part 5: Conclusion
Azure Blob Storage:
In the majority of cases, when people discuss "cloud storage" they're talking about Blob – binary large object. What this service allows us to do is store massive amounts of unstructured "objects" in Azure. There are a couple of ways we can use Blob storage as shared storage at the system level.
Shared Blob Storage:
As I mentioned in the introduction, all Azure Storage Services can be accessed over HTTP/S via API or using any of the client libraries. This means that they can all technically be “shared” storage, but what about system-level access? While I find most applications and solutions can be adapted using a client library, there is a project called “Blobfuse” which can be used for more traditional applications.
Blobfuse is an open-source project on GitHub which uses the libfuse library to pair the Linux FUSE kernel module with the Azure Blob REST APIs to create a virtual filesystem. The result of this configuration is a mount point on a Linux machine directly to a Blob Storage Account. There can be certain challenges in using Blobfuse though; for example, the result is NOT a POSIX-compliant filesystem, and if you mount the same Blob Storage from multiple machines you should keep those limitations in mind.
The default configuration for Blobfuse is to have your Storage Account name and Access Key in a plain-text configuration file sitting on your server, which is not ideal from a security perspective and should be noted. However, it is possible to use a Managed Service Identity with Blobfuse, which significantly improves the security posture of the deployment (if you use a System Assigned Managed Identity) and is something I would recommend over the default configuration. Lastly, Blobfuse is not available on Windows – Linux only.
As of the time of writing this blog post (January 2021), NFS is not yet Generally Available (GA) on Blob Storage, but NFS 3.0 has been in preview since July 2020. Once this goes GA I will update this post with that information, but I won't quote it as an option until that point.
Lastly, from a backup and disaster recovery perspective, Azure Blob Storage supports snapshots as well as Point-in-Time restore for block blobs.
Typical Use Cases:
The majority of the use cases I've seen that use Blob as shared storage at the system level involve wanting consumption-based cloud storage without the overhead or limitations of a managed disk. Specifically, in applications that don't support SMB natively and require a local mount point, that's where Blobfuse comes into play. I have seen this with a lot of more legacy applications that are migrated into the cloud and want lower cost and higher capacity than is available from managed disks. I've also seen this configuration with many HPC applications, since NFS as an access protocol is not yet GA for Blob storage.
Cost, Performance, Availability and Limitations:
The cost of using Blob storage is always the same regardless of the access protocol, since as of now it all ends up going through the Azure Storage API anyway.
Blob storage is incredibly performant. There are two tiers of Blob storage, Standard and Premium. In most cases, Standard will be the appropriate tier. Premium is for storage that needs consistently low, single-digit millisecond transaction latency and is better suited for larger block sizes (256 KiB+). Though do keep in mind that, similar to my comparison of Managed Disk types and the cost calculation of capacity and transaction costs, in some scenarios Premium Block Blob Storage may be cheaper.
If you’re using a standard Blob storage account (not configured with a Hierarchical Namespace) which is most common, you’ll enjoy the following performance (as of January, 2021).
Image Reference: https://docs.microsoft.com/en-us/azure/azure-resource-manager/management/azure-subscription-service-limits#storage-limits
When configuring containers in your Blob Storage Account you'll notice an access tier setting with the options "Hot, Cool, Archive". I covered Archive Storage a few years back, but it's not really relevant to this topic. What is relevant though is the difference between Hot and Cool storage. There seems to still be a lot of confusion around the difference between the two, but at its core the main difference is transaction cost.
Similar to the difference between Premium and Standard SSD Managed Disks, Hot Blob storage has a higher capacity cost but a lower transaction cost, while Cool Blob Storage has a lower capacity cost and a higher transaction cost. If you're storing data that is infrequently accessed but still needs to be constantly available, Cool Blob Storage is the way to go. If you're storing data that has a lot of transactions, then Hot Blob storage is your best bet. Don't get caught up in the "per GB" sticker on each tier – it can be misleading as to the resulting cost, depending on your workload characteristics.
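To make that concrete, here's a toy comparison. The prices and workload numbers below are made-up placeholders purely to illustrate the trade-off, not real Azure pricing, so always pull current numbers from the pricing page:

```powershell
# Illustrative only: capacity cost vs transaction cost for Hot and Cool tiers.
# All prices are placeholder values, NOT real Azure pricing.
$tiers = @(
    @{ Name = 'Hot';  PerGB = 0.020; Per10kWrites = 0.055; Per10kReads = 0.0044 },
    @{ Name = 'Cool'; PerGB = 0.010; Per10kWrites = 0.100; Per10kReads = 0.0100 }
)

$capacityGB    = 1000        # 1 TB stored
$monthlyWrites = 20000000    # 20 million write operations
$monthlyReads  = 100000000   # 100 million read operations

foreach ($t in $tiers) {
    $cost = (($capacityGB * $t.PerGB) +
             ($monthlyWrites / 10000 * $t.Per10kWrites) +
             ($monthlyReads / 10000 * $t.Per10kReads))
    Write-Output ('{0}: ~${1:N2}/month' -f $t.Name, $cost)
}
```

With this transaction-heavy profile, Hot comes out cheaper despite the higher per-GB price; flip the profile toward large, rarely touched data and Cool wins.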
As far as durability and availability go, Storage Accounts have a few different options depending on the storage service being used: LRS, ZRS, GRS, GZRS, and RA-GRS. There is a lot of information on these different redundancy levels, so take a look at the durability and availability table below, and if you want to read more, click the link below.
Image Reference: https://docs.microsoft.com/en-us/azure/storage/common/storage-redundancy
Additional Reading on Understanding Azure Storage Redundancy Offerings: https://techcommunity.microsoft.com/t5/azure-storage/understanding-azure-storage-redundancy-offerings/ba-p/1431700
Lastly, I recently re-created an outdated infographic on capacity limits for Azure Storage Accounts, and I thought I would share that here.
Feel free to reference this image using the following: https://urls.hansencloud.com/azure-storage-limits
Azure Files:
Azure Files is another storage service under the Azure Storage Account and has similar shared features, but some very distinct ones of its own as well. The primary purpose of Azure Files is to provide file-level storage services like you would get from a Network Attached Storage appliance or a server providing those access protocols to a filesystem share. Azure Files provides SMB access (as well as the HTTP/S API) to provisioned shares.
Access Methods:
Like I mentioned, the primary access methods for Azure Files are through the API or by using SMB (NFS v4.1 is currently in preview, so I won't be considering it an option in this post as of right now, but will update it when it goes GA). Even though SMB is most typically used with Windows machines, shares on Azure Files can be used by Windows, Linux, or even macOS.
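For the SMB path on Windows, mapping a share is only a few lines of PowerShell. A minimal sketch, assuming outbound TCP 445 is open and using placeholder account, share, and key values (the portal's "Connect" blade will generate a fuller version of this for you):

```powershell
# Persist the storage account credentials, then map the Azure Files share as a drive.
# <storageaccount>, <sharename>, and <storage-account-key> are placeholders.
$account = "<storageaccount>"
$share   = "<sharename>"

# Check that outbound TCP 445 isn't blocked (a common gotcha with ISPs and corporate firewalls)
Test-NetConnection -ComputerName "$account.file.core.windows.net" -Port 445

# Store the credentials so the mapping survives reboots
cmd.exe /C "cmdkey /add:$account.file.core.windows.net /user:localhost\$account /pass:<storage-account-key>"

# Map the share to Z:
New-PSDrive -Name Z -PSProvider FileSystem -Root "\\$account.file.core.windows.net\$share" -Persist
```

On Linux the equivalent is a CIFS mount, and the portal will generate that script for you as well.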
Something really interesting about Azure Files is its Azure File Sync capability, which allows for a centralized file share in Azure Files, fronted by agents deployed on Windows Servers that act as a cache for the Azure Files data. This is particularly interesting because it allows the server itself to present whichever access method it would like to the client, while using the backing of a centralized Azure Files share.
The way Azure File Sync works at a high level is that a File Share is created, then linked to what is called a "sync group", which facilitates the registrations from any agents deployed on Windows Servers (in Azure or on-prem).
Azure Files also allows, in conjunction with typical access key authentication, Active Directory-based authentication options. The ability to use this type of AuthN directly on the Azure Files PaaS endpoint is really interesting and makes it a great choice for a solution where you want to leverage the identity systems you already have in place. It's also worth noting that if you're using Azure File Sync, the deployed agent is the only thing communicating with the File Share directly, and access to the data locally can be controlled through whichever method you prefer (SMB ACLs with ADDS, for example).
Lastly, from a backup and disaster recovery perspective, Azure Files supports snapshots in addition to native integration with Azure Backup.
Typical Use Cases:
I see a mix of uses with Azure Files, everything from a file-based backend for various applications and services to environments where the data is accessed directly by users. A scenario I've run into more frequently though is companies wanting to replace traditional on-prem File Servers and even things like DFS. Anywhere you want to leverage SMB in a fully managed PaaS way, Azure Files is for you.
Cost, Performance, Availability and Limitations:
Similar to Blob Storage, Azure Files has multiple tiers to help optimize for performance and cost.
Image Reference: https://azure.microsoft.com/en-us/pricing/details/storage/files/
Again, these tiers are priced based on capacity (provisioned or consumed) in combination with transactions and any snapshots or backups.
Performance for Azure Files is based on whether you use a standard storage account or the dedicated Azure Files storage account SKU, which enables "Premium" file shares. The performance specifications for a standard storage account (e.g., General Purpose v2) are the same as the limits posted for Blob storage earlier. If you're using Premium Files though, here are the performance targets.
Image Reference: https://docs.microsoft.com/en-us/azure/storage/files/storage-files-scale-targets
Keep in mind that the 100TB limit is per share, and you can create multiple (just like you would with traditional file shares) up to the limit of the Storage Account (5 PB by default, as stated earlier in this post).
Lastly, availability for Azure Files is no different than Blob since they’re both contained by a storage account and will be subject to the availability and durability of the storage account data redundancy setting.
Pros and Cons:
Okay, here we go with the Pros and Cons of using Azure Storage Services (Blob & File) for your shared storage configuration on Azure.
Pros:
- Both are PaaS and fully managed which greatly reduces operational overhead.
- Significantly higher capacity limits as compared to IaaS.
- Ability to migrate workloads as-is and use existing storage configurations when using SMB or Blobfuse.
- Ability to use Active Directory Authentication in Azure Files
- Both integrate with native backup solutions.
- Both integrate with Azure Defender for Storage.
Cons:
- Blobfuse stores connection information in plain-text, by default.
- Does not support older access protocols like iSCSI.
Alright, that’s it for Part 3 of this blog series – Shared Storage on Azure Storage Services. Please reach out to me in the comments, on LinkedIn, or Twitter with any questions or comments about this post, the series, or anything else!
Shared Storage Options in Azure: Part 2 – IaaS Storage Server
Recently, I posted "Shared Storage Options in Azure: Part 1 – Azure Shared Disks", the first post in this 5-part series. Today I'm posting Part 2 – IaaS Storage Server. While this post will be fairly rudimentary insofar as Azure technical complexity goes, this is most certainly an option when considering shared storage in Azure, and one that is still fairly common, with a number of configuration options. In this scenario, we will be looking at using a dedicated Virtual Machine to provide shared storage through various methods. As I write subsequent posts in this series, I will update this post with links to each of them.
- Part 1: Azure Shared Disks
- Part 2: IaaS Storage Server
- Part 3: Azure Storage Services
- Part 4: Azure NetApp Files
- Part 5: Conclusion
Virtual Machine Configuration Options:
Compute:
While it may not seem vitally important, the VM SKU you choose can impact your ability to provide storage capabilities in areas such as Disk Type, Capacity, IOPS, or Network Throughput. You can view the list of VM SKUs available on Azure at this link. As an example, I’ve clicked into the General Purpose, Dv3/Dvs3 series and you can see there are two tables that show upper limits of the SKUs in that family.
In the limits for each VM you can see there are differences between Max Cached and Temp Storage Throughput, Max Burst Cached and Temp Storage Throughput, Max uncached Disk Throughput, and Max Burst uncached Disk Throughput. All of these represent very different I/O patterns, so make sure to look carefully at the numbers.
Below are a few links to read more on disk caching and bursting:
- Disk Caching: https://docs.microsoft.com/en-us/azure/virtual-machines/premium-storage-performance#disk-caching
- Disk Bursting: Managed disk bursting – Azure Virtual Machines | Microsoft Docs
You’ll notice when you look at VM SKUs that there is an L-Series which is “storage optimized”. This may not always be the best fit for your workload, but it does have some amazing capabilities. The outstanding feature of the L-Series VMs is the locally mapped NVMe drives, which as of the time of writing this post can offer 19.2 TB of storage at 3.8 million IOPS / 20,000 MBps on the L80s_v2 SKU.
The benefits of these VMs are the extremely low latency and high-throughput local storage, but the caveat to that specific NVMe storage is that it is ephemeral. Data on those disks does not persist across a reboot. This means it’s incredibly good at serving a local cache, tempdb files, etc., though it’s not storage that you can use for things like a File Server backend (without some fancy start-up scripts, please don’t do this…). You will note that the maximum uncached throughput is 80,000 IOPS / 2,000 MBps for the VM, which is the same as all of the other high-spec VMs. As I am writing this, no Azure VM allows for more than that for uncached throughput – this includes Ultra Disks (more on that later).
For more information on the LSv2 series, you can read more here: Lsv2-series – Azure Virtual Machines | Microsoft Docs
Additional Links on Azure VM Storage Design:
- Azure Premium Storage: Design for high performance – Azure Virtual Machines | Microsoft Docs
- Virtual machine and disk performance – Linux – Azure Virtual Machines | Microsoft Docs
Networking:
Networking capabilities of the Virtual Machine are also important design decisions when considering shared storage, both in total throughput and latency. You’ll notice in the VM SKU charts I posted above when talking about performance there are two sections for networking, Max NICs and Expected network bandwidth Mbps. It’s important to know that these are VM SKU limitations, which may influence your design.
Expected network bandwidth is pretty straightforward, but I want to clarify that the number of network interfaces you attach to a VM does not change this number. For example, if your expected network bandwidth is 3,200 Mbps and you have an SMB share running on a single NIC, adding a second NIC and using SMB Multichannel WILL NOT increase the total bandwidth for the VM. In that case you could expect each NIC to potentially run at 1,600 Mbps.
The last networking feature to take into consideration is Accelerated Networking. This feature enables SR-IOV (Single Root I/O Virtualization), which lets network traffic bypass the host and its virtual switch and go directly to the network interface, dramatically increasing performance by reducing latency, jitter, and CPU utilization.
Image Reference: Create an Azure VM with Accelerated Networking using Azure CLI | Microsoft Docs
Accelerated Networking is not available on every VM though, which makes it an important design decision. It’s available on most General Purpose VMs now, but make sure to check the list of supported instance types. If you’re running a Linux VM, you’ll also need to make sure it’s a supported distribution for Accelerated Networking.
Storage:
In an obvious step, the next design decision is the storage that you attach to your VM. There are two major decisions when selecting disks for your VM – disk type and disk size.
Disk Types:
Image Reference: https://docs.microsoft.com/en-us/azure/virtual-machines/disks-types
As the table above shows, there are several types of Managed Disks (https://docs.microsoft.com/en-us/azure/virtual-machines/managed-disks-overview) in Azure. At the time of writing this, Premium SSD, Standard SSD, and Standard HDD all have a limit of 32 TB per disk. The performance characteristics are very different, but I also want to point out the difference in the pricing model, because I see folks make this mistake very often.
| Disk Type | Capacity Cost | Transaction Cost |
| --- | --- | --- |
| Standard HDD | Low | Low |
| Standard SSD | Medium | Medium |
| Premium SSD | High | None |
| Ultra SSD | Highest (Capacity/Throughput) | None |
Transaction costs can be important on a machine whose sole purpose is to function as a storage server. Make sure you look into this before a passing glance shows the price of a Standard SSD as lower than a Premium SSD. For example, here is the Azure Calculator output for a 1 TB disk across all four types, assuming an average of 10 IOPS: (10 × 60 × 60 × 24 × 30) / 10,000 = 2,592 transaction units per month.
Sample Standard Disk Pricing:
Sample Standard SSD Pricing:
Sample Premium SSD Pricing:
Sample Ultra Disk Pricing:
The above is just an example, but you get the idea. Pricing gets strange around Ultra Disk due to the ability to configure performance independently (more on that later), though there is a calculable break-even point for disks that have transaction costs versus those that have a higher provisioned cost.
For example, if you run an E30 (1024 GB) Standard SSD at full throttle (500 IOPS), the monthly cost will be ~$336, compared to ~$135 for a P30 (1024 GB) Premium SSD, which gives you 10x the performance. The second design decision is disk capacity. While this seems like a no-brainer (provision the capacity needed, right?), it’s important to remember that with Managed Disks in Azure, performance scales with, and is tied to, the capacity of the disk.
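To make that break-even arithmetic concrete, here is a minimal Python sketch of the E30 vs. P30 comparison above. The per-GB and per-transaction prices are illustrative list prices from around the time of the post and vary by region, so treat this as the shape of the math rather than a pricing tool.

```python
# Rough monthly cost comparison (sketch): Standard SSD E30 vs Premium SSD P30.
# Prices are illustrative and region-dependent.
SECONDS_PER_MONTH = 60 * 60 * 24 * 30  # 2,592,000

def standard_ssd_cost(capacity_price: float, avg_iops: int, price_per_10k_tx: float) -> float:
    """Capacity charge plus transaction charge for a disk type that bills per transaction."""
    transactions = avg_iops * SECONDS_PER_MONTH
    transaction_units = transactions / 10_000
    return capacity_price + transaction_units * price_per_10k_tx

# E30 (1024 GB Standard SSD): ~$76.80 capacity + ~$0.002 per 10,000 transactions (assumed rates)
e30_full_throttle = standard_ssd_cost(76.80, avg_iops=500, price_per_10k_tx=0.002)
p30_flat = 135.17  # P30 (1024 GB Premium SSD) has no transaction charge

print(f"E30 at 500 IOPS sustained: ~${e30_full_throttle:,.0f}/mo")  # ~$336
print(f"P30 flat rate:             ~${p30_flat:,.0f}/mo")
```

Running it reproduces the ~$336 vs. ~$135 figures, which is why the transaction column in the table above matters so much for a busy storage server.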
Image Reference: https://docs.microsoft.com/en-us/azure/virtual-machines/disks-types#disk-size-1
You’ll note in the above image the Disk Size scales proportionally with both the Provisioned IOPS and Provisioned Throughput. This is to say that if you need more performance out of your disk, you scale it up and add capacity.
The last note on capacity is this: if you need more than 32 TB of storage on a single VM, you simply add another disk and use your mechanism of choice for combining that storage (Storage Spaces, RAID, etc.). This same method can be used to further tune your total IOPS, but make sure you take into consideration the cost, capacity, and performance of each disk before doing this – most often it’s an insignificant cost to simply scale up to the next disk size. Last but not least, I want to briefly talk about Ultra Disks – these things are amazing!
Unlike the other disk types, this configuration allows you to select your disk size and performance (IOPS AND throughput) independently! I recently worked on a design where the customer needed 60,000 IOPS but only a few TB of capacity – the perfect scenario for Ultra Disks. They were actually able to get more performance for less cost compared to using Premium SSDs.
To conclude this section, I want to note two design constraints when selecting disks for your VM.
- The VM SKU is still limited to a certain number of IOPS, a certain throughput, and a certain disk count. The combined performance of your attached disks cannot exceed the maximum performance of the VM (see the sketch after this list). If the VM SKU supports 10,000 IOPS and you add 3x 60,000 IOPS Ultra Disks, you will be charged for all three of those Ultra Disks at their provisioned performance tiers but will only be able to get 10,000 IOPS out of the VM.
- All of the hardware performance may still be subject to the performance of the access protocol or configuration, more on this in the next section.
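Here is a tiny sketch of that first constraint; the numbers mirror the Ultra Disk example above and the function is purely illustrative.

```python
# Sketch: effective IOPS is capped by the VM SKU, regardless of provisioned disk performance.
def effective_iops(vm_iops_limit: int, disk_iops: list[int]) -> int:
    return min(vm_iops_limit, sum(disk_iops))

# Example from above: a 10,000 IOPS VM with three 60,000 IOPS Ultra Disks attached.
print(effective_iops(10_000, [60_000, 60_000, 60_000]))  # -> 10000, despite paying for 180,000
```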
Additional Reading on Storage:
- Disk Types: Select a disk type for Azure IaaS VMs – managed disks – Azure Virtual Machines | Microsoft Docs
Software Configuration and Access Protocols:
As we come to the last section of this post, we get to the area that aligns with the purpose of this blog series – shared storage. In this section I’m going to cover some of the most common configurations and access types for shared storage in IaaS. This is by no means an exhaustive list, rather what I find most common.
Scale-Out File Server (SoFS):
First up is Scale-Out File Server, a software configuration inside Windows Server that is typically used with SMB shares. SoFS was introduced in Windows Server 2012, uses Windows Failover Clustering, and is considered a “converged” storage deployment. It’s also worth noting that this can run on S2D (Storage Spaces Direct), which is the method I recommend using with modern Windows Server operating systems. Scale-Out File Server is designed to provide scale-out file shares that are continuously available for file-based server application storage. It provides the ability to share the same folder from multiple nodes of the same cluster. It can be deployed in two configuration options: for application data or for general use. See the additional reading below for the documentation on setup guidance.
Additional reading:
- Storage Spaces Direct: Storage Spaces Direct overview | Microsoft Docs
- Scale-Out File Server: Scale-Out File Server for application data overview | Microsoft Docs
- Setup guide for 2-node S2D for RDS UPD: Deploy a two-node Storage Spaces Direct SOFS for UPD storage in Azure | Microsoft Docs
SMB v3:
Now on to the access protocols – SMB has been the go-to file services protocol on Windows for quite some time. In modern operating systems, SMB v3.* is an absolutely phenomenal protocol. It allows for incredible performance using features like SMB Direct (RDMA), increased MTU, and SMB Multichannel, which can use multiple NICs simultaneously for the same file transfer to increase throughput. It also has a list of security mechanisms such as pre-auth integrity, AES encryption, request signing, etc. There is more information on the SMB v3 protocol below if you’re interested, or if you still think of SMB the way we did 20 years ago – check it out. The Microsoft SQL Server team even supports hosting SQL databases on remote SMB v3 shares.
Additional reading:
NFS:
NFS has been a similar staple as a file server protocol for a long while, and whether you’re running Windows or Linux, it can be used in your Azure IaaS VM for shared storage. For organizations that prefer an IaaS route over PaaS, I’ve seen many use this as a cornerstone configuration for their Azure deployments. Additionally, a number of HPC (High Performance Computing) workloads, such as Azure CycleCloud (HPC orchestration) or the popular genomics workflow management system Cromwell on Azure, prefer the use of NFS.
Additional Reading:
- Create NFS Ubuntu Linux Server volume – Azure Kubernetes Service | Microsoft Docs
- azure-quickstart-templates/nfs-ha-cluster-ubuntu at master · Azure/azure-quickstart-templates (github.com)
iSCSI:
While I would not recommend building custom block storage on top of a VM in Azure if you have a choice, some applications do still have this requirement, in which case iSCSI is also an option for shared storage in Azure.
Additional Reading:
That’s it! We’ve reached the end of Part 2. Okay, here we go with the Pros and Cons for using an IaaS Virtual Machine for your shared storage configuration on Azure.
Pros and Cons:
Pros:
- More control, greater flexibility of protocols and configuration.
- Depending on the use case, potentially greater performance at a lower cost (becoming more and more unlikely).
- Ability to migrate workloads as-is and use existing storage configurations.
- Ability to use older, or more “traditional” protocols and configurations.
- Allows for the use of Shared Disks.
Cons:
- Significantly more management overhead as compared to PaaS.
- More complex configurations, and cost calculations compared to PaaS.
- Higher potential for operational failure with the higher number of components.
- Broader attack surface, and more security responsibilities.
Alright, that’s it for Part 2 of this blog series – Shared Storage on IaaS Virtual Machines. Please reach out to me in the comments, on LinkedIn, or Twitter with any questions about this post, the series, or anything else!
- Part 1: Azure Shared Disks
- Part 2: IaaS Storage Server
- Part 3: Azure Storage Services
- Part 4: Azure NetApp Files
- Part 5: Conclusion
DNS Load Balancing in Azure
Reading Time: 3 minutesThis post won’t be too long, but I wanted to expand a bit on the recent repo that I published to Github for Azure Load Balanced DNS Servers. I’ve been working in Azure the better part of a decade, and the way we’ve typically approached DNS is one of two ways: either use (a pair of) IaaS Domain Controllers or use Azure-provided DNS resolution. In the last year or so there have been an increasing number of architectural patterns that require private DNS resolution where we may not necessarily care about the servers themselves.
This pattern has become especially popular with the requirements for Azure Private Link in hybrid scenarios where on-premises systems need to communicate with Azure PaaS services over private link.

The only thing the DNS forwarder is providing here is very basic DNS forwarding functionality. This is not to say that it can’t be further configured, but the same principles still apply. DNS isn’t something that needs any sort of complex failover during patch windows, but since it has to be referenced by IP, we have to be careful about taking DNS servers down if there aren’t alternates configured. With a web server we would just put it behind a load balancer, but there don’t seem to be similar configurations published for DNS servers (other than using a Network Virtual Appliance), since UDP isn’t a supported health probe protocol for Azure Load Balancers. How, then, do we configure a pair of “zero-touch” private DNS servers in Azure?
When asked “What port does DNS use?”, the overwhelming majority of IT professionals will say “UDP 53”. While that is correct, DNS also uses TCP 53. UDP packets can’t be larger than 512 bytes, and while this suffices for most DNS responses, there are scenarios where it does not. For example, DNS zone transfers (AXFR/IXFR), DNSSEC, and some EDNS responses exceed 512 bytes, which is why they use TCP. This is why the DNS service does (by default) listen on TCP 53, which is what we can use as the health probe in the Azure Load Balancer.
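To illustrate why TCP 53 works as a probe target, here is a minimal Python sketch that performs the same check the load balancer health probe does – a plain TCP connection attempt to port 53. The IP address is a hypothetical backend VM address, not something taken from the template.

```python
# Minimal sketch: mimic the Azure Load Balancer health probe by testing TCP 53 reachability.
import socket

def tcp53_probe(host: str, timeout: float = 2.0) -> bool:
    """Return True if the host accepts a TCP connection on port 53."""
    try:
        with socket.create_connection((host, 53), timeout=timeout):
            return True
    except OSError:
        return False

print(tcp53_probe("10.0.0.4"))  # hypothetical private IP of one DNS forwarder VM
```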
The solution that I’ve published on Github (https://github.com/matthansen0/azure-dnslb) contains the template to deploy this solution, which has the following configuration (a quick end-to-end resolution test is sketched after the list).
- Azure Virtual Network
- 2x Windows Core Servers:
- Availability Set
- PowerShell Script to Configure Servers with DNS
- Forwarder set to the Azure-provided DNS resolver
- Azure Load Balancer:
- TCP 53 Health Probe
- UDP/TCP 53 Listener
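Once deployed, you can verify resolution end-to-end through the load balancer frontend. Here is a small sketch using the third-party dnspython package (assumed installed via `pip install dnspython`); the frontend IP shown is a placeholder for whatever address the template assigns.

```python
# Sketch: query a record through the load-balanced DNS frontend (placeholder IP).
import dns.resolver  # third-party: dnspython

resolver = dns.resolver.Resolver(configure=False)
resolver.nameservers = ["10.0.0.100"]  # hypothetical LB frontend private IP
resolver.timeout = 2.0
resolver.lifetime = 4.0

answer = resolver.resolve("microsoft.com", "A")  # forwarded out through the backend VMs
for record in answer:
    print(record.address)
```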

This template does not include patch management, but I would highly recommend using Azure Update Management; that way you can set up auto-patching with alternating reboot schedules. With that enabled, this becomes a zero-touch, highly available, private DNS solution for ~$55/mo (assuming D1 v2 VMs, which can be lower if a cheaper SKU is chosen).
Since I’m talking about DNS here, the last recommendation I’ll make is to go take a look at Azure Defender for DNS, which monitors and can alert you to suspicious activity in your DNS queries.
Alright, that’s it! I hope this solution will be helpful, and if there are options or configurations you’d like to see available in the Github repository please feel free to submit an issue or a PR! If you want to deploy it right from here, click the button below!
If you have any questions, comments, or suggestions for future blog posts please feel free to comment below, or reach out on LinkedIn or Twitter. I hope I’ve made your day a little bit easier!
Azure Site-to-Site VPN with a Palo Alto Firewall
Reading Time: 9 minutesIn the past, I’ve written a few blog posts about setting up different types of VPNs with Azure.
- Azure Point-to-Site VPN with RADIUS Authentication « The Tech L33T
- Azure Web Apps with Cost Effective, Private and Hybrid Connectivity « The Tech L33T
- Azure Site-to-Site VPN with PFSense « The Tech L33T
Since the market is now full of customers who are running Palo Alto firewalls, today I want to blog about how to set up a Site-to-Site (S2S) IPSec VPN to Azure from an on-premises Palo Alto firewall. For the content in this post I’m running PAN-OS 10.0.0.1 on a VM-50 in Hyper-V, but the tunnel configuration will be more or less the same across deployment types (though if it changes in a newer version of PAN-OS, let me know in the comments and I’ll update the post).
Alright, let’s jump into it! The first thing we need to do is setup the Azure side of things, which means starting with a virtual network (vnet). A virtual network is a regional networking concept in Azure, which means it cannot span multiple regions. I’m going to use “East US” below, but you can use whichever region makes the most sense to your business since the core networking capabilities shown below are available in all Azure regions.


With this configuration I’m going to use 10.0.0.0/16 as the overall address space for the virtual network, and I’m going to configure two subnets. The “hub” subnet is where I will host any resources; in my case, I’ll be hosting a server there to test connectivity across the tunnel. “GatewaySubnet” is a required name for the subnet that will later house our Virtual Network Gateway (the PaaS VPN appliance). This subnet could be created later in the portal interface for the virtual network (I used this method in my PFSense VPN blog post), but I’m creating it ahead of time. Note that the subnet name is case sensitive. The gateway subnet does not need a full /24 (see the requirements for the subnet here), but it will do for my quick demo environment.
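For reference, here is a quick Python sketch of how the two subnets carve out of the stated 10.0.0.0/16. The specific prefixes are illustrative (the post only states the /16 and that GatewaySubnet doesn’t need a full /24); the hub prefix is inferred from the 10.0.1.x test VM used later.

```python
# Sketch: carving the two subnets out of the 10.0.0.0/16 address space (prefixes illustrative).
import ipaddress

vnet = ipaddress.ip_network("10.0.0.0/16")
hub_subnet = ipaddress.ip_network("10.0.1.0/24")      # "hub" subnet for test resources
gateway_subnet = ipaddress.ip_network("10.0.0.0/27")  # "GatewaySubnet" for the VNG

for subnet in (hub_subnet, gateway_subnet):
    assert subnet.subnet_of(vnet), f"{subnet} is not inside {vnet}"
    print(subnet, "->", subnet.num_addresses, "addresses")
```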

Now that we have the virtual network deployed, we need to create the Virtual Network Gateway. You’ll notice that once we choose to deploy it into the “vpn-vnet” network we created, it automatically recognizes the “GatewaySubnet” and will deploy into that subnet. Here we choose a VPN gateway type, and since I’ll be using a route-based VPN, select that configuration option. I won’t be using BGP or an active-active configuration in this environment, so I’ll leave those disabled. Validate and create the VPN Gateway, which will serve as the VPN appliance in Azure. This deployment typically takes 20-30 minutes, so go grab a cup of coffee and check those dreaded emails.


Alright, now that the Virtual Network Gateway is created, we want to create a “connection” to configure the settings needed on the Azure side for the site-to-site VPN.

Here we’ll name the connection, set the connection type to “Site-to-Site (IPSec)”, set a PSK (please don’t use “SuperSecretPassword123”…), and set the IKE protocol to IKEv2. You’ll notice that you need to set a Local Network Gateway; we’ll do that next.

Let’s go configure a new Local Network Gateway. The LNG is a resource object that represents the on-premises side of the tunnel. You’ll need the public IP of the Palo Alto firewall (or whatever NAT device sits in front of it), as well as the local network that you want to advertise across the tunnel to Azure.

Once that’s complete, we can finish creating the connection and see that it now shows up as a site-to-site connection on the Virtual Network Gateway; since the other side isn’t set up yet, the status is unknown. If you go to the “Overview” tab, you’ll notice it has the IP of the LNG you created as well as the public IP of the Virtual Network Gateway – copy this down, as you’ll need it when you set up the IPSec tunnel on the Palo Alto.


Alright, things are just about done on the Azure side. The last thing I want to do is kick off the deployment of a VM in the “hub” subnet that we can use to test the functionality of the tunnel. I’m going to deploy a cheap B1s Ubuntu VM. It doesn’t need a public IP, and a basic Network Security Group (NSG) will do since there is a default rule that allows all traffic from inside the virtual network (traffic sourced from the Virtual Network Gateway included).


Now that the test VM is deploying, let’s go set up the Palo Alto side of the tunnel. The first thing you’ll need to do is create a tunnel interface (Network –> Interfaces –> Tunnel –> New). In accordance with best practices, I created a new security zone specifically for Azure and assigned the tunnel interface to it. You’ll note that it deploys a subinterface that we’ll reference later. I’m just using the default virtual router for this lab, but you should use whatever makes sense in your environment.


Next we need to create an IKE Gateway. Since we set the Azure VNG to use IKEv2, we can use that setting here as well. You want to attach the IKE Gateway to the interface that is publicly facing; in my case it is ethernet1/2, but your configuration may vary. Typically you’ll have the IP address of the interface as an object and can select that in the box below, but in my case my WAN interface is using DHCP from my ISP, so I leave it as “none”.
It is important to point out though, that if your Palo Alto doesn’t have a public IP and is behind some other sort of device providing NAT, you’ll want to use the uplink interface and select the “local IP address” private IP object of that interface. I suspect this is an unlikely scenario, but I’ll call it out just in case.
The peer address is the public IP address of the Virtual Network Gateway we took note of a few steps prior, and the PSK is whatever we set on the connection in Azure. Lastly, make sure the Liveness Check is enabled on the Advanced Options screen.


Next we need an IPSec Crypto Profile. AES-256-CBC is a supported algorithm for Azure Virtual Network Gateways, so we’ll use that along with SHA1 authentication, and set the lifetime to 8,400 seconds, which is longer than the lifetime of the Azure VNG, so it will be the one renewing the keys.

Now we put it all together, create a new IPSec Tunnel and use the tunnel interface we created, along with the IKE Gateway and IPSec Crypto Profile.

Now that the tunnel is created, we need to make the appropriate configuration to allow routing across the tunnel. Since I’m not using dynamic routing in this environment, I’ll add a static route to the virtual router I’m using, sending the address space we created in Azure out the tunnel interface.

Great! Now at this point I went ahead and grabbed the IP of the Ubuntu VM I created earlier (which was 10.0.1.4) and did a ping test. Unfortunately, the pings all failed – what’s missing?


Yes, yes, I did commit the changes (which always seems to get me), but after looking at the traffic logs I can see the deny action taking place on the default interzone security policy. I could have left this part out, but hey – if it doesn’t work perfectly the first time for you, you can be assured you’re in good company.

Alright, if you recall, we created the tunnel interface in its own security zone, so I’ll need to create a security policy from my internal zone to the Azure zone. You can use whatever profiles you need here; I’m just going to completely open interzone communication between the two for my lab environment. If you want machines in Azure to be able to initiate connections as well, remember you’ll need to modify the rule to allow traffic in that direction too.



Here we go, now I should have everything in order. Let’s kick off another ping test and check a few things to make sure that the tunnel came up and shows connected on both sides. It looks like the new Allow Azure security policy is working, and I see my ping application traffic passing!

Before I go pull up the Windows Terminal screen I want to quickly check the tunnel status on both sides.



Success!!! Before I call it, I want to try a couple more things, so I’ll SSH into the Ubuntu VM, install Apache, edit the default web page, and open it in a local browser.
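As a final sanity check from the on-premises side, something like the following can confirm HTTP reachability across the tunnel once Apache is installed. The IP is the test VM’s private address from earlier; this is just a quick sketch, not part of the walkthrough above.

```python
# Sketch: verify HTTP reachability to the Azure test VM across the S2S tunnel.
from urllib.request import urlopen

with urlopen("http://10.0.1.4/", timeout=5) as response:  # private IP of the Ubuntu test VM
    body = response.read().decode("utf-8", errors="replace")
    print(response.status, len(body), "bytes")  # expect 200 and the default Apache page
```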


At this point I do want to call out the troubleshooting capabilities for Azure VPN Gateway. There is a “VPN Troubleshoot” capability, part of Azure Network Watcher, built into the view of the VPN Gateway. You can select the gateway on which you’d like to run diagnostics, select a storage account where it will store the sampled data, and let it run. If there are any issues with the connection, it will list them out for you. It will also list some specifics of the connection itself, so if you want to dig into those you can look at the files written to the blob storage account after the troubleshooting action completes to get information like packets, bytes, current bandwidth, peak bandwidth, last connected time, and CPU utilization of the gateway. For further troubleshooting tips, you can also visit the documentation on troubleshooting site-to-site VPNs with Azure VPN Gateways.


That’s it, all done! The site-to-site VPN is all set up. The VPN Gateway in Azure makes the process very easy, and the Palo Alto side isn’t too bad either once you know what’s needed for the configuration.
If you have any questions, comments, or suggestions for future blog posts please feel free to comment below, or reach out on LinkedIn or Twitter. I hope I’ve made your day a little bit easier!