Cloud

Azure Mask Browser Extension

Posted on

Reading Time: 2 minutes

I often find myself doing a lot of editing of both video and screenshots captured in the Azure Portal, so I wanted to write a short blog post about a very handy extension I’ve been using recently. The Azure Mask extension is an open-source tool written by Brian Clark. It works by masking sensitive data as you navigate the Azure Portal, which is helpful for presentations, screen sharing, and content creation. Take a look below at how to install and use it.

First, browse to the extension settings in your Chromium-based browser. Due to naming issues the extension had to be renamed, and it has been “pending review” for over a year now (3/16/21), as noted in the Github repo. Because of this, the extension can’t currently be installed from the store and must be loaded manually. Once you’re on the extensions screen, toggle “Developer Mode”, download the extension from the Github repo, and then use the “Load Unpacked” button to upload the extension files.

Browser Extension Settings

 

Once the extension is installed, it will show up on the extensions settings screen. You can then click the Azure Mask button in the browser toolbar and move the slider to “Toggle All Masks”.

Extension installed and toggled

 

Next, head over to the Azure Portal and check out the masking features of this extension!

Masked Azure Portal

 

Short and sweet blog post, but this extension has saved me a lot of time when sharing my screen, presenting at conferences, recording video, and taking screenshots for blogs. If you have any questions, comments, or suggestions for future blog posts, please feel free to comment below, or reach out on LinkedIn or Twitter. I hope I’ve made your day a little bit easier!

Azure App Service Private Link Integration with Azure Front Door Premium

Posted on Updated on

Reading Time: 5 minutes

Last week, Azure Front Door Premium went into Public Preview. While this brought some other cool features and integrations, the one I’m most excited about today is the integration with Azure Private Link. This now allows Azure Front Door to make use of Private Link Services (not endpoints, which is what most people think about when they hear Private Link). Private Link Services allow for resource communication between two tenants; some of the most common use cases are software providers allowing private access to a solution running in their environment. Today I’m going to walk through how to connect Azure Front Door, through Private Link, to an App Service – without an ASE, and without needing to work with Private Link endpoints, DNS, or anything of the sort. I believe this will become the new standard for hosting App Services.

With that, let’s get started! First, we need to create an Azure App Services Web App.

New Azure Web App

 

*Note* At the time of writing this post (03/01/2021), Private Link Service integration requires the App Service to run on a Premium v2 (Pv2) plan.
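If you prefer to script the Web App creation rather than click through the portal, a minimal Azure PowerShell sketch looks something like the following (resource names and location are hypothetical; this assumes the Az module is installed and you are signed in with Connect-AzAccount):

# Create a resource group, a Premium v2 App Service plan, and a Web App (names are examples only)
New-AzResourceGroup -Name "rg-afd-demo" -Location "eastus"

New-AzAppServicePlan -ResourceGroupName "rg-afd-demo" -Name "plan-afd-demo" `
  -Location "eastus" -Tier "PremiumV2" -WorkerSize "Small"

New-AzWebApp -ResourceGroupName "rg-afd-demo" -Name "webapp-afd-demo" `
  -Location "eastus" -AppServicePlan "plan-afd-demo"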

 

Once the Web App is deployed, grab the URL of the website and test it in a web browser. In this instance I’m not hosting anything in particular, just the sample page, to show that it’s working.

Default App Service Page

 

At this point the web app is created, and you might expect to have to create a Private Link endpoint next, but since Azure Front Door Premium uses the Private Link Service functionality, we can let Front Door do the work for us. With that said, let’s now go create the Azure Front Door Premium service.

Azure Front Door Premium Creation

 

We need to make sure the Tier is set to the “Premium” SKU. After that radio button is selected, a section will populate below with different configuration options compared to the Standard tier. The one we need to make sure to check is “Enable private link service”. After that’s selected, choose the web app with which you want to establish Private Link connectivity from Front Door. If you would like, you can also add a custom message here; this is what will be displayed with the connection request in the Private Link Center in the next step.

Azure Front Door Creation

Azure Front Door Premium Creation

 

On the review page, we can see that the endpoint created is a URL for Azure Front Door and this will be the public endpoint. The “Origin” is the web app to which Front Door will be establishing private connectivity.

Final Configuration for Azure Front Door

 

Once Azure Front Door is done deploying, you will need to open up the Private Link Center. From there, navigate to “pending connections”, which is where you will see the connection request from Azure Front Door with the message you may or may not have customized. Remember that Azure Front Door uses Azure Private Link to connect its own managed Private Link Service to your Web App. You will need to “Authorize” the connection request in order for the connection to be created and to allow Front Door to privately communicate with your Web App.
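If you would rather approve the pending connection with Azure PowerShell instead of the Private Link Center, a rough sketch (reusing the hypothetical resource names from earlier) would be:

# List pending private endpoint connections on the Web App, then approve one by its resource ID
$webApp = Get-AzWebApp -ResourceGroupName "rg-afd-demo" -Name "webapp-afd-demo"
Get-AzPrivateEndpointConnection -PrivateLinkResourceId $webApp.Id

# Copy the Id of the pending connection from the output above
Approve-AzPrivateEndpointConnection -ResourceId "<pending-connection-resource-id>"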

Private Link Setup

Private Link Center

Private Link Setup

 

After the connection is approved, you will notice that the “pending connection” has moved to “active connections”. At this point, you will also notice that accessing the Web App directly through a browser returns an error message, the same way it would if you had added firewall rules on the Web App. This is because it is now configured to only allow inbound connections from Azure Front Door.

Private Link Setup

No Public Access

 

If you want to modify any of the configuration settings, you will go to the “Endpoint Manager” section of Azure Front Door, where you get the familiar interface used by both Azure Front Door and App Gateway.

Front Door Configuration

 

In my testing, the time between clicking “Approve” in the Private Link Center and the Web App being available through the Azure Front Door endpoint was anywhere from 15 to 30 minutes. I’m not quite sure why this is the case, though it is likely because the service is still in preview. If you get an error message in the web browser when using the Front Door URL, just grab a cup of coffee and give it some time to do its thing.

Once it’s all done though, you can use the Front Door URL in the web browser and see that it routes you to the App Service!

Azure Front Door Endpoint

 

There we go, all set! This is really a dream configuration, and something a lot of us have been looking forward to for some time. In the past we’ve done something similar with Application Gateways and Private Link endpoints. The beauty of the solution with Front Door Premium is that there is no messing around with DNS or infrastructure whatsoever – you can deploy this entire solution as PaaS while taking advantage of Azure Front Door’s global presence!

Click here to get started with Azure Front Door Premium.

If you have any questions, comments, or suggestions for future blog posts, please feel free to comment below, or reach out on LinkedIn or Twitter. I hope I’ve made your day a little bit easier!

Shared Storage Options in Azure: Part 5 – Conclusion

Posted on Updated on

Reading Time: 4 minutes

Part 5, the end of the series! This has been a fun series to write, and I hope it was helpful to some of you. The impetus for this whole thing was the number of times I’ve been asked how to set up shared storage between systems (primarily VMs) in Azure. As we’ve covered, there are a handful of different strategies, with pros and cons to each. I’m going to close this series with a final pros and cons list and a few general design pattern directions.

 

Pros and Cons:

 

Azure Shared Disks:

Shared Storage Options in Azure: Part 1 – Azure Shared Disks « The Tech L33T

Pros:

  • Azure Shared Disks allow for the use of what is considered “legacy clustering technology” in Azure.
  • Can be leveraged by familiar tools such as Windows Failover Cluster Manager, Scale-out File Server, and Linux Pacemaker/Corosync.
  • Premium and Ultra Disks are supported so performance shouldn’t be an issue in most cases.
  • Supports SCSI Persistent Reservations
  • Fairly simple to set up

Cons:

  • Does not scale well, similar to what would be expected with a SAN mapping.
  • Only certain disk types are supported.
  • Read-Only host caching is not available for Premium SSDs with maxShares >1.
  • When using Availability Sets and Virtual Machine Scale Sets, storage fault domain alignment with the VMs is not enforced on the shared data disk.
  • Azure Backup is not yet supported.
  • Azure Site Recovery is not yet supported.

 

Azure IaaS Storage:

Shared Storage Options in Azure: Part 2 – IaaS Storage Server « The Tech L33T

Pros:

  • More control, greater flexibility of protocols and configuration.
  • Ability to migrate many workloads as-is and use existing storage configurations.
  • Ability to use older, or more “traditional” protocols and configurations.
  • Allows for the use of Shared Disks.
  • Integration with Azure Backup.
  • Incredible storage performance if the data can be cached/ephemeral (up to 3.8 million IOPS on L80s_v2).

Cons:

  • Significantly more management overhead as compared to PaaS.
  • More complex configurations, and cost calculations compared to PaaS.
  • Higher potential for operational failure with a higher number of components.
  • Broader attack surface, and more security responsibilities.
  • Maximum of 80,000 uncached IOPS on any VM SKU.

 

Azure Storage Services (Blob and File):

Shared Storage Options in Azure: Part 3 – Azure Storage Services « The Tech L33T

Pros:

  • Both are PaaS and fully managed which greatly reduces operational overhead.
  • Significantly higher capacity limits as compared to IaaS.
  • Ability to migrate some workloads as-is and keep existing storage configurations when using SMB or BlobFuse, as opposed to rewriting for native API connections.
  • Ability to use Active Directory Authentication in Azure Files, and Azure AD Authentication in Blob and Files.
  • Both integrate with Azure Backup.
  • Much easier to geo-replicate compared to IaaS.
  • Azure File Sync makes distributed File Share services and DFS a much better experience with Backup, Administration, Synchronization, and Disaster Recovery.

Cons:

  • BlobFuse (by default) stores credentials in a text file.
  • Does not support older access protocols like iSCSI.
  • NFS is not yet Generally Available.
  • Azure Files is limited to 100,000 IOPS (per share).

 

Azure NetApp Files:

Shared Storage Options in Azure: Part 4 – Azure NetApp Files « The Tech L33T

Pros:

  • Incredibly high performance, depending on configuration (up to ~3.2 million IOPS/volume).
  • SMB and NFS Shares both supported, with Kerberos and AD integration.
  • More performance and capacity than is available on any single IaaS VM.

Cons:

  • While it is deployed in most major regions, it may not yet be available where you need it (submit feedback if this is the case).
  • Does not yet support Availability Zones; Cross-Region Replication is in Preview.

 

There we have it, my final list of Pros and Cons between Azure Shared Disks, DIY IaaS Storage, Azure Blob/Files, and Azure NetApp Files. Lastly, I want to end with some notes on general patterns when considering shared storage like the ones discussed in this series.

 

Patterns by Workload Type:

 

Quorum:

  • If the reason you need shared storage is just for a quorum vote, look into using a Cloud Witness for Failover Clusters (including SQL AlwaysOn) – see the sketch after this list.
  • If a cloud witness isn’t an option, Shared Disks are an easy second choice.
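For reference, configuring a Cloud Witness on an existing failover cluster is essentially a one-liner; a minimal sketch, assuming you already have a general-purpose storage account (the account name and key below are placeholders):

# Point the cluster quorum at an Azure storage account acting as the Cloud Witness
Set-ClusterQuorum -CloudWitness -AccountName "stwitnessdemo" -AccessKey "<storage-account-key>"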

 

Block Storage:

  • If you need shared block storage (iSCSI) for more than just quorum, chances are you need a lot of it, so I’d first recommend running IaaS storage. Start planning a migration away from this pattern, though: Blob block storage on Azure is amazing, and if you can port your application to use it, I would highly recommend doing so.

 

General File Share:

  • For most generic file shares, Azure Files is going to be your best bet – with a potential use of Azure File Sync.
  • Azure NetApp Files is also a strong option here since the Standard Tier is cost effective enough for it to be feasible, though ANF requires a bit more configuration than Azure Files.
  • Lastly, you could always run your File Share in custom IaaS storage, but I would first look to a PaaS solution.

 

High-Performance File Storage:

  • If your application doesn’t support the use of Blob storage, as is the case with most commercial products, Azure NetApp Files is likely going to be your best bet.
  • Once NFS becomes generally available, NFS on Azure Files and Blob store are going to be strong competitors – especially on Blob and ADLS.
  • Depending on what “high-performance” means, and whether or not you use a scale-out software configuration, storage on IaaS could potentially be an option. This is a much more feasible option when the bulk of the data can be cached or ephemeral.

 

We’ve come to the end! I hope that was a useful blog series. As technologies and features advance, I’ll go back and update these, but please feel free to comment if I miss something. Please reach out to me in the comments, on LinkedIn, or Twitter with any questions about this post, the series, or anything else!

 

Shared Storage Options in Azure: Part 2 – IaaS Storage Server

Posted on Updated on

Reading Time: 9 minutes

Recently, I posted “Shared Storage Options in Azure: Part 1 – Azure Shared Disks”, the first post in this 5-part series. Today I’m posting Part 2 – IaaS Storage Server. While this post is fairly rudimentary in terms of Azure technical complexity, a dedicated storage server is most certainly an option to consider for shared storage in Azure, and one that is still fairly common, with a number of configuration options. In this scenario, we will look at using a dedicated Virtual Machine to provide shared storage through various methods. As I write subsequent posts in this series, I will update this post with links to each of them.

 

Virtual Machine Configuration Options:

Compute:

While it may not seem vitally important, the VM SKU you choose can impact your ability to provide storage capabilities in areas such as Disk Type, Capacity, IOPS, or Network Throughput. You can view the list of VM SKUs available on Azure at this link. As an example, I’ve clicked into the General Purpose, Dv3/Dvs3 series and you can see there are two tables that show upper limits of the SKUs in that family.

VM SKU Sizes in Azure

VM Performance Metrics in Azure

In the limits for each VM you can see there are differences between Max Cached and Temp Storage Throughput, Max Burst Cached and Temp Storage Throughput, Max uncached Disk Throughput, and Max Burst uncached Disk Throughput. All of these represent very different I/O patterns, so make sure to look carefully at the numbers.

Below are a few links to read more on disk caching and bursting:

 

You’ll notice when you look at VM SKUs that there is an L-Series which is “storage optimized”. This may not always be the best fit for your workload, but it does have some amazing capabilities. The outstanding feature of the L-Series VMs is the locally mapped NVMe storage which, as of the time of writing this post, can offer 19.2 TB of storage at 3.8 million IOPS / 20,000 MBps on the L80s_v2 SKU.

Azure LS Series VM Specs

The benefit of these VMs is extremely low-latency, high-throughput local storage, but the caveat to that NVMe storage is that it is ephemeral: data on those disks does not persist across a reboot. This makes it incredibly good for serving a local cache, tempdb files, etc., though it’s not storage you can use for something like a File Server backend (without some fancy start-up scripts – please don’t do this…). You will note that the maximum uncached throughput is 80,000 IOPS / 2,000 MBps for the VM, which is the same as all of the other high-spec VMs. As I am writing this, no Azure VM allows for more than that for uncached throughput – this includes Ultra Disks (more on that later).

For more information on the LSv2 series, you can read more here: Lsv2-series – Azure Virtual Machines | Microsoft Docs

Additional Links on Azure VM Storage Design:

Networking:

Networking capabilities of the Virtual Machine are also important design decisions when considering shared storage, both in total throughput and latency. You’ll notice in the VM SKU charts I posted above when talking about performance there are two sections for networking, Max NICs and Expected network bandwidth Mbps. It’s important to know that these are VM SKU limitations, which may influence your design.

Expected network bandwidth is pretty straightforward, but I want to clarify that the number of network interfaces you attach to a VM does not change this number. For example, if your expected network bandwidth is 3,200 Mbps and you have an SMB share running on a single NIC, adding a second NIC and using SMB Multichannel WILL NOT increase the total bandwidth for the VM. In that case you could expect each NIC to potentially run at 1,600 Mbps.

The last networking feature to take into consideration is Accelerated Networking. This feature enables SR-IOV (Single Root I/O Virtualization), which, by bypassing the host’s virtual switch and delivering network traffic directly to the VM’s network interface, can dramatically increase performance by reducing latency, jitter, and CPU utilization.

Accelerated Networking Comparison in Azure

Image Reference: Create an Azure VM with Accelerated Networking using Azure CLI | Microsoft Docs  

Accelerated Networking is not available on every VM though, which makes it an important design decision. It’s available on most General Purpose VMs now, but make sure to check the list of supported instance types. If you’re running a Linux VM, you’ll also need to make sure it’s a supported distribution for Accelerated Networking.  

Storage:

In an obvious next step, the following design decision is the storage that you attach to your VM. There are two major decisions when selecting disks for your VM: disk type and disk size.

Disk Types:

Azure VM Disk Types

Image Reference: https://docs.microsoft.com/en-us/azure/virtual-machines/disks-types  

As the table above shows, there are several types of Managed Disks (https://docs.microsoft.com/en-us/azure/virtual-machines/managed-disks-overview) in Azure. At the time of writing this, Premium SSD, Standard SSD, and Standard HDD all have a limit of 32 TB per disk. The performance characteristics are very different, but I also want to point out the difference in the pricing model, because I see folks make this mistake very often.

Disk Type       Capacity Cost                    Transaction Cost
Standard HDD    Low                              Low
Standard SSD    Medium                           Medium
Premium SSD     High                             None
Ultra SSD       Highest (capacity/throughput)    None

Transaction costs can be important on a machine whose sole purpose is to function as a storage server. Make sure you look into this before a passing glance shows the price of a Standard SSD lower than a Premium SSD. For example, here is the Azure Calculator output for a 1 TB disk of each of the four types, assuming the disk averages 10 IOPS: 10 × 60 × 60 × 24 × 30 = 25,920,000 transactions per month, which at 10,000 transactions per billing unit comes to 2,592 transaction units.

Sample Standard Disk Pricing:

Azure Calculator Disk Pricing  

 

Sample Standard SSD Pricing:

Azure Calculator Disk Pricing  

Sample Premium SSD Pricing:

Azure Calculator Disk Pricing

Sample Ultra Disk Pricing:

Azure Calculator Disk Pricing

 

The above is just an example, but you get the idea. Pricing gets strange around Ultra Disk due to the ability to configure performance independently (more on that later), but there is a calculable break-even point between disks that charge for transactions and those with a higher provisioned cost.

For example, if you run an E30 (1024 GB) Standard SSD at full throttle (500 IOPS), the monthly cost will be ~$336, compared to ~$135 for a P30 (1024 GB) Premium SSD, with which you get 10x the performance.

The second design decision is disk capacity. While this seems like a no-brainer (provision the capacity needed, right?), it’s important to remember that with Managed Disks in Azure, performance scales with, and is tied to, the capacity of the disk.

Image Reference: https://docs.microsoft.com/en-us/azure/virtual-machines/disks-types#disk-size-1

You’ll note in the above image the Disk Size scales proportionally with both the Provisioned IOPS and Provisioned Throughput. This is to say that if you need more performance out of your disk, you scale it up and add capacity.

The last note on capacity is this: if you need more than 32 TB of storage on a single VM, you simply add another disk and use your mechanism of choice for combining that storage (Storage Spaces, RAID, etc.), as sketched below. The same method can be used to further tweak your total IOPS, but make sure you take the cost, capacity, and performance of each disk into consideration before doing this – most often it’s an insignificant cost to simply scale up to the next disk size.
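As a rough illustration of that striping approach on Windows, here is a hedged Storage Spaces sketch that pools all poolable data disks into a single simple (striped) volume; the pool and disk names are made up:

# Pool every attached data disk that is eligible for pooling
$disks = Get-PhysicalDisk -CanPool $true
New-StoragePool -FriendlyName "DataPool" `
  -StorageSubSystemFriendlyName (Get-StorageSubSystem).FriendlyName `
  -PhysicalDisks $disks

# Create a simple (striped) virtual disk across the pool, then partition and format it
New-VirtualDisk -StoragePoolFriendlyName "DataPool" -FriendlyName "DataVDisk" `
  -ResiliencySettingName Simple -UseMaximumSize
Get-VirtualDisk -FriendlyName "DataVDisk" | Get-Disk |
  Initialize-Disk -PartitionStyle GPT -PassThru |
  New-Partition -AssignDriveLetter -UseMaximumSize |
  Format-Volume -FileSystem NTFS

Simple resiliency stripes with no redundancy, which is generally acceptable here because the managed disks underneath are already replicated by Azure.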

Ultra Disk Configuration in Azure

Last but not least, I want to briefly talk about Ultra Disks – these things are amazing! Unlike the other disk types, this configuration allows you to select your disk size and performance (IOPS AND throughput) independently. I recently worked on a design where the customer needed 60,000 IOPS but only a few TB of capacity – the perfect scenario for Ultra Disks. They were actually able to get more performance for less cost compared to using Premium SSDs.
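To illustrate, creating an Ultra Disk with independently chosen capacity and performance might look like this in Azure PowerShell (the values and names are illustrative; Ultra Disks also require an availability zone and a VM with Ultra Disk compatibility enabled):

# 2 TB Ultra Disk provisioned at 60,000 IOPS and 2,000 MBps, independent of its capacity
$diskConfig = New-AzDiskConfig -Location "eastus" -Zone 1 -CreateOption Empty `
  -SkuName "UltraSSD_LRS" -DiskSizeGB 2048 `
  -DiskIOPSReadWrite 60000 -DiskMBpsReadWrite 2000

New-AzDisk -ResourceGroupName "rg-storage-demo" -DiskName "ultra-data-01" -Disk $diskConfig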

To conclude this section, I want to note two design constraints when selecting disks for your VM.

  1. The VM SKU is still limited to a certain number of IOPS, throughput, and disk count. The combined performance of your disks cannot exceed the maximum performance of the VM. If the VM SKU supports 10,000 IOPS and you add 3x 60,000 IOPS Ultra Disks, you will be charged for all three of those Ultra Disks at their provisioned performance tiers but will only be able to get 10,000 IOPS out of the VM.
  2. All of the hardware performance may still be subject to the performance of the access protocol or configuration, more on this in the next section.

Additional Reading on Storage:

 

Software Configuration and Access Protocols:

As we come to the last section of this post, we get to the area that aligns with the purpose of this blog series – shared storage. In this section I’m going to cover some of the most common configurations and access types for shared storage in IaaS. This is by no means an exhaustive list, rather what I find most common.

Scale-Out File Server (SoFS):

First up is Scale-Out File Server. This is a software configuration inside Windows Server that is typically used with SMB shares. SoFS was introduced in Windows Server 2012, uses Windows Failover Clustering, and is considered a “converged” storage deployment. It’s also worth noting that this can run on S2D (Storage Spaces Direct), which is the method I recommend using with modern Windows Server operating systems. Scale-Out File Server is designed to provide scale-out file shares that are continuously available for file-based server application storage, and it provides the ability to share the same folder from multiple nodes of the same cluster. It can be deployed in two configuration options: for Application Data or General Purpose. See the additional reading below for the documentation on setup guidance.
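For orientation, the high-level PowerShell flow for standing up an SoFS role on an existing S2D-capable failover cluster looks roughly like this (the cluster role, share, path, and group names are hypothetical; the linked documentation is the authoritative guide):

# On an existing failover cluster: enable S2D, add the SoFS role, and publish a continuously available share
Enable-ClusterStorageSpacesDirect
Add-ClusterScaleOutFileServerRole -Name "SOFS01"
New-SmbShare -Name "AppData" -Path "C:\ClusterStorage\Volume1\AppData" `
  -ContinuouslyAvailable $true -FullAccess "CONTOSO\AppServers"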

Additional reading:

SMB v3:

Now on to the access protocols – SMB has been the go-to file services protocol on Windows for quite some time now. In modern operating systems, SMB v3.* is an absolutely phenomenal protocol. It allows for incredible performance using features like SMB Direct (RDMA), increased MTU sizes, and SMB Multichannel, which can use multiple NICs simultaneously for the same file transfer to increase throughput. It also has a list of security mechanisms such as pre-auth integrity, AES encryption, request signing, etc. There is more information on the SMB v3 protocol below – if you’re interested, or still think of SMB the way we did 20 years ago, check it out. The Microsoft SQL Server team even supports hosting SQL databases on remote SMB v3 shares.
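As a quick illustration of poking at these features on a Windows Server file server, a few of the relevant cmdlets are below (the share name is hypothetical, and the settings shown are examples rather than recommendations):

# Check whether multichannel and related SMB server features are enabled
Get-SmbServerConfiguration | Select-Object EnableMultiChannel, EncryptData, RejectUnencryptedAccess

# Require encryption on a specific share, and inspect active multichannel connections
Set-SmbShare -Name "AppData" -EncryptData $true -Force
Get-SmbMultichannelConnection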

Additional reading:

NFS:

NFS has been a similar staple file server protocol for a long while, and it can be used in your Azure IaaS VM for shared storage whether you’re running Windows or Linux. For organizations that prefer an IaaS route over PaaS, I’ve seen many use this as a cornerstone configuration for their Azure deployments. Additionally, a number of HPC (High Performance Computing) workloads, such as Azure CycleCloud (HPC orchestration) and the popular genomics workflow management system Cromwell on Azure, prefer the use of NFS.
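On the Windows side, exposing an NFS export from an IaaS VM only takes a couple of commands; a minimal sketch with made-up names (a Linux VM would do the equivalent with an /etc/exports entry and the NFS server package for its distribution):

# Install the NFS server role and publish a folder as an NFS export
Install-WindowsFeature -Name FS-NFS-Service -IncludeManagementTools
New-NfsShare -Name "data" -Path "F:\Exports\Data"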

Additional Reading:

iSCSI:

While I would not recommend running custom block storage on top of a VM in Azure if you have a choice, some applications do still have this requirement, in which case iSCSI is also an option for shared storage in Azure.
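If you do go this route on Windows Server, the iSCSI Target Server role provides it; a hedged sketch with placeholder target names, paths, and initiator IQNs:

# Install the iSCSI Target Server role, create a target, back it with a VHDX, and map the two together
Install-WindowsFeature -Name FS-iSCSITarget-Server -IncludeManagementTools
New-IscsiServerTarget -TargetName "SharedTarget" `
  -InitiatorIds "IQN:iqn.1991-05.com.microsoft:app01.contoso.com","IQN:iqn.1991-05.com.microsoft:app02.contoso.com"
New-IscsiVirtualDisk -Path "F:\iSCSIVirtualDisks\Shared01.vhdx" -SizeBytes 500GB
Add-IscsiVirtualDiskTargetMapping -TargetName "SharedTarget" -Path "F:\iSCSIVirtualDisks\Shared01.vhdx"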

Additional Reading:

That just about covers the configuration options. To close out Part 2, here are the Pros and Cons of using an IaaS Virtual Machine for your shared storage configuration on Azure.

Pros and Cons:

 

Pros:

  • More control, greater flexibility of protocols and configuration.
  • Depending on the use case, potentially greater performance at a lower cost (becoming more and more unlikely).
  • Ability to migrate workloads as-is and use existing storage configurations.
  • Ability to use older, or more “traditional” protocols and configurations.
  • Allows for the use of Shared Disks.

Cons:

  • Significantly more management overhead as compared to PaaS.
  • More complex configurations, and cost calculations compared to PaaS.
  • Higher potential for operational failure with the higher number of components.
  • Broader attack surface, and more security responsibilities.

Alright, that’s it for Part 2 of this blog series – Shared Storage on IaaS Virtual Machines. Please reach out to me in the comments, on LinkedIn, or Twitter with any questions about this post, the series, or anything else!

 

DNS Load Balancing in Azure

Posted on Updated on

Reading Time: 3 minutes

This post won’t be too long, but I wanted to expand a bit on the repo I recently published to Github for Azure Load Balanced DNS Servers. I’ve been working in Azure for the better part of a decade, and the way we’ve typically approached DNS is one of two ways: either use (a pair of) IaaS Domain Controllers, or use Azure-provided DNS resolution. In the last year or so there have been an increasing number of architectural patterns that require private DNS resolution where we may not necessarily care about the servers themselves.

This pattern has become especially popular with the requirements for Azure Private Link in hybrid scenarios where on-premises systems need to communicate with Azure PaaS services over private link.

Reference: https://docs.microsoft.com/en-us/azure/private-link/private-endpoint-dns#on-premises-workloads-using-a-dns-forwarder

The only thing the DNS forwarder is providing here is very basic DNS forwarding functionality. This is not to say that it can’t be further configured, but the same principles still apply. DNS isn’t something that needs any sort of complex failover during patch windows, but since it has to be referenced by IP, we have to be careful about taking DNS servers down if no alternates are configured. With a web server we would just put it behind a load balancer, but there don’t seem to be published configurations for a similar setup with DNS servers (other than using a Network Virtual Appliance), since UDP isn’t a supported health probe protocol for Azure Load Balancers. How, then, do we configure a pair of “zero-touch” private DNS servers in Azure?

When asked “What port does DNS use?”, the overwhelming majority of IT professionals will say “UDP 53”. While that is correct, DNS also uses TCP 53. Traditional DNS over UDP limits messages to 512 bytes, and while that suffices in most cases, there are certain scenarios where it does not. For example, DNS zone transfers (AXFR/IXFR), DNSSEC, and EDNS can all involve responses larger than 512 bytes, which is where TCP comes in. This is why the DNS service does, by default, listen on TCP 53 as well – and that is what we can use as the health probe in the Azure Load Balancer.
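In Azure PowerShell terms, the relevant pieces of the load balancer are the TCP 53 health probe and the port 53 load-balancing rule; a simplified sketch follows (the resource names and IP are placeholders, a matching TCP 53 rule can be added the same way, and the template in the repo is the authoritative version):

# Look up the subnet that will host the load balancer's frontend IP
$vnet = Get-AzVirtualNetwork -ResourceGroupName "rg-dns-demo" -Name "vnet-dns"
$subnet = Get-AzVirtualNetworkSubnetConfig -VirtualNetwork $vnet -Name "snet-dns"

$frontendIp = New-AzLoadBalancerFrontendIpConfig -Name "dns-frontend" `
  -PrivateIpAddress "10.0.1.10" -SubnetId $subnet.Id
$backendPool = New-AzLoadBalancerBackendAddressPoolConfig -Name "dns-backend"

# TCP 53 health probe - the DNS service answers on TCP as well as UDP
$probe = New-AzLoadBalancerProbeConfig -Name "dns-tcp-probe" -Protocol Tcp -Port 53 `
  -IntervalInSeconds 15 -ProbeCount 2

# Load-balance the actual DNS traffic on UDP 53
$udpRule = New-AzLoadBalancerRuleConfig -Name "dns-udp-53" -Protocol Udp `
  -FrontendPort 53 -BackendPort 53 `
  -FrontendIpConfiguration $frontendIp -BackendAddressPool $backendPool -Probe $probe

New-AzLoadBalancer -ResourceGroupName "rg-dns-demo" -Name "lb-dns" -Location "eastus" -Sku "Standard" `
  -FrontendIpConfiguration $frontendIp -BackendAddressPool $backendPool `
  -Probe $probe -LoadBalancingRule $udpRule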

The solution that I’ve published on Github (https://github.com/matthansen0/azure-dnslb), contains the template to deploy this solution which has the following configuration.

  • Azure Virtual Network
  • 2x Windows Core Servers:
    • Availability Set
    • PowerShell script to configure the servers with the DNS role (a minimal sketch of what it does follows below the diagram)
    • Forwarder set to the Azure-provided DNS resolver (168.63.129.16)
  • Azure Load Balancer:
    • TCP 53 Health Probe
    • UDP/TCP 53 Listener
Azure DNS Load Balanced Solution
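The per-server configuration script essentially amounts to installing the DNS Server role and pointing its forwarder at the Azure-provided resolver; a minimal sketch of what it does (the actual script lives in the repo):

# Install the DNS Server role on Windows Server Core and forward unresolved queries to Azure DNS
Install-WindowsFeature -Name DNS -IncludeManagementTools
Add-DnsServerForwarder -IPAddress 168.63.129.16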

This template does not include patch management, but I would highly recommend using Azure Update Management; that way you can set up auto-patching with alternating reboot schedules. With that enabled, this becomes a zero-touch, highly available, private DNS solution for ~$55/mo (assuming D1 v2 VMs; it can be lower if a cheaper SKU is chosen).

Since I’m talking about DNS here, the last recommendation that I’ll make is to go take a look at Azure Defender for DNS which monitors, and can alert you to suspicious activity in your DNS queries.

Alright, that’s it! I hope this solution will be helpful, and if there are options or configurations you’d like to see available in the Github repository please feel free to submit an issue or a PR! If you want to deploy it right from here, click the button below!

Deploy to Azure

 

If you have any questions, comments, or suggestions for future blog posts, please feel free to comment below, or reach out on LinkedIn or Twitter. I hope I’ve made your day a little bit easier!

Custom “Virtual Network Operator” Role in Azure

Posted on Updated on

Reading Time: 4 minutes

*Edit* 6/9/20: At the time of writing this blog custom roles were not available in the Azure Portal. Since that feature now exists, I’ve added how to create this role through the portal to the end of this blog.

We all know that cloud environments are different from on-premises ones. Development environments can be even more difficult: finding the right cross between giving teams the freedom to produce business value and keeping enough security and controls to maintain a good security posture. Recently I came across a scenario where the design was that the networking components (Virtual Network, S2S VPN, peerings, Service Endpoints, UDRs, subnets, etc.) were all under the control of a Networking team, so that the Azure environment would be an extension of their local network. From a permissions standpoint, though, this caused some problems. The goal was to empower developers to build whatever they need and just attach it to the vNet when connectivity was needed, with the caveat that they shouldn’t be able to modify any of the network settings or configurations in the shared vNet. This, however, turned out not to be possible with the default IAM roles.

In my testing, even though you’re not “modifying” anything in the vNet per se, attaching a network interface card to a subnet does require write access. Reader access will give the user this error.

After looking through the default IAM roles, there aren’t any that do what I need them to do. Alas, a custom role is needed. For reference, please checkout this docs page for custom roles – https://docs.microsoft.com/en-us/azure/role-based-access-control/custom-roles.

First, I need to see what actions are available to pack into the custom role, so I run the following command in Azure PowerShell and get these results.

Get-AzProviderOperation "Microsoft.Network/virtualNetworks/*" | FT OperationName, Operation, Description -AutoSize

Great, now I can see what is available. It looks like the actions I need are /subnets/join/action and /subnets/joinViaServiceEndpoint/action. Based on the descriptions, these two essentially provide an “operator” role for the vNet: the assignee will be able to use the subnet(s) but not modify them.

Next, you can use one of the template references in the Microsoft Docs link (https://docs.microsoft.com/en-us/azure/role-based-access-control/custom-roles-powershell#create-a-custom-role-with-json-template) and modify the actions.

{
  "Name": "Custom - Network Operator Role",
  "Id": null,
  "IsCustom": true,
  "Description": "Allows for read access to Azure network and join actions for service endpoints and subnets.",
  "Actions": [
    "Microsoft.Network/virtualNetworks/subnets/join/action",
    "Microsoft.Network/virtualNetworks/subnets/joinViaServiceEndpoint/action",
    "Microsoft.Network/virtualNetworks/read"
  ],
  "NotActions": [],
  "AssignableScopes": [
    "/subscriptions/00000000-0000-0000-0000-000000000000",
    "/subscriptions/11111111-1111-1111-1111-111111111111"
  ]
}

After modifying the assignable subscription IDs, save that as a .json file and use it in the following command to import the custom role.

New-AzRoleDefinition -InputFile "C:\FileFolderLocationPath\CustomNetworkOperatorRole.json"

After a portal refresh you get the custom role available as an IAM role assignment.

There you go – after assigning this role, the user was able to create VMs and attach them to the Virtual Network while still leaving control of the network configuration to the Networking Team.
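For completeness, assigning the new role to a user at the vNet scope can also be done with Azure PowerShell – a sketch with placeholder user, subscription, and resource names:

# Assign the custom role to a developer, scoped to the shared virtual network
New-AzRoleAssignment -SignInName "developer@contoso.com" `
  -RoleDefinitionName "Custom - Network Operator Role" `
  -Scope "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/rg-network/providers/Microsoft.Network/virtualNetworks/hub-vnet"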

————————————————————————-

————————————————————————-

EDIT 6/9/20: Now that custom roles are available in the portal I’ve added how to create this role that way below. 

Using the documentation on how to create custom IAM roles in the Azure portal, you can clone any existing role; I’m just going to clone the “Network Contributor” role.

I am going to just “start from scratch”, though you can easily use this page to modify the permission set of a role that already exists. Additionally, if you have the JSON (like we generated before with PowerShell) you can just upload that here.

Now go ahead and add new permissions by going to the “Microsoft Network” category and selecting the same permissions we wrote in the JSON and used with the PowerShell command above.

After adding those the next page will show you the permission that you’ve added, let’s double check those and make sure we have what we need.

Looks good, now we can set a scope where this role can be applied. I have this one scoped to the whole subscription, but you can use the interface to easily change that on this page.

Alright, that’s about it – let’s take a quick look at the summary pages. You can see that the portal actually generates the JSON, which is nice if you want to assign roles using Azure Policy or any other automation tooling.

Nice! Now we can see that users granted the four actions above will be able to interact with and use a shared hub-style virtual network without being given full contributor access.

I hope I’ve made your day at least a little bit easier!

If you have any questions or suggestions for future blog posts feel free to comment below, or reach out to me via email, twitter, or LinkedIn.

Thanks!

Changing Azure Recovery Services Vault to LRS Storage

Posted on

Reading Time: 2 minutes

Back in the classic portal with Backup services this was an easy fix: simply change the storage replication type in the settings. I’ve recently started moving my workloads to Recovery Services vaults in ARM and noticed something peculiar. By default, the storage replication type of the vault is GRS.

 

If your needs require geographically redundant storage, that’s perfectly fine. I, however, don’t have such needs, and I trust in Microsoft’s ability to keep data generally available in an LRS replication topology. It should still be an option, just like it was in classic, right? Strangely, the option to change the replication type in the vault’s storage configuration is grayed out.

 

 

Odd, right? I thought so, until I found this.

 

Okay, it’s not optimal, but it looks like I just need to remove the backup data from the vault to change the storage replication type, right? Well, I gave that a shot and no go – I had the same issue: the option was still grayed out.

I ultimately had to completely delete the vault and create a new Recovery Services vault. Once it’s initially created, you can change the replication type.
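If you would rather script this than use the portal, the storage redundancy of a freshly created, empty vault can be flipped with Azure PowerShell – a sketch with hypothetical resource names:

# Switch a new, empty Recovery Services vault from GRS to LRS
$vault = Get-AzRecoveryServicesVault -ResourceGroupName "rg-backup" -Name "rsv-demo"
Set-AzRecoveryServicesBackupProperty -Vault $vault -BackupStorageRedundancy LocallyRedundant

# Confirm the change
Get-AzRecoveryServicesBackupProperty -Vault $vault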

 

 

Ah, finally! Then register the VM(s), run some backup jobs and voila! Confirmation that the vault is using LRS storage.

 

I hope this makes your day at least a little bit easier.

Thanks,

How to mount your OneDrive as a local mapped drive

Posted on Updated on

Reading Time: 3 minutes

EDIT: If you liked this post, I’ve updated my process a little bit and written a script to automate a good chunk of this! Go check out Part 2 of this blog! http://thetechl33t.com/2014/10/03/how-to-mount-your-onedrive-as-a-local-mapped-drive-part-2/

 

 

OneDrive is an online storage system from Microsoft that is included when you have an email account such as @live.com, @hotmail.com, etc. I use it fairly often, and I was curious whether I could map it locally – it turns out that I can.

First of all, you need to go to https://onedrive.com and use your Windows Live account (the same one you use to access Hotmail, Messenger, Windows Live Mail, or MSN) to log in and create the folders you want to use via the New menu. You can create private and shared folders and customize the access for each of them.

onedrive_0

After you have created your folders and customized them to your liking, you will need to link your computer to your online ID so it can access them without asking for credentials every time.

Click on the Start Menu button and select Control Panel.

cnt_pannel

Select User Accounts and Family Safety.

useracct_2

Select User Accounts

useracct_3

Select Link Online IDs, on the left side of the window.

link_online_4

Click on Link Online ID.

link_online_5

If you haven’t installed the Windows Live ID provider, you will be taken to a website to download it. If you aren’t taken there automatically, click the “Add an online ID provider” link in the photo above and it will take you there.

download_signon_6

Now you will be taken back to the Online ID providers screen; click on Link Online ID to sign in.

liveID_7

Now, to get the address for mapping your OneDrive folders, you can open Excel, Word, PowerPoint, or OneNote, click on File, and then on Save & Send. Then click “Save to Web” and it will populate the OneDrive folders from the Online ID you just linked; select the folder you want and click “Save As”.

doc_save_8

Double click on the folder you want to map and copy the folder’s address.

url_9

Now that you have that link, go back to “Computer” and click “Map Network Drive”.

computer_10

map_11

Choose a drive letter and paste in the URL that you copied a few steps back.

map_12

There ya go! You’ve now got your OneDrive linked locally!

drive_13

 

 

EDIT: If you liked this post, I’ve updated my process a little bit and written a script to automate a good chunk of this! Go check out Part 2 of this blog! http://thetechl33t.com/2014/10/03/how-to-mount-your-onedrive-as-a-local-mapped-drive-part-2/