Categories
Azure

Azure Files Backup GA

On April 29, 2020, Microsoft announced that Azure Backup can now manage snapshots for Azure Files.

Azure Files Backup announcement

This is great news because it simplifies Azure Files protection with Recovery Services vaults. Backups are important, so they should be easy to manage. Prior to this GA release, Azure Backup could only create one daily snapshot of Azure Files via backup policy settings.

Configure backup

Associate vault with file shares

However you manage Azure Files backup schedules, you need a Recovery Services vault, which must be in the same region as the storage accounts hosting the file shares. Vaults are configured for geo-redundant storage (GRS) by default, but can be changed to locally redundant storage (LRS) to reduce costs.

Configure Backup for Azure FileShare, selecting your storage accounts and shares.
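The same steps can be scripted with the AzureRM modules. A minimal sketch, assuming placeholder vault, policy, storage account and share names:

# Create the vault and switch it from the default GRS to LRS
$vault = New-AzureRmRecoveryServicesVault -Name "afs-vault" -ResourceGroupName "afs-rg" -Location "westeurope"
Set-AzureRmRecoveryServicesBackupProperties -Vault $vault -BackupStorageRedundancy LocallyRedundant

# Enable protection for a file share using an existing backup policy
Set-AzureRmRecoveryServicesVaultContext -Vault $vault
$policy = Get-AzureRmRecoveryServicesBackupProtectionPolicy -Name "DailyPolicy"
Enable-AzureRmRecoveryServicesBackupProtection -Policy $policy -Name "myfileshare" -StorageAccountName "mystorageaccount"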

Automate backups with a runbook

To achieve weekly/monthly/yearly and other backup intervals, an Automation account and a PowerShell runbook can automate Recovery Services vault protection of Azure Files. This sample solution uses the AzureRM PowerShell modules and is what I had been using with runbooks prior to this GA announcement.

Open your Automation Account and create a PowerShell runbook under Process Automation.

Create a runbook

Copy the contents of the example runbook, paste it into the editor, and Publish.
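The sample runbook is more thorough, but its core is roughly the following sketch, assuming the AzureRM modules are imported into the Automation account, a Run As account exists, and the module version supports on-demand retention via -ExpiryDateTimeUTC (names and retention are placeholders):

# Authenticate with the Automation Run As account
$conn = Get-AutomationConnection -Name "AzureRunAsConnection"
Connect-AzureRmAccount -ServicePrincipal -TenantId $conn.TenantId `
    -ApplicationId $conn.ApplicationId -CertificateThumbprint $conn.CertificateThumbprint

# Select the vault and list the protected Azure Files items
$vault = Get-AzureRmRecoveryServicesVault -Name "afs-vault"
Set-AzureRmRecoveryServicesVaultContext -Vault $vault
$container = Get-AzureRmRecoveryServicesBackupContainer -ContainerType AzureStorage `
    -Status Registered -FriendlyName "mystorageaccount"
$items = Get-AzureRmRecoveryServicesBackupItem -Container $container -WorkloadType AzureFiles

# Snapshot each share on demand with a custom retention date
$retainUntil = (Get-Date).ToUniversalTime().AddDays(30)
foreach ($item in $items) {
    Write-Output "Working on FileShare $($item.Name)"
    Backup-AzureRmRecoveryServicesBackupItem -Item $item -ExpiryDateTimeUTC $retainUntil
    Write-Output "Recoverypoints will be retained till $retainUntil"
}

The Write-Output lines matter: they are the strings the Log Analytics queries further down filter on.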

Quick note: Run As account certificates (AzureRunAsCertificate) need to be renewed yearly, so set a reminder in your service management tools to avoid backup interruptions.

Schedule a runbook

Multiple schedules will be needed for weekly/monthly/yearly/other recovery points. To fit your recovery point requirements, add schedules for 2nd daily/weekly/monthly/quarterly/end-of-fiscal-year under Shared Resources > Schedules in the Automation account.

Schedules

Link the schedules to the runbook for periodic runs; both steps can also be scripted, as sketched below. Backup snapshots are now created on the schedules you defined.

Linked schedules
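A sketch of creating and linking one such schedule with the AzureRM.Automation cmdlets, using placeholder account and schedule names:

# Create a weekly schedule in the Automation account
New-AzureRmAutomationSchedule -ResourceGroupName "afs-rg" -AutomationAccountName "afs-automation" `
    -Name "Weekly-Friday" -StartTime (Get-Date "22:00").AddDays(1) -WeekInterval 1 -DaysOfWeek Friday

# Link the schedule to the runbook
Register-AzureRmAutomationScheduledRunbook -ResourceGroupName "afs-rg" -AutomationAccountName "afs-automation" `
    -RunbookName "PeriodicAzureFilesBackup" -ScheduleName "Weekly-Friday"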

Monitor logs

You can use Logic Apps, Log Analytics, and resource group diagnostic settings to generate simple backup notifications.

Resource Group Diagnostics
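For the queries below to return anything, the Automation account's JobLogs and JobStreams categories must first be routed to the Log Analytics workspace. A sketch with the AzureRM.Insights cmdlet and placeholder resource names:

# Route Automation job logs and output streams to Log Analytics
$account = Get-AzureRmResource -ResourceGroupName "afs-rg" `
    -ResourceType "Microsoft.Automation/automationAccounts" -ResourceName "afs-automation"
$workspace = Get-AzureRmOperationalInsightsWorkspace -ResourceGroupName "afs-rg" -Name "afs-logs"
Set-AzureRmDiagnosticSetting -ResourceId $account.ResourceId -WorkspaceId $workspace.ResourceId `
    -Enabled $true -Categories JobLogs,JobStreams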

I use a daily and a weekly Logic App to generate reports. The daily query looks like this:

AzureDiagnostics 
    | where TimeGenerated >= ago(1d) 
    | where ResourceProvider == "MICROSOFT.AUTOMATION" 
    | where RunbookName_s == "PeriodicAzureFilesBackup" 
    | where ResultDescription has "Recoverypoints will be retained till" or ResultDescription has "Working on FileShare" 
    | project TimeGenerated, RunbookName_s, ResultDescription, ResourceGroup
    | distinct ResultDescription, RunbookName_s, ResourceGroup
    | sort by ResultDescription asc 

Two weekly queries generate more verbose output.

AzureDiagnostics 
    | where TimeGenerated >= ago(7d) 
    | where ResourceProvider == "MICROSOFT.AUTOMATION" 
    | where RunbookName_s == "PeriodicAzureFilesBackup" 
    | where ResultDescription has "Recoverypoints will be retained till"
    | project TimeGenerated, RunbookName_s, ResultDescription, ResourceGroup 
    | sort by TimeGenerated desc
  
AzureDiagnostics
    | where TimeGenerated >= ago(7d)
    | where ResourceProvider == "MICROSOFT.AUTOMATION"
    | where RunbookName_s == "PeriodicAzureFilesBackup"
    | where ResultDescription has "Working on FileShare"
    | distinct ResultDescription
    | sort by ResultDescription asc

Azure File Share Snapshot Management GA

Azure Automation and runbooks give you total control of the process and flexibility in managing schedules. That said, the Azure Backup policy, now including the common retention ranges, is very user friendly, enabling admins and managers to confidently protect their files.

Azure Backup Policy 2020-04-29
Categories
Azure Visual Studio Code

Visual Studio Online and a Yearly Blog Post

This post refers to a previous version of my site that used GatsbyJS, prior to January 2021.

I purchased my domain last year to host a blog and email. I did not expect to start producing a lot of posts, so I wanted the blog itself to be as inexpensive as possible. Here we are about 10 months later.


I started looking at static site generators like MkDocs, GitBook and Jekyll after finding a few sites that used them. They looked like novel, modern ways to generate sites, far beyond my mediocre HTML skills. I wanted to deploy the site using Azure to gain experience with some DevOps workflows.


I built a blog with GatsbyJS based on a very helpful tutorial by Elena Neroslavskaya.

The workflow is:

  • Install Node.js and generate a Gatsby site
  • Customize the site and commit it to a Git repository
  • Use Azure Pipelines to monitor the desired branch, build the site files and deploy them to an Azure Storage Account
  • Use Azure CDN to serve the site globally, quickly and cheaply


And it all worked great. Managing SSL certs through Azure CDN was a challenge that took a lot of time to resolve, but I expect it will be reliable. The site has been cheap to run. What bothered me was managing my development environment, meaning Node.js and the project files from my GitHub repo. If I wanted to develop at home on a workstation and then edit at work on a more portable laptop, both systems had to have matching configurations and I needed to be disciplined about managing my repo commits. Since this was a group of new tools and techniques for me, I struggled.

I attended a Visual Studio Online session at Ignite 2019.

This is an Azure-hosted development environment that can be accessed via a browser or Visual Studio Code. The selling point for me was the isolated development environment use case. Instead of trying to maintain updates and avoid conflicts in a local environment, I build a new development environment when I need it, point it at my GitHub repo and configure it via script to install the tools I use. In my case:

Updated 5/2/2020: Visual Studio Codespaces is the new name.

#!/bin/bash
# postcreate.sh - runs once after the environment is created

# Refresh package lists
sudo apt-get update

# Install the Gatsby CLI globally, then the project dependencies
npm install -g gatsby-cli
npm install
Now I can work on my Gatsby website anywhere, even access forwarded ports from localhost, and if the remote environment stops working, I can blow it away and start over.  A small thing, but very satisfying for me.

Categories
Azure

Azure File Sync and the bottomless server

One of the last workloads to migrate in my current environment is a 3-node Windows failover cluster with the File Server for general use role. The 3 nodes are Windows Server 2012 R2 guests on an HPE ProLiant blade Hyper-V cluster connected to an HPE 3PAR StoreServ 7200 via iSCSI.

I needed a solution that would:

  • Provide high availability and fault tolerance
  • Support a minimum of two sites
  • Integrate with Veeam and HPE StoreOnce with Catalyst deduplication
  • Provide a path to cloud infrastructure and services

I’d been testing different solutions for about a year while balancing daily requests and team projects.

DFSR could work, but it can have sync issues and seeding takes some effort. It would work with Veeam VM backups, but the path to cloud was questionable. And there had to be something else out there.

Maybe Storage Replica. It provides high-speed data replication, and high availability if used in a stretch cluster. Running a guest cluster with shared storage in our Nutanix environment has been problematic at best and frustrating. Veeam Agents can protect a failover cluster, but Veeam VM backups cannot. That means backups to a Veeam backup repository or SMB share, then backup copy jobs to StoreOnce. The Nutanix storage fabric was sized for running workloads, so subscribing it for backup storage was not feasible, and purchasing compute and storage from a campus vendor would be a yearly expense with limited RoI. HPE StoreOnce supports CIFS shares for SMB, but performance testing showed slow transfer rates for backup and backup copy jobs. Too many integrations, too many opportunities for something to go wrong, and Storage Replica is overkill for user home directories and group folders. The path to cloud was also not great.

A lot of time was spent experimenting and waiting for Azure File Sync to go from preview at Ignite 2017 to GA in July 2018. This looked like the good stuff: a file sync service that works with on-premises Windows file shares, moves the storage hub to Azure, and uses Cloud Tiering to keep the on-prem VM disk footprint small. Bottomless file server? Yep…

I came home from Orlando with a mission:

> Deploy AFS in dev and learn how to monitor it.
> Estimate costs for LRS or GRS storage accounts for about 10 TB of files.
> Get it all backed up.
> Ship it.

And that was September and October, when I had a lofty goal of cutting over our file shares during fall break or winter recess. There were still governance and technical configuration issues to address. I did not see how Azure Backup would meet our retention needs, as it appeared to only support daily jobs and 60 days of retention. That is, unless you contact the Azure Backup Team and ask for their sample runbook to automate on-demand backups.

I needed a solution for storing a full copy of our data on-prem for governance and local backup. Support for deduplication together with cloud tiering came later, with the February 2019 v5 agent release. Prior to that, I was concerned about the capacity needed on our two Nutanix Hyper-V clusters: I could have a full copy of a server endpoint on one cluster and a tiered endpoint on another. More complexity.

This is a compromise, but at least as a proof of concept with some production lifetime, I like it. I combined 2.5" 5 TB Seagate BarraCuda spinning disks with HPE mixed-use enterprise SSDs in a ProLiant DL380 Gen9 to create a Windows storage pool with tiered storage. Performance is fair considering the cheap consumer HDDs. The virtual disk and its volume hold the VHDX files for a VM whose sole purpose is to sync Azure Files. This is not a user-facing server and there are no shared folders. With a Windows Server 2016 host and Veeam 9.5u3 VM backups, Resilient Change Tracking makes quick work of protecting the data.
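For reference, a tiered pool like that can be built with the Windows Storage cmdlets. A sketch with hypothetical pool names and tier sizes:

# Pool all poolable disks, then define SSD and HDD tiers
$disks = Get-PhysicalDisk -CanPool $true
New-StoragePool -FriendlyName "AfsPool" -StorageSubSystemFriendlyName "Windows Storage*" -PhysicalDisks $disks
$ssd = New-StorageTier -StoragePoolFriendlyName "AfsPool" -FriendlyName "SSDTier" -MediaType SSD
$hdd = New-StorageTier -StoragePoolFriendlyName "AfsPool" -FriendlyName "HDDTier" -MediaType HDD

# Create a tiered, mirrored virtual disk to hold the sync VM's VHDX files
New-VirtualDisk -StoragePoolFriendlyName "AfsPool" -FriendlyName "AfsDisk" `
    -StorageTiers $ssd,$hdd -StorageTierSizes 400GB,8TB -ResiliencySettingName Mirror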

More work to come with data migration, analytics, go-live and troubleshooting.