Building a Solr Cluster with TerraForm – Part 1

So it's no surprise that I've been talking a lot about how much I like TerraForm, and recently I've been investigating Solr and how to build a scalable Solr cluster.

So, given the Kubernetes template I'd built previously, I wanted to try my hand at something similar. The goals of this project were the following:

  1. Build a generic template for creating a Solr Cloud cluster with distributed shards.
  2. Build out the ability to scale the cluster, for now using TerraForm to manually trigger increases to the cluster size.
  3. Make the nodes automatically add themselves to the cluster.

I could do this using just bash scripts and Packer, but instead I wanted to try my hand at cloud-init.

But that's the end result; I wanted to walk through the various steps I go through to get there. The first real step is to get the installation of Solr working on Linux machines.

So let's start with "What is Solr?" The answer is that Solr is an open source solution for building a search engine. It works in the same vein as Elasticsearch and other technologies. Solr has been around for quite a while and is used by some of the largest companies that implement search to handle requests from their customers, including Netflix and CareerBuilder.

So I decided to try my hand at creating my first Solr cluster, and started by reviewing the getting started documentation.

So I ended up looking into it more, and built out the following script to create a “getting started” solr cluster.

# Prerequisites for fetching and verifying the Solr package
sudo apt-get install -y apt-transport-https ca-certificates curl software-properties-common
sudo apt-get install -y gnupg-curl
sudo wget -qO - https://www.apache.org/dist/lucene/solr/8.0.0/solr-8.0.0.zip.asc | sudo apt-key add -

sudo apt-get update -y
sudo apt-get install -y unzip
sudo wget http://mirror.cogentco.com/pub/apache/lucene/solr/8.0.0/solr-8.0.0.zip

# Unpack Solr and move it into place
sudo unzip -q solr-8.0.0.zip
sudo mv -f solr-8.0.0 /usr/local/bin/solr-8.0.0
sudo rm -f solr-8.0.0.zip

# Solr requires a JDK
sudo apt-get install -y default-jdk

# Make the Solr launch scripts executable and start the two-node "cloud" example
sudo chmod +x /usr/local/bin/solr-8.0.0/bin/solr
sudo chmod +x /usr/local/bin/solr-8.0.0/example/cloud/node1/solr
sudo chmod +x /usr/local/bin/solr-8.0.0/example/cloud/node2/solr
sudo /usr/local/bin/solr-8.0.0/bin/solr -e cloud -noprompt
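Once that last command finishes, a quick sanity check (my addition here; the bin/solr status command ships with the Solr distribution) is to ask Solr which nodes are running:

# The cloud example should report two running nodes (ports 8983 and 7574)
sudo /usr/local/bin/solr-8.0.0/bin/solr status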

The above will configure a "getting started" Solr cluster that leverages all the system defaults and is hardly a production implementation, so my next step will be to change that. But for the sake of getting something running, I took the above script and moved it into a Packer template using the following JSON. The script above is the "../scripts/Solr/provision.sh" referenced in the provisioners section.

{
  "variables": {
    "deployment_code": "",
    "resource_group": "",
    "subscription_id": "",
    "location": "",
    "cloud_environment_name": "Public"
  },
  "builders": [{   
    "type": "azure-arm",
    "cloud_environment_name": "{{user `cloud_environment_name`}}",
    "subscription_id": "{{user `subscription_id`}}",

    "managed_image_resource_group_name": "{{user `resource_group`}}",
    "managed_image_name": "Ubuntu_16.04_{{isotime \"2006_01_02_15_04\"}}",
    "managed_image_storage_account_type": "Premium_LRS",

    "os_type": "Linux",
    "image_publisher": "Canonical",
    "image_offer": "UbuntuServer",
    "image_sku": "16.04-LTS",

    "location": "{{user `location`}}",
    "vm_size": "Standard_F2s"
  }],
  "provisioners": [
    {
      "type": "shell",
      "script": "../scripts/ubuntu/update.sh"
    },
    {
      "type": "shell",
      "script": "../scripts/Solr/provision.sh"
    },
    {
      "execute_command": "chmod +x {{ .Path }}; {{ .Vars }} sudo -E sh '{{ .Path }}'",
      "inline": [
        "/usr/sbin/waagent -force -deprovision+user && export HISTSIZE=0 && sync"
      ],
      "inline_shebang": "/bin/sh -e",
      "type": "shell"
    }]
}

The only other script referenced is "update.sh", which has the following logic in it to install the Azure CLI and update the Ubuntu image:

#!/bin/bash

sudo apt-get update -y
sudo apt-get upgrade -y

# Azure CLI
AZ_REPO=$(lsb_release -cs)
echo "deb [arch=amd64] https://packages.microsoft.com/repos/azure-cli/ $AZ_REPO main" | sudo tee /etc/apt/sources.list.d/azure-cli.list
curl -L https://packages.microsoft.com/keys/microsoft.asc | sudo apt-key add -
sudo apt-get install -y apt-transport-https
sudo apt-get update && sudo apt-get install -y azure-cli

So the above gets me to a good place: I can now create a managed image with Solr configured.
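For reference, building the image is then just a matter of running Packer against the template with the variables it declares (the template file name below is a placeholder of mine; packer validate and packer build are standard Packer commands):

# Sanity-check the template, then build the managed image in Azure
packer validate solr-image.json
packer build \
  -var "subscription_id=<subscription id>" \
  -var "resource_group=<image resource group>" \
  -var "location=<azure region>" \
  solr-image.json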

For next steps I will be doing the following:

  • Building a more “production friendly” implementation of Solr into the script.
  • Investigating leveraging cloud init instead of the “golden image” experience with Packer.
  • Building out templates around the use of Zookeeper for managing the nodes.


Configuring Terraform Development Environment

So I've been doing a lot of work with a set of open source tools lately, specifically TerraForm and Packer. TerraForm at its core is a tool for implementing true Infrastructure as Code: it provides a simple, declarative configuration language where you describe your cloud infrastructure and then leverage resource providers to deploy it. These resource providers allow you to deploy to a variety of cloud platforms (the full list can be found here). It also provides robust support for debugging and targeting, and supports a desired state configuration approach that makes it much easier to maintain your environments in the cloud.

Now that being said, like most open source tools, it can require some configuration for your local development environment and I wanted to put this post together to describe it. Below are the steps to configuring your environment.

Step 1: Install the Windows Subsystem for Linux on your Windows 10 Machine

To start with, you will need to be able to leverage bash as part of the Linux Subsystem. You can enable this on a Windows 10 machine, by following the steps outlined in this guide:

https://docs.microsoft.com/en-us/windows/wsl/install-win10

Once you’ve completed this step, you will be able to move forward with VS Code and the other components required.

Step 2: Install VS Code and Terraform Plugins

For this guide we recommend VS Code as your editor; VS Code works on a variety of operating systems and is a very lightweight code editor.

You can download VS Code from this link:

https://code.visualstudio.com/download

Once you've downloaded and installed VS Code, we need to install the VS Code extension for Terraform. Open the Extensions view and search for "Terraform".

Then click "Install" and "Reload" when it completes. This will give you IntelliSense and support for the different Terraform file types.

Step 3: Opening Terminal

You can then perform the remaining steps from the VS Code application. Go to the "View" menu and select "Integrated Terminal". You will see the terminal appear at the bottom of the window.

By default, the terminal is set to PowerShell; type "bash" to switch to the Bash shell. You can change your default shell by following this guidance – https://code.visualstudio.com/docs/editor/integrated-terminal#_configuration

Step 4: Install Unzip on Subsystem

Run the following command to install "unzip" on your Linux subsystem; it will be required to unzip both Terraform and Packer.

sudo apt-get install unzip

Step 5: Install TerraForm

You will need to execute the following commands to download and install Terraform. We start by getting the latest version of Terraform.

Go to this link:

https://www.terraform.io/downloads.html

And copy the link for the appropriate version of the binaries for TerraForm.

Go back to VS Code, and enter the following commands:

wget {url for terraform}
unzip {terraform.zip file name}
sudo mv terraform /usr/local/bin/terraform
rm {terraform.zip file name}
terraform --version

Step 6: Install Packer

To start with, we need to get the most recent version of Packer. Go to the following URL, and copy the link for the appropriate version.

https://www.packer.io/downloads.html

Go back to VS Code and execute the following commands:

wget {packer url} 
unzip {packer.zip file name} 
sudo mv packer /usr/local/bin/packer
rm {packer.zip file name}
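As with Terraform, it's worth confirming the binary landed on your path (a small addition of mine, mirroring the Terraform step):

packer --version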

Step 7: Install Azure CLI 2.0

Go back to VS Code again, and download / install the Azure CLI. To do so, execute the steps and commands found here:

https://docs.microsoft.com/en-us/cli/azure/install-azure-cli-apt?view=azure-cli-latest

Step 8: Authenticating against Azure

Once this is done you are in a place where you can run terraform projects, but before you do, you need to authenticate against Azure. This can be done by running the following commands in the bash terminal (see link below):

https://docs.microsoft.com/en-us/azure/azure-government/documentation-government-get-started-connect-with-cli
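For Azure Government specifically, the flow looks something like the following (a sketch based on the linked guidance; for Azure Commercial you would skip the cloud set step):

# Point the CLI at the Azure Government cloud, then sign in
az cloud set --name AzureUSGovernment
az login

# Select the subscription Terraform should deploy into
az account set --subscription "<subscription id>"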

Once that is completed, you will be authenticated against Azure and will be able to run Terraform against the various environments.

NOTE: Your authentication token will expire. Should you get a message about an expired token, enter the following command to refresh it:

az account get-access-token 

Token lifetimes are described here – https://docs.microsoft.com/en-us/azure/active-directory/develop/active-directory-token-and-claims#access-tokens

After that you are ready to use Terraform on your local machine.

Where do I start – Service Fabric?

So containers have become an essential part of modern application development. I would go as far as to say that containers and microservices have had an impact on software development similar to that of "Object Oriented Programming".

Now that being said, I have been talking to a lot of people who run monolithic applications and are looking for a way to break them down into a microservice approach, while still using their existing infrastructure, and who don't necessarily want to deploy on Linux for a variety of reasons.

Based on that, there is an established technology that can take your Docker containers and orchestrate them in a Windows environment, and that is Service Fabric.

I find the learning curve of breaking a monolithic application into microservices is a lot easier to swallow with Service Fabric; it helps you break up your applications to make better use of the compute on the machines in your cluster, and you can still leverage Docker.

Below are some links to help you get started with Service Fabric if you are looking for information on this technology:

Concepts and Architecture:

Service Fabric Overview:

Coding Samples:

Videos:

Book Review – Everyday Millionaires

So for something a little different, I decided to check out a book on finance, because ultimately I do love my job, but I also enjoy making money :). So the question obviously becomes how do you generate enough wealth to some day gain the financial freedom to enjoy it.

A while back my wife and I took the Financial Peace University course and found it to be really insightful, and since then, by applying the teachings of that course, we have been able to leverage the money we make to achieve more of our goals; honestly it's been a very liberating experience overall. Our entire financial outlook completely changed in a single year's time.

Honestly, if the past few years have taught me anything, it's that change can happen very quickly, and with almost no time (or warning) at all. In the past 6 years, my entire life changed so much that if you had tried to tell my past self this was all going to happen, I would have absolutely laughed at you. I went from a married man, living in a small town home and working as a developer and architect, to my current position, father of 2, and currently in a new home.

So I decided to check out Chris Hogan's book, Everyday Millionaires, and see what I could glean from his research and insights. For those who don't recognize the name, Chris Hogan is a financial expert who is part of Dave Ramsey's "Ramsey Solutions" organization, and has written several books on how to ensure you set yourself up for a successful retirement.

I have to say I enjoyed this book. As part of it, Chris Hogan interviewed and surveyed over 10,000 people who all have a net worth over $1 million. And to be honest, his findings pretty much lined up with a lot of the things that are talked about in Dave Ramsey's books and courses.

Now admittedly, that's not surprising, because he explains that the way they found these people was to put out an open call and interview the people who responded, and the people listening to that call are people who are already familiar with his work. So his results may be a little skewed, but that doesn't make his findings any less relevant.

The general message of the book is that it is possible to attain millionaire status without relying on any one of the many myths out there. The idea that people who are rich have some "secret sauce" or some "unnatural advantage" is not at all true.

The simple message of this book is that if you work hard and invest smart and safely, you can achieve the financial independence you are looking for, and that chasing that "1 big break" is what can ultimately lead to ruin.

Below is a video describing how this is possible.

Now there is one thing I fundamentally disagree with, and it's not what he's saying, but more how he says it. The one statement he repeats often is "If you work hard then …", followed by a reference to how attainable it is.

I don’t disagree with the sentiment, but I do disagree with the phrasing. The past few years have taught me a very valuable lesson, and that lesson is that of making sure you focus on “Impact OVER activity”. I don’t believe that working hard is enough to get anywhere in life, but rather working smarter and harder is the key.

Let me put this to an example… Take the following two scenarios (I'm borrowing these from Greg McKeown). I'm going to keep the numbers small to make my point.

If you have a job as a kid with a paper route, and you work every day of the week and ultimately make $10 a week, you can make decent money. That's $40 a month, and that can do a lot for a kid who is, say, 12 years old.

Now, if you could instead take a job washing cars on the weekend, charging $5 per car and doing 5 cars on a Saturday and 5 cars on a Sunday, you can make $50 a week, which is $200 a month.

Now I would argue that you can work hard at the paper route, but at the end of the day the impact it has on your goals is significantly lower. It would make more sense to take the job washing cars, make more money, and then look for other things you can do during the week. It doesn't matter how hard you work; that doesn't change the fact that the impact is different. If I work 10x as hard at the paper route, the end result is the same.

But if I increase my efforts on washing cars and can do 7 cars a day (only 2 more), that's now $35 a day, or $70 a weekend, which is $280 a month.

See my point? At the end of the day I feel like it's important to work hard, but you have to take the time to make sure that what you are working on is moving you towards your end goals in life. Sitting and grinding away at a job that you don't enjoy and that has no growth potential may get you to millionaire status some day, but the risk is lower if you focus on careers where the level of effort has an impact on the return on the investment.

Ultimately we all have a finite amount of time, and we invest it in our careers and skills, so we should focus on items that have an acceptable level of risk and a reasonable return on that investment.

Overall I recommend the book, but would advise you to keep this in mind as you read it.

Building out Azure Container Registry in Terraform

So I’ve previously done posts on the TerraForm template that I built to support creating a kubernetes cluster. The intention behind this was to provide a solution for standing up a kubernetes cluster in Azure Government. To see more information on that cluster I have a blog post here.

Now one of the questions I did get with it is, "How do we integrate this with Azure Container Registry?" For those not familiar, Azure Container Registry is a PaaS offering from Azure that allows you to push your container images to a Docker registry without having to manage the underlying VM, patching, updates, and other maintenance. You just pay for the space to store the container images, which admittedly are very small.

The first part of implementing this logic was to create the Container Registry in TerraForm by using the following.

A key note: the "count" variable is used so that this registry will only be created if you create an "lkma" instance, which is the VM that operates as the master.

resource "azurerm_container_registry" "container-registry" {
  count = "${lookup(var.instance_counts, "lkma", 0) == 0 ? 0 : 1}"
  name                = "containerRegistry1"
  resource_group_name = "${azurerm_resource_group.management.name}"
  location            = "${var.azure_location}"
  admin_enabled       = true
  sku                 = "Standard"

  depends_on = ["azurerm_role_assignment.kub-ad-sp-ra-kv1"]
}

So honestly it didn't require that much in the way of work. For the next part, it is literally just adding a few lines of code to enable the connection between the registry and the Kubernetes cluster. Those lines are the following:

echo 'Configure ACR registry for Kubernetes cluster'
kubectl create secret docker-registry <SECRET_NAME> --docker-server $5 --docker-email $6 --docker-username=$7 --docker-password $8
echo 'Script Completed'

So really that is about it. I've already made these changes to the GitHub template, so please check it out. The above lines of code take the principal information that I pass to the script and use it to connect the Azure Container Registry to my cluster.
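For context, the $5 through $8 values above are positional parameters of the provisioning script, so the call ends up looking something like this (the script name and the earlier arguments are illustrative assumptions, not taken from the template):

# $1-$4 are whatever earlier values the script expects;
# the registry details map to $5-$8 as used above
./configure-acr.sh <arg1> <arg2> <arg3> <arg4> \
  <registry login server> \
  <registry admin email> \
  <registry username> \
  <registry password>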

Book Review – Innovator’s Mindset

So I have been trying to read more, and to focus less on technical reading and more on books covering a variety of topics. One that I wanted to check out was The Innovator's Mindset.

Right now my family has been going through a lot of changes, and at the forefront of that is the fact that my kids are at school age, we are moving, and I've been re-examining my approach to innovation and education, as the mission of our family is to secure the future of our kids.

Because of that, I was really keen to hear about approaches people use to guide new, innovative learning methods, not just for myself but for my children as well.

The book interestingly mirrors the work of Angela Duckworth and her book Grit, and Carol Dweck's book Mindset, both of which I am very familiar with, enjoyed, and have seen value from in my life.

The interesting point he makes here is that the education system is ripe for disruption. Many schools and institutions cling to the old ways of doing things and are afraid to take risks with how they teach.

The focus of the book is around how we as a society teach problem solving but not problem finding: the idea of how you look at the world and see that something is wrong. That requires intelligence, but more than that it requires empathy, the ability to understand how people feel and to gain insight into their situation and problem.

The intention is then that we should focus on using learning to drive outcomes, because knowledge that is not practical is wasted effort. There needs to be a way for the student to absorb that knowledge into the fiber of their being, into their structure of knowledge, for application.

The idea of the innovator's mindset is that we need to seek alternative viewpoints, take risks, recognize that there is a cost to not changing, and know that failure and iteration are part of the solution.

I enjoyed the focus on how to embrace the idea of taking risks and the kind of challenges you may run into, and overall found this book to be great. I do think he focused a little too much on examples that involved social media; it is not a magic silver bullet for education.

But one point I do agree with is his focus on honest and public reflection. The idea of declaring you will do something is a great way of encouraging accountability, but to the author's point, it also encourages us to be more thoughtful about our ideas if we know that others will be reviewing and challenging them. This can lead to a better, more thoughtful effort and a crowd-sourced solution to problems.

The biggest thing that really landed with our family is that right now the education system is very focused on consumption, pushing kids to consume what is thrown at them. While this type of learning can work and has its place, there is also such a thing as focusing on empowerment: take an objective, and help our kids have the resources to learn everything they need to reach that objective. It's an interesting way of learning, but it works.

So how did this help? My daughter has been struggling with learning her letters in kindergarten; worksheets are like pulling teeth and flash cards are boring, so she's been having a rough time. While listening to this book, I noticed my daughter loves putting on plays and shows at home. So I asked her, "Let's make a letter video." Not only did she get excited, but she pushed past what was required, practicing her letters by making videos and wanting to practice "for the video." She took her test and went from struggling to pass 1 list of letters, to passing 2, and almost a third.

It occurs to me this goes beyond kids and into my own profession. Anyone can learn a technology, but it becomes a lot easier when you focus on solving a specific problem and direct your learning as such.

There is no better place to learn than the foxhole. Ultimately it leads to much better drivers of success at the end of the day.

The interesting part also was the second half of the book, which talked about how, as a leader, to foster a culture of innovation within your organization. The key points I would call out here are giving people the freedom to fail, and fail fast, and encouraging your people to take risks. This is something that I've been working on in my family and with my kids, celebrating the fact that they "tried something new".

Overall I recommend the book; it gave some good ideas with regard to approaching innovation that I found enlightening, provided you can get past the "education system" focus.

Breaking Down Monitoring in Azure

So let's talk monitoring in Azure. Honestly, this is a topic that makes most people become a "deer in headlights," and the reason is that most executives love to say "We need to have a great monitoring story," but monitoring is a massive topic and most don't know where to begin.

And the truth is that it is a huge and multi-faceted topic with a variety of solutions that can be applied in a variety of ways. So I wanted to gather some resources to help you out if you're trying to figure out which way is up.

MSLearn:  Provides guided learning through articles, hands-on labs, and videos on a variety of topics.  Some of the key ones I thought would be of interest are:

When it comes to monitoring we have several products and services that are at the core of providing that support:

Additionally, we have videos at a site called Channel 9 on a variety of topics.

Links for Kusto the Log Analytics Querying Language

Deploying ARM Templates through CLI

Links for Full Monitoring Solution

Links for Logs on Log Analytics and details of data captured

Migrating VMs to Azure DevTest Labs

So here's something I've been running into a lot, and at the heart of it is cost management. No one wants to spend more money than they need to, and that is especially true if you are running your solutions in the cloud. And for most organizations, a big cost factor can be running lower environments.

So the situation is this: you have a virtual machine, be it a developer machine or some form of lower environment (Dev, Test, QA, etc). The simple fact is that you have to pay for this environment to be able to run testing and ensure that everything works when you deploy to production. So it's a necessary evil, and extremely important if you are adhering to DevOps principles. Now the unfortunate part is that you likely only really use these lower environments during work hours. They probably aren't exposed to the outside world, and they probably aren't being hit by customers in any fashion.

So ultimately, that means you are paying for this environment 24/7 but only using it for 12 hours a day, which means you are basically paying for double what you need.

Enter DevTest Labs, which allows you to create VMs and artifacts that make it easy to spin up and spin down environments without any effort on your part, so that you can save money on running these lower environments.

Now the biggest question then becomes, "That's great, but how do I move my existing VMs into a new DevTest Lab?" The answer is that it can be done via script, and I've written a script to do that here.

#The resource group information, the source and destination
resourceGroupName="<Original Resource Group>"
newResourceGroupName="<Destination Resource Group>"

#The name of the VM you wish to migrate
vmName="<VM Name>"
newVmName="<New VM Name>"

#The OS image of the VM
imageName="Windows Server 2016 Datacenter"
osType="Windows"

#The size of the VM
vmsize="<vm size>"
#The location of the VM
location="<location>"
#The admin username for the newly created vm
adminusername="<username>"

#The suffix to add to the end of the VM
osDiskSuffix="_lab.vhd"
#The type of storage for the data disks
storageType="Premium_LRS"

#The VNET information for the VM that is being migrated
vnet="<vnet name>"
subnet="<Subnet name>"

#The name of the lab to be migrated to
labName="<Lab Name>"

#The information about the storage account associated with the lab
newstorageAccountName="<Lab account name>"
storageAccountKey="<Storage account key>"
diskcontainer="uploads"

echo "Set to Government Cloud"
sudo az cloud set --name AzureUSGovernment

#echo "Login to Azure"
#sudo az login

echo "Get updated access token"
sudo az account get-access-token

echo "Create new Resource Group"
az group create -l $location -n $newResourceGroupName

echo "Deallocate current machine"
az vm deallocate --resource-group $resourceGroupName --name $vmName

echo "Create container"
az storage container create -n $diskcontainer --account-name $newstorageAccountName --account-key $storageAccountKey

osDisks=$(az vm show -d -g $resourceGroupName -n $vmName --query "storageProfile.osDisk.name") 
echo ""
echo "Copy OS Disks"
echo "--------------"
echo "Get OS Disk List"

osDisks=$(echo "$osDisks" | tr -d '"')

for osDisk in $(echo $osDisks | tr "[" "\n" | tr "," "\n" | tr "]" "\n" )
do
   echo "Copying OS Disk $osDisk"

   echo "Get url with token"
   sas=$(az disk grant-access --resource-group $resourceGroupName --name $osDisk --duration-in-seconds 3600 --query [accessSas] -o tsv)

   newOsDisk="$osDisk$osDiskSuffix"
   echo "New OS Disk Name = $newOsDisk"

   echo "Start copying $newOsDisk disk to blob storage"
   az storage blob copy start --destination-blob $newOsDisk --destination-container $diskcontainer --account-name $newstorageAccountName --account-key $storageAccountKey --source-uri $sas

   echo "Get $newOsDisk copy status"
   while [ "$status"=="pending" ]
   do
      status=$(az storage blob show --container-name $diskcontainer --name $newOsDisk --account-name $newstorageAccountName --account-key $storageAccountKey --output json | jq '.properties.copy.status')
      status=$(echo "$status" | tr -d '"')
      echo "$newOsDisk Disk - Current Status = $status"

      progress=$(az storage blob show --container-name $diskcontainer --name $newOsDisk --account-name $newstorageAccountName --account-key $storageAccountKey --output json | jq '.properties.copy.progress')
      echo "$newOsDisk Disk - Current Progress = $progress"
      sleep 10s
      echo ""

      if [ "$status" != "pending" ]; then
      echo "$newOsDisk Disk Copy Complete"
      break
      fi
   done

   echo "Get blob url"
   blobSas=$(az storage blob generate-sas --account-name $newstorageAccountName --account-key $storageAccountKey -c $diskcontainer -n $newOsDisk --permissions r --expiry "2019-02-26" --https-only)
   blobSas=$(echo "$blobSas" | tr -d '"')
   blobUri=$(az storage blob url -c $diskcontainer -n $newOsDisk --account-name $newstorageAccountName --account-key $storageAccountKey)
   blobUri=$(echo "$blobUri" | tr -d '"')

   echo $blobUri

   blobUrl=$(echo "$blobUri")

   echo "Create image from $newOsDisk vhd in blob storage"
   az group deployment create --name "LabMigrationv1" --resource-group $newResourceGroupName --template-file customImage.json --parameters existingVhdUri=$blobUrl --verbose

   echo "Create Lab VM - $newVmName"
   az lab vm create --lab-name $labName -g $newResourceGroupName --name $newVmName --image "$imageName" --image-type gallery --size $vmsize --admin-username $adminusername --vnet-name $vnet --subnet $subnet
done 

dataDisks=$(az vm show -d -g $resourceGroupName -n $vmName --query "storageProfile.dataDisks[].name") 
echo ""
echo "Copy Data Disks"
echo "--------------"
echo "Get Data Disk List"

dataDisks=$(echo "$dataDisks" | tr -d '"')

for dataDisk in $(echo $dataDisks | tr "[" "\n" | tr "," "\n" | tr "]" "\n" )
do
   echo "Copying Data Disk $dataDisk"

   echo "Get url with token"
   sas=$(az disk grant-access --resource-group $resourceGroupName --name $dataDisk --duration-in-seconds 3600 --query [accessSas] -o tsv)

   newDataDisk="$dataDisk$osDiskSuffix"
   echo "New OS Disk Name = $newDataDisk"

   echo "Start copying disk to blob storage"
   az storage blob copy start --destination-blob $newDataDisk --destination-container $diskcontainer --account-name $newstorageAccountName --account-key $storageAccountKey --source-uri $sas

   echo "Get copy status"
   while [ "$status"=="pending" ]
   do
      status=$(az storage blob show --container-name $diskcontainer --name $newDataDisk --account-name $newstorageAccountName --account-key $storageAccountKey --output json | jq '.properties.copy.status')
      status=$(echo "$status" | tr -d '"')
      echo "Current Status = $status"

      progress=$(az storage blob show --container-name $diskcontainer --name $newDataDisk --account-name $newstorageAccountName --account-key $storageAccountKey --output json | jq '.properties.copy.progress')
      echo "Current Progress = $progress"
      sleep 10s
      echo ""

      if [ "$status" != "pending" ]; then
      echo "Disk Copy Complete"
      break
      fi
   done
done 

echo "Script Completed"

So the above script breaks out into a couple of key pieces. The first part is that the following needs to happen:

  • Create the destination resource group
  • Deallocate the machine
  • Create a container for the disks to be migrated to

echo "Create new Resource Group"
az group create -l $location -n $newResourceGroupName

echo "Deallocate current machine"
az vm deallocate --resource-group $resourceGroupName --name $vmName

echo "Create container"
az storage container create -n $diskcontainer --account-name $newstorageAccountName --account-key $storageAccountKey

The next process is to identify the OS disk for the VM and copy it from its current location over to the storage account associated with the DevTest Lab. The next key part is to create a custom image based on that VHD and then create a VM based on that image.

osDisks=$(az vm show -d -g $resourceGroupName -n $vmName --query "storageProfile.osDisk.name") 
echo ""
echo "Copy OS Disks"
echo "--------------"
echo "Get OS Disk List"

osDisks=$(echo "$osDisks" | tr -d '"')

for osDisk in $(echo $osDisks | tr "[" "\n" | tr "," "\n" | tr "]" "\n" )
do
   echo "Copying OS Disk $osDisk"

   echo "Get url with token"
   sas=$(az disk grant-access --resource-group $resourceGroupName --name $osDisk --duration-in-seconds 3600 --query [accessSas] -o tsv)

   newOsDisk="$osDisk$osDiskSuffix"
   echo "New OS Disk Name = $newOsDisk"

   echo "Start copying $newOsDisk disk to blob storage"
   az storage blob copy start --destination-blob $newOsDisk --destination-container $diskcontainer --account-name $newstorageAccountName --account-key $storageAccountKey --source-uri $sas

   echo "Get $newOsDisk copy status"
   while [ "$status"=="pending" ]
   do
      status=$(az storage blob show --container-name $diskcontainer --name $newOsDisk --account-name $newstorageAccountName --account-key $storageAccountKey --output json | jq '.properties.copy.status')
      status=$(echo "$status" | tr -d '"')
      echo "$newOsDisk Disk - Current Status = $status"

      progress=$(az storage blob show --container-name $diskcontainer --name $newOsDisk --account-name $newstorageAccountName --account-key $storageAccountKey --output json | jq '.properties.copy.progress')
      echo "$newOsDisk Disk - Current Progress = $progress"
      sleep 10s
      echo ""

      if [ "$status" != "pending" ]; then
      echo "$newOsDisk Disk Copy Complete"
      break
      fi
   done

   echo "Get blob url"
   blobSas=$(az storage blob generate-sas --account-name $newstorageAccountName --account-key $storageAccountKey -c $diskcontainer -n $newOsDisk --permissions r --expiry "2019-02-26" --https-only)
   blobSas=$(echo "$blobSas" | tr -d '"')
   blobUri=$(az storage blob url -c $diskcontainer -n $newOsDisk --account-name $newstorageAccountName --account-key $storageAccountKey)
   blobUri=$(echo "$blobUri" | tr -d '"')

   echo $blobUri

   blobUrl=$(echo "$blobUri")

   echo "Create image from $newOsDisk vhd in blob storage"
   az group deployment create --name "LabMigrationv1" --resource-group $newResourceGroupName --template-file customImage.json --parameters existingVhdUri=$blobUrl --verbose

   echo "Create Lab VM - $newVmName"
   az lab vm create --lab-name $labName -g $newResourceGroupName --name $newVmName --image "$imageName" --image-type gallery --size $vmsize --admin-username $adminusername --vnet-name $vnet --subnet $subnet
done 

Now part of the above is to use an ARM template to create the image. The template is below:

{
  "$schema": "http://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "existingLabName": {
      "type": "string",
      "defaultValue":"KevinLab",
      "metadata": {
        "description": "Name of an existing lab where the custom image will be created or updated."
      }
    },
    "existingVhdUri": {
      "type": "string",
      "metadata": {
        "description": "URI of an existing VHD from which the custom image will be created or updated."
      }
    },
    "imageOsType": {
      "type": "string",
      "defaultValue": "windows",
      "metadata": {
        "description": "Specifies the OS type of the VHD. Currently 'Windows' and 'Linux' are the only supported values."
      }
    },
    "isVhdSysPrepped": {
      "type": "bool",
      "defaultValue": true,
      "metadata": {
        "description": "If the existing VHD is a Windows VHD, then specifies whether the VHD is sysprepped (Note: If the existing VHD is NOT a Windows VHD, then please specify 'false')."
      }
    },
    "imageName": {
      "type": "string",
      "defaultValue":"LabVMMigration",
      "metadata": {
        "description": "Name of the custom image being created or updated."
      }
    },
    "imageDescription": {
      "type": "string",
      "defaultValue": "",
      "metadata": {
        "description": "Details about the custom image being created or updated."
      }
    }
  },
  "variables": {
    "resourceName": "[concat(parameters('existingLabName'), '/', parameters('imageName'))]",
    "resourceType": "Microsoft.DevTestLab/labs/customimages"
  },
  "resources": [
    {
      "apiVersion": "2018-10-15-preview",
      "name": "[variables('resourceName')]",
      "type": "Microsoft.DevTestLab/labs/customimages",
      "properties": {
        "vhd": {
          "imageName": "[parameters('existingVhdUri')]",
          "sysPrep": "[parameters('isVhdSysPrepped')]",
          "osType": "[parameters('imageOsType')]"
        },
        "description": "[parameters('imageDescription')]"
      }
    }
  ],
  "outputs": {
    "customImageId": {
      "type": "string",
      "value": "[resourceId(variables('resourceType'), parameters('existingLabName'), parameters('imageName'))]"
    }
  }
}

So if you run the above, it will create the new VM under the DevTest lab.
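As a quick check that the migration landed, you can list the VMs in the lab afterwards (a small addition on my part; az lab vm list is part of the Azure CLI's lab command group):

# Should show the newly created lab VM
az lab vm list --lab-name $labName --resource-group $newResourceGroupName --output table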

Creating Terraform Scripts from existing resources in Azure Government

Lately I've been doing a lot of work with TerraForm, and one of the questions I've gotten a lot is about the ability to create Terraform scripts based on existing resources.

So the use case is the following: You are working on projects, or part of an organization that has a lot of resources in Azure, and you want to start using terraform for a variety of reasons:

  • Being able to iterate on your infrastructure
  • Consistency of environment management
  • Code History of changes

The good news is there is a tool for that. The tool can be found here on GitHub along with a list of pre-requisites.  I've used this tool in Azure Commercial and have been really happy with the results, and I wanted to use it with Azure Government.

NOTE => The pre-reqs are listed on the az2tf tool, but one they didn't list that I needed to install was jq, using "apt-get install jq".

Next we need to configure our environment for running terraform.  For me, I ran this using the environment I had configured for Terraform.  In the Git repo, there is a PC Setup document that walks you through how to configure your environment with VS code and Terraform.  I then was able to clone the git repo, and execute the az2tf tool using a Ubuntu subsystem on my Windows 10 machine. 

Now, the tool, az2tf, was built to work with Azure Commercial, and there is one change that has to be made for it to leverage Azure Government.

Once you have the environment created, and the pre-requisites are all present, you can open a “Terminal” window in vscode, and connect to Azure Government. 

In the ./scripts/resources.sh and ./scripts/resources2.sh files, you will find the following on line 9:

ris=`printf "curl -s  -X GET -H \"Authorization: Bearer %s\" -H \"Content-Type: application/json\" https://management.azure.com/subscriptions/%s/resources?api-version=2017-05-10" $bt $sub`

Please change this line to the following:

ris=`printf "curl -s  -X GET -H \"Authorization: Bearer %s\" -H \"Content-Type: application/json\" https://management.usgovcloudapi.net/subscriptions/%s/resources?api-version=2017-05-10" $bt $sub`

You can then run the “az2tf” tool by running the following command in the terminal:

./az2tf.sh -s {Subscription ID} -g {Resource Group Name}

This will generate the scripts, and you will see a new folder created in the structure named "tf.{Subscription ID}"; inside of it will be all the configuration files needed to set up the environment.
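From there, a reasonable next step (my suggestion, not part of the tool's documented workflow) is to initialize and plan against the generated configuration to confirm it lines up with what is actually deployed:

# Run from inside the generated folder
cd tf.{Subscription ID}
terraform init
terraform plan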

Staying Organized with my “digital brain”.

Hello all! As you've probably noticed, I'm trying to do more posts and cover a wider range of topics. So for this one, I thought I'd take the time to talk about how I stay organized and stay on top of my day.

Productivity methods are a dime a dozen, and honestly everyone has their own flavor, an amalgamation of several methods to keep control of the chaos. For me, I went through a lot of iterations and then finally settled on the system I describe here to keep myself on top of everything in my life.

Now for all the different variations out there, I know lots of people are still exploring options, which is why I decided to document mine here in hopes that it might help someone else.

So let's start with tools. For me, I use Microsoft To-Do, and not just because of where I work; I was using Wunderlist and ended up switching when they ended support for Wunderlist and replaced it with To-Do. That was the driver, but I also chose it because it supports tags in the text of the items, which helps me to organize them.

So first, I break out my tasks into categories with a tag to start, the categories I use are:

  • Action: These are items that require me to take some small action, like send an email, make a phone call, reply to something, or answer a question. I try to keep these as small items.
  • Investigate: These are items that I need to research or look into, things that require me to do some digging to find an answer.
  • Discuss: These are items that I’ve made a note to get in touch with someone else and discuss a topic.
  • Build: These are my favorite kind of items, this is me taking coding action of some kind, and building something, or working out an idea. Where I am focused on the act of creating something.
  • Learn: These are items that involve my learning goals, to push myself to learn something new and keep it tactical.

Now each day, To-Do has this concept of “My Day” where you take tasks from your task list and indicate that they are going to be part of your day. Now I sort my day alphabetically so that the above items are organized in a way that lines up with how I approach them.

For me, I usually tackle as many actions as I can right away and get them out of the way in the first hour of my day, and then spend the next 6 hours on a mix of new actions and build / investigate items.  Finally, I have a set section of my week that is spent on learning activities.  The idea being, to quote Bobby Axelrod, "The successful figure out how to both handle the immediate while securing the future."

Finally, I maintain a separate list called #Waiting(…). When I am awaiting a response from someone, I change the category (like #Action) to #Waiting(name of person), move it to the waiting list, and take it off "My Day". This lets me put it out of my mind without losing track of the item.

After the category, I add the group; these are customer names for work, or a designation to describe the sub-category of the work.  For example, this is a monthly recurring task:

#Action – #Financial – Pay Monthly Bills

This allows me to quickly group by category, or pull up all "Financial" tasks if I need the big picture.

I have been using this system for the past year and it’s done a lot to help me stay organized and measure my impact not activity.

I've talked previously about how important impact is over activity. One of the downsides of many of these kinds of systems is that people tend to focus their energy on "checking off items" and not on the overall impact of those items. I find that by using this kind of grouping up front, I am able to focus my energy on tasks that are high impact, not low impact.

At the end of the day, productivity itself is a lie, and I believe that completely; the idea is not to produce more, but to make every action have a return on investment.

Another book, Essentialism by Greg McKeown, calls out this difference by basically saying that the key is to make the distinction of asking "what can I go big on?", or that it's either a "Hell Yes" or an "Absolute No". So I find this system assists me by making sure that I am focusing on tasks that will return dividends and not on smaller activities just to drive "checked" items.