
Learning how to use Power BI Embedded

So, lately I’ve been doing a lot more work with Power BI Embedded, and having a lot of discussions around implementing it within applications.

As I’ve discussed before, Power BI itself can be a complicated topic, especially when it comes to getting a handle on all the licensing. Look here for an explanation of that licensing.

But even then, another big question remains: what does it take to implement Power BI Embedded, and what kind of functionality is available? The first resource I would point you at is the Power BI Embedded Playground. This site really is fantastic, providing working samples of how to implement Power BI Embedded in a variety of use cases, along with the code to leverage in the process.

But more than that, leveraging a tool like Power BI Embedded does require further training, and here are some links to tutorials and online training you might find useful:

There are some videos out there that give a wealth of good information on Power BI Embedded, and some of them can be found here.

There is a wealth of information out there, and this is just a post to get you started, but once you get going, Power BI Embedded can make it really easy to embed amazing analytics capabilities into your applications.

How to learn TerraForm

So, as should surprise no one, I’ve been doing a lot of work with TerraForm lately, and I’m a huge fan of it in general. I recently did a post talking about the basics of modules (which can be found here).

But one common question I’ve gotten a lot is how to go about learning TerraForm. Where do I start? So I wanted to do a post gathering some education resources to help.

First, for the “what is TerraForm” question: TerraForm is an open-source product created by HashiCorp that enables infrastructure-as-code, and it is specifically designed to be cloud-vendor agnostic. If you want to learn the basics, I recommend the video I did with Steve Michelotti about TerraForm and Azure Government.
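To make that a little more concrete, here’s a minimal, hypothetical sketch of what a TerraForm configuration looks like. The resource group name and region are placeholder values I’ve picked for illustration, and the empty features block is something recent versions of the azurerm provider require:

# Minimal illustrative configuration: one provider and one resource.
# The name and region below are placeholders, not values from this post.
provider "azurerm" {
  features {}
}

resource "azurerm_resource_group" "example" {
  name     = "learning-terraform-rg"
  location = "eastus"
}

Running terraform init and then terraform plan against a file like this shows the core workflow: you describe the desired state, and TerraForm works out what needs to change.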

But more than that, the question becomes how to go about learning TerraForm. The first part is configuring your machine, and for that you can find a blog post I did here. There are some things you need to do to set up your environment for TerraForm, and without any guidance it can be confusing.

But once you know what TerraForm is, the question becomes: how do I learn how to use it?

Outside of these, what I recommend is using the module registry. One of the biggest strengths of TerraForm is its public module registry, which lets you see re-usable code written by others. I highly recommend this as a great way to see working code and play around with it. Here’s the public module registry.
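To give you a feel for it, here’s a rough sketch of what consuming a registry module looks like. Registry modules are referenced as NAMESPACE/NAME/PROVIDER; the module shown below is a real one from the registry, but the inputs and version here are illustrative, so check its registry page for the actual inputs and latest version before using it.

# Illustrative sketch of calling a module from the public registry.
# Check the module's registry page for its actual inputs and latest version;
# the values below are placeholders.
module "network" {
  source  = "Azure/network/azurerm"
  version = "~> 3.0"

  resource_group_name = "example-rg"
  address_space       = "10.0.0.0/16"
  subnet_prefixes     = ["10.0.1.0/24"]
  subnet_names        = ["frontend"]
}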

So that’s a list of some resources to get you started on learning TerraForm. There are obviously also classes from Pluralsight, Udemy, and Lynda; I haven’t leveraged those myself, but if you are a fan of structured class settings, they would be good places to start.

Working With Modules in Terraform

I’ve done a bunch of posts on TerraForm, and there seems to be more and more demand for it. If you follow this blog at all, you know that I am a huge supporter of TerraForm and the underlying idea of infrastructure-as-code, the value proposition of which I think is essential to any organization that wants to leverage the cloud.

Now that being said, it won’t take long after you start working with TerraForm before you stumble across the concept of modules, and it also won’t take long before you see the value of those modules.

So the purpose of this post is to walk you through creating your first module and give you an idea of how doing this can benefit you.

So what is a module? A module in TerraForm is a way of creating smaller, re-usable components that can make management of your infrastructure significantly easier. So let’s take, for example, a basic TerraForm template. The following will generate a single VM in a virtual network.

provider "azurerm" {
  subscription_id = "...."
}

resource "azurerm_resource_group" "rg" {
  name     = "SingleVM"
  location = "eastus"

  tags {
    environment = "Terraform Demo"
  }
}

resource "azurerm_virtual_network" "vnet" {
  name                = "singlevm-vnet"
  address_space       = ["10.0.0.0/16"]
  location            = "eastus"
  resource_group_name = "${azurerm_resource_group.rg.name}"

  tags {
    environment = "Terraform Demo"
  }
}

resource "azurerm_subnet" "vnet-subnet" {
  name                 = "default"
  resource_group_name  = "${azurerm_resource_group.rg.name}"
  virtual_network_name = "${azurerm_virtual_network.vnet.name}"
  address_prefix       = "10.0.2.0/24"
}

resource "azurerm_public_ip" "pip" {
  name                = "vm-pip"
  location            = "eastus"
  resource_group_name = "${azurerm_resource_group.rg.name}"
  allocation_method   = "Dynamic"

  tags {
    environment = "Terraform Demo"
  }
}

resource "azurerm_network_security_group" "nsg" {
  name                = "vm-nsg"
  location            = "eastus"
  resource_group_name = "${azurerm_resource_group.rg.name}"
}

resource "azurerm_network_security_rule" "ssh-access" {
  name                        = "ssh"
  priority                    = 100
  direction                   = "Outbound"
  access                      = "Allow"
  protocol                    = "Tcp"
  source_port_range           = "*"
  destination_port_range      = "*"
  source_address_prefix       = "*"
  destination_address_prefix  = "*"
  destination_port_range      = "22"
  resource_group_name         = "${azurerm_resource_group.rg.name}"
  network_security_group_name = "${azurerm_network_security_group.nsg.name}"
}

resource "azurerm_network_interface" "nic" {
  name                      = "vm-nic"
  location                  = "eastus"
  resource_group_name       = "${azurerm_resource_group.rg.name}"
  network_security_group_id = "${azurerm_network_security_group.nsg.id}"

  ip_configuration {
    name                          = "myNicConfiguration"
    subnet_id                     = "${azurerm_subnet.vnet-subnet.id}"
    private_ip_address_allocation = "dynamic"
    public_ip_address_id          = "${azurerm_public_ip.pip.id}"
  }

  tags {
    environment = "Terraform Demo"
  }
}

resource "random_id" "randomId" {
  keepers = {
    # Generate a new ID only when a new resource group is defined
    resource_group = "${azurerm_resource_group.rg.name}"
  }

  byte_length = 8
}

resource "azurerm_storage_account" "stgacct" {
  name                     = "diag${random_id.randomId.hex}"
  resource_group_name      = "${azurerm_resource_group.rg.name}"
  location                 = "eastus"
  account_replication_type = "LRS"
  account_tier             = "Standard"

  tags {
    environment = "Terraform Demo"
  }
}

resource "azurerm_virtual_machine" "vm" {
  name                  = "singlevm"
  location              = "eastus"
  resource_group_name   = "${azurerm_resource_group.rg.name}"
  network_interface_ids = ["${azurerm_network_interface.nic.id}"]
  vm_size               = "Standard_DS1_v2"

  storage_os_disk {
    name              = "singlevm_os_disk"
    caching           = "ReadWrite"
    create_option     = "FromImage"
    managed_disk_type = "Premium_LRS"
  }

  storage_image_reference {
    publisher = "Canonical"
    offer     = "UbuntuServer"
    sku       = "16.04.0-LTS"
    version   = "latest"
  }

  os_profile {
    computer_name  = "singlevm"
    admin_username = "uadmin"
  }

  os_profile_linux_config {
    disable_password_authentication = true

    ssh_keys {
      path     = "/home/uadmin/.ssh/authorized_keys"
      key_data = "{your ssh key here}"
    }
  }

  boot_diagnostics {
    enabled     = "true"
    storage_uri = "${azurerm_storage_account.stgacct.primary_blob_endpoint}"
  }

  tags {
    environment = "Terraform Demo"
  }
}

Now, that TerraForm script shouldn’t surprise anyone, but here’s the problem: what if I want to take that template and make it deploy 10 VMs instead of 1 in that virtual network?

I could take lines 64-90 and lines 103-147 (a total of 70 lines) and do some copying and pasting for the other 9 VMs, which would add 630 lines of code to my TerraForm template. Then I’d have to manually make sure they are all configured the same, and add the lines of code for the load balancer, which would probably be another 20-30….

If this hasn’t made you cringe, I give up.

The better approach would be to implement a module, so the question is: how do we do that? We start with our folder structure; I would recommend the following:

  • Project Folder
    • Modules
      • Network
      • VirtualMachine
      • LoadBalancer
    • main.tf
    • terraform.tfvars
    • secrets.tfvars

The idea here being that we create a folder to contain all of our modules, and then a separate folder for each one. When I was learning about modules, this tripped me up: you can’t have the “.tf” files for your modules in the same directory, especially if they have any similarly named parameters like “region”. If you put them in the same directory, you will get errors about duplicate variables.

Now, once you have your folders, what do we put in each of them? The answer is a main.tf. I do this because it makes it easy to reference and track the core module in my code. Being a developer and DevOps fan, I firmly believe in consistency.

So what does that look like? Below is the file I put in “Network\main.tf”:

variable "address_space" {
    type = string
    default = "10.0.0.0/16"
}

variable "default_subnet_cidr" {
    type = string 
    default = "10.0.2.0/24"
}

variable "location" {
    type = string
}

resource "azurerm_resource_group" "basic_rig_network_rg" {
    name = "vm-Network"
    location = var.location
}

resource "azurerm_virtual_network" "basic_rig_vnet" {
    name                = "basic-vnet"
    address_space       = [var.address_space]
    location            = azurerm_resource_group.basic_rig_network_rg.location
    resource_group_name = azurerm_resource_group.basic_rig_network_rg.name
}

resource "azurerm_subnet" "basic_rig_subnet" {
 name                 = "basic-vnet-subnet"
 resource_group_name  = azurerm_resource_group.basic_rig_network_rg.name
 virtual_network_name = azurerm_virtual_network.basic_rig_vnet.name
 address_prefix       = var.default_subnet_cidr
}

output "name" {
    value = "BackendNetwork"
}

output "subnet_instance_id" {
    value = azurerm_subnet.basic_rig_subnet.id
}

output "networkrg_name" {
    value = azurerm_resource_group.basic_rig_network_rg.name
}

Now there are a couple of key elements that I make use of here: you’ll notice that there is a variables section, the resource definitions themselves, and an outputs section.

It’s important to remember that every TerraForm module is self-contained; similar to how you scope parameters, you pass values into the module and then use them accordingly. And by identifying the “output” variables, I can pass things back to the main template.

Now the question becomes: what does it look like to implement it? When I go back to my root-level “main.tf”, I can now leverage the following:

module "network" {
  source = "./modules/network"

  address_space = var.address_space
  default_subnet_cidr = var.default_subnet_cidr
  location = var.location
}

A couple of key elements to reference here: the “source” property points to the module folder that contains the main.tf, and I am mapping variables at my environment level to the module. This allows me to control what gets passed into each instance of the module. So this shows how to get values into the module.
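To round that out, here’s an assumed sketch of what those root-level declarations might look like. The post doesn’t show the root variables file, so the names here simply mirror the module inputs above, with an illustrative value in terraform.tfvars:

# Root-level variable declarations (assumed; names mirror the module inputs above).
variable "address_space" {
  type    = string
  default = "10.0.0.0/16"
}

variable "default_subnet_cidr" {
  type    = string
  default = "10.0.2.0/24"
}

variable "location" {
  type = string
}

# terraform.tfvars (separate file, illustrative value)
location = "eastus"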

The next question is how you get values back out. In my root main.tf file, I would have code like the following:

network_subnet_id = module.network.subnet_instance_id

To reference it and interface with the underlying outputs, I just use module.network.___________ with the appropriate output variable name.
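For example (and this is an assumption on my part, since this post only builds the network module), that output could be wired into another module such as a virtual machine module matching the folder layout above, or surfaced as a root-level output:

# Hypothetical: pass the network module's output into another module.
# The "virtualmachine" module and its "subnet_id" input are assumed here
# to match the folder layout above; they are not defined in this post.
module "virtualmachine" {
  source = "./modules/virtualmachine"

  subnet_id = module.network.subnet_instance_id
  location  = var.location
}

# Or expose the value from the root module:
output "network_subnet_id" {
  value = module.network.subnet_instance_id
}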

Now, I want to be clear: this is probably the most simplistic module I can think of, but it illustrates how to hit the ground running and create new modules, or even use existing modules in your code.

For more information, here’s a link to the HashiCorp learn site, and here is a link to the TerraForm module registry, which is a collection of prebuilt modules that you can leverage in your code.

Weekly Links – 2/8

Another installment of Weekly Links, and we are already into February this year, which still blows my mind. But so far so good; life is pure chaos with a lot going on, so much so that I don’t know where to start. But you’re not here for that…

Down to business…

So that’s it. For fun stuff, I want to throw out the trailer for Locke and Key; I’m very excited for this since we just finished season 3 of Chilling Adventures of Sabrina.

Where do I start – Microsoft AI

In the interest of helping to navigate the information available out there, I’ve been putting together ideas for this “Where Do I Start” series on the blog. Right now, as I previously mentioned, I’ve been studying for the AI-100 exam, and as part of that effort I found a lot of resources online that I thought I’d share to help others.

There is a wealth of resources out there, and I want to focus your attention on those related to Microsoft AI and how you can leverage these services as accelerators for your own application development. Below are some of the key resources for getting started.

Learning Videos:

Additionally, I have done some work on my GitHub implementing the Face API, which is available here:

https://github.com/KevinDMack/FacialSearchDemo

Configuring Terraform Development Environment

So I’ve been doing a lot of work with a set of open-source tools lately, specifically TerraForm and Packer. TerraForm at its core is a method of implementing true infrastructure-as-code, and it does so by providing a simple, declarative language where you can describe your cloud infrastructure and then leverage resource providers to deploy it. These resource providers allow you to deploy to a variety of cloud platforms (the full list can be found here). It also provides robust support for debugging and targeting, and supports a desired-state configuration approach that makes it much easier to maintain your environments in the cloud.

Now that being said, like most open-source tools, it requires some configuration of your local development environment, and I wanted to put this post together to describe it. Below are the steps to configure your environment.

Step 1: Install the Windows Subsystem for Linux on your Windows 10 Machine

To start with, you will need to be able to leverage Bash as part of the Windows Subsystem for Linux. You can enable this on a Windows 10 machine by following the steps outlined in this guide:

https://docs.microsoft.com/en-us/windows/wsl/install-win10

Once you’ve completed this step, you will be able to move forward with VS Code and the other components required.

Step 2: Install VS Code and Terraform Plugins

For this guide we recommend VS Code as your editor; VS Code works on a variety of operating systems and is a very lightweight code editor.

You can download VS Code from this link:

https://code.visualstudio.com/download

Once you’ve downloaded and installed VS Code, we need to install the VS Code extension for Terraform.

Search for the Terraform extension in the Extensions view, then click “Install” and “Reload” when completed. This will give you IntelliSense and support for the different Terraform file types.

Step 3: Opening Terminal

You can then perform the remaining steps from within VS Code. Go to the “View” menu and select “Integrated Terminal”. You will see the terminal appear at the bottom.

By default, the terminal is set to PowerShell; type bash to switch to the Bash shell. You can change your default shell by following this guidance – https://code.visualstudio.com/docs/editor/integrated-terminal#_configuration

Step 4: Install Unzip on Subsystem

Run the following command to install “unzip” on your Linux subsystem; this will be required to unzip both Terraform and Packer.

sudo apt-get install unzip

Step 5: Install TerraForm

You will need to execute the following commands to download and install Terraform, starting with getting the latest version of Terraform.

Go to this link:

https://www.terraform.io/downloads.html

And copy the link for the appropriate version of the binaries for TerraForm.

Go back to VS Code, and enter the following commands:

wget {url for terraform}
unzip {terraform.zip file name}
sudo mv terraform /usr/local/bin/terraform
rm {terraform.zip file name}
terraform --version

Step 6: Install Packer

To start with, we need to get the most recent version of Packer. Go to the following URL and copy the link for the appropriate version.

https://www.packer.io/downloads.html

Go back to VS Code and execute the following commands:

wget {packer url} 
unzip {packer.zip file name} 
sudo mv packer /usr/local/bin/packer
rm {packer.zip file name}

Step 7: Install Azure CLI 2.0

Go back to VS Code again and download/install the Azure CLI. To do so, execute the steps and commands found here:

https://docs.microsoft.com/en-us/cli/azure/install-azure-cli-apt?view=azure-cli-latest

Step 8: Authenticating against Azure

Once this is done, you are in a place where you can run Terraform projects, but before you do, you need to authenticate against Azure. This can be done by running the commands in the Bash terminal described at the link below:

https://docs.microsoft.com/en-us/azure/azure-government/documentation-government-get-started-connect-with-cli

Once that is completed, you will be authenticated against Azure and will be able to run Terraform against your various environments.

NOTE: Your authentication token will expire. Should you get a message about an expired token, enter the following command to refresh it:

az account get-access-token 

Token lifetimes are described here – https://docs.microsoft.com/en-us/azure/active-directory/develop/active-directory-token-and-claims#access-tokens

After that you are ready to use Terraform on your local machine.
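If you want to sanity-check the setup, a minimal configuration like the sketch below can be saved as main.tf; the subscription ID is a placeholder variable rather than a real value, and with an active “az login” session the azurerm provider can authenticate through the Azure CLI without hard-coded credentials (the empty features block is required by recent versions of the provider):

# Minimal configuration to verify the toolchain; the subscription ID is a
# placeholder variable. With an active "az login" session, the azurerm
# provider authenticates through the Azure CLI.
variable "subscription_id" {
  type = string
}

provider "azurerm" {
  features {}
  subscription_id = var.subscription_id
}

Running terraform init followed by terraform plan in that directory confirms that the binary, the provider download, and your Azure authentication are all wired up.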

Where do I start – Service Fabric?

So containers have become an essential part of modern application development. I would go as far as to say that containers and microservices have had an impact on software development similar to that of “Object Oriented Programming”.

Now that being said, I have been talking to a lot of people who have monolithic applications and are looking for a way to break them down into a microservice approach, while still using existing infrastructure, and who don’t necessarily want to deploy on Linux for a variety of reasons.

For that scenario, there is an established technology that can leverage your Docker containers and orchestrate them in a Windows environment: Service Fabric.

I find that the learning curve of breaking a monolithic application into microservices is a lot easier to swallow with Service Fabric; it helps you break up your applications to make better use of the compute on the machines in your cluster, and you can still leverage Docker.

Below are some links to help you get started with Service Fabric if you are looking for information on this technology:

Concepts and Architecture:

Service Fabric Overview:

Coding Samples:

Videos: