
How to leverage templates in YAML Pipelines

So it’s no secret that I really am a big fan of leveraging DevOps to extend your productivity. I’ve had the privilege of working on smaller teams that have accomplished far more than anyone could have predicted, and honestly the key principle at the center of those efforts is treating everything as a repeatable activity.

Now, if you look at the idea of a microservice application, at its core it’s several different services that are independently deployable, and that statement alone can create a lot of excess technical debt from a DevOps perspective.

For example, if I encapsulate all my logic into separate Python modules, I need a pipeline for each module, and those pipelines look almost identical.

Or if I’m deploying Docker containers, the pipelines for each service likely look almost identical. See the pattern here?

Now imagine you do this and build a robust application with 20-30 services running in containers. In the scenario above, that means if I have to change their deployment pipeline, say by adding a new environment, I have to make the same changes to 20-30 pipelines.

Thankfully, Azure DevOps (ADO) has an answer to this in the form of templates. The idea is that we create a repo within ADO for our deployment templates, which contain the majority of the logic to deploy our services, and then call those templates from each service’s pipeline.

For this example, I’ve built a template that I use to build a Docker container and push it to a container registry, which is a pretty common practice.

The logic to implement it is fairly simple and looks like the following:

resources:
  repositories:
    - repository: templates
      type: git
      name: "TestProject/templates"

Using the above code will enable your pipeline to pull from a separate git repo, and then you can use the following code to create a sample template:

parameters:
  - name: imageName
    type: string
  
  - name: containerRegistryName
    type: string

  - name: repositoryName
    type: string

  - name: containerRegistryConnection
    type: string

  - name: tag
    type: string

steps:
# Build the container image, tagging it with both the supplied tag and "latest".
- task: Bash@3
  inputs:
    targetType: 'inline'
    script: 'docker build -t="${{ parameters.containerRegistryName }}/${{ parameters.imageName }}:${{ parameters.tag }}" -t="${{ parameters.containerRegistryName }}/${{ parameters.imageName }}:latest" -f="./Dockerfile" .'
    workingDirectory: '$(Agent.BuildDirectory)/container'
  displayName: "Building docker container"

# Push both tags to the registry via the service connection passed in as a parameter.
- task: Docker@2
  inputs:
    containerRegistry: '${{ parameters.containerRegistryConnection }}'
    repository: '${{ parameters.imageName }}'
    command: 'push'
    tags: |
      ${{ parameters.tag }}
      latest
  displayName: "Pushing container to registry"

Finally, you can go to any YAML pipeline in your project and use the following to reference the template:

steps:
- template: /containers/container.yml@templates
  parameters:
    imageName: $(imageName)
    containerRegistryName: $(containerRegistry)
    repositoryName: $(repositoryName)
    containerRegistryConnection: 'AG-ASCII-GSMP-boxaimarketopsacr'
    tag: $(tag)
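
To tie the pieces together, here is a minimal sketch of what a complete calling pipeline can look like. The variable values below are placeholders I’ve made up for illustration; only the repository reference and the service connection name come from the snippets above:

trigger:
  branches:
    include:
      - main

resources:
  repositories:
    - repository: templates
      type: git
      name: "TestProject/templates"

variables:
  imageName: 'sample-service'                  # placeholder image name
  containerRegistry: 'myregistry.azurecr.io'   # placeholder registry login server
  repositoryName: 'sample-service'             # placeholder repository name
  tag: '$(Build.BuildNumber)'                  # tag each image with the build number

steps:
# Call the shared template from the templates repo declared above.
- template: /containers/container.yml@templates
  parameters:
    imageName: $(imageName)
    containerRegistryName: $(containerRegistry)
    repositoryName: $(repositoryName)
    containerRegistryConnection: 'AG-ASCII-GSMP-boxaimarketopsacr'
    tag: $(tag)

With this in place, updating the deployment process for all of your services means changing the template once, rather than editing 20-30 pipelines.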
Poly-Repo vs Mono-Repo

So I’ve been doing a lot of DevOps work recently, and one of the bigger topics of discussion I’ve been a part of is this idea of Mono-Repo vs Poly-Repo. I thought I would weigh in with some of my thoughts on this.

So first and foremost, let’s talk about what the difference is. Mono-Repo vs Poly-Repo actually refers to how you store your source control. Now I don’t care if you are using Azure DevOps, GitHub, BitBucket, or any other solution. The question is whether you put the entirety of your source code in a single repository, or split it up into multiple repositories.

Now this might not sound like a big deal, or might not seem to matter depending on the type of code you develop, but it ties directly into the idea of microservices. If you think about microservices and the nature of them, the debate about repos becomes apparent.

This can be a hotly debated statement, but most modern application development involves distributed solutions and microservice architectures. Whether you are deploying to a serverless environment or to Kubernetes, most modern applications are composed of a lot of completely separate microservices that together provide the total functionality.

And this is where the debate comes into play: say your application is actually made up of a series of smaller microservice containers that combine to provide the overall functionality. How do you store them in source control? Does each service get its own repository, or do you have one repository with all of your services in folders?

When we look at Mono-Repo, it’s not without benefits:

  • Easier to interact with
  • Easier to handle changes that cut across multiple services
  • Pull Requests are all localized

But it isn’t without its downsides:

  • Harder to control from a security perspective
  • Makes it easier to inject bad practices
  • Can make versioning much more difficult

And in a lot of ways, Poly-Repo reads like the opposite of the above.

For me, I prefer poly-repo, and I’ll tell you why. Ultimately it can create some more overhead, but I find it leads to better isolation and enforcement of good development practices. By making each repo responsible for containing a single service and all of its components, you get a much cleaner development experience, and it becomes much easier to maintain that isolation and avoid letting bad practices slip in.

Now I do believe in making repos for single purposes, and that includes things like a templates repo for deployment components and GitOps pieces. But I like the idea that to make a change to a service, the workflow is:

  • Clone the Services Repo
  • Make changes
  • Test changes
  • PR changes
  • PR kicks off automated deployment

It helps to keep each of these services independently deployable in this model, which is ultimately where you want to be, as opposed to building multiple services at once.
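
For illustration, a minimal sketch of a per-service pipeline that fires once a PR is completed into main might look like the following. The file name and steps are placeholders, and PR validation itself would be configured through a branch policy on the repo:

# azure-pipelines.yml at the root of a single service's repo (hypothetical).
# A completed PR merges into main, which triggers this pipeline and deploys
# only this one service.
trigger:
  branches:
    include:
      - main

steps:
- script: echo "build, test, and deploy this single service"
  displayName: 'Placeholder for service-specific build and deploy steps'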

Building CI / CD for Terraform

I’ve made no secret on this blog of how I feel about Terraform, and how I believe infrastructure as code is absolutely essential to managing any cloud-based deployment long term.

There are so many benefits to leveraging these technologies. And for me one of the biggest is that you can manage your infrastructure deployments in the exact same manner as your code changes.

If you’re curious about the benefits of a CI/CD pipeline, there are a lot of posts out there. But for this post I wanted to talk about how you can take those Terraform templates and build out a CI/CD pipeline to deploy them to your environments.

So for this project, I’ve built a Terraform template that deploys a lot of resources out to 3 environments. I wanted to do this in a cost-saving manner, so I manage it in the following way:

  • Development: always exists, but in a scaled-down capacity to keep costs down.
  • Test: only created when we are ready to begin testing, and destroyed afterward.
  • Production: where our production application will reside.

Now for the sake of this exercise, I will only be building a deployment pipeline for the Terraform code, and in a later post will examine how to integrate this with code changes.

Now, as with everything, there are lots of ways to make something work. I’m just showing an approach that has worked for me.

Configuring your template

The first part of this is to build out your template so that configuration changes can easily be made via the automated deployment pipeline.

The best way I’ve found to do this is variables, and whether you are doing automated deployment or not, I highly recommend using them. If you ever have more than just yourself working on a Terraform template, or plan to create more than one environment, you will absolutely need variables, so it’s generally a good practice.

For the sake of this example, I declared the following variables in a file called “variables.tf”:

variable "location" {
    default = "usgovvirginia"
}

variable "environment_code" {
    description = "The environment code required for the solution.  "
}

variable "deployment_code" {
    description = "The deployment code of the solution"
}

variable "location_code" {
    description = "The location code of the solution."
}

variable "subscription_id" {
    description = "The subscription being deployed."
}

variable "client_id" {
    description = "The client id of the service prinicpal"
}

variable "client_secret" {
    description = "The client secret for the service prinicpal"
}

variable "tenant_id" {
    description = "The client secret for the service prinicpal"
}

variable "project_name" {
    description = "The name code of the project"
    default = "cds"
}

variable "group_name" {
    description = "The name put into all resource groups."
    default = "CDS"
}

Also worth noting are the client id, client secret, subscription id, and tenant id above. With Azure DevOps, you are going to need to deploy using a service principal, so these will be important.

Then in your main.tf, you will have the following:

provider "azurerm" {
    subscription_id = var.subscription_id
    version = "=2.0.0"

    client_id = var.client_id
    client_secret = var.client_secret
    tenant_id = var.tenant_id
    environment = "usgovernment"

    features {}
}

Also worth mentioning: when I’m working with my template, I’m using a file called “variables.tfvars”, which looks like the following:

location = "usgovvirginia"
environment_code = "us1"
deployment_code = "d"
location_code = "us1"
subscription_id = "..."
group_name = "CDS"

Configuring the Pipeline

This will be important later as you build out the automation. From here, the next step is to build out your Azure DevOps pipeline, and for the sample I’m going to show, I’m using a YAML pipeline.

What I did was create the “variables.tfvars” file as part of my deployment:

- script: |
    touch variables.tfvars
    echo -e "location = \""$LOCATION"\"" >> variables.tfvars
    echo -e "environment_code = \""$ENVIRONMENT_CODE"\"" >> variables.tfvars
    echo -e "deployment_code = \""$DEPLOYMENT_CODE"\"" >> variables.tfvars
    echo -e "location_code = \""$LOCATION_CODE"\"" >> variables.tfvars
    echo -e "subscription_id = \""$SUBSCRIPTION_ID"\"" >> variables.tfvars
    echo -e "group_name = \""$GROUP_NAME"\"" >> variables.tfvars
    echo -e "client_id = \""$SP_APPLICATIONID"\"" >> variables.tfvars
    echo -e "tenant_id = \""$SP_TENANTID"\"" >> variables.tfvars
  displayName: 'Create variables Tfvars'

Now the next question is where those values come from: I’ve declared them as variables on the pipeline itself.
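
I can’t reproduce the exact values here, but a rough sketch of that variables block, with names assumed to match the environment variables used in the script above, would look something like this. Non-secret pipeline variables are automatically exposed to script steps as upper-cased environment variables, which is why $LOCATION and friends resolve:

variables:
  LOCATION: 'usgovvirginia'
  ENVIRONMENT_CODE: 'us1'
  DEPLOYMENT_CODE: 'd'
  LOCATION_CODE: 'us1'
  SUBSCRIPTION_ID: '...'
  GROUP_NAME: 'CDS'
  SP_APPLICATIONID: '...'   # consider storing service principal values as secrets or in a variable group
  SP_TENANTID: '...'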

From there, because I’m deploying to Azure Government, I added an Azure CLI step to make sure my command-line context is pointed at Azure Government, doing the following:

- task: AzureCLI@2
  inputs:
    azureSubscription: 'Kemack - Azure Gov'
    scriptType: 'bash'
    scriptLocation: 'inlineScript'
    inlineScript: |
      az cloud set --name AzureUSGovernment
      az account show

So how do we do Terraform plan / apply? The answer is pretty straightforward: I installed an extension from the marketplace for this, and used the “Terraform Tool Installer” task, as follows:

- task: TerraformInstaller@0
  inputs:
    terraformVersion: '0.12.3'
  displayName: "Install Terraform"

After that, it becomes pretty straightforward to implement:

- script: |
    terraform init
  displayName: 'Terraform - Run Init'

- script: |
    terraform validate
  displayName: 'Terraform - Validate tf'

- script: |
    terraform plan -var-file variables.tfvars -out=tfPlan.txt
  displayName: 'Terraform - Run Plan'

- script: |
    echo $BUILD_BUILDNUMBER".txt"
    echo $BUILD_BUILDID".txt"
    az storage blob upload --connection-string $TFPLANSTORAGE -f tfPlan.txt -c plans -n $BUILD_BUILDNUMBER"-plan.txt"
  displayName: 'Upload Terraform Plan to Blob'

- script: |
    terraform apply -auto-approve -var-file variables.tfvars
  displayName: 'Terraform - Run Apply'

Now the cool part about the above is that I took it a step further: I created a storage account in Azure and added its connection string to the pipeline as a secret. I then built logic so that when you run this pipeline, it runs a plan ahead of the apply, outputs that plan to a text file, and saves it to the storage account with the build number as the file name.

I personally like this as it creates a log of the activities performed during the automated build moving forward.
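
One caveat worth calling out: secret variables are not automatically exposed to script steps as environment variables, so the connection string has to be mapped into the upload step explicitly. Assuming the secret is named tfplanstorage, that mapping looks something like this:

- script: |
    az storage blob upload --connection-string $TFPLANSTORAGE -f tfPlan.txt -c plans -n $BUILD_BUILDNUMBER"-plan.txt"
  displayName: 'Upload Terraform Plan to Blob'
  env:
    # Map the secret pipeline variable (hypothetical name) into the script's environment.
    TFPLANSTORAGE: $(tfplanstorage)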

Now I do plan on refining this and taking steps to create more automation around environments, so more to come.

Weekly Links – 9/9

Welcome back everyone for another weekly links post. The important note here is that it’s fall, which means kids are in school, leaves are going to start turning and ….


So down to business:

Development:

Cloud:

Audio / Video:

Fun Stuff:

I’m a bit of a gamer, as he said to the surprise of no one. And it’s official: on September 5th, Gears of War 5 became available for its Early Access period, with the worldwide release on September 10th. Very awesome. I’ve always enjoyed Gears of War because it is one seriously intense game.

Here’s the article. Warning, Mature audiences.