Category: Technology

The world of technology is one of innovation and excitement. For me, technology is the one force in the world that can truly transform lives and improve the world; it’s the one field where we build something that leaves the world a better place.

Leveraging Private NuGet Packages to Support Micro-services

One of the most common things I see come up as you build out a micro-service architecture is that there is a fair bit of code reuse across services. Managing that shared code can be painful and can create its own kind of configuration drift, which leads to management issues.

Now given the above statement, the first response is probably “But if each micro-service is an independent unit, and meant to be self-contained, how can there be code re-use?” My response is that just because the services are ultimately independently deployable does not mean that there isn’t code that will be reused between them.

For example, if you are building an OrderProcessingService, it might rely on classes that support things like:

  • Reading from a queue
  • Pushing to a queue
  • Logging
  • Configuration
  • Monitoring
  • Error Handling

These elements should not be wildly different from one micro-service to the next, and should share common implementations. For example, if you are leveraging Key Vault for your configuration, odds are you will have the same classes implemented across every service.

But this in itself creates its own challenges, including but not limited to:

  • Library Version Management: If each service is its own deployable unit, then as you make changes to common libraries you want to be able to manage versions on an ongoing basis.
  • Creating a central library for developers: Allowing developers to manage the deployment of, and changes to, versions of that code in a centralized way is equally important.
  • Reduce Code Duplication: Personally I have a pathological hatred of code duplication. I’m of the belief that if you are copying and pasting code, you did something wrong. There are plenty of tools to handle this type of situation without copy and paste adding to technical debt.

Now everyone is aware of NuGet, and I would say it would be pretty hard to do software development without it right now. But what you may not know is that Azure DevOps makes it easy to create a private NuGet feed, and then to package and publish NuGet packages via CI/CD.

Creating a NuGet Feed

The process of creating a feed is actually pretty straightforward: it involves going to a section in Azure DevOps called “Artifacts”.

From there, select “Create Feed”, give the feed a name, and specify who has rights to use it.

And from here it’s pretty easy to set up the publishing of a project as a NuGet package.

Creating a package in Azure Pipelines

A few years ago this was actually a really painful process, but now it’s pretty easy. You don’t have to do anything in Visual Studio to support this. There are options there, but to me a NuGet package is a deployment activity, so I personally believe it should be handled in the pipeline.

The first thing you need to do is specify the package details. You can fill these in under “Properties” on the project.

These are important, as this is the information your developers will see in their NuGet feed.

From here the next step is to configure your pipeline to enable CI/CD. The good news is this is very easy using the following pieces:

- task: DotNetCoreCLI@2
  inputs:
    command: pack
    versioningScheme: byEnvVar
    versionEnvVar: BUILD_BUILDNUMBER
  displayName: 'dotnet pack $(buildConfiguration)'

- task: NuGetAuthenticate@0
  displayName: 'NuGet Authenticate'

- task: NuGetCommand@2
  inputs:
    command: push
    publishVstsFeed: 'Workshop/WellDocumentedNerd'
    allowPackageConflicts: true
  displayName: 'NuGet push'

Now in the above, I have specified BUILD_BUILDNUMBER as the source for new version numbers. I do this because I find it’s easier to ensure that versions are maintained properly in the NuGet feed.
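One note: with versioningScheme set to byEnvVar, the build number is used verbatim as the package version, so it helps to set it to something SemVer-friendly. A minimal sketch (the 1.0 prefix is an arbitrary choice, not from the original pipeline) is to set the pipeline’s name property, which is what BUILD_BUILDNUMBER is populated from:

# Sets Build.BuildNumber (exposed to tasks as BUILD_BUILDNUMBER) to a
# SemVer-style value such as 1.0.42, so it can be used directly as the package version.
name: 1.0.$(Rev:r)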

One thing of note: the NuGet Authenticate step is absolutely essential to ensure that you are authenticated with the appropriate context.

Now after executing that pipeline, I would get the following in my NuGet Feed.

Consuming NuGet in a project

Now, how do we make this available to our developers and to our build agents? This process is very easy. If you go back to the “Artifacts” section, you will see the options for connecting to the feed.

The best part is that Azure DevOps will give you the nuget.config XML required when you select “dotnet”, and it will look something like this:

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <packageSources>
    <clear />

    <add key="WellDocumentedNerd" value="....index.json" />

  </packageSources>
</configuration>

After this file is added to your project, whenever your build pipeline builds the code it will consider this new package source during the NuGet restore.
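If you are using the DotNetCoreCLI task for the restore, you can point it at that nuget.config explicitly; a minimal sketch (the path assumes the file is committed at the repository root):

- task: DotNetCoreCLI@2
  inputs:
    command: restore
    feedsToUse: 'config'
    # Assumed location of the nuget.config shown above.
    nugetConfigPath: 'nuget.config'
  displayName: 'dotnet restore'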

Why go through this?

As I mentioned above, one of the biggest headaches of a micro-service architecture, for all of its benefits, is that it creates a lot of moving parts. Managing any common code can become difficult if you have to replicate that code between projects.

So this creates a nice, easy solution that allows you to manage a separate code repository of private, domain-specific class libraries, with versioning that lets each service take a different version of a library while remaining independently deployable.

A Few Gotchas for Resiliency

So I’ve been doing a lot of work over the past few months around availability, SLAs, and Chaos Engineering, and in that time I’ve come across a few “gotchas” that are important to sidestep if you want to build stronger availability into your solutions. This is by no means meant to be an exhaustive list, but it’s at least a few tips from my experience that can help if you are starting down this road:

Gotcha #1 – Pulling too many services into the calculation

As I’ve discussed in many other posts, the first step of a resiliency review of your application is to figure out which functions are absolutely essential and governed by the SLA. The simple fact is that an SLA is a business decision and agreement, so just like any good contract, you start by figuring out the scope of what is covered.

But let’s boil this down to simple math; composite SLA calculations are described in depth here.

Take the following as an example:

Service       SLA
App Service   99.95%
Azure Redis   99.9%
Azure SQL     99.99%

Based on the above, the calculation for the SLA gives us the following:

.9995 * .999 * .9999 = .9984 = 99.84%

Now, if I look at the above (more on the specifics in Gotcha #2) and I can remove Redis, lowering the number of services involved, the calculation changes to the following:

.9995 * .9999 = .9994 = 99.94%

Notice how removing an item from the SLA causes it to increase. Part of the reason here is that I removed the component with the lowest SLA, but every item in the calculation impacts the final number, so wherever possible we should scope our calculations to only the services that actually support the SLA.

Gotcha #2 – Using a caching layer incorrectly

Caching tiers are an essential part of any solution. When Redis was first created, caching tiers were seen as something you would implement only if you had aggressive performance requirements. But these days the demands on software solutions are so high that I would argue all major solutions have a caching tier of some kind.

Now, to slightly contradict myself: those caching tiers, while important to the performance of an application, should not be required as part of your SLA or availability calculation if implemented correctly.

What I mean by that is that caching tiers are meant to be transient: they can be dropped at any time, and the application should be able to function without them rather than relying on them as a persistence store. The most common pattern is the following:

  • User takes an action that requests data.
  • Application reaches down to data store to retrieve data.
  • Application puts data in Redis cache.
  • Application returns requested data.

The above has no issues at all; that’s what Redis is for. The problem is when the read path looks like this:

  • User takes an action that requests data.
  • Application pulls data from Redis and returns.
  • If data is not available, application errors out.

Given the ephemeral nature of caches, and the fact that these caches can be very hard to replicate, your application should be smart enough that if the data isn’t in Redis, it will go get it from the data store.

By implementing that fallback, and configuring your application to use its cache only for performance optimization, you can effectively remove the Redis cache from the SLA calculation.
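As a rough illustration, a cache-aside read might look like the sketch below (the cache and store interfaces here are hypothetical placeholders, not from any specific solution):

using System;
using System.Threading.Tasks;

public class OrderReader
{
    private readonly ICache _cache;      // thin wrapper over Redis (hypothetical interface)
    private readonly IOrderStore _store; // the system-of-record data store (hypothetical interface)

    public OrderReader(ICache cache, IOrderStore store)
    {
        _cache = cache;
        _store = store;
    }

    public async Task<Order> GetOrderAsync(string orderId)
    {
        // Try the cache first, but never treat it as required.
        Order order = null;
        try
        {
            order = await _cache.GetAsync<Order>(orderId);
        }
        catch (Exception)
        {
            // Cache unavailable: fall through to the data store instead of failing.
        }

        if (order != null)
            return order;

        // Cache miss (or cache down): read from the source of truth...
        order = await _store.GetOrderAsync(orderId);

        // ...and repopulate the cache on a best-effort basis.
        try { await _cache.SetAsync(orderId, order, TimeSpan.FromMinutes(10)); } catch (Exception) { }

        return order;
    }
}

public interface ICache
{
    Task<T> GetAsync<T>(string key);
    Task SetAsync<T>(string key, T value, TimeSpan ttl);
}

public interface IOrderStore
{
    Task<Order> GetOrderAsync(string orderId);
}

public class Order { public string Id { get; set; } }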

Gotcha #3 – Using the right event store

Now the other way I’ve seen Redis or caching tiers misused is as an event data store. A practice I’ve seen done over and over again is to leverage Redis to store JSON objects as part of an event store because of the performance benefits. There are more appropriate technologies that can support this while also managing costs:

  • Cosmos DB: Cosmos is designed exactly for this purpose, providing high performance and high availability for your applications. It does this by allowing you to configure the appropriate write strategy.
  • Blob Storage: Blob storage can also be used as an event store by writing objects to blobs; although not my first choice, it is a viable option for managing costs.
  • Other database technologies: There are a myriad of potential options here, from MariaDB, PostgreSQL, MySQL, SQL Server, etc. All of them perform this operation better.

Gotcha #4 – Mismanaged Configuration

I did a whole post on this, but the idea that configuration cannot be changed without causing an application event is always a concern. You should be able to change an application’s endpoint without any major hiccups in its operation.


Building CI / CD for Terraform

I’ve made no secret on this blog of how I feel about Terraform, and how I believe infrastructure-as-code is absolutely essential to managing any cloud-based deployment long term.

There are so many benefits to leveraging these technologies. And for me one of the biggest is that you can manage your infrastructure deployments in the exact same manner as your code changes.

If you’re curious about the benefits of a CI/CD pipeline, there are a lot of posts out there. But for this post I wanted to talk about how you can take those Terraform templates and build out a CI/CD pipeline to deploy them to your environments.

So for this project, I’ve built a Terraform template that deploys a lot of resources out to three environments. I wanted to do this in a cost-saving manner, so I manage it in the following way:

  • Development: always exists, but in a scaled-down capacity to keep costs down.
  • Test: only created when we are ready to begin testing, and destroyed afterwards.
  • Production: where our production application will reside.

Now for the sake of this exercise, I will only be building a deployment pipeline for the Terraform code, and in a later post will examine how to integrate this with code changes.

Now as with everything, there are lots of ways to make something work. I’m just showing an approach that has worked for me.

Configuring your template

The first part of this is to build out your template so that configuration changes can easily be made via the automated deployment pipeline.

The best way I’ve found to do this is variables, and whether you are doing automated deployment or not, I highly recommend using them. If you ever have more than just yourself working on a Terraform template, or plan to create more than one environment, you will absolutely need variables. It’s generally a good practice.

For the sake of this example, I declared the following variables in a file called “variables.tf”:

variable "location" {
    default = "usgovvirginia"
}

variable "environment_code" {
    description = "The environment code required for the solution.  "
}

variable "deployment_code" {
    description = "The deployment code of the solution"
}

variable "location_code" {
    description = "The location code of the solution."
}

variable "subscription_id" {
    description = "The subscription being deployed."
}

variable "client_id" {
    description = "The client id of the service prinicpal"
}

variable "client_secret" {
    description = "The client secret for the service prinicpal"
}

variable "tenant_id" {
    description = "The client secret for the service prinicpal"
}

variable "project_name" {
    description = "The name code of the project"
    default = "cds"
}

variable "group_name" {
    description = "The name put into all resource groups."
    default = "CDS"
}

Also worth noting are the client id, client secret, subscription id, and tenant id above. With Azure DevOps you are going to need to deploy using a service principal, so these will be important.

Then in your main.tf, you will have the following:

provider "azurerm" {
    subscription_id = var.subscription_id
    version = "=2.0.0"

    client_id = var.client_id
    client_secret = var.client_secret
    tenant_id = var.tenant_id
    environment = "usgovernment"

    features {}
}

It’s worth mentioning that when I’m working with my template locally, I’m using a file called “variables.tfvars”, which looks like the following:

location = "usgovvirginia"
environment_code = "us1"
deployment_code = "d"
location_code = "us1"
subscription_id = "..."
group_name = "CDS"

Configuring the Pipeline

This will be important later, as you build out the automation. From here the next step is to build out your Azure DevOps pipeline, and for this sample I’m using a YAML pipeline.

So what I did was generate a “variables.tfvars” file as part of my deployment:

- script: |
    touch variables.tfvars
    echo -e "location = \""$LOCATION"\"" >> variables.tfvars
    echo -e "environment_code = \""$ENVIRONMENT_CODE"\"" >> variables.tfvars
    echo -e "deployment_code = \""$DEPLOYMENT_CODE"\"" >> variables.tfvars
    echo -e "location_code = \""$LOCATION_CODE"\"" >> variables.tfvars
    echo -e "subscription_id = \""$SUBSCRIPTION_ID"\"" >> variables.tfvars
    echo -e "group_name = \""$GROUP_NAME"\"" >> variables.tfvars
    echo -e "client_id = \""$SP_APPLICATIONID"\"" >> variables.tfvars
    echo -e "tenant_id = \""$SP_TENANTID"\"" >> variables.tfvars
  displayName: 'Create variables Tfvars'

The next question is where those values come from: I’ve declared them as variables on the pipeline itself.
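For reference, a minimal sketch of those pipeline variables (the values are placeholders, not the actual ones; secrets are better kept as secret variables or in a variable group, since secret variables are not exposed as environment variables automatically):

variables:
  LOCATION: 'usgovvirginia'
  ENVIRONMENT_CODE: 'us1'
  DEPLOYMENT_CODE: 'd'
  LOCATION_CODE: 'us1'
  GROUP_NAME: 'CDS'
  SUBSCRIPTION_ID: '<subscription guid>'
  SP_APPLICATIONID: '<service principal application id>'
  SP_TENANTID: '<tenant id>'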

From there, because I’m deploying to Azure Government, I added an Azure CLI step to make sure my command-line context is pointed at Azure Government:

- task: AzureCLI@2
  inputs:
    azureSubscription: 'Kemack - Azure Gov'
    scriptType: 'bash'
    scriptLocation: 'inlineScript'
    inlineScript: |
      az cloud set --name AzureUSGovernment
      az account show

So how do we do the Terraform plan / apply? The answer is pretty straightforward. I installed this extension and used the “Terraform Tool Installer” task, as follows:

- task: TerraformInstaller@0
  inputs:
    terraformVersion: '0.12.3'
  displayName: "Install Terraform"

After that it becomes pretty straightforward to implement:

- script: |
    terraform init
  displayName: 'Terraform - Run Init'

- script: |
    terraform validate
  displayName: 'Terraform - Validate tf'

- script: |
    terraform plan -var-file variables.tfvars -out=tfPlan.txt
  displayName: 'Terraform - Run Plan'

- script: |
    echo $BUILD_BUILDNUMBER".txt"
    echo $BUILD_BUILDID".txt"
    az storage blob upload --connection-string $TFPLANSTORAGE -f tfPlan.txt -c plans -n $BUILD_BUILDNUMBER"-plan.txt"
  displayName: 'Upload Terraform Plan to Blob'

- script: |
    terraform apply -auto-approve -var-file variables.tfvars
  displayName: 'Terraform - Run Apply'
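One thing the snippets above don’t show is how var.client_secret reaches Terraform, since it isn’t written into the tfvars file. One option (an assumption on my part, not necessarily how the original pipeline did it) is to map a secret pipeline variable into a TF_VAR_ environment variable on the plan and apply steps:

- script: |
    terraform apply -auto-approve -var-file variables.tfvars
  displayName: 'Terraform - Run Apply'
  env:
    # Terraform reads any TF_VAR_<name> environment variable as the value of var.<name>,
    # so the secret never has to be written to disk. SP_CLIENTSECRET is assumed to be
    # a secret pipeline variable.
    TF_VAR_client_secret: $(SP_CLIENTSECRET)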

Now the cool part about the above is that I took it a step further: I created a storage account in Azure and added its connection string as a secret. I then built logic so that when you run this pipeline, it runs a plan ahead of the apply, outputs that plan to a text file, and saves it in the storage account with the build number as the file name.

I personally like this as it creates a log of the activities performed during the automated build moving forward.

Now I do plan on refining this and building more automation around environments, so more to come.

Terraform – Using the new Azure AD Provider

By using Terraform you gain a lot of benefits, including being able to manage all parts of your infrastructure in HCL, which makes it rather easy to maintain.

A key part of that is not only being able to manage the resources you create, but also access to them, by creating and assigning service principals. In older versions this was possible using azurerm_azuread_application and related elements. I had previously done this in the Kubernetes template I have on GitHub.

Now, with version 2.0 of the AzureRM provider, there have been some pretty big changes, including removing all of the Azure AD elements and moving them to their own provider, and the question becomes “How does that change my template?”

Below is an example of the older approach; it shows the creation of a service principal with a random password, and an access policy for a key vault.

resource "random_string" "kub-rs-pd-kv" {
  length = 32
  special = true
}

data "azurerm_subscription" "current" {
    subscription_id =  "${var.subscription_id}"
}
resource "azurerm_azuread_application" "kub-ad-app-kv1" {
  name = "${format("%s%s%s-KUB1", upper(var.environment_code), upper(var.deployment_code), upper(var.location_code))}"
  available_to_other_tenants = false
  oauth2_allow_implicit_flow = true
}

resource "azurerm_azuread_service_principal" "kub-ad-sp-kv1" {
  application_id = "${azurerm_azuread_application.kub-ad-app-kv1.application_id}"
}

resource "azurerm_azuread_service_principal_password" "kub-ad-spp-kv" {
  service_principal_id = "${azurerm_azuread_service_principal.kub-ad-sp-kv1.id}"
  value                = "${element(random_string.kub-rs-pd-kv.*.result, count.index)}"
  end_date             = "2020-01-01T01:02:03Z"
}

resource "azurerm_key_vault" "kub-kv" {
  name = "${var.environment_code}${var.deployment_code}${var.location_code}lkub-kv1"
  location = "${var.azure_location}"
  resource_group_name = "${azurerm_resource_group.management.name}"

  sku {
    name = "standard"
  }

  tenant_id = "${var.keyvault_tenantid}"

  access_policy {
    tenant_id = "${var.keyvault_tenantid}"
    object_id = "${azurerm_azuread_service_principal.kub-ad-sp-kv1.id}"

    key_permissions = [
      "get",
    ]

    secret_permissions = [
      "get",
    ]
  }
  access_policy {
    tenant_id = "${var.keyvault_tenantid}"
    object_id = "${azurerm_azuread_service_principal.kub-ad-sp-kv1.id}"

    key_permissions = [
      "create",
    ]

    secret_permissions = [
      "set",
    ]
  }

  depends_on = ["azurerm_role_assignment.kub-ad-sp-ra-kv1"]
}

Now, as I mentioned, with the change to the new provider you will need a new version of this code. Below is an updated form that generates a service principal with a random password.

provider "azuread" {
  version = "=0.7.0"
}

resource "random_string" "cds-rs-pd-kv" {
  length = 32
  special = true
}

resource "azuread_application" "cds-ad-app-kv1" {
  name = format("%s-%s%s-cds1",var.project_name,var.deployment_code,var.environment_code)
  oauth2_allow_implicit_flow = true
}

resource "azuread_service_principal" "cds-ad-sp-kv1" {
  application_id = azuread_application.cds-ad-app-kv1.application_id
}

resource "azuread_service_principal_password" "cds-ad-spp-kv" {
  service_principal_id  = azuread_service_principal.cds-ad-sp-kv1.id
  value                = random_string.cds-rs-pd-kv.result
  end_date             = "2020-01-01T01:02:03Z"
}

Notice how much cleaner the code is: we no longer need ${} string interpolation, and the resources themselves are simpler. So the next question is how do I connect this with my code to assign this service principal to a key vault access policy?

You can accomplish that with the following code, which is in a different file in the same directory:

resource "azurerm_resource_group" "cds-configuration-rg" {
    name = format("%s-Configuration",var.group_name)
    location = var.location 
}

data "azurerm_client_config" "current" {}

resource "azurerm_key_vault" "cds-kv" {
    name = format("%s-%s%s-kv",var.project_name,var.deployment_code,var.environment_code)
    location = var.location
    resource_group_name = azurerm_resource_group.cds-configuration-rg.name 
    enabled_for_disk_encryption = true
    tenant_id = data.azurerm_client_config.current.tenant_id
    soft_delete_enabled = true
    purge_protection_enabled = false
 
  sku_name = "standard"
 
  access_policy {
    tenant_id = data.azurerm_client_config.current.tenant_id
    object_id = data.azurerm_client_config.current.object_id
 
    key_permissions = [
      "create",
      "get",
    ]
 
    secret_permissions = [
      "set",
      "get",
      "delete",
    ]
  }

  access_policy {
    tenant_id = data.azurerm_client_config.current.tenant_id
    object_id = azuread_service_principal.cds-ad-sp-kv1.id

    key_permissions = [
      "get",
    ]

    secret_permissions = [
      "get",
    ]
  }
}

Notice that I am able to reference the “azuread_service_principal.cds-ad-sp-kv1.id” to access the newly created service principal without issue.

Good practices for starting with containers

So I really hate the phrase “best practices”, mainly because it creates a belief that there is only one right way to do things. But I wanted to put together a post around some ideas for strengthening your micro-service architectures.

As I’ve previously discussed, micro-service architectures are more complicated to implement, but they have huge benefits for your solution, some of which are:

  • Independently deployable pieces, no more large scale deployments.
  • More focused testing efforts.
  • Using the right technology for each piece of your solution.
  • Increased resiliency from cluster-based deployments.

But for a lot of people, including myself, the hardest part of this process is figuring out how to structure a micro-service. How small should each piece be? How do they work together?

So here are some practices I’ve found helpful if you are starting to leverage this in your solutions.

One service = one job

One of the first questions is how small my containers should be. Is there such a thing as too small? A good rule of thumb to focus on is separation of concerns. If you take every use-case and start to break it down to a single purpose, you’ll find you get to a good micro-service design pretty quickly.

Let’s look at an example. I recently worked on a solution with a colleague of mine that ended up pulling from an API and then extracting that information to put it into a data model.

In the monolith way of thinking, that would have been one API call: pass in the data and then cycle through and process it. But the problem was throughput; if I had pulled the 67 different regions, with 300+ records per region, and processed it all as a batch, it would have been one gigantic, messy API call.

So instead, we had one function that cycled through the regions, pulled them all down to JSON files in blob storage, and then queued a message for each.

Then we had another function that, when a message is queued, takes that message, reads in the records for that region, and saves them to the database. This separate function is another micro-service.

Now there are several benefits to this approach, but chief among them: the second function can scale independently of the first, and I can respond to queued messages as they come in, using asynchronous processing. A rough sketch of that second function follows.
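This is only a sketch, assuming Azure Functions with a Storage queue trigger; the queue name and processing steps are illustrative, not the actual solution:

using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class ProcessRegionFunction
{
    // Fires once per queued region message; each invocation scales independently
    // of the function that exported the region data to blob storage.
    [FunctionName("ProcessRegion")]
    public static async Task Run(
        [QueueTrigger("region-exports")] string regionBlobName, // hypothetical queue name
        ILogger log)
    {
        log.LogInformation("Processing region export {blob}", regionBlobName);

        // 1. Read the JSON file for this region from blob storage.
        // 2. Deserialize the records.
        // 3. Save them to the database.
        // (Persistence details omitted; they depend on the data model.)
        await Task.CompletedTask;
    }
}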

Three words… Domain driven design

For a great definition of Domain-Driven Design, see here. The idea is pretty straightforward: the structure of your software should mirror the business logic it implements.

So for example, your micro-services should mirror what they are trying to do. Like let’s take the most straightforward example…e-commerce.

If we have to track orders, the process once an order is submitted might be the following:

  • Orders are submitted.
  • Inventory is verified.
  • Order Payment is processed.
  • Notification is sent to supplier for processing.
  • Confirmation is sent to the customer.
  • Order is fulfilled and shipped

Looking at the above, one way to implement this would be to do the following:

  • OrderService: Manage the orders from start to finish.
  • OrderRecorderService: Record order in tracking system, so you can track the order throughout the process.
  • OrderInventoryService: Takes the contents of the order and checks it against inventory.
  • OrderPaymentService: Processes the payment of the order.
  • OrderSupplierNotificationService: Interact with a 3rd party API to submit the order to the supplier.
  • OrderConfirmationService: Send an email confirming the order is received and being processed.
  • OrderStatusService: Continues to check the 3rd party API for the status of the order.

If you notice, outside of an orchestration these match exactly the steps the business described. This provides a streamlined approach that makes it easy to make changes, and easy for new team members to understand. More than likely, communication between services is done via queues.

For example, let’s say the company above wants to expand to accept Venmo as a payment method. Really that means you have to update the OrderPaymentService to accept the option and process the payment. Additionally, OrderPaymentService might itself be an orchestration service between different micro-services, one per payment method.

Make them independently deployable

This is the big one: if you really want to see the benefit of micro-services, they MUST be independently deployable. This means that, looking at the above example, I can deploy each of these separate services and make changes to one without having to do a full application deployment.

Take the scenario above: if I wanted to add a new payment method, I should be able to update the OrderPaymentService, check in those changes, and then deploy it to dev and through to production without having to deploy the entire application.

Now, the first time I heard that, it sounded like the most ridiculous thing ever, but there are some things you can do to make it possible.

  • Each Service should have its own data store: If you make sure each service has its own data store, that makes it much easier to manage version changes. Even if you are going to leverage something like SQL Server, then make sure that the tables leveraged by each micro-service are used by that service, and that service only. This can be accomplished using schemas.
  • Put layers of abstraction between service communication: A common example is queuing or eventing. If you have a message being passed through, then as long as the message format leaving doesn’t change, there is no need to update the receiver.
  • If you are going to do direct API communication, use versioning. If you do have to have APIs connecting these services, leverage versioning to allow for micro-services to be deployed and change without breaking other parts of the application.

Build with resiliency in mind

If you adopt this approach to micro-services, one of the biggest things you will notice quickly is that each micro-service becomes its own black box. As such, I find it’s good to build each of these components with resiliency in mind: things like leveraging Polly for retry, or circuit breaker patterns. These are great ways of making sure that your services remain resilient, and it will have a cumulative effect on your application.

For example, take our OrderPaymentService above. We know that queue messages should be coming in with the order and payment details. If we put this service under a microscope and ask how it could fail, it’s not hard to get to a list like this:

  • Message comes through in a bad format.
  • Payment service can’t be reached.
  • Payment is declined (for any one of a variety of reasons)
  • Service fails while waiting on payment to be processed.

Now, some of the above is just simple error handling, like checking the format of the message. We can also build logic to check whether the payment service is available, and do an exponential back-off retry until it is.

We might also consider implementing a circuit breaker that says if we can’t process payments after so many tries, the service switches to an unhealthy state and triggers a notification workflow.

And in the final scenario, we could implement a state store that indicates the state of the payment being processed should a service fail and need to be picked up by another.
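As a rough sketch of what the retry and circuit breaker pieces might look like with Polly (the exception type and thresholds here are illustrative assumptions, not values from the post):

using System;
using System.Net.Http;
using Polly;
using Polly.CircuitBreaker;

public static class PaymentResiliencePolicies
{
    // Retries transient payment-service failures with exponential back-off:
    // waits 2, 4, 8, 16, then 32 seconds between attempts.
    public static readonly IAsyncPolicy Retry =
        Policy.Handle<HttpRequestException>()
              .WaitAndRetryAsync(5, attempt => TimeSpan.FromSeconds(Math.Pow(2, attempt)));

    // After 3 consecutive failures, open the circuit for one minute. While open,
    // calls fail fast, which is the hook for flagging the service unhealthy and
    // kicking off a notification workflow.
    public static readonly AsyncCircuitBreakerPolicy Breaker =
        Policy.Handle<HttpRequestException>()
              .CircuitBreakerAsync(3, TimeSpan.FromMinutes(1));
}

These policies can then wrap the outbound payment call, for example: await PaymentResiliencePolicies.Retry.ExecuteAsync(() => httpClient.PostAsync(url, content));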

Consider monitoring early

This is the one that everyone forgets, but it dovetails nicely with the previous point. It’s important that there be a mechanism for tracking and monitoring the state of your micro-service. I find it’s too easy to say “Oh, the service is running, so that means it’s fine.” That’s like saying that just because the homepage loads, the full web application is working.

You should build into your micro-services the ability to track their health and enable a way of doing so for operations tools. Let’s face it, at the end of the day, all code will eventually be deployed, and all deployed code must be monitored.

So, for example, if I build a circuit breaker pattern into OrderPaymentService, and every failure updates a status held in memory by the service that says it is unhealthy, I can then expose an HTTP endpoint that returns the state of that breaker:

  • Closed: Service is running fine and healthy
  • Half-Open: Service is experiencing some errors but still processing.
  • Open: Service is taken offline for being unhealthy.

I can then build out logic so that when the breaker goes to Half-Open, or even Open, specific events occur. A minimal sketch of such an endpoint follows.
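This sketch assumes ASP.NET Core minimal APIs and the Polly breaker sketched earlier; both are assumptions on my part rather than the post’s actual implementation:

using Microsoft.AspNetCore.Builder;

var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

// Expose the circuit breaker state (Closed / HalfOpen / Open / Isolated) so that
// operations tooling can probe the real health of the service, not just whether
// the process is running.
app.MapGet("/health/payment-breaker", () => new
{
    state = PaymentResiliencePolicies.Breaker.CircuitState.ToString()
});

app.Run();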

Start small, don’t boil the ocean

This one seems kind of ironic given the above. But if you are working on an existing application, you will never be able to convince management to let you junk it and start over. So what I have done in the past is, when it’s time to make a change to a part of the application, take the opportunity to rearchitect it and make it more resilient: deconstruct the pieces and implement a micro-service approach to solving the problem.

Stateless over stateful

Honestly, this is just good practice to get used to. Most container technologies, like Docker and Kubernetes, really favor the idea of elastic scale and the ability to start or kill a process at any time. That becomes a lot harder if you have to manage state within a container. If you must manage state, I would definitely recommend using an external store for that information.

Now I know not every one of these might fit your situation, but I’ve found that these practices make it much easier to transition to creating micro-services for your solutions and to see the benefits of doing so.

Fast and Furious: Configuration Drift

Unlike in the movie Tokyo Drift, the philosophy “you’re not in control until you’re out of control” is pretty much the worst thing you can embrace when delivering software.


Don’t get me wrong, I love the movie. But configuration drift is the kind of thing that cripples an organization; it can be the poison pill that ruins your ability to support high availability for any solution and increases your operational costs exponentially.

What is configuration drift?

Configuration drift is the problem that occurs when manual changes are allowed in an environment, causing environments to diverge in ways that are undocumented.

Stop me if you heard this one before:

  • You deploy code to a dev environment, and everything works fine.
  • You run a battery of tests, automated or otherwise to ensure everything works.
  • You deploy that same code to a test environment.
  • You run a battery of tests, and some fail, you make some changes to the environment and things work just as you would expect.
  • You deploy to production, expecting it to all go fine, and start seeing errors and issues and end up losing hours to debugging weird issues.
  • During which you find a bunch of environment issues, and you fix each of them. You get things stable and are finally through everything.

Honestly, that should sound pretty familiar; we’ve all lived it. The problem is that this kind of situation causes configuration drift: the environments “drift” apart in their configuration until the differences start causing additional problems.

If you look at the above, you will see a pattern of behavior that leads to bigger issues. One of the biggest is that the problem actually starts in the lower environments, where there are clearly configuration issues that are just “fixed” for the sake of convenience.

What kind of problems does Configuration Drift create?

Ultimately by allowing configuration drift to happen, you are undermining your ability to make processes truly repeatable. You essentially create a situation where certain environments are “golden.”

So this creates a situation where each environment, or even each virtual machine can’t be trusted to run the pieces of the application.

This problem, gets even worse when you consider multi-region deployments as part of each environment. You now have to manage changes across the entire environment, not just one region.

This can cause a lot of problems:

  • Inconsistent service monitoring
  • Increased difficulty debugging
  • Insufficient testing of changes
  • Increased pressure on deployments
  • Eroding user confidence

How does this impact availability?

When you have configuration drift, it undermines the ability to deploy reliably to multiple regions, which means you can’t trust your failover, and you can’t create new environments as needed.

The most important thing to keep in mind is that the core concept behind everything here is that “Services are cattle, not pets…sometimes you have to make hamburgers.”

What can we do to fix it?

So given the above, how do we fix it? There are some things you can do that are process-based, and others that are tool-based. In my experience, it starts with admitting there is a problem. Deadlines will always be aggressive and demands for releases will always grow, but you have to take a step back and say: “if we have to change something, it has to be done by script.”

By forcing all changes to go through the pipeline, we make sure everyone is aware of them and that the changes are applied the same way every time. It takes discipline to force yourself to do that, and it will change the above flow in the following ways:

  • You deploy code to a dev environment, and everything works fine.
  • You run a battery of tests, automated or otherwise to ensure everything works.
  • You deploy that same code to a test environment.
  • You run a battery of tests, and some fail, you make some changes to script to get it working.
  • You redeploy automatically when you change those scripts to dev, and automated tests are run on the dev environment.
  • The new scripts are run on test and everything works properly.
  • You deploy to production, and everything goes fine.

So ultimately you need to focus on simplifying and automating the changes to the environments. And there are some tools you can look at to help here.

  • Implement CI/CD, and limit access to environments as you move up the stack.
  • Require changes to be scripted as you push to test, stage, or preprod environments.
  • Leverage tools like Chef, Ansible, PowerShell, etc. to script the actions that have to be taken during deployments.
  • Leverage infrastructure-as-code, via tools like Terraform, to ensure that your environments are the same every time.

By taking some of the above steps you can make sure that things are consistent, and ultimately limit access to production to “machines” only.


Ultimately, the point of this article is that I wanted to call attention to this issue as one that I see plague a lot of organizations.

Configuring SQL for High Availability in Terraform

Hello all, a short post this week. As we talk about availability, SLAs, chaos engineering, and so on, one of the most common data elements I see out there is SQL, and specifically Azure SQL. SQL is a prevalent and common data store; it’s everywhere you look.

Given that, many shops are implementing infrastructure-as-code to manage configuration drift and provide increased resiliency for their applications, which is definitely a great way to do that. The one thing a couple of people have told me isn’t clear is… how do I configure geo-replication in Terraform?

This is actually built into the Terraform AzureRM provider, and can be done with the following code:

provider "azurerm" {
    subscription_id = ""
    features {
    key_vault {
      purge_soft_delete_on_destroy = true
    }
  }
}

data "azurerm_client_config" "current" {}

resource "azurerm_resource_group" "rg" {
  name     = "sqlpoc"
  location = "{region}"
}

resource "azurerm_sql_server" "primary" {
  name                         = "kmack-sql-primary"
  resource_group_name          = azurerm_resource_group.rg.name
  location                     = azurerm_resource_group.rg.location
  version                      = "12.0"
  administrator_login          = "sqladmin"
  administrator_login_password = "{password}"
}

resource "azurerm_sql_server" "secondary" {
  name                         = "kmack-sql-secondary"
  resource_group_name          = azurerm_resource_group.rg.name
  location                     = "usgovarizona"
  version                      = "12.0"
  administrator_login          = "sqladmin"
  administrator_login_password = "{password}"
}

resource "azurerm_sql_database" "db1" {
  name                = "kmackdb1"
  resource_group_name = azurerm_sql_server.primary.resource_group_name
  location            = azurerm_sql_server.primary.location
  server_name         = azurerm_sql_server.primary.name
}

resource "azurerm_sql_failover_group" "example" {
  name                = "sqlpoc-failover-group"
  resource_group_name = azurerm_sql_server.primary.resource_group_name
  server_name         = azurerm_sql_server.primary.name
  databases           = [azurerm_sql_database.db1.id]
  partner_servers {
    id = azurerm_sql_server.secondary.id
  }

  read_write_endpoint_failover_policy {
    mode          = "Automatic"
    grace_minutes = 60
  }
}

The above Terraform will deploy two database servers with geo-replication configured. The key part is the following:

resource "azurerm_sql_failover_group" "example" {
  name                = "sqlpoc-failover-group"
  resource_group_name = azurerm_sql_server.primary.resource_group_name
  server_name         = azurerm_sql_server.primary.name
  databases           = [azurerm_sql_database.db1.id]
  partner_servers {
    id = azurerm_sql_server.secondary.id
  }

  read_write_endpoint_failover_policy {
    mode          = "Automatic"
    grace_minutes = 60
  }
}

The important elements are “server_name” and “partner_servers”; these make the connection between the primary and the server the data is replicated to. Then “read_write_endpoint_failover_policy” sets up the failover policy.

Embracing the Chaos

So I’ve done quite a few posts recently about resiliency, and it’s a topic that is more and more important to everyone as you build out solutions in the cloud.

The new buzzword that’s found its way onto the scene is Chaos Engineering. Really, this is a practice of building out solutions that are more resilient: solutions that can survive the faults and issues that arise, and ensure the best possible delivery to end customers. The simple fact is that software solutions are absolutely critical to every element of most operations, and having them go down can ultimately break a whole business if this is not done properly.

At its core, Chaos engineering is about pessimism :). Things are going to fail.

Like every other movement, such as Agile and DevOps, Chaos Engineering embraces a reality. In this case that reality is that failures will happen and should be expected. The goal is that you assume there will be failures, and architect to support resiliency.

So what does that actually mean? It means that you determine the strength of the application by running controlled experiments designed to inject faults into your applications and observing the impact. The intention is that the application grows stronger and better able to handle faults and issues while maintaining the highest resiliency possible.

How is this something new?

Now a lot of people will read the above and say that “chaos engineering” is just the latest buzzword to cover something everyone’s already doing. There is an element of truth to that, but the details are what matter.

What I mean by that is that there is a defined approach to doing this and doing it in a productive manner, much like Agile and DevOps. In my experience, some teams are probably doing elements of this, but by putting a name and methodology to it, we call attention to the practice for those who aren’t, and provide a guide of sorts for how to approach the problem.

There are several key elements that you should keep in mind as you find ways to grow your solution by going down this path.

  • Embrace the idea that failures happen.
  • Find ways to be proactive about failures.
  • Embrace monitoring and visibility

Just as Agile embraced the reality that “requirements change”, and DevOps embraced that “all code must be deployed”, Chaos Engineering embraces that the application will experience failures. This is a fact. We need to assume that any dependency can break, and that components will fail or be unavailable. So what do we mean, at a high level, by each of these?

Embrace the idea…failure happens

The idea being that elements of your solution will fail, and we know this will happen. Servers go down, service interruptions occur, and to steal a quote from Batman Begins, “Sometimes things just go bad.”

I was in a situation once where an entire network connection was taken down by a Squirrel.

So we should build our code and applications in a way that accepts that failures will eventually occur, and build resiliency in to accommodate that. You can’t solve a problem until you know there is one.

How do we do that at a code level? Really this comes down to looking at your application or micro-service and doing a failure mode analysis: taking an objective look at your code and asking key questions:

  • What is required to run this code?
  • What kind of SLA is offered for that service?
  • What dependencies does the service call?
  • What happens if a dependency call fails?

That analysis will help to inform how you handle those faults.

Find ways to be proactive about failure

In a lot of ways, this comes down to leveraging patterns, practices, and tooling to ensure resiliency.

After you’ve done that failure mode analysis, you need to figure out what happens when those failures occur:

  • Can we implement patterns like circuit breaker, retry logic, load leveling, and libraries like Polly?
  • Can we implement multi-zone, multi-region, cluster based solutions to lower the probability of a fault?

Also at this stage, you can start thinking about how you would classify a failure. Some failures are transient, others are more severe. And you want to make sure you respond appropriately to each.

For example, a momentary networking outage is very different from a database being down for an extended period. So another key element to consider is how long the fault lasts.

Embrace Monitoring and Visibility

Now based on the above, the next question is: how do I even know this is happening? With micro-service architectures, applications are becoming more and more decentralized, which means there are more moving parts that require monitoring.

So for me, the best next step is to go over all the failures, identify how you will monitor and alert for those events, and decide what your mitigations are. Say, for example, you want to do manual failover for your database; you need to determine how long you will tolerate failures from that dependency before it notifies you to fail over.

Or how long does something have to be down before an alert is sent? And how do you log these events so your engineers have visibility into the behavior? Sending an alert after a threshold does no one any good if they can’t see when the behavior started.

Personally I’m a fan of the concept here, as it calls out a very important practice that I find gets overlooked more often than not.

Terraform – Key Rotation Gotcha!

So I did want to write about something that I discovered recently when investigating a question: key rotation, and how Terraform state is impacted by it.

The question is this: if you have a key vault and you ask any security expert, the number one rule is that key rotation is absolutely essential. This is important because it helps manage the blast radius of an attack and keeps access keys changing in a way that makes them harder to compromise.

Now, those same experts will also tell you this should be done via automation, so that no human eye has ever seen the key, which is easy enough to accomplish. The documentation released by Microsoft here discusses how to rotate keys with Azure Automation.

Personally, I like this approach because it makes key rotation a part of normal system operation, not something you as a DevOps engineer or developer have to keep track of. It also, if you’re in the government space, makes it easy to report on for compliance audits.

But I’ve been seeing a growing movement online of people who say to have Terraform generate your keys, and to rotate those keys using randomization in your Terraform scripts. The idea is that you can use random values in your Terraform script to generate the keys, and while I do like that, overall I’m not a fan of this approach.

The reason is that it makes key rotation a deployment activity, and if your environment gets large enough that you start doing “scoped” deployments, it removes any rhyme or reason from your key generation; it’s based solely on when you run the scripts.

It also poses a problem with state at the end of the day. Let’s take the following code:

provider "azurerm" {
    subscription_id = "...subscription id..."
    features {
    key_vault {
      purge_soft_delete_on_destroy = true
    }
  }
}

data "azurerm_client_config" "current" {}

resource "azurerm_resource_group" "superheroes" {
  name     = "superheroes"
  location = "usgovvirginia"
}

resource "random_id" "server" {
  keepers = {
    ami_id = 1
  }

  byte_length = 8
}

resource "azurerm_key_vault" "superherovault" {
  name                        = "superheroidentities"
  location                    = azurerm_resource_group.superheroes.location
  resource_group_name         = azurerm_resource_group.superheroes.name
  enabled_for_disk_encryption = true
  tenant_id                   = data.azurerm_client_config.current.tenant_id
  soft_delete_enabled         = true
  purge_protection_enabled    = false

  sku_name = "standard"

  access_policy {
    tenant_id = data.azurerm_client_config.current.tenant_id
    object_id = data.azurerm_client_config.current.object_id

    key_permissions = [
      "create",
      "get",
    ]

    secret_permissions = [
      "set",
      "get",
      "delete",
    ]
  }

  tags = {
    environment = "Testing"
  }
}

resource "azurerm_key_vault_secret" "Batman" {
  name         = "batman"
  value        = "Bruce Wayne"
  key_vault_id = azurerm_key_vault.superherovault.id

  tags = {
    environment = "Production"
  }
}

Now based on the above, all good, right? When I execute my Terraform script, I will have a secret named “batman” with a value of “Bruce Wayne.”

But the problem comes if I go to the Azure Portal and change that value, say I change the value of “batman” to “dick grayson”, and then rerun my terraform apply.

It will want to reset that secret back to “Bruce Wayne”, and we’ve broken our key rotation at this point… now what?

My thought on this is that it’s easy enough to wrap “terraform apply” in a bash script and, before you execute it, run a “terraform refresh” to re-pull the values from the cloud into your Terraform state.

If you don’t like that option, there is another solution: use the lifecycle block within the resource to tell Terraform to ignore updates to the value, preventing it from overwriting secrets that have changed as part of rotation.
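A minimal sketch of that second option (assuming you want Terraform to create the secret initially, but never fight external rotation of its value):

resource "azurerm_key_vault_secret" "Batman" {
  name         = "batman"
  value        = "Bruce Wayne" # initial value only; rotation happens outside Terraform
  key_vault_id = azurerm_key_vault.superherovault.id

  lifecycle {
    # Ignore changes made to the secret value outside of Terraform,
    # so a later 'terraform apply' does not reset rotated keys.
    ignore_changes = [value]
  }
}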

Configuration can be a big stumbling block when it comes to availability.

So let’s face it, when we build projects, we make trade-offs. And many times those trade-offs come in the form of time and effort. We would all build the most perfect software ever… if time and budget were never a concern.

So along those lines, one thing that I find gets glossed over quickly, especially with Kubernetes and micro-services, is configuration.

Configuration: likely you are looking at this and saying, “That’s the most ridiculous thing I’ve ever heard.” We put our configuration in a YAML file, or a web.config, and manage those values through our build pipelines. And while that might seem like a great practice, in my experience it can cause a lot more headaches in the long run than you’re probably expecting.

The problem with storing configuration in YAML files or web.configs is that they create an illusion of being able to change these settings on the fly, an illusion that can actually cause significant headaches when you start reaching for higher availability.

The problems these configuration files can cause are the following:

Changing these files is a deployment activity

If you need to change a value for these applications, it requires changing a configuration file, and changes to configuration files are usually tightly coupled to a restart process. Take App Service as a primary example: if you store your configuration in a web.config and you make a change to that file, App Service will automatically trigger a restart, which will cause a downtime event for you and/or your customers.

This gets even more difficult in a Kubernetes cluster: if you use a YAML file, changing a value requires the deployment agent to change the cluster. This makes it very hard to change these values in response to a change in application behavior.

For example, say you wanted to switch your SQL database connection if performance degrades below a certain point. That is a lot harder to do when you are referencing a connection string in a config file on pods deployed across a cluster.

Storing Sensitive Configuration is a problem

Let’s face it, people make mistakes. One of the biggest problems I’ve seen come up several times is captured by the following statement: “We store normal configuration in a YAML file, and then sensitive configuration in a key vault.”

The problem here is that “sensitive” means different things to different people, so the odds of something being misclassified are high. It’s much easier to tell your team to treat all settings as sensitive; it makes management a lot simpler and limits you to a single store.

So what do we do…

The best way I’ve found to mitigate these issues is to use an outside service like Key Vault, or the Azure App Configuration service, to store your configuration settings.

But that’s just step 1. Step 2 is, on startup, to cache the configuration settings for each micro-service in memory in the container, and to make sure that cache is configured to expire after a set amount of time.

This provides a model whereby your micro-services start up after deployment, reach out to a secure store, and cache the configuration settings in memory.

This also gains us several benefits that mitigate the problems above.

  • Allow for changing configuration settings on the fly: For example, if I wanted to change a connection string over to a read replica, that can be done by simply updating the configuration store, and allowing the application to move services over as they expire the cache. Or if you want even further control, you could build in a web hook that would force it to dump the configuration and re-pull it.
  • By treating all configuration as sensitive, you ensure there are no accidental leaks. This also ensures that you can manage these keys at deployment time, and never have them seen by human eyes.

So this is all great, but what does it actually look like from an architecture standpoint?

For AKS, it’s a fairly easy implementation: create a sidecar for retrieving configuration, and then deploy that sidecar with every pod.

Given this, it’s easy to see how you would implement a separate sidecar to handle configuration. Each service within the pod is completely oblivious to how it gets its configuration; it just calls a local micro-service to get it.

I personally favor the sidecar implementation here, because it allows you to easily bundle this with your other containers and minimizes latency and excessive network communication.

Latency will be low because it’s local to every pod, and if you ever decide to change your configuration store, it’s easy to do.

Let’s take a sample here using Azure Key Vault. If you look at the following code, you can see how configuration could be managed.

Here’s some sample code that could easily be wrapped in a container to pull your configuration from Key Vault:

using System;
using System.Threading.Tasks;
using Microsoft.Azure.KeyVault;
using Microsoft.IdentityModel.Clients.ActiveDirectory;

public class KeyVaultConfigurationProvider : IConfigurationProvider
{
    private string _clientId = Environment.GetEnvironmentVariable("clientId");
    private string _clientSecret = Environment.GetEnvironmentVariable("clientSecret");
    private string _kvUrl = Environment.GetEnvironmentVariable("kvUrl");

    public KeyVaultConfigurationProvider(IKeyVaultConfigurationSettings kvConfigurationSettings)
    {
        _clientId = kvConfigurationSettings.ClientID;
        _clientSecret = kvConfigurationSettings.ClientSecret;
        _kvUrl = kvConfigurationSettings.KeyVaultUrl;
    }

    public async Task<string> GetSetting(string key)
    {
        // Authenticate to Key Vault using the service principal's client id / secret.
        KeyVaultClient kvClient = new KeyVaultClient(async (authority, resource, scope) =>
        {
            var adCredential = new ClientCredential(_clientId, _clientSecret);
            var authenticationContext = new AuthenticationContext(authority, null);
            return (await authenticationContext.AcquireTokenAsync(resource, adCredential)).AccessToken;
        });

        // Secrets are addressed as {vault url}/secrets/{secret name}.
        var path = $"{this._kvUrl}/secrets/{key}";

        var ret = await kvClient.GetSecretAsync(path);

        return ret.Value;
    }
}

Now, the above code uses a single service principal to call Key Vault and pull configuration information. This could be modified to leverage pod identities for even greater security and a cleaner implementation.

The next step for the above implementation would be to layer a cache over the configuration. This could be done piecemeal as needed, or for groups of settings, but either way it will ultimately make configuration easier to manage.
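As a rough sketch of what that caching layer might look like (using Microsoft.Extensions.Caching.Memory; the 30-minute expiry is an arbitrary assumption):

using System;
using System.Threading.Tasks;
using Microsoft.Extensions.Caching.Memory;

public class CachedConfigurationProvider
{
    private readonly KeyVaultConfigurationProvider _inner;
    private readonly MemoryCache _cache = new MemoryCache(new MemoryCacheOptions());
    private readonly TimeSpan _ttl = TimeSpan.FromMinutes(30); // assumed expiry window

    public CachedConfigurationProvider(KeyVaultConfigurationProvider inner)
    {
        _inner = inner;
    }

    public async Task<string> GetSetting(string key)
    {
        // Serve from the in-memory cache when possible; once an entry expires,
        // the next request transparently re-pulls the value from Key Vault,
        // which is what lets configuration changes roll out without a redeploy.
        if (_cache.TryGetValue(key, out string cached))
            return cached;

        var value = await _inner.GetSetting(key);
        _cache.Set(key, value, _ttl);
        return value;
    }
}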