Learning how to use Power BI Embedded

So lately I’ve been doing a lot more work with Power BI Embedded, and having a lot more discussions around implementing it within applications.

As I’ve discussed, Power BI itself can be a complicated topic, especially just getting a handle on all the licensing. Look here for an explanation of that licensing.

But even then, another big question remains: what does it take to implement Power BI Embedded? What kind of functionality is available? The first resource I would point you to is the Power BI Embedded Playground. This site really is fantastic, giving working samples of how to implement Power BI Embedded in a variety of use cases and giving you the code to leverage in the process.

But more than that, leveraging a tool like Power BI Embedded does require further training, and here are some links to tutorials and online training you might find useful:

There are some videos out there that give a wealth of good information on Power BI Embedded, and some of them can be found here.

There is a wealth of information out there, and this is just a post to get you started. But once you get going, Power BI Embedded can make it really easy to embed amazing analytics capabilities into your applications.

Remote Gaming – Lessons Learned

So I’ve made no secret on this blog of my interest in gaming, and how it’s been something I’ve picked back up over the past year. And I have to say, the one positive that came out of the many changes COVID-19 has caused in our family’s life is how much we’ve embraced gaming.

About 18 months ago, I joined a small group of friends and we decided to take a stab at gaming more. We started with Dungeons and Dragons, playing a game night once a month.


Now it started out great, and I will admit we had a lot of fun. But the hardest part was organizing everything: scheduling around everyone’s busy calendars, location, child care, etc. Which honestly was pretty difficult, coordinating the schedules of 8 adults, all of whom have kids.

When COVID-19 hit, we all found ourselves stuck at home, and everyone’s plans dropped. Honestly, it took our monthly game night and made it a weekly game, and it’s been really great. We’ve gotten much closer as friends, and it gave all of us something to look forward to every week.

So that being said, we did it by taking our game and going virtual with it. For this post I thought I would share the setup and how we took our game virtual. You don’t have to be playing Dungeons and Dragons, but it’s a great way to reconnect with people. A great side note: we had a friend whose work took him away from our area, whom we used to see once a year. I now see him and game with him every Saturday, and have for the past 3 months.

Break out the Digital Tools:

For our team, we really started using the following tools to help make our game go digital and be as much fun as it was in person:

  • D&D Beyond – To be fair, we were using this one before the pandemic, but it’s become more important than before. We track our character sheets here.
  • Roll20 – We started using Roll20 to handle the digital game board. This is a great tool for managing your games and letting things play out on maps.
  • Facebook Messenger – We use this to handle the video calls, mostly because of the familiarity of other members of our group. And things have worked pretty well, especially with Facebook Rooms.
  • Discord – We leverage this tool to consolidate our chat during the game, and it’s been great. My players are able to talk, share handouts, or have direct conversations with me during the game.
  • OneNote – We created a shared notebook where the players share their notes with each other.

As I mentioned, it’s been really helpful to find new ways to connect as we deal with the uncertainty, and I definitely recommend stepping out of your comfort zone and finding new ways to engage, even in this crazy new world.

How to pull blobs out of Archive Storage

So if you’re building a modern application, you definitely have a lot of options for storing data, whether that be traditional database technologies (SQL, MySQL, etc.), NoSQL (Mongo, Cosmos, etc.), or even just blob storage. Of the above options, blob storage is by far the cheapest, providing a very low cost option for storing data long term.

The best way to ensure that you get the most value out of blob storage is to leverage the different tiers to your benefit. By using a tiering strategy for your data, you can pay significantly less to store it for the long term. You can find the pricing for Azure blob storage here.

Now most people are hesitant to leverage the archive tier, because the idea of having to wait for the data to be rehydrated has a tendency to scare them off. But it’s been my experience that most data leveraged for business operations has a shelf-life, and archiving that data is definitely a viable option, especially for data that is not accessed often. I would challenge most people storing blobs to capture metrics on how often their older data is actually accessed. When you compare the need to “wait for retrieval” against the cost savings of archive, in my experience the balance tends to lean toward leveraging archive for data storage.

How do you move data to archive storage?

When storing data in Azure blob storage, the process of uploading a blob is fairly straightforward, and all it takes is setting the access tier to “Archive” to move the data to archive storage.

The below code generates a random file and uploads it to blob storage:

var accountClient = new BlobServiceClient(connectionString);
var containerClient = accountClient.GetBlobContainerClient(containerName);

// Get a reference to a blob
BlobClient blobClient = containerClient.GetBlobClient(blobName);

Console.WriteLine("Uploading to Blob storage as blob:\n\t {0}\n", blobClient.Uri);

// Open the file and upload its data
using FileStream uploadFileStream = File.OpenRead(localFilePath);
await blobClient.UploadAsync(uploadFileStream, true);

Console.WriteLine("Setting Blob to Archive");

// Move the blob to the Archive tier
await blobClient.SetAccessTierAsync(AccessTier.Archive);

How to re-hydrate a blob in archive storage?

There are two ways of re-hydrating blobs:

  1. Copy the blob to another tier (Hot or Cool)
  2. Set the access tier to Hot or Cool

It really is that simple, and it can be done using the following code:

var accountClient = new BlobServiceClient(connectionString);
var containerClient = accountClient.GetBlobContainerClient(containerName);

// Get a reference to a blob
BlobClient blobClient = containerClient.GetBlobClient(blobName);

// Setting the access tier on an archived blob kicks off rehydration
blobClient.SetAccessTier(AccessTier.Hot);


After doing the above, the process of rehydrating the blob starts automatically, and you can monitor the blob’s properties to see when it has finished rehydrating.

Monitoring the re-hydration of a blob

One easy pattern for monitoring blobs as they are rehydrated is to implement a queue and an Azure Function to watch the blob during this process. I did this by implementing the following:

For the message model, I used the following to track the hydration process:

public class BlobHydrateModel
{
    public string BlobName { get; set; }
    public string ContainerName { get; set; }
    public DateTime HydrateRequestDateTime { get; set; }
    public DateTime? HydratedFileDateTime { get; set; }
}

And then implemented the following code to handle the re-hydration process:

public class BlobRehydrationProvider
{
    private string _cs;

    public BlobRehydrationProvider(string cs)
    {
        _cs = cs;
    }

    public void RehydrateBlob(string containerName, string blobName, string queueName)
    {
        var accountClient = new BlobServiceClient(_cs);
        var containerClient = accountClient.GetBlobContainerClient(containerName);

        // Get a reference to the blob and start rehydration by moving it to the Hot tier
        BlobClient blobClient = containerClient.GetBlobClient(blobName);
        blobClient.SetAccessTier(AccessTier.Hot);

        var model = new BlobHydrateModel() { BlobName = blobName, ContainerName = containerName, HydrateRequestDateTime = DateTime.Now };

        // Queue a message so an Azure Function can monitor the rehydration
        QueueClient queueClient = new QueueClient(_cs, queueName);
        var json = JsonConvert.SerializeObject(model);
        string message = Convert.ToBase64String(Encoding.UTF8.GetBytes(json));
        queueClient.SendMessage(message);
    }
}

Using the above code, setting the blob to Hot and queuing a message triggers an Azure Function, which then monitors the blob properties using the following:

public static void Run([QueueTrigger("blobhydrationrequests", Connection = "StorageConnectionString")]string msg, ILogger log)
{
    var model = JsonConvert.DeserializeObject<BlobHydrateModel>(msg);
    var connectionString = Environment.GetEnvironmentVariable("StorageConnectionString");

    var accountClient = new BlobServiceClient(connectionString);
    var containerClient = accountClient.GetBlobContainerClient(model.ContainerName);
    BlobClient blobClient = containerClient.GetBlobClient(model.BlobName);

    log.LogInformation($"Checking Status of Blob: {model.BlobName} - Requested : {model.HydrateRequestDateTime.ToString()}");

    var properties = blobClient.GetProperties();
    if (properties.Value.ArchiveStatus == "rehydrate-pending-to-hot")
    {
        // Not ready yet - requeue the message and check again in 5 minutes
        log.LogInformation($"File {model.BlobName} not hydrated yet, requeuing message");
        QueueClient queueClient = new QueueClient(connectionString, "blobhydrationrequests");
        string requeueMessage = Convert.ToBase64String(Encoding.UTF8.GetBytes(msg));
        queueClient.SendMessage(requeueMessage, visibilityTimeout: TimeSpan.FromMinutes(5));
    }
    else
    {
        log.LogInformation($"File {model.BlobName} hydrated successfully, sending response message.");
        //Trigger appropriate behavior
    }
}

By checking the ArchiveStatus, we can tell when the blob is rehydrated, and can then trigger the appropriate behavior to push that update back to your application.

STEM with Kids – Quarantine

So for something completely different. My family and I have been making sure that we do some STEM activities with our kids.  And if you’re like me, they are just as much fun for me as they are for the kids.

So I feel very blessed, in that both of my kids are very analytical, and that really means for me we get to do a lot of fun things that take me back to my childhood.

When I was growing up, I came from a family of educators, going back 3 generations. So education is something that has found its way into all aspects of our lives. And I’m very thankful for that because my brother and I grew up with a love for learning written into our DNA.

The other thing I grew up with was technology; my dad had computers in our house from the earliest parts of my childhood. So I try to find activities that really flex that logical, analytical part of their brains. Here are some of the things that we do for this type of activity.

Legos and Building toys

I’ve made no secret that I’ve been a fan of Legos since I was a kid. And honestly there is so much you can do with kids here. The most important key to success is to instill the idea that they get to enjoy the act of building.

For my kids, from an early age we drilled in one saying: “the best part about Legos is you get to build it again.” This has instilled in my kids the idea that the act of building is the fun part, and now I can honestly say I think they enjoy building more than playing with the models.

The other key here is what we don’t do: our kids can each save 1 model they build, that’s it. The rest are broken down to start again. What this does is make them focus on building more. We take pictures of their creations, and they get to keep those, celebrating the effort, not the result.

We do this with other toys too; my daughter’s personal favorite is Magna-Tiles. We encourage them to build, and then have them take pictures.


littleBits

This was a new one for us, but it was fantastic. We got my daughter the littleBits music inventor kit, which can be found here, and it was amazing. It comes with wires and electronic components, but they are connected using magnets. The app gives easy-to-follow videos that let the kids walk through building the circuits and devices.

But more than that, they then have activities to do after, which I thought was pretty great. After my daughter built a synthesizer guitar it had video lessons on how to play it that added extra value to the experience.

Problem Solving Challenges

Another thing we do a lot of, honestly, is problem-solving challenges: asking the kids to solve a problem. Part of the idea is to encourage our kids to see a problem and try to figure out ways to solve it.

This can include giving them specific Lego challenges that they need to build, and testing the results. This further encourages grit in our kids by pushing them to try to solve it, and encouraging the effort and not just the result.


Minecraft

This one is great for when they really want to exercise their imagination. Honestly, the educational value of Minecraft is pretty well documented, and it’s a great place for kids to build without resource constraints.

I’ve watched my kids build some amazing structures in Minecraft just by giving them a random idea. Things like:

  • Build a bridge
  • Build a tower
  • Build a train and train station
  • Build a batcave
  • Build a warehouse for your stuff

And these types of prompts can give your kids just enough direction to go and let their imagination run wild.

Craft station

So for our kids, we try to find the best options possible to facilitate creativity, and that includes creating an art station for my daughter with a mix of different options. Her art station includes an easel, paper, and items like the following:

  • Paints
  • Markers
  • Pencils
  • Colored Pencils
  • Stamps
  • Inkpads
  • Stencils

The end goal of this is to task our kids with creating things rather than consuming.


Cooking

Another great activity to help kids with STEM is just cooking. Cooking involves the following:

  • Thermodynamics
  • Heat transfer
  • States of matter
  • Physics
  • Measurements
  • Following directions

Cooking is a fun activity that fosters creativity and science. It’s a great, simple activity that stimulates their brains.

Those are some of the things that we do to pass the time and engage in STEM activities with the kids. And honestly, they have led to some of the best memories with my kids. Feel free to comment with what activities you are doing.

Summer time: Managing Burnout

So it’s officially summer, and the past few months have been a little brutal on everyone. The world has been a very chaotic place, with a lot of change, and I don’t think there is a person alive who could have predicted the events since March 2020. For many people there have been a wide range of emotions and situations out there. I won’t pretend to know the myriad of situations, and ultimately you reading this, I’m sure, have your own story with regard to the events of the past few months, whether those included isolation, depression, emotional stress, unemployment, or having your work-life balance destroyed.

I’m not going to comment on any of those situations, but what I did want to share for this blog post are my thoughts on something I think is universally felt by everyone…burnout.

All the events of the past few months have left many of us feeling completely burned out. I know in my situation the events have led to 12-14 hour days for sustained periods (going on 3 months). And if you’re anything like me, the act of relaxing is not always the easiest thing to do. For many of us who had to quarantine with kids, those demands can be a lot higher.

So given that, it can be rather difficult to deal with these feelings of burnout. I’ve been trying some things myself, and thought I would share a blog post discussing how I’m trying to manage burnout. I don’t pretend to believe that I have this figured out, but I’ve found some things that work for me, and I felt it would be good to pass them on.

Figure out what helps you relax

Really, this is something I’ve come to realize quite heavily recently: everyone seems to assume that relaxing looks the same for everyone. And odds are, for most of us, the ways in which we relax have changed pretty drastically over the past few months.

For me, the problem was it always felt like there was something else to do, and it never seemed like I would really “relax” in the conventional sense. But honestly, relaxation means something different to everyone, and you might need to take a step back and redefine what it means for you.

For example, I’m the kind of person who has a very “busy” energy, and sitting back and doing nothing is not actually relaxing, at all. I will find some way to exercise my mind or do something that engages me in different kinds of activities. So for me, I need something that lets me engage my mind in a different way but still feels restful.

The best way I can describe this, is my son came into my room at 6:30am on Father’s day and woke me up with the following sentence:

Happy Father’s day…Do you want to build a lego and play video games?


For me, I find that sitting down, pulling out a Lego set, and working on it with my son is significantly more fun and relaxing than anything else. So I’ve actually gotten to the point that there is a small backlog of Lego sets in my office:

Always more lego…

So every so often, I’ll just sit down with my kids, build a lego, and they know that when we are done, I always take a picture and then turn it over to them. For me, the act of building something, with my hands, and doing it with my kids is really relaxing.

Take time for you

For me, like I mentioned above, I have to find times to engage my mind in things to really unplug and refresh, and it’s forced me to find new ways to do that. In my case I find a lot of benefit in reading. I’ve been a comic fan for the past 27 years of my life, and one of the things I find really helps is that most comic stories are fairly quick reads, something I can enjoy and engage with without making larger commitments or giving myself something else to remember.

The shelves line my office

But I find that taking time to read is something that I can do fairly easily at night. Usually my wife and I will put the kids to bed, and then our routine is to each take about an hour to do something by ourselves to recharge before we come back together and relax. This helps us to rest and shake off the craziness of the day before we hangout together.

Find new ways to replace old activities

So for most people, between COVID-19 and the craziness of everything else going on, it can be hard to engage the social part of your life, and seeing friends and family became very difficult. We were lucky enough to find a new way to solve that problem. Prior to COVID-19, my wife and I, along with a group of friends, started playing TTRPGs, specifically Dungeons and Dragons.

Now I know D&D has a pretty nerdy reputation, but it has seen a pretty big resurgence in the past few years. Honestly, it’s pretty hard to call something nerdy when one of the biggest promoters of the game is Joe Manganiello.

Now, we were playing a monthly game with 5 other people before COVID-19. Since then the game has gone weekly, with all of us playing via Facebook Messenger, and I have to tell you it’s been great. As the DM for the game it’s more work, but I find that kind of work relaxing, and honestly it’s something we all collectively look forward to every week.

So most nights at some point, I end up stepping back and building the story and working on DnD as a way to relax. We found a new way to do things, and honestly, even with our state going “Green” on the status, there’s been talk of doing a few in-person games, but we will likely keep playing remotely every week moving forward. For those of us with kids, it helps that we don’t have to find babysitters and can play from kid bedtime until midnight.

Another example I have here is exercise. I have to be honest, I had started CrossFit right before COVID-19 and was really enjoying it. But with COVID-19 all gyms were shut down, and I fell out of the habit. The main exercise option then became running, which to be honest is my personal hell. I can’t turn my brain off long enough, and running becomes boring to me. But recently I found a great app that helps…Zombies, Run! The app is here.

What I like about this is it’s sort of a mix of a podcast and exercise. In each “mission” there are audio snippets that describe the adventures of your persona, “Runner 5,” as you set out to help your home township as a runner: being sent out to get supplies, distract zombies, etc. As you run, it periodically chimes in with different updates on those missions and directions. It also integrates with Spotify or your chosen music player. So basically I’ll be running, listening to music, and all of a sudden get a message: “Pick up the pace, a rogue group of zombies must have heard you, they are gaining on you.” (Complete with zombie sounds.) And then the music resumes. It makes the process a lot more fun than before. If you want proof of that: I hate running and usually stop as soon as I can, and on my second day I did a 5k.

Try Something new…

In my wife’s and my case, we had a smoker we got from my brother and had never gotten around to using. Since COVID-19 started, we’ve been using that smoker almost every weekend for a “Culinary Experiment” to help try new things. It gives us something fun to look forward to. Our latest experiment was a 7 lb pork shoulder:

Sooooooooooo Gooooooooooooddddddddddddd!

So I would recommend trying to find something new, and making a routine out of it. It will give you something to look forward to every day or every week, and that can help stave off the feelings of burnout.

Leveraging Private Nuget Packages to support Micro-services

So one of the most common things I see come up as you build out a micro-service architecture is that there is a fair bit of code reuse involved. Managing that code can be painful, and can cause its own configuration drift and management issues.

Now given the above statement, the first response is probably “But if each micro-service is an independent unit, and meant to be self-contained, how can there be code reuse?” And my response to that is that just because the services are ultimately independently deployable does not mean that there isn’t code that will be reused between the different services.

For example, if you are building an OrderProcessingService, that might interface with classes that support things like:

  • Reading from a queue
  • Pushing to a queue
  • Logging
  • Configuration
  • Monitoring
  • Error Handling

And these elements should not be wildly different between micro-services; they should share some common implementation. For example, if you are leveraging Key Vault for your configuration, odds are you will have the same classes implemented across every possible service.

But this in itself creates its own challenges, and those challenges include but are not limited to:

  • Library Version Management: If each service is its own deployable unit, then as you make changes to common libraries you want to be able to manage versions on going.
  • Creating a central library for developers: Allowing developers to manage the deployment and changes to versions of that code in a centralized way is equally important.
  • Reduce Code Duplication: Personally, I have a pathological hatred of code duplication. I’m of the belief that if you are copying and pasting code, you did something wrong. There are plenty of tools to handle this type of situation without copying and pasting and adding to technical debt.

Now everyone is aware of NuGet, and I would say it would be pretty hard to do software development without it right now. But what you may not know is that Azure DevOps makes it easy to create a private NuGet feed, and then enable the packaging of NuGet packages and publishing via CI/CD.

Creating a NuGet Feed

The process of creating a feed is actually pretty straightforward; it involves going to a section in Azure DevOps called “Artifacts”.

And then select the “Create Feed”, and give the feed a name, and specify who has rights to use this feed:

And from here it’s pretty easy to set up the publishing of a project as a NuGet package.

Creating a package in Azure Pipelines

A few years ago this was actually a really painful process, but now it’s pretty easy. You actually don’t have to do anything in Visual Studio to support this. There are options there, but to me a NuGet package is a deployment activity, so I personally believe it should be handled in the pipeline.

So the first thing, is you need to specify the details on the configuration. If you go to “Properties” on the project:

These are important as this is the information your developers will see in their NuGet feed.

From here the next step is to configure your pipeline to enable the CI/CD. The good news is this is very easy using the following pieces:

- task: DotNetCoreCLI@2
  displayName: 'dotnet pack $(buildConfiguration)'
  inputs:
    command: pack
    versioningScheme: byEnvVar
    versionEnvVar: BUILD_BUILDNUMBER

- task: NuGetAuthenticate@0
  displayName: 'NuGet Authenticate'

- task: NuGetCommand@2
  displayName: 'NuGet push'
  inputs:
    command: push
    publishVstsFeed: 'Workshop/WellDocumentedNerd'
    allowPackageConflicts: true

Now using the above, I have specified BUILD_BUILDNUMBER as the identifier for new versions. I do this because I find it’s easier to ensure that the versions are maintained properly in the NuGet feed.

One thing of note: the NuGet Authenticate step is absolutely essential to ensure that you are logged in with the appropriate context.

Now after executing that pipeline, I would get the following in my NuGet Feed.

Consuming NuGet in a project

Now, how do we make this available to our developers and to our build agents? This process is very easy. If you go back to the “Artifacts” section, you will see the following:

The best part is that Azure DevOps will give you the ability to pull the XML required when you select “dotnet”, and it will look something like this:

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <packageSources>
    <clear />
    <add key="WellDocumentedNerd" value="....index.json" />
  </packageSources>
</configuration>


After this is done and added to your project, whenever your build pipeline attempts to build your code project, it will consider this new data source during the NuGet restore.

Why go through this?

As I mentioned above, one of the biggest headaches of a micro-service architecture, for all of its benefits, is that it creates a lot of moving parts. And managing any common code can become difficult if you have to replicate it between projects.

So this creates a nice easy solution that allows you to manage a separate code repository with private / domain specific class libraries, and still have versioning to allow for having different versions on each service while enabling them to be independently deployable.

A Few Gotcha’s for Resiliency

So I’ve been doing a lot of work over the past few months around availability, SLAs, and Chaos Engineering, and in that time I’ve come across a few “gotchas” that are important to sidestep if you want to build stronger availability into your solutions. This is by no means meant to be an exhaustive list, but here are at least a few tips from my experience that can help if you are starting down this road:

Gotcha #1 – Pulling in too many services into the calculation.

As I’ve discussed in many other posts, the first step of doing a resiliency review of your application, is to figure out what functions are absolutely essential, and governed by the SLA. Because the simple fact is that SLA calculations are a business decision and agreement, so just like any good contract, the first step is to figure out the scope of what is covered.

But let’s boil this down to simple math, SLA calculations are described in depth here.

Take the following as an example:

  • App Service – 99.95%
  • Azure Redis – 99.9%
  • Azure SQL – 99.99%

Based on the above, the calculation for the SLA gives us the following:

.9995 * .999 * .9999 = .9984 = 99.84%

Now, more on the specifics of this in Gotcha #2, but if I can remove the Redis dependency and lower the number of services involved, the calculation changes to the following:

.9995 * .9999 = .9994 = 99.94%

Notice how removing an item from the calculation causes the composite SLA to increase. Part of the reason here is that I removed something with a much lower SLA, but every item in the calculation impacts the final number, so wherever possible we should scope our calculations to only the services required to support the SLA.
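The math above is easy to script. Here is a quick sketch (the service names and SLA values come from the example above):

```python
from functools import reduce

def composite_sla(slas):
    """Composite SLA of serially-dependent services: the product of the individual SLAs."""
    return reduce(lambda acc, sla: acc * sla, slas, 1.0)

# All three services from the example above
full = composite_sla([0.9995, 0.999, 0.9999])
print(f"{full:.2%}")  # 99.84%

# Dropping Redis from the calculation raises the composite SLA
reduced = composite_sla([0.9995, 0.9999])
print(f"{reduced:.2%}")  # 99.94%
```

This also makes it obvious why every service you pull into the calculation matters: multiplying by any number below 1.0 can only drag the composite SLA down.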

Gotcha #2 – Using a caching layer incorrectly

Caching tiers are an essential part of any solution. When Redis was first created, caching tiers were seen as something you would implement only if you had aggressive performance requirements. But nowadays the demands on software solutions are so high that I would argue all major solutions have a caching tier of some kind.

Now, to slightly contradict myself: those caching tiers, while important to the performance of an application, should not be required as part of your SLA or availability calculation if implemented correctly.

What I mean by that is that caching tiers are meant to be transient: they can be dropped at any time, and the application should be able to function without them, rather than relying on them as a persistence store. The most common pattern that violates this recommendation is the following:

  • User takes an action that requests data.
  • Application reaches down to data store to retrieve data.
  • Application puts data in Redis cache.
  • Application returns requested data.

The above has no issues at all; that’s what Redis is for. The problem is when the next part looks like this:

  • User takes an action that requests data.
  • Application pulls data from Redis and returns.
  • If data is not available, application errors out.

Given the ephemeral nature of caches, and the fact that these caches can be very hard to replicate, your application should be smart enough that if the data isn’t in Redis, it will go get it from a data store.

By implementing a fallback to the data store, and configuring your application to use its cache only for performance optimization, you can effectively remove the Redis cache from the SLA calculation.
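That fallback behavior is the classic cache-aside pattern. Here is a minimal sketch of the idea, using an in-memory dict as a stand-in for a real Redis client; the `load_from_store` callback is a hypothetical stand-in for your data store:

```python
class CacheAsideReader:
    """Treats the cache as an optimization, not a system of record:
    a cache miss (or a dropped cache) falls back to the data store."""

    def __init__(self, load_from_store):
        self._cache = {}          # stand-in for a Redis client
        self._load = load_from_store

    def get(self, key):
        value = self._cache.get(key)
        if value is None:
            # Cache miss: the app still works, it just pays the
            # data-store round trip and repopulates the cache.
            value = self._load(key)
            self._cache[key] = value
        return value

    def drop_cache(self):
        # Simulates the cache tier being flushed or going away entirely.
        self._cache.clear()

reader = CacheAsideReader(load_from_store=lambda key: f"row-for-{key}")
print(reader.get("order-42"))   # loaded from the store, then cached
reader.drop_cache()
print(reader.get("order-42"))   # still works after the cache is dropped
```

Because losing the cache only costs latency, not correctness, the cache stops being a hard dependency in your availability calculation.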

Gotcha #3 – Using the right event store

Now the other way I’ve seen Redis or caching tiers misused is as an event data store. A practice I’ve seen done over and over again is to leverage Redis to store JSON objects as part of an event store because of the performance benefits. There are technologies much better suited to this purpose, which can manage costs while benefiting your solutions:

  • Cosmos DB: Cosmos is designed exactly for this purpose, providing high performance and high availability for your applications. It does this by allowing you to configure the appropriate write strategy.
  • Blob Storage: Again, Blob storage can be used as an event store, by writing objects to blob, although not my first choice, it is a viable option for managing costs.
  • Other database technologies: There are a myriad of potential options here from Maria, PostGres, MySQL, SQL, etc. All of which performance this operation better.

Gotcha #4 – Mismanaged Configuration

I did a whole post on this, but the idea that configuration cannot be changed without causing an application outage is always a concern. You should be able to change an application's endpoint without any major hiccups in its operation.
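As a hedged sketch of the idea: instead of capturing an endpoint once at startup, the application resolves it through a configuration provider on every call, so a change takes effect without a restart. All names here are illustrative stand-ins for a real configuration service:

```python
class ConfigProvider:
    """Illustrative stand-in for App Configuration / Key Vault / env vars."""
    def __init__(self, settings):
        self._settings = settings

    def get(self, key):
        return self._settings[key]

    def update(self, key, value):
        self._settings[key] = value

def call_service(config):
    # Resolve the endpoint at call time, not at startup.
    endpoint = config.get("service_endpoint")
    return f"GET {endpoint}/health"

config = ConfigProvider({"service_endpoint": "https://svc-old.example.com"})
print(call_service(config))
config.update("service_endpoint", "https://svc-new.example.com")
print(call_service(config))  # picks up the change with no restart
```

The point is simply that nothing in the call path caches the old value, so configuration becomes something you can change at runtime.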


Building CI / CD for Terraform

Building CI / CD for Terraform

I’ve made no secret on this blog of how I feel about TerraForm, and how I believe infrastructure as code is absolutely essential to managing any cloud based deployments long term.

There are so many benefits to leveraging these technologies. And for me one of the biggest is that you can manage your infrastructure deployments in the exact same manner as your code changes.

If you're curious about the benefits of a CI/CD pipeline, there are a lot of posts out there. But for this post I wanted to talk about how you can take those TerraForm templates and build out a CI/CD pipeline to deploy them to your environments.

So for this project, I've built a TerraForm template that deploys a lot of resources out to 3 environments. I wanted to do this in a cost-saving manner, so I manage the environments in the following way:

  • Development: always exists, but in a scaled-down capacity to keep costs down.
  • Test: only created when we are ready to begin testing, and destroyed afterward.
  • Production: where our production application resides.

Now for the sake of this exercise, I will only be building a deployment pipeline for the TerraForm code, and in a later post will examine how to integrate this with code changes.

Now, as with everything, there are lots of ways to make something work. I'm just showing an approach that has worked for me.

Configuring your template

The first part of this is to build out your template so that configuration changes can easily be made via the automated deployment pipeline.

The best way I've found to do this is with variables, and whether you are doing automated deployments or not, I highly recommend using them. If you ever have more than just yourself working on a TerraForm template, or plan to create more than one environment, you will absolutely need variables, so it's generally a good practice.

For the sake of this example, I declared the following variables in a file called “variables.tf”:

variable "location" {
    default = "usgovvirginia"
}

variable "environment_code" {
    description = "The environment code required for the solution."
}

variable "deployment_code" {
    description = "The deployment code of the solution"
}

variable "location_code" {
    description = "The location code of the solution."
}

variable "subscription_id" {
    description = "The subscription being deployed."
}

variable "client_id" {
    description = "The client id of the service principal"
}

variable "client_secret" {
    description = "The client secret for the service principal"
}

variable "tenant_id" {
    description = "The tenant id of the service principal"
}

variable "project_name" {
    description = "The name code of the project"
    default = "cds"
}

variable "group_name" {
    description = "The name put into all resource groups."
    default = "CDS"
}
Also worth noting are the client id, client secret, subscription id, and tenant id above. Using Azure DevOps, you are going to need to deploy using a service principal, so these will be important.

Then in your main.tf, you will have the following:

provider "azurerm" {
    subscription_id = var.subscription_id
    version = "=2.0.0"

    client_id = var.client_id
    client_secret = var.client_secret
    tenant_id = var.tenant_id
    environment = "usgovernment"

    features {}
}

Now, it's worth mentioning that when I'm working with my template locally, I'm using a file called "variables.tfvars", which looks like the following:

location = "usgovvirginia"
environment_code = "us1"
deployment_code = "d"
location_code = "us1"
subscription_id = "..."
group_name = "CDS"

Configuring the Pipeline

This will be important later as you build out the automation. From here, the next step is to build out your Azure DevOps pipeline, and for the sample I'm going to show, I'm using a YAML pipeline.

So what I did was plan on creating the "variables.tfvars" file as part of my deployment:

- script: |
    touch variables.tfvars
    echo -e "location = \""$LOCATION"\"" >> variables.tfvars
    echo -e "environment_code = \""$ENVIRONMENT_CODE"\"" >> variables.tfvars
    echo -e "deployment_code = \""$DEPLOYMENT_CODE"\"" >> variables.tfvars
    echo -e "location_code = \""$LOCATION_CODE"\"" >> variables.tfvars
    echo -e "subscription_id = \""$SUBSCRIPTION_ID"\"" >> variables.tfvars
    echo -e "group_name = \""$GROUP_NAME"\"" >> variables.tfvars
    echo -e "client_id = \""$SP_APPLICATIONID"\"" >> variables.tfvars
    echo -e "tenant_id = \""$SP_TENANTID"\"" >> variables.tfvars
  displayName: 'Create variables Tfvars'

Now, the next question is where those values come from: I've declared them as variables on the pipeline itself.

From there, because I'm deploying to Azure Government, I added an Azure CLI step to make sure my command-line context is pointed at Azure Government:

- task: AzureCLI@2
  inputs:
    azureSubscription: 'Kemack - Azure Gov'
    scriptType: 'bash'
    scriptLocation: 'inlineScript'
    inlineScript: |
      az cloud set --name AzureUSGovernment
      az account show

So how do we do the TerraForm plan / apply? The answer is pretty straightforward: I installed an extension that provides the "Terraform Tool Installer" task, and used it as follows:

- task: TerraformInstaller@0
  inputs:
    terraformVersion: '0.12.3'
  displayName: "Install Terraform"

After that, it becomes pretty straightforward to implement:

- script: |
    terraform init
  displayName: 'Terraform - Run Init'

- script: |
    terraform validate
  displayName: 'Terraform - Validate tf'

- script: |
    terraform plan -var-file variables.tfvars -out=tfPlan.txt
  displayName: 'Terraform - Run Plan'

- script: |
    echo $BUILD_BUILDNUMBER".txt"
    echo $BUILD_BUILDID".txt"
    az storage blob upload --connection-string $TFPLANSTORAGE -f tfPlan.txt -c plans -n $BUILD_BUILDNUMBER"-plan.txt"
  displayName: 'Upload Terraform Plan to Blob'

- script: |
    terraform apply -auto-approve -var-file variables.tfvars
  displayName: 'Terraform - Run Apply'

Now, the cool part about the above is that I took it a step further: I created a storage account in Azure and added its connection string as a secret. I then built logic so that when you run this pipeline, it runs a plan ahead of the apply, outputs that plan to a text file, and saves it in the storage account with the build number as the file name.

I personally like this, as it creates a log of the activities performed during each automated build going forward.

Now, I do plan on refining this and taking steps to create more automation around environments, so more to come.

TerraForm – Using the new Azure AD Provider

TerraForm – Using the new Azure AD Provider

So by using TerraForm, you gain a lot of benefits, including being able to manage all parts of your infrastructure in the HCL language, which makes them rather easy to maintain.

A key part of that is not only being able to manage the resources you create, but also access to them, by creating and assigning service principals. In older versions of TerraForm this was possible using azurerm_azuread_application and other elements. I had previously done this in the Kubernetes template I have on GitHub.

Now, with v2.0 of the azurerm provider, there have been some pretty big changes, including removing all of the Azure AD elements and moving them to their own provider, and the question becomes "How does that change my template?"

Below is an example; it shows the creation of a service principal with a random password, and the creation of an access policy for a keyvault.

resource "random_string" "kub-rs-pd-kv" {
  length = 32
  special = true
}

data "azurerm_subscription" "current" {
  subscription_id = "${var.subscription_id}"
}

resource "azurerm_azuread_application" "kub-ad-app-kv1" {
  name = "${format("%s%s%s-KUB1", upper(var.environment_code), upper(var.deployment_code), upper(var.location_code))}"
  available_to_other_tenants = false
  oauth2_allow_implicit_flow = true
}

resource "azurerm_azuread_service_principal" "kub-ad-sp-kv1" {
  application_id = "${azurerm_azuread_application.kub-ad-app-kv1.application_id}"
}

resource "azurerm_azuread_service_principal_password" "kub-ad-spp-kv" {
  service_principal_id = "${azurerm_azuread_service_principal.kub-ad-sp-kv1.id}"
  value                = "${element(random_string.kub-rs-pd-kv.*.result, count.index)}"
  end_date             = "2020-01-01T01:02:03Z"
}

resource "azurerm_key_vault" "kub-kv" {
  name = "${var.environment_code}${var.deployment_code}${var.location_code}lkub-kv1"
  location = "${var.azure_location}"
  resource_group_name = "${azurerm_resource_group.management.name}"

  sku {
    name = "standard"
  }

  tenant_id = "${var.keyvault_tenantid}"

  access_policy {
    tenant_id = "${var.keyvault_tenantid}"
    object_id = "${azurerm_azuread_service_principal.kub-ad-sp-kv1.id}"

    key_permissions = [
      # permission list omitted here
    ]

    secret_permissions = [
      # permission list omitted here
    ]
  }

  access_policy {
    tenant_id = "${var.keyvault_tenantid}"
    object_id = "${azurerm_azuread_service_principal.kub-ad-sp-kv1.id}"

    key_permissions = [
      # permission list omitted here
    ]

    secret_permissions = [
      # permission list omitted here
    ]
  }

  depends_on = ["azurerm_role_assignment.kub-ad-sp-ra-kv1"]
}

Now, as I mentioned, with the change to the new provider, you will see a new version of this code. Below is an updated form that generates a service principal with a random password.

provider "azuread" {
  version = "=0.7.0"
}

resource "random_string" "cds-rs-pd-kv" {
  length = 32
  special = true
}

resource "azuread_application" "cds-ad-app-kv1" {
  name = format("%s-%s%s-cds1",var.project_name,var.deployment_code,var.environment_code)
  oauth2_allow_implicit_flow = true
}

resource "azuread_service_principal" "cds-ad-sp-kv1" {
  application_id = azuread_application.cds-ad-app-kv1.application_id
}

resource "azuread_service_principal_password" "cds-ad-spp-kv" {
  service_principal_id = azuread_service_principal.cds-ad-sp-kv1.id
  value                = random_string.cds-rs-pd-kv.result
  end_date             = "2020-01-01T01:02:03Z"
}

Notice how much cleaner the code is: we aren't using ${} for string interpolation, and ultimately the resources are much simpler. So the next question is, how do I connect this with my code to assign this service principal to a key vault access policy?

You can accomplish that with the following code, which is in a different file in the same directory:

resource "azurerm_resource_group" "cds-configuration-rg" {
    name = format("%s-Configuration",var.group_name)
    location = var.location
}

data "azurerm_client_config" "current" {}

resource "azurerm_key_vault" "cds-kv" {
    name = format("%s-%s%s-kv",var.project_name,var.deployment_code,var.environment_code)
    location = var.location
    resource_group_name = azurerm_resource_group.cds-configuration-rg.name
    enabled_for_disk_encryption = true
    tenant_id = data.azurerm_client_config.current.tenant_id
    soft_delete_enabled = true
    purge_protection_enabled = false
    sku_name = "standard"

    access_policy {
        tenant_id = data.azurerm_client_config.current.tenant_id
        object_id = data.azurerm_client_config.current.object_id

        key_permissions = [
            # permission list omitted here
        ]

        secret_permissions = [
            # permission list omitted here
        ]
    }

    access_policy {
        tenant_id = data.azurerm_client_config.current.tenant_id
        object_id = azuread_service_principal.cds-ad-sp-kv1.id

        key_permissions = [
            # permission list omitted here
        ]

        secret_permissions = [
            # permission list omitted here
        ]
    }
}
Notice that I am able to reference the “azuread_service_principal.cds-ad-sp-kv1.id” to access the newly created service principal without issue.

Good practices for starting with containers

Good practices for starting with containers

So I really hate the saying "best practices", mainly because it creates a belief that there is only one right way to do things. But I wanted to put together a post with some ideas for strengthening your microservices architectures.

As I've previously discussed, microservice architectures are more complicated to implement, but they have a lot of huge benefits for your solution. Some of those benefits are:

  • Independently deployable pieces, no more large scale deployments.
  • More focused testing efforts.
  • Using the right technology for each piece of your solution.
  • Increased resiliency from cluster-based deployments.

But for a lot of people, including myself, the hardest part of this process is figuring out how to structure a micro-service. How small should each piece be? How do they work together?

So here are some practices I’ve found helpful if you are starting to leverage this in your solutions.

One service = one job

One of the first questions is how small my containers should be. Is there such a thing as too small? A good rule of thumb is the idea of separation of concerns. If you take every use-case and start to break it down to a single purpose, you'll find you get to a good micro-service design pretty quickly.

Let's look at an example. I recently worked on a solution with a colleague of mine that ended up pulling from an API and then extracting that information to put it into a data model.

In the monolith way of thinking, that would have been 1 API call: pass in the data and then cycle through and process it. But the problem was throughput; if I had pulled the 67 different regions, and the 300+ records per region, and processed it all as a batch, it would have been a mess of one gigantic API call.

So instead, we had one function that cycled through the regions, pulled them all down to JSON files in blob storage, and then queued a message for each.

Then we had another function that, when a message is queued, takes that message, reads in the records for that region, and processes them, saving them to the database. This second function is another micro-service.

Now there are several benefits to this approach, but chief among them: the second function can scale independently of the first, and I can respond to queued messages as they come in, using asynchronous processing.
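The two-function pattern above can be sketched roughly as follows. Here `queue.Queue` and a plain dict stand in for the real storage queue and blob storage, and the names and record counts are illustrative:

```python
import json
import queue

blob_storage = {}            # stands in for blob storage
work_queue = queue.Queue()   # stands in for a storage queue

def pull_regions(regions):
    """Function 1: pull each region to a JSON 'blob', then queue a message."""
    for region in regions:
        records = [{"region": region, "id": i} for i in range(3)]  # fake API pull
        blob_name = f"{region}.json"
        blob_storage[blob_name] = json.dumps(records)
        work_queue.put(blob_name)

def process_messages(database):
    """Function 2: triggered per message, read the blob, save to the database."""
    while not work_queue.empty():
        blob_name = work_queue.get()
        records = json.loads(blob_storage[blob_name])
        database.extend(records)   # stands in for the real save

database = []
pull_regions(["eastus", "westus"])
process_messages(database)
print(len(database))  # 6: 3 records from each of 2 regions
```

In the real solution the second function fires per message, so it can run many instances in parallel while the first function finishes its sweep.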

Three words… Domain driven design

For a great definition of Domain-Driven Design, see here. The idea is pretty straightforward: the structure of your software should mirror the business logic being implemented.

So, for example, your micro-services should mirror what the business is trying to do. Let's take the most straightforward example... e-commerce.

If we have to track orders, the process once an order is submitted looks like the following:

  • Orders are submitted.
  • Inventory is verified.
  • Order Payment is processed.
  • Notification is sent to supplier for processing.
  • Confirmation is sent to the customer.
  • Order is fulfilled and shipped

Looking at the above, one way to implement this would be to do the following:

  • OrderService: Manage the orders from start to finish.
  • OrderRecorderService: Record order in tracking system, so you can track the order throughout the process.
  • OrderInventoryService: Takes the contents of the order and checks it against inventory.
  • OrderPaymentService: Processes the payment of the order.
  • OrderSupplierNotificationService: Interact with a 3rd party API to submit the order to the supplier.
  • OrderConfirmationService: Send an email confirming the order is received and being processed.
  • OrderStatusService: Continues to check the 3rd party API for the status of the order.

If you notice above, outside of an orchestration service, they match exactly what the steps are according to the business. This provides a streamlined approach that makes it easy to make changes and easy for new team members to understand. More than likely, communication between services is done via queues.
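As a rough sketch of that decomposition, each service below is a small handler that enriches the order message and hands it to the next step. The service names follow the list above, but the bodies are placeholders, and in production the hand-off would be a queue rather than a direct function call:

```python
def order_inventory_service(order):
    order["inventory_checked"] = True   # placeholder for a real inventory check
    return order

def order_payment_service(order):
    order["paid"] = True                # placeholder for real payment processing
    return order

def order_supplier_notification_service(order):
    order["supplier_notified"] = True   # placeholder for the 3rd-party API call
    return order

def order_confirmation_service(order):
    order["customer_confirmed"] = True  # placeholder for the confirmation email
    return order

# OrderService orchestrates by passing the message through each step.
PIPELINE = [
    order_inventory_service,
    order_payment_service,
    order_supplier_notification_service,
    order_confirmation_service,
]

def order_service(order):
    for step in PIPELINE:
        order = step(order)   # in production: publish to the next queue
    return order

result = order_service({"id": 1001, "items": ["widget"]})
print(result["paid"], result["customer_confirmed"])
```

Because each step only reads and enriches the message, any one handler can be rewritten or redeployed without touching the others.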

For example, let's say the company above wants to expand to accept Venmo as a payment method. Really, that means you have to update the OrderPaymentService to accept the option and process the payment. Additionally, OrderPaymentService might itself be an orchestration service between different micro-services, one per payment method.

Make them independently deployable

This is the big one: if you really want to see the benefit of microservices, they MUST be independently deployable. Looking at the above example, this means I can deploy each of these separate services and make changes to one without doing a full application deployment.

Take the scenario above: if I wanted to add a new payment method, I should be able to update the OrderPaymentService, check in those changes, and then deploy it from dev through production without deploying the entire application.

Now, the first time I heard that, I thought it was the most ridiculous thing I'd ever heard, but there are some things you can do to make it possible.

  • Each service should have its own data store: If you make sure each service has its own data store, that makes it much easier to manage version changes. Even if you are going to leverage something like SQL Server, make sure that the tables leveraged by each micro-service are used by that service, and that service only. This can be accomplished using schemas.
  • Put layers of abstraction between services: A common example is queuing or eventing. If you have a message being passed through, then as long as the message format leaving doesn't change, there is no need to update the receiver.
  • If you are going to do direct API communication, use versioning: If you do have to have APIs connecting these services, leverage versioning to allow micro-services to be deployed and changed without breaking other parts of the application.

Build with resiliency in mind

If you adopt this approach to micro-services, then one of the biggest things you will notice quickly is that each micro-service becomes its own black box. As such, I find it's good to build each of these components with resiliency in mind: things like leveraging Polly for retry or circuit-breaker patterns. These are great ways of making sure your services remain resilient, and it will have a cumulative effect on your application.

For example, take our OrderPaymentService above: we know that queue messages should be coming in with the order and payment details. We can take a microscope to this service and ask, how could it fail? It's not hard to get to a list like this:

  • Message comes through in a bad format.
  • Payment service can’t be reached.
  • Payment is declined (for any one of a variety of reasons)
  • Service fails while waiting on payment to be processed.

Now, for some of the above, it's just some simple error handling, like checking the format of the message. We can also build logic to check if the payment service is available and do an exponential retry until it is.
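That exponential retry might look something like this in Python (Polly provides this out of the box in .NET; this hand-rolled version uses illustrative names and a simulated payment service):

```python
import time

def retry_with_backoff(operation, max_attempts=5, base_delay=0.1):
    """Retry an operation, doubling the wait between attempts."""
    for attempt in range(max_attempts):
        try:
            return operation()
        except ConnectionError:
            if attempt == max_attempts - 1:
                raise                               # out of attempts, surface the error
            time.sleep(base_delay * (2 ** attempt))  # wait grows: d, 2d, 4d, ...

# Simulated payment service that fails twice, then succeeds.
calls = {"count": 0}
def charge_payment():
    calls["count"] += 1
    if calls["count"] < 3:
        raise ConnectionError("payment service unreachable")
    return "payment processed"

print(retry_with_backoff(charge_payment))  # succeeds on the third attempt
```

The doubling delay keeps transient failures cheap to ride out while avoiding hammering a service that is already struggling.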

We might also consider implementing a circuit breaker that says if we can't process payments after so many tries, the service switches to an unhealthy state and triggers a notification workflow.

And in the final scenario, we could implement a state store that tracks the state of the payment being processed, should the service fail and the work need to be picked up by another instance.
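Pulling those ideas together, a minimal circuit breaker could look like the following sketch; the threshold and state names are illustrative assumptions, not a specific library's API:

```python
class CircuitBreaker:
    """Trips to Open after `threshold` consecutive failures; a success resets it."""
    def __init__(self, threshold=3):
        self.threshold = threshold
        self.failures = 0
        self.state = "Closed"

    def record_success(self):
        self.failures = 0
        self.state = "Closed"

    def record_failure(self):
        self.failures += 1
        if self.failures >= self.threshold:
            self.state = "Open"       # trip: stop processing, notify operations
        else:
            self.state = "Half-Open"  # degraded but still processing

breaker = CircuitBreaker(threshold=3)
breaker.record_failure()
print(breaker.state)  # Half-Open
breaker.record_failure()
breaker.record_failure()
print(breaker.state)  # Open -- trigger the notification workflow here
```

The service wraps its payment calls with `record_success` / `record_failure`, and anything watching the breaker's state gets an early signal that the dependency is failing.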

Consider monitoring early

This is the one that everyone forgets, but it dovetails nicely with the previous one. It's important that there be a mechanism for tracking and monitoring the state of your micro-service. I find too often it's easy to say, "Oh, the service is running, so that means it's fine." That's like saying that just because the homepage loads, a full web application is working.

You should build into your micro-services the ability to track their health and expose it to operations tools. Let's face it: at the end of the day, all code will eventually be deployed, and all deployed code must be monitored.

So, for example, looking at the above: if I build a circuit-breaker pattern into OrderPaymentService, and every failure updates a status stored in memory that says it's unhealthy, I can then expose an HTTP endpoint that returns the status of that breaker:

  • Closed: Service is running fine and healthy
  • Half-Open: Service is experiencing some errors but still processing.
  • Open: Service is taken offline for being unhealthy.

I can then build out logic so that when the breaker reaches Half-Open, or even Open, specific events occur.
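Exposing that breaker state over HTTP can be as simple as mapping the three states to responses; the status codes here are a common convention I'm assuming, not something prescribed:

```python
# Map the breaker states described above to an HTTP-style health response.
HEALTH_RESPONSES = {
    "Closed":    (200, "healthy"),
    "Half-Open": (200, "degraded: errors observed, still processing"),
    "Open":      (503, "unhealthy: service taken offline"),
}

def health_endpoint(breaker_state):
    """What a GET /health handler would return for a given breaker state."""
    status_code, body = HEALTH_RESPONSES[breaker_state]
    return status_code, body

print(health_endpoint("Closed"))  # (200, 'healthy')
print(health_endpoint("Open"))    # (503, 'unhealthy: service taken offline')
```

A monitoring tool polling this endpoint can then alert on the 503, or on the "degraded" body, without knowing anything about the service's internals.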

Start small, don’t boil the ocean

This one seems kind of ironic given the above. But if you are working on an existing application, you will never be able to convince management to let you junk it and start over. So what I have done in the past is take an application, and when it's time to make a change to a part of it, take the opportunity to re-architect and make it more resilient: deconstruct the pieces and implement a micro-service approach to resolving the problem.

Stateless over stateful

Honestly, this is just a good practice to get used to. Most container technologies, like Docker or Kubernetes, really favor the idea of elastic scale and the ability to start or kill a process at any time. This becomes a lot harder if you have to manage state within a container. If you must manage state, I would definitely recommend using an external store for that information.

Now, I know not every one of these might fit your situation, but I've found that these practices make it much easier to transition to creating micro-services for your solutions and to see the benefits of doing so.