Being your own enemy….Perfectionism

So this is something I’ve been thinking a lot about lately, and I figured it was worth a blog post: perfectionism, and the idea of being your own worst enemy.

In my new role, I’m doing a lot more coding and software development, and I find myself in a position I’ve been in before: I have sort of a “blank slate” to build something out, and a degree of latitude to figure out how I want to do things. It can be great, but also stressful.

We’ve all heard the stories of Steve Jobs, Elon Musk, Thomas Edison, and others who demanded nothing less than perfection from themselves and those they worked with. And we’ve all had that manager whose standards were too high, or even been the person who tries to make every detail of their work perfect.

I often catch myself doing the latter, being the one who holds myself to a higher standard, and there can be a lot of reasons for that. Some of those reasons include:

  • Perfectionism
  • Ego
  • Fear of Failure
  • Imposter Syndrome
  • Lack of self confidence

In my experience, because the word “Perfectionist” carries a positive connotation thanks to people like Elon Musk or Steve Jobs, there is now this “convention” where people say “I’m just a perfectionist” to mask one of the other truths above.

And at the end of the day, that’s telling yourself a lie, which can create a vicious circle. For example, it took me a long time to realize that my perfectionist tendencies were really imposter syndrome.

And this ultimately created a vicious cycle, as follows:

  • I would be overcritical of my work, and put in more hours thinking that it “needs to be better” or people are going to “realize I’m not as good at this as I think I should be.”
  • When people noticed the extra hours and work, I would hide it by saying, “I’m a perfectionist, and just need to really deliver the best work possible.”
  • I would then feel increased pressure to deliver perfection every time, and repeat the cycle with more intensity.

And it should surprise no one that the above cycle is about the furthest thing from sustainable that you can get. Because I would take on too much and put too much pressure on myself, I would set myself up for failure, which made my imposter syndrome a self-fulfilling prophecy.

Now, after talking to friends and colleagues, I find that this type of issue is pretty common, with subtle differences (remove imposter syndrome and replace it with “fear of failure,” for example). And the first thing to know is that you are not alone; a ton of people are now starting to talk more about this. Personally, I like Scott Hanselman’s discussion on this topic, found here.

But here are some tricks I’ve found that help me avoid the downward spiral, increase the amount of work I deliver, and get to a quality I am satisfied with.

Tip 1 – Keep objectives small

This is the biggest thing I find that helps me, so let’s do it first. I did this recently with a coding project I needed to work on: I take every coding project and break it into small tasks, things that can be done in 20-30 minutes. Then I break that list into two categories: Essential and Nice to Have.

The idea here is that I am forcing myself to take two minutes and define what is absolutely essential versus what would be nice to have. Then, as I work, I focus on the Essentials, while roping in as many Nice to Haves as I can.

Now, what I did find is that this alone is not enough, as you will find ways to push to make sure both groups get done. But Tip #2 helps with that.

Tip 2 – Timebox

The next part is that I timebox everything I do, maybe not as tight as “I have 30 minutes to do this,” but more of a “I’m going to get X done before lunch.” I find that by doing so, I ensure that I don’t lose focus on the larger picture.

This forces me to keep the essential at the front of my mind, and only lets me go so far down the “rabbit hole” that is the “Nice to Have” list.

At the end of the timebox, I then adopt the Scrum mentality and either add the “Nice to Have” items to the backlog, or throw them out altogether. This helps me feel like I’m being productive and delivering on what I need to, and it can lead to a confidence boost when I see how many “Nice to Haves” I knocked out.

Tip 3 – Be clear about the desired outcome

This is the other critical one: be clear, both with yourself and when you communicate with team members. Avoid words like “need to…” and be explicit about “trying things.”

For example, I had a scenario where I wanted to implement threading in Python to resolve an issue and make something more performant. At the time I hadn’t even researched threading, so I was very clear with my team members that I was going to “try” to make this work. I went into it with the expectation of trying, not 100% commitment, which reminded me that getting this working was not essential.

Now, as it turns out, threading in Python is really easy, and I was pretty thrilled with the results (a 2-hour process down to 17 minutes). But it helps to understand that you need to be clear about what you are “trying to do” or “experimenting with,” and what the expected outcome is.
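To make that concrete, here’s a minimal sketch of the kind of change I’m describing. This is not the code from that project, just an illustration of how little ceremony Python needs to thread I/O-bound work:

import time
from concurrent.futures import ThreadPoolExecutor

def process_item(item):
    # Stand-in for slow, I/O-bound work (API calls, file downloads, etc.)
    time.sleep(1)
    return item * 2

items = list(range(20))

# Sequentially this is ~20 seconds of waiting; with 10 workers the
# waits overlap and the same work finishes in roughly 2 seconds.
with ThreadPoolExecutor(max_workers=10) as executor:
    results = list(executor.map(process_item, items))

print(results)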

Updating version numbers for Python Packages in Azure DevOps

So I did a previous post on how to create package libraries in Python, and I wanted to put up a post here on how to solve a problem I identified immediately.

If you look at the setup.py, you will see that the version number and other details are very much hard-coded into the file. This is concerning, as it requires a manual step to go update the file before you can do a build / publish activity. And honestly, nowadays CI/CD is the way of the world.

So to resolve this, I built a script to have the automated build agent inject the version number created by the CI/CD tool. And that code is the following:

import fileinput
import sys

# Expects three arguments: the file to edit, the text to search for,
# and the replacement text.
filename = sys.argv[1]
text_to_search = sys.argv[2]
replacement_text = sys.argv[3]

# fileinput with inplace=True redirects print() back into the file,
# rewriting each line; a ".bak" copy of the original is kept as a backup.
with fileinput.FileInput(filename, inplace=True, backup='.bak') as file:
    for line in file:
        print(line.replace(text_to_search, replacement_text), end='')

I then updated my setup.py with the following:

name="packageName", 
    version="{{__BuildNumber__}}", 
    python_requires = '>=3.7',
    description="{{__BuildReason__}}", 

And that’s it; from there, you just run this script during the build to inject the new build number into the file.
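For example, assuming the script above is saved as replace.py (a name used just for illustration), a build step could invoke it like this, where $(Build.BuildNumber) is the predefined Azure DevOps variable:

python replace.py setup.py "{{__BuildNumber__}}" "$(Build.BuildNumber)"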

Tips for Job Interviews

So I wanted to take a second to talk about interviewing, and how to go about interviewing for a position. I recently changed jobs at my current employer, have had this conversation with quite a few friends, and wanted to put out a post with some quick tips for interviewing:

Tip 1 – Your resume should really be comprehensive

One of the biggest mistakes I see people make with their resume is getting too beholden to the idea that a resume can only be one page, and I honestly don’t know why people believe that.

I know part of that comes from the fact that people want to keep it brief, and I totally get that. But if I’m being honest, if you are approaching ten years in your field, you more than likely have quite a breadth of skills and relevant experience that should be documented. And you don’t know which specific skills they are most looking for, so it’s important to make your resume comprehensive to ensure you check the appropriate boxes.

Additionally, and I see this mistake a lot, people leave off skills because they don’t think they matter, when they do. For example, maybe you’ve worked with an end customer, done a sales pitch, or managed an escalation or expectations. These are valuable skills that many people just don’t have, and they should be documented in your resume.

Tip 2 – Build a brand deck, it’s the trailer

I have to be honest: in my experience, the practice of the cover letter is really out of date. Most people I’ve known don’t put a lot of stock in them. I would argue a better medium is a “Brand Deck,” which really enables you to showcase your skills.

If the resume is a comprehensive view of your career and skills, the Brand Deck is the “trailer”: it allows you to shape the narrative around your skills and strengths and present that to a potential employer for their review. This can be a powerful tool for showing your skills ahead of the conversation in an interview, and it ultimately helps set up a more productive conversation for both you and the potential employer. Instead of asking, “What kind of experience do you have with Scrum?” they can see in your brand deck, “Led team in Scrum Master role, and coordinated activities for sprint planning and retrospectives.”

Tip 3 – Be on time, and very professional

This one I shouldn’t have to say, but I will. You need to be on time, and potentially even early, for every interview. Treat it with a professional approach, and if you have a scheduled time slot, be respectful of it; don’t try to go over the allotted time. I have a whole blog post on how to run a meeting, and you should treat your interview in the same manner.

Tip 4 – Never use the word “Expert”. I don’t care how good you are.

This is a good rule that a friend of mine gave me: never, EVER say you are an expert in anything. There is always “a bigger fish,” and there will always be someone who knows more than you on any topic. So the term “Expert” is just inviting people to play “stump the chump,” and honestly, that’s a waste of your time and theirs.

Tip 5 – Be humble, hungry, and smart

A great book that I’m going to do a post on is The Ideal Team Player. The book really is a fascinating read, with some amazing insights into how someone can be a true team player by embodying these values. At its core, the simple explanation of these values is:

  • Humble: Don’t have an ego or a chip on your shoulder. Be the kind of person who isn’t in it for personal glory, but rather has a passion for the work they are doing.
  • Hungry: Be someone who has a passion for the work and finds ways to do better and accomplish the goals set out for the team. Be someone who wants to find ways to contribute, and doesn’t need to be told how to move forward.
  • Smart: Be someone who knows how to communicate with people. Treat everyone with the same respect, know how to talk to people, and be smart enough to know what’s appropriate.

Tip 6 – Do your homework

Another key thing I’ve seen people not do enough of is homework up front. Research the company you are applying to, and find a way to make that research part of the conversation. One of the most important aspects of the interview is that you are “selling yourself” to a potential employer.

Tip 7 – Always ask questions

Another key piece of wisdom I’ve learned over the years is that you must always ask questions. ALWAYS. Don’t be afraid to ask about the culture or the team; some of the questions I’ve asked are the following:

  • What is the best part of working with this team?
  • What is the worst part of working with this team?
  • How do you feel about the leadership of the team?
  • What about the culture?
  • Do you find people are willing to discuss new ideas?

Tip 8 – Stick the landing

I got this advice from a very good friend, and honestly it is the best thing I’ve ever done in an interview. I now end every interview with the following two questions.

  • Do you have any reservations about me for this position? And if so, can we address those now?
  • Can I count on you for a “Yes” vote when it comes time to hire?

What I love about these questions is that they create a very specific conversation focused on the reason you are all there. Let’s face it: they are looking to see if you are a good fit, and you are looking to see the same. The first question gives them a chance to talk to you about anything they might have reservations about.

The biggest thing here is that if they say, “Well, you don’t have a lot of experience in _______,” you can possibly clear up that misconception by talking about something adjacent.

The final question really helps you show them how committed and interested you are in the position. It solidifies that you want this, and are ultimately looking to make it happen.

Tip 9 – Remember this is a sales conversation

Remember, interviewing is a sales motion, where you are selling yourself. You are selling them on why you would be an excellent addition to their team. So approach this as such, and it will always help to orient you in the right direction.

Final thoughts

Remember at the end of this process, you are interviewing them as much as they are interviewing you. You want to make sure you are going to enjoy and want to work with these people. So make sure you pay attention and ask questions during the interview.

2021…to Infinity and Beyond

Well all, I know I’ve been silent for a while, and that was not intentional; life just got in the way of me posting. That is something I am going to endeavor to change, getting back on track with regular updates.

So things went nuts for the month of December, between the normal craziness of the holidays and the fact that I changed jobs at Microsoft, lucky enough to move over to the engineering side. That marked the achievement of a dream of mine, and with it came a lot of things; I am now working on many exciting new projects.

But I wanted to talk about my approach to 2021 and moving forward. I took some time over the past month to reflect on 2020, and the year was definitely one for the books. 2020 can be summarized as a chaotic year, and above all it marked a pretty radical change in the world; I’m not convinced things will ever go back 100% to the way they were before.

And while I know there is no shortage of posts, blogs, memes, and videos that talked about the dumpster fire that 2020 was, I prefer to use this as a time to look back at the good things that happened.

In a lot of ways, I think 2020 showed the strength and resiliency of the human spirit, and the fact that when we pull together, we can accomplish quite a lot. This year also marked the realization for me of what it takes to enjoy life, what values drive me, and what defines success for my life. And it showed how much I learned, and continue to learn, moving forward.

So while 2020 was a year of sacrifice, I want to focus on the positives, which were the following:

  • More time spent with my family.
  • Built a strong core group of friends around a game I really enjoy.
  • Learned and grew more as a creative this year, which really boosted my belief in my ability to create things in this world.
  • I got to work with some truly amazing people who are building solutions that are changing the world and making the world a better place.
  • I learned a new programming language (Python)
  • I’ve learned more about the values that define success for me.
  • I’ve watched my kids grow, and their minds grow, in ways that convince me they will be WAY smarter than I ever was.
  • I reconnected with old friends, and found myself becoming part of a community that I had moved out of long ago.
  • I made new professional friends who are amazing.
  • I read a lot of really amazing books that opened my eyes to what is possible (Infinite Game, absolutely amazing).

So for 2021, I really look forward to the opportunity to grow and learn further, to find new ways to push the envelope and innovate, and to embrace the amazing life with a family I am lucky to have. For those who read this blog regularly, I hope 2021 turns out to be an amazing year of learning and growth for you, and “May the force be with you.” Ultimately, I look back on all of this and think it was said best on the best new show of 2020.

Thanksgiving Turkey in the Smoker

And now for something completely different. This past week was Thanksgiving in the States, and for almost all of us it was different than normal, as COVID-19 prevented us from getting to see our families. Here in the Mack household, we took the opportunity to try something new and used my pellet smoker to smoke a turkey.

And I thought I would share the results.

Turkey Brine:

Basically, the process was this: we started the night before with a turkey brine. We took inspiration from Alton Brown’s recipe, found here, but made some slight adjustments.

Here are the ingredients:

  • 1 gallon hot water
  • 1 pound kosher salt
  • 2 quarts vegetable broth
  • 1 pound honey
  • 1 (7-pound) bag of ice
  • 1 (15 to 20-pound) turkey, with giblets removed

Now we combined the boiling water, salt, vegetable broth, and honey in a cooler, and mixed everything until it had all dissolved.

Next we added the ice to cool the brine and let it come down to a normal temperature. We then put the turkey in and waited 16 hours.

From there, the next step was to remove the turkey from the brine and dry it off. We did not rinse the bird, as my family likes it a little on the salty side, but if you don’t, you’ll want to rinse yours.

Marinade Injection:

I’m of the belief that turkey dries out really easily, so we decided to do everything humanly possible to keep this bird moist. The next step was to put together an injection; we got inspiration from here.

Here are the ingredients:

  • 1/2 cup Butter
  • 1/2 cup Maple Syrup

We then melted both together and allowed the mixture to cool slightly. The idea here is that it needs to be a liquid for the injector, but don’t let it cool too far or you’ll be injecting sludge into your bird.

We then injected the bird over 50 times, doing small injections about every inch across the breast, legs, thighs, and pretty much every part of the exposed meat.

Next we put on a rub. For this we put together about half a cup of butter and a store-bought turkey rub we found at Lowe’s, but really, any rub that you would use on poultry is a good idea here. We rubbed it under the skin of the bird.

Smoking the bird

I got my pellet smoker up to 250 degrees Fahrenheit and then put in the bird. We used an aluminum disposable pan to keep the drippings around the bird and help with moisture. Then, every hour, I would spray the turkey with apple juice.

We kept the turkey cooking until we got it to an even 165 degrees Fahrenheit.

Finally, once it hit 165 degrees, we increased the temperature to 325 and let it go for another 30 minutes to make the skin crispy.

After that, enjoy!

Generating Dummy Data for Event Hubs or Blob Storage

So I was working on this as part of another project, and I thought I would share. Basically, one of the most annoying aspects of building data pipelines is getting test data to verify the pipeline’s results.

So, nothing overly groundbreaking, but I thought this might be useful for anyone trying to pipe data into a data pipeline, whether that be blob storage or an event hub.

What I did was build a small generic utility to create text files full of JSON objects, and then parse those files, putting their contents onto an event hub.

Now, for this instance, I decoupled the code for the event hub so that I could get more utility out of it, and implemented this as part of a .NET Core console application. Below is the method for generating the files:

// NuGet packages used: Microsoft.Extensions.Configuration.Json,
// Microsoft.Extensions.Configuration.Binder, Newtonsoft.Json, Azure.Storage.Blobs
using System;
using System.Collections.Generic;
using System.IO;
using System.Threading;
using Azure.Storage.Blobs;
using Microsoft.Extensions.Configuration;
using Newtonsoft.Json;

static void Main(string[] args)
        {
            var builder = new ConfigurationBuilder()
                .SetBasePath(Directory.GetCurrentDirectory())
                .AddJsonFile("appsettings.json", optional: true, reloadOnChange: true);
            var configuration = builder.Build();

            var appSettings = new AppSettings();

            ConfigurationBinder.Bind(configuration.GetSection("AppSettings"), appSettings);

            for (var f = 0; f < appSettings.NumberOfFiles; f++)
            {
                var fileName = $"{appSettings.FilePrefix}-{f}-{ DateTime.Now.ToString("MM-dd-yyyy-hh-mm-ss")}.txt";

                Console.WriteLine("-----------------------------------------------------------------------");
                Console.WriteLine($"Creating file - {fileName}");
                Console.WriteLine("-----------------------------------------------------------------------");
                Console.WriteLine("");

                //Create records for entry
                var list = new List<LogEntryModel>();
                for (var x = 0; x < appSettings.MaxNumberOfRecords; x++)
                {
                    var logEntry = new LogEntryModel();

                    logEntry.LogDateTime = DateTime.Now;
                    logEntry.LogMessage = $"Test { x } - { DateTime.Now.ToString("MM-dd-yyyy-hh-mm-ss")}";
                    logEntry.SequenceNumber = x;

                    list.Add(logEntry);
                    Console.WriteLine($"Creating line entry - { logEntry.LogMessage}");

                    var randomTime = RandomNumber(1, appSettings.MaxWaitBetweenEntries);

                    Console.WriteLine($"Thread sleep for { randomTime }");
                    Thread.Sleep(randomTime);
                    Console.WriteLine($"Sleep over - Processing file");
                }

                var filePath = $@"C:\temp\{fileName}";
                //Create text file"
                using (StreamWriter file = File.CreateText(filePath))
                {
                    JsonSerializer serializer = new JsonSerializer();
                    serializer.Serialize(file, list);
                    Console.WriteLine("Pushing Json to file");
                    Console.WriteLine("");
                }

                //Push to blob storage
                BlobServiceClient blobServiceClient = new BlobServiceClient(appSettings.BlobConnectionString);

                //Create a unique name for the container
                string containerName = "logs";

                // Create the container and return a container client object
                var containerClient = blobServiceClient.GetBlobContainerClient(containerName);

                BlobClient blobClient = containerClient.GetBlobClient(fileName);

                Console.WriteLine("Pushing File to Blob Storage");
                Console.WriteLine("");
                using FileStream uploadFile = File.OpenRead(filePath);
                var uploadTask = blobClient.UploadAsync(uploadFile, true);

                uploadTask.Wait();

                uploadFile.Close();

                Console.WriteLine("File Uploaded to Blob storage");
                Console.WriteLine("");

                var randomFileTime = RandomNumber(1, appSettings.MaxWaitBetweenFiles);
                Console.WriteLine($"Thread going to sleep for - { randomFileTime}");
                Thread.Sleep(randomFileTime);
                Console.WriteLine("Thread sleep down, moving onto next file");
                Console.WriteLine("");

                Console.WriteLine($"Started Deleting file {filePath}");
                File.Delete(filePath);
                Console.WriteLine($"Finished Deleting file {filePath}");
            }

            Console.WriteLine("All Files Processed and uploaded.");

            Console.ReadLine();
        }
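One note: the code above binds to an AppSettings class that isn’t shown here. A minimal sketch of that class, inferred from the properties the code reads, would look like this:

public class AppSettings
{
    public int NumberOfFiles { get; set; }
    public string FilePrefix { get; set; }
    public int MaxNumberOfRecords { get; set; }
    public int MaxWaitBetweenEntries { get; set; }
    public int MaxWaitBetweenFiles { get; set; }
    public string BlobConnectionString { get; set; }
}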

In addition to creating staggered entries, it outputs progress in an easily readable format to the console. Below is the method I use to generate the random numbers:

// The method relies on a single shared Random instance defined at class level.
private static readonly Random _random = new Random();

static int RandomNumber(int min, int max)
{
    return _random.Next(min, max);
}

Overall, nothing too special, but it at least creates an easy method of generating the JSON objects required for pumping data through a pipeline.

Below is all I leverage for a data model, but this could easily be swapped for any data model you like, with some random elements:

public class LogEntryModel
{
    public DateTime LogDateTime { get; set; }
    public string LogMessage { get; set; }
    public int SequenceNumber { get; set; }
}

Now on the back end, I needed to take these blob files and parse them, which I did with the following:

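// Context assumptions for this snippet: it runs inside an Azure Function (or
// similar handler) where logFile is a Stream over the blob contents, name is
// the blob's name, log is an ILogger, and connectionString / hubName identify
// the target Event Hub (Azure.Messaging.EventHubs and Newtonsoft.Json packages).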
using (var sr = new StreamReader(logFile, Encoding.UTF8))
{
    var str = sr.ReadToEnd();
    var logs = JsonConvert.DeserializeObject<List<LogEntryModel>>(str);

    await using (var producerClient = new EventHubProducerClient(connectionString, hubName))
    {
        using EventDataBatch eventBatch = await producerClient.CreateBatchAsync();

        foreach (var logEntry in logs)
        {
            var txt = JsonConvert.SerializeObject(logEntry);
            eventBatch.TryAdd(new EventData(Encoding.UTF8.GetBytes(txt)));
        }

        await producerClient.SendAsync(eventBatch);
        log.LogInformation($"Log of {name} with {logs.Count} rows processed.");
    }
}

Anyway, I hope you find this helpful to get data pushed into your pipeline.

It’s that time of the year, What I learned from Hallmark movies 2020

So, much to my wife’s dismay, I actually got a lot of traffic on the previous post, so I decided to do a follow-up.

Here are my 2020 lessons from Hallmark movies:

  • If you want to marry into a royal family, and have known the prince or princess your whole life and been childhood sweethearts, then you must be a cold, heartless person.
  • If you work for a royal family in any capacity involving kids you are guaranteed to marry a prince or princess at Christmas time.
  • A king or queen’s endorsement of your business is the only way it will be successful.
  • All websites can only be built by high price consultants that have corner offices in New York.
  • Futures are not built in “the city”; they are ruined there. Living in a small town is a requirement to be happy.
  • If you own a restaurant or food truck, or work in the service industry, then despite working a full, crazy shift, your hair, teeth, and clothes will be perfect, without any sign of sweat.
  • Kids always love the new person in your life and become obsessed with you marrying that person within 10 minutes of meeting them.
  • Any job that requires you to move will immediately reconsider that decision based solely on the fact that you found your true love.
  • All schools teach a Christmas-only curriculum starting at Thanksgiving.
  • All corporate jobs will require you to be away from your family on Christmas Eve.
  • Despite budget cuts in music programs or other arts programs, every school has unlimited budget for Christmas decorations.
  • If your father owns the company you work for, he will require you to work through Christmas.
  • If someone assumes you are a couple, you will fall deeply in love in a few days.
  • All cookie baking involves flour disasters where it ends up caked on your face, yet the cookies are guaranteed to turn out as culinary perfection.

Securing Blob Storage

So one question that I find is largely overlooked, but very much critical, is how to secure blob storage. You are storing data in the cloud using blob storage, but how does an application make use of those storage accounts in a secure way?

And just like anything, there are a lot of steps you can take to secure your blobs and keep them accessible, without adding a lot of complexity to your application.

Encryption-At-Rest

So by default, Azure Blob Storage will encrypt your data at rest, and there’s a great write-up on that here. But more than that, you can also use customer-managed keys, the biggest benefit being that you have complete control over the keys used for encryption.

Encryption-In-Transit

From a client-side and transit perspective, all of the SDKs and client communication with Azure Blob Storage enforce HTTPS, so by default the communication is secured in transit, and this is largely abstracted away from your application.

Securing the Endpoint

You do not have to expose your blob storage accounts to the internet via a public endpoint if you don’t want to. As denoted here (or here), you can use service endpoints to lock down traffic to a blob storage account to only traffic within a specific virtual network, so you are not required to use public endpoints. We also now have the additional option of using Private Link to provide further isolation, allowing traffic to move only through the Azure backbone.

Just-In-Time Blob Access

Every security methodology will point to “just-in-time” access as a strategy for preventing breaches. And storage is no different: the recommendation is to not use a key that grants access to an entire blob account or container.

Instead, the recommendation is to leverage SAS (shared access signature) tokens, which can be configured to control access, scoped down to a specific blob, with a set expiration time. This ensures that you can leverage a single token for a single use, and lock down the actions that token can be used for.
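As a rough sketch of that idea (the account, container, and blob names here are placeholders), this is what generating a short-lived, read-only, single-blob SAS token looks like with the Python SDK, azure-storage-blob:

from datetime import datetime, timedelta
from azure.storage.blob import BlobSasPermissions, generate_blob_sas

# The token below grants read-only access to one specific blob for 15 minutes.
sas_token = generate_blob_sas(
    account_name="mystorageaccount",   # placeholder
    container_name="logs",             # placeholder
    blob_name="report.json",           # placeholder
    account_key="<storage-account-key>",
    permission=BlobSasPermissions(read=True),
    expiry=datetime.utcnow() + timedelta(minutes=15),
)

blob_url = f"https://mystorageaccount.blob.core.windows.net/logs/report.json?{sas_token}"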

Here is an article that goes over how to do this in C# as an example using the SDK. Additionally, you can leverage a Stored Access Policy to standardize and lock down your SAS tokens, so that no developer can create an application that violates your standards, ensuring traffic is secured.

This ensures that you are giving out a “single use” token that cannot be abused, as mentioned above. Additionally, you can make this more secure by using service principals and Azure AD to secure token generation, and block all access to the blobs without Azure AD or ad hoc SAS tokens.

The best thing you can do here is make sure you are using short-lived tokens, which is the common practice I have leveraged in the past to prevent people from being able to “crack” the key. There is also a listing of recommendations for access with blob storage, and additional options to enable Threat Protection and Security Center around Azure Storage.

Enabling Remote State with Terraform

So I’ve made no secret of my love of Terraform for infrastructure as code. Now, one of the features that I really like about Terraform is the ability to execute a plan and see what’s going to change.

What is state in Terraform?

Terraform leverages state to enable the “plan/apply” functionality, which makes it possible to see the changes before they are applied. By default, that state lives in a local file next to your templates, which becomes a problem as soon as more than one person deploys to the same environment.

How do we enable remote state?

So the process of enabling remote state isn’t necessarily hard, and requires only a simple piece of code. For my projects, I add a “Terraform.tf” that contains this information. NOTE: I usually add this file to the .gitignore, so that I’m not checking in the keys:

terraform {
    backend "azurerm" {
        resource_group_name  = "..."
        storage_account_name = "..."
        container_name = "..."
        key = "..."
    }
}
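One note: after adding or changing this backend block, you need to run terraform init again; it will detect the backend change and offer to migrate any existing local state into the storage account.

terraform init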

It really is that simple. And the key part is that this becomes very important if you are working with more than one person deploying to the same environment: if you have two developers using local state, your state can become out of sync. Remote state is an easy way to make sure you manage state in a way that allows collaboration.

A simple trick to handling environments in Terraform

So for a short post, I wanted to share a good habit to get into with Terraform. More specifically, this is an easy way to handle the configuration and deployment of multiple environments, making them easier to manage in your Terraform scripts.

It doesn’t take long working with Terraform to see the immediate value in leveraging it to build out brand new environments. That being said, it never fails to amaze me how many people I talk to don’t craft their templates to be highly reusable. There are lots of ways to do this, but I wanted to share a practice that I use.

The idea starts by leveraging this pattern. My projects all contain the following key “.tf” files:

  • main.tf: This file contains the provider information, and maps up the service principal (if you are using one) to be used during deployment.
  • variables.tf: This file contains a list of all the variables leveraged in my solution, with a description for their definition.

The “main.tf” file is pretty basic:

provider "azurerm" {
    subscription_id = var.subscription_id
    version = "~> 2.1.0"

    client_id = var.client_id
    client_secret = var.client_secret
    tenant_id = var.tenant_id

    features {}
}

Notice that the above is already wired up for the variables subscription_id, client_id, client_secret, and tenant_id.

Now for my variables file, I have things like the following:

variable "subscription_id" {
    description = "The subscription being deployed."
}

variable "client_id" {
    description = "The client id of the service prinicpal"
}

variable "client_secret" {
    description = "The client secret for the service prinicpal"
}

variable "tenant_id" {
    description = "The client secret for the service prinicpal"
}

What this enables is the ability to have a separate “.tfvars” file for each individual environment:

primarylocation = "..."
secondarylocation = "..."
subscription_id = "..."

client_id = "..."
client_secret = "..."
tenant_id = "..."

From here the process of creating the environment in TerraForm is as simple as:

terraform apply -var-file {EnvironmentName}.tfvars

And then for new environments all I have to do is create a new .tfvars file to contain the configuration for that environment. This enables me to manage the configuration for my environment locally.

NOTE: I usually recommend that you add “*.tfvars” to the gitignore, so that these files are not necessarily checked in. This prevents configuration from being checked into source control.

Another step this makes relatively easy is automated deployment, as I can add the following YAML task:

- script: |
    touch variables.tfvars
    echo -e "primarylocation = \""$PRIMARYLOCATION"\"" >> variables.tfvars
    echo -e "secondarylocation = \""$SECONDARYLOCATION"\"" >> variables.tfvars
    echo -e "subscription_id = \""$SUBSCRIPTION_ID"\"" >> variables.tfvars
    echo -e "client_id = \""$SP_APPLICATIONID"\"" >> variables.tfvars
    echo -e "tenant_id = \""$SP_TENANTID"\"" >> variables.tfvars
    echo -e "client_secret = \""$SP_CLIENTSECRET"\"" >> variables.tfvars
  displayName: 'Create variables Tfvars'

The above script then takes the build variables for the individual environment, and builds the appropriate “.tfvars” file to run for that environment.

Now, this is sort of the manual approach; ideally you would leverage Key Vault (or HashiCorp Vault) to access the necessary deployment variables.
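For example, here’s a rough sketch of what that could look like in Azure DevOps using the AzureKeyVault task (the service connection and vault names are placeholders); it pulls secrets into pipeline variables, which the script above can then write into the “.tfvars” file:

- task: AzureKeyVault@1
  inputs:
    azureSubscription: 'my-service-connection' # placeholder service connection
    KeyVaultName: 'my-keyvault'                # placeholder vault name
    SecretsFilter: 'sp-client-secret'          # each secret becomes $(secret-name)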